DETAILED ACTION
Response to Amendment
1. This office action is in response to applicant’s communication filed on 02/09/2026 in response to PTO Office Action mailed 12/22/2025. The Applicant’s remarks and amendments to the claims and/or the specification were considered with the results as follows.
2. In response to the last Office Action, claims 1, 9, 16 and 18 are amended. Claims 21 and 22 are added. As a result, claims 1-6, 8-19, 21 and 22 are pending in this office action.
Response to Arguments
3. Applicant's arguments with respect to 35 USC 102 have been fully considered but are moot in view of new ground(s) of rejection.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-6, 8-19, 21 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Cruz Mota et al. (US 2015/0193695 A1), hereinafter Cruz, in view of Li et al. (US 2021/0209488 A1), hereinafter Li.
Referring to claim 1, Cruz discloses a method implemented by a federated learning server in a first management domain (See para. [0021], para. [0074] and Figure 1, a management device or server [e.g., NMS] 150 interconnected with nodes/devices 110 for collectively training a machine learning model using independently collected datasets), the method comprising:
obtaining a first machine learning model from a machine learning model management center (See para. [0017], para. [0131] and Figure 7, the device determines that a machine learning model is to be trained by a plurality of devices in a network);
performing federated learning in the first management domain based on the first machine learning model and local network service data in the first management domain to obtain a second machine learning model (See para. [0131], para. [0132] and Figures 7 & 8, the management device determines that a model is to be trained and obtains a set of local datasets from one or more training devices to generate new model parameters);
and sending the second machine learning model to the machine learning model management center to enable the second machine learning model to be used by a device in a second management domain (See para. [0131], para. [0132], Figures 7 & 8 and claim 1, receiving model parameters with at least a portion of the local set of training data to generate new model parameters and forward the new model parameters to a second training device in the set of training devices).
Cruz does not explicitly disclose sending the second machine learning model when an application effect of the second machine learning model meets a preset condition.
Li discloses determining that an application effect of the second machine learning model meets a preset condition; and sending the second machine learning model when the application effect meets the preset condition (See para. [0160] and para. [0178], verifying a preset condition, e.g., model version information, before sending a first inference model or the second inference model to the inference computing apparatus).
Therefore, it would have been obvious to a person of ordinary skill in the computer art to modify the system of Cruz to send a second machine learning model when the application effect meets a preset condition, as taught by Li, in order to determine whether the performance of the model fluctuates or decreases according to a variation of the evaluation parameters over a continuous period of time (See Li, para. [0011]). In addition, both references (Li and Cruz) are directed to analogous art in the same field of endeavor, such as obtaining relevant results from a set of input values via a machine learning model. This close relation between the references suggests a reasonable expectation of success.
As to claim 2, Cruz discloses sending machine learning model requirement information to the machine learning model management center, wherein obtaining the first machine learning model comprises receiving the first machine learning model from the machine learning model management center based on the machine learning model requirement information (See para. [0131] and Figure 7, sending a training request and training instructions to the NMS from one of the network node/devices and selecting a first machine learning model in response to the request and training instructions).
As to claims 3 and 11, Cruz discloses wherein the machine learning model requirement information comprises model service information corresponding to the first machine learning model or a machine learning model training requirement (See para. [0131] and Figure 7, sending a training request and training instructions to the NMS from one of the network node/devices and selecting a first machine learning model in response to the request and training instructions; note the training instructions include data regarding what portion of a local dataset is to be used to train the model, an ordering for how the training devices train the model, and other such information).
As to claims 4 and 12, Cruz discloses wherein the machine learning model training requirement comprises at least one of a training environment, an algorithm type, a network structure, a training framework, an aggregation algorithm, or a security mode (See para. [0131] and Figure 7, sending a training request and training instructions to the NMS from one of the network node/devices and selecting a first machine learning model in response to the request and training instructions; note the training instructions include data regarding what portion of a local dataset is to be used to train the model, an ordering for how the training devices train the model, and other such information).
As to claims 5 and 19, Cruz discloses sending access permission information of a second machine learning model to a machine learning model center (See para. [0021], para. [0061], an authentication procedure is carried out when a node joins the network based on the EAPOL protocol, which is carried directly over layer 2 messages and is used for transporting authentication data from the node to the field area router (“FAR”)).
As to claim 6, Cruz discloses sending the second machine learning model to federated learning clients (See para. [0131], para. [0132], Figures 7 & 8 and claim 1, receiving model parameters with at least a portion of the local set of training data to generate new model parameters and forward the new model parameters to a second training device in the set of training devices).
As to claim 8, Cruz discloses wherein performing the federated learning comprises: sending the first machine learning model to federated learning clients to enable the federated learning clients to perform the federated learning based on the first machine learning model and network service data and to obtain intermediate machine learning models of the federated learning clients; obtaining the intermediate machine learning models from the federated learning clients; and aggregating the intermediate machine learning models to obtain the second machine learning model (See para. [0129], para. [0130] and Figures 6C, 6D, in FIG. 6C, FAR-1 may send the new ANN parameters to FAR-2 after FAR-1 trains the model using at least a portion of its own local dataset).
Referring to claim 9, Cruz discloses a method implemented by a machine learning model management center (See para. [0021], para. [0074] and Figure 1, a management device or server [e.g., NMS] 150 interconnected with nodes/devices 110 for collectively training a machine learning model using independently collected datasets) and comprising:
sending a first machine learning model to a first federated learning server in a first management domain (See para. [0129], para. [0130] and Figure 6C, sending the new machine learning parameters to FAR-2 after FAR-1 trains the model);
receiving a second machine learning model from the first federated learning server, wherein the second machine learning model is based on first federated learning in the first management domain using the first machine learning model and first local network service data in the first management domain (See para. [0129], para. [0130] and Figures 6C, 6D, FAR-2 receives the new learning parameters after FAR-1 trains the first model using at least a portion of its own local dataset); and
replacing the first machine learning model with the second machine learning model to enable the second machine learning model to be used by a device in a second management domain (See para. [0129] - [0132], Figures 6, 7 & 8 and claim 1, forwarding the new model parameters to a training device in the set of training devices; once all FARs have performed their corresponding optimization steps, the controller/NMS receives back the ANN via a Collect_Step message. For example, as shown in FIG. 6D, FAR-N may send the final, trained model parameters back to the NMS. At this point the ANN optimization process has considered all of the samples in GD at least once. In one embodiment, the NMS can decide to stop the process if, for instance, some performance requirements have been achieved or the current conditions of the network do not allow the optimization).
Cruz does not explicitly disclose receiving a second machine learning model when an application effect of the second machine learning model meets a preset condition.
Li discloses receiving a second machine learning model when an application effect of the second machine learning model meets a preset condition (See para. [0160] and para. [0178], verifying a preset condition, e.g., model version information, before sending a first inference model or the second inference model to the inference computing apparatus).
Therefore, it would have been obvious to a person of ordinary skill in the computer art to modify the system of Cruz to send a second machine learning model when the application effect meets a preset condition, as taught by Li, in order to determine whether the performance of the model fluctuates or decreases according to a variation of the evaluation parameters over a continuous period of time (See Li, para. [0011]). In addition, both references (Li and Cruz) are directed to analogous art in the same field of endeavor, such as obtaining relevant results from a set of input values via a machine learning model. This close relation between the references suggests a reasonable expectation of success.
As to claim 10, Cruz discloses before sending the first machine learning model, the method further comprises: receiving machine learning model requirement information from the first federated learning server; and determining the first machine learning model based on the machine learning model requirement information (See para. [0131] and Figure 7, sending a training request and training instructions to the NMS from one of the network node/devices and selecting a first machine learning model in response to the request and training instructions).
As to claim 13, Cruz discloses wherein the second machine learning model is based on a first training framework, wherein the method further comprises converting the second machine learning model into a third machine learning model based on a second training framework, and wherein the third machine learning model and the second machine learning model correspond to same model service information (See para. [0129] - [0132], Figures 6, 7 & 8 and claim 1, forwarding the new model parameters to a training device in the set of training devices; once all FARs have performed their corresponding optimization steps, the controller/NMS receives back the ANN via a Collect_Step message. For example, as shown in FIG. 6D, FAR-N may send the final, trained model parameters back to the NMS. At this point the ANN optimization process has considered all of the samples in GD at least once. In one embodiment, the NMS can decide to stop the process if, for instance, some performance requirements have been achieved or the current conditions of the network do not allow the optimization).
As to claim 14, Cruz discloses receiving access permission information of the second machine learning model from the first federated learning server (See para. [0021], para. [0061], an authentication procedure is carried out when a node joins the network based on the EAPOL protocol, which is carried directly over layer 2 messages and is used for transporting authentication data from the node to the field area router (“FAR”)).
As to claim 15, Cruz discloses sending the second machine learning model to a second federated learning server in the second management domain; receiving a fourth machine learning model from the second federated learning server, wherein the fourth machine learning model is based on second federated learning in the second management domain using the second machine learning model and second local network service data in the second management domain; and replacing the second machine learning model with the fourth machine learning model (See para. [0129] - [0132], Figures 6, 7 & 8 and claim 1, forwarding the new model parameters to a training device in the set of training devices; once all FARs have performed their corresponding optimization steps, the controller/NMS receives back the ANN via a Collect_Step message. For example, as shown in FIG. 6D, FAR-N may send the final, trained model parameters back to the NMS. At this point the ANN optimization process has considered all of the samples in GD at least once. In one embodiment, the NMS can decide to stop the process if, for instance, some performance requirements have been achieved or the current conditions of the network do not allow the optimization).
Referring to claim 16, Cruz discloses a federated learning system comprising: a federated learning server in a first management domain (See para. [0021], para. [0074] and Figure 1, a management device or server [e.g., NMS] 150 interconnected with nodes/devices 110 for collectively training a machine learning model using independently collected datasets) and configured to:
obtain a first machine learning model from a machine learning model management center (See para. [0017], para. [0131] and Figure 7, the device determines that a machine learning model is to be trained by a plurality of devices in a network); send the first machine learning model; obtain intermediate machine learning models; aggregate the intermediate machine learning models to obtain a second machine learning model (See para. [0129], para. [0130] and Figures 6C, 6D, in FIG. 6C, FAR-1 may send the new ANN parameters to FAR-2 after FAR-1 trains the model using at least a portion of its own local dataset); and send the second machine learning model to the machine learning model management center to enable the second machine learning model to be used by a device in a second management domain; and federated learning clients in the first management domain (See para. [0131], para. [0132], Figures 7 & 8 and claim 1, receiving model parameters with at least a portion of the local set of training data to generate new model parameters and forward the new model parameters to a second training device in the set of training devices); and configured to: receive the first machine learning model from the federated learning server; and perform first federated learning based on the first machine learning model and local network service data in the first management domain to obtain the intermediate machine learning models (See para. [0129] - [0132], Figures 6, 7 & 8 and claim 1, forwarding the new model parameters to a training device in the set of training devices; once all FARs have performed their corresponding optimization steps, the controller/NMS receives back the ANN via a Collect_Step message. For example, as shown in FIG. 6D, FAR-N may send the final, trained model parameters back to the NMS. At this point the ANN optimization process has considered all of the samples in GD at least once. In one embodiment, the NMS can decide to stop the process if, for instance, some performance requirements have been achieved or the current conditions of the network do not allow the optimization).
Cruz does not explicitly disclose sending the second machine learning model when an application effect of the second machine learning model meets a preset condition.
Li discloses determining that an application effect of the second machine learning model meets a preset condition; and sending the second machine learning model when the application effect meets the preset condition (See para. [0160] and para. [0178], verifying a preset condition, e.g., model version information, before sending a first inference model or the second inference model to the inference computing apparatus).
Therefore, it would have been obvious to a person of ordinary skill in the computer art to modify the system of Cruz to send a second machine learning model when the application effect meets a preset condition, as taught by Li, in order to determine whether the performance of the model fluctuates or decreases according to a variation of the evaluation parameters over a continuous period of time (See Li, para. [0011]). In addition, both references (Li and Cruz) are directed to analogous art in the same field of endeavor, such as obtaining relevant results from a set of input values via a machine learning model. This close relation between the references suggests a reasonable expectation of success.
As to claim 17, Cruz discloses wherein the federated learning server is further configured to send the second machine learning model to the federated learning clients (See para. [0131], para. [0132], Figures 7 & 8 and claim 1, receiving model parameters with at least a portion of the local set of training data to generate new model parameters and forward the new model parameters to a second training device in the set of training devices), and wherein the federated learning clients are further configured to execute, based on the second machine learning model, a model service corresponding to the second machine learning model (See para. [0025] and para. [0027], operating system 242, portions of which are typically resident in memory 240 and executed by the processor, functionally organizes the device by, inter alia, invoking operations in support of software processes and/or services executing on the device. These software processes and/or services may comprise routing process/services 244 and an illustrative "learning machine" process 248, which may be configured depending upon the particular node/device within the network 100 with functionality ranging from intelligent learning machine algorithms to merely communicating with intelligent learning machines).
As to claim 18, Cruz in view of Li discloses wherein the federated learning server is further configured to: send machine learning model requirement information to the machine learning model management center; and obtain the first machine learning model by receiving it from the machine learning model management center based on the machine learning model requirement information (See para. [0131] and Figure 7, sending a training request and training instructions to the NMS from one of the network node/devices and selecting a first machine learning model in response to the request and training instructions; also see Li, para. [0160] and para. [0178], verifying a preset condition, e.g., model version information, before sending a first inference model or the second inference model to the inference computing apparatus).
As to claim 21, Cruz in view of Li discloses the machine learning model requirement information comprises model service information corresponding to the first machine learning model or a machine learning model training requirement (See Li, para. [0050] and para. [0111], the preset range of the training parameters refers to a range of training parameters corresponding to a training capacity of the inference computing apparatus).
Therefore, it would have been obvious to a person of ordinary skill in the computer art to modify the system of Cruz to include a machine learning model training requirement, as taught by Li, in order to determine whether the performance of the model fluctuates or decreases according to a variation of the evaluation parameters over a continuous period of time (See Li, para. [0011]). In addition, both references (Li and Cruz) are directed to analogous art in the same field of endeavor, such as obtaining relevant results from a set of input values via a machine learning model. This close relation between the references suggests a reasonable expectation of success.
As to claim 22, Cruz in view of Li discloses the machine learning model training requirement comprises at least one of a training environment, an algorithm type, a network structure, a training framework, an aggregation algorithm, or a security mode (See Li, para. [0050] and para. [0111], the preset range of the training parameters refers to a range of training parameters corresponding to a training capacity of the inference computing apparatus).
Therefore, it would have been obvious to a person of ordinary skill in the computer art to modify the system of Cruz to include a machine learning model training requirement, as taught by Li, in order to determine whether the performance of the model fluctuates or decreases according to a variation of the evaluation parameters over a continuous period of time (See Li, para. [0011]). In addition, both references (Li and Cruz) are directed to analogous art in the same field of endeavor, such as obtaining relevant results from a set of input values via a machine learning model. This close relation between the references suggests a reasonable expectation of success.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to YUK TING CHOI whose telephone number is (571)270-1637. The examiner can normally be reached Monday-Friday 9am-6pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, AMY NG, can be reached at (571) 270-1698. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/YUK TING CHOI/Primary Examiner, Art Unit 2164