Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 02/04/2026 has been entered.
Status of Claims
This action is responsive to the submission filed on 02/04/2026.
Claims 1-20 are pending.
Claims 1, 8, and 15 have been amended.
Response to Arguments
Applicant’s arguments, with respect to the rejection(s) of claim(s) 1, 8, and 15 under 35 U.S.C. 103, have been considered but they are not persuasive. Applicant argues that no reference teaches the amended limitations, since “Sharad is silent regarding how the criteria is ‘related to span loss measurement performed by the respective optical system’ as the amended claims recite.” The examiner respectfully disagrees.
The combination has been found to teach the amendments. Sharad, para [0018], teaches “selecting, by the server, n out of N participants, according to filtering criteria” (operational criteria), and partitioning according to training determinations (operational criteria). Further, paragraph [0044] teaches a loss function being minimized for training (related to…loss measurement). Xu, in combination with Sharad, further teaches in para [0018] that the “directionality and/or value of a span loss fault may be determined by sending an OTDR signal from the line monitoring system and indicating the directionality or value of the span loss fault in response to a change in amplitude in the received OTDR data signal”; thus, teaching utilization of span loss.
See the 35 U.S.C. 103 section below for the full mapping of the claim limitations necessitated by applicant's amendments.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1, 2, 6, 8, 9, 13, 15, 16 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Sharad et al. (US 20200285980 A1, hereinafter Sharad) in view of Xu et al. (US 20190260468 A1, hereinafter Xu) and further in view of Portier et al. (US 20120070154 A1, hereinafter Portier).
Regarding Claim 1
Sharad discloses: A method comprising: sending a first machine learning model to a plurality of ([Para 0050] outlines the process, wherein the global model is understood to be the first machine learning model and the participants are understood to be the plurality of systems) in which the first machine learning model is configured to predict […] ([Para 0048] states that the purpose is to train a shared prediction model.)
selected for reception of the first machine learning model based on satisfaction of one or more operational criteria related to ([Para 0018] – “selecting, by the server, n out of N participants, according to filtering criteria” (operational criteria), and partitioning according to training determinations (operational criteria). Further, paragraph [0044] teaches a loss function being minimized for training (related to…loss measurement).);
and configured to locally train the first machine learning model according to the ([Para 0050] Locally training the model using their training data in Sharad is understood to be locally training the model using the local parameters.) to obtain a respective local first machine learning model that includes one or more respective local model parameters that predict. ([Para 0050] states that “each contributor locally trains a model L.sub.i using their training data, and generates the relevant updates U.sub.i needed to obtain model L.sub.i from the previous model G.sup.r-1.” The generated updates U.sub.i, from which the local model L.sub.i can be obtained, are understood to be the model parameters that can be used to predict. [Para 0048] states that the purpose is to “train a shared prediction model”.)
obtaining the respective local model parameters from each respective ([Para 0050] teaches an embodiment where “the server retrieves all aggregated group updates AU.sup.1, . . . , AU.sup.g” and “derives a new global model G.sup.r from the previous model G.sup.r-1 and the aggregated update U.sup.final”. The local model parameters are obtained in the form of the aggregated update and the models themselves are never shared.)
generating a second machine learning model by updating the first machine learning model using the obtained local model parameters ([Para 0050] “the server derives a new global model G.sup.r from the previous model G.sup.r-1 and the aggregated update U.sup.final, and shares G.sup.r with all participants”. Examiner understands the new global model to be the second machine learning model and the aggregated update to be the obtained local model parameters.)
and predicting, by the second machine learning model ([Para 0050] “the server derives a new global model G.sup.r from the previous model G.sup.r-1 and the aggregated update U.sup.final, and shares G.sup.r with all participants”. Examiner understands the new global model to be the second machine learning model. This global model can be the one that makes a prediction in the next iteration or training round. [Para 0048] states that the purpose is to “train a shared prediction model”.)
Sharad does not explicitly disclose:
“and satisfaction of one or more operational criteria related to span loss measurement”, and
“and in which each respective optical system of the plurality of optical systems is configured to identify the span losses associated with the respective optical system”.
However, Xu discloses, in the same field of endeavor: ([Para 0017] teaches “automated line monitoring in an optical communication system using high loss loopback (HLLB) data. The line monitoring may be performed using a machine learning fault classifier for determining whether a signature associated with the HLLB data matches a predetermined fault signature.” A fault classification according to a data signature is equivalent to predicting the type of fault associated with the data.)
and satisfaction of one or more operational criteria related to span loss measurement ([Para 0018] teaches “directionality and/or value of a span loss fault may be determined by sending an OTDR signal from the line monitoring system and indicating the directionality or value of the span loss fault in response to a change in amplitude in the received OTDR data signal”)
and in which each respective optical system of the plurality of optical systems is configured to identify the span losses associated with the respective optical system ([Para 0061] “a plurality of repeaters coupled to the optical transmission path, each of the plurality of repeaters comprising a high loss loopback (HLLB) path; and line monitoring equipment (LME) coupled to the transmission path, the LME being configured to transmit a LME test signal on the optical transmission path” and “compare the LME loopback data to baseline loopback data to obtain a first fault signature”. The plurality of repeaters and associated LMEs and paths can be understood to be a plurality of optical systems. The LMEs are configured to identify fault signatures. [Abstract] The fault may be a span loss);
and configured to ([Para 0028] discloses training a machine learning-based fault classifier; the faults may be span losses).
([Abstract] “automated line monitoring using a machine learning fault classifier for determining whether a signature associated with the high loss loopback (HLLB) data matches a predetermined fault signature”. Classifying fault signatures using a machine learning model is equivalent to predicting their presence using such a model. Span losses are stated to be one such fault case in the abstract.)
([Para 0017] “a system and method consistent with the present disclosure provides automated line monitoring in an optical communication system”)
([Para 0017] “a system and method consistent with the present disclosure provides automated line monitoring in an optical communication system using high loss loopback (HLLB) data. The line monitoring may be performed using a machine learning fault classifier for determining whether a signature associated with the HLLB data matches a predetermined fault signature.” The line monitoring is used to identify faults; these faults include span losses according to [Abstract].)
It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to combine the federated learning method for span loss prediction disclosed in Sharad with the application of machine learning to span loss detection in Xu, as they are in the same field of endeavor of making machine-learning-based predictions about network components. Additionally, one would be motivated to do so because “Advantageously, implementing a fault classifier in a system and method consistent with the present disclosure using a machine learning technology, such as a neural network, allows small changes in signatures to be detected while providing a correct result.” [Xu, para 0045]
Further, Sharad and Xu do not explicitly disclose: “in which a transmission power of an optical signal transmitted through the given optical system is adjusted based on the one or more span losses as predicted”.
However, Portier teaches this limitation: ([Para 0004] – “automatic power adjustment, wherein a signal power level of an optical signal transmitted by an optical transceiver via at least one optical span to a far-end device is adjusted automatically in response to a determined span loss to achieve a predetermined receive signal power level of the optical signal at the far-end device”; also see [Para 0050].)
It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to combine the federated learning method for span loss prediction disclosed in Sharad, as modified by Xu above, with the automatic power adjustment based on span loss of Portier, in order to compensate for span loss to maintain a target receive power and a flat spectrum across wavelengths, reducing manual tuning while improving signal quality, minimizing crosstalk and nonlinear distortions, and adapting in real time to fiber changes.
Regarding Claim 8
Sharad discloses: One or more non-transitory computer-readable storage media configured to store instructions that, in response to being executed, cause a system to perform operations, the operations comprising ([Para 0030] “In an embodiment, the present invention provides a non-transitory computer readable medium storing instructions that when executed by a processor cause the following steps to be performed”)
The remaining limitations in claim 8 are similar to the limitations in claim 1 and are therefore rejected under the same rationale.
Regarding Claim 15
Sharad discloses: a system comprising: one or more processors; and one or more non-transitory computer-readable storage media configured to store instructions that, in response to being executed, cause a system to perform operations, the operations comprising ([Para 0030] “In an embodiment, the present invention provides a non-transitory computer readable medium storing instructions that when executed by a processor cause the following steps to be performed” [0001] “The present invention relates to methods and systems for secure federated learning in a machine learning model”)
The remaining limitations in claim 15 are similar to the limitations in claim 1 and are therefore rejected under the same rationale.
Regarding Claim 2
Sharad in view of Xu and Portier discloses the method of claim 1.
Sharad further discloses: sending the second machine learning model to the plurality of ([Para 0050] discloses sharing the second model, G.sup.r, which was obtained from the previous model G.sup.r-1, with a number of participants. The participants are understood to be a plurality of systems.); obtaining one or more respective second local model parameters from each respective optical system without obtaining the corresponding respective locally trained second machine learning model ([Para 0050] teaches an embodiment where “the server retrieves all aggregated group updates AU.sup.1, . . . , AU.sup.g” and “derives a new global model G.sup.r from the previous model G.sup.r-1 and the aggregated update U.sup.final”. The local model parameters are obtained in the form of the aggregated update, and the models themselves are never shared. [Para 0036] discloses an arbitrary number of training iterations for the global model G.sup.r, including a “second” model; furthermore, the final model G.sup.R in [Para 0036] can be understood to be a “second” model. The global model is “continuously updated” by aggregating the contributions (local parameters) of the participants in [Para 0045]. In consecutive iterations of the global model G after successive training rounds, the local parameters of the local models in Sharad can be understood to be second local model parameters. The superscript r of the global model G can take any natural number value, including 2 and 3; therefore, Sharad in view of Xu discloses the claimed limitation.);
and generating a third machine learning model by updating the second machine learning model using the obtained second local model parameters. ([Para 0036] discloses an arbitrary number of training iterations for the global model G.sup.r, including a “third” model; furthermore, the final model G.sup.R in [Para 0036] can be understood to be a “third” model. The global model is “continuously updated” by aggregating the contributions (local parameters) of the participants in [Para 0045]. In consecutive iterations of the global model G after successive training rounds, the local parameters of the local models in Sharad can be understood to be second local model parameters, and the resulting global model G in the following training round can be understood to be the third global model. The superscript r of the global model G can take any natural number value, including 2 and 3; therefore, Sharad in view of Xu discloses the claimed limitation.)
Regarding Claim 9
Claim 9 recites similar limitations as claim 2 except that it sets forth the claimed invention as a non-transitory computer-readable storage media and is therefore rejected under the same rationale.
Regarding Claim 16
Claim 16 recites similar limitations as claim 2 except that it sets forth the claimed invention as a system and is therefore rejected under the same rationale.
Regarding claim 6
Sharad in view of Xu and Portier discloses: The method of claim 1, wherein the span losses predicted by the machine learning model include span losses corresponding to at least one of: fiber bending, fiber pinching, or fiber deterioration. ([Xu, Abstract] discusses fiber break faults as a potential fault case; [Xu, Para 0004] discusses fiber breaks. To a person having ordinary skill in the art, fiber break faults include pinching and bending in less extreme cases.)
Regarding Claim 13
Claim 13 recites similar limitations as claim 6 except that it sets forth the claimed invention as a non-transitory computer-readable storage media and is therefore rejected under the same rationale.
Regarding Claim 20
Claim 20 recites similar limitations as claim 6 except that it sets forth the claimed invention as a system and is therefore rejected under the same rationale.
Claims 3, 4, 10, 11, 17 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Sharad in view of Xu in view of Portier and further in view of Wei et al. (“Gradient-Leakage Resilient Federated Learning”, 04 October 2021, hereinafter Wei).
Regarding Claim 3
Sharad in view of Xu and Portier discloses the method of claim 2.
Sharad in view of Xu and Portier does not explicitly disclose: determining an iteration criterion; and repeating the steps of claim 2 until the iteration criterion is satisfied.
However, Wei discloses, in the same field of endeavor: determining an iteration criterion; and repeating the steps of claim 2 until the iteration criterion is satisfied. ([Page 805, Col 1, Section B., paragraph 1, lines 3-4] “with two settings on the per-client location training iterations: L = 1 and L = 100.” The term ‘determining’, in the broadest reasonable interpretation, can refer to setting or assigning. The iteration criterion here in Wei is determined to be a number of iterations/rounds; the number of iterations is determined to be 1 or 100. Under the specification of the instant application, in paragraph [0036], the iteration criterion relates to a number of training rounds.)
It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to combine the federated learning method disclosed in Sharad with the application of machine learning to span loss prediction in Xu and the iteration criterion disclosed in Wei, as they are in the same field of endeavor of making machine-learning-based predictions in distributed systems. One would be motivated to do so because setting the number of training iterations can impact training outcomes such as accuracy and computational expense, for example: “However, local training with L = 1 takes more rounds to achieve the same accuracy as the local training with L = 100 iterations.” [Wei, page 805, column 2, lines 16-17].
Regarding Claim 10
Claim 10 recites similar limitations as claim 3 except that it sets forth the claimed invention as a non-transitory computer-readable storage media and is therefore rejected under the same rationale.
Regarding Claim 17
Claim 17 recites similar limitations as claim 3 except that it sets forth the claimed invention as a system and is therefore rejected under the same rationale.
Regarding Claim 4
Sharad in view of Xu, Portier, and Wei discloses the method of claim 3.
Sharad in view of Xu, Portier, and Wei further discloses: wherein the iteration criterion includes at least one of: a number of training rounds, a training accuracy of the machine learning model, or a testing accuracy of the machine learning model. ([Page 805, Col 2, Section B., paragraph 1, lines 3-4] “with two settings on the per-client location training iterations: L = 1 and L = 100.” The iteration criterion here in Wei is determined to be a number of iterations/rounds; the number of iterations is determined to be 1 or 100. Under the specification of the instant application, in paragraph [0036], the iteration criterion relates to a number of training rounds.)
Regarding Claim 11
Claim 11 recites similar limitations as claim 4 except that it sets forth the claimed invention as a non-transitory computer-readable storage media and is therefore rejected under the same rationale.
Regarding Claim 18
Claim 18 recites similar limitations as claim 4 except that it sets forth the claimed invention as a system and is therefore rejected under the same rationale.
Claims 5, 12 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Sharad in view of Xu in view of Portier and further in view of Shariati et al. (“Demonstration of Federated Learning over Edge-Computing Enabled Metro Optical Networks”, 6 December 2020, hereinafter Shariati).
Regarding Claim 5
Sharad in view of Xu and Portier discloses the method of claim 1.
Sharad in view of Xu and Portier does not explicitly disclose: wherein the plurality of optical systems includes reconfigurable optical add-drop multiplexer systems (ROADM systems).
However, Shariati discloses, in the same field of endeavor: wherein the plurality of optical systems includes reconfigurable optical add-drop multiplexer systems (ROADM systems) ([Shariati, Section “Demo Architecture”, lines 1-3] discloses a network on which to perform federated learning operations, in the instant case “three commercial 2-degree reconfigurable optical add-drop multiplexers (ROADM)”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Sharad in view of Xu and Shariati to apply the federated learning framework to ROADM systems. One would be motivated to do so because ROADM systems are common, state-of-the-art optical systems, and doing so enables applying the benefits of federated learning to geo-distributed data sources ([Shariati, Abstract]). Additionally, as stated in [Shariati, page 2, col 1, lines 20-26]: “Our federated learning framework targets exactly that challenge and allows shared ownership and governance of ML models in optical networks, which is the key enabler for the realization of ML based solutions that can work in real-field scenarios in a robust and reliable way.”
Regarding Claim 12
Claim 12 recites similar limitations as claim 5 except that it sets forth the claimed invention as a non-transitory computer-readable storage media and is therefore rejected under the same rationale.
Regarding Claim 19
Claim 19 recites similar limitations as claim 5 except that it sets forth the claimed invention as a system and is therefore rejected under the same rationale.
Claims 7 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Sharad in view of Xu in view of Portier and further in view of Qian et al. (US 20210374605 A1, hereinafter Qian).
Regarding claim 7
Sharad in view of Xu and Portier discloses the method of claim 1.
Sharad in view of Xu and Portier does not explicitly disclose: wherein the machine learning models include at least one of: a long short-term memory model, a logistic regression model, or a naive Bayes model.
However, Qian discloses, in the same field of endeavor: wherein the machine learning models include at least one of: a long short-term memory model, a logistic regression model, or a naive Bayes model. ([Para 0105]: “For example, the deep learning algorithms may include […] long short term memory (LSTM)”.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Sharad in view of Xu with the models disclosed in Qian to implement a federated learning setup that includes well-known models such as LSTMs for learning diversely typed data ([Qian, para 0105]: “In particular embodiments, the deep learning algorithms 1418 may include any artificial neural networks (ANNs) that may be utilized to learn deep levels of representations and abstractions from large amounts of data. For example, the deep learning algorithms 1418 may include […] long short term memory (LSTM) […]”).
Regarding Claim 14
Claim 14 recites similar limitations as claim 7 except that it sets forth the claimed invention as a system and is therefore rejected under the same rationale.
Prior Art
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Ronald et al. (US Pub 20160234582) teaches utilizing an optical network and learning pattern behavior of power consumption.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CLINT MULLINAX whose telephone number is 571-272-3241. The examiner can normally be reached on Mon - Fri 8:00-4:30 PT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alexey Shmatov can be reached on 571-270-3428. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/C.M./Examiner, Art Unit 2123
/ALEXEY SHMATOV/Supervisory Patent Examiner, Art Unit 2123