Prosecution Insights
Last updated: April 19, 2026
Application No. 18/660,495

Method and Apparatus for Sending Fault Alarm Information

Non-Final OA (§103, §112)
Filed
May 10, 2024
Examiner
ABDELRAHEEM, MOHAMMED SAID
Art Unit
2635
Tech Center
2600 — Communications
Assignee
Huawei Technologies Co., Ltd.
OA Round
1 (Non-Final)
Grant Probability
Favorable
OA Rounds
1-2
Time to Grant
2y 9m

Examiner Intelligence

Career Allow Rate
0% (0 granted / 0 resolved; -62.0% vs TC avg)
Interview Lift
+0.0% (minimal lift; based on resolved cases with interview)
Avg Prosecution
2y 9m (typical timeline)
Career History
23 total applications across all art units; 23 currently pending

Statute-Specific Performance

§103: 57.5% (+17.5% vs TC avg)
§102: 6.4% (-33.6% vs TC avg)
§112: 29.8% (-10.2% vs TC avg)
Tech Center averages are estimates. Figures are based on career data from 0 resolved cases.

Office Action

§103 §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED OFFICE ACTION

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 2025-01-22 in compliance with the provisions of 37 CFR 1.97 has been considered by the examiner and made of record in the application file.

Claim Status

Claims 1-20 are pending in this application and are under examination in this Office Action. No claims have been allowed.

Claim Rejections - 35 U.S.C. § 112(b)

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

Claims 4, 6, 7, 11, 14, and 20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for pre-AIA, the applicant) regards as the invention.

Regarding claim 4, the claim refers to "a fault that occurs in the apparatus [[and/]] or the target communication link". This limitation is indefinite because the bracketed drafting artifact "[[and/]]" leaves the relationship between "apparatus" and "target communication link" unresolved: it is unclear whether Applicant intended "and", "or", "and/or", or another connective. The metes and bounds of the claim therefore cannot be determined with reasonable certainty. Applicant is required to amend the claim to remove the bracketed language and recite a single definite relationship.

Regarding claim 6, the claim recites the bracketed term "[[and]]" in the sequence "[[and]] each time a second faulty service is detected". The bracketed term "[[and]]" is a drafting artifact that leaves the intended connective structure unresolved. Accordingly, claim 6 is indefinite and must be amended to remove the bracketed artifact.

Regarding claim 7, the claim refers to "[[when]] the first faulty service is a third target service". This limitation is indefinite because the bracketed drafting artifact "[[when]]" renders the conditional structure unclear and grammatically incomplete. Applicant is required to amend the claim to remove the bracketed artifact and recite a complete, definite condition. Further, there is a lack of antecedent basis for "the third target service" in claim 7 because "a third target service" is recited in claim 6, on which claim 7 does not depend. For the purposes of prior art rejections, claim 7 will be taken to depend on claim 6.

Regarding claim 11, the claim refers to "the at least one processor further configured to [[; and]] generate [[fault]] the fault alarm information". This limitation is indefinite because the bracketed drafting artifacts "[[; and]]" and "[[fault]]" indicate unresolved claim language and render the processor-function limitation unclear. As written, it is not reasonably certain what operations Applicant intends the processor to perform (e.g., whether an additional function is recited by "; and" and how "fault" is to be incorporated). Applicant is required to amend claim 11 to remove the bracketed artifacts and present a single definite set of limitations.

Regarding claim 14, the claim refers to "[[when]] the first faulty service is a third target service".
This limitation is indefinite because the bracketed drafting artifact "[[when]]" renders the conditional structure unclear and grammatically incomplete. Applicant is required to amend the claim to remove the bracketed artifact and recite a complete, definite condition. Further, there is a lack of antecedent basis for "the third target service" in claim 14 because "a third target service" is recited in claim 13, on which claim 14 does not depend. For the purposes of prior art rejections, claim 14 will be taken to depend on claim 13.

Regarding claim 20, the claim refers to "select, from the target service classification tree". There is a lack of antecedent basis for "the target service classification tree" in claim 20 because "a target service classification tree" is recited in claim 19, on which claim 20 does not depend, and claim 18 (from which claim 20 depends) does not introduce the tree. Accordingly, claim 20 is indefinite. For the purposes of prior art rejections, claim 20 will be taken to depend on claim 19.

Claim Rejections - 35 U.S.C. § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for the obviousness rejections set forth in this Office Action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

As reiterated by the Supreme Court in KSR, and as set forth in MPEP 2141 (R-01.2024), II, the factual inquiries of Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), applied for establishing a background for determining obviousness under 35 U.S.C. § 103, are summarized as follows:
(1) determining the scope and content of the prior art;
(2) ascertaining the differences between the prior art and the claims at issue;
(3) resolving the level of ordinary skill in the pertinent art; and
(4) considering objective evidence indicative of obviousness or non-obviousness, if present.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 C.F.R. § 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. § 102(b)(2)(C) for any potential 35 U.S.C. § 102(a)(2) prior art against the later invention.

Claims 1, 2, 4, 8, 9, 11, 15, 16, and 18 are rejected under 35 U.S.C. § 103 as being unpatentable over Sareen et al. (US20160043797A1) in view of Ding et al. (US9071361B2).

Claim 1

Sareen teaches detecting a failure and generating corresponding failure information, and then injecting that failure information into the overhead of the data path (fast path).
This teaches generating fault alarm information indicative of a fault in the apparatus or on the target communication link, corresponding to the claim language "generating fault alarm information, wherein the fault alarm information indicates a fault that occurs in an apparatus or a target communication link, and wherein the target communication link is for communication by the apparatus":

"[0007] The fast path can operate in real-time via injection of the failure information in the data path overhead upon detection of the failure and is negligibly impacted in its time delay by a number of intermediate nodes between the node and the originating node, and the slow path can operate in software based on processing and forwarding the control plane signaling sequentially through the intermediate nodes to the originating node and is delayed based on the number of the intermediate nodes. The affected connection can utilize Optical Transport Network (OTN). The affected connection can utilize Optical Transport Network (OTN) and the failure information is inserted in Fault Type and Fault Location (FTFL) reporting communication channel bytes of the overhead. The failure information can be inserted in either forward or backward operator-specific fields in the FTFL reporting communication channel bytes of the overhead, based on a direction of the failure. ..." [Sareen, ¶ [0007]].

Sareen does not expressly teach sending the fault alarm information through a SerDes bus. However, in analogous OTN transceiver/host-interface art, Ding teaches SerDes interfaces coupled to a G.709 framer/FEC block, which is the conventional path for conveying overhead/fault information over a SerDes bus between OTN blocks, corresponding to "sending, through a serializer/deserializer (SerDes) bus, the fault alarm information":

"FIG. 7 is a signal flow diagram of an optical transceiver with an MDIO bridge to provide a mechanism to communicate to the MDIO from a G.709 framer with FEC and from a XAUI serializer/de-serializer (SerDes). ... Referring to FIG. 7, an exemplary embodiment of an optical transceiver 700 with an MDIO bridge provides a mechanism in the present disclosure to communicate to the MDIO from a G.709 framer with FEC 708 and from a XAUI serializer/deserializer (SerDes) 710. The MDIO bridge preserves the standard MDIO functionality found in MSA specifications such as XENPAK, XPAK, and X2 and allows the G.709 framer with FEC 708 to communicate utilizing the same MDIO. As such, a host system configured to communicate with an optical transceiver can operate with an optical transceiver 700 with an integrated G.709 framer. The host system can be modified in software only to receive MDIO communications from the MDIO bridge. The optical transceiver 700 includes a transmitter (TX) 702 and a receiver (RX) 704 connected at 10 Gbps to an SFI-4 SerDes 706. SFI-4 is SerDes Framer Interface standard level 4 from the Optical Internetworking Forum (OIF). SFI-4 is one example of an interface to the G.709 framer 708. Other interfaces to the G.709 framer can include XGMII, XFI, and XAUI. The SFI-4 SerDes 706 connects to the G.709 framer 708 with an SFI 4.1 signal. The G.709 framer 708 connects at 10 Gbps to the XAUI SerDes 710 which in turn connects to a host device. The MDIO bridge includes a control field programmable gate array (FPGA) 716 which is configured to bridge the MDIO interface between the G.709 framer 708 and the XAUI SerDes 710. The FPGA 716 connects to the G.709 framer 708 and to the XAUI SerDes 710 and provides a single external MDIO 720 interface to the host device.
This external MDIO interface 720 includes data from both the XAUI SerDes 710 and the G.709 framer 708. The FPGA 716 connects to the XAUI SerDes 710 through a XAUI MDIO 718 connection and to the G.709 framer 708 through a parallel microprocessor bus 712. Additionally, the FPGA 716 provides discrete control and status 714 to the SFI-4 SerDes 706. The FPGA 716 has a serial packet interface (SPI) to a processor 724 which in turn has a 2-wire input/output (I/O) connection 726 to the XAUI SerDes 710 and a SPI interface to another processor 722. The FPGA 716 is configured to decode MDIO addresses and pass MDIO data between both the G.709 framer 708 and the XAUI SerDes 710. Also, the FPGA 716 is configured to combine MDIO data from both the G.709 framer 708 and the XAUI SerDes 710 to the external MDIO 720. As such, the MDIO bridge provides a mechanism for a single, MSA-compliant MDIO interface to operate with the additional circuitry of the G.709 framer with FEC 708." [Ding, FIG. 7, col. 3; cols. 12-13].

Accordingly, it would have been obvious to a person of ordinary skill in the art (POSITA), at the time of the invention, to implement Sareen's fault/failure detection and generation of corresponding failure information within the SerDes-based OTN transceiver/G.709 framer architecture of Ding. Sareen is directed to rapid dissemination of failure information by injecting such information into OTN overhead in a "fast path," so that alarm information is available promptly when a fault occurs on a service signal. Ding, in turn, is directed to a practical OTN transceiver implementation in which a G.709 framer and associated logic communicate with a host/controller through standard high-speed SerDes interfaces. In real OTN equipment, overhead and status information generated at the framer/line side must be conveyed to host/control logic that performs alarm handling and initiates subsequent actions; Ding's SerDes interconnect is a conventional, reliable mechanism for transporting precisely that kind of overhead/status data at line rates.

A POSITA would have been motivated to combine Sareen and Ding for reasons consistent with KSR and common engineering practice. First, using a SerDes bus to carry framer/overhead-related status and alarm information is a known technique applied to a known system, yielding predictable results (high bandwidth, signal integrity, deterministic transfer, and modular integration). Second, integrating Sareen's fast-path alarm generation into Ding's established architecture reduces implementation risk and supports interoperability across modules, because SerDes-coupled boundaries are commonly used to expose overhead/status to host logic without custom sideband wiring. Third, the choice is an obvious selection among a finite set of conventional internal interconnect options for moving overhead/status information (e.g., parallel buses, proprietary links, SerDes); selecting the standard SerDes path is a routine design choice that predictably achieves the desired transfer of fault alarm information. Therefore, claim 1's SerDes-bus aspect would have been obvious over Sareen in view of Ding.
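To make the claim 1 mapping concrete, the following minimal Python sketch models the combination as the rejection frames it: fault alarm information is generated on fault detection (per Sareen) and handed to a SerDes-coupled interface for transmission (per Ding). All class and function names here are illustrative assumptions, not code from the cited references or the application.

```python
# Illustrative sketch of the claim 1 combination as framed in this rejection.
# FaultAlarm, SerDesBus, and on_fault_detected are hypothetical names.
from dataclasses import dataclass

@dataclass
class FaultAlarm:
    source: str       # fault in the apparatus or on the target communication link
    fault_kind: str   # e.g., "loss_of_signal"

class SerDesBus:
    """Stand-in for a high-speed serializer/deserializer interconnect (Ding)."""
    def send(self, payload: bytes) -> None:
        # A real device would serialize onto physical lanes; here we just log it.
        print(f"SerDes TX: {payload!r}")

def on_fault_detected(source: str, fault_kind: str, bus: SerDesBus) -> None:
    alarm = FaultAlarm(source, fault_kind)   # generate fault alarm information
    bus.send(f"{alarm.source}:{alarm.fault_kind}".encode())  # send via SerDes bus

on_fault_detected("target_link", "loss_of_signal", SerDesBus())
```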
Claim 2

With respect to claim 2, all limitations of claim 1 are taught by Sareen and Ding, except wherein claim 2 additionally requires that the SerDes-bus transmission comprises sending an OTN overhead signal that carries the fault alarm information: "sending the fault alarm information comprises sending, through the SerDes bus, an optical transport network (OTN) overhead signal that carries the fault alarm information."

However, within analogous art, Sareen expressly teaches injecting failure information into the OTN overhead (fast path), and discusses use of FTFL overhead for failure information:

"[0008] In yet another exemplary embodiment, a node, in a network using a control plane, configured for providing fast restoration in the network includes one or more line modules configured to inject information in overhead on connections; and a controller communicatively coupled to the one or more line modules, wherein the controller is configured to operate a distributed control plane through a communications channel in the overhead; wherein, responsive to failure on a link, the one or more line modules are configured to inject information identifying the failure in the overhead of each one of affected connections, over a fast path, and wherein, responsive to the failure on the link, the controller is also configured to generate and forward control plane signaling towards originating nodes of the affected connections over a slow path relative to the fast path. ...

[0029] In an exemplary embodiment, the fast mesh restoration systems and methods can utilize Fault Type and Fault Location (FTFL) reporting communication channel bytes (FTFL message 32) for carrying release specific message data (e.g., control plane Connection Incarnation number, control plane Link ID and control plane Node ID of point of failure). The FTFL message 32 is allocated in the ODUk overhead to transport a 256-byte Fault Type and Fault Location (FTFL) message. The FTFL message 32 is located in row 2, column 14 of the ODUk overhead. The FTFL message 32 includes two 128-byte fields as shown in FIG. 4, a forward FTFL 34 and a backward FTFL 36. The forward FTFL 34 is allocated in bytes 0 to 127 of the FTFL message 32. ...

[0030] The FTFL message 32 can be used to instantly propagate the same information in the RELEASE message 20 regarding the failure 18. At the failure 18, a line module, on detecting any failure that results in a mesh restoration ...

[0031] Referring to FIG. 5, in an exemplary embodiment, a flow chart illustrates a fast mesh restoration process 60. The fast mesh restoration process 60 contemplates operation in the network 10. The fast mesh restoration process 60 includes operating a control plane in a network (step 62). For example, the control plane can be a distributed control plane such as ASON, GMPLS, ORSP, etc. ...

[0032] The fast mesh restoration process 60 includes, at intermediate node(s), receiving information in the overhead; parsing the information and passing it to the control plane; acting on the information immediately or after a hold-off period; and optionally, generating and forwarding a RELEASE message (step 68). ... Since RELEASE information is included in frame ODU data, a number of nodes in the path has negligible impact. ..." [Sareen, ¶ [0008], ¶¶ [0029]-[0032]].

Sareen does not expressly teach G.709 overhead output. However, in analogous OTN transceiver/host-interface art, Ding teaches explicit G.709 overhead output ("G.709 Overhead Out") in a SerDes/framer context, which would carry such overhead/fault information:
"FIG. 5 is a block diagram of an exemplary embodiment of a XAUI-XFI transceiver including integrated G.709 framing and FEC; it includes integrated circuitry to multiplex/de-multiplex, encode/decode, frame/un-frame, and process overhead and FEC. Referring to FIG. 5, an exemplary embodiment of a XAUI-XFI transceiver 500 including integrated G.709 framing and FEC includes integrated circuitry to multiplex/de-multiplex, encode/decode, frame/un-frame, and process overhead and FEC. XAUI clock and data recovery (CDR) 505 inputs are configured to accept four 3.125 Gbps signals from a host system, to retime, recover the clock, and pass the four 3.125 Gbps signals to a PHY XS 8B/10B decoder 515. The decoder 515 is configured to de-multiplex four XAUI signals running at 3.125 Gbps using 8B/10B encoding and pass the output to a physical coding sub-layer (PCS) 525 module. The PCS 525 module performs 64B/66B encoding to provide a single lane XFI signal running at 10.3125 Gbps and PCS scrambling. The PCS 525 module outputs to a G.709 framer 535. ... The PCS 530 module performs 64B/66B decoding and PCS de-scrambling. The PCS 530 module outputs to a PHY XS 8B/10B encoder 520. The encoder 520 is configured to de-multiplex an XFI signal into four XAUI signals running at 3.125 Gbps using 8B/10B encoding and pass the output to four XAUI drivers 510. The XAUI drivers 510 provide four 3.125 Gbps signals to the host system. Additionally, the XAUI-XFI transceiver 500 includes a serial packet interface (SPI) and I2C interface 555 for communications to the host system. The MDIO 550 interface is utilized to provide standard MSA-compliant communications to the host system. Additionally, the present disclosure utilizes the MDIO 550 to communicate a subset of OAM&P and FEC overhead to the host system from the G.709 framer 535 and G.709 de-framer 540 through unused, undefined, reserved, or optional MDIO registers." [Ding, FIG. 5, col. 3; cols. 10-11].

Accordingly, it would have been obvious to a POSITA to further configure the Sareen-in-view-of-Ding system such that the SerDes-bus transmission comprises sending an OTN overhead signal that carries the fault alarm information. Sareen already teaches injecting failure information into overhead for rapid distribution, which is the standard in-band mechanism in OTN for conveying operational status and fault conditions with low latency. Ding teaches that a G.709 framer processes overhead and interfaces to SerDes-based links toward the host/controller, including explicit "G.709 overhead out" functionality. In this context, conveying Sareen's alarm information via the overhead stream that the framer already generates and outputs is the most straightforward implementation choice for transporting alarm information across the SerDes boundary.

A POSITA would be motivated to adopt overhead carriage over the SerDes path because it predictably improves timeliness and reliability of fault dissemination while minimizing additional interfaces. Overhead is designed to carry such management/status information in a standardized way; using it avoids inventing a new side channel and avoids delay associated with separate management planes. Under KSR, this is the predictable use of known elements according to their established functions (OTN overhead conveys status/fault indications; SerDes transports high-speed digital signals). It is also an obvious choice among a finite set of known alternatives for conveying alarm information within a transponder (overhead versus separate out-of-band messaging). Therefore, claim 2's additional requirement that the SerDes transmission comprises sending the overhead signal carrying the alarm information would have been obvious.
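The dual-notification behavior quoted from Sareen can be summarized in a short sketch: the same failure information is injected into the overhead of every affected connection (fast path, hardware) while a RELEASE message is also forwarded through the control plane (slow path, software). This is a hedged illustration of the quoted ¶¶ [0008] and [0031] only; the data shapes and names are assumptions.

```python
# Hedged sketch of Sareen's fast-path/slow-path dual notification scheme.
class ControlPlane:
    def send_release(self, info: dict) -> None:
        # Slow path: sequential, software-based control plane signaling.
        print("RELEASE:", info)

def notify_failure(affected_connections, failure_info, control_plane):
    # Fast path: inject the same failure information into per-connection overhead.
    for conn in affected_connections:
        conn["overhead"]["FTFL"] = failure_info   # FTFL bytes in the ODUk overhead
    # Slow path: RELEASE message toward the originating nodes.
    control_plane.send_release(failure_info)

conns = [{"id": 1, "overhead": {}}, {"id": 2, "overhead": {}}]
notify_failure(conns, {"node_id": "N7", "link_id": "L3", "incarnation": 5},
               ControlPlane())
```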
Claim 4

With respect to claim 4, all limitations of claim 1 are taught by Sareen and Ding, except wherein claim 4 additionally requires (i) generating the fault alarm information based on a fault that occurs in the apparatus and/or the target communication link, and (ii) that the fault alarm information comprises a type of fault.

However, within analogous art, Sareen teaches FTFL subfields including a fault type indication field (FIF), which explicitly provides a fault type as part of the failure/fault information carried in overhead ("fault alarm information comprises a type of fault"):

"[0030] The FTFL message 32 can be used to instantly propagate the same information in the RELEASE message 20 regarding the failure 18. At the failure 18, a line module, on detecting any failure that results in a mesh restoration, injects failure information in FTFL message 32 for all the paths configured over the failed link. For example, on seeing a failure at OTU or at a line TCM, the associated line module fills the FTFL message 32 with information corresponding to individual ODUk/ODU. In an exemplary embodiment, this information can be included in the operator-specific fields and can include Node Name/ID, Link ID, Incarnation, etc. The Node Name/ID uniquely identifies the node, the Link ID uniquely identifies the link associated with the node with the failure, and the Incarnation identifies specific connections on the link that are affected by the failure. That is, the information is the same information in the RELEASE message 20.

[0031] Referring to FIG. 5, in an exemplary embodiment, a flow chart illustrates a fast mesh restoration process 60. The fast mesh restoration process 60 contemplates operation in the network 10. The fast mesh restoration process 60 includes operating a control plane in a network (step 62). For example, the control plane can be a distributed control plane such as ASON, GMPLS, ORSP, etc., and the control plane can be source-based routed. The fast mesh restoration process 60 includes detecting a failure on or in a link in the network (step 64). Responsive to detecting the failure (step 64), the fast mesh restoration process 60 includes generating and forwarding a RELEASE message and injecting information from the RELEASE message in the overhead of affected connections (step 66). The fast mesh restoration process 60 contemplates dual notification schemes using the RELEASE message in control plane signaling and injecting the same information in the overhead for instantaneous notifications. The RELEASE message in control plane signaling can be referred to as a slow path and the overhead can be referred to as a fast path. The references to the slow path and the fast path are relative to one another. Specifically, the slow path operates sequentially and in software, and the fast path operates almost in parallel and in hardware through injection of data in overhead. Thus, the fast path is relatively faster than the slow path.
[0032] The fast mesh restoration process 60 includes, at intermediate node(s), receiving information in the overhead; parsing the information and passing it to the control plane; acting on the information immediately or after a hold-off period; and optionally, generating and forwarding a RELEASE message (step 68). That is, each of the intermediate node(s) can receive the information in the overhead and pass this information to the control plane to act. The control plane can run a timer for the hold-off period (e.g., 50 ms, etc.) to see if the originating node has acted before performing any action. For backward compatibility, each node, on receipt of failure information in FTFL, could initiate a RELEASE message to the neighboring node to RELEASE the CALL, in case other nodes do not support reading FTFL bytes for release information (because it may be on legacy software). If a node supports reading the FTFL information, the legacy RELEASE message may get ignored as a call object may already be deleted because of the information received in the FTFL. Thus, the fast mesh restoration process 60 is fully backward compatible with legacy nodes in the path." [Sareen, ¶¶ [0030]-[0032]].

Accordingly, it would have been obvious to a POSITA to generate the fault alarm information based on a fault that occurs on a service signal and to include, in that fault alarm information, a type of fault. In transport networks, alarms are expected to be actionable; therefore, fault signaling commonly includes fault classification (fault type) so that downstream logic and operators can distinguish between different failure modes (e.g., loss-of-signal, loss-of-frame, degradation, equipment fault) and select the correct restoration or maintenance response. Sareen's system is already directed to producing and conveying failure information quickly via overhead, and Ding's architecture provides the practical equipment interface for conveying such information through standard internal links.

A POSITA would be motivated to include the type-of-fault field because doing so yields predictable, well-known benefits: faster troubleshooting, improved root-cause isolation, reduced false escalation, and better automated decision making for protection/restoration. Under KSR, adding fault classification to alarm information is a known technique applied to a known system, producing expected results without changing the fundamental architecture. Further, implementing fault type in the alarm payload is a routine design choice among a finite set of conventional alarm attributes (presence, type, location, severity). Therefore, claim 4's fault-type requirement would have been obvious in view of the combined teachings.
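The FTFL structure quoted from Sareen lends itself to a compact worked example. The sketch below packs a 256-byte FTFL message with a forward field in bytes 0-127 and a backward field in bytes 128-255, each carrying a fault type indication (FIF) plus operator identifier and operator-specific data, matching the subfield names in the quoted passages; the exact subfield offsets used here are assumptions for illustration only.

```python
# Illustrative FTFL packing per the structure quoted from Sareen (256 bytes;
# forward field bytes 0-127, backward field bytes 128-255; FIF/OPRID/OPRSPECIFIC
# subfields). Offsets within each 128-byte half are assumed for this sketch.
def pack_ftfl(fault_type: int, oper_id: bytes, oper_specific: bytes,
              backward: bool = False) -> bytearray:
    msg = bytearray(256)                  # full FTFL message
    base = 128 if backward else 0         # forward half vs. backward half
    msg[base] = fault_type                # FIF: fault type indication field
    msg[base + 1:base + 10] = oper_id.ljust(9, b"\x00")[:9]             # OPRID
    msg[base + 10:base + 128] = oper_specific.ljust(118, b"\x00")[:118] # OPRSPECIFIC
    return msg

# Forward fault with node/link identifiers in the operator-specific field,
# mirroring the Node ID / Link ID / Incarnation data Sareen describes.
ftfl = pack_ftfl(0x01, b"OPER-A", b"NodeID=N7;LinkID=L3;Inc=5")
assert len(ftfl) == 256
```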
Claim 8

Sareen teaches generating failure information upon detecting a fault/failure and inserting/injecting that information into overhead, which corresponds to processor-generated fault alarm information ("at least one processor configured to generate fault alarm information, wherein the fault alarm information indicates a fault that occurs in the apparatus"):

"... the slow path can operate in software based on processing and forwarding the control plane signaling sequentially through the intermediate nodes to the originating node and is delayed based on the number of the intermediate nodes. The affected connection can utilize Optical Transport Network (OTN). The affected connection can utilize Optical Transport Network (OTN) and the failure information is inserted in Fault Type and Fault Location (FTFL) reporting communication channel bytes of the overhead. The failure information can be inserted in either forward or backward operator-specific fields in the FTFL reporting communication channel bytes of the overhead, based on a direction of the failure. ..." [Sareen, ¶ [0007]].

Sareen does not expressly teach a SerDes bus. However, Ding teaches a SerDes-based OTN transmitter/host interface (SFI-4 SerDes, XAUI SerDes) coupled to a G.709 FEC/framer block, corresponding to a transmitter sending information through a SerDes bus ("a transmitter coupled to the at least one processor and configured to send, through a serializer/deserializer (SerDes) bus, the fault alarm information"): "FIG. 7 is a signal flow diagram of an optical transceiver with an MDIO bridge to provide a mechanism to communicate to the MDIO from a G.709 framer with FEC and from a XAUI serializer/de-serializer (SerDes) ..." [Ding, FIG. 7, col. 3; cols. 12-13].

Accordingly, it would have been obvious to a POSITA to implement the method-level functionality of claim 1 in an apparatus form comprising one or more processors and a transmitter. In OTN equipment, fault detection, alarm generation, and overhead handling are routinely realized as device logic within transponders/framers and associated controllers that must operate continuously and at line rate. Sareen provides the functional description of detecting failures and generating/injecting failure information into overhead, while Ding provides a concrete architecture showing how framer logic and SerDes-connected interfaces communicate with a host/controller. Translating these known functional steps into an apparatus with processing and transmission components is a standard engineering implementation step.

A POSITA would be motivated to implement the claimed features as an apparatus to obtain predictable engineering advantages: deterministic timing, reduced latency between fault detection and alarm distribution, and straightforward integration with standard OTN modules. Under KSR, implementing known processing logic using known components in their established functions (processor performs the logic; transmitter outputs the information) is a predictable application yielding expected results. Therefore, claim 8 would have been obvious over Sareen in view of Ding.

Claim 9

With respect to claim 9, all limitations of claim 8 are taught by Sareen and Ding, except wherein claim 9 additionally requires sending, through the SerDes bus, an OTN overhead signal that carries the fault alarm information. However, within analogous art, Sareen and Ding expressly teach an OTN overhead signal carrying the fault alarm information:

"... The affected connection can utilize Optical Transport Network (OTN). The affected connection can utilize Optical Transport Network (OTN) and the failure information is inserted in Fault Type and Fault Location (FTFL) ... [0009] The restoration procedure can exclude a node associated with the node identifier and a link associated with the link identifier, wherein the node and the link can be excluded since routing updates in the slow path are not available at the originating node upon receiving the information in the fast path. The affected connection can utilize Optical Transport Network (OTN).
The affected connection can utilize Optical Transport Network (OTN) and the information based on the failure is inserted in Fault Type and Fault Location (FTFL) reporting communication channel bytes of the overhead. ... The affected connection can utilize Optical Transport Network (OTN) and the information based on the failure is inserted in Fault Type and Fault Location (FTFL) reporting communication channel bytes of the overhead." [Sareen, ¶¶ [0007]-[0010], ¶¶ [0029]-[0032]]. "FIG. 5 is a block diagram of an exemplary embodiment of a XAUI-XFI transceiver including integrated G.709 framing and FEC; it includes integrated circuitry to multiplex/de-multiplex, encode/decode, frame/un-frame, and process overhead and FEC." [Ding, FIG. 5, col. 3; cols. 10-11].

Accordingly, it would have been obvious to a POSITA to configure the apparatus such that the transmitter sends, through the SerDes bus, an OTN overhead signal carrying the fault alarm information. Ding teaches SerDes-based internal interfaces coupled to the G.709 framer, and Sareen teaches overhead-based dissemination of failure information. In a practical transponder implementation, it is conventional to move overhead/status information between the framer/line side and the host/control side over the SerDes boundary. Therefore, sending the alarm information as part of the overhead signal over that SerDes interface is the most direct implementation choice for making the alarm information available to receiving logic. A POSITA would be motivated to do so because it is the predictable use of known elements according to their established functions: overhead conveys status/fault information, and SerDes provides reliable high-speed internal transport. This approach reduces custom sideband wiring and yields predictable results: fast and reliable alarm propagation with minimal integration complexity. It also represents an obvious selection among a limited set of known internal transport options. Therefore, claim 9 would have been obvious.

Claim 11

With respect to claim 11, all limitations of claim 8 are taught by Sareen and Ding, except wherein claim 11 additionally requires that the fault alarm information comprises a type of fault. However, within analogous art, types of faults are taught by Sareen's FTFL fault type indication field (FIF):

"[0015] FIG. 4 is a block diagram of a FTFL message in the G.709 OTN. ... [0029] In an exemplary embodiment, the fast mesh restoration systems and methods can utilize Fault Type and Fault Location (FTFL) reporting communication channel bytes (FTFL message 32) for carrying release specific message data (e.g., control plane Connection Incarnation number, control plane Link ID and control plane Node ID of point of failure). The FTFL message 32 is allocated in the ODUk overhead to transport a 256-byte Fault Type and Fault Location (FTFL) message. The FTFL message 32 is located in row 2, column 14 of the ODUk overhead. The FTFL message 32 includes two 128-byte fields as shown in FIG. 4, a forward FTFL 34 and a backward FTFL 36. The forward FTFL 34 is allocated in bytes 0 to 127 of the FTFL message 32. The backward FTFL 36 is allocated in bytes 128 to 255 of the FTFL message 32. The forward FTFL 34 and the backward FTFL 36 are further divided into three subfields as shown in FIG. 4, a forward/backward fault type indication field (FIF), a forward/backward operator identifier field (OPRID), and a forward/backward operator-specific field (OPRSPECIFIC). Note, the forward FTFL 34 and the backward FTFL 36 are also shown in FIGS.
7 and 8 with the operator-specific fields utilized to carry information related to failures, e.g. NodeID, Link ID, and information identifying failed connections. ..." [Sareen, FIG. 4, ¶ [0015], ¶¶ [0029]-[0032]].

Accordingly, it would have been obvious to a POSITA to configure the apparatus such that the fault alarm information comprises a type of fault. Transport equipment is expected to output alarms that are sufficiently informative to drive correct downstream decisions; fault type classification is a conventional part of fault alarm signaling in OTN systems. Including fault type in the alarm payload enables distinguishing different fault modes and selecting appropriate restoration and maintenance actions. A POSITA would be motivated to include fault type because it yields predictable and well-recognized operational benefits: improved diagnosability, reduced ambiguity, and better automated response behavior. Under KSR, adding a conventional alarm attribute to a known alarm message is a routine design choice producing expected results. Therefore, claim 11 would have been obvious.

Claim 15

Sareen teaches detecting a failure and generating corresponding failure information, and then injecting that failure information into the overhead of the data path (fast path), corresponding to "instructions cause generating fault alarm information indicating a fault":

"[0007] The fast path can operate in real-time via injection of the failure information in the data path overhead upon detection of the failure and is negligibly impacted in its time delay by a number of intermediate nodes between the node and the originating node, and the slow path can operate in software based on processing and forwarding the control plane signaling sequentially through the intermediate nodes to the originating node and is delayed based on the number of the intermediate nodes. The affected connection can utilize Optical Transport Network (OTN). The affected connection can utilize Optical Transport Network (OTN) and the failure information is inserted in Fault Type and Fault Location (FTFL) reporting communication channel bytes of the overhead. The failure information can be inserted in either forward or backward operator-specific fields in the FTFL reporting communication channel bytes of the overhead, based on a direction of the failure." [Sareen, ¶ [0007]].

Sareen does not expressly teach SerDes. However, Ding teaches the SerDes-based OTN interface architecture used to send such information, corresponding to "instructions cause sending the fault alarm information through a SerDes bus": "FIG. 7 is a signal flow diagram of an optical transceiver with an MDIO bridge to provide a mechanism to communicate to the MDIO from a G.709 framer with FEC and from a XAUI serializer/de-serializer (SerDes)." [Ding, FIG. 7, col. 3; cols. 12-13].

Accordingly, it would have been obvious to a POSITA to implement the claim 1 functionality as a computer program product. OTN equipment commonly uses firmware/software to configure framer behavior, populate overhead fields, detect alarm conditions, and exchange status/alarm information with a host/controller. Sareen's overhead-based failure information generation can be implemented as instructions executed by processing logic that drives overhead insertion and alarm handling, and Ding's architecture includes the host-side interfacing typical of programmable devices.
Providing the functionality as software instructions is a standard way to realize such features in deployable products. A POSITA would be motivated to implement these functions as program instructions to obtain predictable advantages: easier updates and configurability, reuse across product variants, reduced hardware redesign, and simplified verification compared with pure hardwired logic. Under KSR, implementing known processing steps in software/firmware is a routine choice among finite alternatives (firmware-controlled versus fixed hardware) that yields predictable results. Therefore, claim 15 would have been obvious.

Claim 16

With respect to claim 16, all limitations of claim 15 are taught by Sareen and Ding, except wherein claim 16 additionally requires sending, through the SerDes bus, an OTN overhead signal that carries the fault alarm information.

However, within analogous art, Sareen expressly teaches injecting failure information into the OTN overhead (fast path), and discusses use of FTFL overhead for failure information:

"[0008] In yet another exemplary embodiment, a node, in a network using a control plane, configured for providing fast restoration in the network includes one or more line modules configured to inject information in overhead on connections; and a controller communicatively coupled to the one or more line modules, wherein the controller is configured to operate a distributed control plane through a communications channel in the overhead; wherein, responsive to failure on a link, the one or more line modules are configured to inject information identifying the failure in the overhead of each one of affected connections, over a fast path, and wherein, responsive to the failure on the link, the controller is also configured to generate and forward control plane signaling towards originating nodes of the affected connections over a slow path relative to the fast path; wherein a restoration procedure can be initiated in the control plane, responsive to the fast path, prior to the originating node receiving the control plane signaling via the slow path." [Sareen, ¶ [0008]].

Sareen does not expressly teach G.709 overhead output. However, in analogous OTN transceiver/host-interface art, Ding teaches explicit G.709 overhead output ("G.709 Overhead Out") in a SerDes/framer context, which would carry such overhead/fault information: "FIG. 5 is a block diagram of an exemplary embodiment of a XAUI-XFI transceiver including integrated G.709 framing and FEC; it includes integrated circuitry to multiplex/de-multiplex, encode/decode, frame/un-frame, and process overhead and FEC." [Ding, FIG. 5, col. 3; cols. 10-11].

Accordingly, it would have been obvious to a POSITA to configure the computer program product such that the fault alarm information is sent through a SerDes bus. In OTN devices, firmware/software routinely coordinates the transfer of overhead and status information between a framer and host through established internal interfaces, and Ding teaches SerDes-based connectivity for precisely that purpose. Therefore, implementing the "sending" step using the SerDes path is a conventional internal signaling choice in such equipment. A POSITA would be motivated to use the SerDes bus because it provides predictable benefits needed in OTN equipment: sufficient bandwidth, deterministic timing, and robust signal integrity for high-rate transfer of overhead/status data.
Under KSR, applying a known interface solution to transport known alarm/status content is a predictable modification yielding expected results. Therefore, claim 16 would have been obvious.

Claim 18

With respect to claim 18, all limitations of claim 16 are taught by Sareen and Ding, except wherein claim 18 additionally requires generating the fault alarm information based on a fault that occurs in the apparatus or the target communication link, wherein the fault alarm information comprises a type of fault. However, within analogous art, Sareen teaches FTFL subfields including a fault type indication field (FIF), which explicitly provides a fault type as part of the failure/fault information carried in overhead ("fault alarm information comprises a type of fault"):

"[0030] The FTFL message 32 can be used to instantly propagate the same information in the RELEASE message 20 regarding the failure 18. At the failure 18, a line module, on detecting any failure that results in a mesh restoration, injects failure information in FTFL message 32 for all the paths configured over the failed link. For example, on seeing a failure at OTU or at a line TCM, the associated line module fills the FTFL message 32 with information corresponding to individual ODUk/ODU. In an exemplary embodiment, this information can be included in the operator-specific fields and can include Node Name/ID, Link ID, Incarnation, etc. The Node Name/ID uniquely identifies the node, the Link ID uniquely identifies the link associated with the node with the failure, and the Incarnation identifies specific connections on the link that are affected by the failure. That is, the information is the same information in the RELEASE message 20. [0031] Referring to FIG. 5, in an exemplary embodiment, a flow chart illustrates a fast mesh restoration process 60. ..." [Sareen, ¶¶ [0030]-[0032]].

Accordingly, it would have been obvious to a POSITA to configure the computer program product such that the fault alarm information includes a type of fault. Software-implemented alarm handling routinely classifies detected failures so that automated actions, logging, and troubleshooting can be performed correctly. Fault type is a conventional field for making an alarm actionable and enabling correct policy selection. A POSITA would be motivated to include fault type because it predictably improves diagnosis and response: it reduces ambiguity, supports faster restoration decisions, and supports consistent reporting across nodes and management systems. Encoding fault type in software is a routine implementation choice, and thus claim 18 would have been obvious.

Claims 3, 10, and 17 are rejected under 35 U.S.C. § 103 as being unpatentable over Sareen et al. in view of Ding et al. and Hironaka et al. (US20160337031A1).

Claim 3

With respect to claim 3, all limitations of claim 2 are taught by Sareen and Ding, except wherein claim 3 additionally requires: (i) overhead information for monitoring a service signal; and (ii) identification information that identifies the fault alarm information (and optionally second identification information identifying the overhead information). However, within analogous art, Sareen expressly teaches placing failure information into FTFL.
Thus, FTFL serves as identification information identifying fault alarm information:

"[0029] In an exemplary embodiment, the fast mesh restoration systems and methods can utilize Fault Type and Fault Location (FTFL) reporting communication channel bytes (FTFL message 32) for carrying release specific message data (e.g., control plane Connection Incarnation number, control plane Link ID and control plane Node ID of point of failure). The FTFL message 32 is allocated in the ODUk overhead to transport a 256-byte Fault Type and Fault Location (FTFL) message. The FTFL message 32 is located in row 2, column 14 of the ODUk overhead. The FTFL message 32 includes two 128-byte fields as shown in FIG. 4, a forward FTFL 34 and a backward FTFL 36. The forward FTFL 34 is allocated in bytes 0 to 127 of the FTFL message 32. ...

[0030] The FTFL message 32 can be used to instantly propagate the same information in the RELEASE message 20 regarding the failure 18. At the failure 18, a line module, on detecting any failure that results in a mesh restoration ...

[0031] Referring to FIG. 5, in an exemplary embodiment, a flow chart illustrates a fast mesh restoration process 60. The fast mesh restoration process 60 contemplates operation in the network 10. The fast mesh restoration process 60 includes operating a control plane in a network (step 62). For example, the control plane can be a distributed control plane such as ASON, GMPLS, ORSP, etc. ...

[0032] The fast mesh restoration process 60 includes, at intermediate node(s), receiving information in the overhead; parsing the information and passing it to the control plane; acting on the information immediately or after a hold-off period; and optionally, generating and forwarding a RELEASE message (step 68). ... Since RELEASE information is included in frame ODU data, a number of nodes in the path has negligible impact. ..." [Sareen, ¶¶ [0029]-[0032]].

Sareen does not expressly enumerate standardized monitoring-overhead definitions. However, in analogous OTN overhead definition art, Hironaka teaches monitoring overhead including Path Monitoring (PM) and Tandem Connection Monitoring (TCM), which are overhead bytes used to monitor an ODUk service signal ("overhead information is for monitoring a service signal of a service carried by the apparatus or the target communication link"):

"[0086] Abbreviation terms indicated in FIG. 4 mean the followings: [0087] PM: Path Monitoring ... [0088] TCM: Tandem Connection Monitoring ... [0089] RES: Reserved for future international standardization ... [0090] ACT: Activation/deactivation control channel [0091] FTFL: Fault Type & Fault Location reporting channel [0092] EXP: Experiment ... [0093] GCC: General Communication Channel ..." [Hironaka, FIG. 4, ¶¶ [0086]-[0093]].

Hironaka further teaches FTFL (Fault Type & Fault Location) as a defined overhead channel/field used to convey fault type/location ("first identification information identifies the fault alarm information") [Hironaka, FIG. 4, ¶¶ [0086]-[0093]]. Within analogous art, Hironaka's PM and TCM field types themselves identify the overhead information being used for monitoring; i.e., the overhead contains field-type identifiers (PM/TCM) that distinguish monitoring overhead, satisfying the alternative second-identification requirement ("second identification information identifies the overhead information") [Hironaka, FIG. 4, ¶¶ [0086]-[0093]].
Accordingly, it would have been obvious to a POSITA to modify Sareen in view of Ding and further in view of Hironaka so that the OTN overhead signal includes overhead information for monitoring a service signal and includes identification information identifying the fault alarm information and/or identifying the monitoring overhead information. Once fault alarm information and monitoring data are carried in overhead and transported through device interfaces, a practical implementation must use standardized monitoring overhead fields and unambiguous identifiers so that receivers can correctly interpret the overhead content and correlate monitoring results with fault alarms. Hironaka provides known OTN overhead constructs for monitoring and for fault identification, and a POSITA would incorporate such standardized constructs to ensure consistent field interpretation.

A POSITA would be motivated to incorporate Hironaka's monitoring and identification constructs because doing so yields predictable results: improved service assurance through continuous monitoring, clearer distinction between monitoring overhead and alarm information, and improved interoperability across equipment and network management systems. Under KSR, applying known standardized overhead definitions to the known problem of organizing and identifying overhead content is a routine design choice among finite options and produces expected benefits without requiring an inventive leap. Therefore, claim 3 would have been obvious over Sareen in view of Ding and Hironaka.

Claim 10

With respect to claim 10, all limitations of claim 9 are taught by Sareen and Ding, except wherein claim 10 additionally requires monitoring-overhead information and identification information (FTFL) and/or second identification information identifying the overhead information. However, within analogous art, Hironaka expressly teaches monitoring-overhead information and identification information (FTFL) and/or second identification information identifying the overhead information: "[0086] Abbreviation terms indicated in FIG. 4 mean the followings: [0087] PM: Path Monitoring ... [0088] TCM: Tandem Connection Monitoring ... [0089] RES: Reserved for future international standardization ... [0090] ACT: Activation/deactivation control channel [0091] FTFL: Fault Type & Fault Location reporting channel [0092] EXP: Experiment ... [0093] GCC: General Communication Channel ..." [Hironaka, FIG. 4, ¶¶ [0086]-[0093]].

Accordingly, it would have been obvious to a POSITA to configure the claim 10 apparatus to include monitoring overhead information and identification information as taught by Hironaka. The same practical need exists at the device level: an apparatus that sends fault alarm information via OTN overhead must also support standardized monitoring overhead and identifiers so that the overhead stream can be parsed and used for service assurance and fault correlation. Hironaka provides known OTN overhead constructs that enable this behavior and would naturally be adopted in a compliant apparatus. A POSITA would be motivated to adopt these constructs in the apparatus to obtain predictable operational benefits such as better monitoring of the service signal, more reliable interpretation of overhead fields, and improved correlation between monitoring results and fault alarms. This is a predictable combination of known elements used according to their established functions, and therefore claim 10 would have been obvious.
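As a quick illustration of how Hironaka's field-type identifiers can serve as "identification information," the sketch below maps the quoted ODUk overhead abbreviations to their purposes and classifies PM/TCM as monitoring overhead and FTFL as fault reporting. The dictionary and classifier are illustrative assumptions, not code from any cited reference.

```python
# Illustrative mapping of the ODUk overhead field types quoted from Hironaka.
ODUK_OVERHEAD_FIELDS = {
    "PM":   "Path Monitoring",
    "TCM":  "Tandem Connection Monitoring",
    "RES":  "Reserved for future international standardization",
    "ACT":  "Activation/deactivation control channel",
    "FTFL": "Fault Type & Fault Location reporting channel",
    "EXP":  "Experiment",
    "GCC":  "General Communication Channel",
}

def classify(field: str) -> str:
    # The field type itself identifies what the bytes carry: PM/TCM identify
    # monitoring overhead, while FTFL identifies fault alarm information.
    if field in ("PM", "TCM"):
        return "monitoring overhead"
    if field == "FTFL":
        return "fault alarm information"
    return ODUK_OVERHEAD_FIELDS.get(field, "unknown")

assert classify("FTFL") == "fault alarm information"
assert classify("TCM") == "monitoring overhead"
```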
Claim 17

With respect to claim 17, all limitations of claim 16 are taught by Sareen and Ding, except wherein claim 17 additionally requires monitoring overhead information and identification information (and/or second identification information identifying the overhead information). However, within analogous art, Hironaka teaches monitoring overhead including Path Monitoring (PM) and Tandem Connection Monitoring (TCM), which are overhead bytes used to monitor an ODUk service signal ("overhead information is for monitoring a service signal of a service carried by the apparatus or the target communication link"): "[0086] Abbreviation terms indicated in FIG. 4 mean the followings: [0087] PM: Path Monitoring ... [0088] TCM: Tandem Connection Monitoring ... [0089] RES: Reserved for future international standardization ... [0090] ACT: Activation/deactivation control channel [0091] FTFL: Fault Type & Fault Location reporting channel [0092] EXP: Experiment ... [0093] GCC: General Communication Channel ..." [Hironaka, FIG. 4, ¶¶ [0086]-[0093]].

Accordingly, it would have been obvious to a POSITA to configure the claim 17 computer program product to include monitoring overhead information and identification information as taught by Hironaka. Firmware/software in OTN devices is typically responsible for enabling, populating, and interpreting overhead monitoring fields and identification fields, and Hironaka provides standardized constructs that a POSITA would implement to ensure consistent behavior and interoperability. A POSITA would be motivated to implement Hironaka's constructs in program instructions to obtain predictable benefits: correct identification of monitoring versus alarm information, improved service assurance, and improved interoperability with standard OTN tooling and network management systems. Under KSR, this is a routine enhancement using known techniques that yields expected results. Therefore, claim 17 would have been obvious.

Claims 5, 6, 7, 12, 13, 14, 19, and 20 are rejected under 35 U.S.C. § 103 as being unpatentable over Sareen et al. in view of Ding et al. and Hironaka et al., and further in view of Noorhosseini et al. (US6707795B1), Valadarsky et al. (US20020111755A1) ("Valadarsky '755"), and Valadarsky et al. ("Valadarsky '661").

Claim 5

With respect to claim 5, all limitations of claim 4 are taught by Sareen and Ding, except wherein claim 5 additionally requires obtaining a target service classification tree that indicates service levels and upper/lower associations among services, and detecting a first faulty service based on that tree. Sareen does not expressly teach a hierarchical service dependency model (tree). However, in analogous network modeling art, Noorhosseini teaches modeling each network element as a hierarchy of server-client TTPs where a lower-layer TTP is served by a higher-layer TTP, which corresponds to a service-level hierarchy and upper/lower associations (i.e., a classification tree/hierarchy), corresponding to "obtaining a target service classification tree, wherein the target service classification tree indicates a service level of each of first target services and an upper-level and lower-level association relationship among second target services":

"... a network as a hierarchy of TTPs (transport termination points) and creates several layers of connected TTPs. In the new correlation process, the network of connected TTPs is traversed once a root-cause alarm is raised and a problem object is created.
A traversed TTP keeps its association with the problem object. In this manner a symptomatic alarm raised on the TTP is correlated with the associated problem(s) without the need for a repeated search of the network. More specifically, in the new network modelling scheme provided by the invention, a network element is modelled as a hierarchy of virtual server-client TTPs. A TTP at a lower layer is served by a TTP at a higher layer. The whole network is then modelled by establishing connections between these TTPs. Since the TTPs are arranged in a hierarchy, the whole network will conform to a hierarchy. The connectivity of TTPs at the highest layer models the connectivity of network elements themselves. The connectivity of TTPs at lower layers represents the network at various topology/termination layers (e.g. optical, section, line, path, etc.). A network at a lower layer is served by a network at the higher layer. The alarms in the new model are considered to be raised on TTPs and not on network elements. The correlation process is devised in harmony with the network modelling scheme. The correlation process determines a new alarm to be either a root-cause alarm or a symptomatic alarm. If it is a root-cause alarm then it is associated with a problem object with a generic attribute called correlation state. The correlation state of the problem is used to correlate symptomatic alarms to the problem. Once a problem (and hence the correlation state) is created on a TTP at a certain layer, the directly connected TTPs at the same layer and all the client TTPs at the lower layers served by the problem's TTP are traversed in search of creatable symptomatic alarms. On each traversal the symptomatic alarms on the TTPs are examined by an inference engine and added to the problem if creatable. More generally, all the alarms on TTPs which satisfy certain predetermined criteria are considered for correlation. Any TTP traversed will keep its association with the correlation state. Therefore, when a symptomatic alarm arrives later on that TTP, it is readily examined against any associated correlation state(s). This method of correlation alleviates the need for searching the network every time an alarm arrives. It greatly reduces the processing time of correlation since the majority of alarms are of symptomatic types and traversing of the network is only performed upon arrival of a root-cause alarm. ..." [Noorhosseini, cols. 1-2].

Further, Noorhosseini teaches root-cause alarming and then traversing the connected TTP dependency network, which corresponds to detecting a faulty service based on the hierarchical/associated model ("detecting, based on the target service classification tree, a first faulty service"):

"... The invention is composed of two elements, namely a network modelling scheme and a correlation process. The network modelling scheme models a set of network elements in a network as a hierarchy of TTPs (transport termination points) and creates several layers of connected TTPs. In the new correlation process, the network of connected TTPs is traversed once a root-cause alarm is raised and a problem object is created. A traversed TTP keeps its association with the problem object. In this manner a symptomatic alarm raised on the TTP is correlated with the associated problem(s) without the need for a repeated search of the network. ..." [Noorhosseini, cols. 1-2].
Additionally, Valadarsky ’755 teaches constructing a topology graph based on a table of network entities and rules, which corresponds to obtaining structured association relationships among services/entities used for correlation, “[0151] Construction of a topology graph based table of network entities and fault rules (this phase is done offline).” [Valadarsky ’755, ¶ [0151]]. Additionally, in the closely analogous alarm-correlation decision art, Valadarsky ’661 expressly teaches topology-based root-cause reasoning that selects a root-cause alarm from a candidate group and clusters alarms using network topology considerations, which further supports obtaining/using structured upper/lower association relationships and detecting a first faulty service/root cause based on that model, “…Through the expert system specific rules are defined. The preferred characteristics for these rules are outlined below. The rules serve for 2 purposes: Filter out alarms that do not need to be correlated or part of the correlation process. Find the root cause alarm from a candidate group of alarms. When an event occurs and identified as an alarm, that alarm is given an ID, which uniquely identifies the alarming object in the configuration database. The correlation filter filters out non-relevant alarms. This process uses pre-defined rules. The alarms are grouped according to time characteristics and clustered by using network topology considerations. Each cluster is investigated by the correlation engine that deducts the probable root cause or root causes for that group…” [Valadarsky ’661, col. 7-8]. Accordingly, it would have been obvious to a POSITA to extend Sareen in view of Ding and Hironaka, further in view of Noorhosseini and Valadarsky, to obtain a target service classification tree indicating service levels and upper/lower association relationships and to detect a first faulty service based on that model. Layered optical transport networks inherently involve hierarchical dependencies in which higher-layer services depend on lower-layer transport resources; as a result, a single physical fault can produce many symptomatic alarms across multiple services and layers. Once the system can generate and carry fault alarm information and monitoring overhead (Sareen/Ding/Hironaka), the next practical engineering step is to correlate those alarms using a dependency model to identify the most likely root cause and reduce alarm noise. A POSITA would be motivated to use Noorhosseini’s hierarchical network modeling (higher-layer entities serving lower-layer entities) as the basis for service levels and associations because it provides a predictable representation of dependencies in a tree-like hierarchy, and to use Valadarsky’s topology graph/correlation framework because it provides a predictable mechanism for structuring relationships among entities and applying correlation rules for root-cause identification. Under KSR, applying known correlation techniques (hierarchy plus topology-based correlation) to the known problem of alarm storms and root-cause ambiguity yields predictable improvements such as faster diagnosis, reduced alarm flooding, and improved automated restoration behavior. Therefore, claim 5 would have been obvious.
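As a concrete and purely illustrative rendering of the classification-tree concept discussed above, the sketch below models services with levels and upper/lower (parent/child) associations and walks the tree top-down to find a first faulty service. The node type, the level convention (smaller number = higher service level), the example service names, and the fault predicate are all assumptions, not language from the claims or references.

    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class ServiceNode:
        """One service in a hypothetical target service classification tree."""
        name: str
        level: int                                    # smaller number = higher service level
        children: list = field(default_factory=list)  # lower-level services this one serves

    def add_child(parent: ServiceNode, child: ServiceNode) -> None:
        parent.children.append(child)

    def first_faulty(root: ServiceNode, is_faulty) -> Optional[ServiceNode]:
        """Breadth-first, top-down walk; returns the highest-level faulty service."""
        queue = [root]
        while queue:
            node = queue.pop(0)
            if is_faulty(node):
                return node
            queue.extend(node.children)
        return None

    # Hypothetical example: a higher-level service carrying two lower-level clients.
    root = ServiceNode("service-A", 1)
    add_child(root, ServiceNode("service-A1", 2))
    add_child(root, ServiceNode("service-A2", 2))
    found = first_faulty(root, lambda s: s.name == "service-A2")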
Claim 6
With respect to claim 6, all limitations of claim 5 are taught by Sareen, Ding, Hironaka, Noorhosseini, Valadarsky ’755 and Valadarsky ’661, except that claim 6 additionally requires an ordered detection procedure: select a highest-level service first; detect in descending order through associated lower-level services; each time a faulty service is detected, save it; stop detecting lower-level services associated with the detected faulty service (suppression); then select the next-highest service and continue until completion. However, within analogous art, Noorhosseini’s hierarchy (higher layer serving lower layer) provides ordered service levels, making selection of the highest service level first a predictable root-cause isolation strategy, “selecting, from the target service classification tree, a third target service with a first highest service level as a first to-be-detected service”, “…a network as a hierarchy of TTPs (transport termination points) and creates several layers of connected TTPs. In the new correlation process, the network of connected TTPs is traversed once a root-cause alarm is raised and a problem object is created. A traversed TTP keeps its association with the problem object. In this manner a symptomatic alarm raised on the TTP is correlated with the associated problem(s) without the need for a repeated search of the network. More specifically, in the new network modelling scheme provided by the invention, a network element is modelled as a hierarchy of virtual server-client TTPs. A TTP at a lower layer is served by a TTP at a higher layer. The whole network is then modelled by establishing connections between these TTPs. Since the TTPs are arranged in a hierarchy, the whole network will conform to a hierarchy…” [Noorhosseini, col. 1-2]. Further, Noorhosseini teaches that once a root-cause alarm is raised, the network of connected TTPs is traversed, consistent with descending traversal/detection of associated lower-level services dependent from the selected higher-level service, “sequentially detecting, in a descending order of service levels, the first to-be-detected service and all fourth target services that have a first lower-level association relationship with the first to-be-detected service”, “…The correlation process is devised in harmony with the network modelling scheme. The correlation process determines a new alarm to be either a root-cause alarm or a symptomatic alarm. If it is a root-cause alarm then it is associated with a problem object with a generic attribute called correlation state. The correlation state of the problem is used to correlate symptomatic alarms to the problem. Once a problem (and hence the correlation state) is created on a TTP at a certain layer, the directly connected TTPs at the same layer and all the client TTPs at the lower layers served by the problem’s TTP are traversed in search of creatable symptomatic alarms. On each traversal the symptomatic alarms on the TTPs are examined by an inference engine and added to the problem if creatable. More generally, all the alarms on TTPs which satisfy certain predetermined criteria are considered for correlation. Any TTP traversed will keep its association with the correlation state. Therefore, when a symptomatic alarm arrives later on that TTP, it is readily examined against any associated correlation state(s). This method of correlation alleviates the need for searching the network every time an alarm arrives.
It greatly reduces the processing time of correlation since the majority of alarms are of symptomatic types and traversing of the network is only performed upon arrival of a root-cause alarm…” [Noorhosseini, col. 1-2]. Noorhosseini does not expressly teach alarm history storage. However, in analogous art, Valadarsky ’755 teaches alarm history storage, which corresponds to saving detected faulty services/alarms as the algorithm identifies them, “each time a second faulty service is detected, saving the second faulty service.”, “…[0155] The input of the algorithm includes: [0156] Network topology information…” [Valadarsky ’755, ¶¶ [0155]-[0156]]. Within analogous art, Valadarsky ’755 teaches an iterative correlation decision algorithm implemented in phases (construction of topology graph; collection; clustering; execution of decision algorithm), which supports iterative processing across remaining services/candidates after completing a first selected service’s related cluster, “selecting, from the target service classification tree after detecting all the fourth target services, a next target service with a second highest service level as a second to-be-detected service to continue detection until detecting a last target service with a third highest service level”, “…[0150] The correlation decision algorithm is preferably implemented in four phases…[0154] Execution of the decision algorithm on a cluster of alarms…” [Valadarsky ’755, ¶ [0150], ¶ [0154]]. Further, Valadarsky ’755 expressly teaches a correlation filter that filters out low-level alarms when a higher-level one is still active, which corresponds to stopping further detection/processing of associated lower-level services after a higher-level/root fault is identified, “stopping detecting all fifth target services that have a second lower-level association relationship with the second faulty service”, “…[0153] Clustering of the alarms by network topology and by rules…” [Valadarsky ’755, ¶ [0153]]. Valadarsky ’661 likewise expressly teaches maintaining and displaying correlation history (“history” of correlated alarms/decisions), which further supports saving detected faulty services/alarms each time they are detected, “…Manually subtract alarms from a derived/parent alarm. Manually clearing out a derived/parent alarm results in all children being orphaned. 6. Access permission to any action (including view) can be secured. 7. Manually undo derived alarms. 8. Manually undo parent/child correlation. 9. Be able to confirm recommended correlation. 10. Be able to display by criteria any combinations of alarms, which are parent, derived, and children. 11. Display the following alarm indications: parent, derived, child, orphan, toggling, repeated, orphan, recommended parent, recommended derived and recommended child. 12. Display the correlation history with all the relevant information. 13. When opening a trouble ticket for a parent alarm all the children information can also be referenced in the trouble ticket system. 14. Provide on-line and context sensitive help. Preferred External Configuration Interface The correlation system needs to query preferred external sources in order to do advanced correlation. This means mainly to get configuration and topology information but also for example to determine if the alarms are service affecting or not. The nature of this process is real-time and near real-time.
This process has a human-like intelligence that can discover knowledge from external sources to determine a root-cause alarm. The expert system will run appropriate rules that get this information. In order for the results to be as accurate as possible, the information about reconfiguration and network testing needs to be updated dynamically. Following are the preferred characteristics of this interface…” [Valadarsky ’661, col. 9-10]. Additionally, Valadarsky ’661 teaches using graph traversal to find the root cause and the alarms that belong to that root cause, which further supports the claimed descending detection of associated lower-level services from a selected higher-level service in a dependency graph, “…TRS can identify incoming alarms that were generated by maintenance activities. TRS is independent of a specific type of network and it can be used on any type of network by adding a TRS rule-set. TRS can support manufacturer-dependent anomalies. For example, there can be, for the same type of network problem and equipment class, a different rule for equipment manufacturer X and a different rule for equipment manufacturer Y. Both rules can coexist on the system simultaneously. TRS can change its decision according to new alarms that have arrived after the decision was first made. TRS uses graph traverse in order to find the root cause and in order to find all the alarms that belong to a root cause. TRS divides (clusters) the stream of alarms into groups according to the time they arrive and the way the group acts. (The alarm has a statistic way of arrival, and TRS uses this information in order to find the right groups.) TRS uses topologic distance between the alarms in order to make the groups. TRS updates its network topology data on-line—there is no disruption to system operation. TRS gives the user a friendly interface in order to define the rules. (Most correlation systems have much more complex rule definition.) TRS allows the users to change the rules while TRS is still running. TRS automatically adjusts to changing network topology. TRS is designed as a part of a network control system and can be connected to the event system directly. TRS imitates the flow of alarms as it is in the network, which is why anyone with good knowledge of the alarm flow in a given network type can generate the rules for that network type. TRS issues derived alarms to describe root causes when no incoming alarm accurately does so. TRS results are preferably sent to the standard Netrac fault management GUI. Operator doesn’t need to look at a separate correlation screen. TRS stores its results in a history database. Users can review the decisions it made, and the alarm groups it correlated, long after the faults that generated those decisions and alarm groups have been resolved.” [Valadarsky ’661, col. 1-2]. Valadarsky ’661 further expressly teaches a correlation filter that filters out non-relevant alarms, including filtering out low-level alarms when a higher-level alarm is still active, which directly supports the claimed stopping/suppression of detecting lower-level associated services after a faulty service has been identified, “…FIG. 1 represents a schematic interaction diagram. It does not represent how the modules may be implemented. It describes how alarms are processed, centralized around alarm correlation. The schema suggests that a correlation system can comprise a real-time component and a near real-time component. The real-time one can perform simple correlation but also alarm filtering.
The near real-time component can perform correlation rules that are defined against external information. Alarm notification: Identify an alarm condition by some algorithm applied to the condition. Alarm modification: Apply changes to the alarm when the alarm status changes and when new occurrences of existing alarms come in. Alarm filtering: Select a specific alarm that matches a set of conditions and apply a rule to it. Correlation: Filter out non-relevant alarms from correlation aspects, e.g. filter out low-level alarms when a higher-level one is still active. Alarm correlation: Identify alarms that may be correlated without obtaining data from external sources. In parallel to that, build groups of alarms to apply topological correlation. Perform the rough and fine reasoning algorithms. Alarm history: Store history information of alarms. External configuration: Give information about topology and states of a network and its component NEs, to facilitate advanced alarm correlation. It provides access to information concerning identified root causes. Rules: The user workstation to facilitate system and rule administration. Client: The user workstation to facilitate alarm viewing and correcting…” [Valadarsky ’661, col. 7-8]. Valadarsky ’661 additionally teaches alarm suppression to prevent raising redundant alarms (“single alarm suppression”), which further supports stopping detection of dependent lower-level services once the higher-level/root service is detected faulty, “…Data from one field to another can be copied and pasted. Provide on-line and context sensitive help. Preferred Expert System The expert system acts upon the rules. It preferably processes the incoming alarms and decides according to the rules if the alarms are correlated. Following are the system’s preferred characteristics: Can process single or groups of alarms. Process rules according to their start and end dates. Can count the number of related alarms. Can compute percentage. Can measure the time difference between alarms. Compares alarm data to rule data. Single alarm suppression, i.e. an alarm will not be raised if it is still up when another alarm like that comes in. Accept new and changed rules without downtime. Support a large number of rules. Can archive rule sets to allow reversion to previous sets of rules. Runs on UNIX and NT. Preferred Correlation Engine and Interface The correlation engine preferably extends the expert system by using the expert system results to further investigate the alarms and find the root cause. It groups the alarms under a group leader. Then finds the candidate group that contains the root cause. The final step is to apply rules to find the root cause…” [Valadarsky ’661, col. 9-10]. Accordingly, it would have been obvious to a POSITA to implement claim 6’s ordered hierarchical detection workflow (select the highest service level first, sequentially detect associated lower-level services in descending order, save detected faulty services, and stop processing lower-level associated services once a fault is detected). In hierarchical fault correlation systems, starting from the highest level is a well-known optimization because higher-level/root-cause conditions explain multiple dependent symptoms; evaluating higher-level candidates first reduces redundant processing and accelerates convergence to the root cause. Noorhosseini teaches traversal from a problem entity to connected entities and client entities at lower layers, which naturally supports descending detection through associated lower-level services in a dependency model. A POSITA would further be motivated by Valadarsky’s teachings of correlation history (storing correlation decisions/results) and correlation filtering/suppression actions (including filtering out alarms and other actions) to implement the “save” and “stop” aspects of the workflow. Saving detected faults/decisions provides traceability and stabilizes subsequent correlation decisions; filtering/suppressing lower-level symptom processing prevents alarm storms and prevents duplicate actions based on symptoms rather than causes. Under KSR, combining known traversal with known history and filtering techniques is a predictable use of prior art elements according to their established functions, yielding expected results in scalability and alarm reduction. Therefore, claim 6 would have been obvious.
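The claim 6 workflow just described can be pictured with a short illustrative sketch, building on the hypothetical ServiceNode type above; the traversal order, the saved-fault list, and the suppression step are assumptions for exposition and are not taken from the claims or the references.

    def ordered_detection(top_services, is_faulty):
        """Illustrative claim 6-style workflow: highest service level first,
        descend through lower-level services, save each detected fault, and
        suppress detection below a detected fault."""
        saved_faults = []
        # Select services in order of service level (smaller number = higher level).
        for selected in sorted(top_services, key=lambda s: s.level):
            stack = [selected]
            while stack:
                node = stack.pop()
                if is_faulty(node):
                    saved_faults.append(node)   # save each detected faulty service
                    continue                    # suppression: skip its lower-level services
                stack.extend(node.children)     # descend to associated lower-level services
        return saved_faults

Skipping the children of a detected faulty service is the sketch's stand-in for the claimed "stop detecting" step: once a higher-level fault explains the branch, its dependent lower-level services are not examined.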
Claim 7
With respect to claim 7, all limitations of claim 5 are taught by Sareen, Ding, Hironaka, Noorhosseini, Valadarsky ’755 and Valadarsky ’661, except that claim 7 additionally requires that the fault alarm information comprises fault indication information indicating the third target service (i.e., identifying the affected service). However, within analogous art, Sareen teaches carrying failure-related identifiers in FTFL such as NodeID and Link ID (point-of-failure identifiers), which identify the affected service/link, “fault indication information indicating the third target service.”, “[0032] The fast mesh restoration process 60 includes, at intermediate node(s), receiving information in the overhead; parsing the information and passing it to the control plane; acting on the information immediately or after a hold-off period; and optionally, generating and forwarding a RELEASE message (step 68). That is, each of the intermediate node(s) can receive the information in the overhead and pass this information to the control plane to act. The control plane can run a timer for the hold-off period (e.g., 50 ms, etc.) to see if the originating node has acted before performing any action. For backward compatibility, each node, on receipt of failure information in FTFL, could initiate a RELEASE message to the neighboring node to RELEASE the CALL, in case other nodes do not support reading FTFL bytes for release information (because it may be on legacy software). If a node supports reading the FTFL information, the legacy RELEASE message may get ignored as a call object may already be deleted because of the information received in the FTFL. Thus, the fast mesh restoration process 60 is fully backward compatible with legacy nodes in the path…” [Sareen, ¶ [0032]]. Hironaka likewise teaches inserting link identification information (link ID) into overhead, “…[0096] In the present embodiment, as will be described later, the source nodes #1 and #8 insert link identification information (link ID) of the user traffic into the RES field. Thus, each of the end nodes #4 and #9 is available to determine whether or not the user traffic is allowed to transfer based on the link ID inserted in the RES field…” [Hironaka, ¶ [0096]].
Accordingly, it would have been obvious to a POSITA to include, within the fault alarm information, fault indication information indicating the target service, because hierarchical dependency-based diagnosis requires unambiguous association between an alarm payload and the affected service/link/entity in the classification model. Without an identifier, the system cannot reliably attach the alarm to the correct node in the dependency tree, cannot traverse the correct branch, and cannot safely perform suppression or restoration actions tied to that service. A POSITA would be motivated to embed identifiers such as a link ID or service identifier in overhead because this is a conventional OTN practice and yields predictable improvements: clearer alarm interpretation, reduced false correlation, and safer automated actions. Hironaka’s teaching of inserting link identification information into overhead so that receiving nodes can determine whether and how to act based on the link ID exemplifies the conventional use of identifiers in overhead. Under KSR, adding an identifier to overhead-carried alarm information is a routine modification using known techniques that produces expected benefits. Therefore, claim 7 would have been obvious.
Claim 12
With respect to claim 12, all limitations of claim 11 are taught by Sareen and Ding, except that claim 12 additionally requires obtaining a target service classification tree (indicating service levels and upper-level/lower-level association relationships among services) and detecting a first faulty service based on the target service classification tree. Sareen and Ding do not expressly teach a hierarchical service-dependency model (tree/graph) defining service levels and upper/lower associations for root-cause identification across services. However, in analogous alarm-correlation and network modelling art, Noorhosseini et al. teaches modelling a network as a hierarchy of server-client entities (TTPs) and traversing the connected hierarchy upon a root-cause alarm to identify and correlate faults, “…The invention is composed of two elements, namely a network modelling scheme and a correlation process. The network modelling scheme models a set of network elements in a network as a hierarchy of TTPs (transport termination points) and creates several layers of connected TTPs. In the new correlation process, the network of connected TTPs is traversed once a root-cause alarm is raised and a problem object is created. A traversed TTP keeps its association with the problem object. In this manner a symptomatic alarm raised on the TTP is correlated with the associated problem(s) without the need for a repeated search of the network. More specifically, in the new network modelling scheme provided by the invention, a network element is modelled as a hierarchy of virtual server-client TTPs. A TTP at a lower layer is served by a TTP at a higher layer. The whole network is then modelled by establishing connections between these TTPs…” [Noorhosseini, col. 1-2]. Noorhosseini’s hierarchy of higher-layer TTPs serving lower-layer TTPs corresponds to the claimed target service classification tree (service levels and upper/lower association relationships). Noorhosseini’s traversal of the connected hierarchy once a root-cause alarm is raised corresponds to detecting a first faulty service based on that classification structure (i.e., identify/select the root-cause service/entity and correlate dependent symptomatic alarms).
Additionally, within analogous alarm-correlation decision art, Valadarsky ’755 teaches explicitly constructing a topology graph based table of network entities and fault rules to support correlation and root-cause identification, which corresponds to obtaining structured association relationships among services/entities used for detecting faulty services, “Construction of a topology graph based table of network entities and fault rules (this phase is done offline).” [Valadarsky ’755, ¶ [0151]]. Additionally, Valadarsky ’661 teaches an alarm-correlation architecture that clusters alarms using network topology considerations and finds a root-cause alarm from a candidate group, which further supports the apparatus obtaining/using a structured hierarchical association model to detect a faulty service/root cause, “Find the root cause alarm from a candidate group of alarms. When an event occurs and identified as an alarm, that alarm is given an ID, which uniquely identifies the alarming object in the configuration database. The correlation filter filters out non-relevant alarms. This process uses pre-defined rules. The alarms are grouped according to time characteristics and clustered by using network topology considerations. Each cluster is investigated by the correlation engine that deducts the probable root cause or root causes for that group.” [Valadarsky ’661, col. 7-8]. Accordingly, it would have been obvious to a POSITA to implement, in apparatus form, the acquisition of a hierarchical service classification model and detection of a faulty service based on that model. Carrier-grade transport devices increasingly embed correlation intelligence at the equipment level to reduce alarm storms, accelerate root-cause localization, and support rapid restoration. Once the apparatus processes fault alarm information and monitoring overhead (Sareen/Ding/Hironaka), incorporating hierarchical diagnosis logic based on Noorhosseini’s hierarchy and Valadarsky’s correlation framework is a predictable engineering step to convert raw alarms into actionable root-cause determinations. A POSITA would be motivated by predictable design incentives: improved scalability under high alarm volume, faster restoration triggers, reduced operator burden, and improved SLA compliance. Under KSR, implementing known correlation methods in known equipment components is a predictable application yielding expected results. Therefore, claim 12 would have been obvious.
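To make the clustering-and-root-cause idea above concrete, here is a minimal, purely illustrative sketch: alarms are grouped into time buckets and, within each bucket, an alarm whose serving (higher-level) entity is also alarming is treated as symptomatic, so only entities with no alarming ancestor remain as root-cause candidates. The data shapes, the time-bucket heuristic, and every name are assumptions for exposition, not the algorithm of either Valadarsky reference.

    from collections import defaultdict

    def cluster_and_pick_roots(alarms, parent_of, window=5.0):
        """alarms: list of (timestamp, entity_id) pairs; parent_of: dict mapping
        an entity to the higher-level entity that serves it (absent/None at top)."""
        # 1. Group alarms into coarse time buckets (stand-in for time clustering).
        buckets = defaultdict(list)
        for ts, entity in alarms:
            buckets[int(ts // window)].append(entity)

        # 2. Within each bucket, keep only entities with no alarming ancestor:
        #    an active higher-level alarm explains its dependent lower-level ones.
        results = []
        for entities in buckets.values():
            active = set(entities)
            roots = [e for e in sorted(active)
                     if not _has_alarming_ancestor(e, active, parent_of)]
            results.append((roots, entities))
        return results

    def _has_alarming_ancestor(entity, active, parent_of):
        parent = parent_of.get(entity)
        while parent is not None:
            if parent in active:
                return True
            parent = parent_of.get(parent)
        return False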
Claim 13
With respect to claim 13, all limitations of claim 12 are taught by Sareen, Ding, Hironaka, Noorhosseini, Valadarsky ’755 and Valadarsky ’661, except that claim 13 additionally requires an ordered hierarchical detection procedure comprising: (i) selecting, from the target service classification tree, a third target service with a first highest service level as a first to-be-detected service; (ii) sequentially detecting, in a descending order of service levels, the first to-be-detected service and all fourth target services that have a first lower-level association relationship with the first to-be-detected service; (iii) each time a second faulty service is detected, saving the second faulty service; (iv) stopping detecting all fifth target services that have a second lower-level association relationship with the second faulty service (suppression); and (v) selecting, from the target service classification tree after detecting all the fourth target services, a next target service with a second highest service level as a next to-be-detected service and continuing until detecting a last target service with a third highest service level. Sareen, Ding, and Hironaka do not expressly teach the claimed ordered root-cause selection/suppression workflow over a hierarchical service classification tree. However, in analogous hierarchical alarm correlation art, Noorhosseini teaches hierarchical service dependency (higher layer serving lower layer) and correlation by traversing the connected hierarchy after a root-cause alarm is raised and a problem object/correlation state is created. Noorhosseini’s traversal from the problem TTP to its client (lower-layer) TTPs corresponds to sequentially detecting associated lower-level services in a descending order of service levels, “…The invention is composed of two elements, namely a network modelling scheme and a correlation process. The network modelling scheme models a set of network elements in a network as a hierarchy of TTPs (transport termination points) and creates several layers of connected TTPs. In the new correlation process, the network of connected TTPs is traversed once a root-cause alarm is raised and a problem object is created. A traversed TTP keeps its association with the problem object. In this manner a symptomatic alarm raised on the TTP is correlated with the associated problem(s) without the need for a repeated search of the network. More specifically, in the new network modelling scheme provided by the invention, a network element is modelled as a hierarchy of virtual server-client TTPs. A TTP at a lower layer is served by a TTP at a higher layer. The whole network is then modelled by establishing connections between these TTPs…” [Noorhosseini, col. 1-2]. Noorhosseini further teaches that “A traversed TTP keeps its association with the problem object” and that traversed entities maintain association with the correlation state, which corresponds to saving detected faulty services (and their associated correlation state) as the procedure progresses. [Noorhosseini, col. 1-2]. Noorhosseini does not expressly teach explicit alarm-history storage, nor explicit filtering/suppression rules that stop processing lower-level symptomatic items once a higher-level root cause is identified. However, in analogous alarm correlation art, Valadarsky ’755 teaches storing correlation decisions/results for later review, which corresponds to saving detected faulty services/results, “TRS stores its results in a history database.
Users can review the decisions it made, and the alarm groups it correlated, long after the faults that generated those decisions and alarm groups have been resolved.” [Valadarsky ’755, ¶ [0032]]. Valadarsky ’755 further teaches supplementary rules for “filtering out” alarms (and deferring alarms), which corresponds to stopping further detection/processing of lower-level services associated with an already-detected faulty service to suppress dependent symptomatic alarms and reduce alarm flooding, “Supplementary rules used to define actions in the system, such as filtering out of alarms, creating trouble tickets, automatic defer of alarms etc.” [Valadarsky ’755, ¶ [0168]]. Additionally, Valadarsky ’755 teaches a structured decision workflow including constructing a topology graph, clustering alarms by topology/rules, and executing a decision algorithm on clusters, which supports iterating through remaining candidate services/clusters after completing a first selected service’s associated lower-level detections, “The correlation decision algorithm is preferably implemented in four phases: 1. Construction of a topology graph based table of network entities and fault rules (this phase is done offline). 2. Collection of the incoming alarms and dividing them into groups by time. 3. Clustering of the alarms by network topology and by rules. 4. Execution of the decision algorithm on a cluster of alarms.” [Valadarsky ’755, ¶¶ [0150]-[0154]]. Valadarsky ’661 likewise teaches explicit alarm filtering/suppression actions performed by the correlation filter, including filtering out low-level alarms when a higher-level alarm is still active, which corresponds to the claimed stopping/suppression of lower-level associated services, “The correlation filter filters out non-relevant alarms … filter out low-level alarms when a higher level one is still active.” [Valadarsky ’661, col. 7-8]. Valadarsky ’661 also teaches maintaining and presenting correlation history, which corresponds to saving detected faulty services/results for later review, “Display the correlation history with all the relevant information.” [Valadarsky ’661, col. 9-10]. Valadarsky ’661 further teaches suppression of redundant alarms (“single alarm suppression”), reinforcing the claimed stop-detecting/suppression behavior once a faulty service has been detected, “Single alarm suppression, i.e. an alarm will not be raised if it is still up when another alarm like that comes in.” [Valadarsky ’661, col. 9-10]. Accordingly, it would have been obvious to a POSITA to implement claim 13’s ordered hierarchy-based selection, descending detection, saving, and suppression workflow in the apparatus context. An apparatus that performs hierarchical correlation must execute a structured workflow that prioritizes likely root causes, records decisions for traceability, and suppresses redundant symptom processing to avoid alarm floods and duplicate actions. Noorhosseini provides the hierarchical traversal basis (moving from a problem entity to served lower-layer entities), and Valadarsky provides the history and filtering concepts needed to store correlation outcomes and suppress lower-level items once a higher-level condition is identified. A POSITA would be motivated to implement these steps because the benefits are predictable: fewer alarms propagated to operators, more stable automated actions, and faster root-cause identification. Under KSR, combining known traversal with known history and filtering techniques is a routine design choice and yields expected results.
Therefore, claim 13 would have been obvious.
Claim 14
With respect to claim 14, all limitations of claim 12 are taught by Sareen, Ding, Hironaka, Noorhosseini, Valadarsky ’755 and Valadarsky ’661, except that claim 14 additionally requires fault indication information indicating the third target service. Sareen does not expressly use the “third target service” naming, but teaches failure identifiers (NodeID/Link ID) carried in overhead that identify the affected service/link, “…[0032] The fast mesh restoration process 60 includes, at intermediate node(s), receiving information in the overhead; parsing the information and passing it to the control plane; acting on the information immediately or after a hold-off period; and optionally, generating and forwarding a RELEASE message (step 68). That is, each of the intermediate node(s) can receive the information in the overhead and pass this information to the control plane to act. The control plane can run a timer for the hold-off period (e.g., 50 ms, etc.) to see if the originating node has acted before performing any action. For backward compatibility, each node, on receipt of failure information in FTFL, could initiate a RELEASE message to the neighboring node to RELEASE the CALL, in case other nodes do not support reading FTFL bytes for release information (because it may be on legacy software). If a node supports reading the FTFL information, the legacy RELEASE message may get ignored as a call object may already be deleted because of the information received in the FTFL. Thus, the fast mesh restoration process 60 is fully backward compatible with legacy nodes in the path…” [Sareen, ¶ [0032]]. Sareen does not expressly teach inserting a link ID into overhead. However, in analogous art, Hironaka teaches inserting a link ID into overhead, “…[0096] In the present embodiment, as will be described later, the source nodes #1 and #8 insert link identification information (link ID) of the user traffic into the RES field. Thus, each of the end nodes #4 and #9 is available to determine whether or not the user traffic is allowed to transfer based on the link ID inserted in the RES field…” [Hironaka, ¶ [0096]]. Accordingly, it would have been obvious to a POSITA to include fault indication information indicating the relevant target service when the fault relates to a particular target service in the hierarchy. Hierarchical correlation and suppression depend on binding the alarm to the correct service node so that suppression and restoration actions are applied to the correct branch; otherwise, the system risks suppressing unrelated alarms or acting on the wrong service. A POSITA would be motivated to embed link/service identification information into the overhead-carried fault alarm payload because this is a conventional solution to the known problem of ambiguity. Hironaka’s overhead link ID insertion demonstrates that embedding identifiers in overhead is an established technique that yields predictable results: receivers can determine permitted traffic/actions and associate overhead information with the correct link/service. Under KSR, adopting that technique in the combined system is a routine modification producing expected benefits. Therefore, claim 14 would have been obvious.
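As a purely illustrative aside on the identifier-in-overhead point, the sketch below packs a fault-type code together with link and service identifiers into a small binary record so a receiver can bind the alarm to the right service node. The byte layout, field widths, and names are assumptions and do not reproduce the FTFL format defined by ITU-T G.709.

    import struct

    # Hypothetical record layout: 1-byte fault type, 2-byte link ID, 4-byte service ID.
    _ALARM_FORMAT = ">BHI"   # big-endian, 7 bytes total

    def pack_fault_alarm(fault_type: int, link_id: int, service_id: int) -> bytes:
        """Pack an illustrative overhead-carried fault alarm record."""
        return struct.pack(_ALARM_FORMAT, fault_type & 0xFF, link_id & 0xFFFF,
                           service_id & 0xFFFFFFFF)

    def unpack_fault_alarm(payload: bytes) -> dict:
        """Recover the fault type and the identifiers naming the affected link/service."""
        fault_type, link_id, service_id = struct.unpack(_ALARM_FORMAT, payload)
        return {"fault_type": fault_type, "link_id": link_id, "service_id": service_id}

    # Example round trip: the receiver recovers which link/service the fault concerns.
    record = pack_fault_alarm(fault_type=0x01, link_id=42, service_id=7)
    assert unpack_fault_alarm(record)["link_id"] == 42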
Claim 19
With respect to claim 19, all limitations of claim 18 are taught by Sareen, Ding, Hironaka, Noorhosseini, Valadarsky ’755 and Valadarsky ’661, except that claim 19 additionally requires obtaining a target service classification tree (indicating service levels and upper-level/lower-level association relationships among services) and detecting a first faulty service based on the target service classification tree. Sareen and Ding do not expressly teach a hierarchical service-dependency model (tree/graph) defining service levels and upper/lower associations for root-cause identification across services. However, in analogous hierarchical alarm correlation art, Noorhosseini teaches hierarchical service dependency (higher layer serving lower layer) and traversing the connected hierarchy once a root-cause alarm is raised to identify and correlate faults, “…The invention is composed of two elements, namely a network modelling scheme and a correlation process. The network modelling scheme models a set of network elements in a network as a hierarchy of TTPs (transport termination points) and creates several layers of connected TTPs. In the new correlation process, the network of connected TTPs is traversed once a root-cause alarm is raised and a problem object is created. A traversed TTP keeps its association with the problem object. In this manner a symptomatic alarm raised on the TTP is correlated with the associated problem(s) without the need for a repeated search of the network. More specifically, in the new network modelling scheme provided by the invention, a network element is modelled as a hierarchy of virtual server-client TTPs. A TTP at a lower layer is served by a TTP at a higher layer. The whole network is then modelled by establishing connections between these TTPs.” [Noorhosseini, col. 1-2]. Noorhosseini’s hierarchy corresponds to the claimed target service classification tree (service levels and upper/lower association relationships), and Noorhosseini’s traversal upon a root-cause alarm corresponds to detecting a first faulty service based on that classification structure. [Noorhosseini, col. 1-2]. Additionally, Valadarsky ’755 teaches explicitly constructing a topology graph based table of network entities and fault rules to support correlation and root-cause identification, which corresponds to obtaining structured association relationships among services/entities used for detecting faulty services, “Construction of a topology graph based table of network entities and fault rules (this phase is done offline).” [Valadarsky ’755, ¶ [0151]]. Additionally, Valadarsky ’661 teaches clustering alarms using network topology considerations and selecting a root-cause alarm from a candidate group, which further supports implementing, as program instructions, a structured dependency model and first-faulty-service/root-cause selection, “Find the root cause alarm from a candidate group of alarms. When an event occurs and identified as an alarm, that alarm is given an ID, which uniquely identifies the alarming object in the configuration database. The correlation filter filters out non-relevant alarms. This process uses pre-defined rules. The alarms are grouped according to time characteristics and clustered by using network topology considerations. Each cluster is investigated by the correlation engine that deducts the probable root cause or root causes for that group.” [Valadarsky ’661, col. 7-8].
Accordingly, it would have been obvious to a POSITA to implement, as a computer program product, instructions to obtain a hierarchical service classification tree and detect a faulty service based on that model. Software/firmware is the conventional place to represent dependency models, topology information, and correlation rules that may change with network configuration. Implementing the hierarchy and correlation logic in software enables updates and scalability across deployments while using the same underlying hardware platform. A POSITA would be motivated to encode Noorhosseini’s hierarchical relationships and traversal and Valadarsky’s topology-graph correlation approach as instructions because this yields predictable benefits: scalable root-cause identification across service layers, reduced alarm storms, and faster restoration decisions based on overhead fault information already provided by Sareen/Ding/Hironaka. Under KSR, applying known correlation techniques in software to a known fault management problem yields predictable improvements. Therefore, claim 19 would have been obvious.
Claim 20
With respect to claim 20, all limitations of claim 18 are taught by Sareen, Ding, Hironaka, Noorhosseini, Valadarsky ’755 and Valadarsky ’661, except that claim 20 additionally requires an ordered hierarchical detection procedure over the target service classification tree comprising: (i) selecting, from the target service classification tree, a third target service with a first highest service level as a first to-be-detected service; (ii) sequentially detecting, in a descending order of service levels, the first to-be-detected service and all fourth target services that have a first lower-level association relationship with the first to-be-detected service; (iii) each time a second faulty service is detected, saving the second faulty service; (iv) stopping detecting all fifth target services that have a second lower-level association relationship with the second faulty service (suppression); and (v) selecting, from the target service classification tree after detecting all the fourth target services, a next target service with a second highest service level as a next to-be-detected service and continuing until detecting a last target service with a third highest service level. Sareen and Ding do not expressly teach this ordered root-cause selection and suppression workflow operating over a hierarchical service-dependency model. However, in analogous hierarchical alarm correlation art, Noorhosseini teaches modelling the network as a hierarchy of server-client TTPs and traversing the connected hierarchy once a root-cause alarm is raised and a problem object/correlation state is created. Noorhosseini’s traversal from the problem TTP to its client (lower-layer) TTPs corresponds to selecting a higher-level service/entity for detection and sequentially detecting associated lower-level services in a descending order of service levels, “…The invention is composed of two elements, namely a network modelling scheme and a correlation process. The network modelling scheme models a set of network elements in a network as a hierarchy of TTPs (transport termination points) and creates several layers of connected TTPs. In the new correlation process, the network of connected TTPs is traversed once a root-cause alarm is raised and a problem object is created. A traversed TTP keeps its association with the problem object.
In this manner a symptomatic alarm raised on the TTP is correlated with the associated problem(s) without the need for a repeated search of the network. More specifically, in the new network modelling scheme provided by the invention, a network element is modelled as a hierarchy of virtual server-client TTPs. A TTP at a lower layer is served by a TTP at a higher layer. The whole network is then modelled by establishing connections between these TTPs.” [Noorhosseini, col. 1-2]. Noorhosseini further teaches that traversed entities keep association with the problem object/correlation state (“A traversed TTP keeps its association with the problem object”), which corresponds to saving detected faulty services (and their associated correlation state) as the procedure progresses. [Noorhosseini, col. 1-2]. Noorhosseini does not expressly teach explicit alarm-history storage, nor explicit filtering/suppression rules that stop processing lower-level symptomatic items once a higher-level root cause is identified. However, in analogous alarm correlation art, Valadarsky ’755 teaches saving correlation decisions/results in a history database for later review, which corresponds to saving detected faulty services/results, “TRS stores its results in a history database. Users can review the decisions it made, and the alarm groups it correlated, long after the faults that generated those decisions and alarm groups have been resolved.” [Valadarsky ’755, ¶ [0032]]. Valadarsky ’755 also teaches supplementary rules for “filtering out” alarms (and deferring alarms), which corresponds to stopping further detection/processing of lower-level services associated with an already-detected faulty service (suppression) to reduce alarm flooding and focus on the root cause, “Supplementary rules used to define actions in the system, such as filtering out of alarms, creating trouble tickets, automatic defer of alarms etc.” [Valadarsky ’755, ¶ [0168]]. Additionally, Valadarsky ’755 teaches a structured decision workflow including constructing a topology graph, clustering alarms by topology/rules, and executing a decision algorithm on clusters, which supports iterating through remaining candidate services/clusters after completing a first selected service’s associated lower-level detections, “The correlation decision algorithm is preferably implemented in four phases: 1. Construction of a topology graph based table of network entities and fault rules (this phase is done offline). 2. Collection of the incoming alarms and dividing them into groups by time. 3. Clustering of the alarms by network topology and by rules. 4. Execution of the decision algorithm on a cluster of alarms.” [Valadarsky ’755, ¶¶ [0150]-[0154]]. Within analogous art, Valadarsky ’661 provides explicit support for the claimed ordered root-cause selection/suppression workflow by teaching (i) a correlation filter that filters out non-relevant/low-level alarms when a higher-level alarm is active, and (ii) alarm suppression to avoid redundant alarms, “The correlation filter filters out non-relevant alarms … filter out low-level alarms when a higher level one is still active.” [Valadarsky ’661, col. 7-8]. “Single alarm suppression, i.e. an alarm will not be raised if it is still up when another alarm like that comes in.” [Valadarsky ’661, col. 9-10].
Valadarsky ’661 also teaches using graph traversal to find the root cause and the alarms belonging to that root cause, supporting descending detection across associated services in the dependency graph/tree, “TRS uses graph traverse in order to find the root cause and … find all the alarms that belong to a root cause.” [Valadarsky ’661, col. 1-2]. Valadarsky ’661 further teaches maintaining/displaying correlation history, supporting the “saving” aspect of claim 20, “Display the correlation history with all the relevant information.” [Valadarsky ’661, col. 9-10]. Accordingly, it would have been obvious to a POSITA to implement claim 20’s ordered hierarchical selection, descending detection, saving, and suppression workflow in the computer program product context. Scalable alarm correlation in layered transport networks routinely uses structured hierarchical processing to prioritize root causes, record correlation decisions, and suppress redundant symptomatic processing. Selecting the highest service level first is a predictable optimization because higher-level faults often explain multiple dependent symptoms; performing lower-level detection only as needed reduces unnecessary work and reduces alarm flooding. A POSITA would be motivated to implement these steps by applying Noorhosseini’s hierarchical traversal teachings (traversing client entities at lower layers served by the problem entity) together with Valadarsky’s teachings of storing correlation results/history and filtering out alarms or taking suppression actions. Saving detected faulty services corresponds to maintaining correlation outcomes for traceability and stable decisions, while stopping further processing of lower-level associated services corresponds to filtering/suppression to prevent redundant symptom processing once a likely root cause is identified. Under KSR, combining known traversal with known history and filtering techniques is a predictable use of prior art elements yielding expected results in alarm reduction and root-cause identification speed. Therefore, claim 20 would have been obvious.
It is noted that any citations to specific pages, columns, lines, or figures in the prior art references and any interpretation of the references should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. See MPEP 2123.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Mohammed Abdelraheem, whose telephone number is (571) 272-0656. The examiner can normally be reached Monday–Thursday. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, David Payne, can be reached at (571) 272-3024. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.
/MOHAMMED ABDELRAHEEM/
Examiner, Art Unit 2635
/DAVID C PAYNE/
Supervisory Patent Examiner, Art Unit 2635

Prosecution Timeline

May 10, 2024: Application Filed
Feb 12, 2026: Non-Final Rejection, §103 and §112 (current)


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability
Median Time to Grant: 2y 9m
PTA Risk: Low

Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
