DETAILED ACTION
This communication is responsive to Application 18/689629 filed on 3/6/2024. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims:
Claims 1-14 are presented for examination.
Information Disclosure Statement
3. The Information Disclosure Statement (IDS) submitted on 3/6/2024 complies with the provisions of 37 CFR 1.97. Accordingly, the Examiner has considered the IDS.
Allowable Subject Matter
Claims 4-5 and 9-10 are objected to as being dependent upon a rejected base claim, but would be allowable if: 1) all outstanding rejections are overcome, and 2) the claims are rewritten in independent form including all of the limitations of the base claim and any intervening claims. The claims are objected to because the cited art of record fails to teach the limitations of claims 4-5 and 9-10.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 3-5, 6-10, and 13-14 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claims 3-5 recite the limitation “…a deterministic network….” This limitation is indefinite because it is unclear whether it refers back to the “deterministic network” of claim 1, the “deterministic network” of claim 2, or a second, distinct “deterministic network.” Thus, the claims are rejected.
Similarly, claims 3-5 recite the limitation “…a check result….” This limitation is indefinite because it is unclear whether it refers back to the “check result” of claim 1. Claim 4 further recites “…a first check result…,” and it is unclear whether this is the same limitation as, or a different limitation from, the “check result” of claim 1.
Claim 6 recites “…used for checking service performance of the TSN network, so as to enable the target node to perform check processing….” It is unclear whether the limitations following this recitation are part of the claim. The Examiner considers this limitation to be a statement of intended use. Claims 7-10 include similar limitations; thus, the same rationale applies. Claims 13-14 are also rejected as depending on rejected claim 6.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-3, 6-8 and 11-14 are rejected under 35 U.S.C. 103 as being unpatentable over Mohan et al. (hereinafter Mohan) US 2005/0099952 A1 in view of Wetterwald et al. (hereinafter Wetterwald) US 2019/0349392 A1.
Regarding Claim 1, Mohan teaches a network monitoring method, applied to a target node deployed in a time-sensitive networking (TSN) network, the method comprising:
receiving an Operations, Administration, and Maintenance (OAM) frame from a source node (¶0062-¶0064, ¶0148 & ¶0154; edge network elements or a requestor [source node] within an OAM domain receives OAM frame from receiver/edge network elements), wherein the OAM frame comprises an operation code (Fig. 4 & ¶0085; OAM OpCode field 62. Note that this code includes different fields for performance checks listed in ¶0086-¶0102) and check data corresponding to the operation code (Fig. 4 & ¶0103; OAM data field 74 is a variable length field that is associated with the corresponding OAM OpCode and is specified for each OAM function),
performing check processing on performance of a deterministic network according to the operation code and the check data to obtain a check result (Fig. 4 & ¶0142-¶0154; frame loss measurement [performance of a deterministic network] examples include using the OAM OpCode field and OAM data field 74. Mohan provides different methods (solicited/unsolicited/statistical) of performing checks after the OAM frame is sent, as well as the calculations for the case where N=1. All the methods return an estimation of frame loss [check result]. For example, the requestor may send a number (N) of OAM request frames to a recipient and may receive a different number (M) of response frames back from the recipient such that M ≤ N. The data path frame loss can be estimated as Frame Loss=(N-M) per measurement time interval);
and transmitting the check result to the source node (¶0147 & ¶0149 & Fig. 4; Upon receiving an OAM response frame [transmitted back], the edge network elements or a requestor [source node] compares the original sent value with the received values, in a manner similar to the receiver. It is possible that the receiver returns the results of frame loss instead of the managed object information in the response).
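For clarity of the record, the frame loss estimation quoted from Mohan ¶0142-¶0154 can be sketched as follows (an illustrative sketch only; the function and variable names are the Examiner's and do not appear in Mohan):

```python
def estimate_frame_loss(n_sent: int, m_received: int) -> int:
    """Estimate data-path frame loss per measurement time interval.

    Per Mohan's solicited method, a requestor sends N OAM request
    frames and receives M response frames back (M <= N); the frame
    loss for that interval is estimated as N - M.
    """
    if m_received > n_sent:
        raise ValueError("received count cannot exceed sent count")
    return n_sent - m_received

# 100 OAM request frames sent, 97 response frames received
print(estimate_frame_loss(100, 97))  # 3 frames lost this interval
```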
Mohan is directed to using OAM in an Ethernet network for performance management. See ¶0107. However, Mohan does not expressly teach “applied to a target node deployed in a time-sensitive networking (TSN) network,” “and the OAM frame is used for checking service performance of the TSN network;”
Wetterwald on the other hand is directed to using OAM message in TSN/DeNet flow to determine performance. See ¶0068.
Wetterwald teaches “applied to a target node deployed in a time-sensitive networking (TSN) network, (¶0066-¶0068; applying OAM techniques/mechanism to detect an issue in a deterministic flow to the source of a TSN/DeNet flow)” “and the OAM frame is used for checking service performance of the TSN network” (¶0066-¶0068; applying OAM techniques to detect an issue in a deterministic flow to the source of TSN/DeNet flow).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Wetterwald into the system of Mohan in order to trigger sending an OAM message asynchronously, effectively asking each device along the path, for that specific flow, to time stamp the packets and compare them with the schedule (¶0068). Utilizing such teachings enables the system to locate timing issues for a flow and address them accordingly. Id.
Regarding Claim 2, Mohan in view of Wetterwald teaches the network monitoring method of claim 1, Mohan further teaches wherein the performance of the deterministic network comprises at least one of: packet redundancy (Fig. 4 & ¶0104; cyclic redundancy check CRC); packet cycle (Fig. 4 & ¶0104; cyclic redundancy check CRC); or frame preemption capability.
Regarding Claim 3, Mohan in view of Wetterwald teaches the network monitoring method of claim 2, Mohan further teaches wherein in response to the operation code being a first operation code, the check data comprise: sequence number (Fig. 4 & ¶0104; FCS), number of redundant packets (Fig. 4 & ¶0104; CRC), first receiving time (obvious from Fig. 4 & ¶0119-¶0012 & ¶0134; parameter for transmission time), transmission time (¶0119-¶0012 & ¶0134; parameter for transmission time), maximum delay value (¶0159; maximum frame delay (FDmax) and minimum frame delay (FDmin)), and minimum delay value (¶0159; maximum frame delay (FDmax) and minimum frame delay (FDmin));
performing check processing on performance of a deterministic network according to the operation code and the check data to obtain a check result comprises the steps of:
obtaining a number of received packets according to a timing duration, the sequence numbers of the received packets being the same, and the timing duration being obtained according to the first receiving time, the transmission time and the maximum delay value (obvious from ¶0125 and ¶0143-¶0154 calculations);
and obtaining a packet redundancy check result according to the number of received packets, the number of redundant packets, and a time comparison result, wherein the time comparison result is obtained by comparing a difference between the first receiving time and the transmission time with a difference between the maximum delay value and the minimum delay value (obvious from ¶0104; CRC, and ¶0159-¶0160; the loopback method measures the round-trip or two-way frame delay per request and response frame. Within the period of observation, the requestor keeps track of the maximum frame delay (FDmax) and minimum frame delay (FDmin). The frame delay variation is then calculated as: frame delay variation or jitter = FDmax - FDmin. Information elements that may be used in connection with the frame delay variation include the sequence number and the request timestamp, although other elements may be included as well. Additionally, one-way Frame Delay Variation (FDV) may be measured; for example, at the receiver the frame delay variation may be measured as FDV = [Time(rx2)-Time(rx1)]-[Time(tx2)-Time(tx1)], to provide the one-way delay variation between the two samples. This does not require time synchronization between requestor and responder. The invention is not limited to this particular example as other measurements may be made as well).
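For clarity of the record, the delay-variation calculations quoted from Mohan ¶0159-¶0160 can be expressed as follows (an illustrative sketch only; the function and variable names are the Examiner's and do not appear in Mohan):

```python
def two_way_jitter(round_trip_delays):
    """Two-way frame delay variation from loopback measurements:
    jitter = FDmax - FDmin over the observation period."""
    return max(round_trip_delays) - min(round_trip_delays)

def one_way_fdv(tx1, rx1, tx2, rx2):
    """One-way frame delay variation between two samples:
    FDV = [Time(rx2)-Time(rx1)] - [Time(tx2)-Time(tx1)].
    Each difference uses timestamps from a single clock, so no time
    synchronization between requestor and responder is required."""
    return (rx2 - rx1) - (tx2 - tx1)

print(two_way_jitter([5.0, 9.0, 7.0]))       # 4.0 (FDmax - FDmin)
print(one_way_fdv(0.0, 5.0, 10.0, 17.0))     # 2.0 (12.0 - 10.0)
```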
Regarding Claim 6, Mohan teaches a network monitoring method, applied to a source node deployed in a time-sensitive networking (TSN) network, the method comprising:
transmitting an Operations, Administration, and Maintenance (OAM) frame to a target node (¶0062-¶0064, ¶0148 & ¶0154; edge network elements or a requestor within an OAM domain receives OAM frame from receiver/one or more edge network elements [target node]), wherein the OAM frame comprises an operation code (Fig. 4 & ¶0085; OAM OpCode field 62. Note that this code includes different fields for performance checks listed in ¶0086-¶0102) and check data corresponding to the operation code (Fig. 4 & ¶0103; OAM data field 74 is a variable length field that is associated with the corresponding OAM OpCode and is specified for each OAM function),
and the OAM frame is used for checking service performance of the TSN network (Fig. 4 & ¶0142-¶0154; frame loss measurement [performance of a network] examples include using the OAM OpCode field and OAM data field 74. Mohan provides different methods (solicited/unsolicited/statistical) of performing checks after the OAM frame is sent, as well as the calculations for the case where N=1. All the methods return an estimation of frame loss [check result]. For example, the requestor may send a number (N) of OAM request frames to a recipient and may receive a different number (M) of response frames back from the recipient such that M ≤ N. The data path frame loss can be estimated as Frame Loss=(N-M) per measurement time interval), so as to enable the target node to perform check processing on performance of a deterministic network according to the operation code and the check data to obtain a check result and transmit the check result to the source node (Fig. 4 & ¶0142-¶0154; see the frame loss measurement discussion above);
and receiving the check result from the target node (¶0147 & ¶0149 & Fig. 4; Upon receiving an OAM response frame [from the receiver], the edge network elements or a requestor compares the original sent value with the received values, in a manner similar to the receiver. It is possible that the receiver returns the results of frame loss instead of the managed object information in the response).
Mohan is directed to using OAM in an Ethernet network for performance management. See ¶0107. However, Mohan does not expressly teach “applied to a source node deployed in a time-sensitive networking (TSN) network,” “…performance of the TSN network;”
Wetterwald on the other hand is directed to using OAM message in TSN/DeNet flow to determine performance. See ¶0068.
Wetterwald teaches “applied to a source node deployed in a time-sensitive networking (TSN) network, (¶0066-¶0068; applying OAM techniques/mechanism to detect an issue in a deterministic flow to the source of a TSN/DeNet flow)” “… performance of the TSN network” (¶0066-¶0068; applying OAM techniques to detect an issue in a deterministic flow to the source of TSN/DeNet flow).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Wetterwald into the system of Mohan in order to trigger sending an OAM message asynchronously, effectively asking each device along the path, for that specific flow, to time stamp the packets and compare them with the schedule (¶0068). Utilizing such teachings enables the system to locate timing issues for a flow and address them accordingly. Id.
Regarding Claim 7, Mohan in view of Wetterwald teaches the network monitoring method of claim 6, Mohan further teaches wherein in response to the operation code being a first operation code, the check data are first check data (see Fig. 4 & ¶0140-¶0154; Mohan is not limited to first/second check data);
and transmitting an OAM frame to a target node comprises the steps of: transmitting a first OAM frame to a target node, wherein the first OAM frame comprises the first operation code and the first check data, and the first check data comprise sequence number (Fig. 4 & ¶0104; FCS), number of redundant packets (Fig. 4 & ¶0104; CRC), first receiving time (obvious from Fig. 4 & ¶0119-¶0012 & ¶0134; parameter for transmission time), transmission time (¶0119-¶0012 & ¶0134; parameter for transmission time), maximum delay value (¶0159; maximum frame delay (FDmax) and minimum frame delay (FDmin)), and minimum delay value (¶0159; maximum frame delay (FDmax) and minimum frame delay (FDmin)), so as to enable the target node to obtain a packet redundancy check result according to the first OAM frame (intended use, see Fig. 4).
Regarding Claim 8, Mohan in view of Wetterwald teaches the network monitoring method of claim 6, Mohan further teaches wherein in response to the operation code being a second operation code, the check data are second check data (see Fig. 4 & ¶0140-¶0154; Mohan is not limited to first/second check data);
and transmitting an OAM frame to a target node comprises the steps of: transmitting a second OAM frame to a target node, wherein the second OAM frame comprises the second operation code and the second check data (obvious from ¶0125 and ¶0143-¶0154; see the different calculations), and the second check data comprise second receiving time, packet cycle value, cycle jitter value, and fail packet rate (all the different parameters are obvious from ¶0104; CRC, and ¶0159-¶0160; because the loopback method measures the round-trip or two-way frame delay per request and response frame. Within the period of observation, the requestor keeps track of the maximum frame delay (FDmax) and minimum frame delay (FDmin). The frame delay variation is then calculated as: frame delay variation or jitter = FDmax - FDmin. Information elements that may be used in connection with the frame delay variation include the sequence number and the request timestamp, although other elements may be included as well. Additionally, one-way Frame Delay Variation (FDV) may be measured; for example, at the receiver the frame delay variation may be measured as FDV = [Time(rx2)-Time(rx1)]-[Time(tx2)-Time(tx1)], to provide the one-way delay variation between the two samples. This does not require time synchronization between requestor and responder. The invention is not limited to this particular example as other measurements may be made as well), so as to enable the target node to obtain a packet cycle check result according to the second OAM frame and a preset number threshold (intended use, see Fig. 4).
Claims 11-14 are substantially similar to claims 1 and 6; thus, the same rationale applies.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MAHRAN ABU ROUMI whose telephone number is (469)295-9170. The examiner can normally be reached Monday-Thursday 6AM-5PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Emmanuel Moise can be reached at 571-272-3865. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
MAHRAN ABU ROUMI
Primary Examiner
Art Unit 2455
/MAHRAN Y ABU ROUMI/Primary Examiner, Art Unit 2455