DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statements filed on 12/28/2023 and 01/12/2024 comply with all applicable rules and regulations. Therefore, the information referred to therein has been considered.
Claim Objections
Claims 6, 7, 11, 25, and 35 are objected to because of the following informalities:
• Claim 6, line 4, “effect of a QoS boost” should be “effect of the QoS boost”.
• Claim 7, line 2, “corresponds” should be “correspond”.
• Claim 7, line 5, “a number packets” should be “a number of packets”.
• Claim 11, line 3, the abbreviation “POST” is not defined in the claims or in the specification.
• Claim 25, line 5, “the clocks of the UE and test head module of the node” should be “clocks of the UE and the test head module of the node”.
• Claim 35, line 1, “the clocks of the node and end-user device” should be “clocks of the node and the end-user device”.
Appropriate correction is required.
Specification Objection
Applicant is reminded of the proper language and format for an abstract of the
disclosure.
The abstract should be in narrative form and generally limited to a single paragraph on a
separate sheet within the range of 50 to 150 words in length. The abstract should describe the
disclosure sufficiently to assist readers in deciding whether there is a need for consulting the
full patent text for details.
The language should be clear and concise and should not repeat information given in
the title. It should avoid using phrases which can be implied, such as, “The disclosure concerns,” “The disclosure defined by this invention,” “The disclosure describes,” etc. In addition, the form and legal phraseology often used in patent claims, such as “means” and “said,” should be avoided.
The use of the terms Bluetooth, Wi-Fi, Zigbee, and Z-Wave, which are trade names or marks used in commerce, has been noted in this application. Each term should be accompanied by the generic terminology; furthermore, each term should be capitalized wherever it appears or, where appropriate, include a proper symbol indicating use in commerce, such as ™, SM, or ® following the term.
Although the use of trade names and marks used in commerce (i.e., trademarks, service marks, certification marks, and collective marks) is permissible in patent applications, the proprietary nature of the marks should be respected and every effort should be made to prevent their use in any manner which might adversely affect their validity as commercial marks.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-2, 5-7, 13, 25-26, 29, and 35-36 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Jump et al. (US-20210194782-A1), filed on Dec. 17, 2020 and published on Jun. 24, 2021.
Regarding claim 1 (Currently Amended), Jump teaches a method performed by a user equipment (UE) (Figs. 1 and 3; [0022] states “the end point devices 110 may be or include computing devices (e.g., user devices, desktop computing devices, portable computing devices, mobile computing devices, server devices, network devices, or the like). As shown in FIG. 1, the end point devices 110 may communicate with each other within the subnet 115.”, which indicates that the method may be performed by the host end point device/the UE, as also indicated in claim 1) for active network measurement ([0017] states “the testing, and the measurements taken during testing, may be customized or dynamically altered in order to obtain more accurate connectivity performance quality test results” and “a series of preliminary measurements may be taken to determine the minimum size of the payload for actual testing.”, which implies the method includes taking active network measurements (connectivity performance measurements)), the method comprising: sending one or more first test packets to a node, wherein the first test packets are not boosted for improved Quality of Service (QoS) ([0016]-[0017] describe that the UE (host end point device) can send the test packets to the node (test end point device) in both scenarios (without boosting or with boosting, if desired), stating “testing policies may be implemented to prevent inaccurate testing, and/or to prevent connectivity testing from adversely impacting normal operational communications between the endpoints”. This implies the test packets can be sent under normal conditions without special prioritization (boosting) so that normal operations are not affected, as stated: “bandwidth limits, QoS policies, and/or other policies may be implemented for connectivity testing such that bandwidth and/or other network resources are not unrestrictedly consumed for connectivity testing, which could adversely affect endpoint communications for normal operations.”); receiving one or more first reflected packets from the node (Fig. 2; [0034] states “Process 200 also may include determining round trip latency between the host and test endpoints and storing the results (step 230). For example, when the test end point device 110 is available, the host end point device 110 may begin the connectivity performance quality test by performing a round trip latency test with the test end point device 110.”, which implies that the host end point device can evaluate the reflected packets after receiving them from the test end point. [0034], lines 9-13, and Claim 3 confirm that the host end point device sends the packets and receives the response (reflected) packets to complete the test); calculating preliminary results based at least in part on the received first reflected packets ([0034], lines 3-7, states “For example, when the test end point device 110 is available, the host end point device 110 may begin the connectivity performance quality test by performing a round trip latency test with the test end point device 110” and lines 7-9 state “In some embodiments, the host end point device 110 may perform a round trip latency test using any suitable latency testing technique.” [0037], lines 1-4, also indicates this); sending one or more second test packets to the node, wherein the second test packets are boosted for improved QoS ([0016]-[0017] describe that the UE (host end point device) can send the test packets to the node (test end point device) in both scenarios (without boosting or with boosting, if desired), stating “testing policies may be implemented to prevent inaccurate testing, and/or to prevent connectivity testing from adversely impacting normal operational communications between the endpoints”. This implies the test packets can be sent under normal conditions without special prioritization (boosting) so that normal operations are not affected, as stated: “bandwidth limits, QoS policies, and/or other policies may be implemented for connectivity testing such that bandwidth and/or other network resources are not unrestrictedly consumed for connectivity testing, which could adversely affect endpoint communications for normal operations.”, which implies the test packets can be affected by QoS policies or other network configurations, and “connectivity performance quality may be measured with consideration and adjustments to compensate for these policies”. [0017] states “the testing, and the measurements taken during testing, may be customized or dynamically altered in order to obtain more accurate connectivity performance quality test results.”, which implies sending boosted test packets to evaluate the impact of QoS or related policies, as stated in [0016], lines 24-29: “Similar examples may apply for other connectivity performance metrics, such as latency and jitter. In general, connectivity performance may be measured with consideration or compensation for network policies, endpoint capabilities, hardware configurations, and/or software configurations.”); receiving one or more second reflected packets from the node ([0034], lines 9-13, states “the host end point device 110 may exchange a relatively small test payload file (e.g., approximately 1 kilobit in size), and repeat the exchange for several iterations (e.g., twenty iterations or other number of iterations).” Claim 2 also states “executing a plurality of individual connectivity performance quality tests between the host endpoint device and a respective plurality of test endpoint devices within the subnet”; these passages indicate receiving one or more second reflected packets from the node); and calculating test results based at least in part on the received second reflected packets and the preliminary results (Fig. 2, Steps 240-245; [0037], lines 1-13, states “Process 200 further may include applying a statistical model to the latency and transfer throughput measurements between endpoints to determine measure of connectivity performance quality (step 245) … inputs to the statistical model.”, which implies using statistical models to analyze the data from the different types of packets (with/without a boost for QoS) to compute the connectivity performance quality. [0017] states “the testing, and the measurements taken during testing, may be customized or dynamically altered in order to obtain more accurate connectivity performance quality test results.”; this indicates the host can dynamically adjust its testing methodology for different types of packets to obtain accurate test results).
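For illustration only, the comparison underlying the claimed two-phase measurement (an unboosted preliminary pass followed by a boosted pass) can be sketched as follows; the function and variable names are hypothetical assumptions and are not drawn from Jump or from the claims:

```python
from statistics import mean

def qos_boost_effect(unboosted_rtts_ms, boosted_rtts_ms):
    """Hypothetical sketch: compare preliminary (unboosted) round-trip
    times against round-trip times of the boosted second test packets."""
    preliminary = mean(unboosted_rtts_ms)  # preliminary results (first test packets)
    boosted = mean(boosted_rtts_ms)        # results for the boosted second test packets
    return preliminary - boosted           # effect of the QoS boost, in milliseconds
```

A positive return value would indicate that the boosted packets experienced lower average round-trip latency than the unboosted ones.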
Regarding claim 2 (Currently Amended), Jump teaches the method of claim 1.
Jump further teaches performing a quality of service (QoS) determination ([0037], lines 1-4, Step 245, Fig. 2, illustrates determination of connectivity performance quality based on the test results, where the connectivity performance quality information can be considered QoS, as stated in [0014], lines 5-9: “the connectivity performance quality (also referred to herein as “connectivity quality”) may be quantified or determined based on one or more connectivity performance metrics (e.g., roundtrip latency, bandwidth rates, jitter, etc.)”), a QoS request, or a QoS reporting operation based at least in part on the test results ([0037], lines 4-9, and Step 250, Fig. 2, describe that the host end point device stores and/or outputs (reports) the connectivity performance quality information, as stated: “the host end point device 110 may apply one or more statistical models, functions, or the like to the latency and transfer throughput measurements with the test end point device 110 (e.g., as obtained, recorded, and stored at blocks 230 and 240, described herein)”).
Regarding claim 5 (Currently Amended), Jump teaches the method of claim 1.
Jump teaches wherein the preliminary results comprise one or more time or delay values ([0034], lines 1-3 and lines 13-15, explicitly state that the results include the duration of time and round trip latency/delay values), and wherein the one or more time or delay values are calculated based at least in part on timestamps of the received first reflected packets ([0034] states “the duration of time for each transfer may be recorded (e.g., in milliseconds) and stored.”, and [0035], lines 16-31, illustrates how the test duration is iteratively adjusted, which implies the difference between these timestamps provides those durations).
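The round-trip timing relied on in this mapping can be sketched as follows; this is a hypothetical illustration (the function names are assumptions, not from Jump), and it shows why only the sender's own clock is needed:

```python
import time

def measure_round_trip(send_packet, receive_reflection, num_iterations=20):
    """Hypothetical sketch of a round-trip latency test: both timestamps
    come from the sender's own monotonic clock, so no clock synchronization
    with the reflecting endpoint is required."""
    delays_ms = []
    for _ in range(num_iterations):
        t_send = time.monotonic()      # timestamp when the test packet leaves
        send_packet()
        receive_reflection()           # blocks until the reflected packet returns
        t_recv = time.monotonic()      # timestamp when the reflection arrives
        delays_ms.append((t_recv - t_send) * 1000.0)
    return min(delays_ms), sum(delays_ms) / len(delays_ms)
```

Because the difference is taken between two readings of the same clock, the measurement is unaffected by any offset between the sender's and reflector's clocks.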
Regarding claim 6 (Currently Amended), Jump teaches the method of claim 1.
Jump further teaches wherein the test results comprise one or more time ([0035], lines 18-21, states “the network throughput test should have a sufficiently long duration to achieve accurate network throughput test results (e.g., ten seconds to twenty seconds)”, which implies including the time), delay ([0034], lines 1-4, states “Process 200 also may include determining round trip latency between the host and test endpoints and storing the results (step 230)”, which explicitly mentions measuring the latency/delay), effect of a QoS boost ([0016], lines 1-4, illustrates the impact of QoS on the connectivity testing, as stated: “bandwidth limits, QoS policies, and/or other policies may be implemented for connectivity testing such that bandwidth and/or other network resources are not unrestrictedly consumed for connectivity testing, which could adversely affect endpoint communications for normal operations.”, which implies the QoS policies can influence the results of the connectivity performance test. As further explained, “In general, connectivity performance may be measured with consideration or compensation for network policies, endpoint capabilities, hardware configurations, and/or software configurations.”, and the example in lines 12-24 describes how to limit the bandwidth, which is a part of QoS, during the test), or congestion values, and wherein the one or more time, delay, effect of a QoS boost, or congestion values are calculated based at least in part on timestamps of the received second reflected packets ([0034], lines 9-13, states “the host end point device 110 may exchange a relatively small test payload file (e.g., approximately 1 kilobit in size), and repeat the exchange for several iterations (e.g., twenty iterations or other number of iterations). In some embodiments, the duration of time for each transfer may be recorded (e.g., in milliseconds) and stored.” and Claim 3; this implies the process relies on timestamps to measure the time taken for packets to travel to the test end point and back to the host end point device. [0035] states “the host end point device 110 may iteratively or progressively select increasing sizes of the payload test file after each transfer until the network throughput test duration is within a predetermined range (e.g., the ten second to twenty second range, or other range).”, which also confirms the process relies on timestamps to measure the transfer time for each payload test file).
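The iterative payload-size selection quoted from [0035] can be sketched as follows; this is a hypothetical illustration in which the function names, growth factor, and attempt limit are assumptions, not taken from Jump:

```python
def select_payload_size(run_throughput_test, initial_size_bits=1_000,
                        min_duration_s=10.0, max_duration_s=20.0,
                        growth_factor=2, max_attempts=20):
    """Hypothetical sketch: progressively adjust the test payload size until
    the throughput-test duration falls within the target range (e.g., the
    ten second to twenty second range quoted from [0035])."""
    size = initial_size_bits
    for _ in range(max_attempts):
        duration_s = run_throughput_test(size)  # transfer payload, return elapsed seconds
        if min_duration_s <= duration_s <= max_duration_s:
            return size                          # duration acceptable: keep this size
        if duration_s < min_duration_s:
            size *= growth_factor                # test finished too quickly: grow payload
        else:
            size //= growth_factor               # test ran too long: shrink payload
    return size
```

The callback `run_throughput_test` stands in for the actual transfer; on a link where duration grows with payload size, the loop converges on a size whose transfer time lands inside the target window.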
Regarding claim 7 (Currently Amended), Jump teaches the method of claim 1.
Jump further teaches obtaining a test request, wherein one or more of the first and second test packets correspond to the test request (Claim 7 explains how the host end point device negotiates with the test end point device at the established start time to check the availability of the test end point device for testing, which can be performed by sending several test packets, as stated: “establishing a start time to perform the connectivity performance quality test with the test endpoint device; at the established start time, probing the test endpoint device; receiving an indication as to whether the test endpoint device is currently available for testing; performing the connectivity performance quality test with the test endpoint device if the indication indicates that the test endpoint device is currently available for testing … at the time that was identified.” [0032], lines 3-5); and wherein the test request comprises one or more of a number of packets ([0036] describes that the host end point device can determine multiple network throughput test transfers (a number of packets), which could be 5-10, as stated in lines 10-13: “the test payload may be transferred approximately five to ten times, although any number of transfers may be configured to obtain a reliable measurement of the transfer times.”), packet size (claim 19 states “determining, by the host endpoint device, a payload size for a test file to be used for performing a network throughput test as part of the connectivity performance quality test”, which confirms the host end point device is responsible for determining the packet size; Step 235, Fig. 2, [0017], and [0035]), boost type, or Data Network Name (DNN), and submitting a callback, wherein the callback comprises the test results or a network congestion estimation (Claim 1, lines 16-21, and claim 10, lines 4-6, state “providing, by the host endpoint device, second results data from the connectivity performance quality test to the central network administration device.”, which confirms providing the results as a callback, where the results include, as stated in [0038], “the host end point device 110 may provide results data (e.g., latency measurements, transfer throughput measurements, and connectivity performance quality values) to the central network administration device 130 (or another source)”).
Regarding claim 13 (Currently Amended), Jump teaches the method of claim 1.
Jump further teaches wherein the UE is an end-user device connected to one or more of a cellular or wide area network (WAN) data path ([0002]-[0003] and [0026] describe that the UE, as an end-user device, can be connected to a network/sub-network through a WAN, a LAN, or a cellular network of various generations. [0013], lines 9-13, states “some host endpoints may be configured as local network peers within a subnet even when those host endpoints are not co-located (e.g., when the endpoints access the subnet through a WAN, a Virtual Private Network (VPN), etc.)”, with distributed devices using these data paths as stated in [0023], lines 9-12), and wherein the node comprises a test head module (Claim 1 states “providing, by the host endpoint device, results data from the connectivity performance quality test to a central network administration device, whereby the central network administration device executes a computer-based instruction based on the results data from the connectivity performance quality test.”, where the central network administration device can be considered the test head module for the system).
Regarding claim 25 (Currently Amended), Jump teaches the method of claim 1.
Jump further teaches wherein the UE is an end-user device connected to one or more of a cellular or WAN data path ([0013], lines 9-13, states “some host endpoints may be configured as local network peers within a subnet even when those host endpoints are not co-located (e.g., when the endpoints access the subnet through a WAN, a Virtual Private Network (VPN), etc.)”, with distributed devices using these data paths as stated in [0023], lines 9-12), wherein the node comprises a test head module (Claim 1 states “providing, by the host endpoint device, results data from the connectivity performance quality test to a central network administration device, whereby the central network administration device executes a computer-based instruction based on the results data from the connectivity performance quality test.”, where the central network administration device can be considered the test head module for the system), and wherein the clocks of the UE and the test head module of the node are not synchronized (Claim 11, lines 7-13, and [0038] state “the host end point device 110 may provide results data (e.g., latency measurements, transfer throughput measurements, and connectivity performance quality values) to the central network administration device 130 (or another source). In some embodiments, the central network administration device 130 may execute a computer-based instruction based on the results data.”, which implies the central device collects and processes the test results after receiving them from the end point device).
Regarding claim 26 (Currently Amended), Jump teaches a user equipment (UE) (Figs. 1 and 3; [0022] states “the end point devices 110 may be or include computing devices (e.g., user devices, desktop computing devices, portable computing devices, mobile computing devices, server devices, network devices, or the like). As shown in FIG. 1, the end point devices 110 may communicate with each other within the subnet 115.”, which indicates that the recited operations may be performed by the host end point device/the UE, as also indicated in claim 1), configured to: send one or more first test packets to a node, wherein the first test packets are not boosted for improved Quality of Service (QoS) ([0016]-[0017] describe that the UE (host end point device) can send the test packets to the node (test end point device) in both scenarios (without boosting or with boosting, if desired), stating “testing policies may be implemented to prevent inaccurate testing, and/or to prevent connectivity testing from adversely impacting normal operational communications between the endpoints”. This implies the test packets can be sent under normal conditions without special prioritization (boosting) so that normal operations are not affected, as stated: “bandwidth limits, QoS policies, and/or other policies may be implemented for connectivity testing such that bandwidth and/or other network resources are not unrestrictedly consumed for connectivity testing, which could adversely affect endpoint communications for normal operations.”); receive one or more first reflected packets from the node (Fig. 2; [0034] states “Process 200 also may include determining round trip latency between the host and test endpoints and storing the results (step 230). For example, when the test end point device 110 is available, the host end point device 110 may begin the connectivity performance quality test by performing a round trip latency test with the test end point device 110.”, which implies that the host end point device can evaluate the reflected packets after receiving them from the test end point. [0034], lines 9-13, and Claim 3 confirm that the host end point device sends the packets and receives the response (reflected) packets to complete the test); calculate preliminary results based at least in part on the received first reflected packets ([0034], lines 3-7, states “For example, when the test end point device 110 is available, the host end point device 110 may begin the connectivity performance quality test by performing a round trip latency test with the test end point device 110” and lines 7-9 state “In some embodiments, the host end point device 110 may perform a round trip latency test using any suitable latency testing technique.” [0037], lines 1-4, also indicates this); send one or more second test packets to the node, wherein the second test packets are boosted for improved QoS ([0016]-[0017] describe that the UE (host end point device) can send the test packets to the node (test end point device) in both scenarios (without boosting or with boosting, if desired), stating “testing policies may be implemented to prevent inaccurate testing, and/or to prevent connectivity testing from adversely impacting normal operational communications between the endpoints”. This implies the test packets can be sent under normal conditions without special prioritization (boosting) so that normal operations are not affected, as stated: “bandwidth limits, QoS policies, and/or other policies may be implemented for connectivity testing such that bandwidth and/or other network resources are not unrestrictedly consumed for connectivity testing, which could adversely affect endpoint communications for normal operations.”, which implies the test packets can be affected by QoS policies or other network configurations, and “connectivity performance quality may be measured with consideration and adjustments to compensate for these policies”. [0017] states “the testing, and the measurements taken during testing, may be customized or dynamically altered in order to obtain more accurate connectivity performance quality test results.”, which implies sending boosted test packets to evaluate the impact of QoS or related policies, as stated in [0016], lines 24-29: “Similar examples may apply for other connectivity performance metrics, such as latency and jitter. In general, connectivity performance may be measured with consideration or compensation for network policies, endpoint capabilities, hardware configurations, and/or software configurations.”); receive one or more second reflected packets from the node ([0034], lines 9-13, states “the host end point device 110 may exchange a relatively small test payload file (e.g., approximately 1 kilobit in size), and repeat the exchange for several iterations (e.g., twenty iterations or other number of iterations).” Claim 2 also states “executing a plurality of individual connectivity performance quality tests between the host endpoint device and a respective plurality of test endpoint devices within the subnet”; these passages indicate receiving one or more second reflected packets from the node); and calculate test results based at least in part on the received second reflected packets and the preliminary results (Fig. 2, Steps 240-245; [0037], lines 1-13, states “Process 200 further may include applying a statistical model to the latency and transfer throughput measurements between endpoints to determine measure of connectivity performance quality (step 245) … inputs to the statistical model.”, which implies using statistical models to analyze the data from the different types of packets (with/without a boost for QoS) to compute the connectivity performance quality. [0017] states “the testing, and the measurements taken during testing, may be customized or dynamically altered in order to obtain more accurate connectivity performance quality test results.”; this indicates the host can dynamically adjust its testing methodology for different types of packets to obtain accurate test results).
Regarding claim 29 (Currently Amended), Jump teaches a method performed by a node, comprising: receiving and reflecting one or more first test packets from an end-user device (Fig. 2; [0034] states “Process 200 also may include determining round trip latency between the host and test endpoints and storing the results (step 230). For example, when the test end point device 110 is available, the host end point device 110 may begin the connectivity performance quality test by performing a round trip latency test with the test end point device 110.”, which implies that the host end point device can evaluate the reflected packets after receiving them from the test end point (the node). [0034], lines 9-13, and Claim 3 confirm that the host end point device sends the packets and receives the response (reflected) packets to complete the test), wherein the first test packets are not boosted for improved Quality of Service (QoS) ([0016]-[0017] describe that the end-user device (host end point device) can send the test packets to the node (test end point device) in both scenarios (without boosting or with boosting, if desired), stating “testing policies may be implemented to prevent inaccurate testing, and/or to prevent connectivity testing from adversely impacting normal operational communications between the endpoints”. This implies the test packets can be sent under normal conditions without special prioritization (boosting) so that normal operations are not affected, as stated: “bandwidth limits, QoS policies, and/or other policies may be implemented for connectivity testing such that bandwidth and/or other network resources are not unrestrictedly consumed for connectivity testing, which could adversely affect endpoint communications for normal operations.”); receiving and reflecting one or more second test packets from the end-user device, wherein the second test packets are boosted for improved QoS ([0016]-[0017] describe that the end-user device (host end point device) can send the test packets to the node (test end point device) in both scenarios (without boosting or with boosting, if desired), stating “testing policies may be implemented to prevent inaccurate testing, and/or to prevent connectivity testing from adversely impacting normal operational communications between the endpoints”. This implies the test packets can be sent under normal conditions without special prioritization (boosting) so that normal operations are not affected, as stated: “bandwidth limits, QoS policies, and/or other policies may be implemented for connectivity testing such that bandwidth and/or other network resources are not unrestrictedly consumed for connectivity testing, which could adversely affect endpoint communications for normal operations.”, which implies the test packets can be affected by QoS policies or other network configurations, and “connectivity performance quality may be measured with consideration and adjustments to compensate for these policies”. [0017] states “the testing, and the measurements taken during testing, may be customized or dynamically altered in order to obtain more accurate connectivity performance quality test results.”, which implies sending boosted test packets to evaluate the impact of QoS or related policies, as stated in [0016], lines 24-29: “Similar examples may apply for other connectivity performance metrics, such as latency and jitter. In general, connectivity performance may be measured with consideration or compensation for network policies, endpoint capabilities, hardware configurations, and/or software configurations”. [0034], lines 9-13, states “the host end point device 110 may exchange a relatively small test payload file (e.g., approximately 1 kilobit in size), and repeat the exchange for several iterations (e.g., twenty iterations or other number of iterations).” Claim 2 also states “executing a plurality of individual connectivity performance quality tests between the host endpoint device and a respective plurality of test endpoint devices within the subnet”; these passages indicate that one or more second test packets are received and reflected by the node); and receiving a message from the end-user device indicating one or more of a QoS test result, a QoS determination, a QoS request, or a boost request based at least in part on the first and second test packets ([0037], lines 4-9, and Step 250, Fig. 2, describe that the host end point device stores and/or outputs (reports) the connectivity performance quality information, as stated: “the host end point device 110 may apply one or more statistical models, functions, or the like to the latency and transfer throughput measurements with the test end point device 110 (e.g., as obtained, recorded, and stored at blocks 230 and 240, described herein)”. Claim 1 states “providing, by the host endpoint device, results data from the connectivity performance quality test to a central network administration device, whereby the central network administration device executes a computer-based instruction based on the results data from the connectivity performance quality test.”, where the central network administration device can be considered the test head module for the system).
Regarding claim 35 (Currently Amended), Jump teaches the method of claim 29.
Jump further teaches wherein the clocks of the node and the end-user device are not synchronized (Claim 11, lines 7-13, and [0038] state “the host end point device 110 may provide results data (e.g., latency measurements, transfer throughput measurements, and connectivity performance quality values) to the central network administration device 130 (or another source). In some embodiments, the central network administration device 130 may execute a computer-based instruction based on the results data.”, which implies the central device collects and processes the test results after receiving them from the end point device).
Regarding claim 36 (Currently Amended), Jump teaches a node configured to: receive and reflect one or more first test packets from an end-user device (Fig. 2, [0034] states “Process 200 also may include determining round trip latency between the host and test endpoints and storing the results (step 230). For example, when the test end point device 110 is available, the host end point device 110 may begin the connectivity performance quality test by performing a round trip latency test with the test end point device 110.” That implies the host end point device can test the reflected packet after receiving from the test end point (the node). [0034], lines 9-13 and Claim 3 confirm that the host end point device sends the packets and receives the responses (reflected) packets to complete the test), wherein the first test packets are not boosted for improved Quality of Service (QoS) ([0016]-[0017] describe the UE, host end device, can send the test packets to the node, test endpoint device, in both scenarios (without boosting or with boosting, if desired) as states “testing policies may be implemented to prevent inaccurate testing, and/or to prevent connectivity testing from adversely impacting normal operational communications between the endpoints”. 
This implies the test packets can be sent under normal conditions without special prioritization (boosting) so as not to affect normal operations, as stated: “bandwidth limits, QoS policies, and/or other policies may be implemented for connectivity testing such that bandwidth and/or other network resources are not unrestrictedly consumed for connectivity testing, which could adversely affect endpoint communications for normal operations.”); receive and reflect one or more second test packets from the end-user device, wherein the second test packets are boosted ([0016]-[0017] describe that the UE (host end device) can send the test packets to the node (test endpoint device) in both scenarios (without boosting or with boosting, if desired), as stated: “testing policies may be implemented to prevent inaccurate testing, and/or to prevent connectivity testing from adversely impacting normal operational communications between the endpoints”. This implies the test packets can be sent under normal conditions without special prioritization (boosting) so as not to affect normal operations, as stated: “bandwidth limits, QoS policies, and/or other policies may be implemented for connectivity testing such that bandwidth and/or other network resources are not unrestrictedly consumed for connectivity testing, which could adversely affect endpoint communications for normal operations.” This further implies the test packets can be affected by QoS policies or other network configurations, and “connectivity performance quality may be measured with consideration and adjustments to compensate for these policies”.
[0017] states “the testing, and the measurements taken during testing, may be customized or dynamically altered in order to obtain more accurate connectivity performance quality test results,” which implies sending boosted test packets to evaluate the impact of QoS or related policies, as stated in [0016], lines 24-29: “Similar examples may apply for other connectivity performance metrics, such as latency and jitter. In general, connectivity performance may be measured with consideration or compensation for network policies, endpoint capabilities, hardware configurations, and/or software configurations”. [0034], lines 9-13, states “the host end point device 110 may exchange a relatively small test payload file (e.g., approximately 1 kilobit in size), and repeat the exchange for several iterations (e.g., twenty iterations or other number of iterations).” Claim 2 also states “executing a plurality of individual connectivity performance quality tests between the host endpoint device and a respective plurality of test endpoint devices within the subnet”. These paragraphs indicate receiving one or more second reflected packets from the node); and receive a message from the end-user device indicating one or more of a QoS test result, QoS determination, QoS request, or a boost request based at least in part on the first and second test packets ([0037], lines 4-9, and Step 250, Fig. 2 describe that the host end point device stores and/or outputs (reports) the connectivity performance quality information, as stated: “the host end point device 110 may apply one or more statistical models, functions, or the like to the latency and transfer throughput measurements with the test end point device 110 (e.g., as obtained, recorded, and stored at blocks 230 and 240, described herein)”.
Claim 1 states “providing, by the host endpoint device, results data from the connectivity performance quality test to a central network administration device, whereby the central network administration device executes a computer-based instruction based on the results data from the connectivity performance quality test.” Here, the central network administration device can be considered the test head module of the system).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 3 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Jump et al. (US-20210194782-A1), in view of Maria et al. (GB-2580595-A).
Regarding claim 3 (Currently Amended), Jump teaches the method of claim 1.
Jump fails to teach further comprising: providing an indication regarding use of QoS boost for subsequent transmissions from the UE, wherein the indication is an indication to a user that a QoS boost would improve the user's quality of experience.
However, Maria teaches providing an indication regarding use of QoS boost for subsequent transmissions from the UE (Step 510, Fig. 5 illustrates that the UE can communicate its activity/flag to the network regarding improving the network performance, which allows the network to adapt its configuration to improve the performance for the user, as stated in lines 15-20, page 8: “the network 100 adapts its configuration in order to improve the fairness of the network for users; this is achieved by reducing significant inequality in network performance amongst the UEs 110, when such UEs are together partaking in an appropriate competitive activity over the network 100 and/or are relying upon the network to support a competitive activity,” where “the network performance includes: jitter; latency; bandwidth (download and/or upload); Round-Trip Time (RTT) delay; and error rate (reliability)”. See also lines 27-31, page 12), wherein the indication is an indication to a user that a QoS boost would improve the user's quality of experience (lines 10-15, page 17 state “In a first step 510, the network identifies a first UE 110-1 as being eligible to have its network performance reconfigured in order to improve fairness; this is, for example, identified by way of a flag that it communicates to the network when the first UE 110-1 attaches to the network” and lines 5-10, page 18 state “An option is then given to the UEs -or at least one of the UEs (such as the UE with the lower/lowest network performance)- for the network to take action so as to improve fairness,” which allows the network to optimize the performance for the user to improve QoE).
Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Jump to incorporate the teachings of Maria (in analogous art) by adding the indication to a user that a QoS boost would improve the user's quality of experience and improve the performance of the network (Maria, lines 5-6, page 14).
Regarding claim 9 (Currently Amended), Jump teaches the method of claim 1.
Jump fails to teach further comprising the steps of: sending a QoS boost request to a boost service node; and sending an un-boost request to the boost service node.
However, Maria teaches further comprising the steps of: sending a QoS boost request to a boost service node; and sending an un-boost request to the boost service node (lines 24-27, page 5 describe the step of triggering when the users are detected in a competitive activity, as stated: “the method further comprises the step of identifying when the first and second users are both competing in a competitive activity over the telecommunications network, and performing said method when identifying that the first and second users are both competing in the competitive activity”. Fig. 5 describes the steps as an example: first, Step 510 (lines 11-13, page 17) indicates the eligibility to improve the network fairness (QoS boost); Step 550 (lines 22-24, page 17) adjusts the configuration to improve the fairness, which may include boosting the QoS; and Step 560 (lines 26-29, page 17) describes reverting the network configuration for the UE after the end of the competitive activity, effectively un-boosting the QoS. Lines 32-34, page 18, state “the network 100 is also configured to adapt network performance so as to improve fairness in dependence on user characteristics, such as skill at a competitive activity and/or hardware capabilities”).
Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Jump to incorporate the teachings of Maria (in analogous art) by adding the indication to a user that a QoS boost would improve the user's quality of experience and improve the performance of the network (Maria, lines 5-6, page 14).
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Jump et al. (US-20210194782-A1), in view of Smith et al. (US-20190349426-A1).
Regarding claim 11 (Currently Amended), Jump teaches the method of claim 1.
Jump further teaches further comprising: sending the test results or results of a quality of service (QoS) determination to the node (Claim 1 explicitly states “providing, by the host endpoint device, results data from the connectivity performance quality test to a central network administration device, whereby the central network administration device executes a computer-based instruction based on the results data from the connectivity performance quality test.”),
Jump fails to teach wherein the results are sent in a POST message comprising one or more of a test result, lot, long, or cellID value.
However, Smith teaches wherein the results are sent in a POST message comprising one or more of a test result, lot, long ([2479] illustrates the posting of the WD message, which can include lot (batch number or ID and size) and long (time/duration or long term), as shown in Fig. 57 and Table 1 for the payload message, as stated in [0863] and illustrated in Fig. 118, Block no. 11828, which prepares the message for transmitting, including the data/results), or cellID value.
Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Jump to incorporate the teachings of Smith (in analogous art) by adding wherein the results are sent in a POST message comprising one or more of a test result, lot, or long, for enabling reliable, secure, and identifiable devices that can form networks as needed to accomplish tasks (Smith, [0005], lines 6-8).
Claims 14 and 33 are rejected under 35 U.S.C. 103 as being unpatentable over Jump et al. (US-20210194782-A1), in view of Thangavel et al. (US 10749785 B1).
Regarding claim 14 (Currently Amended), Jump teaches the method of claim 13.
Jump fails to teach wherein the UE comprises one or more of a boost evaluation software development kit (SDK), a boost agent, or a two-way active measurement protocol (TWAMP) control client and session sender, and wherein the test head module is running in a cloud environment and comprises a TWAMP server and session-reflector.
However, Thangavel teaches wherein the UE comprises one or more of a boost evaluation software development kit (SDK), a boost agent, or a two-way active measurement protocol (TWAMP) control client and session sender, and wherein the test head module is running in a cloud environment and comprises a TWAMP server and session-reflector (Col. 1, lines 18-25, states “A Two-Way Active Measurement Protocol (TWAMP) is based on OWAMP and adds the ability to measure two-way or round-trip metrics of network performance between the two network devices. For example, TWAMP may be used to measure both two-way and one-way network performance indicators, such as latency, delay (inter frame gap), jitter, packet loss, throughput, and the like (referred to as ‘service level agreement (SLA) metrics’)”, and Col. 1, lines 26-29, states “A TWAMP measurement architecture includes at least two network devices, also referred to as hosts or endpoints, that each support TWAMP and perform specific roles to start test sessions and exchange test packets over the test sessions.” This implies the UE comprises a TWAMP control client, where, in an example network architecture, the logical roles of the TWAMP control-client and the TWAMP session-sender may both be executed by a first endpoint (the UE). Fig. 1 illustrates that service provider network 2 comprises a software defined network (SDN) and network functions virtualization (NFV) architecture, where SDN controller 14 manages deployment of virtual machines (VMs) within the operating environment of data center 9, as stated in Col. 8, lines 60-63, and Col. 9, lines 1-3).
Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Jump to incorporate the teachings of Thangavel (in analogous art) by adding a TWAMP control client and session sender, wherein the test head module is running in a cloud environment and comprises a TWAMP server and session-reflector. In this way, the performance of the enhanced TWAMP is dynamically offloaded to the network device with the more robust system resources (Thangavel, Col. 14, lines 1-3).
Regarding claim 33 (Currently Amended), Jump teaches the method of claim 29.
Jump fails to teach wherein the node comprises a test head module running in a cloud environment and comprises a two-way active measurement (TWAMP) server and session-reflector, and wherein the test head module runs alongside one or more specific apps or servers accessed by the end-user device.
However, Thangavel teaches wherein the node comprises a test head module running in a cloud environment and comprises a two-way active measurement (TWAMP) server and session-reflector (Col. 1, lines 18-25, states “A Two-Way Active Measurement Protocol (TWAMP) is based on OWAMP and adds the ability to measure two-way or round-trip metrics of network performance between the two network devices. For example, TWAMP may be used to measure both two-way and one-way network performance indicators, such as latency, delay (inter frame gap), jitter, packet loss, throughput, and the like (referred to as ‘service level agreement (SLA) metrics’)”, and Col. 1, lines 26-29, states “A TWAMP measurement architecture includes at least two network devices, also referred to as hosts or endpoints, that each support TWAMP and perform specific roles to start test sessions and exchange test packets over the test sessions.” This implies the UE comprises a TWAMP control client, where, in an example network architecture, the logical roles of the TWAMP control-client and the TWAMP session-sender may both be executed by a first endpoint (the UE). Fig. 1 illustrates that service provider network 2 comprises a software defined network (SDN) and network functions virtualization (NFV) architecture, where SDN controller 14 manages deployment of virtual machines (VMs) within the operating environment of data center 9, as stated in Col. 8, lines 60-63, and Col. 9, lines 1-3), and wherein the test head module runs alongside one or more specific apps or servers accessed by the end-user device (Fig. 2, Col. 8, lines 16-24, states “As examples, service nodes 10 may apply stateful firewall (SFW) and security services, deep packet inspection (DPI), carrier grade network address translation (CGNAT), traffic destination function (TDF) services, media (voice/video) optimization, Internet Protocol security (IPSec)/virtual private network (VPN) services, hypertext transfer protocol (HTTP) filtering, counting, accounting, charging, and/or load balancing of packet flows, or other types of services applied to network traffic,” where each service node may run as a VM).
Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Jump to incorporate the teachings of Thangavel (in analogous art) by adding a TWAMP control client and session sender, wherein the test head module is running in a cloud environment and comprises a TWAMP server and session-reflector. In this way, the performance of the enhanced TWAMP is dynamically offloaded to the network device with the more robust system resources (Thangavel, Col. 14, lines 1-3).
Claims 18, 20, 22, 30 are rejected under 35 U.S.C. 103 as being unpatentable over Jump et al. (US-20210194782-A1), in view of Sziagyi et al. (US-20170373950-A1).
Regarding claim 18 (Currently Amended), Jump teaches the method of claim 1.
Jump fails to teach further comprising performing a QoS determination, QoS request, or QoS reporting operation based at least in part on the test results by performing a segmented congestion detection operation.
However, Sziagyi teaches further comprising performing a QoS determination, QoS request, or QoS reporting operation based at least in part on the test results by performing a segmented congestion detection operation (Figs. 10, 12 and [0057] state “congestion detection is performed based on the correlation of measured QoE degradation, and network state detection is performed based on advanced indicators such as loss pattern detection, delay profile analysis and correlated delay/loss/throughput profiling and classification.” Claim 51 describes the QoS measurements being carried out at multiple points. [0066] describes enabling congestion detection and the measurement of the resources available in the network. [0067] states “The aggregation is performed by generating the per network segment union or sum of the measurements of each flow that satisfies the aggregation criteria.” [0080] states “By profiling the delay on each network segment, the CE agent is able to put the measurements in context and detect, if the increased delay on a network segment is due to increased load and thus the network segment is congested.” FIG. 17 illustrates a delay profile on a network segment. [0110] describes an enhanced detector provided by correlating the insights obtained from the user level, application specific QoE and QoS measurements, congestion detection and localization, and bottleneck classification. All these parts indicate that this segmented approach enables accurate QoS reporting, decision making, and corrective/preventive actions, [0035], lines 17-23).
Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Jump to incorporate the teachings of Sziagyi (in analogous art) by adding a QoS reporting operation based at least in part on the test results by performing a segmented congestion detection operation, since managing the QoE requires correlated insight and measurements of the applications, the experienced user behavior, the network status, and the quality of service (Sziagyi, [0002], lines 7-10).
Regarding claim 20 (Currently Amended), Jump and Sziagyi teach the method of claim 18.
Jump fails to teach wherein the segmented congestion detection operation comprises comparing one or more of an uplink effect of a QoS boost or downlink effect of a QoS boost to a threshold.
However, Sziagyi teaches wherein the segmented congestion detection operation comprises comparing one or more of an uplink effect of a QoS boost or a downlink effect of a QoS boost to a threshold ([0065] describes how the QoS measurements can be categorized into two categories and how protocol header enrichment can be enabled to measure the separate uplink and downlink one-way delay between each CE agent, in addition to the per-network-segment RTTs, while [0046] states “The collected measurements data represent a superset of existing legacy QoS measurements, therefore they enable the existing QoS measurement-based use cases as well (e.g. delay/load threshold based detection, triggers or actions).” This confirms the use of a threshold for comparison).
Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Jump to incorporate the teachings of Sziagyi (in analogous art) by adding a QoS reporting operation based at least in part on the test results by performing a segmented congestion detection operation, since managing the QoE requires correlated insight and measurements of the applications, the experienced user behavior, the network status, and the quality of service (Sziagyi, [0002], lines 7-10).
Regarding claim 22 (Currently Amended), Jump and Sziagyi teach the method of claim 18.
Jump fails to teach, further comprising: requesting a boost for subsequent UE transmissions based at least in part on the result of the segmented congestion detection operation, wherein the requested boost is for one or more cellular uplink, cellular downlink, WAN uplink, or WAN downlink paths.
However, Sziagyi teaches further comprising: requesting a boost for subsequent UE transmissions based at least in part on the result of the segmented congestion detection operation ([0080], lines 3-7, describes how the CE agent performs congestion detection within specific network segments. [0087], lines 18-33, describes how the CE agent can trigger suitable/corrective actions, including boosting subsequent UE transmissions to improve the performance: “When congestion is detected, the CE agent also calculates the optimal state including the desired bandwidth of the applications and the corresponding radio and transport configuration that enables reaching or approaching the optimum (i.e. bearer QoS parameters, weight configuration, shaper rate, capacity allocation, etc.)”), wherein the requested boost is for one or more cellular uplink, cellular downlink, WAN uplink, or WAN downlink paths (Different scenarios are described for boosting the cellular uplink/downlink and the WAN uplink/downlink. [0071] and [0084] describe scenarios of boosting the cellular uplink and downlink, for example a video download using progressive HTTP download that keeps buffering due to insufficient bandwidth, leading to QoE degradation: “Every time the playout buffer depletes, the video playback at UE stalls, causing QoE degradation. The degradations visible to UE are referred to as QoE incidents.” [0084], lines 15-17. Another scenario relates to the WAN uplink, describing the backup of data to the server; where the process is slow, the CE agent can detect the high demand for bandwidth, as described in [0084], lines 18-32. Another scenario, for the WAN downlink, is described in [0087]).
Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Jump to incorporate the teachings of Sziagyi (in analogous art) by adding a QoS reporting operation based at least in part on the test results by performing a segmented congestion detection operation, since managing the QoE requires correlated insight and measurements of the applications, the experienced user behavior, the network status, and the quality of service (Sziagyi, [0002], lines 7-10).
Regarding claim 30 (Currently Amended), Jump teaches the method of claim 29.
Jump fails to teach further comprising: performing tag optimization on one or more of the first and second test packets.
However, Sziagyi teaches further comprising: performing tag optimization on one or more of the first and second test packets ([0060], lines 17-21, states “The measurements are executed by monitoring the header content of the data packets, recording the time when the packets were intercepted and by adding/removing additional header fields through a mechanism referred to as header enrichment,” where [0068], lines 10-12, describes that tagging the packets can enhance the accuracy).
Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Jump to incorporate the teachings of Sziagyi (in analogous art) by adding a QoS reporting operation based at least in part on the test results by performing a segmented congestion detection operation, since managing the QoE requires correlated insight and measurements of the applications, the experienced user behavior, the network status, and the quality of service (Sziagyi, [0002], lines 7-10).
Relevant Prior Art
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Morper et al. (US-20120269082-A1), Cavaliere et al. (US-20160105341-A1), and Robitaille et al. (US-20140211636-A1) teach methods involving identifying network performance, including QoS and QoE, in network systems.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SANAA AL SAMAHI whose telephone number is (571)272-4171. The examiner can normally be reached M-F 8-5 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Asad Nawaz can be reached at (571) 272-3988. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SANAA AL SAMAHI/Examiner, Art Unit 2463
/ASAD M NAWAZ/Supervisory Patent Examiner, Art Unit 2463