DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Double Patenting
2. The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
3. Claims 1, 8, and 15 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 8, and 15, respectively, of U.S. Patent No. 12,273,744. Although the claims at issue are not identical, they are not patentably distinct from each other because at least one examined application claim is either anticipated by, or would have been obvious over, the reference claim(s), as shown in the following comparison.
Current Examined Application:

1. A method comprising:
partitioning a set of configuration management (CM) data for one or more cellular network devices into multiple distinct time intervals;
determining one or more temporal points of interest in each time interval based on whether CM changes exist during that time interval;
for each temporal point of interest in each time interval:
identifying a first set of data samples before that temporal point of interest and a second set of data samples after that temporal point of interest; and
averaging features and a target key performance indicator (KPI) in the first set of data samples and in the second set of data samples; and
performing regression analysis to determine an impact of the features on the target KPI.

Reference Patent No. 12,273,744:

1. A method comprising:
partitioning a set of configuration management (CM) data for one or more cellular network devices into multiple distinct time intervals, each time interval associated with a distinct set of CM settings at the one or more cellular network devices, the CM data comprising multiple CM parameters;
determining a regression model based on the set of CM data; and
applying the regression model to compute a distinct set of scores and compare the set of scores to estimate whether a performance of the one or more cellular network devices has changed during a second time interval relative to a first time interval.

Current Examined Application:

8. A device comprising:
a transceiver; and
a processor operably connected to the transceiver, the processor configured to:
partition a set of configuration management (CM) data for one or more cellular network devices into multiple distinct time intervals;
determine one or more temporal points of interest in each time interval based on whether CM changes exist during that time interval;
for each temporal point of interest in each time interval:
identify a first set of data samples before that temporal point of interest and a second set of data samples after that temporal point of interest; and
average features and a target key performance indicator (KPI) in the first set of data samples and in the second set of data samples; and
perform regression analysis to determine an impact of the features on the target KPI.

Reference Patent No. 12,273,744:

8. A device comprising:
a transceiver configured to transmit and receive information; and
a processor operably connected to the transceiver, the processor configured to:
partition a set of configuration management (CM) data for one or more cellular network devices into multiple distinct time intervals, each time interval associated with a distinct set of CM settings at the one or more cellular network devices, the CM data comprising multiple CM parameters;
determine a regression model based on the set of CM data; and
apply the regression model to compute a distinct set of scores and compare the set of scores to estimate whether a performance of the one or more cellular network devices has changed during a second time interval relative to a first time interval.

Current Examined Application:

15. A non-transitory computer readable medium comprising program code that, when executed by a processor of a device, causes the device to:
partition a set of configuration management (CM) data for one or more cellular network devices into multiple distinct time intervals;
determine one or more temporal points of interest in each time interval based on whether CM changes exist during that time interval;
for each temporal point of interest in each time interval:
identify a first set of data samples before that temporal point of interest and a second set of data samples after that temporal point of interest; and
average features and a target key performance indicator (KPI) in the first set of data samples and in the second set of data samples; and
perform regression analysis to determine an impact of the features on the target KPI.

Reference Patent No. 12,273,744:

15. A non-transitory computer readable medium comprising program code that, when executed by a processor of a device, causes the device to:
partition a set of configuration management (CM) data for one or more cellular network devices into multiple distinct time intervals, each time interval associated with a distinct set of CM settings at the one or more cellular network devices, the CM data comprising multiple CM parameters;
determine a regression model based on the set of CM data; and
apply the regression model to compute a distinct set of scores and compare the set of scores to estimate whether a performance of the one or more cellular network devices has changed during a second time interval relative to a first time interval.
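For explanatory purposes only (this sketch is not part of the claim comparison and forms no part of the rejection), the reference claims' score-comparison step can be illustrated as follows; the function names and the model are hypothetical.

```python
from statistics import mean

def score_change(model, first_interval, second_interval):
    """Score each interval's samples with a fitted model and compare the
    mean scores, as a rough analog of estimating whether performance
    changed during a second time interval relative to a first.
    Illustrative only; `model` is any callable sample -> score."""
    first_score = mean(model(x) for x in first_interval)
    second_score = mean(model(x) for x in second_interval)
    # A nonzero difference suggests the scored performance changed
    # in the second interval relative to the first.
    return second_score - first_score
```

Under this hypothetical scoring, a positive difference would indicate the scored quantity rose between the two intervals.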
Corrections are required.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Kwan (Pub. No. US 2016/0381580).
Regarding claim 1, Kwan teaches a method (Kwan, the Abstract), comprising:
partitioning a set of configuration management (CM) data for one or more cellular network devices into multiple distinct time intervals (Kwan, Fig. 17, pp [134]: a set of clusters of data collected over time intervals);
determining one or more temporal points of interest in each time interval based on whether CM changes exist during that time interval (Kwan, Fig. 17, pp [134]: time points indicated in different time intervals in a set of managed clusters);
for each temporal point of interest in each time interval:
identifying a first set of data samples before that temporal point of interest and a second set of data samples after that temporal point of interest (Kwan, Fig. 9A, pp [79]-[82]: subset data in groups 1 and 2 of KPIs are selected); and
averaging features and a target key performance indicator (KPI) in the first set of data samples and in the second set of data samples (Kwan, pp [89]: KPIs are averaged over periods of time); and
performing regression analysis to determine an impact of the features on the target KPI (Kwan, Fig. 8A, pp [75]-[76]: groups of KPIs are collected and analyzed).
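For clarity regarding the mapped limitations, the claimed flow (interval partitioning, temporal points of interest, before/after averaging, and regression) can be sketched in code. The sketch is illustrative only, is not part of the rejection, and all data structures, field names, and function names are hypothetical.

```python
from statistics import mean

def temporal_points(interval, cm_change_times):
    """Select temporal points of interest for one time interval:
    the CM-change times that fall within it, else its midpoint."""
    start, end = interval
    changes = [t for t in cm_change_times if start <= t <= end]
    return changes if changes else [(start + end) / 2]

def before_after_means(samples, point):
    """Average a feature and the target KPI over the samples before
    and after a temporal point of interest (hypothetical fields)."""
    before = [s for s in samples if s["t"] < point]
    after = [s for s in samples if s["t"] >= point]
    def avg(group):
        return (mean(x["feature"] for x in group),
                mean(x["kpi"] for x in group))
    return avg(before), avg(after)

def regression_slope(points):
    """Ordinary least-squares slope of the KPI on the feature, as a
    minimal stand-in for the claimed regression analysis."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    mx, my = mean(xs), mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den
```

A per-interval analysis would call `temporal_points`, average samples around each returned point with `before_after_means`, and feed the averaged (feature, KPI) pairs to `regression_slope` to estimate the feature's impact on the target KPI.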
Regarding claim 8, Kwan teaches a device (Kwan, the Abstract, Fig. 1), comprising:
a transceiver (Kwan, Fig. 1, pp [35]); and
a processor operably connected to the transceiver (Kwan, Fig. 1, pp [35]), the processor configured to:
partition a set of configuration management (CM) data for one or more cellular network devices into multiple distinct time intervals (Kwan, Fig. 17, pp [134]: a set of clusters of data collected over time intervals);
determine one or more temporal points of interest in each time interval based on whether CM changes exist during that time interval (Kwan, Fig. 17, pp [134]: time points indicated in different time intervals in a set of managed clusters);
for each temporal point of interest in each time interval:
identify a first set of data samples before that temporal point of interest and a second set of data samples after that temporal point of interest (Kwan, Fig. 9A, pp [79]-[82]: subset data in groups 1 and 2 of KPIs are selected); and
average features and a target key performance indicator (KPI) in the first set of data samples and in the second set of data samples (Kwan, pp [89]: KPIs are averaged over periods of time); and
perform regression analysis to determine an impact of the features on the target KPI (Kwan, Fig. 8A, pp [75]-[76]: groups of KPIs are collected and analyzed).
Regarding claim 15, Kwan teaches a non-transitory computer readable medium comprising program code that, when executed by a processor of a device (Kwan, the Abstract, pp [38]), causes the device to:
partition a set of configuration management (CM) data for one or more cellular network devices into multiple distinct time intervals (Kwan, Fig. 17, pp [134]: a set of clusters of data collected over time intervals);
determine one or more temporal points of interest in each time interval based on whether CM changes exist during that time interval (Kwan, Fig. 17, pp [134]: time points indicated in different time intervals in a set of managed clusters);
for each temporal point of interest in each time interval:
identify a first set of data samples before that temporal point of interest and a second set of data samples after that temporal point of interest (Kwan, Fig. 9A, pp [79]-[82]: subset data in groups 1 and 2 of KPIs are selected); and
average features and a target key performance indicator (KPI) in the first set of data samples and in the second set of data samples (Kwan, pp [89]: KPIs are averaged over periods of time); and
perform regression analysis to determine an impact of the features on the target KPI (Kwan, Fig. 8A, pp [75]-[76]: groups of KPIs are collected and analyzed).
Regarding claim 2, Kwan teaches the method of Claim 1, further comprising:
calculating a performance score indicating an improvement or a degradation in the target KPI (Kwan, pp [26]-[27], [41]-[42]: calculating and analyzing performance data of KPIs); and
outputting the performance score (Kwan, pp [26]-[27], [41]-[42]: calculating and analyzing performance data of KPIs).
Regarding claim 3, Kwan teaches the method of Claim 1, wherein the set of CM data is partitioned into the multiple distinct time intervals such that each time interval does not include any time gaps of no data that are longer than a predetermined threshold period (Kwan, Fig. 2, pp [54]; Fig. 4, pp [64]-[65]: KPI data reported over intervals without interruption gaps).
Regarding claim 4, Kwan teaches the method of Claim 1, wherein determining the one or more temporal points of interest in each time interval based on whether or not any CM changes exist during that time interval comprises:
for a time interval that has no CM change, selecting a midpoint of that time interval as a temporal point of interest (Kwan, Fig. 7C, pp [73]-[74]; Fig. 17, pp [134]); and
for a time interval that has at least one CM change, selecting a time coinciding with each of the at least one CM change as a temporal point of interest (Kwan, Fig. 7C, pp [73]-[74]; Fig. 17, pp [134]).
Regarding claim 5, Kwan teaches the method of Claim 1, further comprising:
performing at least one of multiple data preprocessing operations on the set of CM data, the multiple data preprocessing operations comprising (i) removing invalid data samples (Kwan, pp [97], [105]), (ii) normalizing or scaling the CM data (Kwan, pp [115], [121]-[122], [127]), (iii) removing trends or seasonality in the CM data (Kwan, pp [97], [105]), (iv) generating additional synthetic features from existing KPIs in the CM data (Kwan, pp [80]-[82], [104]-[106]), and (v) selecting a subset of the CM data associated with a specific timeframe or a specific group of the cellular network devices (Kwan, pp [60], [65], [79]-[82]).
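Three of the five preprocessing operations recited in claim 5 (dropping invalid samples, normalizing/scaling, and generating synthetic features) can be illustrated with a short sketch. It is explanatory only, forms no part of the rejection, and the field names (`kpi`, `load`) are hypothetical.

```python
def preprocess(rows):
    """Illustrative CM-data preprocessing over a list of sample dicts."""
    # (i) Remove invalid data samples (here: missing KPI values).
    rows = [r for r in rows if r["kpi"] is not None]
    # (ii) Min-max scale the KPI into [0, 1].
    lo = min(r["kpi"] for r in rows)
    hi = max(r["kpi"] for r in rows)
    span = (hi - lo) or 1  # guard against a constant KPI
    for r in rows:
        r["kpi_scaled"] = (r["kpi"] - lo) / span
    # (iv) Generate a synthetic feature from existing fields,
    #      e.g. KPI per unit of load.
    for r in rows:
        r["loss_per_load"] = r["kpi"] / max(r["load"], 1)
    return rows
```

Detrending/deseasonalizing and timeframe/device-group subsetting (operations (iii) and (v)) would follow the same pattern of per-row transforms and filters.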
Regarding claim 6, Kwan teaches the method of Claim 1, wherein the target KPI comprises a packet loss rate (Kwan, pp [75]-[77]).
Regarding claim 7, Kwan teaches the method of Claim 1, wherein the features comprise at least one of:
one or more KPIs other than the target KPI (Kwan, pp [51]-[52], [59]-[60]); and
one or more synthetic features generated from existing KPIs (Kwan, pp [80]-[82], [104]-[106]).
Regarding claim 9, Kwan teaches the device of Claim 8, wherein the processor is further configured to:
calculate a performance score indicating an improvement or a degradation in the target KPI (Kwan, pp [26]-[27], [41]-[42]: calculating and analyzing performance data of KPIs); and
output the performance score (Kwan, pp [26]-[27], [41]-[42]: calculating and analyzing performance data of KPIs).
Regarding claim 10, Kwan teaches the device of Claim 8, wherein the set of CM data is partitioned into the multiple distinct time intervals such that each time interval does not include any time gaps of no data that are longer than a predetermined threshold period (Kwan, Fig. 2, pp [54]; Fig. 4, pp [64]-[65]: KPI data reported over intervals without interruption gaps).
Regarding claim 11, Kwan teaches the device of Claim 8, wherein to determine the one or more temporal points of interest in each time interval based on whether or not any CM changes exist during that time interval, the processor is configured to:
for a time interval that has no CM change, select a midpoint of that time interval as a temporal point of interest (Kwan, Fig. 7C, pp [73]-[74]; Fig. 17, pp [134]); and
for a time interval that has at least one CM change, select a time coinciding with each of the at least one CM change as a temporal point of interest (Kwan, Fig. 7C, pp [73]-[74]; Fig. 17, pp [134]).
Regarding claim 12, Kwan teaches the device of Claim 8, wherein the processor is further configured to:
perform at least one of multiple data preprocessing operations on the set of CM data, the multiple data preprocessing operations comprising (i) removing invalid data samples (Kwan, pp [97], [105]), (ii) normalizing or scaling the CM data (Kwan, pp [115], [121]-[122], [127]), (iii) removing trends or seasonality in the CM data (Kwan, pp [97], [105]), (iv) generating additional synthetic features from existing KPIs in the CM data (Kwan, pp [80]-[82], [104]-[106]), and (v) selecting a subset of the CM data associated with a specific timeframe or a specific group of the cellular network devices (Kwan, pp [60], [65], [79]-[82]).
Regarding claim 13, Kwan teaches the device of Claim 8, wherein the target KPI comprises a packet loss rate (Kwan, pp [75]-[77]).
Regarding claim 14, Kwan teaches the device of Claim 8, wherein the features comprise at least one of:
one or more KPIs other than the target KPI (Kwan, pp [51]-[52], [59]-[60]); and
one or more synthetic features generated from existing KPIs (Kwan, pp [80]-[82], [104]-[106]).
Regarding claim 16, Kwan teaches the non-transitory computer readable medium of Claim 15, wherein the program code further causes the device to:
calculate a performance score indicating an improvement or a degradation in the target KPI (Kwan, pp [26]-[27], [41]-[42]: calculating and analyzing performance data of KPIs); and
output the performance score (Kwan, pp [26]-[27], [41]-[42]: calculating and analyzing performance data of KPIs).
Regarding claim 17, Kwan teaches the non-transitory computer readable medium of Claim 15, wherein the set of CM data is partitioned into the multiple distinct time intervals such that each time interval does not include any time gaps of no data that are longer than a predetermined threshold period (Kwan, Fig. 2, pp [54]; Fig. 4, pp [64]-[65]: KPI data reported over intervals without interruption gaps).
Regarding claim 18, Kwan teaches the non-transitory computer readable medium of Claim 15, wherein the program code to determine the one or more temporal points of interest in each time interval based on whether or not any CM changes exist during that time interval comprises program code to:
for a time interval that has no CM change, select a midpoint of that time interval as a temporal point of interest (Kwan, Fig. 7C, pp [73]-[74]; Fig. 17, pp [134]); and
for a time interval that has at least one CM change, select a time coinciding with each of the at least one CM change as a temporal point of interest (Kwan, Fig. 7C, pp [73]-[74]; Fig. 17, pp [134]).
Regarding claim 19, Kwan teaches the non-transitory computer readable medium of Claim 15, wherein the program code further causes the device to:
perform at least one of multiple data preprocessing operations on the set of CM data, the multiple data preprocessing operations comprising (i) removing invalid data samples (Kwan, pp [97], [105]), (ii) normalizing or scaling the CM data (Kwan, pp [115], [121]-[122], [127]), (iii) removing trends or seasonality in the CM data (Kwan, pp [97], [105]), (iv) generating additional synthetic features from existing KPIs in the CM data (Kwan, pp [80]-[82], [104]-[106]), and (v) selecting a subset of the CM data associated with a specific timeframe or a specific group of the cellular network devices (Kwan, pp [60], [65], [79]-[82]).
Regarding claim 20, Kwan teaches the non-transitory computer readable medium of Claim 15, wherein the target KPI comprises a packet loss rate (Kwan, pp [75]-[77]).
Related references not relied upon in the rejection
Borsos et al. (Pub. No. US 2023/0216737) teaches a method for acquiring network measurements indicative of any changes in a network following a change to a configuration of the network and data indicative of one or more factors capable of causing the changes in the network. The one or more factors are independent of the change to the configuration of the network. The method includes analyzing the acquired network measurements and data to identify a contribution of the one or more factors to a key performance indicator (KPI) and a contribution of the change to the configuration of the network to the KPI. The KPI is predicted by a machine learning model and is a measure of the network performance following the change to the configuration of the network.
Li (Pub. No. US 2019/0068443) teaches a method and system for configuring parameters in a wireless communications network. Parameter configurations resulting in a change to key quality indicator (KQI) and key performance indicator (KPI) measurements are determined based on collected data samples. The data samples are divided into subsets: a first subset containing the data samples associated with parameter configurations that fail to result in a change to the KQI and KPI measurements, and a second subset containing the data samples associated with parameter configurations that result in such a change, dependent upon satisfying conditions in the wireless communications network. The subsets of the data samples are then provided as an input to machine learning to optimize the parameter configurations and thereby the wireless communications network.
Sofuoglu (Pub. No. US 2016/0014617) teaches a system and method for dynamically improving or optimizing the performance and robustness of a wireless communication network such as a mobile communication system or cellular telephony network. A plurality of time and space dependent key performance indicators (KPI) are used as part of a statistical determination of a pattern and schedule for optimizing the design, configuration, and operation of the network. By dynamically applying a method of multiple KPI deviations, the system and method improves handover execution in cellular or similar systems, reduces radio link failures, and improves overall subscriber service quality.
Soundrarajan (Pub. No. US 2020/0236562) teaches a system for profiling one or more nodes based on a Key Performance Indicator (KPI) associated with a node. Initially, a receiving module receives a flag indicating an issue with the KPI of a node present in a network of nodes. An identification module identifies a set of Performance Management (PM) counters influencing the KPI using a machine-learning-based statistical method of correlation. A determination module determines a subset of PM counters, from the set of PM counters, by comparing each PM counter with a corresponding predefined threshold limit. A normalization module normalizes the subset of PM counters by computing a variance of the subset of PM counters. A profile module profiles one or more nodes, present in the network of nodes, by comparing the variance associated with the node with the variance corresponding to each of the one or more nodes.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HUY C HO whose telephone number is (571)270-1108. The examiner can normally be reached M-F 8AM-5PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, KATHY WANG-HURST can be reached at (571)270-5371. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HUY C HO/Primary Examiner, Art Unit 2644