Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on November 25, 2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
========== ========== ==========
Claim Objections
Claims 2 – 20 are objected to because of the following informalities: the preambles reciting “A method” should be amended to -- The method -- because the claims depend from claim 1. Appropriate correction is required.
========== ========== ==========
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1, 11 and 20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Regarding claim 1, it is unclear what the “key performance indicator dimension” refers to; the term should be defined in the claim language. Regarding claims 11 and 20, the limitations appear to be run-on sentences and should be rephrased properly.
========== ========== ==========
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1 – 5, 7, 10, 14, 18 and 19 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Alaaeldin et al. (US 2022/0383226 A1).
Claim 1. Alaaeldin shows a method for generating key performance indicator data for network monitoring in a mobile telecommunication network (abstract), wherein the method comprises: providing, relative to an event in the mobile telecommunication network ([0046]: service rankings may be assigned per user/cluster based on at least any of (a) a service traffic volume, (b) a throttling status, (c) an RAT access such as one of 2G/3G/4G/5G, and (d) a BSS profile, such as a subscription profile), one or both of a first priority value of a key performance indicator ([0046]: if a first user had Data Service “within one Time Stamp” and experience 2 handovers and 8 session setup then session setup KPI will be W1) and a second priority value of a key performance indicator dimension ([0046]: handover KPI will be W2 where W1>W2 and all weights value will be based on the available KPIs within this time Stamp, T1 401); and determining, based on one or both of the first priority value and the second priority value, whether to generate the key performance indicator data for the network monitoring in the mobile telecommunication network ([0069]: there are technical solutions to technical problems in network monitoring such that… all priorities, weights, number of CEIs, number of KPIs, and number of CEI and KPI scenarios are freely editable at any time, all null values may be removed and not used in the equations, processing may only requires one KPI in a service to be able to provide a CEI for that service and an overall CEI, weights may be set once and can be applied to all KPIs rather than individual setting of every weight for every KPI and every CEI and will save a significant amount of time in initial setting and future adjustment).
Claim 2. Alaaeldin shows the method as claimed in claim 1, further comprising: calculating or obtaining a combined priority value for a combination of the key performance indicator and the key performance indicator dimension ([0068] with equations: the processor may combine such normalized values by Service-Priority, and use Scenarios in config files to find Service-level CEI(s)), wherein the combined priority value is based on the first priority value of the key performance indicator and the second priority value of the key performance indicator dimension ([0068]: “Overall CEI” equation shows priority values of KPIs being combined), and wherein said determining whether to generate the key performance indicator data for the network monitoring in the mobile telecommunication network is based on the combined priority value ([0069]: there are technical solutions to technical problems in network monitoring… such that all priorities, weights, number of CEIs, number of KPIs, and number of CEI and KPI scenarios are freely editable at any time).
Claim 3. Alaaeldin shows the method as claimed in claim 1, wherein the key performance indicator dimension relates to a potential source of a network performance degradation measurable via the key performance indicator ([0041]: the quality 301 of the service to the IMSI is lower than the quality 302 across the network, and as such, if a user corresponding to that IMSI were to complain about a problem with a service at t10, then it may be confirmed whether that user is experiencing some service quality relative to that of the network; [0043]: a quality of a CEI – CEI is derived from KPIs).
Claim 4. Alaaeldin shows the method as claimed in claim 1, wherein the key performance indicator dimension comprises or relates to one or more network entities, in particular one or more of: one or more nodes in the network (n/a), one or more terminals in the network (n/a), one or more services provided in the network ([0067]: iterating by priority, and then by service, a calculating of the weight each service had on the overall CEI data such as output), and one or more subscribers in the network ([0040]: such information may be transmitted to a machine learning (ML) network for modeling various collection effectiveness indices (CEIs), outputs of such network may be provided, as data, including any of the CEIs, predictions, and international mobile subscriber identity (IMSI) information, to another ML model network for an overall CEI calculation which may be output as an overall CEI to the ML network thereby continuously improving the accuracy of various network metrics and more accurately reflecting customer experiences).
Claim 5. Alaaeldin shows the method as claimed in claim 1, wherein the first priority value defines a first prioritization of a first said key performance indicator relative to a second said key performance indicator ([0005]: in response to obtaining the KPIs, one or more dynamic KPI weights based on classifications of the KPIs as indicated by pre-stored information based on at least a first cluster of first customers in which the customer is preassigned, normalizing code configured to cause the at least one processor to normalize values indicated by the KPIs and separating the normalized values into at least a first group and a second group based on priority information for each of the KPIs as indicated by the pre-stored information, training code configured to cause the at least one processor to obtain a plurality of customer experience indicators (CEIs) by averaging the normalized values of the KPIs per group and scaling the averaged values of each group by respective ones of the dynamic KPI weights indicated by the pre-stored information, determining code configured to cause the at least one processor to determine whether the CEIs indicate that at least one of the services affects an overall CEI more than another one of the services), and wherein the second priority value defines a second prioritization of a first said key performance indicator dimension relative to a second said key performance indicator dimension (see above).
Claim 7. Alaaeldin shows the method as claimed in claim 2, wherein the combined priority value is unique for each pair of key performance indicator and key performance indicator dimension ([0067]: the processor may further implement iterating by a priority-service pair and then by KPI in that pair so as to then calculate a weight and impact similar to previous impacts as well as to also multiply by a weight of a Service).
Claim 10. Alaaeldin shows the method as claimed in claim 1, wherein the generated key performance indicator data is aggregated into a database portion of a database if the key performance indicator satisfies an accuracy condition ([0040]: to another ML model network for an overall CEI calculation which may be output as an overall CEI to the ML network thereby continuously improving the accuracy of various network metrics and more accurately reflecting customer experiences).
Claim 14. Alaaeldin shows the method as claimed in claim 10, wherein the aggregation into the database portion for a plurality of combinations of key performance indicators and key performance indicator dimensions is performed in an order of the combined priority values of the respective combinations, starting with the highest combined priority value amongst the combined priority values of the combinations (pg. 6: tables 2, 3 and 4 rankings).
Claim 18. Alaaeldin shows the method as claimed in claim 1, wherein an accuracy target for the key performance indicator is common for different key performance indicators ([0068]: the processor may read data and implement normalization, and dependent on the configuration sheets, may also either use target values or a legend to rate the values).
Claim 19. Alaaeldin shows the method as claimed in claim 2, wherein the combined priority value is a product of the first priority value and the second priority value ([0005]: in response to obtaining the KPIs, one or more dynamic KPI weights based on classifications of the KPIs as indicated by pre-stored information based on at least a first cluster of first customers in which the customer is preassigned, normalizing code configured to cause the at least one processor to normalize values indicated by the KPIs and separating the normalized values into at least a first group and a second group based on priority information for each of the KPIs as indicated by the pre-stored information, training code configured to cause the at least one processor to obtain a plurality of customer experience indicators (CEIs) by averaging the normalized values of the KPIs per group and scaling the averaged values of each group by respective ones of the dynamic KPI weights indicated by the pre-stored information, determining code configured to cause the at least one processor to determine whether the CEIs indicate that at least one of the services affects an overall CEI more than another one of the services).
---------- ---------- ----------
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Alaaeldin et al. in view of Rajendran et al. (US 2020/0366575 A1).
Claim 6. Alaaeldin shows the method as claimed in claim 1; Alaaeldin does not expressly describe wherein the determination whether to generate the key performance indicator data is dependent on one or both of a processing capacity and a storage capacity in the mobile telecommunication network. Rajendran teaches the feature of generating key performance indicator data being dependent on one or both of a processing capacity and a storage capacity in the mobile telecommunication network ([0015]: telemetry data may indicate a signal strength of a wireless connection of an antenna associated with a network device, memory capacity, central processing unit (CPU) utilization, power consumption, etc. wherein the telemetry data may be used to generate key performance indicators (KPIs) for a network device, which can indicate a device’s health). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the capacity-dependent data generation feature as taught by Rajendran in the method of Alaaeldin to indicate a device’s health (Rajendran, [0015]).
---------- ---------- ----------
Claims 8 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Alaaeldin et al. in view of Myron et al. (US 2022/0256474 A1).
Claim 8. Alaaeldin shows the method as claimed in claim 2; Alaaeldin does not expressly describe wherein the key performance indicator data is generated when a time interval during which aggregated key performance indicator data has reached a predefined accuracy threshold has elapsed. Myron teaches key performance indicator data being generated when a time interval during which aggregated key performance indicator data has reached a predefined accuracy threshold has elapsed ([0043]-[0044]: the data acquisition module of the power usage optimization server may determine a baseline reference level for a network demand at a base station in a network during a first time interval… the demand forecasting module may forecast the network demand at the base station in the network during a second time interval… based at least on the prediction model, the demand forecasting module may determine whether the network demand is high or low during a time interval). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the key performance indicator data generation feature as taught by Myron in the method of Alaaeldin to help reduce power usage (Myron, [0011]).
Claim 9. Alaaeldin, modified by Myron, shows the method as claimed in claim 8, wherein the time interval is defined to be between a first boundary time interval and a second boundary time interval (Myron, claim 1: determining a baseline reference level for a network demand at a base station in a network during a first time interval, determining a transmission power of the base station corresponding to the baseline reference level, forecasting the network demand at the base station in the network during a second time interval).
---------- ---------- ----------
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Alaaeldin et al. in view of Wilkinson (US 8,964,582 B2).
Claim 11. Alaaeldin shows the method as claimed in claim 10; Alaaeldin does not expressly describe wherein, if the key performance indicator does not satisfy the accuracy condition, data relating to the combination is stored in the database outside the database portion and in a category common for combinations of different types of key performance indicators and key performance indicator dimensions for which the key performance indicator does not satisfy the accuracy condition. Wilkinson teaches the process of: if a key performance indicator does not satisfy an accuracy condition, data relating to the combination being stored in the database outside the database portion and in a category common for combinations of different types of key performance indicators and key performance indicator dimensions for which the key performance indicator does not satisfy the accuracy condition (col. 15 line 49 – col. 16 line 7: each vector in the first set of vectors including a plurality of dimensions and a first plurality of values, each of the first plurality of values associated with a corresponding one of the plurality of dimensions; identify a second set of vectors representing at least a portion of the network events as observed by a telecommunication network monitoring system distinct from the telecommunication network testing system, each vector in the second set of vectors including the plurality of dimensions and a second plurality of values, each of the second plurality of values associated with a corresponding one of the plurality of dimensions, and each vector in the second set of vectors correlated to a vector in the first set of vectors; calculate a Key Performance Indicator (KPI) indicative of user's experience quality for a selected one of the plurality of dimensions based, at least in part, upon values corresponding to the selected dimension in the second set of vectors; calculate a data integrity confidence value indicative of accuracy of the calculated KPI; and adjust network services available to the user based on the calculated KPI and the calculated data integrity confidence value). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the process as taught by Wilkinson in the accuracy condition determination method of Alaaeldin to improve data integrity scoring and visualization for network and customer experience monitoring.
---------- ---------- ----------
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Alaaeldin et al. in view of Szilagyi et al. (US 2016/0065419 A1).
Claim 12. Alaaeldin shows the method as claimed in claim 10; Alaaeldin does not expressly describe wherein a storage time for storing the aggregated key performance indicator data in the database portion is dependent on a time resolution for aggregating the generated key performance indicator data into the database portion. Szilagyi teaches the feature of a storage time for storing aggregated key performance indicator data in a database portion being dependent on a time resolution for aggregating the generated key performance indicator data into the database portion ([0050]: obtaining the service availability and network side KPIs is possible from the network management system (NMS)… the task of the traffic analysis tool (e.g. Traffica) is to collect, store and serve (to various network analytics and reporting tools) information on traffic volume and application usage distribution corresponding to different aggregation levels (from an individual user up to aggregated cell/eNB/RNC/etc. throughput) and different time granularity (e.g. aggregating measurements and presenting statistics in an hourly resolution) wherein some network side QoS and performance KPIs are also directly measured and stored by the traffic analysis tool, such as cell radio load, transport load, bearer establishment success ratio, handover statistics, etc.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the storage features as taught by Szilagyi in the method of Alaaeldin to facilitate providing real-time reporting of various events, such as data bearer establishment, modification or deactivation.
---------- ---------- ----------
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Alaaeldin et al. in view of McCarthy et al. (US 11,294,584 B1).
Claim 13. Alaaeldin shows the method as claimed in claim 10; Alaaeldin does not expressly describe wherein, if a storage limit in the mobile telecommunication network has been reached, the key performance indicator data aggregated into the database portion is deleted if the key performance indicator data was generated before a predefined point in time. McCarthy teaches the feature of: if a storage limit has been reached, key performance indicator data aggregated into a database portion being deleted if the key performance indicator data was generated before a predefined point in time (col. 10 line 60 – col. 11 line 10: if the analysis engine determines that the storage system is out of compliance with the storage group SLE over the preceding two weeks, one of the rules from the rules engine is that the analysis engine will query the aggregate KPI values data structure for buckets where the respective key performance indicator exceeded the storage group SLE response time threshold… the analysis engine then re-calculates the storage group SLE compliance based on the redacted time series, i.e. with those buckets removed and if the storage group SLE is compliant with the buckets removed, the storage system is determined to be in compliance with the SLE requirements for the storage group). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the storage limit deletion feature as taught by McCarthy in the method of Alaaeldin to facilitate automatically resolving headroom and service level compliance discrepancies.
---------- ---------- ----------
Claims 15, 16 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Alaaeldin et al. in view of Singh et al. (US 2024/0152820 A1).
Claim 15. Alaaeldin shows the method as claimed in claim 10; Alaaeldin does not expressly describe wherein the aggregation is limited to a said combination for which a frequency of the key performance indicator dimension being read is above a frequency threshold. Singh teaches an aggregation being limited to a combination for which a frequency of the key performance indicator dimension being read is above a frequency threshold ([0076]: the RRM/RAN optimization/algorithm service may be used to optimize other aspects of the RAN as well, including at least i) intra/inter-frequency load-balancing (handing off users, or modifying one or more measurement offsets that change the signal levels at which handovers are to be triggered), ii) admission control (changing a threshold or applying an offset to a threshold based on a function of signal or load/traffic level for admitting users to a network), and iii) CA Scell selection (changing a threshold or applying an offset to a threshold based on a function of signal quality or load/traffic level for selecting an Scell)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the frequency threshold determination feature as taught by Singh in the aggregation method of Alaaeldin to facilitate load balancing.
Claim 16. Alaaeldin shows the method as claimed in claim 10; Alaaeldin does not expressly describe wherein an accuracy of the key performance indicator is expressed as a criterion for a confidence interval for the key performance indicator data. Singh teaches an accuracy of a key performance indicator being expressed as a criterion for a confidence interval for the key performance indicator data ([0087]: a formula/function (amongst predefined choices) of one or more performance metric KPIs from a RAN/RRM optimization/algorithm service… (i) a threshold of the evaluated formula/function performance metric over which shift is detected and the model updated (ii) a type of characteristic based on which distribution shift is detected, including at least one of average, maximum, a given percentile for confidence interval, trends, peak to average ratio, standard deviation and higher moments). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the confidence interval feature as taught by Singh in the key performance indicator accuracy method of Alaaeldin to facilitate load balancing.
Claim 17. Alaaeldin, modified by Singh, shows the method as claimed in claim 16, wherein the confidence interval is defined based on one or both of a z-distribution (n/a) and a standard deviation for data collected for key performance indicators (Singh, [0087]: (i) a threshold of the evaluated formula/function performance metric over which shift is detected and the model updated (ii) a type of characteristic based on which distribution shift is detected, including at least one of average, maximum, a given percentile for confidence interval, trends, peak to average ratio, standard deviation and higher moments).
---------- ---------- ----------
Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Alaaeldin et al. in view of Dong et al. (US 2013/0272144 A1).
Claim 20. Alaaeldin shows the method as claimed in claim 2; Alaaeldin does not expressly describe wherein, if a difference between the second priority value of the key performance indicator dimension relative to a first said key performance indicator and the second priority value of the key performance indicator dimension relative to a second said key performance indicator is above a difference threshold, the combined priority value for the combination of the key performance indicator dimension and the first key performance indicator and for the combination of the key performance indicator dimension and the second key performance indicator, respectively, is a corresponding, respective predefined, fixed combined priority value for each combination. Dong teaches the features of: if a difference between a second priority value of a key performance indicator dimension relative to a first said key performance indicator and the second priority value of the key performance indicator dimension relative to a second said key performance indicator is above a difference threshold ([0057]: method may compare the first and second priority levels. In some cases, if the first priority is greater than the second priority (e.g. first priority is “0” and second priority is “1”), the method may reduce the second sampling ratio… if the second priority is greater than the first priority, then method may reduce the first sampling ratio… both the first and second sampling ratios may be reduced, each in proportion to its respective priority ratio), the combined priority value for the combination of the key performance indicator dimension and the first key performance indicator and for the combination of the key performance indicator dimension and the second key performance indicator, respectively, being a corresponding, respective predefined, fixed combined priority value for each combination ([0059]: method may increase the sampling ratio associated with the second probe… the increase in the second probe may be such as to maintain a statistical confidence level associated with a performance indicator (e.g. an aggregated indicator that combines KPIs resulting from the first and second probe) calculated for traffic that is selected based on the rule). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the priority value comparison features as taught by Dong in the method of Alaaeldin to facilitate offsetting a reduction in the monitoring sampling ratio and maintaining a statistical confidence level associated with a performance indicator.
========== ========== ==========
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure.
1. Jayakumar et al, US 2020/0313985 A1: a system for effective data collection, aggregation, and analysis in a distributed heterogeneous communication network for effective fault detection or performance anomaly detection using, for example, KPIs.
2. Tiwari et al, US 2019/0155712 A1: a method to manage economics and operational dynamics of various information technology (IT) systems.
3. Starr, US 2014/0336984 A1: a method for conditionally monitoring performance of components of an industrial system as a function of key performance indicator data.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Xavier Szewai Wong, whose telephone number is 571.270.1780. The examiner can normally be reached Mon to Fri, 11:30 am - 8:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jeffrey Rutkowski, can be reached on 571.270.1215. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/XAVIER S WONG/
Primary Examiner, Art Unit 2415
February 3, 2026