DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claims 1, 8, and 15 are objected to because of the following informalities:
In Claims 1, 8, and 15, “key performance indictors” should be “key performance indicators”.
Appropriate correction is required.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-2, 8-9, 15-16 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by ICKIN et al. (US 20230370341, hereinafter, “ICKIN”).
Claim 1. ICKIN teaches: A system, comprising: - See Fig. 13
one or more hardware processors; - See Fig. 13, ¶ [0105], (processor) and
one or more non-transitory machine-readable storage media encoded with instructions that, when executed by the one or more hardware processors, cause the system to perform operations comprising: - See Fig. 13, ¶ [0105], (memory)
obtaining key performance indictors (KPIs) and current configuration information - See Fig. 17, ¶ [0047], (“Thus, after a training process, a decoder of the conditional generative model can take the demanded conditions, such as energy and predicted KPI together with a latent variable, and generate a corresponding recommendation bounded by the desired conditions...a trained conditional generative model may generate a full configuration file conditioned on the given criteria, e.g., a KPI level”, KPIs are obtained and used as input to generate configuration recommendations. Current configuration is used in generating these recommendations.) for a radio access network (RAN) from equipment management services (EMS) of the RAN, - in ¶ [0100], (“Network Operations Center (NOC)…NOC 1202 can handle one or more nodes 1204 ...1204n (e.g., network nodes) of the radio network…¶ [0101], (“NOC 1202 triggers…computing device 302 generates…configuration data…each node receives the corresponding CM [configuration management] data.”, Network Operations Center/nodes manage the network configuration (equivalent to EMS)) the current configuration information describing a configuration of the RAN; - See Fig. 12, ¶ [0100], (“Network Operations Center (NOC)…NOC 1202 can handle one or more nodes 1204 ...1204n (e.g., network nodes) of the radio network”…¶ [0101], (NOC 1202 triggers…computing device 302 generates…configuration data…”)
generating latent space information by expanding the KPIs; - See Fig. 5, ¶ [0078], (“latent space variables, z, are generated using encoder 302a of conditional generative model 302 such that each input combination value is represented by a latent variable in the latent space…the latent space variables that represent each predicted category of KPI and energy (discretized values) can be grouped and an array of latent space variables that represent each combination of category are obtained.”, This describes KPIs being converted into latent variables and grouped to form latent space representations.) and
generating a configuration update for the RAN - See Fig. 14, ¶ [0107], (“outputting (1402) from the conditional generative model a configuration data for a future time period for a network node of the radio network, wherein the configuration data is bounded by the predicted KPI constraint parameter, the target value for the optimization parameter, and the latent variable”) based on the current configuration information and the latent space information; - in ¶ [0107], (“The method includes receiving (1400) inputs to a conditional generative model, the inputs comprising a value for a predicted key performance indicator, KPI, constraint parameter for a time period, a target value for a optimization parameter, and a latent variable. The method further includes outputting (1402) from the conditional generative model a configuration data for a future time period for a network node of the radio network, wherein the configuration data is bounded by the predicted KPI constraint parameter, the target value for the optimization parameter, and the latent variable.”, The inputs to the conditional generative model correspond to the current configuration information and the latent variable input corresponds to the latent space information.) and
providing the configuration update to the EMS. - See Fig. 12, ¶ [0101], (“NOC 1202 triggers operations in operation 1206 by requesting a CM attribute recommendation for one or more nodes as indicated by input array [node1..nodeN]. Responsive to request 1206, in operation 1208, computing device 302 generates that information as a list of multiple configuration data (or scripts or files) for each node in the same order as indicated previously by the input array. In operation 1210, by way of mo_shell or other secure remote interfaces (e.g., secure shell (SSH)), each node receives the corresponding CM data…if the connection between NOC 1202 and a node is not established (operation 1228), this can be an indication that the node is misbehaving and, therefore, the new configuration setting may not be applicable…”)
Claim 2. ICKIN teaches The system of claim 1, - refer to the indicated claim for reference(s).
ICKIN teaches:
wherein generating latent space information comprises: - See Fig. 5, ¶ [0078], (“the latent space variables that represent each predicted category of KPI…Given a target KPI predicted in advance…the target energy consumption value and corresponding CM dataset as input to decoder 302b of conditional generative model 302”) applying the KPIs as input to a trained artificial intelligence (AI) model, - See Fig. 5, ¶ [0124], (“The method includes receiving (1400) inputs to a conditional generative model. The inputs include a value for a predicted key performance indicator, KPI, constraint parameter”) wherein responsive to the input, the AI model outputs the latent space information, - in ¶ [0078], (“latent space variables, z, are generated using encoder 302a of conditional generative model 302 such that each input combination value is represented by a latent variable in the latent space 506…the latent space variables that represent each predicted category of KPI and energy (discretized values) can be grouped…”); ¶ [0108], (“the latent variable comprises an encoded representation of a value corresponding to the optimization parameter and a value corresponding to the KPI constraint parameter.”)
wherein the AI model has been trained with a training data set, - See Fig. 5, ¶ [0114], (“training (1504) the conditional generative model with a configuration management dataset”) and wherein the training data set includes (i) historical KPIs and/or historical latent space information - See Fig. 5, ¶ [0114], (“Each of the plurality of configuration management attributes is associated with a corresponding conditional variable comprising…a corresponding quantized form of each of the KPI constraint parameter”); ¶ [0078], (“latent space variables, z, are generated using encoder 302a…¶ [0081], (“PM dataset 204, 206 comprises a timeseries dataset where the performance counters are recorded every hour…the first four (4) days of a week can be used as training data”) and (ii) corresponding historical RAN configurations. - in ¶ [0114], (“training (1504) the conditional generative model with a configuration management dataset including a plurality of configuration management attributes…”); ¶ [0040]
Claims 8, 15 are rejected under the same rationale as Claim 1 since they recite nearly identical limitations.
Claims 9, 16 are rejected under the same rationale as Claim 2 since they recite nearly identical limitations.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 3, 10 are rejected under 35 U.S.C. 103 as being unpatentable over ICKIN et al. (US 20230370341, hereinafter, “ICKIN”) in view of MENDO MATEO et al. (US 20250350972, hereinafter, “MENDO”).
Claim 3. ICKIN teaches The system of claim 2, - refer to the indicated claim for reference(s).
ICKIN does not explicitly teach:
the operations further comprising: validating the AI model by applying a testing data set as input to the AI model, wherein the testing data set is different from the training data set, and determining an error rate by comparing a resulting output of the AI model with known genie values.
However, MENDO teaches:
the operations further comprising: validating the AI model by applying a testing data set as input to the AI model, - in ¶ [0043], (“Once the entire model 22 is trained, at the time of exploitation, different values can be tested for the configuration parameter of interest, and the performance estimator model 22 provides different predictions for the values of the KPIs 34.”)
wherein the testing data set is different from the training data set, - in ¶ [0040], (“these form a training data set that are input cell features 28 that are input to the encoder stage 24”); ¶ [0043], (“Once the entire model 22 is trained, at the time of exploitation, different values can be tested for the configuration parameter of interest, and the performance estimator model 22 provides different predictions for the values of the KPIs”) and
determining an error rate by comparing a resulting output of the AI model with known genie values. - in ¶ [0041], (“The KPIs in the KPI set 34 output by the decoder stage 26 are intended to be as close as possible to those contained in the input features 28. The difference between the output 34 and the real KPIs constitutes the loss to be minimised in the training phase, using, for example, mean square error (MSE), or another form of loss metric. The loss metric is based on a difference between the input training data set and the output of the decoders.”, This teaches computing the error between predicted KPIs (model output) and actual KPIs (known values, equivalent to genie values).)
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified ICKIN with MENDO to include validating the AI model by applying a testing data set, which is different from the training data set, as input to the AI model, and determining an error rate by comparing a resulting output of the AI model with known genie values, as taught by MENDO. One of ordinary skill in the art would have been motivated to make this modification to improve network performance, as suggested by MENDO, which describes training and using a performance estimator model to estimate the performance of a cellular network when one or more configuration parameters for one or more cells are changed. - ¶ [0001]
Claim 10 is rejected under the same rationale as Claim 3 since it recites nearly identical limitations.
Claims 4, 11, 17 are rejected under 35 U.S.C. 103 as being unpatentable over ICKIN et al. (US 20230370341, hereinafter, “ICKIN”) in view of MENDO MATEO et al. (US 20250350972, hereinafter, “MENDO”), and further in view of Tapia et al. (US 20170019315, hereinafter, “Tapia”).
Claim 4. The combination of ICKIN and MENDO teaches The system of claim 3, - refer to the indicated claim for reference(s).
The combination of ICKIN and MENDO does not explicitly teach:
the operations further comprising: retraining the AI model using additional training data sets responsive to the error rate exceeding a threshold value.
However, Tapia teaches:
the operations further comprising: retraining the AI model using additional training data sets responsive to the error rate exceeding a threshold value. - See Fig. 8, ¶ [0051 – 0052], (“if the training error measurement exceeds a training error threshold, the model training module 220 may use a rules engine 224 to select an additional type of machine learning algorithm based on a magnitude of the training error measurement…the model training module 220 may also supplement the training corpus with additional training datasets prior to the additional execution”)
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified ICKIN and MENDO with Tapia to include retraining the AI model using additional training data sets responsive to the error rate exceeding a threshold value, as taught by Tapia. One of ordinary skill in the art would have been motivated to make this modification to improve quality of service, as suggested by Tapia, which describes analysis of user device performance data and network performance data of a wireless carrier network to resolve quality of service issues for subscribers of the network. - ¶ [0020]
Claim 17. ICKIN teaches The computer-implemented method of claim 16, - refer to the indicated claim for reference(s).
ICKIN does not explicitly teach:
further comprising: validating the AI model by applying a testing data set as input to the AI model, wherein the testing data set is different from the training data set, and determining an error rate by comparing a resulting output of the AI model with known genie values; and retraining the AI model using additional training data sets responsive to the error rate exceeding a threshold value.
However, MENDO teaches:
further comprising: validating the AI model by applying a testing data set as input to the AI model, - in ¶ [0043], (“Once the entire model 22 is trained, at the time of exploitation, different values can be tested for the configuration parameter of interest, and the performance estimator model 22 provides different predictions for the values of the KPIs 34.”) wherein the testing data set is different from the training data set, - in ¶ [0040], (“these form a training data set that are input cell features 28 that are input to the encoder stage 24”); ¶ [0043], (“Once the entire model 22 is trained, at the time of exploitation, different values can be tested for the configuration parameter of interest, and the performance estimator model 22 provides different predictions for the values of the KPIs”) and determining an error rate by comparing a resulting output of the AI model with known genie values; - in ¶ [0041], (“The KPIs in the KPI set 34 output by the decoder stage 26 are intended to be as close as possible to those contained in the input features 28. The difference between the output 34 and the real KPIs constitutes the loss to be minimised in the training phase, using, for example, mean square error (MSE), or another form of loss metric. The loss metric is based on a difference between the input training data set and the output of the decoders.”, This teaches computing the error between predicted KPIs (model output) and actual KPIs (known values, equivalent to genie values).)
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified ICKIN with MENDO to include validating the AI model by applying a testing data set, which is different from the training data set, as input to the AI model, and determining an error rate by comparing a resulting output of the AI model with known genie values, as taught by MENDO. One of ordinary skill in the art would have been motivated to make this modification to improve network performance, as suggested by MENDO, which describes training and using a performance estimator model to estimate the performance of a cellular network when one or more configuration parameters for one or more cells are changed. - ¶ [0001]
The combination of ICKIN and MENDO does not explicitly teach:
retraining the AI model using additional training data sets responsive to the error rate exceeding a threshold value.
However, Tapia teaches:
retraining the AI model using additional training data sets responsive to the error rate exceeding a threshold value. - See Fig. 8, ¶ [0051 – 0052], (“if the training error measurement exceeds a training error threshold, the model training module 220 may use a rules engine 224 to select an additional type of machine learning algorithm based on a magnitude of the training error measurement…the model training module 220 may also supplement the training corpus with additional training datasets prior to the additional execution”)
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified ICKIN and MENDO with Tapia to include retraining the AI model using additional training data sets responsive to the error rate exceeding a threshold value, as taught by Tapia. One of ordinary skill in the art would have been motivated to make this modification to improve quality of service, as suggested by Tapia, which describes analysis of user device performance data and network performance data of a wireless carrier network to resolve quality of service issues for subscribers of the network. - ¶ [0020]
Claim 11 is rejected under the same rationale as Claim 4 since it recites nearly identical limitations.
Claims 5-7, 12-14, 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over ICKIN et al. (US 20230370341, hereinafter, “ICKIN”) in view of CHANG et al. (US 20230209367, hereinafter, “CHANG”).
Claim 5. ICKIN teaches The system of claim 1, - refer to the indicated claim for reference(s).
ICKIN teaches:
wherein generating a configuration update comprises: providing a candidate RAN configuration; - in ¶ [0047], (“after a training process, a decoder of the conditional generative model can take the demanded conditions…and generate a corresponding recommendation bounded by the desired conditions…a trained conditional generative model may generate a full configuration file conditioned on the given criteria, e.g., a KPI level, low energy consumption, golden parameters”, recommendation (eq. candidate RAN configuration))
predicting energy that would be consumed by the RAN - in ¶ [0076], (“To quantify the energy saving and the potential effect on the KPI, inject the generated configuration data 312 to the pretrained CM model 406 in operations 1 and 2, and infer the (a) energy consumption, and (b) KPI value. The result of operation 11 can be expected (e.g., ideally) to estimate energy consumption”) using the candidate RAN configuration - in ¶ [0047], (“after a training process, a decoder of the conditional generative model can take the demanded conditions…and generate a corresponding recommendation bounded by the desired conditions…a trained conditional generative model may generate a full configuration file conditioned on the given criteria, e.g., a KPI level, low energy consumption”, recommendation (eq. candidate RAN configuration)) based on the current configuration information and the latent space information; - in ¶ [0070], (“(d) The latent variable with the corresponding category of selected KPIs and energy can then be used as input to the decoder of conditional generative model 302 to generate the configuration data 312.”); ¶ [0076], (“To quantify the energy saving and the potential effect on the KPI, inject the generated configuration data 312 to the pretrained CM model 406 in operations 1 and 2, and infer the (a) energy consumption, and (b) KPI value. The result of operation 11 can be expected (e.g., ideally) to estimate energy consumption”)
determining a supportable customer experience that would be provided by the RAN using the candidate RAN configuration - in ¶ [0047], (“after a training process, a decoder of the conditional generative model can take the demanded conditions…and generate a corresponding recommendation bounded by the desired conditions…generate a full configuration file conditioned on the given criteria, e.g., a KPI level, low energy consumption, golden parameters”, recommendation (eq. candidate RAN configuration)); ¶ [0045], (“Golden parameters include, without limitation, important parameters for a customer (e.g., a feature that is part of a Service Level Agreement (SLA)).”) based on the current configuration information and the latent space information; - in ¶ [0070], (“(d) The latent variable with the corresponding category of selected KPIs and energy can then be used as input to the decoder of conditional generative model 302 to generate the configuration data 312.”); ¶ [0076], (“To quantify the energy saving and the potential effect on the KPI, inject the generated configuration data 312 to the pretrained CM model 406 in operations 1 and 2, and infer the (a) energy consumption, and (b) KPI value. The result of operation 11 can be expected (e.g., ideally) to estimate energy consumption”)
generating a RAN configuration that minimizes the energy that would be consumed by the RAN - in ¶ [0047], (“after a training process, a decoder of the conditional generative model can take the demanded conditions…and generate a corresponding recommendation bounded by the desired conditions…generate a full configuration file conditioned on the given criteria, e.g., a KPI level, low energy consumption, golden parameters”, recommendation (eq. candidate RAN configuration)) while providing the supportable customer experience; - in ¶ [0045], (“Golden parameters include, without limitation, important parameters for a customer (e.g., a feature that is part of a Service Level Agreement (SLA)).”) and
generating the configuration update based on the generated RAN configuration. – in ¶ [0098], (“The generated configuration data 312 of various embodiments can be installed for base stations as is (e.g., as a full configuration file), as the generated configuration data 312 can sustain the interdependency in between the parameters.”); ¶ [0107], (“outputting (1402) from the conditional generative model a configuration data for a future time period for a network node of the radio network, wherein the configuration data is bounded by the predicted KPI constraint parameter, the target value for the optimization parameter, and the latent variable”)
ICKIN does not explicitly teach:
determining a supportable customer experience that would be provided by the RAN using the candidate RAN configuration based on the current configuration information and the latent space information;
However, Chang teaches:
determining a supportable customer experience that would be provided by the RAN using the candidate RAN configuration based on the current configuration information and the latent space information; - in ¶ [0031], (“a machine learning model can receive an input (e.g., network KPIs, customer data…in order to predict a given output (e.g., customer experience score, customer acquisition propensity, customer dissatisfaction metric) and make predictions based on the output (e.g., network solutions, categories of solutions, etc.).”)
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified ICKIN with Chang to include determining a supportable customer experience, as taught by Chang. One of ordinary skill in the art would have been motivated to make this modification to meet customer needs, as suggested by Chang, because current telecommunications market level tracking is not granular enough to provide adequate indications of local area network issues for network improvements and to meet local customer satisfaction goals. - ¶ [0001]
Claim 6. The combination of ICKIN and Chang teaches The system of claim 5, - refer to the indicated claim for reference(s).
ICKIN teaches:
wherein predicting energy that would be consumed by the RAN comprises: applying the current configuration information - in ¶ [0076], (“To quantify the energy saving and the potential effect on the KPI, inject the generated configuration data 312 to the pretrained CM model 406 in operations 1 and 2, and infer the (a) energy consumption, and (b) KPI value. The result of operation 11 can be expected (e.g., ideally) to estimate energy consumption”) and the latent space information as input to a trained artificial intelligence (AI) model, - in ¶ [0070], (“(d) The latent variable with the corresponding category of selected KPIs and energy can then be used as input to the decoder of conditional generative model 302 to generate the configuration data 312.”); ¶ [0076], (“…inject the generated configuration data 312 to the pretrained CM model 406”)
wherein responsive to the input, the Al model outputs a prediction of the energy that would be consumed by the RAN using the candidate RAN configuration. - in ¶ [0076], (“To quantify the energy saving and the potential effect on the KPI, inject the generated configuration data 312 to the pretrained CM model 406 in operations 1 and 2, and infer the (a) energy consumption, and (b) KPI value. The result of operation 11 can be expected (e.g., ideally) to estimate energy consumption”)
Claim 7. The combination of ICKIN and Chang teaches The system of claim 5, - refer to the indicated claim for reference(s).
ICKIN teaches:
wherein determining a supportable customer experience that would be provided by the RAN comprises: - in ¶ [0045], (“Golden parameters include, without limitation, important parameters for a customer (e.g., a feature that is part of a Service Level Agreement (SLA)).”); ¶ [0096], (“advantages provided by some embodiments may include…supporting customers in reaching their sustainability goals (e.g., CO.sub.2 reduction).”)
applying the current configuration information and the latent space information as input to a trained artificial intelligence (AI) model, - in ¶ [0070], (“(d) The latent variable with the corresponding category of selected KPIs and energy can then be used as input to the decoder of conditional generative model 302 to generate the configuration data 312.”); ¶ [0048], (“The KPIs can be used as constraints to applicable energy use cases such that the KPIs are kept intact while sustaining SLAs.”)
ICKIN does not explicitly teach:
wherein responsive to the input, the AI model outputs a determination of the supportable customer experience.
However, Chang teaches:
wherein responsive to the input, the AI model outputs a determination of the supportable customer experience. - in ¶ [0031], (“a machine learning model can receive an input (e.g., network KPIs, customer data…in order to predict a given output (e.g., customer experience score, customer acquisition propensity, customer dissatisfaction metric) and make predictions based on the output (e.g., network solutions, categories of solutions, etc.).”)
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified ICKIN with Chang to include that, responsive to the input, the AI model outputs a determination of the supportable customer experience, as taught by Chang. One of ordinary skill in the art would have been motivated to make this modification to meet customer needs, as suggested by Chang, because current telecommunications market level tracking is not granular enough to provide adequate indications of local area network issues for network improvements and to meet local customer satisfaction goals. - ¶ [0001]
Claims 12, 18 are rejected under the same rationale as Claim 5 since they recite nearly identical limitations.
Claims 13, 19 are rejected under the same rationale as Claim 6 since they recite nearly identical limitations.
Claims 14, 20 are rejected under the same rationale as Claim 7 since they recite nearly identical limitations.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Shima Wasel whose telephone number is (703)756-4725. The examiner can normally be reached Monday - Friday 8:00 am - 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Khaled Kassim can be reached at (571) 270-3770. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SHIMA WASEL/Patent Examiner, Art Unit 2475
/KHALED M KASSIM/Supervisory Patent Examiner, Art Unit 2475