Prosecution Insights
Last updated: April 19, 2026
Application No. 18/421,947

Methods and Systems for Predicting Incidents in a Network

Non-Final OA (§101, §103)

Filed: Jan 24, 2024
Examiner: CHOWDHURY, MOHAMMED SHAMSUL
Art Unit: 2467
Tech Center: 2400 (Computer Networks)
Assignee: T-Mobile Innovations LLC
OA Round: 1 (Non-Final)

Predicted outcome: 84% grant probability (favorable); 1-2 OA rounds; 2y 8m to grant; 99% grant probability with interview.

Examiner Intelligence

Career Allow Rate: 84% (288 granted / 344 resolved), above average at +25.7% vs TC avg
Interview Lift: strong, +25.2% allowance rate among resolved cases with an interview
Typical Timeline: 2y 8m average prosecution; 50 applications currently pending
Career History: 394 total applications across all art units
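Worked check on the figures above: 288 granted of 344 resolved is 288 / 344 ≈ 0.837, i.e. the 84% career allow rate shown (+25.7 points over the Tech Center average). The +25.2% interview lift reads the same way: among this examiner's resolved cases, those with an examiner interview allow at a rate roughly 25 points higher than those without, which appears to be what lifts this application's predicted grant probability from 84% to the 99% "with interview" figure in the header.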

Statute-Specific Performance

§101: 1.7% (-38.3% vs TC avg)
§103: 64.4% (+24.4% vs TC avg)
§102: 16.1% (-23.9% vs TC avg)
§112: 6.9% (-33.1% vs TC avg)

(In the original chart, the black line marks the Tech Center average estimate. Based on career data from 344 resolved cases.)

Office Action

Rejections at issue: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The Applicant did not submit an information disclosure statement (IDS). The Applicants and other individuals substantively involved with the preparation and/or prosecution of the application have a duty to disclose to the U.S. Patent and Trademark Office all material information known to the applicant(s) as defined in 37 CFR §1.56. See Brasseler, U.S.A. I, L.P. v. Stryker Sales Corp., 267 F.3d 1370, 1383, 60 USPQ2d 1482, 1490 (Fed. Cir. 2001) ("Once an attorney, or an applicant has notice that information exists that appears material and questionable, that person cannot ignore that notice in an effort to avoid his or her duty to disclose."). Materiality controls whether information must be disclosed to the Office, not the circumstances under which or the source from which the information is obtained. The duty to disclose material information extends to information such individuals are aware of prior to or at the time of filing the application or become aware of during the prosecution thereof. See MPEP § 2001.06.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 7-11 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claims do not fall within at least one of the four categories of patent-eligible subject matter because independent claim 7 does not define “a prediction application executing on a computer system” and “a validation application”, standing alone, as being non-transitory. Both claimed features, the “prediction application” and the “validation application”, are interpreted as software programs.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness. Claims 1-3, 5, 7-9, 12, 15 and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. (2024/0121636), Wang hereinafter, in view of BIRR et al. (2024/0305536), BIRR hereinafter. Re. claims 1 and 7, Wang teaches a communications network (Fig. 1/Fig. 2B/Fig. 2D) implemented in a network (Fig. 1) comprising a radio access network (Fig. 1, 122/125/Fig. 2B, RAN), wherein the communications network (Fig. 1/Fig. 2B/Fig. 2D) comprises: a prediction application (Fig. 2D) executing on a computer system (Fig.2D/Fig. 5 & ¶0220) in the communication network, wherein the prediction application (Fig.1/2A-2P & ¶0026/¶0028/¶0074/¶0103/¶0105/¶0155/¶0172-¶0175) is configured to, and a method for maintaining a communication network (Fig. 1/Fig. 2B/Fig. 2D) by predicting incidents (Fig.1/2A-2P & ¶0026/¶0028/¶0074/¶0103/¶0105/¶0155/¶0172-¶0175) in a radio access network of the communication network (Fig. 1/Fig. 2B), wherein the method comprises: obtaining, by a prediction application executing on a computer system in the communication network, signature data associated with a network element using a predictive model based on historical radio access data describing a prior incident at the network element in the radio access network (Fig.1/2A-2P & ¶0026 - receiving, from a customer, information about a service degradation at a user equipment (UE) device of the customer in a cellular network, receiving, from a cell-level network-state prediction model, a prediction about likelihood of network issues that impact customers in cell sites of the cellular network, and receiving information about current usage of the UE device. .. identifying a source of the service degradation, wherein the identifying is based on the prediction about likelihood of network issues and the current usage of the UE device and modifying one of a network component of the cellular network and the UE device, based on the identifying the source of the service degradation to correct the service degradation. Fig.1/2A-2P & ¶0028 - training a cell-level machine learning model to predict a likelihood of a cell site in a cellular network having service issues that impact customers of the cellular network, training a user equipment (UE) level machine learning model using output information from the cell-level machine learning model and historical information about UE-level performance metrics, receiving, from a customer associated with a UE device operating on the cellular network, information about a service degradation experienced by the customer on the UE device, providing the information about the service degradation to the UE-level machine learning model; and receiving, from the UE-level machine learning model, information identifying a source of the service degradation. Fig.1/2A-2P & ¶0074 - During the training phase 211, the cell-level prediction model 209 is trained using historical usages, user mobility, performance metrics at the cell site level, and customer care contact and ticket data. This is illustrated generally as care logs 213 in FIG. 2D. The UE-level inference model 210 is trained using the output 214 of the cell-level prediction model 209, the historical UE level usages, user mobility, performance metrics, and the customer care contact and ticket data. This is illustrated generally as UE-level network logs 215 in FIG. 2D. 
Fig.1/2A-2P & ¶0103 - The UE-level model 210 includes a cell-level log (CLL) feature and a UE-level feature. The CLL feature is a network side feature and receives cell-level KPI information and logs. The CLL feature applies one or more sliding window 238 to scan over historical period of the cell side KPIs that is relevant to the corresponding user. That enables extraction of a likelihood of network problems over time. The top k′ cells that are most relevant to the user are retained and used to generate the feature profile of the of the network level. In addition, the UE-level model 210 has a UE-level feature. The UE-level feature is from the network session logs for the particular user device. That provides information such as times when the user cannot connect to cellular sessions or whether the session was terminated normally.), wherein the signature data indicates a pattern of the historical radio access data associated with the prior incident at the network element, and wherein the signature data is based on correlations identified between different types of the historical radio access data associated with the prior incident at the network element (Fig.1/2A-2P & ¶0103 - The UE-level model 210 includes a cell-level log (CLL) feature and a UE-level feature. The CLL feature is a network side feature and receives cell-level KPI information and logs. The CLL feature applies one or more sliding window 238 to scan over historical period of the cell side KPIs that is relevant to the corresponding user. That enables extraction of a likelihood of network problems over time. The top k′ cells that are most relevant to the user are retained and used to generate the feature profile of the of the network level. In addition, the UE-level model 210 has a UE-level feature. The UE-level feature is from the network session logs for the particular user device. That provides information such as times when the user cannot connect to cellular sessions or whether the session was terminated normally.. The UE-level model 210 further correlates the two features by the time channel in order to learn the temporal feature correlations between the CLL feature and the UE-level feature. Fig.1/2A-2P & ¶0105 - For effective troubleshooting, the system and method correlate the UE-level profile features with the network status of the top k′ reference cell sites. To achieve this goal, the system and method creates a cell-level profile 244 for the reference cell sites by using the learned features from the cell-level model, namely, HIV and H. For each UE, the system and method look back over a one-week historical time window and construct the corresponding feature profiles 244. One week or seven days is an exemplary time window size. However, any suitable window size may be used. The extracted UE-level and cell-level features are concatenated over the time dimension for temporal correlation learning, forming UE-level feature profiles 246. The feature engineering method of the UE-level model 210 is shown in FIG. 2I. In the left side of FIG. 2I, the system and method apply the pre-trained cell-level model 209 of FIG. 2G and uses a sliding window 238 to extract the cell-level profile features 242 over the one-week history. Also, see 259 & 262, 263 & 265 in Fig. 
2N); obtaining, by the prediction application, current radio access data associated with the network element (Fig.1/2A-2P & ¶0046 - the agent 203 may have access to current information about network outages or limitations or known issues with a particular user equipment device like the customer owns. Fig.1/2A-2P & ¶0058 - The automatic troubleshooting system 218 is based on machine learning and rich data sources 219. The data sources 219 include historical data about network operation as well as current data about network status and operation. Fig.1/2A-2P & ¶0075 - Upon receiving a customer contact 216 between a customer 202 and a care agent 203 reporting a service issue, the UE level inference model will take the cell site level prediction on current customer-impacting network issues in the related cells and current UE level usage, mobility, performance metrics to infer whether the customer reported service issues is caused by network related issues. Fig.1/2A-2P & ¶0103 - The UE-level model 210 includes a cell-level log (CLL) feature and a UE-level feature. The CLL feature is a network side feature and receives cell-level KPI information and logs. The CLL feature applies one or more sliding window 238 to scan over historical period of the cell side KPIs that is relevant to the corresponding user. That enables extraction of a likelihood of network problems over time. The top k′ cells that are most relevant to the user are retained and used to generate the feature profile of the of the network level. In addition, the UE-level model 210 has a UE-level feature. The UE-level feature is from the network session logs for the particular user device. That provides information such as times when the user cannot connect to cellular sessions or whether the session was terminated normally. The UE-level model 210 further correlates the two features by the time channel in order to learn the temporal feature correlations between the CLL feature and the UE-level feature. Fig.1/2A-2P & ¶0105 - For effective troubleshooting, the system and method correlate the UE-level profile features with the network status of the top k′ reference cell sites. To achieve this goal, the system and method creates a cell-level profile 244 for the reference cell sites by using the learned features from the cell-level model, namely, HIV and H. For each UE, the system and method look back over a one-week historical time window and construct the corresponding feature profiles 244. One week or seven days is an exemplary time window size. However, any suitable window size may be used. The extracted UE-level and cell-level features are concatenated over the time dimension for temporal correlation learning, forming UE-level feature profiles 246. The feature engineering method of the UE-level model 210 is shown in FIG. 2I. In the left side of FIG. 2I, the system and method apply the pre-trained cell-level model 209 of FIG. 2G and uses a sliding window 238 to extract the cell-level profile features 242 over the one-week history. The stride of the sliding window is 1 hour in one example though any suitable value may be used. Also, see 259 & 262, 263 & 265 in Fig. 
2N); inputting, by the prediction application, the current radio access data into the predictive model to obtain a prediction output based on the signature data, wherein the prediction output indicates data regarding a predicted incident at the network element (Fig.1/2A-2P & ¶0074 - During the training phase 211, the cell-level prediction model 209 is trained using historical usages, user mobility, performance metrics at the cell site level, and customer care contact and ticket data. This is illustrated generally as care logs 213 in FIG. 2D. The UE-level inference model 210 is trained using the output 214 of the cell-level prediction model 209, the historical UE level usages, user mobility, performance metrics, and the customer care contact and ticket data. This is illustrated generally as UE-level network logs 215 in FIG. 2D. Fig.1/2A-2P & ¶0103 - The UE-level model 210 includes a cell-level log (CLL) feature and a UE-level feature. The CLL feature is a network side feature and receives cell-level KPI information and logs. The CLL feature applies one or more sliding window 238 to scan over historical period of the cell side KPIs that is relevant to the corresponding user. That enables extraction of a likelihood of network problems over time. The top k′ cells that are most relevant to the user are retained and used to generate the feature profile of the of the network level. In addition, the UE-level model 210 has a UE-level feature. The UE-level feature is from the network session logs for the particular user device. That provides information such as times when the user cannot connect to cellular sessions or whether the session was terminated normally), a predefined time period in which the predicted incident is likely to occur, and a preventative resolution to the predicted incident, wherein the preventative resolution comprises one or more corrective actions to prevent the predicted incident from occurring (Fig.1/2A-2P & ¶0155 - In FIG. 2J(c), the upper portion shows values for a network problem probability that is learned by the ML model for two different cell sites, cell site R1 and cell site R2. The network problem probability is labelled R1 Risk and the network problem probability for cell site R2 is labelled R2 risk. The lower portion of FIG. 2J(c) illustrates an aspect of user in relation to the network problem probability. This illustrates what happened on the user side at the UE device. The lower portion of FIG. 2J(c) illustrates the intervals of the cellular sessions of the UE that were carried by each of the two major reference cell sites, cell site R1 and cell site R2. Specifically, a designation Other/Idle means the device was carried by other cell sites or the device was idle; a designation R1/R2 Normal means the sessions with R1/R2 were closed normally; a designation R1/R2 RAT means the radio access technology (RAT) was changed. The simultaneous session occupations with cell site R1 and cell site R2 represent that the device was handed off from one cell site to another cell site. Fig.1/2A-2P & ¶0156 - after the occurrence of Day 6 and Day 7 outages on cell site designated 1st NB, the risk of a network issue at cell site R1 increases dramatically. The network problem probability stays high over Day 6 and Day 7. In contrast, the network problem probability for cell site R2 remains relatively low. The cell site designated 1st NB is the nearest neighbor cell site to cell site R1. 
After the occurrence of the outages on Day 6 and Day 7, the total session length with cell site R2 was significantly reduced, and service for the device was mostly carried by R1. Thus, the ML model determines from KPI data for the cell sites in the local area, an outage or disruption at the cell site designated 1st NB causes substantial disruption on cell site R1 but only slight disruption on cell site R2. Fig.1/2A-2P & ¶0157 - Referring again to FIG. 2O, at step 273, the method 268 determines if a network problem identified at step 271 and step 272 may affect one or more categories of users. For example, the method may identify one or more users who were impacted by the network problem and identify one or more categories in which those users are categorized. Other users in the same categories may be affected as well. In an example, a user was affected by a hard outage by losing all service in the region served by a particular cell site. The user is assigned to a particular category of users served by that cell site at the time. Step 273 may include identifying all other users in that particular category. Such identification may be used to mitigate further problems for those users in the same category. Fig.1/2A-2P & ¶0164 - if no suitable network protection is available, at step 275, the method 268 includes operations to take action to protect users in the affected categories. Actions may be taken to isolate the users in an affected category from the service degradation. Such actions may be taken prophylactically to prevent a service disruption or degradation for other users in the affected category. Such actions may be taken to maintain a continuous user experience for other users in the affected category.); generating, by the prediction application, a service report indicating at least one of the predicted incidents at the network element, the predefined time period, or the preventative resolution (Fig.1/2A-2P & ¶0172 - At step 282, the KPI information may be analyzed to identify any operational problems in the network. Such operational problems may be due to a natural disaster or some other event, such as a hurricane affecting part of the network serving a specified region. Further, the KPI information may be used by a properly trained machine learning model to estimate the impact of the natural disaster or other event and how the event will impact users experience using the network. …. The machine learning model in accordance with aspects described herein may predict those affects outside the first part of the network and may permit rapid network modifications to accommodate the changes to parts of the network that were not directly affected by the hurricane or other event. Fig.1/2A-2P & ¶0174 - At step 283, the method 280 includes a step of identifying a location of the event. In embodiments, the model may provide information identifying the location of the event or identifying network elements impacted by the event. Fig.1/2A-2P & ¶0175 - At step 284, the model operates to provide a prediction about network elements affected by the event. The model may provide a prediction about likelihood of network issues that impact customers in cell sites of the cellular network. The model may retrieve information about current usage of UE devices in the network. Further, the model may identify a source of the service degradation. The identification may be based on the prediction about likelihood of network issues and the current usage of UE devices. 
Fig.1/2A-2P & ¶0182 - At step 287, the model may provide prediction information about other impacts that may be expected due to the event. The prediction information may be based on the categories assigned to the event. For example, some network components such as switching sites or cell sites may be directly affected by the event. Fig.1/2A-2P & ¶0184 - At step 288, the method 280 includes receiving from the model prediction information about proper resources for resolution of the event. In embodiments, the model relies on the categorization of the event for deciding what resources are most appropriate for the situation. For example, if the model determines that a particular component is damaged and needs repair or replacement, the model may provide at step 287 a prediction identifying the damaged component. The component may be categorized with other likely-affected components. The damaged component may further be categorized with resources recommended for making the repair considered to be necessary. If a crew with particular equipment such as a lift-bucket is required to effect the repair, the model may provide at step 288 a prediction identifying the repair crew and needed repair equipment.); transmitting, by the prediction application, the service report to a processing entity for resolution (Fig.1/2A-2P & ¶0185 - At step 289, the resources necessary to resolve the event and its effect on the network are dispatched. For example, if the model determined that a repair crew with a lift-bucket is required, the crew may be selected, designated and assigned to the repair job. The progress of the crew may be monitored to determine when the repair is complete and indirectly affected network facilities may be activated or reactivated. If a network configuration update is required that only involves human interaction with network control facilities to make the necessary changes, the model may provide the necessary information and monitor the progress. Some repairs following some events may require a series of repairs or other discrete steps. The model may recommend each step in sequence and monitor network KPIs and device KPIs to determine completion.);

[claim chart image omitted from the original Office Action]

Yet, Wang does not expressly teach receiving, by the prediction application, feedback data regarding the predicted incident; and updating, by a validation application executing on the computer system, at least one of the signature data or the predictive model based on the feedback data. However, in the analogous art, BIRR explicitly discloses receiving, by the prediction application, feedback data regarding the predicted incident; and updating, by a validation application executing on the computer system, at least one of the signature data or the predictive model based on the feedback data. (Fig. 1A-1O / Fig. 2 & ¶0051 - the trained machine learning model 225 may be re-trained using feedback information. For example, feedback may be provided to the machine learning model. The feedback may be associated with actions performed based on the recommendations provided by the trained machine learning model 225 and/or automated actions performed, or caused, by the trained machine learning model 225. In other words, the recommendations and/or actions output by the trained machine learning model 225 may be used as inputs to re-train the machine learning model (e.g., a feedback loop may be used to train and/or update the machine learning model). Fig. 1A-1O / Fig. 2 & ¶0052 - the machine learning system may apply a rigorous and automated process to determine RAN antenna performance impact. The machine learning system may enable recognition and/or identification of tens, hundreds, thousands, or millions of features and/or feature values for tens, hundreds, thousands, or millions of observations, thereby increasing accuracy and consistency and reducing delay associated with determining RAN antenna performance impact relative to requiring computing resources to be allocated for tens, hundreds, or thousands of operators to manually determine RAN antenna performance impact using the features or feature values).

[claim chart image omitted from the original Office Action]

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Wang’s invention of a system and a method for automatically identifying and resolving service issues in a cellular communication network or other mobility network to include BIRR’s invention of a system and a method for utilizing machine learning models to determine RAN (Radio Access Network) antenna performance impact in a wireless communication system, because it provides an efficient management system which reduces unnecessary dispatching of technicians to investigate false-positive antenna alerts, by utilizing machine learning models to determine RAN antenna performance and identify issues with antenna alignment, line swaps, bad lines, and other antenna issues caused by severe weather conditions, remote electrical tilt (RET) issues, and misconfigurations, which in turn improves customer experience in the wireless communication system. (¶0002-¶0010, BIRR)
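To make the mechanism the examiner leans on concrete: Wang's ¶0103 and ¶0105, quoted throughout the mapping above, describe sliding a window over historical cell-level KPIs, retaining the top k′ most relevant cells, and concatenating the resulting cell-level profiles with UE-level features along the time dimension. The Python sketch below illustrates that pattern only; the function names, the mean/std window summary, and the relevance scores are illustrative assumptions, not Wang's disclosed implementation.

```python
import numpy as np

def cell_level_profiles(kpi_history, window_hours=24, stride_hours=1):
    """Slide a fixed window over hourly cell-site KPI history (cf. Wang ¶0103).

    kpi_history: array of shape (hours, n_kpis). The mean/std summary per
    window is an assumption; Wang does not specify the extracted features.
    """
    profiles = []
    for start in range(0, len(kpi_history) - window_hours + 1, stride_hours):
        window = kpi_history[start:start + window_hours]
        profiles.append(np.concatenate([window.mean(axis=0), window.std(axis=0)]))
    return np.stack(profiles)

def ue_feature_profile(ue_features, cell_histories, relevance, k_prime=3):
    """Retain the top-k' most relevant cells for a UE and concatenate their
    windowed profiles with the UE-level features over the time dimension
    (cf. the concatenation step in Wang ¶0105)."""
    top_cells = sorted(cell_histories, key=lambda c: relevance[c],
                       reverse=True)[:k_prime]
    cell_part = np.concatenate(
        [cell_level_profiles(cell_histories[c]) for c in top_cells], axis=1)
    n = min(len(ue_features), len(cell_part))  # align the time dimension
    return np.concatenate([ue_features[:n], cell_part[:n]], axis=1)
```

With the one-week (168-hour) look-back and 1-hour stride Wang mentions, and the assumed 24-hour window, cell_level_profiles yields 145 overlapping windows per cell.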
Re. Claim 5, Wang and BIRR teach claim 1. Wang further teaches wherein the processing entity is a network operations center (NOC) operator, a field technician, or an automated system. (Fig.1/2A-2P & ¶0058 - In the automatic troubleshooting and resolution process 200, the human care agent 203 can interact with an automatic troubleshooting system 218. The automatic troubleshooting system 218 is based on machine learning and rich data sources 219. The data sources 219 include historical data about network operation as well as current data about network status and operation. In embodiments, the automatic troubleshooting system 218 implements a learning-based troubleshooting framework and relies on one or more machine learning models to determine a probability that the source of the problem is in the user's UE device or that the problem is in the network. Fig.1/2A-2P & ¶0059 - the customer 202 may be given automatic voice prompts or text-based prompts and may provide suitable information in response. If the source of the problem is the account of the customer 202 or provisioning of service for the customer 202, the agent 203 generally can promptly resolve the problem for the customer. Fig.1/2A-2P & ¶0060 - the agent 203 may provide information to the automatic troubleshooting system 218 about the symptoms and issues reported by the customer 202. .. information about the customer's identification, account, provisioning, UE device and network activities may be automatically forwarded to the automatic troubleshooting system 218… the agent 203 begins interacting with the automatic troubleshooting system 218 if the agent 203 cannot identify and resolve an account or provisioning problem for the customer. … the automatic troubleshooting system 218 silently monitors the interaction between the customer 202 and the agent 203 during the customer interaction phase 204 and may proactively provide information about the location of the issue to the agent 203. Fig.1/2A-2P & ¶0063 - The automatic troubleshooting and resolution process 200 enables the agent 203 and the network provider in general to properly respond to the customer 202, leaving the customer more likely to be satisfied that the customer's issue is being resolved. Fig.1/2A-2P & ¶0185 - At step 289, the resources necessary to resolve the event and its effect on the network are dispatched. For example, if the model determined that a repair crew (i.e., field technician) with a lift-bucket is required, the crew (i.e., field technician) may be selected, designated and assigned to the repair job. The progress of the crew (i.e., field technician) may be monitored to determine when the repair is complete and indirectly affected network facilities may be activated or reactivated. If a network configuration update is required that only involves human interaction with network control facilities (i.e., network operations center (NOC) operator) to make the necessary changes, the model may provide the necessary information and monitor the progress. Some repairs following some events may require a series of repairs or other discrete steps. The model may recommend each step in sequence and monitor network KPIs and device KPIs to determine completion.).

Re. claim 12, Wang teaches a method (Fig.1/2A-2P & ¶0026/¶0028/¶0074/¶0103/¶0105/¶0155/¶0172-¶0175) for maintaining a communication network (Fig. 1/Fig. 2B/Fig. 2D), wherein the method comprises: collecting, by a prediction application executing on a computer system in the communication network, historical radio access data and historical incident data, each associated with prior incidents across a plurality of network elements in the radio access network (Fig.1/2A-2P & ¶0026 - receiving, from a customer, information about a service degradation at a user equipment (UE) device of the customer in a cellular network, receiving, from a cell-level network-state prediction model, a prediction about likelihood of network issues that impact customers in cell sites of the cellular network, and receiving information about current usage of the UE device. .. identifying a source of the service degradation, wherein the identifying is based on the prediction about likelihood of network issues and the current usage of the UE device and modifying one of a network component of the cellular network and the UE device, based on the identifying the source of the service degradation to correct the service degradation. Fig.1/2A-2P & ¶0028 - training a cell-level machine learning model to predict a likelihood of a cell site in a cellular network having service issues that impact customers of the cellular network, training a user equipment (UE) level machine learning model using output information from the cell-level machine learning model and historical information about UE-level performance metrics, receiving, from a customer associated with a UE device operating on the cellular network, information about a service degradation experienced by the customer on the UE device, providing the information about the service degradation to the UE-level machine learning model; and receiving, from the UE-level machine learning model, information identifying a source of the service degradation.
Fig.1/2A-2P & ¶0074 - During the training phase 211, the cell-level prediction model 209 is trained using historical usages, user mobility, performance metrics at the cell site level, and customer care contact and ticket data. This is illustrated generally as care logs 213 in FIG. 2D. The UE-level inference model 210 is trained using the output 214 of the cell-level prediction model 209, the historical UE level usages, user mobility, performance metrics, and the customer care contact and ticket data. This is illustrated generally as UE-level network logs 215 in FIG. 2D. Fig.1/2A-2P & ¶0103 - The UE-level model 210 includes a cell-level log (CLL) feature and a UE-level feature. The CLL feature is a network side feature and receives cell-level KPI information and logs. The CLL feature applies one or more sliding window 238 to scan over historical period of the cell side KPIs that is relevant to the corresponding user. That enables extraction of a likelihood of network problems over time. The top k′ cells that are most relevant to the user are retained and used to generate the feature profile of the of the network level. In addition, the UE-level model 210 has a UE-level feature. The UE-level feature is from the network session logs for the particular user device. That provides information such as times when the user cannot connect to cellular sessions or whether the session was terminated normally); generating, by the prediction application, signature data associated with a network element using a predictive model based on the historical radio access data and the historical incident data, wherein the signature data indicates a pattern identified in the historical radio access data and the historical incident data associated with a prior incident at the network element (Fig.1/2A-2P & ¶0028 - training a cell-level machine learning model to predict a likelihood of a cell site in a cellular network having service issues that impact customers of the cellular network, training a user equipment (UE) level machine learning model using output information from the cell-level machine learning model and historical information about UE-level performance metrics, receiving, from a customer associated with a UE device operating on the cellular network, information about a service degradation experienced by the customer on the UE device, providing the information about the service degradation to the UE-level machine learning model; and receiving, from the UE-level machine learning model, information identifying a source of the service degradation. Fig.1/2A-2P & ¶0074 - During the training phase 211, the cell-level prediction model 209 is trained using historical usages, user mobility, performance metrics at the cell site level, and customer care contact and ticket data. This is illustrated generally as care logs 213 in FIG. 2D. The UE-level inference model 210 is trained using the output 214 of the cell-level prediction model 209, the historical UE level usages, user mobility, performance metrics, and the customer care contact and ticket data. This is illustrated generally as UE-level network logs 215 in FIG. 2D. Fig.1/2A-2P & ¶0103 - The UE-level model 210 includes a cell-level log (CLL) feature and a UE-level feature. The CLL feature is a network side feature and receives cell-level KPI information and logs. The CLL feature applies one or more sliding window 238 to scan over historical period of the cell side KPIs that is relevant to the corresponding user. 
That enables extraction of a likelihood of network problems over time. The top k′ cells that are most relevant to the user are retained and used to generate the feature profile of the of the network level. In addition, the UE-level model 210 has a UE-level feature. The UE-level feature is from the network session logs for the particular user device. That provides information such as times when the user cannot connect to cellular sessions or whether the session was terminated normally); obtaining, by the prediction application, current radio access data associated with the network element (Fig.1/2A-2P & ¶0046 - the agent 203 may have access to current information about network outages or limitations or known issues with a particular user equipment device like the customer owns. Fig.1/2A-2P & ¶0058 - The automatic troubleshooting system 218 is based on machine learning and rich data sources 219. The data sources 219 include historical data about network operation as well as current data about network status and operation. Fig.1/2A-2P & ¶0075 - Upon receiving a customer contact 216 between a customer 202 and a care agent 203 reporting a service issue, the UE level inference model will take the cell site level prediction on current customer-impacting network issues in the related cells and current UE level usage, mobility, performance metrics to infer whether the customer reported service issues is caused by network related issues. Fig.1/2A-2P & ¶0103 - The UE-level model 210 includes a cell-level log (CLL) feature and a UE-level feature. The CLL feature is a network side feature and receives cell-level KPI information and logs. The CLL feature applies one or more sliding window 238 to scan over historical period of the cell side KPIs that is relevant to the corresponding user. That enables extraction of a likelihood of network problems over time. The top k′ cells that are most relevant to the user are retained and used to generate the feature profile of the of the network level. In addition, the UE-level model 210 has a UE-level feature. The UE-level feature is from the network session logs for the particular user device. That provides information such as times when the user cannot connect to cellular sessions or whether the session was terminated normally.. The UE-level model 210 further correlates the two features by the time channel in order to learn the temporal feature correlations between the CLL feature and the UE-level feature. Fig.1/2A-2P & ¶0105 - For effective troubleshooting, the system and method correlate the UE-level profile features with the network status of the top k′ reference cell sites. To achieve this goal, the system and method creates a cell-level profile 244 for the reference cell sites by using the learned features from the cell-level model, namely, HIV and H. For each UE, the system and method look back over a one-week historical time window and construct the corresponding feature profiles 244. One week or seven days is an exemplary time window size. However, any suitable window size may be used. The extracted UE-level and cell-level features are concatenated over the time dimension for temporal correlation learning, forming UE-level feature profiles 246. The feature engineering method of the UE-level model 210 is shown in FIG. 2I. In the left side of FIG. 2I, the system and method apply the pre-trained cell-level model 209 of FIG. 2G and uses a sliding window 238 to extract the cell-level profile features 242 over the one-week history. 
The stride of the sliding window is 1 hour in one example though any suitable value may be used. Also, see 259 & 262, 263 & 265 in Fig. 2N); inputting, by the prediction application, the current radio access data into the predictive model to obtain a prediction output based on the signature data, wherein the prediction output indicates data regarding a predicted incident at the network element, a predefined time period in which the predicted incident is likely to occur, and a preventative resolution to the predicted incident (Fig.1/2A-2P & ¶0074 - During the training phase 211, the cell-level prediction model 209 is trained using historical usages, user mobility, performance metrics at the cell site level, and customer care contact and ticket data. This is illustrated generally as care logs 213 in FIG. 2D. The UE-level inference model 210 is trained using the output 214 of the cell-level prediction model 209, the historical UE level usages, user mobility, performance metrics, and the customer care contact and ticket data. This is illustrated generally as UE-level network logs 215 in FIG. 2D. Fig.1/2A-2P & ¶0103 - The UE-level model 210 includes a cell-level log (CLL) feature and a UE-level feature. The CLL feature is a network side feature and receives cell-level KPI information and logs. The CLL feature applies one or more sliding window 238 to scan over historical period of the cell side KPIs that is relevant to the corresponding user. That enables extraction of a likelihood of network problems over time. The top k′ cells that are most relevant to the user are retained and used to generate the feature profile of the of the network level. In addition, the UE-level model 210 has a UE-level feature. The UE-level feature is from the network session logs for the particular user device. That provides information such as times when the user cannot connect to cellular sessions or whether the session was terminated normally); generating, by the prediction application, a service report indicating at least one of the predicted incident at the network element, the predefined time period, or the preventative resolution to the predicted incident (Fig.1/2A-2P & ¶0172 - At step 282, the KPI information may be analyzed to identify any operational problems in the network. Such operational problems may be due to a natural disaster or some other event, such as a hurricane affecting part of the network serving a specified region. Further, the KPI information may be used by a properly trained machine learning model to estimate the impact of the natural disaster or other event and how the event will impact users experience using the network. …. The machine learning model in accordance with aspects described herein may predict those affects outside the first part of the network and may permit rapid network modifications to accommodate the changes to parts of the network that were not directly affected by the hurricane or other event. Fig.1/2A-2P & ¶0174 - At step 283, the method 280 includes a step of identifying a location of the event. In embodiments, the model may provide information identifying the location of the event or identifying network elements impacted by the event. Fig.1/2A-2P & ¶0175 - At step 284, the model operates to provide a prediction about network elements affected by the event. The model may provide a prediction about likelihood of network issues that impact customers in cell sites of the cellular network. The model may retrieve information about current usage of UE devices in the network. 
Further, the model may identify a source of the service degradation. The identification may be based on the prediction about likelihood of network issues and the current usage of UE devices. Fig.1/2A-2P & ¶0182 - At step 287, the model may provide prediction information about other impacts that may be expected due to the event. The prediction information may be based on the categories assigned to the event. For example, some network components such as switching sites or cell sites may be directly affected by the event. Fig.1/2A-2P & ¶0184 - At step 288, the method 280 includes receiving from the model prediction information about proper resources for resolution of the event. In embodiments, the model relies on the categorization of the event for deciding what resources are most appropriate for the situation. For example, if the model determines that a particular component is damaged and needs repair or replacement, the model may provide at step 287 a prediction identifying the damaged component. The component may be categorized with other likely-affected components. The damaged component may further be categorized with resources recommended for making the repair considered to be necessary. If a crew with particular equipment such as a lift-bucket is required to effect the repair, the model may provide at step 288 a prediction identifying the repair crew and needed repair equipment); instructing, by the prediction application, an automated system to perform the preventative resolution at the network element (Fig.1/2A-2P & ¶0058 - In the automatic troubleshooting and resolution process 200, the human care agent 203 can interact with an automatic troubleshooting system 218. The automatic troubleshooting system 218 is based on machine learning and rich data sources 219. The data sources 219 include historical data about network operation as well as current data about network status and operation. In embodiments, the automatic troubleshooting system 218 implements a learning-based troubleshooting framework and relies on one or more machine learning models to determine a probability that the source of the problem is in the user's UE device or that the problem is in the network. Fig.1/2A-2P & ¶0059 - the customer 202 may be given automatic voice prompts or text-based prompts and may provide suitable information in response. If the source of the problem is the account of the customer 202 or provisioning of service for the customer 202, the agent 203 generally can promptly resolve the problem for the customer. Fig.1/2A-2P & ¶0060 - the agent 203 may provide information to the automatic troubleshooting system 218 about the symptoms and issues reported by the customer 202. .. information about the customer's identification, account, provisioning, UE device and network activities may be automatically forwarded to the automatic troubleshooting system 218… the agent 203 begins interacting with the automatic troubleshooting system 218 if the agent 203 cannot identify and resolve an account or provisioning problem for the customer. … the automatic troubleshooting system 218 silently monitors the interaction between the customer 202 and the agent 203 during the customer interaction phase 204 and may proactively provide information about the location of the issue to the agent 203. 
Fig.1/2A-2P & ¶0063 - The automatic troubleshooting and resolution process 200 enables the agent 203 and the network provider in general to properly respond to the customer 202, leaving the customer more likely to be satisfied that the customer's issue is being resolved.); and

[claim chart images omitted from the original Office Action]

Yet, Wang does not expressly teach receiving, by a validation application executing on the computer system, feedback data regarding the predicted incident, wherein the feedback data is used to update at least one of the signature data or the predictive model. However, in the analogous art, BIRR explicitly discloses receiving, by a validation application executing on the computer system, feedback data regarding the predicted incident, wherein the feedback data is used to update at least one of the signature data or the predictive model. (Fig. 1A-1O / Fig. 2 & ¶0051 - the trained machine learning model 225 may be re-trained using feedback information. For example, feedback may be provided to the machine learning model. The feedback may be associated with actions performed based on the recommendations provided by the trained machine learning model 225 and/or automated actions performed, or caused, by the trained machine learning model 225. In other words, the recommendations and/or actions output by the trained machine learning model 225 may be used as inputs to re-train the machine learning model (e.g., a feedback loop may be used to train and/or update the machine learning model). Fig. 1A-1O / Fig. 2 & ¶0052 - the machine learning system may apply a rigorous and automated process to determine RAN antenna performance impact. The machine learning system may enable recognition and/or identification of tens, hundreds, thousands, or millions of features and/or feature values for tens, hundreds, thousands, or millions of observations, thereby increasing accuracy and consistency and reducing delay associated with determining RAN antenna performance impact relative to requiring computing resources to be allocated for tens, hundreds, or thousands of operators to manually determine RAN antenna performance impact using the features or feature values).

[claim chart image omitted from the original Office Action]

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Wang’s invention of a system and a method for automatically identifying and resolving service issues in a cellular communication network or other mobility network to include BIRR’s invention of a system and a method for utilizing machine learning models to determine RAN (Radio Access Network) antenna performance impact in a wireless communication system, because it provides an efficient management system which reduces unnecessary dispatching of technicians to investigate false-positive antenna alerts, by utilizing machine learning models to determine RAN antenna performance and identify issues with antenna alignment, line swaps, bad lines, and other antenna issues caused by severe weather conditions, remote electrical tilt (RET) issues, and misconfigurations, which in turn improves customer experience in the wireless communication system. (¶0002-¶0010, BIRR)
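Wang's ¶0074, cited throughout the claim 12 mapping above, describes a two-stage cascade: a cell-level model is trained first, and its output becomes an input of the UE-level model. A minimal sketch of that cascade follows, assuming scikit-learn classifiers and random placeholder arrays standing in for the usage, mobility, performance-metric, and care/ticket logs Wang actually trains on:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
# Placeholder data only; the real inputs are the historical logs of ¶0074.
X_cell = rng.random((1000, 8))      # cell-site level features
y_cell = rng.integers(0, 2, 1000)   # label: customer-impacting issue at cell
X_ue = rng.random((1000, 5))        # UE-level features
y_ue = rng.integers(0, 2, 1000)     # label: network-caused degradation at UE

# Stage 1: cell-level model predicts likelihood of customer-impacting issues.
cell_model = GradientBoostingClassifier().fit(X_cell, y_cell)

# Stage 2: the cell-level model's output feeds the UE-level model,
# the cascade ¶0074 describes.
cell_risk = cell_model.predict_proba(X_cell)[:, 1].reshape(-1, 1)
ue_model = GradientBoostingClassifier().fit(np.hstack([X_ue, cell_risk]), y_ue)
```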
Re. Claims 2, 8 and 20, Wang and BIRR teach claims 1, 7 and 12. Yet, Wang does not expressly teach further comprising updating, by the validation application, at least one of the signature data or the predictive model based on updated radio access data describing the radio access network. However, in the analogous art, BIRR explicitly discloses further comprising updating, by the validation application, at least one of the signature data or the predictive model based on updated radio access data describing the radio access network. (Fig. 1A-1O / Fig. 2 & ¶0051 - the trained machine learning model 225 may be re-trained using feedback information. For example, feedback may be provided to the machine learning model. The feedback may be associated with actions performed based on the recommendations provided by the trained machine learning model 225 and/or automated actions performed, or caused, by the trained machine learning model 225. In other words, the recommendations and/or actions output by the trained machine learning model 225 may be used as inputs to re-train the machine learning model (e.g., a feedback loop may be used to train and/or update the machine learning model). Fig. 1A-1O / Fig. 2 & ¶0052 - the machine learning system may apply a rigorous and automated process to determine RAN antenna performance impact. The machine learning system may enable recognition and/or identification of tens, hundreds, thousands, or millions of features and/or feature values for tens, hundreds, thousands, or millions of observations, thereby increasing accuracy and consistency and reducing delay associated with determining RAN antenna performance impact relative to requiring computing resources to be allocated for tens, hundreds, or thousands of operators to manually determine RAN antenna performance impact using the features or feature values).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Wang’s invention of a system and a method for automatically identifying and resolving service issues in a cellular communication network or other mobility network to include BIRR’s invention of a system and a method for utilizing machine learning models to determine RAN (Radio Access Network) antenna performance impact in a wireless communication system, because it provides an efficient management system which reduces unnecessary dispatching of technicians to investigate false-positive antenna alerts, by utilizing machine learning models to determine RAN antenna performance and identify issues with antenna alignment, line swaps, bad lines, and other antenna issues caused by severe weather conditions, remote electrical tilt (RET) issues, and misconfigurations, which in turn improves customer experience in the wireless communication system. (¶0002-¶0010, BIRR)
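BIRR's ¶0051, the passage carrying each of the feedback-based limitations in this rejection, amounts to folding verified outcomes of earlier predictions back into the training data. A minimal sketch, assuming a scikit-learn-style estimator; the helper name and the refit-from-scratch choice are mine, since BIRR leaves the retraining mechanics open:

```python
import numpy as np

def retrain_with_feedback(model, X_train, y_train, X_feedback, y_feedback):
    """Fold outcome feedback back into the training set and refit: the
    feedback loop BIRR ¶0051 describes at a high level. An incremental
    update (e.g., partial_fit) would be an equally valid reading."""
    X = np.vstack([X_train, X_feedback])
    y = np.concatenate([y_train, y_feedback])
    return model.fit(X, y)
```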
Re. Claims 3, 9 and 19, Wang and BIRR teach claims 1, 7 and 12. Yet, Wang does not expressly teach wherein the feedback data indicates an accuracy of the prediction output, and wherein the method further comprises updating, by the validation application, the predictive model based on the feedback data. However, in the analogous art, BIRR explicitly discloses wherein the feedback data indicates an accuracy of the prediction output, and wherein the method further comprises updating, by the validation application, the predictive model based on the feedback data. (Fig. 1A-1O / Fig. 2 & ¶0051 - the trained machine learning model 225 may be re-trained using feedback information. For example, feedback may be provided to the machine learning model. The feedback may be associated with actions performed based on the recommendations provided by the trained machine learning model 225 and/or automated actions performed, or caused, by the trained machine learning model 225. In other words, the recommendations and/or actions output by the trained machine learning model 225 may be used as inputs to re-train the machine learning model (e.g., a feedback loop may be used to train and/or update the machine learning model). Fig. 1A-1O / Fig. 2 & ¶0052 - the machine learning system may apply a rigorous and automated process to determine RAN antenna performance impact. The machine learning system may enable recognition and/or identification of tens, hundreds, thousands, or millions of features and/or feature values for tens, hundreds, thousands, or millions of observations, thereby increasing accuracy and consistency and reducing delay associated with determining RAN antenna performance impact relative to requiring computing resources to be allocated for tens, hundreds, or thousands of operators to manually determine RAN antenna performance impact using the features or feature values).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Wang’s invention of a system and a method for automatically identifying and resolving service issues in a cellular communication network or other mobility network to include BIRR’s invention of a system and a method for utilizing machine learning models to determine RAN (Radio Access Network) antenna performance impact in a wireless communication system, because it provides an efficient management system which reduces unnecessary dispatching of technicians to investigate false-positive antenna alerts, by utilizing machine learning models to determine RAN antenna performance and identify issues with antenna alignment, line swaps, bad lines, and other antenna issues caused by severe weather conditions, remote electrical tilt (RET) issues, and misconfigurations, which in turn improves customer experience in the wireless communication system. (¶0002-¶0010, BIRR)

Re. Claim 15, Wang and BIRR teach claim 12. Wang further teaches wherein the signature data is based on correlations identified between different types of the historical radio access data and the historical incident data associated with the prior incident at the network element. (Fig.1/2A-2P & ¶0103 - The UE-level model 210 includes a cell-level log (CLL) feature and a UE-level feature. The CLL feature is a network side feature and receives cell-level KPI information and logs. The CLL feature applies one or more sliding window 238 to scan over historical period of the cell side KPIs that is relevant to the corresponding user. That enables extraction of a likelihood of network problems over time. The top k′ cells that are most relevant to the user are retained and used to generate the feature profile of the of the network level. In addition, the UE-level model 210 has a UE-level feature. The UE-level feature is from the network session logs for the particular user device. That provides information such as times when the user cannot connect to cellular sessions or whether the session was terminated normally.. The UE-level model 210 further correlates the two features by the time channel in order to learn the temporal feature correlations between the CLL feature and the UE-level feature.
Fig.1/2A-2P & ¶0105 - For effective troubleshooting, the system and method correlate the UE-level profile features with the network status of the top k′ reference cell sites. To achieve this goal, the system and method creates a cell-level profile 244 for the reference cell sites by using the learned features from the cell-level model, namely, HIV and H. For each UE, the system and method look back over a one-week historical time window and construct the corresponding feature profiles 244. One week or seven days is an exemplary time window size. However, any suitable window size may be used. The extracted UE-level and cell-level features are concatenated over the time dimension for temporal correlation learning, forming UE-level feature profiles 246. The feature engineering method of the UE-level model 210 is shown in FIG. 2I. In the left side of FIG. 2I, the system and method apply the pre-trained cell-level model 209 of FIG. 2G and uses a sliding window 238 to extract the cell-level profile features 242 over the one-week history. Also, see 259 & 262, 263 & 265 in Fig. 2N). Re. Claim 17, Wang and BIRR teach claim 12. Wang also teaches further comprising storing, by the prediction application, the service report in a data store of the communication network (Fig. 2C-2D & ¶0057 - the troubleshooting and resolution process 200 starts from retrieving the network logs on both the cell-site level and the user equipment level and creating a comprehensive feature profile for each customer who contacts the care service. The troubleshooting and resolution process 200 further uses a learning-based troubleshooting model that can automatically and efficiently find the root cause of the service problems by learning from the customer profile features. Fig. 2C-2D & ¶0071 - UE-level model 210 learns from the symptoms reported by the user and from user-level network log information… The input to the cell-level model 209 is information from cell-level network logs. The goal of the cell-level model 209 model is to identify the anomalies in the network side which can cause the customer care issue.). Re. Claim 18, Wang and BIRR teach claim 12. Wang further teaches wherein the preventative resolution comprises one or more corrective actions to be taken in association with the network element to prevent the predicted incident. (Fig.1/2A-2P & ¶0155 - In FIG. 2J(c), the upper portion shows values for a network problem probability that is learned by the ML model for two different cell sites, cell site R1 and cell site R2. The network problem probability is labelled R1 Risk and the network problem probability for cell site R2 is labelled R2 risk. The lower portion of FIG. 2J(c) illustrates an aspect of user in relation to the network problem probability. This illustrates what happened on the user side at the UE device. The lower portion of FIG. 2J(c) illustrates the intervals of the cellular sessions of the UE that were carried by each of the two major reference cell sites, cell site R1 and cell site R2. Specifically, a designation Other/Idle means the device was carried by other cell sites or the device was idle; a designation R1/R2 Normal means the sessions with R1/R2 were closed normally; a designation R1/R2 RAT means the radio access technology (RAT) was changed. The simultaneous session occupations with cell site R1 and cell site R2 represent that the device was handed off from one cell site to another cell site. 
Re. Claim 18, Wang and BIRR teach claim 12. Wang further teaches wherein the preventative resolution comprises one or more corrective actions to be taken in association with the network element to prevent the predicted incident. (Fig. 1/2A-2P & ¶0155 - In FIG. 2J(c), the upper portion shows values for a network problem probability that is learned by the ML model for two different cell sites, cell site R1 and cell site R2. The network problem probability for cell site R1 is labelled R1 Risk and the network problem probability for cell site R2 is labelled R2 Risk. The lower portion of FIG. 2J(c) illustrates an aspect of the user in relation to the network problem probability. This illustrates what happened on the user side at the UE device. The lower portion of FIG. 2J(c) illustrates the intervals of the cellular sessions of the UE that were carried by each of the two major reference cell sites, cell site R1 and cell site R2. Specifically, a designation Other/Idle means the device was carried by other cell sites or the device was idle; a designation R1/R2 Normal means the sessions with R1/R2 were closed normally; a designation R1/R2 RAT means the radio access technology (RAT) was changed. The simultaneous session occupations with cell site R1 and cell site R2 represent that the device was handed off from one cell site to another cell site. Fig. 1/2A-2P & ¶0156 - after the occurrence of Day 6 and Day 7 outages on the cell site designated 1st NB, the risk of a network issue at cell site R1 increases dramatically. The network problem probability stays high over Day 6 and Day 7. In contrast, the network problem probability for cell site R2 remains relatively low. The cell site designated 1st NB is the nearest-neighbor cell site to cell site R1. After the occurrence of the outages on Day 6 and Day 7, the total session length with cell site R2 was significantly reduced, and service for the device was mostly carried by R1. Thus, the ML model determines from KPI data for the cell sites in the local area that an outage or disruption at the cell site designated 1st NB causes substantial disruption on cell site R1 but only slight disruption on cell site R2. Fig. 1/2A-2P & ¶0157 - Referring again to FIG. 2O, at step 273, the method 268 determines if a network problem identified at step 271 and step 272 may affect one or more categories of users. For example, the method may identify one or more users who were impacted by the network problem and identify one or more categories in which those users are categorized. Other users in the same categories may be affected as well. In an example, a user was affected by a hard outage by losing all service in the region served by a particular cell site. The user is assigned to a particular category of users served by that cell site at the time. Step 273 may include identifying all other users in that particular category. Such identification may be used to mitigate further problems for those users in the same category. Fig. 1/2A-2P & ¶0164 - if no suitable network protection is available, at step 275, the method 268 includes operations to take action to protect users in the affected categories. Actions may be taken to isolate the users in an affected category from the service degradation. Such actions may be taken prophylactically to prevent a service disruption or degradation for other users in the affected category. Such actions may be taken to maintain a continuous user experience for other users in the affected category.)
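The category-based protection step in ¶0157 and ¶0164 (find the categories containing impacted users, then treat every user in those categories as at risk) reduces to a small set operation. A hypothetical sketch, with invented field names:

```python
# Hypothetical sketch of category-based user protection: users who
# share a category with an impacted user are flagged for prophylactic
# action (field names invented for the example).

users = [
    {"id": "u1", "category": "cell_R1", "impacted": True},
    {"id": "u2", "category": "cell_R1", "impacted": False},
    {"id": "u3", "category": "cell_R2", "impacted": False},
]

affected_categories = {u["category"] for u in users if u["impacted"]}
# Everyone in an affected category is a candidate for protective action,
# even if they have not experienced the problem themselves.
at_risk = [u["id"] for u in users if u["category"] in affected_categories]
# at_risk == ["u1", "u2"]
```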
Claims 4, 10 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Wang, in view of BIRR, further in view of MARZBAN et al. (2025/0175822), MARZBAN hereinafter.

Re. Claims 4, 10 and 16, Wang and BIRR teach claims 1, 7 and 12. Yet, Wang and BIRR do not expressly teach wherein the prediction output further comprises a confidence score associated with the predicted incident at the network element, wherein the confidence score indicates a level of certainty regarding the predicted incident and is based on a history of the predictive model successfully predicting the predicted incident at one or more other network elements in the radio access network.

However, in the analogous art, MARZBAN explicitly discloses wherein the prediction output further comprises a confidence score associated with the predicted incident at the network element, wherein the confidence score indicates a level of certainty regarding the predicted incident and is based on a history of the predictive model successfully predicting the predicted incident at one or more other network elements in the radio access network. (Fig. 1-11 & ¶0132 - an interference prediction engine (e.g., implemented at a UE 704, implemented at a network entity 715, or both, etc.) can be configured to perform interference prediction based on a combination of one or more previous interference measurements (e.g., using one or more previous IMRs and/or historical data associated with one or more previous IMRs) and/or one or more previous interference predictions (e.g., interference predictions determined during one or more previous interference prediction resources or time allocations for interference prediction). Different configurations of the type, periodicity, number, and patterns of the IMRs used by the interference prediction engine to obtain interference measurements can correspond to different interference prediction accuracy and/or confidence levels (e.g., where one or more past interference measurements using the configured IMRs are used to determine the interference prediction). Fig. 1-11 & ¶0138 - machine learning interference prediction can be associated with a corresponding confidence level (e.g., a percentage, etc.) for each respective interference value that is predicted by the machine learning interference prediction network. In some cases, a UE can be configured (e.g., pre-configured, configured based on signaling from a network entity, etc.) with an interference prediction confidence threshold corresponding to a minimum or target confidence level for the interference prediction determined by the machine learning interference prediction network implemented by the UE. For instance, an interference prediction with a 90% confidence level is within a configured interference prediction confidence threshold of 80%, but is outside of a configured interference prediction confidence threshold of 95%. In another example, an interference prediction with an 80% confidence level is within a configured interference prediction confidence threshold of 60% but is not within a configured interference prediction confidence threshold of 90% or 95%, etc.)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Wang’s invention of a system and a method for automatically identifying and resolving service issues in a cellular communication network or other mobility network and BIRR’s invention of a system and a method for utilizing machine learning models to determine RAN (Radio Access Network) antenna performance impact in a wireless communication system to include MARZBAN’s invention of a system and a method for reporting information associated with interference measurement and/or prediction resources in a wireless communication system, because it provides reporting of information indicative of the interference prediction accuracy and/or confidence level associated with a UE interference prediction operating in the wireless communication system. (¶0002-¶0008, MARZBAN)
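MARZBAN's confidence-threshold examples in ¶0138 amount to a simple comparison, which the sketch below mirrors using the exact figures from the quote (the function and variable names are invented):

```python
# The threshold comparison implied by ¶0138, using the quoted figures
# (function and variable names are invented for the example).

def within_threshold(confidence: float, minimum: float) -> bool:
    """A prediction 'is within' a configured threshold when its
    confidence level meets or exceeds the configured minimum."""
    return confidence >= minimum

assert within_threshold(0.90, 0.80)        # 90% satisfies an 80% threshold
assert not within_threshold(0.90, 0.95)    # but not a 95% threshold
assert within_threshold(0.80, 0.60)        # 80% satisfies a 60% threshold
assert not within_threshold(0.80, 0.90)    # but not a 90% threshold
```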
Claims 6 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Wang, in view of BIRR, further in view of Khanka; Bhagwan Singh Bothell (2021/0321274, same assignee (T-Mobile) but published more than a year before the EFD of the instant application), Khanka hereinafter.

Re. Claims 6 and 11, Wang and BIRR teach claims 1 and 7. Yet, Wang and BIRR do not expressly teach wherein the current radio access data comprises fault management data indicating a warning alarm triggered at the network element and performance data indicating a degrading performance count at the network element.

However, in the analogous art, Khanka explicitly discloses wherein the current radio access data comprises fault management data indicating a warning alarm triggered at the network element and performance data indicating a degrading performance count at the network element. (Fig. 1-5 & ¶0012 - Due to a fault 124, signals along the transmission pathway 112 can be blocked or degraded. Blocking of signals along the transmission pathway 112 can disable the cell tower 110, preventing its use in the cellular network. Degradation of signals along the transmission pathway 112 may not disable the cell tower 110 but can reduce its operating efficiency, such as reducing Key Performance Indicators (KPIs) of the cell tower, or creating Quality of Service (QoS) issues. The KPIs are various metrics that are tracked to assess how efficiently or correctly the cell tower 110 is operating. The QoS are indications of how satisfied the users of the cellular network are with the cellular service. The reduction in a KPI or the creation of a QoS issue can adversely affect the operating efficiency of the cell tower 110 and the cellular network of which the cell tower 110 is a part. As such, faults 124 can have a significant impact on the cellular network and its users. Fig. 1-5 & ¶0031 - The data message can include faults 124 identified on the transmission pathway 112 of the cell tower 110. The network management center can use this fault 124 information to schedule repairs or take other actions to correct the faults 124. Additionally, the network management center can communicate with the base station 250, such as to send instructions. For example, the network management center can monitor performance parameters of the cell tower 110, such as the KPIs. If the network management center notices that the KPIs are decreasing, the network management center can instruct the base station 250 to cause the monitoring device 210 to generate the output 234 so that the base station 250 can identify if a fault 124 is present on the transmission pathway 112.)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Wang’s invention of a system and a method for automatically identifying and resolving service issues in a cellular communication network or other mobility network and BIRR’s invention of a system and a method for utilizing machine learning models to determine RAN (Radio Access Network) antenna performance impact in a wireless communication system to include Khanka’s invention of cell tower monitoring systems in a wireless communication system, because it provides an efficient mechanism in monitoring and diagnostic systems which can monitor the status of a cell tower, identify faults or failures in the cell tower, and provide fault information to network management so that the cell tower can be effectively repaired by a technician with precise diagnostic fault information, thereby improving user experience in the wireless communication system. (¶0002-¶0003, Khanka)
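The monitoring behavior in Khanka's ¶0031 (declining KPIs trigger a fault check on the transmission pathway) can be sketched as below. The degradation test and all identifiers are illustrative assumptions, not Khanka's implementation.

```python
# Hypothetical monitoring sketch: a strictly declining KPI trend
# triggers a fault check on the transmission pathway.

def kpis_degrading(history, span=3):
    """True if the KPI declined monotonically over the last `span` samples."""
    recent = history[-span:]
    return all(a > b for a, b in zip(recent, recent[1:]))

def monitor(kpi_history, run_fault_check):
    if kpis_degrading(kpi_history):
        return run_fault_check()  # e.g., probe the pathway and report faults
    return None

# Declining KPIs -> the (stubbed) fault check is invoked.
result = monitor([0.95, 0.90, 0.84, 0.71], lambda: "fault check scheduled")
```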
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Wang, in view of BIRR, further in view of Prakash et al. (2021/0397495, same assignee (T-Mobile) but published more than a year before the EFD of the instant application), Prakash hereinafter.

Re. Claim 13, Wang and BIRR teach claim 12. Yet, Wang and BIRR do not expressly teach wherein the historical incident data indicates a prior resolution of the prior incident at the network element, wherein the historical radio access data comprise at least one of fault management data indicating one or more alarms triggered at the network element at a time of the prior incident at the network element, performance data indicating one or more counts associated with a performance of the network element at the time of the prior incident at the network element, or event logs associated with one or more events occurring at the network element at the time of the prior incident at the network element.

However, in the analogous art, Prakash explicitly discloses wherein the historical incident data indicates a prior resolution of the prior incident at the network element, wherein the historical radio access data comprise at least one of fault management data indicating one or more alarms triggered at the network element at a time of the prior incident at the network element, performance data indicating one or more counts associated with a performance of the network element at the time of the prior incident at the network element, or event logs associated with one or more events occurring at the network element at the time of the prior incident at the network element. (Fig. 1-13 & ¶0080 - In step 1310, the processor can obtain a machine learning model trained to predict and resolve the hardware error based on the performance indicator. To obtain the machine learning model, the processor can train the machine learning model. To train the machine learning model, the processor can obtain a historical application log, a historical system performance indicator, or a historical record of prior hardware errors including the multiple issue tickets and the multiple issue ticket resolutions. The processor can train the machine learning model to detect the anomaly, predict the occurrence of the hardware error, and obtain the resolution to the prior hardware error based on the historical application log, the historical system performance indicator, and the historical record of prior hardware errors comprising multiple issue tickets and multiple issue ticket resolutions. Also, the examiner interprets that only one of the claimed features needs to be mapped because of the presence of “or” in the limitation.)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Wang’s invention of a system and a method for automatically identifying and resolving service issues in a cellular communication network or other mobility network and BIRR’s invention of a system and a method for utilizing machine learning models to determine RAN (Radio Access Network) antenna performance impact in a wireless communication system to include Prakash’s invention of a system and a method for predicting and reducing hardware-related outages in a wireless communication system, because it provides an efficient mechanism in obtaining a performance indicator associated with a wireless telecommunication network, including a system performance indicator or an application log, along with a machine learning model trained to predict and resolve a hardware error based on the performance indicator in the wireless communication system. (Abstract, Prakash)
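Prakash's ¶0080 pairs historical indicators and logs with prior issue-ticket resolutions as training data. The sketch below substitutes a toy nearest-neighbor lookup for the trained model, purely to show the data shape; the field names and the 1-NN stand-in are assumptions for illustration, not Prakash's method.

```python
# Hypothetical data shape for training on historical records: signals
# (performance indicators, log anomalies) labeled with the resolution
# recorded on the prior issue ticket.

historical = [
    {"perf": 0.42, "log_anomaly": True,  "resolution": "restart line card"},
    {"perf": 0.91, "log_anomaly": False, "resolution": "no action needed"},
]

def predict_resolution(perf, records):
    """Toy 1-nearest-neighbor stand-in for a trained model: reuse the
    resolution of the most similar historical record."""
    nearest = min(records, key=lambda r: abs(r["perf"] - perf))
    return nearest["resolution"]

print(predict_resolution(0.40, historical))  # -> "restart line card"
```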
Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Wang, in view of BIRR, further in view of Rao et al. (2025/0126487), Rao hereinafter.

Re. Claim 14, Wang and BIRR teach claim 12. Yet, Wang and BIRR do not expressly teach wherein the current radio access data comprises fault management data indicating a minor alarm triggered at the network element and performance data indicating a dropped call count at the network element.

However, in the analogous art, Rao explicitly discloses wherein the current radio access data comprises fault management data indicating a minor alarm triggered at the network element and performance data indicating a dropped call count at the network element. (Fig. 1-8 & ¶0085 - Based on the data, a determination is made whether a count of the plurality of the sites experiencing the power outage is greater than a predetermined count threshold. Based on the count being greater than the predetermined count threshold, a determination is made whether the plurality of the sites experiencing the power outage are associated with a same geography/administrative district. A determination is made whether the plurality of the sites in the mobile network experience the power outage based on planned activity. The plurality of the sites in the mobile network that are in a higher configuration are converted to a lower configuration. A check is made whether a current configuration of the plurality of the sites in the mobile network experiences the power outage. A determination is made whether the sites experiencing the power outage are in the higher configuration or the lower configuration. The sites experiencing the power outage that are in the higher configuration are converted to the lower configuration by executing a change request. A determination is made whether a cause for the power outage of the plurality of the sites has been addressed. After waiting for a predetermined delay time threshold, a determination is made whether an external alarm for the power outage of the plurality of the sites converted to the lower configuration has been cleared and whether the plurality of the sites converted to the lower configuration are able to operate in the higher configuration. Fig. 1-8 & ¶0094 - criteria or parameters associated with the crisis are determined, such as a number of site outages, a number of people impacted, and whether a high-density area or low-density area is involved. On the basis of these parameters, a determination is made whether a crisis is identified or whether the crisis is to be analyzed. A count of Sites Experiencing A Power Outage 846 is compared to a Predetermined Site Power Outage Threshold 848, e.g., 50%. Management Platform 840 includes a Fault Manager 832 for tracking issues across a mobile network. The Fault Manager 832 provides a number of total sites and the number of sites that are impacted by the crisis, e.g., the number of sites that experience a power outage.)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Wang’s invention of a system and a method for automatically identifying and resolving service issues in a cellular communication network or other mobility network and BIRR’s invention of a system and a method for utilizing machine learning models to determine RAN (Radio Access Network) antenna performance impact in a wireless communication system to include Rao’s invention of a system and a method for automating a change in site configuration during a crisis in a wireless communication system, because it provides an efficient mechanism for modifying a site configuration during a system-wide outage, which is quick and efficient compared to manual intervention, which is slow to react to crisis situations or other situations resulting in power disruption in the wireless communication system. (¶0002-¶0009, Rao)
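Rao's ¶0085/¶0094 logic (compare the outage count to a threshold, confirm a single district, then downgrade high-configuration sites) is essentially a filter-and-threshold routine. A sketch under those assumptions, with the 50% figure taken from the quote and all field names invented:

```python
# Illustrative crisis check: when the share of sites reporting a power
# outage in a single district crosses the threshold, high-configuration
# sites are selected for conversion to a lower configuration.

OUTAGE_THRESHOLD = 0.5  # e.g., 50% of sites, per the quoted example

def sites_to_downgrade(sites):
    impacted = [s for s in sites if s["power_outage"]]
    if len(impacted) / len(sites) <= OUTAGE_THRESHOLD:
        return []
    if len({s["district"] for s in impacted}) != 1:
        return []  # only act when the outage is confined to one district
    # a change request would be executed for each high-configuration site
    return [s["id"] for s in impacted if s["config"] == "high"]

sites = [
    {"id": "s1", "district": "D1", "power_outage": True,  "config": "high"},
    {"id": "s2", "district": "D1", "power_outage": True,  "config": "low"},
    {"id": "s3", "district": "D1", "power_outage": False, "config": "high"},
]
print(sites_to_downgrade(sites))  # -> ["s1"]
```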
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure:

Wang et al. (2024/0121628); see ¶0074-¶0171 along with Fig. 1, 2A-2P.
Wang et al. (2024/0114362); see Abstract, ¶0028-¶0166 along with Fig. 1, 2A-2P.
IEEE - The Future of Broadband Access Network Architecture and Intelligent Operations; Charlie Chen-Yui Yang, Guangzhi Li, Zonghuan Wu, Kaiyu Zhang, Xiang Liu; Futurewei; 2019 International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery (CyberC); see §I-IV.
IEEE - Using Anomaly Detection Techniques for Securing 5G Infrastructure and Applications; Athanasios Priovolos, Dimitris Lioprasitis, Georgios Gardikis, Socrates Costicoglou; R&D Department, Space Hellas S.A., Athens, Greece; 2021 IEEE International Mediterranean Conference on Communications and Networking (MeditCom); see §I-IV.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMMED SHAMSUL CHOWDHURY, whose telephone number is (571) 272-0485. The examiner can normally be reached Monday-Thursday, 9 AM-6 PM EST (Friday variable).

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Hassan Phillips, can be reached at 571-272-3940. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/MOHAMMED S CHOWDHURY/
Primary Examiner, Art Unit 2467

Prosecution Timeline

Jan 24, 2024
Application Filed
Mar 11, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604266
Terminal State Control Method, Terminal, and Non-transitory Readable Storage Medium
2y 5m to grant Granted Apr 14, 2026
Patent 12604230
ADAPTIVE CONFIGURED GRANT SCHEDULING
2y 5m to grant Granted Apr 14, 2026
Patent 12598033
NETWORK CODING FOR MULTI-LINK DEVICE NETWORKS
2y 5m to grant Granted Apr 07, 2026
Patent 12593373
Discontinuous Reception Configuration Method and Device
2y 5m to grant Granted Mar 31, 2026
Patent 12587963
DEVICE, METHOD, AND SYSTEM FOR CHANNEL SWITCHING
2y 5m to grant Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
84%
Grant Probability
99%
With Interview (+25.2%)
2y 8m
Median Time to Grant
Low
PTA Risk
Based on 344 resolved cases by this examiner. Grant probability derived from career allow rate.
