Prosecution Insights
Last updated: April 19, 2026
Application No. 18/788,360

AI-BASED SYSTEM FOR ROOT CAUSE ANALYSES OF OPERATIONAL ANOMALIES IN WIRELESS NETWORKS

Status: Final Rejection (§103)
Filed: Jul 30, 2024
Examiner: ZARKA, DAVID PETER
Art Unit: 2449
Tech Center: 2400 — Computer Networks
Assignee: T-Mobile Innovations LLC
OA Round: 2 (Final)

Grant Probability: 82% (Favorable)
Expected OA Rounds: 3-4
Median Time to Grant: 3y 3m
Grant Probability With Interview: 96%

Examiner Intelligence

Career Allow Rate: 82% (468 granted / 567 resolved; +24.5% vs TC avg) — above average
Interview Lift: +13.1% — moderate lift among resolved cases with interview
Typical Timeline: 3y 3m average prosecution (29 currently pending)
Career History: 596 total applications across all art units

Statute-Specific Performance

§101: 12.6% (-27.4% vs TC avg)
§102: 14.0% (-26.0% vs TC avg)
§103: 41.7% (+1.7% vs TC avg)
§112: 25.7% (-14.3% vs TC avg)
Tech Center averages are estimates • Based on career data from 567 resolved cases
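One sanity check worth noting: each statute line pairs the examiner's allowance rate with a delta versus the Tech Center average, and back-solving every row yields the same ~40% baseline. A minimal sketch of that arithmetic (the back-solving is our reading of the table, not something the tool documents):

```python
# (examiner allowance rate %, delta vs. TC average %) per statute,
# copied from the table above.
stats = {
    "§101": (12.6, -27.4),
    "§102": (14.0, -26.0),
    "§103": (41.7, +1.7),
    "§112": (25.7, -14.3),
}

# Implied TC average = rate - delta; every row back-solves to 40.0%,
# consistent with a single Tech Center baseline estimate.
for statute, (rate, delta) in stats.items():
    print(statute, round(rate - delta, 1))
```

Since all four rows recover the same baseline, the deltas appear to be computed against one shared Tech Center estimate rather than per-statute averages.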

Office Action — §103
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the America Invents Act (AIA).

Response and Claim Status

The instant Office action is responsive to the response received January 13, 2026 (the Response). In response to the Response, the previous (1) rejection of claims 6 and 14 under 35 U.S.C. § 112(b) and (2) rejection of claims 1–20 under 35 U.S.C. § 103 are WITHDRAWN. Claims 1–20 are currently pending.

Claim Rejections – 35 U.S.C. § 103

The following is a quotation of 35 U.S.C. § 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Sun and Polisetty

Claims 1–3, 7, 9–11, 15, 17, and 18 are rejected under 35 U.S.C. § 103 as being obvious over Sun et al. (US 2025/0112815 A1; filed Sept. 29, 2023) in view of Polisetty et al. (US 2026/0003723 A1; filed June 27, 2024).

Response to arguments

Applicants argue Kenny does not ascribe any particular functionality to the AI model and instead only discusses an AI model as an example of a system that may be provided by the accelerator. For example, Kenny does not discuss using the AI model to detect packet errors but instead discusses using packet processors to detect errors in the accelerator. Id. ¶ 13. Merely mentioning an AI model in passing is insufficient to render obvious an AI model trained to accomplish a specific task. As such, a literal combination of Sun and Kenny would not result in the above limitation. Response 6–7. Applicants’ arguments have been considered but are now moot in view of the new ground of rejection below.

Next, Applicants assert “Sun generally relates to analyzing error codes with a Bayesian network to detect a cause of the failure condition. Sun, Abstract. Sun discusses that a Bayesian network is a probabilistic graphical model that includes a plurality of nodes connected based on dependencies or correlations between the nodes. Id. ¶ 37.” Id. at 7. Applicants argue [w]hile a Bayesian network is an example of a model, Sun does not discuss providing the network with error code information, the network operations data, and the contextual information. Instead, the Bayesian network discussed by Sun is created based only on error codes and includes a plurality of nodes that indicate probabilistic associations between different error codes. To determine the cause of the error, Sun discusses determining the cause of the error based on the conditional probability distribution indicated by the Bayesian network. Id. ¶ 66. This is different than prompting an AI model with a combination of information and tasking the AI model to determine the cause of the error based on the provided information. As such, the cited art does not teach or reasonably suggest prompt an artificial intelligence (AI) model trained to identify root causes of operational anomalies by providing the AI model with the error code information, the network operations data, and the contextual information and tasking the AI model to identify a root cause of the operational anomaly as recited by amended claim 1. Id.

The Examiner is unpersuaded of error. At the outset, the Examiner notes Applicants’ argument is not commensurate with the scope of claim 1, which does not recite providing a network with error code information, network operations data, and contextual information.
See In re Self, 671 F.2d 1344, 1348 (CCPA 1982) (limitations not appearing in the claims cannot be relied upon for patentability). Claim 1 recites providing an AI model with error code information, network operations data, and contextual information. Moreover, Applicants’ arguments are non-responsive to the rejection, which relies on Sun’s item 200 to teach a model, and not Sun’s Bayesian network.

Of particular note, the Examiner finds Sun teaches prompting model item 200 to identify root causes of operational anomalies by providing the model item 200 with error code information, network operations data, and contextual information and tasking the model item 200 to identify a root cause of the operational anomaly. The Examiner further finds Sun’s model item 200 is not an AI model trained to identify the root causes, turning to Polisetty to show that an AI model trained to identify the root causes is known in the art. Thus, the Examiner proposes to include Polisetty’s teaching with Sun, such that the combined system predictably yields prompting an AI model trained to identify root causes of operational anomalies by providing the AI model with error code information, network operations data, and contextual information and tasking the AI model to identify a root cause of the operational anomaly.

Accordingly, Applicants’ arguments regarding Sun’s alleged individual shortcomings (see Response 7) are unavailing. Here, the rejection is not based solely on Sun, but rather on the cited references’ collective teachings. See In re Keller, 642 F.2d 413, 426 (CCPA 1981); In re Merck & Co., Inc., 800 F.2d 1091, 1097 (Fed. Cir. 1986).

The Rejection

Regarding claim 1, while Sun teaches a computing apparatus (fig. 1, item 115) comprising: one or more computer readable storage media (fig. 1, item 135); one or more processors (fig. 1, item 130) operatively coupled with the one or more computer readable storage media; and program instructions (fig. 1, items 140, 150–160) stored on the one or more computer readable storage media that, when executed by the one or more processors, direct the computing apparatus to at least: detect an operational anomaly (fig. 2, item 205; “detect the failure condition based on the collected data from the network 120” at ¶ 32) in a wireless network (fig. 1, item 120) based on error code information (“determine from the collected data if the response status codes include any error codes” at ¶ 32); capture network operations data (fig. 2, item 210; ¶¶ 33–34) relating to the operational anomaly; capture contextual information (fig. 2, item 215; ¶¶ 35–36) relating to the operational anomaly; prompt a model (fig. 2, item 200) to identify root causes (fig. 2, item 235; “determine that the event that generated error code EC1 is most likely the root cause of the failure condition” at ¶ 66) of operational anomalies by providing the model with the error code information (fig. 2, item 235 does not occur but for item 205), the network operations data (fig. 2, item 235 does not occur but for item 210), and the contextual information (fig. 2, item 235 does not occur but for item 215) and tasking the model to identify a root cause (“analyze error codes to determine the root cause of a failure condition.” at ¶ 14; fig. 2, item 235) of the operational anomaly; and receive, from the model in response to the prompt, output comprising a root cause analysis (fig. 2, item 240; ¶ 68) of the operational anomaly, Sun does not teach the model being an artificial intelligence (AI) model trained to identify the root causes. Polisetty teaches an AI model trained to identify root causes (“the generative AI model 970 is trained to identify one or more root causes for a failure” at ¶ 92).

It would have been obvious to one of ordinary skill in the art before the filing date of the invention for Sun’s model to be an AI model trained to identify Sun’s root causes as taught by Polisetty to “provide significant advantages relative to conventional techniques. For example, technical problems related to such conventional techniques are mitigated in one or more embodiments by automatically predicting a failure of a software deployment pipeline and suggesting one or more mitigation actions to mitigate one or more causes of the failure.” Polisetty ¶ 3.

Regarding claim 2, Sun teaches wherein the error code information (“determine from the collected data if the response status codes include any error codes” at ¶ 32) comprises an indication that a transaction completion metric (“any error codes” at ¶ 32 is one or more error codes) of the wireless network exceeds a respective threshold (zero error codes).

Regarding claim 3, Sun teaches wherein the transaction completion metric comprises a quantity of error codes (“any error codes” at ¶ 32 is one or more error codes) associated with a network function (¶¶ 31–32) of the wireless network within a given period of time (the time at which the collected data is received at ¶ 32).

Regarding claim 7, while Sun teaches wherein the model (fig. 2, item 200) correlates the root causes of network anomalies (“generated error code EC1 is most likely the root cause of the failure condition” at ¶ 66) to network operations data (fig. 3, item 315) based on a historical operational anomaly dataset (fig. 3, items 300, 305, 310), Sun does not teach the model being an AI model trained for the correlation. Polisetty teaches a trained AI model (“the generative AI model 970 is trained to identify one or more root causes for a failure” at ¶ 92).
It would have been obvious to one of ordinary skill in the art before the filing date of the invention for Sun’s model to be a trained AI model as taught by Polisetty to “provide significant advantages relative to conventional techniques. For example, technical problems related to such conventional techniques are mitigated in one or more embodiments by automatically predicting a failure of a software deployment pipeline and suggesting one or more mitigation actions to mitigate one or more causes of the failure.” Polisetty ¶ 3.

Regarding claim 9, Sun teaches a method (fig. 2, item 200) of operating a computing device (fig. 1, item 115) comprising operations according to claim 1. Thus, references/arguments equivalent to those presented for claim 1 are equally applicable to claim 9.

Regarding claims 10, 11, and 15, claims 2, 3, and 7, respectively, recite substantially similar features. Thus, references/arguments equivalent to those presented for claims 2, 3, and 7 are equally applicable to, respectively, claims 10, 11, and 15.

Regarding claim 17, while Sun teaches one or more computer readable storage media (fig. 1, item 135) having program instructions stored thereon that, when executed by one or more processors (fig. 1, item 130), direct a computing apparatus (fig. 1, item 115) to at least: detect an operational anomaly (fig. 2, item 205; “detect the failure condition based on the collected data from the network 120” at ¶ 32) in a wireless network (fig. 1, item 120) based on a transaction completion metric (“determine from the collected data if the response status codes include any error codes” at ¶ 32); generate a reduced information set (fig. 2, item 210; “process 200 includes operation 210 of identifying, by the processor, a subset of subscribers that are impacted by the failure condition” at ¶ 33) relating to the operational anomaly; capture contextual information (fig. 2, item 215; ¶¶ 35–36) relating to the operational anomaly; prompt a model (fig. 2, item 200) to identify root causes (fig. 2, item 235; “determine that the event that generated error code EC1 is most likely the root cause of the failure condition” at ¶ 66) of the operational anomalies by providing the model with the transaction completion metric (fig. 2, item 235 does not occur but for item 205), the reduced information set (fig. 2, item 235 does not occur but for item 210), and the contextual information (fig. 2, item 235 does not occur but for item 215) and tasking the model to identify a root cause of the operational anomaly (“analyze error codes to determine the root cause of a failure condition.” at ¶ 14; fig. 2, item 235); and receive, from the model in response to the prompt, output comprising a root cause analysis (fig. 2, item 240; ¶ 68) of the operational anomaly, Sun does not teach the model being an artificial intelligence (AI) model trained to identify the root causes. Polisetty teaches an AI model trained to identify root causes (“the generative AI model 970 is trained to identify one or more root causes for a failure” at ¶ 92).

It would have been obvious to one of ordinary skill in the art before the filing date of the invention for Sun’s model to be an AI model trained to identify Sun’s root causes as taught by Polisetty to “provide significant advantages relative to conventional techniques. For example, technical problems related to such conventional techniques are mitigated in one or more embodiments by automatically predicting a failure of a software deployment pipeline and suggesting one or more mitigation actions to mitigate one or more causes of the failure.” Polisetty ¶ 3.

Regarding claim 18, claim 3 recites substantially similar features. Thus, references/arguments equivalent to those presented for claim 3 are equally applicable to claim 18.

Sun, Polisetty, and Barrett

Claims 4, 5, 12, and 13 are rejected under 35 U.S.C. § 103 as being obvious over Sun in view of Polisetty, and in further view of Barrett et al. (US 2022/0329510 A1; filed Apr. 7, 2022).

Regarding claim 4, while Sun teaches wherein the network operations data (fig. 2, item 210; ¶¶ 33–34) comprises a subset of subscribers (“process 200 includes operation 210 of identifying, by the processor, a subset of subscribers that are impacted by the failure condition” at ¶ 33) on the wireless network, Sun does not teach the subset of subscribers including packet capture trace records of transactions. Barrett teaches packet capture trace records of transactions (“a packet capture trace file comprising packets associated with a synthetic transaction of a service provided by a service provider” at ¶ 16). It would have been obvious to one of ordinary skill in the art before the filing date of the invention for Sun’s subset of subscribers to include packet capture trace records of transactions as taught by Barrett “to evaluate the performance of the digital services, or identify issues associated with providing the digital services.” Barrett ¶ 2.

Regarding claim 5, while Sun teaches wherein the network operations data (fig. 2, item 210; ¶¶ 33–34) comprises a reduced information set (“process 200 includes operation 210 of identifying, by the processor, a subset of subscribers that are impacted by the failure condition” at ¶ 33) based on filtered records (“all subscribers” at ¶ 33 are filtered), Sun does not teach the filtered records being filtered packet capture trace records. Barrett teaches packet capture trace records (“a packet capture trace file comprising packets associated with a synthetic transaction of a service provided by a service provider” at ¶ 16). It would have been obvious to one of ordinary skill in the art before the filing date of the invention for Sun’s filtered records to be filtered packet capture trace records as taught by Barrett “to evaluate the performance of the digital services, or identify issues associated with providing the digital services.” Barrett ¶ 2.
Regarding claims 12 and 13, claims 4 and 5, respectively, recite substantially similar features. Thus, references/arguments equivalent to those presented for claims 4 and 5 are equally applicable to, respectively, claims 12 and 13.

Sun, Polisetty, and Schwarzmann

Claims 6 and 14 are rejected under 35 U.S.C. § 103 as being obvious over Sun in view of Polisetty, and in further view of Schwarzmann (US 2007/0166014 A1; filed Jan. 17, 2006).

Regarding claim 6, Sun does not teach wherein the program instructions further direct the computing apparatus to filter out nonessential information from the packet capture trace records resulting in the filtered packet capture trace records. Schwarzmann teaches program instructions that further direct a computing apparatus (fig. 2, item 204) to filter out nonessential information from data, resulting in the filtered data (“the removal of redundant or ‘non-essential’ data from a DVD data stream” at ¶ 11). It would have been obvious to one of ordinary skill in the art before the filing date of the invention for the Sun/Polisetty/Barrett combination’s program instructions to further direct the computing apparatus to filter out nonessential information from the packet capture trace records resulting in the filtered packet capture trace records as taught by Schwarzmann so that “the data storage drive is provided with additional capacity.” Schwarzmann ¶ 11.

Regarding claim 14, claim 6 recites substantially similar features. Thus, references/arguments equivalent to those presented for claim 6 are equally applicable to claim 14.

Sun, Polisetty, and Mattison

Claims 8, 16, and 20 are rejected under 35 U.S.C. § 103 as being obvious over Sun in view of Polisetty, and in further view of Mattison et al. (US 2024/0320642 A1; filed Mar. 21, 2023).

Regarding claim 8, Sun does not teach wherein the historical operational anomaly dataset comprises identified root causes of historical operational anomalies correlated to historical network operations data.
Mattison teaches identified root causes of historical operational anomalies correlated to historical network operations data (“historical event codes associated with corresponding historical anomalies, historical actions that resolved corresponding historical anomalies” at ¶ 33). It would have been obvious to one of ordinary skill in the art before the filing date of the invention for Sun’s historical operational anomaly dataset to comprise identified root causes of historical operational anomalies correlated to historical network operations data as taught by Mattison “to identify the anomalies and/or identify actions that may be performed . . . in response to identifying particular anomalies.” Mattison ¶ 33.

Regarding claims 16 and 20, claim 8 recites substantially similar features. Thus, references/arguments equivalent to those presented for claim 8 are equally applicable to claims 16 and 20.

Sun, Polisetty, Barrett, and Malboubi

Claim 19 is rejected under 35 U.S.C. § 103 as being obvious over Sun in view of Polisetty, in further view of Barrett, and in further view of Malboubi et al. (US 2023/0136756 A1; filed Nov. 3, 2021).

Regarding claim 19, while Sun teaches wherein the reduced information set (“process 200 includes operation 210 of identifying, by the processor, a subset of subscribers that are impacted by the failure condition” at ¶ 33) comprises data extracted from records (¶ 33), Sun does not teach (A) the data being transaction data; (B) the records being packet capture trace records; and (C) the data being formatted in a natural language format.

(A), (B) Barrett teaches transaction data (“generating synthetic transactions” at ¶ 16) and packet capture trace records (“generate a packet capture trace file comprising packets associated with a synthetic transaction of a service provided by a service provider” at ¶ 16).
It would have been obvious to one of ordinary skill in the art before the filing date of the invention for Sun’s data to be transaction data and for Sun’s records to be packet capture trace records as taught by Barrett “to evaluate the performance of the digital services, or identify issues associated with providing the digital services.” Barrett ¶ 2.

(C) Malboubi teaches data formatted in a natural language format (“the user can enter all or a portion of the query in a desired format (e.g., natural language format” at ¶ 37). It would have been obvious to one of ordinary skill in the art before the filing date of the invention for Sun’s data to be formatted in a natural language format as taught by Malboubi for “enhanced (e.g., improved, faster, more efficient, and/or optimized) performance and lower (e.g., reduced or minimized) latencies.” Malboubi ¶ 55.

Conclusion

The prior art made of record and not relied upon is considered pertinent to Applicants’ disclosure: US-20250130884-A1.

Applicants’ amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicants are reminded of the extension of time policy as set forth in 37 C.F.R. § 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to § 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the Examiner should be directed to DAVID P. ZARKA, whose telephone number is (703) 756-5746. The Examiner can normally be reached Monday–Friday from 9:30 AM–6 PM ET. If attempts to reach the Examiner by telephone are unsuccessful, the Examiner’s supervisor, Vivek Srivastava, can be reached at (571) 272-7304. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://portal.uspto.gov/external/portal. Should you have questions about access to the Private PAIR system, contact the Electronic Business Center (EBC) at (866) 217-9197 (toll-free).

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, Applicants are encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

/DAVID P ZARKA/
Patent Examiner, Art Unit 2449

Prosecution Timeline

Jul 30, 2024 — Application Filed
Oct 14, 2025 — Non-Final Rejection (§103)
Jan 13, 2026 — Response Filed
Mar 11, 2026 — Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602516
IMPLEMENTING USER-SPECIFIC LOCAL ADMINISTRATOR RIGHTS USING ARTIFICIAL INTELLIGENCE TECHNIQUES
2y 5m to grant • Granted Apr 14, 2026
Patent 12598157
APPARATUS HAVING A NETWORK COMPONENT, CONNECTED BETWEEN AT LEAST TWO NETWORKS, WITH RECORDING FUNCTIONALITY FOR RECORDING COMMUNICATION RELATIONSHIPS PRESENT DURING THE PASSAGE OF DATA TRAFFIC, AND METHOD FOR OPERATING A NETWORK COMPONENT
2y 5m to grant • Granted Apr 07, 2026
Patent 12587514
ROUTING PACKET TO TCP TUNNEL CLIENT PROGRAM
2y 5m to grant • Granted Mar 24, 2026
Patent 12580804
NETWORK DEVICE DETERMINING A SYSTEM ISSUE OF ANOTHER NETWORK DEVICE
2y 5m to grant • Granted Mar 17, 2026
Patent 12580890
PREVENTING THE INTRODUCTION OF MALICIOUS-EDGE-GATEWAY THE EDGE MANAGEMENT'S FLEET VIA NETWORK INTERCEPTOR AND IDENTITY VALIDATION
2y 5m to grant • Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 82%
With Interview: 96% (+13.1%)
Median Time to Grant: 3y 3m
PTA Risk: Moderate

Based on 567 resolved cases by this examiner. Grant probability derived from career allow rate.
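The headline projections are consistent with the examiner's career figures under a simple reading. A minimal sketch of that arithmetic (the additive treatment of the interview lift is our assumption; the tool does not state its model):

```python
# Career figures from the dashboard: 468 grants out of 567 resolved cases,
# and a +13.1 percentage-point interview lift.
granted, resolved = 468, 567
interview_lift = 13.1

# 468 / 567 ≈ 82.5%, displayed as the "82%" grant probability.
allow_rate = 100 * granted / resolved

# Assumption (ours): the lift adds to the base rate, giving ≈ 95.6%,
# which rounds to the displayed "96% With Interview" figure.
with_interview = allow_rate + interview_lift

print(f"allow rate {allow_rate:.1f}%, with interview {with_interview:.1f}%")
```

If the lift were instead multiplicative or conditioned on case mix, the composed figure would differ slightly, so treat this only as a consistency check on the displayed numbers.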
