Prosecution Insights
Last updated: April 19, 2026
Application No. 17/387,979

AI Based Traffic Classification

Status: Final Rejection (§103)
Filed: Jul 28, 2021
Examiner: ALAM, HOSAIN T
Art Unit: 2132
Tech Center: 2100 — Computer Architecture & Software
Assignee: Parallel Wireless Inc.
OA Round: 2 (Final)
Grant Probability: 36% (At Risk)
Expected OA Rounds: 3-4
Time to Grant: 1y 9m
Grant Probability With Interview: 56%

Examiner Intelligence

Career Allow Rate: 36% (5 granted / 14 resolved; -19.3% vs TC avg)
Interview Lift: +20.0% for resolved cases with interview
Avg Prosecution: 1y 9m (fast prosecutor)
Career History: 26 total applications across all art units; 12 currently pending

Statute-Specific Performance

§101: 22.6% (-17.4% vs TC avg)
§103: 45.6% (+5.6% vs TC avg)
§102: 15.9% (-24.1% vs TC avg)
§112: 10.8% (-29.2% vs TC avg)
Tech Center averages are estimates • Based on career data from 14 resolved cases

Office Action

§103
This action is in response to the amendment and/or request for reconsideration filed on 07/17/2025. Claims 1-20 are currently pending, claims 1-2, 4-9, 11-13, 15, and 17-20 having been amended. In view of the applicants' amendment, the rejections set forth under 35 U.S.C. 112(b) (or 35 U.S.C. 112, second paragraph, pre-AIA) and under 35 U.S.C. 101 (the claimed invention allegedly being directed to an abstract idea without significantly more) are hereby withdrawn. Applicants' arguments with respect to the rejection of claims 1-20 under 35 U.S.C. 103 are directed to the claims as currently amended. A new rejection of the amended claims is set forth below. Claims are interpreted in light of applicants' disclosure; particular attention has been paid to the following paragraphs of applicants' disclosure.

[0005] Network providers have used Deep Packet Inspection (DPI) in the past to gain insight into the traffic flowing through their infrastructure. They used this insight to throttle and prioritize traffic so that critical applications do not see any service degradation. Over time, as most traffic became end-to-end encrypted, the deep packet inspection approach was no longer able to identify and classify the traffic type. Network providers then started using rule-based algorithms to classify network traffic. Rule-based algorithms use traffic patterns such as the number of packets per second, packet sizes, and delay patterns to classify traffic types. Rule-based traffic pattern classification is still the most widely used approach for traffic classification, but its disadvantage is that it is difficult to keep up with new applications, changing traffic patterns across different application versions, etc., and it is therefore mostly ineffective. Instead of hand-coding the rules, AI models can be used to learn the traffic patterns and classify the traffic types more efficiently and quickly.
[0006] The effectiveness of AI for classifying encrypted traffic is already established in many publications and in the public literature. But the practical and deployment aspects of how to integrate AI into a network service, and what use cases it will help solve, are not well explored.

[0022] There are many types of AI models: a few basic models are based on simple probabilistic distributions, like linear regression, and some are based on more complex convoluted distributions, like encoder-decoder models or Siamese networks. Simple models, such as probability distribution models like linear regression or lightweight neural networks like MobileNet, can run on an x86 processor, while complex algorithms like encoder-decoder models or Siamese networks need a good amount of GPU resources to do the prediction in a reasonable time. GPUs need a lot of power and generate a lot of heat, so it is not practical to add GPUs to edge nodes like the vRU/DU. On the other hand, GPUs can be added to centralized entities like the HNG/CU.

[0023] The idea is to use two types of models, light weight and heavy weight, for traffic classification. The light weight model runs in the CWS/DU (close to the input of the traffic) and classifies the traffic. For those traffic types the light weight model cannot classify, the packet traces, along with metadata such as time of day and location, are sent to the HNG/CU. The HNG/CU analyzes the traffic with the heavy weight model and sends the classification type back to the CWS/DU. The HNG/CU also archives these packet traces from the CWS/DU along with the classified traffic class. This information is used to periodically retrain the light weight model, automatically or manually, producing an updated light weight model. The updated model is then pushed by the HNG/CU to the CWS/DU so that further traffic has a better chance of being classified at the CWS/DU itself, thus reducing the load on the HNG/CU.

[0024] FIG.
1 shows architecture 100 for intelligent traffic classification, which consists of two major components: a traffic prediction function at the edge, called the Prediction Function (PF) 101, and a Learning Function (LF) 102 in the cloud as part of the HNG cluster.

Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors.
In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-3, 8, and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over US PG-PUB 20190327175 A1, Ying et al. (hereinafter "Ying"), in view of US PG-PUB 20100014420 A1, Wang et al. (hereinafter "Wang"), and further in view of Feamster et al., "Why (and How) Networks Should Run Themselves" (hereinafter "Feamster").

Regarding Claim 1, Ying teaches a method for providing intelligent traffic classification at a mobile edge (Ying, Fig. 2, 210, p. 3, col. 1, [0034]) using Artificial Intelligence (AI), comprising: receiving a packet (step 410 of Fig. 4 is performed after receiving the packet; Ying, par. [0045]) by an edge node; performing classification of network packets at an edge node and, if the packet is unable to be classified into a traffic type class by the light weight AI model, sending the packet to a centralized entity coupled to the edge node (Ying, par. [0027]: "…to run these applications quickly and efficiently, the client 120 may request to use the GPU resources in the server 110 to run the application 150." Server 110 is equated with the claimed "centralized entity."); determining, using a heavy weight AI model, features and predictions for the packet by the centralized entity (Ying, par.
[0040]: "for the same source edge node 210 and destination edge node 220, the controller 240 may generate a plurality of network paths with different network characteristics for different types of traffic."); classifying, by the centralized entity, the packet into a traffic type class (Ying, p. 3, par. [0039]: Table 1 includes "Traffic Type 1-2"; "(such as 'type 1' in Table 1); while the second network path passing sequentially the center nodes 230-2 and 230-4 has a latency of 30 ms and a bandwidth of 10 Gbps, and thus may be applied to the latency sensitive (for example, 'type 2' in Table 1) network traffic") using the heavy weight AI model (Ying, [0045]: "…the application 150 on the client 120 may operate GPU resources of the server 110 by calling a particular API…"; "…the function specifically called … for duplicating data between the client memory and the GPU memory. Such calling of API generally causes the transmission of a large amount of data and thus requires a larger network bandwidth. In this case, the client 120 may determine that the generated data packet is bandwidth sensitive." See also Ying, par. [0026]: "…the application 150 may perform data processing or analysis tasks related to high performance computing (HPC), machine learning (ML) or deep learning (DL) and artificial intelligence (AI), and the like." The client calling an API and requiring larger bandwidth is equated with the claimed "using of the heavy weight model."); and sending, by the centralized entity, a determined traffic type class (Ying, p. 3, Table 1, Type 1-2, par. [0040]) to the light weight AI model used by the edge node.

Although Ying suggests the use of AI models (Ying, par.
[0026]: "the application 150 may perform data processing or analysis tasks related to high performance computing (HPC), machine learning (ML) or deep learning (DL) and artificial intelligence (AI), and the like"), it does not explicitly indicate the step of using a lightweight application for packet classification on the client. Wang is directed to network traffic classification and teaches the use of a lightweight application (Wang, p. 1, [0001]) involving an AI model (Wang, par. [0010]: "…at least one computing device manages network traffic to improve availability of network services by classifying network traffic flows using flow-level statistical information and a machine learning estimation based on a measurement of at least one of relevance and goodness of network features."). It would have been obvious to incorporate the lightweight application of Wang in Ying to improve the versatility of the combined Ying-Wang system, as suggested by Wang. See Wang, par. [0020]: "According to the present invention, a lightweight, low overhead and low-cost mechanism of classification provides network application information with a considerably high accuracy, which is useful in effective network management. The Wang system can be usefully employed for application classification and management of network traffic. These Examples have demonstrated that the machine learning technique, according to the present invention, is able to provide a high identification accuracy…, while requiring significantly less network resources to achieve this result."

The Ying-Wang combination does not explicitly indicate the steps of performing Prediction Function (PF) feature extraction on the packet by the edge node; performing, using a light weight AI model, traffic type classification for the packet based on the feature extraction by the edge node; and performing Learning Function (LF) feature extraction on the packet by the centralized entity, as recited in claim 1.
Feamster teaches a method for providing intelligent traffic classification at a mobile edge using Artificial Intelligence (AI), comprising: receiving a packet ("Traffic is collected from the network in the form of packet captures, IPFIX records, or DNS query logs and is used to train a detection model," sec. 4.2, p. 6); performing Prediction Function (PF) feature extraction on the packet ("Yet, many of these models incorporate simple features—often ones that can be computed or inferred from a single packet. Programmable switches could extract these features from the packets in the data plane and even compute regression functions based on these learned models, essentially computing the prediction function in-line and making real-time decisions about the nature of traffic in the network, without ever requiring off-path analysis," sec. 4.2, p. 6); performing, using a light weight AI model, traffic type classification for the packet based on the feature extraction ("simple regression models based on lightweight features could be executed in programmable switches that support customizable feature extraction and computation.... An additional challenge involves developing a new class of machine learning algorithms whereby an algorithm could perform an initial rough classification based on lightweight features (e.g., those based on metadata or coarse statistics) and trigger collection of more heavyweight features (e.g., those from packets) when classification is uncertain," sec. 3.1, p. 4); performing Learning Function (LF) feature extraction on the packet ("Yet, many of these models incorporate simple features—often ones that can be computed or inferred from a single packet.
Programmable switches could extract these features from the packets in the data plane and even compute regression functions based on these learned models, essentially computing the prediction function in-line and making real-time decisions about the nature of traffic in the network, without ever requiring off-path analysis," sec. 4.2, p. 6); and sending a determined traffic class to the light weight AI model ("A network that learns could use a coarse detection algorithm based on network data that is relatively lightweight or easy to collect (e.g., sampled IPFIX logs, SNMP) to develop a classifier that might have a false positive rate that is higher than acceptable. The output of this classifier might trigger additional measurements—either active measurements (e.g., probes) to and from different parts of the network or, in some cases, more expensive packet captures that could provide more precise information about the traffic (e.g., DNS query logs, timing information). The emergence of technologies such as in-band network telemetry [13] make it possible not only to write additional fine-grained information into packets, but also to generate probe traffic on demand, making it possible to trigger fine-grained active and passive measurements either end-to-end or from within the network, should an algorithm need that information... A network that learns could incorporate information directly from operators, from network configuration, or perhaps even from users or applications to increase the amount of labeled data that detection and inference algorithms could use to train," sec. 3.2.1-3.2.2, p. 5).
It would have been obvious to a person of ordinary skill in the art to incorporate the teachings of Feamster in Ying-Wang to improve the versatility of the Ying-Wang system, as suggested by Feamster (see page 3, Section 3, "Navigating in a Dynamic Environment," of Feamster: (1) incorporating machine learning-based inference into the network so that, in many cases, the network can learn to run itself, removing many of the decisions from network operators (Section 3.1); and (2) incorporating input from applications and human users to better improve the inputs to learning algorithms (Section 3.2)).

Feamster teaches, in Section 3.2, "Improving learning with better data": Networks should also be tailored to improve the quality of input data provided to real-time inference and prediction algorithms. For example, machine learning algorithms for network security such as intrusion detection often train on labeled data. Yet, for the domain of network security, obtaining labeled data is difficult: attacks are rare, threats are dynamic, and new classes of threats and attacks are continually emerging. Similarly, identifying quality of experience degradations often requires input from applications, users, or both. In this section, we discuss how future networks might be co-designed with learning algorithms to improve algorithm accuracy, and to improve the quality and quantity of data that provides input to these algorithms.

See also Feamster, Section 4.2, "Prediction models in the data plane": As discussed in Section 3, machine learning has been applied to a wide variety of network monitoring tasks, ranging from performance monitoring to security. To date, however, many of these models have been demonstrated and deployed in a purely offline fashion: Traffic is collected from the network in the form of packet captures, IPFIX records, or DNS query logs and is used to train a detection model, which is also evaluated offline.
Yet, many of these models incorporate simple features—often ones that can be computed or inferred from a single packet. Programmable switches could extract these features from the packets in the data plane and even compute regression functions based on these learned models, essentially computing the prediction function in-line and making real-time decisions about the nature of traffic in the network, without ever requiring off-path analysis.

Claims 4-7 and 9-16 are rejected under 35 U.S.C. 103 as being unpatentable over US PG-PUB 20190327175 A1, Ying et al. (hereinafter "Ying"), in view of US PG-PUB 20100014420 A1, Wang et al. (hereinafter "Wang"), further in view of Feamster et al., "Why (and How) Networks Should Run Themselves" (hereinafter "Feamster"), in view of Silva et al., "ATLANTIC: A Framework for Anomaly Traffic Detection, Classification, and Mitigation in SDN" (hereinafter "Silva"), and further in view of Rimal et al., "Mobile Edge Computing Empowered Fiber-Wireless Access Networks in the 5G Era" (hereinafter "Rimal"). The Feamster, Silva, and Rimal references were applied in the 01/17/2025 rejection. Copies of the Feamster, Silva, and Rimal references are available to the applicants.

Regarding Claim 4, the Ying-Wang-Feamster-Silva combination teaches the method of claim 1. Feamster does not explicitly teach wherein the light weight model runs in a Converged Wireless System (CWS). Rimal teaches wherein the light weight model runs in a Converged Wireless System (CWS) ("Given the importance of scaling up research in the area of network integration and convergence in support of MEC toward 5G, the article explores the possibilities of empowering integrated fiber-wireless (FiWi) access networks to offer MEC capabilities," sec. Abstract, p. 192; the combination of Feamster teaching the light weight model software and Rimal teaching the converged wireless system hardware teaches the limitations of the claim).
Feamster and Rimal are analogous art because both are directed to network management. It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the network management of the Ying-Wang-Feamster-Silva combination with the hardware configuration of Rimal. The modification would have been obvious because one of ordinary skill in the art would be motivated to implement systems having ultra-low latency, user experience continuity, and high reliability, as suggested by Rimal ("The expected stringent requirements of future 5G networks such as ultra-low latency, user experience continuity, and high reliability will drive the need for highly localized services within RANs in close proximity to mobile subscribers," p. 192).

Regarding Claim 5, the Ying-Wang-Feamster-Silva combination teaches the method of claim 1. Feamster does not explicitly teach wherein the light weight model runs in a Distributed Unit (DU). Rimal teaches wherein the light weight model runs in a Distributed Unit (DU) ("Distributed Resource Management: It is important to ensure that different and diverse edge devices (e.g., moving users, mobile devices, and connected vehicles) have access to network resources (e.g., bandwidth, storage) at the edge [10]," p. 195; the combination of Feamster teaching the light weight model software and Rimal teaching distributed resource management teaches the limitations of the claim). Feamster and Rimal are analogous art because both are directed to network management. It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the network management of the Ying-Wang-Feamster-Silva combination with the hardware configuration of Rimal.
The modification would have been obvious because one of ordinary skill in the art would be motivated to implement systems having ultra-low latency, user experience continuity, and high reliability, as suggested by Rimal ("The expected stringent requirements of future 5G networks such as ultra-low latency, user experience continuity, and high reliability will drive the need for highly localized services within RANs in close proximity to mobile subscribers," p. 192).

Regarding Claim 6, the Ying-Wang-Feamster-Silva combination teaches the method of claim 1. Feamster does not explicitly teach wherein the heavy weight model runs in a HetNet Gateway (HNG). Rimal teaches wherein the heavy weight model runs in a HetNet Gateway (HNG) ("More specifically, envisioned design scenarios of MEC over FiWi networks for typical RAN technologies (i.e., WLAN, 4G LTE, LTE-A HetNets) are investigated, accounting for both network architecture and enhanced resource management," p. 192; the combination of Feamster teaching the heavy weight model software and Rimal teaching the HetNet Gateway teaches the limitations of the claim). Feamster and Rimal are analogous art because both are directed to network management. It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the network management of the Ying-Wang-Feamster-Silva combination with the hardware configuration of Rimal. The modification would have been obvious because one of ordinary skill in the art would be motivated to implement systems having ultra-low latency, user experience continuity, and high reliability, as suggested by Rimal ("The expected stringent requirements of future 5G networks such as ultra-low latency, user experience continuity, and high reliability will drive the need for highly localized services within RANs in close proximity to mobile subscribers," p. 192).

Regarding Claim 7, the Ying-Wang-Feamster-Silva combination teaches the method of claim 1.
Feamster does not explicitly teach wherein the heavy weight model runs in a Central Unit (CU). Rimal teaches wherein the heavy weight model runs in a Central Unit (CU) ("Cloud and Cloudlet Coexistence: Centralized clouds and distributed cloudlets may coexist and be complementary to each other, and thus support a more diverse set of emerging applications and services in 5G networks. However, determining where an application is executed, at either a cloudlet or a conventional cloud, is a nontrivial task. It depends on the available infrastructure and application requirements, as well as willingness of users to pay. Some applications or parts of an application may be executed at the edge device itself, cloudlets, or centralized clouds," p. 195; the combination of Feamster teaching the heavy weight model and Rimal teaching the centralized units teaches the limitations of the claim). Feamster and Rimal are analogous art because both are directed to network management. It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the network management of the Ying-Wang-Feamster-Silva combination with the hardware configuration of Rimal. The modification would have been obvious because one of ordinary skill in the art would be motivated to implement systems having ultra-low latency, user experience continuity, and high reliability, as suggested by Rimal ("The expected stringent requirements of future 5G networks such as ultra-low latency, user experience continuity, and high reliability will drive the need for highly localized services within RANs in close proximity to mobile subscribers," p. 192).

Regarding Claim 8, the Ying-Wang-Feamster-Silva combination teaches the method of claim 1.
Feamster further teaches wherein performing Prediction Function (PF) feature extraction on the packet includes extracting at least one of a port number ("Rather than detecting DoS attacks using specific thresholds, the policy could specify a detection technique (e.g., sequential hypothesis testing for port-scan detection) for identifying attacks," sec. 2.2, p. 3; Examiner notes that only one type of feature is required for the claims, and that Silva teaches additional features not cited here) and a number of packets ("Research has developed learning algorithms to detect (and even predict) attacks based on analysis of network traffic (from packet traces to IPFIX records) [17], DNS queries [2] and domain registrations [8], and even BGP routing messages [15]," sec. 3.1, p. 4).

Regarding Claim 9, the Ying-Wang-Feamster-Silva combination teaches the method of claim 1. Feamster does not explicitly teach, during virtual Radio Unit (vRU) bootup, requesting an initial configuration from a HetNet Gateway (HNG). Rimal teaches, during virtual Radio Unit (vRU) bootup ("Further, MEC helps provide the backhaul with real-time information about radio access network (RAN) and traffic requirements, and thus facilitates coordination between the backhaul and RAN segments, which has not been fully realized so far [4]. Such coordination is required when, for example, radio networks need less bandwidth but the backhaul is not aware of it, and vice versa. From a business viewpoint, the emergence of MEC allows network operators, independent software vendors, and web service and content providers to create new value chains [3]," p. 192), requesting an initial configuration from a HetNet Gateway (HNG) ("More specifically, envisioned design scenarios of MEC over FiWi networks for typical RAN technologies (i.e., WLAN, 4G LTE, LTE-A HetNets) are investigated, accounting for both network architecture and enhanced resource management," p. 192).
Feamster and Rimal are analogous art because both are directed to network management. It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the network management of the Ying-Wang-Feamster-Silva combination with the hardware configuration of Rimal. The modification would have been obvious because one of ordinary skill in the art would be motivated to implement systems having ultra-low latency, user experience continuity, and high reliability, as suggested by Rimal ("The expected stringent requirements of future 5G networks such as ultra-low latency, user experience continuity, and high reliability will drive the need for highly localized services within RANs in close proximity to mobile subscribers," p. 192).

Regarding Claim 10, the Ying-Wang-Feamster-Silva combination teaches the method of claim 1. Feamster does not explicitly teach requesting, by the vRU, parameters for the prediction function. Rimal teaches requesting, by the vRU, parameters for the prediction function ("Further, MEC helps provide the backhaul with real-time information about radio access network (RAN) and traffic requirements, and thus facilitates coordination between the backhaul and RAN segments, which has not been fully realized so far [4]. Such coordination is required when, for example, radio networks need less bandwidth but the backhaul is not aware of it, and vice versa. From a business viewpoint, the emergence of MEC allows network operators, independent software vendors, and web service and content providers to create new value chains [8]," p. 192; the combination of Feamster teaching the prediction function and Rimal teaching the virtual radio unit teaches the limitations of the claim). Feamster and Rimal are analogous art because both are directed to network management.
It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the network management of the Ying-Wang-Feamster-Silva combination with the hardware configuration of Rimal. The modification would have been obvious because one of ordinary skill in the art would be motivated to implement systems having ultra-low latency, user experience continuity, and high reliability, as suggested by Rimal ("The expected stringent requirements of future 5G networks such as ultra-low latency, user experience continuity, and high reliability will drive the need for highly localized services within RANs in close proximity to mobile subscribers," p. 192).

Regarding Claims 11-16: Claim 11 is directed to an apparatus performing functions corresponding to the method steps recited in the combination of claims 1, 4, 6, and 7, and is rejected for the same reasons and under the same rationale applied to claims 1, 4, 6, and 7 above. Dependent claims 12-16 recite a system performing functions corresponding to the method steps in claims 2-3 and 8-10, respectively, and are rejected under the same rationale. The Ying-Wang-Feamster-Silva-Rimal combination teaches the limitations of claims 11-16 as set forth above in connection with claims 1-4 and 6-10. Therefore, claims 11-16 are rejected under the same rationale as respective claims 1-4 and 6-10.

Response to Arguments

Applicant's arguments with respect to claims 1-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Prior Art

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. US Patent No. 10,911,266, issued to Cao, discusses machine learning and a lightweight agent; see par. [0049]. US PG-PUB 20200027033 A1, Garg et al., discusses machine learning in edge servers; see abstract.
US PG-PUB 20160021014, Wetterwald et al., discusses network data packet processing.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HOSAIN T ALAM, whose telephone number is (571) 272-3978. The examiner can normally be reached Mon-Thu, 8:00-4:30. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/HOSAIN T ALAM/
Supervisory Patent Examiner, Art Unit 2132
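The two-tier escalation loop that the applicant's disclosure describes in paragraphs [0022]-[0023] (light weight model at the CWS/DU, fallback to a heavy weight model at the HNG/CU, with escalated traces archived for retraining) can be sketched in Python. This is a minimal illustration under stated assumptions: the class names, the confidence threshold, and both placeholder models are hypothetical, not taken from the application or the cited art.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    features: dict  # e.g. packets per second, packet sizes, delay patterns
    metadata: dict  # e.g. time of day, location

class CentralizedEntity:
    """Stand-in for the HNG/CU: heavy weight model plus a trace archive."""

    def __init__(self):
        self.archive = []  # (packet, traffic class) pairs kept for retraining

    def classify(self, pkt: Packet) -> str:
        traffic_class = self._heavy_weight_predict(pkt)
        self.archive.append((pkt, traffic_class))  # archived per [0023]
        return traffic_class

    def retraining_set_size(self) -> int:
        # The archive would periodically feed retraining of the light weight
        # model, which is then pushed back to the edge (not modeled here).
        return len(self.archive)

    def _heavy_weight_predict(self, pkt: Packet) -> str:
        # Placeholder for a GPU-backed model (e.g. an encoder-decoder).
        return "video" if pkt.features.get("pps", 0) > 100 else "web"

class EdgeNode:
    """Stand-in for the CWS/DU: light weight model close to the traffic."""

    CONFIDENCE_THRESHOLD = 0.8  # hypothetical cut-off, not from the spec

    def __init__(self, central: CentralizedEntity):
        self.central = central

    def classify(self, pkt: Packet) -> str:
        traffic_class, confidence = self._light_weight_predict(pkt)
        if confidence >= self.CONFIDENCE_THRESHOLD:
            return traffic_class  # classified at the edge
        # Unable to classify locally: send trace plus metadata to the HNG/CU.
        return self.central.classify(pkt)

    def _light_weight_predict(self, pkt: Packet) -> tuple:
        # Placeholder for a CPU-friendly model (e.g. logistic regression).
        pps = pkt.features.get("pps", 0)
        return ("voice", 0.9) if pps < 50 else ("unknown", 0.3)

central = CentralizedEntity()
edge = EdgeNode(central)
print(edge.classify(Packet({"pps": 10}, {"hour": 9})))   # classified at edge
print(edge.classify(Packet({"pps": 500}, {"hour": 9})))  # escalated to HNG/CU
```

The escalation path mirrors the claim language: the edge attempts classification first, the centralized entity classifies what the edge cannot, and the archived traces are what would drive the periodic retraining and model push-down described in paragraph [0023].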

Prosecution Timeline

Jul 28, 2021
Application Filed
Jan 11, 2025
Non-Final Rejection — §103
Jul 17, 2025
Response Filed
Nov 03, 2025
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585624
COMPUTER-IMPLEMENTED METHOD FOR PROVIDING AN OUTPUT DATA SET, METHOD FOR DETERMINING STATISTICAL INFORMATION, APPARATUS, COMPUTER PROGRAM AND DATA MEDIUM
2y 5m to grant • Granted Mar 24, 2026
Patent 12499083
SYSTEM AND METHOD FOR DATA DISCOVERY IN CLOUD ENVIRONMENTS
2y 5m to grant • Granted Dec 16, 2025
Patent (number unavailable)
AUTOMATED COLLATION CREATION
Granted
Patent (number unavailable)
Displayname and Resource Identifier Synchronization
Granted
Patent (number unavailable)
COMMUNICATION DEVICE, COMMUNICATION ANALYSIS METHOD, AND COMMUNICATION ANALYSIS PROGRAM
Granted
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 36%
With Interview: 56% (+20.0%)
Median Time to Grant: 1y 9m
PTA Risk: Moderate
Based on 14 resolved cases by this examiner. Grant probability derived from career allow rate.
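The projection figures above are internally consistent if the dashboard simply adds the interview lift to the base career allow rate; that additive model is an assumption on my part, since the actual methodology is not disclosed here.

```python
# Hypothetical reconstruction of the dashboard's projection arithmetic.
# The additive-lift model is an assumption, not the dashboard's documented method.
base_grant_probability = 0.36  # examiner career allow rate (5 / 14 ≈ 36%)
interview_lift = 0.20          # reported lift for cases with an interview

with_interview = base_grant_probability + interview_lift
print(f"Grant probability with interview: {with_interview:.0%}")
```

This reproduces the displayed 56% "With Interview" figure from the 36% base rate and the +20.0% lift.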
