Prosecution Insights
Last updated: April 19, 2026
Application No. 18/710,907

NETWORK TRAFFIC CLASSIFICATION

Final Rejection §103

Filed: May 16, 2024
Examiner: SISON, JUNE Y
Art Unit: 2455
Tech Center: 2400 — Computer Networks
Assignee: Canopus Networks Assets Pty Ltd.
OA Round: 2 (Final)

Grant Probability: 68% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 1m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 68% (316 granted / 461 resolved; +10.5% vs TC avg, above average)
Interview Lift: +36.2% (resolved cases with interview)
Avg Prosecution: 3y 1m (typical timeline; 20 currently pending)
Total Applications: 481 (across all art units)

Statute-Specific Performance

§101: 16.7% (-23.3% vs TC avg)
§103: 52.8% (+12.8% vs TC avg)
§102: 4.7% (-35.3% vs TC avg)
§112: 14.6% (-25.4% vs TC avg)
Based on career data from 461 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Remarks

This communication is considered fully responsive to the Amendment filed on 12/22/25. The §101 rejections are withdrawn in view of the amendments.

Response to Arguments

Applicant's arguments filed 12/22/25 have been fully considered but they are not persuasive.

1. Applicant argues (emphasis added): ... To establish prima facie obviousness of a claimed invention, all the claim limitations must be taught or suggested by the cited references. Independent claim 1 recites in part: "monitoring network traffic flows to dynamically generate, for each of the network traffic flows and in real-time, time series data sets representing, for each of upstream and downstream directions of the network traffic flow, for each of a plurality of successive timeslots, and for each of a plurality of packet length bins, a packet count and a byte count of packets received within the timeslot and having one or more lengths within the corresponding packet length bin; and processing the time series data sets of each network traffic flow to classify the network flow into one of a plurality of predetermined network traffic classes, without using payload content of the network traffic flow." (Emphasis added.) ... In particular, the cited references fail to teach or suggest generating time series data sets for each of upstream and downstream directions of the network traffic flow and for each of a plurality of successive timeslots. The Office Action asserts that the generation of the time series data sets is found in the discussion in Zhang regarding a traffic flow classifier in Figs. 1-12, col. 1, lines 45-67 and col. 12, lines 1-50. Although Zhang may be relevant in a general sense, the reference uses different methods.
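The claim language quoted above describes a per-flow feature structure indexed by direction, timeslot, and packet length bin. As a rough illustration only (the timeslot width, bin edges, and dictionary layout are assumptions for this sketch, not values from the application), the claimed data sets might be organized like this:

```python
from collections import defaultdict

# Hypothetical parameters; the claim does not fix these values.
TIMESLOT_MS = 100                          # width of each successive timeslot
BIN_EDGES = [0, 64, 128, 256, 512, 1500]   # packet length bin boundaries (bytes)

def bin_index(length: int) -> int:
    """Index of the packet length bin whose (lo, hi] range contains `length`."""
    for i in range(len(BIN_EDGES) - 1):
        if BIN_EDGES[i] < length <= BIN_EDGES[i + 1]:
            return i
    return len(BIN_EDGES) - 2  # clamp oversized packets into the last bin

# time_series[direction][timeslot][bin] -> [packet_count, byte_count]
time_series = defaultdict(lambda: defaultdict(lambda: defaultdict(lambda: [0, 0])))

def record_packet(direction: str, arrival_ms: int, length: int) -> None:
    """Update the per-direction, per-timeslot, per-bin packet and byte counts."""
    slot = arrival_ms // TIMESLOT_MS
    cell = time_series[direction][slot][bin_index(length)]
    cell[0] += 1        # packet count
    cell[1] += length   # byte count

record_packet("upstream", 30, 70)
record_packet("upstream", 40, 90)
record_packet("downstream", 120, 1400)

# Average packet length per cell, as in claim 4: byte count / packet count.
cell = time_series["upstream"][0][bin_index(70)]
avg_len = cell[1] / cell[0]  # (70 + 90) / 2 == 80.0
```

The final two lines also illustrate claim 4's averaging step (dividing each byte count by the corresponding packet count), using the same hypothetical structure.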
Contrary to the assertions of the Office Action, Zhang's use of the inter-arrival times of data packets is not equivalent to the claimed time-series data. Additionally, Zhang does not store cumulative byte counts of packets, let alone time series of same. ...

The examiner respectfully disagrees. Specifically, the claim recites "byte count of packets" and not "cumulative byte counts of packets." Furthermore, per applicant's own IFW specification (IFW, pg 7, fourth paragraph (PGPub: [0047])), given below (emphasis added):

"The apparatus 100 executes a network traffic classification process 200, as shown in FIG. 2, which generally involves monitoring network traffic flows received by the apparatus to dynamically generate, for each network traffic flow and in real-time, time series data sets representing packet and byte counts as a function of (binned) packet length, separately for upstream and downstream traffic flow directions."

Therefore, consistent with applicant's IFW specification, Zhang and Sivaraman disclose a packet count and a byte count of packets received within the timeslot and having one or more lengths within the corresponding packet length bin (Zhang: fig 1-12, col 1 ll 45-67 through col 12 ll 1-50: ... for example, assume buffer 506 has intervals of table 1 (... within the timeslot ...) and module 502 obtains sample values 516 from sliding window sample of traffic flow 514: (a) 70 bytes (b) 135 bytes (c) 122 bytes (... byte count of packets received ...) ... module 502 assigns sample values (a) and (c) to bin 526(2) as each of these samples is between 64 bytes and 128 bytes and assigns sample value (b) to bin 526(3) since sample value is between 128 bytes and 192 bytes (... and having one or more lengths within the corresponding packet length bin) ... extracting module 500 counts the number of sample values 516 assigned to each bin 526 (a packet count) while assigning sample values or after all sample values have been assigned (col 10 ll 14-39)).
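The examiner's walk-through of Zhang's binning can be sketched as follows. Only the 64-128 and 128-192 byte boundaries come from the quoted passage; the remaining edges, and the choice of half-open (lo, hi] intervals, are illustrative assumptions (Zhang itself notes that whether a boundary value belongs to a bin is implementation dependent):

```python
# Bin boundaries: bin 526(2) spans 64-128 bytes and bin 526(3) spans
# 128-192 bytes per the office action; the other edges are assumed.
edges = [0, 64, 128, 192, 256]

def assign_bin(size: int) -> int:
    """Return the 1-based index of the (lo, hi] interval containing `size`."""
    for i in range(len(edges) - 1):
        if edges[i] < size <= edges[i + 1]:
            return i + 1
    raise ValueError(f"{size} bytes is outside the binned range")

samples = [70, 135, 122]  # sample values (a), (b), (c)
counts: dict[int, int] = {}
for s in samples:
    b = assign_bin(s)
    counts[b] = counts.get(b, 0) + 1  # per-bin packet count, as module 502 does

# Samples (a) 70 B and (c) 122 B land in bin 2; sample (b) 135 B lands in bin 3.
```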
Furthermore, time series data and inter-arrival times are mathematical equivalents, and a person of ordinary skill in the art would know how to calculate absolute arrival times (time-series data) by computing the cumulative sum of inter-arrival times.

Furthermore, per the office action, Zhang did not explicitly disclose time series data sets representing for each of upstream and downstream directions of the network traffic flow (emphasis added). Specifically, Zhang discloses time series data representing for each of uplink and downlink of the network traffic flow (emphasis added) (Zhang: fig 1-12, col 1 ll 45-67 through col 12 ll 1-50: fig 4-5 extracting module 104 discussed with respect to fig 8-9 machine learning module 106 ... extracting module 104 implemented by buffers 312 (fig 3) ... buffer 402 holds M downlink packet size parameters, M is integer greater than one ... buffer 402 holds a sliding window sample of downlink packet size parameters 416; contents of the sliding window change when a new packet enters the buffer and old packet exits buffer ... each downlink (used to send data downstream) inter-arrival time parameter 418 represents the difference between arrival time of two sequential data packets of the same downlink data flow instance 110 (time series data sets calculated by computing sum of inter-arrival times representing downlink used to send data downstream) ... buffer 406 holds a sliding window sample of uplink inter-arrival time parameters 422 and each uplink data packet size parameter represents a size of a corresponding data packet of an uplink flow 110 instance related to downlink data flow 110 instance (time series data sets calculated by computing sum of inter-arrival times representing downlink used to send data downstream corresponding to uplink used to send data upstream) (col 5 ll 11-60)).
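The examiner's equivalence argument, recovering absolute arrival times from inter-arrival deltas by a cumulative sum, can be sketched with invented values (none of these numbers come from Zhang):

```python
from itertools import accumulate

# Inter-arrival times in milliseconds (illustrative values only).
# The first entry is the first packet's arrival relative to flow start.
inter_arrival_ms = [0, 500, 200, 300]

# A cumulative sum turns the deltas back into absolute arrival times,
# i.e. a time series of arrival timestamps.
arrival_times_ms = list(accumulate(inter_arrival_ms))
# [0, 500, 700, 1000]
```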
Nonetheless, Zhang did not explicitly disclose time series data sets representing for each of upstream and downstream directions of the network traffic flow (emphasis added). Sivaraman discloses time series data sets representing for each of upstream and downstream directions of the network traffic flow (emphasis added) (Sivaraman: pg 2: in accordance with the present invention, there is provided a network device classification process including: monitoring network traffic ... to generate device behaviour data representing network traffic behaviours at different time granularities (time series data sets representing ...) ... the device behaviour data includes network flow attributes generated from packet and byte counts of upstream and downstream flows at different time granularities ... the different time granularities are substantially in the form of a geometric series (time series data sets representing for each of upstream and downstream directions of the network traffic flow) ... include at least four different time granularities).

Zhang and Sivaraman are analogous art because they are from the same field of endeavor with respect to classification. Before the effective filing date, for AIA, it would have been obvious to a person of ordinary skill in the art to incorporate the strategies of Sivaraman into the process of Zhang. The suggestion/motivation would have been to provide for identifying correlations between generated attributes, selecting a subset of attributes based on correlations, and using the selected subset of attributes to classify (Sivaraman pg 2 ll 25-30). Therefore, the prior art rejection is maintained.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5, 9 and 11-12 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent No. 11979328 to Zhang et al. ("Zhang") in view of WIPO Publication No. WO 2020/118376 to Sivaraman et al. ("Sivaraman").

As to claim 1, Zhang discloses a network traffic classification process (Zhang: fig 1-12), including the steps of: monitoring network traffic flows to dynamically generate (Zhang: fig 1-12, col 1 ll 45-67 through col 12 ll 1-50: fig 11 analytics system 1100 includes traffic flow classifier and analytics server 1104 deployed, for example, in a business or network node, network hub, etc., to provide insight into a communications network ... traffic flow classifier 1102 receives communication network traffic 1106 and outputs classification data 1108 identifying one or more traffic flows (monitoring network traffic flows ...) (col 13 ll 45-67) ...
new traffic flow classifiers advantageously use machine learning technology to classify a traffic flow in real time or substantially real-time from network traffic flow features such as packet size and packet inter-arrival time (monitoring network traffic flows to dynamically generate ...) (col 2 ll 32-47)), for each of the network traffic flows and in real-time generate (Zhang: fig 1-12, col 1 ll 45-67 through col 12 ll 1-50: ... new traffic flow classifiers advantageously use machine learning technology to classify a traffic flow in real time or substantially real-time from network traffic flow features such as packet size and packet inter-arrival time (col 2 ll 32-47)).

Zhang did not explicitly disclose time series data sets representing for each of upstream and downstream directions of the network traffic flow (emphasis added). Specifically, Zhang discloses time series data sets representing for each of uplink and downlink of the network traffic flow (emphasis added) (Zhang: fig 1-12, col 1 ll 45-67 through col 12 ll 1-50: fig 4-5 extracting module 104 discussed with respect to fig 8-9 machine learning module 106 ... extracting module 104 implemented by buffers 312 (fig 3) ... buffer 402 holds M downlink packet size parameters, M is integer greater than one ... buffer 402 holds a sliding window sample of downlink packet size parameters 416; contents of the sliding window change when a new packet enters the buffer and old packet exits buffer ... each downlink (used to send data downstream) inter-arrival time parameter 418 represents the difference between arrival time of two sequential data packets of the same downlink data flow instance 110 (time series data sets calculated by computing sum of inter-arrival times representing downlink used to send data downstream) ... buffer 406 holds a sliding window sample of uplink inter-arrival time parameters 422 and each uplink data packet size parameter represents a size of a corresponding data packet of an uplink flow 110 instance related to downlink data flow 110 instance (time series data sets calculated by computing sum of inter-arrival times representing downlink used to send data downstream corresponding to uplink used to send data upstream) (col 5 ll 11-60)).

Nonetheless, Zhang did not explicitly disclose time series data sets representing for each of upstream and downstream directions of the network traffic flow (emphasis added). Sivaraman discloses time series data sets representing for each of upstream and downstream directions of the network traffic flow (emphasis added) (Sivaraman: pg 2: in accordance with the present invention, there is provided a network device classification process including: monitoring network traffic ... to generate device behaviour data representing network traffic behaviours at different time granularities (time series data sets representing ...) ... the device behaviour data includes network flow attributes generated from packet and byte counts of upstream and downstream flows at different time granularities ... the different time granularities are substantially in the form of a geometric series (time series data sets representing for each of upstream and downstream directions of the network traffic flow) ... include at least four different time granularities).

Zhang and Sivaraman are analogous art because they are from the same field of endeavor with respect to classification. Before the effective filing date, for AIA, it would have been obvious to a person of ordinary skill in the art to incorporate the strategies of Sivaraman into the process of Zhang.
The suggestion/motivation would have been to provide for identifying correlations between generated attributes, selecting a subset of attributes based on correlations, and using the selected subset of attributes to classify (Sivaraman pg 2 ll 25-30).

Zhang and Sivaraman further disclose for each of a plurality of successive timeslots, and for each of a plurality of packet length bins (Zhang: fig 1-12, col 1 ll 45-67 through col 12 ll 1-50: each buffer 506 508 510 512 holds a plurality of bins, for example, buffer 506 holds K bins 526 where K is integer greater than one and bins 526 continuously span a range from a minimum packet size value 528 to a maximum packet size value 530 (... and for each of a plurality of packet length bins) and bins 526 are non-overlapping (for each of a plurality of successive timeslots ...) ... each bin 532 representing a respective range of downlink inter-arrival times continuously spans a range from a minimum inter-arrival time value 534 to a maximum inter-arrival time value 536 and bins 532 are non-overlapping (col 7 ll 47-67 & col 8 ll 1-8)), a packet count and a byte count of packets received within the timeslot and having one or more lengths within the corresponding packet length bin (Zhang: fig 1-12, col 1 ll 45-67 through col 12 ll 1-50: ... for example, assume buffer 506 has intervals of table 1 (... within the timeslot ...) and module 502 obtains sample values 516 from sliding window sample of traffic flow 514: (a) 70 bytes (b) 135 bytes (c) 122 bytes (... byte count of packets received ...) ... module 502 assigns sample values (a) and (c) to bin 526(2) as each of these samples is between 64 bytes and 128 bytes and assigns sample value (b) to bin 526(3) since sample value is between 128 bytes and 192 bytes (... and having one or more lengths within the corresponding packet length bin) ... extracting module 500 counts the number of sample values 516 assigned to each bin 526 (a packet count) while assigning sample values or after all sample values have been assigned (col 10 ll 14-39)); and processing the time series data sets of each network traffic flow to classify the network flow into one of a plurality of predetermined network traffic classes (Zhang: fig 1-12, col 1 ll 45-67 through col 12 ll 1-50 & col 15 through col 20: experimental results of traffic flow classifier 100 ... table 6 shows results with five flow classes (gaming, uplink video, downlink video, web browsing, and video conferencing) (see with col 10 ll 14-39 - processing the time series data sets of each network traffic flow to classify the network flow into one of a plurality of predetermined network traffic classes) (col 6 ll 6-24)), without using payload content of the network traffic flow (Zhang: fig 1-12, col 1 ll 45-67 through col 12 ll 1-50: ... new traffic flow classifiers advantageously use machine learning technology to classify a traffic flow in real time or substantially real-time from network traffic flow features such as packet size and packet inter-arrival time ... new traffic flow classifiers do not require port number or payload information for classification and, accordingly, are protocol independent and are payload independent (without using payload content of the network traffic flow) (col 2 ll 32-47)). The same motivation applies as mentioned above to make the proposed modification.

As to claim 2, Zhang and Sivaraman disclose wherein the predetermined network traffic classes represent respective network application types including at least two network application types of: video streaming, live video streaming, conferencing, gameplay, and download (Zhang: fig 1-12, col 1 ll 45-67 through col 12 ll 1-50 & col 15 through col 20: experimental results of traffic flow classifier 100 ...
table 6 shows results with five flow classes (gaming (gameplay), uplink video, downlink video, web browsing, and video conferencing (conferencing)) (col 6 ll 6-24)). For motivation, see rejection of claim 1.

As to claim 3, Zhang and Sivaraman disclose wherein the predetermined network traffic classes represent respective specific network applications (Sivaraman: pg 30, table 10 lists device types, i.e. one class, for example "Dropcam", that represents a respective specific network application (pg 30 ll 5-10)). For motivation, see rejection of claim 1.

[Sivaraman, Table 10, pg 30: image not reproduced]

As to claim 4, Zhang and Sivaraman disclose wherein the processing includes dividing each byte count by the corresponding packet count to generate a corresponding average packet length, wherein the average packet lengths are processed to classify the network flow into one of the plurality of predetermined network traffic classes (Zhang: fig 1-12, col 1 ll 45-67 through col 12 ll 1-50 & col 15 through col 20: normalizing module 414 ... yields normalized statistical values 424 of features 112 ... normalized sliding window average of downlink packet size from buffer 402 (generate a corresponding average packet length) ... normalized sliding window average of uplink packet size from buffer 408 (generate a corresponding average packet length) ... for example, by dividing each raw statistical value 422 by a maximum observed value of a class of statistical values (wherein the average packet lengths are processed to classify the network flow into one of the plurality of predetermined network traffic classes), for example each value of standard deviation of downlink packet size may be normalized by dividing the value by a maximum observed standard deviation of downlink data packet size (dividing each byte count by the corresponding packet count to generate a corresponding average packet length) (col 6 ll 44-67)). For motivation, see rejection of claim 1.

As to claim 5, Zhang and Sivaraman disclose wherein the packet length bins are determined from a list of packet length boundaries (Zhang: fig 1-12, col 1 ll 45-67 through col 12 ll 1-50 & col 15 through col 20: each interval in table 1 and table 2 represents a boundary of one or more bins of its respective buffer and whether a particular interval is included in a bin, or just marks a boundary of the bin, is implementation dependent (col 8 ll 34-67 & see tables 1-2 in col 8-19)). For motivation, see rejection of claim 1.

As to claim 9, Zhang and Sivaraman disclose including processing packet headers to generate identifiers of respective ones of the network traffic flows (Sivaraman: pg 14: ... datasets of network traffic traces recorded from a testbed network were used to demonstrate the basis for and performance of network device classification ... the MAC address of each device was used as its unique network identifier in order to isolate its traffic from the traffic mix of other devices in the network ... however, it will be apparent to those skilled in the art that other identifiers such as IP address, physical port number or VLAN can be used to provide one-to-one mapping of each traffic trace to its physical device (pg 14 ll 13-27)). For motivation, see rejection of claim 1.

As to claims 11 and 12, see similar rejection to claim 1, where the medium and apparatus, respectively, are taught by the method.

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent No. 11979328 to Zhang et al. ("Zhang") in view of WIPO Publication No. WO 2020/118376 to Sivaraman et al. ("Sivaraman") and further over U.S. Patent Publication No. 2020/0186547 to Bartos et al. ("Bartos"). As to claim 6, Zhang and Sivaraman disclose the process of claim 1. For motivation, see rejection of claim 1.
Zhang did not explicitly disclose wherein the step of processing the time series data sets includes applying an artificial neural network deep learning model to the time series data sets of each network traffic flow to classify the network flow into one of the plurality of predetermined network traffic classes.

Bartos discloses wherein the step of processing the time series data sets includes applying an artificial neural network deep learning model to the time series data sets of each network traffic flow to classify the network flow into one of the plurality of predetermined network traffic classes (Bartos: fig 1-6, [0006-84]: a key feature is to combine SPLT (sequence of packet lengths and times) features, such as data length and/or timestamp, into a single representation automatically (wherein the step of processing the time series data sets includes ...) ... by leveraging large amounts of unlabeled traffic telemetry data for flows observed in the networks, and optimizing parameters of the representation for the classification task ... may perform any or all of steps: ... transforming SPLT data length and timestamp into histograms to create a single input feature vector for the deep learning neural network (... applying an artificial neural network deep learning model to the time series data sets of each network traffic flow to classify the network flow ...) ... learning/optimizing parameters (weights) of the representation; this can be achieved by using sets, such as triplets of samples from clusters above ... the primary goal of architecture 400 is to detect encrypted malicious traffic with raw SPLT information with no manually-defined features ... and take advantage of the fact that a vast majority of data is unlabeled, and performs clustering of data into classes based on the features that are available in the encrypted traffic; then a deep learning neural network is trained to leverage the unlabeled data with only a few malicious examples per malware class (... to classify the network flow into one of the plurality of predetermined network traffic classes) [0051-55]).

Zhang, Sivaraman and Bartos are analogous art because they are from the same field of endeavor with respect to classification. Before the effective filing date, for AIA, it would have been obvious to a person of ordinary skill in the art to incorporate the strategies of Bartos into the process by Zhang and Sivaraman. The suggestion/motivation would have been to provide techniques that transform the raw SPLT (sequence of packet lengths and times) information into histogram representations used as input for a deep neural network classifier, where the neural network is trained to distinguish between classes (Bartos: [0048]).

Claims 7-8, 10 and 13-15 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent No. 11979328 to Zhang et al. ("Zhang") in view of WIPO Publication No. WO 2020/118376 to Sivaraman et al. ("Sivaraman"), U.S. Patent Publication No. 2020/0186547 to Bartos et al. ("Bartos") and further over U.S. Patent Publication No. 2023/0217308 to Sandburg et al. ("Sandburg").

As to claim 7, Zhang, Sivaraman and Bartos disclose the process of claim 1. For motivation, see rejection of claim 6. Zhang did not explicitly disclose wherein the step of processing the time series data sets includes applying a transformer encoder with an attention mechanism to the time series data sets of each network traffic flow, and applying the resulting output to an artificial neural network deep learning model to classify the network flow into a corresponding one of the plurality of predetermined network traffic classes.
Sandburg discloses wherein the step of processing the time series data sets includes applying a transformer encoder with an attention mechanism to the time series data sets of each network traffic flow (Sandburg: fig 1-9, [0057-160]: ... a new type of sequence model called a transformer has shown very good results on sequential data (wherein the step of processing the time series data sets includes ...) ... transformers can learn long-range dependencies without vanishing or exploding gradients and are amenable to parallelization and can therefore be scaled to larger datasets ... a transformer consists of two main components, a set of encoders chained together and a set of decoders chained together, and the function of each encoder is to process its input vectors to generate what are known as encodings (... applying a transformer encoder ...) ... each decoder does the opposite, taking all encodings and processing them, using their incorporated contextual information to generate an output sequence, and to achieve this, each encoder and decoder makes use of an attention mechanism which, for each input, weighs the relevance of every input and draws information from them accordingly when producing the output ... both the encoders and decoders have a final feed-forward neural network for additional processing of the outputs (... applying a transformer encoder with an attention mechanism to the time series data sets of each network traffic flow) [0124-125]), and applying the resulting output to an artificial neural network deep learning model to classify the network flow into a corresponding one of the plurality of predetermined network traffic classes (Sandburg: fig 1-9, [0057-160]: ... the optional packet predictor 208 uses a sequence model 210 to predict one or more packet parameters for a traffic flow ... the context output from the traffic type predictor 202 is used as an initial value to condition the sequence model 210 on the predicted type of traffic (see with [0124-125] - and applying the resulting output to an artificial neural network deep learning model to classify the network flow into a corresponding one of the plurality of predetermined network traffic classes) [0126]).

Zhang, Sivaraman, Bartos and Sandburg are analogous art because they are from the same field of endeavor with respect to sequence models. Before the effective filing date, for AIA, it would have been obvious to a person of ordinary skill in the art to incorporate the strategies of Sandburg into the process by Zhang, Sivaraman and Bartos. The suggestion/motivation would have been to provide use of a new type of sequence model called a transformer that has shown very good results on sequential data (Sandburg: [0124]).

As to claim 8, Zhang, Sivaraman, Bartos and Sandburg disclose wherein the artificial neural network deep learning model is a convolutional neural network model (CNN) or a long short-term memory network model (LSTM) (Sandburg: fig 1-9, [0057-160]: there are numerous machine learning models that are designed to work well for temporal sequences ... a long short-term memory (LSTM) model solves the problem of vanishing gradients [0123]). For motivation, see rejection of claim 7.

As to claims 10 and 13, see similar rejection to claims 1 and 7, where the process and apparatus, respectively, are taught by the method. As to claims 14-15, see similar rejection to claims 2-3, respectively.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JUNE SISON whose telephone number is (571) 270-5693. The examiner can normally be reached 9:00 am - 5:00 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Emmanuel Moise, can be reached at 571-272-3865. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JUNE SISON/
Primary Examiner, Art Unit 2455

Prosecution Timeline

May 16, 2024: Application Filed
Sep 18, 2025: Non-Final Rejection (§103)
Dec 22, 2025: Response Filed
Mar 08, 2026: Final Rejection (§103) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602306: RESTORATION OF SYSTEM STATES IN DATA PROCESSING SYSTEMS
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12592896: METHOD AND APPARATUS FOR QUALITY OF SERVICE ASSURANCE FOR WEBRTC SESSIONS IN 5G NETWORKS
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12592982: INFORMATION PROCESSING DEVICE AND STORAGE MEDIUM
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12587585: SYSTEM, METHOD, AND STORAGE MEDIUM OF DISTRIBUTED EDGE COMPUTING FOR COOPERATIVE AUGMENTED REALITY WITH MOBILE SENSING CAPABILITY
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12580829: SERVICE ORCHESTRATION IN A COMMUNICATION INFRASTRUCTURE WITH DIFFERENT NETWORK DOMAINS
Granted Mar 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 68%
With Interview: 99% (+36.2%)
Median Time to Grant: 3y 1m
PTA Risk: Moderate

Based on 461 resolved cases by this examiner. Grant probability derived from career allow rate.
