Prosecution Insights
Last updated: April 19, 2026
Application No. 17/723,047

SYSTEM AND METHOD FOR IDENTIFICATION OF VIDEO CONTENT IN QUIC-BASED PACKET DATA NETWORKS

Non-Final OA — §103
Filed: Apr 18, 2022
Examiner: ROHD, BENJAMIN MATTHEW
Art Unit: 2147
Tech Center: 2100 — Computer Architecture & Software
Assignee: AT&T Intellectual Property I, L.P.
OA Round: 3 (Non-Final)
Grant Probability: 0% (At Risk)
Expected OA Rounds: 3-4
Time to Grant: 3y 3m
Grant Probability With Interview: 0%

Examiner Intelligence

Career Allow Rate: 0% (grants only 0% of cases; 0 granted / 1 resolved; -55.0% vs TC avg)
Interview Lift: +0.0% (minimal lift, based on resolved cases with interview)
Avg Prosecution: 3y 3m (typical timeline)
Total Applications: 31 across all art units (30 currently pending)

Statute-Specific Performance

§101: 23.5% (-16.5% vs TC avg)
§103: 48.7% (+8.7% vs TC avg)
§102: 11.2% (-28.8% vs TC avg)
§112: 16.6% (-23.4% vs TC avg)
Tech Center averages are estimates. Based on career data from 1 resolved case.

Office Action

§103
DETAILED ACTION

This office action is in response to amendments filed on 02/18/2026. Claims 1, 3-8, 10, 13, 15, and 17-20 have been amended. Claims 1-20 are pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 02/18/2026 has been entered.

Response to Arguments

Prior Art Rejections: Applicant's arguments regarding the prior art rejections (pg. 9-13) have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Applicant argues that the specific arrangement of ML model layers present in the amended independent claims ("multiple consecutive [LSTM] layers, each comprising a respective [RNN], precede a convolution layer comprising a [CNN], and the convolution layer precedes a final layer that produces a model classification") is not taught by any of the cited references (Schuster, Guo, Lopez-Martin). Examiner notes that the Bai reference has been brought in to teach this limitation. Bai teaches processing network traffic flow data using a model which includes LSTM layers preceding a CNN layer, the CNN layer preceding a classification layer. The prior art rejections have been updated to include the amended limitations and to clarify the reasoning given for the limitations that were not amended.

Claims 1-6, 8-17, and 19 are now rejected under 35 U.S.C. 103 as being unpatentable over Schuster in view of Guo and Bai. Claims 7, 18, and 20 are now rejected under 35 U.S.C. 103 as being unpatentable over Schuster in view of Guo and Bai, and further in view of Ioffe.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-6, 8-17, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Schuster et al. (hereinafter Schuster), "Beauty and the Burst: Remote Identification of Encrypted Video Streams" (published 08/16/2017) in view of Guo et al. (hereinafter Guo), U.S. Patent Application Publication US 20190230010 A1 (published 07/25/2019) and Bai et al. (hereinafter Bai), "Automatic Device Classification from Network Traffic Streams of Internet of Things" (published 12/24/2018).

Regarding Claim 1, Schuster teaches A device, comprising: a processing system including a processor; and a memory that stores executable instructions that, when executed by the processing system, facilitate performance of operations, the operations comprising: (Examiner notes this limitation is interpreted as a general-purpose computing environment. Schuster's use of "JavaScript code" (pg. 1, section 1), Wireshark software (pg. 5, section 5.2), and access to "network traffic at the network (IP) or transport (TCP/UDP) layers" (pg. 3, section 3.1) necessitates implementation of the described methods on a computer.)
receiving a plurality of data packets captured from a network, wherein the data packets are associated with streaming video content across the network, and wherein the video content is encrypted; (Schuster teaches a method for "Identification of Encrypted Video Streams" (pg. 1, title). "Even if packets are encrypted at the transport layer (e.g., using TLS), their sizes and times of arrival—and, consequently, the sizes of packet bursts and inter-burst intervals—are visible to anyone watching the network" (pg. 2, section 2). "We captured the network traffic of each streaming session for a certain duration (see below) using Wireshark's tshark" (pg. 5, section 5.2).)

processing the plurality of data packets to extract features both individually and collectively from the data packets in the plurality of data packets, the processing comprising reconstructing a plurality of application data units (ADUs) of the streaming video content, each ADU representing a video segment of the streaming video content, the reconstructing based at least in part on one or more network characteristics, the one or more network characteristics comprising a number of bits transmitted in a given time interval, (Pg. 1, section 1: "We demonstrate that packet bursts in encrypted streams correspond to segment requests from the client…" Pg. 5, section 5.2 – Feature extraction: "A burst is a sequence of points in a time series (t_i, y_i) such that t_i - t_{i-1} < I for some I (we used I = 0.5). When the points correspond to arrival times and packet sizes, bursts are presumably associated with the transmission of higher-level elements such as HTTP responses (see Section 2)." Each packet's arrival time and size are recorded as a point (i.e. features are extracted individually), and bursts are identified within sequences of packets (i.e. features are extracted collectively). Packet bursts corresponding to segments of video (i.e. ADUs) are identified (i.e. reconstructed) when points representing arrival times and packet sizes (i.e. number of bits transmitted) occur within 0.5 seconds of each other (i.e. in a given time interval).)

providing the features to a trained machine learning (ML) model comprising a plurality of layers (Schuster teaches "we use machine-learning models as detectors," and in the online phase of the operation, the user "applies his detectors to the collected measurements to identify the streamed video" (pg. 4, section 4). Regarding the structure of the models, Schuster specifies "We use CNNs with three convolution layers, max pooling, and two dense layers (see Figure 7.1). We train them using an Adam [26] optimizer on batches of 64 samples, with categorical cross-entropy as the error function" (pg. 7, section 7.2).)

a final layer that produces a model classification comprising an identification of the streaming video content; (Pg. 1, section 1: "[W]e develop a new video identification methodology based on convolutional neural networks and evaluate it on video titles streamed by YouTube, Netflix, Amazon, and Vimeo." Pg. 8, section 7.3: "The output of the last, softmax layer of the neural network is traditionally interpreted as a vector of probabilities. The classifier's prediction is the class with the highest probability.")

outputting the identification of the streaming video content determined by the trained ML model. (See the portions of sections 1 and 7.3 cited above. The model outputs an identification of the video content.)
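As an editor's aside: the burst-reconstruction step the examiner maps to the ADU limitation (grouping packets whose inter-arrival gap is under I = 0.5 s) can be sketched in plain Python. The function name and data layout here are illustrative, not taken from Schuster or the application:

```python
# Sketch of Schuster-style burst reconstruction. Packets are (arrival_time, size)
# points; a burst is a maximal run of consecutive points whose inter-arrival gap
# is below the threshold I (0.5 s in the paper). Names are illustrative only.

def reconstruct_bursts(packets, gap_threshold=0.5):
    """Group (time, size) packet records into bursts; return per-burst
    (start_time, total_bytes) tuples, a rough proxy for video-segment ADUs."""
    bursts = []
    current = []
    for t, size in sorted(packets):
        # A gap at or above the threshold closes the current burst.
        if current and t - current[-1][0] >= gap_threshold:
            bursts.append(current)
            current = []
        current.append((t, size))
    if current:
        bursts.append(current)
    return [(b[0][0], sum(s for _, s in b)) for b in bursts]

# Two bursts: three packets within 0.5 s of each other, then one after a 2.2 s gap.
packets = [(0.0, 1400), (0.1, 1400), (0.3, 700), (2.5, 1400)]
print(reconstruct_bursts(packets))  # [(0.0, 3500), (2.5, 1400)]
```

The per-burst byte totals correspond to the "sequence of segment sizes" that Schuster uses as a fingerprint.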
Schuster does not appear to explicitly disclose the reconstructing further based at least in part on flow-boundary information identified from uplink request data packets whose payloads exceed a threshold size.

However, Guo teaches the reconstructing further based at least in part on flow-boundary information identified from uplink request data packets whose payloads exceed a threshold size; (0086: "The monitoring station 110 is configured to examine IP packets and identify a flow 300 in a media session 200, and the start and the end of the download of 'a chunk' of the video content in real time as IP packets pass through." 0089-0090: "The monitoring station 110 detects a request message in a flow 300 of a media session 200 between a client 130 and video server 120 using a size pattern or predetermined size range of a typical request message for a video service… The predetermined size range or size pattern may include only a lower bound or threshold." 0093-0095: "Once a request message is identified, the start of the 'chunk' is defined… The end of a current chunk 'i' is detected when another request message is detected in the flow." The start and end (i.e. flow boundary information) of a video chunk (i.e. ADU) is identified based on HTTP request messages (i.e. uplink request data packets) whose payload size exceeds a threshold.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Schuster and Guo. Schuster teaches identifying encrypted video content using machine learning and network traffic patterns. Guo teaches monitoring network traffic in order to detect and measure statistics associated with video segment chunks. One of ordinary skill would have motivation to combine Schuster and Guo because, according to Schuster, "the segmentation prescribed by the [MPEG-DASH] standard causes content dependent packet bursts. We show that many video streams are uniquely characterized by their burst patterns" (Schuster, pg. 1, abstract) and "the sequence of segment sizes identifies the video with virtually no false positives" (Schuster, pg. 5, section 6). Measuring the packet and segment statistics described by Guo and using them as features would help Schuster's model to "accurately identify these patterns" (Schuster, pg. 1, abstract).

Schuster and Guo do not appear to explicitly disclose layers arranged in a processing sequence in which: multiple consecutive long-short term memory (LSTM) layers, each comprising a respective recurrent neural network (RNN), precede a convolution layer comprising a convolutional neural network (CNN), and the convolution layer precedes a final layer that produces a model classification.

However, Bai teaches a trained machine learning (ML) model comprising a plurality of layers arranged in a processing sequence in which: multiple consecutive long-short term memory (LSTM) layers, each comprising a respective recurrent neural network (RNN), precede a convolution layer comprising a convolutional neural network (CNN), and (Pg. 5, section III.C: "The inputs are fed into two LSTM layers at first to capture the temporal relationship of network traffic. LSTM is a prominent variation of Recurrent Neural Networks (RNN) specially designed for processing sequential data… Outputs of LSTM layer are t vectors. We concatenate them as columns to form a 2-D vector and feed the 2-D vector into the convolution layer, which is a special type of constrained feed-forward neural networks." Figure 5 (pg. 6) shows the model architecture, including multiple consecutive LSTM layers preceding a convolution layer.)

the convolution layer precedes a final layer that produces a model classification (Pg. 5, section III.C: "In the output layer, softmax function is chosen as the active function to calculate the probabilities of different classes. The class with the highest probability will be final prediction for inputs." Figure 5 (pg. 6) shows the model architecture, including the convolution layer preceding the final softmax layer, which produces the model classification output.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Schuster, Guo, and Bai. Schuster teaches identifying encrypted video content using machine learning and network traffic patterns. Guo teaches monitoring network traffic in order to detect and measure statistics associated with video segment chunks. Bai teaches processing network traffic flow data using an LSTM-CNN cascade model. One of ordinary skill would have motivation to combine Schuster, Guo, and Bai because Bai's LSTM-CNN for network traffic classification enables "capturing the global and local temporal correlations [of network traffic data] in a supervised manner," and "outperforms a wide range of baseline algorithms" (Bai, pg. 2, section I).

Regarding Claim 2, Schuster, Guo, and Bai teach The device of claim 1, as shown above. Guo also teaches wherein the features comprise a cumulative sum of sizes of the ADUs in the plurality of ADUs, a number of ADUs in the plurality of ADUs, and a time between ADUs in the plurality of ADUs. (Guo teaches "a method for determining one or more chunk statistics" (0011) associated with encrypted video segments or 'chunks' transmitted over a network, including cumulative sum of sizes of segments: "a sum of a total size of the plurality of chunks in the media session" (0018); number of segments: "the number of the plurality of chunks" (0018); and time between segments: "The chunk gap is the time between an end of a chunk i and a request for another chunk i+1" (0103).)

Regarding Claim 3, Schuster, Guo, and Bai teach The device of claim 2, as shown above.
Guo also teaches wherein operations further comprise calculating a size of each ADU in the plurality of ADUs, wherein the size of each ADU comprises a sum of a number of bytes in payloads of a series of data packets in the plurality of data packets, wherein the series of data packets are between two of the uplink request data packets in the plurality of data packets. (Guo teaches calculating the size of each chunk (i.e. ADU) by summing the number of bytes in its data packets: "The size of chunk 'i' is the cumulative bytes sent in the downstream direction on the given transport level flow between the start of chunk 'i' and the start of chunk 'i+1'… the amount of data or bytes in data packets are included in the chunk size while bytes in signaling packets are ignored" (0100). Guo also teaches identifying the start and end of each chunk by identifying HTTP requests (i.e. uplink requests): "Once a request message is identified, the start of the 'chunk' is defined… The end of a current chunk 'i' is detected when another request message is detected in the flow" (0093-0095).)

Regarding Claim 4, Schuster, Guo, and Bai teach The device of claim 3, as shown above. Guo also teaches wherein the operations further comprise identifying the two uplink request data packets based on a payload size greater than the threshold size. (Guo teaches "The monitoring station 110 detects a request message in a flow 300 of a media session 200 between a client 130 and video server 120 using a size pattern or predetermined size range of a typical request message for a video service… The predetermined size range or size pattern may include only a lower bound or threshold" (0089-0090). Request messages (i.e. uplink request data packets) are identified based on a threshold size.)

Regarding Claim 5, Schuster, Guo, and Bai teach The device of claim 4, as shown above. Guo also teaches wherein the threshold size is 500 bytes. (Guo teaches "The predetermined size range or size pattern may include only a lower bound or threshold. For example, for the YouTube® video service, the size range is approximately L=500 B, wherein L is the lower threshold in the size range" (0090).)

Regarding Claim 6, Schuster, Guo, and Bai teach The device of claim 1, as shown above. Bai also teaches wherein, in the processing sequence, an input layer precedes the multiple consecutive LSTM layers (Pg. 5, section III.C: "The inputs are fed into two LSTM layers at first to capture the temporal relationship of network traffic." Figure 5 (pg. 6) shows the model architecture, including the input sequence layer preceding the LSTM layers.)

Regarding Claim 8, Schuster, Guo, and Bai teach The device of claim 1, as shown above. Bai also teaches wherein, in the processing sequence, a flatten layer succeeds the convolution layer and precedes the final layer, and the final layer comprises a classifier network. (Pg. 5, section III.C: "The output of convolution layer is then fed into maxpooling layer directly. The maxpooling layer will reduce the dimension of inputs by only selecting the maximum value from n*n features, where n*n is the maxpooling filter size… After the maxpooling layer, data is reshaped to a vector again and passed into a fully connection layer with dropout operation before feeding into the output layer… In the output layer, softmax function is chosen as the active function to calculate the probabilities of different classes. The class with the highest probability will be final prediction for inputs." Figure 5 (pg. 6) shows the model architecture, where the convolution layer is succeeded by maxpooling and reshaping, which reduces the dimensionality of the data to a vector (i.e. a flatten layer). The output layer (i.e. final layer), which is preceded by the flatten layer, is a classifier.)

Regarding Claim 9, Schuster, Guo, and Bai teach The device of claim 1, as shown above.
Schuster also teaches wherein the video content is encrypted and streamed across the network using a QUIC protocol. (Schuster teaches "We captured the network traffic of each streaming session for a certain duration (see below) using Wireshark's tshark [60]. For Amazon, Netflix, and Vimeo, the application-layer protocol is TLS; for YouTube, it is either QUIC, or TLS" (pg. 5, section 5.2).)

Regarding Claim 10, Schuster, Guo, and Bai teach The device of claim 1, as shown above. Schuster also teaches wherein operations further comprise training the trained ML model with features extracted from streamed data packets of known encrypted video content. (Schuster teaches "We use supervised training on a corpus that consists of traffic measurements labeled with their correct class, i.e., the identity of the corresponding video" (pg. 7, section 7.1).)

Regarding Claim 11, Schuster, Guo, and Bai teach The device of claim 10, as shown above. Schuster also teaches wherein the training utilizes one or more similarity metrics from a group comprising categorical cross entropy loss, least absolute difference, and a sum of squared difference. (Schuster teaches "We train [the model] using an Adam [26] optimizer on batches of 64 samples, with categorical cross-entropy as the error function" (pg. 7, section 7.2).)

Regarding Claim 12, Schuster, Guo, and Bai teach The device of claim 1, as shown above. Schuster also teaches wherein the processing system comprises a plurality of processors operating in a distributed computing environment. (Schuster teaches "the attacker executes his JavaScript client code either in the same browser that is receiving the target stream (the cross-site attack), or on a machine on the same local network as the device that is receiving the target stream (the cross-device attack). In both cases, the attacker's client is communicating with a colluding attack server. In both the cross-site and cross-device scenarios, (1) the attacker's client and the recipient of the target stream are behind a congested home router, while (2) the attack server and the streaming server are outside this router, in different Internet locations" (pg. 4-5, section 5.1). The "attacker" is the user of the system, and the attacker's client and attack server are distributed processors.)

Claim 13 is a product claim containing substantially the same elements as system claim 1. Schuster, Guo, and Bai teach the elements of claim 1, as shown above. Schuster also teaches A non-transitory, machine-readable medium, comprising executable instructions that, when executed by a processing system including a processor, facilitate performance of operations, the operations comprising: (Examiner notes this limitation is interpreted as a general-purpose computing environment. Schuster's use of "JavaScript code" (pg. 1, section 1), Wireshark software (pg. 5, section 5.2), and access to "network traffic at the network (IP) or transport (TCP/UDP) layers" (pg. 3, section 3.1) necessitates implementation of the described methods on a computer.) outputting a probability score and an identification of the streaming video content determined by the ML model. (Pg. 8, section 7.3: "the output of the last, softmax layer of the neural network is traditionally interpreted as a vector of probabilities. The classifier's prediction is the class with the highest probability. We can use this probability as a confidence measure.")

Claims 14-16 are system claims containing substantially the same elements as method claims 2-4, respectively. Schuster, Guo, and Bai teach the elements of claims 2-4, as shown above.

Regarding Claim 17, Schuster, Guo, and Bai teach The non-transitory, machine-readable medium of claim 13, as shown above.
Bai also teaches wherein, in the processing sequence, an input layer precedes the multiple consecutive LSTM layers, a flatten layer succeeds the convolution layer and precedes the final layer, and the final layer comprises a classifier network. (Pg. 5, section III.C: "The inputs are fed into two LSTM layers at first to capture the temporal relationship of network traffic… The output of convolution layer is then fed into maxpooling layer directly. The maxpooling layer will reduce the dimension of inputs by only selecting the maximum value from n*n features, where n*n is the maxpooling filter size… After the maxpooling layer, data is reshaped to a vector again and passed into a fully connection layer with dropout operation before feeding into the output layer… In the output layer, softmax function is chosen as the active function to calculate the probabilities of different classes. The class with the highest probability will be final prediction for inputs." Figure 5 (pg. 6) shows the model architecture, where the input sequence layer precedes the LSTM layers, the convolution layer is succeeded by maxpooling and reshaping, which reduces the dimensionality of the data to a vector (i.e. a flatten layer), and the output layer (i.e. final layer), which is preceded by the flatten layer, is a classifier.)

Claim 19 is a method claim containing substantially the same elements as product claim 13. Schuster, Guo, and Bai teach the elements of claim 13, as shown above.

Claims 7, 18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Schuster in view of Guo and Bai, and further in view of Ioffe et al. (hereinafter Ioffe), U.S. Patent Application Publication US-20160217368-A1 (published 07/28/2016).

Regarding Claim 7, Schuster, Guo, and Bai teach The device of claim 1, as shown above.
Schuster, Guo, and Bai do not appear to explicitly disclose wherein, in the processing sequence, a batch normalization layer succeeds the multiple consecutive LSTM layers and precedes the convolution layer.

However, Ioffe teaches wherein, in the processing sequence, a batch normalization layer succeeds the multiple consecutive LSTM layers and precedes the convolution layer. (0004: "In general, one innovative aspect of the subject matter described in this specification can be embodied in a neural network system implemented by one or more computers that includes a batch normalization layer between a first neural network layer and a second neural network layer…" 0030: "In some other cases, however, the neural network layer A 104 is a convolutional layer or other kind of neural network layer…")

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Schuster, Guo, Bai, and Ioffe. Schuster teaches identifying encrypted video content using machine learning and network traffic patterns. Guo teaches monitoring network traffic in order to detect and measure statistics associated with video segment chunks. Bai teaches processing network traffic flow data using an LSTM-CNN cascade model. Ioffe teaches including a batch normalization layer between layers of a neural network. One of ordinary skill would have motivation to combine Schuster, Guo, Bai, and Ioffe because "[p]articular embodiments of the subject matter described in this specification can be implemented so as to realize one or more of the following advantages. A neural network system that includes one or more batch normalization layers can be trained more quickly than an otherwise identical neural network that does not include any batch normalization layers… Additionally, during training, the batch normalization layers can act as a regularizer and may reduce the need for other regularization techniques, e.g., dropout, to be employed during training. Once trained, the neural network system that includes one or more batch normalization layers can generate neural network outputs that are as accurate, if not more accurate, than the neural network outputs generated by the otherwise identical neural network system" (Ioffe, 0006).

Claim 18 is a product claim containing substantially the same elements as system claim 7. Schuster, Guo, Bai, and Ioffe teach the elements of claim 7, as shown above.

Regarding Claim 20, Schuster, Guo, and Bai teach The method of claim 19, as shown above. Bai also teaches wherein: in the processing sequence, an input layer precedes the multiple consecutive LSTM layers, [a batch normalization layer succeeds the multiple consecutive LSTM layers and precedes the convolution layer], and a flatten layer succeeds the convolution layer and precedes the final layer; and the final layer comprises a classifier network. (Pg. 5, section III.C: "The inputs are fed into two LSTM layers at first to capture the temporal relationship of network traffic… The output of convolution layer is then fed into maxpooling layer directly. The maxpooling layer will reduce the dimension of inputs by only selecting the maximum value from n*n features, where n*n is the maxpooling filter size… After the maxpooling layer, data is reshaped to a vector again and passed into a fully connection layer with dropout operation before feeding into the output layer… In the output layer, softmax function is chosen as the active function to calculate the probabilities of different classes. The class with the highest probability will be final prediction for inputs." Figure 5 (pg. 6) shows the model architecture, where the input sequence layer precedes the LSTM layers, the convolution layer is succeeded by maxpooling and reshaping, which reduces the dimensionality of the data to a vector (i.e. a flatten layer), and the output layer (i.e. final layer), which is preceded by the flatten layer, is a classifier.)

Schuster, Guo, and Bai do not appear to explicitly disclose a batch normalization layer succeeds the multiple consecutive LSTM layers and precedes the convolution layer. However, Ioffe teaches a batch normalization layer succeeds the multiple consecutive LSTM layers and precedes the convolution layer. (0004: "In general, one innovative aspect of the subject matter described in this specification can be embodied in a neural network system implemented by one or more computers that includes a batch normalization layer between a first neural network layer and a second neural network layer…" 0030: "In some other cases, however, the neural network layer A 104 is a convolutional layer or other kind of neural network layer…")

Conclusion

Claims 1-20 are rejected.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BENJAMIN M ROHD whose telephone number is (571)272-6445. The examiner can normally be reached Mon-Thurs 8:00-6:00 EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Viker Lamardo, can be reached at (571) 270-5871. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/B.M.R./
Examiner, Art Unit 2147

/VIKER A LAMARDO/
Supervisory Patent Examiner, Art Unit 2147
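For orientation, the claimed processing sequence at the center of this rejection (input → consecutive LSTM layers → batch normalization → convolution → flatten → classifier) can be traced numerically. This is an editor's shape-flow sketch with arbitrary dimensions and random weights, not code from the application or from the cited Schuster, Guo, Bai, or Ioffe references:

```python
import numpy as np

# Editor's sketch of the claimed layer ordering: two LSTM layers, batch
# normalization, a 1-D convolution, flatten, and a softmax classifier.
# All dimensions and weights are arbitrary; nothing here is from the record.

rng = np.random.default_rng(0)

def lstm_layer(x, hidden):
    """Minimal LSTM over a (timesteps, features) sequence; returns the full
    hidden-state sequence, so layers can be cascaded as in Bai's model."""
    t_steps, feat = x.shape
    W = rng.standard_normal((4 * hidden, feat + hidden)) * 0.1
    b = np.zeros(4 * hidden)
    h, c = np.zeros(hidden), np.zeros(hidden)
    out = []
    for t in range(t_steps):
        z = W @ np.concatenate([x[t], h]) + b
        i, f, o = (1 / (1 + np.exp(-z[k * hidden:(k + 1) * hidden])) for k in range(3))
        g = np.tanh(z[3 * hidden:])
        c = f * c + i * g          # cell state update
        h = o * np.tanh(c)         # hidden state output
        out.append(h)
    return np.array(out)

def batch_norm(x, eps=1e-5):
    """Normalize each feature across time steps (Ioffe-style, no learned scale)."""
    return (x - x.mean(0)) / np.sqrt(x.var(0) + eps)

def conv1d(x, n_filters=4, width=3):
    """Valid 1-D convolution over the time axis with random filters."""
    K = rng.standard_normal((n_filters, width, x.shape[1])) * 0.1
    steps = x.shape[0] - width + 1
    return np.array([[np.sum(K[f] * x[s:s + width]) for f in range(n_filters)]
                     for s in range(steps)])

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

features = rng.standard_normal((20, 6))       # 20 time steps, 6 features per ADU
h = lstm_layer(lstm_layer(features, 16), 16)  # two consecutive LSTM layers
h = batch_norm(h)                             # batch normalization (claim 7)
h = conv1d(h).reshape(-1)                     # convolution layer, then flatten
probs = softmax(rng.standard_normal((5, h.size)) @ h * 0.01)  # 5-class classifier
print(probs.argmax())                         # index of the predicted class
```

The point of the sketch is the ordering of the stages, which is what the amended claims recite; the probability vector from the final softmax corresponds to the "confidence measure" the examiner cites from Schuster section 7.3.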

Prosecution Timeline

Apr 18, 2022: Application Filed
Jun 24, 2025: Non-Final Rejection — §103
Oct 06, 2025: Applicant Interview (Telephonic)
Oct 06, 2025: Examiner Interview Summary
Oct 06, 2025: Response Filed
Nov 12, 2025: Final Rejection — §103
Feb 18, 2026: Request for Continued Examination
Feb 27, 2026: Response after Non-Final Action
Mar 03, 2026: Non-Final Rejection — §103 (current)


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 0%
With Interview: 0% (+0.0%)
Median Time to Grant: 3y 3m
PTA Risk: High
Based on 1 resolved case by this examiner. Grant probability derived from career allow rate.
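The headline figures above appear to be simple ratios over the examiner's resolved cases. A hypothetical reconstruction (function name and the TC-average figure are invented for illustration; the product's actual formula is not documented here) might look like:

```python
# Hypothetical reconstruction of the dashboard's headline metrics.
# allow_rate and tc_average are illustrative assumptions, not from the product.

def allow_rate(granted, resolved):
    """Career allow rate as a percentage; undefined with no resolved cases."""
    return 100.0 * granted / resolved if resolved else None

tc_average = 55.0                 # assumed Tech Center 2100 average, in percent
rate = allow_rate(granted=0, resolved=1)
delta_vs_tc = rate - tc_average   # consistent with the "-55.0% vs TC avg" tile
print(rate, delta_vs_tc)          # 0.0 -55.0
```

With a single resolved case, any such rate is statistically fragile, which is why the page's own caveat ("Based on 1 resolved case") matters.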
