Prosecution Insights
Last updated: April 19, 2026
Application No. 18/253,416

Improved Processing of Sequential Data via Machine Learning Models Featuring Temporal Residual Connections

Non-Final OA: §102, §103, §112
Filed: May 18, 2023
Examiner: SMITH, BRIAN M
Art Unit: 2122
Tech Center: 2100 — Computer Architecture & Software
Assignee: Google LLC
OA Round: 1 (Non-Final)
Grant Probability: 52% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 4y 3m
With Interview: 89%

Examiner Intelligence

Career Allow Rate: 52% (129 granted / 246 resolved; -2.6% vs TC avg)
Interview Lift: +37.0% (allow rate among resolved cases with vs. without an interview)
Avg Prosecution: 4y 3m (34 applications currently pending)
Total Applications: 280, across all art units
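The headline figures in this card can be reproduced from the stated counts. A minimal check, assuming (as the 89% figure implies) that the interview lift is simply added to the base allow rate:

```python
# Reproduce the examiner's headline statistics from the stated counts.
granted, resolved = 129, 246
allow_rate = granted / resolved            # career allow rate
with_interview = allow_rate + 0.37         # stated +37.0% interview lift, assumed additive

assert round(allow_rate * 100) == 52       # "Career Allow Rate: 52%"
assert round(with_interview * 100) == 89   # "With Interview: 89%"
```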

Statute-Specific Performance

§101: 24.4% (-15.6% vs TC avg)
§103: 37.1% (-2.9% vs TC avg)
§102: 12.9% (-27.1% vs TC avg)
§112: 19.7% (-20.3% vs TC avg)
Deltas are measured against a Tech Center average estimate; based on career data from 246 resolved cases.
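Notably, every "vs TC avg" delta is consistent with a single Tech Center baseline of 40.0% (an inference from the stated numbers, not a figure given in the source):

```python
# Each statute-specific delta implies the same Tech Center baseline, e.g. 24.4% + 15.6% = 40.0%.
rates = {"101": 24.4, "103": 37.1, "102": 12.9, "112": 19.7}   # examiner allow rates (%)
tc_avg = 40.0                                                  # implied TC baseline (assumption)
deltas = {s: round(r - tc_avg, 1) for s, r in rates.items()}

assert deltas == {"101": -15.6, "103": -2.9, "102": -27.1, "112": -20.3}
```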

Office Action

§102 §103 §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claim 2 is objected to because of the following informalities: Claim 2 recites "wherein the machine-learned convolutional neural network consist only of …" This appears to be a typographical error, and should read "consists only of …" Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 2 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claim 2 recites the limitation "wherein the machine-learned convolutional neural network consists only of forward-propagating temporal residual connections …" This limitation as written is nonsensical, because "consists" is a close-ended term, which means that the claimed convolutional neural network cannot comprise anything other than the forward-propagating residual connections. This renders the claim indefinite, because the network then could not include, for example, any convolutional layers or any other units, and thus it is unclear what the claim requires.
For the purpose of examination, the claim will be interpreted as if the machine-learned convolutional neural network comprises forward-propagating temporal residual connections, but no backward-propagating connections analogous to the forward-propagating temporal residual connections.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 3-7, 14, 15, 21, and 22 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Jang et al., "Bi-LSTM Model to Increase Accuracy in Text Classification: Combining Word2vec CNN and Attention Mechanism."

Regarding Claim 1, Jang teaches a computing system for improved temporal processing of sequential data (Jang, pg. 6, 2nd paragraph, "The preprocessed dataset provides a unique and meaningful sequence of words" & 3rd paragraph, "processes the text as sequential data" & pg. 11, 2nd paragraph, "The model proposed in this paper has demonstrated improved performance"), the computing system comprising: one or more processors; and one or more non-transitory computer-readable media that collectively store (Jang, pg. 7, Section 4, "Experiment" implies they perform their method on a computer, in which processors and computer-readable media are inherent): a machine learned convolutional neural network (Jang, pg. 6, Fig. 4 is a neural network with a "Convolutional layer," thus a convolutional neural network) that comprises one or more temporal residual connections that respectively supply one or more sets of intermediate feature data generated from a current sequential input to one or more other instantiations of the machine learned convolutional neural network applied to process one or more other sequential inputs (Jang, pg. 6, Fig. 4, the vertical upwards and downwards connections in the Bi-LSTM layer are temporal residual connections that supply intermediate feature data between LSTM instantiations of the convolutional neural network model to process sequential inputs from different timesteps, see Fig. 3); and instructions that, when executed by the one or more processors, cause the computing system to perform operations, the operations comprising: for each of a plurality of sequential inputs included in a sequence (Jang, pg. 6, Eq. (1)): processing the current sequential input with at least a portion of a current instantiation of the machine learned convolutional neural network to generate a current set of intermediate feature data (Jang, pg. 6-7, Fig. 4 & Eqs. (1-5), the input data is processed via a convolution to provide data to the Bi-LSTM, which is further processed by that instantiation of the LSTM to generate intermediate feature data for another instantiation of the Bi-LSTM); storing the current set of intermediate feature data for provision to one or more subsequent instantiations of the machine-learned convolutional neural network applied to process one or more subsequent sequential inputs that are subsequent to the current sequential input in the sequence; accessing one or more sets of preceding intermediate feature data generated by one or more preceding instantiations of the machine-learned convolutional neural network applied to process one or more preceding sequential inputs that preceded the current sequential input in the sequence (Jang, pg. 4, Fig. 3 with pg. 6, Fig. 4, where the vertical downwards connections of the Bi-LSTM provide a preceding intermediate feature value to a subsequent instantiation which also takes subsequent inputs in a sequence); and generating a model output from the current instantiation of the machine-learned convolutional neural network based at least on the current set of intermediate feature data and the one or more sets of preceding intermediate feature data (Jang, pg. 6, Fig. 4, where both "hidden state" and "output layers" are model output).

Regarding Claim 3, Jang teaches the computing system of Claim 1 (and thus the rejection of Claim 1 is incorporated). Jang further teaches wherein the machine-learned convolutional neural network comprises both: forward-propagating temporal residual connections that supply the one or more sets of intermediate feature data generated from the current sequential input to the subsequent instantiations of the machine-learned convolutional neural network (Jang, pg. 6, Fig. 4, the vertical downwards connections of the Bi-LSTM); and backward-propagating temporal residual connections that supply the one or more sets of intermediate feature data generated from the current sequential input to the preceding instantiations of the machine-learned convolutional neural network (Jang, pg. 6, Fig. 4, the vertical upwards connections of the Bi-LSTM).

Regarding Claim 4, Jang teaches the computing system of Claim 1 (and thus the rejection of Claim 1 is incorporated). Jang further teaches wherein at least one of the one or more temporal residual connections is configured to supply the one or more sets of intermediate feature data to a same layer of the one or more other instantiations of the machine-learned convolutional neural network (Jang, pg. 6, Fig. 4, where a vertical row of LSTMs is a layer and the vertical connections supply to the same layer).

Regarding Claim 5, Jang teaches the computing system of Claim 1 (and thus the rejection of Claim 1 is incorporated). Jang further teaches wherein at least one of the one or more temporal residual connections is configured to supply the one or more sets of intermediate feature data to a different layer of the one or more other instantiations of the machine-learned convolutional neural network (Jang, pg. 6, Fig. 4, where a horizontal row of LSTMs is a layer and the vertical connections supply to a different layer).

Regarding Claim 6, Jang teaches the computing system of Claim 1 (and thus the rejection of Claim 1 is incorporated). Jang further teaches wherein the one or more temporal residual connections comprise a plurality of temporal residual connections present at different respective depths within the machine-learned convolutional neural network (Jang, pg. 6, Fig. 4, where each horizontal row of LSTMs is a different depth and the vertical connections are thus at different depths).

Regarding Claim 7, Jang teaches the computing system of Claim 1 (and thus the rejection of Claim 1 is incorporated). Jang further teaches combining at least one of the sets of preceding intermediate feature data with at least one existing set of feature data to form a combined set of feature data; and generating the model output from the current instantiation of the machine-learned convolutional neural network based at least in part on the combined set of feature data (Jang, pg. 6, Fig. 4, where the LSTMs take input from a preceding set and a current input and combine them to create the output).

Regarding Claim 14, Jang teaches the computing system of Claim 1 (and thus the rejection of Claim 1 is incorporated). Jang further teaches wherein the one or more subsequent instantiations of the machine-learned convolutional neural network applied to process the one or more subsequent sequential inputs comprise a next sequential instantiation of the machine-learned convolutional neural network applied to process a next sequential input in the sequence (Jang, pg. 4, Fig. 3 with pg. 6, Fig. 4, where a next lower LSTM processes the next sequential input in the sequence).

Regarding Claim 15, Jang teaches the computing system of Claim 1 (and thus the rejection of Claim 1 is incorporated). Jang further teaches wherein the one or more subsequent instantiations of the machine-learned convolutional neural network applied to process the one or more subsequent sequential inputs comprise a greater-than-next sequential instantiation of the machine-learned convolutional neural network applied to process a greater-than-next sequential input in the sequence (Jang, pg. 4, Fig. 3 with pg. 6, Fig. 4, where a lower LSTM, two or more LSTMs down, rather than just the next LSTM, processes a two-or-more-later sequential input in the sequence).

Claim 21 recites precisely the method performed by the system of Claim 1, and is thus rejected for the reasons set forth in the rejection of Claim 1.
Claim 22 recites a subset of the limitations of Claim 1, and is thus rejected for the reasons set forth in the rejection of Claim 1.

Claims 1, 2, 7, 9-13, 18, and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Kopuklu et al., "Dissected 3D CNNs: Temporal Skip Connections for Efficient Online Video Processing."

Regarding Claim 1, Kopuklu teaches a computing system for improved temporal processing of sequential data (Kopuklu, title, "Dissected 3D CNNs: Temporal Skip Connections for Efficient Online Video Processing"), the computing system comprising: one or more processors; and one or more non-transitory computer-readable media that collectively store (Kopuklu, pg. 5, 2nd column, 1st paragraph, "implemented in PyTorch," in which processors and computer-readable media are inherent): a machine learned convolutional neural network (Kopuklu, title, "Dissected 3D CNNs: Temporal Skip Connections for Efficient Online Video Processing" & pg. 4, Fig. 3, "Proposed Dissected 2D CNN architecture") that comprises one or more temporal residual connections that respectively supply one or more sets of intermediate feature data generated from a current sequential input to one or more other instantiations of the machine learned convolutional neural network applied to process one or more other sequential inputs (Kopuklu, pg. 4, Fig. 3, the horizontal connections are temporal residual connections that supply intermediate feature data between instantiations of the convolutional neural network model to process sequential inputs from different video frames); and instructions that, when executed by the one or more processors, cause the computing system to perform operations, the operations comprising: for each of a plurality of sequential inputs included in a sequence (Kopuklu, pg. 4, Fig. 4, video frames): processing the current sequential input with at least a portion of a current instantiation of the machine learned convolutional neural network to generate a current set of intermediate feature data (Kopuklu, pg. 4, Fig. 3, each vertical column is a current instantiation); storing the current set of intermediate feature data for provision to one or more subsequent instantiations of the machine-learned convolutional neural network applied to process one or more subsequent sequential inputs that are subsequent to the current sequential input in the sequence; accessing one or more sets of preceding intermediate feature data generated by one or more preceding instantiations of the machine-learned convolutional neural network applied to process one or more preceding sequential inputs that preceded the current sequential input in the sequence (Kopuklu, pg. 4, Fig. 3, "Cached Volumes" and the horizontal connections connect preceding instantiations); and generating a model output from the current instantiation of the machine-learned convolutional neural network based at least on the current set of intermediate feature data and the one or more sets of preceding intermediate feature data (Kopuklu, pg. 2, 2nd column, "activity recognition").

Regarding Claim 2, Kopuklu teaches the computing system of Claim 1 (and thus the rejection of Claim 1 is incorporated). Kopuklu further teaches wherein the machine-learned convolutional neural network consists only of forward-propagating temporal residual connections that supply the one or more sets of intermediate feature data generated from the current sequential input to the subsequent instantiations of the machine-learned convolutional neural network (Kopuklu, pg. 4, Fig. 3, all the temporal connections are forward-propagating).

Regarding Claim 7, Kopuklu teaches the computing system of Claim 1 (and thus the rejection of Claim 1 is incorporated). Kopuklu further teaches combining at least one of the sets of preceding intermediate feature data with at least one existing set of feature data to form a combined set of feature data; and generating the model output from the current instantiation of the machine-learned convolutional neural network based at least in part on the combined set of feature data (Kopuklu, pg. 4, Fig. 3, the "Concat" operation).

Regarding Claim 9, Kopuklu teaches the computing system of Claim 7 (and thus the rejection of Claim 7 is incorporated). Kopuklu further teaches concatenating the at least one of the sets of preceding intermediate feature data with the at least one existing set of feature data (Kopuklu, pg. 4, Fig. 3, the "Concat" operation) and applying one or more convolutions to the concatenated data (Kopuklu, pg. 4, Fig. 3, the convolutions applied to the concatenated data).

Regarding Claim 10, Kopuklu teaches the computing system of Claim 7 (and thus the rejection of Claim 7 is incorporated). Kopuklu further teaches concatenating the at least one of the sets of preceding intermediate feature data with the at least one existing set of feature data (Kopuklu, pg. 4, Fig. 3, the "Concat" operation), applying multiple convolution filters to the concatenated data in parallel (Kopuklu, pg. 4, Fig. 3, the different blocks along a vertical row all happen at the same time step, thus in parallel), and combining the outputs of the multiple convolutional filters (the final output is a combination of all intermediate outputs).

Regarding Claim 11, Kopuklu teaches the computing system of Claim 10 (and thus the rejection of Claim 10 is incorporated). Kopuklu further teaches wherein the multiple convolutional filters have different respective filter sizes (Kopuklu, pg. 4, Fig. 3, "2x2x3" and "1x3x3").

Regarding Claim 12, Kopuklu teaches the computing system of Claim 10 (and thus the rejection of Claim 10 is incorporated). Kopuklu further teaches wherein the multiple convolutional filters have different respective dilation rates (Kopuklu, pg. 3, Table 1, "Stride (1,2,2)" & "Stride (1,1,1)").

Regarding Claim 13, Kopuklu teaches the computing system of Claim 7 (and thus the rejection of Claim 7 is incorporated). Kopuklu further teaches wherein the at least one existing set of feature data comprises the current set of intermediate feature data (Kopuklu, pg. 4, Fig. 3, where the concat operation acts on current data from the same column, i.e. current intermediate feature data, and preceding data from the previous column).

Regarding Claim 18, Kopuklu teaches the method of Claim 1 (and thus the rejection of Claim 1 is incorporated). Kopuklu further teaches wherein the plurality of sequential inputs in the sequence comprise a plurality of image frames included in a video (Kopuklu, title, video frames).

Regarding Claim 20, Kopuklu teaches the method of Claim 1 (and thus the rejection of Claim 1 is incorporated). Kopuklu further teaches wherein the machine-learned convolutional neural network is configured to perform a task, wherein the task comprises action recognition or object detection (Kopuklu, pg. 2, 2nd column, "activity recognition").

Claims 1, 7, 8, 16, 17, and 19 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Sun et al., "A Novel Coding Architecture for Multi-Line LiDAR Point Clouds Based on Clustering and Convolutional LSTM Network" (with an online publication date of 10 November 2020).

Regarding Claim 1, Sun teaches a computing system for improved temporal processing of sequential data (Sun, title, "A Novel Coding Architecture for Multi-Line LiDAR Point Clouds" & pg. 2192, Fig. 1, "Point Cloud Sequence" & pg. 2196, 2nd column, last paragraph, "To eliminate temporal redundancy in point cloud sequences"), the computing system comprising: one or more processors; and one or more non-transitory computer-readable media that collectively store (Sun, pg. 2198, 1st column, Section IV, "the main body of the proposed algorithm is implemented in C++," in which processors and computer-readable media are inherent): a machine learned convolutional neural network (Sun, pg. 2196, 2nd column, last paragraph, "we develop a prediction neural network using convolutional LSTM" & pg. 2197, Fig. 10) that comprises one or more temporal residual connections that respectively supply one or more sets of intermediate feature data generated from a current sequential input to one or more other instantiations of the machine learned convolutional neural network applied to process one or more other sequential inputs (Sun, pg. 2197, Fig. 10, the horizontal connections between timesteps are temporal residual connections that supply intermediate feature data between ConvLSTM instantiations of the convolutional neural network model to process sequential inputs from different timesteps); and instructions that, when executed by the one or more processors, cause the computing system to perform operations, the operations comprising: for each of a plurality of sequential inputs included in a sequence (Sun, pg. 2197, Fig. 10, the sequence of Xt inputs & pg. 2196, 2nd column, last paragraph, "point cloud sequences"): processing the current sequential input with at least a portion of a current instantiation of the machine learned convolutional neural network to generate a current set of intermediate feature data (Sun, pg. 2197, Fig. 10, the input data Xt is processed via the ConvLSTMs to generate intermediate feature data for other instantiations of ConvLSTMs); storing the current set of intermediate feature data for provision to one or more subsequent instantiations of the machine-learned convolutional neural network applied to process one or more subsequent sequential inputs that are subsequent to the current sequential input in the sequence; accessing one or more sets of preceding intermediate feature data generated by one or more preceding instantiations of the machine-learned convolutional neural network applied to process one or more preceding sequential inputs that preceded the current sequential input in the sequence (Sun, pg. 2197, Fig. 10, the horizontal connections of the ConvLSTMs provide preceding intermediate feature values to a subsequent instantiation which also takes subsequent inputs in a sequence); and generating a model output from the current instantiation of the machine-learned convolutional neural network based at least on the current set of intermediate feature data and the one or more sets of preceding intermediate feature data (Sun, pg. 2197, Fig. 10, the Pt are model outputs).

Regarding Claim 7, Sun teaches the computing system of Claim 1 (and thus the rejection of Claim 1 is incorporated). Sun further teaches combining at least one of the sets of preceding intermediate feature data with at least one existing set of feature data to form a combined set of feature data; and generating the model output from the current instantiation of the machine-learned convolutional neural network based at least in part on the combined set of feature data (Sun, pg. 2197, Fig. 10, where the output of a previous timestep is summed with Xt to eventually generate the later P output).

Regarding Claim 8, Sun teaches the computing system of Claim 7 (and thus the rejection of Claim 7 is incorporated).
Sun further teaches wherein combining at least one of the sets of preceding intermediate feature data with at least one existing set of feature data to form a combined set of feature data comprises summing the at least one of the sets of preceding intermediate feature data with the at least one existing set of feature data (Sun, pg. 2197, Fig. 10, where the output of a previous timestep is summed with Xt).

Regarding Claim 16, Sun teaches the computing system of Claim 1 (and thus the rejection of Claim 1 is incorporated). Sun further teaches wherein the current set of intermediate feature data comprises an activation map for a convolutional layer of the machine-learned convolutional neural network (Sun, pg. 2197, Fig. 10, where the intermediate feature data is the output of the ConvLSTM blocks).

Regarding Claim 17, Sun teaches the computing system of Claim 1 (and thus the rejection of Claim 1 is incorporated). Sun further teaches wherein the machine-learned convolutional neural network comprises one or more convolutional layers followed by a long short term memory layer, wherein the one or more temporal residual connections are presented at the one or more convolutional layers (Sun, pg. 2197, Fig. 10, the ConvLSTM blocks with temporal residual connections as horizontal outputs).

Regarding Claim 19, Sun teaches the computing system of Claim 1 (and thus the rejection of Claim 1 is incorporated). Sun further teaches wherein the plurality of sequential inputs included in the sequence comprise a plurality of sets of Light Detection and Ranging (LiDAR) data included in a LiDAR data sequence (Sun, title, "A Novel Coding Architecture for Multi-Line LiDAR Point Clouds" & pg. 2192, Fig. 1, "Point Cloud Sequence" & pg. 2196, 2nd column, last paragraph, "To eliminate temporal redundancy in point cloud sequences").

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Jang, in view of Salur et al., "A Novel Hybrid Deep Learning Model for Sentiment Classification."

Regarding Claim 2, Jang teaches the computing system of Claim 1 (and thus the rejection of Claim 1 is incorporated). The temporal residual connections identified in Jang in the rejection of Claim 1 are in the Bi-LSTM configuration of LSTMs, and thus are both forward-propagating and backward-propagating temporal residual connections, while Claim 2 calls for only forward-propagating temporal residual connections. However, Salur, in a similar application as Jang (using a combination of convolutional neural networks and LSTMs for sentiment prediction), teaches that LSTMs, Bi-LSTMs, and GRUs can be used in place of each other (see pg. 58085, Fig. 7 & pg. 58087, 1st column, 3rd paragraph, "the features are extracted using RNN variants such as LSTM, BiLSTM, and GRU methods"). Thus, it would have been obvious to replace the Bi-LSTM of Jang with a regular LSTM, thus including only forward-propagating connections, as does Salur. The rationale is that the Bi-LSTM and LSTM are known equivalents, i.e., KSR Rationale B, "Simple substitution" (see MPEP 2144, "Rationale may be in a reference, or reasoned from ... art-recognized equivalents" & MPEP 2144.06; Salur has inventions with LSTM and BiLSTM substituted for each other).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Mihal, US PG Pub 2024/0153044, also teaches convolutional neural networks with temporal residual connections.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRIAN M SMITH, whose telephone number is (469) 295-9104. The examiner can normally be reached Monday - Friday, 8:00 am - 4:00 pm Pacific. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kakali Chaki, can be reached at (571) 272-3719.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BRIAN M SMITH/
Primary Examiner, Art Unit 2122
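As an aid to parsing the claim language recited throughout the rejections, the Claim 1 operation loop (process, store, access, combine, generate output) can be sketched in a few lines. This is an illustrative sketch only, not the applicant's or any cited reference's implementation; the shapes, weights, and the `conv1x1` stand-in for a real convolutional layer are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1x1(x, w):
    # Per-position linear map: a minimal stand-in for a convolutional layer.
    return x @ w

w_feat = rng.standard_normal((8, 4))    # input -> intermediate feature data
w_out = rng.standard_normal((8, 2))     # combined features -> model output

sequence = rng.standard_normal((5, 8))  # five sequential inputs, eight features each
cache = []                              # stored intermediate feature data, one entry per input
outputs = []

for x in sequence:
    # Process the current sequential input to generate a current set of
    # intermediate feature data.
    current = conv1x1(x, w_feat)
    # Access preceding intermediate feature data via a forward-propagating
    # temporal residual connection (zeros for the first input in the sequence).
    preceding = cache[-1] if cache else np.zeros_like(current)
    # Combine by concatenation followed by a convolution (cf. Claim 9);
    # Claim 8's alternative would be element-wise summation: current + preceding.
    combined = np.concatenate([current, preceding])
    outputs.append(conv1x1(combined, w_out))  # model output for this instantiation
    cache.append(current)                     # store for subsequent instantiations

assert len(outputs) == len(sequence) and outputs[0].shape == (2,)
```

Under this reading, Claim 2's "only forward-propagating temporal residual connections" corresponds to reading the cache only in the forward direction, whereas a Bi-LSTM-style model (as in Jang) would additionally pass features backward to preceding instantiations.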

Prosecution Timeline

May 18, 2023
Application Filed
Jan 14, 2026
Non-Final Rejection — §102, §103, §112
Mar 12, 2026
Examiner Interview Summary
Mar 12, 2026
Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596936
PREDICTIVE DATA ANALYSIS TECHNIQUES USING GRAPH-BASED CODE RECOMMENDATION MACHINE LEARNING MODELS
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12585985
RECOGNITION SYSTEM, MODEL PROCESSING APPARATUS, MODEL PROCESSING METHOD, AND RECORDING MEDIUM FOR INTEGRATING MODELS IN RECOGNITION PROCESSING
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12555025
METHOD AND SYSTEM FOR INTEGRATING FIELD PROGRAMMABLE ANALOG ARRAY WITH ARTIFICIAL INTELLIGENCE
Granted Feb 17, 2026 (2y 5m to grant)

Patent 12518198
System and Method for Ascertaining Data Labeling Accuracy in Supervised Learning Systems
Granted Jan 06, 2026 (2y 5m to grant)

Patent 12488068
PERFORMANCE-ADAPTIVE SAMPLING STRATEGY TOWARDS FAST AND ACCURATE GRAPH NEURAL NETWORKS
Granted Dec 02, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 52%
With Interview: 89% (+37.0%)
Median Time to Grant: 4y 3m
PTA Risk: Low
Based on 246 resolved cases by this examiner. Grant probability derived from career allow rate.
