Prosecution Insights
Last updated: April 19, 2026
Application No. 17/645,517

MACHINE-LEARNING TECHNIQUES FOR TIME-DELAY NEURAL NETWORKS

Status: Non-Final OA (§103)
Filed: Dec 22, 2021
Examiner: LEY, SALLY THI
Art Unit: 2147
Tech Center: 2100 — Computer Architecture & Software
Assignee: Equifax Inc.
OA Round: 3 (Non-Final)
Grant Probability: 15% (At Risk)
Expected OA Rounds: 3-4
Time to Grant: 3y 10m
Grant Probability With Interview: 44%

Examiner Intelligence

Career Allow Rate: 15% (grants only 15% of cases; 5 granted / 33 resolved; -39.8% vs TC avg)
Interview Lift: +28.8% (strong lift among resolved cases with interview)
Avg Prosecution: 3y 10m (typical timeline; 35 currently pending)
Total Applications: 68 (career history, across all art units)

Statute-Specific Performance

§101: 29.2% (-10.8% vs TC avg)
§103: 50.2% (+10.2% vs TC avg)
§102: 10.8% (-29.2% vs TC avg)
§112: 9.8% (-30.2% vs TC avg)
Tech Center averages are estimates • Based on career data from 33 resolved cases

Office Action (§103)

Notice of Pre-AIA or AIA Status The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA . Continued Examination Under 37 CFR 1.114 A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 17 Nov 2025 has been entered. Status of Claims This Office Action is in response to the communication filed on 17 Nov 2025. Claims 1-20 are being considered on the merits. Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 1, 2-4, 9, 11-12, 15, and 17-18 are rejected under 35 U.S.C. 
103 as being unpatentable over Cella, Charles Howard (US 2020/0387379 A1; hereinafter, “Cella”) in view of Che, Z., Purushotham, S., Cho, K. et al. Recurrent Neural Networks for Multivariate Time Series with Missing Values. Sci Rep 8, 6085 (2018). https://doi.org/10.1038/s41598-018-24271-9; hereinafter “Che”. Regarding claims 1, 9, and 15, Cella teaches: A method that includes one or more processing devices performing operations comprising: (Cella, para. 1252: “The methods and systems described herein may be deployed in part or in whole through a machine that executes computer software, program codes, and/or instructions on a processor.”) receiving, by a computing system, an access request to an interactive computing environment by a target entity; (Cella, 0015: “The present disclosure describes a method, the method according to one disclosed non-limiting embodiment of the present disclosure can include accessing a distributed ledger including an instruction set, tokenizing the instruction set, interpreting an instruction set access request, and in response to the instruction set access request, providing a provable access to the instruction set.”) A system comprising: a processing device; and a memory device in which instructions (Cella, para. 0967: “The processing system 3302 may include one or more processors and memory.”) executable by the processing device are stored for causing the processing device: (Cella, para. 0967: “The memory may store computer-executable instructions that are executed by the one or more processors.”) A non-transitory computer-readable storage medium having program code that is executable by a processor device to cause a computing device: (Cella, para. 1252: “The processor may access a non-transitory storage medium through an interface that may store methods, codes, and instructions as described herein and elsewhere.”) accessing time-series data of a plurality of predictor variables associated with the target entity, (Cella, para. 
0941: “In embodiments, an ESN may be used to handle time series patterns, such as, in an example, recognizing a pattern of events associated with a market, such as the pattern of price changes in response to stimuli.”) the time-series data of a predictor variable of the plurality of predictor variables comprising data instances of the predictor variable at a sequence of time points; (Cella, para. 0941: “In embodiments, an ESN may be used to handle time series patterns, such as, in an example, recognizing a pattern of events associated with a market, such as the pattern of price changes in response to stimuli.” Examiner notes that Cella teaches a predictor variable such as stimuli) determining a risk indicator for the target entity indicating a level of risk associated with the target entity (Cella, para. 0875: “Certain considerations that may be relevant to a particular system include, without limitation…risk factors related to resource utilization (e.g., increasing data storage at a single location may increase risk over distributed data; increasing throughput of a system may change the risks, such as increased traffic, higher operating points for systems, increased risk of regulatory violations, or the like).”) by inputting the time-series data of the plurality of predictor variables into a time-delay neural network, wherein: (Cella, para. 0914: “Referring to FIG. 4 through FIG. 31, embodiments of the present disclosure, including ones involving expert systems, self-organization, machine learning, artificial intelligence, and the like, may benefit from the use of a neural net, such as a neural net trained for pattern recognition, for classification of one or more parameters, characteristics, or phenomena, for support of autonomous control, and other purposes. References to a neural net throughout this disclosure should be understood to encompass a wide range of different types of neural networks…such as…time delay neural networks”). 
the time-delay neural network comprises (a) a plurality of attribute networks, (Cella, para. 0928 and 0929: “In embodiments, methods and systems described herein that involve an expert system or self-organization capability may use a modular neural network, which may comprise a series of independent neural networks (such as ones of various types described herein) that are moderated by an intermediary.” “Combinations among any of the pairs, triplets, or larger combinations, of the various neural network types described herein, are encompassed by the present disclosure.”) each attribute network of the plurality of attribute networks corresponding to a predictor variable of the plurality of predictor variables (Cella, para. 0929: “This may also include combinations where an expert system uses one neural network for classifying an item (e.g., identifying a machine, a component, or an operational mode) and a different neural network for predicting a state of the item (e.g., a fault state, an operational state, an anticipated state, a maintenance state, or the like)”) according to an attribute dimension and further corresponding to a time point of the sequence of time points according to a time dimension and (Cella, para. 0950: “methods and systems described herein that involve an expert system or self-organization capability may use a dynamic neural network that addresses nonlinear multivariate behavior and includes learning of time-dependent behavior, such as transient phenomena and delay effects”) (b) a decision network for generating the risk indicator from outputs of the plurality of attribute networks; (Cella, para. 
0875: “Any operations utilizing artificial intelligence, expert systems, machine learning, and/or any other systems or operations described throughout the present disclosure that incrementally improve, iteratively improve, and/or formally optimize parameters are understood as examples of optimization and/or improvement herein…Certain considerations that may be relevant to a particular system include, without limitation…risk factors related to resource utilization (e.g., increasing data storage at a single location may increase risk over distributed data; increasing throughput of a system may change the risks, such as increased traffic, higher operating points for systems, increased risk of regulatory violations, or the like).”) and each attribute network of the plurality of attribute networks comprises: (a) input nodes in an input layer accepting the respective data instances of the predictor variable corresponding to the attribute network; (Cella, para. 0924: “In embodiments, an RBF neural network may include an input layer, a hidden layer, and a summation layer. In the input layer, one neuron appears in the input layer for each predictor variable. In the case of categorical variables, N-1 neurons are used, where N is the number of categories.”) and (b) a set of hidden layer nodes in a hidden layer connected to the input nodes, (Cella, para. 0924: “When presented with the vector of input values from the input layer, a hidden neuron may compute a Euclidean distance of the test case from the neuron's center point and then apply the RBF kernel function to this distance, such as using the spread values”) wherein a first set of hidden layer nodes in a first attribute network of the plurality of attribute networks and a second set of hidden layer nodes in a second attribute network of the plurality of attribute networks are disjoint, and (Cella, para. 
0929: “Combinations among any of the pairs, triplets, or larger combinations, of the various neural network types described herein, are encompassed by the present disclosure. This may include combinations where an expert system uses one neural network for recognizing a pattern (e.g., a pattern indicating a problem or fault condition) and a different neural network for self-organizing an activity or work flow based on the recognized pattern (such as providing an output governing autonomous control of a system in response to the recognized condition or pattern). This may also include combinations where an expert system uses one neural network for classifying an item (e.g., identifying a machine, a component, or an operational mode) and a different neural network for predicting a state of the item (e.g., a fault state, an operational state, an anticipated state, a maintenance state, or the like).” Examiner notes that Cella teaches use of networks which have different inputs and are disjoint). wherein weights of connections between the first set of hidden layer nodes and the second set of hidden layer nodes in each attribute network of the plurality of attribute networks are subject to a monotonic constraint; and (Che, sec. Methods: “We introduce decay rates in the model to control the decay mechanism by considering the following important factors. First, each input variable in health care time series has its own meaning and importance in medical applications. The decay rates should differ from variable to variable based on the underlying properties associated with the variables. Second, as we see lots of missing patterns are informative and potentially useful in prediction tasks but unknown and possibly complex, we aim at learning decay rates from the training data rather than fixed a priori…We chose the exponentiated negative rectifier in order to keep each decay rate monotonically decreasing in a reasonable range between 0 and 1. 
Note that other formulations such as a sigmoid function can be used instead, as long as the resulting decay is monotonic and is in the same range”). Responsive to determining that the risk indicator is lower than a threshold risk indicator value (Cella, para. 1009: “A small transaction as utilized herein references a transaction that is small enough to limit the risk of the transaction to a threshold level, where the threshold level is either a level that is below an accepted cost of the transaction, or below a disturbance level (e.g., a financial disturbance, an operational disturbance, etc.) for the system.”) generating access credentials authorizing access to the interactive computing environment by the target entity; and (Cella, para. 1010: “In certain embodiments, tokens may be utilized to provide access to encrypted or isolated data, and/or to confirm that access to the encrypted or isolated data has been provided, or that the data has been accessed.”) transmitting, to a remote computing device (Cella, para. 1255: “The server may provide an interface to other devices including, without limitation, clients, other servers, printers, database servers, print servers, file servers, communication servers, distributed servers, social networks, and the like.”) a responsive message including the risk indicator and (Cella, para. 
0875 and 0988: “Certain considerations that may be relevant to a particular system include, without limitation…risk factors related to resource utilization (e.g., increasing data storage at a single location may increase risk over distributed data” “In embodiments, a content generation system of the platform generates content for a contact event, such as an email, text message, or a post to a network, or a machine-to-machine message, such as communicating via an API or a peer-to-peer system… The content generation system may be seeded with a set of templates, which may be customized…as well as other indicators as noted throughout this disclosure…”) access credentials (Cella, para. 1010: “In certain embodiments, tokens may be utilized to provide access to encrypted or isolated data, and/or to confirm that access to the encrypted or isolated data has been provided, or that the data has been accessed.”) for use in controlling access to one or more interactive computing environments by the target entity. (Cella, para. 1255: “Additionally, this coupling and/or connection may facilitate remote execution of program across the network.”), It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Che into Cella. Cella teaches utilizing forward market pricing to facilitate operational decisions; Che teaches developing novel deep learning models for exploiting missing patterns for effective imputation and improving prediction performance. One of ordinary skill would have been motivated to combine the teachings of Che into Cella, in order to employ a decay mechanism designed for the input variables and the hidden states to capture the aforementioned properties. (Che, Sec. “Methods”). Regarding claims 3, 11, and 17, Cella teaches: Wherein the set of hidden layer nodes in the hidden layer comprise a first set of hidden layer nodes in a first hidden layer, and (Cella, paras. 0927, 0934 and fig. 
11: “In embodiments, methods and systems described herein that involve an expert system or self-organization capability may use a recurrent neural network, which may allow for a bi-directional flow of data, such as where connected units (e.g., neurons or nodes) form a directed cycle” “In embodiments, methods and systems described herein that involve an expert system or self-organization capability may use an autoencoder, autoassociator or Diabolo neural network, which may be similar to a multilayer perceptron (MLP) neural network, such as where there may be an input layer, an output layer and one or more hidden layers connecting them.”) wherein the decision network of the time-delay neural network (Cella, para. 0914: “Referring to FIG. 4 through FIG. 31, embodiments of the present disclosure, including ones involving expert systems, self-organization, machine learning, artificial intelligence, and the like, may benefit from the use of a neural net, such as a neural net trained for pattern recognition, for classification of one or more parameters, characteristics, or phenomena, for support of autonomous control, and other purposes. References to a neural net throughout this disclosure should be understood to encompass a wide range of different types of neural networks…such as…time delay neural networks”) comprises a second set of hidden layer nodes in a second hidden layer and an output layer (Cella, paras. 0927, 0934 and fig. 
11: “In embodiments, methods and systems described herein that involve an expert system or self-organization capability may use a recurrent neural network, which may allow for a bi-directional flow of data, such as where connected units (e.g., neurons or nodes) form a directed cycle” “In embodiments, methods and systems described herein that involve an expert system or self-organization capability may use an autoencoder, autoassociator or Diabolo neural network, which may be similar to a multilayer perceptron (MLP) neural network, such as where there may be an input layer, an output layer and one or more hidden layers connecting them.”) for outputting the risk indicator (Cella, para. 0875: “and/or risk factors related to resource utilization (e.g., increasing data storage at a single location may increase risk over distributed data; increasing throughput of a system may change the risks, such as increased traffic, higher operating points for systems, increased risk of regulatory violations, or the like).”), wherein the second set of hidden layer nodes are connected to the first set of hidden layer nodes in the first hidden layer (Cella, para. 920: “In embodiments, methods and systems described herein that involve an expert system or self-organization capability may use a feed forward neural network, which moves information in one direction, such as from a data input, like a data source related to at least one resource or parameter related to a transactional environment, such as any of the data sources mentioned throughout this disclosure, through a series of neurons or nodes, to an output. Data may move from the input nodes to the output nodes, optionally passing through one or more hidden nodes, without loops.”) of the plurality of attribute networks (Cella, para. 
0929: “This may also include combinations where an expert system uses one neural network for classifying an item (e.g., identifying a machine, a component, or an operational mode) and a different neural network for predicting a state of the item (e.g., a fault state, an operational state, an anticipated state, a maintenance state, or the like)”). Regarding claims 4, 12, and 18, Cella teaches: The method of claim 3, wherein the decision network determines the risk indicator (Cella, para. 0875: “and/or risk factors related to resource utilization (e.g., increasing data storage at a single location may increase risk over distributed data; increasing throughput of a system may change the risks, such as increased traffic, higher operating points for systems, increased risk of regulatory violations, or the like).”) based on the outputs of the plurality of attribute networks (Cella, para. 0928 and 0929: “In embodiments, methods and systems described herein that involve an expert system or self-organization capability may use a modular neural network, which may comprise a series of independent neural networks (such as ones of various types described herein) that are moderated by an intermediary.” “Combinations among any of the pairs, triplets, or larger combinations, of the various neural network types described herein, are encompassed by the present disclosure.”) such that a monotonic relationship exists between an output of an attribute network and the risk indicator. (Cella, para. 0923: “In embodiments, an output layer may comprise a linear combination of hidden layer values representing, for example, a mean predicted output.” Examiner notes that Cella teaches a linear output comprising a combination of hidden layer values wherein a linear output represents a monotonic relationship). Claims 2, 10, and 16 are rejected under 35 U.S.C. 
103 as being unpatentable over Cella in view of Che, and further in view of Geva (“ScaleNet-Multiscale Neural-Network Architecture for Time Series Prediction” IEEE Transactions on Neural Networks, vol. 9, no. 6, pp. 1471-1482, Nov. 1998, doi: 10.1109/72.728396; hereinafter, “Geva”). Regarding claims 2, 10, and 16, Cella does not explicitly disclose: wherein the constraint comprises a first set of weights associated with a first hidden layer node in an attribute network is a shifted version of a second set of weights associated with a second hidden layer node in the attribute network However, Geva teaches: wherein the constraint comprises a first set of weights associated with a first hidden layer node in an attribute network is a shifted version of a second set of weights associated with a second hidden layer node in the attribute network. (Geva, pg. 1474, left column: “The hidden units are fully connected, by the weights w_k^2, k = 1…K, to the one linear output producing the j-th scale prediction X̂_j(L+l).” Examiner notes that the broadest reasonable interpretation of a “shifted version” means a changed version such as via scaling as taught by Geva). It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Geva into Cella. Cella teaches utilizing forward market pricing to facilitate operational decisions; Geva teaches a multiscale neural net architecture for time series prediction of nonlinear dynamic systems. One of ordinary skill would have been motivated to combine the teachings of Geva into Cella, in order to generate an interpretation of the series structures, and more information about the history of the series, using fewer coefficients than other methods (Geva, abstract). Claims 5-7, 13-14, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Cella in view of Che, and further in view of Ibrahim et al. 
("Explainable Prediction of Acute Myocardial Infarction Using Machine Learning and Shapley Values," in IEEE Access, vol. 8, pp. 210410-210417, 2020, doi: 10.1109/ACCESS.2020.3040166; hereinafter, “Ibrahim”) Regarding claims 5, 13, and 19, Cella does not explicitly disclose: wherein the operations further comprise: generating, for the target entity, explanatory data indicating relationships between [the risk indicator] and a predictor variable of the plurality of predictor variables However, Ibrahim teaches: wherein the operations further comprise: generating, for the target entity, explanatory data indicating relationships between [the risk indicator] and a predictor variable of the plurality of predictor variables. (Ibrahim, Sec. VI and Fig. 5: “Shapley values are useful in revealing the contribution of each feature to an individual prediction. Fig. 5 includes two sample cases, each predicting AMI negative and positive. The local explanation graphs show how each feature shift the prediction from the base value (the average model output of the dataset) to the model output.” Examiner notes that Ibrahim teaches generating explanatory data showing a shift from the prediction value where Cella teaches a risk indicator as set forth in previous claims). It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Ibrahim into Cella. Cella teaches utilizing forward market pricing to facilitate operational decisions; Ibrahim teaches machine learning techniques comprised of two deep learning models, a decision-tree model, and use of Shapley values to predict the onset of acute myocardial infarction. One of ordinary skill would have been motivated to combine the teachings of Ibrahim into Cella in order to identify the features of data input that contributed most to a classification decision (Ibrahim, abstract). 
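
The Shapley-value explanations Ibrahim relies on can be made concrete with an exact computation on a toy model. The sketch below is a generic, model-agnostic attribution over three predictor variables, with a made-up linear risk score standing in for the claimed model; it is not Ibrahim's pipeline (which applies SHAP to trained deep and tree models), and the baseline-substitution convention and all names here are assumptions.

```python
from itertools import combinations
from math import factorial

def exact_shapley(predict, baseline, instance):
    """Exact Shapley values over a small feature set.

    Features absent from a coalition are replaced by their baseline value,
    a common (assumed) convention for model-agnostic attribution.
    """
    n = len(instance)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for subset in combinations(others, size):
                # Classic Shapley weight: |S|! (n - |S| - 1)! / n!
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i = [instance[j] if (j in subset or j == i) else baseline[j] for j in range(n)]
                without_i = [instance[j] if j in subset else baseline[j] for j in range(n)]
                phi[i] += weight * (predict(with_i) - predict(without_i))
    return phi

# Toy stand-in for a risk score over three predictor variables (illustrative only).
def risk_score(x):
    return 0.4 * x[0] + 0.25 * x[1] - 0.1 * x[2]

baseline = [0.0, 0.0, 0.0]
instance = [1.0, 2.0, 3.0]
phi = exact_shapley(risk_score, baseline, instance)
# Efficiency property: attributions sum to prediction minus baseline score.
assert abs(sum(phi) - (risk_score(instance) - risk_score(baseline))) < 1e-9
```

The final assertion checks the efficiency property: the per-feature attributions sum to the gap between the instance's score and the baseline score, which is the "shift from the base value to the model output" that Ibrahim's local explanation graphs visualize.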
Regarding claims 6, 14, and 20, Cella does not explicitly disclose: wherein generating the explanatory data comprises: generating a first portion of the explanatory data using the decision network; and generating a second portion of the explanatory data based on weights of the plurality of attribute networks However, Ibrahim teaches: wherein generating the explanatory data comprises: generating a first portion of the explanatory data using the decision network; and generating a second portion of the explanatory data based on weights of the plurality of attribute networks. (Ibrahim, Sec. VI and Figs. 5 and 6: “Fig. 5 includes two sample cases, each predicting AMI negative and positive” “The beeswarm plot in Fig. 6(a) gives an overview of the impact of features on the prediction, with each dot representing the Shapley value of every feature for all samples. Fig. 6(b) shows the average absolute of the Shapley values over the whole testing dataset.” Examiner notes that Ibrahim teaches explanatory data using the decision network as illustrated in figure 5 as well as explanatory data based on weights of the networks as illustrated in figure 6 where the broadest reasonable interpretation of an “attribute network” includes a feature network). It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Ibrahim into Cella as set forth above with respect to claims 6, 14, and 20. Regarding claim 7, Cella does not explicitly disclose: The method of claim 6, wherein generating the first portion of the explanatory data comprises applying a points-below-max algorithm or a Shapley value algorithm. However, Ibrahim teaches: The method of claim 6, wherein generating the first portion of the explanatory data comprises applying a points-below-max algorithm or a Shapley value algorithm. (Ibrahim, Sec. VI and Fig. 6: “Fig. 
5 includes two sample cases, each predicting AMI negative and positive”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Ibrahim into Cella as set forth above with respect to claims 6, 14, and 20. Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Cella in view of Che, in view of Ibrahim and further in view of Geva. Regarding claim 8, Cella and Ibrahim do not explicitly disclose, but Geva teaches: The method of claim 6, wherein generating the second portion of the explanatory data comprises performing wavelet analysis on the weights of the plurality of attribute networks and the time-series data of the plurality of predictor variables. (Geva, pg. 1472, right column: “Basically, we suggest the direct application of the multiscale fast wavelet transform, as it was originally presented by Mallat and Zong [7], to the time series and then to predict each scale of the wavelet’s coefficients by a separate feedforward NN (with the classical Sigmoid or Gaussian hidden units).”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Geva into Cella, as modified. Geva teaches a multiscale neural net architecture for time series prediction of nonlinear dynamic systems. One of ordinary skill would have been motivated to combine the teachings of Geva into Cella, as modified, in order to generate an interpretation of the series structures, and more information about the history of the series, using fewer coefficients than other methods (Geva, abstract). Response to Applicant Arguments/Remarks 35 U.S.C. §101 In light of applicant’s amendments and remarks, the previously asserted 35 U.S.C. §101 rejections have been withdrawn. 35 U.S.C. §102/§103 Starting at the bottom of page 12 of applicant’s remarks, Applicant asserts the claims as amended traverse the prior art rejection. 
In light of applicant’s amendments to independent claims 1, 9, and 15, the previously asserted 35 USC §102 rejection has been withdrawn. Independent claims 1, 9, and 15 and corresponding dependent claims 3-4, 11-12, and 17-18 now stand rejected under 35 USC § 103. Applicant makes no independent argument regarding remaining unamended dependent claims. Therefore such claims similarly remain rejected for the reasons set forth in the rejection above. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to Sally T. Ley whose telephone number is (571)272-3406. The examiner can normally be reached Monday - Thursday, 10:00am - 6:00pm ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Viker Lamardo can be reached at (571) 270-5871. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. 
/STL/Examiner, Art Unit 2147 /VIKER A LAMARDO/Supervisory Patent Examiner, Art Unit 2147
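
The independent claims recite a specific topology: one attribute network per predictor variable, each consuming that variable's time series over a sequence of time points, with hidden nodes disjoint across attribute networks, a monotonic constraint on certain weights, and a decision network that turns the attribute outputs into a risk indicator used to gate access. The sketch below is an illustrative reconstruction from the claim language only, not code from the application or any cited reference; the layer sizes, the softplus reparameterization used to keep constrained weights positive, and every name in it are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softplus(x):
    # Reparameterizing a weight through softplus keeps it positive; positive
    # weights are one (assumed) way to realize a monotonic constraint.
    return np.log1p(np.exp(x))

class AttributeNetwork:
    """One small feed-forward net per predictor variable (attribute dimension).

    Each instance owns its own hidden-layer arrays, so the hidden-node sets of
    any two attribute networks are disjoint, as the claims require.
    """
    def __init__(self, n_time_points, n_hidden=4):
        self.w_in = rng.normal(size=(n_time_points, n_hidden))  # time points -> hidden
        self.w_out = rng.normal(size=n_hidden)                  # hidden -> attribute output

    def forward(self, series):
        hidden = np.tanh(series @ self.w_in)
        return float(hidden @ softplus(self.w_out))  # positive hidden->output weights

class TimeDelayRiskModel:
    """Attribute networks feeding a decision network that emits a risk indicator."""
    def __init__(self, n_variables, n_time_points):
        self.attribute_nets = [AttributeNetwork(n_time_points) for _ in range(n_variables)]
        self.decision_w = softplus(rng.normal(size=n_variables))  # positive combiner weights

    def risk_indicator(self, time_series):
        # time_series: array of shape (n_variables, n_time_points)
        outputs = np.array([net.forward(row)
                            for net, row in zip(self.attribute_nets, time_series)])
        return float(1.0 / (1.0 + np.exp(-outputs @ self.decision_w)))  # score in (0, 1)

model = TimeDelayRiskModel(n_variables=3, n_time_points=5)
score = model.risk_indicator(rng.normal(size=(3, 5)))
THRESHOLD = 0.5
grant_access = score < THRESHOLD  # risk below threshold -> generate access credentials
```

Because the decision-network weights are kept positive, raising any attribute network's output can only raise the risk indicator, which mirrors the monotonic relationship recited in claims 4, 12, and 18.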

Prosecution Timeline

Dec 22, 2021
Application Filed
Apr 22, 2025
Non-Final Rejection — §103
Jul 02, 2025
Applicant Interview (Telephonic)
Jul 02, 2025
Examiner Interview Summary
Jul 25, 2025
Response Filed
Aug 09, 2025
Final Rejection — §103
Nov 17, 2025
Request for Continued Examination
Nov 24, 2025
Response after Non-Final Action
Feb 23, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12443830
COMPRESSED WEIGHT DISTRIBUTION IN NETWORKS OF NEURAL PROCESSORS
2y 5m to grant · Granted Oct 14, 2025
Patent 12135927
EXPERT-IN-THE-LOOP AI FOR MATERIALS DISCOVERY
2y 5m to grant · Granted Nov 05, 2024
Patent 11880776
GRAPH NEURAL NETWORK (GNN)-BASED PREDICTION SYSTEM FOR TOTAL ORGANIC CARBON (TOC) IN SHALE
2y 5m to grant · Granted Jan 23, 2024
Study what changed to get past this examiner. Based on 3 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 15%
With Interview: 44% (+28.8%)
Median Time to Grant: 3y 10m
PTA Risk: High
Based on 33 resolved cases by this examiner. Grant probability derived from career allow rate.
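
The headline numbers combine simply: 15% is the examiner's career allow rate (5 granted of 33 resolved), and the +28.8 point interview lift is added to it to produce the 44% with-interview figure. A minimal sketch, assuming the lift is an additive percentage-point adjustment as the dashboard presents it:

```python
granted, resolved = 5, 33
career_allow_rate = granted / resolved                 # 0.1515..., displayed as 15%
interview_lift = 0.288                                 # +28.8 percentage points
with_interview = career_allow_rate + interview_lift    # 0.4395..., displayed as 44%

print(f"{career_allow_rate:.0%}")  # prints 15%
print(f"{with_interview:.0%}")     # prints 44%
```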
