Prosecution Insights
Last updated: April 19, 2026
Application No. 18/179,826

SYSTEMS AND METHODS FOR MACHINE LEARNING MODEL MANAGEMENT

Non-Final OA — §103, §112
Filed
Mar 07, 2023
Examiner
KNIGHT, PAUL M
Art Unit
2148
Tech Center
2100 — Computer Architecture & Software
Assignee
JPMorgan Chase Bank, N.A.
OA Round
1 (Non-Final)
Grant Probability: 62% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 3y 1m
With Interview: 79%

Examiner Intelligence

Career Allow Rate: 62% (169 granted / 272 resolved; +7.1% vs TC avg)
Interview Lift: +17.0% (strong lift in allowance rate for resolved cases with an interview)
Typical Timeline: 3y 1m avg prosecution • 24 currently pending
Career History: 296 total applications across all art units

Statute-Specific Performance

§101: 9.5% (-30.5% vs TC avg)
§103: 45.5% (+5.5% vs TC avg)
§102: 6.0% (-34.0% vs TC avg)
§112: 35.2% (-4.8% vs TC avg)
Tech Center averages are estimates • Based on career data from 272 resolved cases

Office Action

§103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Style

In this action, unitalicized bold is used for claim language, while italicized bold is used for emphasis.

Applicant Reply

“The claims may be amended by canceling particular claims, by presenting new claims, or by rewriting particular claims as indicated in 37 CFR 1.121(c). The requirements of 37 CFR 1.111(b) must be complied with by pointing out the specific distinctions believed to render the claims patentable over the references in presenting arguments in support of new claims and amendments. . . . The prompt development of a clear issue requires that the replies of the applicant meet the objections to and rejections of the claims. Applicant should also specifically point out the support for any amendments made to the disclosure. See MPEP § 2163.06. . . . An amendment which does not comply with the provisions of 37 CFR 1.121(b), (c), (d), and (h) may be held not fully responsive. See MPEP § 714.” MPEP § 714.02.

Generic statements or listings of numerous paragraphs do not “specifically point out the support for” claim amendments. “With respect to newly added or amended claims, applicant should show support in the original disclosure for the new or amended claims. See, e.g., Hyatt v. Dudas, 492 F.3d 1365, 1370, n.4, 83 USPQ2d 1373, 1376, n.4 (Fed. Cir. 2007) (citing MPEP § 2163.04 which provides that a ‘simple statement such as ‘applicant has not pointed out where the new (or amended) claim is supported, nor does there appear to be a written description of the claim limitation ‘___’ in the application as filed’ may be sufficient where the claim is a new or amended claim, the support for the limitation is not apparent, and applicant has not pointed out where the limitation is supported.’)” MPEP § 2163(II)(A).

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA, the applicant) regards as the invention.

Generally: separately listed claim elements are construed as distinct components; all claim terms must be given weight; there is presumed to be a difference in meaning and scope when different words or phrases are used in separate claims; and repeated, consistent descriptions in the specification indicate the proper scope of a claimed term. “[C]laims must ‘conform to the invention as set forth in the remainder of the specification and the terms and phrases used in the claims must find clear support or antecedent basis in the description so that the meaning of the terms in the claims may be ascertainable by reference to the description.’ 37 C.F.R. § 1.75(d)(1).” Phillips v. AWH Corp., 415 F.3d 1303, 1316 (Fed. Cir. 2005) (as cited in MPEP § 2111).
Therefore, use of two different terms in the claims that both rely on the description of a single structure in the Specification may render at least one term indefinite, because there is no way to determine which term should be construed in view of the description of the single structure.

All independent claims substantially recite “providing, on a model serving platform, a plurality of production machine learning models and a plurality of shadow machine learning models[.]” It is not clear whether “providing” requires an operation (e.g., creating or transmitting the models) or whether this limitation should be interpreted as merely requiring that the models exist on a “model serving platform.” The plain meaning of the claim language is more consistent with interpreting “providing” as an affirmative step such as creating or transmitting. But nothing in the Specification was found describing any such operation, which tends to make the interpretation requiring only that the models exist more plausible in light of the disclosure. Since it is not clear how “providing” should be interpreted in this claim, the language is indefinite. This rejection may be overcome by avoiding verbs if no operation is being claimed, or by clearly claiming a specific operation (or set of operations).

Claims 2 and 12 substantially recite “providing an event streaming platform; routing the input data to the event streaming platform; and publishing the input data to a first topic.” The claim recites the separate operations of “providing an event streaming platform” and “routing the input data to the event streaming platform[.]” It is not clear what operations, if any, are required by providing the platform. The plain language of “providing” indicates some creation or transmission of the platform. But the Specification is silent as to creation or transmission of the platform, indicating that the claimed “providing” merely refers to the platform existing. Further, the claims separately recite routing data to the platform, so the “providing” language would be redundant if it were interpreted as only requiring existence of the platform. Since it is not clear what operations, if any, are required by the “providing” step, claims that include this language are indefinite.

Claims 7 and 17 substantially recite “wherein each of the plurality of production machine learning models receives a different percentage of the input data.” It is not clear whether the production models must receive a different percentage of the input data from one another, or whether this language refers to the models receiving a different percentage of the data from that received by the shadow models.

Claims 9-10 and 18-19 recite “production slots” and “shadow slots.” Both terms appear to be undefined, applicant-invented terms. The Specification uses the terms, but it is not clear from context whether they refer to physical slots (i.e., in a server rack where the code running each type of model is executed), to any set of computing resources, or even to an addressable location used for the models. Ultimately, all attempts at determining the meaning of these applicant-invented terms are nothing more than guesses. It is submitted that claim language that requires guessing at the meaning of terms fails to meet the requirement of “claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.” 35 U.S.C. § 112(b).
All dependent claims are rejected as containing the limitations of the claims from which they depend.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 11, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Christopher (Deploying Machine Learning Models in Shadow Mode; 2020) and Hilton (CRE life lessons (parts 1 and 2); 2017).

1. A method comprising: providing, on a model serving platform, (Christopher teaches “There are two fundamental approaches to Shadow Mode: 1. Application level implementations 2. Infrastructure level implementations[.]” Christopher p. 5. Implementation at either the application level or the infrastructure level each independently teaches “a model serving platform.”) a plurality of production machine learning models and a plurality of shadow machine learning models; (“The strategies you adopt when deploying software have the potential to save you from expensive and insidious mistakes. This is particularly true for machine learning systems, where detecting subtle data munging, feature engineering or model bugs in production can be very challenging, particularly when the production data inputs are hard to replicate exactly. ‘Shadow Mode’ is one such deployment strategy, and in this post I will examine this approach and its trade-offs.” Christopher p. 1. “In scenarios where an ML system is conducting multiple model deployments a day, manual batch testing may be unrealistic - instead a service may be required to check that predictions fall within expected bounds, which can be configured depending on the expectations from the research environment.” Christopher p. 7. Based on the general nature of the explanation in Christopher (e.g., “for machine learning systems”) and the teaching of using a service to evaluate multiple models per day for deployment, one of ordinary skill in the art would understand the teaching of Christopher as applying to a plurality of shadow and production (i.e., deployed) models. Further, a mere duplication of parts has been found to be obvious. See MPEP § 2144.04. Here, the claims merely recite “a plurality,” with no specific relationship or interaction between the models in the plurality, and no criticality attributed to the duplication.)
routing input data to the plurality of production machine learning models and to the plurality of shadow machine learning models; (Christopher teaches “‘Shadow Mode’ or ‘Dark Launch’ as Google calls it is a technique where production traffic and data is run through a newly deployed version of a service or machine learning model, without that service or model actually returning the response or prediction to customers/other systems. Instead, the old version of the service or model continues to serve responses or predictions, and the new version’s results are merely captured and stored for analysis.” Christopher p. 2.)

receiving, at a model monitoring engine, production output data from a first production machine learning model of the plurality of production machine learning models; receiving, at the model monitoring engine, offline output data from a first shadow machine learning model of the plurality of shadow machine learning models; (This pair of limitations reads on receiving output data from both a production (i.e., online) machine learning model and a shadow (i.e., offline) machine learning model at “a model monitoring engine.” The “model monitoring engine” is not expressly defined in the Specification, and nothing in the Specification limits this claim element to any particular structure. Further, the Specification describes this element as performing a plethora of operations related to monitoring of models. See Spec. ¶¶ 3-65. Consistent with this disclosure, the model monitoring engine is interpreted as any combination of computing resources carrying out some aspect of monitoring the models. This interpretation of the term “model monitoring engine” applies throughout the claim set. Christopher teaches: “Once your new model is deployed in shadow mode, it’s time to reap the benefits. In addition to the standard service-level monitoring you should be conducting at all times (HTTP response codes, latency, memory usage etc.), you are now able to compare model inputs and outputs. This comparison will be with both the research environment, and also over time to make sure inputs and outputs do not suddenly change (perhaps due to a change in an external data source). . . . Key things to analyze include . . . raw data . . . features being generated as inputs to the model . . . predictions being generated by the model . . . The time you wait before conducting this analysis depends on the business requirements and the amount of traffic coming in.” Christopher p. 7. One of ordinary skill in the art would understand the analysis taught in Christopher, including the analysis of predictions being generated by the model, to be performed using some computing resources. As explained above, the claimed “model monitoring engine” reads on computing resources carrying out some aspect of monitoring the models. Note that Christopher also explicitly teaches recording data from each model for later analysis: “You should already be recording all inputs and outputs to your model, either in logs or a database, for reproducibility. Shadow mode introduces the need to be able to distinguish between predictions from the current (customer-facing) model and the model in shadow mode. You should design your logging and/or database schema accordingly, such that this distinction can be made, for example by including a column in a database for recording the model version.” Christopher p. 5.
“Your data collection techniques also need to evolve so that you can easily distinguish between shadow and non-shadow model inputs and outputs. The good news is that once a model is running in shadow mode, the switch to make it live should be relatively simple – forking all traffic to the new model, or toggling a feature flag.” Christopher pp. 7-8.)

promoting the first shadow machine learning model to a production machine learning model based on the offline output data; and demoting the first production machine learning model based on the production output data. (“for the rest of this post I will use the terms like so: Deployment (‘in production but not affecting customers/users’) and release (‘in production and affecting customers/users’).” Christopher p. 4. “The good news is that once a model is running in shadow mode, the switch to make it live should be relatively simple – forking all traffic to the new model, or toggling a feature flag.” Christopher pp. 7-8. Christopher does not expressly teach demoting the production model. Hilton teaches “In theory, once you’ve dark-launched 100% of your traffic to the new service, making it go ‘live’ is almost trivial. At the point where the traffic is forked to the original and new service, you’ll return the new service response instead of the original service response. If you have an enforced timeout on the new service, you’ll change that to be a timeout on the old service. Job done! Now you can disable monitoring of your original service, turn it off, reclaim its compute resources, and delete it from your source code repository. (A team meal celebrating the turn-down is optional, but strongly recommended.) Every service running in production is a tax on support and reliability, and reducing the service count by turning off a service is at least as important as adding a new service.” Hilton p. 9. It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the teaching of Hilton because disabling the previously deployed software saves computing resources.)

For rejections of claims 11 and 20, see the rejection of claim 1.
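For orientation, a minimal Python sketch of the shadow-mode pattern described above (illustrative only; it is not part of the Office Action or the cited references, and all names are hypothetical): the production model's prediction is served, the shadow model's prediction is only logged with a model-version tag, and promotion swaps the models.

    # Illustrative only -- not from the Office Action or the cited art.
    # Shadow mode: serve the production model's output, log the shadow
    # model's output for offline comparison, tagged by model version.
    from dataclasses import dataclass, field
    from typing import Any, Callable, List, Optional

    @dataclass
    class ShadowModeRouter:
        production: Callable[[Any], Any]                # live, customer-facing model
        shadow: Optional[Callable[[Any], Any]]          # candidate model under test
        log: List[dict] = field(default_factory=list)   # stand-in for a DB/log sink

        def predict(self, features: Any) -> Any:
            live = self.production(features)
            if self.shadow is not None:
                dark = self.shadow(features)            # same input, output withheld
                # A "model_version" field distinguishes shadow rows from production rows.
                self.log.append({"model_version": "shadow", "output": dark})
            self.log.append({"model_version": "production", "output": live})
            return live                                 # only the live result is served

        def promote_shadow(self) -> None:
            # "Forking all traffic to the new model": the shadow becomes the
            # production model, and the old production model is demoted (dropped).
            self.production, self.shadow = self.shadow, None

    router = ShadowModeRouter(production=lambda x: x * 2, shadow=lambda x: x * 3)
    print(router.predict(10))   # 20 -- the shadow output (30) is logged, not served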
Claims 2-6 and 12-16 are rejected under 35 U.S.C. 103 as being unpatentable over Christopher, Hilton, and AWS (What is pub/sub messaging?; Feb 2023).

2. The method of claim 1, comprising: providing an event streaming platform; routing the input data to the event streaming platform; and publishing the input data to a first topic. (As best understood, this claim is directed to using pub-sub messaging protocols to transfer data. Christopher teaches “In scenarios where performance is a concern (systems that give real-time predictions, or that have algorithms which are time-intensive), then best practice is to pass the inputs and record the outputs on the new model asynchronously (perhaps using threads or by passing the information to a distributed task queue). More advanced systems might pass the inputs to a separate Kafka topic for the new model.” Christopher p. 5. (Note that a Kafka topic refers to a “topic” in the sense used with pub/sub messaging, taught in the reference below. See Alluri (Kafka Topics and Partitions — A Complete Guide; 2019), cited ONLY as evidentiary support for the interpretation of the term “topic” as used in Christopher and NOT as prior art.) The previously cited art does not teach the operations of routing input data to an event streaming platform and publishing the input data to a first topic, or generally teach details of pub-sub messaging.

AWS teaches “Publish-subscribe messaging, or pub/sub messaging, is an asynchronous communication model that makes it easy for developers to build highly functional and architecturally complex applications in the cloud. In modern cloud architecture, applications are decoupled into smaller, independent building blocks called services. Pub/sub messaging provides instant event notifications for these distributed systems. It supports scalable and reliable communication between independent software modules. . . . How does pub/sub messaging work? The publish-subscribe (pub/sub) system has four key components. Messages A message is communication data sent from sender to receiver. Message data types can be anything from strings to complex objects representing text, video, sensor data, audio, or other digital content. Topics Every message has a topic associated with it. The topic acts like an intermediary channel between senders and receivers. It maintains a list of receivers who are interested in messages about that topic. Subscribers A subscriber is the message recipient. Subscribers have to register (or subscribe) to topics of interest. They can perform different functions or do something different with the message in parallel. Publishers The publisher is the component that sends messages. It creates messages about a topic and sends them once only to all subscribers of that topic. This interaction between the publisher and subscribers is a one-to-many relationship. The publisher doesn’t need to know who is using the information it is broadcasting, and the subscribers don’t need to know where the message comes from.” AWS pp. 1-2. One of ordinary skill in the art would understand video/audio/sensor data as teaching streaming data. Further, AWS expressly teaches using this technique with “applications that rely on real-time events” and “instantaneous, push-based delivery,” which one of ordinary skill in the art would understand to teach “routing input data to the event streaming platform.” See AWS p. 5. The “platform” reads on the pub/sub system as a whole.

It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the teaching of AWS, modifying the previously cited art to include this pub-sub messaging pattern in a system including updating of production and shadow models, as an instance of applying a known technique to a known device (method, or product) ready for improvement to yield predictable results: the prior art contained a “base” device (method, or product) upon which the claimed invention can be seen as an “improvement” (the prior art teaches techniques of using shadow models and production models to improve models without the risk of placing untested models into service); and the prior art contained a known technique that is applicable to the base device (method, or product) (as shown above, the prior art also contained the known technique of using a pub/sub system for communication between components, which is applicable to both shadow models and production models).
One of ordinary skill in the art would have recognized that applying the known technique would have yielded predictable results and resulted in an improved system: applying the pub/sub techniques to the techniques associated with training of shadow models would result in a system with increased throughput and scalability, and would eliminate the need for polling when acquiring real-time (streaming) data for the models. See MPEP § 2143(I)(D). See also AWS p. 5. This motivation applies to all combinations of shadow modeling techniques with pub/sub techniques.

3. The method of claim 2, comprising: subscribing, by each of the plurality of shadow machine learning models, to the first topic, and consuming the input data from the event streaming platform. (AWS teaches that pub/sub techniques include “A message is communication data sent from sender to receiver.” AWS p. 2. “A subscriber is the message recipient. Subscribers have to register (or subscribe) to topics of interest.” AWS p. 2. “Pub/sub messaging instantly pushes asynchronous event notifications when messages are published to the message topic.” AWS pp. 2-3. “Pub/sub messaging provides significant advantages to developers who build applications that rely on real-time events.” AWS p. 5. Applying this technique to machine learning models is addressed in the motivation to combine in claim 2.)

4. The method of claim 2, comprising: routing the production output data to the event streaming platform; and publishing the production output data to a second topic. (This reads on using pub/sub techniques to transmit data output from models to another computing component (i.e., for evaluating the models). Setting up a given “topic” for a given type of data to be transmitted is how pub/sub works, as explained in AWS. In other words, given the model output data to be analyzed, this claim reads on merely using standard pub/sub operations to transmit the data. AWS teaches: “A message is communication data sent from sender to receiver.” AWS p. 2. “A subscriber is the message recipient. Subscribers have to register (or subscribe) to topics of interest.” AWS p. 2. “Pub/sub messaging instantly pushes asynchronous event notifications when messages are published to the message topic.” AWS pp. 2-3. “In some cases, publishers can also be subscribers.” AWS p. 4. “Pub/sub messaging provides significant advantages to developers who build applications that rely on real-time events.” AWS p. 5. See also AWS p. 3, illustrating the overall system. Applying this technique to machine learning models is addressed in the motivation to combine in claim 2.)

5. The method of claim 4, comprising: routing the offline output data to the event streaming platform; and publishing the offline output data to a third topic. (See the rejection of claim 4. Given output of multiple models for analysis, these claims merely recite conventional ways of transmitting data using pub/sub techniques.)

6. The method of claim 5, comprising: subscribing, by the model monitoring engine, to the second topic and the third topic. (See the rejection of claim 4. Given output of multiple models for analysis by some other component, these claims merely recite conventional ways of transmitting data using pub/sub techniques.)

For rejections of claims 12-16, see the rejections of claims 2-6.
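To make the pub/sub reading of claims 2-6 concrete, here is a toy in-memory bus standing in for an event streaming platform such as Kafka (illustrative only; topic names and model stand-ins are hypothetical, not taken from the claims or the cited references): input is published to a first topic, models subscribe to it and publish outputs to a second and third topic, and a monitoring stand-in subscribes to both.

    # Illustrative only -- a toy in-memory pub/sub bus, not a real broker.
    from collections import defaultdict
    from typing import Any, Callable

    class Bus:
        def __init__(self) -> None:
            self.subs = defaultdict(list)       # topic -> subscriber callbacks

        def subscribe(self, topic: str, fn: Callable[[Any], None]) -> None:
            self.subs[topic].append(fn)         # receivers register per topic

        def publish(self, topic: str, msg: Any) -> None:
            for fn in self.subs[topic]:         # one-to-many, push-based delivery
                fn(msg)

    bus = Bus()

    # Models consume input from the first topic and publish their outputs.
    bus.subscribe("input", lambda x: bus.publish("production-output", x * 2))
    bus.subscribe("input", lambda x: bus.publish("offline-output", x * 3))

    # A monitoring stand-in subscribes to the second and third topics.
    seen = []
    bus.subscribe("production-output", lambda y: seen.append(("production", y)))
    bus.subscribe("offline-output", lambda y: seen.append(("offline", y)))

    bus.publish("input", 21)                    # route input data to the platform
    print(seen)                                 # [('production', 42), ('offline', 63)]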
Claims 7-8 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Christopher, Hilton, and AWS2 (Blue/Green Deployments on AWS; 2021 (see pp. 1, 3-4)).

7. The method of claim 1, wherein each of the plurality of production machine learning models receives a different percentage of the input data. (Claim 8 recites a shadow model receiving 100% of the input data, while this claim recites the plurality of production models receiving “a different percentage” of the input data. This is consistent with interpreting “a different percentage of the [received] input data” as being a difference between the input data received by the production machine learning models and the input data received by the shadow models. Note here that if the material of paragraph 44 of the Specification is the intended scope, this claim may be amended. The previously cited art does not expressly teach the production and shadow models receiving different percentages of data. AWS2 teaches current production models receiving all traffic while traffic is increased until all traffic is received by shadow (future production) models. Once all traffic is being received by the shadow model, the production model is disconnected (i.e., it now receives no traffic). This teaches different percentages of input data received by shadow and production models. “After you deploy the green environment, you have the opportunity to validate it. You might do that with test traffic before sending production traffic to the green environment, or by using a very small fraction of production traffic, to better reflect real user traffic. This is called canary analysis or canary testing. If you discover the green environment is not operating as expected, there is no impact on the blue environment. You can route traffic back to it, minimizing impaired operation or downtime and limiting the blast radius of impact.” AWS2 pp. 3-4. “During the deployment, you can scale out the green environment as more traffic gets sent to it and scale the blue environment back in as it receives less traffic. Once the deployment succeeds, you decommission the blue environment and stop paying for the resources it was using.” AWS2 p. 4. It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the teaching of AWS2 because this mitigates the risks associated with bringing a new model online and makes rollback less complicated.)

8. The method of claim 7, wherein each of the plurality of shadow models receives 100% of the input data. (“During the deployment, you can scale out the green environment as more traffic gets sent to it and scale the blue environment back in as it receives less traffic. Once the deployment succeeds, you decommission the blue environment and stop paying for the resources it was using[.]” AWS2 p. 4. Note that decommissioning the blue environment implies that all traffic will go to the green environment.)

For rejection of claim 17, see the rejections of claims 7 and 8.
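A rough sketch of the traffic-splitting reading applied to claims 7-8, in the spirit of the canary analysis described in AWS2 (illustrative only; model names and weights are hypothetical): each production model is assigned a different percentage of requests, while every shadow model receives a copy of 100% of the input.

    # Illustrative only -- weighted traffic splitting; names/weights are made up.
    import random

    production_weights = {"prod_a": 0.9, "prod_b": 0.1}  # different percentages
    shadow_models = ["shadow_a", "shadow_b"]             # each sees 100% of input

    def route(record: dict) -> tuple:
        # Exactly one production model serves each record, chosen by weight.
        (chosen,) = random.choices(
            population=list(production_weights),
            weights=list(production_weights.values()),
        )
        served = {chosen: record}
        # Every shadow model receives a copy of every record (claim 8).
        mirrored = {name: record for name in shadow_models}
        return served, mirrored

    served, mirrored = route({"features": [1.0, 2.0]})
    print(served, mirrored)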
Claims 9-10 and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Christopher, Hilton, and Tranquillin (Taking a practical approach to BigQuery slot usage analysis; 2020).

9. The method of claim 1, comprising: providing a predetermined number of production slots and a predetermined number of shadow slots on the model serving platform, wherein each of the plurality of production machine learning models occupies one of the predetermined number of production slots, and wherein each of the plurality of shadow machine learning models occupies one of the predetermined number of shadow slots. (The “slots” are interpreted as reading on computing resources in general. The previously cited art does not teach allocating resources for models. Tranquillin teaches “Each task, executed on an ad-hoc microservice, requires an adequate amount of computational power in order to be fulfilled. The slot is the computational capacity unit to measure that power. The BigQuery engine dynamically identifies the amount of slots needed to perform a single query, and background processes will transparently allocate the adequate computation power needed to accomplish the task. So, it’s essential to understand how to monitor and analyze slot usage, because that lets your technical team understand if there are any bottlenecks, then allows the business to choose the best pricing model (on-demand vs. flat-rate)[.]” Tranquillin p. 2. It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the teaching of Tranquillin to use “slots” for each data structure (i.e., model), because this allows monitoring of the resources used by the models to determine whether more resources are needed, or whether fewer resources should be purchased from the cloud.)

10. The method of claim 9, wherein the first shadow machine learning model is upgraded to a one of the predetermined number of production slots previously occupied by the first production machine learning model. (See the rejection of claim 1. Note that upgrading the model run on the “slot” of resources teaches upgrading the model.)

For rejections of claims 18-19, see the rejections of claims 9-10.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PAUL M KNIGHT, whose telephone number is (571) 272-8646. The examiner can normally be reached Monday - Friday, 9-5 ET.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michelle Bechtold, can be reached at (571) 431-0762. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/PAUL M KNIGHT/
Examiner, Art Unit 2148

Prosecution Timeline

Mar 07, 2023
Application Filed
Apr 01, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12530592 — NON-LINEAR LATENT FILTER TECHNIQUES FOR IMAGE EDITING
Granted Jan 20, 2026 • 2y 5m to grant

Patent 12530612 — METHODS FOR ALLOCATING LOGICAL QUBITS OF A QUANTUM ALGORITHM IN A QUANTUM PROCESSOR
Granted Jan 20, 2026 • 2y 5m to grant

Patent 12499348 — READ THRESHOLD PREDICTION IN MEMORY DEVICES USING DEEP NEURAL NETWORKS
Granted Dec 16, 2025 • 2y 5m to grant

Patent 12462201 — DYNAMICALLY OPTIMIZING DECISION TREE INFERENCES
Granted Nov 04, 2025 • 2y 5m to grant

Patent 12456057 — METHODS FOR BUILDING A DEEP LATENT FEATURE EXTRACTOR FOR INDUSTRIAL SENSOR DATA
Granted Oct 28, 2025 • 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 62%
With Interview: 79% (+17.0%)
Median Time to Grant: 3y 1m
PTA Risk: Low
Based on 272 resolved cases by this examiner. Grant probability derived from career allow rate.
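The projection arithmetic appears to be a simple additive adjustment; this is an assumption inferred from the displayed figures, not a documented methodology. A minimal sketch:

    # Assumed additive model, inferred only from the figures shown above.
    base_grant_probability = 0.62     # examiner career allow rate
    interview_lift = 0.17             # observed lift with an interview
    with_interview = base_grant_probability + interview_lift
    print(f"{with_interview:.0%}")    # 79%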
