Prosecution Insights
Last updated: April 19, 2026
Application No. 17/510,882

PEPTIDE-BASED VACCINE GENERATION

Non-Final OA §103
Filed: Oct 26, 2021
Examiner: BICKHAM, DAWN MARIE
Art Unit: 1685
Tech Center: 1600 — Biotechnology & Organic Chemistry
Assignee: NEC Laboratories America Inc.
OA Round: 3 (Non-Final)
Grant Probability: 52% (Moderate)
OA Rounds: 3-4
To Grant: 4y 1m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 52% (13 granted / 25 resolved; -8.0% vs TC avg)
Interview Lift: +69.5% (strong; based on resolved cases with interview)
Typical Timeline: 4y 1m avg prosecution; 39 currently pending
Career History: 64 total applications across all art units

Statute-Specific Performance

§101: 31.0% (-9.0% vs TC avg)
§103: 24.3% (-15.7% vs TC avg)
§102: 12.2% (-27.8% vs TC avg)
§112: 23.5% (-16.5% vs TC avg)
Tech Center averages are estimates • Based on career data from 25 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Request for Continued Examination

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/27/2026 has been entered.

Claim Status

Claims 1-20 are pending. Claims 1-20 are rejected.

Priority

The instant application claims domestic benefit to US provisional application 63/105,926, filed 10/27/2020. Accordingly, each of claims 1-20 is afforded an effective filing date of 10/27/2020.

Information Disclosure Statement

The information disclosure statement (IDS) filed on 10/26/2021 is in compliance with the provisions of 37 CFR 1.97 and has therefore been considered. A signed copy of the IDS document is included with this Office Action.

Drawings

The drawings submitted 09/24/2025 are accepted.

Claim Rejections - 35 USC § 103

The outstanding §103 rejections of the claims are withdrawn. After an updated search and consideration of applicant's arguments, the examiner found new art that discloses generating a peptide vaccine using the new peptide sequence.
Because the new art discloses that accurately predicting and modeling these interactions in immune-oncology supports improved and potent vaccine design, it reads on generating a peptide vaccine using a new peptide sequence.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

A.
Claim(s) 1-2, 4-5, 9-10, 12-13, 15-16, and 19-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Das et al. (Das, Payel, et al. "PepCVAE: Semi-supervised targeted design of antimicrobial peptide molecules." arXiv preprint arXiv:1810.07743 (2018), previously cited) in view of Sidom et al. (Sidhom, John-William, et al. "DeepTCR: a deep learning framework for understanding T-cell receptor sequence signatures within complex T-cell repertoires." BioRxiv (2019), previously cited), in view of Zhang et al. (Zhang, Jianfu, et al. "Multi-attribute transfer via disentangled representation." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 33. No. 01. 2019, newly cited), and in further view of Sidom et al. (Sidhom, John-William, Drew M. Pardoll, and Alexander S. Baras. "Deep learning of the immune synapse." Cancer Research. Vol. 79, 2019, newly cited, henceforth referred to as Sidom2).

Claims 1-10 are directed to a computer-implemented method. Das discloses PepCVAE: semi-supervised targeted design of antimicrobial peptide molecules [title]. Das further discloses that the model architecture was implemented in PyTorch [p. 5, par. 3], which reads on a computer-implemented method.

Claims 12-20 are directed to a system for generating a peptide sequence, comprising: a hardware processor; and a memory that stores a computer program product. Das discloses that the model architecture was implemented in PyTorch [p. 5, par. 3]. Although Das does not explicitly disclose a system, it is implied that a processor and memory are required to use the architecture with PyTorch and an autoencoder.

Claims 1 and 12 are directed to …of generating a peptide sequence, comprising: Das discloses a peptide generation framework, PepCVAE, based on a semi-supervised variational autoencoder (VAE) model, for designing novel antimicrobial peptide (AMP) sequences [abstract].
transforming an input peptide sequence that has been identified as binding to a major histocompatibility complex (MHC) into disentangled representations, including a structural representation and an attribute representation, using an autoencoder model: Das discloses that during training, known peptide sequences are fed into the encoder, which generates their respective latent codes, and the latent code is later decoded into a peptide sequence using the decoder [p. 3, fig. 1]. Das further discloses that, in order to perform controllable generation, the approach augments the unstructured latent codes ~z with a set of structured variables c, which are trained to control a salient and independent attribute of the sequence: whether it is antimicrobial, toxic, soluble, etc. [p. 4, par. 2], but is silent on a major histocompatibility complex (MHC). However, Sidom discloses DeepTCR: a deep learning framework for understanding T-cell receptor sequence signatures within complex T-cell repertoires [title]. Sidom further discloses that the implementation of a variational autoencoder (Fig. 1a) starts by taking a TCR sequence, which, following featurization as described previously, is transformed into a latent space that is parametrized by a multidimensional unit Gaussian distribution [p. 6, par. 1], which reads on transforming an input peptide sequence that has been identified as binding to a major histocompatibility complex (MHC) into disentangled representations.

modifying one of the disentangled representations: Neither Das nor Sidom discloses modifying one of the disentangled representations. However, Zhang discloses multi-attribute transfer via disentangled representation [title]. Zhang further discloses a model that learns disentangled representations for multiple different attributes, implementing a function by which a user can select some specific attributes and swap the corresponding parts of embeddings to "swap" the attributes, or apply embedding interpolation to manipulate different attributes [p. 9196, col. 1, par. 4]. Zhang also discloses implementing the model using an auto-encoder, training on labeled input data pairs by swapping designated parts of embeddings [p. 9196, col. 1, par. 1]. Zhang further discloses that the disentanglement of hidden units assures that only the specified attributes are changed while the others remain intact [p. 9201, col. 2, par. 3], which reads on modifying an attribute.

transforming the disentangled representations, including the modified disentangled representation, to generate a new peptide sequence using the autoencoder model: Das discloses that PepCVAE generates novel AMP sequences with higher long-range diversity, while being closer to the training distribution of biological peptides [abstract].

generating a peptide vaccine using the new peptide sequence: Das, Sidom, and Zhang are silent on generating a peptide vaccine. However, Sidom2 discloses that deep learning algorithms have been utilized to achieve excellent performance in pattern-recognition tasks, such as image and vocal recognition [DeepTCR]. Sidom2 further discloses that the ability to learn complex patterns in data has tremendous implications in the genomics world, where sequence motifs become learned 'features' that can be used to predict functionality, guiding our understanding of disease and basic biology [DeepTCR]. Sidom2 also discloses that accurately predicting and modeling these interactions in immune-oncology range from improved and potent vaccine design to biomarkers for predicting response to immunotherapy to furthering our understanding of immune recognition [abstract]. It would be obvious to generate those predicted peptide vaccines.

Claims 2 and 13 are directed to further comprising training a neural network of the autoencoder model using a set of training peptide sequences.
Das discloses that during training, known peptide sequences are fed into the encoder, which generates their respective latent codes, and the latent code is later decoded into a peptide sequence using the decoder [p. 3, fig. 1].

Claims 4 and 15 are directed to wherein modifying the disentangled representations includes modifying an attribute representation of binding affinity. Das discloses that, in order to perform controllable generation, the approach augments the unstructured latent codes ~z with a set of structured variables c, which are trained to control a salient and independent attribute of the sequence: whether it is antimicrobial, toxic, soluble, etc. [p. 4, par. 2]. Das is silent on binding affinity. Sidom discloses that 57,229 unique α/β pairs were collected with a count-based measurement (as a proxy for binding affinity) to 44 specific peptide-MHC (pMHC) multimers and 6 negative controls [p. 29, fig. 2], which reads on a binding affinity.

Claims 5 and 16 are directed to wherein the binding affinity is a binding affinity between a peptide and the MHC. Das is silent on binding affinity. However, Sidom discloses that 57,229 unique α/β pairs were collected with a count-based measurement (as a proxy for binding affinity) to 44 specific peptide-MHC (pMHC) multimers and 6 negative controls [p. 29, fig. 2], which reads on a binding affinity between a peptide and a major histocompatibility complex.

Claims 9 and 19 are directed to wherein modifying the disentangled representations includes changing coordinates of a vector representation of the disentangled representations within an embedding space. Neither Das nor Sidom discloses modifying one of the disentangled representations. However, Zhang discloses multi-attribute transfer via disentangled representation [title].
Zhang further discloses a model that learns disentangled representations for multiple different attributes, implementing a function by which a user can select some specific attributes and swap the corresponding parts of embeddings to "swap" the attributes, or apply embedding interpolation to manipulate different attributes [p. 9196, col. 1, par. 4]. Zhang also discloses implementing the model using an auto-encoder, training on labeled input data pairs by swapping designated parts of embeddings [p. 9196, col. 1, par. 1]. Zhang further discloses that the disentanglement of hidden units assures that only the specified attributes are changed while the others remain intact [p. 9201, col. 2, par. 3], which reads on modifying an attribute.

Claims 10 and 20 are directed to wherein transforming the input peptide sequence is performed using an encoder of the autoencoder model and transforming the disentangled representations is performed using a decoder of the autoencoder model. Das discloses that known peptide sequences are fed into the encoder, which generates their respective latent codes, and the latent code is later decoded into a peptide sequence using the decoder [p. 3, par. 3].

In regards to claim(s) 1-2, 4-5, 9-10, 12-13, 15-16, and 19-20, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Das with Sidom, as they disclose methods for using a VAE for peptide representation. The motivation would have been to define the attributes mentioned in Das with the specific attributes of peptide binding of Sidom, to create a downstream pipeline with this improved featurization to make inter-repertoire comparisons, as disclosed by Sidom [abstract]; one of ordinary skill in the art would have recognized that the results of the combination were predictable.
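For orientation, the encode, modify-attribute, decode workflow that the claims recite (and that the rejection maps against Das, Sidom, and Zhang) can be sketched in a few lines. Everything below is a hypothetical toy: toy_encode, toy_decode, and the hard-coded attribute score are invented for illustration and do not come from the application or any cited reference, which describe trained neural autoencoders.

```python
# Toy illustration of the claimed encode -> modify -> decode workflow.

def toy_encode(peptide):
    """Map a peptide to a disentangled latent: (structure, attribute).

    The 'structure' part here is a crude length/composition code; the
    'attribute' part is a single stand-in for a property such as MHC
    binding affinity.
    """
    hydrophobic = sum(aa in "AVILMFWY" for aa in peptide)
    structure = [len(peptide) / 10.0, hydrophobic / max(len(peptide), 1)]
    attribute = [0.3]  # placeholder attribute score
    return structure, attribute

def toy_decode(structure, attribute):
    """Recombine the (possibly modified) parts into one latent vector;
    a trained decoder would map this back to a peptide sequence."""
    return structure + attribute

structure, attribute = toy_encode("SIINFEKL")  # encode
attribute = [0.9]                              # modify only the attribute part
z_new = toy_decode(structure, attribute)       # decoder input for a new peptide
print(z_new)  # -> [0.8, 0.5, 0.9]
```

The point of the disentanglement is visible in the last three lines: only the attribute slot changes, so a real decoder would emit a structurally similar peptide with a shifted property.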
In regards to claim(s) 1-2, 4-5, 9-10, 12-13, 15-16, and 19-20, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Das and Sidom with Zhang, as they disclose methods for using a VAE disentangled representation. The motivation would have been to include the ability to modify attributes of Das and Sidom with the methods of Zhang to synthesize highly realistic sequences and alter the attribute labels for the sequences, as disclosed by Zhang [abstract]; one of ordinary skill in the art would have recognized that the results of the combination were predictable.

In regards to claim(s) 1-2, 4-5, 9-10, 12-13, 15-16, and 19-20, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Das, Sidom, and Zhang with Sidom2, as they disclose methods for using a VAE disentangled representation. The motivation would have been to include the vaccine generation of Sidom2 with the methods of Das, Sidom, and Zhang for improved and potent vaccine design, as disclosed by Sidom2 [abstract]; one of ordinary skill in the art would have recognized that the results of the combination were predictable.

B.

Claims 3 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Das in view of Sidom, Zhang, and Sidom2, as applied to claims 1 and 12 above, and in further view of Cheng et al. (Cheng, Pengyu, et al. "Improving disentangled text representation learning with information-theoretic guidance." arXiv preprint arXiv:2006.00693 (2020), previously cited).
Claims 3 and 14 are directed to wherein training the neural network of the autoencoder model includes minimizing a loss function that includes a mutual information between the structural representation and the attribute representation and further includes a Wasserstein distance between a sample of a posterior distribution and a Gaussian distribution. Das discloses minimizing the loss with respect to the parameters; the classifier loss requires both real and generated sequences' c attribute to be classified correctly, while minimizing the entropy H of the classifier encourages it to have high confidence in its predictions on generated data [p. 4, par. 2]. Sidom discloses that, to make inter-repertoire comparisons, the distance between these proportionality vectors is measured using a variety of distance metrics, including Euclidean, correlation, a symmetric KL-divergence, JS-divergence, and Wasserstein distance [p. 8, par. 1], which reads on a loss function. Das, Sidom, and Zhang are silent on a Wasserstein distance between a sample of a posterior distribution and a Gaussian distribution. However, Cheng discloses improving disentangled text representation learning with information-theoretic guidance [title]. Cheng further discloses that IDEL reduces the dependency between style and content embeddings by minimizing a sample-based mutual information upper bound [p. 2, col. 1, par. 1]. Cheng also discloses a prior distribution p(s; c) = p(s)p(c), as the product of two multivariate unit-variance Gaussians, used to regularize the posterior distribution [p. 4, col. 2, par. 5]. In regards to claim(s) 3 and 14, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Das and Sidom with Cheng, as they both disclose methods to transform data into disentangled representations. Cheng discloses improving disentangled text representation learning with information-theoretic guidance [title].
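Read on the claim language, the training objective of claims 3 and 14 combines three ingredients. One plausible formalization, where the symbols $s$ (structural code), $a$ (attribute code), and the weights $\lambda_i$ are illustrative rather than taken from the record, is:

```latex
\mathcal{L} \;=\;
\underbrace{\mathbb{E}_{x}\!\left[\lVert x - \mathrm{Dec}(s,a)\rVert^{2}\right]}_{\text{reconstruction}}
\;+\; \lambda_{1}\, I(s;\,a)
\;+\; \lambda_{2}\, W\!\left(q(s,a \mid x),\ \mathcal{N}(0, I)\right)
```

Here $I(s;a)$ is the mutual information between the structural and attribute representations (driven toward zero to enforce disentanglement, as in Cheng's sample-based upper bound), and $W$ is a Wasserstein distance pulling posterior samples toward a unit Gaussian, matching Cheng's Gaussian prior regularization.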
Cheng further discloses that unsupervised representation learning and generation of text from a continuous space is an important topic in natural language processing [p. 1, par. 1]. The motivation would have been to modify the method of Das to include reducing the dependency between style and content embeddings by minimizing a sample-based mutual information upper bound, to help reduce the dependence between embedding spaces, as disclosed by Cheng [p. 9, col. 1, par. 1]; one of ordinary skill in the art would have recognized that the results of the combination were predictable.

C.

Claim(s) 11 is/are rejected under 35 U.S.C. 103 as being unpatentable over Das et al. (Das, Payel, et al. "PepCVAE: Semi-supervised targeted design of antimicrobial peptide molecules." arXiv preprint arXiv:1810.07743 (2018), previously cited) in view of Sidom et al. (Sidhom, John-William, et al. "DeepTCR: a deep learning framework for understanding T-cell receptor sequence signatures within complex T-cell repertoires." BioRxiv (2019), previously cited), in view of Zhang et al. (Zhang, Jianfu, et al. "Multi-attribute transfer via disentangled representation." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 33. No. 01. 2019, newly cited), in view of Sidom et al. (Sidhom, John-William, Drew M. Pardoll, and Alexander S. Baras. "Deep learning of the immune synapse." Cancer Research. Vol. 79, 2019, newly cited, henceforth referred to as Sidom2), and in further view of Rubenstein et al. (Rubenstein, Paul K., Bernhard Schölkopf, and Ilya Tolstikhin. "Learning disentangled representations with Wasserstein auto-encoders." (2018), newly cited).
Claim 11 is directed to a computer-implemented method of generating a peptide sequence, comprising: training a Wasserstein neural network model using a set of training peptide sequences by minimizing a mutual information between a structural representation and an attribute representation of the training peptide sequences; transforming an input peptide sequence that has been identified as binding to a major histocompatibility complex (MHC) into disentangled structural and attribute representations, using an encoder of the Wasserstein neural network model; modifying one of the disentangled representations to alter an attribute to improve vaccine efficacy against a predetermined pathogen, including changing coordinates of a vector representation of the disentangled representations within an embedding space; transforming the disentangled representations, including the modified disentangled representation, to generate a new peptide sequence using a decoder of the Wasserstein neural network model; and generating a peptide vaccine using the new peptide sequence.

Claim 11 is similar to claims 1-3 and 8-9, as it is a computer-implemented method of analyzing peptide sequences. More specifically, however, claim 11 requires a Wasserstein neural network where claim 1 requires a general autoencoder model. Das, Sidom, Zhang, and Sidom2 are silent on a Wasserstein neural network. However, Rubenstein discloses learning disentangled representations with Wasserstein auto-encoders [title]. Rubenstein further discloses that Wasserstein auto-encoders (WAEs) are a recently introduced auto-encoder architecture; similarly to variational auto-encoders (VAEs), WAEs describe a particular way to train probabilistic latent variable models (LVMs) [p. 1, par. 1]. Rubenstein also discloses that LVMs act by first sampling a code (feature) vector Z from a prior distribution defined over the latent space and then mapping it to a random input point using a conditional distribution also known as the decoder [p. 1, par. 1].

In regards to claim(s) 11, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Das with Rubenstein, as they both are directed to learning disentangled representations. The motivation would have been to substitute the WAE of Rubenstein for the VAE of Das, as the WAE shares many of the properties of VAEs (stable training, encoder-decoder architecture, nice latent manifold structure). The goal is to learn representations of datasets such that individual coordinates in the feature space correspond to human-interpretable generative factors (also referred to as factors of variation in the literature); learning such representations is essential for significant progress in machine learning research, as disclosed by Rubenstein [p. 1, par. 5].

D.

Claims 6-8 and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Das in view of Sidom, Zhang, and Sidom2, as applied to claims 1 and 12 above, and in further view of Patronov et al. (Patronov, Atanas, and Irini Doytchinova. "T-cell epitope vaccine design by immunoinformatics." Open Biology 3.1 (2013): 120139, newly cited).

Claim 6 is directed to wherein modifying the disentangled representations includes modifying an attribute representation of an antigen processing score. Das discloses that, in order to perform controllable generation, the approach augments the unstructured latent codes ~z with a set of structured variables c, which are trained to control a salient and independent attribute of the sequence: whether it is antimicrobial, toxic, soluble, etc. [p. 4, par. 2], which reads on attribute representations. Das, Sidom, Zhang, and Sidom2 are silent on an antigen processing score. However, Patronov discloses T-cell epitope vaccine design by immunoinformatics [title].
Patronov further discloses that the improved knowledge of antigen recognition at the molecular level has contributed to the development of rationally designed peptide vaccines [p. 2, col. 2, par. 4]. Patronov discloses a method to integrate the prediction of peptide MHC class I binding, proteasomal C-terminal cleavage, and transporter associated with antigen processing (TAP) transport efficiency [p. 5, col. 1, par. 3]. When one assesses the protein structure, the skilled artisan would readily evaluate the relevant properties such as binding affinity, T-cell receptor interactions, and antigen processing, as disclosed by Patronov.

Claims 7 and 17 are directed to wherein modifying the disentangled representations includes modifying an attribute representation of a T-cell receptor interaction score. Das discloses that, in order to perform controllable generation, the approach augments the unstructured latent codes ~z with a set of structured variables c, which are trained to control a salient and independent attribute of the sequence: whether it is antimicrobial, toxic, soluble, etc. [p. 4, par. 2], which reads on attribute representations. Das, Sidom, Zhang, and Sidom2 are silent on a T-cell receptor interaction score. However, Patronov discloses T-cell epitope vaccine design by immunoinformatics [title]. Patronov further discloses that the improved knowledge of antigen recognition at the molecular level has contributed to the development of rationally designed peptide vaccines [p. 2, col. 2, par. 4]. Patronov also discloses that the interaction between the T-cell receptor and the MHC–ligand complex was also studied via docking [p. 7, col. 2, par. 2]. When one assesses the protein structure, the skilled artisan would readily evaluate the relevant properties such as binding affinity, T-cell receptor interactions, and antigen processing, as disclosed by Patronov.
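For reference, the Wasserstein regularizer that distinguishes claim 11's model (section C above) from an ordinary VAE can be estimated directly from samples; in one dimension the 2-Wasserstein distance between equal-size sample sets reduces to matching sorted order statistics. The sketch below is a generic stdlib illustration under that assumption (the w2_1d helper is invented here), not code from any cited reference, which uses richer multivariate estimators:

```python
import math
import random

def w2_1d(xs, ys):
    """Empirical one-dimensional 2-Wasserstein distance between two
    equal-size sample sets: sort both and pair off order statistics."""
    assert len(xs) == len(ys)
    xs, ys = sorted(xs), sorted(ys)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(xs, ys)) / len(xs))

# Penalize encoder outputs ('latent') for straying from a unit Gaussian prior.
random.seed(0)
latent = [random.gauss(0.5, 1.0) for _ in range(1000)]  # hypothetical encoder samples
prior = [random.gauss(0.0, 1.0) for _ in range(1000)]   # draws from N(0, 1)
penalty = w2_1d(latent, prior)  # would be added to the training loss
```

Identical sample sets give a distance of zero, so minimizing the penalty pushes the latent distribution toward the Gaussian prior, which is the role the Wasserstein term plays in a WAE-style objective.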
Claims 8 and 18 are directed to wherein modifying the disentangled representations includes altering an attribute to improve vaccine efficacy against a predetermined pathogen. Das discloses that, in order to perform controllable generation, the approach augments the unstructured latent codes ~z with a set of structured variables c, which are trained to control a salient and independent attribute of the sequence: whether it is antimicrobial, toxic, soluble, etc. [p. 4, par. 2], which reads on attribute representations. Das, Sidom, Zhang, and Sidom2 are silent on altering an attribute to improve vaccine efficacy against a predetermined pathogen. However, Patronov discloses T-cell epitope vaccine design by immunoinformatics [title]. Patronov further discloses that the improved knowledge of antigen recognition at the molecular level has contributed to the development of rationally designed peptide vaccines [p. 2, col. 2, par. 4]. Patronov also discloses that the interaction between the T-cell receptor and the MHC–ligand complex was also studied via docking [p. 7, col. 2, par. 2]. Patronov further discloses a method to integrate the prediction of peptide MHC class I binding, proteasomal C-terminal cleavage, and transporter associated with antigen processing (TAP) transport efficiency [p. 5, col. 1, par. 3]. Patronov also discloses that the methodology of analyzing the pathogen genome to identify potential antigenic proteins is known as 'reverse vaccinology' [p. 3, col. 2, par. 2], which reads on a known pathogen. As relevant properties such as binding affinity, T-cell receptor interactions, and antigen processing all contribute to vaccine efficacy, altering them would also affect the efficacy of the vaccine, as disclosed by Patronov.

In regards to claim(s) 6-8 and 17-18, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Das, Sidom, Zhang, and Sidom2 with Patronov, as the methods of Das and Sidom are directed to treatment by peptide sequence design.
The motivation would have been to add the step of generating a vaccine based on the new peptide sequence of Das and Sidom, as the improved knowledge of antigen recognition at the molecular level has contributed to the development of rationally designed peptide vaccines, as disclosed by Patronov [p. 2, col. 2, par. 4]; one of ordinary skill in the art would have recognized that the results of the combination were predictable.

Conclusion

No claims are allowed.

Inquiries

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Dawn M. Bickham, whose telephone number is (703) 756-1817. The examiner can normally be reached M-Th 7:30-4:30. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Olivia Wise, can be reached at 571-272-2249. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/D.M.B./Examiner, Art Unit 1685 /Soren Harward/Primary Examiner, TC 1600

Prosecution Timeline

Oct 26, 2021: Application Filed
Jun 18, 2025: Non-Final Rejection — §103
Sep 10, 2025: Interview Requested
Sep 24, 2025: Examiner Interview Summary
Sep 24, 2025: Response Filed
Nov 03, 2025: Final Rejection — §103
Jan 14, 2026: Interview Requested
Jan 27, 2026: Request for Continued Examination
Jan 28, 2026: Response after Non-Final Action
Mar 02, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597490: METHODS AND SYSTEMS FOR MODELING PHASING EFFECTS IN SEQUENCING USING TERMINATION CHEMISTRY
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12486545: Diagnostic and Treatment of Chronic Pathologies Such as Lyme Disease
Granted Dec 02, 2025 (2y 5m to grant)
Patent 12488859: PEPTIDE BASED VACCINE GENERATION SYSTEM WITH DUAL PROJECTION GENERATIVE ADVERSARIAL NETWORKS
Granted Dec 02, 2025 (2y 5m to grant)
Patent 12482534: PEPTIDE BASED VACCINE GENERATION SYSTEM WITH DUAL PROJECTION GENERATIVE ADVERSARIAL NETWORKS
Granted Nov 25, 2025 (2y 5m to grant)
Patent 12473584: METHOD FOR DETECTING THE PRESENCE, IDENTIFICATION AND QUANTIFICATION IN A BLOOD SAMPLE OF ANTICOAGULANTS WHICH ARE BLOOD COAGULATION ENZYMES INHIBITORS, AND MEANS FOR THE IMPLEMENTATION THEREOF
Granted Nov 18, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 52%
With Interview: 99% (+69.5%)
Median Time to Grant: 4y 1m
PTA Risk: High
Based on 25 resolved cases by this examiner. Grant probability derived from career allow rate.
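The career allow rate that the grant probability is derived from follows directly from the stated counts; a one-line check (illustrative only; the interview-adjusted 99% figure comes from the dashboard's own model and is not recomputed here):

```python
# Reproduce the headline allow rate from the examiner's stated record.
granted, resolved = 13, 25
allow_rate = granted / resolved
print(f"{allow_rate:.0%}")  # -> 52%
```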
