Prosecution Insights
Last updated: April 19, 2026
Application No. 16/278,611

GAN-CNN FOR MHC PEPTIDE BINDING PREDICTION

Non-Final OA: §101, §103, §112
Filed: Feb 18, 2019
Examiner: BICKHAM, DAWN MARIE
Art Unit: 1685
Tech Center: 1600 — Biotechnology & Organic Chemistry
Assignee: Regeneron Pharmaceuticals, Inc.
OA Round: 5 (Non-Final)
Grant Probability: 52% (Moderate)
Estimated OA Rounds: 5-6
Estimated Time to Grant: 4y 1m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 52% (13 granted of 25 resolved; -8.0% vs TC avg)
Interview Lift: +69.5% (strong; allow rate for resolved cases with vs. without an interview)
Avg Prosecution: 4y 1m (typical timeline)
Total Applications: 64 (career history, across all art units; 39 currently pending)

Statute-Specific Performance

§101: 31.0% (-9.0% vs TC avg)
§103: 24.3% (-15.7% vs TC avg)
§102: 12.2% (-27.8% vs TC avg)
§112: 23.5% (-16.5% vs TC avg)
Tech Center averages are estimates • Based on career data from 25 resolved cases
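The allow-rate figures above can be reproduced directly from the underlying counts. A minimal sketch follows; note that the 60% Tech Center average is back-computed from the stated -8.0% delta, not an independently sourced number.

```python
# Illustrative sketch: reproduce the examiner-statistics figures above from
# raw counts. The 60% Tech Center average is an assumption back-computed from
# the stated -8.0% delta.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

career = allow_rate(granted=13, resolved=25)   # 52.0
tc_average = 60.0                              # implied by 52.0 - 8.0
delta_vs_tc = career - tc_average              # -8.0

print(f"Career allow rate: {career:.0f}% ({delta_vs_tc:+.1f}% vs TC avg)")
```

The same arithmetic underlies the statute-specific bars: each is a per-statute allowance rate compared against an estimated Tech Center baseline.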

Office Action

§101, §103, §112
DETAILED ACTION

Applicant's response, filed 09/10/2025, has been fully considered. Rejections and/or objections not reiterated from previous Office Actions are hereby withdrawn. The following rejections and/or objections are either reiterated or newly applied; they constitute the complete set presently being applied to the instant application.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Restriction Election

This application contains claims directed to two distinct inventions: group I (claims 1-23) and group II (claim 24). Newly submitted claim 24 is directed to an invention that is independent or distinct from the invention originally claimed for the following reasons: claims 1-23 are for a method for training a generative adversarial network (GAN), while claim 24 is for a method for training a deep convolutional generative adversarial network (DCGAN) and classifying candidate polypeptide-MHC-I interactions, which contains the distinct steps of normalizing the amino-acid distribution matrix and training the DCGAN generator and DCGAN discriminator by gradient descent. Because applicant has received an action on the merits for the originally presented invention (claim 1), that invention has been constructively elected by original presentation for prosecution on the merits.
Accordingly, claim 24 is withdrawn from consideration as being directed to a non-elected invention. See 37 CFR 1.142(b) and MPEP § 821.03. To preserve a right to petition, the reply to this action must distinctly and specifically point out supposed errors in the restriction requirement; otherwise, the election shall be treated as a final election without traverse. Traversal must be timely: failure to timely traverse the requirement will result in the loss of the right to petition under 37 CFR 1.144. If claims are subsequently added, applicant must indicate which of the subsequently added claims are readable upon the elected invention.

Should applicant traverse on the ground that the inventions are not patentably distinct, applicant should submit evidence, or identify such evidence now of record, showing the inventions to be obvious variants, or clearly admit on the record that this is the case. In either instance, if the examiner finds one of the inventions unpatentable over the prior art, the evidence or admission may be used in a rejection under 35 U.S.C. 103 or pre-AIA 35 U.S.C. 103(a) of the other invention.

Claim Status

Claims 1-24 are pending. Claim 24 is withdrawn. Claim 1 is objected to. Claims 1-23 are rejected.

Priority

Applicant's claim under 35 U.S.C. § 119(e) for the benefit of prior-filed Provisional Application No. 16/278611 is acknowledged. In this action, all claims are examined as though they had an effective filing date of 17 Feb 2018. In future actions, the effective filing date of one or more claims may change due to amendments to the claims or further analysis of the disclosure of the priority application.

Drawings

The replacement drawing sheets submitted 02/18/2019 are accepted.

Claim Objections

The claims are objected to because of the following informalities. The instant objection is newly stated and is necessitated by claim amendment: claim 1 has been amended, but the newly added steps do not continue the consecutive lettering (a)-(d).
Claim Rejections - 35 USC § 112

35 USC § 112(a): The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

Claims 1-23 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s)), at the time the application was filed, had possession of the claimed invention. This rejection pertains to new matter and is newly stated based upon claim amendment.

Claim 1 recites "preparing a vaccine comprising the polypeptide" and "administering the vaccine to a subject in need thereof to elicit a CD8+ T-cell response to cells presenting the polypeptide in complex with the MHC-I protein, thereby treating cancer." The specification as published provides support for identifying [0107], and selecting and synthesizing [0055], candidate polypeptide-MHC-I interactions classified as positive. However, there is no support within the specification, nor has Applicant provided such support, for preparing a vaccine comprising the polypeptide and administering the vaccine to a subject in need thereof to elicit a CD8+ T-cell response to cells presenting the polypeptide in complex with the MHC-I protein, thereby treating cancer. The limitation therefore introduces new matter.
Claims 2-23 are rejected based on their dependency from claim 1.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-23 are rejected under 35 USC § 101 because the claimed inventions are directed to non-statutory subject matter. This rejection is maintained from the previous Office action; minor revisions have been made to address the newly presented limitations of claim 1.

"Claims directed to nothing more than abstract ideas (such as a mathematical formula or equation), natural phenomena, and laws of nature are not eligible for patent protection" (MPEP 2106.04 § I). Abstract ideas include mathematical concepts and procedures for evaluating, analyzing or organizing information, which are a type of mental process (MPEP 2106.04(a)(2)). The claims as a whole, considering all claim elements both individually and in combination, do not amount to significantly more than the abstract idea of "training a generative adversarial network."

Step 1: The Four Categories of Statutory Subject Matter (MPEP 2106.03)

The claims are directed to a method, which is one of the categories of statutory subject matter.
Step 2A, Prong One: Whether the Claims Set Forth or Describe a Judicial Exception (MPEP 2106.04 § II.A.1)

Mathematical concepts recited in the claims include "generating … via a GAN generator and based on a set of GAN parameters, positive simulated polypeptide-MHC-I interaction data samples …"; "training … a convolutional neural network (CNN) by presenting, based on a set of CNN parameters, the positive simulated polypeptide-MHC-I interaction data samples … to the CNN"; "presenting … the positive real polypeptide-MHC-I interaction data samples and the negative real polypeptide-MHC-I interaction data samples to the CNN to generate prediction scores"; and "classifying, by the CNN, each of the plurality of candidate polypeptide-MHC-I interactions as a positive or a negative polypeptide-MHC-I interaction." Steps of evaluating, analyzing or organizing information recited in the claims include "determining … whether the prediction scores are indicative of the GAN being trained or not trained" and "selecting those candidate polypeptide-MHC-I interactions classified as positive." Hence, the claims explicitly recite numerous elements that, individually and in combination, constitute abstract ideas. The claims must therefore be examined further to determine whether they integrate that abstract idea into a practical application (MPEP 2106.04(d)).

Step 2A, Prong Two: Whether the Claims Contain Additional Elements that Integrate the Judicial Exception(s) into a Practical Application (MPEP 2106.04 § II.A.2)

Claim 1 recites an additional element that is not an abstract idea: that the abstract-idea steps are performed "by a computing device." This limitation neither improves the functions of the computer itself, nor provides specific programming, tailored software, or meaningful guidance for implementing the abstract concept. It states nothing more than that a generic computer performs the functions that constitute the abstract idea.
Hence, this is a mere instruction to apply the abstract idea using a computer, and therefore the claim does not integrate that abstract idea into a practical application (see MPEP 2106.04(d) § I; and MPEP 2106.05(f)).

Claims 7 and 20 recite "outputting the GAN and the CNN". Outputting the results of the abstract idea is quintessential insignificant extrasolution activity, which does not integrate the abstract idea into a practical application (see MPEP 2106.04(d) § I; and MPEP 2106.05(g)).

Claims 1 and 13 recite an additional element that is not an abstract idea: "synthesizing the polypeptide". The claims do not describe any specific synthetic procedure, nor do they even specify what polypeptide is being synthesized. This claim element is nothing more than a mere instruction to apply the abstract idea using a generic synthesis procedure. The claim therefore does not integrate that abstract idea into a practical application (see MPEP 2106.04(d) § I; and MPEP 2106.05(f)).

None of the dependent claims recite any additional non-abstract elements; they are all directed to further aspects of the information being analyzed, the manner in which that analysis is performed, or the mathematical operations performed on the information. Because the claims recite an abstract idea and do not integrate it into a practical application, the claims are directed to that abstract idea. Claims that are directed to abstract ideas must be examined further to determine whether the additional elements besides the abstract idea render the claims significantly more than the abstract idea. Claims that are directed to abstract ideas, and that raise a concern of preemption of those abstract ideas, must be examined to determine what elements, if any, they recite besides the abstract idea, and whether these additional elements constitute inventive concepts sufficient to render the claims significantly more than the abstract idea (MPEP 2106.05).
Step 2B: Whether the Claims Contain Additional Elements that Amount to an Inventive Concept (MPEP 2106.05)

As explained above, the mere instructions to implement the abstract idea using a computer are, when considered individually, insufficient to constitute an inventive concept that would render the claims significantly more than an abstract idea (see MPEP 2106.05(f)). The same is true of the mere instructions to synthesize a polypeptide (see MPEP 2106.05(f)). As also explained above, the generic steps of outputting the GAN and CNN resulting from the abstract idea constitute insignificant extrasolution activity and, when considered individually, are insufficient to constitute inventive concepts that would render the claims significantly more than an abstract idea (see MPEP 2106.05(g)).

When the claims are considered as a whole, they do not integrate the abstract idea into a practical application; they do not confine the use of the abstract idea to a particular technology; they do not solve a problem rooted in or arising from the use of a particular technology; they do not improve a technology by allowing the technology to perform a function that it previously was not capable of performing; and they do not provide any limitations beyond generally linking the use of the abstract idea to a broad technological environment (i.e., computerized analysis of biological sequence data; polypeptide synthesis). See MPEP 2106.05(a) and 2106.05(h).

Conclusion: Claims are Directed to Non-statutory Subject Matter

For these reasons, the claims, when the limitations are considered individually and as a whole, are directed to an abstract idea and lack an inventive concept.
Hence, the claimed invention does not constitute significantly more than the abstract idea, so the claims are rejected under 35 USC § 101 as being directed to non-statutory subject matter.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention, in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
For the following rejections, instantly claimed elements that are considered equivalent to the prior art teachings are described in bold for all claims, and underlined text indicates newly recited portions necessitated by claim amendment.

A. Claims 1-12 and 15-23 are rejected under 35 U.S.C. 103 as being unpatentable over Vang et al. (Bioinformatics 2017; ref. A on IDS of 17 May 2019); Somasundaram et al. (2nd International Conference on Information Technology Research 2017; ref. C on IDS of 6 Nov 2019); Kusner et al. ("GANS for Sequences of Discrete Elements with the Gumbel-softmax Distribution," 2016; previously cited); Antoniou et al. ("Data Augmentation Generative Adversarial Networks," 2017); and Ott et al. ("An immunogenic personal neoantigen vaccine for patients with melanoma," Nature 547.7662 (2017): 217-221; newly cited). The instant rejection is maintained from the previous Office Action, and any newly recited portions are necessitated by claim amendment. The explanation of the correspondence among Vang, Somasundaram, Kusner and the claim limitations is substantially similar to that presented in the previous Office action.

With respect to claim 1, Vang "propose a deep convolutional neural network architecture, name HLA-CNN, for the task of HLA class I-peptide binding prediction" (Abstract), comprising:

(a) the dataset includes HLA class I binding data indicating binding or non-binding, sequence, and allele subtypes (p. 2659 § 2.1);

(b) "the input into HLA-CNN network is the character string of the peptide, a 9-mer peptide in this example," which is then encoded into an amino acid matrix (p. 2660 § 2.3); "indicators of binding were given as either binary values or ic50 (half maximal inhibitory concentration) measurements. Binary indicators were used directly while values given in ic50 measurements were denoted as binding if ic50 < 500nM" (p. 2659 § 2.1); the "binary indicators" constitute "positive real data, and negative real data"; the CNN is trained until a loss criterion is reached (p. 2661, bot. of col. 1);

(c) presenting an evaluation set of real positive and negative examples to the CNN (p. 2662 § 3.2);

(d) Vang teaches that "the lack of training data is a well-known weakness of deep neural networks as the model may not converge to a solution or worst yet, may overfit to the small training set" (p. 2659, bot. of col. 1).

Somasundaram teaches that "data mining and machine learning is typically associated with solving real world problems that are characterized by a large amount of data. However, in practice, collecting large amounts of data in medical field is infeasible" (p. 2 § III.B). Somasundaram further teaches that "GANs are neural networks that learn to create synthetic data similar to some known input data" (p. 3 § VII), and that the synthetic data are created by inputting a noise vector into the Generator element of the GAN (p. 4, col. 1). Somasundaram teaches that "after the remarkable success of GAN, it's widely used in many industries to generate things. GAN used to generate images, text, music and many more things" (p. 7 § IX). Somasundaram thus provides a general solution to the "lack of training data" problem noted by Vang: a GAN can be used to generate synthetic training data to augment the real examples. In this case, the training data of Vang are peptide sequences with their corresponding binding affinities for HLA class I, i.e., "polypeptide-MHC-I interaction data". To generate synthetic training data as taught by Somasundaram, a GAN must be created such that the generator transforms a noise vector into polypeptide-MHC-I interaction data, which is the input needed by Vang. Kusner teaches a GAN that can generate sequences of discrete elements.
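The Vang preprocessing described above (binary binding labels derived from IC50 measurements, and a 9-mer peptide string encoded as a position-by-amino-acid matrix) can be sketched as follows. The 500 nM threshold comes from the quoted passage; the one-hot encoding is an assumed simple stand-in, since Vang's actual encoding is a learned distributed representation.

```python
# Sketch of the preprocessing the Office Action attributes to Vang: IC50
# measurements become binary binding labels (binder if IC50 < 500 nM), and a
# 9-mer peptide becomes a position-by-amino-acid matrix. One-hot encoding is
# a simplifying assumption, not Vang's actual learned embedding.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues

def label_from_ic50(ic50_nM: float, threshold_nM: float = 500.0) -> int:
    """1 = binder, 0 = non-binder, per the quoted IC50 < 500 nM rule."""
    return 1 if ic50_nM < threshold_nM else 0

def one_hot(peptide: str) -> list[list[int]]:
    """Encode a peptide as a len(peptide) x 20 one-hot matrix."""
    return [[1 if aa == col else 0 for col in AMINO_ACIDS] for aa in peptide]

matrix = one_hot("SIINFEKLL")        # a 9 x 20 matrix for a 9-mer
assert label_from_ic50(32.0) == 1    # strong binder
assert label_from_ic50(5000.0) == 0  # non-binder
```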
Kusner teaches that such a generator transforms a noise vector into a multinomial probability distribution that approximates the distribution of sequence elements at each position in the sequence (p. 3 § "Generative adversarial modeling"). The sequences are then generated by sampling one element from the corresponding multinomial probability distribution for each position in the sequence (p. 3 § "Generative adversarial modeling"). Kusner provides an example of training a sequence generator over 20,000 mini-batch iterations (top of p. 5), which constitutes satisfying a stop criterion. Kusner teaches that "we believe that these results, as a proof of concept, show strong promise for training GANs to generate discrete sequence data" (top of p. 6).

Since the input to the CNN of Vang includes an amino acid sequence, the generator must transform the noise vector into an amino acid sequence. Kusner teaches that a GAN can generate a sequence of discrete elements by transforming a noise vector into a multinomial probability distribution of the possible elements (i.e., amino acids), and then sampling an element from the distribution for each position in the sequence. To generate polypeptide sequences that bind to HLA class I, the GAN must generate "an amino acid distribution matrix that mimics binding patterns of a positive polypeptide MHC-I interaction," and then sample a sequence from that distribution matrix, as in claimed step (a). An HLA-CNN, as taught by Vang, is then trained using both real and synthetic data, as in claimed step (b). If the HLA-CNN performs sufficiently well on the evaluation set, then the synthetic data generated by the GAN are sufficiently representative of real binding peptides; if not, the GAN performance is inadequate and further training is performed by repeating the GAN and CNN training, as in claimed step (d).
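The Kusner-style sampling step characterized above, drawing one residue per position from a position-wise multinomial distribution, can be sketched minimally. The uniform distribution matrix below is a placeholder for what a trained generator would emit, not an output of any cited model.

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues

def sample_peptide(dist_matrix: list[list[float]], rng: random.Random) -> str:
    """Draw one amino acid per position from its multinomial distribution,
    as in the Kusner-style sequence generator described above."""
    return "".join(
        rng.choices(AMINO_ACIDS, weights=row, k=1)[0] for row in dist_matrix
    )

rng = random.Random(0)
# Placeholder 9 x 20 amino-acid distribution matrix (each row sums to 1);
# a trained GAN generator would produce peaked, position-specific rows.
dist = [[1.0 / 20] * 20 for _ in range(9)]
peptide = sample_peptide(dist, rng)
assert len(peptide) == 9 and all(aa in AMINO_ACIDS for aa in peptide)
```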
Somasundaram and Kusner teach a GAN that includes adjustable training parameters, but do not teach that the training parameters include "an adjustable quantity of the positive simulated polypeptide-MHC-I interaction data samples to be generated based on a quantity of the plurality of positive real polypeptide-MHC-I interaction data samples", as in step (a).

Antoniou teaches a "Data Augmentation Generative Adversarial Network" (DAGAN). The DAGAN is used to generate simulated training samples for a convolutional neural network (p. 5 § 4). As part of the use of the DAGAN, "the ratio of generated to real data was varied from 0 to 2" (mid. of p. 8). Antoniou teaches that "data augmentation is a widely applicable approach to improving performance in low-data setting, and a DAGAN is a flexible model to automatic learn to augment data. However beyond that, we demonstrate that DAGANS improve performance of classifiers even after standard data-augmentation" (p. 8 § 8). Antoniou thus teaches, as part of a method of using a GAN to generate simulated data for training a CNN, setting a ratio between the number of simulated and real training samples, and generating an appropriate number of simulated training samples based on that ratio. In the combination of Vang, Somasundaram and Kusner, the real data are positive and negative polypeptide-MHC-I binding data, and the generated data are simulated positive polypeptide-MHC-I binding data. When these teachings of Antoniou are applied to the system of Vang, Somasundaram and Kusner, the system will include "an adjustable quantity of the positive simulated polypeptide-MHC-I interaction data samples to be generated based on a quantity of the plurality of positive real polypeptide-MHC-I interaction data samples".
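The Antoniou-style augmentation knob described above reduces to simple arithmetic: the number of simulated positive samples is an adjustable ratio of the number of real positive samples (Antoniou is quoted as varying this ratio from 0 to 2). A minimal sketch:

```python
# Sketch of the adjustable generated-to-real ratio described above. The
# function name and rounding behavior are illustrative assumptions, not an
# API from the Antoniou reference.

def n_simulated(n_real_positive: int, ratio: float) -> int:
    """Number of simulated positive samples for a given generated:real ratio."""
    if ratio < 0.0:
        raise ValueError("ratio must be non-negative")
    return int(round(ratio * n_real_positive))

assert n_simulated(n_real_positive=1000, ratio=0.5) == 500
assert n_simulated(n_real_positive=1000, ratio=2.0) == 2000
```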
Additional steps for amended claim 1: presenting a dataset to the CNN, wherein the dataset comprises a plurality of candidate polypeptide-MHC-I interactions; classifying, by the CNN, each of the plurality of candidate polypeptide-MHC-I interactions as a positive or a negative polypeptide-MHC-I interaction; selecting those candidate polypeptide-MHC-I interactions classified as positive; synthesizing a polypeptide comprising the peptide sequence of a selected candidate polypeptide-MHC-I interaction classified as positive, wherein the polypeptide comprises a tumor-specific antigen that specifically binds to an MHC-I protein encoded by the selected MHC allele; preparing a vaccine comprising the polypeptide; and administering the vaccine to a subject in need thereof to elicit a CD8+ T-cell response to cells presenting the polypeptide in complex with the MHC-I protein, thereby treating cancer.

Vang, Somasundaram, Kusner and Antoniou are silent on synthesizing the polypeptide, preparing a vaccine, and administering the vaccine. However, Ott discloses an immunogenic personal neoantigen vaccine for patients with melanoma (title).
Ott further discloses that effective anti-tumor immunity in humans has been associated with the presence of T cells directed at cancer neoantigens, a class of HLA-bound peptides that arise from tumor-specific mutations. Ott also discloses tumor procurement, target selection, vaccine manufacture and vaccine administration (p. 218, fig. 1). The combination of Vang, Somasundaram, Kusner and Antoniou discloses training and then testing (p. 2661, § 2.3) the method for predicting HLA binding for a peptide sequence as in steps (a)-(d) above. Vang further discloses using the target allele HLA-A for the model (p. 2661, § 2.3), which reads on a tumor-specific antigen.

With respect to claim 2, Somasundaram and Vang both teach that the GAN and CNN operate on biological data. With respect to claim 3, Vang teaches that "the focus of this article is on HLA class I proteins" (p. 2658, bot. of col. 2) and "we apply machine learning techniques from the natural language processing (NLP) domain to tackle the task of MHC-peptide binding prediction" (p. 2659, mid. of col. 1).

With respect to claims 4 and 18, Somasundaram teaches that "in GAN the training data will be in 2 parts. One is the real data pdata(x) and another one is the generated data distribution pg(x)" (p. 4, mid. of col. 1). GAN training includes adjusting the parameters of the Generator and the decision boundary of the Discriminator (p. 4, col. 2). Kusner teaches that "the discriminator takes as input any real d-dimensional vector (this could be a generated input G(z) or a real one x) and predicts the probability that the input is actually drawn from the real distribution p(x). It will be trained to take samples G(z) and real inputs x and accurately distinguish them" (p. 3 § "Generative adversarial modeling"). As explained above, in the combination of Kusner, Somasundaram and Vang, the training data are polypeptide-HLA interaction data.
With respect to claim 5, Vang teaches that "the input into HLA-CNN network is the character string of the peptide" (p. 2660 § 2.3), the peptide being one that does or does not bind to HLA class I. Since the combination of Kusner and Somasundaram teaches using a GAN to generate synthetic positive training examples, synthetic training examples for the HLA-CNN of Vang must be peptide sequences predicted to bind to HLA class I, as in claimed step (j). These synthetic training examples are combined with real training examples, as in claimed step (k). The HLA-CNN is then trained until convergence, as in steps (l)-(o).

With respect to claim 6, Vang teaches that the HLA-CNN outputs a binary prediction of whether the peptide binds to HLA class I (p. 2661, col. 1; Fig. 1). With respect to claims 7 and 8, Vang teaches evaluating the classification accuracy of the CNN (p. 2661, col. 1). Kusner teaches that the synthetic examples generated by the GAN should be indistinguishable from real examples (p. 3 § "Generative adversarial modeling"). If a classifier is trained with synthetic training examples that are distinguishable from real training examples, the classifier will have poor performance. Hence, poor performance of a classifier (e.g., the HLA-CNN of Vang) indicates that the GAN is insufficiently trained; conversely, good performance of the classifier indicates that the GAN is sufficiently trained. Kusner, Somasundaram and Vang all teach computerized training of their respective models, which necessitates that the models themselves were outputted in some form.

With respect to claim 9, Vang teaches that the input to the HLA-CNN is a peptide 9-mer (p. 2660 § 2.3); hence, the GAN must generate 9-mer peptide sequences, i.e., "allele length". Kusner teaches that the GAN includes the parameter τ, which is a learning rate (mid. of p. 2). Kusner also teaches training the GAN with a chosen learning rate and batch size (p. 4 § "Optimization details").
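The alternating scheme the rejection assembles (train the GAN, train the CNN on real plus simulated data, use the CNN's score on a held-out set as the "GAN trained / not trained" signal, and repeat until a stop criterion) can be sketched abstractly. Every callable and threshold below is a hypothetical placeholder, not an API or value from the cited references.

```python
# Abstract sketch of the train/evaluate loop described above: the CNN's
# evaluation score on real held-out examples serves as the signal that the
# GAN is adequately trained. All callables and the 0.9 target are assumed
# placeholders for illustration only.
from typing import Callable

def train_until_adequate(
    train_gan_step: Callable[[], None],
    train_cnn_on_augmented_data: Callable[[], None],
    evaluate_cnn: Callable[[], float],
    target_score: float = 0.9,
    max_rounds: int = 50,
) -> bool:
    """Return True if the stop criterion was met within max_rounds."""
    for _ in range(max_rounds):
        train_gan_step()                    # refine the generator
        train_cnn_on_augmented_data()       # real + simulated samples
        if evaluate_cnn() >= target_score:  # GAN deemed "trained"
            return True
    return False

# Toy demonstration: the evaluation score improves each round, so the loop
# stops once the target is reached.
score = iter([0.6, 0.75, 0.92])
assert train_until_adequate(lambda: None, lambda: None, lambda: next(score))
```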
With respect to claim 10, Vang teaches that the HLA-CNN model predicts HLA class I binding; HLA-A, HLA-B and HLA-C are HLA class I proteins. With respect to claims 11 and 12, Vang teaches "a 9-mer peptide" (p. 2660 § 2.3). With respect to claims 15 and 16, Vang teaches training HLA-CNN models with specific HLA alleles, including A*02:01, A*02:03, B*27:03 and B*27:05 (p. 2663, Table 2). With respect to claim 17, Somasundaram teaches that "the weights and biases in the discriminator and the generator are trained through back propagation" (p. 4, top of col. 1), which necessarily includes "evaluating a gradient descent expression". With respect to claim 19, Somasundaram teaches that "the optimization of GAN can be formulated as a minimax problem" (p. 4, mid. of col. 2), i.e., an evaluation of an MSE function. Vang teaches that "the loss function used is the binary cross entropy function" (p. 2661, mid. of col. 1), which is the equivalent of MSE for binary outputs. Vang further teaches that the HLA-CNN model is evaluated using AUC (p. 2661, top of col. 2). With respect to claim 20, Kusner, Somasundaram and Vang all teach computerized training of their respective models, which necessitates that the models themselves were outputted in some form. With respect to claim 21, Somasundaram teaches that "z is sampled from the prior distribution pz(z) such as uniform or Gaussian distribution," z being the noise vector (p. 4, mid. of col. 1). With respect to claim 22, Kusner teaches that a GAN transforms a noise vector into a multinomial probability distribution that approximates the distribution of sequence elements at each position in the sequence (p. 3 § "Generative adversarial modeling"). A multinomial probability distribution is, by definition, a normalized matrix. With respect to claim 23, Antoniou teaches a "ratio of generated to real data" (mid. of p. 8), i.e., a "generating size parameter". Antoniou also teaches a parameter of the number of training samples for each class (p. 7, Table 1); in combination with Vang, Somasundaram and Kusner, the classes are positive and negative real polypeptide-MHC-I interactions.

An invention would have been obvious to one of ordinary skill in the art if some motivation in the prior art would have led that person to modify prior art reference teachings to arrive at the claimed invention. Prior to the time of invention, said practitioner would have been motivated to modify the HLA classification method of Vang to include synthetic training data generated by a GAN, because Somasundaram and Antoniou teach that GANs can successfully generate synthetic training data for a classifier, overcoming a problem noted by Vang. Given that Somasundaram teaches that GANs can be used to generate any kind of biomedical data, including sequences as in Kusner and Vang, said practitioner would have readily predicted that the modification would successfully result in a method of generating a classifier for HLA-binding sequences, trained on a combination of real HLA binding data and synthetic training data generated by a GAN. Said practitioner also would have been motivated to modify the classifier of Vang, Somasundaram and Kusner to use the DAGAN architecture of Antoniou, or at least to include a parameter specifying the ratio of simulated to real training data in a similar architecture, because Antoniou teaches that the DAGAN architecture improves many different classification tasks. Given that Antoniou teaches that the DAGAN can be used to improve training of a CNN, and that the classifier of Vang, Somasundaram and Kusner is a CNN, said practitioner would have readily predicted that the combination would successfully result in a method of creating a GAN that can generate synthetic training data of HLA-binding sequences, with the amount of synthetic data controlled by a parameter related to the amount of real positive training examples.
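The binary cross-entropy loss that Vang is quoted as using for the HLA-CNN (the claim 19 discussion above) has a compact closed form: for label y in {0, 1} and predicted probability p, BCE = -(y·log p + (1-y)·log(1-p)). A minimal sketch; the epsilon clamp is an illustrative numerical-stability convention, not something from the cited reference.

```python
import math

# Sketch of the binary cross-entropy loss quoted from Vang for the HLA-CNN:
# BCE = -(y*log(p) + (1-y)*log(1-p)) for a binary binding label y and a
# predicted binding probability p. The eps clamp is an assumed convention to
# avoid log(0).

def binary_cross_entropy(y: int, p: float, eps: float = 1e-12) -> float:
    p = min(max(p, eps), 1.0 - eps)  # clamp away from 0 and 1
    return -(y * math.log(p) + (1 - y) * math.log(1.0 - p))

# Confident correct predictions incur small loss; confident wrong ones, large.
assert binary_cross_entropy(1, 0.9) < binary_cross_entropy(1, 0.1)
```

Note this is a per-sample loss; training minimizes its mean over the (real plus simulated) batch.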
Said practitioner also would have been motivated to combine the selected positive peptide from the combination of Vang, Somasundaram, Kusner and Antoniou with the synthesis, manufacture, and administration of Ott for effective cancer immunotherapy: targeting highly heterogeneous tumors and selectively targeting tumor relative to healthy tissues, as disclosed by Ott (p. 220, col. 2, par. 2). The invention is therefore prima facie obvious. Claims 13 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Vang, Somasundaram, Kusner, Antoniou, and Ott as applied to claims 1–12 and 15–23 above, and further in view of Carr, et al. (WO 2017/184590). The combination of Vang, Somasundaram, Kusner and Antoniou teaches a method of predicting HLA binding for a peptide sequence that includes using a general binding affinity IC50 threshold of 500 nM to identify peptide binders (p. 2664, § 2). Vang, Somasundaram, Kusner and Antoniou are silent on synthesizing, preparing a vaccine, and administering the vaccine. However, Ott discloses an immunogenic personal neoantigen vaccine for patients with melanoma (title). Ott further discloses that effective anti-tumor immunity in humans has been associated with the presence of T cells directed at cancer neoantigens, a class of HLA-bound peptides that arise from tumor-specific mutations. Ott also discloses tumor procurement, target selection, vaccine manufacture and vaccine administration (p. 218, fig. 1). Carr teaches "methods for improved prediction of HLA-peptide binding, datasets for predicting HLA-peptide binding and selection of HLA-binding peptides and compositions comprising HLA-binding peptides obtained by these methods" (¶ 0004). 
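The 500 nM IC50 cutoff cited from Vang is a simple threshold rule, sketched below. The peptide sequences and affinity values are invented examples, not data from any of the references.

```python
# Sketch of the binder-identification rule cited from Vang: peptides with
# predicted binding affinity IC50 below 500 nM are treated as HLA binders.
# All sequences and IC50 values here are hypothetical.
ic50_nm = {
    "LLFGYPVYV": 12.0,     # strong binder (hypothetical value)
    "GILGFVFTL": 480.0,    # weak binder, still under the threshold
    "AAAAAAAAA": 25000.0,  # non-binder
}

THRESHOLD_NM = 500.0  # the general binding-affinity cutoff cited from Vang

def is_binder(ic50: float, threshold: float = THRESHOLD_NM) -> bool:
    """Classify a peptide as an HLA binder using the IC50 cutoff."""
    return ic50 < threshold

binders = [pep for pep, v in ic50_nm.items() if is_binder(v)]
# binders -> ["LLFGYPVYV", "GILGFVFTL"]
```

Note that the cutoff classifies the 480 nM peptide as a binder just as readily as the 12 nM one; the rule is purely a threshold, with no notion of binding strength.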
Carr teaches "HLA-peptides sequenced by mass spectrometry along with a set of random decoys were used to build binary classifiers (one classifier per HLA allele) to predict whether a given peptide will bind to a specific HLA allele" (¶ 00471); classifiers can include "generative models" and "deep convolutional neural networks" (¶ 00114). Carr further teaches that "a subset of [predicted] peptides were synthesized … and tested for binding to HLA molecules" (¶ 00470); the peptides synthesized for experimental validation were those predicted to bind to at least one HLA (¶ 00483). With respect to claim 14, Vang teaches training HLA-CNN models to generate peptides that bind to specific HLA alleles, including A*02:01, A*02:03, B*27:03 and B*27:05 (p. 2663, Table 2). An invention would have been obvious to one of ordinary skill in the art if some teaching in the prior art would have led that person to combine prior art reference teachings to arrive at the claimed invention. Prior to the time of invention, said practitioner would have followed the teachings of Carr — synthesize peptides that are predicted to bind to HLA, to experimentally validate the prediction — and combined this experimental validation step with the method of Vang, Somasundaram, Kusner, Antoniou, and Ott. Given that both Carr and the combination of Vang, Somasundaram, Kusner, Antoniou, and Ott are directed to generating peptide sequences predicted to bind to HLAs, and that peptides of any sequence can be readily synthesized using customary techniques, said practitioner would have readily predicted that the combination would successfully result in a method of generating predicted HLA-binding peptides, followed by synthesizing those peptides for experimental validation. The invention is therefore prima facie obvious. Conclusion: No claims are allowed. 
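The selection scheme cited from Carr — one binary classifier per HLA allele, with peptides predicted to bind at least one allele chosen for synthesis — reduces to a small selection rule. In this sketch the "classifiers" are stand-in score tables with a threshold; the peptide names, scores, and cutoff are all hypothetical.

```python
from typing import Dict, List

# Hypothetical per-allele prediction scores (higher = more likely to bind),
# standing in for Carr's one-classifier-per-allele outputs.
scores: Dict[str, Dict[str, float]] = {
    "A*02:01": {"PEP1": 0.91, "PEP2": 0.12, "PEP3": 0.40},
    "B*27:05": {"PEP1": 0.05, "PEP2": 0.88, "PEP3": 0.30},
}

CUTOFF = 0.5  # hypothetical binary-decision threshold

def predicted_binders(scores: Dict[str, Dict[str, float]],
                      cutoff: float = CUTOFF) -> List[str]:
    """Select peptides predicted to bind at least one HLA allele,
    mirroring Carr's criterion for choosing peptides to synthesize."""
    peptides = {p for table in scores.values() for p in table}
    return sorted(p for p in peptides
                  if any(table.get(p, 0.0) >= cutoff
                         for table in scores.values()))

to_synthesize = predicted_binders(scores)
# to_synthesize -> ["PEP1", "PEP2"]  (PEP3 falls below the cutoff everywhere)
```

The "at least one allele" criterion is the `any(...)` test: a peptide need only clear the cutoff for a single allele-specific classifier to be queued for synthesis and experimental validation.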
Inquiries Any inquiry concerning this communication or earlier communications from the examiner should be directed to Dawn Bickham, whose telephone number is (703) 756-1817. The examiner can normally be reached Monday through Friday, 8-4. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Olivia Wise, can be reached at (571) 272-2249. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /D.M.B./ Examiner, Art Unit 1685 /Soren Harward/ Primary Examiner, TC 1600

Prosecution Timeline

Feb 18, 2019
Application Filed
Jan 20, 2022
Non-Final Rejection — §101, §103, §112
May 18, 2022
Applicant Interview (Telephonic)
May 19, 2022
Examiner Interview Summary
May 25, 2022
Response Filed
Jun 08, 2022
Final Rejection — §101, §103, §112
Sep 13, 2022
Request for Continued Examination
Oct 03, 2022
Response after Non-Final Action
Nov 08, 2022
Non-Final Rejection — §101, §103, §112
Jan 25, 2023
Interview Requested
Feb 08, 2023
Applicant Interview (Telephonic)
Feb 08, 2023
Examiner Interview Summary
Feb 15, 2023
Response Filed
Mar 03, 2023
Final Rejection — §101, §103, §112
Apr 20, 2023
Interview Requested
May 09, 2023
Response after Non-Final Action
Jul 10, 2023
Notice of Allowance
Jul 10, 2023
Response after Non-Final Action
Aug 08, 2023
Response after Non-Final Action
Oct 16, 2023
Response after Non-Final Action
Oct 20, 2023
Response after Non-Final Action
Nov 02, 2023
Response after Non-Final Action
Jan 08, 2024
Response after Non-Final Action
Jan 11, 2024
Response after Non-Final Action
Jan 12, 2024
Response after Non-Final Action
Jan 12, 2024
Response after Non-Final Action
Jul 09, 2025
Response after Non-Final Action
Sep 10, 2025
Request for Continued Examination
Sep 15, 2025
Response after Non-Final Action
Oct 30, 2025
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597490
METHODS AND SYSTEMS FOR MODELING PHASING EFFECTS IN SEQUENCING USING TERMINATION CHEMISTRY
2y 5m to grant Granted Apr 07, 2026
Patent 12486545
Diagnostic and Treatment of Chronic Pathologies Such as Lyme Disease
2y 5m to grant Granted Dec 02, 2025
Patent 12488859
PEPTIDE BASED VACCINE GENERATION SYSTEM WITH DUAL PROJECTION GENERATIVE ADVERSARIAL NETWORKS
2y 5m to grant Granted Dec 02, 2025
Patent 12482534
PEPTIDE BASED VACCINE GENERATION SYSTEM WITH DUAL PROJECTION GENERATIVE ADVERSARIAL NETWORKS
2y 5m to grant Granted Nov 25, 2025
Patent 12473584
METHOD FOR DETECTING THE PRESENCE, IDENTIFICATION AND QUANTIFICATION IN A BLOOD SAMPLE OF ANTICOAGULANTS WHICH ARE BLOOD COAGULATION ENZYMES INHIBITORS, AND MEANS FOR THE IMPLEMENTATION THEREOF
2y 5m to grant Granted Nov 18, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
52%
Grant Probability
99%
With Interview (+69.5%)
4y 1m
Median Time to Grant
High
PTA Risk
Based on 25 resolved cases by this examiner. Grant probability derived from career allow rate.
