Prosecution Insights
Last updated: April 19, 2026
Application No. 17/495,571

METHOD AND SYSTEMS FOR PREDICTION OF HLA CLASS II-SPECIFIC EPITOPES AND CHARACTERIZATION OF CD4+ T CELLS

Non-Final OA: §101, §102, §103, §112, §DP
Filed
Oct 06, 2021
Examiner
SABOUR, GHAZAL
Art Unit
1686
Tech Center
1600 — Biotechnology & Organic Chemistry
Assignee
BIONTECH SE
OA Round
1 (Non-Final)
Grant Probability: 29% (At Risk)
OA Rounds: 1-2
To Grant: 3y 5m
With Interview: 61%

Examiner Intelligence

Career Allow Rate: 29% (9 granted / 31 resolved; -31.0% vs TC avg)
Interview Lift: strong, +32.3% among resolved cases with interview
Avg Prosecution: 3y 5m typical timeline; 34 currently pending
Career History: 65 total applications across all art units

Statute-Specific Performance

§101: 33.2% (-6.8% vs TC avg)
§103: 33.4% (-6.6% vs TC avg)
§102: 10.6% (-29.4% vs TC avg)
§112: 14.3% (-25.7% vs TC avg)

Tech Center averages are estimates. Based on career data from 31 resolved cases.
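The summary figures above are simple ratios over the examiner's resolved cases. A minimal sketch reproducing them from the raw counts (the 60% Tech Center baseline is inferred here from the stated -31.0% delta, not given directly by the tool):

```python
# Reconstruct the examiner summary statistics from raw counts.
granted = 9          # applications allowed
resolved = 31        # total resolved (allowed + abandoned)
tc_avg = 0.60        # Tech Center average allow rate (inferred; assumption)

allow_rate = granted / resolved
delta_vs_tc = allow_rate - tc_avg

print(f"Career allow rate: {allow_rate:.0%}")   # 29%
print(f"vs TC avg: {delta_vs_tc:+.1%}")         # -31.0%
```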

Office Action

§101 §102 §103 §112 §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Election/Restrictions

Applicant's election without traverse of Group I, Species B (the ratio of 10:4990 and PPV of at least 0.3, in claims 66-67 and 70-71), and Group II (HLA-DRB4*01:03, in claims 74-78) is acknowledged.

Claim Status

Claims 1-61 are canceled. Claims 62-81 are pending. Claims 65, 68, and 69 are withdrawn. Claims 62-64, 66-67, and 70-81 are examined on the merits.

Priority

The instant application is the National Stage entry of PCT/US2019/068084 (international filing date 12/20/2019), which claims priority to US Provisional Application 62/783,914, filed 12/21/2018. As such, the effective filing date assigned to each of claims 62-64, 66-67, and 70-81 is 12/21/2018. In this action, all claims are examined as though they had an effective filing date of 12/21/2018. In future actions, the effective filing date of one or more claims may change due to amendments to the claims or further analysis of the disclosure(s) of the priority application(s).

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 08/04/2023, 07/29/2024, 01/03/2025, 07/14/2025, and 11/06/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the list of cited references was considered in full by the examiner. A signed copy of the corresponding 1449 form has been included with this Office action.

Drawings

The drawings filed 10/06/2021 and 03/11/2022, including the amendments to the drawings filed 03/11/2022, are accepted.

Specification

The specification filed 10/06/2021 has been accepted.

Claim Rejections - 35 U.S.C. § 112(b)

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.
—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

Claims 66 and 67 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 66 recites "The system of claim 62, wherein: (i) the at least one hit peptide sequence comprises at least 10 hit peptide sequences, and (ii) the at least 499 decoy peptide sequences comprise at least 4990 decoy peptide sequences" in lines 1-3, which lacks antecedent basis: claim 62, from which claim 66 depends, does not recite "the at least one hit peptide sequence" or "the at least 499 decoy peptide sequences". It is also unclear how one hit peptide sequence comprises at least 10 hit peptide sequences, and how 499 decoy peptide sequences comprise at least 4990 decoy peptide sequences. As such, the claim fails to particularly point out and distinctly claim the subject matter.

Claim 67 recites "The system of claim 62, wherein any nine contiguous amino acid subsequences of any of the at least one hit peptides does not overlap with any nine contiguous amino acid sub-sequences of the at least 4990 decoy peptide sequences." in lines 1-3, which likewise lacks antecedent basis: claim 62, from which claim 67 depends, does not recite a "hit peptide" or a "decoy peptide". As such, the claim fails to particularly point out and distinctly claim the subject matter.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination.
– An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C.
112, sixth paragraph:

(A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and

(C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word "means" (or "step") in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word "means" (or "step") are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Conversely, claim limitations in this application that do not use the word "means" (or "step") are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes claim limitations that do not use the word "means" but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because each such limitation uses a generic placeholder coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. The claim limitations are:

"an input module configured to receive amino acid sequence information" in claims 62 and 63;
"a processing module operably linked to the input module" in claims 62 and 64; and
"an output module configured to display the plurality of presentation predictions" in claim 64.

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. Applicant's specification does not disclose corresponding structure for the input module and processing module. As such, the input module and processing module are interpreted as computer programs (specification pg. 88).

If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C.
112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid interpretation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 U.S.C. § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 62-64, 66-67, and 70-81 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The Supreme Court has established a two-step framework for this analysis, wherein a claim does not satisfy § 101 if (1) it is "directed to" a patent-ineligible concept, i.e., a law of nature, natural phenomenon, or abstract idea, and (2), if so, the particular elements of the claim, considered "both individually and as an ordered combination," do not add enough to "transform the nature of the claim into a patent-eligible application." Elec. Power Grp., LLC v. Alstom S.A., 830 F.3d 1350, 1353 (Fed. Cir. 2016) (quoting Alice, 134 S. Ct. at 2355). Applicant is also directed to MPEP 2106.

Step 1: The instantly claimed invention (claims 62-64, 66-67, and 70-81 being representative) is directed to a system and therefore falls into one of the four statutory categories. [Step 1: YES]

Step 2A: It is first determined in Prong One whether a claim recites a judicial exception and, if so, it is then determined in Prong Two whether the recited judicial exception is integrated into a practical application of that exception.
Step 2A, Prong 1: Under MPEP § 2106.04, the Step 2A (Prong 1) analysis requires determining whether a claim recites an abstract idea, law of nature, or natural phenomenon. Claims 62-64, 66-67, and 70-81 recite the following steps, which fall under the mathematical concepts, mental processes, and/or certain methods of organizing human activity groupings of abstract ideas.

Claim 62 recites a trained machine learning class II HLA-peptide presentation prediction model to generate output peptide sequences with a plurality of presentation predictions, wherein each presentation prediction of the plurality of presentation predictions is indicative of a presentation likelihood that a peptide sequence of the set of candidate peptide sequences is presented by one or more proteins encoded by a class II HLA allele of a cell of the human subject. The limitation of generating sequences and a likelihood is considered a mathematical calculation of calculating a likelihood/probability, as disclosed in specification [0491]. As such, said limitation falls into the mathematical concepts grouping of abstract ideas.
Claim 62 further recites that the trained machine learning class II HLA-peptide presentation prediction model comprises: (i) a plurality of parameters identified at least based on training data comprising: (1) sequences of training peptides (mental process of identifying a sequence), (2) an identity of a protein encoded by an HLA class II allele associated with the training peptide sequences (mental process of identifying an identity), and (3) an observation by mass spectrometry that one or more of the training peptides was presented by the protein encoded by the HLA class II allele in training cells (mental process of identifying an observation); and (ii) a function representing a relation between the amino acid sequence information received as input and the presentation likelihood generated as an output based on the amino acid sequence information and the plurality of parameters (mathematical calculation/process of identifying a function), as disclosed in specification [0849].

Step (b) encompasses performing evaluation, judgment, and opinion to generate data. Under its broadest reasonable interpretation when read in light of the specification, the generating step encompasses mental processes practically performed in the human mind by observation, evaluation, judgment, and opinion. See MPEP 2106.04(a)(2), subsection III. Said step also involves mathematical processes.

Claim 62 further recites that the trained model has a positive predictive value of at least 0.2 according to a presentation PPV determination method; the limitation of determining a PPV value is considered a mathematical calculation of calculating a likelihood, as disclosed in specification [0433]. As such, said limitation falls into the mathematical concepts grouping of abstract ideas.

Claim 70 recites that the trained machine learning class II HLA-peptide presentation prediction model has a PPV of at least 0.3 according to a presentation PPV determination method (mathematical process).
Claim 71 recites that the trained machine learning class II HLA-peptide presentation prediction model has a PPV of at least 0.3 at a recall rate of 20% according to a presentation PPV determination method (mathematical process).

Claim 76 recites that the training data comprises training data obtained by deconvolution; the limitation of obtaining training data by deconvolution is considered a mathematical calculation, as disclosed in specification [0321]. As such, said limitation falls into the mathematical concepts grouping of abstract ideas.

Claims 63-64, 66-67, 70-75, and 77-81 provide additional information about the recited judicial exceptions. Additionally, claims 62-64, 66-67, and 70-81 recite a correlation between peptide sequences and a therapeutic composition and, as such, fall into the judicial exception of laws of nature and natural phenomena. See MPEP 2106(b) I.

The identified claims recite a law of nature, a natural phenomenon (product of nature), and/or fall into one of the groupings of abstract ideas of mathematical concepts, mental processes, and/or certain methods of organizing human activity for the reasons set forth above. See MPEP 2106.04(a)(2) III and MPEP 2106.04(b) I. Therefore, the claims are directed to one or more judicial exceptions and require further analysis in Prong Two. [Step 2A, Prong 1: YES]

Step 2A, Prong 2: Under MPEP § 2106.04, the Step 2A, Prong 2 analysis requires identifying whether there are any additional elements recited in the claim beyond the judicial exception(s) and evaluating those additional elements to determine whether they integrate the exception into a practical application of the exception. This judicial exception is not integrated into a practical application for the following reasons. The additional elements of claims 62-64, 66-67, and 70-81 include the following.
Claim 62 recites a system comprising a computer processor, an input module (software), a processing module (software), executable code comprising a trained machine learning model (software), an input module configured to receive amino acid sequence information (receiving data), and preparing a therapeutic composition. Claim 63 recites that the input module is configured to receive an identity of one or more proteins (receiving data). Claim 64 recites an output module configured to display the plurality of presentation predictions (outputting data). Claim 80 recites that the output peptide sequences are for preparing a therapeutic composition for the human subject.

The additional elements of a system comprising a computer processor, an input module (software), a processing module (software), executable code comprising a trained machine learning model (software), and an output module configured to display are generic computer components and/or processes. There are no limitations indicating that the processor, input module, processing module, or output module in the computer-implemented system requires anything other than generic computing systems. The courts have found that using a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data), or simply adding a general-purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation), does not integrate a judicial exception into a practical application. See MPEP 2106.05(f).

Furthermore, the additional elements of receiving amino acid sequence information (receiving data), receiving an identity of one or more proteins (receiving data), and displaying the plurality of presentation predictions (outputting data) amount to necessary data gathering and outputting.
The courts have found that limitations amounting to necessary data gathering and outputting are insignificant extra-solution activity that do not integrate a recited judicial exception into a practical application. See Mayo, 566 U.S. at 79, 101 USPQ2d at 1968; OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1092-93 (Fed. Cir. 2015); MPEP 2106.05(g). Furthermore, the additional element of preparing a therapeutic composition is merely an intended use of the claimed invention or a field-of-use limitation, and it cannot integrate a judicial exception under the "treatment or prophylaxis" consideration. See MPEP 2106.04(d)(2).

Therefore, the additionally recited elements amount to generic computer components and/or insignificant extra-solution activity and, as such, the claims as a whole do not integrate the abstract idea into a practical application. See MPEP 2106.05(g). Thus, claims 62-64, 66-67, and 70-81 are directed to an abstract idea. [Step 2A, Prong 2: NO]

Step 2B: In the second step, it is determined whether the claimed subject matter includes additional elements that amount to significantly more than the judicial exception. An inventive concept cannot be furnished by the abstract idea itself. See MPEP § 2106.05. The claims do not include any additional steps appended to the judicial exception that are sufficient to amount to significantly more than the judicial exception. The additional elements of claims 62-64, 66-67, and 70-81 include the following.

Claim 62 recites a system comprising a computer processor, an input module (software), a processing module (software), executable code comprising a trained machine learning model (software), an input module configured to receive amino acid sequence information (receiving data), and preparing a therapeutic composition. Claim 63 recites that the input module is configured to receive an identity of one or more proteins (receiving data).
Claim 64 recites an output module configured to display the plurality of presentation predictions (outputting data). Claim 80 recites that the output peptide sequences are for preparing a therapeutic composition for the human subject.

The additional elements of a system comprising a computer processor, an input module (software), a processing module (software), executable code comprising a trained machine learning model (software), and an output module configured to display are conventional computer components and/or processes. The courts have found that using a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data), or simply adding a general-purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation), does not provide significantly more. See Affinity Labs v. DirecTV, 838 F.3d 1253, 1262, 120 USPQ2d 1201, 1207 (Fed. Cir. 2016) (cellular telephone); TLI Communications LLC v. AV Automotive, LLC, 823 F.3d 607, 613, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016) (computer server and telephone unit).

Furthermore, the additional elements of receiving amino acid sequence information (receiving data), receiving an identity of one or more proteins (receiving data), and displaying the plurality of presentation predictions (outputting data) amount to necessary data gathering and outputting. The courts have found that limitations amounting to necessary data gathering and outputting are insignificant extra-solution activity that do not amount to significantly more (see MPEP 2106.05(g)). Furthermore, the additional element of preparing a therapeutic composition is merely an intended use of the claimed invention or a field-of-use limitation and does not amount to significantly more. Therefore, the additional elements are not sufficient to amount to significantly more than the judicial exception.
Taken alone, the additional elements do not amount to significantly more than the above-identified judicial exception(s). Even when viewed as a combination, the additional elements fail to transform the exception into a patent-eligible application of that exception. Thus, the claims as a whole do not amount to significantly more than the exception itself. [Step 2B: NO] Therefore, the rejected claims are not drawn to eligible subject matter, as they are directed to an abstract idea without significantly more.

Claim Rejections - 35 U.S.C. § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 62-64, 72-75, and 78-81 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Boucher et al. (US20210113673A1).
Regarding claim 62, Boucher discloses a method for generating an output for constructing a personalized cancer vaccine by identifying one or more neoantigens from one or more tumor cells of a subject that are likely to be presented on a surface of the tumor cells using a computer processor (claim 1); reading on the limitations of a system for selecting one or more peptide sequences for preparing a pharmaceutical composition, the system comprising a computer processor.

Boucher discloses that the computer is adapted to execute computer program modules for providing functionality and that the term "module" refers to computer program logic used to provide the specified functionality. Boucher further discloses that a module can be implemented in hardware, firmware, and/or software, and that modules are stored on the storage device, loaded into the memory, and executed by the processor. Boucher further discloses an input interface linked to the processor [0547-0548].

Boucher further discloses obtaining at least one of exome, transcriptome, or whole genome nucleotide sequencing data from the tumor cells and normal cells of the subject, wherein the nucleotide sequencing data is used to obtain data representing peptide sequences of each of a set of neoantigens (claim 1); reading on the limitations of (a) an input module configured to receive amino acid sequence information of a set of candidate peptide sequences expressed by cells of a human subject, wherein each candidate peptide sequence of the plurality of candidate peptide sequences is encoded by a genome, transcriptome, or exome of a human subject, or a pathogen or a virus in the human subject.
Boucher further discloses inputting the numerical vectors, using a computer processor, into a deep learning presentation model to generate a set of presentation likelihoods for the set of neoantigens, each presentation likelihood in the set representing the likelihood that a corresponding neoantigen is presented by one or more class II MHC alleles on the surface of the tumor cells of the subject (claim 1); reading on the limitations of (b) a processing module operably linked to the input module, the processing module comprising an executable code comprising a trained machine learning class II HLA-peptide presentation prediction model, wherein the trained machine learning class II HLA-peptide presentation prediction model is configured to generate output peptide sequences with a plurality of presentation predictions, wherein each presentation prediction of the plurality of presentation predictions is indicative of a presentation likelihood that a peptide sequence of the set of candidate peptide sequences is presented by one or more proteins encoded by a class II HLA allele of a cell of the human subject.
Boucher further discloses the deep learning presentation model comprising: a plurality of parameters identified at least based on a training data set comprising: labels obtained by mass spectrometry measuring presence of peptides bound to at least one class II MHC allele identified as present in at least one of a plurality of samples; training peptide sequences encoded as numerical vectors including information regarding a plurality of amino acids that make up the peptide sequence and a set of positions of the amino acids in the peptide sequence; and at least one HLA allele associated with the training peptide sequences; and a function representing a relation between the numerical vector received as an input and the presentation likelihood generated as output based on the numerical vector and the parameters; reading on the limitations of wherein the trained machine learning class II HLA-peptide presentation prediction model comprises: (i) a plurality of parameters identified at least based on training data comprising: (1) sequences of training peptides, (2) an identity of a protein encoded by an HLA class II allele associated with the training peptide sequences, and (3) an observation by mass spectrometry that one or more of the training peptides was presented by the protein encoded by the HLA class II allele in training cells; and (ii) a function representing a relation between the amino acid sequence information received as input and the presentation likelihood generated as an output based on the amino acid sequence information and the plurality of parameters.
Boucher further discloses selecting a subset of the set of neoantigens based on the set of presentation likelihoods to generate a set of selected neoantigens, and generating the output for constructing the personalized cancer vaccine based on the set of selected neoantigens (claim 1); reading on the limitations of wherein each peptide sequence of a subset of the output peptide sequences is for preparing a therapeutic composition for the human subject based on the presentation likelihood generated as the output of the peptide sequence being in a complex with the one or more proteins encoded by a class II HLA allele of a cell of the human subject.

Boucher further compares performance results for example presentation models trained and tested using datasets. For each set of model features of the example presentation models, FIG. 13C depicts a PPV value at 10% recall, and the presentation models achieved PPV values at 10% recall varying from 14% up to 29% (FIG. 13C) [0495]; reading on the limitations of the trained machine learning class II HLA-peptide presentation prediction model has a positive predictive value (PPV) of at least 0.2 according to a presentation PPV determination method.

Regarding claim 63, Boucher discloses that the nucleotide sequencing data is used to obtain data representing peptide sequences of each of a set of neoantigens identified by comparing the nucleotide sequencing data from the tumor cells and the nucleotide sequencing data from the normal cells, and wherein the peptide sequence of each neoantigen comprises at least one alteration that makes it distinct from the corresponding wild-type peptide sequence identified from the normal cells of the subject (claim 1). Boucher further discloses that the samples comprise human cell lines from patients (claim 12); reading on the limitations of the input module is configured to receive an identity of one or more proteins encoded by a class II HLA allele of a cell of the human subject.
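The "PPV at recall" metric that both the claims and the Boucher mapping turn on can be made concrete. A minimal sketch of the general metric under the claimed hit:decoy evaluation setup (10 hits among 4,990 decoys); this is an editor's illustration with synthetic scores, not code or data from the application or from Boucher:

```python
# Positive predictive value (PPV) at a given recall: rank all peptides
# by predicted presentation score, walk down the ranked list until the
# chosen fraction of true hits has been recovered, then report the
# fraction of peptides examined so far that are hits.
def ppv_at_recall(scored, recall=0.10):
    """scored: list of (score, is_hit) pairs; recall in (0, 1]."""
    ranked = sorted(scored, key=lambda x: x[0], reverse=True)
    total_hits = sum(is_hit for _, is_hit in ranked)
    needed = max(1, round(recall * total_hits))
    found = 0
    for k, (_, is_hit) in enumerate(ranked, start=1):
        found += is_hit
        if found >= needed:
            return found / k   # hits among the top k = PPV at this recall
    return 0.0

# Toy check at the claimed 10:4990 hit-to-decoy ratio: a model that
# ranks all 10 hits above the 4,990 decoys achieves PPV 1.0.
perfect = [(1.0, 1)] * 10 + [(0.0, 0)] * 4990
print(ppv_at_recall(perfect, recall=0.2))   # 1.0
```

Under this kind of determination method, a random ranker at a 10:4990 ratio would hover near PPV 0.002, which is why thresholds such as 0.2 or 0.3 at 10-20% recall are meaningful performance claims.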
Regarding claim 64, Boucher discloses that the computer system includes a processor and a display coupled to the graphics adapter to display images and other information (for example, presentation information) [0546-0547], and that the presentation identification system includes presentation information to be displayed (FIGS. 1D, 2A, and 14) [0321]; reading on the limitations of the processing module is linked to an output module configured to display the plurality of presentation predictions.

Regarding claim 72, Boucher discloses that the methods are for generating an output for constructing a personalized cancer vaccine by identifying one or more neoantigens from one or more tumor cells of a subject that are likely to be presented on a surface of the tumor cells [0094]; reading on the limitations of each peptide sequence of the set of candidate peptide sequences is associated with a cancer.

Regarding claim 73, Boucher discloses the steps of obtaining at least one of exome, transcriptome, or whole genome nucleotide sequencing data from the tumor cells and normal cells of the subject, wherein the nucleotide sequencing data is used to obtain data representing peptide sequences of each of a set of neoantigens identified by comparing the nucleotide sequencing data from the tumor cells and the nucleotide sequencing data from the normal cells, and wherein the peptide sequence of each neoantigen comprises at least one alteration that makes it distinct from the corresponding wild-type peptide sequence identified from the normal cells of the subject [0094]; reading on the limitations of each peptide sequence of the set of candidate peptide sequences (i) comprises a mutation, (ii) is expressed in a cancer cell of the subject, and (iii) is not encoded by a genome of a non-cancer cell of the human subject.
Regarding claim 74, Boucher discloses receiving mass spectrometry data comprising data associated with a plurality of isolated peptides eluted from major histocompatibility complex (MHC) [0175] and that the neoantigen nucleotide sequence for MHC Class II peptides has a length of 6-30 (i.e., 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, or 30) amino acids [0204]; reading on limitations of each sequence of the one or more of the training peptides sequences observed by mass spectrometry to be presented by the protein encoded by the HLA class II in training cells has a length of at least 15 amino acids. Regarding claim 75, Boucher discloses that the encoding module encodes MHC class II alleles, for example, the DR allele type associated with the peptide sequence [0379] [0501-0507]; reading on limitations of the training cells comprise training cells expressing a single MHC class II complex or a protein encoded by a single allelic variant of a class II HLA locus selected from the group consisting of DR, DP, and DQ, wherein the single MHC class II complex or a protein encoded by the single allelic variant of a class II HLA locus is expressed by a cell of the subject. 
Regarding claim 78, Boucher discloses that 19 samples of the 39 total samples contained the HLA class II molecule allele HLA-DRB4*01:03 [0507]; reading on limitations of the protein encoded by a class II HLA allele is selected from the group consisting of: HLA-DPB1*01:01/HLA-DPA1*01:03, HLA-DPB1*02:01/HLA-DPA1*01:03, HLA-DPB1*03:01/HLA-DPA1*01:03, HLA-DPB1*04:01/HLA-DPA1*01:03, HLA-DPB1*04:02/HLA-DPA1*01:03, HLA-DPB1*06:01/HLA-DPA1*01:03, HLA-DRB1*01:01, HLA-DRB1*01:02, HLA-DRB1*03:01, HLA-DRB1*03:02, HLA-DRB1*04:01, HLA-DRB1*04:02, HLA-DRB1*04:03, HLA-DRB1*04:04, HLA-DRB1*04:05, HLA-DRB1*04:07, HLA-DRB1*07:01, HLA-DRB1*08:01, HLA-DRB1*08:02, HLA-DRB1*08:03, HLA-DRB1*08:04, HLA-DRB1*09:01, HLA-DRB1*10:01, HLA-DRB1*11:01, HLA-DRB1*11:02, HLA-DRB1*11:04, HLA-DRB1*12:01, HLA-DRB1*12:02, HLA-DRB1*13:01, HLA-DRB1*13:02, HLA-DRB1*13:03, HLA-DRB1*14:01, HLA-DRB1*15:01, HLA-DRB1*15:02, HLA-DRB1*15:03, HLA-DRB1*16:01, HLA-DRB3*01:01, HLA-DRB3*02:02, HLA-DRB3*03:01, HLA-DRB4*01:01, HLA-DRB5*01:01, HLA-DRB1*01:01, HLA-DRB1*01:02, HLA-DRB1*03:01, HLA-DRB1*04:01, HLA-DRB1*04:02, HLA-DRB1*04:04, HLA-DRB1*04:05, HLA-DRB1*07:01, HLA-DRB1*08:01, HLA-DRB1*08:02, HLA-DRB1*08:03, HLA-DRB1*09:01, HLA-DRB1*11:01, HLA-DRB1*11:02, HLA-DRB1*11:04, HLA-DRB1*12:01, HLA-DRB1*13:01, HLA-DRB1*13:02, HLA-DRB1*13:03, HLA-DRB1*14:01, HLA-DRB1*15:01, HLA-DRB1*15:02, HLA-DRB1*15:03, HLA-DRB1*16:02, HLA-DRB3*01:01, HLA-DRB3*02:01, HLA-DRB3*02:02, HLA-DRB3*03:01, HLA-DRB4*01:01, HLA-DRB4*01:03, HLA-DRB5*01:01; HLA-DPB1*01:01, HLA-DPB1*02:01, HLA-DPB1*02:02, HLA-DPB1*03:01, HLA-DPB1*04:01, HLA-DPB1*04:02, HLA-DPB1*05:01, HLA-DPB1*06:01, HLA-DPB1*11:01, HLA-DPB1*13:01, HLA-DPB1*17:01, HLA-DQA1*01:01/HLA-DQB1*05:01, HLA-DQA1*01:02/HLA-DQB1*06:02, HLA-DQA1*01:02/HLA-DQB1*06:04, HLA-DQA1*01:03/HLA-DQB1*06:03, HLA-DQA1*02:01/HLA-DQB1*02:02, HLA-DQA1*02:01/HLA-DQB1*03:03, HLA-DQA1*03:01/HLA-DQB1*03:02, HLA-DQA1*03:03/HLA-DQB1*03:01, HLA-DQA1*05:01/HLA-DQB1*02:01, HLA-DQA1*05:05/HLA-DQB1*03:01, and any combination thereof. Regarding claim 79, Boucher discloses that a neoantigenic peptide or polypeptide can have an IC50 of less than 5000 nM, less than 1000 nM, less than 500 nM, less than 250 nM, less than 200 nM, less than 150 nM, less than 100 nM, less than 50 nM, or less [0211]; reading on limitations of each peptide sequence of the subset of the output peptide sequences binds to a protein encoded by a class II HLA allele of a cell of the human subject with an IC50 of 500 nM or less, or a predicted IC50 of 500 nM or less. Regarding claim 80, Boucher discloses that neoantigens can include nucleotides or polypeptides and that neoantigens useful in vaccines can therefore include nucleotide sequences or polypeptide sequences. Boucher further discloses that isolated peptides comprise tumor specific mutations identified by the methods disclosed herein, peptides that comprise known tumor specific mutations, and mutant polypeptides or fragments thereof identified by methods disclosed herein [0202-0203]. Boucher further discloses compositions comprising at least two or more neoantigenic peptides. In some embodiments, the composition contains at least two distinct peptides, and the at least two distinct peptides can be derived from the same polypeptide [0213]; reading on limitations of each peptide sequence of the subset of the output peptide sequences is for preparing a therapeutic composition for the human subject that comprises one or more polypeptides comprising at least two peptide sequences of the subset of the output peptide sequences or one or more polynucleotides encoding at least two of the peptide sequences of the subset of the output peptide sequences. 
Regarding claim 81, Boucher discloses obtaining at least one of exome, transcriptome, or whole genome nucleotide sequencing data from the tumor cells of a human subject (claim 1); reading on limitations of wherein each candidate peptide sequence of the plurality of candidate peptide sequences is encoded by a genome, transcriptome, or exome of a human subject with cancer. Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. This application currently names joint inventors. 
In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention. Claims 66-67, 70-71, and 76 are rejected under 35 U.S.C. 103 as being unpatentable over Boucher et al. (US20210113673A1) as applied to claims 62-64, 72-75, and 78-81 above, and further in view of Barra et al. (Footprints of antigen processing boost MHC class II natural ligand predictions, Genome Medicine, vol. 10, article 84, 16 November 2018). Claims 66-67, 70-71, and 76 depend on claim 62. Limitations of claim 62 have been taught in the above rejections. Regarding claim 66, Boucher discloses a data management module that identifies peptide sequences that are not presented by MHC alleles to generate the training data (generates decoys/negatives), and that the data management module identifies source proteins from which presented peptide sequences originated (hits), and identifies a series of peptide sequences in the source protein that were not presented on MHC alleles of the tissue sample cells (decoys) [0373]. Boucher further discloses artificially generating large numbers of peptides with random amino acid sequences and identifying the generated sequences as peptides not presented on MHC alleles (decoys) [0374] [0380] (FIG. 4). Boucher further discloses that the prevalence of presented peptides in the test set was approximately 1/2400 [0499]. 
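For illustration only, the hit/decoy training-set construction described above (presented peptides labeled as positives; unpresented subsequences drawn from the same source protein labeled as negatives, at a fixed hit:decoy ratio such as the claimed 10:4990) can be sketched as follows. This is not code from Boucher; the function name `build_training_set` and all parameter names are hypothetical:

```python
import random

def build_training_set(hits, source_protein, decoys_per_hit=499,
                       peptide_len=15, seed=0):
    """Label each presented peptide (hit) 1 and draw unpresented
    subsequences of the source protein as decoys labeled 0, at a
    fixed hit:decoy ratio. Duplicate decoys are possible in this
    simple sketch."""
    rng = random.Random(seed)
    hit_set = set(hits)
    examples = [(h, 1) for h in hits]
    n_decoys = decoys_per_hit * len(hits)
    while n_decoys > 0:
        start = rng.randrange(len(source_protein) - peptide_len + 1)
        pep = source_protein[start:start + peptide_len]
        if pep not in hit_set:  # a decoy must not be a presented peptide
            examples.append((pep, 0))
            n_decoys -= 1
    return examples
```

With ten hits and `decoys_per_hit=499`, this construction yields the 10:4990 hit-to-decoy ratio recited in claim 66.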
As stated above, claim 66 is indefinite (claim 62 does not recite any hit or decoy limitations); reading on limitations of (i) the at least one hit peptide sequence comprises at least 10 hit peptide sequences, and (ii) the at least 499 decoy peptide sequences comprise at least 4990 decoy peptide sequences. Further regarding claim 66, Barra discloses a prediction model of peptide to MHC-II binding trained with naturally eluted ligands derived from mass spectrometry in addition to peptide binding affinity data sets (Abstract). Barra further teaches that peptide binding to MHC II is the most selective step in antigen presentation to CD4+ T cells and highlights its role in the development of cancer immunotherapies (pg. 12, col. 1, section: Discussion, para. 1). Barra further discloses evaluating the model on a test set in which PPV was assessed on highly unbalanced data (pg. 2, col. 2, para. 2). Barra further discloses using test sets with thousands of decoys to distinguish true ligands from the background proteome (see the "Random" column in Table 1; for example, 38,115 random negative sequences/decoys). Barra further discloses evaluating model performance at PPV thresholds that required distinguishing a handful of hits from a vast background of decoys. Barra further discloses that integrating processing rules boosted the PPV to at least 0.2 (often higher, for example, 0.7) (Tables 1-3, pp. 3-7). Barra further discloses that PPV is calculated by sorting all predictions and estimating the fraction of true positives within the top N predictions, where N is the number of positives/hits in the benchmark data set. Barra further discloses that PPV represents a good metric for benchmarking on highly unbalanced data sets like MS-derived elution data, where there are approximately ten times more negatives than positives (pg. 4, col. 1, para. 4). 
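The PPV determination method described in the preceding paragraph (sort all predictions by score, take the top N predictions where N is the number of hits in the benchmark set, and report the fraction of true positives among them) can be sketched as follows. This is an illustrative sketch only, not code from either reference; the function name `ppv_top_n` is hypothetical:

```python
def ppv_top_n(scores, labels):
    """PPV for an unbalanced MS benchmark: rank all predictions by
    score (highest first), take the top N where N is the number of
    true positives (hits) in the set, and return the fraction of
    those top-N predictions that are true positives."""
    n_hits = sum(labels)
    ranked = sorted(zip(scores, labels), key=lambda pair: pair[0], reverse=True)
    top_n = ranked[:n_hits]
    return sum(label for _, label in top_n) / n_hits
```

On a 10:4990 hit:decoy set, a PPV of at least 0.2 under this definition means that at least 2 of the top 10 ranked peptides are true hits.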
Regarding claim 67, Boucher discloses that the encoding module represents residue-level annotations of the source protein for peptide pi by including, in the allele-noninteracting variables wi, an indicator variable that is equal to 1 if peptide pi overlaps with a helix motif and 0 otherwise, or that is equal to 1 if peptide pi is completely contained within a helix motif [0406]. Barra further discloses removing all data with a 9mer overlap between training and evaluation sets (FIG. 7, pg. 7, col. 3, first para.). As stated above, claim 67 is indefinite (claim 62 does not recite any hit or decoy limitations); reading on limitations of any nine contiguous amino acid subsequences of any of the at least one hit peptides does not overlap with any nine contiguous amino acid sub-sequences of the at least 4990 decoy peptide sequences. Regarding claims 70 and 71, Boucher discloses that, in FIG. 13C, for each set of model features of the example presentation models, a PPV value at 10% recall that was identified when the features in the set of model features were classified as allele interacting features is shown on the left side, and a PPV value at 10% recall that was identified when the features in the set of model features were classified as allele non-interacting features is shown on the right side. Boucher further notes that the feature of peptide sequence was always classified as an allele interacting feature for the purposes of FIG. 13C. Boucher further discloses achieving a PPV value at 10% recall varying from 14% up to 29%, which is significantly (approximately 500-fold) higher than the PPV for a random prediction [0495]. Boucher further discloses that their models are capable of achieving significantly more accurate presentation predictions than the current best-in-class prior art model, the NetMHCII 2.3 model [0541]. 
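Claim 67's limitation (no nine-contiguous-amino-acid subsequence shared between any hit peptide and any decoy peptide) is, in effect, a 9mer-overlap check of the kind Barra applies between training and evaluation data. A minimal illustrative check follows; the helper names `kmers` and `no_9mer_overlap` are hypothetical and this is not code from either reference:

```python
def kmers(seq, k=9):
    """All contiguous length-k subsequences of a peptide."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def no_9mer_overlap(hits, decoys):
    """True when no nine-contiguous-amino-acid subsequence of any hit
    also occurs as a nine-contiguous-amino-acid subsequence of any decoy."""
    hit_9mers = set().union(*(kmers(h) for h in hits))
    decoy_9mers = set().union(*(kmers(d) for d in decoys))
    return hit_9mers.isdisjoint(decoy_9mers)
```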
Barra discloses that high accuracy prediction models for peptide MHC II interaction can be constructed from the MS-derived MHC II eluted ligand data, that the accuracy of these models can be improved by training models integrating information from both binding affinity and eluted ligand data sets, and that these improved models can be used to identify both eluted ligands and T cell epitopes in independent data sets at an unprecedented level of accuracy (pg. 12, col. 2, para. 1). Regarding claim 76, Boucher discloses a multi-allele presentation model generating per-allele presentation likelihoods for class II MHC alleles, inherently requiring deconvolution [0491] [0545]. Barra discloses motif deconvolution for MS datasets of eluted ligands (pg. 4, col. 1, last para.). In KSR Int'l v. Teleflex, the Supreme Court, in rejecting the rigid application of the teaching, suggestion, and motivation test by the Federal Circuit, indicated that "The principles underlying [earlier] cases are instructive when the question is whether a patent claiming the combination of elements of prior art is obvious. When a work is available in one field of endeavor, design incentives and other market forces can prompt variations of it, either in the same field or a different one. If a person of ordinary skill can implement a predictable variation, § 103 likely bars its patentability." KSR Int'l v. Teleflex Inc., 127 S. Ct. 1727, 1740 (2007). Applying the KSR standard to Boucher and Barra, the examiner concludes that the combination of Boucher and Barra represents a work available in one field of endeavor, where design incentives and other market forces can prompt variations of it. Both Boucher and Barra are directed to generating prediction models of peptide to MHC-II binding using machine learning. Boucher disclosed a 1:2400 hit-to-decoy ratio, with the best performing presentation model achieving a 29% PPV (i.e., approximately 0.3 PPV) at 10% recall. 
In the same field of research, Barra disclosed that high accuracy prediction models for peptide MHC II interaction can be constructed from the MS-derived MHC II eluted ligand data, and that the accuracy of these models can be improved by training models integrating information from both binding affinity and eluted ligand data sets. Combining the tumor-specific rules of Boucher (for example, tumor microenvironment/antigen-presenting cells (APCs)) with the processing footprints of Barra (for example, protease cleavage signatures) would have allowed a model to filter out "false positives" that bind well but are never processed, maintaining high positive predictive value (PPV) even as the recall rate increases to capture more potential targets. One of ordinary skill in the art, before the effective filing date of the claimed invention, would have had incentives to vary Boucher's method to incorporate the specific signatures of Barra in a predictable manner to result in the claimed invention. This combination would have been expected to provide more stability at higher recall rates and improved accuracy of MHC-II peptide presentation prediction. Therefore, the invention would have been prima facie obvious to one of skill in the art before the effective filing date of the claimed invention, absent evidence to the contrary. Claim 77 is rejected under 35 U.S.C. 103 as being unpatentable over Boucher et al. (US20210113673A1) as applied to claims 62-64, 72-75, and 78-81 above, and further in view of Carr et al. (US20190346442A1). Claim 77 depends on claim 62. Limitations of claim 62 have been taught in the above rejections. Regarding claim 77, Boucher discloses identifying a series of peptide sequences in the source protein that were not presented on MHC alleles of the tissue sample cells [0373]. 
Boucher further discloses that a set of numerical parameters for the presentation model can be trained based on a training data set including at least a set of training peptide sequences identified as present in a plurality of samples and one or more MHC alleles associated with each training peptide sequence, wherein the training peptide sequences are identified through mass spectrometry on isolated peptides eluted from MHC alleles (for example, isolated using tags) derived from the plurality of samples [0109]. Carr discloses methods of generating an HLA class II-allele specific binding peptide sequence database (claim 3) [0111]. Carr further discloses that the methods of in vitro and in vivo production of neoantigens relate to pharmaceutical compositions and methods of delivery of the therapy [0150]. Carr further discloses purification of HLA protein using affinity tags [0118]. It would have been prima facie obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the method of Boucher to have used the known technique of adding affinity tags to a specific recombinant alpha or beta chain for the isolation of exactly one HLA-II heterodimer. One of ordinary skill in the art would have been motivated to combine the methods of Boucher and Carr based on a finding that Carr contained the known technique of using an affinity tag for protein purification, which is applicable to the base method of Boucher; one of ordinary skill in the art would have been capable of applying this known technique to the method of Boucher, which was ready for improvement, and the results would have been predictable to one of ordinary skill in the art. 
Double Patenting The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969). A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b). The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. 
A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13. The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer. Claims 62-64, 66-67, and 70-81 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-2, 4-5, 7-8, 9-10, and 16-17 of U.S. Patent No. 11,183,272 in view of Boucher et al. (US20210113673A1) as applied to claims 62-64, 72-75, and 78-81 above, and further in view of Barra et al. (Footprints of antigen processing boost MHC class II natural ligand predictions, Genome Medicine, vol. 10, article 84, 16 November 2018). 
Although the claims at issue are not identical, they are not patentably distinct from each other because the instant application claims correspond to the claims of U.S. Patent No. 11,183,272, as shown in the following comparison (each instant application claim is followed by the corresponding patent claim): Claim 62: (a) an input module configured to receive amino acid sequence information of a set of candidate peptide sequences expressed by cells of a human subject, wherein each candidate peptide sequence of the plurality of candidate peptide sequences is encoded by a genome, transcriptome, or exome of a human subject, or a pathogen or a virus in the human subject; (b) a processing module operably linked to the input module, the processing module comprising an executable code comprising a trained machine learning class II HLA-peptide presentation prediction model, wherein the trained machine learning class II HLA-peptide presentation prediction model is configured to generate output peptide sequences with a plurality of presentation predictions, wherein each presentation prediction of the plurality of presentation predictions is indicative of a presentation likelihood that a peptide sequence of the set of candidate peptide sequences is presented by one or more proteins encoded by a class II HLA allele of a cell of the human subject, and wherein the trained machine learning class II HLA-peptide presentation prediction model comprises: (i) a plurality of parameters identified at least based on training data comprising: (1) sequences of training peptides, (2) an identity of a protein encoded by an HLA class II allele associated with the training peptide sequences, and (3) an observation by mass spectrometry that one or more of the training peptides was presented by the protein encoded by the HLA class II allele in training cells; and (ii) a function representing a relation between the amino acid sequence information received as input and the presentation likelihood generated as an output based on the amino acid sequence information and the plurality of parameters; wherein each peptide sequence of a subset of the output peptide 
sequences is for preparing a therapeutic composition for the human subject based on the presentation likelihood generated as the output of the peptide sequence being in a complex with the one or more proteins encoded by a class II HLA allele of a cell of the human subject, and wherein the trained machine learning class II HLA-peptide presentation prediction model has a positive predictive value (PPV) of at least 0.2 according to a presentation PPV determination method. Claim 1: (a) inputting amino acid information of a set of candidate peptide sequences using a computer processor, into a trained machine learning class II HLA-peptide presentation prediction model to generate a plurality of presentation predictions, wherein each candidate peptide sequence of the set of candidate peptide sequences is encoded by a genome, transcriptome, or exome of a subject, or a pathogen or a virus in the subject; wherein the plurality of presentation predictions comprises an HLA presentation prediction for each candidate peptide sequence of the set of candidate peptide sequences, wherein each presentation prediction of the plurality of presentation predictions is indicative of a presentation likelihood that a peptide sequence of the set of candidate peptide sequences is presented by one or more proteins encoded by a class II HLA allele of a cell of the subject, and wherein the trained machine learning class II HLA-peptide presentation prediction model comprises (i) a plurality of parameters identified at least based on training data comprising: (1) sequences of training peptides, (2) an identity of a protein encoded by an HLA class II allele associated with the training peptide sequences, and (3) an observation by mass spectrometry that one or more of the training peptides was presented by the protein encoded by the HLA class II allele in training cells; and (ii) a function representing a relation between the amino acid sequence information received as input and the presentation 
likelihood generated as an output based on the amino acid sequence information and the plurality of parameters; (b) selecting, based at least on the plurality of presentation predictions, a subset of peptide sequences of the set of candidate peptide sequences to generate a set of selected peptide sequences, wherein the selected subset of peptide sequences (i) are encoded by a genome, transcriptome, or exome of a subject, or a pathogen or a virus in the subject, and (ii) are predicted to be presented by the protein encoded by the HLA class II allele of a cell of the subject; and (c) administering to the human subject a pharmaceutical composition comprising: wherein the trained machine learning HLA-peptide presentation prediction model has a positive predictive value (PPV) of at least 0.2 at a recall rate of 10% according to a presentation PPV determination method. Claim 63: wherein the input module is configured to receive an identity of one or more proteins encoded by a class II HLA allele of a cell of the human subject. Claim 64: wherein the processing module is linked to an output module configured to display the plurality of presentation predictions. Claim 66: wherein: (i) the at least one hit peptide sequence comprises at least 10 hit peptide sequences, and (ii) the at least 499 decoy peptide sequences comprise at least 4990 decoy peptide sequences. Claim 7: wherein: (i) the at least one hit peptide sequence comprises at least 10 hit peptide sequences, and (ii) the at least 499 decoy peptide sequences comprise at least 4990 decoy peptide sequences. Claim 67: wherein any nine contiguous amino acid subsequences of any of the at least one hit peptides does not overlap with any nine contiguous amino acid sub-sequences of the at least 4990 decoy peptide sequences. 
Claim 8: wherein any nine contiguous amino acid subsequences of any of the at least one hit peptides does not overlap with any nine contiguous amino acid sub-sequences of the at least 4990 decoy peptide sequences. Claim 70: wherein the trained machine learning class II HLA-peptide presentation prediction model has a PPV of at least 0.3 according to a presentation PPV determination method. Claim 71: wherein the trained machine learning class II HLA-peptide presentation prediction model has a PPV of at least 0.3 at a recall rate of 20% according to a presentation PPV determination method. Claim 72: wherein each peptide sequence of the set of candidate peptide sequences is associated with a cancer. Claim 9: wherein each peptide sequence of the set of candidate peptide sequences is associated with a cancer. Claim 73: wherein each peptide sequence of the set of candidate peptide sequences (i) comprises a mutation, (ii) is expressed in a cancer cell of the subject, and (iii) is not encoded by a genome of a non-cancer cell of the human subject. Claim 10: wherein each peptide sequence of the set of candidate peptide sequences (i) comprises a mutation, (ii) is expressed in a cancer cell of the subject, and (iii) is not encoded by a genome of a non-cancer cell of the subject. Claim 74: wherein each sequence of the one or more of the training peptides sequences observed by mass spectrometry to be presented by the protein encoded by the HLA class II in training cells has a length of at least 15 amino acids. Claim 2: wherein each sequence of the sequences of training peptides observed by mass spectrometry to be presented by the protein encoded by the HLA class II in training cells has a length of at least 15 amino acids. 
Claim 75: wherein the training cells comprise training cells expressing a single MHC class II complex or a protein encoded by a single allelic variant of a class II HLA locus selected from the group consisting of DR, DP, and DQ, wherein the single MHC class II complex or a protein encoded by the single allelic variant of a class II HLA locus is expressed by a cell of the subject. Claim 4: wherein: (i) the training cells comprise training cells expressing a single MHC class II complex or a protein encoded by a single allelic variant of a class II HLA locus selected from the group consisting of DR, DP, and DQ, wherein the single MHC class II complex or a protein encoded by the single allelic variant of a class II HLA locus is expressed by a cell of the subject. Claim 76: wherein the training data comprises training data obtained by deconvolution. Claim 4: the training data comprises training data obtained by deconvolution. Claim 77: wherein the training cells express a protein encoded by a class II HLA allele of a cell of the human subject, wherein the protein encoded by a class II HLA allele comprises an affinity tag. Claim 5: wherein the training cells express a protein encoded by a class II HLA allele of a cell of the subject, wherein the protein encoded by a class II HLA allele comprises an affinity tag. 
Claim 78: wherein the protein encoded by a class II HLA allele is selected from the group consisting of: HLA-DPB1*01:01/HLA-DPA1*01:03, HLA-DPB1*02:01/HLA-DPA1*01:03, HLA-DPB1*03:01/HLA-DPA1*01:03, HLA-DPB1*04:01/HLA-DPA1*01:03, HLA-DPB1*04:02/HLA-DPA1*01:03, HLA-DPB1*06:01/HLA-DPA1*01:03, HLA-DRB1*01:01, HLA-DRB1*01:02, HLA-DRB1*03:01, HLA-DRB1*03:02, HLA-DRB1*04:01, HLA-DRB1*04:02, HLA-DRB1*04:03, HLA-DRB1*04:04, HLA-DRB1*04:05, HLA-DRB1*04:07, HLA-DRB1*07:01, HLA-DRB1*08:01, HLA-DRB1*08:02, HLA-DRB1*08:03, HLA-DRB1*08:04, HLA-DRB1*09:01, HLA-DRB1*10:01, HLA-DRB1*11:01, HLA-DRB1*11:02, HLA-DRB1*11:04, HLA-DRB1*12:01, HLA-DRB1*12:02, HLA-DRB1*13:01, HLA-DRB1*13:02, HLA-DRB1*13:03, HLA-DRB1*14:01, HLA-DRB1*15:01, HLA-DRB1*15:02, HLA-DRB1*15:03, HLA-DRB1*16:01, HLA-DRB3*01:01, HLA-DRB3*02:02, HLA-DRB3*03:01, HLA-DRB4*01:01, HLA-DRB5*01:01, HLA-DRB1*01:01, HLA-DRB1*01:02, HLA-DRB1*03:01, HLA-DRB1*04:01, HLA-DRB1*04:02, HLA-DRB1*04:04, HLA-DRB1*04:05, HLA-DRB1*07:01, HLA-DRB1*08:01, HLA-DRB1*08:02, HLA-DRB1*08:03, HLA-DRB1*09:01, HLA-DRB1*11:01, HLA-DRB1*11:02, HLA-DRB1*11:04, HLA-DRB1*12:01, HLA-DRB1*13:01, HLA-DRB1*13:02, HLA-DRB1*13:03, HLA-DRB1*14:01, HLA-DRB1*15:01, HLA-DRB1*15:02, HLA-DRB1*15:03, HLA-DRB1*16:02, HLA-DRB3*01:01, HLA-DRB3*02:01, HLA-DRB3*02:02, HLA-DRB3*03:01, HLA-DRB4*01:01, HLA-DRB4*01:03, HLA-DRB5*01:01; HLA-DPB1*01:01, HLA-DPB1*02:01, HLA-DPB1*02:02, HLA-DPB1*03:01, HLA-DPB1*04:01, HLA-DPB1*04:02, HLA-DPB1*05:01, HLA-DPB1*06:01, HLA-DPB1*11:01, HLA-DPB1*13:01, HLA-DPB1*17:01, HLA-DQA1*01:01/HLA-DQB1*05:01, HLA-DQA1*01:02/HLA-DQB1*06:02, HLA-DQA1*01:02/HLA-DQB1*06:04, HLA-DQA1*01:03/HLA-DQB1*06:03, HLA-DQA1*02:01/HLA-DQB1*02:02, HLA-DQA1*02:01/HLA-DQB1*03:03, HLA-DQA1*03:01/HLA-DQB1*03:02, HLA-DQA1*03:03/HLA-DQB1*03:01, HLA-DQA1*05:01/HLA-DQB1*02:01, HLA-DQA1*05:05/HLA-DQB1*03:01, and any combination thereof. 
Claim 11: wherein the protein encoded by a class II HLA allele is selected from the group consisting of: HLA-DPB1*01:01/HLA-DPA1*01:03, HLA-DPB1*02:01/HLA-DPA1*01:03, HLA-DPB1*03:01/HLA-DPA1*01:03, HLA-DPB1*04:01/HLA-DPA1*01:03, HLA-DPB1*04:02/HLA-DPA1*01:03, HLA-DPB1*06:01/HLA-DPA1*01:03, HLA-DRB1*01:01, HLA-DRB1*01:02, HLA-DRB1*03:01, HLA-DRB1*03:02, HLA-DRB1*04:01, HLA-DRB1*04:02, HLA-DRB1*04:03, HLA-DRB1*04:04, HLA-DRB1*04:05, HLA-DRB1*04:07, HLA-DRB1*07:01, HLA-DRB1*08:01, HLA-DRB1*08:02, HLA-DRB1*08:03, HLA-DRB1*08:04, HLA-DRB1*09:01, HLA-DRB1*10:01, HLA-DRB1*11:01, HLA-DRB1*11:02, HLA-DRB1*11:04, HLA-DRB1*12:01, HLA-DRB1*12:02, HLA-DRB1*13:01, HLA-DRB1*13:02, HLA-DRB1*13:03, HLA-DRB1*14:01, HLA-DRB1*15:01, HLA-DRB1*15:02, HLA-DRB1*15:03, HLA-DRB1*16:01, HLA-DRB3*01:01, HLA-DRB3*02:02, HLA-DRB3*03:01, HLA-DRB4*01:01, HLA-DRB5*01:01, HLA-DRB1*01:01, HLA-DRB1*01:02, HLA-DRB1*03:01, HLA-DRB1*04:01, HLA-DRB1*04:02, HLA-DRB1*04:04, HLA-DRB1*04:05, HLA-DRB1*07:01, HLA-DRB1*08:01, HLA-DRB1*08:02, HLA-DRB1*08:03, HLA-DRB1*09:01, HLA-DRB1*11:01, HLA-DRB1*11:02, HLA-DRB1*11:04, HLA-DRB1*12:01, HLA-DRB1*13:01, HLA-DRB1*13:02, HLA-DRB1*13:03, HLA-DRB1*14:01, HLA-DRB1*15:01, HLA-DRB1*15:02, HLA-DRB1*15:03, HLA-DRB1*16:02, HLA-DRB3*01:01, HLA-DRB3*02:01, HLA-DRB3*02:02, HLA-DRB3*03:01, HLA-DRB4*01:01, HLA-DRB4*01:03, HLA-DRB5*01:01; HLA-DPB1*01:01, HLA-DPB1*02:01, HLA-DPB1*02:02, HLA-DPB1*03:01, HLA-DPB1*04:01, HLA-DPB1*04:02, HLA-DPB1*05:01, HLA-DPB1*06:01, HLA-DPB1*11:01, HLA-DPB1*13:01, HLA-DPB1*17:01, HLA-DQA1*01:01/HLA-DQB1*05:01, HLA-DQA1*01:02/HLA-DQB1*06:02, HLA-DQA1*01:02/HLA-DQB1*06:04, HLA-DQA1*01:03/HLA-DQB1*06:03, HLA-DQA1*02:01/HLA-DQB1*02:02, HLA-DQA1*02:01/HLA-DQB1*03:03, HLA-DQA1*03:01/HLA-DQB1*03:02, HLA-DQA1*03:03/HLA-DQB1*03:01, HLA-DQA1*05:01/HLA-DQB1*02:01, HLA-DQA1*05:05/HLA-DQB1*03:01, and any combination thereof.
Claim 79: wherein each peptide sequence of the subset of the output peptide sequences binds to a protein encoded by a class II HLA allele of a cell of the human subject with an IC50 of 500 nM or less, or a predicted IC50 of 500 nM or less. Claim 16: wherein each of the peptide sequences of the set of selected peptide sequences binds to a protein encoded by a class II HLA allele of a cell of the subject with an IC50 of 500 nM or less, or a predicted IC50 of 500 nM or less. Claim 80: wherein each peptide sequence of the subset of the output peptide sequences is for preparing a therapeutic composition for the human subject that comprises one or more polypeptides comprising at least two peptide sequences of the subset of the output peptide sequences or one or more polynucleotides encoding at least two peptide sequences of the subset of the output peptide sequences. Claims 1 and 17: wherein the pharmaceutical composition comprises one or more polypeptides comprising at least two of the selected peptide sequences or one or more polynucleotides encoding at least two of the selected peptide sequences. (i) a polypeptide with one or more of the selected subset of peptide sequences, (ii) a polynucleotide encoding the polypeptide of (i), Claim 81: wherein each candidate peptide sequence of the plurality of candidate peptide sequences is encoded by a genome, transcriptome, or exome of a human subject with cancer. Claims 1 and 9: wherein each candidate peptide sequence of the set of candidate peptide sequences is encoded by a genome, transcriptome, or exome of a subject. wherein each peptide sequence of the set of candidate peptide sequences is associated with a cancer. Reference Patent Application No. 
11183272 does not teach the limitations of claim 1 regarding an input module and a processing module operably linked to the input module, limitations of claim 63 regarding the input module being configured to receive an identity of one or more proteins encoded by a class II HLA allele of a cell of the human subject, limitations of claim 64 regarding the processing module being linked to an output module configured to display the plurality of presentation predictions, limitations of claim 70 regarding the trained machine learning class II HLA-peptide presentation prediction model having a PPV of at least 0.3 according to a presentation PPV determination method, and limitations of claim 71 regarding the trained machine learning class II HLA-peptide presentation prediction model having a PPV of at least 0.3 at a recall rate of 20% according to a presentation PPV determination method. Boucher discloses that the computer is adapted to execute computer program modules for providing functionality and that the term “module” refers to computer program logic used to provide the specified functionality. Boucher further discloses that a module can be implemented in hardware, firmware, and/or software, and that modules are stored on the storage device, loaded into the memory, and executed by the processor. Boucher further discloses an input interface linked to the processor [0547-0548]. Boucher discloses that the nucleotide sequencing data is used to obtain data representing peptide sequences of each of a set of neoantigens identified by comparing the nucleotide sequencing data from the tumor cells and the nucleotide sequencing data from the normal cells, and wherein the peptide sequence of each neoantigen comprises at least one alteration that makes it distinct from the corresponding wild-type peptide sequence identified from the normal cells of the subject (claim 1). 
Boucher further discloses that samples comprise human cell lines from patients (claim 12), reading on the limitation that the input module is configured to receive an identity of one or more proteins encoded by a class II HLA allele of a cell of the human subject. Boucher discloses that the computer system includes a processor and a display coupled to the graphics adapter to display images and other information (for example, presentation information) [0546-0547] and that the presentation identification system includes presentation information to be displayed (FIGS. 1D, 2A, and 14) [0321], reading on the limitation that the processing module is linked to an output module configured to display the plurality of presentation predictions. Boucher discloses that in FIG. 13C, for each set of model features of the example presentation models, a PPV value at 10% recall that was identified when the features in the set of model features were classified as allele interacting features is shown on the left side, and a PPV value at 10% recall that was identified when the features in the set of model features were classified as allele non-interacting features is shown on the right side. Boucher further notes that the feature of peptide sequence was always classified as an allele interacting feature for the purposes of FIG. 13C. Boucher further discloses achieving a PPV value at 10% recall varying from 14% up to 29%, which is significantly (approximately 500-fold) higher than the PPV of a random prediction [0495]. Boucher further discloses that their models are capable of achieving significantly more accurate presentation predictions than the current best-in-class prior art model, the NetMHCII 2.3 model [0541]. 
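For context on the PPV figures cited above: positive predictive value at a fixed recall is typically computed by ranking peptides by model score and measuring the fraction of true positives among the top predictions once the desired share of all positives has been recovered. The sketch below is purely illustrative (the scores and labels are hypothetical, not data from Boucher or the instant application):

```python
def ppv_at_recall(scores, labels, recall=0.10):
    """PPV measured at the score threshold where `recall` fraction
    of all true positives has been recovered (descending-score scan)."""
    ranked = sorted(zip(scores, labels), key=lambda pair: -pair[0])
    total_pos = sum(labels)
    needed = max(1, round(recall * total_pos))  # true positives to recover
    tp = fp = 0
    for _, label in ranked:
        if label:
            tp += 1
        else:
            fp += 1
        if tp >= needed:
            break
    return tp / (tp + fp)

# Hypothetical example: 4 of 10 candidate peptides are presented ligands.
scores = [0.95, 0.90, 0.80, 0.70, 0.60, 0.50, 0.40, 0.30, 0.20, 0.10]
labels = [1,    0,    1,    0,    0,    1,    0,    0,    1,    0]
print(ppv_at_recall(scores, labels, recall=0.5))  # → 0.666... (2 TP, 1 FP)
```

On this toy data, recovering half of the four positives requires scanning three predictions, two of which are correct, giving a PPV of about 0.67; the claimed threshold (PPV of at least 0.3 at 20% recall) is evaluated the same way at a different operating point.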
Barra discloses that high-accuracy prediction models for peptide-MHC II interaction can be constructed from MS-derived MHC II eluted ligand data, that the accuracy of these models can be improved by training models integrating information from both binding affinity and eluted ligand data sets, and that these improved models can be used to identify both eluted ligands and T cell epitopes in independent data sets at an unprecedented level of accuracy (pg. 12, col. 2, para. 1). One would have had a reasonable expectation of success in combining these methods because they are drawn to the related field of prediction models for peptide-MHC II interaction, and one of ordinary skill in the art would have had design incentives and other market forces prompting these variations, which could have led one of ordinary skill in the art to vary the prior art in a predictable manner to arrive at the claimed invention. Conclusion No claims are allowed. Any inquiry concerning this communication or earlier communications from the examiner should be directed to GHAZAL SABOUR whose telephone number is (703)756-1289. The examiner can normally be reached M-F 7:30-5:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Larry D. Riggs, can be reached at (571) 270-3062. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. 
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /G.S./Examiner, Art Unit 1686 /LARRY D RIGGS II/Supervisory Patent Examiner, Art Unit 1686

Prosecution Timeline

Oct 06, 2021
Application Filed
Jan 09, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12553871
CONVERSION OF LONG CELL DATA TO SHORT CELL EQUIVALENT
Granted Feb 17, 2026 • 2y 5m to grant
Patent 12527389
SELECTION OF A CHEMICAL COMPOUND APPLICABLE ON A CLASS OF HUMAN HAIRS
Granted Jan 20, 2026 • 2y 5m to grant
Patent 12518854
NON-INVASIVE DETECTION OF TISSUE ABNORMALITY USING METHYLATION
Granted Jan 06, 2026 • 2y 5m to grant
Patent 12486542
DETECTING MUTATIONS AND PLOIDY IN CHROMOSOMAL SEGMENTS
Granted Dec 02, 2025 • 2y 5m to grant
Patent 12451216
RECURSIVE TRANSFORMERS FOR AI-BASED PROTEIN-PROTEIN INTERACTION AND DRUG DESIGN
Granted Oct 21, 2025 • 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

1-2
Expected OA Rounds
29%
Grant Probability
61%
With Interview (+32.3%)
3y 5m
Median Time to Grant
Low
PTA Risk
Based on 31 resolved cases by this examiner. Grant probability derived from career allow rate.
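The "With Interview" figure appears to follow from adding the observed interview lift to the baseline grant probability; a minimal sketch of that arithmetic, assuming the lift is applied additively (an assumption based on the numbers shown on this page, not a documented formula):

```python
# Assumed additive model: baseline career allow rate plus interview lift.
base_grant = 0.29       # examiner's career allow rate (baseline grant probability)
interview_lift = 0.323  # observed lift in resolved cases with an interview

with_interview = base_grant + interview_lift
print(f"{with_interview:.0%}")  # → 61%
```

This reproduces the displayed 61% from 29% + 32.3%; the actual projection model behind these figures is not disclosed on the page.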
