DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 18 November 2025 has been entered.
Status of the Claims
The amended claim set received 18 November 2025 has been entered into the application.
Claims 1, 12, 17-18, and 21-22 have been amended.
Claims 2-6 and 20 were previously cancelled.
Claim 13 is objected to.
Claim(s) 1, 7-19, and 21-22 are pending.
Priority
This application claims benefit of priority to GB1607521 filed on 29 April 2016.
Information Disclosure Statement
The listing of references in the specification is not a proper information disclosure statement. 37 CFR 1.98(b) requires a list of all patents, publications, or other information submitted for consideration by the Office, and MPEP § 609.04(a) states, "the list may not be incorporated into the specification but must be submitted in a separate paper." Therefore, unless the references have been cited by the examiner on form PTO-892, they have not been considered.
Specification
The objection to the specification in the Office Action mailed 18 June 2025, on the ground that the machine learning algorithm of claims 1 and 21 removes the influence of HLA/MHC-binding and can be applied to any peptide regardless of its MHC restriction, is withdrawn in view of the amendment received 18 November 2025.
The objection to the specification in the Office Action mailed 18 June 2025, on the ground that the statistical inference model of claim 22 removes the influence of HLA/MHC-binding and can be applied to any peptide regardless of its MHC restriction, is withdrawn in view of the amendment received 18 November 2025.
Claim Objections
Claim 13 is objected to because of the following informalities: “wherein the binding affinities have been obtained via an MHC binding protienion algorithm, experimental measurement or combinations thereof.” The claim should be amended to recite “wherein the binding affinities have been obtained via an MHC protein binding algorithm, an experimental measurement, or a combination thereof.” Appropriate correction is required to address the grammatical correctness of the claim.
Claim Rejections - 35 USC § 112
35 USC § 112(a)
New Matter
The rejection of claims 1 and 21 under 35 U.S.C. § 112(a) in the Office Action mailed 18 June 2025 is withdrawn in view of the amendment received 18 November 2025.
The rejection of claim 22 under 35 U.S.C. § 112(a) in the Office Action mailed 18 June 2025 is withdrawn in view of the amendment received 18 November 2025.
Written Description
The rejection of claims 1 and 21 under 35 U.S.C. § 112(a) in the Office Action mailed 18 June 2025 is withdrawn in view of the amendment received 18 November 2025.
The rejection of claim 22 under 35 U.S.C. § 112(a) in the Office Action mailed 18 June 2025 is withdrawn in view of the amendment received 18 November 2025.
35 USC § 112(b)
The instant rejection is maintained for the reasons of record in the Office Action mailed 18 June 2025, as modified in view of the amendments filed 18 November 2025.
The rejection of claims 1 and 21 under 35 U.S.C. § 112(b) in the Office Action mailed 18 June 2025 is withdrawn in view of the amendment received 18 November 2025.
The rejection of claim 22 under 35 U.S.C. § 112(b) in the Office Action mailed 18 June 2025 is withdrawn in view of the amendment received 18 November 2025.
The rejection of claims 7-19 under 35 U.S.C. § 112(b) in the Office Action mailed 18 June 2025 is withdrawn in view of the amendment received 18 November 2025.
The rejection of claim 12 under 35 U.S.C. § 112(b) in the Office Action mailed 18 June 2025 is withdrawn in view of the amendment received 18 November 2025.
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 7-8 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 7 recites an HLA/MHC/peptide complex encoded by different HLA/MHC alleles. Claim 8 recites a similar limitation. It is unclear how an HLA/MHC allele can encode the entire complex, including the peptide, when the peptides are from the pathogen.
Response to Arguments
Applicant’s arguments, filed 18 November 2025, have been fully considered and the rejection is maintained.
The Applicant states “The rejection of claims 7 and 8 for indefiniteness, based on the question of how an HLA/MHC allele can encode the entire complex including the peptide, is respectfully traversed. The current language of claim 7 (and similarly claim 8) already incorporates the necessary clarification by specifying that the positive data set comprises entries from "surface bound or secreted complexes comprising peptides bound to HLA/MHC molecules encoded by a plurality of different HLA/MHC alleles". This structure explicitly limits the encoding relationship to the HLA/MHC molecules ("HLA/MHC molecules encoded by a plurality of different HLA/MHC alleles"). Thus, the claimed alleles encode the molecules. The claim does not require that the alleles encode the entire complex or the peptides themselves.” [remarks, page 10].
In response, and as described above, the claims were not amended to clarify how an HLA/MHC allele can encode the entire complex, including the peptide, when the peptides are from the pathogen or other exogenous sources. The claims were merely amended to state that the secreted complexes comprise peptides bound to HLA/MHC molecules. Although the Applicant's traversal is acknowledged, the rejection is maintained because the claims still recite peptide sequences identified or inferred from surface bound or secreted complexes comprising peptides bound to HLA/MHC molecules encoded by a plurality of different HLA/MHC alleles, even where the identified or inferred peptides derive from exogenous sources. For example, it is known in the art that proteins from a pathogen (i.e., the identified/inferred peptide sequences) are digested into small pieces (peptides) and loaded onto HLA antigens (specifically, MHC class II). They are then displayed by antigen-presenting cells to CD4+ helper T cells. Thus, it remains unclear how endogenous HLA/MHC alleles can encode genes/peptides from exogenous sources such as pathogens.
Claim Rejections - 35 USC § 101
The instant rejection is maintained for the reasons of record in the Office Action mailed 18 June 2025, as modified in view of the amendments filed 18 November 2025.
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 7-19, and 21-22 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.
Following the flowchart of the MPEP 2106
Step I: Process, Machine, Manufacture or Composition
Claims 1 and 7-19 are directed towards a method, so a process.
Claim 21 is directed towards an apparatus, so a machine.
Claim 22 is directed towards a method, so a process.
Step 2A Prong I: Identification of an Abstract Idea
Claim 1 is directed towards a method of training a machine learning model. Claim 21 is drawn to an apparatus that encompasses similar limitations as claim 1. Claim 22 is directed towards a method of training a statistical inference model. Because claims 1 and 21-22 encompass similar limitations, the rejection of claim 1 is applicable to claims 21-22.
Claims 1 and 21-22 recite:
Building a positive data set comprising entries of peptide sequences identified or inferred from surface bound or secreted HLA/MHC/peptide complexes encoded by one or a plurality of different HLA/MHC alleles.
This step can be performed in the human mind, or with pen and paper, by organizing information, and is therefore an abstract idea. This step encompasses generating a positive data set by taking existing information (i.e., peptide sequence data), manipulating the data using mathematical functions, and organizing this information into a new form (i.e., the positive data set), which encompasses mathematical concepts and reads on abstract ideas. See MPEP 2106.04(a)(I)(A)(iv).
Building a negative data set, wherein the negative data set comprises entries of peptide sequences which are not identified or inferred from surface bound or secreted HLA/MHC/peptide complexes.
This step can be performed in the human mind, or with pen and paper, by organizing information, and is therefore an abstract idea. This step encompasses generating a negative data set by taking existing information (i.e., peptide sequence data), manipulating the data using mathematical functions, and organizing this information into a new form (i.e., the negative data set), which encompasses mathematical concepts and reads on abstract ideas. See MPEP 2106.04(a)(I)(A)(iv).
Identifying a multiplicity of pairings, with each pairing establishing a pair between one entry in the positive data set and one entry in the negative data set, wherein the paired entries are (i) of equal length, (ii) derived from the same source protein or fragment thereof, and (iii) of similar binding affinities with respect to an HLA/MHC molecule to which the peptide of the positive counterpart data set is restricted.
This step can be performed in the human mind by observing and evaluating entries against criteria (i)-(iii) to identify a multiplicity of pairings between entries of the positive and negative data sets, and is therefore an abstract idea.
Training the machine learning algorithm/statistical inference model on the multiplicity of pairings.
This step can be performed in the human mind by following instructions to train a machine learning algorithm/statistical inference model on the multiplicity of pairings, and is therefore an abstract idea. Here, the training of the machine learning algorithm (claims 1 and 21) and of the statistical inference model (claim 22) is generically recited and therefore reads on mathematical/statistical concepts.
Wherein the amino acids at key HLA/MHC-binding anchor positions within the peptide sequences of the positive and negative data sets are removed as features for the machine learning algorithm.
This step can be performed in the human mind by observing anchor positions within the positive and negative data sets to remove features from the machine learning algorithm/statistical inference model, and is therefore an abstract idea.
Following the training of the machine learning algorithm/statistical inference model on the multiplicity of pairings, using the trained machine learning algorithm/statistical inference model to interrogate input data comprising amino acid sequences of peptides and/or proteins, to identify peptides, or peptide fragments of said proteins, having features positively associated with natural endogenous or exogenous cellular processing, transportation and HLA/MHC presentation.
This step can be performed in the human mind by following instructions to interrogate input data to identify peptides, or peptide fragments of said proteins, having features positively associated with natural endogenous or exogenous cellular processing, transportation and HLA/MHC presentation, using either the machine learning algorithm (claims 1 and 21) or the statistical inference model (claim 22) after training on the multiplicity of pairings, and is therefore an abstract idea. This step encompasses taking existing information (i.e., amino acid data and the positive and negative data sets of peptide sequence pairings and entries), manipulating the data via mathematical/statistical correlations (i.e., the machine learning model/statistical inference model), and organizing the data into a new form (i.e., identified peptides, or peptide fragments of said proteins, having features positively associated with natural endogenous or exogenous cellular processing, transportation and HLA/MHC presentation), which reads on abstract ideas/mathematical concepts. See MPEP 2106.04(a)(2)(I)(A)(iv).
Claims 7-19 are further drawn to limitations that describe the abstract ideas performed in claim 1 and are therefore also abstract ideas.
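Purely for illustration, the data-set pairing and anchor-removal steps analyzed above can be sketched in Python. The Entry structure, the greedy matching strategy, the affinity tolerance, and the anchor positions are all hypothetical choices made for this sketch and form no part of the claims or the record:

```python
# Hypothetical sketch of the claimed pairing and anchor-removal steps.
# Data structures, tolerance, and anchor positions are illustrative only.

from dataclasses import dataclass

@dataclass
class Entry:
    peptide: str        # amino acid sequence
    source: str         # source protein identifier
    affinity: float     # binding affinity (e.g., predicted IC50 in nM)

def build_pairings(positives, negatives, affinity_tolerance=0.5):
    """Greedily match each positive entry to an unused negative entry that is
    (i) of equal length, (ii) from the same source protein, and
    (iii) of similar binding affinity (within a relative tolerance)."""
    pairings, used = [], set()
    for pos in positives:
        for i, neg in enumerate(negatives):
            if i in used:
                continue
            if (len(neg.peptide) == len(pos.peptide)
                    and neg.source == pos.source
                    and abs(neg.affinity - pos.affinity)
                        <= affinity_tolerance * pos.affinity):
                pairings.append((pos, neg))
                used.add(i)
                break
    return pairings

def mask_anchor_positions(peptide, anchor_positions=(1, 8)):
    """Drop residues at (hypothetical) anchor positions so they cannot be
    used as features, mirroring the claimed feature-removal step."""
    return "".join(aa for idx, aa in enumerate(peptide)
                   if idx not in anchor_positions)
```

For example, a positive 9-mer would pair only with a 9-mer negative entry from the same source protein whose affinity falls within the chosen tolerance, and both sequences would then have their anchor-position residues stripped before training.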
Step 2A Prong Two: Consideration of Practical Application
Claims 1 and 21-22 do not recite any additional elements which integrate the recited judicial exception into a practical application. Here, the claims merely set forth a method of data analysis to identify peptides positively associated with natural endogenous or exogenous cellular processing, transportation and major histocompatibility complex (MHC) presentation. As such, practicing the claims merely results in an abstract determination (data analysis) of that identification. Such results only produce new information (i.e., associated peptides and/or peptide fragments) and do not provide for a practical application in the real-world realm of physical things; that is, the claims do not utilize the information generated by the judicial exception to effect a change in the real world or to improve the functioning of a computer. Therefore, the claims do not utilize the built data sets, the identified multiplicity of pairings, or the trained machine learning algorithm/statistical inference model to construct a practical application such as treating a subject, a transformation of matter, or an improvement to an existing technology.
Claims 1 and 21-22 recite “creating a trained algorithm to identify peptides that contain features positively associated with natural endogenous or exogenous cellular processing, transportation and major histocompatibility complex (MHC) presentation”. This limitation does not integrate the recited judicial exception into a practical application because “creating a trained algorithm” is broadly and generically recited and attempts to cover any solution for creating a trained machine learning algorithm to identify peptides, with no restriction on how the creation of the machine learning algorithm is accomplished. Such a recitation does not integrate a judicial exception into a practical application or provide significantly more because it is equivalent to the words "apply it". See MPEP 2106.05(f).
The recited additional element of using computer processes and components in claims 1 and 21-22 does not integrate the recited judicial exception because the use of computer elements is tangential to the claimed method.
This judicial exception is not integrated into a practical application because the claims do not meet any of the following criteria:
An additional element reflects an improvement in the functioning of a computer, or an improvement to other technology or technical field;
an additional element that applies or uses a judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition;
an additional element implements a judicial exception with, or uses a judicial exception in conjunction with, a particular machine or manufacture that is integral to the claim;
an additional element effects a transformation or reduction of a particular article to a different state or thing; and
an additional element applies or uses the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception.
Step 2B: Consideration of Additional Elements and Significantly More
The claimed methods also recite “additional elements” that are not drawn to an abstract idea.
The recited additional element of using computer processes in claims 1 and 21-22 does not add significantly more than the recited judicial exception because using computer elements to analyze amino acid sequence data is merely tangential to the claimed methods. See MPEP 2106.05(a) and 2106.05(d)(II).
In conclusion and when viewed as a whole, these additional claim element(s) do not provide meaningful limitation(s) to transform the abstract idea recited in the instantly presented claims into a patent eligible application of the abstract idea such that the claim(s) amounts to significantly more than the abstract idea itself. Therefore, the claim(s) are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter.
Response to Arguments
Applicant’s arguments, filed 18 November 2025, have been fully considered and the rejection is maintained.
The Applicant points to the updated guidance of the Appeals Review Panel (ARP) decision in Ex Parte Desjardins [remarks, pages 10-11]. The Applicant states that the Desjardins decision has a strong parallel to the instant claims [remarks, page 11]. The Applicant states “2. Utilizing Two Data Sets for Training Claim 1 of the present application requires the creation of two different sets of data for training the machine learning algorithm. In particular, claim 1 includes the step of "building a positive data set" and "building a negative data set." Both training sets are necessary, but they each reflect a different kind of data. In claim 1, these two data sets are then analyzed to identify a multiplicity of pairings, with each pairing establishing a pair between one entry in the positive data set and one entry in the negative data set. Training of the machine learning algorithm then takes place based on these pairs. In Ex Parte Desjardin, the invention also requires the creation of two different sets of data for training the machine learning model. In this invention, a first training set is used to train the machine learning model on a first machine learning task. A second training set is then obtained for training the machine learning model on a second, different machine learning task. As is the case in the present application, both of the training sets in Ex Parte Desjardin are necessary for the training but contain a different kind of data. The second training data in this case is used to train the machine learning model by training the model in a manner to adjust the values of parameters to optimize the performance of the second task. Thus, while there is a clear difference in the actual invention, both the present application and the invention in Ex Parte Desjardin alter the training of the machine learning algorithm in a similar manner. 
Both require the creation of two training sets, where both are necessary for the training method, but each contain a different kind of data. The differences in the two training sets are then used via a particular method clearly set forth in the claims to improve the training of the machine learning algorithm itself.” [remarks, pages 13-14]. The Applicant states “3. Identification of Technical Problems in the Prior Art Within the Specification. In both Ex Parte Desjardin and in the present application, the specification clearly identifies technical problems in the prior art. The ARP decision emphasized that the Specification in that case specifically identified "technical improvements over conventional systems by addressing challenges in continual learning and model efficiency by reducing storage requirements and preserving task performance across sequential training." Ex Parte Desjardin, p. 7. Similarly, as pointed about in the Applicant's Response of October 9, 2024, the Specification in the present application also points out specific technical challenges encountered in prior art techniques for machine learning.” [remarks, page 13]. The Applicant points to the “Traditional techniques” paragraph of the Applicant’s arguments/remarks filed October 9th, 2024, pp. 14-15, for guidance [remarks filed 18 November 2025, pages 13-14].
In response, and with respect to the Applicant’s remarks regarding the “Traditional techniques” paragraph on pages 14-15 of the Applicant’s arguments/remarks filed October 9th, 2024, it is noted that the instant specification does not contain the paragraph cited on pages 13-14 of the remarks filed 18 November 2025 and on pages 14-15 of the Applicant's response filed October 9th, 2024. Therefore, the argument is not persuasive, because the instant specification does not contain the paragraph disclosing a technical solution to a technical problem on which the Applicant’s argument is based and to which it refers. Additionally, with respect to the claims of Desjardins, and in light of its specification, those claims are drawn to training a machine learning model that “allows the model to preserve performance on earlier tasks even as it learns new ones, directly addressing the technical problem of 'catastrophic forgetting' in continual learning systems." Furthermore, one improvement identified in the Specification of Desjardins is to "effectively learn new tasks in succession whilst protecting knowledge about previous tasks." Spec. 21. The Specification of Desjardins also recites that the claimed improvement allows artificial intelligence (AI) systems to "us[e] less of their storage capacity" and enables "reduced system complexity." Moreover, the ARP was persuaded because the improvement was to how the machine learning model itself operates, and not, for example, to the identified mathematical calculation. 
Here, in the instant case, the claims are drawn to analyzing positive and negative data sets to identify pairings (i.e., pairings of similar length, pairings derived from the same source, pairings that have similar binding affinities) of entries between the positive and negative data sets, training a machine learning algorithm, creating a trained algorithm, removing (e.g., filtering out) anchor positions in the positive data set, and using the trained machine learning algorithm to interrogate input data comprising amino acid sequences of peptides and/or proteins to identify peptides, or peptide fragments of said proteins, having features positively associated with natural endogenous or exogenous cellular processing, transportation and HLA/MHC presentation, which is merely using a machine learning algorithm to identify peptides and peptide fragments. As such, the machine learning limitations are utilized as the means of identifying peptides and peptide fragments, whereas in Desjardins the improvement was to how the machine learning model itself operates. Here, the instant claims teach a method for training a machine-learning algorithm or statistical inference model that controls for the influence of protein abundance, stability and HLA/MHC binding, enabling the algorithm or model to learn features that are synonymous with efficient processing and presentation rather than HLA/MHC binding; by contrast, Desjardins addressed the technical problem of 'catastrophic forgetting' in continual learning systems. Furthermore, the machine-learning algorithm and statistical inference model of instant claims 1 and 21-22 are generically recited with no accompanying recited structure. Moreover, the machine-learning algorithm and statistical inference model are used to generally apply the abstract idea without limiting how the trained machine-learning algorithm and statistical inference model function. 
The machine-learning algorithm and statistical inference model are described at such a high level that their recitation amounts to using a computer with a generic machine-learning algorithm and/or statistical inference model to apply the abstract idea. Thus, the recitation of “the machine-learning algorithm and statistical inference model” is equivalent to the words "apply it". See MPEP 2106.05(f). Therefore, although the claims have similarities, the instant claims as a whole are not analogous to the claims of Desjardins, and the claims do not provide an improvement.
The Applicant states “4. Identification of Claim Elements That Overcome the Identified Technical Problems. The ARP further noted that, when the claim was analyzed as a whole, at least one limitation reflected the specific improvement that overcame this technical problem. The applicant in the present case has also pointed out the claim limitations that implement the solution to this technical problem are set forth in the pending claims. Returning again to the Applicant's Response of October 9, 2024.” [remarks, page 14].
In response, although the claims are drawn to a particular way to achieve a desired outcome, they do not provide an improvement because the claims are drawn to analyzing amino acid sequence data (i.e., peptides, peptide fragments) to provide sequence information and/or detect variation between sequences, and to using a machine learning algorithm to identify the variation between peptides and peptide fragments of the positive and negative data sets having features positively associated with natural endogenous or exogenous cellular processing, transportation and HLA/MHC presentation. See MPEP 2106.05(d)(II)(v).
The Applicant states “Analysis of the Present Application Under the ARP Decision. When the PTAB rejected Desjardins' invention under Section 101, the PTAB characterized the invention as being directed to the abstract idea of a mathematical algorithm computing mathematical calculations and manipulating particular information. The final office action in the present application also characterizes the current invention as being directed to the abstract idea of mathematical calculations and the manipulation of information. In both the present application and in Ex Parte Desjardins. the applicant argued that the claimed invention integrated any exception into a practical application under Step 2A, Prong Two. Specifically, both the present applicant and the applicant in Ex Parte Desjardin argued that the practical application comprising an improvement to the method of training of machine learning algorithms by altering the traditional process of training the algorithm with a process that requires two different sets of training data. In both cases, the applicant pointed out how the altered training process overcame a specific technological problem found in the prior art. Before the ARP intervened in Ex Parte Desjardins. the PTAB rejected the applicant's argument, stating that an altered training process does not solve an existing technological process, but rather simply limits the invention to a particular field of use. In the present application the final office action rejected the applicant's argument, stating that there was not sufficient specificity in the recitation of a particular machine learning algorithm, including the particular structures of that machine learning algorithm.” [remarks, page 15]. The Applicant states “The ARP reversed the decision of the PTAB and held that the invention in Ex Parte Desjardins is patent eligible. The similarities between the present invention and that in Ex Parte Desjardins. 
and the logic the ARP decision, mean that the presently claimed invention is also patent eligible and the conclusion of the final office action must be reversed. The ARP decision reminds us that Enfish ranks among the Federal Circuit's leading cases on the eligibility of technological improvements. Under the logic of Enfish, "improvements in training the machine learning model itself" can constitute an improvement to computer technology and, as long as the claims reflect such an improvement, the claim should be patent eligible. Ex Parte Desjardins. p. 8. The ARP emphasized that where the specification identifies technological problems that are improved by the claimed training method and the claims specify elements that reflect that improvement, the claim is patent eligible. The specification in the present application likewise identifies technological problems that are improved by the claimed training method, and the pending claims contain claim elements that reflect that improvement. As such, the claims constitute an improvement to computer technology and are eligible under Section 101.” [remarks, page 15]. The Applicant states “As for the lack of specificity, the claim in Ex Parte Desjardin likewise merely identified a "machine learning model" that, like all machine learning models, has a plurality of parameters. The PTAB decision that rejected this claim explained that the claim merely described "generic computer components." PTAB Decision on Appeal, Appeal 2024-000567, Serial No. 16/319,040, March 4, 2025, p. 24. The ARP nonetheless found the claim patent eligible. This is not surprising, as the invention was not in the particular details of the machine learning model, but rather in an improved method for training that machine learning model. If the specificity of the machine learning model was sufficient in Ex Parte Desjardin, then the specificity of the machine learning algorithm in the present claims must likewise be sufficient.” [remarks, page 16].
In response, although the instant claims and the claims of Desjardins contain some similarities, the claims are drawn to different types of inventions. For example, the claims of Desjardins are drawn to a method for training a machine learning model that addresses the technical problem of 'catastrophic forgetting' in continual learning systems, while the instant claims are drawn to a computer-implemented method for training a machine learning algorithm to identify peptides and peptide fragments. Additionally, the claims of Desjardins are rooted in computer-related machine learning, whereas the instant claims are drawn to a biological solution (i.e., processing HLA/MHC/peptide complexes) that uses machine learning elements for identifying peptides and/or peptide fragments. Furthermore, with respect to Enfish, the claims in that case were directed to a specific improvement to the way computers operate, embodied in the self-referential table, that achieved benefits over conventional databases such as increased flexibility, faster search times, and smaller memory requirements; the instant claims recite no such improvement to computer functioning. Thus, the claims are not patent eligible under Enfish, and the claims are not patent eligible.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claim(s) 1, 7-8, 11, 13-14, 16, and 21-22 are rejected under 35 U.S.C. 103 as being unpatentable over Dhanda et al. (Clin Dev Immunol 2013:2013:263952. doi:10.1155/2013/263952. Epub 2013 Dec 30) in view of Tang et al. (Journal of Immunological Methods, 2015-07, Vol. 422, p. 22-27; Cited in the Office Action mailed 02 January 2025) and further in view of Fridmen et al. (U.S. Patent Pub. US 2006/025944; Patent Pub. Date: 16 November 2006).
Claim 1 recites building a positive dataset comprising entries of peptide sequences identified or inferred from surface bound or secreted HLA/MHC/peptide complexes encoded by one or a plurality of different HLA/MHC allele.
Claim 1 recites building a negative dataset comprising entries of peptide sequences which are not identified or inferred from surface bound or secreted HLA/MHC/peptide complexes.
Claim 1 recites identifying a multiplicity of pairings, with each pairing establishing a pair between one entry in the positive data set and one entry in the negative data set, wherein each pair of said multiplicity of pairings comprises entries for peptide sequences that are:
Claim 1 recites (i) of equal or similar length.
Claim 1 recites (ii) are derived from the same source protein or fragment.
Claim 1 recites (iii) of similar binding affinities, with respect to an HLA/MHC molecule to which the peptide of the positive counterpart data set is restricted.
Claim 1 recites training the machine learning algorithm on the multiplicity of pairings, creating a trained algorithm to identify peptides that contain features positively associated with natural endogenous or exogenous cellular processing, transportation and major histocompatibility complex (MHC) presentation.
Claim 1 recites the amino acids at key HLA/MHC-binding anchor positions within the peptide sequences of the positive and negative data sets are removed as features for the machine learning algorithm.
Claim 1 recites wherein following the training the machine learning algorithm on the multiplicity of pairings, using the trained machine learning algorithm to interrogate input data comprising amino acid sequences of peptides and/or proteins, to identify peptides, or peptide fragments of said proteins, having features positively associated with natural endogenous or exogenous cellular processing, transportation and HLA/MHC presentation.
Claim 21 recites an apparatus. Claim 22 recites a method for training a statistical inference model. Claims 21-22 encompass the same limitations as claim 1. Therefore, the rejection of claim 1 is applicable to claims 21-22.
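For clarity of the record, the pairing step recited in claims 1 and 21-22 (criteria (i)-(iii) above) can be illustrated in pseudocode form. This is a hypothetical sketch only; the function name `build_pairings`, the affinity tolerance, and the toy peptide entries are illustrative assumptions and are not drawn from the claims or the cited art.

```python
# Hypothetical sketch of the pairing step of claim 1: each positive entry is
# matched to a negative entry of equal length (i), from the same source
# protein (ii), and with a similar binding affinity (iii). Toy data only.

def build_pairings(positive, negative, affinity_tol=0.1):
    """Return (positive, negative) sequence pairs satisfying criteria (i)-(iii)."""
    pairings = []
    used = set()
    for pos in positive:
        for j, neg in enumerate(negative):
            if j in used:
                continue
            same_length = len(pos["seq"]) == len(neg["seq"])                      # (i)
            same_source = pos["source"] == neg["source"]                          # (ii)
            similar_aff = abs(pos["affinity"] - neg["affinity"]) <= affinity_tol  # (iii)
            if same_length and same_source and similar_aff:
                pairings.append((pos["seq"], neg["seq"]))
                used.add(j)
                break
    return pairings

positive = [{"seq": "SIINFEKLM", "source": "OVA", "affinity": 0.82}]
negative = [{"seq": "AAAGAEKLM", "source": "OVA", "affinity": 0.80},
            {"seq": "QQQQQQQQQ", "source": "TRP1", "affinity": 0.81}]
print(build_pairings(positive, negative))  # [('SIINFEKLM', 'AAAGAEKLM')]
```

The second negative entry is excluded because it fails criterion (ii) (different source protein), illustrating how the three criteria jointly constrain each pairing.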
Dhanda et al. (Dhanda) teach a data source and data processing of two types of data sets, a positive and a negative dataset. Dhanda teaches the datasets were generated from the publicly available Immune Epitope Database (IEDB). Dhanda teaches extracting experimentally validated MHC class II binding T-helper epitopes. Dhanda teaches peptides having lengths shorter than 8 residues or longer than 22 residues were removed. Dhanda teaches the data set processing resulted in 904 unique IL4-inducing and 742 noninducing MHC class II peptide sequences. Dhanda teaches these datasets are known as positive and negative sets. Dhanda teaches the dataset was created without any restrictions on host and source of epitopes [page 2 left col section 2 methodology 2.1.1]. Dhanda teaches the positive data set contains IL4-inducing peptides as positive input and IL4-noninducing peptides as negative inputs [page 2 left col methods section 2.3], as in claims 1 and 21-22 building a positive data set comprising entries of peptide sequences identified or inferred from surface bound or secreted HLA/MHC/peptide complexes encoded by one or a plurality of different HLA/MHC alleles and building a negative data set comprising entries of peptide sequences which are not identified or inferred from surface bound or secreted HLA/MHC/peptide complexes.
Dhanda teaches using amino acid lengths of 8 to 22 residues [page 2 left col, section 2 methodology 2.1.1], as in claims 1 and 21-22 wherein each pair of said multiplicity of pairings comprises entries for peptide sequences that are: (i) of equal or similar length.
Dependent claims 7-8, 11, 14, and 16
Dhanda teaches using 904 IL4-inducing MHC class II peptide sequences [page 2 left col section 2.1.1]. Dhanda teaches using a dataset generated from the Immune Epitope Database (IEDB), as in claim 7. Here, it is obvious that the data from the IEDB contained entries because it is known in the art that database entries contain data such as peptide sequence, protein ID/origin, and functional information [Specification page 9 lines 24-27].
Dhanda teaches using 904 IL4-inducing MHC class II peptide sequences [page 2 left col section 2.1.1], as in claim 8. Here, the 904 IL4 peptide sequences of the positive dataset read on 2 to 50 different surface-bound or secreted complexes bound to HLA/MHC complexes.
Dhanda teaches using amino acid residues from 8 to 22 residues in length [page 2, left col section 2.1.1], as in claim 11.
Dhanda teaches examining physicochemical properties (PCP) (i.e., hydrophilicity, hydrophobicity, charge, steric effect, side bulk, pI, hydropathy, and amphipathy) of IL4+ and IL4- peptides [page 3 left col section 2.10], as in claim 14.
Dhanda teaches examining physicochemical properties (PCP) (i.e., hydrophilicity, hydrophobicity, charge, steric effect, side bulk, pI, hydropathy, and amphipathy) of IL4+ and IL4- peptides [page 3 left col section 2.10]. Dhanda teaches "(iii) average of the sum of PCP of selected residues of N- and C-terminals, after analysis of discriminating residue positions; for example, 1, 2, 3, 5, and 12 positions of N-terminal were selected for hydrophilicity" [page 3 left col section 2.10], as in claim 16.
Dhanda does not teach claim 1 identifying a multiplicity of pairings, with each pairing establishing a pair between one entry in the positive data set and one entry in the negative data set. Dhanda does not teach Claim 1 wherein each pair of said multiplicity of pairings comprises entries for peptide sequences that are: (ii) are derived from the same source protein or fragment.
Dhanda does not teach claim 1 training the machine learning algorithm on the multiplicity of pairings. Dhanda does not teach claim 1 creating a trained algorithm to identify peptides that contain features positively associated with natural endogenous or exogenous cellular processing, transportation and major histocompatibility complex (MHC) presentation.
Dhanda does not teach claim 1 wherein following the training the machine learning algorithm on the multiplicity of pairings, using the trained machine learning algorithm to interrogate input data comprising amino acid sequences of peptides and/or proteins, to identify peptides or peptide fragments of said proteins, having features positively associated with natural endogenous or exogenous cellular processing, transportation and HLA/MHC presentation.
Tang et al. (Tang) teach that one positive dataset is paired with five negative sub-datasets, with each pair trained with one of the following features: Hidden Markov Model, binary encoding, BLOSUM62 feature, position-specific amino acid composition, and position-specific dipeptide composition [page 23 right col second para]. Tang also teaches using pairs of equal or similar length using 8-mers, 9-mers, 10-mers, and 11-mers [page 24 table 2]. Tang teaches using the same source from specific alleles such as A0201, B0702, and B3501, for example [page 24 table 2], as in claims 1 and 21-22 identifying a multiplicity of pairings, with each pairing establishing a pair between one entry in the positive data set and one entry in the negative data set and wherein each pair of said multiplicity of pairings comprises entries for peptide sequences that are: (ii) derived from the same source protein or fragment.
Tang teaches a training dataset for each peptide length and allele [page 24 table 2]. Tang teaches performances of NIEluter on 8-mer peptides [table 4] and on 10-mer peptides [table 6] [page 24], as in claims 1 and 21-22 training the machine learning algorithm on the multiplicity of pairings.
Tang teaches building NIEluter and comparing the algorithm to MHC-NP and NetMHC3.2 [page 25 left col section 2.5 and fig. 1]. Tang teaches the NIEluter is used for predicting or identifying naturally processed peptides (NPPs) [page 25 right col first para]. Tang teaches that one positive dataset is paired with five negative sub-datasets, with each pair trained with one of the following features: Hidden Markov Model, binary encoding, BLOSUM62 feature, position-specific amino acid composition, and position-specific dipeptide composition [page 23 right col second para], as in claims 1 and 21-22 creating a trained algorithm to identify peptides that contain features positively associated with natural endogenous or exogenous cellular processing, transportation and major histocompatibility complex (MHC) presentation.
Tang teaches NIEluter was able to predict peptides eluted from six HLA alleles (A0201, B0702, B3501, B4403, B5301, and B5701) of 8–11 amino acids [page 26, right col section 4 conclusion]. NIEluter is a machine learning algorithm that predicted/identified peptides eluted from six HLA alleles (A0201, B0702, B3501, B4403, B5301, and B5701) [page 25 figure 1], as in claims 1 and 21-22, wherein following the training the machine learning algorithm on the multiplicity of pairings, using the trained machine learning algorithm to interrogate input data comprising amino acid sequences of peptides and/or proteins, to identify peptides or peptide fragments of said proteins, having features positively associated with natural endogenous or exogenous cellular processing, transportation and HLA/MHC presentation.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Dhanda in view of Tang because Tang teaches analyzing a multiplicity of pairings with respect to using datasets of naturally processed peptides or naturally presented peptides (NPPs) [abstract]. Additionally, one of ordinary skill would recognize that Tang also uses positive datasets that are composed of NPPs (eluted) and negative datasets that are MHC binders that cannot be processed naturally (binder) [page 23 left col section 2.1]. One of ordinary skill in the art would be motivated to combine Dhanda in view of Tang because Tang also teaches building/processing positive and negative data sets [page 23 left col section 2.1] and teaches using machine learning algorithms (i.e., support vector machines) for processing pairings between data sets for predicting peptides eluted from six HLA alleles (A0201, B0702, B3501, B4403, B5301, and B5701) of 8–11 amino acids [page 26 right col conclusion]. Thus, there is a reasonable expectation of success in combining the known elements of using positive and negative datasets of Dhanda with the multiplicity of pairings of Tang to yield a predictable method for using a trained machine learning algorithm to interrogate input data to identify peptides, or peptide fragments of said proteins, having features positively associated with natural endogenous or exogenous cellular processing, transportation and HLA/MHC presentation.
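By way of illustration only, training a classifier on a multiplicity of pairings of the kind discussed above can be sketched with a toy pairwise-ranking update. This sketch is not the method of Tang or of the instant claims: the perceptron-style update, the amino-acid-composition encoding, and the toy peptide pairs are all illustrative assumptions (Tang uses support vector machines with other feature encodings).

```python
# Hypothetical sketch: a perceptron-style update that pushes the score of each
# positive peptide above its paired negative peptide. Encoding and data are
# toy assumptions, not drawn from the cited art.

AA = "ACDEFGHIKLMNPQRSTVWY"

def encode(seq):
    """Simple amino-acid composition feature vector (one count per residue type)."""
    return [seq.count(a) for a in AA]

def train_on_pairings(pairings, epochs=20):
    w = [0.0] * len(AA)
    for _ in range(epochs):
        for pos_seq, neg_seq in pairings:
            diff = [p - n for p, n in zip(encode(pos_seq), encode(neg_seq))]
            score = sum(wi * di for wi, di in zip(w, diff))
            if score <= 0:  # positive not yet ranked above negative: update
                w = [wi + di for wi, di in zip(w, diff)]
    return w

pairs = [("SIINFEKL", "AAAAFEKL"), ("GILGFVFTL", "GGGGFVFTL")]
w = train_on_pairings(pairs)

def score(seq):
    return sum(wi * xi for wi, xi in zip(w, encode(seq)))

print(score("SIINFEKL") > score("AAAAFEKL"))  # True
```

The pairwise difference vector is the key point: because each positive entry is compared only against its matched negative entry, the model learns from features that distinguish the pair rather than from confounds such as length or source protein, which the pairing criteria hold constant.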
Dhanda in view of Tang does not teach claim 1 wherein each pair of said multiplicity of pairings comprises entries for peptide sequences that are: (iii) of similar binding affinities with respect to an HLA/MHC molecule to which the peptide of the positive counterpart data set is restricted. Dhanda in view of Tang does not teach claim 1 the amino acids at key HLA/MHC-binding anchor positions within the peptide sequences of the positive and negative data sets are removed as features for the machine learning algorithm. Dhanda in view of Tang does not teach claim 13.
Fridmen et al. (Fridmen) disclose using affinity information derived from T-cell epitopes known to bind to an MHC allele of interest, computing a binding score for a series of candidate peptides derived from a target protein, and ranking the peptides in a list based on the numerical value of the predicted binding score, wherein peptides having a higher binding score are predicted to be characterized by a higher binding affinity to the MHC allele [Fridmen, claims 10 and 17], as in instant claims 1 and 21-22 wherein each pair of said multiplicity of pairings comprises entries for peptide sequences that are: (iii) of similar binding affinities with respect to an HLA/MHC molecule to which the peptide of the positive counterpart data set is restricted. Here, although Fridmen does not teach pairings, it would have been obvious to incorporate the binding affinities of Fridmen as pairings of entries for the positive and negative datasets for identifying peptides and/or peptide fragments.
Fridmen discloses rules for rejecting a T-cell epitope, which reject an epitope if another protein in an appropriate database of reference sequences contains an amino acid sequence which differs by 1 or 2 amino acids from the amino acid sequence of the candidate T-cell epitope and the mismatches are restricted to the MHC anchor residues of the candidate epitope and the reference sequence [Fridmen, claims 8-9 and 13-14]. Fridmen discloses different binding affinities for different peptides [pages 22-23 tables 5 and 7-8], as in instant claims 1 and 21-22 the amino acids at key HLA/MHC-binding anchor positions within the peptide sequences of the positive and negative data sets are removed as features for the machine learning algorithm. Therefore, it would have been obvious to use the binding affinities of Fridmen as entries in positive and negative data sets for identifying a multiplicity of pairings between the datasets for identifying peptides that contain features positively associated with natural endogenous or exogenous cellular processing, transportation and major histocompatibility complex (MHC) presentation.
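The concept of removing anchor-position residues as machine-learning features, discussed above, can be illustrated with a short sketch. The masking function, the choice of positions, and the example sequence are hypothetical; for many HLA class I 9-mers, position 2 and the C-terminal position are commonly described in the art as primary anchor positions, but the actual anchor positions are allele dependent.

```python
# Hypothetical sketch: residues at HLA/MHC anchor positions are replaced with
# a placeholder so they contribute no information as learning features.

def mask_anchor_positions(seq, anchor_positions):
    """Replace residues at the given 0-based anchor positions with 'X'."""
    return "".join("X" if i in anchor_positions else aa
                   for i, aa in enumerate(seq))

# Example 9-mer with positions 2 and 9 (0-based indices 1 and 8) masked.
print(mask_anchor_positions("SIINFEKLM", {1, 8}))  # SXINFEKLX
```

Masking in this way removes the influence of HLA/MHC binding preferences from the feature set, so that a model trained on the masked sequences learns only from the non-anchor residues.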
Dependent claim 13
Fridmen discloses an algorithm, also known as a binding filter, to predict relative binding affinities [page 8 para 0077]. Fridmen discloses example 9, which teaches in vitro binding affinity evaluation [page 25 para 0197], as in claim 13.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Dhanda in view of Tang and Fridmen because Fridmen discloses a system and method for automated selection of T-cell epitopes [title] that are likely to bind to an MHC class I allele by collecting and curating training sets of peptide sequences known to bind to MHC alleles [Fridmen, claim 1]. Here, one of ordinary skill in the art would recognize that Fridmen teaches constructing training data of peptides that can bind to an MHC allele [Fridmen, claim 1]. Additionally, one of ordinary skill in the art would further recognize that Fridmen discloses a method that filters and rejects candidate T-cell epitopes based on anchoring positions, such as proline at P1 or P1' or leucine positions [Fridmen, claims 10-15]. As such, there would be a reasonable expectation of success in combining the known elements of using positive and negative datasets, 8-11-mers, and protein sources of Dhanda, the multiplicity of pairings using entries of peptide fragments (i.e., 8-11-mers) and HLA allele sources of Tang, and the training datasets, binding affinities, and filtering of anchoring positions of Fridmen to construct a predictable method using a trained machine learning algorithm to interrogate input data to identify peptides, or peptide fragments of said proteins, having features positively associated with natural endogenous or exogenous cellular processing, transportation and HLA/MHC presentation.
Claim(s) 9-10 are rejected under 35 U.S.C. 103 as being unpatentable over Dhanda in view of Tang in view of Fridmen, as applied to claims 1, 7-8, 11, 13-14, 16, and 21-22, and in further view of Bremel et al. (U.S Patent Pub US 2013/0330335; Cited in the Office Action mailed 02 January 2025).
Dhanda in view of Tang in view of Fridmen teach claims 1, 7-8, 11, 13-14, 16, and 21-22.
Dhanda in view of Tang in view of Fridmen teach a computer-implemented method for training a machine learning model using HLA/MHC/peptide complexes, building positive and negative datasets containing HLA/MHC/peptide complex data related to length, source, and binding, and comparing the datasets via pairings to identify peptides that contain features positively associated with natural endogenous or exogenous cellular processing, transportation and HLA/major histocompatibility complex (MHC) presentation.
Dhanda in view of Tang in view of Fridmen do not teach claims 9-10.
Bremel teaches that haplotype refers to the HLA alleles found on one chromosome and the proteins encoded thereby. Haplotype may also refer to the allele present at any one locus within the MHC. Each class of MHC is represented by several loci, e.g., HLA-A (Human Leukocyte Antigen-A), HLA-B, HLA-C, HLA-E, HLA-F, HLA-G, HLA-H, HLA-J, HLA-K, HLA-L, HLA-P and HLA-V for class I and HLA-DRA, HLA-DRB1-9, HLA-DQA1, HLA-DQB1, HLA-DPA1, HLA-DPB1, HLA-DMA, HLA-DMB, HLA-DOA, and HLA-DOB for class II [Bremel, disclosure [0107]], as in claims 9-10.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Dhanda in view of Tang and Fridmen, and further in view of Bremel, because Bremel discloses bioinformatic processes for determination of peptide binding [Bremel, abstract]. One of ordinary skill in the art would be motivated to combine Dhanda in view of Tang and Fridmen, and further in view of Bremel, because Bremel discloses methods for identifying ligands by generating amino acid data subsets identified as having a binding affinity for a binding partner [Bremel, claim 1]. One of ordinary skill in the art would have a reasonable expectation of success in combining Dhanda in view of Tang and Fridmen, and further in view of Bremel, because Bremel discloses using binding partners with MHC class I and II binders [Bremel, claims 2-8]. Bremel discloses that "neural network" refers to various configurations of classifiers used in machine learning, including multilayered perceptrons with one or more hidden layers, support vector machines, and dynamic Bayesian networks [Bremel, disclosure [0136]]. Therefore, the teachings of Dhanda, Tang, Fridmen, and Bremel could be utilized to construct, train, and create a machine learning algorithm (statistical inference model) to identify peptide features positively associated with transportation and MHC presentation.
Claim(s) 17 is rejected under 35 U.S.C. 103 as being unpatentable over Dhanda in view of Tang in view of Fridmen, as applied to claims 1, 7-8, 11, 13-14, 16, and 21-22, and in further view of Mei et al. (Biopolymers, 2005, Vol.80 (6), p.775-786; Cited in the Office Action mailed 02 January 2025).
Dhanda in view of Tang in view of Fridmen teach claims 1, 7-8, 11, 13-14, 16, and 21-22.
Dhanda in view of Tang in view of Fridmen teach a computer-implemented method for training a machine learning model using HLA/MHC/peptide complexes, building positive and negative datasets containing HLA/MHC/peptide complex data related to length, source, and binding, and comparing the datasets via pairings to identify peptides that contain features positively associated with natural endogenous or exogenous cellular processing, transportation and HLA/major histocompatibility complex (MHC) presentation.
Dhanda in view of Tang in view of Fridmen do not teach claim 17.
Mei et al. (Mei) teaches a new set of amino acid descriptors, i.e., VHSE (principal components score Vectors of Hydrophobic, Steric, and Electronic properties), derived from principal components analysis (PCA) on independent families of 18 hydrophobic properties, 17 steric properties, and 15 electronic properties, respectively, which are included in a total of 50 physicochemical variables of 20 coded amino acids [abstract], as in claim 17.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Dhanda in view of Tang and Fridmen, and further in view of Mei, because Mei teaches using a new set of descriptors called principal component score Vectors of Hydrophobic, Steric, and Electronic properties (VHSE) [abstract]. One of ordinary skill in the art would recognize that, although Mei does not teach using MHC molecules, the descriptors can be applied to amino acid sequences of any origin. One of ordinary skill in the art would have a reasonable expectation of success in combining Dhanda in view of Tang and Fridmen, and further in view of Mei, because Mei teaches using new descriptors derived from PCA analysis of coded amino acids, and these vectors could be implemented into the systems of Dhanda, Tang, and Fridmen to further describe amino acid sequences and their inherent properties in order to identify peptides associated with cellular transportation proteins or HLA/MHC proteins.
Claim(s) 18 is rejected under 35 U.S.C. 103 as being unpatentable over Dhanda in view of Tang in view of Fridmen, as applied to claims 1, 7-8, 11, 13-14, 16, and 21-22, and in further view of Zhiliang et al. (Science China. Chemistry, 2008-10, Vol.51 (10), p.946-957; Cited in the Office Action mailed 02 January 2025).
Dhanda in view of Tang in view of Fridmen teach claims 1, 7-8, 11, 13-14, 16, and 21-22.
Dhanda in view of Tang in view of Fridmen teach a computer-implemented method for training a machine learning model using HLA/MHC/peptide complexes, building positive and negative datasets containing HLA/MHC/peptide complex data related to length, source, and binding, and comparing the datasets via pairings to identify peptides that contain features positively associated with natural endogenous or exogenous cellular processing, transportation and HLA/major histocompatibility complex (MHC) presentation.
Dhanda in view of Tang in view of Fridmen do not teach claim 18.
Zhiliang et al. (Zhiliang) teaches that a new descriptor, called the vector of topological and structural information for coded and noncoded amino acids (VTSA), was derived by principal component analysis (PCA) from a matrix of 66 topological and structural variables of 134 amino acids [abstract]. Zhiliang teaches the VTSA vector was applied to two sets of peptide quantitative structure-activity relationships or quantitative sequence-activity modelings (QSARs/QSAMs) [abstract]. Zhiliang teaches modeling by support vector machine (SVM) [page 952 left col].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Dhanda in view of Tang and Fridmen, and further in view of Zhiliang, because Zhiliang teaches methods using new descriptors, called vectors of topological and structural information for coded and noncoded amino acids (VTSA), derived by principal component analysis (PCA) [abstract]. One of ordinary skill in the art would recognize that, although Zhiliang does not teach cellular transportation proteins or HLA/MHC, the VTSA can be applied to amino acid sequences, such as from B-cells, for example. One of ordinary skill in the art would be motivated to combine Dhanda in view of Tang and Fridmen, and further in view of Zhiliang, because Zhiliang teaches the VTSA vector was applied to two sets of peptide quantitative structure-activity relationships or quantitative sequence-activity modelings (QSARs/QSAMs), modeled by genetic partial least squares (GPLS), support vector machine (SVM), and immune neural network (INN), and good results were obtained [abstract]. One of ordinary skill in the art would have a reasonable expectation of success in combining Dhanda in view of Tang and Fridmen with the vectors of topological and structural information for coded and noncoded amino acids (VTSA) of Zhiliang to construct a method for training a machine learning algorithm or statistical inference model using descriptors that can be utilized as features to identify peptide features positively associated with transportation and MHC presentation.
Claim(s) 19 is rejected under 35 U.S.C. 103 as being unpatentable over Dhanda in view of Tang in view of Fridmen, as applied to claims 1, 7-8, 11, 13-14, 16, and 21-22, and in further view of Wen et al. (Gene, 2014-08, Vol.546 (1), p.25-34; Cited in the Office Action mailed 02 January 2025).
Dhanda in view of Tang in view of Fridmen teach claims 1, 7-8, 11, 13-14, 16, and 21-22.
Dhanda in view of Tang in view of Fridmen teach a computer-implemented method for training a machine learning model using HLA/MHC/peptide complexes, building positive and negative datasets containing HLA/MHC/peptide complex data related to length, source, and binding, and comparing the datasets via pairings to identify peptides that contain features positively associated with natural endogenous or exogenous cellular processing, transportation and HLA/major histocompatibility complex (MHC) presentation.
Dhanda in view of Tang in view of Fridmen do not teach claim 19.
Wen et al. (Wen) teaches the k-mer natural vector is obtained by concatenating the first group of parameters (the frequency of occurrence of each k-mer in the sequence) [page 28]. Wen teaches that the k-mer model of a genetic sequence can be described as follows: consider a genetic sequence s of length L, 'N1, N2, …, NL', where Ni ∈ {A, C, G, T}, i = 1, 2, …, L. A string of consecutive k nucleotides within a genetic sequence is called a k-mer [page 26 right col section 2.1], as in claim 19. Here, it is obvious that k can be equal to 1 or greater.
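The k-mer counting described by Wen can be illustrated with a short sketch. The function name and example sequence are hypothetical; the sketch only assumes the facts recited above (alphabet {A, C, G, T}, and a sequence of length L containing L - k + 1 consecutive k-mers).

```python
from itertools import product

def kmer_vector(seq, k):
    """Frequency of each possible k-mer over the alphabet A, C, G, T in seq."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    counts = {m: 0 for m in kmers}
    for i in range(len(seq) - k + 1):
        window = seq[i:i + k]
        if window in counts:
            counts[window] += 1
    return [counts[m] for m in kmers]

# A sequence of length L contains L - k + 1 consecutive k-mers:
vec = kmer_vector("ACGTACGT", 2)  # 8 - 2 + 1 = 7 overlapping 2-mers
print(sum(vec))  # 7
```

Concatenating these per-k-mer frequencies yields the first group of parameters of the k-mer natural vector described by Wen; for k = 1 the vector reduces to simple nucleotide composition.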
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Dhanda in view of Tang and Fridmen, and further in view of Wen, because Wen teaches describing k-mers and using k-mer vectors for phylogenetic analysis [title]. One of ordinary skill in the art would recognize that, although Wen teaches the k-mer natural vector with respect to phylogenetic analysis, the k-mer frequency and k-mer length of Wen could be utilized to analyze general peptide or amino acid sequence data. One of ordinary skill in the art would be motivated to combine Dhanda in view of Tang and Fridmen, and further in view of Wen, because Wen teaches that the k-mer natural vector method is a very powerful tool for analyzing and annotating genetic sequences (entries of peptide sequences). One of ordinary skill in the art would have a reasonable expectation of success using the tools of Wen with Dhanda in view of Tang and Fridmen because the tools of Wen can be used to detect similarities of genetic sequences [page 29 section 2.4] and frequencies of k-mers in genetic sequences [page 28 right col, the k-mer natural vector para]. Therefore, the combination of Dhanda, Tang, Fridmen, and Wen would construct a predictable method step using k-mer frequencies for identifying peptides, or peptide fragments of said proteins, having features positively associated with natural endogenous or exogenous cellular processing, transportation and HLA/MHC presentation.
Conclusion
Claims 1, 7-19, and 21-22 are rejected.
No claims are allowed.
Finality
This Office action is a Non-Final action. A shortened statutory period for reply to this action is set to expire THREE MONTHS from the mailing date of this action.
Inquiries
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSEPH C PULLIAM whose telephone number is (571)272-8696. The examiner can normally be reached 0730-1700 M-F.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Karlheinz Skowronek can be reached at (571) 272-9047. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.C.P./Examiner, Art Unit 1687
/Anna Skibinsky/
Primary Examiner, AU 1635