DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Office Action Overview
Claim Status
Pending: 1-20
Examined: 1-20
Independent: 1, 10, and 12
Allowable: none
Objected to: 1, 3, 8, 10, 12, 14, and 19
Rejections applied (X = applied):
[X] 112/b Indefiniteness
[ ] 112/b "Means for"
[ ] 112/a Enablement, Written description
[ ] 112 Other
[X] 102, 103
[X] 101 JE(s)
[ ] 101 Other
[X] Double Patenting

Abbreviations:
PHOSITA: "a Person Having Ordinary Skill In The Art before the effective filing date of the claimed invention"
BRI: Broadest Reasonable Interpretation
CRM: "Computer-Readable Media" and equivalent language
IDS: Information Disclosure Statement
JE: Judicial Exception
112/a: 35 USC 112(a), and similarly for 112/b, etc.
N:N: page:line
MM/DD/YYYY: date format
Priority
As detailed in the 04/01/2022 filing receipt, this application claims priority as early as 04/05/2021, the filing date of parent U.S. provisional application 63/170,697.
Pending claims 1-9 and 12-20 are being examined with an effective filing date (EFD) of 04/05/2021, the filing date of provisional 63/170,697.
Pending claims 10 and 11 are being examined with an EFD of 04/01/2022, the filing date of the instant application, because parent provisional application 63/170,697 does not provide support for the bolded aspects of "MHC proteins associated with a virus pathogen or tumor" of claim 10, nor for "treating a person for the virus pathogen or tumor" of claim 11.
Specification
The disclosure is objected to because of the following informalities:
Specification paragraph [0046] recites "MHC protein of a (the) pathogen" three times. As the MHC proteins are not interpreted to be proteins of a pathogen, [0046] could be amended to reflect that the MHC proteins are associated with peptides from a pathogen, taking care not to add new matter.
Appropriate correction is required.
Claim Objections
Claims 1, 3, 8, 10, 12, 14, and 19 are objected to because of the following informalities (bold emphasis added):
Claim 1 (line 5) and claim 12 (line 8) recite "new peptides," which should be amended to "new peptide sequences" in order to recite consistent claim language.
Claim 1 (line 5) and claim 12 (line 8) recite "the discriminator," which should be amended to "the discriminator model" in order to recite consistent claim language.
Claim 3 (line 2) and claim 14 (line 2) recite "multivariate unit-variate" which appears to have a misspelling and should be corrected to "multivariate unit-variance," as recited in the Specification paragraphs [0032, 0044, and 0047].
Claim 8 (lines 1-2) and claim 19 (lines 1-2) recite "the generator transforms a binding class label from the encoder," which should be amended to "the generator model transforms a binding class label from the encoder model" in order to recite consistent claim language.
Claim 10 (line 5) recites "the trained GAN" which should be amended to "the trained GAN model" in order to recite consistent claim language.
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention. Claims depending from rejected claims are rejected similarly, unless otherwise noted, and any amendments in response to the following rejections should be applied throughout the claims, as appropriate.
Claim 1 (line 4) and claim 12 (line 7) recite "the discriminator model," which requires but lacks clear antecedent basis. If this recitation refers to a previously instantiated instance, it is not clear which instance that is; if this recitation instantiates the claim element, that is not clear either. This rejection might be overcome by, for example, amending to recite "a discriminator model" instead of "the discriminator model." For compact examination, it is assumed that the preceding suggestion will be implemented.
In the "developing a treatment" step of claim 10, the connection is unclear between "a treatment" and "the virus pathogen…," because the treatment would typically treat a disease or condition. The rejection might be overcome by amending to include the disease state (taking care not to add new matter), possibly "developing a treatment for the virus disease attributable to the virus pathogen, or for the tumor associated with the MHC."
In a similar issue, claim 11 recites "treating a person for the virus pathogen or tumor using the developed treatment," which is unclear because the treatment would typically treat a disease or condition. The rejection might be overcome by amending to include the disease state (taking care not to add new matter), possibly "treating a person for the virus disease attributable to the virus pathogen or for the tumor using the developed treatment."
In the "training a generative…" step of claim 10, the relationship is unclear between "binding peptide sequences" and "a major histocompatibility protein (MHC)," in the recitation of "binding peptide sequences relating to a major histocompatibility protein (MHC)." It is not clear from the claim if "relating to" refers to the binding peptide sequences binding the MHC, or if "relating to" refers to the binding peptide sequences being akin to the actual MHC protein itself, or if "relating to" refers to something else.
Further, in a related rejection, regarding the "developing a treatment…" step of claim 10, the association is unclear between "virus pathogen or tumor" and "the MHC protein" in the recitation "the virus pathogen or tumor associated with the MHC protein." It is not clear in what way being "associated with" occurs.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to one or more judicial exceptions without significantly more.
MPEP 2106 details the following framework to analyze Subject Matter Eligibility:
• Step 1: Are the claims directed to a category of statutory subject matter (a process, machine, manufacture, or composition of matter)? (see MPEP § 2106.03)
• Step 2A, Prong One: Do the claims recite a judicially recognized exception, i.e., an abstract idea, a law of nature, or a natural phenomenon? (see MPEP § 2106.04(a)). Note: MPEP §§ 2106.04(a)(2) and 2106.04(b) further explain that abstract ideas and laws of nature are defined as:
• mathematical concepts, (mathematical formulas or equations, mathematical relationships and mathematical calculations);
• certain methods of organizing human activity (fundamental economic practices or principles, managing personal behavior or relationships or interactions between people);
• mental processes (procedures for observing, evaluating, analyzing/judging, and organizing information); and/or
• laws of nature and natural phenomena: naturally occurring principles or relations, or those that do not have markedly different characteristics compared to what occurs in nature.
• Step 2A, Prong Two: If the claims recite a judicial exception under Prong One, then is the judicial exception integrated into a practical application? (see MPEP § 2106.04(d))
• Step 2B: If the claims do not integrate the judicial exception, do the claims provide an inventive concept? (see MPEP § 2106.05)
Regarding Step 1: Yes, the claims are directed to related methods (processes) and a machine or manufacture (system, processor, and memory), and therefore to categories of statutory subject matter. (See MPEP § 2106.03).
Regarding Step 2A, Prong One: Claims 1-20 recite a judicial exception (JE) of abstract ideas in the form of mental processes and mathematical concepts as follows:
• encoding training peptide sequences (claims 1 and 12)
• generating new peptide sequences (claims 1, 10, and 12)
• training the models to generate new peptide sequences (claims 1, 10, and 12)
• using tempering softmax output units (claims 2 and 13)
• sampling a multivariate unit-variate Gaussian distribution (claims 3 and 14)
• the recited mathematical equations include cross-entropy losses (claims 4 and 15)
• embedding peptide sequences into vectors (claims 5 and 16)
• minimizing a kernel maximum (claims 6 and 17)
• the training dataset including binding and non-binding peptide sequences relative to a major histocompatibility complex (claims 7 and 18)
• transform a binding class label and a sampled latent code vector into a peptide feature representation matrix (claims 8 and 19)
• using a loss function that is based on a Wasserstein metric (claims 9 and 20)
• developing a treatment for the virus pathogen or tumor associated with the MHC protein using the new binding peptide sequence (claim 10)
To summarize, the claims recite abstract ideas, characterized as mental processes and mathematical concepts. Under the broadest reasonable interpretation (BRI) of the claims, the mental processes recited in independent claim 1 (e.g., encoding training peptide sequences, generating new peptide sequences, etc.) are directed to processes that may be performed in the human mind, or with pen and paper. Claims 4 and 15 explicitly recite mathematical equations, while the limitations of the other claims (e.g., encoding training peptide sequences, training the encoder model, using tempering softmax outputs, sampling a multivariate unit-variance Gaussian distribution, etc.) inherently recite mathematical concepts such as those disclosed in Specification [0039-0048]. Such analysis performed mentally, or with paper and pencil, may take considerable time and effort, and although a general-purpose computer can perform these calculations at a rate and accuracy far exceeding the mental performance of a skilled artisan, the nature of the activity is essentially the same, and it therefore constitutes an abstract idea.
Therefore, the claims recite elements that constitute a JE in the form of abstract ideas. (Step 2A, Prong One: Yes.)
Regarding Step 2A, Prong Two: In Step 2A, Prong One above, claim steps and/or elements were identified as part of one or more JE(s). Here at Step 2A, Prong Two, any remaining steps and/or elements not identified as JE(s) are therefore in addition to the identified JE(s), and are considered additional elements. Because the claims have been interpreted as being directed to a JE, abstract ideas in this instance, then Step 2A, Prong Two provides that the claims be examined further to determine whether the JE is integrated into a practical application [see MPEP § 2106.04(d)]. A claim can be said to integrate a JE into a practical application when it applies, relies on, or uses the JE in a manner that imposes a meaningful limit on the JE.
MPEP § 2106.04(d)(I) lists the following example considerations for evaluating whether a JE is integrated into a practical application:
(1) An improvement in the functioning of a computer or an improvement to other technology or another technical field, as discussed in MPEP §§ 2106.04(d)(1) and 2106.05(a);
(2) Applying or using a JE to effect a particular treatment or prophylaxis for a disease or medical condition, as discussed in MPEP § 2106.04(d)(2);
(3) Implementing a JE with, or using a JE in conjunction with, a particular machine or manufacture that is integral to the claim, as discussed in MPEP § 2106.05(b);
(4) Effecting a transformation or reduction of a particular article to a different state or thing, as discussed in MPEP § 2106.05(c); and
(5) Applying or using the JE in some other meaningful way beyond generally linking the use of the JE to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception, as discussed in MPEP § 2106.05(e).
The claims recite additional elements as follows:
Additional elements of data gathering and/or outputting steps: Claims 2 and 12 recite additional elements of outputting amino acid representations. These outputting steps are additional elements which perform functions of outputting the data needed to carry out the abstract idea. These steps are considered insignificant extra-solution activity, and are not sufficient to integrate an abstract idea into a practical application as they do not impose any meaningful limitation on the abstract idea or how it is performed, nor do they provide an improvement to technology [see MPEP §§ 2106.04(d)(I) and 2106.05(g)].
Additional elements of computer components: Claims 1 and 10 recite an additional element of a computer; claims 12, 14, 17, and 20 recite the additional elements of a system, a processor, and/or a memory. The claims require only generic computer components, which do not improve computer technology and do not integrate the recited judicial exception into a practical application (see MPEP §§ 2106.04(d)(1) and 2106.05(f)).
Additional element of therapy: Claim 11 recites an additional element of treating a person for the virus pathogen or tumor using the developed treatment. This additional element does not yet recite a particular therapy for the following reasons: Claim 10 provides only a general link between the abstract idea and the treatment by reciting "developing a treatment for the virus pathogen or tumor associated with the MHC protein using the new binding peptide sequence," and the claimed treatment of claim 11 ("treating a person for the virus pathogen or tumor using the developed treatment") is recited in a general way, with no physical steps of administration, etc. Specification [0046] might disclose details which, if included in an amendment to claim 11, may both strengthen the link between the abstract idea and the treatment and provide for recitation of physical administration of the treatment. (Note: Consideration should be given to the Specification objection to [0046] set forth above in this action. Additionally, the 112(b) rejections of claims 10 and 11 need clarification, as they relate to the treatment recitations, in order for claim 11 to recite a particular therapy.) To summarize, instant claim 11 does not yet recite a particular therapy for the above reasons, and is therefore insufficient to integrate the abstract idea into a practical application (see MPEP § 2106.04(d)(2)).
Step 2A Prong Two summary: The claims have been further analyzed with respect to Step 2A, Prong Two, and no additional elements have been found, alone or in combination, that would integrate the judicial exception into a practical application. (Step 2A, Prong Two: No).
Step 2B analysis: Because the additional claim elements do not integrate JE (in this case, the abstract ideas) into a practical application, the claims are further examined under Step 2B, which evaluates whether the additional elements, individually and in combination, amount to significantly more than the judicial exception itself by providing an inventive concept. An inventive concept is furnished by an element or combination of elements that is recited in the claim in addition to the judicial exception, and is sufficient to ensure that the claim, as a whole, amounts to significantly more than the judicial exception itself (see MPEP § 2106.05).
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the claims recite additional elements that are well-understood, routine, and conventional. Those additional elements are as follows:
Additional elements of data gathering and/or outputting steps: The additional elements of outputting amino acid representations of claims 2 and 12 do not cause the claims to rise to the level of significantly more than the judicial exception. The courts have recognized receiving or transmitting data over a network, and storing and retrieving information in memory, as well-understood, routine, conventional activity when claimed in a merely generic manner (e.g., at a high level of generality) or as extra-solution activity [see MPEP § 2106.05(d)(II)]. Therefore the additional elements of data gathering are routine, well-understood, and conventional in the art, and do not provide the inventive concept needed to amount to significantly more than the judicial exception.
Additional elements of computer components: The additional elements of a computer, system, processor, and memory recited in claims 1, 10, 12, 14, 17, and 20 do not cause the claims to rise to the level of significantly more than the judicial exception; these are conventional computer components, which do not cause the claims to rise to the level of significantly more than the judicial exception as they do not provide an inventive concept.
Further regarding the conventionality of additional elements, the MPEP at 2106.05(b) and 2106.05(d) presents several points relevant to conventional computers and data gathering steps in regard to Step 2A Prong 2 and Step 2B, including:
• A general purpose computer that applies a judicial exception, such as an abstract idea, by use of conventional computer functions, does not qualify as a particular machine (see 2106.05(b)(I)), as in the case of claims 1, 10, 12, 14, 17, and 20, which are interpreted to recite conventional computer components.
• Integral use of a machine to achieve performance of a method may integrate the recited judicial exception into a practical application or provide significantly more, in contrast to where the machine is merely an object on which the method operates, which does not integrate the exception into a practical application or provide significantly more (see MPEP § 2106.05(b)(II)). In the instant claims, the recited computer, processor, and memory are used in encoding peptides, generating sequences, developing treatments, training models, etc.; as such, the computer, processor, and memory act only as a tool to perform the steps of data analysis, and do not integrate the exception into a practical application or provide significantly more.
• Use of a machine that contributes only nominally or insignificantly to the execution of the claimed method (e.g., in a data gathering step or in a field-of-use limitation) would not integrate a judicial exception or provide significantly more (see MPEP § 2106.05(b)(III)). The computer, processor, and/or memory of claims 1, 10, and 12 used in performing data analysis do not impose meaningful limitations on the claims.
• The courts have recognized “receiving or transmitting data over a network”, “performing repetitive calculations”, and “storing and retrieving information in memory”, as well-understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity (see MPEP 2106.05(d)(II)). The outputting of data in claims 2 and 12 is recited in a generic manner.
All limitations of claims 1-20 have been analyzed with respect to Step 2B, and none provides a specific inventive concept; all fail to rise to the level of significantly more than the identified judicial exception, and thus do not transform the judicial exception into a patent-eligible application of the exception. Step 2B: No. Therefore, the claims, with the limitations considered individually and as a whole, are rejected under 35 U.S.C. § 101 as reciting non-patent-eligible subject matter.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
103-A:
Claims 1-3, 5-9, 12-14, and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over Wang, Xingjian (hereafter Wang-X) (US 2019/0259474 A1, published 08/22/2019; cited on the attached form PTO-892), in view of Rahman (Molecules, vol. 26(5):1209, pp. 1-23 (Feb. 2021); cited on the attached form PTO-892).
Wang-X presents a generative adversarial network (GAN) – convolutional neural network (CNN) used in MHC peptide binding prediction (entire document).
Regarding encoding training peptide sequences using an encoder model, and generating a new peptide sequence using a generator model, of claims 1 and 12: Wang-X shows the generator 504 may generate random data samples that resemble real samples, but which may include fake samples [0060]; and the use of an information extractor (e.g., CNN + skip-gram embedding) works well for peptide data as the binding information is spatially encoded [0145].
Regarding training the encoder model, the generator model, and the discriminator model to cause the generator model to generate new peptides that the discriminator mistakes for the training peptide sequences, including learning projection vectors with respective cross-entropy losses for binding sequences and non-binding sequences, of claims 1 and 12: Wang-X shows a GAN ([0060] and Fig. 5A) which includes a discriminator and a generator; the generator 504 may generate random data samples that resemble real samples, but which may include fake samples [0060]. Wang-X teaches the generator may include an adversary function that may generate data intended to fool the discriminator using data samples that are almost, but not quite, correct; this may be done by picking a legitimate sample randomly from a training set (latent space) and synthesizing a data sample (data space) by randomly altering its features, such as by adding random noise [0061]. Wang-X teaches a Gaussian noise vector can be input into the generator that outputs a distribution matrix [0067], loss measures may include Mean Squared Error or cross entropy [0082], and the stored models provide the capability to use generator 228 to generate artificial data and discriminator 226 to identify data [0083].
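As an illustrative aside (a sketch only, not drawn from Wang-X's disclosure; all function names and values below are hypothetical), the cross-entropy loss relied on in such GAN training can be shown in a few lines: the discriminator's loss is low when it scores real samples near 1 and generated samples near 0, and high when generated samples fool it.

```python
import math

def bce(prediction, label, eps=1e-7):
    """Binary cross-entropy for a single example, with clamping for stability."""
    p = min(max(prediction, eps), 1 - eps)
    return -(label * math.log(p) + (1 - label) * math.log(1 - p))

def discriminator_loss(real_scores, fake_scores):
    """Standard GAN discriminator objective: real samples are labeled 1,
    generated (fake) samples are labeled 0."""
    losses = [bce(s, 1.0) for s in real_scores] + [bce(s, 0.0) for s in fake_scores]
    return sum(losses) / len(losses)

good = discriminator_loss([0.9, 0.95], [0.1, 0.05])   # discriminator doing well
fooled = discriminator_loss([0.5, 0.5], [0.5, 0.5])   # generator fooling it
```

The generator is trained to push the discriminator's loss upward, which is the adversarial dynamic Wang-X's paragraphs [0060]-[0061] describe.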
Regarding the generator model outputs amino acid representations using a plurality of tempering softmax output units of claims 2 and 13, Wang-X shows predicting confidence parameters 274 may provide the capability to specify the confidence levels (e.g., softmax normalization) [0089]; and normalization processing may include adjusting values measured on different scales to a common scale adjusting the entire probability distributions of the data values into alignment [0093].
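For illustration of the tempering-softmax concept referenced above (a minimal sketch, not Wang-X's implementation; identifiers are hypothetical): the temperature parameter controls how sharply the normalized output probabilities concentrate on a single choice.

```python
import math

def tempered_softmax(logits, temperature=1.0):
    """Softmax with a temperature: T < 1 sharpens the distribution toward
    the arg-max; T > 1 flattens it toward uniform."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                      # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
sharp = tempered_softmax(logits, temperature=0.5)  # peaked output
flat = tempered_softmax(logits, temperature=5.0)   # near-uniform output
```

In either case the outputs sum to 1, consistent with the normalization to a common scale that Wang-X describes at [0093].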
Regarding the limitations for:
• generating the new peptide sequence includes sampling a multivariate unit-variate Gaussian distribution as input to the generator of claims 3 and 14;
• the encoder model embeds peptide sequences from a training dataset into vectors during training of claims 5 and 16; and
• the generator transforms a binding class label from the encoder and a sampled latent code vector into a peptide feature representation matrix, with each column of the matrix corresponding to an amino acid of claims 8 and 19:
Wang-X teaches an exemplary data flow diagram (Fig. 5B) of a GAN generator configured for generating positive simulated polypeptide-MHC-I interaction data; a Gaussian noise vector can be input into the generator, which outputs a distribution matrix. The input noise sampled from the Gaussian provides variability that mimics different binding patterns. The output distribution matrix represents the probability distribution of choosing each amino acid for every position in a peptide sequence. The distribution matrix can be normalized to remove choices that are less likely to provide binding signals, and a specific peptide sequence can be sampled from the normalized distribution matrix [0067].
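The sampling step paraphrased above can be illustrated with a minimal sketch (hypothetical names and toy values; not Wang-X's code): each row of a distribution matrix is normalized, and one amino acid is drawn per position.

```python
import random

# 20 standard amino acid one-letter codes (illustrative ordering)
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def sample_peptide(distribution_matrix, seed=0):
    """Sample one residue per position from a per-position probability
    distribution over the 20 amino acids (rows = positions)."""
    rng = random.Random(seed)
    peptide = []
    for row in distribution_matrix:
        total = sum(row)                       # normalize the row to sum to 1
        weights = [p / total for p in row]
        peptide.append(rng.choices(AMINO_ACIDS, weights=weights, k=1)[0])
    return "".join(peptide)

# toy 9-mer with a uniform distribution at every position
matrix = [[1.0] * 20 for _ in range(9)]
pep = sample_peptide(matrix)
```

In the disclosed system the matrix would come from the trained generator rather than being uniform, and low-probability entries could be zeroed out before sampling.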
Regarding the training dataset includes binding peptide sequences and non-binding peptide sequences relative to a major histocompatibility complex of claims 7 and 18, Wang-X teaches a computer-implemented method 1000 of training a neural network for binding affinity prediction may comprise collecting a set of positive (i.e., binding) biological data and negative (i.e., non-binding) biological data from a database [0106], and further discloses data for peptide binding to MHC-I protein complexes encoded by HLA alleles is known in the art and available from databases including, but not limited to, IEDB, AntiJen, MHCBN, SYFPEITHI, and the like [0127].
Regarding the processor, memory, and computer program of claim 12, Wang-X teaches software [0115], and a processor and system memory [0116].
Wang-X does not teach minimizing a kernel maximum mean discrepancy regularization term of claims 6 and 17, nor the Wasserstein metric of claims 9 and 20. Regarding the encoding of training data and cross-entropy losses of claims 1 and 12, Rahman bolsters the encoding and cross-entropy loss teachings of Wang-X as follows:
Regarding encoding training peptide sequences using an encoder model of claims 1 and 12, Rahman shows that, because known protein structures in the Protein Data Bank (PDB) contain proteins of varying lengths, five settings are considered and five different training datasets are constructed. All distance matrices in a given training dataset have the same k x k size, where k ∈ {6, 9, 16, 64, 128}; 115,850 tertiary structures are extracted from the PDB. In addition, non-overlapping fragments of a given length l are sampled from chain 'A' of each protein structure starting at the first residue, and the corresponding distance matrix is calculated and added to the training dataset (p.6, under "Training Dataset(s)"), providing representations (i.e., vectors). Rahman additionally teaches using binary cross-entropy as the loss function (p.8, under "2.7 Implementation Details").
Regarding training the encoder model includes minimizing a kernel maximum mean discrepancy regularization term of claims 6 and 17, Rahman shows the maximum mean discrepancy (MMD) test statistic allows measuring the distance between two distributions p(x) and q(y). Briefly, MMD is the largest difference in expectations mx and my over functions in the unit ball of a reproducing kernel Hilbert space (RKHS) and is defined as the squared distance between the embeddings in an RKHS; and MMD used in training generative adversarial models to measure the distance of generated samples to some reference target set (p.7, under Maximum Mean Discrepancy (MMD)).
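For illustration of the MMD statistic Rahman describes (a minimal one-dimensional sketch with an RBF kernel; not Rahman's implementation, and all names are hypothetical), the biased squared-MMD estimate is zero for identical samples and grows as the two distributions separate.

```python
import math

def rbf_kernel(x, y, sigma=1.0):
    """Gaussian (RBF) kernel between two scalars."""
    return math.exp(-((x - y) ** 2) / (2 * sigma ** 2))

def mmd_squared(xs, ys, sigma=1.0):
    """Biased estimate of squared MMD between two 1-D samples:
    E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)] in an RBF RKHS."""
    kxx = sum(rbf_kernel(a, b, sigma) for a in xs for b in xs) / (len(xs) ** 2)
    kyy = sum(rbf_kernel(a, b, sigma) for a in ys for b in ys) / (len(ys) ** 2)
    kxy = sum(rbf_kernel(a, b, sigma) for a in xs for b in ys) / (len(xs) * len(ys))
    return kxx + kyy - 2 * kxy

same = mmd_squared([0.0, 0.1, 0.2], [0.0, 0.1, 0.2])        # identical samples
different = mmd_squared([0.0, 0.1, 0.2], [5.0, 5.1, 5.2])   # well-separated samples
```

Used as a regularization term, minimizing such a statistic pulls generated samples toward a reference target set, as in Rahman's generative-model training.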
Regarding training the encoder model, the generator model, and the discriminator model uses a loss function that is based on a Wasserstein metric of claims 9 and 20, Rahman teaches use of the Wasserstein GAN (WGAN) in machine learning to improve stability when training GANs; and WGAN replaces the loss function with the Wasserstein distance (p.6, under "2.4 Wasserstein GAN"). Rahman further teaches the Wasserstein metric (p.8, under "Earth Mover's Distance (EMD)").
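The Wasserstein (earth mover's) distance referenced above can be illustrated in one dimension, where for equal-size samples it reduces to the mean absolute difference between sorted values (a sketch only; hypothetical names, not the WGAN loss itself).

```python
def wasserstein_1d(xs, ys):
    """First Wasserstein (earth mover's) distance between two equal-size
    1-D samples: mean absolute difference of the sorted values."""
    assert len(xs) == len(ys)
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

d_zero = wasserstein_1d([1.0, 2.0, 3.0], [3.0, 1.0, 2.0])   # same distribution
d_shift = wasserstein_1d([1.0, 2.0, 3.0], [2.0, 3.0, 4.0])  # shifted by 1
```

A WGAN replaces the cross-entropy GAN loss with an approximation of this distance between the real and generated distributions, which is the stability improvement Rahman cites.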
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the GAN model training method for encoding, generating, and discriminating data in predicting MHC peptide binding of Wang-X with the methods for matrix representation, maximum mean discrepancy (MMD) in measuring distribution differences, and use of the Wasserstein distance in a GAN of Rahman, to come to a method of training a GAN model for predicting MHC binding of peptides using maximum mean discrepancy and the Wasserstein metric. Rahman adds motivation to use the Wasserstein metric by stating that Wasserstein GANs have been proposed in machine learning to improve stability when training GANs (Rahman, p.6, section 2.4 Wasserstein GAN), and further discloses that the MMD test statistic allows measuring the distance between two distributions (Rahman, p.7). One of ordinary skill would have had a reasonable expectation of success, as Wang-X and Rahman are drawn to related teachings of training generative adversarial networks for protein characteristic predictions, and one of ordinary skill in the art would have understood how to, and would have been motivated to, apply the teachings of Rahman to Wang-X; as such, the combination would have been obvious.
103-B:
Claims 10 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Wang-X, (US 2019/0259474 A1, published 08/22/2019; cited on the attached form PTO-892), in view of Wang, Lu, (hereafter Wang-L), (In Proceedings of The Web Conference 2020, pp. 1785-1795, (04/20/2020); cited on the attached form PTO-892).
Regarding training a generative adversarial network (GAN) model to generate binding peptide sequences relating to an MHC protein associated with a virus pathogen or tumor; generating a new binding peptide sequence using the trained GAN; and developing a treatment for the virus pathogen or tumor associated with the MHC protein using the new binding peptide sequence, of claim 10: Wang-X shows the systems and methods are useful for identifying peptides that bind to the MHC-I of T cells and target cells; the peptides are tumor-specific peptides, virus peptides, or peptides displayed on the MHC-I of a target cell. The target cell can be a tumor cell, a cancer cell, or a virally infected cell; this provides a vaccine (i.e., developing a treatment), for example a cancer vaccine containing one or more peptides identified with the systems and methods of Wang-X [0130]. Wang-X further discloses generating, by a GAN generator, increasingly accurate positive simulated polypeptide MHC-I interaction data until a GAN discriminator classifies the positive simulated polypeptide-MHC-I interaction data as positive [0152].
While Wang-X teaches the viral pathogen and tumor aspects of claim 11, Wang-X does not teach treating a person using the developed treatment of claim 11.
Regarding treating a person using the developed treatment of claim 11, Wang-L shows a dynamic treatment regime (DTR) is a sequence of tailored treatment decision rules that specify how the treatments should be adjusted through time according to the dynamic states of patients (p.1785, col.1). Wang-L further shows descriptions of model notations which include choosing medication or medication dosage for a patient (p.1787, table 1).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the method of Wang-X, which trains a GAN model to develop a treatment (i.e., a vaccine) by generating viral or tumor peptide sequences predicted to bind MHC proteins, with the method of Wang-L for treating a patient as informed by an adversarial network, to arrive at a developed treatment of a patient that includes generated peptides (which bind MHC protein(s) and which are associated with a virus pathogen or tumor). One of ordinary skill would have had a reasonable expectation of success because Wang-X and Wang-L are drawn to related teachings of using generative adversarial networks for developing vaccines and/or guiding therapies, and one of ordinary skill in the art would have understood how to, and would have been motivated to, apply the teachings of Wang-X and Wang-L. As such, the combination would have been obvious.
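As illustrative background for the adversarial-training framework discussed above, the following is a minimal toy sketch (hypothetical shapes, synthetic one-hot "binding" data, and a linear generator/discriminator chosen for brevity; this is not Wang-X's or the claimed implementation):

```python
# Toy GAN over fixed-length peptide sequences: the discriminator is updated to
# distinguish real from generated sequences, and the generator is updated to
# fool the discriminator. All shapes and data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
L, A, Z = 9, 20, 16          # peptide length, amino-acid alphabet, latent dim
D_IN = L * A

def softmax(logits):
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_real(n):
    # Synthetic "binding" peptides biased toward residues 0-4, one-hot encoded.
    idx = rng.integers(0, 5, size=(n, L))
    x = np.zeros((n, L, A))
    np.put_along_axis(x, idx[..., None], 1.0, axis=-1)
    return x.reshape(n, D_IN)

# Generator: linear map from latent code to per-position softmax over residues.
Wg = 0.1 * rng.standard_normal((Z, D_IN))
# Discriminator: logistic regression on the flattened representation.
wd = np.zeros(D_IN)
bd = 0.0
lr = 0.05

for step in range(200):
    z = rng.standard_normal((64, Z))             # latent codes ~ N(0, I)
    fake = softmax((z @ Wg).reshape(-1, L, A)).reshape(-1, D_IN)
    real = sample_real(64)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    for x, y in ((real, 1.0), (fake, 0.0)):
        p = sigmoid(x @ wd + bd)
        g = p - y                                # d(BCE)/d(logit)
        wd -= lr * x.T @ g / len(g)
        bd -= lr * g.mean()

    # Generator step: push D(fake) toward 1, backpropagating through softmax.
    p = sigmoid(fake @ wd + bd)
    dx = (p - 1.0)[:, None] * wd[None, :]        # d(BCE, target=1)/d(fake)
    dx = dx.reshape(-1, L, A)
    s = softmax((z @ Wg).reshape(-1, L, A))
    dlogit = s * (dx - (dx * s).sum(-1, keepdims=True))   # softmax Jacobian
    Wg -= lr * z.T @ dlogit.reshape(-1, D_IN) / len(z)

d_real = sigmoid(sample_real(256) @ wd + bd).mean()
```

The alternating updates are the point of the sketch: the discriminator is iteratively updated to classify sequences as real or fake, and the generator is iteratively updated to produce sequences the discriminator mistakes for real ones, which is the objective the cited references describe at a high level.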
Note regarding prior art and claims 4, 15:
Regarding claims 4 and 15, while the recited equations of claims 4 and 15 are interpreted to recite a log-softmax function for expressing cross-entropy losses, the specific recited equations appear to be free of the prior art. The closest art, which is not prior art, is: Han, (Dual Projection Generative Adversarial Networks for Conditional Image Generation, arXiv preprint arXiv:2108.09016, pp.1-19 (11/29/2021); cited on the attached form PTO-892). (Han shows the equations at p.3, col.2.)
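For general background only (this is the standard log-softmax cross-entropy form, not a reproduction of the recited equations or of Han's equations): given an embedding function θ(·) and a learned projection vector v_y for each class y (e.g., binding or non-binding), a log-softmax cross-entropy loss takes the form

```latex
\mathcal{L}_{\mathrm{CE}}(x, y) \;=\; -\log \frac{\exp\!\left(v_y^{\top}\theta(x)\right)}{\sum_{y'} \exp\!\left(v_{y'}^{\top}\theta(x)\right)}
```

An objective of the general kind recited would combine two such losses, one discriminating binding from non-binding sequences in the training data and one discriminating generated binding sequences from non-binding sequences in the training data.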
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Instant claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over reference claims 1, 2, 4, 6, 8, and 9 of U.S. Patent No. 12,482,534 B2 (the '534 patent) in view of Wang-X (US 2019/0259474 A1, published 08/22/2019; cited on the attached form PTO-892). Although the claims at issue are not identical, they are not patentably distinct from each other as follows:
Instant Application (17/711,617) vs. Patent No. 12,482,534
Instant Claim No. | Limitations | Reference Claim No. | Limitations
Instant claims 1, 12: A computer-implemented method (system including a processor and memory in claim 12) of training a model, comprising: encoding training peptide sequences using an encoder model; generating a new peptide sequence using a generator model; training the encoder model, the generator model, and the discriminator model to cause the generator model to generate new peptides that the discriminator mistakes for the training peptide sequences, including learning projection vectors with respective cross-entropy losses for binding sequences and non-binding sequences.
Reference claim 1: A computer-implemented method for generating new binding peptides to Major Histocompatibility Complex (MHC) proteins, comprising: transforming each sequence of a set of binding peptide sequences and a set of the nonbinding peptide sequences into a feature representation matrix with each column corresponding to an amino acid, the each column being either a Blocks Substitution Matrix (BLOSUM) encoding vector or a pre-trained amino acid embedding vector, the each sequence being represented by concatenating a BLOSUM encoding vector or pre-trained embedding vector of amino acids; training, by a processor device, a Generative Adversarial Network (GAN) having a generator and a discriminator only on a set of binding peptide sequences given training data comprising the set of binding peptide sequences and a set of non-binding peptide sequences, wherein a GAN training objective comprises the discriminator being iteratively updated to distinguish generated peptide sequences from sampled binding peptide sequences as fake or real and the generator being iteratively updated to fool the discriminator; generating new peptide sequences with user-specified binding properties to create a vaccine based on the trained GAN; and administering the vaccine, wherein said training comprises learning two projection vectors for a binding class by optimizing a GAN training objective that combines two cross-entropy losses, a first of the two cross-entropy losses discriminating binding peptide sequences in the training data from nonbinding peptide sequences in the training data, and a second of the two cross-entropy losses discriminating generated binding peptide sequences from non-binding peptide sequences in the training data.
Instant claims 2, 13: the generator model outputs amino acid representations using a plurality of tempering softmax output units.
Reference claim 8: The new peptide sequences are output from the generator as Softmax output units, and wherein the generator comprises a fully-connected layer for receiving an input random noise vector and another fully-connected layer for outputting the Softmax output.
Reference claim 4: the tempering Softmax units are employed with entropy regularization for implicit temperature control of the tempering Softmax units.
Instant claims 3, 14: generating the new peptide sequence includes sampling a multivariate unit-variate Gaussian distribution as input to the generator.
Reference claim 6: the GAN operates on sampled latent code vectors from a multivariate Gaussian distribution obtained from the training data.
Instant claims 4, 15: the cross-entropy losses include:
[media_image1.png (greyscale): recited cross-entropy loss equations]
where p corresponds to training peptide sequences, q corresponds to peptide sequences generated by the generator model, vpλ represents embeddings of the training peptide sequences and generated peptide sequences, vqλ represents embeddings of the peptide sequences generated by the generator model, θ(·) is an embedding function, and x+ ~Px and x-~Qx are respective training and generated sequences, with P and Q being respective training and generated distributions.
Reference claim 9: the cross-entropy losses are expressed as:
[media_image2.png (greyscale): recited cross-entropy loss equations]
where p and q correspond to conditional distribution or loss function using real/generated binding peptides, the terms v_p and v_q represent embeddings of the real and generated samples, respectively, θ(·) is an embedding function, x+~Px and x-~Qx are real and generated sequences (with P and Q being the respective real and generated distributions), and y is a data label.
Instant claims 5, 16: the encoder model embeds peptide sequences from a training dataset into vectors during training.
Reference claim 1 (transforming step): transforming each sequence of a set of binding peptide sequences and a set of the nonbinding peptide sequences …encoding vector or a pre-trained amino acid embedding vector,
Instant claims 6, 17: the encoder model includes minimizing a kernel maximum mean discrepancy regularization term.
Reference claim 4: the tempering Softmax units are employed with entropy regularization for implicit temperature control of the tempering Softmax units.
Instant claims 7, 18: the training dataset includes binding peptide sequences and nonbinding peptide sequences relative to a major histocompatibility complex.
Reference claim 1 (preamble): A computer-implemented method for generating new binding peptides to Major Histocompatibility Complex (MHC) proteins
Reference claim 1 (training step): training, by a processor device, a Generative Adversarial Network (GAN) having a generator and a discriminator only on a set of binding peptide sequences given training data comprising the set of binding peptide sequences and a set of non-binding peptide sequences
Instant claims 8, 19: the generator transforms a binding class label from the encoder and a sampled latent code vector into a peptide feature representation matrix, with each column of the matrix corresponding to an amino acid.
Reference claim 1 (transforming step): transforming each sequence of a set of binding peptide sequences and a set of the nonbinding peptide sequences into a feature representation matrix with each column corresponding to an amino acid, the each column being either a Blocks Substitution Matrix (BLOSUM) encoding vector or a pre-trained amino acid embedding vector, the each sequence being represented by concatenating a BLOSUM encoding vector or pre-trained embedding vector of amino acids;
Instant claims 9, 20: training the encoder model, the generator model, and the discriminator model uses a loss function that is based on a Wasserstein metric.
Reference claim 2: the GAN is a Wasserstein GAN.
Instant claim 10: training a generative adversarial network (GAN) model to generate binding peptide sequences relating to a major histocompatibility complex (MHC) protein associated with a virus pathogen or tumor; generating a new binding peptide sequence using the trained GAN; developing a treatment for the virus pathogen or tumor associated with the MHC protein using the new binding peptide sequence.
Reference claim 1 (training step): training, by a processor device, a Generative Adversarial Network (GAN)…
Reference claim 1 (preamble): computer-implemented method for generating new binding peptides to Major Histocompatibility Complex (MHC) proteins,
Reference claim 1 (generating step): generating new peptide sequences with user-specified binding properties to create a vaccine based on the trained GAN;
Instant claim 11: treating a person for the virus pathogen or tumor using the developed treatment.
Reference claim 1 (administering step): administering the vaccine,
Reference claims 1, 2, 4, 6, 8, and 9 of the '534 patent do not recite the limitations of instant claims 10 and 11 concerning "virus pathogen or tumor," i.e., generating peptide sequences relating to a major histocompatibility complex (MHC) protein associated with a virus pathogen or tumor in developing a treatment (e.g., a vaccine), nor the processor, memory, and program of instant claim 12. However, Wang-X teaches systems and methods for identifying peptides that bind to the MHC-I of T cells and target cells; the peptides are tumor-specific peptides, virus peptides, or a peptide that is displayed on the MHC-I of a target cell, and the target cell can be a tumor cell, a cancer cell, or a virally infected cell. Thus, one embodiment provides a vaccine (Wang-X, [0130]). Regarding the processor, memory, and computer program of instant claim 12, Wang-X teaches software [0115], and a processor and system memory [0116].
It would have been obvious to one of ordinary skill in the art to modify the method of reference claims 1, 2, 4, 6, 8, and 9 of the '534 patent to include the viral and tumor peptides of Wang-X, because basing development of a treatment on the GAN-generated viral or tumor peptides associated with MHC proteins would provide a treatment for cancer or viral disease. Accordingly, instant claims 1-20 are not patentably distinct from reference claims 1, 2, 4, 6, 8, and 9 of the '534 patent.
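As an illustrative aside (hypothetical shapes and stand-in weights; not the patented implementation), the latent-code sampling from a multivariate Gaussian and the tempering-Softmax output units discussed in the chart above can be sketched as:

```python
# Sketch of latent-code sampling and temperature-scaled ("tempering") softmax
# output units. Shapes and weights are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(1)
L, A, Z = 9, 20, 16          # peptide length, amino-acid alphabet, latent dim

def tempering_softmax(logits, temperature):
    # Lower temperature -> sharper (more one-hot-like) residue distributions.
    scaled = logits / temperature
    e = np.exp(scaled - scaled.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def entropy(p):
    return -(p * np.log(p + 1e-12)).sum(axis=-1)

z = rng.standard_normal((4, Z))            # latent codes ~ N(0, I)
Wg = rng.standard_normal((Z, L * A))       # stand-in generator weights
logits = (z @ Wg).reshape(-1, L, A)

sharp = tempering_softmax(logits, 0.1)
soft = tempering_softmax(logits, 10.0)
# An entropy-regularization term would penalize high-entropy (blurry) outputs,
# giving implicit control of the effective temperature during training.
assert entropy(sharp).mean() < entropy(soft).mean()
```

Each row of the output is a per-position probability distribution over the amino-acid alphabet; the temperature parameter trades off sharpness against diversity of the generated residues.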
Conclusion
No claims are allowed.
This Office action is a Non-Final action. A shortened statutory period for reply to this action is set to expire THREE MONTHS from the mailing date of this action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Meredith A Vassell whose telephone number is (571)272-1771. The examiner can normally be reached 8:30 - 4:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, KARLHEINZ SKOWRONEK can be reached at (571)272-9047. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/M.A.V./Examiner, Art Unit 1687
/G. STEVEN VANNI/Primary Examiner, Art Unit 1686