DETAILED ACTION
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/12/2025 has been entered.
Claim Objections
Claims 1, 12, and 13 are objected to because of the following informalities:
Claim 1, line 5, “finetuning the trained language” should be changed to -- finetuning the trained language model --
Claims 12 and 13 are duplicate claims and one of them should be deleted.
Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1 – 5, 7 – 13, 15 – 17, and 19 – 23 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step One
The claims are directed to a method (claims 1 – 5 and 7), a system with structural components (claims 8 – 13), and a non-transitory computer readable medium (claims 15 – 17 and 19 – 23). Thus, each of the claims falls within one of the four statutory categories (i.e., process, machine, manufacture, or composition of matter).
As to claim 1,
Step 2A, Prong One
The claim recites in part:
determining, using the probability distribution, a first probability of the first protein sequence and a second probability of the second protein sequence;
determining a score by comparing the first probability of the first protein sequence and the second probability of the second protein sequence, wherein a protein sequence that corresponds to a higher probability between the first probability and the second probability is the protein sequence with a higher fitness;
determining the value of the loss function based on the first probability, the second probability, and the score;
generating a fitness score for the input protein sequence, wherein the fitness score corresponds to an ability of the input protein sequence to be included in a vaccine.
As drafted and under its broadest reasonable interpretation, these limitations cover performance of the limitations in the mind (including an observation, evaluation, judgment, or opinion) or with the aid of pencil and paper, but for the recitation of generic computer components. For example, (1) a human can easily determine the likelihood (probability) of an event by the simple math of dividing the number of favorable outcomes by the total number of possible outcomes. (2) Said human can easily compare two probabilities and determine which of the two is the higher probability. (3) Said human can determine how far off the higher probability is from the actual selected protein sequence (loss function). (4) Said human can generate a fitness score, and if the fitness score is above a threshold, the protein sequence will be suitable for a vaccine.
Accordingly, at Step 2A, Prong One, the claim is directed to an abstract idea.
Step 2A, Prong Two
The judicial exception is not integrated into a practical application. In particular, the claim recites the additional elements of:
receiving a protein sequence pair of the protein sequence pairs, the protein sequence pair including a first protein sequence and a second protein sequence;
after the language model is finetuned: receiving an input protein sequence;
which amounts to extra-solution activity of gathering data for use in the claimed process. As described in MPEP 2106.05(g), limitations that amount to merely adding insignificant extra-solution activity to a judicial exception do not amount to significantly more than the exception itself, and cannot integrate a judicial exception into a practical application.
The claim further recites:
training the language model on a first training dataset of unlabeled protein sequences to learn a probability distribution over protein sequences in the first training dataset;
finetuning the trained language model as a pairwise classifier that classifies a relative fitness of protein sequence pairs from a second training dataset of protein sequences generated by applying mutations to protein sequences existing in nature, wherein a size of the second training dataset is less than the size of the first training dataset, and wherein the finetuning the trained language model iteratively continues until a value of a loss function is minimized
which is recited at a high level of generality with no detail of the training process and amounts to no more than adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea (see MPEP 2106.05(f)).
The recitation of a language model, protein fitness, protein sequence, protein fitness prediction module, training dataset, unlabeled protein sequences, probability distribution, pairwise classifier, relative fitness, protein sequence pairs, loss function, and fitness score amounts to generally linking the use of the judicial exception to a particular technological environment or field of use (see MPEP 2106.05(h)).
As such, the claim does not integrate the judicial exception into a practical application.
Accordingly, at Step 2A, Prong Two, the additional elements, individually or in combination, do not integrate the judicial exception into a practical application.
Step 2B
In accordance with Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, the additional elements of:
receiving a protein sequence pair of the protein sequence pairs, the protein sequence pair including a first protein sequence and a second protein sequence;
after the language model is finetuned: receiving an input protein sequence;
are recited at a high level of generality and amount to extra-solution activity of receiving data, i.e., pre-solution activity of gathering data for use in the claimed process. The courts have found limitations directed to obtaining information electronically, recited at a high level of generality, to be well-understood, routine, and conventional (see MPEP 2106.05(d)(II), “receiving or transmitting data over a network,” “electronic record keeping,” and “storing and retrieving information in memory”).
The claim further recites:
training the language model on a first training dataset of unlabeled protein sequences to learn a probability distribution over protein sequences in the first training dataset;
finetuning the trained language model as a pairwise classifier that classifies a relative fitness of protein sequence pairs from a second training dataset of protein sequences generated by applying mutations to protein sequences existing in nature, wherein a size of the second training dataset is less than the size of the first training dataset, and wherein the finetuning the trained language model iteratively continues until a value of a loss function is minimized
which is recited at a high level of generality with no detail of the training process and amounts to no more than adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea (see MPEP 2106.05(f)).
The recitation of a language model, protein fitness, protein sequence, protein fitness prediction module, training dataset, unlabeled protein sequences, probability distribution, pairwise classifier, relative fitness, protein sequence pairs, loss function, and fitness score amounts to generally linking the use of the judicial exception to a particular technological environment or field of use (see MPEP 2106.05(h)).
Accordingly, at Step 2B the additional elements individually or in combination do not amount to significantly more than the judicial exception.
As to claim 2, the limitation “wherein the language model is a generative language model” amounts to generally linking the use of the judicial exception to a particular technological environment or field of use (see MPEP 2106.05(h)).
As to claim 3,
Step 2A, Prong One
The claim recites in part:
wherein the value of the loss function includes the first probability, the second probability, and a score based on a first fitness value of the first protein sequence and a second fitness value of the second protein sequence
As drafted and under its broadest reasonable interpretation, this limitation covers performance of the limitation in the mind (including an observation, evaluation, judgment, or opinion) or with the aid of pencil and paper, but for the recitation of generic computer components. For example, a human can easily calculate the difference between predicted outputs (probabilities) and actual outputs.
Accordingly, at Step 2A, Prong One, the claim is directed to an abstract idea.
Step 2A, Prong Two
The claim does not include additional elements that integrate the judicial exception into a practical application or amount to significantly more than the judicial exception itself.
Step 2B
The claim does not include additional elements that are sufficient to amount to “significantly more” than the judicial exception.
As to claim 4,
Step 2A, Prong One
The claim recites in part:
wherein determining the score further comprises:
determining the first fitness value using the first probability of the first protein sequence;
determining the second fitness value using the second probability of the second protein sequence; and
determining the score based on first fitness value and the second fitness value
As drafted and under its broadest reasonable interpretation, this limitation covers performance of the limitation in the mind (including an observation, evaluation, judgment, or opinion) or with the aid of pencil and paper, but for the recitation of generic computer components. For example, a human can determine a fitness value, which simply involves quantifying data.
Accordingly, at Step 2A, Prong One, the claim is directed to an abstract idea.
Step 2A, Prong Two
The claim does not include additional elements that integrate the judicial exception into a practical application or amount to significantly more than the judicial exception itself.
Step 2B
The claim does not include additional elements that are sufficient to amount to “significantly more” than the judicial exception.
As to claim 5, the limitation “wherein the second training dataset further includes fitness labels” amounts to generally linking the use of the judicial exception to a particular technological environment or field of use (see MPEP 2106.05(h)).
As to claim 7,
Step 2A, Prong One
The claim recites in part:
wherein determining the score further comprises:
selecting the protein sequence pairs from a few-shot protein dataset, wherein sequences in the few-shot protein dataset are variants of protein sequences that exist in nature
As drafted and under its broadest reasonable interpretation, this limitation covers performance of the limitation in the mind (including an observation, evaluation, judgment, or opinion) or with the aid of pencil and paper, but for the recitation of generic computer components. For example, a human can easily select, by circling with a pencil (or highlighting on a computer), the desired protein sequence from a list of protein sequences.
Accordingly, at Step 2A, Prong One, the claim is directed to an abstract idea.
Step 2A, Prong Two
The claim does not include additional elements that integrate the judicial exception into a practical application or amount to significantly more than the judicial exception itself.
Step 2B
The claim does not include additional elements that are sufficient to amount to “significantly more” than the judicial exception.
Claim 8 has similar limitations as claim 1. Therefore, the claim is rejected for the same reasons as above.
The claim further recites a system, memory, and processor, which are recited at a high level of generality and amount to no more than mere instructions to apply the exception using generic computer components (see MPEP 2106.05(f)).
Claim 9 has similar limitations as claim 2. Therefore, the claim is rejected for the same reasons as above.
Claim 10 has similar limitations as claim 3. Therefore, the claim is rejected for the same reasons as above.
Claim 11 has similar limitations as claim 4. Therefore, the claim is rejected for the same reasons as above.
Claim 12 has similar limitations as claim 7. Therefore, the claim is rejected for the same reasons as above.
Claim 13 has similar limitations as claim 7. Therefore, the claim is rejected for the same reasons as above.
Claim 15 has similar limitations as claim 1. Therefore, the claim is rejected for the same reasons as above.
The claim further recites a non-transitory computer readable medium, which is recited at a high level of generality and amounts to no more than mere instructions to apply the exception using a generic computer component (see MPEP 2106.05(f)).
Claim 16 has similar limitations as claim 3. Therefore, the claim is rejected for the same reasons as above.
Claim 17 has similar limitations as claim 4. Therefore, the claim is rejected for the same reasons as above.
Claim 19 has similar limitations as claim 5. Therefore, the claim is rejected for the same reasons as above.
Claim 20 has similar limitations as claim 7. Therefore, the claim is rejected for the same reasons as above.
Claim 21 has similar limitations as claim 2. Therefore, the claim is rejected for the same reasons as above.
As to claim 22,
Step 2A, Prong One
The claim recites in part:
wherein the function is the ability of the input protein sequence to be included in a vaccine.
As drafted and under its broadest reasonable interpretation, this limitation covers performance of the limitation in the mind (including an observation, evaluation, judgment, or opinion) or with the aid of pencil and paper, but for the recitation of generic computer components. For example, a human can easily use a generic computer to highlight (select) an input protein sequence to be included in a vaccine.
Accordingly, at Step 2A, Prong One, the claim is directed to an abstract idea.
Step 2A, Prong Two
The claim does not include additional elements that integrate the judicial exception into a practical application or amount to significantly more than the judicial exception itself.
Step 2B
The claim does not include additional elements that are sufficient to amount to “significantly more” than the judicial exception.
As to claim 23,
Step 2A, Prong One
The claim recites in part:
wherein the function is the ability of the input protein sequence to recognize a substrate
As drafted and under its broadest reasonable interpretation, this limitation covers performance of the limitation in the mind (including an observation, evaluation, judgment, or opinion) or with the aid of pencil and paper, but for the recitation of generic computer components. For example, a human can use a generic computer program to recognize a substrate from the input protein sequence by analyzing protein language embeddings.
Accordingly, at Step 2A, Prong One, the claim is directed to an abstract idea.
Step 2A, Prong Two
The claim does not include additional elements that integrate the judicial exception into a practical application or amount to significantly more than the judicial exception itself.
Step 2B
The claim does not include additional elements that are sufficient to amount to “significantly more” than the judicial exception.
Response to Arguments
Applicant's arguments filed 11/12/2025 have been fully considered but they are not persuasive.
Claim Rejections - 35 USC § 112
The newly added limitations overcome the previous rejection under 35 U.S.C. 112, and that rejection is withdrawn.
Claim Rejections - 35 USC § 101
The rejection under 35 U.S.C. 101 has not been overcome. The claims are abstract, and the steps in the claims can be performed as a mental process and/or with generic computer components. Additionally, the steps in the claims do not describe an improvement to technology in any way.
The applicant argues:
First, the claims are not directed to an abstract idea that has been identified by the Courts. The claims, as amended, are directed to a technique that (1) trains a language model on a first training dataset of unlabeled proteins and then (2) finetunes the trained language model on a second, smaller training dataset of protein sequence pairs. Moreover, the finetuning continues until the loss function is minimized. Once the language model is trained and finetuned, the language model is able to determine a fitness score for a protein sequence that is indicative of the ability of the protein sequence to be included in a vaccine (claims 1, 22), the ability of the protein sequence to perform a function (claims 8 and 15), or the ability of the protein to recognize a substrate (claim 23).
The M.P.E.P. states that the enumerated groupings of abstract ideas, including a mental process grouping, are still “firmly rooted in Supreme Court precedent as well as Federal Circuit decisions interpreting that precedent.” M.P.E.P. §2106.04(a). None of the concepts recited in the amended claims have been identified by the Courts as abstract — a threshold inquiry for identifying a judicial exception. Accordingly, for this reason alone, the claims are not directed to an abstract idea and are patent eligible under Step 2A, Prong I.
The examiner disagrees. The claimed steps of training a model, fine-tuning the model, minimizing the loss function, and generating a score are merely data analysis techniques that can be performed mentally or with generic computer components, and thus fall within the category of abstract ideas. The applicant's reliance on the use of a second, smaller dataset for fine-tuning does not further limit the claims. Merely stating that the second dataset is smaller does not imply that it is more refined, accurate, or “elite” in any way; the second, smaller dataset could easily contain inaccurate, noisy, or biased data. Therefore, the claims do not recite how the training and finetuning result in a technological improvement. As claimed, the training and fine-tuning are recited at a high level of generality with no detail of the training process and amount to no more than adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea (see MPEP 2106.05(f)).
Additionally, minimizing a loss function is simply a generic mathematical process of comparing an output to a target and reducing the difference, which is routine and does not improve how a computer operates. Likewise, the “fitness score” is just an evaluation metric that compares results and assigns a value, which is a basic function that a human can perform mentally. The claims fail to provide any specific way of using the minimized loss function and/or the fitness score to improve the language model, or any explanation of how they result in a technological improvement.
The applicant cites M.P.E.P. §2106.04(a), but the Applicant does not explain how the cited section is relevant to the presently claimed invention. The section is not tied to the claimed features, nor is any comparison provided demonstrating how it supports patent eligibility. It is unclear why the Applicant relies on M.P.E.P. §2106.04(a).
In evaluating whether a claim that requires a computer recites a mental process, examiners should carefully consider the broadest reasonable interpretation of the claim in light of the specification. For instance, examiners should review the specification to determine if the claimed invention is described as a concept that is performed in the human mind and applicant is merely claiming that concept performed 1) on a generic computer, or 2) in a computer environment, or 3) is merely using a computer as a tool to perform the concept. In these situations, the claim is considered to recite a mental process.
The applicant argues:
Second, based on the USPTO’s guidance, the mental process grouping is not without limits. The M.P.E.P. §2106.04(a)(2)II.A is clear that “a claim with limitations that cannot be practically performed in the human mind does not recite a mental process.” The USPTO Memo on patent eligibility dated August 4, 2025 (https://www.uspto.gov/sites/default/files/documents/memo-101-20250804.pdf) likewise reminds the Examiners that the “mental process grouping is not without limits” and the “Examiners are reminded not to expand this grouping in a manner that encompasses claim limitations that cannot practically be performed in the human mind.” Memo, p. 2. Applicant submits that at least the “training the language model” and “finetuning the trained language model” limitations of claims 1, 8, and 15 cannot be practically performed in the human mind. To the extent the Examiner disagrees, Applicant requests that the Examiner provide evidence that suggests that training and finetuning the large language model is practically possible in the human mind. Accordingly, claims 1, 8, and 15 and their dependent claims are not abstract.
The Memo also reminds the Examiners that USPTO’s Example 39, directed to training a neural network using two stages, is not abstract. See Memo, pp. 2-3. In fact, the USPTO’s guidance is explicit that claim limitations directed to training the neural network are not directed to a mental process as they cannot be practically performed in a human mind. See USPTO’s Example 39 (https://www.uspto.gov/sites/default/files/documents/101_examples_37to42_20190107.pdf), pp. 8-9. The instant claims 1, 8, and 15 also recite a two-step training process that involves “training the language model on a first training dataset of unlabeled protein sequences” and then “finetuning the trained language as a pairwise classifier that classifies a relative fitness of protein sequence pairs from a second training dataset of protein sequences generated by applying mutations to protein sequences existing in nature.” Accordingly, based on the USPTO's own rationale, claims 1, 8, and 15 are also not directed to a mental process, because, like the claims in Example 39, they cannot be practically performed in the human mind, and are therefore not abstract.
The examiner disagrees. The claimed “training” and “fine-tuning” do not provide any technological improvement. The claims merely describe the use of conventional machine learning steps to produce a desired output. There is no indication of an improved training mechanism, a novel architecture, or any improved computational performance. Instead, the claims are directed to the result of generating a classification or fitness score, which is not a technical improvement. The recited claims do not transform the abstract idea into patent-eligible subject matter.
As per MPEP 2106.04(a)(2)(III)(C), a claim that requires a computer may still recite a mental process.
In evaluating whether a claim that requires a computer recites a mental process, examiners should carefully consider the broadest reasonable interpretation of the claim in light of the specification. For instance, examiners should review the specification to determine if the claimed invention is described as a concept that is performed in the human mind and applicant is merely claiming that concept performed 1) on a generic computer, or 2) in a computer environment, or 3) is merely using a computer as a tool to perform the concept. In these situations, the claim is considered to recite a mental process.
As claimed, the training and fine-tuning are recited at a high level of generality with no detail of the training process and amount to no more than adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea (see MPEP 2106.05(f)).
The applicant cites Example 39 of the ‘USPTO July 2024 Subject Matter Eligibility Examples,’ but the Applicant does not explain how the cited example is relevant to the presently claimed invention. The example is not tied to the claimed features, nor is any comparison provided demonstrating how it supports patent eligibility. The applicant's reliance on the use of a second, smaller dataset for fine-tuning neither further limits the claims nor relates to Example 39. Merely stating that the second dataset is smaller does not imply that it is more refined, accurate, or “elite” in any way (as the second training set in Example 39 clearly is). The second, smaller dataset could easily contain inaccurate, noisy, or biased data. Therefore, the claims do not recite how the training and finetuning result in a technological improvement.
The applicant argues:
First, Applicant points the Examiner to Ex Parte Desjardins, Appeal 2024-000567, Decision on Request for Rehearing ("Decision"), designated on November 4, 2025, as precedential. This Decision categorically rejects and overrules the PTAB's position that the claims are abstract based on a rationale similar to the one in the instant Office Action. See Office Action, p. 5. The Decision states that "categorically excluding AI innovations from patent protection in the United States jeopardizes America's leadership in this critical emerging technology. Yet, under the panel's reasoning, many AI innovations are potentially unpatentable - even if they are adequately described and nonobvious - because the panel essentially equated any machine learning with an unpatentable 'algorithm' and the remaining additional elements as 'generic computer components,' without adequate explanation." Decision, p. 9. The Decision further states that "Examiners and panels should not evaluate claims at such a high level of generality." Based on this decision, the Office's rationale that the claims are not tied to a practical application because they are simply implemented on a generic computer is insufficient.
The examiner disagrees. Firstly, the applicant cites Ex Parte Desjardins, but the Applicant does not explain how the cited decision is relevant to the presently claimed invention. The decision is not tied to the claimed features, nor is any comparison provided demonstrating how it supports patent eligibility. It is unclear why the Applicant relies on this decision. The applicant should compare the decision to the current application to make a better argument.
The applicant argues:
Second, the M.P.E.P. § 2106.04(d) provides various criteria for determining whether the claims integrate an invention into a practical application, including whether the claims are an "improvement in the functioning of a computer, or an improvement to other technology or technical field." The M.P.E.P. § 2106.04(d)(1) outlines a two-step framework for the Examiners to follow when determining whether the claims are directed to a practical application. First, the Examiners should evaluate the Specification to determine whether "the disclosure provides sufficient details such that one of ordinary skill in the art would recognize the claimed invention as providing an improvement.” Id. Second, “if the specification sets forth an improvement in technology, the claim must be evaluated to ensure that the claim itself reflects the disclosed improvement.” Id. In other words, the Examiner should evaluate that “the claim includes the components or steps of the invention that provide the improvement described in the specification.” Id.
The Specification satisfies step one of this two-step framework. The Specification describes that traditional, manual techniques for predicting protein fitness, and approaches that simply train a language model on unlabeled protein sequences, are insufficient to predict protein fitness for a particular function. Specification, ¶¶ 3, 18. This is, in part, because after training, a portion of the information in the language model is discarded. The solution to this problem is to finetune the trained language model. Specifically, the trained language model is finetuned on labeled data that includes protein fitness pairs. Id. at ¶¶ 19, 51-53. The Specification further explains that the finetuning continues iteratively until the loss function is minimized. Id. This finetuning improves the language model’s ability to generate a fitness score that corresponds to a protein fitness of a protein sequence to be included in a vaccine (claims 1, 22) or the ability of a protein sequence to perform a function (claims 8 and 15), such as recognizing a substrate (claim 23) — all of which are practical applications for training and using language models to identify protein fitness for different protein sequences.
To the extent the Office argues that the claims merely recite a processor and memory, or are recited at a high level of generality — these arguments fail. Office Action, pp. 3-5. Applicant notes that humans are not equipped to train and finetune language models. Further, the claims are not recited at a high level of generality, but explain, in a series of steps, how the claimed language model is first trained on a first training dataset of unlabeled protein sequences, and then further finetuned on the second training dataset of protein sequence pairs. The amended claims further recite, in a detailed series of steps, how the finetuning process occurs. Claims that recite a set of steps or rules that improve a technological process, e.g., training and finetuning the language model to predict protein fitness, are patent eligible under Mayo, Step 2A. See M.P.E.P. § 2106 citing to McRO, Inc. v. Bandai Namco Games America, Inc., 837 F.3d 1299, 1314 (Fed. Cir. 2016) (claims incorporating rules of a particular type that improved an existing technological process are patent eligible).
Accordingly, because the claims are tied to a practical application for training and finetuning an improved language model to generate a fitness score for a protein sequence, claims 1, 8, and 15 and their dependent claims are patent eligible under § 101. Reconsideration is respectfully requested.
The examiner disagrees. The alleged improvement of “training a language model on unlabeled protein sequences followed by fine-tuning on labeled protein fitness pairs” amounts to no more than the application of routine and conventional machine learning techniques. The claims fail to disclose a novel structure, a specialized training procedure, or any improvement to the functionality of a computer. The claims merely use generic computing components to perform data processing and model training. The training and fine-tuning are recited at a high level of generality with no detail of the training process and amount to no more than adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea (see MPEP 2106.05(f)).
Secondly, iterative fine-tuning “until a loss function is minimized” is by no means a technical improvement. Minimizing a loss function is inherent to, and a fundamental part of, nearly all machine learning models and represents an intended result rather than a specific improvement. The Applicant is silent on how the loss function is novel or differs from conventional approaches, or on how its minimization leads to an improvement in the language model.
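The generic character of “minimizing a loss function” is apparent from how little is needed to express it: any gradient routine drives a differentiable loss toward its minimum regardless of the model. A hypothetical illustration, not the claimed method:

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Generic loop: repeatedly step against the gradient until the
    loss is (approximately) minimized."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Minimizing the squared loss f(x) = (x - 3)^2, whose gradient is 2(x - 3):
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
```

The loop converges to the minimizer x = 3 without any problem-specific structure, which is the sense in which reciting loss minimization alone adds nothing beyond the abstract idea.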
Thirdly, the generation of a “fitness score” is also not a technological improvement; it is simply a value that characterizes how well a protein sequence performs or matches a desired function. A human could easily generate a fitness score as a means to evaluate and compare characteristics of a protein sequence. The Applicant does not claim a specific, novel algorithm for generating this fitness score; therefore, the fitness score amounts to no more than an abstract result.
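Indeed, the comparison recited in claim 1 (determining which of two sequence probabilities is higher and treating that sequence as fitter) reduces to an elementary comparison. A sketch with names assumed by the examiner for illustration:

```python
def fitness_score(p_first, p_second):
    """Score from comparing the two model probabilities: positive when
    the first sequence is the more probable, hence fitter, one."""
    return p_first - p_second

def fitter_sequence(seq_probs):
    """Return the sequence whose probability (hence fitness) is higher."""
    return max(seq_probs, key=seq_probs.get)

best = fitter_sequence({"MKV": 0.02, "MVA": 0.005})
```

Selecting the larger of two numbers is a mental-process-level operation, which is why the score, absent a specific algorithm, remains an abstract result.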
Lastly, the Applicant’s reliance on McRO is misplaced. In McRO, the claims were found eligible because they recited specific rules that improved a technological process in a defined and non-conventional manner. In contrast, the present claims do not recite any specific rules, algorithms, or novel techniques that control how the model operates. Instead, the claims broadly recite the desired outcome of predicting protein fitness using generic training and fine-tuning steps. The claims merely recite an abstract idea implemented on a generic computer and do not integrate the exception into a practical application.
It is important to note that the judicial exception alone cannot provide the improvement. The improvement can be provided by one or more additional elements. See the discussion of Diamond v. Diehr, 450 U.S. 175, 187 and 191-92, 209 USPQ 1, 10 (1981) in subsection II, below. In addition, the improvement can be provided by the additional element(s) in combination with the recited judicial exception. See MPEP § 2106.04(d) (discussing Finjan, Inc. v. Blue Coat Sys., Inc., 879 F.3d 1299, 1303-04, 125 USPQ2d 1282, 1285-87 (Fed. Cir. 2018)).
It is important to keep in mind that an improvement in the abstract idea itself is not an improvement in technology. For example, in Trading Technologies Int’l v. IBG, 921 F.3d 1084, 1093-94, 2019 USPQ2d 138290 (Fed. Cir. 2019), the court determined that the claimed user interface simply provided a trader with more information to facilitate market trades, which improved the business process of market trading but did not improve computers or technology (MPEP 2106.05(a)(II)).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRANDON S COLE whose telephone number is (571)270-5075. The examiner can normally be reached Mon - Fri 7:30am - 5pm EST (alternate Fridays off).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Omar Fernandez can be reached on 571-272-2589. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/BRANDON S COLE/ Primary Examiner, Art Unit 2128