Prosecution Insights
Last updated: April 19, 2026
Application No. 17/827,309

MACHINE LEARNING FOR AMINO ACID CHAIN EVALUATION

Non-Final OA: §101, §103, Double Patenting
Filed: May 27, 2022
Examiner: THOMPSON, MILANA KAYE
Art Unit: 1687
Tech Center: 1600 — Biotechnology & Organic Chemistry
Assignee: Instadeep Ltd.
OA Round: 1 (Non-Final)
Grant Probability: Favorable
OA Rounds: 1-2
To Grant: 3y 2m

Examiner Intelligence

Career Allow Rate: 0% (grants only 0% of cases; 0 granted / 0 resolved; -60.0% vs TC avg)
Interview Lift: +0.0% (resolved cases with interview; minimal lift)
Avg Prosecution: 3y 2m (typical timeline); 14 currently pending
Total Applications: 14 (career history, across all art units)

Statute-Specific Performance

§101: 26.7% (-13.3% vs TC avg)
§103: 28.3% (-11.7% vs TC avg)
§102: 15.0% (-25.0% vs TC avg)
§112: 21.7% (-18.3% vs TC avg)
Tech Center averages are estimates; based on career data from 0 resolved cases.

Office Action

Rejections: §101, §103, Double Patenting
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Status

Claims 1-20 are pending.

Priority

This application claims the benefit under 35 U.S.C. § 119(a) and 37 CFR § 1.55 of United Kingdom patent application no. GB 2107714.4, filed on May 28, 2021, and no. GB 2108956.0, filed on June 22, 2021. Therefore, the instant application has the effective filing date of May 28, 2021.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 27 May 2022, 10 January 2023, 20 November 2023, and 08 July 2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements have been considered by the examiner.

Drawings

The drawings, submitted 27 May 2022, require correction because several amino acid sequences do not meet the proper disclosure requirements, as detailed below.

Nucleotide and/or Amino Acid Sequence Disclosures

REQUIREMENTS FOR PATENT APPLICATIONS CONTAINING NUCLEOTIDE AND/OR AMINO ACID SEQUENCE DISCLOSURES

Items 1) and 2) provide general guidance related to requirements for sequence disclosures.

1) 37 CFR 1.821(c) requires that patent applications which contain disclosures of nucleotide and/or amino acid sequences that fall within the definitions of 37 CFR 1.821(a) must contain a "Sequence Listing," as a separate part of the disclosure, which presents the nucleotide and/or amino acid sequences and associated information using the symbols and format in accordance with the requirements of 37 CFR 1.821-1.825.
This "Sequence Listing" part of the disclosure may be submitted:

a) In accordance with 37 CFR 1.821(c)(1), via the USPTO patent electronic filing system (see Section I.1 of the Legal Framework for Patent Electronic System (https://www.uspto.gov/PatentLegalFramework), hereinafter "Legal Framework") as an ASCII text file, together with an incorporation-by-reference of the material in the ASCII text file in a separate paragraph of the specification as required by 37 CFR 1.823(b)(1), identifying: i) the name of the ASCII text file; ii) the date of creation; and iii) the size of the ASCII text file in bytes;

b) In accordance with 37 CFR 1.821(c)(1), on read-only optical disc(s) as permitted by 37 CFR 1.52(e)(1)(ii), labeled according to 37 CFR 1.52(e)(5), with an incorporation-by-reference of the material in the ASCII text file according to 37 CFR 1.52(e)(8) and 37 CFR 1.823(b)(1) in a separate paragraph of the specification identifying: i) the name of the ASCII text file; ii) the date of creation; and iii) the size of the ASCII text file in bytes;

c) In accordance with 37 CFR 1.821(c)(2), via the USPTO patent electronic filing system as a PDF file (not recommended); or

d) In accordance with 37 CFR 1.821(c)(3), on physical sheets of paper (not recommended).

2) When a "Sequence Listing" has been submitted as a PDF file as in 1(c) above (37 CFR 1.821(c)(2)) or on physical sheets of paper as in 1(d) above (37 CFR 1.821(c)(3)), 37 CFR 1.821(e)(1) requires a computer readable form (CRF) of the "Sequence Listing" in accordance with the requirements of 37 CFR 1.824.

a) If the "Sequence Listing" required by 37 CFR 1.821(c) is filed via the USPTO patent electronic filing system as a PDF, then 37 CFR 1.821(e)(1)(ii) or 1.821(e)(2)(ii) requires submission of a statement that the "Sequence Listing" content of the PDF copy and the CRF copy (the ASCII text file copy) are identical.
b) If the "Sequence Listing" required by 37 CFR 1.821(c) is filed on paper or read-only optical disc, then 37 CFR 1.821(e)(1)(ii) or 1.821(e)(2)(ii) requires submission of a statement that the "Sequence Listing" content of the paper or read-only optical disc copy and the CRF are identical.

Specific deficiencies and the required response to this Office Action are as follows:

Specific deficiency – This application contains sequence disclosures in accordance with the definitions for nucleotide and/or amino acid sequences set forth in 37 CFR 1.821(a)(1) and (a)(2). However, this application fails to comply with the requirements of 37 CFR 1.821-1.825. The sequence disclosures are located in the drawings (05/27/2022): Figures 1-6 (parts 104, 106, 402, 610).

Required response – Applicant must provide:
- A "Sequence Listing" part of the disclosure, as described above in item 1); as well as
- An amendment specifically directing entry of the "Sequence Listing" part of the disclosure into the application in accordance with 37 CFR 1.825(b)(2);
- A statement that the "Sequence Listing" includes no new matter, in accordance with 37 CFR 1.825(b)(5); and
- A statement that indicates support for the amendment in the application, as filed, as required by 37 CFR 1.825(b)(4).
If the "Sequence Listing" part of the disclosure is submitted according to item 1) a) or b) above, Applicant must also provide:
- A substitute specification in compliance with 37 CFR 1.52, 1.121(b)(3) and 1.125 inserting the required incorporation-by-reference paragraph, consisting of: a copy of the previously-submitted specification, with deletions shown with strikethrough or brackets and insertions shown with underlining (marked-up version); a copy of the amended specification without markings (clean version); and a statement that the substitute specification contains no new matter.

If the "Sequence Listing" part of the disclosure is submitted according to item 1) b), c), or d) above, Applicant must also provide:
- A replacement CRF in accordance with 37 CFR 1.825(b)(6); and
- A statement according to item 2) a) or b) above.

Specification

Applicant is reminded of the proper language and format for an abstract of the disclosure. The abstract should be in narrative form and generally limited to a single paragraph, on a separate sheet, within the range of 50 to 150 words in length. The abstract should describe the disclosure sufficiently to assist readers in deciding whether there is a need for consulting the full patent text for details. The language should be clear and concise and should not repeat information given in the title. It should avoid phrases which can be implied, such as "The disclosure concerns," "The disclosure defined by this invention," and "The disclosure describes." In addition, the form and legal phraseology often used in patent claims, such as "means" and "said," should be avoided.

The disclosure is objected to for an informality in the form of legal jargon ("said position") found in lines 5 and 7 of the abstract. Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101.

Eligibility Step 1: Subject matter eligibility is evaluated in accordance with MPEP § 2106. Claims 1-17 are directed to a statutory category (method). Claims 18-19 are directed to a statutory category (system). Claim 20 is directed to a statutory category (machine). Therefore, in accordance with MPEP § 2106.03, claims 1-20 fall within a statutory category. [Eligibility Step 1: YES]

Eligibility Step 2A: This step determines whether a claim is directed to a judicial exception in accordance with MPEP § 2106.

Eligibility Step 2A – Prong One: Limitations are analyzed to determine whether the claims recite any concepts that equate to a judicial exception (i.e., an abstract idea, law of nature, or natural phenomenon). Possible judicial exceptions are explored below.

Claims 1, 18, 20: performing a process to generate second data, the second data comprising a set of one or more probability values associated with at least one position in the amino acid chain, the process comprising, for a said position in the sequence of letters, applying a language model to the sequence of letters to determine at least one probability value associated with the said position (mathematical concept)

Claim 2: The computer-implemented method of claim 1, wherein performing the process to generate second data comprises performing the process for each position in the sequence of letters to determine one or more probability values for each position.
(mathematical concept)

Claim 3: The computer-implemented method of claim 1, wherein performing the process to generate second data comprises performing the process for each position in the sequence of letters to determine two or more probability values for each position (mathematical concept)

Claim 4: The computer-implemented method of claim 1, wherein performing the process to generate second data comprises performing the process for each position in the sequence of letters to determine one or more probability values for each position (mathematical concept), and wherein the method comprises determining a probability value of a second type based on the probability values associated with the respective letter for each position. (mathematical concept)

Claim 5: The computer-implemented method of claim 4, wherein the probability value of the second type is determined based on a product of the probability values associated with the respective letter for each position. (mathematical concept)

Claim 6: The computer-implemented method of claim 4, wherein the probability value of the second type is determined based on a sum of log functions of each of the probability values associated with the respective letter for each position. (mathematical concept)

Claim 7: generating a second probability value of the second type associated with an amino acid chain which is different to the amino acid chain represented in the first data (mathematical concept); generating third data representing a comparison of the first probability value of the second type and the second probability value of the second type. (mental process)

Claim 8: The computer-implemented method of claim 1, wherein the method comprises selecting one or more positions, and wherein the step of performing the process to generate second data comprises performing the process for the selected one or more positions to determine one or more probability values for each selected position. (mental process, mathematical concept)

Claim 9: The computer-implemented method of claim 8, wherein performing the process to generate the second data comprises performing the process for the selected one or more positions to determine two or more probability values for each position. (mathematical concept)

Claim 10: The computer-implemented method of claim 9, wherein the method further comprises generating fourth data comprising a representation of one or more alternative amino acid chains from the first data using the second data. (mental process, mathematical concept)

Claim 11: The computer-implemented method of claim 10, wherein generating the fourth data comprises determining one or more alternative amino acid chains by: determining a first ordered list of amino acids associated with a first selected position, the first ordered list being ordered according to probability values associated with each of the amino acids for the first selected position (mental process); determining a second ordered list of amino acids associated with a second selected position, the second ordered list being ordered according to probability values associated with each of the amino acids for the selected position (mental process); generating one or more alternative amino acid chains by selecting amino acids from the first ordered list and the second ordered list, wherein the selection prioritizes amino acids for each position according to the associated probability values. (mental process, mathematical concept)

Claim 12: The computer-implemented method of claim 1, wherein performing the process for the said position comprises masking a said letter at the said position (mental process) and wherein applying the language model to the sequence of letters includes applying the language model to the sequence of letters with the said letter masked. (mathematical concept, mental process)
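For orientation, the operations identified above as mathematical concepts (obtaining per-position probability values by masking a position and applying a language model, then combining them into a "second type" probability as a product of per-position values, or equivalently a sum of their logs, per claims 5-6 and 12) can be sketched in a few lines of Python. The frequency-based "language model" below is a hypothetical stand-in for a trained network and does not reflect the applicant's actual model:

```python
import math

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard one-letter codes

def toy_language_model(masked_seq: str, pos: int) -> dict:
    """Hypothetical stand-in for a trained language model: returns a
    probability for each possible amino acid at the masked position,
    here based simply on letter frequencies in the unmasked context
    (with Laplace smoothing so every probability is nonzero)."""
    counts = {aa: 1 for aa in AMINO_ACIDS}
    for i, letter in enumerate(masked_seq):
        if i != pos and letter in counts:
            counts[letter] += 1
    total = sum(counts.values())
    return {aa: c / total for aa, c in counts.items()}

def score_chain(chain: str) -> float:
    """'Second type' score for the whole chain: the sum of the log of
    each per-position probability (the log of the product in claim 5,
    i.e. the sum of log functions in claim 6)."""
    log_p = 0.0
    for pos, letter in enumerate(chain):
        masked = chain[:pos] + "X" + chain[pos + 1:]  # mask the letter (claim 12)
        probs = toy_language_model(masked, pos)
        log_p += math.log(probs[letter])
    return log_p

print(score_chain("MKTAYIAKQR"))  # a log-probability, so always negative
```

Replacing `toy_language_model` with a trained network would leave the masking and log-sum aggregation unchanged.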
Claim 13: The computer-implemented method of claim 1, wherein the applying the language model comprises selecting the language model from a set of one or more language models. (mental process)

Claim 14: The computer-implemented method of claim 13, wherein the language model is selected based on the first data. (mental process)

Claim 16: and training the Transformer model to identify a respective set of known amino acid chains (mathematical concept)

Limitations that generate secondary data by taking existing information and manipulating it using mathematical functions (encoding, softmax, log functions, products) describe organizing information through mathematical calculations that could be executed by hand or with pen and paper. Training the transformer architecture, according to the current disclosure, entails inputting data (amino acid chains) to a model and outputting numerical data that undergoes secondary (softmax) calculations [spec: 0088-0089]. As such, limitations of this nature fall within the mathematical concepts grouping of abstract ideas, as exemplified by Digitech Image Techs., LLC v. Electronics for Imaging, Inc., 758 F.3d 1344, 1350, 111 USPQ2d 1717, 1721 (Fed. Cir. 2014).

Limitations that select, determine, or remove a particular type of data equate to analysis techniques that require mere mental observations and notations of data. As such, limitations that involve activities of this manner fall into the mental process grouping of abstract ideas.

Claims 1-14, 16, and 18-20 therefore appear to recite judicial exceptions (abstract ideas). [Eligibility Step 2A – Prong One: YES]

Eligibility Step 2A – Prong Two: A claim that integrates a judicial exception into a practical application will apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception.
If the claim contains no additional claim elements beyond the abstract idea, the claim fails to integrate the abstract idea into a practical application (MPEP 2106.04(d)).

Eligibility Step 2B: Claim elements are probed for an inventive concept equating to significantly more than the judicial exception (MPEP 2106.04(II)).

The following limitations are additional elements that are analyzed to determine whether they integrate the judicial exceptions into practical applications:

Claims 1, 18, 20: obtaining first data, wherein the first data includes a representation of an amino acid chain, the representation comprising a sequence of two or more letters, wherein each letter of the sequence of letters corresponds to a respective amino acid of a set of possible amino acids and a position of each letter in the sequence of letters represents a respective position of a said amino acid in the amino acid chain; and

Claim 16: The computer-implemented method of claim 15, wherein the Transformer model is trained by: providing the Transformer model with a set of masked amino acid chains, each masked amino acid chain comprising a known amino acid chain in which at least one amino acid is masked;

Claim 17: The computer-implemented method of claim 15, wherein the method comprises obtaining a selection of a temperature value for use in the softmax function.

These limitations complete necessary data gathering activities for the claimed invention and do not place meaningful limits on, or integrate, the abstract ideas into a practical application. [Eligibility Step 2A – Prong Two: YES] Such data gathering activities that assess and measure data from prior processing to be used in a diagnosis are classified as insignificant extra-solution activity. These activities are considered well-known and conventional within the art, as exemplified by CyberSource v. Retail Decisions, Inc., 654 F.3d 1366, 1375, 99 USPQ2d 1690, 1694 (Fed. Cir. 2011).
[Eligibility Step 2B: NO]

Additional elements that may be categorized differently include:

Claims 1, 18, 20: wherein the language model is trained using one or more datasets representing amino acid chains.

Claim 3: wherein each probability value associated with a said position is associated with a different one of the set of possible amino acids from other probability values associated with the said position

Claim 4: wherein the one or more probability values for each position are probability values of a first type and include probability values associated with a respective letter in the sequence of letters for each position,

Claim 7: The computer-implemented method of claim 4, wherein the probability value of the second type is a first probability value of the second type,

Claim 9: wherein each probability value associated with a said position is associated with a different one of the set of possible amino acids from the other probability values associated with the said position.

Claim 15: The computer-implemented method of claim 1, wherein the language model comprises a Transformer model including at least an encoder and trained using the one or more datasets representing amino acid chains; and optionally, wherein an output of the Transformer model is input to a softmax function and the softmax function is dependent on a temperature value.

The limitations above specify the type of data that is gathered (amino acid chains, first-type probability values) and the analysis techniques used to transform them (Transformer model, probability values). Selecting a particular data source or type of data to be manipulated is classified as insignificant extra-solution activity and does not integrate the judicial exceptions of the claimed invention as a whole into a practical application per MPEP 2106.05(g). [Eligibility Step 2A – Prong Two: YES] The courts classify performing repetitive calculations (i.e., probability values) as a well-understood, routine, and conventional activity per Flook, 437 U.S. at 594, 198 USPQ at 199. Furthermore, Gao et al. (Patterns; Vol. 1(9); 2020) reviews deep learning in protein structural modeling and design and affirms the routine, well-understood, and conventional nature of transformer models, as presented within the disclosure, with the same applications and advantages. [Eligibility Step 2B: NO]

Additional elements that may be categorized differently include:

Claim 1: A computer-implemented method for evaluating an amino acid chain, the computer-implemented method comprising:

Claim 18: A computer system comprising at least one processor and at least one storage, the storage including: a trained language model which has been trained using one or more datasets representing amino acid chains; and computer-executable instructions which, when executed by the at least one processor, cause the computer system to:

Claim 19: The computer system of claim 18, wherein the computer system includes one or more user interfaces.

Claim 20: A non-transitory computer-readable storage medium comprising computer-executable instructions which, when executed by one or more processors, cause the processors to:

These limitations recite components of generic computing systems, or the implementation of a method in a generic computer environment, in the form of computationally evaluating data with a user interface and language model. Elements of this nature do not integrate the judicial exceptions into a practical application when viewed separately or in the context of the invention as a whole, as the computer acts as a mere tool to execute the judicial exceptions (evaluation), as exemplified by Versata Development Group v. SAP America, 793 F.3d 1306, 1335, 115 USPQ2d 1681, 1702 (Fed. Cir. 2015) and Alice Corp. Pty. Ltd. v. CLS Bank Int'l, 573 U.S. 208, 223, 110 USPQ2d 1976, 1983 (2014).
[Eligibility Step 2A – Prong Two: YES] Furthermore, the components, when viewed separately or in the context of the claimed invention as a whole, are well-understood, routine, and conventional within the art and do not result in a significant improvement to technology: per FairWarning IP, LLC v. Iatric Sys., 839 F.3d 1089, 1095, 120 USPQ2d 1293, 1296 (Fed. Cir. 2016) for accelerating data analysis solely from the capabilities of a general-purpose computer; Interval Licensing LLC v. AOL, Inc., 896 F.3d 1335, 1344-45, 127 USPQ2d 1553, 1559-60 (Fed. Cir. 2018) for displaying information on a computer display, without meaningful limits; and Gao et al. (Patterns; Vol. 1(9); 2020) for language models trained on amino acid sequences. Thus, the elements lack an inventive concept. [Eligibility Step 2B: NO]

As such, claims 1-20 are directed to judicial exceptions and rejected under 35 U.S.C. 101, in accordance with the Alice/Mayo evaluation of MPEP § 2106.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-11, 13-15, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Bikard (US 2021/0193259 A1), as evidenced by Blanchard et al. (arXiv:1909.03469v1; 2019). Bikard describes computer-implemented methods and devices that can cause a processor to generate protein sequences via an autoregressive neural network. Blanchard describes accurate computation of softmax functions.

Claims 1, 18, and 20 are drawn to computer-implemented methods, systems, and media that gather data pertaining to an amino acid chain. The method obtains a representation of an amino acid chain and computes at least one probability value associated with a position within the representation via a language model (a probabilistic model designed to predict the likelihood of a sequence of words). The model is trained on at least one dataset that contains a plurality of amino acid chains. Bikard teaches obtaining a dataset comprising protein sequences [0120] and encoding samples of the learning dataset into latent vectors, also referred to as latent codes or representations [0061]. Bikard further teaches inputting the latent code into an autoregressive module to obtain probabilities that represent probabilities for amino acids to be selected at locations of the sequence [abstract].
The module is trained with the protein sequence dataset [0120] and includes a probabilistic encoder and decoder [0078] that predict the next element in a sequence, given the previous values in the sequence [0076]. Bikard teaches use of a processing device [0267] and read-only memory for storing computer programs for implementing the invention [0269]. Regarding claim 19, Bikard further teaches including a screen for displaying data and/or serving as a graphical interface with the user [0275].

Claim 2 is directed to generating at least one probability value for each position in the amino acid sequence. Bikard teaches using the input to generate multiple variables, one of which is the probability for each amino acid to be selected at a given position of the sequence [0122].

Additional claims are directed to selecting at least one position of the input sequence to generate an associated probability value (claim 8), then further generating at least two probability values, representing the association between different possible amino acids, for each position (claim 3), or for the selected position(s), in the input sequence (claim 9). The claims also generate an amino acid chain representation using the input sequence (claim 10) and select the first and second amino acids via the probability values associated with the possible amino acids for the positions (claim 11). Bikard teaches that the decoder predicts the probability that each amino acid will occupy a given position and that the chosen amino acid is the one with the highest probability [0152]. Bikard further teaches that generating an ordered sequence of amino acids comprises a step of selecting sequentially each amino acid of the ordered sequence and, after an amino acid is selected to belong to the ordered sequence, a step of checking the likelihood of usefulness of the ordered sequence [0021].
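The claim 10-11 mechanism mapped above (per-position candidate lists ordered by probability, combined into alternative chains) can be illustrated with a small sketch. This is illustrative only, with hypothetical probability values; it is neither Bikard's nor the applicant's implementation:

```python
from itertools import product

def rank_alternatives(chain, per_position_probs, top_k=2):
    """Claim-11-style enumeration: at each selected position, order the
    candidate amino acids by probability, keep the top-k, and combine
    the choices across positions into alternative chains."""
    ordered = {}
    for pos, probs in per_position_probs.items():
        ordered[pos] = sorted(probs, key=probs.get, reverse=True)[:top_k]
    positions = sorted(ordered)
    alternatives = []
    for combo in product(*(ordered[p] for p in positions)):
        alt = list(chain)
        for p, aa in zip(positions, combo):
            alt[p] = aa  # substitute the chosen amino acid at this position
        alternatives.append("".join(alt))
    return alternatives

# Hypothetical model outputs for two selected positions of a toy chain
probs = {1: {"A": 0.6, "G": 0.3, "V": 0.1},
         4: {"L": 0.5, "I": 0.4, "F": 0.1}}
print(rank_alternatives("MKTAY", probs))  # ['MATAL', 'MATAI', 'MGTAL', 'MGTAI']
```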
Bikard further teaches generating a protein sequence from a latent vector, additional information (conditions), and previously determined amino acids of the sequence [0084].

Regarding claim 4, Bikard teaches a step of modifying probabilities obtained from the autoregressive neural network, denoted the first autoregressive neural network, as a function of probabilities obtained from a second autoregressive neural network different from the first [0025]. Regarding claim 5, Bikard teaches that the joint data distribution is modeled as a product of conditionals [0076]. Regarding claim 6, Bikard teaches a conditional probability distribution table wherein the probability of each amino acid is computed as a function of the previous amino acids in the sequence and of the up-sampled latent vector [0042], where the output x' is the result of a softmax function, as displayed in fig. 3 [0103]. The output of the softmax function inherently is the result of a sum of log functions, as evidenced by Blanchard (page 1, column 1). Regarding claim 7, Bikard further teaches comparing the distribution of amino acids in generated sequences with patterns of amino acid occurrences in the training dataset, as displayed in figure 8 [0145].

Additional claims are directed to selecting a language model from a set (claim 13), based on the input amino acid representation (claim 14). Regarding claims 13 and 14, Bikard teaches the invention as described previously, wherein the autoregressive neural network is of the variational auto-encoder type or of the adversarial auto-encoder type [0028]. Bikard further teaches that the main difference between the two model types is the additional information (conditions) that can be included in or omitted from the input amino acid representation [0068].

Bikard does not explicitly teach that the probabilistic language model must include a Transformer model, with at least one encoder, trained on an amino acid chain dataset (claim 15).
Bikard does, however, teach that design choices relating to the structure of the decoder play an important role in determining the ability of the model to generate interesting novel proteins [0241] and compares many model types with different architectures [0231], trained on the same protein dataset [0246]. Bikard further teaches that other generative models can be constructed in which the encoder and decoder architectures are modified to include attention mechanisms as described in "Attention Is All You Need" (Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin; arXiv:1706.03762v4) [0300], which introduced the Transformer architecture and is cited on the applicant's IDS. Therefore, though Bikard does not explicitly teach the inclusion of a Transformer model, it teaches, motivates, and suggests that one of ordinary skill in the art consider an embodiment of the described invention which includes the Transformer architecture. It would similarly be obvious to train the model on protein/amino acid datasets, as Bikard uses the technique to compare other model types.

Claims 12 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Bikard (US 2021/0193259 A1), as evidenced by Blanchard et al. and applied to claims 1-11, 13-15, and 18-20, in view of Nambiar et al. (Proceedings of the 11th ACM International Conference on Bioinformatics, Computational Biology and Health Informatics; 2020). Nambiar describes a neural network that performs numerous protein prediction tasks. Nambiar teaches a transformer encoder architecture that is trained by inputting a tokenized amino acid sequence (page 3, column 2) and a masked language modelling task that, given the input sequence, selects a random sample of tokens in the sequence to be replaced with a special token [MASK] and predicts the masked token (page 4, column 1).
Nambiar further teaches feeding the aggregate sequence representation from the transformer into an output layer, which consists of a single-layer feed-forward neural network and softmax classifier (page 5, column 1). As such, Nambiar teaches a computer-implemented method analogous to Bikard and the claimed invention. Though Bikard does not actively mask a letter of the input sequence, Bikard teaches that applying a dropout mask to some percentage of the positions in the "true context" can aid in putting more information into the latent code for amino acid predictions [0262]. Therefore, Bikard provides sufficient motivation for one of ordinary skill in the art to mask some parts of the input data, using the technique described by Nambiar, with a reasonable expectation of success and improvement.

Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Bikard (US 2021/0193259 A1), in view of Nambiar, as evidenced by Blanchard, as applied to claims 1-16 and 18-20 previously, and in further view of Dabre et al. (arXiv:2009.09372; 2020). Claim 17 is contingent upon the optional embodiment of the Transformer-based model's output being input to a softmax function dependent on a temperature value (claim 15). The limitation as claimed does not require prior art references; however, in the interest of compact prosecution, it is directed to having a selection of temperature values for the softmax function, as embodied in Nambiar. Dabre describes the role of softmax tempering in neural network-based language models. Dabre teaches training models for each of the softmax temperature values 1.0 (default softmax), 1.2, 1.4, 1.6, 1.8, 2.0, 3.0, 4.0, 5.0, and 10.0 (page 3, column 2) and evaluating softmax tempering on top of the Transformer model (Vaswani et al., 2017), because it gives state-of-the-art results for NMT (page 3, column 2). Therefore, Bikard, Dabre, and Nambiar teach use of the Transformer architecture in their language model prediction tasks.
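The softmax tempering at issue in claims 15 and 17 amounts to dividing a model's raw outputs (logits) by a temperature value before normalizing; higher temperatures flatten the resulting probability distribution, as in Dabre's 1.0-10.0 sweep. A minimal sketch with hypothetical logits (not values from any cited reference):

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax: divide logits by T, then normalize.
    T=1.0 is the default softmax; larger T flattens the distribution."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]  # hypothetical per-candidate scores
print(softmax(logits, temperature=1.0))
print(softmax(logits, temperature=10.0))  # noticeably flatter
```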
Nambiar further inputs the transformer-encoded data into a softmax function, and Dabre describes how varying temperature values would affect data derived using that same technique. This provides one of ordinary skill in the art with sufficient motivation to evaluate their Transformer-derived softmax output in the same way, after considering a selection of temperature values.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting, provided the reference application or patent either is shown to be commonly owned with the examined application or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA, as explained in MPEP § 2159. See MPEP § 2146 et seq.
for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines which form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1 and 18-20 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 4-19, and 21-23 of co-pending application 19/075,312 (the reference application) in view of Jiang et al. (Transactions of the Association for Computational Linguistics, Vol. 8, 2020). Jiang et al. explores the role of prompts in language models. Claims 1, 18, and 20 are drawn to computer-implemented methods, systems, and computer-readable media that evaluate an amino acid chain as previously described.
The reference claims are directed to a method for evaluating biological sequence-based tasks by obtaining a natural language prompt and biological sequence data as input and using a language model to generate embeddings regarding the biological sequence data and a natural language response. Reference claim 4 specifies that the biological data is a polypeptide sequence. Co-pending claim 21 is directed to the use of a similar method for evaluating biological object-based tasks. Though not identical, the claims of both applications use a language model to generate data capable of evaluating an amino acid sequence. Therefore, they have the same effect and function. The claims may appear to differ in scope, as the instant application does not specifically claim a natural language prompt or response as a feature of its language model processing. However, Jiang et al. teaches that regardless of the end task, the knowledge contained in LMs is probed by providing a prompt and letting the LM either generate the continuation of a prefix or predict missing words (page 423, column 1). Similarly, the embeddings generated by the reference application with respect to the polypeptide sequence data do not explicitly include probability values. However, Jiang et al. teaches that a language model is, by definition, one that produces a probability distribution over text input (page 423, column 1). Therefore, the claims differ only in the language used to explicitly define the function of their language models, and describe identical processes at varying levels of generality. As such, they are patentably indistinct. This rejection is provisional, as the reference claims have not yet been patented.

Conclusion

No claims are currently allowed.

Correspondence

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MILANA THOMPSON, whose telephone number is (571) 272-8740. The examiner can normally be reached Monday - Friday, 9:00-6:00 ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Karlheinz Skowronek, can be reached at (571) 272-1113. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/M.K.T./
Examiner, Art Unit 1687

/Karlheinz R. Skowronek/
Supervisory Patent Examiner, Art Unit 1687

Prosecution Timeline

May 27, 2022
Application Filed
Jan 02, 2026
Non-Final Rejection — §101, §103, §DP (current)


Prosecution Projections

1-2
Expected OA Rounds
Favorable
Grant Probability
3y 2m
Median Time to Grant
Low
PTA Risk
Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
