Prosecution Insights
Last updated: April 19, 2026
Application No. 16/803,768

UNSUPERVISED PROTEIN SEQUENCE GENERATION

Non-Final OA (§101, §102, §103, §112)
Filed: Feb 27, 2020
Examiner: MINCHELLA, KAITLYN L
Art Unit: 1685
Tech Center: 1600 (Biotechnology & Organic Chemistry)
Assignee: The Regents of the University of California
OA Round: 3 (Non-Final)
Grant Probability: 27% (At Risk)
Projected OA Rounds: 3-4
Projected Time to Grant: 4y 5m
Grant Probability with Interview: 48%

Examiner Intelligence

Career Allow Rate: 27% (41 granted / 151 resolved; -32.8% vs TC avg)
Interview Lift: +20.9% among resolved cases with an interview (a strong lift)
Avg Prosecution (typical timeline): 4y 5m; 52 applications currently pending
Total Applications (career history): 203, across all art units

Statute-Specific Performance

§101: 29.9% (-10.1% vs TC avg)
§102: 8.9% (-31.1% vs TC avg)
§103: 22.5% (-17.5% vs TC avg)
§112: 29.8% (-10.2% vs TC avg)
Tech Center averages are estimates. Based on career data from 151 resolved cases.

Office Action

Rejections: §101, §102, §103, §112
DETAILED ACTION

Applicant's response, filed 15 July 2025, has been fully considered. The following rejections and/or objections are either reiterated or newly applied. They constitute the complete set presently being applied to the instant application.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 15 July 2025 has been entered.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Election/Restrictions

The restriction requirement in the Office action mailed 17 May 2023 has been withdrawn in view of claim amendments and further consideration of the claims because a search burden no longer exists. Claims 1-7 are no longer withdrawn.

Status of Claims

Claim 20 is cancelled. Claims 1-19 and 21 are pending, and all pending claims are rejected.

Priority

The effective filing date of the claimed invention is 27 Feb. 2019.

Claim Objections

The objection to claims 8, 19, and 21 in the Office action mailed 15 Jan. 2025 has been withdrawn in view of the claim amendments received 15 July 2025.

Claim Interpretation

Claims 8, 19, and 21 recite "the variational autoencoder is further configured to design and generate a protein…using the unsupervised protein sequence generation and a supervised phenotype model" and "the variational autoencoder is further configured to:…design and generate, using the generative model and a supervised model, a protein…" Applicant's specification at para.
[0021] discloses that the variational autoencoder may be augmented by a semi-supervised approach for downstream classification. Applicant's specification at para. [0043] describes that a phenotype model can be used to predict points in the latent feature space corresponding to features of interest, and that from these points in the latent space BioSeqVAE can hallucinate valid proteins with the desired phenotype. Accordingly, the variational autoencoder is interpreted to be semi-supervised (i.e., it utilizes both unsupervised and supervised learning). Claim 9 recites "ResNet", which is understood to mean a residual neural network.

Claim Interpretation - 35 USC § 112(f)

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C.
112, sixth paragraph:

(A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and

(C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word "means" (or "step") in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word "means" (or "step") are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Conversely, claim limitations in this application that do not use the word "means" (or "step") are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word "means," but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are:

"an autoregressive module configured to learn a local structure of an amino acid sequence…." in claim 8 (the generic placeholder "module," modified by functional language, is not itself modified by sufficient structure, material, or acts for performing the claimed functions); and

"the/a autoregressive module configured to: learn a local structure…" in claims 19 and 21.

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. For a computer-implemented 35 U.S.C. 112(f) claim limitation, the specification must disclose an algorithm for performing the claimed specific computer function, or else the claim is indefinite under 35 U.S.C. 112(b). See Net MoneyIN, Inc. v. VeriSign, Inc., 545 F.3d 1359, 1367, 88 USPQ2d 1751, 1757 (Fed. Cir. 2008). In the instant case, Applicant's specification at para. [0033]-[0034] and FIG. 1 disclose the structure of the autoregressive module to include one-dimensional convolution layers with skip connections in the style of ResNet, which performs the learning function.
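For orientation, the cited structure (one-dimensional convolution layers with skip connections in the style of ResNet) can be sketched numerically as follows; the array sizes, kernel values, and function names here are illustrative assumptions, not the applicant's disclosed implementation:

```python
import numpy as np

def conv1d_same(x, kernel):
    """1-D convolution over a (length,) signal with zero 'same' padding."""
    pad = len(kernel) // 2
    xp = np.pad(x, pad, mode="constant")
    return np.array([np.dot(xp[i:i + len(kernel)], kernel) for i in range(len(x))])

def residual_block(x, kernel):
    """One ResNet-style block: a convolution plus an identity skip connection."""
    return x + np.maximum(conv1d_same(x, kernel), 0.0)  # ReLU nonlinearity

# Toy "autoregressive module": a stack of residual 1-D convolutions applied to
# a numerically encoded amino-acid sequence (the encoding is hypothetical).
sequence = np.array([1.0, 3.0, 2.0, 5.0, 4.0, 1.0])
kernel = np.array([0.1, 0.2, 0.1])
out = sequence
for _ in range(3):
    out = residual_block(out, kernel)
print(out.shape)  # same length as the input sequence
```

The skip connection (the `x +` term) is what distinguishes a ResNet-style block from a plain convolution stack: each block learns a residual added to its input.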
Thus the limitation will be interpreted accordingly, including equivalents thereof. If Applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, Applicant may: (1) amend the claim limitations to avoid such interpretation (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function.

Claim Rejections - 35 USC § 112(b)

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

Claims 1-19 and 21 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention. Any newly recited portion is necessitated by claim amendment.

Claim 1, and claims dependent therefrom, are indefinite for the recitation of "known protein sequences". It is unclear to whom and/or when the protein sequences were "known", such that one of ordinary skill in the art could ascertain which protein sequences are considered "known" versus "unknown". For example, it is not clear whether a protein sequence currently "unknown", but later discovered, would constitute a "known protein sequence". Similarly, it is unclear whether a protein sequence known by a single individual, but unknown to others, would be considered "known". For purposes of examination, the dataset is interpreted to be of "protein sequences".

Claim 2 is indefinite for the recitation of "…each cluster of the complete dataset".
Claims 1 and 2 do not previously recite any clusters of the complete dataset, nor are clusters necessarily part of a dataset. As a result, it is unclear to which set of clusters "each cluster" in claim 2 refers. It is noted that Applicant's specification at para. [0027] discloses that sequences in the database may be clustered into groups that share a threshold amount of homology. Therefore, claim 2 is interpreted to mean the subset of protein sequences is determined by selecting protein sequences from at least one homology group.

Claim 6 is indefinite for the recitation of "…wherein the generative model is to analyze protein sequences…, model interactions…, utilize a latent feature space…". It is unclear whether the wherein clause is intended to further limit an intended use (e.g., "is to") of the generative model, to further limit one of the positively recited steps of claim 1 regarding how the generative model is used, or to further limit the structure and/or programming of the generative model. Clarification is requested via claim amendment. For purposes of examination, the generative model is interpreted to be configured to model interactions between distant amino acid residues, utilize a latent feature space, and generate protein sequences, but the claims do not require that the dataset include sequences of variable length (i.e., an intended use of the model).

Claim 6 is also indefinite for the recitation of "…distance amino acid residues" and "realistic protein sequences". The terms "distant" and "realistic" in claim 6 are relative terms which render the claim indefinite. The terms are not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention.
For purposes of examination, the limitation is interpreted to mean "amino acid residues" and "semantically-valid protein sequences".

Claims 8 and 21, and claims dependent therefrom, are indefinite for the recitation of "A non-transitory machine-readable medium having executable instructions to cause one or more processing units storing a variational autoencoder, the variational autoencoder comprising…", which is nonsensical. The metes and bounds of the claims are unclear because the claims do not recite what the executable instructions cause the one or more processing units to perform. Instead, the claims state that the one or more processing units store a variational autoencoder. It is unclear what functions the one or more processing units are intended to be programmed to perform according to the executable instructions, or whether the non-transitory machine-readable medium is only required to store the variational autoencoder comprising a parameterized encoder, a decoder, and the autoregressive module. Clarification is requested via claim amendment. For purposes of examination, the claims are interpreted to mean that the non-transitory computer-readable medium stores a variational autoencoder, and that the encoder, decoder, and autoregressive module are configured to perform their respective recited functions (e.g., "to estimate…", "to produce an output…", etc.). If Applicant wishes to recite that the instructions cause the processing units to perform unsupervised protein sequence generation, the claims can be amended to recite "…to perform unsupervised protein sequence generation comprising: estimating a latent variable…using a parameterized encoder…; producing an output in the data space…using a decoder… (etc.)…".

Claim 8, and claims dependent therefrom, recite "…a protein having a combination of desirable properties…". The term "desirable" in claim 8 is a relative term which renders the claim indefinite.
The term "desirable" is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. For purposes of examination, the claim is interpreted to mean generating a protein having a combination of designed properties.

Claim 8 is indefinite for the recitation of "…wherein the variational autoencoder is further configured to design and generate a protein….using the unsupervised protein sequence generation and a supervised phenotype model". The preamble of claim 8 recites "…storing a variational autoencoder for unsupervised protein sequence generation", but the body of claim 8 does not previously recite any unsupervised protein sequence generation, or a step of generating a protein sequence. Therefore, it is not clear what protein sequence is intended to be used in the step of designing and generating a protein.

Claims 19 and 21 are indefinite for the recitation of "…the variational autoencoder is further configured to:….train a generative model on the dataset…generate, using the generative model and a supervised learning model, a semantically-valid protein sequence…". It is unclear whether the "generative model" is a model separate from the variational autoencoder, such that the variational autoencoder trains a different model, or whether the generative model is the same model as the variational autoencoder. If Applicant intends for the variational autoencoder to be the generative model, it is further unclear in what way the autoencoder is intended to be configured to perform functions using itself, or whether the claims intend to recite steps of using/applying the autoencoder.
Furthermore, with respect to claim 19, given that claim 8 already recites "…the variational autoencoder is further configured to design and generate…using…a supervised phenotype model", it is further unclear whether the autoencoder is configured to determine a function of the protein using another supervised learning model, or whether this is the same supervised model of claim 8. In light of Applicant's specification at least at para. [0023], the "generative model" appears to refer to the variational autoencoder itself, and the supervised learning models in claims 8 and 19 appear to be the same; if this is the case, the claims should be amended to use consistent language and refer to the variational autoencoder and the supervised model. For purposes of applying prior art, the claims are interpreted to mean that the variational autoencoder is the generative model, and thus the instructions cause the processor to perform these steps using the variational autoencoder (i.e., determine a dataset…, train the variational autoencoder…, design and generate, using the variational autoencoder…, etc.). It is suggested that Applicant amend the claims to distinguish between the programming of the variational autoencoder and the part of the executable instructions that utilizes the variational autoencoder. For example, claim 19 could be amended to recite "…wherein the executable instructions further cause the one or more processing units to determine…; train…; generate…". Claim 21 could be similarly amended to recite what the instructions cause the processor to do, rather than reciting programming of the variational autoencoder.

Response to Arguments

Applicant's arguments filed 15 July 2025 regarding 35 U.S.C. 112(b) have been fully considered but are not persuasive. Applicant remarks that claims 8 and 21 are amended to recite a non-transitory machine-readable medium to store a variational autoencoder (Applicant's remarks at pg. 9, para. 2). This argument is not persuasive.
First, the claims do not recite that the non-transitory machine-readable medium stores the variational autoencoder; instead, they recite "…A non-transitory machine-readable medium having executable instructions to cause one or more processing units storing a variational autoencoder…", without stating what the instructions cause the processing unit(s) to perform.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-19 and 21 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Any newly recited portion is necessitated by claim amendment. The Supreme Court has established a two-step framework for this analysis, wherein a claim does not satisfy § 101 if (1) it is "directed to" a patent-ineligible concept, i.e., a law of nature, natural phenomenon, or abstract idea, and (2), if so, the particular elements of the claim, considered "both individually and as an ordered combination," do not add enough to "transform the nature of the claim into a patent-eligible application." Elec. Power Grp., LLC v. Alstom S.A., 830 F.3d 1350, 1353 (Fed. Cir. 2016) (quoting Alice, 134 S. Ct. at 2355). Applicant is also directed to MPEP 2106.

Step 1: The instantly claimed invention (claim 8 being representative) does not fall into one of the four statutory categories, as discussed above. [Step 1: NO]. However, in the interest of compact prosecution, the claims are being examined to determine whether they are directed to an abstract idea without significantly more.
Step 2A: It is first determined in Prong One whether a claim recites a judicial exception, and, if so, it is then determined in Prong Two whether the recited judicial exception is integrated into a practical application of that exception.

Step 2A, Prong 1: Under MPEP § 2106.04, the Step 2A (Prong 1) analysis requires determining whether a claim recites an abstract idea, law of nature, or natural phenomenon.

Claim 1 recites the following steps, which fall under the mental process and/or mathematical concepts grouping of abstract ideas: determining a dataset of known protein sequences, wherein the dataset comprises unlabeled or sparsely labeled data; training…a generative model on the dataset; and generating, using the generative model, a semantically-valid protein sequence example based on the dataset.

Claims 8 and 21 recite the following limitations, which fall under the mental process and/or mathematical concepts grouping of abstract ideas: a parameterized encoder configured to estimate a latent variable in a latent space given a particular data point in a data space; a decoder configured to produce an output in the data space given a particular point in the latent space; and an autoregressive module configured to learn a local structure of an amino acid sequence; wherein the variational autoencoder is further configured to design and generate a protein having a combination of desirable properties, using the unsupervised protein sequence generation and a supervised phenotype model (claim 8); and wherein the variational autoencoder is further configured to (claim 21): determine a dataset of known protein sequences, the dataset comprising unlabeled or sparsely labeled data, train a generative model on the dataset, and design and generate, using the generative model and a supervised model, a protein sequence having a target phenotype.
The identified claim limitations fall into the abstract-idea groupings of mathematical concepts and/or mental processes for the following reasons. Here, the encoder and decoder of a variational autoencoder recite a mathematical concept. Encoders for estimating a latent variable in a latent space require a mathematical relationship between input data and latent variables, via a probabilistic latent space such as a Gaussian distribution, as discussed in Applicant's specification at para. [0044]. Furthermore, decoders (i.e., generative mathematical models) and generative models for producing an output given a particular point in the latent space similarly recite a mathematical relationship, as the decoder reverses the function of the encoder by reconstructing the input data from a sampled point of the latent space (from a Gaussian distribution) via mathematical operations. For example, Applicant's specification at para. [0043] discloses that the decoder samples data from a multivariate normal distribution with estimated statistics, and each of the encoder and decoder is parameterized by weights, as recited in claim 18. This is similar to organizing information and manipulating information through mathematical correlations, Digitech Image Techs., LLC v. Electronics for Imaging, Inc., 758 F.3d 1344, 1350, 111 USPQ2d 1717, 1721 (Fed. Cir. 2014). Furthermore, the limitation of an autoregressive module recites a mathematical concept, given that autoregression is a statistical technique that amounts to a textual equivalent of performing mathematical calculations.
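The mathematical relationship described here (an encoder estimating Gaussian latent statistics, and a decoder reconstructing a data-space output from a point sampled in the latent space) can be illustrated with a minimal numerical sketch. The linear maps and dimensions below are hypothetical, chosen only to make the relationship concrete:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative linear "encoder": maps a 4-dim data point to 2-dim Gaussian
# latent statistics (mean and log-variance).
W_mu, W_logvar = rng.normal(size=(2, 4)), rng.normal(size=(2, 4))
# Illustrative linear "decoder": maps a 2-dim latent point back to data space.
W_dec = rng.normal(size=(4, 2))

def encode(x):
    """Estimate (mu, log-variance) of a latent variable for data point x."""
    return W_mu @ x, W_logvar @ x

def sample_latent(mu, logvar):
    """Reparameterization: z = mu + sigma * eps, with eps ~ N(0, I)."""
    return mu + np.exp(0.5 * logvar) * rng.normal(size=mu.shape)

def decode(z):
    """Produce an output in the data space given a point in the latent space."""
    return W_dec @ z

x = np.array([0.5, -1.0, 0.25, 2.0])
mu, logvar = encode(x)
x_hat = decode(sample_latent(mu, logvar))
print(mu.shape, x_hat.shape)
```

Every step is a composition of matrix products, exponentials, and Gaussian sampling, which is the sense in which the rejection characterizes the encoder/decoder pair as a mathematical relationship.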
Furthermore, training a generative model on the dataset and generating a protein with a target phenotype or desirable properties using the generative model/unsupervised protein sequence generation and a supervised learning algorithm recites a mathematical concept, as it requires iteratively applying the encoder and decoder and adjusting parameters of the model, outputting a protein sequence from the decoder, and inputting latent variables of the outputted protein sequence into a supervised learning model (i.e., a regression model) to predict a phenotype, all of which recite mathematical relationships. Thus the claims recite a mathematical relationship. See MPEP 2106.04(a)(2) I.

Furthermore, the step of determining a dataset of known protein sequences encompasses selecting a subset of protein sequences, as described in the specification at para. [0028], which can be performed mentally. Similarly, using a supervised model to generate a sequence with a target phenotype recites a mental process because it encompasses inputting numerical values into a trained regression model to produce an output via addition and multiplication. See MPEP 2106.04(a)(2) III.

Dependent claims 2-7 and 9-18 further limit the mathematical concept of claim 8. Dependent claim 2 further recites the mental process of identifying a subset by selecting a defined number of protein sequences from each cluster of the dataset. Dependent claims 3-4 and 7 further recite the mathematical concept of applying the generative model and a supervised learning model (e.g., a linear regression) by inputting a point in a latent feature space into the supervised learning model to predict a function of the protein sequence, as discussed above for claim 21. Dependent claim 5 further recites the mathematical concept of encoding the dataset of known protein sequences using the generative model.
Dependent claim 6 further limits the function of the mathematical concept of the generative model to analyze sequences of different lengths, model interactions between residues, utilize a latent feature space, and generate protein sequences. Dependent claims 9 and 13 further limit the mathematical concept of the encoder and decoder to include ResNet blocks, which learn residual functions (i.e., mathematical relationships) with layer inputs. Dependent claim 10 further limits the encoder to include a one-dimensional convolution layer that has a stride of two and channel doubling, which recites a mathematical concept (i.e., a convolution). Claims 11 and 15 further limit the mathematical concept of the ResNet blocks to include strided convolution layers. Claims 12 and 16 further limit the mathematical concept of the strided convolution layers to have a dilation pattern. Dependent claim 14 further limits the mathematical concept of the decoder to include a first one-dimensional convolution layer transposed with a convolution layer of the encoder. Dependent claim 17 further limits the mathematical concept of the encoder and decoder to organize strided convolution layers in a particular manner. Dependent claim 18 further limits the mathematical concept of the encoder and decoder to be parameterized by weights. Dependent claim 19 further recites the mental process of determining a dataset, and the mental process and mathematical concept of generating a semantically-valid protein sequence using the generative model and determining, using the generative model and a supervised learning model, a function of the semantically-valid protein sequence.

Therefore, claims 1-19 and 21 recite an abstract idea.
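As a concrete point of reference for the convolution limitations recited in the dependent claims (a stride of two, channel doubling, and a dilation pattern), a strided one-dimensional convolution behaves as in the following toy sketch; the helper name, kernels, and sizes are hypothetical:

```python
import numpy as np

def strided_conv1d(x, kernels, stride=2, dilation=1):
    """1-D convolution over an array of shape (channels, length).
    A stride of two roughly halves the length; one output row per kernel
    sets the number of output channels (e.g., channel doubling)."""
    in_ch, length = x.shape
    k = kernels.shape[-1]
    span = (k - 1) * dilation + 1          # receptive field under dilation
    positions = range(0, length - span + 1, stride)
    taps = [i * dilation for i in range(k)]
    out = np.array([[sum(np.dot(kern[c], x[c, [p + t for t in taps]])
                         for c in range(in_ch))
                     for p in positions]
                    for kern in kernels])
    return out

x = np.ones((1, 16))              # 1 channel, length 16
kernels = np.ones((2, 1, 3))      # 2 output channels: channel doubling
y = strided_conv1d(x, kernels, stride=2)
print(y.shape)  # (2, 7): doubled channels, roughly halved length
```

Passing `dilation=2` to the same helper would space the kernel taps apart, which is the "dilation pattern" idea of claims 12 and 16.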
[Step 2A, Prong 1: YES]

Step 2A, Prong 2: Under MPEP § 2106.04, the Step 2A, Prong 2 analysis requires identifying whether there are any additional elements recited in the claim beyond the judicial exception(s), and evaluating those additional elements to determine whether they integrate the exception into a practical application of the exception. This judicial exception is not integrated into a practical application for the following reasons.

Claims 2-19 do not recite any additional elements, and thus are part of the judicial exception of claim 8. The additional elements of claims 1, 8, and 21 include: a processing device (claim 1); and a non-transitory computer-readable medium (claims 8 and 21). A non-transitory computer-readable medium and a processing device are generic computer components. The courts have found that the use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data), or simply adding a general-purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation), does not integrate a judicial exception into a practical application. See MPEP 2106.05(f). Therefore, the additional elements simply add computer components after the fact to the abstract idea, and thus the claims as a whole do not integrate the abstract idea into a practical application. Thus, claims 1-19 and 21 are directed to an abstract idea. [Step 2A, Prong 2: NO]

Step 2B: In the second step, it is determined whether the claimed subject matter includes additional elements that amount to significantly more than the judicial exception. See MPEP § 2106.05. The claims do not include any additional steps appended to the judicial exception that are sufficient to amount to significantly more than the judicial exception, for the following reasons: the claims do not recite any additional elements.
It is noted that claims 19-20 fail to further limit the subject matter of claim 8, and thus are part of the judicial exception of claim 8. The additional elements of claims 1, 8, and 21 include: a processing device (claim 1); and a non-transitory computer-readable medium (claims 8 and 21). A non-transitory computer-readable medium and a processing device are conventional computer components. The courts have found that the use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data), or simply adding a general-purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation), does not provide significantly more. See Affinity Labs v. DirecTV, 838 F.3d 1253, 1262, 120 USPQ2d 1201, 1207 (Fed. Cir. 2016) (cellular telephone); TLI Communications LLC v. AV Automotive, LLC, 823 F.3d 607, 613, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016) (computer server and telephone unit). Thus, the claims as a whole do not amount to significantly more than the exception itself. [Step 2B: NO]

Therefore, the instantly rejected claims are not drawn to eligible subject matter, as they are directed to an abstract idea (and/or natural correlation) without significantly more. For additional guidance, Applicant is directed generally to MPEP § 2106.

Response to Arguments

Applicant's arguments filed 15 July 2025 regarding 35 U.S.C. 101 have been fully considered but are not persuasive. Applicant remarks that claim 8 is amended to recite a variational autoencoder configured to design and generate a protein having a combination of desirable properties or a target phenotype, that designing and generating a protein, a concrete molecule, integrates any abstract idea into a practical application, and that claims 9-19 are patent eligible for the same reasons (Applicant's remarks at pg. 10, para. 2-3).
This argument is not persuasive because it is not commensurate with the scope of the claims. First, the generation of a protein on a computer by a model (i.e., as performed by processing units) is simply a generation of data, which is an abstract idea. There is no requirement that the claims physically generate a protein sequence comprising amino acids. The only additional element of the claims is a non-transitory computer-readable medium which stores the variational autoencoder, which does not integrate the recited judicial exception into a practical application, as discussed in the above rejection and set forth in MPEP 2106.05(f). Therefore, the claims do not recite any additional elements that integrate the judicial exception into a practical application.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-8, 13-14, 19, and 21 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Bikard (2017). Any newly recited portion is necessitated by claim amendment. Cited reference: Bikard et al., US 2021/0193259 A1, filed 16 Nov. 2018, effectively filed 16 Nov. 2017 (previously cited).
Regarding claim 1, Bikard discloses a method for unsupervised protein sequence generation (Abstract), comprising the following steps: Bikard discloses obtaining a training dataset comprising protein sequences ([0120]; [0133]), and that the method uses unsupervised learning ([0027]), demonstrating the training dataset is unlabeled, or discloses the method uses semi-supervised learning (i.e., some training data is labeled, or "sparsely labeled data") ([0218]). Bikard discloses training, using a processor ([0267]; FIG. 20), a variational autoencoder (i.e., generative model) on the dataset ([0018]-[0019]; [0028], e.g., variational auto-encoder). Bikard discloses generating a valid protein sequence using the trained variational autoencoder ([0133]-[0134]; [0139], e.g., 99.6% of generated protein sequences were valid).
Regarding claim 2, Bikard further discloses the dataset includes protein sequences from the InterPro and UniProt databases (i.e., a subset of protein sequences from a complete dataset of protein sequences) ([0130]). Bikard further discloses selecting protein sequences that are "luciferase-like" proteins from the dataset, such that the determined subset is from a particular homology cluster ([0131]).
Regarding claim 3, Bikard further discloses using the variational autoencoder with semi-supervised learning (i.e., with a supervised learning model) to predict certain properties relating to the protein (i.e., a function of the semantically valid protein sequence) ([0128]), including functional properties ([0016]; FIG. 12-16, e.g., solubility etc.).
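The generation pipeline the rejection attributes to Bikard for claims 1-3 (encode sequences to a latent space, sample a latent code, decode it into an amino acid sequence) can be sketched as follows. This is an illustrative stand-in only: the hash-based "weights" and helper names are hypothetical and do not reproduce Bikard's or Applicant's actual model.

```python
# Illustrative sketch only: the sampling path of a variational autoencoder
# (VAE) for protein sequences. All "weights" are hash-based stand-ins for a
# trained model; nothing here reproduces the cited architecture.
import math
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues
LATENT_DIM = 4
SEQ_LEN = 8

random.seed(0)

def encode(seq):
    """Toy 'encoder': map a sequence to a latent mean and log-variance."""
    mu = [((hash((seq, i)) % 100) - 50) / 50.0 for i in range(LATENT_DIM)]
    logvar = [-2.0] * LATENT_DIM
    return mu, logvar

def reparameterize(mu, logvar):
    """The standard VAE sampling trick: z = mu + sigma * eps."""
    return [m + math.exp(0.5 * lv) * random.gauss(0.0, 1.0)
            for m, lv in zip(mu, logvar)]

def decode(z):
    """Toy 'decoder': per-position argmax over stand-in amino-acid logits."""
    out = []
    for pos in range(SEQ_LEN):
        logits = [sum(zi * ((hash((pos, aa, i)) % 7) - 3)
                      for i, zi in enumerate(z))
                  for aa in AMINO_ACIDS]
        out.append(AMINO_ACIDS[logits.index(max(logits))])
    return "".join(out)

mu, logvar = encode("MKTAYIAK")
generated = decode(reparameterize(mu, logvar))
print(generated)  # an 8-residue string drawn from the 20-letter alphabet
```

The point of the sketch is the division of labor the rejection relies on: the decoder alone, given a latent point, yields a syntactically valid sequence over the amino-acid alphabet, with no protein structure as input.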
Regarding claim 4, Bikard discloses determining the functional properties of the protein comprises inputting additional information representative of a characteristic of a protein sequence to be generated, along with the latent code (i.e., a point in a latent feature space associated with the semantically valid protein sequence of the generative model), into the variational autoencoder to predict the properties of the protein sequences ([0018]; [0073], e.g., after training, providing a latent vector and chosen conditions to the model generates samples with desired properties).
Regarding claim 5, Bikard discloses the variational autoencoder is trained by an encoder that encodes protein sequences in a latent space as latent codes (i.e., estimates a latent variable in a latent space) ([0010]; FIG. 2), given an input vector to generate a latent feature vector ([0067]).
Regarding claim 6, Bikard further discloses the training dataset includes all proteins in a database with a length smaller than 600 amino acids (i.e., varying lengths of 0-599) ([0131]), and that the variational autoencoder model includes an autoregressive module that learns probabilities for amino acid sequences to be selected at certain positions in the generated sequence to preserve similarity in structure (i.e., models interactions between distant amino acid residues) ([0084]; [0097]; [0017]-[0018], e.g., learns transformations preserving similarities in structure; [0124]-[0125]; [0080]; [0139]; FIG. 2a). Bikard further discloses the variational autoencoder uses the latent feature space to generate valid protein sequences (Abstract; [0018]; [0138]-[0139]).
Regarding claim 7, Bikard further discloses using the variational autoencoder with semi-supervised learning (i.e., with a supervised learning model) to predict certain properties relating to the protein (i.e., a function of the semantically valid protein sequence) ([0128]), including functional properties (i.e., a protein having a target phenotype) ([0016]; FIG.
12-16, e.g., solubility etc.).
Regarding claim 8, Bikard discloses a tangible carrier medium for computer code storing a variational auto-encoder ([0031]; [0028]; [0118]; FIG. 1A-B; FIG. 2), comprising an encoder, decoder, and autoregressive module as follows: Bikard discloses an encoder that encodes protein sequences in a latent space as latent codes (i.e., estimates a latent variable in a latent space) ([0010]; FIG. 2), given an input vector (i.e., a particular data point in a data space) ([0067]), wherein the encoder is parameterized ([0064]; [0069], e.g., parameters of the encoder are trained). Bikard discloses a decoder that generates a protein sequence (i.e., an output in the data space) by taking in a latent vector (i.e., a particular point in the latent space) ([0084]; FIG. 2b). Bikard discloses an autoregressive module, wherein the decoder comprises the autoregressive module ([0084]; [0097]), which learns probabilities for amino acid sequences to be selected at certain positions in the generated sequence to preserve similarity in structure (i.e., a local structure of an amino acid sequence, a phenotype of the sequence, given structure is a phenotype) and outputs a valid sequence result (claim 9, e.g., autoregressive module has a learning phase; [0017]-[0018], e.g., learns transformations preserving similarities in structure; [0124]-[0125]; [0080]; [0139]; FIG. 2a, e.g., autoregressive module outputs the sequence result). Bikard discloses the autoregressive module includes 1-D causal convolution layers and skip connections (FIG. 5b; [0087]; [0102]; [0112]; [0237], e.g., autoregressive component takes a vector as input). Bikard discloses a method which includes the functions of obtaining a training dataset comprising protein sequences ([0120]; [0133]), and discloses the latent space organizes itself to cluster proteins of the same family together (i.e., the autoencoder determines a dataset of known protein sequences) ([0211]).
Bikard further discloses that the learning is unsupervised or semi-supervised ([0027]; [0060]; [0128]), such that some of the training dataset is unlabeled and the variational autoencoder trains itself (i.e., unsupervised). Bikard discloses training the variational autoencoder using the dataset ([0065]; [0120]), wherein a variational autoencoder is a generative model ([0059]). Bikard discloses the trained variational autoencoder (i.e., the generative model) is configured to generate a valid protein sequence ([0133]-[0134]; [0139], e.g., 99.6% of generated protein sequences were valid) using semi-supervised learning (i.e., a generative model and a supervised learning model) ([0128]) to determine a protein sequence of a given family with functional similarity to proteins of the family and to predict certain properties related to the protein (i.e., with desired properties) ([0128]; [0252]; [0256]; FIG. 12-16).
Regarding claim 13, Bikard discloses the decoder comprises a plurality of causal convolutions including residual network blocks (i.e., convolution ResNet blocks) ([0238]; FIG. 5b).
Regarding claim 14, Bikard discloses multiple convolution layers of the encoder (i.e., a second one-dimensional convolution layer of the encoder) ([0292]-[0293]; FIG. 22a), and the decoder further comprises an up-sampling layer that inverts the sequence of operations in the encoder and includes one-dimensional deconvolution layers (i.e., a one-dimensional convolution layer transposed with respect to a second one-dimensional layer of the encoder) ([0085]; [0106]; [0236]; FIG. 5a).
Regarding claim 19, Bikard further discloses obtaining a training dataset comprising protein sequences ([0120]; [0133]), and that the method uses unsupervised learning ([0027]), demonstrating the training dataset is unlabeled, or discloses the method uses semi-supervised learning (i.e., some training data is labeled, or "sparsely labeled data") ([0218]; [0018]; [0158]-[0159]).
Bikard discloses training the variational autoencoder (i.e., generative model) on the dataset ([0018]-[0019]; [0028], e.g., variational auto-encoder). Bikard discloses generating a valid protein sequence with a desired functional property using the trained variational autoencoder and semi-supervised learning (i.e., a supervised model) ([0128]; [0133]-[0134]; [0139], e.g., 99.6% of generated protein sequences were valid; [0158]-[0159]; FIG. 12-16, e.g., proteins with a desired function).
Regarding claim 21, Bikard discloses a tangible carrier medium for computer code storing a variational auto-encoder ([0031]; [0028]; [0118]; FIG. 1A-B; FIG. 2), comprising an encoder, decoder, and autoregressive module as follows: Bikard discloses an encoder that encodes protein sequences in a latent space as latent codes (i.e., estimates a latent variable in a latent space) ([0010]; FIG. 2), given an input vector (i.e., a particular data point in a data space) ([0067]), wherein the encoder is parameterized ([0064]; [0069], e.g., parameters of the encoder are trained). Bikard discloses a decoder that generates a protein sequence (i.e., an output in the data space) by taking in a latent vector (i.e., a particular point in the latent space) ([0084]; FIG. 2b). Bikard discloses an autoregressive module, wherein the decoder comprises the autoregressive module ([0084]; [0097]), which learns probabilities for amino acid sequences to be selected at certain positions in the generated sequence to preserve similarity in structure (i.e., a local structure of an amino acid sequence, a phenotype of the sequence, given structure is a phenotype) and outputs a valid sequence result (claim 9, e.g., autoregressive module has a learning phase; [0017]-[0018], e.g., learns transformations preserving similarities in structure; [0124]-[0125]; [0080]; [0139]; FIG. 2a, e.g., autoregressive module outputs the sequence result). Bikard discloses the autoregressive module includes 1-D causal convolution layers and skip connections (FIG.
5b; [0087]; [0102]; [0112]; [0237], e.g., autoregressive component takes a vector as input). Bikard discloses obtaining a training dataset comprising protein sequences ([0120]; [0133]), and that the method uses unsupervised learning ([0027]), demonstrating the training dataset is unlabeled, or discloses the method uses semi-supervised learning (i.e., some training data is labeled, or "sparsely labeled data") ([0218]; [0018]; [0158]-[0159]). Bikard discloses training the variational autoencoder (i.e., generative model) on the dataset ([0018]-[0019]; [0028], e.g., variational auto-encoder). Bikard discloses generating a valid protein sequence with a target functional property using the trained variational autoencoder and semi-supervised learning (i.e., a supervised model) ([0128]; [0133]-[0134]; [0139], e.g., 99.6% of generated protein sequences were valid; [0158]-[0159]; FIG. 12-16, e.g., proteins with a desired function). Therefore, Bikard anticipates the claimed invention.

Response to Arguments

Applicant's arguments filed 23 Dec. 2024 regarding 35 U.S.C. 102 have been fully considered but they are not persuasive. Applicant remarks Bikard does not disclose a variational autoencoder for protein sequence generation that does not use protein structure, or that is configured to design and generate a protein having a combination of desirable properties, and thus the claims are novel over Bikard (Applicant's remarks at pg. 10, para. 3 to pg. 11, para. 1). This argument is not persuasive. First, Bikard discloses embodiments in which the variational autoencoder is semi-supervised and generates protein sequences with desired functional properties, as discussed in the above rejection (see at least [0018]; [0128]; FIG. 12-16). Furthermore, Applicant has not explained in what way Bikard relies on protein structure in the protein sequence generation. The training dataset of Bikard includes protein sequences, in addition to desired properties of the protein (e.g.
solubility), which are used to generate new protein sequences with desired characteristics ([0130]-[0131]). The variational autoencoder does not rely on any protein structures, and instead utilizes an autoregressive module that learns a local structure based on an amino acid sequence, as described in the above rejection and also recited in the claims. Therefore, the rejection is maintained.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary.
Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 9-10, 15, and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Bikard (2017) in view of Rolfe (2018). Any newly recited portion is necessitated by claim amendment. Cited references: Bikard et al., US 2021/0193259 A1, filed 16 Nov. 2018, effectively filed 16 Nov. 2017 (previously cited); and Rolfe et al., US 2020/0401916 A1, filed 07 Feb. 2019 (previously cited).
Regarding claims 9-10 and 15, Bikard discloses a tangible carrier medium for computer code storing a variational auto-encoder ([0031]; [0028]; [0118]; FIG. 1A-B; FIG. 2), comprising an encoder, decoder, and autoregressive module as follows: Bikard discloses an encoder that encodes protein sequences in a latent space as latent codes (i.e., estimates a latent variable in a latent space) ([0010]; FIG. 2), given an input vector (i.e., a particular data point in a data space) ([0067]), wherein the encoder is parameterized ([0064]; [0069], e.g., parameters of the encoder are trained). Bikard discloses a decoder that generates a protein sequence (i.e., an output in the data space) by taking in a latent vector (i.e., a particular point in the latent space) ([0084]; FIG. 2b). Bikard discloses an autoregressive module, wherein the decoder comprises the autoregressive module ([0084]; [0097]), which learns probabilities for amino acid sequences to be selected at certain positions in the generated sequence to preserve similarity in structure (i.e., a local structure of an amino acid sequence, a phenotype of the sequence, given structure is a phenotype) and outputs a valid sequence result (claim 9, e.g.
autoregressive module has a learning phase; [0017]-[0018], e.g., learns transformations preserving similarities in structure; [0124]-[0125]; [0080]; [0139]; FIG. 2a, e.g., autoregressive module outputs the sequence result). Bikard discloses the autoregressive module includes 1-D causal convolution layers and skip connections (FIG. 5b; [0087]; [0102]; [0112]; [0237], e.g., autoregressive component takes a vector as input). Bikard discloses a method which includes the functions of obtaining a training dataset comprising protein sequences ([0120]; [0133]), and discloses the latent space organizes itself to cluster proteins of the same family together (i.e., the autoencoder determines a dataset of known protein sequences) ([0211]). Bikard further discloses that the learning is unsupervised ([0027]; [0060]), such that the training dataset is unlabeled and the variational autoencoder trains itself (i.e., unsupervised). Bikard discloses training the variational autoencoder using the dataset ([0065]; [0120]), wherein a variational autoencoder is a generative model ([0059]). Bikard discloses the trained variational autoencoder (i.e., the generative model) is configured to generate a valid protein sequence ([0133]-[0134]; [0139], e.g., 99.6% of generated protein sequences were valid) using semi-supervised learning (i.e., a generative model and a supervised learning model) ([0128]) to determine a protein sequence of a given family with functional similarity to proteins of the family and to predict certain properties related to the protein (i.e., with desired properties) ([0128]; [0252]; [0256]; FIG. 12-16). Further regarding claim 15, Bikard further discloses the decoder performs up-sampling using transposed convolution (i.e., strided convolutions) and includes a plurality of deconvolution layers (FIG. 5a; [0040]), such that the decoder includes strided convolution layers.
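The up-sampling step just described for claim 15 (a decoder that inverts the encoder's strided convolutions via transposed, or "deconvolution", layers) can be illustrated with a minimal 1-D transposed convolution. The kernel and input values are arbitrary; only the length change is the point.

```python
def conv1d_transpose(x, kernel, stride=2):
    """Minimal 1-D transposed convolution: each input element scatters a
    scaled copy of the kernel into the output at stride-spaced offsets,
    so stride=2 roughly doubles the sequence length (the decoder's
    up-sampling step)."""
    out = [0.0] * ((len(x) - 1) * stride + len(kernel))
    for i, xi in enumerate(x):
        for j, kj in enumerate(kernel):
            out[i * stride + j] += xi * kj
    return out

x = [1.0, 2.0, 3.0]                       # length-3 feature map
up = conv1d_transpose(x, [1.0, 0.5], stride=2)
print(up)  # [1.0, 0.5, 2.0, 1.0, 3.0, 1.5] -- length 3 becomes length 6
```

This is the arithmetic inverse, in shape terms, of a stride-2 convolution: where the encoder maps length 6 down to 3, the transposed layer maps 3 back up to 6.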
Further regarding claims 9 and 15, Bikard does not disclose the parameterized encoder and decoder comprise a plurality of convolution ResNet blocks, respectively. However, Bikard does note that other functional architectures of an encoder and decoder may be implemented ([0096]; [0292]). Furthermore, Rolfe discloses unsupervised learning techniques over an input space comprising discrete or continuous variables, including variational autoencoders (Abstract; [0006]; [0009]). Rolfe discloses a variational autoencoder which includes an encoder and a decoder, wherein the encoder comprises a plurality of downsampling residual blocks (i.e., ResNet blocks) ([0159]; [0167]; FIG. 5A-C) and the decoder also comprises another set of residual network blocks in an upsampling residual network ([0168]). It would have been prima facie obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the autoencoder of Bikard to have utilized the residual blocks of Rolfe ([0159]; FIG. 5A-C) in the encoder and decoder, thus arriving at the invention of claims 9 and 15. One of ordinary skill in the art would have been motivated to combine the neural networks of Bikard and Rolfe based on the simple substitution of one known element (i.e., the encoder and decoder layers of Bikard) for another known element (i.e., the residual blocks of Rolfe), given the functions of the respective encoders and decoders in variational autoencoders are known, as shown by Bikard and Rolfe (i.e., variational autoencoders are known and include an encoder and decoder). Furthermore, one of ordinary skill in the art would have recognized that the results of such a substitution would have been predictable, given Bikard discloses that other functional architectures of an encoder and/or decoder can be implemented in the variational autoencoder ([0096]; [0292]), and Rolfe discloses such a functional architecture.
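The residual ("ResNet") blocks relied on above share one basic skip-connection form, output = input + F(input). A minimal sketch, with an arbitrary stand-in transform in place of the learned convolution layers:

```python
def residual_block(x, transform):
    """Residual block: add the block's learned transform back onto its
    input (the skip connection), so the block only has to model a
    residual rather than the full mapping."""
    fx = transform(x)
    return [xi + fi for xi, fi in zip(x, fx)]

# Stand-in "learned" transform: double each feature. A real block would
# apply convolution layers here.
out = residual_block([1, 2, 3], lambda v: [2 * vi for vi in v])
print(out)  # [3, 6, 9]: the identity path plus the transform's output
```

Because the identity path passes the input through unchanged, a block whose transform outputs zeros simply returns its input, which is the property that makes deep stacks of such blocks trainable.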
Further regarding claim 10, Bikard further discloses the encoder comprises a one-dimensional convolution layer, wherein a stride of the convolution layers is set to two (i.e., a length of an input is halved), and that each time the resolution is decreased by a factor of 2, the number of filters is increased by a factor of 2 (i.e., a channel of the encoder is doubled, given each filter produces a feature) ([0233]; [0292]-[0293]; FIG. 22a, e.g., conv. 1D).
Further regarding claim 17, Bikard further discloses the decoder uses transposed convolution for up-sampling in the decoder after strided convolutions in the encoder by inverting the sequence of operations in the encoder ([0085]; [0236]) (i.e., a first pattern of strided convolution layers of the decoder is opposite/transposed to a pattern of strided convolutions of the encoder).
Regarding claim 18, Bikard discloses variational auto-encoders are a deep learning technique ([0064]), such that the encoder and decoder are deep learning models, and further discloses the encoder and decoder are jointly trained, and during training the parameters of the encoder and decoder are updated using backpropagation with stochastic gradient descent (i.e., the encoder and decoder are parameterized by respective weights) ([0069]). Therefore, the invention is prima facie obvious.
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Bikard (2017) in view of Rolfe (2018), as applied to claim 9, further in view of He (2016). This rejection was previously applied. Cited reference: He et al., Deep Residual Learning for Image Recognition, 2016, CVF, pg. 770-778 (previously cited).
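The claim 10 mapping above (stride-2 convolutions halving the input length while the filter count, and hence channel count, doubles per stage) can be checked with a toy 1-D example. The kernels are arbitrary; only the lengths and channel counts matter.

```python
def strided_conv1d(x, kernel, stride=2):
    """1-D convolution with stride 2: the output is roughly half the
    input length (the 'halving' mapped to claim 10)."""
    return [sum(x[i + j] * kernel[j] for j in range(len(kernel)))
            for i in range(0, len(x) - len(kernel) + 1, stride)]

x = list(range(8))                        # length-8 input, 1 channel
stage1 = [strided_conv1d(x, [1, 1])]      # still 1 channel, length 4
# Each downsampling stage halves the length and doubles the channel
# count by applying twice as many filters (two toy kernels per channel).
stage2 = [strided_conv1d(c, k) for c in stage1 for k in ([1, 0], [0, 1])]
print(len(stage1), len(stage1[0]), len(stage2), len(stage2[0]))  # 1 4 2 2
```

Doubling the filters as the resolution halves keeps the per-layer work roughly constant, which is the time-complexity point the He reference is cited for below.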
Regarding claim 11, while Bikard in view of Rolfe make obvious the use of a residual neural network in the encoder and decoder, as applied to claim 9 above, and Rolfe discloses the encoder comprises a plurality of downsampling residual blocks (i.e., ResNet blocks) ([0159]; FIG. 5A-C), Bikard in view of Rolfe do not explicitly disclose the plurality of convolution ResNet blocks comprise a plurality of strided convolution layers for downscaling and channel doubling. However, He discloses residual networks (ResNets) (Abstract; pg. 774, col. 2, para. 2), including a residual neural network that performs downsampling directly by convolution layers that have a stride of 2, and doubles the filters every time the feature map size is halved (i.e., channel doubling) (pg. 772, col. 2, para. 7; pg. 773, col. 2, para. 1, e.g., the residual network is based on a plain network). He discloses the residual network exhibits considerably lower training error, is generalizable to validation data, and reduces prediction error compared to its plain neural network counterpart (pg. 774, col. 2, para. 2 to pg. 775, col. 1, para. 1), and further discloses doubling the number of filters preserves the time complexity per layer (pg. 772, col. 2, para. 7). It would have been prima facie obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the encoder of Bikard in view of Rolfe to have used strided convolution layers for downscaling and channel doubling, as shown by He (pg. 772, col. 2, para. 7; pg. 773, col. 2, para. 1). One of ordinary skill in the art would have been motivated to combine the methods of Bikard in view of Rolfe and He, to have lowered training error and preserved the time complexity per layer, as shown by He (pg. 772, col. 2, para. 7; pg. 774, col. 2, para. 2 to pg. 775, col. 1, para. 1).
This modification would have had a reasonable expectation of success given both Bikard in view of Rolfe and He disclose residual neural network blocks with down-sampling, such that the down-sampling via strided convolution layers in He is applicable to Bikard in view of Rolfe. Therefore, the invention is prima facie obvious.
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Bikard (2017) in view of Rolfe (2018) and He (2016), as applied to claim 11 above, further in view of Yu (2017). This rejection was previously applied. Cited reference: Yu et al., Dilated Residual Networks, 2017, arXiv, pg. 1-9 (previously cited).
Regarding claim 12, Bikard in view of Rolfe and He make obvious the invention of claim 11 as applied above. Further regarding claim 12, Bikard in view of Rolfe and He do not disclose a dilation pattern of the plurality of convolution layers repeats every five blocks in the encoder and decoder, respectively. However, Yu discloses dilated residual neural networks (Abstract), and discloses a neural network architecture which includes strided convolutions (Figure 5, e.g., green lines represent strides). Yu further discloses the strided convolution layers have a dilation pattern across five levels of a conv-BN-ReLU group (i.e., five blocks), and all layers within a given level have the same dilation (i.e., the pattern repeats every five blocks and starts over at the next layer) (Figure 5, e.g., (a) DRN-A-18). Yu further discloses dilated convolutions increase the receptive field of higher layers (pg. 2, col. 1, para. 3 to col. 2, para. 1). It would have been prima facie obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the encoder of Bikard in view of Rolfe and He to have utilized the repeating dilation pattern of strided convolutions of Yu, discussed above.
One of ordinary skill in the art would have been motivated to combine the elements of Bikard in view of Rolfe and Yu, in order to increase the receptive field at higher layers, as shown by Yu (pg. 2, col. 1, para. 3 to col. 2, para. 1). This modification would have had a reasonable expectation of success given Bikard in view of Rolfe and He and Yu each disclose residual neural networks, such that the dilation pattern of Yu is applicable to Bikard in view of Rolfe and He. Therefore, the invention is prima facie obvious.
Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Bikard (2017) in view of Rolfe (2018), as applied to claim 15 above, further in view of Yu (2017). This rejection was previously applied. Cited reference: Yu et al., Dilated Residual Networks, 2017, arXiv, pg. 1-9 (previously cited).
Regarding claim 16, Bikard in view of Rolfe make obvious the invention of claim 15, as applied above. Further regarding claim 16, Bikard does not disclose a dilation pattern of the plurality of convolution layers repeats every five blocks in the encoder and decoder, respectively. However, Yu discloses dilated residual neural networks (Abstract), and discloses a neural network architecture which includes strided convolutions (Figure 5, e.g., green lines represent strides). Yu further discloses the strided convolution layers have a dilation pattern across five levels of a conv-BN-ReLU group (i.e., five blocks), and all layers within a given level have the same dilation (i.e., the pattern repeats every five blocks and starts over at the next layer) (Figure 5, e.g., (a) DRN-A-18). Yu further discloses dilated convolutions increase the receptive field of higher layers (pg. 2, col. 1, para. 3 to col. 2, para. 1).
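The dilation behavior attributed to Yu above can be sketched in two parts: a dilated 1-D convolution whose taps are spaced apart (enlarging the receptive field without extra parameters), and a per-block dilation schedule that repeats every five blocks. The specific pattern values below are hypothetical and only illustrate the repetition.

```python
def dilated_conv1d(x, kernel, dilation):
    """1-D convolution whose taps are spaced `dilation` apart, widening
    the receptive field without adding parameters."""
    span = (len(kernel) - 1) * dilation + 1
    return [sum(x[i + j * dilation] * kernel[j] for j in range(len(kernel)))
            for i in range(len(x) - span + 1)]

# Hypothetical per-block dilation pattern repeating every five blocks.
PATTERN = [1, 1, 2, 2, 4]
schedule = [PATTERN[b % len(PATTERN)] for b in range(10)]
print(schedule)  # [1, 1, 2, 2, 4, 1, 1, 2, 2, 4]

y = dilated_conv1d(list(range(6)), [1, 1], dilation=2)
print(y)         # pairs two taps spaced 2 apart: [2, 4, 6, 8]
```

With dilation 2, a 2-tap kernel spans 3 input positions instead of 2, so stacking such layers grows the receptive field faster than stacking undilated ones.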
It would have been prima facie obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the encoder and decoder of Bikard in view of Rolfe to have utilized the repeating dilation pattern of strided convolutions of Yu, discussed above. One of ordinary skill in the art would have been motivated to combine the elements of Bikard in view of Rolfe and Yu, in order to increase the receptive field at higher layers, as shown by Yu (pg. 2, col. 1, para. 3 to col. 2, para. 1). This modification would have had a reasonable expectation of success given Bikard in view of Rolfe and Yu each disclose residual neural networks, such that the dilation pattern of Yu is applicable to Bikard in view of Rolfe. Therefore, the invention is prima facie obvious.

Response to Arguments

Applicant's arguments filed 23 Dec. 2024 regarding 35 U.S.C. 103 have been fully considered but they are not persuasive. Applicant remarks that claims 9-12 and 15-18 ultimately depend from claim 8 and include all features of claim 8 as amended, which is novel over Bikard as discussed above under 35 U.S.C. 102, and thus the combination of references does not teach claims 9-12 and 15-18 (Applicant's remarks at pg. 11, para. 2 to pg. 14, para. 2). This argument is not persuasive for the same reasons discussed above under 35 U.S.C. 102.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KAITLYN L MINCHELLA, whose telephone number is (571) 272-6485. The examiner can normally be reached 7:00 - 4:00 M-Th. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Olivia Wise, can be reached at (571) 272-2249. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KAITLYN L MINCHELLA/
Primary Examiner, Art Unit 1685

Prosecution Timeline

Feb 27, 2020
Application Filed
Sep 20, 2024
Non-Final Rejection — §101, §102, §103
Dec 23, 2024
Response Filed
Jan 10, 2025
Final Rejection — §101, §102, §103
Jul 15, 2025
Request for Continued Examination
Jul 17, 2025
Response after Non-Final Action
Sep 09, 2025
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12569204
METHOD AND SYSTEM FOR ANALYZING GLUCOSE MONITORING DATA INDICATIVE OF A GLUCOSE LEVEL
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12494268
ENCODING/DECODING METHOD, ENCODER/DECODER, STORAGE METHOD AND DEVICE
Granted Dec 09, 2025 (2y 5m to grant)
Patent 12431218
MULTI-PASS SOFTWARE-ACCELERATED GENOMIC READ MAPPING ENGINE
Granted Sep 30, 2025 (2y 5m to grant)
Patent 12394504
PREDICTING DEVICE, PREDICTING METHOD, PREDICTING PROGRAM, LEARNING MODEL INPUT DATA GENERATING DEVICE, AND LEARNING MODEL INPUT DATA GENERATING PROGRAM
Granted Aug 19, 2025 (2y 5m to grant)
Patent 12362037
METHODS AND SYSTEMS FOR RECONSTRUCTION OF THREE-DIMENSIONAL STRUCTURE AND THREE-DIMENSIONAL MOTION OF A PROTEIN MOLECULE
Granted Jul 15, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
27%
Grant Probability
48%
With Interview (+20.9%)
4y 5m
Median Time to Grant
High
PTA Risk
Based on 151 resolved cases by this examiner. Grant probability derived from career allow rate.
