Prosecution Insights
Last updated: April 19, 2026
Application No. 19/077,065

DIFFUSION-BASED GENERATIVE AI METHODS FOR PROTEIN AND DRUG DESIGN

Non-Final OA (§101, §103, §112)
Filed: Mar 12, 2025
Examiner: BICKHAM, DAWN MARIE
Art Unit: 1685
Tech Center: 1600 (Biotechnology & Organic Chemistry)
Assignee: Deep Eigenmatics Inc.
OA Round: 1 (Non-Final)
Grant Probability: 52% (Moderate)
Expected OA Rounds: 1-2
Expected Time to Grant: 4y 1m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 52% (grants 13 of 25 resolved cases; -8.0% vs TC avg)
Interview Lift: +69.5% (strong), comparing resolved cases with vs. without an interview
Typical Timeline: 4y 1m average prosecution; 39 applications currently pending
Career History: 64 total applications across all art units

Statute-Specific Performance

§101: 31.0% (-9.0% vs TC avg)
§103: 24.3% (-15.7% vs TC avg)
§102: 12.2% (-27.8% vs TC avg)
§112: 23.5% (-16.5% vs TC avg)
Tech Center averages are estimates; based on career data from 25 resolved cases.
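The statute-level figures above can be sanity-checked: each statute's displayed rate minus its "vs TC avg" delta should back out the same Tech Center baseline, and the headline allow rate should match 13/25. A quick check, with the values copied from the page:

```python
# Arithmetic check of the dashboard figures above (values copied from the page).
granted, resolved = 13, 25
career_allow_rate = granted / resolved  # 0.52, i.e. the 52% shown

# statute: (examiner's rate %, delta vs Tech Center average %)
statutes = {
    "101": (31.0, -9.0),
    "103": (24.3, -15.7),
    "102": (12.2, -27.8),
    "112": (23.5, -16.5),
}
# rate - delta recovers the Tech Center average estimate for each statute
tc_averages = {s: round(rate - delta, 1) for s, (rate, delta) in statutes.items()}

# the career-level delta backs out the overall TC allow-rate baseline
tc_overall = round(52.0 - (-8.0), 1)  # 60.0
```

Every statute backs out the same 40.0% Tech Center baseline, and 13/25 reproduces the 52% career allow rate, so the displayed deltas are internally consistent.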

Office Action

§101 §103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Restriction Election

In response to a requirement for restriction dated 05/06/2025, applicant elected Group II (claims 6-20) without traverse per applicant response dated 08/21/2025. Upon review of the claims, claims 19-20 are directed to Group I, not Group II, and are withdrawn as not being in the elected group.

Claim Status

Claims 1-20 are pending. Claims 1-5, 19, and 20 are withdrawn from further consideration pursuant to 37 CFR 1.142(b) as being drawn to a non-elected species, as described above. Claims 6-18 are under examination. Claims 6-7, 10, 12-13, and 16 are objected to. Claims 6-18 are rejected.

Priority

The instant application was filed 03/15/2025 and does not claim the benefit of an earlier-filed application.

Information Disclosure Statement

The listing of references in the specification is not a proper information disclosure statement. 37 CFR 1.98(b) requires a list of all patents, publications, or other information submitted for consideration by the Office, and MPEP § 609.04(a) states, "the list may not be incorporated into the specification but must be submitted in a separate paper." Therefore, unless the references have been cited by the examiner on form PTO-892, they have not been considered.

Drawings

The drawings are objected to for the following informality: the label for Fig. 9 is on a separate page.
Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as "amended." If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either "Replacement Sheet" or "New Sheet" pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Claim Objections

The claims are objected to for the following informalities: Claims 6-18 contain periods used inappropriately. The claim or claims must commence on a separate physical sheet or electronic page and should appear after the detailed description of the invention. Any sheet including a claim or portion of a claim may not contain any other parts of the application or other material. While there is no set statutory form for claims, the present Office practice is to insist that each claim must be the object of a sentence starting with "I (or we) claim," "The invention claimed is" (or the equivalent). If, at the time of allowance, the quoted terminology is not present, it is inserted by the Office of Data Management. Each claim begins with a capital letter and ends with a period.
Periods may not be used elsewhere in the claims except for abbreviations. See Fressola v. Manbeck, 36 USPQ2d 1211 (D.D.C. 1995). Where a claim sets forth a plurality of elements or steps, each element or step of the claim should be separated by a line indentation, 37 CFR 1.75(i).

Claim 6: for example, claim 6 line 6 recites “for each target protein and each associated ligand”. Please amend all instances of “each target protein” to “each target protein of the plurality of protein ligand complexes” and “each ligand” to “each ligand of the plurality of protein ligand complexes” because it is assumed to provide antecedent basis.

Claim 6(c)(iv) recites “of probability distributions arising from the above transformations”. Please amend to “of the probability distributions arising from the above transformations” because it is assumed to provide antecedent basis.

Claims 6-7: for example, claim 6(b)(i) recites “each voxel”. Please amend all instances of “each voxel” to “each voxel of the plurality of voxel units” because it is assumed to provide antecedent basis.

Claim 10 recites “on a target protein”. Please amend to “on the target protein” because it is assumed to provide antecedent basis.

Claim 12 recites “on a target protein”. Please amend to “on the target protein” because it is assumed to provide antecedent basis. Claim 12 also recites “associated target”. Please amend to “the associated target” because it is assumed to provide antecedent basis.

Claim 13 recites “obtaining a candidate peptide ligand's structure and docking site”. Please amend to “obtaining the candidate peptide ligand's structure and docking site” because it is assumed to provide antecedent basis. Claim 13(b)(i) recites “a candidate peptide drug structure”. Please amend to “the candidate peptide drug structure” because it is assumed to provide antecedent basis. Claim 13(c) recites “based on docking site and structure”.
Please amend to “based on the docking site and structure” because it is assumed to provide antecedent basis.

Claim 16 lines 2 and 3 recite “a target protein”. Please amend to “the target protein” because it is assumed to provide antecedent basis. Claim 16(b): Please amend “non-polypeptide” to “non-peptide” for claim consistency.

35 U.S.C. 112(b)

The following is a quotation of 35 U.S.C. 112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

Claims 6-18 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention.

Claim 6 includes recitations of parenthetical limitations: “the generated training data”. It is not clear if the parenthetical limitations are intended to be exemplary claim language or if they are intended to further limit the training dataset. As set forth in MPEP 2173.05(d), examples and preferences stated in the claims may lead to confusion over the intended scope of a claim.

Claim 6(b)(i) and (ii) recite “the probability of finding any given constituent node in that voxel”. It is unclear what “constituent nodes” refers to in the “ligand structure”. Also, step 6(b) is performed for “each target protein and each associated ligand”, which is just one item at a time, and it is unclear how the step produces “a probability distribution” based on just one item. Furthermore, the “probability of finding any given constituent amino acid in that voxel” is unclear. Does that mean the entire amino acid or just a portion of it? Claims 7-18 are rejected for the same reason because they depend from claim 6 and do not resolve the indefiniteness issue.
Claim 6(c)(i) recites “a measure of discrepancy between consecutive probability distributions is bounded by a hyperparameter function”. It is unclear how a probability distribution can be consecutive, and in which order the distributions occur. Claims 7-18 are rejected for the same reason because they depend from claim 6 and do not resolve the indefiniteness issue.

Claim 6(c)(ii) recites “the sum of the probabilities across voxels is constant”. The voxels as claimed have an associated probability distribution, not a probability. Claims 7-18 are rejected for the same reason because they depend from claim 6 and do not resolve the indefiniteness issue.

Claim 6(c)(iii) recites “the current state's probability distribution” and “the final (diffused) state's probability distribution”. There is insufficient antecedent basis for these limitations in the claim, as there is no previous recitation of “a current state's probability distribution” or “a final (diffused) state's probability distribution”. Please amend to “a current state's probability distribution” and “a final (diffused) state's probability distribution” to resolve the issue. Claims 7-18 are rejected for the same reason because they depend from claim 6 and do not resolve the indefiniteness issue.

Claim 6(c)(iv) recites “the resulting sequence”. There is insufficient antecedent basis for this limitation in the claim, as there is no previous recitation of “a resulting sequence”. Please amend to “a resulting sequence” to resolve the issue. Claims 7-18 are rejected for the same reason because they depend from claim 6 and do not resolve the indefiniteness issue.

Claim 6(c)(iv) also recites “the generated training data”. There is insufficient antecedent basis for this limitation in the claim, as there is no previous recitation of “generated training data”.
Please amend to “generated training data” to resolve the issue. Claims 7-18 are rejected for the same reason because they depend from claim 6 and do not resolve the indefiniteness issue.

Claim 6(d) recites “to learn the constituent node locations”. There is insufficient antecedent basis for this limitation in the claim, as there is no previous recitation of “constituent node locations”. Please amend to “to learn constituent node locations” to resolve the issue. Claims 7-18 are rejected for the same reason because they depend from claim 6 and do not resolve the indefiniteness issue.

Claim 6(d)(i) recites “the training dataset” and “the generated training data”. There is insufficient antecedent basis for these limitations in the claim, as there is no previous recitation of “a training dataset” or “generated training data”. Please amend to “a training dataset” and “generated training data” to resolve the issue. Claims 7-18 are rejected for the same reason because they depend from claim 6 and do not resolve the indefiniteness issue.

Claim 6(d)(iii) recites “the spatial position”. There is insufficient antecedent basis for this limitation in the claim, as there is no previous recitation of “a spatial position”. Please amend to “a spatial position” to resolve the issue. Claims 7-18 are rejected for the same reason because they depend from claim 6 and do not resolve the indefiniteness issue.

Claim 6(e)(ii): the relationship between the recitation of "a target protein's sequence and structure" in claim 6(e)(ii) and the previous recitation of "a target protein's sequence and structure" in claim 6(a)(i) and (ii) is not clear. It is not clear if the recitations are intended to be related or if they are intended to be distinct from one another. For compact examination, it is assumed that the recitations are related.
If this assumption is correct, the rejection may be overcome by amending the recitation in claim 6(e)(ii) to "the target protein's sequence and structure". Claims 7-18 are rejected for the same reason because they depend from claim 6 and do not resolve the indefiniteness issue.

Claim 10(b)(ii) recites “the adjacency information”. There is insufficient antecedent basis for this limitation in the claim, as there is no previous recitation of “adjacency information”. Please amend to “adjacency information” to resolve the issue. Claims 12-15 are rejected for the same reason because they depend from claim 10 and do not resolve the indefiniteness issue.

Claim 12(a)(i.2) recites “training labels are the connections between the ligand amino acids”. It is unclear how the connections become the training labels. Claims 13-15 are rejected for the same reason because they depend from claim 12 and do not resolve the indefiniteness issue.

Claim 12(b) recites “the node position locator neural network”. There is insufficient antecedent basis for this limitation in the claim, as there is no previous recitation of “a node position locator neural network”. Please amend to “a node position locator neural network” to resolve the issue. Claims 13-15 are rejected for the same reason because they depend from claim 12 and do not resolve the indefiniteness issue.

Claim 13: the relationship between the recitation of "for a given target protein" in claim 13 and the previous recitation of "target protein" in claim 6(a)(i) is not clear. It is not clear if the recitations are intended to be related or if they are intended to be distinct from one another. For compact examination, it is assumed that the recitations are related.
If this assumption is correct, the rejection may be overcome by amending the recitation in claim 13 to "for the target protein". Claims 14-15 are rejected for the same reason because they depend from claim 13 and do not resolve the indefiniteness issue.

Claim 13(c) recites “the interaction and efficacy”. There is insufficient antecedent basis for this limitation in the claim, as there is no previous recitation of “an interaction and efficacy”. Please amend to “an interaction and efficacy” to resolve the issue. Claims 14-15 are rejected for the same reason because they depend from claim 13 and do not resolve the indefiniteness issue.

Claim 15 recites “the in vitro and in vivo biological activity”. There is insufficient antecedent basis for this limitation in the claim, as there is no previous recitation of “in vitro and in vivo biological activity”. Please amend to “in vitro and in vivo biological activity” to resolve the issue.

Claim 18 recites “the in vitro and in vivo biological activity”. There is insufficient antecedent basis for this limitation in the claim, as there is no previous recitation of “in vitro and in vivo biological activity”. Please amend to “in vitro and in vivo biological activity” to resolve the issue.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 6-14 and 16 are rejected under 35 U.S.C. 101 because the claimed invention is directed to one or more judicial exceptions without significantly more. MPEP 2106 organizes the judicial exception analysis into Steps 1, 2A (Prongs One and Two), and 2B, as follows below.
MPEP 2106 and the following USPTO website provide further explanation and case law citations: uspto.gov/patent/laws-and-regulations/examination-policy/examination-guidance-and-training-materials.

Framework with which to Evaluate Subject Matter Eligibility:

Step 1: Are the claims directed to a process, machine, manufacture, or composition of matter?
Step 2A, Prong One: Do the claims recite a judicially recognized exception, i.e., a law of nature, a natural phenomenon, or an abstract idea?
Step 2A, Prong Two: If the claims recite a judicial exception under Prong One, is the judicial exception integrated into a practical application?
Step 2B: If the claims do not integrate the judicial exception, do the claims provide an inventive concept?

Framework Analysis as Pertains to the Instant Claims:

Step 1

With respect to Step 1: yes, the claims are directed to a method and apparatus, i.e., a process, machine, or manufacture within the above 101 categories [Step 1: YES; see MPEP § 2106.03].

Step 2A, Prong One

With respect to Step 2A, Prong One, the claims recite judicial exceptions in the form of abstract ideas. The MPEP at 2106.04(a)(2) further explains that abstract ideas are defined as: mathematical concepts (mathematical formulas or equations, mathematical relationships, and mathematical calculations); certain methods of organizing human activity (fundamental economic practices or principles, managing personal behavior or relationships or interactions between people); and/or mental processes (procedures for observing, evaluating, analyzing/judging, and organizing information).
With respect to the instant claims, under the Step 2A, Prong One evaluation, the claims are found to recite abstract ideas that fall into the groupings of mental processes (in particular, procedures for observing, analyzing, and organizing information) and mathematical concepts (in particular, mathematical relationships and formulas), as follows:

Independent claim 6: for each ligand, sequentially transforming each voxel's probability distribution over the ligand's constituent nodes, such that: i. a measure of discrepancy between consecutive probability distributions is bounded by a hyperparameter function, ii. a probability conservation constraint is respected, wherein for any given amino acid residue, the sum of the probabilities across voxels is constant, iii. the sequential transformation proceeds in a manner to decrease the discrepancy measure between the current state's probability distribution and the final (diffused) state's probability distribution, iv. the resulting sequence of probability distributions arising from the above transformations - the generated training data - are stored in the processor's associated memory, v. the sequential transformation is conditioned on the target protein structure; training a neural network, via the processor, to learn the constituent node locations of the ligand, wherein the neural network training process starts at the ligand's final diffused state, and sequentially recovers the preceding states, thereby ultimately yielding an approximation for the ligand's origin state: i. wherein the training dataset is the generated training data, ii. wherein for each voxel, the training loss includes a measure of discrepancy between the neural network's approximation of each state's probability distribution and the state's probability distribution in the training data, iii.
wherein the neural network is configured to accept a node composition as input, and to output the spatial position of the nodes of a corresponding ligand;

Dependent claim 10: obtaining the spatial locations of a peptide ligand's amino acids and its docking site on a target protein, given its amino acid composition and its target protein sequence and structure, wherein the method is also for obtaining the peptide ligand's structure, the method further comprising: a. receiving, at a processor, the ligand's amino acid sequence; b. obtaining, via the processor, the ligand's structure representation by: i. using the method of claim 9 to obtain the spatial locations of the ligand's constituent amino acids, ii. obtaining the adjacency information of the ligand's constituent amino acids from the ligand's amino acid sequence, iii. obtaining the ligand's structure and docking coordinates from its amino acid spatial locations and its amino acid adjacency information.

Dependent claim 12: obtaining the spatial locations of a peptide ligand's amino acids and its docking site on a target protein, given its amino acid composition and its target protein sequence and structure, wherein the method is also for obtaining the peptide ligand's structure, the method further comprising: c. obtaining a candidate ligand structure from its amino acid spatial positions and their connections.

Dependent claim 13: obtaining a candidate peptide ligand's structure and docking site given only its amino acid composition and its target protein sequence and structure, wherein the method is also for obtaining a candidate peptide drug ligand for a given target protein, the method further comprising evaluating the interaction and efficacy of each candidate peptide drug ligand with the target protein; d. selecting the most efficacious candidate peptide drug ligand from the plurality of candidate peptide drug ligands.
Dependent claim 16: obtaining the docking site of a nonpeptide drug ligand on a target protein, wherein the method is also for obtaining a candidate non-peptide drug ligand for a given target protein, the method further comprising: determine the docking site of each of the plurality of represented candidate non-polypeptide drug ligands on the target protein; evaluating the interaction and efficacy of each represented candidate non-peptide drug ligand with the target protein; selecting the most efficacious represented candidate non-peptide drug ligand from the plurality of represented candidate non-peptide drug ligands.

Dependent claims 7-9 and 11 recite further steps that limit the judicial exceptions in independent claim 6 and, as such, are also directed to those abstract ideas. For example, claim 7 further limits the final diffusion state of claim 6, claim 8 further limits the discrepancy measure of claim 6, claim 9 further limits the ligand of claim 6, and claim 11 further limits the ligand of claim 8.

The abstract ideas recited in the claims are evaluated under the Broadest Reasonable Interpretation (BRI) and determined to each cover performance either in the mind and/or by mathematical operation, because the method only requires a user to manually transform, train, obtain, evaluate, select, and determine.
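For orientation only: the claim 6(c) limitations quoted above (a per-step discrepancy bounded by a hyperparameter, probability conservation across voxels, and a monotone approach toward a final diffused state) describe a standard forward-diffusion construction. A minimal NumPy sketch, assuming a uniform final state and total-variation distance as the discrepancy measure (both are illustrative assumptions, not the applicant's disclosure):

```python
import numpy as np

def forward_diffuse(p0, steps=8, beta=0.25):
    """Sequentially transform a voxel probability distribution toward a
    final 'diffused' state (assumed uniform here). Convex mixing keeps the
    total probability mass constant, and each step's change is bounded by
    the hyperparameter `beta` in total-variation distance."""
    final = np.full_like(p0, 1.0 / p0.size)  # assumed final diffused state
    states = [p0]
    for _ in range(steps):
        # convex mixing: mass is conserved, gap to `final` shrinks by (1 - beta)
        states.append((1.0 - beta) * states[-1] + beta * final)
    return states, final

def tv(p, q):
    """Total-variation discrepancy between two voxel distributions."""
    return 0.5 * np.abs(p - q).sum()

# Toy 2x2x2 voxel grid: a constituent node starts localized in one voxel.
p0 = np.zeros((2, 2, 2))
p0[0, 0, 0] = 1.0
states, final = forward_diffuse(p0)
```

Under this mixing rule each step moves at most `beta` in total variation, the probabilities across voxels always sum to one, and the distance to the final state contracts by a factor of (1 - beta) per step, mirroring the quoted limitations 6(c)(i)-(iii).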
Without further detail as to the methodology involved in “transforming each voxel's probability distribution”, “training a neural network”, “obtaining the spatial locations of a peptide ligand's amino acids”, “evaluating the interaction and efficacy of each candidate peptide drug ligand”, “selecting the most efficacious candidate peptide drug ligand”, “obtaining the docking site of a nonpeptide drug ligand on a target protein”, “determine the docking site”, “evaluating the interaction and efficacy”, and “selecting the most efficacious represented candidate non-peptide drug ligand”, under the BRI one may simply, for example, use pen and paper to obtain the information and evaluate the output of the model to select the best option for a peptide-ligand drug design. Therefore, claims 6-14 and 16 recite an abstract idea [Step 2A, Prong 1: YES; see MPEP § 2106.04].

Step 2A, Prong Two

Because the claims do recite judicial exceptions, direction under Step 2A, Prong Two provides that the claims must be examined further to determine whether they integrate the judicial exceptions into a practical application (MPEP 2106.04(d)). A claim can be said to integrate a judicial exception into a practical application when it applies, relies on, or uses the judicial exception in a manner that imposes a meaningful limit on the judicial exception. This is performed by analyzing the additional elements of the claim to determine if the judicial exceptions are integrated into a practical application (MPEP 2106.04(d)(I); MPEP 2106.05(a)-(h)). If the claim contains no additional elements beyond the judicial exceptions, the claim is said to fail to integrate the judicial exceptions into a practical application (MPEP 2106.04(d)(III)).
Additional elements, Step 2A, Prong Two

With respect to the instant recitations, the claims recite the following additional elements:

Independent claim 6: receiving, at a processor, representations of a plurality of protein-ligand complexes, wherein each ligand is represented as a linear graph consisting of: i. a sequence of nodes, and ii. an associated structure representation, and wherein each target protein is represented as: i. an amino acid sequence, and ii. an associated structure representation; for each target protein and each associated ligand, representing, via the processor: i. the ligand's structure as voxel coordinates of its constituent nodes within a three dimensional grid, wherein with each voxel is associated a probability distribution which indicates the probability of finding any given constituent node in that voxel, ii. the target protein's structure as voxel coordinates of its constituent amino acids within a three dimensional grid, wherein with each voxel is associated a probability distribution which indicates the probability of finding any given constituent amino acid in that voxel; the resulting sequence of probability distributions arising from the above transformations - the generated training data - are stored in the processor's associated memory; using the trained neural network to determine: i. the spatial locations of the ligand's constituent nodes, and ii. the ligand's docking site on the target protein; receiving, at a processor: i. the node composition of a candidate ligand, and ii. a target protein's sequence and structure.

Dependent claim 10: receiving, at a processor, the ligand's amino acid sequence; training a ligand amino acid adjacency determining neural network conditioned on the target protein structure, wherein: i. the training data include (i.1) spatial locations of ligand amino acids, and (i.2) associated target protein structure, ii. training labels are the connections between the ligand amino acids; b.
using the node position locator neural network to determine the amino acid positions, and using the ligand amino acid adjacency determining neural network to determine the ligand amino acid connections;

Dependent claim 13: receiving, at a processor, a plurality of amino acid compositions;

Dependent claim 14: synthesizing the peptide drug ligand;

Dependent claim 16: receiving, at a processor, a plurality of candidate non-peptide drug ligand embedding representations.

The claims also include non-abstract computing elements. For example, independent claim 6 includes a processor.

Considerations under Step 2A, Prong Two

With respect to Step 2A, Prong Two, the additional elements of the claims do not integrate the judicial exceptions into a practical application for the following reasons. Those steps directed to data gathering, such as “receiving”, and to data outputting, such as “stored”, perform functions of collecting the data needed to carry out the judicial exceptions. Data gathering and outputting do not impose any meaningful limitation on the judicial exceptions, or on how the judicial exceptions are performed. Data gathering and outputting steps are not sufficient to integrate judicial exceptions into a practical application (MPEP 2106.05(g)).

Further, steps directed to the additional non-abstract element of “a processor” do not describe any specific computational steps by which the “computer parts” perform or carry out the judicial exceptions, nor do they provide any details of how specific structures of the computer, such as the computer-readable recording media, are used to implement these functions. The claims state nothing more than a generic computer which performs the functions that constitute the judicial exceptions. Hence, these are mere instructions to apply the judicial exceptions using a computer, and therefore the claims do not integrate the judicial exceptions into a practical application.
The courts have weighed in and consistently maintained that when, for example, a memory, display, processor, machine, etc. are recited so generically (i.e., no details are provided) that they represent no more than mere instructions to apply the judicial exception on a computer, these limitations may be viewed as nothing more than generally linking the use of the judicial exception to the technological environment of a computer (MPEP 2106.05(f)).

With respect to claims 14 and 18, the additional elements of the claims do not integrate the judicial exceptions into a practical application for the following reasons. Those steps directed to synthesizing a biologic using the peptide drug ligand are not sufficient to integrate an abstract idea into a practical application, because synthesizing a biologic identified by the judicial exceptions does not impose any meaningful limitations on the abstract idea, or on how the abstract idea is performed. These steps are insignificant extra-solution activity to the judicial exception (MPEP 2106.05(g)).

Further, steps directed to the additional non-abstract elements of using, via the processor, the trained neural network and node position locator neural network provide nothing more than mere instructions to implement an abstract idea on a generic computer. See MPEP 2106.05(f). MPEP 2106.05(f) provides the following considerations for determining whether a claim simply recites a judicial exception with the words “apply it” (or an equivalent), such as mere instructions to implement an abstract idea on a computer: (1) whether the claim recites only the idea of a solution or outcome, i.e., the claim fails to recite details of how a solution to a problem is accomplished; (2) whether the claim invokes computers or other machinery merely as a tool to perform an existing process; and (3) the particularity or generality of the application of the judicial exception.
The recitation of “using a trained neural network” also merely indicates a field of use or technological environment in which the judicial exception is performed. This type of limitation merely confines the use of the abstract idea to a particular technological environment (neural networks) and thus fails to add an inventive concept to the claims. See MPEP 2106.05(h). Thus, none of the claims recite additional elements which would integrate a judicial exception into a practical application, and the claims are directed to one or more judicial exceptions [Step 2A, Prong 2: NO; see MPEP § 2106.04(d)].

Step 2B (MPEP 2106.05)

According to the analysis so far, the additional elements described above do not provide significantly more than the judicial exception. A determination of whether additional elements provide significantly more also rests on whether the additional elements, or a combination of elements, represent other than what is well-understood, routine, and conventional. Conventionality is a question of fact and may be evidenced by: a citation to an express statement in the specification, or to a statement made by an applicant during prosecution, that demonstrates the well-understood, routine, or conventional nature of the additional element(s); a citation to one or more of the court decisions discussed in MPEP 2106.05(d)(II) as noting the well-understood, routine, conventional nature of the additional element(s); a citation to a publication that demonstrates the well-understood, routine, conventional nature of the additional element(s); and/or a statement that the examiner is taking official notice with respect to the well-understood, routine, conventional nature of the additional element(s).
With respect to the instant claims, the courts have found that receiving and outputting data are well-understood, routine, and conventional functions of a computer when claimed in a merely generic manner or as insignificant extra-solution activity (see Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network); Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); and OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93, as discussed in MPEP 2106.05(d)(II)(i)). As such, the claims simply append well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception (MPEP 2106.05(d)). The data gathering steps as recited in the instant claims constitute a general link to a technological environment, which is insufficient to constitute an inventive concept that would render the claims significantly more than the judicial exception (MPEP 2106.05(g) and (h)). With respect to claim 6 and the claims dependent therefrom, the computer-related elements, i.e., the general purpose computer, do not rise to the level of significantly more than the judicial exception. The claims recite nothing more than a generic computer which performs the functions that constitute the judicial exceptions. Hence, these are mere instructions to apply the judicial exceptions using a computer, which the courts have found not to provide significantly more when recited in a claim with a judicial exception (see MPEP 2106.05(f)). The specification also notes that computer processors and systems, for example, are commercially available or widely used (paragraphs [00114]-[00117]). The additional elements are set forth at such a high level of generality that they can be met by a general purpose computer.
Therefore, the computer components constitute no more than a general link to a technological environment, which is insufficient to constitute an inventive concept that would render the claims significantly more than the judicial exceptions (see MPEP 2106.05(b)(I)-(III)). As explained with respect to Step 2A, Prong Two, the additional element of “using the trained neural network” is at best mere instructions to “apply” the abstract ideas, which cannot provide an inventive concept. See MPEP 2106.05(f). As explained with respect to Step 2A, Prong Two, the additional elements of “synthesizing” and “manufacturing” are insignificant extra-solution activity appended to the judicial exception, which cannot provide an inventive concept. See MPEP 2106.05(g). Taken alone, the additional elements do not amount to significantly more than the above-identified judicial exception(s). Even when viewed as a combination, the additional elements fail to transform the exception into a patent-eligible application of that exception. Thus, the claims as a whole do not amount to significantly more than the exception itself [Step 2B: NO; see MPEP § 2106.05]. Therefore, the instant claims are not drawn to eligible subject matter, as they are directed to one or more judicial exceptions without significantly more. For additional guidance, applicant is directed generally to MPEP § 2106. Please note that if claims 14-15 were incorporated into claim 6 and claims 17-18 were incorporated into claim 11, the § 101 rejections could possibly be overcome. 2. Claims 15 and 18 are not rejected under § 101, as they are directed to assessing the in vitro and in vivo biological activity of the peptide drug ligand in humans, animals, or plants and testing the biological activity of the synthesized peptide ligand in vitro and in vivo in humans, animals, or plants, which integrates the judicial exception into a practical application. Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention. A. Claim(s) 6-11 is/are rejected under 35 U.S.C. 103 as being unpatentable over Prat et al. (US 11,263,534 B1, patented 03/01/2022, newly cited) in view of Alakhdar et al. (Alakhdar, Amira, Barnabas Poczos, and Newell Washburn. "Diffusion models in de novo drug design."
Journal of Chemical Information and Modeling 64.19 (2024): 7238-7256, newly cited), and in view of Austin et al. (Austin, Jacob, et al. "Structured denoising diffusion models in discrete state-spaces." Advances in neural information processing systems 34 (2021): 17981-17993, newly cited), and in further view of Corso et al. (Corso, Gabriele, et al. "Diffdock: Diffusion steps, twists, and turns for molecular docking." arXiv preprint arXiv:2210.01776 (2022), newly cited). Claim 6 is directed to a method, comprising: a. receiving, at a processor, representations of a plurality of protein-ligand complexes, wherein each ligand is represented as a linear graph consisting of: i. a sequence of nodes, and ii. an associated structure representation, and wherein each target protein is represented as: i. an amino acid sequence, and ii. an associated structure representation; Prat discloses system and method for molecular reconstruction and probability distributions using a 3d variational-conditioned generative adversarial network [title]. Prat further discloses SMILES data for a plurality of molecules is transformed at a molecule graph construction stage into a graph-based representation wherein each molecule is represented as a graph comprising nodes and edges, wherein each node represents an atom, and each edge represents a connection between atoms of the molecule [fig. 13]. Prat also discloses a method for the reconstruction of molecular representations and output of molecular probability distributions, comprising the steps receiving a true representation of a molecule; constructing a wave-like representation from the true representation of a molecule, the wave-like representation comprising tensors of molecular data well-suited for machine learning; passing the wave-like representation to a first variational autoencoder [claim 4]. b. for each target protein and each associated ligand, representing, via the processor: i. 
the ligand's structure as voxel coordinates of its constituent nodes within a three dimensional grid, wherein with each voxel is associated a probability distribution which indicates the probability of finding any given constituent node in that voxel, ii. the target protein's structure as voxel coordinates of its constituent amino acids within a three dimensional grid, wherein with each voxel is associated a probability distribution which indicates the probability of finding any given constituent amino acid in that voxel; Prat discloses molecules are voxelated and are used as the model input, which are used to train the model [col. 29, par. 3]. Prat further discloses the 3D model comprises a voxel image (volumetric, three dimensional pixel image) of the ligand [col. 8, par. 3]. Prat also discloses in cases where enrichment data is available, ligands may be identified by enriching the SMILES string for a ligand with information about possible atom configurations of the ligand and converting the enriched information into a plurality of 3D models of the atom [col. 8, par. 3]. c. for each ligand, sequentially transforming each voxel's probability distribution over the ligand's constituent nodes, such that: Prat discloses wave-like representation comprising tensors of molecular data well-suited for machine learning; passing the wave-like representation to a first variational autoencoder [claim 4], but is silent on a sequential transformation. However, Alakhdar discloses diffusion models in de novo drug design [title]. Alakhdar further discloses the denoising diffusion probabilistic model (DDPM) incorporates two Markov chains: a forward chain that transforms data into standard Gaussian noise and a reverse chain that converts noise back to data by learning denoising transformations parametrized by deep neural networks [p. 7240, col. 1, par. 2], which reads on a sequential transformation, but is silent on the specific features of the diffusion model as in claim 6(c).
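As context for the mapping above, the two-chain DDPM mechanism Alakhdar describes (a forward chain that sequentially transforms data into standard Gaussian noise) can be sketched as follows. This is an illustrative sketch only, not the applicant's claimed method or Alakhdar's implementation; the linear beta schedule, array shapes, and function names are assumptions.

```python
import numpy as np

def forward_diffuse(x0, betas, rng):
    """Forward Markov chain: x_t = sqrt(1 - beta_t) * x_{t-1} + sqrt(beta_t) * eps.

    Each step mixes a little Gaussian noise into the previous state, so the
    sequence of states drifts from the data toward pure noise."""
    x = x0
    states = [x0]
    for beta in betas:
        eps = rng.standard_normal(x.shape)
        x = np.sqrt(1.0 - beta) * x + np.sqrt(beta) * eps
        states.append(x)
    return states

rng = np.random.default_rng(0)
x0 = rng.standard_normal(16)          # toy "ligand" feature vector (assumption)
betas = np.linspace(1e-4, 0.2, 200)   # linear noise schedule (assumption)
states = forward_diffuse(x0, betas, rng)

# After many steps the surviving signal coefficient, prod(sqrt(1 - beta_t)),
# is effectively zero, so the final state is approximately standard Gaussian.
signal = np.prod(np.sqrt(1.0 - betas))
```

A reverse chain would then be a learned network trained to undo each of these noising steps, which is the role the office action attributes to the denoising neural networks in Alakhdar.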
However, Austin discloses structured denoising diffusion models in discrete state-spaces [title]. Austin further discloses diffusion models are latent variable generative models characterized by a forward and a reverse Markov process [p. 2, par. 3]. Austin also discloses the forward process corrupts the data into a sequence of increasingly noisy latent variables x_{1:T} = x_1, x_2, ..., x_T [p. 2, par. 3]. Austin further discloses the learned reverse Markov process gradually denoises the latent variables towards the data distribution [p. 2, par. 3]; for example, for continuous data, the forward process typically adds Gaussian noise, which the reverse process learns to remove [p. 2, par. 3]. i. a measure of discrepancy between consecutive probability distributions is bounded by a hyperparameter function, Austin discloses a tractable expression for the forward process posterior which allows the KL divergences present to be computed [p. 3, par. 2], which reads on a measure of discrepancy between consecutive probability distributions. Austin further discloses noise schedules such as a linear schedule for βt and a cosine schedule [p. 23, par. 4], which reads on the probability distributions being bounded by a hyperparameter function. ii. a probability conservation constraint is respected, wherein for any given amino acid residue, the sum of the probabilities across voxels is constant, Austin discloses the ability to control the data corruption and denoising process by choosing Qt, in notable contrast to continuous diffusion, for which only additive Gaussian noise has received significant attention [p. 4, par. 1]. Austin further discloses the constraint that the rows of Qt must sum to one to conserve probability mass [p. 4, par. 1]. iii.
the sequential transformation proceeds in a manner to decrease the discrepancy measure between the current state's probability distribution and the final (diffused) state's probability distribution, Austin discloses adding noise in the forward process [p. 4, par. 8], which reads on decreasing the discrepancy measure between the current state's probability distribution and the final (diffused) state's probability distribution, since each noising step makes the current distribution noisier and more unstructured. iv. the resulting sequence of probability distributions arising from the above transformations - the generated training data - are stored in the processor's associated memory, Prat discloses the data analysis engine utilizes the information gathered, organized, and stored in the data curation platform to train machine learning algorithms at a training stage [col. 10, par. 2], which reads on storing the generated training data. v. the sequential transformation is conditioned on the target protein structure; Prat discloses at the analysis stage, a query in the form of a target ligand and a target protein are entered using an exploratory drug analysis (EDA) interface [col. 10, par. 4]. Prat further discloses the target ligand is processed through the trained graph-based machine learning algorithm which, based on its training, produces an output comprising a vector representation of the likelihood of interaction of the target ligand [col. 10, par. 4]. d. training a neural network, via the processor, to learn the constituent node locations of the ligand, wherein the neural network training process starts at the ligand's final diffused state, and sequentially recovers the preceding states, thereby ultimately yielding an approximation for the ligand's origin state: Prat discloses the trained graph-based machine learning algorithm may be any type of algorithm configured to analyze graph-based data, such as graph traversal algorithms, clustering algorithms, or graph neural networks [col.
10, par. 3], but is silent on the specifics of the diffusion model in claim 6(d). However, Alakhdar discloses the reverse diffusion is responsible for learning the distribution of the data using a denoising neural network architecture that gradually removes the noise added in the forward diffusion [p. 7244, col. 1, par. 2], which reads on a training process that starts at the ligand's final diffused state and sequentially recovers the preceding states, thereby ultimately yielding an approximation for the ligand's origin state. i. wherein the training dataset is the generated training data, Prat discloses at the training stage, information from the knowledge graph is extracted to provide training data in the form of graph-based representations of molecules and the known or suspected bioactivity of those molecules with certain proteins [col. 10, par. 3], which reads on the training dataset being the generated dataset. ii. wherein for each voxel, the training loss includes a measure of discrepancy between the neural network's approximation of each state's probability distribution and the state's probability distribution in the training data, Austin discloses for the reverse process an auxiliary denoising objective for the parameterization of the reverse process, which encourages good predictions of the data at each time step; combining this with the negative variational lower bound yields an alternative loss function [p. 5, par. 4]. Austin further discloses both the auxiliary loss term and the KL divergence term are minimized exactly when the predicted distribution has all its mass on the datapoint [p. 5, par. 6], which reads on a loss that includes a measure of discrepancy between the neural network's approximation of each state's probability distribution and the state's probability distribution in the training data. iii.
wherein the neural network is configured to accept a node composition as input, and to output the spatial position of the nodes of a corresponding ligand; Prat discloses the significance of using voxelated atom features as inputs to a bioactivity model (as in the case of a 3D CNN) is that the loss can be differentiated not only with respect to the layer weights, but also with respect to the input atom features [col. 20, par. 2]. e. receiving, at a processor: Prat discloses a CPU may include one or more processors [col. 33, par. 4]. i. the node composition of a candidate ligand, and Prat discloses one embodiment configured to identify ligands with certain properties based on three dimensional (3D) models of known ligands and differentials of atom positions [col. 8, par. 3], which reads on node composition. Prat further discloses at the analysis stage, a query in the form of a target ligand and a target protein [col. 10, par. 4]. ii. a target protein's sequence and structure; and Prat discloses at the analysis stage, a query in the form of a target ligand and a target protein [col. 10, par. 4]. Prat further discloses prediction of molecule bioactivity using concatenation of outputs from a graph-based neural network which analyzes molecule structure and a sequence-based neural network which analyzes protein structure [col. 3, fig. 10]. f. using the trained neural network to determine: i. the spatial locations of the ligand's constituent nodes, and ii. the ligand's docking site on the target protein. Prat discloses the trained graph-based machine learning algorithm may be any type of algorithm configured to analyze graph-based data, such as graph traversal algorithms, clustering algorithms, or graph neural networks [col. 10, par. 3]. Prat, Alakhdar, and Austin are silent on the spatial locations of the ligand's constituent nodes, and the ligand's docking site on the target protein.
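As an aside on the probability-conservation limitation mapped to Austin above (rows of the transition matrix Qt summing to one), the discrete-diffusion mechanics can be illustrated with a minimal sketch. This is not the applicant's claimed method; the 4-category alphabet, the uniform transition matrix, and the beta value are illustrative assumptions in the style of Austin's D3PM framework.

```python
import numpy as np

def uniform_Qt(K, beta):
    """A D3PM-style uniform transition matrix: Q_t = (1-beta)*I + (beta/K)*ones.

    Every row is a valid categorical distribution (rows sum to one), so
    applying Q_t conserves total probability mass at each diffusion step."""
    return (1.0 - beta) * np.eye(K) + (beta / K) * np.ones((K, K))

K = 4                                  # toy node-type alphabet (assumption)
Q = uniform_Qt(K, beta=0.1)
p = np.array([1.0, 0.0, 0.0, 0.0])     # one-hot starting distribution

# Sequentially transform the distribution: total mass stays 1 while the
# distribution drifts toward the uniform (fully diffused) state.
for _ in range(100):
    p = p @ Q
```

After many steps `p` is essentially uniform over the K categories, matching the notion of a final diffused state in which every category (or voxel, in the claim's framing) is equally likely, while the row-sum constraint keeps the mass constant throughout.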
However, Corso discloses DIFFDOCK: diffusion steps, twists, and turns for molecular docking [title]. Corso further discloses the model takes as input the separate ligand and protein structures, and randomly sampled initial poses are denoised via a reverse diffusion over translational, rotational, and torsional degrees of freedom [p. 2, fig. 1]. Corso also discloses framing the molecular docking task as a generative problem, formulating a novel diffusion process over ligand poses corresponding to the degrees of freedom involved in molecular docking, and generating approximate protein apo-structures [p. 2, par. 5]. Claim 7 is directed to the method of claim 6, wherein the final (diffused) state is such that for each voxel, the associated probability distribution is uniform over the ligand nodes, wherein each node of the ligand has equal probability of being in any given voxel. Prat is silent on a diffusion model. However, Alakhdar discloses the denoising diffusion probabilistic model (DDPM) as a neural…

Prosecution Timeline

Mar 12, 2025
Application Filed
Oct 10, 2025
Non-Final Rejection — §101, §103, §112
Mar 26, 2026
Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597490
METHODS AND SYSTEMS FOR MODELING PHASING EFFECTS IN SEQUENCING USING TERMINATION CHEMISTRY
2y 5m to grant Granted Apr 07, 2026
Patent 12486545
Diagnostic and Treatment of Chronic Pathologies Such as Lyme Disease
2y 5m to grant Granted Dec 02, 2025
Patent 12488859
PEPTIDE BASED VACCINE GENERATION SYSTEM WITH DUAL PROJECTION GENERATIVE ADVERSARIAL NETWORKS
2y 5m to grant Granted Dec 02, 2025
Patent 12482534
PEPTIDE BASED VACCINE GENERATION SYSTEM WITH DUAL PROJECTION GENERATIVE ADVERSARIAL NETWORKS
2y 5m to grant Granted Nov 25, 2025
Patent 12473584
METHOD FOR DETECTING THE PRESENCE, IDENTIFICATION AND QUANTIFICATION IN A BLOOD SAMPLE OF ANTICOAGULANTS WHICH ARE BLOOD COAGULATION ENZYMES INHIBITORS, AND MEANS FOR THE IMPLEMENTATION THEREOF
2y 5m to grant Granted Nov 18, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
52%
Grant Probability
99%
With Interview (+69.5%)
4y 1m
Median Time to Grant
Low
PTA Risk
Based on 25 resolved cases by this examiner. Grant probability derived from career allow rate.
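The headline figures above appear to follow directly from the examiner's career data (13 granted of 25 resolved cases). The sketch below reproduces that arithmetic; the rounding convention is an assumption, not a statement of the tool's actual formula, and the interview-adjusted figure is not derived here.

```python
# Career allow rate from the examiner data shown above:
# 13 granted out of 25 resolved cases.
granted, resolved = 13, 25
allow_rate = granted / resolved        # 13 / 25 = 0.52

print(f"Career allow rate: {allow_rate:.0%}")   # prints "Career allow rate: 52%"
```

This matches the 52% grant probability shown in both the header and the projections panel.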
