DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Herein, “the previous Office action” refers to the Non-Final Rejection mailed 10/21/2025.
Amendments Received
Amendments to the claims were received on 1/20/2026.
Priority
As detailed on the Filing Receipt mailed 12/15/2020, the instant application claims priority to applications filed as early as 10/7/2019. At this point in prosecution, all claims are accorded the earliest claimed priority date.
Claim Status
Claims 1-3, 5-8, 10, 12-17, 19-22, 24, 26, 28, 30-37, 40-43, 45-46, 48-50 and 52 are pending.
Claims 4, 9, 11, 18, 23, 25, 27, 29, 38-39, 44, 47, 51 and 53 are canceled.
Claims 15-17, 19-22, 24, 26, 28, 30-37, 40-41, 43, 45-46, 48-50 and 52 stand withdrawn pursuant to 37 CFR 1.142(b) as being directed to a nonelected invention, there being no currently allowable generic or linking claim. Election without traverse was made in the reply filed 3/18/2024.
Claims 1-3, 5-8, 10, 12-14 and 42 are examined herein.
Withdrawn Objections/Rejections
The prior rejections of claim 4 under 35 USC §§ 101 and 103 are hereby withdrawn in view of Applicant’s cancellation of the claim.
The prior rejections of claims 1-3, 5-8, 10, 12-14 and 42 under 35 USC § 103, as being unpatentable over combinations of Duitama, Klau, Alipanahi, Wright, Buchbinder, Hassanzadeh and/or Chuai, are hereby withdrawn in view of Applicant’s amendment of the claims and persuasive arguments that neither Duitama nor Klau teaches a convolutional neural network as required by the amended claims, that combining the teachings of Duitama and Alipanahi would require abandoning Duitama's core methodology, and that the Office has not articulated a technically sound workflow for integrating the approaches of Duitama, Alipanahi and Klau (Remarks filed 1/20/2026 at pg. 16, para. 1; pg. 17, para. 2 – pg. 18, para. 1). See ‘Response to Arguments – Claim Rejections Under 35 USC § 103’ section for full details.
Claim Interpretation
The claims in this application are given their broadest reasonable interpretation using
the plain meaning of the claim language in light of the specification as it would be understood
by one of ordinary skill in the art (MPEP 2111-2111.01). This section documents the examiner’s
interpretation of certain claim elements under this standard.
Claims 1 and 14 recite “computer-implemented method[s]” (claim 1, line 1; claim 14, line 1), and require performance, “by[] one or more computing devices” (claim 1, line 2; claim 14, line 2), of the following functions:
“generating representative subsequences… and target sequences with a clustering algorithm” (claim 1, lines 3-4);
“generating binding molecules with maximal activity” (claim 1, lines 7-8);
“processing the binding molecules with the maximal activity by computing non-specificity… with an exact query algorithm” (claim 1, lines 10-11);
“generating the binding molecules with minimal non-specificity” (claim 1, lines 11-12);
“processing the binding molecules with a branch and bound search algorithm” (claim 1, line 13);
“generating binding molecules with minimal non-specificity and maximal activity” (claim 1, lines 13-14);
“based on a generated sequence” (claim 12, line 2);
“generating one or more binding molecules” (claim 14, line 4);
“a binding molecule generated by the machine learning algorithm” (claim 14, line 14); and
“the generated binding molecule” (claim 14, line 15).
The requirements that the methods are computer-implemented, the recited performance of generative functions “with” and “by” algorithms, and the recited further processing of generated structures “with” algorithms all indicate that the recited function of “generating… binding molecules” must be understood as output of data representing the recited molecules (rather than physical synthesis of molecules). This interpretation has been applied throughout the claims.
Claim 1 recites the limitation of “constructing a ground set of possible binding molecules by generating representative subsequences across a diverse set of genomes and target sequences from the representative subsequences with a clustering algorithm” (claim 1, lines 3-5). The claim does not appear to require that the clustering algorithm be applied to any particular data or render any particular data. For example, the limitation does not clearly accord with any of the following particular interpretations to the exclusion of the others:
a) the ground set is output by the clustering algorithm (“constructing a ground set… with a clustering algorithm”);
b) the diverse set of genomes is input to, and the representative subsequences are output by, the clustering algorithm (“generating representative subsequences across a diverse set of genomes with a clustering algorithm”); or
c) the representative subsequences are input to, and the target sequences are output by, the clustering algorithm (“generating… target sequences from the representative subsequences with a clustering algorithm”).
The claim requires that recited function(s) be carried out “with” a clustering algorithm, while the required role of the clustering algorithm in achieving these function(s) is unclear.
Regarding clustering functionality, the specification states the following: “Applicants found representative sequences for a design option, using a collection of genomes spanning diversity of a taxon, as follows: (1) Applicants extracted the amplicon… removed sequences that are too short… and performed hierarchical clustering (average linkage)” (pp. 221-2, para. 0678).
The specification thus describes an embodiment wherein sequences extracted from a set of genomes are clustered. However, importing particular embodiments from the specification to interpret recited language having a broader reasonable interpretation in plain meaning is generally improper (MPEP 2111.01 § II). This holds true even when the specification only describes a single embodiment of claimed method steps (see Altiris, Inc. v. Symantec Corp., 318 F.3d 1363, 1370-72 (Fed. Cir. 2003)).
The recited phrase “with a clustering algorithm” is therefore interpreted broadly as “in conjunction with a clustering algorithm”, i.e., achieved through at least partial use of a clustering algorithm.
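For illustration only, the clustering functionality described in the specification (hierarchical clustering with average linkage over extracted subsequences) can be sketched as follows. This is a hypothetical minimal implementation assuming Hamming distance between equal-length subsequences; all names and the distance threshold are illustrative and are not drawn from the application.

```python
# Illustrative sketch only: average-linkage hierarchical (agglomerative)
# clustering of equal-length subsequences under Hamming distance.
from itertools import combinations

def hamming(a, b):
    """Number of mismatched positions between equal-length strings."""
    return sum(x != y for x, y in zip(a, b))

def average_linkage(cluster_a, cluster_b):
    """Mean pairwise distance between two clusters of sequences."""
    pairs = [(a, b) for a in cluster_a for b in cluster_b]
    return sum(hamming(a, b) for a, b in pairs) / len(pairs)

def cluster(seqs, threshold):
    """Merge clusters until no inter-cluster distance is within threshold."""
    clusters = [[s] for s in seqs]
    while len(clusters) > 1:
        # find the closest pair of clusters under average linkage
        (i, j), d = min(
            (((i, j), average_linkage(clusters[i], clusters[j]))
             for i, j in combinations(range(len(clusters)), 2)),
            key=lambda t: t[1])
        if d > threshold:
            break
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]  # j > i, so indices before j are unaffected
    return clusters
```

For example, `cluster(["ACGT", "ACGA", "TTTT"], 1.0)` groups the two sequences differing at one position and leaves the third alone.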
Claim Rejections - 35 USC § 101
35 USC § 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-3, 5-8, 10, 12-14 and 42 are rejected under 35 USC § 101 because the claimed invention is directed to non-statutory subject matter. This rejection is maintained from the previous Office action and has been revised to address the amended claims (filed 1/20/2026).
"Claims directed to nothing more than abstract ideas, natural phenomena, and laws of nature are not eligible for patent protection" (MPEP 2106.04 § I). Abstract ideas include mathematical concepts (including formulas, equations and calculations), and procedures for evaluating, analyzing or organizing information, which are a type of mental process (MPEP 2106.04(a)(2)). The claims as a whole, considering all claim elements both individually and in combination, do not amount to significantly more than abstract ideas.
Step 1: The Four Categories of Statutory Subject Matter (MPEP 2106.03)
The claims are directed to methods, which fall under one of the categories of statutory subject matter.
Step 2A, Prong One: Whether the Claims Set Forth or Describe a Judicial Exception (MPEP 2106.04 § II.A.1)
‘Mathematical concepts’ are relationships between variables and numbers, numerical formulas or equations, or acts of calculation, which need not be expressed in mathematical symbols (MPEP 2106.04(a)(2) § I). The claims recite elements which encompass mathematical concepts, at least under the broadest reasonable interpretation, including:
“constructing a ground set of binding molecules… by generating representative subsequences… with a clustering algorithm” (claim 1), e.g., evaluating a clustering algorithm for input data to calculate a set, wherein:
“the binding molecules are… amplification primers, hybridization probes, toehold switches, or guide molecules” (claim 8), i.e., processed data has particular representative nature,
“the clustering algorithm comprises locality sensitive hashing and the clustering algorithm is based on a consensus of the target sequences or on a mode of the target sequences” (claim 10), i.e., evaluating a hash function and calculation based on derived data, and
“the identification of each cluster is based on a generated sequence” (claim 12), i.e., clustering is based on derived data;
“computing an activity of the binding molecules and the target sequences with an activity function, and generating binding molecules with maximal activity” (claim 1), i.e., evaluating an algorithm for input data to calculate an optimal set wherein:
“the activity function comprises a classifier… [that] is a convolutional neural network” (claim 1),
“the activity function comprises a non-negative and non-monotone submodular function” (claim 3),
“the activity function comprises a regressing model… created via the convolutional neural network” (claim 5), and
“the convolutional neural network uses multiple parallel convolutional and locally connected filters of different widths” (claim 6), i.e., a particular arrangement of equations;
“computing non-specificity across the diverse set of genomes with an exact query algorithm, and generating the binding molecules with minimal non-specificity… wherein the non-specificity is a measure of sequence complementarity between a binding molecule and non-target sequences across the diverse set of genomes” (claim 1), i.e., evaluating an exact query algorithm for input data to calculate an optimal set;
“processing the binding molecules to minimize non-specificity and maximize activity with a branch and bound search algorithm and generating binding molecules with minimal non-specificity and maximal activity” (claim 1), i.e., evaluating a branch and bound search algorithm for input data to calculate an optimal set, wherein:
“the branch and bound search is performed over viral genomes” (claim 7), i.e., using particular input data;
“hashing each component to a bit vector” (claim 2), i.e., evaluating a hash function for input data to calculate a bit vector;
“constructing all combinations of flipped bits” (claim 2), i.e., performing binary mathematical operations; and
“inputting a specific target sequence into an activity maximization machine-learning algorithm and generating one or more binding molecules… based on the inputted target sequence… being optimally active for the particular target sequence” (claim 14), i.e., evaluating a machine-learning algorithm for input data and calculating an optimal set member based on an activity function.
The encompassed acts of calculation constitute mathematical concepts.
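By way of illustration, calculations of the kind recited in claim 2 (hashing components to bit vectors and constructing all combinations of flipped bits) can be sketched as follows. This hypothetical minimal sketch packs k-mers into 2-bit-per-base vectors and performs exact lookups of every flipped-bit variant, substituting a hash table for the recited tries; all names and data are illustrative.

```python
# Hypothetical sketch: hash k-mers to bit vectors (2 bits per base),
# construct all combinations of up to max_flips flipped bits, and look
# up each variant exactly in an index.
from itertools import combinations

ENC = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

def to_bits(kmer):
    """Pack a k-mer into an integer, 2 bits per base."""
    v = 0
    for base in kmer:
        v = (v << 2) | ENC[base]
    return v

def neighbors(bits, nbits, max_flips):
    """All bit vectors within max_flips flipped bits of `bits` (inclusive)."""
    for r in range(max_flips + 1):
        for positions in combinations(range(nbits), r):
            v = bits
            for p in positions:
                v ^= 1 << p
            yield v

def query(index, kmer, max_flips):
    """Exact lookup of every flipped-bit variant of kmer in the index."""
    nbits = 2 * len(kmer)
    hits = set()
    for v in neighbors(to_bits(kmer), nbits, max_flips):
        if v in index:
            hits.add(index[v])
    return sorted(hits)
```

For example, with an index built over {"ACGT", "ACGA", "GGGG"}, querying "ACGT" with up to two flipped bits retrieves "ACGA" (which differs by two bits) but not "GGGG" (which differs by four).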
‘Mental processes’ are processes that can be performed in the human mind at least with use of a physical aid, e.g., a slide rule or pen and paper (MPEP 2106.04(a)(2) § III). The independent claims additionally recite elements that encompass processes practicably performable in the human mind, at least under the broadest reasonable interpretation, including:
“splitting sequences into a configured number of components” (claim 1), i.e., dividing and organizing information; and
“identifying… a set of windows of configured nucleotide length of a set of target sequences of a sample” (claim 13), i.e., organizing information.
The above elements encompass processes of manipulating information that can be practicably performed in the human mind, at least with physical aid. Hence, the encompassed acts constitute mental processes.
Mathematical concepts and mental processes constitute enumerated groupings of abstract ideas (MPEP 2106.04(a)(2) §§ I and III). Hence, the claims recite elements that constitute an abstract idea. The claims must therefore be examined further to determine whether they integrate this abstract idea into a practical application (MPEP 2106.04(d)).
Step 2A, Prong Two: Whether the Claims Contain Additional Elements that Integrate the Judicial Exception(s) into a Practical Application (MPEP 2106.04 § II.A.2)
The claims recite additional elements which require performance of claimed functions on a computer, including:
“computer-implemented” (claims 1-3, 5-8, 10 and 12-14);
“by one or more computing devices” (claims 1 and 13-14);
“fetching corresponding tries” (claim 2), i.e., digital data structures;
“querying k-mers in each of the tries” (claim 2); and
“from a viral genome database” (claim 7).
The claims do not describe any specific computational steps by which a computer performs or carries out functions drawn to the abstract idea, nor do they provide any details of how specific structures of a computer are used to implement this abstract idea. The claims state nothing more than that a generic computer performs functions drawn to the abstract idea, and are therefore mere instructions to apply the abstract idea using a computer. As such, the claims do not integrate the abstract idea into a practical application (see MPEP 2106.04(d) § I and 2106.05(f)).
The claims further recite the following additional element:
“developing or designing a therapy or therapeutic, comprising optimizing a binding molecule for the therapy or therapeutic” (claim 42)
This element merely confines the abstract idea (i.e., mental and mathematical steps which optimize complementary sequence strings) to a particular field of use (i.e., therapy), and does not meaningfully limit execution of the abstract idea. Thus, this element is considered equivalent to mere instructions to apply the abstract idea. Mere instructions to apply a judicial exception, albeit in a particular field of use, are insufficient to integrate an abstract idea into a practical application (MPEP 2106.05(f) and 2106.05(h)).
When the claims are considered as a whole: they do not improve the functioning of a computer, other technology, or technical field (MPEP 2106.04(d)(1) and 2106.05(a)); they do not apply the abstract idea to effect a particular treatment or prophylaxis for a disease or medical condition (MPEP 2106.04(d)(2)); they do not implement the abstract idea with, or in conjunction with, a particular machine (MPEP 2106.05(b)); they do not effect a transformation or reduction of a particular article to a different state or thing (MPEP 2106.05(c)); and they do not apply or use the abstract idea in some other meaningful way beyond linking the use of the abstract idea to a particular technological environment and/or field of use (i.e., therapy; MPEP 2106.05(e) and 2106.05(h)).
Therefore, the claims do not integrate the abstract idea into a practical application. See MPEP 2106.04(d) § I.
Because the claims recite an abstract idea, and do not integrate the abstract idea into a practical application, the claims are directed to that abstract idea. Claims that are directed to an abstract idea must be examined further to determine whether the additional elements besides the abstract idea render the claims significantly more than the abstract idea. Additional elements besides the abstract idea may constitute inventive concepts that are sufficient to render the claims as significantly more (MPEP 2106.05).
Step 2B: Whether the Claims Contain Additional Elements that Amount to an Inventive Concept (MPEP 2106.05)
Mere instructions to implement an abstract idea using a computer are, when considered individually, insufficient to constitute an inventive concept that would render the claims significantly more than said abstract idea (see MPEP 2106.05(f)). Mere instructions to apply an abstract idea in a particular field of use are likewise insufficient to constitute an inventive concept that would render the claims significantly more than said abstract idea (MPEP 2106.05(f) and 2106.05(h)).
When the claims are considered as a whole: they do not improve the functioning of a computer, other technology, or technical field (MPEP 2106.04(d)(1) and 2106.05(a)); they do not implement the abstract idea with, or in conjunction with, a particular machine (MPEP 2106.05(b)); they do not effect a transformation or reduction of a particular article to a different state or thing (MPEP 2106.05(c)); they do not add specific limitations or steps, other than what is well-understood, routine and conventional activity in the field, that confine the claims to a particular useful application (MPEP 2106.05(d)); and they do not provide meaningful limitations beyond linking the use of the abstract idea to a particular technological environment and/or field of use (i.e., therapy; MPEP 2106.05(e) and 2106.05(h)).
Therefore, the claims do not provide an inventive concept and/or significantly more than the abstract idea itself. See MPEP 2106.05.
Conclusion: Claims are Directed to Non-statutory Subject Matter
For these reasons, the claims, when the limitations are considered individually and as a whole, are directed to judicial exceptions and lack an inventive concept. Hence, the claimed invention does not constitute significantly more than the judicial exceptions, so the claims are rejected under 35 USC § 101 as being directed to non-statutory subject matter.
Response to Arguments - Claim Rejections Under 35 USC § 101
In the remarks filed 1/20/2026, Applicant asserts that the claims process data in a manner that exceeds human cognitive capacity and presents supporting arguments.
Applicant alleges that the quantitative scale of genomic sequence data processed by the instant claims vastly exceeds that of typical digital image data, such as the data processed by claims indicated as eligible in Example 39 of the 2019 Revised Patent Subject Matter Eligibility Guidance, thus precluding mental performance (pg. 11, paras. 1-3).
Example 39 considers a hypothetical method for training a neural network for facial detection. The exemplified claim comprises modifying a set of digital facial images by performing digital transformation steps (mirroring, rotating, smoothing and/or contrast reduction), and training a neural network in two stages using unmodified facial images, modified facial images, and non-facial images. The exemplary analysis simply states that the exemplary claim “does not recite a mental process because the steps are not practically performed in the human mind” (pg. 9 of ‘Subject Matter Eligibility Examples: Abstract Ideas’). The scale of the processed data is not discussed as a particular factor therein.
Computer implementation does not necessarily render a claimed process significantly more than the equivalent of human mental work. The courts have consistently so held, even in cases involving claimed computer performance of high-volume operations, at efficiencies exceeding human mental ability, where humans could manually perform sufficiently simple forms of the operations.
For example, in Bancorp Servs., L.L.C. v. Sun Life Assur. Co. of Canada (U.S.), 687 F.3d 1266 (Fed. Cir. 2012), the court held that “the fact that the required calculations could be performed more efficiently via a computer does not materially alter the patent eligibility of the claimed subject matter” (687 F.3d at 1278). See also Parker v. Flook, 437 U.S. 584, 591-2 (1978); SiRF Tech., Inc. v. Int’l Trade Comm’n, 601 F.3d 1319, 1333 (Fed. Cir. 2010; hereafter “SiRF Tech”); Customedia Techs., LLC v. Dish Network Corp., 951 F.3d 1359, 1365 (Fed. Cir. 2020); and Trinity Info Media, LLC v. Covalent, Inc., 72 F.4th 1355 (Fed. Cir. 2023; hereafter “Trinity”).
Attention is particularly drawn to Trinity, wherein the court expressly rejected a similar argument that “humans could not mentally engage in the ‘same claimed process’ because they could not perform ‘nanosecond comparisons’ and aggregate ‘result values with huge numbers of polls and members’” (72 F.4th at 1363, internal citation omitted). In light of at least the Trinity decision, the argument against consideration of the claims as directed to mental processes due to high-volume data processing is found unpersuasive.
Applicant further asserts that alphabetical nucleotide representation (e.g., A, T, G, C) is merely human-convenient notation for digital genomic sequence data, which is generated exclusively through computational sequencing technology, and no more amenable to human mental processing than would be numeric RGB pixel values representing digital images such as those processed by claims indicated as eligible in Example 39 (pg. 12, para. 1).
As an initial matter, alphabetic nucleic acid notation (e.g., A, T, C, G) does not conceptually represent fundamentally-digital data structures. This notation, like the digital data structures produced by computational sequencing technology, represents the physical sequence of nucleotide structures (e.g., adenine, thymine, cytosine, guanine) present within a considered nucleic acid molecule. The scientific employment of alphabetic notation for this purpose predates the conventionality of computational sequencing technology. See reference Sanger et al (PNAS 70(4): 1209-1213; published April 1973), which describes sequencing of phage DNA via selective incorporation of radiolabeled bases, radioautographic imaging, digestion and separation, and manual nearest-neighbor analysis (pg. 1209, Abstract and r. column – pg. 1211, r. column). Sanger presents determined sequences in the form of ‘A-C-C-T’ and so on (pg. 1211, Table 1; pg. 1212, Fig. 4), i.e., alphabetic notation with separators.
The exemplary claim considered in Example 39 recites performance of operations upon digital data representations where prior manual performance of corresponding operations upon corresponding non-digital representations did not exist. The human mind is incapable of performing, e.g., a smoothing operation on a physical facial image (e.g., a drawing or photograph of a human face).
In contrast, the human mind is capable of “splitting sequences into a configured number of components” (claim 1) and “identifying… a set of windows of configured nucleotide length of a set of target sequences of a sample” (claim 13). The requirement that these steps are performed on a computer, upon digital data structures, does not alter the human mental capacity for their performance upon alphabetic sequences. These steps are equivalent to human mental work, and are identified in the pending rejection as mental processes. Thus, argument to the fundamentally digital nature of the data processed by these steps is found unpersuasive.
Applicant points to precedential cases including Research Corp. Techs. v. Microsoft Corp., 627 F.3d 859 (Fed. Cir. 2010, hereafter “RCT”), SiRF Tech, and SRI Int’l, Inc. v. Cisco Systems, Inc., 930 F.3d 1295 (Fed. Cir. 2019; hereafter “SRI”), and asserts that the scale of data processed by the instant claims vastly exceeds that of the data processed by claims held eligible in each of these cases (pg. 12, para. 2 – pg. 13, para. 2).
The stated reasoning of the court in each of these cases concerns the technical nature of the operations performed, rather than the scale of data processed. For example, claims held non-mental by the court in SiRF Tech required the use of a GPS receiver and could not be performed otherwise (see 601 F.3d at 1333). Unlike the claims at issue in RCT, SiRF Tech and SRI, the instant claims recite processes that are practicably performable in the human mind or are the equivalent of human mental work (e.g., “splitting sequences into a configured number of components” in claim 1).
Moreover, the claims recite numerous mathematical concepts. There is no requirement that recited acts of algorithmic calculation be practicably performable in the human mind for proper identification as mathematical concepts, and thus, abstract ideas. Indeed, the majority of steps in the claimed method are identified in the pending rejection as mathematical concepts rather than mental processes. See MPEP 2106.04(a)(2) § I.
Thus, the argument regarding high-volume data processing is found unpersuasive.
Applicant notes incorporation via amendment of the requirement that the activity function comprises a convolutional neural network classifier, and asserts that the human mind is not equipped to perform the recited operations on billions of base pairs across a diverse set of genomes while simultaneously optimizing for maximal activity and minimal non-specificity using convolutional neural networks (pg. 13, para. 3 – pg. 14, para. 2).
A convolutional neural network is a series of mathematical functions (as is a branch and bound search algorithm). Applying these mathematical functions to satisfy particular maxima/minima constraints is a process of sequential calculation. The voluminous nature of the data involved does not change the mathematical nature of its processing in the claimed manner, and mathematical concepts are abstract ideas. Thus, the argument regarding mental infeasibility of the claimed complex, high-volume computational processing is found unpersuasive.
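For illustration of this point, a single convolutional filter applied to a one-hot encoded sequence reduces to sums of products. The following hypothetical sketch (the weights, sequence and function names are illustrative, not drawn from the application or the cited art) shows the arithmetic.

```python
# Minimal arithmetic sketch: one 1-D convolutional filter over a one-hot
# encoded nucleotide sequence. A convolutional layer is this calculation
# repeated for many filters, followed by further elementwise functions.
ONEHOT = {"A": [1, 0, 0, 0], "C": [0, 1, 0, 0],
          "G": [0, 0, 1, 0], "T": [0, 0, 0, 1]}

def conv1d(seq, kernel):
    """Valid-mode convolution; kernel is a list of per-position 4-vectors."""
    x = [ONEHOT[b] for b in seq]
    width = len(kernel)
    out = []
    for i in range(len(x) - width + 1):
        # dot product of the kernel with the window starting at position i
        out.append(sum(w[c] * x[i + j][c]
                       for j, w in enumerate(kernel)
                       for c in range(4)))
    return out
```

For example, a width-2 kernel weighting "A" then "C" scores each window of "GACGA", peaking at the "AC" occurrence; every output value is produced purely by multiplication and addition.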
For the above reasons, the arguments are considered unpersuasive and the rejection is maintained.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 USC §§ 102 and 103 is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 USC § 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 USC § 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 USC § 102(b)(2)(C) for any potential 35 USC § 102(a)(2) prior art against the later invention.
Claims 1, 7-8, and 13-14 are rejected under 35 USC § 103 as being unpatentable over Duitama et al (Nucleic Acids Research 37(8): 2483-2492; published 3/5/2009; previously cited), in view of Klau et al (Bioinformatics 20(Suppl. 1): 186-193; published 2004; previously cited) and Kim et al (Nature Biotechnology 36(3): 239-241; published 1/29/2018; on IDS filed 11/4/2024). The new grounds of rejection presented herein were necessitated by Applicant’s amendment of the claims (filed 1/20/2026).
Claim 1 recites a computer-implemented method, comprising:
a) constructing a ground set of possible binding molecules by generating representative subsequences across a diverse set of genomes and target sequences from the representative subsequences with a clustering algorithm;
b) processing the ground set of binding molecules by:
1. computing an activity of the binding molecules and the target sequences with an activity function, and
2. generating binding molecules with maximal activity across the target sequences, wherein the activity function comprises a classifier that is a convolutional neural network;
c) processing the binding molecules with the maximal activity by:
1. computing non-specificity across the diverse set of genomes with an exact query algorithm, wherein the non-specificity is a measure of sequence complementarity between a binding molecule and non-target sequences across the diverse set of genomes, and
2. generating the binding molecules with minimal non-specificity across the diverse set of genomes; and
d) processing the binding molecules to minimize non-specificity and maximize activity with a branch and bound search algorithm and generating binding molecules with minimal non-specificity and maximal activity.
With respect to claim 1, Duitama discloses “a new tool, PrimerHunter, that can be used to select highly sensitive and specific primers for virus subtyping” (pg. 2483, Abstract) with “application for… designing specific probes for gene expression and genome enrichment microarrays” (pg. 2491, l. column), comprising:
a) “generating an exhaustive set of candidate primers” (pg. 2484, r. column) that “efficiently amplify any one of the target sequences representing different isolates of [a viral] subtype of interest” (pg. 2484, l. column), i.e., possible binding molecules that bind target sequences across a diverse set of genomes, by “storing all occurrences in the target sequences of ‘seed’ nucleotide patterns consistent with [a] given mask M by aligning the mask M at every position i of every target sequence t, and… extracting… the nucleotides that appear at positions aligned with the 1s of M” (pg. 2485, r. column), i.e., generating representative subsequences with a clustering algorithm; and
b) steps of:
1. “estimat[ing] the melting temperature of primer-target… duplexes using [SantaLucia’s] nearest-neighbor model… [and] finding the optimum thermodynamic alignments for all evaluated duplexes, i.e., the alignments with minimum Gibbs free energy” (pg. 2485, l. column), i.e., computing activity of binding molecules and target sequences with an activity function, and
2. “For every… candidate [primer], PrimerHunter… computes the melting temperature of p with the Watson-Crick complement of t at each of [matching] positions, retaining p only if [the melting temperature is greater than or equal to a target threshold]” (pg. 2486, l. column) i.e., generating binding molecules with maximal activity score across the target sequences;
c) steps of:
1. “estimat[ing] the melting temperature of… primer-nontarget duplexes using [a] nearest-neighbor model” (pg. 2485, l. column) and “To avoid nonspecific amplification… further requir[ing] for each selected primer to have a melting temperature T(p,t,i) below a user-specified threshold T(max nontarget) at every position i of every nontarget sequence t” (pg. 2485, l. column), i.e., computing non-specificity across the diverse set of genomes with an exact query algorithm, and therefore
2. “comput[ing] the maximum melting temperature between p and the Watson-Crick complements of nontarget sequences, retaining p only if [the maximum melting temperature is lower than or equal to a nontarget threshold]” (pg. 2486, l. column), i.e., generating the binding molecules with minimal non-specificity; and
d) “seek[ing] and report[ing] a small set of primer pairs that collectively amplify all targets. The set of pairs is constructed using the classic greedy set cover algorithm… where the elements to be covered are target sequences and the sets correspond to pairs of compatible primers that amplify at least one of the target sequences and none of the nontargets” (pg. 2486, r. column), i.e., processing the binding molecules, and generating binding molecules with minimal non-specificity and maximal activity.
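The classic greedy set cover algorithm Duitama invokes can be sketched as follows; this is a hypothetical minimal implementation with illustrative names and data, not code from the reference.

```python
# Illustrative sketch of greedy set cover: repeatedly pick the primer
# pair that amplifies the most still-uncovered target sequences.
def greedy_set_cover(targets, pairs):
    """pairs: dict mapping a primer-pair id to the set of targets it amplifies."""
    uncovered = set(targets)
    chosen = []
    while uncovered:
        best = max(pairs, key=lambda p: len(pairs[p] & uncovered))
        if not pairs[best] & uncovered:
            raise ValueError("some targets cannot be covered")
        chosen.append(best)
        uncovered -= pairs[best]
    return chosen
```

For example, with targets {1, 2, 3, 4} and pairs amplifying {1, 2, 3}, {3, 4} and {4}, the greedy choice selects the first and second pairs.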
Duitama particularly discloses use of “Dinkelbach’s fractional programming algorithm” (pg. 2485, r. column) to compute the maximum melting temperature over all possible alignments of each primer-target pair. Duitama does not disclose computing activity of the binding molecules and the target sequences using a convolutional neural network; or processing the binding molecules to minimize non-specificity and maximize activity with a branch and bound search algorithm.
Kim presents convolutional neural network-based algorithms that predict activity between guide RNAs and target sequences (pg. 239, Abstract), and teaches that their algorithms significantly outperform previous methods for the prediction of guide RNA-target activity (pg. 240, l. column – pg. 241, l. column). Kim does not teach use of a branch and bound search algorithm.
Klau discusses design of minimal probe sets that hybridize to multiple target sequences, and presents an integer linear programming (ILP) technique comprising constituent steps of: computing a binary ‘target-probe incidence’ matrix; and therefrom computing an optimal ‘design’ sub-matrix via a branch-and-cut algorithm, i.e., a branch and bound search algorithm (pg. 186, Abstract; pg. 187, l. column – pg. 188, r. column).
Klau explicitly teaches that performance of the ILP technique in computing a design matrix (i.e., an optimal probe set) does not depend on particular derivation of the incidence matrix, but exemplifies derivation based on target-probe complementarity and thermodynamic constraints (pg. 187). Klau further exemplifies generating initial probe candidates for each of 10 sequence families (i.e., branches, see pg. 190, Fig. 2) according to constraints on nucleotide length and stability as computed according to SantaLucia’s nearest neighbor model (pg. 190, r. column).
Klau presents findings that optimal sets produced by their technique often contain less than half of the oligo members of those produced by a greedy heuristic while retaining excellent target-decoding capabilities (pg. 191, l. column – pg. 192, l. column, Tables 2-5).
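For illustration, a branch and bound search for a minimum probe set over a binary target-probe incidence matrix can be sketched as below. This is a deliberately simplified stand-in for Klau's branch-and-cut ILP (which adds cutting planes and LP relaxation bounds); all names are hypothetical:

```python
# Simplified branch-and-bound for minimum set cover over an incidence
# matrix: branch on include/exclude of each probe, bound by incumbent size.
def min_cover(incidence, n_targets):
    """incidence: list of frozensets, probe i -> targets it hybridizes to."""
    best = {"size": len(incidence) + 1, "set": None}

    def search(idx, chosen, covered):
        if len(chosen) >= best["size"]:
            return  # bound: cannot improve on the incumbent solution
        if len(covered) == n_targets:
            best["size"], best["set"] = len(chosen), list(chosen)
            return
        if idx == len(incidence):
            return
        # branch 1: include probe idx
        search(idx + 1, chosen + [idx], covered | incidence[idx])
        # branch 2: exclude probe idx
        search(idx + 1, chosen, covered)

    search(0, [], frozenset())
    return best["set"]
```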
With respect to claim 5, Kim teaches that a disclosed algorithm (Seq-deepCpf1) is a regression model trained via a convolutional neural network (pg. 239, Abstract and l. column).
With respect to claim 7, Duitama exemplifies “design[ing] primer pairs for 14 HA subtypes using the complete avian influenza sequences available in the NCBI flu database” (pg. 2484, r. column), i.e., a viral database. Duitama also states that “Complete classification of unknown viral samples into subtypes can be achieved by using PrimerHunter to design a specific primer pair (or set of primers) for each subtype” (pg. 2491, l. column).
With respect to claim 8, Duitama discloses “generating an exhaustive set of candidate primers… [with] desired amplification/nonamplification properties” (pg. 2484, r. column).
With respect to claim 13, Duitama discloses “taking substrings with lengths within a user-specified interval”, i.e., windows of a configured nucleotide length, “from one or more of the target sequences” (pg. 2485, r. column – pg. 2486, l. column).
Claim 14 recites a computer-implemented method, comprising:
a) inputting a particular target sequence into an activity maximization machine-learning algorithm;
b) generating one or more binding molecules from the algorithm; and
c) receiving a binding molecule, generated by the machine-learning algorithm, being optimally active for the particular target sequence.
With respect to claim 14, Duitama discloses:
a) “tak[ing] as input sets of both target and nontarget sequences” (pg. 2483, Abstract);
b/c) “select[ing] primer pairs predicted to amplify all target sequences and none of the nontarget sequences… PrimerHunter automatically seeks and reports a small set of primer pairs that collectively amplify all targets and none of the nontargets” (pg. 2491, l. column), i.e., the user receives generated binding molecules being optimally active for the inputted particular target sequences.
Duitama does not disclose utilizing a machine-learning algorithm.
Kim presents convolutional neural network-based algorithms (i.e., machine-learning algorithms) that predict activity between guide RNAs and target sequences (pg. 239, Abstract), and teaches that their algorithms significantly outperform previous methods for the prediction of guide RNA-target activity (pg. 240, l. column – pg. 241, l. column).
An invention would have been obvious to one of ordinary skill in the art if some teaching in the prior art would have led that person to combine prior art reference teachings to arrive at the claimed invention. Before the effective filing date of the claimed invention, said practitioner would have combined the primer design techniques taught by Duitama with the branch-and-cut algorithm of Klau, because Klau exemplifies generation of initial candidate space according to thermodynamic constraints (computed as in Duitama; pp. 187 and 190), and teaches that their branch-and-cut technique renders functional sets with significantly fewer members than greedy heuristics (such as that utilized by Duitama; pg. 191, l. column – pg. 192, l. column, Tables 2-5). Thus, one of ordinary skill in the art would predict that simple substitution of the branch-and-cut algorithm of Klau in place of that of the greedy set cover algorithm disclosed by Duitama would yield improved primer sets.
Said practitioner would have had a reasonable expectation of success because Duitama and Klau both discuss computer-implemented, algorithmic methods of evaluating parameter space to design optimal primer sets for multiple targets.
An invention would have been obvious to one of ordinary skill in the art if some teaching in the prior art would have led that person to combine prior art reference teachings to arrive at the claimed invention. Before the effective filing date of the claimed invention, said practitioner would have implemented a convolutional neural network-based regression model, as taught by Kim, to predict binding activity within the primer design framework taught by Duitama, in view of Klau, because Kim teaches that their CNN-based algorithms significantly outperform previous methods for the prediction of guide RNA-target activity (pg. 240, l. column – pg. 241, l. column).
Furthermore, Duitama states that primer-template melting temperatures are computed by summing experimentally estimated contributions of constituent dimer duplexes and additional model terms, i.e., solving an experimentally estimated regression equation (pg. 2485, l. column). Duitama cautions that experimental data on melting temperature of duplexes with mismatches is limited (pg. 2487, r. column). Kim presents an algorithmic (CNN-based) means of constructing a well-performing regression model for predicting sequence-sequence binding activity that does not rely on prior experimental estimation of constituent dimer contributions.
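The nearest-neighbor style of computation Duitama describes can be sketched, purely for illustration, as below. The dimer parameter values are placeholders, not SantaLucia's published table, and the concentration handling is simplified:

```python
# Hedged sketch of a nearest-neighbor melting temperature computation:
# total enthalpy/entropy are sums of per-dimer contributions, and Tm
# follows from a two-state model. Parameter values are placeholders.
import math

DIMER = {"AA": (-7.9, -22.2), "AC": (-8.4, -22.4), "CA": (-8.5, -22.7),
         "CG": (-10.6, -27.2), "GT": (-8.4, -22.4), "TG": (-8.5, -22.7)}

def melting_temp(seq, conc=0.25e-6, R=1.987e-3):
    dH = dS = 0.0
    for i in range(len(seq) - 1):
        h, s = DIMER.get(seq[i:i + 2], (-8.0, -22.0))  # placeholder default
        dH += h           # kcal/mol
        dS += s / 1000.0  # kcal/(mol*K)
    # two-state model: Tm = dH / (dS + R ln C), reported in Celsius
    return dH / (dS + R * math.log(conc)) - 273.15
```

Such a model is only as good as its experimentally estimated dimer table, which is the limitation Duitama notes for mismatched duplexes and which a trained CNN regression model, as in Kim, sidesteps.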
Said practitioner would have had a reasonable expectation of success because Duitama, Klau and Kim all discuss computer-implemented, algorithmic methods of predicting binding activity between short RNA molecules (primers and guide RNAs) and target sequences.
In this way the disclosure of Duitama, in view of Klau and Kim, makes obvious the limitations of claims 1, 5, 7-8, and 13-14. Thus, the invention is prima facie obvious.
Claim 2 is rejected under 35 USC § 103 as being unpatentable over Duitama, in view of Klau and Kim, as applied to claim 1 above, and further in view of Wright (in Foundations of Genetic Algorithms, Volume 1, pp. 205-218, Morgan Kaufmann Publishers; published 1991; previously cited). The new grounds of rejection presented herein were necessitated by Applicant’s amendment of the claims (filed 1/20/2026).
With respect to claim 2, Duitama discloses “build[ing] a hash table storing all occurrences in the target sequences of ‘seed’ nucleotide patterns consistent with [a] given mask M… by aligning the mask M at every position i of every target sequence t, and storing in the hash table an occurrence of the seed pattern created by extracting… the nucleotides that appear at positions aligned with the 1s of M” (pg. 2485, r. column), i.e., splitting sequences into a configured number of components and hashing each component to a bit vector.
Duitama further discloses that “Once the hash table is constructed, candidate primers are generated by taking substrings with lengths within a user-specified interval”, i.e., k-mers, “from one or more of the target sequences” (pg. 2485, r. column – pg. 2486, l. column) and discusses “suffix-tree-based algorithms that search for long substrings that appear exactly or with a small number of mutations in all (or a large percentage) of the sequences of a given target set” (pg. 2484, l. column), i.e., that query k-mers in each of corresponding suffix trees (tries). Duitama characterizes the activity of primer design algorithms as exploring a “primer search space” (pg. 2484, r. column).
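The mask-based seed extraction Duitama describes can be sketched, for illustration, as follows; the function and variable names are hypothetical:

```python
# Hedged sketch of spaced-seed hashing: slide a binary mask over each
# target and record occurrences of the pattern formed by the nucleotides
# aligned with the 1s of the mask. Illustrative only.
from collections import defaultdict

def build_seed_table(targets, mask):
    """targets: dict name -> sequence; mask: string over {'0', '1'}."""
    table = defaultdict(list)
    w = len(mask)
    for name, seq in targets.items():
        for i in range(len(seq) - w + 1):
            seed = "".join(seq[i + j] for j in range(w) if mask[j] == "1")
            table[seed].append((name, i))  # occurrence of this seed pattern
    return table
```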
Duitama does not disclose constructing all combinations of flipped bits.
Klau discusses design of minimal probe sets that hybridize to multiple target sequences via an integer linear programming (ILP) technique comprising computing a binary ‘target-probe incidence’ matrix (pg. 186, Abstract; pg. 187). Klau does not teach constructing all combinations of flipped bits.
Kim presents convolutional neural network-based algorithms that predict activity between guide RNAs and target sequences (pg. 239, Abstract). Kim does not teach constructing all combinations of flipped bits.
Wright discusses “the application of genetic algorithms to optimization problems over several real parameters” (pg. 205, Abstract), and teaches that “Mutation is a common reproduction operator used for finding new points in the search space to evaluate. When a chromosome is chosen for mutation, a random choice is made of some of the genes of the chromosome, and these genes are modified. In the case of a binary-coded genetic algorithm, the corresponding bits are ‘flipped’ from 0 to 1 or from 1 to 0” (pg. 209). In this way, Wright teaches that flipping bits in a bit vector representing a biological sequence is representative of mutating components of the sequence. Wright further teaches that “mutation… allow[s] arbitrary points in space to be reached” (pg. 212), and exemplifies applications of the discussed algorithms including “optimizing simulation models… engineering design and control problems, and setting weights on neural networks” (pg. 206).
Additionally, Wright teaches that “In the theory of genetic algorithms… a schema is a ‘similarity template’ which describes a subset of the space of chromosomes… In the binary case, a schema is described by a string over the alphabet {0, 1, *}, where a * means that the corresponding position in the string is unrestricted and can be either a 0 or a 1… connected schemata… capture locality information about the function” (pg. 210). Wright thereby teaches use of locality-sensitive binary schema, of which the disclosed mask of Duitama is a special case, for the discussed algorithms.
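The mutation operator Wright describes, and the exhaustive construction of all combinations of flipped bits recited by claim 2, can be sketched as follows; these are minimal illustrations under hypothetical names, not an implementation from any cited reference:

```python
# Hedged sketch of Wright-style binary mutation and exhaustive flip
# enumeration over a bit vector. Illustrative only.
import random
from itertools import combinations

def mutate(bits, rate, rng=None):
    """Flip each bit independently with probability `rate`."""
    rng = rng or random.Random()
    return [b ^ 1 if rng.random() < rate else b for b in bits]

def all_flip_combinations(bits):
    """Every vector reachable by flipping any subset of bit positions (2**n total)."""
    out = []
    for k in range(len(bits) + 1):
        for pos in combinations(range(len(bits)), k):
            v = bits[:]
            for p in pos:
                v[p] ^= 1
            out.append(v)
    return out
```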
An invention would have been obvious to one of ordinary skill in the art if some teaching in the prior art would have led that person to combine prior art reference teachings to arrive at the claimed invention. Before the effective filing date of the claimed invention, said practitioner would have implemented construction and evaluation of all combinations of flipped bits, to enhance the primer design techniques taught by Duitama, in view of Klau and Kim, because Wright teaches that bit flipping simulates sequence mutation (pg. 209) and mutation expands the search space of optimization algorithms (pg. 212). Said practitioner would have had a reasonable expectation of success because Duitama, Klau and Wright all discuss computer-implemented, algorithmic methods of evaluating parameter space to solve optimization and design problems.
In this way the disclosure of Duitama, in view of Klau, Kim and Wright, makes obvious the limitations of claim 2. Thus, the invention is prima facie obvious.
Claim 3 is rejected under 35 USC § 103 as being unpatentable over Duitama, in view of Klau and Kim, as applied to claim 1 above, and further in view of Buchbinder et al (Proceedings of the 25th Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 1433-1452; Society for Industrial and Applied Mathematics Publication Library; published 2014; previously cited). The new grounds of rejection presented herein were necessitated by Applicant’s amendment of the claims (filed 1/20/2026).
With respect to claim 3, Duitama discloses “estimat[ing] the melting temperature of primer-target and primer-nontarget duplexes using [a] nearest neighbor model” (pg. 2485, l. column). Duitama further discloses “using Dinkelbach’s fractional programming algorithm… to maximize [the melting temperature] equation over the set of alignments” (pg. 2485, r. column). Duitama does not disclose use of an algorithm for maximizing a non-negative and non-monotone submodular function.
Klau discusses design of minimal probe sets that hybridize to multiple target sequences via an integer linear programming (ILP) technique implementing a branch-and-cut algorithm to minimize an objective function (pg. 186, Abstract; pg. 187, l. column – pg. 188, r. column). Klau does not specifically teach use of an algorithm for maximizing a non-negative and non-monotone submodular function.
Kim presents convolutional neural network-based algorithms that predict activity between guide RNAs and target sequences (pg. 239, Abstract). Kim does not teach use of an algorithm for maximizing a non-negative and non-monotone submodular function.
Buchbinder discusses “the problem of maximizing a non-monotone and non-negative submodular function subject to a cardinality constraint” (pg. 1434, l. column), and characterizes the problem thus: “Given a cardinality parameter k, the goal is to find a subset S ⊆𝒩 maximizing f(S) such that |S| ≤ k” (pg. 1433, r. column), i.e., to select a finite subset of elements that maximizes a function.
Buchbinder teaches that “submodular… [set] functions are ubiquitous in various disciplines, including combinatorics, optimization… algorithmic game theory, and machine learning. Many well known functions, such as… rank functions of matroids, entropy, mutual information, [and] coverage functions… are submodular” (pg. 1433, r. column). Buchbinder illustrates prospective practical applications thus: “consider the problem of document summarization whose applications span different fields such as natural language processing and information retrieval. In this problem the goal is to extract a small number of textual units from given text documents as to form a short summary. The quality of the summary is a nonmonotone and non-negative submodular function as similarities between selected textual units are deducted from the total benefit these textual units contribute to the summary ... The cardinality constraint is due to real-world restrictions that limit the size of the summary. It is important to note that for the above text summarization problem in particular, and for many other applications in general, fast algorithms are necessary since the size of the ground set N is very large” (pg. 1434, l. column).
Buchbinder presents a “fast algorithm… for maximizing a general non-negative submodular function subject to choosing at most k elements” (pg. 1436, l. column), i.e., an optimized subset, and further teaches that “algorithms we provide are very fast… as opposed to previous known algorithms which are continuous in nature, and thus, too slow for applications in… practical settings” (pg. 1433, l. column).
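For illustration, a "random greedy" style procedure for this class of problem can be sketched as below. This is a simplified sketch in the spirit of Buchbinder's combinatorial algorithms (omitting their dummy-element handling and approximation analysis); all names are hypothetical:

```python
# Hedged sketch of random-greedy submodular maximization under a
# cardinality constraint: at each step, sample uniformly from the k
# elements with largest marginal gain. Simplified for illustration.
import random

def random_greedy(ground, f, k, rng=None):
    """ground: list of elements; f: set function; k: cardinality bound."""
    rng = rng or random.Random(0)
    S = set()
    for _ in range(k):
        rest = [e for e in ground if e not in S]
        # rank remaining elements by marginal gain f(S + e) - f(S)
        rest.sort(key=lambda e: f(S | {e}) - f(S), reverse=True)
        top = rest[:k]
        if not top:
            break
        e = rng.choice(top)  # sample uniformly from top-k candidates
        if f(S | {e}) - f(S) > 0:  # simplification: keep only improving picks
            S.add(e)
    return S
```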
An invention would have been obvious to one of ordinary skill in the art if some teaching in the prior art would have led that person to combine prior art reference teachings to arrive at the claimed invention. Before the effective filing date of the claimed invention, said practitioner would have implemented an algorithm for maximizing a non-negative and non-monotone submodular function, as taught by Buchbinder, to enhance the primer design techniques taught by Duitama, in view of Klau and Kim, because Buchbinder teaches that optimization problems comprising selecting a constrained subset of unique elements that maximize a score function from a large ground set (e.g., selecting a set of optimal binding molecules) involve maximizing a non-negative and non-monotone submodular function (pg. 1433, r. column; pg. 1434, l. column). Said practitioner would have had a reasonable expectation of success because Duitama, Klau and Buchbinder all discuss algorithmic methods of solving optimization problems.
In this way the disclosure of Duitama, in view of Klau, Kim and Buchbinder, makes obvious the limitations of claim 3. Thus, the invention is prima facie obvious.
Claim 6 is rejected under 35 USC § 103 as being unpatentable over Duitama, in view of Klau and Kim, as applied to claims 1 and 5 above, and further in view of Hassanzadeh et al (2016 IEEE Int’l Conference on Bioinformatics and Biomedicine, pp. 178-183, IEEE Xplore; published 1/19/2017; previously cited). The new grounds of rejection presented herein were necessitated by Applicant’s amendment of the claims (filed 1/20/2026).
With respect to claim 6, Duitama discloses “using [a] nearest neighbor model” (pg. 2485, l. column). Duitama does not disclose multiple parallel convolutional and locally connected filters of different widths.
Klau discusses design of minimal probe sets that hybridize to multiple target sequences via an integer linear programming (ILP) technique (pg. 186, Abstract). Klau does not teach multiple parallel convolutional and locally connected filters of different widths.
Kim describes architectural features of their algorithms including convolution and fully connected layers with different numbers of filters and units, and discusses variation of model hyperparameters including number of layers, filters, units, and filter lengths (Online Methods, second pg., l. column – r. column). Kim does not specifically teach multiple parallel convolutional and locally connected filters of different widths.
Hassanzadeh discusses “DeeperBind, a long short term recurrent convolutional network for prediction of protein binding specificities with respect to DNA probes”, and teaches that “DeeperBind can model the positional dynamics of probe sequences and hence reckons with the contributions made by individual sub-regions in DNA sequences, in an effective way” (pg. 178, Abstract).
Hassanzadeh illustrates their network architecture with a block diagram, which depicts multiple parallel convolutional and locally connected filters of different widths (pg. 181, Fig. 2 and caption; see also pg. 180, Fig. 1). Hassanzadeh teaches that “integration of RNN with CNN makes handling variable-length inputs possible, and therefore data that are produced by different platforms can be readily used in a unified framework to achieve a more accurate model” (pg. 180, r. column).
An invention would have been obvious to one of ordinary skill in the art if some teaching in the prior art would have led that person to combine prior art reference teachings to arrive at the claimed invention. Before the effective filing date of the claimed invention, said practitioner would have implemented multiple parallel convolutional and locally connected filters of different widths, as taught by Hassanzadeh, to enhance the primer design techniques taught by Duitama, in view of Klau and Kim, because Kim discusses variation of hyperparameters as part of the model optimization process (Online Methods, second pg., r. column) and Hassanzadeh teaches that integration of variable-length convolutional and locally-connected filters enables algorithmic evaluation of variable-length sequence data produced by different platforms (pg. 180, r. column). Said practitioner would have had a reasonable expectation of success because Duitama, Klau, Kim and Hassanzadeh all discuss algorithmic methods of evaluating sequence binding specificities.
In this way the disclosure of Duitama, in view of Klau, Kim and Hassanzadeh, makes obvious the limitations of claim 6. Thus, the invention is prima facie obvious.
Claims 10 and 12 are rejected under 35 USC § 103 as being unpatentable over Duitama, in view of Klau and Kim, as applied to claim 1 above, and further in view of Ondov et al (Genome Biology 17: article 132, 14 pages; published 2016; previously cited). The new grounds of rejection presented herein were necessitated by Applicant’s amendment of the claims (filed 1/20/2026).
With respect to claims 10 and 12, Duitama discloses that “A widely used approach to primer design for virus identification relies on first constructing a ‘consensus gestalt’ from a multiple alignment of target virus sequences… [and] can be quite successful at finding species-specific primers” (pg. 2483, r. column). In other words, carrying out the design process based on a generated consensus of the target sequences.
Duitama further discloses “build[ing] a hash table storing all occurrences in the target sequences of ‘seed’ nucleotide patterns consistent with [a] given mask M by aligning the mask M at every position i of every target sequence t, and storing in the hash table an occurrence of the seed pattern created by extracting… the nucleotides that appear at positions aligned with the 1s of M” (pg. 2485, r. column), i.e., clustering comprising hashing.
However, Duitama does not disclose clustering comprising locality-sensitive hashing.
Ondov discusses Mash, an “exten[sion of] the MinHash dimensionality-reduction technique… [that] enable[s] efficient clustering and search of massive sequence collections… [and] reduces large sequences and sequence sets to small, representative sketches” (pg. 1, Abstract). Ondov teaches that “[t]he MinHash technique is a form of locality-sensitive hashing… Because of the extremely low memory and CPU requirements of this probabilistic approach, MinHash is well suited for data-intensive problems in genomics. To facilitate this, we have developed Mash for the flexible construction, manipulation, and comparison of MinHash sketches from genomic data” (pg. 1, r. column).
Ondov states that “Mash combines the high specificity of matching-based approaches with the dimensionality reduction of statistical approaches” (pg. 2, l. column), and “Potential applications [of Mash] include any problem where an approximate global distance is acceptable, e.g., to… search genomic databases” (pg. 1, l. column).
Ondov exemplifies efficient, high-volume use cases of Mash including clustering the entire NCBI RefSeq genome database (Release 70 with 54,118 genomes), real-time database search using assembled or unassembled Illumina, Pacific BioSciences, and Oxford Nanopore data, and scalable clustering of hundreds of metagenomic samples by composition (pg. 1, Abstract; pg. 2, l. column). Ondov states that the clustering process is easily parallelized, which could reduce the wall clock time of clustering the entire NCBI RefSeq database to minutes with sufficient computer resources (pg. 2, r. column).
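The MinHash technique underlying Mash can be sketched, for illustration, as below. This is a minimal bottom-s sketch with hypothetical parameter values, not Mash's actual implementation (which uses MurmurHash, canonical k-mers, and a distance formula over the Jaccard estimate):

```python
# Hedged sketch of MinHash (a form of locality-sensitive hashing): reduce
# each sequence to its s smallest k-mer hashes, then estimate the Jaccard
# index from the shared fraction of a merged bottom-s sketch.
import hashlib

def sketch(seq, k=4, s=8):
    kmers = {seq[i:i + k] for i in range(len(seq) - k + 1)}
    hashes = sorted(int(hashlib.sha1(m.encode()).hexdigest(), 16) for m in kmers)
    return set(hashes[:s])  # bottom-s sketch

def jaccard_estimate(a, b, s=8):
    merged = sorted(a | b)[:s]  # bottom-s of the union of both sketches
    shared = sum(1 for h in merged if h in a and h in b)
    return shared / len(merged)
```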
Additionally, Ondov indicates that Mash source code, precompiled binary releases, documentation and additional supporting data are all freely available online and that Mash has been tested for compatibility with Linux and Mac operating systems (pg. 13, l. column).
An invention would have been obvious to one of ordinary skill in the art if some teaching in the prior art would have led that person to combine prior art reference teachings to arrive at the claimed invention. Before the effective filing date of the claimed invention, said practitioner would have implemented locality sensitive hashing, as taught by Ondov, to enhance the primer design techniques taught by Duitama, in view of Klau and Kim, because Ondov teaches that locality sensitive hashing enables extremely low-memory, highly efficient clustering and search of genomic sequence data (pg. 1, Abstract; pg. 8, l. column). Said practitioner would have had a reasonable expectation of success because Duitama and Ondov both discuss computational methods of searching sequence space.
In this way the disclosure of Duitama, in view of Klau, Kim and Ondov, makes obvious the limitations of claims 10 and 12. Thus, the invention is prima facie obvious.
Claim 42 is rejected under 35 USC § 103 as being unpatentable over Duitama, in view of Klau and Kim, as applied to claim 1 above, and further in view of Chuai et al (Genome Biology 19: article 80, pp. 1-18; published 6/26/2018; previously cited). The new grounds of rejection presented herein were necessitated by Applicant’s amendment of the claims (filed 1/20/2026).
With respect to claim 42, Duitama discloses that “primers selected by PrimerHunter have high sensitivity and specificity for target sequences” (pg. 2483, Abstract). Duitama does not disclose optimizing a binding molecule for a therapy or therapeutic.
Klau discusses design of minimal probe sets that hybridize to multiple target sequences (pg. 186, Abstract). Klau does not teach optimizing a binding molecule for a therapy or therapeutic.
Kim presents convolutional neural network-based algorithms that predict activity between guide RNAs of the class 2 CRISPR-Cas system, which is used for genome editing, and target sequences (pg. 239, Abstract and l. column). Kim does not teach optimizing a binding molecule (e.g., guide RNA) for a therapy or therapeutic.
Chuai discusses “effective application of CRISPR systems… [and] CRISPR-based gene knockout” (pg. 1, Abstract and l. column) and teaches that “In this system, a single-guide RNA (sgRNA) guides Cas9 proteins to specific genomic targets. Recognition and cleavage occur via complementarity of a 20-nucleotide (nt) sequence within the sgRNA to the genomic target, i.e., the on-target, upstream of a protospacer adjacent motif (PAM)… a major challenge for its effective application is to accurately predict the sgRNA on-target knockout efficacy and off-target (OT) profile beforehand. Accurate prediction would facilitate the optimized design of sgRNAs by maximizing their on-target efficacy (high sensitivity) and minimizing their off-target effects (high specificity)” (pg. 1, l. column).
Chuai further teaches that “improvement in off-target prediction… is very important since near-zero off-targeting is the ultimate goal for all CRISPR-based gene therapies” (pg. 8, r. column). In this way, Chuai teaches design of guide molecules that bind to target sequences, with high specificity and sensitivity, to enable effective delivery of CRISPR systems comprising Cas proteins for therapeutic purposes.
An invention would have been obvious to one of ordinary skill in the art if some teaching in the prior art would have led that person to combine prior art reference teachings to arrive at the claimed invention. Before the effective filing date of the claimed invention, said practitioner would have utilized the primer design techniques disclosed by Duitama, in view of Klau and Kim, to design guide molecules for delivery of a therapeutic, because Duitama teaches that their method produces sequences which bind to target sequences with high specificity and sensitivity, while Chuai teaches that effective CRISPR-based therapy requires design of guide molecules that bind to target sequences with high specificity and sensitivity (pg. 1, Abstract and l. column; pg. 8, r. column). Thus, Chuai renders design of guide molecules for CRISPR therapy as an obvious application of the disclosed techniques of Duitama.
Said practitioner would have had a reasonable expectation of success because Duitama, Klau and Chuai all discuss computer-implemented, algorithmic methods of designing binding molecules. Additionally, Kim and Chuai both discuss computer-implemented, algorithmic methods of predicting binding activity of CRISPR-Cas guide RNAs to target sequences.
In this way the disclosure of Duitama, in view of Klau, Kim and Chuai, makes obvious the limitations of claim 42. Thus, the invention is prima facie obvious.
Response to Arguments - Claim Rejections Under 35 USC § 103
In the remarks filed 1/20/2026, Applicant traverses the rejections under 35 USC § 103 and alleges particular points of distinction with respect to the cited prior art.
Applicant alleges that a person of ordinary skill in the art would not have been motivated to substitute the greedy set cover algorithm disclosed by Duitama with the branch-and-cut algorithm taught by Klau because Duitama’s method achieves perfect success for reasonable design parameters, and employs the greedy set cover algorithm as an edge-case fallback when a single satisfactory primer pair cannot be found by the primary method (pg. 16, paras. 2-3; pg. 17, para. 2).
Duitama discloses application of PrimerHunter to design primer pairs for 14 North American avian influenza subtypes, and presents findings that PrimerHunter was able to identify feasible primer pairs for each of these subtypes when amplicon length was constrained between 75 and 200 bp (pg. 2491, l. column). Duitama also discloses a number of additional constraints that were applied during this application, e.g., primer length kept between 20 and 25 bp (pg. 2487, r. column – pg. 2488, l. column). In this way, PrimerHunter was found to perform quite well for a specific task under specific parameters. Duitama does laud the performance of PrimerHunter, but does not claim that PrimerHunter performs perfectly for every design scenario. Duitama also discusses specific limitations of certain prior art methods that PrimerHunter improves upon, but does not generally disparage or discredit the employment of other algorithms.
Duitama’s discussion of performance and specific improvements realized by PrimerHunter would not generally discourage one of ordinary skill in the art from experimenting with other search algorithms, particularly those also presented with favorable performance data. Klau assesses the performance of their algorithm against that of a prior greedy heuristic, and presents findings that their algorithm consistently generates significantly smaller sets than the prior algorithm (pg. 189, r. column; pg. 191, l. column and Table 2). Thus, the argument against motivation in view of PrimerHunter’s performance is found unpersuasive.
Applicant asserts that Alipanahi and Duitama respectively address protein binding prediction and PCR primer design, which are fundamentally different problems in molecular biology (pg. 17, paras. 1-4).
For proper application in an obviousness rejection, prior art must be analogous to the claimed invention, i.e., art from the same ‘field of endeavor’ and/or reasonably pertinent to the problem faced by the inventor (MPEP 2141.01(a) and 2141 § I), wherein the ‘field of endeavor’ is not limited to a particular focus within a given field (Netflix, Inc. v. DivX, LLC, 80 F.4th 1352, 1358-59 (Fed. Cir. 2023)). When more than one prior art reference is used as the basis of an obviousness rejection, it is not required that the references be analogous art to each other for proper combination (Sanofi-Aventis Deutschland GmbH v. Mylan Pharms. Inc., 66 F.4th 1373, 1380 (Fed. Cir. 2023)).
Obviousness in light of multiple prior art references does not depend on the equivalence of their conceptual focus, but rather what their combined teachings would have suggested to one of ordinary skill in the art. Thus, the argument against combination of Duitama and Klau in view of differences in their conceptual focus is found unpersuasive.
Applicant alleges that combining the teachings of Duitama and Alipanahi would require abandoning Duitama's core methodology, and further that the Office has not articulated a technically sound workflow for integrating the three fundamentally different approaches of Duitama’s thermodynamic PCR primer design, Alipanahi’s machine learning for protein binding prediction, and Klau’s integer linear programming for probe set optimization (pg. 17, para. 2 – pg. 18, para. 1).
Applicant’s arguments here have been interpreted as argument that modification of the method of Duitama to implement the teachings of Alipanahi and Klau would have rendered the method of Duitama inoperable for its intended purpose, and further that the Office has not presented a sufficient explanation of how one of ordinary skill in the art would have combined the teachings of each reference to arrive at the claimed invention. These arguments are found persuasive.
Applicant notes that neither Duitama nor Klau teaches a convolutional neural network as required by the amended claims (pg. 16, para. 1). This argument is found persuasive.
In light of the above persuasive arguments, the rejections of claims under 35 USC 103 have been withdrawn. The art of record includes the reference Kim et al. (Nature Biotechnology 36(3): 239-241; published 1/29/2018), listed on the IDS filed 11/24/2024. Kim is considered to remedy the outlined deficiencies, and is applied in the rejections herein.
Conclusion
At this point in prosecution, no claim is allowed.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Theodore C. Striegel whose telephone number is (571)272-1860. The examiner can normally be reached Mon-Fri 12pm-8pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Olivia M. Wise can be reached at (571)272-2249. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/T.C.S./Examiner, Art Unit 1685
/JESSE P FRUMKIN/Primary Examiner, Art Unit 1685 April 1, 2026