Prosecution Insights
Last updated: April 19, 2026
Application No. 17/332,904

Machine Learning-Based Analysis of Process Indicators to Predict Sample Reevaluation Success

Non-Final OA: §101, §103, §112
Filed
May 27, 2021
Examiner
BAILEY, STEVEN WILLIAM
Art Unit
1687
Tech Center
1600 — Biotechnology & Organic Chemistry
Assignee
Illumina, Inc.
OA Round
3 (Non-Final)
Grant Probability: 35% (At Risk)
OA Rounds: 3-4
To Grant: 4y 4m
With Interview: 56%

Examiner Intelligence

Career Allow Rate: 35% (23 granted / 66 resolved; -25.2% vs TC avg). Grants only 35% of cases.
Interview Lift: +20.8% (strong; resolved cases with an interview vs. without)
Avg Prosecution: 4y 4m (typical timeline)
Currently Pending: 53
Total Applications: 119 (career history, across all art units)
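The headline figures above are simple aggregates over the examiner's resolved cases. A minimal sketch of how they are derived (the record fields and sample data are hypothetical illustrations, not the tool's actual schema):

```python
# Sketch of the aggregates behind the examiner stats above.
# Record fields and sample values are hypothetical.

def allow_rate(cases):
    """Career allow rate: fraction of resolved cases that granted."""
    return sum(c["granted"] for c in cases) / len(cases)

def interview_lift(cases):
    """Allow-rate gain for cases with an examiner interview vs. without."""
    with_iv = [c for c in cases if c["interviewed"]]
    without_iv = [c for c in cases if not c["interviewed"]]
    return allow_rate(with_iv) - allow_rate(without_iv)

# Example: 10 resolved cases, 4 grants overall.
resolved = (
    [{"granted": True, "interviewed": True}] * 3
    + [{"granted": False, "interviewed": True}] * 2
    + [{"granted": True, "interviewed": False}] * 1
    + [{"granted": False, "interviewed": False}] * 4
)
print(f"{allow_rate(resolved):.0%}")       # 40%
print(f"{interview_lift(resolved):+.0%}")  # +40%
```

On the real data this yields the 35% allow rate (23/66) and the +20.8% lift shown above.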

Statute-Specific Performance

§101: 36.7% (-3.3% vs TC avg)
§103: 22.5% (-17.5% vs TC avg)
§102: 5.6% (-34.4% vs TC avg)
§112: 26.1% (-13.9% vs TC avg)
Tech Center average shown is an estimate • Based on career data from 66 resolved cases
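As a consistency check, the "vs TC avg" deltas can be inverted to recover the Tech Center baseline each comparison used (figures taken directly from the table above):

```python
# Recover the implied Tech Center baseline from each statute's delta:
#   baseline = examiner allow rate - delta vs TC avg
rates  = {"101": 36.7, "103": 22.5, "102": 5.6, "112": 26.1}
deltas = {"101": -3.3, "103": -17.5, "102": -34.4, "112": -13.9}

baselines = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(baselines)  # every statute implies the same 40.0% baseline
```

That all four deltas resolve to a single 40.0% figure is consistent with the footnote's description of the Tech Center average as one estimate rather than per-statute baselines.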

Office Action

§101, §103, §112
DETAILED ACTION

The Applicant's response, received 10 November 2025, has been fully considered. The following rejections and/or objections are either reiterated or newly applied. They constitute the complete set presently being applied to the instant application.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 10 November 2025 has been entered.

Status of the Claims

Claims 1-20 are pending. Claims 1-20 are rejected. Claims 1, 4, 6, 7, 8, 9, 10, 11, 12, 14, 16, 18, and 20 are objected to.

Priority

The effective filing date of the claimed invention is 29 May 2020.

Claim Interpretation

The Applicant's amendment received 26 June 2025 has been fully considered; however, after further consideration, the claim interpretations in the Office action mailed 26 March 2025 are maintained in view of the amendment.
Claims 1, 3, and 13 recite the limitations “…receiving from the one or more inconclusive sample evaluation runs one or more call rates indicating a percentage of sample locations with a quality score above a threshold and, a plurality of readouts of radiant signals from process probes during at least a first, pre-hybridization stage of the one or more inconclusive sample evaluation runs in which sample DNA is in a liquid form, a second stage of the one or more inconclusive sample evaluation runs in which sample DNA is hybridized to an image-generating chip, and a third stage of the one or more inconclusive sample evaluation runs in which probe DNA is extended and the extension is labeled with a fluorescent label, wherein each process probe is configured to produce the radiant signals indicative of one or more processing conditions during the sample evaluation run.” These limitations are interpreted to recite product-by-process limitations with the product being: the data of the one or more call rates indicating a percentage of sample locations with a quality score above a threshold, and the data of the plurality of readouts of radiant signals from process probes, and further interpreted to not require the active steps of performing the sample evaluation run to generate the one or more call rates, and the plurality of readouts of radiant signals from process probes. 
Claims 2, 4, 14, 16, 18, and 20 further recite the limitations “the readouts from a first, pre-hybridization stage of the sample evaluation run in which sample DNA is in a liquid form, the readouts of one or more probes selected from a group consisting of: a plurality of first readouts from process probes configured to respond to non-human bacterial DNA not present in human DNA and produce signals meeting a pre-determined threshold indicative of contamination of the sample by non-human bacterial DNA, and a plurality of second readouts from four complementary process probes configured to respond to extensions of target bases at non-polymorphic sites of common human sample sequences, and produce radiant signals meeting a pre-determined threshold indicative of good extensions for each of four complementary extension reagents, and combinations thereof.” These limitations are interpreted to recite product-by-process limitations with the product being: the data of the readouts from process probes, and further interpreted to not require the active steps of performing the sample evaluation run to generate the readouts of signals from process probes. 
Claims 2, 4, 14, 16, 18, and 20 further recite the limitations “the readouts from a second stage of the sample evaluation run in which sample DNA is hybridized to an image-generating chip, the readouts of one or more probes selected from a group consisting of: a plurality of third readouts from process probes configured to respond to a common sequence known as wild-type allele in a human sample and produce high radiant signals according to a pre-determined threshold indicative of good sample composition and binding conditions, a plurality of fourth readouts from process probes comprising mismatched complementary bases configured to respond to a common sequence in a human sample by binding weakly, resulting in separation of the common sequence from the probes with the mismatched complementary bases, and producing approximately background level radiant signals not exceeding a predetermined threshold, and combinations thereof, and a plurality of fifth readouts from process probes configured to respond to synthetic sequences mixed with reagent in high, medium, and low concentration levels and respectively produce high, medium, and low radiant signals according to respective pre-determined thresholds indicative of good reagent delivery.” These limitations are interpreted to recite product-by-process limitations with the product being: the data of the readouts from process probes, and further interpreted to not require the active steps of performing the sample evaluation run to generate the readouts of signals from process probes. 
Claims 2, 4, 14, 16, 18, and 20 further recite the limitations “the readouts from a third stage of the sample evaluation run in which probe DNA is extended and the extension is labeled with a fluorescent label, the readouts from one or more probes selected from a group consisting of: a plurality of sixth readouts from process probes comprising a hairpin complementary sequence configured to respond to chemicals mixed in reagent for performing single-base extensions, wherein the process probes are configured to produce radiant signals indicative of good conditions for single base extensions, a plurality of seventh readouts from process probes configured to block extensions on a 3' end of probe sequences, such that synthetic targets mixed in reagent and extensions of the synthetic targets are removed after extension and staining, and to produce low radiant signals indicative of good conditions for target removal, and a plurality of eighth readouts from process probes covered with chemicals configured to bind fluorescent labels mixed in reagent, wherein the process probes are configured to produce high radiant signals indicative of a good quality staining process, and combinations thereof.” These limitations are interpreted to recite product-by-process limitations with the product being: the data of the readouts from process probes, and further interpreted to not require the active steps of performing the sample evaluation run to generate the readouts of signals from process probes. 
Claims 15, 17, and 19 recite the limitations "a training set of sample evaluation runs, the training set including one or more inconclusive sample evaluation runs that produced inconclusive results for samples, followed by an additional sample evaluation run, receive training data for each of the one or more inconclusive sample evaluation runs comprising one or more call rates indicating a percentage of sample locations with a quality score above a threshold and a plurality of readouts of radiant signals from process probes during at least a first, pre-hybridization stage of the one or more inconclusive sample evaluation runs in which sample DNA is in a liquid form, a second stage of the one or more inconclusive sample evaluation runs in which sample DNA is hybridized to an image-generating chip, and a third stage of the one or more inconclusive sample evaluation runs in which probe DNA is extended and the extension is labeled with a fluorescent label, wherein each process probe is configured to produce the radiant signals indicative of one or more processing conditions during the sample evaluation runs; wherein training data for each of the additional sample evaluation runs comprises one or more ground truth indicators of a conclusive or inconclusive result." These limitations are interpreted to recite product-by-process limitations with the product being: the training data for each of the inconclusive sample evaluation runs, and the additional sample evaluation run, and further interpreted to not require the active steps of performing the sample evaluation runs to generate the training data.

Response to Arguments

The Applicant's arguments received 10 November 2025 have been fully considered, but are not persuasive.
The Applicant states on pages 21-22 of the Remarks that independent claims 1, 3, 13, 15, 17, and 19 have been amended to clarify that the referenced data is received from the one or more sample evaluation runs, e.g., claim 1 has been amended to recite that the claimed non-transitory computer readable storage medium is configured to "receive from the one or more inconclusive sample evaluation runs…" and claim 1 has been further amended to recite that the claimed non-transitory computer readable storage medium is then configured to "input received one or more call rates and the plurality of readouts to a classifier…." The Applicant further states that claims 3 and 13 have been similarly amended, and in addition, claim 13 has been amended to recite the steps "performing the additional sample evaluation run in the event the retry success score is indicative of a process error, or discarding the sample in the event the retry success score is indicative of a sample error," and therefore, based on the present amendments, the Applicant requests reconsideration of the interpretation of the pending claims as reciting "product-by-process" limitations.

These arguments are not persuasive because, first, as discussed in the response to arguments in the Office action mailed 08 August 2025, the claims are interpreted to recite product-by-process limitations that are embedded within the overall CRM, system, and method claims. Second, as noted in MPEP 2113, subsection I., even though product-by-process claims are limited by and defined by the process, determination of patentability is based on the product itself (i.e., the data, in the instant claims), and does not depend on its method of production.
In the instant claims, the CRM (claims 1 and 15), the system including one or more processors coupled to memory (claims 3 and 17), and the method of scoring (claims 13 and 19) are directed to using a classifier and/or training a classifier to analyze previously generated data, i.e., the call rates and readouts. The product-by-process limitations that are embedded within the CRM (claims 1 and 15), the system including one or more processors coupled to memory (claims 3 and 17), and the method of scoring (claims 13 and 19) define the product (i.e., the data used by the classifier and/or to train the classifier) in terms of the process used to create it, which meets the definition of a product that is defined by the process steps by which the product is made (MPEP 2113 I.). Third, the MPEP does not exclude "data" as a product. The various examples provided in MPEP 2113 I. are merely examples and are not limiting of the subject matter that can be considered as a product-by-process limitation. Fourth, it is noted that the "performing" and "discarding" steps of amended claim 13 are non-obligatory contingent steps that are subsequent to the analysis of the received product-by-process data having been input to a classifier.

Claim Objections

The objections to claims 2, 3, 4, and 19 in the Office action mailed 08 August 2025 are withdrawn in view of the amendment received 10 November 2025. The amendment received 10 November 2025 has been fully considered; however, after further consideration, new grounds of objection are raised in view of the amendment. Claims 1, 4, 6, 7, 8, 9, 10, 11, 12, 14, 16, 18, and 20 are objected to because of the following informalities: The word "pre-determined" should not be hyphenated. Furthermore, claims 2 and 14 recite an unhyphenated form of the word, i.e., "predetermined," creating an inconsistency in usage.
Claim 20 is further objected to because of the following informalities: The word "thresholds" in line 43 should be replaced with the word "threshold." Appropriate corrections are required.

Claim Rejections - 35 USC § 112

The rejection of claims 2, 4, 6, and 14-20 under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, in the Office action mailed 08 August 2025 is withdrawn in view of the amendment received 10 November 2025. The Applicant's amendment received 10 November 2025 has been fully considered; however, after further consideration, new grounds of rejection are raised under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, in view of the amendment.

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Independent claims 1, 3, and 13 are indefinite for reciting "receive/receiving from the one or more inconclusive sample evaluation runs" because it is not clear whether the claim requires that the data be received from the assay equipment, i.e., whether the "receiving from" step is actually the output step of the genotyping machine.
This indefiniteness may be remedied by amending the claim limitation to recite "receiving data of one or more inconclusive sample evaluation runs" or "receiving data for each of the one or more inconclusive sample evaluation runs." Claims 2, 4-12, and 14 are indefinite for depending from claim 1, 3, or 13 and for failing to remedy the indefiniteness of the respective independent claims from which they depend.

Claim Rejections - 35 USC § 101

The Applicant's amendment received 10 November 2025 has been fully considered; however, after further consideration, the rejection of claims 1-20 under 35 U.S.C. 101 in the Office action mailed 08 August 2025 is maintained with modification in view of the amendment.

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite: (a) mental processes, i.e., concepts performed in the human mind (e.g., observation, evaluation, judgment, opinion); and (b) mathematical concepts (e.g., mathematical relationships, formulas or equations, mathematical calculations).

Claim Interpretations

Independent claims 1, 3, and 13 are interpreted as reciting product-by-process limitations with the product being the data from one or more inconclusive sample evaluation runs (i.e., the one or more call rates and the plurality of readouts of radiant signals) and further interpreted to not recite active steps of performing the process of generating the data from the one or more inconclusive sample evaluation runs (e.g., performing genotyping assays).
Independent claims 15, 17, and 19 are interpreted as reciting product-by-process limitations with the product being the data (i.e., the training data) for assembling a training set of sample evaluation runs comprising one or more inconclusive sample evaluation runs that produced inconclusive results for samples, followed by an additional sample evaluation run, and receiving training data for each of the one or more inconclusive sample evaluation runs (i.e., the one or more call rates and the plurality of readouts of radiant signals), and further interpreted to not recite active steps of performing the process of generating the training data from the one or more inconclusive sample evaluation runs (e.g., performing genotyping assays). See the foregoing Claim Interpretations section for further in-depth analysis of the product-by-process interpretations.

Claims 1, 3, and 13 are further interpreted to recite a product-by-process limitation with the product being a trained classifier, and further interpreted to not recite active steps of performing the process of producing the product (e.g., performing steps of training a classifier).

The subject matter eligibility evaluation is performed in accordance with MPEP 2106.

Eligibility Step 1:

Step 1 of the eligibility analysis asks: Is the claim to a process, machine, manufacture or composition of matter? Claims 1 and 2 are directed to a non-transitory computer readable storage medium and a processor (i.e., a machine or manufacture); claims 3-12 are directed to a system including one or more processors coupled to memory (i.e., a machine or manufacture); claims 13 and 14 are directed to a method (i.e., a process); claims 15 and 16 are directed to a non-transitory computer readable storage medium and a processor (i.e., a machine or manufacture); claims 17 and 18 are directed to a system including one or more processors coupled to memory (i.e., a machine or manufacture); and claims 19 and 20 are directed to a method (i.e., a process).
Therefore, these claims are encompassed by the categories of statutory subject matter, and thus, satisfy the subject matter eligibility requirements under step 1. [Step 1: YES]

Eligibility Step 2A:

First it is determined in Prong One whether a claim recites a judicial exception, and if so, then it is determined in Prong Two whether the recited judicial exception is integrated into a practical application of that exception.

Eligibility Step 2A: Prong One:

In determining whether a claim is directed to a judicial exception, examination is performed that analyzes whether the claim recites a judicial exception, i.e., whether a law of nature, natural phenomenon, or abstract idea is set forth or described in the claim.

Independent claims 1, 3, and 13 recite the following steps which fall within the mental processes and/or mathematical concepts groupings of abstract ideas: data indicating a percentage of sample locations with a quality score above a threshold (i.e., mental processes); data of radiant signals from process probes during at least a first, pre-hybridization stage of the one or more inconclusive sample evaluation runs in which sample DNA is in a liquid form, a second stage of the one or more inconclusive sample evaluation runs in which sample DNA is hybridized to an image-generating chip, and a third stage of the one or more inconclusive sample evaluation runs in which probe DNA is extended and the extension is labeled with a fluorescent label, wherein each process probe is configured to produce the radiant signals indicative of one or more processing conditions during the sample evaluation run (i.e., mental processes); a classifier trained to predict retry success confidence scores based, at least in part, on corresponding call rate and readout data from one or more different inconclusive sample evaluation runs, wherein the retry success confidence scores indicate whether an additional sample evaluation run will produce a conclusive result for the sample
(i.e., mental processes and mathematical concepts) (for example, use of known statistical analysis such as 'support vector machines (SVM), deep learning-based approaches, gradient boosted trees, logistic regression, K-nearest neighbor, decision trees, Naive Bayes, perceptron, and convolutional neural networks' in [0034]); generate/generating, from the received one or more call rates and the plurality of readouts input into the trained classifier, at least a retry success confidence score indicative of whether the additional sample evaluation run of the sample will produce the conclusive result (i.e., mental processes and mathematical concepts); and evaluate when determining whether to conduct the additional sample evaluation run of the sample (i.e., mental processes).

Independent claims 15, 17, and 19 recite the following steps which fall within the mental processes and/or mathematical concepts groupings of abstract ideas: assemble/assembling a training set of sample evaluation runs, the training set including one or more inconclusive sample evaluation runs that produced inconclusive results for samples, followed by an additional sample evaluation run (i.e., mental processes and mathematical concepts); data indicating a percentage of sample locations with a quality score above a threshold (i.e., mental processes); data of radiant signals from process probes during at least a first, pre-hybridization stage of the one or more inconclusive sample evaluation runs in which sample DNA is in a liquid form, a second stage of the one or more inconclusive sample evaluation runs in which sample DNA is hybridized to an image-generating chip, and a third stage of the one or more inconclusive sample evaluation runs in which probe DNA is extended and the extension is labeled with a fluorescent label, wherein each process probe is configured to produce the radiant signals indicative of one or more processing conditions during the sample evaluation runs (i.e., mental processes);
training data for each of the additional sample evaluation runs comprises one or more ground truth indicators of a conclusive or inconclusive result (i.e., mental processes); train/training a classifier, using the training data, to score whether an additional sample evaluation run for a particular sample is likely to produce the conclusive result (i.e., mental processes and mathematical concepts); and determining whether to reevaluate production samples after one or more inconclusive sample evaluation runs (i.e., mental processes).

Dependent claims 2, 4, 14, 16, 18, and 20 further recite the following steps which fall within the mental processes and/or mathematical concepts groupings of abstract ideas, as noted below. These claims are interpreted as reciting readouts of the genotyping assay signals that are the same product, i.e., data, of product-by-process limitations, as discussed in the foregoing claim interpretation sections. Therefore, dependent claims 2, 4, 14, 16, 18, and 20 are further describing the data that is received and input to a classifier in the independent claims, as noted below.
Dependent claims 2, 4, 14, 16, 18, and 20 further recite: wherein the readouts are from types of process probes that include (i.e., mental processes of considering data): the readouts from a first, pre-hybridization stage of the sample evaluation run in which sample DNA is in a liquid form, the readouts of one or more probes selected from a group consisting of (i.e., mental processes of considering data): a plurality of first readouts from process probes configured to respond to non-human bacterial DNA not present in human DNA and produce signals indicative of contamination of the sample by non-human bacterial DNA (i.e., mental processes of considering data), and a plurality of second readouts from four complementary process probes configured to respond to extensions of target bases at non-polymorphic sites of common human sample sequences, and produce radiant signals indicative of good extensions for each of four complementary extension reagents, and combinations thereof (i.e., mental processes of considering data); the readouts from a second stage of the sample evaluation run in which sample DNA is hybridized to an image-generating chip, the readouts of one or more probes selected from a group consisting of (i.e., mental processes of considering data): a plurality of third readouts from process probes configured to respond to a common sequence known as wild-type allele in a human sample and produce high radiant signals indicative of good sample composition and binding conditions (i.e., mental processes of considering data), a plurality of fourth readouts from process probes comprising mismatched complementary bases configured to respond to a common sequence in a human sample by binding weakly, resulting in separation of the common sequence from the probes with the mismatched complementary bases, and producing approximately background level radiant signals, and combinations thereof (i.e., mental processes of considering data) and a plurality of fifth readouts from 
process probes configured to respond to synthetic sequences mixed with reagent in high, medium, and low concentration levels and respectively produce high, medium, and low radiant signals indicative of good reagent delivery, and combinations thereof (i.e., mental processes of considering data); and the readouts from a third stage of the sample evaluation run in which probe DNA is extended and the extension is labeled with a fluorescent label, the readouts from one or more probes selected from a group consisting of (i.e., mental processes of considering data): a plurality of sixth readouts from process probes comprising a hairpin complementary sequence configured to respond to chemicals mixed in reagent for performing single-base extensions, wherein the process probes are configured to produce radiant signals indicative of good conditions for single base extensions (i.e., mental processes of considering data), a plurality of seventh readouts from process probes configured to block extensions on a 3' end of probe sequences, such that synthetic targets mixed in reagent and extensions of the synthetic targets are removed after extension and staining, and to produce low radiant signals indicative of good conditions for target removal (i.e., mental processes of considering data), and a plurality of eighth readouts from process probes covered with chemicals configured to bind fluorescent labels mixed in reagent, wherein the process probes are configured to produce high radiant signals indicative of a good quality staining process, and combinations thereof (i.e., mental processes of considering data).

Dependent claims 5-12 further recite the following steps which fall within the mental processes and/or mathematical concepts groupings of abstract ideas, as noted below.
These claims are interpreted as reciting readouts of the genotyping assay signals that are the same product, i.e., data, of product-by-process limitations, as discussed in the foregoing claim interpretation sections. Therefore, dependent claims 5-12 are further describing the data that is received and input to a classifier in the independent claims, as noted below.

Dependent claim 5 further recites: wherein one of the process probes from the first stage is configured to respond to non-human contamination not present in human samples and produce radiant signals indicative of contamination of the sample by non-human sequences (i.e., mental processes of considering data).

Dependent claim 6 further recites: wherein the process probes from the first stage include four complementary process probes configured to respond to extensions of target bases at non-polymorphic sites of common human sample sequences and produce radiant signals indicative of good extensions for each of four complementary extension reagents (i.e., mental processes of considering data).

Dependent claim 7 further recites: wherein one of the process probes from the second stage is configured to respond to a common sequence known as wild-type allele in a human sample and produce high radiant signals indicative of good sample composition and binding conditions (i.e., mental processes of considering data).

Dependent claim 8 further recites: wherein one of the process probes from the second stage includes mismatched complementary bases configured to respond to a common sequence in a human sample by binding weakly, resulting in separation of the common sequence from the probes with the mismatched complementary bases, and producing approximately background level radiant signals (i.e., mental processes of considering data).
Dependent claim 9 further recites: wherein one of the process probes from the second stage is configured to respond to synthetic sequences mixed with reagent in high, medium, and low concentration levels and respectively produce high, medium, and low radiant signals indicative of good reagent delivery (i.e., mental processes of considering data).

Dependent claim 10 further recites: wherein one of the process probes from the third stage comprises a hairpin complementary sequence configured to respond to chemicals mixed in reagent for performing single-base extensions, wherein the one of the process probes is configured to produce radiant signals indicative of good conditions for single base extensions (i.e., mental processes of considering data).

Dependent claim 11 further recites: wherein one of the process probes from the third stage is configured to block extensions on a 3' end of probe sequences, such that synthetic targets mixed in reagent and extensions of the synthetic targets are removed after extension and staining, and to produce low radiant signals indicative of good conditions for target removal (i.e., mental processes of considering data).

Dependent claim 12 further recites: wherein one of the process probes from the third stage is covered with chemicals configured to bind fluorescent labels mixed in reagent, wherein the one of the process probes is configured to produce high radiant signals indicative of a good quality staining process.

The abstract ideas recited in the claims are evaluated under the broadest reasonable interpretation (BRI) of the claim limitations when read in light of and consistent with the specification.
As noted in the foregoing section, the claims are determined to contain limitations that can practically be performed in the human mind with the aid of a pen and paper, and therefore recite judicial exceptions from the mental process grouping of abstract ideas (e.g., evaluate data when determining whether to conduct the additional sample evaluation run of the sample). Furthermore, and as noted in the claim interpretations section above, the claims are interpreted to recite product-by-process limitations with the product being the data of the one or more call rates, and the data of the plurality of readouts of radiant signals generated from process probes, and thus the limitations reciting details on the stages of the evaluation run and the types of process probes used are limitations merely further defining the product (i.e., data). Additionally, the recited limitations that are identified as judicial exceptions from the mathematical concepts grouping of abstract ideas (e.g., training a classifier at paras. [00101] – [00117] in the Specification; and using a classifier to generate a confidence score at para. [0090]) are abstract ideas irrespective of whether or not the limitations are practical to perform in the human mind. Therefore, claims 1-20 recite an abstract idea. [Step 2A Prong One: YES]

Eligibility Step 2A: Prong Two:

In determining whether a claim is directed to a judicial exception, further examination is performed that analyzes whether the claim recites additional elements that, when examined as a whole, integrate the judicial exception(s) into a practical application (MPEP 2106.04(d)). A claim that integrates a judicial exception into a practical application will apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception. The claimed additional elements are analyzed to determine if the abstract idea is integrated into a practical application (MPEP 2106.04(d)(I)).
If the claim contains no additional elements beyond the abstract idea, the claim fails to integrate the abstract idea into a practical application (MPEP 2106.04(d)(III)). The judicial exceptions identified in Eligibility Step 2A Prong One are not integrated into a practical application because of the reasons noted below. Dependent claims 4-12, 14, 18, and 20 do not recite any elements in addition to the judicial exception, and thus are part of the judicial exception. The additional elements in independent claims 1, 3, 13, 15, 17, and 19 include:

- a non-transitory computer readable storage medium and a processor (claims 1 and 15);
- a system including one or more processors coupled to memory (claims 3 and 17);
- receive, from the one or more inconclusive sample evaluation runs (claims 1, 3, and 13): one or more call rates; a plurality of readouts;
- input the received one or more call rates and the plurality of readouts (claims 1, 3, and 13);
- display at least the retry success confidence score to an operator (claims 1 and 3);
- performing the additional sample evaluation run in the event the retry success score is indicative of a process error, or discarding the sample in the event the retry success score is indicative of a sample error (claim 13);
- receive training data for each of the one or more inconclusive sample evaluation runs (claims 15, 17, and 19), comprising: one or more call rates; a plurality of readouts; and
- save parameters of the trained classifier (claims 15, 17, and 19).

The additional elements in dependent claims 2 and 16 include: a non-transitory computer readable storage medium (claims 2 and 16).
The additional elements of a non-transitory computer readable storage medium and a processor (claims 1, 2, 15, and 16); and a system including one or more processors coupled to memory (claims 3 and 17); invoke a computer and/or computer-related components merely as tools for use in the claimed process, and therefore are not an improvement to computer functionality itself, or an improvement to any other technology or technical field, and thus, do not integrate the judicial exceptions into a practical application (MPEP 2106.04(d)(I)). The additional elements of receiving data (claims 1, 3, 13, 15, 17, and 19); inputting data (claims 1, 3, and 13); displaying data (claims 1 and 3); and saving data (claims 15, 17, and 19); are merely pre-solution and/or post-solution activities used in the claimed process – nominal additions to the claims that do not meaningfully limit the claims, and therefore do not add more than insignificant extra-solution activity to the judicial exceptions (MPEP 2106.05(g)). The additional element of performing the additional sample evaluation run in the event the retry success score is indicative of a process error, or discarding the sample in the event the retry success score is indicative of a sample error (claim 13), does not positively recite an active step that meaningfully limits the claim because it is a contingent limitation that comprises alternative embodiments of the claim, neither of which meaningfully limits the claim, i.e., discarding the sample is an insignificant extra-solution activity, and performing the additional sample evaluation run is an insignificant extra-solution step of gathering data (MPEP 2106.05(g)).
Furthermore, the additional element of performing the additional sample evaluation run in the event the retry success score is indicative of a process error (claim 13) does not amount to more than mere instructions to implement an abstract idea, because the claim fails to recite details of how the solution to the problem is accomplished, i.e., the claim recites performing the additional sample evaluation run with no restriction on how the result is accomplished (e.g., the limitation merely recites the idea of a solution or outcome) and no description of the mechanism for accomplishing the result (e.g., the limitation “performing the additional sample evaluation run” amounts to a type of recitation that is equivalent to the words “apply it”) (MPEP 2106.05(f)). Thus, the additionally recited elements merely invoke a computer as a tool, and as such, when all limitations in claims 1-20 have been considered as a whole, the claims are deemed to not recite any additional elements that would integrate a judicial exception into a practical application, and therefore claims 1-20 are directed to an abstract idea (MPEP 2106.04(d)). [Step 2A Prong Two: NO]

Eligibility Step 2B: Because the claim recites an abstract idea, and does not integrate that abstract idea into a practical application, the claim is probed for a specific inventive concept. The judicial exception alone cannot provide that inventive concept or practical application (MPEP 2106.05). Identifying whether the additional elements beyond the abstract idea amount to such an inventive concept requires considering the additional elements individually and in combination to determine if they amount to significantly more than the judicial exception (MPEP 2106.05A i-vi). The claims do not include any additional elements that are sufficient to amount to significantly more than the judicial exception(s) because of the reasons noted below.
Dependent claims 4-12, 14, 18, and 20 do not recite any elements in addition to the judicial exception. The additional elements recited in independent claims 1, 3, 13, 15, 17, and 19 and dependent claims 2 and 16 are identified above, and carried over from Step 2A: Prong Two along with their conclusions for analysis at Step 2B. Any additional element or combination of elements that was considered to be insignificant extra-solution activity at Step 2A: Prong Two was re-evaluated at Step 2B, because if such re-evaluation finds that the element is unconventional or otherwise more than what is well-understood, routine, conventional activity in the field, this finding may indicate that the additional element is no longer considered to be insignificant; and all additional elements and combination of elements were evaluated to determine whether any additional elements or combination of elements are other than what is well-understood, routine, conventional activity in the field, or simply append well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception, per MPEP 2106.05(d). The additional elements of a non-transitory computer readable storage medium and a processor (claims 1, 2, 15, and 16); and a system including one or more processors coupled to memory (claims 3 and 17); are conventional computer components and/or functions (see MPEP at 2106.05(b) and 2106.05(d)(II) regarding conventionality of computer components and computer processes). 
The additional elements of receiving data (claims 1, 3, 13, 15, 17, and 19); inputting data (claims 1, 3, and 13); displaying data (claims 1 and 3); and saving data (claims 15, 17, and 19); are merely pre-solution and/or post-solution activities used in the claimed process – nominal additions to the claims that do not add more than insignificant extra-solution activity to the judicial exceptions, and therefore do not amount to significantly more than the judicial exceptions (i.e., do not amount to an inventive concept) (MPEP 2106.05(g)). The additional elements of performing the additional sample evaluation run in the event the retry success score is indicative of a process error, or discarding the sample in the event the retry success score is indicative of a sample error (claim 13); are merely post-solution activities used in the claimed process – nominal additions to the claims that do not add more than insignificant extra-solution activity to the judicial exceptions, and therefore do not amount to significantly more than the judicial exceptions (i.e., do not amount to an inventive concept) (MPEP 2106.05(g)) and/or do not amount to more than mere instructions to implement an abstract idea (MPEP 2106.05(f)). Therefore, when taken alone, all additional elements in claims 1-20 do not amount to significantly more than the above-identified judicial exception(s). Even when evaluated as a combination, the additional elements fail to transform the exception(s) into a patent-eligible application of that exception. Thus, claims 1-20 are deemed to not contribute an inventive concept, i.e., amount to significantly more than the judicial exception(s) (MPEP 2106.05(II)). [Step 2B: NO]

Response to Arguments

The Applicant’s arguments/remarks received 10 November 2025 have been fully considered, but are not persuasive. The Applicant summarizes the MPEP guidance on eligibility analysis under 35 U.S.C. 101 on pages 23-24 of the Remarks, and states on page 24 (para. 2) that since the time the Final Office action was issued, the USPTO promulgated new guidance on evaluating subject matter of claims under § 101 – Charles Kim, Memorandum: Reminders on evaluating subject matter eligibility of claims under 35 U.S.C. § 101, p. 2, USPTO (August 4, 2025). The Applicant further states that the guidance should be applied consistently across art units, and, in particular, to the present case. These arguments/statements are not persuasive, because first, the August 4, 2025 Memorandum states (page 1, para. 1) that the memorandum is not intended to announce any new USPTO practice or procedure and is meant to be consistent with existing USPTO guidance, and second, the USPTO guidance has been followed during examination of the instant claims in this Office action and in Office actions of record. The Applicant summarizes (Remarks, page 24, para. 3) Step 1 of the eligibility analysis in the Office action mailed 08 August 2025, and summarizes the MPEP and August 4, 2025 Memorandum guidance at Step 2A of the eligibility analysis under 35 U.S.C. 101 (Remarks, page 25, paras. 1-2) and states (para. 2) that neither prong of Step 2A is met. The Applicant further summarizes (para. 3 and page 26, para. 1) the August 4, 2025 Memorandum with regard to evaluating mental processes, and states (para. 2) that the subject claims are limited to a practical application that includes meaningful limitations that cannot practically be performed in the mind, and further points to the Specification (Specification, paras. [0028], [0029] & [0041]) and further states that it follows that the claims do not recite a judicial exception, and thus, under the Step 2A prong 1 analysis, the claims are eligible under § 101.
These arguments are not persuasive, because first, the paragraphs pointed to in the Specification describe elements of the genotyping process that are not recited in the claims (i.e., the production process used to generate the data that is used in the claimed process), and second, using a classifier is both a mental process and a mathematical concept, as noted in the above rejection and in the Office actions of record, because the amount of data and/or the amount of time to perform the process steps, in and of themselves, is not a limitation which takes a process out of the realm of the human mind. It is the process performed on that data which is the mental step, and mental steps identified in the claims do not have to be the fastest, most efficient, or require specialized computing elements. Thus, although the amount of data may be considered to be significantly large and take considerable time and effort to process manually, the use of a computer to perform the claimed method at a rate and accuracy that can far outstrip the mental performance of a skilled artisan does not change the nature of the activity being performed (i.e., an abstract idea), and therefore does not materially alter the patent eligibility of the claimed subject matter. The Applicant states on page 27 (bottom) and page 28 (top) that the Final Office action contends that the claims are interpreted to recite product-by-process limitations, and further states that the Applicant has amended the claims to clarify that the limitations identified by the Final Office action are indeed not product-by-process limitations, that therefore these limitations do not merely express abstract ideas, and that the Examiner must give due consideration to the above-cited disclosure establishing that the claimed subject matter is not practical to perform in the human mind.
These arguments are not persuasive, because first, and as explained in the response to arguments in the Claim Interpretations section above, the claims do not actually recite active steps of using genotyping equipment to generate the data used in the claimed process, and therefore the data that is received at the receiving step of the independent claims is interpreted as a product (i.e., the data) that has been previously generated by a process that is not claimed by the instant application. Therefore, limitations describing and/or defining the data (e.g., dependent claims 2-12) are merely limitations that are part of the abstract idea itself (i.e., information). The Applicant states on page 28 (para. 2) that under Step 2A prong 2 analysis, the claims would still be patent eligible, at least because they reflect an improvement to the functioning of a computer or another technology or technical area. The Applicant further summarizes (para. 3) the August 4, 2025, Memorandum, and further states (para. 4) that the analysis provided in the Final Office action fails to consider the claims as a whole, as suggested in the August 4, 2025, Memorandum, because the Final Office action points to several limitations and fails to evaluate these limitations as a whole or in combination with the alleged abstract idea, and further states that each of the identified limitations are rejected with a conclusory statement as to why they do not amount to significantly more than the abstract idea, and without proper analysis of the detailed subject matter of these limitations, or due consideration for how these limitations may be combined with each other and/or alleged abstract idea to provide significantly more than the abstract idea. The Applicant further summarizes (Remarks, page 29, para. 2) the August 4, 2025, Memorandum, with regard to Step 2A, prong 2 analysis, and further states (para. 
3) that the claims recite subject matter that reflects improvements to technology, which are specifically called out in the Specification (e.g., paras. [0028], [0029], & [0041]), and that, for example, the claimed technology reflects improvements in both processing time and cost savings versus conventional technologies, and further states (para. 4) that even if it were assumed that the claims were directed to a judicial exception, the claims would still be patent eligible under § 101 because they reflect an improvement to the functioning of a computer or another technology or technical field. These arguments are not persuasive, because first, and as discussed above, the August 4, 2025 Memorandum states (page 1, para. 1) that the memorandum is not intended to announce any new USPTO practice or procedure and is meant to be consistent with existing USPTO guidance, and second, the USPTO guidance has been followed during examination of the instant claims in this Office action and in Office actions of record, particularly with regard to the analysis of any recited additional elements at eligibility Steps 2A Prong Two and 2B, as demonstrated in the above rejection and also in the Office actions of record. Third, the paragraphs pointed to in the Specification (i.e., paras.
[0028], [0029], & [0041]) describe elements of the genotyping process that are not recited in the claims (i.e., the production process used to generate the data that is used in the claimed process), and fourth, regarding the Applicant’s assertion of an improvement to the functioning of a computer or another technology or technical field, the above rejection is based on the outcome of the eligibility analysis at Step 2A Prong Two that determined that when all limitations in claims 1-20 had been considered as a whole (i.e., the analysis takes into consideration all the claim limitations and how those limitations interact and impact each other when evaluating whether the exception is integrated into a practical application), they were deemed to not recite any additional elements that would integrate a judicial exception into a practical application (MPEP 2106.04(d)), and the above rejection is further based on the outcome of the eligibility analysis at Step 2B that determined that when all additional elements in claims 1-20 had been evaluated individually and in combination, they were deemed to not contribute an inventive concept, i.e., amount to significantly more than the judicial exceptions (MPEP 2106.05(II)). Thus, the instant claimed advantages (e.g., savings in both processing time and cost) are a purported improvement to the abstract idea (data analysis), and not an improvement to computer functionality itself, or an improvement to another technology or technical field.

Claim Rejections - 35 USC § 103

The Applicant’s amendment received 10 November 2025 has been fully considered, however after further consideration, new grounds of rejection are raised under 35 U.S.C. 103 in view of the amendment. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Schofl et al. (“2.7 million samples genotyped for HLA by next generation sequencing: lessons learned.” BMC Genomics, 2017, Vol. 18:61, pp. 1-16, newly cited, hereinafter “Schofl 2017”) in view of Schofl et al. (“Prediction of spurious HLA class II typing results using probabilistic classification.” Human Immunology, 2016, Vol. 77, pp.
264-272, newly cited, hereinafter “Schofl 2016”) in view of Illumina (Evaluation of Infinium Genotyping Assay Controls Training Guide, 2012, pp. 1-31, as cited in the Office action mailed 26 March 2025, and as cited in the Information Disclosure Statement received 02 November 2021).

Claim Interpretations

Independent claims 1, 3, and 13 recite the limitation “predict retry success confidence scores… wherein the retry success confidence scores indicate whether an additional sample evaluation run will produce a conclusive result for the sample.” This limitation is interpreted as a decision between acquiring a new sample (i.e., the retry success confidence score indicates that an additional sample evaluation run will not produce a conclusive result for the sample) or else performing repeat testing of the same sample (i.e., the retry success confidence score indicates that an additional sample evaluation run will produce a conclusive result for the sample) (for example, see para. [0084] in the Specification). Independent claims 15, 17, and 19 recite “…to score whether an additional sample evaluation run for a particular sample is likely to produce the conclusive result.” This limitation is interpreted in accordance with the foregoing claim interpretation for claims 1, 3, and 13.

Schofl 2017 shows genotyping over 2 million samples (page 2, col. 2, paras. 2-4) and that on occasion, the initial genotyping attempt fails to meet quality standards and the affected sample has to be reanalyzed, and that this can affect either individual loci or the entire sample, with a full sample repeat typing including a second DNA extraction being triggered if the initial DNA concentration is less than 2 ng/μl or more than 4 HLA loci fail to produce credible results (page 9, col. 1, bottom, through col. 2, para. 1).
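The decision logic in this claim interpretation reduces to a threshold check on the confidence score. The sketch below is only an illustrative reading of that interpretation; the function name and the 0.5 threshold are our assumptions, not taken from the claims or the Specification.

```python
def retry_decision(retry_success_confidence: float, threshold: float = 0.5) -> str:
    """Illustrative reading of the claim interpretation above.

    A score at or above the (hypothetical) threshold is read as indicating a
    process error, so the same sample is re-run; a score below it is read as
    indicating a sample error, so a new sample must be acquired.
    """
    if retry_success_confidence >= threshold:
        return "repeat testing of the same sample"
    return "discard sample and acquire a new sample"
```

A real system would calibrate the threshold against the classifier's training outcomes rather than fix it at 0.5.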
Schofl 2017 further shows that several common factors may trigger repeat typing of an individual locus, such as technical concerns including too low total read numbers, too few reads on target, or heavy imbalances across alleles, and that other causes for concern are the presence of more alleles than expected, which might be a result of one or more of many factors such as PCR chimerism, sample contamination, co-amplification of closely related loci, misamplification or sequencing errors, and further shows that an analysis must also be repeated if the genotyping software fails to construct a genotype, which may occur for benign reasons (e.g., a novel allelic sequence is encountered) or be a result of stochastic allelic dropouts during PCR, and more insidiously, allelic dropouts may remain “silent”, i.e., be technically valid but erroneous homozygous or even heterozygous genotypes may arise (page 9, col. 2, para. 2). Schofl 2017 further shows that machine learning techniques are employed to help identify “implausible” genotypes that warrant repeated analyses to check their validity (Ibid.); and repeat genotyping runs are automatically suggested by the genotyping software based on conservative thresholds for quality metrics (page 9, col. 2, para. 3). 
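The full-sample repeat trigger that Schofl 2017 is cited for (initial DNA concentration below 2 ng/μl, or more than 4 HLA loci failing to produce credible results) amounts to a one-line rule. The sketch below restates it as code; the function name and argument names are our own.

```python
def full_repeat_required(dna_conc_ng_per_ul: float, n_failed_loci: int) -> bool:
    """Full-sample repeat typing (with a second DNA extraction) is triggered,
    per the passage cited above, when the initial DNA concentration is below
    2 ng/ul or more than 4 HLA loci fail to produce credible results."""
    return dna_conc_ng_per_ul < 2.0 or n_failed_loci > 4
```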
Schofl 2017 does not show using a trained classifier to predict typing results (claims 1, 3, and 13); training a classifier to predict typing results (claims 15, 17, and 19); a plurality of readouts of radiant signals from process probes during at least a first, pre-hybridization stage of the sample evaluation run in which sample DNA is in a liquid form, a second stage of the sample evaluation run in which sample DNA is hybridized to an image-generating chip, and a third stage of the sample evaluation run in which probe DNA is extended and the extension is labeled with a fluorescent label (claims 1, 3, 13, 15, 17, and 19); or readouts generated from process probes as recited in dependent claims 2, 4, 14, 16, 18, and 20, and dependent claims 5-12. Schofl 2016 shows prediction of spurious HLA class II typing results using probabilistic classification (Title; and Abstract); training and using probabilistic classifiers (binary logistic regression and random forest models) to detect spurious typing results (Abstract); and further shows that it is expedient to reduce the number of necessary replicate genotyping as far as possible by specifically identifying likely unreliable genotypes in a largely automatic fashion (page 265, col. 1, para. 3). Schofl 2017 in view of Schofl 2016 does not show a plurality of readouts of radiant signals from process probes during at least a first, pre-hybridization stage of the sample evaluation run in which sample DNA is in a liquid form, a second stage of the sample evaluation run in which sample DNA is hybridized to an image-generating chip, and a third stage of the sample evaluation run in which probe DNA is extended and the extension is labeled with a fluorescent label (claims 1, 3, 13, 15, 17, and 19); or readouts generated from process probes as recited in dependent claims 2, 4, 14, 16, 18, and 20, and dependent claims 5-12. 
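As a rough illustration of the binary logistic regression approach attributed to Schofl 2016, the sketch below scores a genotype from three quality metrics of the kind Schofl 2017 mentions (read counts, on-target fraction, allele balance). The metric names, weights, and bias are invented for illustration and are not taken from either reference.

```python
import math

# Hypothetical fitted weights for three quality metrics; the numbers are
# invented for illustration, not taken from Schofl 2016.
WEIGHTS = {"total_reads": 0.0008, "on_target_frac": 4.0, "allele_balance": 3.0}
BIAS = -8.0

def reliability_score(total_reads: float, on_target_frac: float,
                      allele_balance: float) -> float:
    """Binary-logistic-regression score in (0, 1): the modeled probability
    that a genotype is reliable. Low scores would flag the genotype as
    likely spurious and queue it for repeat analysis."""
    z = (BIAS
         + WEIGHTS["total_reads"] * total_reads
         + WEIGHTS["on_target_frac"] * on_target_frac
         + WEIGHTS["allele_balance"] * allele_balance)
    return 1.0 / (1.0 + math.exp(-z))
```

Schofl 2016 also reports random forest models; the logistic form is shown here only because it fits in a few lines.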
Illumina shows a training guide for the evaluation of Infinium genotyping assay controls (Title); sorting samples by call rate (Figure 2); and evaluating the assay controls based on relative intensities of readout signals (Figure 5). Regarding claims 1, 3, 13, 15, 17, and 19, the plurality of readouts of radiant signals from process probes are interpreted to be the same product, i.e., data, of product-by-process limitations that are interpreted to not recite active steps of performing genotyping assays using specific process probes, as noted in the claim interpretation section above. Illumina further shows an overview of the Infinium genotyping assay protocol and steps where built-in controls come into play (Figure 1, which corresponds to FIG. 8 in the instant drawings and para. [0077] in the Specification), e.g., sample independent controls (shown in blue) and sample dependent controls (shown in red), and further shows that sample-independent controls evaluate BeadChip and reagent performance, efficiency of hybridization, and the staining process, and include staining controls, extension controls, target removal controls, and hybridization controls (page 6, paras. 1-2); whereas the sample dependent controls are used to evaluate sample quality and performance, and include stringency controls, non-specific binding controls, and non-polymorphic controls, wherein the sample-dependent stringency and non-polymorphic control probes specifically target human DNA (page 6, para. 3). Regarding dependent claims 2, 4, 14, 16, 18, and 20, and dependent claims 5-12, the readouts of the genotyping assay signals are interpreted to be the same product, i.e., data, of product-by-process limitations that are interpreted to not recite active steps of performing genotyping assays using specific process probes, as noted in the claim interpretation section above. 
Illumina further shows using sample independent process control probes to evaluate chip performance that are informative of efficiency of hybridization and staining (Appendix A, pages 19-22) and with regard to the sample independent controls as grouped according to the different process stages (Figure 1, which corresponds to FIG. 8 in the instant drawings and para. [0077] in the Specification), further shows monitoring the expected outcome for each of the controls (page 10, para. 1) including the staining controls (sample-independent) that are used to assess the efficiency of the staining process (Figure 6); extension controls (sample-independent) that test the efficiency of single base extension (Figure 7); target-removal controls (sample-independent) that test the efficiency of stripping off DNA templates after the extension reaction (Figure 8); hybridization controls (sample-independent) that assess the efficiency of DNA hybridization using synthetic targets instead of amplified DNA (Figure 9). Illumina further shows using sample-dependent controls to evaluate assay performance across samples, and assess sample DNA quality (Appendix A, pages 22-24) and with regard to the sample dependent controls as grouped according to the different process stages (Figure 1, which corresponds to FIG. 8 in the instant drawings and para. [0077] in the Specification), further shows stringency controls (sample-dependent) comprising perfect match (PM) and mis-match (MM) controls that assess the stringency of the hybridization process (Figure 11); non-specific binding controls (sample-dependent) that test sample quality and specificity of the assay (Figure 12); and non-polymorphic controls (sample-dependent) that assess sample quality and overall performance of the assay by querying non-polymorphic regions of the human genome (Figure 13). 
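The control-probe grouping cited from the Illumina training guide can be summarized as a small lookup structure. The groupings below follow the passages cited above; the dictionary layout and key names are only an illustrative sketch.

```python
# Summary of the Infinium assay controls described in the cited training
# guide, grouped by sample dependence; the data-structure layout is ours.
INFINIUM_CONTROLS = {
    "sample_independent": [   # evaluate BeadChip and reagent performance
        "staining controls",        # efficiency of the staining process
        "extension controls",       # efficiency of single-base extension
        "target removal controls",  # stripping of templates after extension
        "hybridization controls",   # hybridization via synthetic targets
    ],
    "sample_dependent": [     # evaluate sample quality and assay performance
        "stringency controls",           # perfect-match vs mismatch probes
        "non-specific binding controls", # sample quality and assay specificity
        "non-polymorphic controls",      # query non-polymorphic human loci
    ],
}

def controls_for(kind: str) -> list[str]:
    """Return the control families for a dependence class."""
    return INFINIUM_CONTROLS[kind]
```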
Therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method shown by Schofl 2017 by incorporating methods for using probabilistic classification of genotyping results, as shown by Schofl 2016, and discussed above. One of ordinary skill in the art would have been motivated to combine the methods of Schofl 2017 with the methods of Schofl 2016, because Schofl 2017 shows that machine learning techniques are employed to help identify “implausible” genotypes that warrant repeated analyses to check their validity, and Schofl 2016 shows methods for training and using probabilistic classifiers for classifying typing results. This modification would have had a reasonable expectation of success given that both Schofl 2017 and Schofl 2016 disclose methods for automated prediction of genotyping results with the software automatically suggesting the need for repeat genotyping runs, including the analysis of individual loci or the entire sample, or alternatively a full sample repeat typing including a second DNA extraction, or alternatively identifying the sample contamination (which would require a new sample). It would have been further prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method shown by Schofl 2017 in view of Schofl 2016 by incorporating methods of training and using classifiers with data of readouts from assay process probes, as shown by Illumina, and discussed above. 
One of ordinary skill in the art would have been motivated to combine the methods of Schofl 2017 in view of Schofl 2016 with the methods of Illumina, because using the trained probabilistic classifiers of Schofl 2017 in view of Schofl 2016 would be more accurate using parameters for the data generated by the Illumina process control probes, because Illumina shows that the Infinium microarray provides tools to genotype with unparalleled accuracy and reproducibility, and that all Infinium BeadChips are equipped with a set of internal control probes designed to support quality control of the assay’s stringent performance criteria and to demonstrate its robustness (Illumina, page 5). This modification would have had a reasonable expectation of success given that both Schofl 2017 in view of Schofl 2016 and Illumina disclose methods for ensuring genotyping results at the highest possible quality.

Conclusion

No claims are allowed. This Office action is a Non-Final action. A shortened statutory period for reply to this action is set to expire THREE MONTHS from the mailing date of this action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to STEVEN W. BAILEY whose telephone number is (571) 272-8170. The examiner can normally be reached Mon - Fri, 1000 - 1800. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, KARLHEINZ SKOWRONEK, can be reached on (571) 272-9047. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/S.W.B./
Examiner, Art Unit 1687

/Joseph Woitach/
Primary Examiner, Art Unit 1687

Prosecution Timeline

May 27, 2021: Application Filed
Mar 21, 2025: Non-Final Rejection (§101, §103, §112)
Jun 26, 2025: Response Filed
Jul 09, 2025: Interview Requested
Aug 05, 2025: Final Rejection (§101, §103, §112)
Aug 14, 2025: Examiner Interview Summary
Oct 08, 2025: Response after Non-Final Action
Nov 10, 2025: Request for Continued Examination
Nov 12, 2025: Response after Non-Final Action
Jan 13, 2026: Non-Final Rejection (§101, §103, §112) (current)

Precedent Cases

Applications granted by this same examiner with similar technology:

Patent 12527627: GENERATIVE COMPUTATIONAL PREDICTIVE MODEL FOR SOFT TISSUE REPAIR PLANNING (granted Jan 20, 2026; 2y 5m to grant)
Patent 12467096: METHODS AND SYSTEMS FOR IDENTIFYING METHYLATION BIOMARKERS (granted Nov 11, 2025; 2y 5m to grant)
Patent 12458967: METHOD OF STORING DATA IN POLYMER (granted Nov 04, 2025; 2y 5m to grant)
Patent 12374422: SEQUENCE-GRAPH BASED TOOL FOR DETERMINING VARIATION IN SHORT TANDEM REPEAT REGIONS (granted Jul 29, 2025; 2y 5m to grant)
Patent 12367978: METHODS AND SYSTEMS FOR DETERMINING SOMATIC MUTATION CLONALITY (granted Jul 22, 2025; 2y 5m to grant)

Based on this examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 35%
With Interview: 56% (+20.8%)
Median Time to Grant: 4y 4m
PTA Risk: High

Based on 66 resolved cases by this examiner. Grant probability derived from career allow rate.
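The headline percentages are consistent with simple arithmetic on the stated career data (23 grants out of 66 resolved cases, plus the stated +20.8-point interview lift). The rounding below is our assumption about how the displayed figures are derived.

```python
# Stated career data for this examiner: 23 granted of 66 resolved cases.
granted, resolved = 23, 66
interview_lift = 20.8  # stated lift, in percentage points

base_grant_prob = granted / resolved * 100          # ~34.85%
with_interview = base_grant_prob + interview_lift   # ~55.65%

print(round(base_grant_prob), round(with_interview))  # prints: 35 56
```

This matches the displayed 35% grant probability and 56% with-interview figure.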
