DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of the Claims
Claims 28-32 are withdrawn from further consideration pursuant to 37 CFR 1.142(b) as being drawn to a nonelected invention, there being no allowable generic or linking claim. Election was made without traverse in the reply filed on 26 December 2025, and the nonelected claims were canceled.
Claims 1-27 are pending and examined herein.
Claims 28-32 are canceled.
Priority
As detailed on the 10 March 2022 filing receipt, the application claims priority as early as 31 March 2021. At this point in examination, all claims are accorded this priority date as the effective filing date.
Information Disclosure Statement
Information disclosure statements (IDS) were filed on 27 July 2022. The submissions are in compliance with the provisions of 37 CFR 1.97. Accordingly, the references are being considered by the examiner, except for US PG Pub 20180274023 A1, which is duplicated (references 9 and 67) such that the latter entry is struck through, and the referenced PCT documents, which are foreign patent documents not present in the application contents.
Claim Objections
Claim 1 is objected to because the system claim recites three components – memory, a neural network, and an intensity contextualization unit – followed by additional information about what the “data flow logic” and the “neural network” are configured to do. The claim is interpreted as though the last two elements were placed in “wherein” clauses, and the claim should be amended appropriately.
Claims 26-27 are objected to because of the following informality: the final “processing” step is not indented in the same manner as the other steps (MPEP 608.01(i)).
Appropriate correction is required.
Claim Interpretation
Claim 1 recites “data flow logic having access to the memory.” The data flow logic is disclosed neither as part of the system nor as part of the processor (Fig. 14C) and so is interpreted as instructions.
Claim Interpretation under 35 USC 112(f)
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “an intensity contextualization unit” in claims 1, 16-18, and 21.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
The recited “intensity contextualization unit” is disclosed as having convolution pipelines (pg. 4, paragraph [29]) and as determining values (pg. 9, paragraphs [63]-[64]). The intensity contextualization unit is optionally disclosed as having a number of possible embodiments, including a multilayer perceptron (MLP), a feedforward neural network, a fully-connected neural network, a fully convolutional neural network, a semantic segmentation neural network, or a generative adversarial network (pg. 10, paragraph [66]). Therefore, all disclosed embodiments suggest the intensity contextualization unit is itself a neural network, in addition to the separately recited neural network.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-27 are rejected under 35 USC § 101 because the claimed inventions are directed to an abstract idea without significantly more. "Claims directed to nothing more than abstract ideas (such as a mathematical formula or equation), natural phenomena, and laws of nature are not eligible for patent protection" (MPEP 2106.04 § I). Abstract ideas include mathematical concepts and procedures for evaluating, analyzing, or organizing information, which are a type of mental process (MPEP 2106.04(a)(2)). The claims as a whole, considering all claim elements individually and in combination, are directed to a judicial exception at Step 2A, Prong Two, and the additional elements of the claims, considered individually and in combination, do not provide significantly more at Step 2B than the abstract idea of base calling.
MPEP 2106 organizes judicial exception (JE) analysis into Steps 1, 2A (Prongs One and Two), and 2B, as analyzed below.
Step 1: Are the claims directed to a process, machine, manufacture, or composition of matter (MPEP 2106.03)?
Step 2A, Prong One: Do the claims recite a judicially recognized exception, i.e., a law of
nature, a natural phenomenon, or an abstract idea (MPEP 2106.04(a-c))?
Step 2A, Prong Two: If the claims recite a judicial exception under Prong One, then is the judicial exception integrated into a practical application by an additional element (MPEP 2106.04(d))?
Step 2B: Do the claims recite a non-conventional arrangement of elements in addition to any identified judicial exception(s) (MPEP 2106.05)?
Step 1: Are the claims directed to a 101 process, machine, manufacture, or composition of matter (MPEP 2106.03)?
The claims are directed to a system (claim 1 and its dependents), a computer-implemented method (claim 25), a computer system (claim 26), and a non-transitory computer-readable medium (claim 27), each of which falls within one of the categories of statutory subject matter. [Step 1: Yes]
Step 2A, Prong One: Do the claims recite a judicially recognized exception, i.e., a law of nature, a natural phenomenon, or an abstract idea (MPEP 2106.04(a-c))?
With respect to Step 2A, Prong One, the claims recite judicial exceptions in the form of abstract ideas. MPEP § 2106.04(a)(2) further explains that abstract ideas are defined as:
• mathematical concepts (mathematical formulas or equations, mathematical relationships
and mathematical calculations) (MPEP 2106.04(a)(2)(I));
• certain methods of organizing human activity (fundamental economic principles or practices, managing personal behavior or relationships or interactions between people) (MPEP 2106.04(a)(2)(II)); and/or
• mental processes (concepts practically performed in the human mind, including observations, evaluations, judgments, and opinions) (MPEP 2106.04(a)(2)(III)).
The claims recite detecting intensity patterns in patches (claim 1), which is interpreted as an observation and thus a step the human mind is practically equipped to perform.
The claims recite determining intensity context data based on the values (claim 1), where determining intensity context is interpreted as making a comparison, which is data interpretation or evaluation and thus a step the human mind is practically equipped to perform.
The claims recite appending the context data to the patches to generate contextualized images (claim 1), which is interpreted as data manipulation where the previously determined context data and images are combined, and thus an abstract step.
The claims recite applying the convolution filters and generating base calls (claim 1), where using the data to make a base call is interpreted as data evaluation and thus a mental process.
The claims recite additional information about the images (claim 2), where intensity values are considered to be numerical values associated with channels, which is additional information about the data and thus abstract.
The claims recite measures such as central tendency (claims 3-20), and so are considered to recite mathematical concepts.
The claims recite image processing to generate representations (claim 22), concatenating image data (claim 23), and additional information about the representation (claim 24), all of which is interpreted as information about the data and thus abstract.
Hence, the claims explicitly recite numerous elements that, individually and in combination, constitute abstract ideas. The claims must therefore be examined further to determine whether they integrate those abstract ideas into a practical application (MPEP 2106.04(d)). [Step 2A, Prong One: Yes]
Step 2A, Prong Two: If the claims recite a judicial exception under Prong One, then is the judicial exception integrated into a practical application by an additional element (MPEP 2106.04(d))?
Elements in addition to the abstract ideas recited in the instant claims are: a system or computer system (claims 1 and 25-26), memory storing emission images (claims 1 and 26), accessing images (claims 25-27), processors (claims 26-27), non-transitory computer readable memory (claim 27), a neural network with convolution filters (claim 1), and an intensity contextualization unit, interpreted as a neural network (claim 1) with convolution filters (claim 21).
The additional elements in the claims largely recite computer elements, in the forms of memory, processors, and neural networks. Hence, these are mere instructions to apply the abstract idea using a computer, and therefore the claim does not integrate that abstract idea into a practical application (see MPEP 2106.04(d) § I; and MPEP 2106.05(f)). The recited accessing of data in memory is considered to be a data gathering step, which is insignificant extra-solution activity and does not integrate the abstract ideas into a practical application (MPEP 2106.05(g)). Additionally, the use of neural networks in particular merely confines the use of the abstract idea to a particular technological environment (neural networks) and thus fails to add an inventive concept to the claims. See MPEP 2106.05(h).
None of the dependent claims recite any additional non-abstract elements; they are all directed
to further aspects of the information being analyzed, the manner in which that analysis is performed, or
the mathematical operations performed on the information. [Step 2A Prong Two: No]
Step 2B: Do the claims recite a non-conventional arrangement of elements in addition to any identified judicial exception(s) (MPEP 2106.05)?
Claims found to be directed to a judicial exception are then further evaluated to determine if the claims recite an inventive concept that provides significantly more than the judicial exception itself. Step 2B of 101 analysis determines whether the claims contain additional elements that amount to an inventive concept, and an inventive concept cannot be furnished by an abstract idea itself (MPEP 2106.05).
Elements in addition to the abstract ideas recited in the instant claims are: a system or computer system (claims 1 and 25-26), memory storing emission images (claims 1 and 26), accessing images (claims 25-27), processors (claims 26-27), non-transitory computer readable memory (claim 27), a neural network with convolution filters (claim 1), and an intensity contextualization unit, interpreted as a neural network (claim 1) with convolution filters (claim 21).
Accessing image data that is stored in memory is interpreted as a conventional computer task (Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93). See MPEP 2106.05(d), subsection II. Using the computer, and in particular neural networks, to obtain the basecall information is considered to be instructions to apply the abstract ideas, which cannot provide an inventive concept. See MPEP 2106.05(f). Use of neural networks in basecalling is taught at least by Lv (bioRxiv 374165: 6 pgs., 2020; newly cited), Boža (Plos One 12(6): 178751, 13 pgs., 2017; previously cited on a 27 July 2022 IDS form), and Zeng (Frontiers in Genetics 10(1332): 11 pgs., 2020; previously cited on a 27 July 2022 IDS form). [Step 2B: No]
Conclusion: Claims are Directed to Non-statutory Subject Matter
For these reasons, the claims, when the limitations are considered individually and as a whole,
are directed to an abstract idea and lack an inventive concept. Hence, the claimed invention does not
constitute significantly more than the abstract idea, so the claims are rejected under 35 USC § 101 as
being directed to non-statutory subject matter.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-6 and 25-27
Claims 1-6 and 25-27 are rejected under 35 U.S.C. 103 as being unpatentable over Jaganathan (WO 2020/0191387 A1; newly cited) in view of Wang (Scientific Reports 7(41348): 11 pgs., 2017; previously cited on a 27 July 2022 IDS form).
Claim 1 recites a system for base calling, comprising memory storing images that depict intensity emissions of a set of analytes, the intensity emissions generated by analytes in the set of analytes during sequencing cycles of a sequencing run.
Jaganathan teaches storing images of patches in memory (paragraph [361]) where the image data depicts intensity emissions of one or more clusters and their surrounding background (paragraph [414]).
Claim 1 recites data flow logic having access to the memory and configured to provide a neural network access to the images on a patch-by-patch basis, patches in an image depicting the intensity emissions for a subset of the analytes, and the patches having undiverse intensity patterns due to limited base diversity of analytes in the subset.
Jaganathan teaches interpreting images on an “image patch-by-image patch basis” (paragraph [534]) before being fed into a neural network (paragraph [244]), where the patch has less information than a full tile or cell.
Claim 1 recites the neural network with a plurality of convolution filters, convolution filters in the plurality of convolution filters having receptive fields confined to the patches, and the convolution filters configured to detect intensity patterns in the patches with losses in detection due to the undiverse
intensity patterns and the confined receptive fields.
Jaganathan teaches a convolutional neural network with a number of convolution filters (paragraph [81]), where the filters have local receptive fields (paragraph [318]).
Claim 1 recites an intensity contextualization unit configured to determine intensity context data based on intensity values in the images and store the intensity context data in the memory.
Jaganathan teaches a spatial convolutional network (paragraph [781]) for analysis of multiple patches in time (i.e., patches adjacent in time). Jaganathan teaches patches with intensity data from adjacent clusters (paragraph [399]), where the adjacent clusters are interpreted as possibly occurring in a different patch, but Jaganathan does not explicitly model emission intensity from other patches.
Claim 1 recites the data flow logic is configured to append the intensity context data to the patches to generate intensity contextualized images and provide the intensity contextualized images to the neural network.
Jaganathan does not teach appending context data to the patches.
Claim 1 recites the neural network is configured to apply the convolution filters on the intensity
contextualized images and generate base call classifications, the intensity context data in the
intensity contextualized images compensating for the losses in detection.
Jaganathan teaches processing the output from the layers of the neural network to call bases (claim 1) but not with contextualization from adjacent patches.
Wang teaches deconvoluting adjacent clusters (abstract), where the recited comparison to other patches is interpreted as reading on removing crosstalk between adjacent clusters such that analysis of one cluster is informed by analysis of another cluster (pg. 1, last paragraph). Wang teaches identifying reads with the smallest edit distance to the consensus and giving them a specific mapping rate (pg. 10, third paragraph), which is interpreted as appending the information.
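By way of illustration only, the claimed operation of appending intensity context data to image patches can be sketched as follows. This is a generic numerical sketch for clarity; it is not drawn from the instant specification or the cited art, and all function and variable names are hypothetical. The sketch appends patch-wide statistics (cf. claims 3-6: maximum, minimum, mean) as constant-valued extra channels:

```python
import numpy as np

def intensity_context(patch):
    # Hypothetical: summary statistics over the patch's intensity values
    # (cf. claims 3-6: maximum, minimum, mean).
    return np.array([patch.max(), patch.min(), patch.mean()])

def contextualize(patch):
    # Hypothetical: append the context data as constant-valued extra channels
    # so that every convolution window also "sees" patch-wide statistics,
    # despite the filters' receptive fields being confined to the patch.
    h, w, _ = patch.shape
    ctx = intensity_context(patch)
    planes = np.broadcast_to(ctx, (h, w, ctx.size))
    return np.concatenate([patch, planes], axis=-1)

patch = np.random.rand(16, 16, 2)   # one 16x16 patch, two intensity channels
out = contextualize(patch)          # shape (16, 16, 5): 2 image + 3 context
```

The contextualized array would then be provided to a convolutional neural network in place of the raw patch.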
Claim 25 recites a computer-implemented method of base calling incorporating the steps found in claim 1, as taught by Jaganathan and Wang, where Jaganathan teaches a system comprising a processor and memory (paragraph [757]), and such a system is interpreted as a computer.
Claim 26 recites a system including one or more processors coupled to memory, the memory loaded with computer instructions to perform base calling, the instructions, when executed on the processors, implement the method previously claimed in claims 1 and 25, as taught by Jaganathan and Wang, where Jaganathan teaches a system comprising a processor and memory (paragraph [757]).
Claim 27 recites a non-transitory computer readable storage medium impressed with
computer program instructions for base calling, the instructions, when executed on a processor,
implement the method previously claimed in claims 1 and 25.
Jaganathan and Wang teach the required steps, and Jaganathan further teaches executing the steps using a non-transitory computer readable medium (paragraph [757]).
Claim 2 recites the images have the intensity values for one or more intensity channels.
Jaganathan teaches intensity values in each image channel (paragraph [339]), where the image channels are interpreted as intensity channels.
Claim 3 recites the intensity context data specifies summary statistics of the intensity values.
Jaganathan teaches a maximum intensity (paragraph [315]), which is interpreted as a summary statistic.
Claim 4 recites the intensity context data identifies a maximum value in the intensity values.
Jaganathan teaches a maximum intensity (paragraph [315]).
Claim 5 recites the intensity context data identifies a minimum value in the intensity values.
Jaganathan teaches a minimum intensity (paragraph [685]).
Claim 6 recites the intensity context data identifies a mean of the intensity values.
Jaganathan teaches determining mean intensity (paragraphs [743]-[744]).
Combining Jaganathan and Wang
An invention would have been obvious to one of ordinary skill in the art if some motivation in the prior art would have led that person to modify prior art reference teachings to arrive at the claimed invention prior to the effective filing date of the invention. One would have been motivated to combine Jaganathan, which teaches using a neural network to interpret intensity emissions from data adjacent in space, with Wang, which teaches error arising from clusters adjacent in space (abstract). Jaganathan teaches that physically close clusters can be difficult to parse (paragraph [84]) and examines patches over time, while Wang teaches that an option to overcome this problem, which they term crosstalk, is to correct for clusters nearby in space by mathematically removing or correcting the signal (abstract). Together, these teachings are interpreted as the contextualization of the instant claims. Jaganathan and Wang are both directed to the shared field of endeavor of optimizing nucleotide basecalling. Therefore, the invention is considered prima facie obvious.
Claims 7-15
Claims 7-15 are rejected under 35 U.S.C. 103 as being unpatentable over Jaganathan in view of Wang as applied to claims 1-6 and 25-27 above, and further in view of Quinn (Experimental Design and Data Analysis for Biologists, Cambridge Press: New York, 552 pgs., 2002; newly cited).
Claim 7 recites the intensity context data identifies a mode of the intensity values.
Jaganathan teaches mean intensity (paragraphs [743]-[744]) but not mode.
Quinn teaches that the mean, median, and mode are known measures of the center of a distribution (pg. 10, col. 1, second paragraph).
Claim 8 recites the intensity context data identifies a standard deviation of the intensity values.
Jaganathan teaches variance as a parameter (pg. 6, paragraph [79]), where variance can be represented as standard deviation (pg. 103, paragraph [694]).
Claim 9 recites the intensity context data identifies a variance of the intensity values.
Jaganathan teaches variance as a parameter (pg. 6, paragraph [79]).
Claim 10 recites the intensity context data identifies a skewness of the intensity values.
Quinn teaches skewness as a common asymmetry in biological data (pg. 62, col. 2, last paragraph).
Claim 11 recites the intensity context data identifies a kurtosis of the intensity values.
Jaganathan does not teach kurtosis of intensity values.
Quinn teaches kurtosis and the treatment of outliers (pg. 68, Section 4.5), which would affect the tailedness, or kurtosis, of the data.
Claim 12 recites the intensity context data identifies an entropy of the intensity values.
Jaganathan does not teach entropy of intensity values.
Quinn teaches determining uncertainty as an important characteristic of biological data (pg. 7, col. 2, first paragraph), where entropy is interpreted as reading on uncertainty.
Claim 13 recites the intensity context data identifies one or more percentiles of the intensity values.
Jaganathan teaches normalizing intensity based on percentiles (paragraph [534]).
Claim 14 recites the intensity context data identifies a delta between at least one of the maximum value and the minimum value, the maximum value and the mean, the mean and the minimum value, and a higher one of the percentiles and a lower one of the percentiles.
Quinn teaches absolute deviations around the mean or median (pg. 144, col. 1, first paragraph), which is interpreted as a delta or difference between a maximum or minimum and the mean.
Claim 15 recites the intensity context data identifies a sum of the intensity values.
Jaganathan teaches sums of intensity data in the local receptive fields (Fig. 18b, Ref. 1816c).
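For illustration only, the summary statistics recited in claims 7-15 can be sketched as follows. This is a generic numerical sketch (all names hypothetical) and is not drawn from the cited art; the mode and entropy are computed from a histogram of the intensity values, one common convention among several:

```python
import numpy as np

def intensity_summary(values):
    # Hypothetical sketch of the summary statistics paralleling claims 7-15.
    v = np.asarray(values, dtype=float).ravel()
    hist, edges = np.histogram(v, bins=16)
    p = hist[hist > 0] / hist.sum()          # nonzero bin probabilities
    z = (v - v.mean()) / v.std()             # standardized values
    return {
        "mode": edges[np.argmax(hist)],            # claim 7: mode, via densest bin
        "std": v.std(),                            # claim 8: standard deviation
        "variance": v.var(),                       # claim 9: variance
        "skewness": float(np.mean(z ** 3)),        # claim 10: skewness
        "kurtosis": float(np.mean(z ** 4) - 3.0),  # claim 11: excess kurtosis
        "entropy": float(-np.sum(p * np.log2(p))), # claim 12: histogram entropy
        "percentiles": np.percentile(v, [25, 75]), # claim 13: percentiles
        "delta": v.max() - v.min(),                # claim 14: max-min delta
        "sum": v.sum(),                            # claim 15: sum
    }

summary = intensity_summary([0.1, 0.4, 0.4, 0.9])
```

Any subset of these values could serve as the intensity context data appended to a patch.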
Combining Jaganathan, Wang, and Quinn
The previously combined art is interpreted as teaching contextualization of intensity emissions by comparison to adjacent information, where measures such as mean, maximum, and minimum values are taught by Jaganathan and Wang. The previously combined art did not teach other metrics such as mode, skew, or differences around a measure of central tendency. However, Quinn teaches many of these measures, including that they are common, important measures (pg. 10, col. 1, second paragraph), where the median, mode, and mean are alternative measures for the center of a distribution. Quinn teaches these metrics as common in the measurement, interpretation, and analysis of biological data; thus it would have been obvious to one having ordinary skill in the art to perform a simple substitution of the metrics presented by the combination of Jaganathan and Wang for those taught by Quinn, and the results would have been reasonably predictable.
Claims Free of the Prior Art
Claims 16-24 are considered to be free of the prior art. The previously applied art does not teach determining multiple maxima, minima, or sums, nor the details of the neural network representing the intensity contextualization unit.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Robert J Kallal whose telephone number is (571)272-6252. The examiner can normally be reached Monday through Friday 8 AM - 4 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Olivia M. Wise can be reached at (571) 272-2249. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Robert J. Kallal/Examiner, Art Unit 1685