DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Claim Status
Claims 1-29 are pending.
Claims 3-5, 7-10, 14-16, 18-21 are objected to.
Claims 1-2, 6, 11-13, 17, and 21-29 are rejected.
Priority
The instant Application claims domestic benefit of US provisional applications 63/161,880 and 63/161,896, filed Mar 16 2021. Accordingly, each of claims 1-29 is afforded an effective filing date of Mar 16 2021.
Information Disclosure Statement
The information disclosure statements (IDS) filed on Jun 2 2022, Aug 1 2022, Mar 28 2024, Apr 25 2024, Jun 24 2024, Oct 17 2024, Feb 20 2025, Jun 26 2025, Oct 2 2025, and Dec 4 2025 are in compliance with the provisions of 37 CFR 1.97 and have therefore been considered. Signed copies of the IDS documents are included with this Office Action.
The information disclosure statements filed Jul 10 2024 and Dec 21 2023 fail to comply with 37 CFR 1.98(a)(2), which requires a legible copy of each cited foreign patent document; each non-patent literature publication or that portion which caused it to be listed; and all other information or that portion which caused it to be listed. The copy of the Jul 10 2024 NPL publication is not legible. The copy of the Dec 21 2023 FOR #1 is not provided in English. No copy of the Dec 21 2023 FOR #13 has been provided. It is further noted that a second NPL publication was submitted on Dec 21 2023 but was not listed on the IDS and is therefore not considered. The IDSs have been placed in the application file, but the information referred to therein has not been completely considered, as indicated by strikethrough. All other references have been considered.
Applicant is reminded that it is desirable to avoid the submission of long lists of documents if it can be avoided. As set forth in MPEP 2004, applicant is directed to eliminate clearly irrelevant and marginally pertinent cumulative information. If a long list is submitted, highlight those documents which have been specifically brought to applicant's attention and/or are known to be of most significance. See Penn Yan Boats, Inc. v. Sea Lark Boats, Inc., 359 F. Supp. 948, 175 USPQ 260 (S.D. Fla. 1972), aff'd, 479 F.2d 1338, 178 USPQ 577 (5th Cir. 1973), cert. denied, 414 U.S. 874 (1974). But cf. Molins PLC v. Textron Inc., 48 F.3d 1172, 33 USPQ2d 1823 (Fed. Cir. 1995). Applicant has cited more than 600 references, many of which are clearly irrelevant to the claimed invention. Applicant is cautioned against burying material references and the appearance of inequitable conduct in this application.
Drawings
The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they include the following reference characters not mentioned in the description: 156, 158, 160, 162, and 164 in FIG. 1; and 735 in FIG. 7.
The drawings are objected to as failing to comply with 37 CFR 1.84(p)(4) because reference characters "1408" and "14012" have both been used to designate “Edge tiles” in FIG. 14 and “1412” and “1414” have both been used to designate “Central tiles” in [0165] as published.
Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Claim Objections
The claims are objected to for the following informalities:
Claim 21 recites multiple limitations which include a plurality of steps, such as in limitations 3-4: “configuring the topology of the neural network… and causing the neural network…”. As set forth in 37 CFR 1.75, where a claim sets forth a plurality of steps, each step of the claim should be separated by a line indentation (see MPEP 608.01(i)).
Claim Rejections - 35 USC § 112
35 U.S.C. 112(b)
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
Claims 21-25 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention.
Claim 21, limitation 2, recites “wherein the first sensor data and the second sensor data are generated during a subset of sensing cycles in a series of sensing cycles”. It is unclear whether the wherein clause is intended to require generating the first and second sensor data within the metes and bounds of the claimed invention, or if it is only further limiting the type of sensor data such that their generation is not required within the metes and bounds of the invention. As set forth in MPEP 2111.04.I, “wherein” clauses raise the question as to the limiting effect of the language in a claim. The metes and bounds of the claims are unclear. In the interest of compact prosecution, it is assumed that the generation of the sensor data is not required to be performed. The rejection may be overcome by clarifying what steps are required to be performed. Claims 22-25 are rejected based on their dependency from claim 21.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
A. Claims 1-2, 11-13, 17, 21-23, and 26-29 are rejected under 35 U.S.C. 103 as being unpatentable over Dikici et al. (US 2022/0067497; cited on the Oct 2 2025 IDS) in view of Bartov et al. (US 2021/0366576; priority to Mar 10 2019; newly cited; corresponds to US 11,462,300 cited on the Dec 21 2023 IDS).
Claim 1 recites a system, comprising: a host processor; memory accessible by the host processor storing a topology of a neural network, a plurality of weights sets to configure the topology to execute a base calling operation, weight sets in the plurality of weights sets trained on respective training data sets in a plurality of training data sets, the training data sets corresponding to respective sequencing events in a plurality of sequencing events of the base calling operation, the sequencing events spanning temporal progression of the base calling operation through subseries of sensing cycles in a series of sensing cycles, and spatial progression of the base calling operation through locations on a biosensor, and sensor data for sensing cycles in the series of sensing cycles; and a configurable processor having access to the memory and configured with data flow logic to load the topology on processing elements of the configurable processor, select a weight set from the plurality of weights sets based at least in part on a subject subseries of sensing cycles and/or a subject location on the biosensor, load, on the processing elements, subject sensor data for the subject subseries of sensing cycles and the subject location on the biosensor, and load weights in the selected weight set on the processing elements to configure the topology with the weights, and to cause the neural network to apply the weights in the selected weight set on the subject sensor data to produce base call classification data.
Claim 11 recites a system, comprising: a host processor; memory accessible by the host processor storing a topology of a neural network, first, second, and third weight sets for configuring the topology to execute a base calling operation, the first, second, and third weight sets respectively corresponding to first, second, and third subseries of sensing cycles in a series of sensing cycles, and first, second, and third sensor data respectively corresponding to the first, second, and third subseries of sensing cycles; and a configurable processor having access to the memory and configured with data flow logic to load the topology on processing elements of the configurable processor, load the first sensor data on the processing elements, load the first weight set on the processing elements to configure the topology with weights in the first weight set, and cause the neural network to apply the weights in the first weight set on the first sensor data to produce first base call classification data for sensing cycles in the first subseries of sensing cycles, load the second sensor data on the processing elements, load the second weight set on the processing elements to configure the topology with weights in the second weight set, and cause the neural network to apply the weights in the second weight set on the second sensor data to produce second base call classification data for sensing cycles in the second subseries of sensing cycles, and load the third sensor data on the processing elements, load the third weight set on the processing elements to configure the topology with weights in the third weight set, and cause the neural network to apply the weights in the third weight set on the third sensor data to produce third base call classification data for sensing cycles in the third subseries of sensing cycles.
Claim 21 recites a computer-implemented method for generating base call classification data, comprising: loading a topology of a neural network on processing elements of a processor, the processor to execute base call operations; storing (i) first sensor data from clusters within first one or more tiles of a flow cell, (ii) second sensor data from clusters within second one or more tiles of the flow cell, (iii) a first weight set comprising first one or more weights, and (iv) a second weight set comprising second one or more weights, wherein the first sensor data and the second sensor data are generated during a subset of sensing cycles in a series of sensing cycles; configuring the topology of the neural network with the first weight set, and causing the neural network configured with the first weight set to process the first sensor data and to produce first base call classification data for the first one or more tiles and for the subset of sensing cycles; and configuring the topology of the neural network with the second weight set, and causing the neural network configured with the second weight set to process the second sensor data and to produce second base call classification data for the second one or more tiles and for the subset of sensing cycles.
Claim 26 recites a system, comprising: a host processor; memory accessible by the host processor storing (i) a topology of a neural network, and (ii) a plurality of weights to configure the topology to execute a basecalling operation, wherein the plurality of weights are based on tile locations, a series of sensing cycles, and/or sensor data; and a configurable processor having access to the memory and configured with data flow logic to load the topology on processing elements of the configurable processor, load the plurality of weights on the processing elements to configure the topology with the plurality of weights, to cause the neural network to produce base call classification data.
The prior art to Dikici discloses methods and systems for converting a plurality of weights of a filter of a Deep Neural Network (DNN) in a first number format to a second number format, the second number format having less precision than the first number format, to enable the DNN to be implemented in hardware logic (abstract). Dikici teaches computing-based devices comprising processors and computer executable instructions to control the operation of the device [0152-0153]. Dikici teaches that the processor may be an application-specific integrated circuit (ASIC), a programmable logic array, a field-programmable gate array (FPGA), or the like (i.e., a configurable processor) [0159]. Dikici teaches storing and accessing input data in memory [0139]. Dikici teaches training a DNN to perform a desired task, such as image processing, by identifying values for the weights of the DNN [0012]. At [0133-0135], in reference to FIG. 16, Dikici teaches a convolution engine of a DNN (i.e., a topology of a neural network) configured to perform a convolution operation on the received input data using the weights associated with a particular convolution layer. The weights for each convolution layer of the DNN may be stored in a coefficient buffer, and the weights for a particular convolution layer may be provided (i.e., loaded) to the convolution engine when that particular convolution layer is being processed by the convolution engine (i.e., configure the topology with the weights). Dikici teaches that the DNN may have multiple convolution engines so that multiple windows can be processed simultaneously [0137]. Dikici teaches that the output of each filter of a convolution layer is generated by sliding the filter across the input data and, at each step, applying the filter weights to the input data values for that window [0073].
Dikici does not teach the limitations “a plurality of weights sets to configure the topology to execute a base calling operation, the training data sets corresponding to respective sequencing events in a plurality of sequencing events of the base calling operation, the sequencing events spanning temporal progression of the base calling operation through subseries of sensing cycles in a series of sensing cycles, and spatial progression of the base calling operation through locations on a biosensor”, “sensor data for sensing cycles in the series of sensing cycles”, “select a weight set from the plurality of weights sets based at least in part on a subject subseries of sensing cycles and/or a subject location on the biosensor”, or “cause the neural network to apply the weights in the selected weight set on the subject sensor data to produce base call classification data” in claim 1; “configuring the topology to execute a base calling operation”, “sensing cycles in a series of sensing cycles”, data “corresponding to the first, second, and third subseries of sensing cycles”, to produce first, second, and third “base call classification data for sensing cycles” in claim 11; “the processor to execute base call operations”, first and second sensor data from clusters within first and second one or more tiles of a flow cell, “wherein the first sensor data and the second sensor data are generated during a subset of sensing cycles in a series of sensing cycles”, causing the neural network to produce first and second base call classification data for the first and second one or more tiles and for the subset of sensing cycles in claim 21; and “configure the topology to execute a basecalling operation, wherein the plurality of weights are based on tile locations, a series of sensing cycles, and/or sensor data” and “to cause the neural network to produce base call classification data” in claim 26.
However, the prior art to Bartov discloses methods, systems, and media for accurate and efficient estimation of a genome of a genus (abstract). Bartov teaches that flow sequencing by synthesis (SBS) may comprise performing repeated DNA extension cycles, wherein individual species of nucleotides and/or labeled analogs are presented to a primer-template-polymerase complex, which then incorporates the nucleotide if complementary [0094]. Bartov teaches that, after processing biological samples to generate sequencing signals of nucleic acids, a trained algorithm may be used to process the sequencing signals to perform base calling (e.g., determining the base calls based on the sequence signals) [0167-0181]. Bartov teaches that the algorithm may be a neural network, including a U-Net [0183-0186]. Bartov teaches that the U-Net may be fed various types of information [0189]. At [0191], Bartov teaches that the additional type of information may include local information corresponding to the vicinity of the readings. For example, the local information may represent readings within a tile, such as a reading per flow. A substrate that supports the samples may be virtually segmented into tiles, and the local information may reflect readings corresponding to a given tile. For example, the readings may be calculated as a mean signal for all beads in the photometry image tile and per flow.
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine, in the course of routine experimentation and with a reasonable expectation of success, Dikici and Bartov because both references disclose methods for training and using neural networks. Such a combination would result in a method where weights trained for specific tiles of a flow cell and specific subseries of sensing cycles, as taught by Bartov, are stored and accessed only when necessary, as taught by Dikici. Ultimately, the weights for the windows and filters as taught by Dikici could be substituted by the weights for specific tiles of a flow cell and subseries of sensing cycles as taught by Bartov, and the DNN of Dikici could be applied to the basecalling application taught by Bartov. The motivation to perform base calling with separate weights for different tiles would have been to use local information to compensate for non-uniformity across the substrate (for example, some tiles may be illuminated with stronger radiation than another tile), as taught by Bartov [0191].
Regarding claim 2, Dikici in view of Bartov teaches the system of claim 1. Claim 2 further adds that the subseries of sensing cycles include a subseries of initial sensing cycles, a subseries of intermediate sensing cycles, and a subseries of final sensing cycles, and wherein the training data sets and the weight sets respectively correspond to the subseries of initial sensing cycles, the subseries of intermediate sensing cycles, and the subseries of final sensing cycles, which Dikici does not teach.
However, Bartov teaches that additional information may include information indicative of the flow base (base used during the flow) and/or the flow position, such as a flow base synthetic integer vector and a flow position synthetic integer vector [0192], which is considered to read on a subseries of sensing cycles, including initial, intermediate, and final cycles, as instantly claimed.
Regarding claims 12-13, 22, 27, and 29, Dikici in view of Bartov teaches the system of claim 11, the method of claim 21, and the system of claim 26. Claims 12-13, 22, and 27 further add additional sensor data of sensing cycles, weight sets, and basecalling operations, and claim 29 further adds that the second series of sensing cycles occurs subsequent to the first series of sensing cycles, which is considered to be taught as described above by Bartov, who teaches basecalling of cycles of SBS data [0094; 0167-0181].
Regarding claim 17, Dikici in view of Bartov teaches the system of claim 11. Claim 17 further adds that weights in the first, second, and third weight sets are quantized using different scaling factors.
Dikici teaches using quantized weights for a filter of the DNN in an accelerated DNN [0131]. Dikici teaches determining weights for the filters with different quantization methods (claims 1 and 5).
Regarding claims 23 and 28, Dikici in view of Bartov teaches the method of claim 21 and the system of claim 26. Claim 23 further adds that the first one or more tiles are within a first area of the flow cell; and the second one or more tiles are within a second area of the flow cell, and claim 28 further adds that the first tile locations are on a first area within a flow cell; and the second tile locations are on a second area within the flow cell, which Dikici does not teach.
However, at [0191], Bartov teaches that the additional type of information may include local information corresponding to the vicinity of the readings. For example, the local information may represent readings within a tile, such as a reading per flow. A substrate that supports the samples may be virtually segmented into tiles, and the local information may reflect readings corresponding to a given tile. For example, the readings may be calculated as a mean signal for all beads in the photometry image tile and per flow. This local information may be used for compensating for non-uniformity across the substrate (for example, some tiles may be illuminated with stronger radiation than another tile).
B. Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Dikici in view of Bartov, as applied to claim 1 above, and in further view of Cacho et al. (2016, Base-Calling of High-Throughput Sequencing Data Using a Random Effects Mixture Model (Doctoral dissertation, UC Riverside); cited on the Jun 2 2022 IDS).
Regarding claim 6, Dikici in view of Bartov teaches the system of claim 1. Claim 6 further adds that the sequencing events span temporal progression of the base calling operation through base calling paired-end reads, and that the training data sets and the weight sets respectively correspond to reads in the paired-end reads, which neither Dikici nor Bartov teaches.
However, the prior art to Cacho discloses Base-Calling of High-Throughput Sequencing Data Using a Random Effects Mixture Model (title). Cacho teaches paired-end sequencing (p. 16, par. 1).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine, in the course of routine experimentation and with a reasonable expectation of success, Dikici in view of Bartov with Cacho because Bartov and Cacho both teach methods for basecalling sequencing data. It would have been obvious to one of ordinary skill in the art to substitute the paired-end read signals taught by Cacho for the sequencing data taught by Bartov, because one of ordinary skill in the art would have been able to carry out such a substitution, and the result of performing basecalling on the sequenced paired-end reads would have been reasonably predictable.
Conclusion
No claims are allowed.
Claims 3-5, 7-10, 14-16, 18-20, and 24-25 appear to be free of the prior art. The closest prior art, Bartov et al. (US 2021/0366576; priority to Mar 10 2019; newly cited; corresponds to US 11,462,300 cited on the Dec 21 2023 IDS), does not disclose accounting for edge locations and non-edge locations, as instantly claimed in claims 3, 7-8, and 24-25, or determining one or more parameters of the current sequencing run and selecting the weight set from the plurality of weights sets based further on the one or more determined parameters of the current sequencing run, as instantly claimed in claim 9. The closest prior art, Dikici et al. (US 2022/0067497; cited on the Oct 2 2025 IDS) in view of Bartov, does not disclose that the topology includes spatial layers that do not combine the sensor data and resulting feature maps between the successive sensing cycles, and temporal layers that combine resulting feature maps between the successive sensing cycles, as instantly claimed in claim 14, or that the weight sets correspond to different sequencing chemistries, sequencing assays, or sequencing configurations, as instantly claimed in claims 18-20.
Claims 3-5, 7-10, 14-16, 18-20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Inquiries
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JANNA NICOLE SCHULTZHAUS whose telephone number is (571)272-0812. The examiner can normally be reached on Monday - Friday 8-4.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Olivia Wise, can be reached on (571)272-2249. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JANNA NICOLE SCHULTZHAUS/Examiner, Art Unit 1685