Prosecution Insights
Last updated: April 19, 2026
Application No. 17/921,417

METHOD OF DIAGNOSING A BIOLOGICAL ENTITY, AND DIAGNOSTIC DEVICE

Final Rejection (§101, §103)
Filed: Oct 26, 2022
Examiner: ERICKSON, BENNETT S
Art Unit: 3683
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Oxford University Innovation Limited
OA Round: 4 (Final)
Grant Probability: 38% (At Risk)
Projected OA Rounds: 5-6
Time to Grant: 3y 7m
Grant Probability with Interview: 84%

Examiner Intelligence

Career Allow Rate: 38% (53 granted / 141 resolved), -14.4% vs TC avg
Interview Lift: +45.9% (resolved cases with interview vs. without)
Avg Prosecution: 3y 7m; 47 applications currently pending
Total Applications: 188, across all art units
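The headline figures above can be re-derived from the raw counts. A quick check (this assumes the "-14.4% vs TC avg" delta is a simple percentage-point difference, which is how the report appears to compute it):

```python
# Career counts reported for this examiner
granted, resolved = 53, 141

allow_rate = granted / resolved                 # career allowance rate
print(f"Career Allow Rate: {allow_rate:.0%}")   # 38%

# Reported delta is -14.4 percentage points vs the Tech Center average,
# implying a TC-wide baseline of roughly 52%.
implied_tc_avg = allow_rate + 0.144
print(f"Implied TC average: {implied_tc_avg:.0%}")   # 52%
```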

Statute-Specific Performance

§101: 32.4% (-7.6% vs TC avg)
§103: 45.6% (+5.6% vs TC avg)
§102: 9.5% (-30.5% vs TC avg)
§112: 10.6% (-29.4% vs TC avg)
Tech Center averages are estimates. Based on career data from 141 resolved cases.
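A quick consistency check on the statute-specific figures: subtracting each reported delta from its allowance rate recovers the Tech Center baseline, and every row implies the same 40.0% average, so the table is internally consistent:

```python
# (allow_rate_%, delta_vs_tc_avg_%) per statute, as reported above
stats = {
    "§101": (32.4, -7.6),
    "§103": (45.6, +5.6),
    "§102": (9.5, -30.5),
    "§112": (10.6, -29.4),
}

# allow_rate = tc_avg + delta  =>  tc_avg = allow_rate - delta
implied_tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
print(implied_tc_avg)   # every statute implies the same 40.0% TC baseline
```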

Office Action

Rejection bases: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application No. GB2006144.6, filed on April 27, 2020.

Response to Amendment

In the amendment filed on October 28, 2025, the following has occurred: claims 1, 21, and 25 have been amended and claims 15, 19, and 24 have been cancelled. Claims 1, 3, 6-14, 16-18, 20-23, and 25 are now pending.

Claim Objections

Claim 1 is objected to because of the following informalities: "at least one unit" in p. 3, l. 1; "the at least one unit" in p. 3, l. 3; "the at least one unit" in p. 3, l. 4. These appear to be typographical errors, as Applicant's Specification of October 26, 2022 discloses on p. 13: "The generation of the bounding boxes using the area filtering (to include only objects of a suitable size) is combined with the localization information (to include only objects where colocalized labels are present) to provide the highest quality data to the machine learning system (i.e. data units that are most easily compared with each other and with training data and which contain minimal or no units that do not correspond to instances of the biological entity that it is desired to diagnose)." Appropriate correction is required. For examination purposes, the Examiner will interpret the claimed portions as "at least one data unit", "the at least one data unit", and "the at least one data unit".

Claim 25 is objected to because of the following informalities: "a subset of instances of the biological entity" in p. 6, l. 4; "at least one unit" in p. 6, l. 25; "the at least one unit" in p. 6, l. 27; "the at least one unit" in p. 6, l. 28. These appear to be typographical errors. Appropriate correction is required.
For examination purposes, the Examiner will interpret the claimed portions as "a subset of plural instances of the biological entity", "at least one data unit", "the at least one data unit", and "the at least one data unit".

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. - An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C.
112, sixth paragraph:

(A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and

(C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word "means" (or "step") in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word "means" (or "step") are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Conversely, claim limitations in this application that do not use the word "means" (or "step") are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word "means," but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: "by a sample processing unit" and "by a data processing unit" in claim 1; "a sample receiving unit configured to... ", "a sample processing unit configured... ", "a sensing unit configured to... ", and "a data processing unit configured to:... " in claim 25.

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 101

35 U.S.C.
101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1, 3, 6-14, 16-18, 20-23, and 25 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Claims 1, 3, 6-14, 16-18, 20-21: Step 2A Prong One

Claim 1 recites: attaching a plurality of optically detectable labels to each of at least a subset of plural instances of a biological entity in a sample, the attaching, for at least one of the plurality of optically detectable labels, comprising contacting each of the subsets of plural instances of the biological entity with a polyvalent cation, wherein at least one of the plurality of optically detectable labels comprises nucleic acids and fluorophores; capturing image data representing one or more images of the sample, each image containing plural instances of the biological entity; preprocessing the image data to obtain preprocessed image data, wherein preprocessing the image data comprises generating a plurality of sub-images for each image of the sample, each sub-image representing a different portion of the image and containing a different one of the instances of the biological entity, wherein generating the plurality of sub-images comprises: identifying regions where, in each region, plural optically detectable labels are colocalized, colocalization being defined as where locations of plural optically detectable labels are consistent with the optically detectable labels being attached to a same one of the instances of the biological entity; and generating a separate sub-image for each of at least a subset of the identified regions, each generated sub-image containing a different one of the identified regions, wherein the colocalized optically detectable labels comprise at least two
colocalized optically detectable labels of different types; generating at least one data unit based on the preprocessed image data, the generating comprising filtering the preprocessed image data; and determining the presence, absence, or identity of the biological entity based on the classification.

These limitations, as drafted given the broadest reasonable interpretation, but for a generic computer component, encompass managing interactions between people, including following rules or instructions, which is a subgrouping of Certain Methods of Organizing Human Activity. For example, but for the generic computer components of "...by a sample processing unit...", "...by one or more fluorescence microscopy devices...", "...by a data processing unit...", and "...the trained machine learning system...", the claim encompasses a user attaching a plurality of optically detectable labels to each of at least a subset of plural instances of a biological entity in a sample, a user capturing image data representing one or more images of the sample, each image containing plural instances of the biological entity, a user analyzing the image data by determining a plurality of sub-images for each image of the sample, a user identifying regions where, in each region, plural optically detectable labels are colocalized, a user determining a separate sub-image for each of at least a subset of the identified regions, a user determining at least one data unit based on the preprocessed image data, and a user determining the presence, absence, or identity of the biological entity based on the classification. These steps could be performed manually by users following rules or instructions, which constitutes certain methods of organizing human activity. These steps could be carried out manually by individuals, such as doctors or clinical staff, in a clinical facility.
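The colocalization step recited in claim 1, as characterized above, reduces to a geometric test: labels of different types whose locations are close enough to sit on the same instance of the biological entity define a region, and each region yields its own sub-image. A minimal pure-Python sketch of that logic (the distance threshold and box size are illustrative parameters, not taken from the claims or the specification):

```python
from itertools import combinations

def colocalized_regions(labels, max_dist=3.0, box=8):
    """labels: list of (x, y, label_type). Returns one bounding box per
    region where at least two labels of *different* types colocalize,
    i.e. their locations are consistent with attachment to the same
    instance of the biological entity."""
    boxes = []
    for (x1, y1, t1), (x2, y2, t2) in combinations(labels, 2):
        if t1 == t2:
            continue  # claim requires labels of different types
        if (x1 - x2) ** 2 + (y1 - y2) ** 2 <= max_dist ** 2:
            cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
            boxes.append((cx - box / 2, cy - box / 2, cx + box / 2, cy + box / 2))
    return boxes

# Two nearby labels of different types -> one sub-image region;
# the far-away label of the same type contributes nothing.
labels = [(10, 10, "dye_A"), (11, 12, "dye_B"), (40, 40, "dye_A")]
print(colocalized_regions(labels))   # [(6.5, 7.0, 14.5, 15.0)]
```

Each returned box would then be cropped from the full image to produce the claimed sub-image, with area filtering discarding boxes outside a plausible size range.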
Claims 3, 6-14, 16-18, 20-21 incorporate the abstract idea identified above and recite additional limitations that expand on the abstract idea. For example, claims 3 and 6-13 further describe the analyzing of the image data and recite further steps for data determination. Similarly, claims 14 and 17-18 further describe the generic computer components. Claim 16 further defines the biological entity. Finally, claims 20-21 further define the optically detectable labels. Such steps encompass Certain Methods of Organizing Human Activity.

Claims 1, 3, 6-14, 16-18, 20-21: Step 2A Prong Two

This judicial exception is not integrated into a practical application because the remaining elements amount to merely reciting the words "apply it" (or an equivalent) with the judicial exception, merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea. Claims 1, 14, and 17-18 recite "...by a sample processing unit...", "...by one or more microscopy devices...", "...by a data processing unit...", and "trained machine learning system", and claim 1 recites "inputting at least one data unit into a trained machine learning system configured to classify the at least one data unit", at a high degree of generality, amounting to no more than generally linking the abstract idea to a particular technical environment. The recitation is also similar to adding the words "apply it" to the abstract idea. As set forth in MPEP 2106.05(f), merely reciting the words "apply it" or an equivalent is an example of when an abstract idea has not been integrated into a practical application.

Claims 1, 3, 6-14, 16-18, 20-21: Step 2B

The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, as discussed above with respect to integration into a practical application, the additional elements are recited at a high level of generality.
Additionally, generally linking the abstract idea to a particular technological environment does not amount to significantly more than the abstract idea (see MPEP 2106.05(h) and Affinity Labs of Texas, LLC v. DIRECTV, LLC, 838 F.3d 1253, 120 USPQ2d 1201 (Fed. Cir. 2016)). Claims 22-23 recite the same functions as claim 1, but in different statutory categories. Thus, these elements taken individually or together do not amount to significantly more than the abstract ideas themselves.

Claim 25: Step 2A Prong One

Claim 25 recites: receive a sample of a biological entity; cause attachment of a plurality of optically detectable labels to at least a subset of instances of the biological entity present in the sample; capture one or more fluorescence microscopy images of the sample containing the optically detectable labels to obtain image data representing one or more images of the sample, each image containing plural instances of the biological entity; and preprocess the image data to obtain preprocessed image data, wherein preprocessing the image data comprises generating a plurality of sub-images for each image of the sample, each sub-image representing a different portion of the image and containing a different one of the instances of the biological entity, wherein generating the plurality of sub-images comprises: identifying regions where, in each region, plural optically detectable labels are colocalized, colocalization being defined as where locations of plural optically detectable labels are consistent with the optically detectable labels being attached to a same one of the instances of the biological entity; and generating a separate sub-image for each of at least a subset of the identified regions, each generated sub-image containing a different one of the identified regions, wherein the colocalized optically detectable labels comprise at least two colocalized optically detectable labels of different type; filter the preprocessed image data to generate at least one
data unit; receive the at least one data unit; classify the at least one data unit; and determine a presence, absence, or identity of the biological entity based on the classification.

These limitations, as drafted given the broadest reasonable interpretation, but for generic computer components, encompass managing interactions between people, including following rules or instructions, which is a subgrouping of Certain Methods of Organizing Human Activity. For example, but for the generic computer components of "a sample receiving unit", "a sample processing unit", "a sensing unit", "a data processing unit", and "a trained machine learning system configured to:", the claim encompasses a user manually receiving a sample of a biological entity, a user manually attaching a plurality of optically detectable labels to at least a subset of instances of a biological entity present in the sample, a user manually capturing one or more fluorescence microscopy images of the sample containing the optically detectable labels to obtain image data, a user analyzing the image data by determining a plurality of sub-images for each image of the sample, a user identifying regions where, in each region, plural optically detectable labels are colocalized, a user determining a separate sub-image for each of at least a subset of the identified regions, a user filtering the preprocessed image data to determine at least one data unit, a user receiving the at least one data unit, a user classifying the at least one data unit, and a user determining a presence, absence, or identity of the biological entity based on the classification. These steps could be performed manually by users following rules or instructions, which constitutes certain methods of organizing human activity. These steps could be carried out manually by individuals, such as doctors or clinical staff, in a clinical facility.
Claim 25: Step 2A Prong Two

This judicial exception is not integrated into a practical application because the remaining elements amount to no more than general purpose computer components programmed to perform the abstract idea: merely reciting the words "apply it" (or an equivalent) with the judicial exception, merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea. Claim 25, directly or indirectly, recites the following generic computer components at a high level of generality: "a sample receiving unit", "a sample processing unit", "a sensing unit", and "a data processing unit" (i.e., "The sample receiving unit 4 is configured to receive a sample for analysis. The sample receiving unit 4 may be configured in any of the various known ways for handling samples in medical diagnostic devices (e.g. fluidics or microfluidics could be used to move the sample, immobilise, label and image it). The device 2 further comprises a sample processing unit 6 configured to cause attachment of at least one optically detectable label to at least a subset of instances of a biological entity present in the sample. The sample processing unit 6 may therefore comprise a reservoir containing suitable reagents (e.g. fluorescent labels). The device 2 further comprises a sensing unit 8 configured to capture one or more images of the sample containing the optically detectable labels to obtain image data. The device further comprises a data processing unit 8 that preprocesses the image data to obtain preprocessed image data and uses the preprocessed image data in a trained machine learning system to diagnose the biological entity. The preprocessing may be performed using any of the methods described above. The trained machine learning system may be implemented within the device 2, or the device 2 may communicate with an external server that implements the trained machine learning system.
For example, the data processing unit 8 may alternatively be configured to send the obtained image data to a remote data processing unit configured to preprocess the image data to obtain preprocessed image data, and use the preprocessed image data in a trained machine learning system to diagnose the biological entity." in the Specification at Paragraph [0068]).

As set forth in the 2019 Eligibility Guidance, 84 Fed. Reg. at 55, "merely includ[ing] instructions to implement an abstract idea on a computer" is an example of when an abstract idea has not been integrated into a practical application. Claim 25 also recites "a trained machine learning system configured to:" at a high degree of generality, amounting to no more than generally linking the abstract idea to a particular technical environment. The recitation is also similar to adding the words "apply it" to the abstract idea. As set forth in MPEP 2106.05(f), merely reciting the words "apply it" or an equivalent is an example of when an abstract idea has not been integrated into a practical application.

Claim 25: Step 2B

The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of using a computer configured to perform the above-identified functions amount to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. See Alice, 573 U.S. at 223 ("mere recitation of a generic computer cannot transform a patent-ineligible abstract idea into a patent-eligible invention"). Additionally, generally linking the abstract idea to a particular technological environment does not amount to significantly more than the abstract idea (see MPEP 2106.05(h) and Affinity Labs of Texas, LLC v. DIRECTV, LLC, 838 F.3d 1253, 120 USPQ2d 1201 (Fed.
Cir. 2016)). Thus, these elements taken individually or together do not amount to significantly more than the abstract ideas themselves.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 3, 11-12, 14, 16-18, 22-23, 25 are rejected under 35 U.S.C. 103 as being unpatentable over Tandon et al. (U.S. Patent Pre-Grant Publication No. 2018/0211380) in view of Cohen et al. (U.S. Patent Pre-Grant Publication No. 2013/0044940) in further view of Boyden et al. (U.S. Patent Pre-Grant Publication No.
2020/0041514). As per independent claim 1, Tandon discloses a method of determining a presence, absence, or identity of a biological entity in a sample, comprising: preprocessing, by a data processing unit (See Paragraph [0007]: The one or more processors are further configured to: receive a plurality of images of training cellular artifacts and classification data of the training cellular artifacts, wherein one or more of the training cellular artifacts belong to the same class as the sample feature of interest; apply the principal component analysis to the plurality of training images of cellular artifacts to obtain a plurality of feature vectors for the plurality of training cellular artifacts; and train the random forest classifier using the plurality of feature vectors for the plurality of training cellular artifacts and the classification data of the training cellular artifacts. In some implementations, the PCA includes a randomized PCA, which the Examiner is interpreting one or more processors to encompass a data preprocessing unit), the image data to obtain preprocessed image data wherein preprocessing the image data comprises generating a plurality of sub-images for each image of the sample, each sub-image representing a different portion of the image and containing a different one of the instances of the biological entity (See Paragraphs [0276]-[0277]: The intensity topography generated by the Euclid function can then be plotted in a three dimensional space to characterize cell boundaries, and identify regions of segmentation and body centers, which the Examiner is interpreting the claimed portion when combined with Boyden below), wherein generating the plurality of sub-images comprises: generating, by the data processing unit, a separate sub-image for each of at least a subset of the identified regions, each generated sub-image containing a different one of the identified regions (See Fig. 
23 and Paragraphs [0289]-[0290]: The segmented cellular artifacts are generated by using the generated Euclidean transformation to mimic the map on the original input image and generate the separate segment images, which the Examiner is interpreting the separate segment images to encompass a plurality of sub-images), wherein the colocalized optically detectable labels comprise at least two colocalized optically detectable labels of different type; generating at least one data unit based on the preprocessed image data, the generating comprising filtering the preprocessed image data (See Paragraphs [0331]-[0332]: Segmentation further involves splicing the one or more images of the biological sample using the local maxima and data obtained from applying the Sobel filter, thereby obtaining a plurality of images of the cellular artifacts, which the Examiner is interpreting the Sobel filter to encompass filtering the preprocessed image data, and interpreting the plurality of images to encompass at least one data unit); inputting the at least one data unit into a trained machine learning system configured to classify the at least one data unit (See Paragraphs [0331]-[0333]: Each of the plurality of images of the cellular artifacts is provided to a machine-learning classification model to classify the cellular artifacts); and determining, using the trained machine learning system, the presence, absence, or identity of the biological entity based on the classification (See Paragraphs [0150]-[0152], [0164]-[0167]: A machine learning model (which has been generalized and pre-trained on example images of the sample type) interfaces with the hardware component to scan the full sample images and automatically make a classification, diagnosis, and/or analysis.) 
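The segmentation Tandon is cited for relies on Sobel gradients to find artifact boundaries before splicing the image into per-artifact sub-images. A small pure-Python sketch of the gradient step (the 3x3 kernels are the standard Sobel operators; the toy image and the absence of any thresholding or splicing logic are illustrative simplifications, not Tandon's actual pipeline):

```python
# Standard 3x3 Sobel kernels for horizontal (KX) and vertical (KY) gradients
KX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
KY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(img):
    """Gradient magnitude of a 2D list-of-lists image (border left zero)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(KX[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(KY[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# Toy image: one bright square "artifact" on a dark background
img = [[0] * 8 for _ in range(8)]
for y in range(3, 6):
    for x in range(3, 6):
        img[y][x] = 10

mag = sobel_magnitude(img)
# The square's edges produce strong gradients; its interior produces none,
# which is what lets the splicing step cut the image along artifact boundaries.
print(mag[3][3] > 0, mag[4][4] == 0)   # True True
```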
While Tandon teaches a method for preprocessing, by a data processing unit, the image data to obtain preprocessed image data wherein preprocessing the image data comprises generating a plurality of sub-images for each image of the sample, each sub-image representing a different portion of the image and containing a different one of the instances of the biological entity, wherein generating the plurality of sub-images comprises: generating, by the data processing unit, a separate sub-image for each of at least a subset of the identified regions, each generated sub-image containing a different one of the identified regions, Tandon may not explicitly teach preprocessing, by a data processing unit, the image data to obtain preprocessed image data wherein preprocessing the image data comprises generating a plurality of sub-images for each image of the sample, each sub-image representing a different portion of the image and containing a different one of the instances of the biological entity, wherein generating the plurality of sub-images comprises: identifying, by the data processing unit, regions where, in each region, plural optically detectable labels are colocalized, colocalization being defined as where locations of plural optically detectable labels are consistent with the optically detectable labels being attached to a same one of the instances of the biological entity; and generating, by the data processing unit, a separate sub-image for each of at least a subset of the identified regions, each generated sub-image containing a different one of the identified regions, wherein the colocalized optically detectable labels comprise at least two colocalized optically detectable labels of different type. 
Cohen teaches a method for preprocessing, by a data processing unit, the image data to obtain preprocessed image data wherein preprocessing the image data comprises generating a plurality of sub-images for each image of the sample, each sub-image representing a different portion of the image and containing a different one of the instances of the biological entity, wherein generating the plurality of sub-images comprises: identifying, by the data processing unit, regions where, in each region, plural optically detectable labels are colocalized, colocalization being defined as where locations of plural optically detectable labels are consistent with the optically detectable labels being attached to a same one of the instances of the biological entity (See Figs. 3-5 and Paragraphs [0057]-[0068]: An object detection module may use edge-based segmentation to detect the boundaries of objects in an image section, detected edges may define enclosed regions in the image section, object measurements also include measurements of detected objects across a series of microscopy images, and the measurement module may measure fluorescence ratios or colocalization information for the detected objects in series of microscopy images, which the Examiner is interpreting the object detection module to encompass identifying regions, and interpreting the measurement module may measure colocalization information for the detected objects to encompass plural optically detectable labels are colocalized when combined with Boyden’s disclosure of detectable labels); and generating, by the data processing unit (See Paragraph [0020]: Multiple processing units may respectively process the image sections simultaneously in parallel), a separate sub-image for each of at least a subset of the identified regions, each generated sub-image containing a different one of the identified regions, wherein the colocalized optically detectable labels comprise at least two colocalized optically detectable labels 
of different type (See Figs. 3-5 and Paragraphs [0057]-[0068], [0082]: An object detection module may use edge-based segmentation to detect the boundaries of objects in an image section, detected edges may define enclosed regions in the image section, object measurements also include measurements of detected objects across a series of microscopy images, and the measurement module may measure fluorescence ratios or colocalization information for the detected objects in series of microscopy images, and various characteristics of the detected objects can be identified, which the Examiner is interpreting identifying various characteristics of the detected objects to encompass at least two colocalized optically detectable labels of different type.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Tandon to include preprocessing, by a data processing unit, the image data to obtain preprocessed image data wherein preprocessing the image data comprises generating a plurality of sub-images for each image of the sample, each sub-image representing a different portion of the image and containing a different one of the instances of the biological entity, wherein generating the plurality of sub-images comprises: identifying, by the data processing unit, regions where, in each region, plural optically detectable labels are colocalized, colocalization being defined as where locations of plural optically detectable labels are consistent with the optically detectable labels being attached to a same one of the instances of the biological entity; and generating, by the data processing unit, a separate sub-image for each of at least a subset of the identified regions, each generated sub-image containing a different one of the identified regions, wherein the colocalized optically detectable labels comprise at least two colocalized optically detectable labels of different type, as taught by Cohen.
One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to modify Tandon with Cohen with the motivation of providing an improved approach to identification and measurement of objects in a microscopy image (See Background of Cohen in Paragraph [0004]). While Tandon/Cohen discloses the method as described above, Tandon/Cohen may not explicitly teach attaching, by a sample processing unit, a plurality of optically detectable labels to each of at least a subset of plural instances of a biological entity in a sample, the attaching, for at least one of the plurality of optically detectable labels, comprising contacting each of the subsets of plural instances of the biological entity with a polyvalent cation, wherein at least one of the plurality of optically detectable labels comprises nucleic acids and fluorophores; capturing, by one or more fluorescence microscopy devices, image data representing one or more images of the sample, each image containing plural instances of the biological entity. 
Boyden teaches a method for attaching, by a sample processing unit, a plurality of optically detectable labels to each of at least a subset of plural instances of a biological entity in a sample (See Paragraphs [0052]-[0053]: The sample is contacted with a bi-functional linker wherein the bi-functional linker comprises a binding moiety and an anchor, wherein the binding moiety binds to biomolecules in the sample, the anchor may be a physical, biological, or chemical moiety that attaches or crosslinks the sample to the composition, hydrogel or other swellable material), the attaching, for at least one of the plurality of optically detectable labels, comprising contacting each of the subsets of plural instances of the biological entity with a polyvalent cation (See Paragraphs [0061]-[0078]: The buffer comprises a non-specific protease, a metal ion chelator, a nonionic surfactant, and a monovalent salt, which the Examiner is interpreting the use of a metal ion chelator to encompass contacting each of the subsets of plural instances of the biological entity with a polyvalent cation as metal ion chelators are commonly used to bind metal ions like Ca2+, Fe2+, Mg2+, which are polyvalent cations), wherein at least one of the plurality of optically detectable labels comprises nucleic acids and fluorophores (See Paragraphs [0050]-[0052], [0056]-[0057]: The detectable label is chemically attached to the biological sample, or a targeted component thereof, the detectable label is antibody and/or fluorescent dye wherein the antibody and/or fluorescent dye, further comprises a physical, biological, or chemical anchor or moiety that attaches or crosslinks the specimen to the swellable polymer, such as a swellable hydrogel, and a small molecule linker having a binding moiety capable of attaching to a target nucleic acid and an anchor moiety capable of attaching to the swellable material, which the Examiner is interpreting the target nucleic 
acids ([0051]), and interpreting fluorophore to encompass fluorophores ([0050])); capturing, by one or more fluorescence microscopy devices, image data representing one or more images of the sample, each image containing plural instances of the biological entity (See Fig. 1B and Paragraphs [0007], [0126]-[0127]: Low-magnification images of specimens were imaged on a Nikon Ti-E epifluorescence microscope with a SPECTRA X light engine (Lumencor), and a 5.5 Zyla sCMOS camera (Andor), controlled by NIS-Elements AR software, with a 4×0.13 NA air objective or 10×0.2 NA air objective (Nikon), which the Examiner is interpreting low-magnification images of specimens to encompass image data representing one or more images of the sample, and Fig. 1B to encompass each image containing plural instances of the biological entity.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Tandon/Cohen to include attaching, by a sample processing unit, a plurality of optically detectable labels to each of at least a subset of plural instances of a biological entity in a sample, the attaching, for at least one of the plurality of optically detectable labels, comprising contacting each of the subsets of plural instances of the biological entity with a polyvalent cation, wherein at least one of the plurality of optically detectable labels comprises nucleic acids and fluorophores; capturing, by one or more fluorescence microscopy devices, image data representing one or more images of the sample, each image containing plural instances of the biological entity as taught by Boyden. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to modify Tandon/Cohen with Boyden with the motivation of providing resolution improvement (See Detailed Description of Boyden in Paragraph [0017]). 
Claim(s) 22-23 mirror claim 1 only within different statutory categories, and are rejected for the same reasons as claim 1. As per claim 3, Tandon/Cohen/Boyden discloses the method of claim 1 as described above. Tandon further teaches wherein the sub-images are generated such that each sub-image contains one and only one of the instances of the biological entity (See Fig. 23 and Paragraphs [0289]-[0290]: The segmented cellular artifacts are generated by using the generated Euclidean transformation to mimic the map on the original input image and generate the separate segment images, which the Examiner is interpreting the segmented cellular artifacts in Fig. 23 to encompass each sub-image contains one and only one of the instances of the biological entity.) As per claim 11, Tandon/Cohen/Boyden discloses the method of claim 1 as described above. Tandon further teaches wherein each sub-image is defined by a bounding box surrounding the sub-image (See Fig. 23 and Paragraphs [0146]-[0147]: Segmentation may define boundaries in an image of the cellular artifacts, the boundaries may be defined by collections of Cartesian coordinates, polar coordinates, pixel IDs, etc., which the Examiner is interpreting the boundaries to encompass a bounding box.) As per claim 12, Tandon/Cohen/Boyden discloses the method of claims 1 and 11 as described above. Tandon further teaches wherein the bounding boxes are defined so as to surround only objects that have an area within a predetermined size range, preferably wherein the predetermined size range has an upper limit and/or a lower limit (See Fig. 23 and Paragraphs [0146]-[0147], [0355]: Segmentation may define boundaries in an image of the cellular artifacts, the boundaries may be defined by collections of Cartesian coordinates, polar coordinates, pixel IDs, etc., the pixels making up the cellular artifact are divided into slices of predetermined sizes, which the Examiner is interpreting the boundaries to encompass a bounding box.) 
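The bounding-box area filtering addressed for claims 11-12 (boxes surrounding only objects whose area falls within a predetermined size range, with optional upper and lower limits) reduces to a simple predicate over candidate boxes. A minimal sketch; the `(y0, x0, y1, x1)` box format and the limits are hypothetical, chosen only for illustration:

```python
def filter_boxes_by_area(boxes, min_area=4, max_area=400):
    """Keep only bounding boxes whose enclosed area lies within a
    predetermined size range (illustrative limits; the claim leaves the
    upper and/or lower limit optional)."""
    def area(box):
        y0, x0, y1, x1 = box
        return (y1 - y0) * (x1 - x0)
    return [b for b in boxes if min_area <= area(b) <= max_area]
```

As the Applicant's specification notes (p. 13), this size gate is what removes objects unlikely to be single instances before the data reaches the machine learning system.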
As per claim 14, Tandon/Cohen/Boyden discloses the method of claim 1 as described above. Tandon further teaches further comprising training a machine learning system to provide the trained machine learning system, wherein the training of the machine learning system comprises: receiving training data containing representations of one or more images of each of one or more samples and diagnosis information about a diagnosed biological entity in each sample, each image containing plural instances of the diagnosed biological entity of the corresponding sample, and each of at least a subset of the instances having at least one optically detectable label attached to the instance (See Paragraphs [0212]-[0215]: Training a deep learning or other classification model employs a training set that includes a plurality of images having cells and/or other features of interest in samples, the images of the training set include two or more different types of sample features associated with two or more conditions that are to be classified by the trained model, which the Examiner is interpreting the two or more conditions to encompass diagnosis information); and training a machine learning system using the received training data (See Paragraphs [0212]-[0215]: Training machine learning models for images of biological samples.) As per claim 16, Tandon/Cohen/Boyden discloses the method of claim 1 as described above. Tandon further teaches wherein the biological entity is a virus or bacterium (See Paragraphs [0041], [0218]: The cellular artifacts can be classified according to non-host features selected from the group consisting of protozoa present in the host, bacteria present in the host, fungi present in the host, helminths present in the host, and viruses present in the host.) As per claim 17, Tandon/Cohen/Boyden discloses the method of claim 1 as described above. 
Tandon further teaches wherein the machine learning system comprises a deep learning system (See Paragraph [0150]: The term "classifier" (or classification model) is sometimes used to describe all forms of classification model including deep learning models.) As per claim 18, Tandon/Cohen/Boyden discloses the method of claim 1 as described above. Tandon further teaches wherein the machine learning system comprises a convolutional neural network, preferably a 15-layer shallow convolutional neural network (See Paragraph [0333]: The machine-learning classification model includes a neural network model, and the neural network model includes a convolutional neural network model, under broadest reasonable interpretation the term preferably does not require the convolutional neural network model to disclose a 15-layer shallow convolutional neural network.) As per independent claim 25, Tandon discloses a device, comprising: a sample receiving unit configured to receive a sample of a biological entity (See Fig. 
4B and Paragraphs [0091]-[0092], [0195]: The system's one or more actuators are coupled to the camera, the one or more actuators are coupled to a stage for receiving a biological sample and the biological sample can be obtained from any subject or biological source, although the sample is often taken from a human subject (e.g., a patient), samples can be taken from any organism, which the Examiner is interpreting the one or more actuators to encompass a sample receiving unit); and a data processing unit configured to: preprocess the image data to obtain preprocessed image data, wherein preprocessing the image data (See Paragraphs [0147], [0161]-[0163]: A trained portable system may employ deep learning based image processing to automatically analyze a sample and image it in full in one shot or through staging, which the Examiner is interpreting the image processing to encompass preprocessing the image data as the broadest reasonable interpretation of preprocessing is processing done before more processing) comprises generating a plurality of sub-images for each image of the sample, each sub-image representing a different portion of the image and containing a different one of the instances of the biological entity (See Paragraphs [0276]-[0277]: The intensity topography generated by the Euclid function can then be plotted in a three dimensional space to characterize cell boundaries, and identify regions of segmentation and body centers, which the Examiner is interpreting the claimed portion when combined with Boyden below), wherein generating the plurality of sub-images comprises: generating a separate sub-image for each of at least a subset of the identified regions, each generated sub-image containing a different one of the identified regions (See Fig. 
23 and Paragraphs [0289]-[0290]: The segmented cellular artifacts are generated by using the generated Euclidean transformation to mimic the map on the original input image and generate the separate segment images, which the Examiner is interpreting the separate segment images to encompass a plurality of sub-images.); and filter the preprocessed image data to generate at least one data unit (See Paragraphs [0331]-[0332]: Segmentation further involves splicing the one or more images of the biological sample using the local maxima and data obtained from applying the Sobel filter, thereby obtaining a plurality of images of the cellular artifacts, which the Examiner is interpreting the Sobel filter to encompass filtering the preprocessed image data, and interpreting the plurality of images to encompass at least one data unit); a trained machine learning system configured to: receive the at least one data unit (See Paragraphs [0331]-[0333]: Each of the plurality of images of the cellular artifacts is provided to a machine-learning classification model to classify the cellular artifacts); classify the at least one data unit (See Paragraphs [0331]-[0333]: Each of the plurality of images of the cellular artifacts is provided to a machine-learning classification model to classify the cellular artifacts); and determine a presence, absence, or identity of the biological entity based on the classification (See Paragraphs [0150]-[0152], [0164]-[0167]: A machine learning model (which has been generalized and pre-trained on example images of the sample type) interfaces with the hardware component to scan the full sample images and automatically make a classification, diagnosis, and/or analysis.) 
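The filtering step the Examiner cites from Tandon's [0331]-[0332] (applying a Sobel filter, then splicing the image into per-object data units) can be illustrated with a hand-rolled Sobel gradient magnitude. This is a sketch under the assumption of a single-channel float image, not Tandon's actual implementation:

```python
import numpy as np

# Sobel kernels for horizontal and vertical intensity gradients.
KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
KY = KX.T

def sobel_magnitude(img):
    """Gradient magnitude via the Sobel operator, computed over the
    'valid' interior region (output is 2 pixels smaller per axis)."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx = (patch * KX).sum()
            gy = (patch * KY).sum()
            out[i, j] = np.hypot(gx, gy)
    return out
```

High-magnitude responses mark object boundaries; thresholding them yields the edge map from which per-artifact data units could be spliced, which is the role the rejection assigns to the Sobel filter in Tandon.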
While Tandon teaches a device for a data processing unit configured to: preprocess the image data to obtain preprocessed image data, wherein preprocessing the image data comprises generating a plurality of sub-images for each image of the sample, each sub-image representing a different portion of the image and containing a different one of the instances of the biological entity, wherein generating the plurality of sub-images comprises: generating a separate sub-image for each of at least a subset of the identified regions, each generated sub-image containing a different one of the identified regions, Tandon may not explicitly teach a data processing unit configured to: preprocess the image data to obtain preprocessed image data, wherein preprocessing the image data comprises generating a plurality of sub-images for each image of the sample, each sub-image representing a different portion of the image and containing a different one of the instances of the biological entity, wherein generating the plurality of sub-images comprises: identifying regions where, in each region, plural optically detectable labels are colocalized, colocalization being defined as where locations of plural optically detectable labels are consistent with the optically detectable labels being attached to a same one of the instances of the biological entity; and generating a separate sub-image for each of at least a subset of the identified regions, each generated sub-image containing a different one of the identified regions, wherein the colocalized optically detectable labels comprise at least two colocalized optically detectable labels of different type. 
Cohen teaches a device configured to preprocess the image data to obtain preprocessed image data, wherein preprocessing the image data comprises generating a plurality of sub-images for each image of the sample, each sub-image representing a different portion of the image and containing a different one of the instances of the biological entity, wherein generating the plurality of sub-images comprises: identifying regions where, in each region, plural optically detectable labels are colocalized, colocalization being defined as where locations of plural optically detectable labels are consistent with the optically detectable labels being attached to a same one of the instances of the biological entity (See Figs. 3-5 and Paragraphs [0057]-[0068]: An object detection module may use edge-based segmentation to detect the boundaries of objects in an image section, detected edges may define enclosed regions in the image section, object measurements also include measurements of detected objects across a series of microscopy images, and the measurement module may measure fluorescence ratios or colocalization information for the detected objects in series of microscopy images, which the Examiner is interpreting the object detection module to encompass identifying regions, and interpreting the measurement module may measure colocalization information for the detected objects to encompass plural optically detectable labels are colocalized when combined with Boyden’s disclosure of detectable labels); and generating a separate sub-image for each of at least a subset of the identified regions, each generated sub-image containing a different one of the identified regions, wherein the colocalized optically detectable labels comprise at least two colocalized optically detectable labels of different type (See Figs. 
3-5 and Paragraphs [0057]-[0068], [0082]: An object detection module may use edge-based segmentation to detect the boundaries of objects in an image section, detected edges may define enclosed regions in the image section, object measurements also include measurements of detected objects across a series of microscopy images, and the measurement module may measure fluorescence ratios or colocalization information for the detected objects in series of microscopy images, and various characteristics of the detected objects can be identified, which the Examiner is interpreting identifying various characteristics of the detected objects to encompass at least two colocalized optically detectable labels of different type.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the device of Tandon to include preprocess the image data to obtain preprocessed image data, wherein preprocessing the image data comprises generating a plurality of sub-images for each image of the sample, each sub-image representing a different portion of the image and containing a different one of the instances of the biological entity, wherein generating the plurality of sub-images comprises: identifying regions where, in each region, plural optically detectable labels are colocalized, colocalization being defined as where locations of plural optically detectable labels are consistent with the optically detectable labels being attached to a same one of the instances of the biological entity; and generating a separate sub-image for each of at least a subset of the identified regions, each generated sub-image containing a different one of the identified regions, wherein the colocalized optically detectable labels comprise at least two colocalized optically detectable labels of different type as taught by Cohen. 
One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to modify Tandon with Cohen with the motivation of providing an improved approach to identification and measurement of objects in a microscopy image (See Background of Cohen in Paragraph [0004]). While Tandon/Cohen discloses the device as described above, Tandon/Cohen may not explicitly teach a sample processing unit configured to cause attachment of a plurality of optically detectable labels to at least a subset of plural instances of the biological entity present in the sample, wherein the attachment, for at least one of the plurality of optically detectable labels, comprises contacting each of the subsets of plural instances of the biological entity with a polyvalent cation, and at least one of the plurality of optically detectable labels comprises nucleic acids and fluorophores; a sensing unit configured to capture one or more fluorescence microscopy images of the sample containing the optically detectable labels to obtain image data representing one or more images of the sample, each image containing plural instances of the biological entity. 
Boyden teaches a device for a sample processing unit configured to cause attachment of a plurality of optically detectable labels to at least a subset of plural instances of the biological entity present in the sample (See Paragraphs [0052]-[0053]: The sample is contacted with a bi-functional linker wherein the bi-functional linker comprises a binding moiety and an anchor, wherein the binding moiety binds to biomolecules in the sample, the anchor may be a physical, biological, or chemical moiety that attaches or crosslinks the sample to the composition, hydrogel or other swellable material), wherein the attachment, for at least one of the plurality of optically detectable labels, comprises contacting each of the subsets of plural instances of the biological entity with a polyvalent cation (See Paragraphs [0061]-[0078]: The buffer comprises a non-specific protease, a metal ion chelator, a nonionic surfactant, and a monovalent salt, which the Examiner is interpreting the use of a metal ion chelator to encompass contacting each of the subsets of plural instances of the biological entity with a polyvalent cation as metal ion chelators are commonly used to bind metal ions like Ca2+, Fe2+, Mg2+, which are polyvalent cations), and at least one of the plurality of optically detectable labels comprises nucleic acids and fluorophores (See Paragraphs [0050]-[0052], [0056]-[0057]: The detectable label is chemically attached to the biological sample, or a targeted component thereof, the detectable label is antibody and/or fluorescent dye wherein the antibody and/or fluorescent dye, further comprises a physical, biological, or chemical anchor or moiety that attaches or crosslinks the specimen to the swellable polymer, such as a swellable hydrogel, and a small molecule linker having a binding moiety capable of attaching to a target nucleic acid and an anchor moiety capable of attaching to the swellable material, which the Examiner is interpreting the target nucleic 
acid to encompass nucleic acids ([0051]), and interpreting fluorophore to encompass fluorophores ([0050])); a sensing unit configured to capture one or more fluorescence microscopy images of the sample containing the optically detectable labels to obtain image data representing one or more images of the sample, each image containing plural instances of the biological entity (See Fig. 1B and Paragraphs [0007], [0126]-[0127]: Low-magnification images of specimens were imaged on a Nikon Ti-E epifluorescence microscope with a SPECTRA X light engine (Lumencor), and a 5.5 Zyla sCMOS camera (Andor), controlled by NIS-Elements AR software, with a 4×0.13 NA air objective or 10×0.2 NA air objective (Nikon), which the Examiner is interpreting low-magnification images of specimens to encompass image data representing one or more images of the sample, and Fig. 1B to encompass each image containing plural instances of the biological entity.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the device of Tandon/Cohen to include a sample processing unit configured to cause attachment of a plurality of optically detectable labels to at least a subset of plural instances of the biological entity present in the sample, wherein the attachment, for at least one of the plurality of optically detectable labels, comprises contacting each of the subsets of plural instances of the biological entity with a polyvalent cation, and at least one of the plurality of optically detectable labels comprises nucleic acids and fluorophores; a sensing unit configured to capture one or more fluorescence microscopy images of the sample containing the optically detectable labels to obtain image data representing one or more images of the sample, each image containing plural instances of the biological entity as taught by Boyden. 
One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to modify Tandon/Cohen with Boyden with the motivation of providing resolution improvement (See Detailed Description of Boyden in Paragraph [0017]). Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Tandon et al. (U.S. Patent Pre-Grant Publication No. 2018/0211380) in view of Cohen et al. (U.S. Patent Pre-Grant Publication No. 2013/0044940) in view of Boyden et al. (U.S. Patent Pre-Grant Publication No. 2020/0041514) in further view of Barnes et al. (U.S. Patent Pre-Grant Publication No. 2018/0322632). As per claim 6, Tandon/Cohen/Boyden discloses the method of claim 1 as described above. Tandon/Cohen/Boyden may not explicitly teach wherein the colocalized optically detectable labels of different type comprise optically detectable labels having different emission spectra. Barnes teaches a method wherein the colocalized optically detectable labels of different type comprise optically detectable labels having different emission spectra (See Paragraphs [0063]-[0064]: Tissue slide images contain many features, only some of which are of interest for any particular study, for finding regions of interest it is helpful to first select the proper color for unmixing, the primary color channels are R, G, and B, which the Examiner is interpreting the color channels to encompass different emission spectra.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Tandon/Cohen/Boyden to include the colocalized optically detectable labels of different type comprise optically detectable labels having different emission spectra as taught by Barnes. 
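The Barnes mapping treats the R, G, B primary channels as stand-ins for different emission spectra. That reading can be sketched as thresholded per-channel presence masks whose intersection gives colocalization across "spectra"; this is a simplified illustration with hypothetical names and thresholds, not true spectral unmixing:

```python
import numpy as np

def channel_masks(rgb, threshold=0.5):
    """One boolean presence mask per primary color channel, used here as a
    crude proxy for separating labels with different emission spectra."""
    return {name: rgb[..., k] >= threshold for k, name in enumerate("RGB")}

def colocalized(rgb, channels=("R", "G"), threshold=0.5):
    """Pixels where labels from two different 'spectral' channels coincide."""
    masks = channel_masks(rgb, threshold)
    out = masks[channels[0]].copy()
    for c in channels[1:]:
        out &= masks[c]
    return out
```

Under this sketch, a pixel bright in only one channel (one label type) never counts as colocalized, which is the distinction claim 6 adds over the base combination.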
One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to modify Tandon/Cohen/Boyden with Barnes with the motivation of improving computational efficiency (See Background of the Subject Disclosure of Barnes in Paragraph [0005]). Claims 7-8 are rejected under 35 U.S.C. 103 as being unpatentable over Tandon et al. (U.S. Patent Pre-Grant Publication No. 2018/0211380) in view of Cohen et al. (U.S. Patent Pre-Grant Publication No. 2013/0044940) in view of Boyden et al. (U.S. Patent Pre-Grant Publication No. 2020/0041514) in view of Barnes et al. (U.S. Patent Pre-Grant Publication No. 2018/0322632) in further view of Kamens et al. (U.S. Patent Pre-Grant Publication No. 2019/0228840). As per claim 7, Tandon/Cohen/Boyden discloses the method of claim 1, and Tandon/Cohen/Boyden/Barnes discloses the method of claim 6 as described above. Tandon/Cohen/Boyden/Barnes may not explicitly teach wherein the generation of the sub-images comprises using relative intensities from the colocalized optically detectable labels of different type to select a subset of the identified regions, for the generation of the sub-images, that have a higher probability of containing one and only one instance of the biological instance. 
Kamens teaches a method wherein the generation of the sub-images comprises using relative intensities from the colocalized optically detectable labels of different type to select a subset of the identified regions (See Paragraphs [0050], [0108]-[0110]: A software module performs post-processing of one or a plurality of refined locations, the post-processing may include characterizing centroid location, volume, shape, intensity, density, transparency, regularity, or a combination thereof, which the Examiner is interpreting the post-processing including characterizing intensity to encompass using relative intensities from the colocalized optically detectable labels of different type, and a plurality of refined locations to encompass a subset of the identified regions), for the generation of the sub-images, that have a higher probability of containing one and only one instance of the biological instance (See Paragraphs [0108]-[0110]: The generated report may contain sample data heat mapped to indicate confidence level, which the Examiner is interpreting indicated confidence level to encompass probability of containing one and only one instance of the biological instance when combined with Tandon's disclosure in [0269], [0392] that identifies conditional probability.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Tandon/Cohen/Boyden/Barnes to include the generation of the sub-images comprises using relative intensities from the colocalized optically detectable labels of different type to select a subset of the identified regions, for the generation of the sub-images, that have a higher probability of containing one and only one instance of the biological instance as taught by Kamens. 
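Claim 7's relative-intensity selection can be sketched as a ratio gate: regions whose two label intensities are roughly balanced are treated as more likely to contain a single instance. The dictionary keys and ratio bounds below are hypothetical, chosen only to illustrate the claimed step, and are not taken from Kamens or the claims:

```python
def select_by_intensity_ratio(regions, lo=0.5, hi=2.0):
    """Keep regions whose label-A / label-B intensity ratio is near unity —
    an illustrative proxy for 'higher probability of containing one and
    only one instance'. Keys and thresholds are hypothetical."""
    kept = []
    for r in regions:
        ratio = r["intensity_a"] / r["intensity_b"]
        if lo <= ratio <= hi:
            kept.append(r)
    return kept
```

A region where one label vastly outshines the other (e.g., an aggregate or a non-specific binding event) fails the gate and is excluded from sub-image generation.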
One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to modify Tandon/Cohen/Boyden/Barnes with Kamens with the motivation of providing novel computational approaches (See Background of Kamens in Paragraph [0003]). As per claim 8, Tandon/Cohen discloses the method of claim 1, Tandon/Cohen/Boyden/Barnes discloses the method of claim 6, and Tandon/Cohen/Boyden/Barnes/Kamens discloses the method of claim 7 as described above. Tandon/Cohen/Boyden/Barnes may not explicitly teach wherein the colocalized optically detectable labels of different type are configured to have different labelling efficiency with respect to each other, preferably by forming the colocalized optically detectable labels of different type using nucleic acids of different length and/or different numbers of strands. Kamens teaches a method wherein the colocalized optically detectable labels of different type are configured to have different labelling efficiency with respect to each other, preferably by forming the colocalized optically detectable labels of different type using nucleic acids of different length and/or different numbers of strands (See Paragraphs [0049]-[0050], [0057], [0067]-[0068]: Such assays may combine data from a number of different phenotypic sources, such as images of various cell and/or tissue types, each with multiple different stains that are selected to highlight different phenotypic properties; epigenetic and/or gene expression data; and proteomics analysis, and such assays may highlight certain morphological features, biomarkers (e.g. telomere length), epigenetic alterations (e.g. DNA methylation patterns, post-transcriptional modification of histones, and chromatin remodeling) and/or components, such as a specific protein, of a sample, which the Examiner is interpreting highlight morphological features and/or biomarkers to encompass different type using nucleic acids of different length.) 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Tandon/Cohen/Boyden/Barnes to include the colocalized optically detectable labels of different type are configured to have different labelling efficiency with respect to each other, preferably by forming the colocalized optically detectable labels of different type using nucleic acids of different length and/or different numbers of strands as taught by Kamens. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to modify Tandon/Cohen/Boyden/Barnes with Kamens with the motivation of providing novel computational approaches (See Background of Kamens in Paragraph [0003]). Claims 9-10, 13, 20-21 are rejected under 35 U.S.C. 103 as being unpatentable over Tandon et al. (U.S. Patent Pre-Grant Publication No. 2018/0211380) in view of Cohen et al. (U.S. Patent Pre-Grant Publication No. 2013/0044940) in view of Boyden et al. (U.S. Patent Pre-Grant Publication No. 2020/0041514) in further view of Kamens et al. (U.S. Patent Pre-Grant Publication No. 2019/0228840). As per claim 9, Tandon/Cohen/Boyden discloses the method of claim 1 as described above. Tandon/Cohen/Boyden may not explicitly teach wherein the generation of the sub-images comprises using detected axial ratios of objects in the identified regions to select a subset of the identified regions, for the generation of the sub-images, that have a higher probability of containing one and only one instance of the biological instance. 
Kamens teaches a method wherein the generation of the sub-images comprises using detected axial ratios of objects in the identified regions to select a subset of the identified regions, for the generation of the sub-images, that have a higher probability of containing one and only one instance of the biological entity (See Paragraphs [0049]-[0050], [0057]: Such assays may combine data from a number of different phenotypic sources, such as images of various cell and/or tissue types, each with multiple different stains that are selected to highlight different phenotypic properties; epigenetic and/or gene expression data; and proteomics analysis, and such assays may highlight certain morphological features, biomarkers (e.g. telomere length), epigenetic alterations (e.g. DNA methylation patterns, post-transcriptional modification of histones, and chromatin remodeling) and/or components, such as a specific protein, of a sample, which the Examiner is interpreting the highlighting of morphological features to encompass using detected axial ratios of objects.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Tandon/Cohen/Boyden to include wherein the generation of the sub-images comprises using detected axial ratios of objects in the identified regions to select a subset of the identified regions, for the generation of the sub-images, that have a higher probability of containing one and only one instance of the biological entity, as taught by Kamens. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to modify Tandon/Cohen/Boyden with Kamens with the motivation of providing novel computational approaches (See Background of Kamens in Paragraph [0003]). As per claim 10, Tandon/Cohen/Boyden discloses the method of claim 1 as described above.
Tandon/Cohen/Boyden may not explicitly teach further comprising detecting one or more axial ratios of objects in the generated sub-images and using the detected one or more axial ratios to select a trained machine learning system to use to diagnose the biological entity. Kamens teaches a method further comprising detecting one or more axial ratios of objects in the generated sub-images and using the detected one or more axial ratios to select a trained machine learning system to use to diagnose the biological entity (See Paragraphs [0049]-[0050], [0057], [0067]-[0068]: In the case of imaging data, samples are prepared using a subset of stains designed to highlight specific morphological features and/or certain components of the sample, such as a specific protein, the samples are properly labeled for use during machine learning training, and certain features may be automatically labeled using a feature extractor which can detect macro features in the dataset for use by the training algorithm, which the Examiner is interpreting the training algorithm to encompass a trained machine learning system, and interpreting morphological features to encompass detecting one or more axial ratios.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Tandon/Cohen/Boyden to include detecting one or more axial ratios of objects in the generated sub-images and using the detected one or more axial ratios to select a trained machine learning system to use to diagnose the biological entity, as taught by Kamens. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to modify Tandon/Cohen/Boyden with Kamens with the motivation of providing novel computational approaches (See Background of Kamens in Paragraph [0003]).
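The axial-ratio filtering at issue in claims 9 and 10 (selecting regions, or routing data to a classifier, based on how elongated the detected objects are) can be illustrated with a short sketch. This is an illustrative simplification only, not the claimed method or any cited reference's disclosure; the moment-based ratio computation, the function names, and the 2.0 cutoff are all assumptions:

```python
import numpy as np

def axial_ratio(mask):
    """Major/minor axis ratio of an object, estimated from the
    eigenvalues of its pixel-coordinate covariance matrix
    (second central moments). A value near 1 means roughly round."""
    ys, xs = np.nonzero(mask)
    cov = np.cov(np.stack([ys, xs]))
    evals = np.sort(np.linalg.eigvalsh(cov))
    return np.sqrt(evals[1] / max(evals[0], 1e-12))

def select_single_instance_regions(masks, max_ratio=2.0):
    """Keep regions whose detected axial ratio is near 1, i.e. more
    likely to contain one and only one instance; elongated blobs
    (possibly two merged instances) are filtered out."""
    return [m for m in masks if axial_ratio(m) <= max_ratio]
```

Under these assumptions, a roughly round object passes the filter while an elongated blob, which may be two touching instances, is excluded.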
As per claim 13, Tandon/Cohen/Boyden discloses the method of claim 1 and Tandon/Cohen/Boyden/Kamens discloses the method of claim 10 as described above. Tandon further teaches wherein: each bounding box is defined by identifying a smallest rectangular box that contains the object to be surrounded by the bounding box and expanding the smallest rectangular box to a common bounding box size that is the same for at least a subset of the bounding boxes (See Fig. 23 and Paragraphs [0146]-[0147], [0355]: Segmentation may define boundaries in an image of the cellular artifacts, the boundaries may be defined by collections of Cartesian coordinates, polar coordinates, pixel IDs, etc., the pixels making up the cellular artifact are divided into slices of predetermined sizes, which the Examiner is interpreting the boundaries to encompass a bounding box as the segmentation is able to define boundaries of an image); and generation of the preprocessed image data comprises filling a region within the bounding box outside of the smallest rectangular box with artificial padding data (See Paragraphs [0146]-[0147]: Segmentation removes background pixels (pixels deemed to be unassociated with any sample feature) and groups of foreground pixels into cellular artifacts, which can then be extracted and fed to a classification model, which the Examiner is interpreting background pixels to encompass filling a region with artificial padding data.) As per claim 20, Tandon/Cohen/Boyden discloses the method of claim 1 as described above. Tandon/Cohen/Boyden may not explicitly teach wherein each of one or more of the optically detectable labels is attached using any one or more of the following: antibodies; functionalised nanoparticles; aptamers; and genome hybridisation probes. 
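The bounding-box handling discussed for claim 13 above (identifying the smallest rectangular box containing an object, expanding it to a common bounding-box size, and filling the surplus region with artificial padding data) can be sketched as follows. The helper name, the centering choice, and the constant fill value are illustrative assumptions, not language from the claims or references:

```python
import numpy as np

def pad_to_common_size(patch, common=32, fill=0.0):
    """Expand the smallest rectangular box (the patch) to a common
    bounding-box size shared by all boxes, filling the region outside
    the original box with artificial padding data (a constant here)."""
    h, w = patch.shape
    out = np.full((common, common), fill, dtype=patch.dtype)
    r0 = (common - h) // 2   # center the original box in the output
    c0 = (common - w) // 2
    out[r0:r0 + h, c0:c0 + w] = patch
    return out
```

A common box size gives the downstream classifier uniformly shaped inputs regardless of each object's actual extent.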
Kamens teaches a method wherein each of one or more of the optically detectable labels is attached using any one or more of the following: antibodies (See Paragraphs [0015], [0022], [0053]: The samples are stained, and labeling the first sample comprises attributing chronological age, age-related diseases, progression, clinical metadata, or a combination thereof to the first sample; labeling the first sample comprises attributing information of the first sample by imaging, morphological profiling, cell painting, mass spectrometry, antibody chips, 2-D Fluorescence Difference Gel Electrophoresis (DIGE), mass spectrometric immunoassay (MSIA), or laser capture microdissection (LCM), which the Examiner is interpreting utilizing antibody chips to encompass attached using antibodies); functionalised nanoparticles; aptamers; and genome hybridisation probes. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Tandon/Cohen/Boyden to include wherein each of one or more of the optically detectable labels is attached using antibodies, as taught by Kamens. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to modify Tandon/Cohen/Boyden with Kamens with the motivation of providing novel computational approaches (See Background of Kamens in Paragraph [0003]). As per claim 21, Tandon/Cohen/Boyden discloses the method of claim 1 as described above. Tandon/Cohen/Boyden may not explicitly teach wherein each of the optically detectable labels comprises a nucleic acid with an added fluorophore.
Kamens teaches a method wherein each of the optically detectable labels comprises a nucleic acid with an added fluorophore (See Paragraphs [0050], [0077]: The sample may be tagged with fluorophore, magnetic nanoparticles, or DNA barcoded base linkers, and DNA and RNA features may be identified, which the Examiner is interpreting tagging the sample with fluorophore to encompass an added fluorophore.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Tandon/Cohen/Boyden to include wherein each of the optically detectable labels comprises a nucleic acid with an added fluorophore, as taught by Kamens. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to modify Tandon/Cohen/Boyden with Kamens with the motivation of providing novel computational approaches (See Background of Kamens in Paragraph [0003]). Response to Arguments In the Remarks filed on October 28, 2025, the Applicant argues that the newly amended and/or added claims overcome the Claim Objection(s), Claim Interpretation, 35 U.S.C. 101 rejection(s), and 35 U.S.C. 103 rejection(s). The Examiner acknowledges that the newly added and/or amended claims overcome the previous Claim Objection(s). The Examiner does not acknowledge that the newly added and/or amended claims overcome the newly added Claim Objection(s), Claim Interpretation, 35 U.S.C. 101 rejection(s), and 35 U.S.C. 103 rejection(s). The Applicant argues that: (1) claim 1 is not directed towards any of these concepts. Furthermore, the Examiner is referred to Example 39 of the Subject-Matter Eligibility Examples: Abstract Ideas. The USPTO determined that this claim does not recite a judicial exception under Step 2A Prong One because the claim "does not recite any method of organizing human activity such as a fundamental economic concept or managing interactions between people."
Similarly to the Example 39 claim, present claim 1 recites specific physical and technical steps including attaching optically detectable labels comprising nucleic acids and fluorophores to biological entities using polyvalent cations, capturing images via fluorescence microscopy devices, preprocessing image data through colocalization-based sub-image generation, filtering preprocessed data to generate units, and classifying those units using a trained machine learning system to determine the presence, absence, or identity of a biological entity. Like the Example 39 claim involving image collection, transformation, and iterative neural network training for facial detection, the present claim 1 is directed to a specific technical process for biological entity detection through image analysis and machine learning classification. Neither claim is directed at fundamental economic principles, commercial or legal interactions, or the management of personal behavior or relationships. Therefore, present claim 1 is deemed not to amount to a method of organizing human activity; (2) amended claim 1 recites specific physical transformations and technical processes that remove it from the realm of abstract ideas. The claim requires attaching optically detectable labels comprising nucleic acids and fluorophores to biological entities using polyvalent cations, which is a chemical process that physically transforms the biological entity through molecular interactions. This is not data manipulation or mental steps, but rather a tangible physical process occurring at the molecular level. The claim further requires capturing images via fluorescence microscopy devices, tying the methodology to specific hardware that converts physical fluorescent signals into digital data. These are not generic steps that could be performed mentally or with pen and paper, but rather steps necessarily dependent on specific physical and chemical processes. 
Notwithstanding this, amended claim 1 is not simply directed to abstract data manipulation. Amended claim 1 recites specific preparatory steps, steps involving the facilitation of physical interactions at a molecular level, the acquisition of images by defined and specific hardware, and specific preprocessing steps that are dependent on the constraints of the associated hardware. In more detail, amended claim 1 requires, in part, the attachment of labels comprising nucleic acids and fluorophores to a biological entity. The interaction between the surface and the nucleic acid is a tangible process that results in the production of a new labelled entity. This evidently allows the physical transformation of the biological entity via chemical processes, which are not, and cannot be, considered any form of abstract idea. Amended claim 1 specifies that the images are captured via fluorescence microscopy. This ties the methodology to the specific microscopy hardware, which transforms the physical signals from the fluorophores into imaging data. Amended claim 1 includes an additional step of generating a unit based on the pre-processed image data by filtering said pre-processed image data. Filtering may require identifying regions within given constraints, which cannot be considered to encompass generic mathematical concepts. Further, the filtering may result in a technical improvement in the quality of image data, as evidenced in the paragraph [0056] of the published application, which is considered to be more than generic image processing. Amended claim 1 recites that the methodology is integrated with a machine learning classifier. The machine learning classification processes proceed only after the entity is physically labelled, imaged, and pre-processed. The machine learning technique is configured to classify the filtered units that are further constrained by the colocalization of at least two different labels, one of which contains fluorophores. 
Additionally, such features are not easily identifiable to the human eye, see paragraphs [0058] and [0059] of the published application. As such, the ML classification is constrained by biological and optical considerations. Accordingly, amended claim 1 is directed to a specific technical process for determining the presence, absence or identity of a biological entity, and does not amount to an abstract idea in isolation. It is therefore submitted that claims 1, 3, 6 to 14, and 16, 18, 22, and 23 are not directed towards a judicial exception; (3) the claim integrates the methodology into a practical application. The classification of the biological entity first requires physical labelling and imaging. The ML output includes the presence, absence, or identity of the biological entity; this information has clear, useful applications in diagnostics. It is therefore submitted that the claim is directed to a specific and practical technique that is not merely an abstract idea. It is therefore submitted that claims 1, 3, 6 to 14, and 16, 18, 22, and 23 are not directed towards a judicial exception; (4) the amended claim provides an inventive concept beyond the realms of generic computer implementation. The labelling of the biological entity involves contacting it with a polyvalent cation. This is a non-generic labelling technique as described in the specification. The images are captured by fluorescence microscopy, in which the hardware converts the physical signals from the fluorophores into interpretable data. The images are pre-processed to identify colocalised regions and filtered to generate units, thereby providing higher-quality data for classification and improving diagnostic outputs. These are not considered to amount to generic preprocessing or processing steps. 
The ML is constrained by biological and optical considerations whilst enabling the accurate and fast determination of the nature of the biological entity, thereby resulting in a technical improvement over conventional diagnostic methods. It is therefore submitted that the features of amended claim 1 amount to significantly more than an abstract idea. It is therefore submitted that claims 1, 3, 6 to 14, 16, 18, 22, and 23 are not directed towards a judicial exception. In the Office Action, the Examiner rejected claim 15 under 35 U.S.C. § 101 as allegedly directed to unpatentable subject matter. Claim 15 has been canceled, thus rendering the rejection moot. In the Office Action, the Examiner rejected claim 25 under 35 U.S.C. § 101 as allegedly directed to unpatentable subject matter. Claim 25 is amended to be congruent with the corresponding claim 1; as such, the above comments regarding the judicial exception apply mutatis mutandis; (5) amended independent claim 1 is directed toward determining the presence, absence, or identity of a biological entity in a sample. Instances of the biological entity (e.g., intact particles) are labelled using polyvalent cations. Each of at least a subset of plural instances of the biological entity has at least one optically detectable label attached, the optically detectable label comprising nucleic acids and fluorophores. In the present technology, images of the sample are captured using fluorescence microscopy, with each image containing multiple instances of the biological entity. The image data is preprocessed by generating a plurality of sub-images for each image in the sample. Each sub-image represents a different portion of the image and contains a different number of instances of the biological entity.
The plurality of sub-images is generated by identifying regions in which, within each region, plural optically detectable labels are colocalized, colocalization being defined as locations of plural optically detectable labels consistent with the optically detectable labels being attached to the same instance of the biological entity. A separate sub-image for each of at least a subset of the identified regions is generated, each containing a different identified region. The colocalized optically detectable labels comprise at least two colocalized optically detectable labels of different types. The preprocessed image data, filtered to generate units, is provided to a trained machine learning system. The trained machine learning model is configured to classify units and determine the presence, absence, or identity of the biological entity based on the classification. The present technology, therefore, allows more accurate detection of colocalised label patterns, enabling higher-quality sub-images to be produced for input into a classification model. Tandon broadly relates to "methods, systems and apparatus for imaging and analyzing a biological sample of a host organism to identify a sample feature of interest, such as a cell type of interest" (see Tandon, paragraph [0003]). In Tandon, one or more images of a biological sample are captured by a camera (see Tandon, paragraph [0004]). This is in contrast to the present technology that utilises fluorescence microscopy images, wherein at least one of the optically detectable labels comprises nucleic acids and fluorophores. There is no teaching or suggestion in Tandon of attaching optically detectable labels to a biological entity by contacting said entity with polyvalent cations, let alone as a preparatory step for downstream machine-learning-based classification of a biological entity as required by amended claim 1. In Tandon, the acquired images are segmented to obtain a plurality of cellular artifacts. 
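The claimed colocalization-based sub-image generation summarized in this passage (identify regions within a single image where optically detectable labels of different types are colocalized, then crop one sub-image per identified region) might be sketched as below. The simple thresholding proxy for colocalization, the function names, and the crop size are assumptions for illustration only, not the claimed or disclosed implementation:

```python
import numpy as np

def colocalized_regions(channel_a, channel_b, threshold=0.5):
    """Return pixel coordinates where labels of two different types are
    both detected, a simple proxy for colocalization: both label
    channels exceed a threshold at the same location in one image."""
    mask = (channel_a > threshold) & (channel_b > threshold)
    return list(zip(*np.nonzero(mask)))

def crop_sub_images(image, centers, half=2):
    """Generate a separate sub-image for each identified region, each
    centered on a colocalized location and clipped to image bounds."""
    subs = []
    for r, c in centers:
        r0, r1 = max(r - half, 0), min(r + half + 1, image.shape[0])
        c0, c1 = max(c - half, 0), min(c + half + 1, image.shape[1])
        subs.append(image[r0:r1, c0:c1])
    return subs
```

Note that the colocalization test here uses only the two label channels of a single image; no comparison across a series of images is involved.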
A machine learning classification model is applied to the segmented images to classify the cellular artifacts. A determination is made that the classified cellular artifacts belong to the class to which the sample feature of interest belongs. A cellular artifact is any item in an image of a biological sample "that might qualify as a cell, parasite or other sample feature of interest" (see Tandon, paragraph [0147]). A sample feature is a feature of the biological sample that represents a potentially clinically interesting condition (see Tandon, paragraph [0095]). Examples of sample features include abnormal host cells, parasites infecting the host, and a combination thereof (see Tandon, claim 9 and paragraphs [0095] to [0097]). As such, it can be concluded that cellular artifacts encompass abnormal cells, parasites, and a combination thereof. Segmentation is described as an image analysis process that identifies individual sample features, and cellular artifacts, in the image (see Tandon, paragraph [0146]). The segmentation removes background pixels that are deemed to not be associated with any sample and groups the foreground pixels into cellular artifacts. The cellular artifacts are extracted and fed to the classification model. To group the foreground pixels, the segmentation process may utilise a gradient technique to identify cellular artifact edges or a distance transformation to define cellular artifacts in the context of boundary pixels. Further, the segmentation process as disclosed in Tandon is not and cannot be the same as the claimed process, as each sub-image in the present technology comprises a different one of the instances of the biological entity to which at least one optically detectable label may be attached. The generation of the segmented cellular artifacts using the Euclidean transformation mimics the map on the original input image and generates the separate segment images to encompass a plurality of sub-images.
However, the segmented sub-images do not contain different instances of the biological entity each having at least one optically detectable label attached. As such, Tandon fails to disclose at least the following features required by amended claim 1; (6) Tandon's segmentation process generates cellular artifacts based on morphological boundaries in the image, without any consideration of whether optically detectable labels are attached to the biological entities. Tandon uses "labels" only in the sense of classification metadata, text labels identifying what type of cell or condition an image shows (see Tandon Paragraphs [0258]-[0259], [0296]). Tandon does not teach or suggest physically attaching optically detectable labels to biological entities as a preparatory step for image analysis and machine learning classification. The present invention, by contrast, requires that the biological entities themselves be labeled with optically detectable markers (nucleic acids with fluorophores, attached via polyvalent cations) before imaging, and the sub-images are generated based on the colocalization of these physical labels. This fundamental difference of physical labeling of entities versus metadata labeling of images renders Tandon deficient for teaching the claimed invention. Cohen fails to cure the deficiencies of Tandon. Cohen broadly relates to the processing of microscopy images (see Cohen, paragraph [0001]). Cohen describes receiving a microscopy image and determining a configuration for an image section that includes a portion of the image (see Cohen, paragraph [0005]). Multiple image sections are processed in parallel by various processing units. One or more objects are determined to be present in the image sections. A segmentation module determines which pixels are associated with the object (see Cohen, paragraph [0023]). The objects are measured. 
There is no teaching nor suggestion in Cohen of attaching optically detectable labels to a biological entity using polyvalent cations, let alone as a preparatory step for downstream machine-learning-based classification of a biological entity as required by amended claim 1. In more detail, in Cohen, the images are preprocessed and a preliminary scan is conducted to identify objects in the microscopy images (see Cohen, paragraphs [0025] and [0026]). The image sections are defined (see Cohen, paragraph [0027]). These sections may be defined based on the maximum size of an object detected during the preliminary scan or set by a user, on the memory available in the analysis system, or by using bounding coordinates corresponding to a coordinate system (see Cohen, paragraphs [0027] to [0030]). In Cohen, objects are detected in the image sections and the centre locations of the objects are determined (see Cohen, paragraph [0051]). Segmentation is performed, in which the pixels said to be associated with the object are detected (see Cohen, paragraph [0054]). The next step of the method of Cohen is the performance of measurements based on the pixels associated with the detected objects (see Cohen, paragraph [0068]). In an example, where multiple microscopy images are obtained under varying illumination conditions, fluorescence ratios or colocalization information for the detected objects in a series of microscopy images may be obtained. This is in contrast to the present technology, in which sub-images for each individual image are generated on the basis of colocalization information contained within that individual image alone. In other words, the sub-images for each image are generated on the basis of regions within each individual image that contain colocalised plural optically detectable labels. The distinction between Cohen's approach and the claimed invention is fundamental.
Cohen measures colocalization information across a series of microscopy images obtained under varying conditions, which is a post-processing analysis step that compares data between multiple different images to generate measurements, see Cohen Paragraph [0068]. In contrast, the present invention identifies colocalized optically detectable labels within each individual image and uses that colocalization information to generate sub-images from that same individual image. The claimed process does not compare across images; rather, it analyzes spatial relationships of labels within a single image to determine which regions likely contain individual instances of the biological entity, and then generates separate sub-images for those regions. Cohen's teaching of measuring colocalization as a comparative metric across different images does not teach or suggest using colocalization within a single image as a basis for generating sub-images from that image. One of ordinary skill in the art would have no reason to take Cohen's cross-image measurement technique, change it to an intra-image segmentation approach for generating sub-images, and then combine it with Tandon. Furthermore, there is no disclosure of colocalised optically detectable labels of different types, let alone two colocalised optically detectable labels of a different type. It is therefore submitted that claim 1 is patentable over Tandon in view of Cohen. It is further submitted that claims 2 to 25 are patentable over Tandon in view of Cohen at least by virtue of their dependence or reference; (7) the Examiner rejected claim 15 as allegedly being obvious over Tandon in view of Kamens. Claim 15 has been canceled thus rendering the rejection moot. In response to argument (1), the Examiner does not find the Applicant’s argument(s) persuasive. The Examiner maintains that the Applicant’s newly amended claims recite an abstract idea without significantly more. 
The additional elements amount to generic computer components and do not integrate the abstract idea into a practical application. The Examiner maintains that the newly amended claims recite a Certain Method of Organizing Human Activity without significantly more. The Examiner does not acknowledge that the Applicant’s claims are similar to Example 39; Example 39 describes a method for training a neural network for facial detection, while the Applicant’s claims 1, 3, 6-14, 16-18, 20-23 recite a method of determining a presence, absence, or identity of a biological entity in a sample. The recitation of “trained machine learning system” is recited at a high degree of generality, amounting to no more than generally linking the abstract idea to a particular technical environment. The recitation is also similar to adding the words "apply it" to the abstract idea. As set forth in MPEP 2106.05(f), merely reciting the words "apply it" or an equivalent is an example of when an abstract idea has not been integrated into a practical application. The 35 U.S.C. 101 rejection(s) stand. In response to argument (2), the Examiner does not find the Applicant’s argument(s) persuasive. The Examiner maintains that the Applicant’s newly amended claims recite an abstract idea without significantly more, specifically managing interactions between people, including following rules or instructions, which is a subgrouping of Certain Methods of Organizing Human Activity, as rejected above in the 35 U.S.C. 101 rejection(s). The Examiner maintains that these steps could be carried out manually by individuals, such as doctors or clinical staff, in a clinical facility, as the Applicant’s newly amended claims recite steps by which a person following rules or instructions could accomplish the abstract idea.
The additional elements, which generally link the abstract idea to a particular technological environment, do not amount to significantly more than the abstract idea (See MPEP 2106.05(h) and Affinity Labs of Texas, LLC v. DIRECTV, LLC, 838 F.3d 1253, 120 USPQ2d 1201 (Fed. Cir. 2016)). The Examiner does not acknowledge that the Applicant’s newly amended claims recite a technical improvement, as the Examiner maintains that the Applicant’s claims are similar to “ii. Using well-known standard laboratory techniques to detect enzyme levels in a bodily sample such as blood or plasma, Cleveland Clinic Foundation v. True Health Diagnostics, LLC, 859 F.3d 1352, 1355, 1362, 123 USPQ2d 1081, 1082-83, 1088 (Fed. Cir. 2017)” and “iii. Gathering and analyzing information using conventional techniques and displaying the result, TLI Communications, 823 F.3d at 612-13, 118 USPQ2d at 1747-48” (See MPEP 2106.05(a)(II)), which the courts have indicated may not be sufficient to show an improvement to technology. The 35 U.S.C. 101 rejection(s) stand. In response to argument (3), the Examiner does not find the Applicant’s argument(s) persuasive. The Examiner maintains that the additional elements amount to no more than generally linking the abstract idea to a particular technical environment. The recitation is also similar to adding the words "apply it" to the abstract idea. As set forth in MPEP 2106.05(f), merely reciting the words "apply it" or an equivalent is an example of when an abstract idea has not been integrated into a practical application. The Examiner maintains that the newly amended claims recite an abstract idea without significantly more. The Examiner maintains that claims 1, 3, 6 to 14, and 16, 18, 22, and 23 are rejected under 35 U.S.C. 101. The 35 U.S.C. 101 rejection(s) stand. In response to argument (4), the Examiner does not find the Applicant’s argument(s) persuasive.
The Examiner does not acknowledge that the Applicant’s newly amended claims recite a technical improvement as the Examiner maintains that the Applicant’s claims are similar to “ii. Using well-known standard laboratory techniques to detect enzyme levels in a bodily sample such as blood or plasma, Cleveland Clinic Foundation v. True Health Diagnostics, LLC, 859 F.3d 1352, 1355, 1362, 123 USPQ2d 1081, 1082-83, 1088 (Fed. Cir. 2017)” and “iii. Gathering and analyzing information using conventional techniques and displaying the result, TLI Communications, 823 F.3d at 612-13, 118 USPQ2d at 1747-48” (See MPEP 2106.05(a)(II)), which the courts have indicated may not be sufficient to show an improvement to technology. The Examiner maintains that claims 1, 3, 6-14, 16-18, 20-23, 25 are rejected under 35 U.S.C. 101. The 35 U.S.C. 101 rejection(s) stand. In response to argument (5), the Examiner finds the Applicant’s argument(s) persuasive. The Examiner has added Boyden et al. (U.S. Patent Pre-Grant Publication No. 2020/0041514) to the combination of Tandon et al. (U.S. Patent Pre-Grant Publication No. 2018/0211380) in view of Cohen et al. (U.S. Patent Pre-Grant Publication No. 2013/0044940) to reject the independent claims 1 and 25. 
Boyden is relied upon to teach “attaching, by a sample processing unit, a plurality of optically detectable labels to each of at least a subset of plural instances of a biological entity in a sample, the attaching, for at least one of the plurality of optically detectable labels, comprising contacting each of the subsets of plural instances of the biological entity with a polyvalent cation, wherein at least one of the plurality of optically detectable labels comprises nucleic acids and fluorophores;” and “capturing, by one or more fluorescence microscopy devices, image data representing one or more images of the sample, each image containing plural instances of the biological entity” in independent claim 1, and “a sample processing unit configured to cause attachment of a plurality of optically detectable labels to at least a subset of plural instances of the biological entity present in the sample, wherein the attachment, for at least one of the plurality of optically detectable labels, comprises contacting each of the subsets of plural instances of the biological entity with a polyvalent cation, and at least one of the plurality of optically detectable labels comprises nucleic acids and fluorophores;” and “a sensing unit configured to capture one or more fluorescence microscopy images of the sample containing the optically detectable labels to obtain image data representing one or more images of the sample, each image containing plural instances of the biological entity” in independent claim 25. The Examiner maintains that the combination of Tandon/Cohen/Boyden encompasses the newly amended independent claims 1 and 25. The 35 U.S.C. 103 rejection(s) stand. In response to argument (6), the Applicant’s argument(s) are not persuasive. The Examiner has added Boyden et al. (U.S. Patent Pre-Grant Publication No. 
2020/0041514) to disclose “a plurality of optically detectable labels to each of at least a subset of plural instances of a biological entity in a sample”, to be combined with Tandon’s disclosure that a machine learning model (which has been generalized and pre-trained on example images of the sample type) interfaces with the hardware component to scan the full sample images and automatically make a classification, diagnosis, and/or analysis ([0150]-[0152], [0164]-[0167]). The Examiner maintains that Cohen’s disclosure in Paragraphs [0041] and [0066] remains applicable when combined with Tandon/Boyden: “The surrounding area may be reevaluated as a new individual image section for processing and analysis at a single processing unit 114. Where neighboring areas of adjacent image sections are selected, the neighboring areas may be merged into a new image section 240 as shown by way of example in FIG. 9. In the new image section 240 of FIG. 9, the objects 242a, 242b, and 244 respectively corresponding to the suspect locations 232a-b and 234a-b are near the center of the new image section 240 rather than straddling a boundary between adjacent image sections. Because only one processing unit 114 processes and analyzes the new image section 240, the object detection module 124 can determine whether one object or two objects are present at the suspect locations 232a-b and 234a-b.” The Examiner maintains that the combination of Tandon/Cohen/Boyden encompasses the amended independent claims 1 and 25, and that claims 3, 6-14, 16-18, 20-23 are rejected both individually and due to their dependency on independent claim 1. The 35 U.S.C. 103 rejection(s) stand.

In response to argument (7), the Examiner acknowledges the cancellation of independent claim 15.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Bennett S Erickson whose telephone number is (571)270-3690. The examiner can normally be reached Monday - Friday: 9:00am - 5:00pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Robert Morgan, can be reached at (571) 272-6773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Bennett Stephen Erickson/
Primary Examiner, Art Unit 3683

Prosecution Timeline

Oct 26, 2022 — Application Filed
Jun 28, 2024 — Non-Final Rejection (§101, §103)
Nov 01, 2024 — Response Filed
Jan 10, 2025 — Final Rejection (§101, §103)
Apr 14, 2025 — Request for Continued Examination
Apr 15, 2025 — Response after Non-Final Action
Jul 24, 2025 — Non-Final Rejection (§101, §103)
Oct 28, 2025 — Response Filed
Jan 29, 2026 — Final Rejection (§101, §103) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597518
INCORPORATING CLINICAL AND ECONOMIC OBJECTIVES FOR MEDICAL AI DEPLOYMENT IN CLINICAL DECISION MAKING
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12580069
AUTOMATIC SETTING OF IMAGING PARAMETERS
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12580061
System and Method for Virtual Verification in Pharmacy Workflow
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12567501
STABILITY ESTIMATION OF A POINT SET REGISTRATION
Granted Mar 03, 2026 (2y 5m to grant)
Patent 12499978
METHODS, SYSTEMS, AND DEVICES FOR DETERMINING MULTI-PARTY COLOCATION
Granted Dec 16, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6 — Expected OA Rounds
38% — Grant Probability
84% — With Interview (+45.9%)
3y 7m — Median Time to Grant
High — PTA Risk
Based on 141 resolved cases by this examiner. Grant probability derived from career allow rate.
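The projections above are internally consistent with the examiner data shown earlier: 53 grants out of 141 resolved cases gives the 38% grant probability, and adding the stated 45.9-point interview lift to that rounded figure yields the 84% with-interview projection. The sketch below reproduces that arithmetic; the formulas are assumptions inferred from the displayed numbers, not the product's documented methodology.

```python
# Reproducing the dashboard's headline projections from the examiner's
# career data. Assumed formulas (inferred, not the product's stated method).

granted = 53           # career grants by this examiner
resolved = 141         # resolved cases (grants plus abandonments)
interview_lift = 45.9  # stated percentage-point lift for interviewed cases

grant_probability = round(granted / resolved * 100)         # career allow rate, in %
with_interview = round(grant_probability + interview_lift)  # allow rate plus lift, in %

print(grant_probability, with_interview)  # prints "38 84"
```

Note the ordering assumption: the lift appears to be added to the already-rounded 38% (38 + 45.9 = 83.9, which rounds to 84); applying it to the unrounded rate (37.6 + 45.9 = 83.5) would not match the displayed figure as cleanly.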
