Prosecution Insights
Last updated: April 19, 2026
Application No. 18/433,240

DETERMINING BIOMARKERS FROM HISTOPATHOLOGY SLIDE IMAGES

Non-Final OA: §101, §103, §112, §DP
Filed: Feb 05, 2024
Examiner: VARNDELL, ROSS E
Art Unit: 2674
Tech Center: 2600 (Communications)
Assignee: Tempus AI Inc.
OA Round: 1 (Non-Final)
Grant Probability: 85% (Favorable)
OA Rounds: 1-2
To Grant: 2y 4m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 85% (above average): 520 granted / 615 resolved (+22.6% vs TC avg)
Interview Lift: +13.0% (moderate), measured across resolved cases with an interview
Avg Prosecution: 2y 4m (typical timeline); 28 applications currently pending
Total Applications: 643 across all art units (career history)

Statute-Specific Performance

§101: 6.3% (-33.7% vs TC avg)
§103: 66.9% (+26.9% vs TC avg)
§102: 6.4% (-33.6% vs TC avg)
§112: 10.7% (-29.3% vs TC avg)
Compared against Tech Center average estimates; based on career data from 615 resolved cases.

Office Action

§101 §103 §112 §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The IDS(s) has/have been considered and placed in the application file.

Priority

Applicant's claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged. Applicant has not complied with one or more conditions for receiving the benefit of an earlier filing date under 35 U.S.C. 119(e) as follows:

The later-filed application must be an application for a patent for an invention which is also disclosed in the prior application (the parent or original nonprovisional application or provisional application). The disclosure of the invention in the parent application and in the later-filed application must be sufficient to comply with the requirements of 35 U.S.C. 112(a) or the first paragraph of pre-AIA 35 U.S.C. 112, except for the best mode requirement. See Transco Products, Inc. v. Performance Contracting, Inc., 38 F.3d 551, 32 USPQ2d 1077 (Fed. Cir. 1994).

The disclosure of at least the two prior-filed applications, Application Nos. 62/671,300 and 62/824,039, fails to provide adequate support or enablement in the manner provided by 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph, for one or more claims of this application. The independent claims of the instant application require training biomarker classification models using "a molecular training dataset" that includes "molecular data based on sequencing" and is "clustered by biomarker." They further require processing segmented tile images by predicting a biomarker classification and predicting a tissue classification for each tile, and then determining the overall presence of biomarkers based on both predictions. Neither of the reviewed provisionals (62/671,300 and 62/824,039) teaches these specific elements.
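For orientation, the per-tile inference flow recited in the independent claims (per-tile biomarker and tissue classification, followed by a slide-level presence determination) can be sketched as follows. This is an illustrative reconstruction only; the callable model stand-ins, the tumor-tile filter, and the mean-score thresholding rule are hypothetical and are not taken from the application or the provisionals.

```python
# Illustrative sketch of the claimed inference flow (hypothetical details):
# (i) per-tile biomarker scores, (ii) per-tile tissue class, then a
# slide-level presence call based on both predictions.
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class TilePrediction:
    biomarker_scores: dict  # biomarker name -> probability for this tile
    tissue_class: str       # e.g. "tumor", "stroma", "lymphocyte"

def predict_slide_biomarkers(
    tiles: Sequence,
    biomarker_model: Callable,  # tile -> {biomarker: probability}
    tissue_model: Callable,     # tile -> tissue class label
    threshold: float = 0.5,
) -> dict:
    """Combine per-tile biomarker and tissue predictions into a
    slide-level biomarker-presence call (assumed aggregation rule:
    mean score over tumor-classified tiles, then threshold)."""
    preds = [TilePrediction(biomarker_model(t), tissue_model(t)) for t in tiles]
    # Weight only tiles classified as tumor; fall back to all tiles if none.
    tumor = [p for p in preds if p.tissue_class == "tumor"] or preds
    presence = {}
    for marker in tumor[0].biomarker_scores:
        mean_score = sum(p.biomarker_scores[marker] for p in tumor) / len(tumor)
        presence[marker] = mean_score >= threshold
    return presence
```

Under this sketch, a slide whose tumor tiles score high for a marker is reported as biomarker-positive, which mirrors the claims' "determine, based on (i) and (ii)" step.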
Provisional 62/671,300 describes an adversarial framework using a dual-classifier (Figure 3) and demonstrates that its output image embeddings naturally group by MSI status (Figures 8A and 8B). However, it fails to support the independent claims, which require training models using a "molecular training dataset" that includes "molecular data based on sequencing" and "molecular data subsets clustered by biomarker." The provisional teaches clustering only as a mathematical visualization of the model's output embeddings, rather than requiring a pre-clustered, sequencing-based input dataset to train the models. Because this information is missing, the provisional cannot provide an effective filing date. The burden rests on the applicant to establish this support. See MPEP § 211.05 and In re NTP, Inc., 654 F.3d 1279, 1288 (Fed. Cir. 2011). The examiner believes this to be a drafting oversight. Applicant is requested to provide a limitation-by-limitation analysis explicitly pointing to the specific paragraphs, page numbers, and/or figures in the priority applications that provide full 35 U.S.C. § 112(a) support for every limitation in the claims, with particular attention to the limitations identified above. Should the applicant maintain the priority claim without providing the detailed mapping, the examiner reserves the right to issue a formal Requirement for Information under 37 CFR § 1.105 to compel identification of the specific portions of the provisional applications relied upon for 35 U.S.C. § 112(a) support, so that the examiner can determine the effective filing date.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees.
A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA.

A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b). The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c).
A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13. The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-19 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-30 of U.S. Patent No. 10,957,041 B2. Although the claims at issue are not identical, they are not patentably distinct from each other because the '240 application claims recite the same fundamental process of using a deep learning framework to predict biomarker and tissue classifications from tiled H&E slide images to determine biomarker presence, differing only in that the '240 application omits narrowing limitations.

Claims 1-19 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-30 of U.S. Patent No. 11,610,307 B2. Although the claims at issue are not identical, they are not patentably distinct from each other because the '240 application claims recite the same fundamental process of using a deep learning framework to predict biomarker and tissue classifications from tiled H&E slide images to determine biomarker presence, differing only in that the '240 claims are a broader genus that encompasses the more specific species claimed in the '307 patent.
Claims 1-19 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-28 of U.S. Patent No. 11,682,098 B2. Although the claims at issue are not identical, they are not patentably distinct from each other because the '240 application claims recite the same fundamental process of using a deep learning framework to predict biomarker and tissue classifications from tiled H&E slide images to determine biomarker presence, with the '098 patent claims adding details of multi-tile adjacency analysis and 3D probability array generation.

Claim Objections

Claims 18-19 are objected to because of informalities. The examiner recommends the following changes:

Claim 18, line 5, recites "the digital image" without antecedent basis. There is no prior recitation of "a digital image" in claim 18.
Claim 18, line 15, recites "the target tissue" without antecedent basis. There is no prior recitation of "a target tissue" in claim 18.
Claim 18, line 16, recites "the electronic network" without antecedent basis. There is no prior recitation of "an electronic network" in claim 18.
Claim 19, line 16, recites "the electronic network" without antecedent basis. There is no prior recitation of "an electronic network" in claim 19.

Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-19 are rejected under 35 U.S.C. 112(b) or 35 U.S.C.
112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

The term "substantially similar" in claims 1, 18, and 19 is a relative term which renders the claims indefinite. The term "substantially similar" is not defined by the claims, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. It is unclear what constitutes a "substantially similar" tissue sample. Claims 2-17 depend directly or indirectly from claim 1 and are therefore also rejected. Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-19 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite using trained deep learning/biomarker classification models to predict biomarker classifications and tissue classifications for tile images, and determining a predicted presence of biomarkers, which constitutes mathematical concepts (ML model inference) and mental processes (evaluating classification results to determine biomarker presence).
The additional elements of generic computer components (processors, memory, network), processing segmented tile images, and transmitting predictions do not integrate the judicial exception into a practical application, as they amount to mere instructions to apply the abstract idea on a computer, insignificant extra-solution activity, and field-of-use limitations. At Step 2B, these additional elements are well-understood, routine, and conventional as evidenced by the specification's description of generic computing hardware and known image processing techniques.

STEP 1: Statutory Category

Claim 1 (+ deps 2-17): Machine (system). Pass: Yes.
Claim 18: Manufacture (CRM). Pass: Yes.
Claim 19: Process (method). Pass: Yes.

STEP 2A, PRONG 1: Judicial Exception?

Yes, an abstract idea (mathematical concepts and mental processes). The independent claims recite:

"predicting a respective biomarker classification for each tile image using one or more biomarker classification models." This is a mathematical concept. The biomarker classification models perform mathematical operations (neural network computations) to generate classification predictions. Per the TC2100 guidance and the July 2024 SME Examples 47-49, ML model inference constitutes mathematical calculations.

"predicting a respective tissue classification for each tile image using one or more trained deep learning classifier models." Same analysis as above: applying trained DL models to classify tiles is a mathematical calculation.

"determine, based on (i) and (ii), a predicted presence of one or more biomarkers in the target tissue." This is both a mathematical concept (aggregating classification predictions) and a mental process (a pathologist could, in principle, evaluate tissue and biomarker classifications to determine biomarker presence; the specification at [0004]-[0005] describes pathologists performing this evaluation manually).

Training data description: a molecular training dataset with sequencing data clustered by biomarker.
This describes characteristics of the training data, a data characteristic that further defines the mathematical model but does not add a non-abstract limitation.

The claims recite mathematical concepts (ML model inference/classification) and mental processes (determining biomarker presence from classification results). These are considered together as a single abstract idea. (Step 2A, Prong 1: YES.)

STEP 2A, PRONG 2: Practical Application?

No, the claims do not integrate the abstract idea into a practical application. Additional elements beyond the judicial exception:

"one or more processors; an electronic network; one or more memories" (claim 1): generic computer components recited at a high level of generality. These are mere instructions to "apply it" on a computer. See MPEP § 2106.05(f).

"process a plurality of segmented tile images each corresponding to a different respective portion of the digital image": data gathering/pre-processing. This is insignificant extra-solution activity. See MPEP § 2106.05(g).

"transmit, via the electronic network, the predicted presence of the one or more biomarkers": insignificant extra-solution activity (mere data output/transmission). See MPEP § 2106.05(g).

"Hematoxylin and Eosin-stained slide of a target tissue": field-of-use limitation (medical imaging/histopathology). See MPEP § 2106.05(h).

Training data characteristics, such as the molecular dataset corresponding to training tissue samples, including molecular sequencing data, clustered by biomarker: these describe the type of training data used, a data characteristic of the mathematical model, not a practical application. This limits the field of use of the abstract idea.

No improvement to computer functionality is claimed. The specification describes the improvement as better biomarker detection results from H&E images (avoiding IHC staining), which is an improvement to results, not to the functioning of a computer or other technology.
See Intellectual Ventures I LLC v. Capital One Bank, 792 F.3d 1363 (Fed. Cir. 2015). No particular machine beyond generic processors/memory is recited, and there is no transformation of a physical article. The claims do not treat a patient, control a medical device, or interface with physical pathology equipment in a meaningful way. (Step 2A, Prong 2: NO, not integrated into a practical application. The claims are directed to the abstract idea.)

STEP 2B: Inventive Concept?

No, the additional elements do not provide an inventive concept. Generic computer components (processors, memory, network) are well-understood, routine, and conventional (WURC). See MPEP § 2106.05(d); Berkheimer evidence: the specification at [0392]-[0395] describes generic processor implementations. Transmitting data over a network is WURC. See OIP Techs., Inc. v. Amazon.com, 788 F.3d 1359 (Fed. Cir. 2015). Processing tile images from a digital image is WURC in the histopathology field: the specification at [0010]-[0011] acknowledges that tiling/segmentation of WSIs and the use of CNNs/FCNs for image classification are known. The specific training data characteristics do not supply an inventive concept at Step 2B because they describe what data the mathematical model was trained on, not an unconventional technical step. (Step 2B: NO, no inventive concept.)

Dependent claims: Most dependent claims add further details of the mathematical process (claims 3-8, 10-16) or data characteristics (claims 9, 11) that remain within the abstract idea. Claim 17 (adding a pathology slide scanner system and receiving the image from it) adds a particular data-gathering device, but this is likely WURC extra-solution activity given that digital slide scanners are conventional in histopathology.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering the patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1, 2, 7-9, and 12-19 are rejected under 35 U.S.C. 103 as being unpatentable over Chukka et al. (US 2016/0042511 A1, hereinafter "Chukka") in view of Coudray et al. ("Classification and mutation prediction from non–small cell lung cancer histopathology images using deep learning," hereinafter "Coudray").
Claim 1. A computing system for identifying biomarkers in a digital image of a Hematoxylin and Eosin-stained slide of a target tissue (Chukka teaches a computing system for identifying biomarkers in a slide stained with "hematoxylin (blue stain)" (Chukka, ¶ [0024]).), comprising:

one or more processors; an electronic network; and one or more memories having stored thereon computer-executable instructions that, when executed by the one or more processors, cause the computing system to (Chukka discloses the hardware components, stating that the "analyzing system 100 includes an imaging apparatus 120 and a computer system 110" interconnected by a "network 130" (Chukka, ¶ [0027]), where the computer system contains processors and memory executing instructions (Chukka, ¶ [0029]).):

process a plurality of segmented tile images each corresponding to a different respective portion of the digital image (Chukka teaches processing a plurality of segmented tile images, stating, "The tissue region is then segmented into tiles and classification and nuclei counting algorithms are performed with respect to each tile" (Chukka, ¶ [0010]). Chukka does not explicitly teach processing the tiles "using a deep learning framework.");

(i) predicting a respective biomarker classification for each tile image using one or more biomarker classification models (Chukka teaches predicting classifications for the extracted tiles and objects, including utilizing trained object classifiers to determine "gene status in breast carcinomas," which constitutes a biomarker classification model (Chukka, ¶ [0023]).);

(ii) predicting a respective tissue classification for each tile image using one or more trained deep learning classifier models (Chukka teaches predicting tissue classifications for each tile image by assigning a probability of the extracted feature belonging to a "tissue, stromal, or lymphatic region" (Chukka, ¶ [0036]).
As established below, Coudray teaches implementing these classifiers using deep learning models.);

determine, based on (i) and (ii), a predicted presence of one or more biomarkers in the target tissue (Chukka teaches determining the presence of biomarkers based on the individual classifications, disclosing "obtaining a composite score for the slide based at least in part on the characteristics of the extracted objects" (Chukka, ¶ [0010]).); and

transmit, via the electronic network, the predicted presence of the one or more biomarkers (Chukka teaches transmitting the predicted results over the electronic network, stating that "a score and record of the analysis performed for the tissue in the slide can be transmitted over a computer communication link (e.g., the Internet) to a remote computer system for viewing, storage, or analysis" (Chukka, ¶ [0052]).).

Chukka does not explicitly teach processing the tiles "using a deep learning framework." Chukka also does not explicitly teach a "training dataset that (a) corresponds to a plurality of training tissue samples, (b) includes molecular data based on sequencing of a substantially similar sample associated with each training tissue sample, and (c) includes a plurality of molecular data subsets clustered by biomarker."

However, Coudray teaches utilizing a deep learning framework to process segmented tile images corresponding to portions of a digital whole-slide image. Coudray states, "the network was instead trained, validated and tested using 512x512 pixel tiles, obtained from nonoverlapping 'patches' of the whole-slide images" (Coudray, Page 1560, left column), processed using a "deep convolutional neural network (inception v3)" (Coudray, Page 1559, Abstract).

Coudray also teaches (i) predicting a respective biomarker classification for each tile image using one or more biomarker classification models.
Coudray teaches predicting a biomarker classification for each tile image, stating, "we trained the network to predict the ten most commonly mutated genes in LUAD" (Coudray, Page 1559, Abstract). Coudray applies these biomarker classification models to each individual tile, explaining that "Each mutation classification was treated as a binary classification, and our formulation allowed multiple mutations to be assigned to a single tile" (Coudray, Page 1568, left column) and "the per-tile classification results were aggregated on a per-slide basis" (Coudray, Page 1560, right column).

Wherein the one or more biomarker classification models are trained using a molecular training dataset that (a) corresponds to a plurality of training tissue samples: Coudray teaches training the models using a molecular training dataset that corresponds to a plurality of training tissue samples and includes molecular data based on sequencing.

(b) includes molecular data based on sequencing of a substantially similar sample associated with each training tissue sample: Coudray states, "gene mutation data for matched patient samples were downloaded from TCGA" (Coudray, Page 1562, right column) and "data from the TCGA dataset used for training were identified with the next-generation sequencing (NGS) tools Illumina HiSeq 2000 or Genome Analyzer II" (Coudray, Page 1564, right column).

(c) includes a plurality of molecular data subsets clustered by biomarker: Coudray teaches this limitation by grouping or partitioning the molecular dataset into distinct subsets based on the presence of the ten most common gene mutations (biomarkers). Coudray teaches, "To make sure the training and test sets contained enough images from the mutated genes, we only selected those which were mutated in at least 10% of the available tumors" (Coudray, Page 1562, right column).
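The "clustered by biomarker" arrangement mapped above, sequencing-derived mutation calls partitioned into per-gene subsets with binary presence labels, can be sketched as follows. This is a hedged illustration of Coudray's per-gene binary formulation; the sample identifiers and gene names are invented for the example.

```python
# Hedged sketch: partition sequencing-derived mutation calls into per-gene
# (per-biomarker) subsets with binary 1/0 labels, in the spirit of Coudray's
# "set to 1 or 0 depending on the presence or absence of the mutation."
def cluster_by_biomarker(mutation_calls: dict, genes: list) -> dict:
    """mutation_calls maps sample id -> set of mutated genes.
    Returns, per gene, a subset mapping each sample to a 0/1 label."""
    return {
        gene: {sample: int(gene in muts) for sample, muts in mutation_calls.items()}
        for gene in genes
    }
```

Each per-gene dictionary is one "molecular data subset clustered by biomarker" in the sense the mapping above attributes to Coudray.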
Coudray structures this training dataset into biomarker-specific subsets to train the classifiers, stating, "fully trained this network … using only LUAD whole-slide images … each cell associated to a mutation and set to 1 or 0 depending on the presence or absence of the mutation. Only the most commonly mutated genes were used … leading to a training set of 223,185 tiles."

(ii) predicting a respective tissue classification for each tile image using one or more trained deep learning classifier models: Coudray teaches predicting a respective tissue classification for each tile using the trained deep learning models, stating, "we trained inception v3 to recognize tumor versus normal" (Coudray, Page 1560) and "we trained the network on a direct three-way classification into the three types of images (normal, LUAD, LUSC)" (Coudray, Page 1560). Coudray explicitly performs this via per-tile classification, noting "it takes ~20s to calculate per-tile classification probabilities on 500 tiles" (Coudray, Page 1561, left column).

determine, based on (i) and (ii), a predicted presence of one or more biomarkers in the target tissue: Coudray teaches determining the predicted presence of the biomarker in the target tissue based on the tile classifications, explicitly stating that "the per-tile classification results were aggregated on a per-slide basis ... thus generating a per-slide classification" (Coudray, Page 1560, right column).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the automated digital slide analysis system of Chukka with the deep convolutional neural network framework of Coudray.
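The per-tile-to-per-slide aggregation that Coudray is cited for in the preceding mapping can be sketched as follows. Coudray reports aggregating tile classification results on a per-slide basis; simple averaging with a decision threshold is shown here as one plausible scheme, not as the reference's exact method.

```python
# Minimal sketch (assumed aggregation rule) of per-tile probabilities being
# combined into a single per-slide classification, per the quoted passage
# "the per-tile classification results were aggregated on a per-slide basis."
def aggregate_per_slide(tile_probs: list, threshold: float = 0.5):
    """Average per-tile classification probabilities into a slide-level
    probability, then threshold into a slide-level positive/negative call."""
    slide_prob = sum(tile_probs) / len(tile_probs)
    return slide_prob, slide_prob >= threshold
```

For example, a slide whose tiles score [0.9, 0.8, 0.1] would average to 0.6 and be called positive under a 0.5 threshold.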
A person of ordinary skill in the art would be motivated to integrate Coudray's deep learning architecture into Chukka's automated scanning and computer network system to advance beyond traditional image feature extraction, thereby improving the accuracy of tissue classification (e.g., distinguishing between tumor subtypes like LUAD and LUSC). Furthermore, the artisan would be strongly motivated to utilize Coudray's method of training the deep learning framework on sequencing-derived molecular datasets clustered by biomarker. Combining these teachings predictably allows the computing system not only to classify standard optical tissue morphologies but also to accurately predict actionable molecular biomarkers (such as specific gene mutations) directly from standard H&E slide tiles, avoiding the need for separate costly NGS assays and streamlining the diagnostic workflow.

Claim 2. The combination of Chukka and Coudray teaches the computing system of claim 1, the one or more memories having stored thereon further instructions that, when executed by the one or more processors, cause the computing system to: separate the digital image into the plurality of segmented tile images by processing the digital image using at least one of (i) a tiling mask or (ii) a trained multiple instance learning controller ("The component 112 can use a grid pattern to tile the portion of the slide corresponding to the tissue data" (Chukka, ¶ [0044]). Applying a grid pattern to a digital image to segment it into discrete sub-regions (tiles) constitutes applying a tiling mask. IPR2026-00185, Exhibit 1002, Dr. Papanikolopoulos, ¶¶ [0069]-[0071].).

Claim 7.
The combination of Chukka and Coudray teaches the computing system of claim 3, the one or more memories having stored thereon further instructions that, when executed by the one or more processors, cause the computing system to: classify the Hematoxylin and Eosin-stained image using tile-based biomarker classification analysis (Coudray teaches classifying the H&E-stained slide using a tile-based biomarker classification system: "the per-tile classification results were aggregated on a per-slide basis" (Coudray, Page 1560, right column).).

Claim 8. The combination of Chukka and Coudray teaches the computing system of claim 3, the one or more memories having stored thereon further instructions that, when executed by the one or more processors, cause the computing system to: generate one or both of (i) the trained biomarker classification models, and (ii) the trained deep learning classifier models (Coudray teaches generating both the trained biomarker classification models and the trained deep learning classifier models: "we trained the network to predict the ten most commonly mutated genes in LUAD" (Coudray, Page 1559, Abstract) and "we trained inception v3 to recognize tumor versus normal" (Coudray, Page 1560).).

Claim 9. The combination of Chukka and Coudray teaches the computing system of claim 1, the one or more memories having stored thereon further instructions that, when executed by the one or more processors, cause the computing system to: for each tile image in the plurality of tile images: infer a class status of the tile image; and discard, when the class status of the tile image does not correspond to a desired class, the tile image (Chukka teaches inferring the class of the region/tile and discarding it if it does not correspond to the desired class. Chukka states, "an approximate region segmentation process can be performed based at least in part on, for example, an identification of dense lymphocyte clusters.
Accordingly, detected lymphocyte regions can be discarded while searching for, as an example, blue-stained tumor nuclei" (Chukka, ¶ [0045]).).

Claim 12. The combination of Chukka and Coudray teaches the computing system of claim 1, wherein the deep learning framework includes at least one of a multi-scale deep learning framework or a single-scale deep learning framework (Coudray trains the neural network using fixed, uniform "512x512 pixel tiles" (Coudray, Page 1560, left column) at a single magnification; this is a single-scale framework.).

Claim 13. The combination of Chukka and Coudray teaches the computing system of claim 12, wherein the single-scale deep learning framework is a convolutional neural network having a ResNet configuration or an Inception configuration ("we trained inception v3 to recognize tumor versus normal" (Coudray, Page 1560).).

Claim 14. The combination of Chukka and Coudray teaches the computing system of claim 1, the one or more memories having stored thereon further instructions that, when executed by the one or more processors, cause the computing system to: for each tile image in the plurality of tile images: process the tile image using a biomarker classification model trained to predict a different respective biomarker classification; and determine, based on the predicted biomarkers of the tile image, a predicted presence of one or more biomarkers in the target tissue (Coudray applies these biomarker classification models to each individual tile, explaining that "Each mutation classification was treated as a binary classification, and our formulation allowed multiple mutations to be assigned to a single tile" (Coudray, Page 1568, left column) and "the per-tile classification results were aggregated on a per-slide basis" (Coudray, Page 1560, right column).); and generate a report containing the digital image and a digital overlay visualizing the predicted presence of the one or more biomarkers (Chukka teaches "a score and record of the analysis
performed for the tissue in the slide can be transmitted" (Chukka, ¶ [0052]) and “an overlay image is produced to label features of interest in the image of a specimen from a subject” (Chukka, ¶ [0010]).).

Claim 15. The computing system of claim 14, wherein the digital overlay includes an overlay element identifying tumor content of the digital image or tumor percentage of the digital image (Chukka teaches generating a digital overlay based on the percentage of tumor classification “estimating the percentage of positively-stained (e.g., brown-colored) nuclear objects to the total number of positively-stained and negatively-stained (e.g., blue-colored)” ¶¶ [0003]-[0004]. It would be obvious that an overlay image designed to visualize a slide scored by tumor percentage would include an element identifying that percentage.).

Claim 16. The combination of Chukka and Coudray teaches the computing system of claim 1, the one or more memories having stored thereon further instructions that, when executed by the one or more processors, cause the computing system to: for each molecular data subset in the one or more molecular data subsets: receive a plurality of digital images of Hematoxylin and Eosin-stained training slides of training tissue samples corresponding to the respective different biomarker of the molecular data subset in an image-based biomarker prediction system having one or more processors (Coudray teaches, “To make sure the training and test sets contained enough images from the mutated genes, we only selected those which were mutated in at least 10% of the available tumors” (Coudray, Page 1562, right column) and received images corresponding to these biomarkers “fully trained this network … using only LUAD whole-slide images … each cell associated to a mutation and set to 1 or 0 depending on the presence or absence of the mutation.” (Coudray, Page 1568, right column)); and generate one of the trained biomarker classification models, based on the plurality
of digital images of the Hematoxylin and Eosin-stained training slides (Coudray teaches generating the trained biomarker classification models based on these digital images, stating, "we trained the network to predict the ten most commonly mutated genes in LUAD" (Coudray, Page 1559, Abstract).).

Claim 17. The combination of Chukka and Coudray teaches the computing system of claim 1, wherein the computing system further comprises: a pathology slide scanner system; and the one or more memories have stored thereon further instructions that, when executed by the one or more processors, cause the computing system to: receive, via the electronic network, the digital image from the pathology slide scanner system (Chukka states, "analyzing system 100 includes an imaging apparatus 120 and a computer system 110" and the “images are sent to a computer system 110 either through a direct connection or via a network 130” (Chukka, ¶ [0027]), where the computer system contains processors and memory executing instructions (Chukka, ¶ [0029])).

Claim 18. The combination of Chukka and Coudray teaches a non-transitory computer-readable medium comprising a set of computer executable instructions that, when executed by one or more processors, cause a computer to (Chukka, ¶¶ [0024]-[0033]) … The combination of Chukka and Coudray discloses the remaining elements recited in claim 18 for at least the reasons discussed in claim 1 above. The rationale provided for the rejection of claim(s) 1 is applicable to claim 18, mutatis mutandis. Accordingly, claim 18 is rendered obvious by the combination of Chukka and Coudray.

Claim 19. The combination of Chukka and Coudray teaches a computer-implemented method for identifying biomarkers in a digital image of a Hematoxylin and Eosin-stained slide of a target tissue (Chukka, ¶¶ [0024]-[0033]), comprising … The combination of Chukka and Coudray discloses the remaining elements recited in claim 19 for at least the reasons discussed in claim 1 above.
The rationale provided for the rejection of claim(s) 1 is applicable to claim 19, mutatis mutandis. Accordingly, claim 19 is rendered obvious by the combination of Chukka and Coudray.

Claims 3, 4, and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Chukka in view of Coudray as applied to claim 1 above, and further in view of Pan et al. (Cell detection in pathology and microscopy images with multi-scale fully convolutional neural networks – hereinafter “Pan”).

Claim 3. The combination of Chukka and Coudray teaches the computing system of claim 1, the one or more memories having stored thereon further instructions that, when executed by the one or more processors, cause the computing system to: receive, at the deep learning framework, at least one training Hematoxylin and Eosin stained slide image having a respective label corresponding to a respective biomarker (Coudray teaches receiving training H&E slide images having labels corresponding to respective biomarkers. Coudray states that for the training datasets, "each cell [was] associated to a mutation and set to 1 or 0 depending on the presence or absence of the mutation" (Coudray, Page 1567) and "we trained the network to predict the ten most commonly mutated genes" (Coudray, Page 1559).); classify the Hematoxylin and Eosin-stained slide image using tile-based tissue classification analysis (Coudray teaches classifying the slide image using a tile-based tissue classification analysis. Coudray states, "we trained inception v3 to recognize tumor versus normal ... the per-tile classification results were aggregated on a per-slide basis" (Coudray, Page 1560).); Neither Chukka nor Coudray explicitly teaches “analyzing the Hematoxylin and Eosin-stained slide image using a pixel-based cell segmentation.” However, Pan, in the same field of endeavor of pathology image analysis, teaches analyzing the Hematoxylin and Eosin-stained slide image using a pixel-based cell segmentation.
Specifically, Pan discloses utilizing a neural network approach for "regression of a density map to robustly detect the nuclei," explicitly operating at the pixel level by predicting density values continuously across the image space to segment individual cells (Pan, Page 1, Abstract). It would have been obvious to a person of ordinary skill in the art to integrate Pan's pixel-based Fully Convolutional Network into the Chukka and Coudray tile-based architecture. A person of ordinary skill in the art would be motivated to do so to resolve common pathology "data complexity (cell overlapping, inhomogeneous intensities, background clutters and image artifacts)" (Pan, Page 1, Abstract), because Pan's density map regression provides highly accurate, pixel-level cell isolation, yielding cleaner cellular inputs for deep learning biomarker prediction models.

Claim 4. The combination of Chukka, Coudray, and Pan teaches the computing system of claim 3, the one or more memories having stored thereon further instructions that, when executed by the one or more processors, cause the computing system to: identify a plurality of cells within the plurality of tile images using a trained cell segmentation model by: applying each of the plurality of tile images to a cell segmentation model and, for each tile image, assigning a cell classification to one or more pixels within the tile image (Pan teaches assigning values to individual pixels within a tile, stating "The typical application of the convolution network is the classification task, where the output of the image is a class label. However, in many visual tasks, especially in biomedical image processing, the expected output should include localization. That is to say, a class label should be assigned to each pixel." (Pan, p. 7, § 3.2.1)).
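For readers tracing the technical argument, the per-pixel classification Pan is cited for (a class label assigned to every pixel of a tile, rather than one label per tile) can be sketched as a toy example. The function name, the score values, and the three-class interior/border/exterior scheme (echoing claims 5-6) are illustrative assumptions, not details drawn from Pan:

```python
def classify_pixels(score_maps):
    """Assign a class label to every pixel of a tile.

    score_maps: H x W x C nested lists of per-pixel class scores,
    as a fully convolutional network would emit one score map per
    class.  Returns an H x W grid of integer class labels.
    """
    return [[scores.index(max(scores)) for scores in row]
            for row in score_maps]

# Toy 2x2 tile with 3 classes: 0 = exterior, 1 = border, 2 = interior.
scores = [[[0.9, 0.05, 0.05], [0.1, 0.8, 0.1]],
          [[0.2, 0.2, 0.6], [0.7, 0.2, 0.1]]]
labels = classify_pixels(scores)  # [[0, 1], [2, 0]]
```

Per-tile classification, by contrast, would collapse each tile's scores to a single label; that per-pixel versus per-tile distinction is the point the Pan quote draws.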
It would have been obvious to a person of ordinary skill in the art to incorporate Pan’s pixel-level classification into Chukka’s and Coudray’s system to provide precise, robust localization and boundaries for individual cells within complex, overlapping pathology images.

Claim 10. The combination of Chukka, Coudray, and Pan teaches the computing system of claim 1, wherein at least one of the trained deep learning classifier models is a tile-resolution Fully Convolutional Network (FCN) classification model (Pan explicitly teaches applying "a novel multi-scale fully convolutional neural networks approach" to image patches (tiles) to detect nuclei (Pan, Page 1, Abstract).). The rationale provided for the rejection of claim(s) 3 is applicable to claim 10, mutatis mutandis.

Claims 5-6 are rejected under 35 U.S.C. 103 as being unpatentable over Chukka, Coudray, and Pan as applied to claim 1 above, and further in view of Jones et al. (Voronoi-Based Segmentation of Cells on Image Manifolds – hereinafter “Jones”).

Claim 5. The combination of Chukka, Coudray, and Pan teaches the computing system of claim 4, the one or more memories having stored thereon further instructions that, when executed by the one or more processors, cause the computing system to: assign the cell classification to one or more pixels within the tile image by: identifying the one or more pixels as a cell interior (Chukka discloses an adaptive thresholding technique “to distinguish between darker objects (e.g., cell nuclei [interior]) and other objects (e.g., stroma and slide background [exterior])” ¶[0026].). Chukka, Coudray, and Pan disclose all of the subject matter as described above except for specifically teaching “a cell border.” However, Jones, in the same field of endeavor, teaches a cell border (p. 535, Abstract, “finding the borders of cells in microscopy images” and p. 541, “algorithm is only responsible for computing cell-cell boundaries”).
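The Voronoi-style boundary computation Jones is cited for can be sketched in a heavily simplified form: assign each pixel to its nearest seed point, and read cell-cell boundaries off wherever that assignment changes between neighbors. Jones actually computes Voronoi regions on an image manifold (distances weighted by image content); the plain Euclidean version below is an illustrative reduction, not Jones's method:

```python
def voronoi_labels(height, width, seeds):
    """Partition a pixel grid into Voronoi cells: each pixel is
    labeled with the index of its nearest seed under squared
    Euclidean distance.  Cell-cell boundaries lie wherever the
    label changes between neighboring pixels."""
    def nearest_seed(y, x):
        dists = [(y - sy) ** 2 + (x - sx) ** 2 for sy, sx in seeds]
        return dists.index(min(dists))
    return [[nearest_seed(y, x) for x in range(width)]
            for y in range(height)]

# Two cell centers on a 4x4 grid: top-left and bottom-right.
labels = voronoi_labels(4, 4, [(0, 0), (3, 3)])
```

Pixels near (0, 0) get label 0 and pixels near (3, 3) get label 1, so the anti-diagonal of the grid is the computed cell-cell boundary.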
Therefore, it would have been obvious to one of ordinary skill in the art to combine Chukka, Coudray, Pan, and Jones before the effective filing date of the claimed invention. The motivation for this combination of references would have been to more precisely and completely delineate individual cellular boundaries from the surrounding background tissue.

Claim 6. The combination of Chukka, Coudray, and Pan teaches the computing system of claim 4, wherein the trained cell segmentation model is a pixel-resolution three-dimensional classification model (Chukka ¶[0048] discloses a “RGB input image”) trained to classify a cell interior, a cell border, and a cell exterior (Chukka discloses an adaptive thresholding technique “to distinguish between darker objects (e.g., cell nuclei [interior]) and other objects (e.g., stroma and slide background [exterior])” ¶[0026]; Jones, p. 535, Abstract, “finding the borders of cells in microscopy images” and p. 541, “algorithm is only responsible for computing cell-cell boundaries”). The rationale provided for the rejection of claim(s) 5 is applicable to claim 6, mutatis mutandis.

Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Chukka in view of Coudray as applied to claim 1 above, and further in view of Applicant Admitted Prior Art (“AAPA”).

Claim 11.
The combination of Chukka and Coudray teaches the computing system of claim 1, wherein the one or more biomarkers include at least one of a tumor-infiltrating lymphocyte (TIL) biomarker, a nucleus-to-cytoplasm (NC) ratio biomarker, a ploidy biomarker, a signet ring morphology biomarker, a programmed death-ligand 1 (PD-L1) biomarker, a consensus molecular subtype (CMS) biomarker, a human epidermal growth factor receptor 2 (HER2) biomarker, or a homologous recombination deficiency (HRD) biomarker (the '240 specification's AAPA explicitly admits that utilizing predictive models to identify "CMS classifications" and "CMS category assignment" was known in the prior art (AAPA ¶¶ [0146], [0147]). Alternatively, Chukka teaches predicting “gene status in breast carcinomas” (¶ [0023]), and a POSITA would readily recognize HER2 as the ubiquitous, standard gene status biomarker tested in breast carcinomas).

It would have been obvious to a person of ordinary skill in the art to configure the Chukka and Coudray deep learning system to specifically predict the Consensus Molecular Subtype (CMS) biomarkers admitted as prior art (AAPA). The AAPA establishes CMS classifications as known, clinically significant molecular profiles derived from RNA expression. An artisan would be strongly motivated to apply Coudray's technique of predicting genetic mutations directly from optical H&E tiles to these known CMS biomarkers, thereby expanding the system's prognostic capabilities without requiring costly or time-consuming physical sequencing assays.

Conclusion

The prior art made of record and not relied upon, but considered pertinent to applicant's disclosure, is listed on the PTO-892 form. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Ross Varndell, whose telephone number is (571) 270-1922. The examiner can normally be reached M-F, 9-5 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, O’Neal Mistry, can be reached at (313) 446-4912. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Ross Varndell/
Primary Examiner, Art Unit 2674
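For orientation, the tile-based pipeline the §103 analysis repeatedly cites (an independent binary score per biomarker for each tile, aggregated on a per-slide basis) can be sketched as below. The mean aggregator, the 0.5 decision threshold, and the gene panel are illustrative assumptions, not details taken from Coudray or Chukka:

```python
BIOMARKERS = ["EGFR", "KRAS", "TP53"]  # illustrative panel only

def slide_call(tile_probs, threshold=0.5):
    """Aggregate per-tile probabilities (one row per tile, one
    column per biomarker) into a per-slide presence call.  Each
    biomarker is an independent binary classification, so several
    biomarkers can be called present on the same slide."""
    n_tiles = len(tile_probs)
    slide_probs = [sum(tile[i] for tile in tile_probs) / n_tiles
                   for i in range(len(BIOMARKERS))]
    return {gene: p > threshold
            for gene, p in zip(BIOMARKERS, slide_probs)}

tile_probs = [[0.9, 0.2, 0.7],
              [0.8, 0.1, 0.6],
              [0.7, 0.3, 0.8]]
calls = slide_call(tile_probs)  # {'EGFR': True, 'KRAS': False, 'TP53': True}
```

Treating each biomarker as its own binary decision is what lets multiple biomarkers be assigned to a single tile, and per-slide aggregation is the step the examiner maps to determining the "predicted presence" of biomarkers in the target tissue.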

Prosecution Timeline

Feb 05, 2024
Application Filed
Feb 21, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603810
System and Method for Communications Beam Recovery
2y 5m to grant • Granted Apr 14, 2026
Patent 12597238
AUTOMATIC IMAGE VARIETY SIMULATION FOR IMPROVED DEEP LEARNING PERFORMANCE
2y 5m to grant • Granted Apr 07, 2026
Patent 12582348
DEVICE AND METHOD FOR INSPECTING A HAIR SAMPLE
2y 5m to grant • Granted Mar 24, 2026
Patent 12579441
SYSTEMS AND METHODS FOR IMAGE RECONSTRUCTION
2y 5m to grant • Granted Mar 17, 2026
Patent 12579786
SYSTEM AND METHOD FOR PROPERTY TYPICALITY DETERMINATION
2y 5m to grant • Granted Mar 17, 2026
Based on the 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
85%
Grant Probability
98%
With Interview (+13.0%)
2y 4m
Median Time to Grant
Low
PTA Risk
Based on 615 resolved cases by this examiner. Grant probability derived from career allow rate.
