Prosecution Insights
Last updated: April 19, 2026
Application No. 17/998,037

SYSTEMS AND METHODS FOR CHARACTERIZING A TUMOR MICROENVIRONMENT USING PATHOLOGICAL IMAGES

Non-Final OA: §103, §112
Filed: Nov 06, 2022
Examiner: SHARIFF, MICHAEL ADAM
Art Unit: 2672
Tech Center: 2600 — Communications
Assignee: The Board Of Regents Of The University Of Texas System
OA Round: 3 (Non-Final)
Grant Probability: 82% (Favorable)
OA Rounds: 3-4
To Grant: 2y 10m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 82% (94 granted / 115 resolved), +19.7% vs TC avg (above average)
Interview Lift: +22.3% (strong; resolved cases with vs. without interview)
Avg Prosecution: 2y 10m (16 currently pending)
Total Applications: 131, across all art units

Statute-Specific Performance

§101: 17.9% (-22.1% vs TC avg)
§103: 43.1% (+3.1% vs TC avg)
§102: 18.6% (-21.4% vs TC avg)
§112: 16.4% (-23.6% vs TC avg)
Tech Center averages are estimates. Based on career data from 115 resolved cases.
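The headline figures above are internally consistent and can be reproduced from the raw counts shown (the Tech Center average is implied by the stated delta, not reported directly):

```python
# Career allow rate from the resolved-case counts shown above.
granted, resolved = 94, 115
allow_rate = 100 * granted / resolved
print(f"Career allow rate: {allow_rate:.1f}%")   # 81.7%, displayed as 82%

# The "+19.7% vs TC avg" delta implies the Tech Center average:
tc_avg = allow_rate - 19.7
print(f"Implied TC average allow rate: {tc_avg:.1f}%")
```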

Office Action

Rejections: §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/17/2025 has been entered.

Response to Arguments

Applicant's arguments, filed 10/15/2025, regarding the rejection of the claims under 35 U.S.C. 101 have been fully considered and are persuasive. Therefore, the rejection of the claims under 35 U.S.C. 101 has been withdrawn. Applicant’s arguments, see remarks, filed 08/01/2025, with respect to the rejection of independent claims 1, 13, and 19 under 35 U.S.C. 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made in view of U.S. Patent Application Publication No. 2019/0259154 (Madabhushi et al.) and U.S. Patent Application Publication No. 2021/0279866 (Svekolkin et al.) under 35 U.S.C. 103.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

Claim 3 is rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention.
Specifically, the claim improperly depends from claim 2, which has been canceled, and should depend from independent claim 1. Appropriate correction is required.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “histology-based digital staining system” in claims 1, 13, and 19.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3-7, 9-12, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. 2017/0270666 (Barnes et al.) (hereinafter Barnes), in view of non-patent literature “An automatic nuclei segmentation method based on deep convolutional neural networks for histopathology images”; BMC Biomed Eng 1, 24 (2019) (Jung et al.) (hereinafter Jung), and in view of U.S. Patent Application Publication No. 2019/0259154 (Madabhushi et al.) (hereinafter Madabhushi).

Regarding claim 1, Barnes teaches one or more non-transitory computer-readable storage media storing computer-executable instructions for performing a computer process on a computing system, the computer process comprising: (Barnes, para. [0011], lines 1-5: “In yet another exemplary embodiment, the subject disclosure comprises a tangible non-transitory computer-readable medium to store computer-readable code that is executed by a processor to perform operations. The system includes a processor and a memory coupled to the processor, the memory configured to store computer-readable instructions that, when executed by the processor, cause the processor to perform operations”)

receiving a pathological image of patient tissue of a patient, the patient tissue including a plurality of cells (Barnes, para. [0064]; para. [0056]; FIG. 2B: “FIG. 4 shows a method for early-stage prognosis, according to an exemplary embodiment of the subject disclosure. This method may use components described with reference to system 100, or other components that perform similar functions. For instance, an image series corresponding to a single patient undergoing diagnosis may be received (S401) from an imaging system or any other input. The image series may include data in the form of color channels or frequency channels representing serial sections of tissue stained with various biomarkers.
Example biomarkers include biomarkers for estrogen receptors (ER), human epidermal growth factor receptors 2 (HER2), Ki-67, and progesterone receptors (PR). The imaging system may include the ISCAN COREO™ product of the assignee Ventana Medical Systems, Inc. The image data corresponds to cancerous or significantly cancerous sections retrieved from a single patient.”; “FIG. 2A shows a series of images of serial tissue sections, according to an exemplary embodiment of the subject disclosure”; [image omitted]);

simultaneously segmenting and classifying nuclei of the plurality of cells using a histology-based digital staining system, the nuclei of the plurality of cells segmented according to spatial location and classified according to cell type, thereby generating one or more groups of nuclei, each of the one or more groups of nuclei having an identified cell type (Barnes, para. [0065]; para. [0057]; FIG. 4; FIG. 2A; para. [0013]: “Once the image data is received (S401), an image in a series of images corresponding to slides comprising serial tissue sections may be displayed on a user interface for field-of-view (FOV) selection and annotation (S403). Several annotation mechanisms (S403) may be provided, such as designating known or irregular shapes, or defining an anatomic region of interest (e.g., tumor region). In one example, the field of view is a whole slide, whole tumor region, or whole tissue section. The annotation (S403) annotates the FOV on the first slide and a registration operation (S405) maps the annotations across the remainder of the slides. As described herein, several methods for annotation and registration may be utilized, depending on the defined FOV.
For example, a whole tumor region on a Hematoxylin and Eosin (H&E) slide from among the plurality of serial slides may be defined, and registration operation (S405) maps and transfers the whole tumor annotations from the H&E slide to each of the remaining IHC slides in the series. Alternatively, representative regions or “hot spots” may be identified on a Ki67 digitized whole slide, and may be mapped to equivalent annotated regions on the other IHC slides.”; a Hematoxylin and Eosin (H&E) slide is a histology-based digital staining system; “FIG. 2B shows an alternate means for FOV selection using representative regions or “hot spots” 231 on a Ki67 digitized whole slide 225. Hot spots are specific regions of the whole slide that contain relatively high and heterogeneous amounts of Ki67 protein. The FOV 231 may, for instance, be in the form of a rectangular shape 231. Other embodiments may provide a manually drawn FOV selection, or automated image analysis algorithms may highlight such FOV regions on the Ki67 slide 225. An inter-marker registration operation as described above may be used to map these “hot spots” to equivalent annotated regions on the other IHC slides such as ER 226, PR 227, and H&E slide 228. Shown on the right hand side of FIG. 2B are the zoomed-in versions of these hot spots, depicted at 20× magnification. Additional IHC slides not depicted by FIG. 2B or 2A may be similarly annotated, such as HER2.
In either case, whether the whole tumor or only “hot spots” are annotated, the corresponding regions on the remaining slides necessarily correspond to similar tissue types, assuming the magnification remains constant across the series.”; [images omitted]; “A ‘multi-channel image’ as understood herein encompasses a digital image obtained from a biological tissue sample in which different biological structures, such as nuclei and tissue structures, are simultaneously stained with specific fluorescent dyes, each of which fluoresces in a different spectral band thus constituting one of the channels of the multi-channel image. The biological tissue sample may be stained by a plurality of stains and/or by a stain and a counterstain, the later being also referred to as a “single marker image”.);

and determining a composition and a spatial organization of a tumor microenvironment of the patient tissue based on the one or more groups of nuclei (Barnes, para. [0066]; para. [0078]: “Given the FOV, image analysis operations are used to compute scores (S407) for each slide. The scores for each slide may be based on a determination of a percent positivity, as well as a regional heterogeneity. Tumor nuclei that are positively and negatively stained for a particular biomarker, such as Ki67, ER, PR, HER2, etc. are counted, and a percent positivity is computed. Additional scoring mechanisms may be employed, such as H-scores representing regional heterogeneity of a particular marker or protein … The resulting slide-level scores may be combined together to generate IHC3, IHC4, or IHCn scores for the series of slides, depending on the number of individually-stained slides. Any scores computed from the H&E slide can also be included to the information from IHC slides to accordingly specify a different risk scoring metric.
The scores are based on, for example, a whole-tumor FOV selection or on a “hot spot” FOV selection.”; “In some embodiments, a computer system can be programmed to automatically identify features in an image of a specimen based at least in part on one or more selection criteria, including criteria based at least in part on color characteristics, sample morphology (e.g., cell component morphology, cell morphology, tissue morphology, anatomical structure morphology, etc.), tissue characteristics (e.g., density, composition, or the like), spatial parameters (e.g., arrangement of tissue structures, relative positions between tissue structures, etc.), image characteristic parameters, or the like. If the features are nuclei, the selection criteria can include, without limitation, color characteristics, nuclei morphology (e.g., shape, dimensions, composition, etc.), spatial parameters (e.g., position of nuclei in cellular structure, relative position between nuclei, etc.), image characteristics, combinations thereof, or the like. After detecting candidate nuclei, algorithms can be used automatically to provide a score or information about the entire analyzed image.”).

Barnes fails to teach generating, using a mask regional convolutional neural network (Mask R-CNN) comprising a region proposal network, a classification branch, and a mask-generation branch, one or more masks corresponding to nuclei of the plurality of cells in the pathological image; and simultaneously segmenting and classifying the nuclei of the plurality of cells using a histology-based digital staining system by applying the one or more masks.
Jung teaches generating, using a mask regional convolutional neural network (Mask R-CNN) comprising a region proposal network, a classification branch, and a mask-generation branch, one or more masks corresponding to nuclei of the plurality of cells in the pathological image; and simultaneously segmenting and classifying the nuclei of the plurality of cells using a histology-based digital staining system by applying the one or more masks (Jung, page 4, right column; section Nuclei Segmentation; pages 5-6; FIG. 3: “Mask R-CNN [31] is a state-of-the-art object segmentation framework that can identify not only the location of any object but also its segmented mask. Mask R-CNN extends the object detection model Faster R-CNN [32] by adding a third branch for predicting segmentation masks to the existing branches for classification and bounding box regression. Mask R-CNN is a two-stage framework. In the first stage, it scans an input image and finds areas that may contain an object using a Region Proposal Network (RPN). It predicts the classes of proposed areas, refines the bounding box, and generates masks for an object at the pixel level in the next stage based on the proposed areas from the first stage … “While the original Mask R-CNN used 5 scales with box areas starting from 1282, which is suitable for the COCO dataset, we modify the anchor sizes since nuclei are much smaller than the objects in the COCO dataset. 
We obtain segmentation results of Mask R-CNN on the top 1000 candidates to detect a large number of nuclei.”; “we apply Mask R-CNN as well as color normalization and multiple inference to segment nuclei in H&E stained histopathology images”; Mask R-CNN is used for simultaneous object detection and instance segmentation; in Mask R-CNN, the nucleus is segmented from the rest of the image, meaning the nucleus pixels will be assigned one color (say, blue) and all the background pixels another (say, yellow), which is a simultaneous segmentation and classification; [image omitted]).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the computer process, as taught by Barnes, to include the step of generating, using a mask regional convolutional neural network (Mask R-CNN) comprising a region proposal network, a classification branch, and a mask-generation branch, one or more masks corresponding to nuclei of the plurality of cells in the pathological image, as taught by Jung; further, it would have been obvious to modify the step of simultaneously segmenting and classifying nuclei of the plurality of cells using a histology-based digital staining system, as taught by Barnes, to be done by applying the one or more masks, as further taught by Jung. The suggestion/motivation for doing so would have been that a Mask R-CNN allows for highly accurate instance segmentation: it not only detects objects but also precisely delineates their boundaries, which allows for accurate identification of multiple instances of the same object in an image; this has application in medical imaging, where a tissue image can contain numerous cell nuclei.
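A Mask R-CNN-style model as described by Jung emits one binary mask plus a class label per detected nucleus, and the per-nucleus centroids that the later limitations operate on follow directly from those masks. A minimal NumPy sketch of that post-processing step (the trained network is elided; the two small masks below are toy stand-ins for its output, and the labels are illustrative):

```python
import numpy as np

def mask_centroids(masks, labels):
    """Pair each instance mask (H x W boolean array) with the centroid
    (mean row, mean col) of its foreground pixels and its class label."""
    results = []
    for mask, label in zip(masks, labels):
        rows, cols = np.nonzero(mask)
        results.append(((rows.mean(), cols.mean()), label))
    return results

# Toy stand-ins for two predicted instance masks on a 5x5 tile.
m1 = np.zeros((5, 5), dtype=bool); m1[1:3, 1:3] = True   # "nucleus" A
m2 = np.zeros((5, 5), dtype=bool); m2[3:5, 3:5] = True   # "nucleus" B
for (r, c), label in mask_centroids([m1, m2], ["tumor", "lymphocyte"]):
    print(f"{label}: centroid = ({r:.1f}, {c:.1f})")
```

The centroid list is exactly the input a downstream triangulation or graph-construction step would consume.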
Barnes, in view of Jung, fails to teach the segmentation and classification of the nuclei of the plurality of cells producing nuclei segmentation results including spatial locations of a plurality of centroids corresponding to the nuclei; determining a composition and a spatial organization of a tumor microenvironment of the patient tissue based on one or more groups of nuclei by applying a triangulation algorithm to the spatial locations of the plurality of centroids of the nuclei segmentation results; and generating a spatial connectivity graph representing one or more relationships between the nuclei of the plurality of the cells, the spatial connectivity graph comprising one or more edges encoding one or more inter-cell distances and one or more local topological features.

Madabhushi teaches the segmentation and classification of the nuclei of the plurality of cells producing nuclei segmentation results including spatial locations of a plurality of centroids corresponding to the nuclei (Madabhushi, para. [0021]; para. [0035]; para. [0006]; FIG. 2: “Embodiments quantitatively evaluate the spatial arrangement of nuclei through the construction of a CG or CGs. A graph is a mathematical construct comprising of a finite sets of objects (nodes) that capture global and local relationships via pair-wise connections (edges) between the nodes. Graphs may be used to quantitatively characterize nuclear architecture in histopathological images by representing the nuclei as nodes and subsequently quantifying neighborhood relationships (e.g., proximity) and spatial arrangement between the nodes.”; “Operations 100 also includes, at 140, generating at least one nuclear cell graph (CG) based on the plurality of segmented cellular nuclei. In one embodiment, a node of the at least one nuclear CG is defined on a centroid of a member of the plurality of cellular nuclei.
A first node is connected to a second, different node based on a Euclidean distance between the first node and the second node. In another embodiment, the centroid of a local nuclei cluster is used as a node, and a plurality of nodes is used to construct the global CG. The probability a first node will be linked with a second, different node is based on an exponentially decaying function of the Euclidean distance between the nodes.”; “FIG. 2 illustrates segmented cellular nuclei in NSCLC tissue”; [image omitted]);

determining a composition and a spatial organization of a tumor microenvironment of the patient tissue based on one or more groups of nuclei by applying a triangulation algorithm to the spatial locations of the plurality of centroids of the nuclei segmentation results (Madabhushi, para. [0023]; para. [0045]: “Embodiments compute a set of cell graph features based on the CG. The set of cell graph features capture tumor morphology within the microenvironment of the tumor. These features may include first-order statistics (e.g. mean, mode, median) of the representative descriptors. In one embodiment, the set of cell graph features may include a Delaunay side length disorder of the cells feature. The set of cell graph features may also include a Delaunay ratio of the minimum and maximum triangular areas formed by cells feature. The set of cell graph features may also include a number of possible triangles formed from cells (i.e., nodes) of the cell graph feature. Other cell graph features may be computed”; “Operations 1100 also includes, at 1130, extracting a set of cellular graph (CG) features from the set of digitized images. In one embodiment, the set of CG features includes at least one of a Delaunay triangulation feature or a Voronoi feature.
In one embodiment, the set of CG features includes a side length disorder of a Delaunay triangulation feature, a ratio of minimum and maximum triangular areas formed by nodes of the CG, and a number of possible polygons formed by nodes of the CG. In this embodiment, a polygon is a triangle.”); and generating a spatial connectivity graph representing one or more relationships between the nuclei of the plurality of the cells, the spatial connectivity graph comprising one or more edges encoding one or more inter-cell distances and one or more local topological features (Madabhushi, para. [0020]- [0021]: “Embodiments quantitatively evaluate the spatial arrangement of nuclei through the construction of a CG or CGs. A graph is a mathematical construct comprising of a finite sets of objects (nodes) that capture global and local relationships via pair-wise connections (edges) between the nodes. Graphs may be used to quantitatively characterize nuclear architecture in histopathological images by representing the nuclei as nodes and subsequently quantifying neighborhood relationships (e.g., proximity) and spatial arrangement between the nodes”; “Embodiments further construct a nuclear cell graph (CG) based on the cellular nuclei represented in the digitized H&E stained image. In one embodiment, the cell graph is a global cell graph in which each nucleus represented in the digitized H&E stained image defines a node of the graph. Embodiments may define nodes on all the cellular nuclei represented in the digitized H&E image. Thus, embodiments may define nodes of the CG on different types of nuclei. For example, embodiments may define nodes on cancer cell nuclei and on tumor infiltrating lymphocytes, or on other types of cellular nuclei. Nodes may be connected based on distance metrics such as Euclidean Distance between nodes, or the L1 norm. 
In another embodiment, a threshold number of nuclei (e.g., 50%, 75%, or 90%) represented in the digitized H&E stained image may be employed to define nodes of the graph.”).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to:

1) modify the segmentation and classification of the nuclei of the plurality of cells, as taught by Barnes, in view of Jung, to produce nuclei segmentation results including spatial locations of a plurality of centroids corresponding to the nuclei, as taught by Madabhushi;

2) modify the step of determining a composition and a spatial organization of a tumor microenvironment of the patient tissue based on one or more groups of nuclei, as taught by Barnes, in view of Jung, to include applying a triangulation algorithm to the spatial locations of the plurality of centroids of the nuclei segmentation results, as taught by Madabhushi; and

3) modify the computer process, as taught by Barnes, in view of Jung, to include the step of generating a spatial connectivity graph representing one or more relationships between the nuclei of the plurality of the cells, the spatial connectivity graph comprising one or more edges encoding one or more inter-cell distances and one or more local topological features, as taught by Madabhushi.

The suggestion/motivation for doing so would have been that “this technique improves on those employed by existing approaches to segmenting nuclei by being computationally simpler and faster; this technique also facilitates the adjustment and fine-tuning of parameters with greater simplicity than techniques used by existing approaches, thereby providing the technical effect of improving the performance of computers, systems, or other apparatus on which embodiments are implemented” (Madabhushi, para.
[0033]); a further suggestion/motivation would have been that a “personalized cancer treatment plan may be generated based, at least in part, on the classification and at least one of the probability, the set of nuclear radiomic features, the set of CG features, or the digitized image … defining a personalized cancer treatment plan facilitates delivering a particular treatment that will be therapeutically active to the patient, while minimizing negative or adverse effects experienced by the patient” (Madabhushi, para. [0051]-[0053]). Therefore, it would have been obvious to combine Barnes with Jung and Madabhushi to obtain the invention as specified in claim 1.

Regarding claim 3, Barnes, in view of Jung, and in view of Madabhushi, teaches the one or more non-transitory computer-readable storage media of claim 1. Barnes, in view of Jung, and in view of Madabhushi, fails to teach wherein the mask regional convolutional network is trained using a plurality of training pathological images, and each of the plurality of training pathological images is manually labeled.

Jung further teaches wherein the mask regional convolutional network is trained using a plurality of training pathological images, and each of the plurality of training pathological images is manually labeled (Jung, page 6, Experiment and Results, para. 2; Table 2: “The first dataset is the multiple organ H&E stained histopathology image dataset (MOSID) [20]. It contains a total of 30 images and the spatial size of each image is 1000×1000. Histopathology images of the following seven organs were collected: breast, kidney, liver, prostate, bladder, colon, and stomach. We divide the dataset into a training set and test set as shown in Table 2. Histopathology images of the bladder, colon, and stomach are included in only the test set.”; [image omitted]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the mask regional convolutional network, as taught by Barnes, in view of Jung, and in view of Madabhushi, to be trained using a plurality of training pathological images, each of which is manually labeled, as further taught by Jung. The suggestion/motivation for doing so would have been that manually labeled pathology images provide the essential, high-quality "ground truth" data (e.g., tumor boundaries, cell types) that supervised machine learning models need to learn accurately, defining what is what (e.g., normal vs. cancerous) for precise pattern recognition; this leads to more reliable AI diagnostics, better disease detection, and improved patient outcomes by correcting model errors and handling complex visual features beyond simple algorithms. Therefore, it would have been obvious to combine Barnes, Jung, and Madabhushi, with the further teachings of Jung, to obtain the invention as specified in claim 3.

Regarding claim 4, Barnes, in view of Jung, and in view of Madabhushi, teaches the one or more non-transitory computer-readable storage media of claim 1, wherein the patient tissue is at least one of lung tissue, breast tissue, head tissue, or neck tissue (Barnes, para. [0067], lines 10-13: “Based on the training workflow, optimized cut-off points are provided from database 418 for enabling the scores to be stratified (S411) into low-risk and high-risk groups for cancer recurrence besides medical applications such as anatomical or clinical pathology, prostrate/lung cancer diagnosis, etc.”).
Regarding claim 5, Barnes, in view of Jung, and in view of Madabhushi, teaches the one or more non-transitory computer-readable storage media of claim 1, wherein the cell type includes at least one of tumor cells, stromal cells, macrophages, red blood cells, lymphocytes, or karyorrhexis (Barnes, para. [0065]; para. [0057]; FIG. 4; FIG. 2A; para. [0013]; see rejection of claim 1 above for the discussion of tumor cells). Regarding claim 6, Barnes, in view of Jung, and in view of Madabhushi, teaches the one or more non-transitory computer-readable storage media of claim 1, wherein the plurality of cells is stained in the image using one or more colors according to the composition and the spatial organization of the tumor microenvironment (Barnes, para. [0042]: “For instance, input data 102 may provide a means for inputting image data from one or more scanned IHC slides to memory 110. Image data may include data related to color channels or color wavelength channels, as well as details regarding a staining and/or imaging process. For instance, a tissue section may require staining by means of application of a staining assay containing one or more different biomarkers associated with chromogenic stains for brightfield imaging or fluorophores for fluorescence imaging. Staining assays can use chromogenic stains for brightfield imaging, organic fluorophores, quantum dots, or organic fluorophores together with quantum dots for fluorescence imaging, or any other combination of stains, biomarkers, and viewing or imaging devices. Example biomarkers include biomarkers for estrogen receptors (ER), human epidermal growth factor receptors 2 (HER2), Ki-67, and progesterone receptors (PR), wherein the tissue section is detectably labeled with antibodies for each of ER, HER2, Ki-67 and PR. 
In some embodiments of the subject disclosure, the operations of scoring, cox modeling, and risk stratification are depending on the type of biomarker being used as well as the field-of-view (FOV) selection and annotations. Therefore, any other biomarker tissue slides (like immune markers or some other additional markers) will trigger slide image analysis and scoring specific to the particular marker and include those scores in the Cox model fitting process.”). Regarding claim 7, Barnes, in view of Jung, and in view of Madabhushi, teaches the one or more non-transitory computer-readable storage media of claim 1, wherein the composition and the spatial organization of the tumor microenvironment is further determined based on image features extracted using connections between the plurality of centroids corresponding to each of the nuclei of the plurality of cells (Madabhushi, para. [0021]; para. [0035]; para. [0006]; FIG. 2; para. [0023]; para. [0045]; para. [0020]-[0021]; see rejection of claim 1 above; centroids of nuclei clusters are used as nodes in a cell graph (CG), with edges connecting the nodes; features are extracted from the cell graph using Delaunay triangulation, and the connections (edge strengths) between the nodes are determined using Euclidean distance). Regarding claim 9, Barnes, in view of Jung, and in view of Madabhushi, teaches the one or more non-transitory computer-readable storage media of claim 1, further comprising: generating a prognostic model for the patient based on the composition and the spatial organization of the tumor microenvironment (Barnes, para. [0067]: “The IHC3 or IHC4 combination scores and the combined regional heterogeneity scores may then be entered into a Cox proportional hazards regression model (S409) to maximize the combined predictive capabilities of both measures.
The Cox proportional hazards regression model models time to distant recurrence by taking two variables and finding the best logistic combination of the two to predict time to distant recurrence. Depending upon the type of FOV selected, a plurality of coefficients or parameters for the Cox model may be retrieved from parameter database 418. The coefficients may be based on training data for similar workflows as described with respect to FIG. 3, thereby enabling survival predictions for the slide series of the individual patient being tested. Based on the training workflow, optimized cut-off points are provided from database 418 for enabling the scores to be stratified (S411) into low-risk and high-risk groups for cancer recurrence besides medical applications such as anatomical or clinical pathology, prostrate/lung cancer diagnosis, etc.,”; see steps S409 and S411 in the flowchart of FIG. 4 in the rejection of claim 1 above). Regarding claim 10, Barnes, in view of Jung, and in view of Madabhushi, teaches the one or more non-transitory computer-readable storage media of claim 9, wherein the prognostic model includes a risk score (Barnes, para. [0067]; see rejection of claim 9 above). Regarding claim 11, Barnes, in view of Jung, and in view of Madabhushi, teaches the one or more non-transitory computer-readable storage media of claim 10, further comprising: assigning the patient to a risk group corresponding to a predicted survival outcome based on the risk score (Barnes, para. [0067]; see rejection of claims 9-10 above). Regarding claim 12, Barnes, in view of Jung, and in view of Madabhushi, teaches the one or more non-transitory computer-readable storage media of claim 1, wherein the image is a patch from a larger image (Barnes, para. [0037]; para. [0044]: “The tissue slides may represent the time of diagnosis of the patient. 
The tissue slides may be processed according to a specific staining protocol and stains or biomarkers may be scored using a specific scoring protocol. For example, a series of histopathological simplex and/or multiplex tissue slides from serial sections of cancerous tissue block corresponding to each patient and stained with H&E and multiple IHC tumor and immune markers (such as tumor markers ER, PR, Ki67, HER2, etc. and/or immune markers such as CD3, CD8, CD4 etc.) are digitized using a digital pathology scanning system, for example, on a whole slide scanner or a digital microscope.”; each section can be thought of as a “patch” or section of a larger image); “For example, a qualified reader such as a pathologist may annotate a whole-tumor region on any other IHC slide, and execute registration module 112 to map the whole tumor annotations on the other digitized slides. For example, a pathologist (or automatic detection algorithm) may annotate a whole-tumor region on an H&E slide triggering an analysis of all adjacent serial sectioned IHC slides to determine whole-slide tumor scores for the annotated regions on all slides.”). Regarding claim 19, Barnes teaches a system for characterizing patient tissue of a patient, the system comprising: the pathological image captured using a tissue slide scanning kit (Barnes, abstract; para. [0007]: “The subject disclosure presents systems and computer-implemented methods for providing reliable risk stratification for early-stage cancer patients by predicting a recurrence risk of the patient and to categorize the patient into a high or low risk group. 
A series of slides depicting serial sections of cancerous tissue are automatically analyzed by a digital pathology system, a score for the sections is calculated, and a Cox proportional hazards regression model is used to stratify the patient into a low or high risk group.”; “The present invention provides for an computational pathology system, where a digital pathology system is used to digitizing cancer biopsy tissue samples followed with using image analysis workflow methods for analyzing the digitized tissue slides and statistical analysis methods to correlate the obtained biomarker expressions in the tissue samples with the patient survival outcome information to construct and clinical use a prognostic model for a prognostic and predictive evaluation of cancer tissue samples, such as early stage cancer prognosis”) a plurality of cells in a pathological image of the patient tissue of the patient (Barnes, para. [0064]; para. [0056]; FIG. 2B: “FIG. 4 shows a method for early-stage prognosis, according to an exemplary embodiment of the subject disclosure. This method may use components described with reference to system 100, or other components that perform similar functions. For instance, an image series corresponding to a single patient undergoing diagnosis may be received (S401) from an imaging system or any other input. The image series may include data in the form of color channels or frequency channels representing serial sections of tissue stained with various biomarkers. Example biomarkers include biomarkers for estrogen receptors (ER), human epidermal growth factor receptors 2 (HER2), Ki-67, and progesterone receptors (PR). The imaging system may include the ISCAN COREO™ product of the assignee Ventana Medical Systems, Inc. The image data corresponds to cancerous or significantly cancerous sections retrieved from a single patient.”; “FIG. 
2A shows a series of images of serial tissue sections, according to an exemplary embodiment of the subject disclosure”); a histology-based digital staining system simultaneously segmenting and classifying nuclei, the nuclei of the plurality of cells segmented according to spatial location and classified according to type (Barnes, para. [0065]; para. [0057]; FIG. 4; FIG. 2A; para. [0013]: “Once the image data is received (S401), an image in a series of images corresponding to slides comprising serial tissue sections may be displayed on a user interface for field-of-view (FOV) selection and annotation (S403). Several annotation mechanisms (S403) may be provided, such as designating known or irregular shapes, or defining an anatomic region of interest (e.g., tumor region). In one example, the field of view is a whole slide, whole tumor region, or whole tissue section. The annotation (S403) annotates the FOV on the first slide and a registration operation (S405) maps the annotations across the remainder of the slides. As described herein, several methods for annotation and registration may be utilized, depending on the defined FOV. For example, a whole tumor region on a Hematoxylin and Eosin (H&E) slide from among the plurality of serial slides may be defined, and registration operation (S405) maps and transfers the whole tumor annotations from the H&E slide to each of the remaining IHC slides in the series. Alternatively, representative regions or “hot spots” may be identified on a Ki67 digitized whole slide, and may be mapped to equivalent annotated regions on the other IHC slides.”; Hematoxylin and Eosin (H&E) slide is a histology-based digital staining system; “FIG. 2B shows an alternate means for FOV selection using representative regions or “hot spots” 231 on a Ki67 digitized whole slide 225.
Hot spots are specific regions of the whole slide that contain relatively high and heterogeneous amounts of Ki67 protein. The FOV 231 may, for instance, be in the form of a rectangular shape 231. Other embodiments may provide a manually drawn FOV selection, or automated image analysis algorithms may highlight such FOV regions on the Ki67 slide 225. An inter-marker registration operation as described above may be used to map these “hot spots” to equivalent annotated regions on the other IHC slides such as ER 226, PR 227, and H&E slide 228. Shown on the right hand side of FIG. 2B are the zoomed-in versions of these hot spots, depicted at 20× magnification. Additional IHC slides are not depicted by FIG. 2B or 2A may be similarly annotated, such as HER2. In either case, whether the whole tumor or only “hot spots” are annotated, the corresponding regions on the remaining slides necessarily correspond to similar tissue types, assuming the magnification remains constant across the series.”; “A ‘multi-channel image’ as understood herein encompasses a digital image obtained from a biological tissue sample in which different biological structures, such as nuclei and tissue structures, are simultaneously stained with specific fluorescent dyes, each of which fluoresces in a different spectral band thus constituting one of the channels of the multi-channel image. The biological tissue sample may be stained by a plurality of stains and/or by a stain and a counterstain, the later being also referred to as a “single marker image”.); and the histology-based digital staining system determining a composition and a spatial organization of a tumor microenvironment of the patient tissue based on the one or more groups of nuclei (Barnes, para. [0066]; para. [0078]: “Given the FOV, image analysis operations are used to compute scores (S407) for each slide.
The scores for each slide may be based on a determination of a percent positivity, as well as a regional heterogeneity. Tumor nuclei that are positively and negatively stained for a particular biomarker, such as Ki67, ER, PR, HER2, etc. are counted, and a percent positivity is computed. Additional scoring mechanisms may be employed, such as H-scores representing regional heterogeneity of a particular marker or protein … The resulting slide-level scores may be combined together to generate IHC3, IHC4, or IHCn scores for the series of slides, depending on the number of individually-stained slides. Any scores computed from the H&E slide can also be included to the information from IHC slides to accordingly specify a different risk scoring metric. The scores are based on, for example, a whole-tumor FOV selection or on a “hot spot” FOV selection.”; “In some embodiments, a computer system can be programmed to automatically identify features in an image of a specimen based at least in part on one or more selection criteria, including criteria based at least in part on color characteristics, sample morphology (e.g., cell component morphology, cell morphology, tissue morphology, anatomical structure morphology, etc.), tissue characteristics (e.g., density, composition, or the like), spatial parameters (e.g., arrangement of tissue structures, relative positions between tissue structures, etc.), image characteristic parameters, or the like. If the features are nuclei, the selection criteria can include, without limitation, color characteristics, nuclei morphology (e.g., shape, dimensions, composition, etc.), spatial parameters (e.g., position of nuclei in cellular structure, relative position between nuclei, etc.), image characteristics, combinations thereof, or the like. After detecting candidate nuclei, algorithms can be used automatically to provide a score or information about the entire analyzed image.”). 
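The scoring (S407) and stratification (S411) steps quoted from Barnes reduce, at their core, to counting stained tumor nuclei and comparing the resulting slide-level score to a trained cut-off. A minimal editorial sketch of that arithmetic (illustrative only; the function names, counts, and cut-off value are hypothetical, not taken from Barnes):

```python
def percent_positivity(positive_nuclei, negative_nuclei):
    """Percent of counted tumor nuclei positively stained for a biomarker (e.g., Ki67)."""
    total = positive_nuclei + negative_nuclei
    return 100.0 * positive_nuclei / total

def stratify(score, cutoff):
    """Assign a risk group by comparing a slide-level score to an optimized cut-off."""
    return "high-risk" if score >= cutoff else "low-risk"

# 42 positively and 158 negatively stained tumor nuclei in the selected FOV:
score = percent_positivity(42, 158)   # 21.0
group = stratify(score, cutoff=20.0)  # "high-risk"
```

In Barnes, such per-slide scores are then combined (IHC3/IHC4) and fed to the Cox model before the cut-off stratification is applied.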
Barnes fails to teach a histology-based digital staining system simultaneously segmenting and classifying nuclei using one or more masks corresponding to the nuclei of the plurality of cells in the pathological image generated by a mask convolutional neural network (Mask R-CNN) comprising a regional proposal network, a classification branch, and a mask-generation branch, thereby generating one or more groups of nuclei having an identified cell type. Jung teaches a histology-based digital staining system simultaneously segmenting and classifying nuclei using one or more masks corresponding to the nuclei of the plurality of cells in the pathological image generated by a mask convolutional neural network (Mask R-CNN) comprising a regional proposal network, a classification branch, and a mask-generation branch, thereby generating one or more groups of nuclei having an identified cell type (Jung, page 4, right column; section Nuclei Segmentation; pages 5-6; page 2, right-hand col., para. 4, lines 8-10; FIG. 3; FIG. 2: “Mask R-CNN [31] is a state-of-the-art object segmentation framework that can identify not only the location of any object but also its segmented mask. Mask R-CNN extends the object detection model Faster R-CNN [32] by adding a third branch for predicting segmentation masks to the existing branches for classification and bounding box regression. Mask R-CNN is a two-stage framework. In the first stage, it scans an input image and finds areas that may contain an object using a Region Proposal Network (RPN). It predicts the classes of proposed areas, refines the bounding box, and generates masks for an object at the pixel level in the next stage based on the proposed areas from the first stage … “While the original Mask R-CNN used 5 scales with box areas starting from 128², which is suitable for the COCO dataset, we modify the anchor sizes since nuclei are much smaller than the objects in the COCO dataset.
We obtain segmentation results of Mask R-CNN on the top 1000 candidates to detect a large number of nuclei.”; “Thus, we apply Mask R-CNN as well as color normalization and multiple inference to segment nuclei in H&E stained histopathology images”; Mask R-CNN is used for simultaneous object detection and instance segmentation; in Mask R-CNN, each nucleus is segmented from the rest of the image, i.e., the nucleus pixels are assigned one label (e.g., blue) and all background pixels another (e.g., yellow), which constitutes simultaneous segmentation and classification). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to modify the histology-based digital staining system, as taught by Barnes, to simultaneously segment and classify nuclei using one or more masks corresponding to the nuclei of the plurality of cells in the pathological image generated by a mask convolutional neural network (Mask R-CNN) comprising a regional proposal network, a classification branch, and a mask-generation branch, thereby generating one or more groups of nuclei having an identified cell type, as further taught by Jung. The suggestion/motivation for doing so would have been that using a Mask R-CNN allows for highly accurate instance segmentation, meaning it not only detects objects but also precisely delineates their boundaries simultaneously, which allows accurate identification of multiple instances of the same object in an image; this has application in medical imaging, where a tissue image can contain numerous cell nuclei.
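The anchor-size modification quoted from Jung can be made concrete (an editorial sketch only; the specific scale and ratio values below are hypothetical, not Jung's or COCO's actual configuration). Each RPN anchor of scale s and aspect ratio r covers roughly area s², so shrinking the scale list shifts the region proposals toward nucleus-sized boxes:

```python
import math

def anchor_boxes(scales, aspect_ratios):
    """Enumerate (width, height) anchor boxes for a Region Proposal Network.

    For scale s (box area s*s) and aspect ratio r = height/width:
        width = s / sqrt(r), height = s * sqrt(r)
    so every anchor at scale s keeps area s*s regardless of r.
    """
    boxes = []
    for s in scales:
        for r in aspect_ratios:
            boxes.append((s / math.sqrt(r), s * math.sqrt(r)))
    return boxes

# COCO-style anchors start from 128x128 box areas; nuclei are far
# smaller, so the scales are shifted down (illustrative values):
nuclei_anchors = anchor_boxes(scales=[8, 16, 32], aspect_ratios=[0.5, 1.0, 2.0])
```

The design point is that only the scale list changes; the two-stage propose-then-segment pipeline is untouched.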
Barnes, in view of Jung, fails to teach the segmentation and classification of the nuclei of the plurality of cells producing nuclei segmentation results including spatial locations of a plurality of centroids corresponding to each of the nuclei; the histology-based digital staining system determining a composition and a spatial organization of a tumor environment of the patient tissue based on the one or more groups of nuclei by applying a triangulation algorithm to the spatial locations of the plurality of centroids of the nuclei segmentation results to generate a spatial connectivity graph representing one or more local topological relationships and one or more global topological relationships among the identified cell types, and computing one or more features from the spatial connectivity graph for assessing tumor microenvironmental structure and patient prognosis; Madabhushi teaches the segmentation and classification of the nuclei of the plurality of cells producing nuclei segmentation results including spatial locations of a plurality of centroids corresponding to each of the nuclei (Madabhushi, para. [0021]; para. [0035]; para. [0006]; FIG. 2: “Embodiments quantitatively evaluate the spatial arrangement of nuclei through the construction of a CG or CGs. A graph is a mathematical construct comprising of a finite sets of objects (nodes) that capture global and local relationships via pair-wise connections (edges) between the nodes. Graphs may be used to quantitatively characterize nuclear architecture in histopathological images by representing the nuclei as nodes and subsequently quantifying neighborhood relationships (e.g., proximity) and spatial arrangement between the nodes.”; “Operations 100 also includes, at 140, generating at least one nuclear cell graph (CG) based on the plurality of segmented cellular nuclei. In one embodiment, a node of the at least one nuclear CG is defined on a centroid of a member of the plurality of cellular nuclei.
A first node is connected to a second, different node based on a Euclidean distance between the first node and the second node. In another embodiment, the centroid of a local nuclei cluster is used as a node, and a plurality of nodes is used to construct the global CG. The probability a first node will be linked with a second, different node is based on an exponentially decaying function of the Euclidean distance between the nodes.”; “FIG. 2 illustrates segmented cellular nuclei in NSCLC tissue”); the histology-based digital staining system determining a composition and a spatial organization of a tumor environment of the patient tissue based on the one or more groups of nuclei by applying a triangulation algorithm to the spatial locations of the plurality of centroids of the nuclei segmentation results to (Madabhushi, para. [0023]; para. [0045]: “Embodiments compute a set of cell graph features based on the CG. The set of cell graph features capture tumor morphology within the microenvironment of the tumor. These features may include first-order statistics (e.g. mean, mode, median) of the representative descriptors. In one embodiment, the set of cell graph features may include a Delaunay side length disorder of the cells feature. The set of cell graph features may also include a Delaunay ratio of the minimum and maximum triangular areas formed by cells feature. The set of cell graph features may also include a number of possible triangles formed from cells (i.e., nodes) of the cell graph feature. Other cell graph features may be computed”; “Operations 1100 also includes, at 1130, extracting a set of cellular graph (CG) features from the set of digitized images. In one embodiment, the set of CG features includes at least one of a Delaunay triangulation feature or a Voronoi feature.
In one embodiment, the set of CG features includes a side length disorder of a Delaunay triangulation feature, a ratio of minimum and maximum triangular areas formed by nodes of the CG, and a number of possible polygons formed by nodes of the CG. In this embodiment, a polygon is a triangle.”) generate a spatial connectivity graph representing one or more local topological relationships and one or more global topological relationships among the identified cell types (Madabhushi, para. [0020]- [0021]: “Embodiments quantitatively evaluate the spatial arrangement of nuclei through the construction of a CG or CGs. A graph is a mathematical construct comprising of a finite sets of objects (nodes) that capture global and local relationships via pair-wise connections (edges) between the nodes. Graphs may be used to quantitatively characterize nuclear architecture in histopathological images by representing the nuclei as nodes and subsequently quantifying neighborhood relationships (e.g., proximity) and spatial arrangement between the nodes”; “Embodiments further construct a nuclear cell graph (CG) based on the cellular nuclei represented in the digitized H&E stained image. In one embodiment, the cell graph is a global cell graph in which each nucleus represented in the digitized H&E-stained image defines a node of the graph. Embodiments may define nodes on all the cellular nuclei represented in the digitized H&E image. Thus, embodiments may define nodes of the CG on different types of nuclei. For example, embodiments may define nodes on cancer cell nuclei and on tumor infiltrating lymphocytes, or on other types of cellular nuclei. Nodes may be connected based on distance metrics such as Euclidean Distance between nodes, or the L1 norm. 
In another embodiment, a threshold number of nuclei (e.g., 50%, 75%, or 90%) represented in the digitized H&E stained image may be employed to define nodes of the graph.”), and computing one or more features from the spatial connectivity graph for assessing tumor microenvironmental structure and patient prognosis (Madabhushi, para. [0023]; para. [0038]: “Embodiments compute a set of cell graph features based on the CG. The set of cell graph features capture tumor morphology within the microenvironment of the tumor. These features may include first-order statistics (e.g. mean, mode, median) of the representative descriptors. In one embodiment, the set of cell graph features may include a Delaunay side length disorder of the cells feature. The set of cell graph features may also include a Delaunay ratio of the minimum and maximum triangular areas formed by cells feature. The set of cell graph features may also include a number of possible triangles formed from cells (i.e., nodes) of the cell graph feature. Other cell graph features may be computed.”; “Operations 100 also includes, at 150, providing the set of nuclear radiomic features and the set of CG features to a machine learning classifier … operations 100 also includes, at 160, receiving, from the machine learning classifier, a probability that the ROT will respond to immunotherapy. The machine learning classifier computes the probability based, at least in part, on the set of nuclear radiomic features and the set of CG features … Operations 100 also includes, at 170, generating a classification of the ROT as a responder or non-responder based on the probability. The classification is generated, based, at least in part, on the probability. For example, embodiments may classify the region of tissue as likely to respond to immunotherapy when the probability >=0.5, and may classify the region of tissue as unlikely to respond to immunotherapy when the probability <0.5. 
Other classification schemes may be employed.”; outputting the level of response of a subject to a treatment meets the broadest reasonable interpretation of the claim term “patient prognosis” because predicting a patient's response to treatment is a crucial part of their overall prognosis (the likely course of a disease or ailment); see FIG. 1, step 130 for feature extraction and steps 160-170 for the prognosis model). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to: 1) modify the segmentation and classification of the nuclei of the plurality of cells, as taught by Barnes, in view of Jung, to include spatial locations of a plurality of centroids corresponding to each of the nuclei, as taught by Madabhushi; 2) modify the histology-based digital staining system determining a composition and a spatial organization of a tumor environment of the patient tissue based on the one or more groups of nuclei, as taught by Barnes, in view of Jung, to apply a triangulation algorithm to the spatial locations of the plurality of centroids of the nuclei segmentation results to generate a spatial connectivity graph representing one or more local topological relationships and one or more global topological relationships among the identified cell types, as taught by Madabhushi; 3) modify the histology-based digital staining system, as taught by Barnes, in view of Jung, to compute one or more features from the spatial connectivity graph for assessing tumor microenvironmental structure and patient prognosis, as taught by Madabhushi.
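The Delaunay-derived cell graph features quoted from Madabhushi (side-length disorder, ratio of minimum to maximum triangle area, triangle count) are simple statistics once a triangulation of the nuclear centroids exists. A minimal editorial sketch, assuming the Delaunay triangulation has already been computed elsewhere and using one common normalization of "disorder" (1 − 1/(1 + σ/μ)); the function names are illustrative, not from the reference:

```python
import math
from itertools import combinations

def triangle_area(a, b, c):
    # Shoelace formula for the area of a triangle with 2-D vertices a, b, c.
    return abs((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])) / 2.0

def disorder(values):
    # One common normalization of spread: 1 - 1/(1 + sigma/mu), in [0, 1).
    mu = sum(values) / len(values)
    sigma = math.sqrt(sum((v - mu) ** 2 for v in values) / len(values))
    return 1.0 - 1.0 / (1.0 + sigma / mu)

def cell_graph_features(triangles):
    """Cell-graph statistics over a precomputed Delaunay triangulation.

    `triangles` is a list of 3-tuples of (x, y) nuclear centroids.
    """
    areas = [triangle_area(*t) for t in triangles]
    sides = [math.dist(p, q) for t in triangles for p, q in combinations(t, 2)]
    return {
        "side_length_disorder": disorder(sides),
        "area_min_max_ratio": min(areas) / max(areas),
        "num_triangles": len(triangles),
    }

# Two triangles over four centroids (areas 0.5 and 2.0):
feats = cell_graph_features([((0, 0), (1, 0), (0, 1)),
                             ((0, 0), (2, 0), (0, 2))])
# feats["area_min_max_ratio"] == 0.25, feats["num_triangles"] == 2
```

Feature vectors of this kind are what the reference feeds to the downstream machine learning classifier.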
The suggestion/motivation for doing so would have been that “this technique improves on those employed by existing approaches to segmenting nuclei by being computationally simpler and faster; this technique also facilitates the adjustment and fine-tuning of parameters with greater simplicity than techniques used by existing approaches, thereby providing the technical effect of improving the performance of computers, systems, or other apparatus on which embodiments are implemented” (Madabhushi, para. [0033]); further suggestion/motivation for doing so would have been that a “personalized cancer treatment plan may be generated based, at least in part, on the classification and at least one of the probability, the set of nuclear radiomic features, the set of CG features, or the digitized image … defining a personalized cancer treatment plan facilitates delivering a particular treatment that will be therapeutically active to the patient, while minimizing negative or adverse effects experienced by the patient” (Madabhushi, para. [0051]-[0053]). Therefore, it would have been obvious to combine Barnes with Jung and Madabhushi to obtain the invention as specified in claim 19. Regarding claim 20, Barnes, in view of Jung, and in view of Madabhushi, teaches the system of claim 9, wherein the pathological image is received from a user device over a network (Barnes, para. [0041]; para. [0042], lines 1-2: “FIG. 1 shows a system for early-stage prognosis, according to an exemplary embodiment of the subject disclosure. System 100 comprises a memory 110, which stores a plurality of processing modules or logical instructions that are executed by processor 105 coupled to computer 101. Besides processor 105 and memory 110, computer 101 also includes user input and output devices such as a keyboard, mouse, stylus, and a display/touchscreen.
As will be explained in the following discussion, processor 105 executes logical instructions stored on memory 110, performing image analysis and other quantitative operations resulting in an output of results to a user operating computer 101 or via a network. For instance, input data 102 may provide a means for inputting image data from one or more scanned IHC slides to memory 110.”; Barnes has the ability to communicate over a network and allows an input image to be received over the network or input directly by a user to the system). Claims 13-18 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No.: 2017/0270666 (Barnes et al.) (hereinafter Barnes), in view of non-patent literature “An automatic nuclei segmentation method based on deep convolutional neural networks for histopathology images”; BMC Biomed. Eng. 1, 24 (2019) (Jung et al.) (hereinafter Jung), and in view of U.S. Patent Application Publication No.: 2019/0259154 (Madabhushi et al.) (hereinafter Madabhushi), and in further view of U.S. Patent Application Publication No.: 2021/0279866 (Svekolkin et al.) (hereinafter Svekolkin). Regarding claim 13, Barnes teaches a method for characterizing patient tissue of a patient, the method comprising: (Barnes, abstract: “The subject disclosure presents systems and computer-implemented methods for providing reliable risk stratification for early-stage cancer patients by predicting a recurrence risk of the patient and to categorize the patient into a high or low risk group. A series of slides depicting serial sections of cancerous tissue are automatically analyzed by a digital pathology system, a score for the sections is calculated, and a Cox proportional hazards regression model is used to stratify the patient into a low or high-risk group.”) receiving a pathological image of patient tissue of a patient, the patient tissue including a plurality of cells (Barnes, para. [0064]; para. [0056]; FIG. 2B: “FIG.
4 shows a method for early-stage prognosis, according to an exemplary embodiment of the subject disclosure. This method may use components described with reference to system 100, or other components that perform similar functions. For instance, an image series corresponding to a single patient undergoing diagnosis may be received (S401) from an imaging system or any other input. The image series may include data in the form of color channels or frequency channels representing serial sections of tissue stained with various biomarkers. Example biomarkers include biomarkers for estrogen receptors (ER), human epidermal growth factor receptors 2 (HER2), Ki-67, and progesterone receptors (PR). The imaging system may include the ISCAN COREO™ product of the assignee Ventana Medical Systems, Inc. The image data corresponds to cancerous or significantly cancerous sections retrieved from a single patient.”; “FIG. 2A shows a series of images of serial tissue sections, according to an exemplary embodiment of the subject disclosure”); simultaneously segmenting and classifying nuclei of the plurality of cells using a histology-based digital staining system, the nuclei of the plurality of cells segmented according to spatial location and classified according to cell type, thereby generating one or more groups of nuclei, each of the one or more groups of nuclei having an identified cell type (Barnes, para. [0065]; para. [0057]; FIG. 4; FIG. 2A; para. [0013]: “Once the image data is received (S401), an image in a series of images corresponding to slides comprising serial tissue sections may be displayed on a user interface for field-of-view (FOV) selection and annotation (S403). Several annotation mechanisms (S403) may be provided, such as designating known or irregular shapes, or defining an anatomic region of interest (e.g., tumor region).
In one example, the field of view is a whole slide, whole tumor region, or whole tissue section. The annotation (S403) annotates the FOV on the first slide and a registration operation (S405) maps the annotations across the remainder of the slides. As described herein, several methods for annotation and registration may be utilized, depending on the defined FOV. For example, a whole tumor region on a Hematoxylin and Eosin (H&E) slide from among the plurality of serial slides may be defined, and registration operation (S405) maps and transfers the whole tumor annotations from the H&E slide to each of the remaining IHC slides in the series. Alternatively, representative regions or “hot spots” may be identified on a Ki67 digitized whole slide, and may be mapped to equivalent annotated regions on the other IHC slides.”; a Hematoxylin and Eosin (H&E) slide is a histology-based digital staining system; “FIG. 2B shows an alternate means for FOV selection using representative regions or “hot spots” 231 on a Ki67 digitized whole slide 225. Hot spots are specific regions of the whole slide that contain relatively high and heterogeneous amounts of Ki67 protein. The FOV 231 may, for instance, be in the form of a rectangular shape 231. Other embodiments may provide a manually drawn FOV selection, or automated image analysis algorithms may highlight such FOV regions on the Ki67 slide 225. An inter-marker registration operation as described above may be used to map these “hot spots” to equivalent annotated regions on the other IHC slides such as ER 226, PR 227, and H&E slide 228. Shown on the right hand side of FIG. 2B are the zoomed-in versions of these hot spots, depicted at 20× magnification. Additional IHC slides not depicted by FIG. 2B or 2A may be similarly annotated, such as HER2. 
In either case, whether the whole tumor or only “hot spots” are annotated, the corresponding regions on the remaining slides necessarily correspond to similar tissue types, assuming the magnification remains constant across the series.”; [figures reproduced in the original record]; “A ‘multi-channel image’ as understood herein encompasses a digital image obtained from a biological tissue sample in which different biological structures, such as nuclei and tissue structures, are simultaneously stained with specific fluorescent dyes, each of which fluoresces in a different spectral band thus constituting one of the channels of the multi-channel image. The biological tissue sample may be stained by a plurality of stains and/or by a stain and a counterstain, the later being also referred to as a “single marker image””); determining a composition and a spatial organization of a tumor microenvironment of the patient tissue based on the one or more groups of nuclei (Barnes, para. [0066]; para. [0078]: “Given the FOV, image analysis operations are used to compute scores (S407) for each slide. The scores for each slide may be based on a determination of a percent positivity, as well as a regional heterogeneity. Tumor nuclei that are positively and negatively stained for a particular biomarker, such as Ki67, ER, PR, HER2, etc. are counted, and a percent positivity is computed. Additional scoring mechanisms may be employed, such as H-scores representing regional heterogeneity of a particular marker or protein … The resulting slide-level scores may be combined together to generate IHC3, IHC4, or IHCn scores for the series of slides, depending on the number of individually-stained slides. Any scores computed from the H&E slide can also be included to the information from IHC slides to accordingly specify a different risk scoring metric. 
The scores are based on, for example, a whole-tumor FOV selection or on a “hot spot” FOV selection.”; “In some embodiments, a computer system can be programmed to automatically identify features in an image of a specimen based at least in part on one or more selection criteria, including criteria based at least in part on color characteristics, sample morphology (e.g., cell component morphology, cell morphology, tissue morphology, anatomical structure morphology, etc.), tissue characteristics (e.g., density, composition, or the like), spatial parameters (e.g., arrangement of tissue structures, relative positions between tissue structures, etc.), image characteristic parameters, or the like. If the features are nuclei, the selection criteria can include, without limitation, color characteristics, nuclei morphology (e.g., shape, dimensions, composition, etc.), spatial parameters (e.g., position of nuclei in cellular structure, relative position between nuclei, etc.), image characteristics, combinations thereof, or the like. After detecting candidate nuclei, algorithms can be used automatically to provide a score or information about the entire analyzed image.”); and generating a prognosis model for the patient based on one or more features (Barnes, para. [0054]; para. [0049]: “For instance, in a clinical or diagnostic workflow, when a new slide series comprising H&E and IHC slides from a new patient is input into system 100, and annotations generated and FOVs analyzed using image analysis algorithms to output scores, the corresponding IHC3/IHC4 formulae with specific coefficients are used to compute the whole-slide score for that patient. If whole tumor annotations are performed, WholeTumor_IHC3 and WholeTumor_IHC4 scores may be computed. If “hot spot” annotations are performed, Ki67_HotspotBased_IHC3 and Ki67_HotspotBased_IHC4 scores may be computed. The cutoff points for these scores are used to provide a prognosis for the patient, i.e. 
stratifying their risk group, based on the cutoff points generated during comparisons of training data with survival curves.”; “In this embodiment, Cox modeling module 114 may be trained by comparing the biomarker/IHC scores for individual slides with survival data comprising populations of high and low risks to determine whole-slide scoring algorithms depending on the type of FOV selection and annotation/registration being used. A cutoff point is determined that matches the input survival data, using a log-rank-test statistic to determine an accurate prediction of low and high risk. The scoring algorithms and cutoff points generated during training may be used to analyze new patient slides and provide a risk assessment or prognosis via risk stratification module 115.”) Barnes fails to teach generating, using a mask regional convolutional neural network (Mask R-CNN) comprising a region proposal network, a classification branch, and a mask-generation branch, one or more masks corresponding to nuclei of the plurality of cells in the pathological image; and simultaneously segmenting and classifying the nuclei of the plurality of cells using a histology-based digital staining system by applying the one or more masks. Jung teaches generating, using a mask regional convolutional neural network (Mask R-CNN) comprising a region proposal network, a classification branch, and a mask-generation branch, one or more masks corresponding to nuclei of the plurality of cells in the pathological image; and simultaneously segmenting and classifying the nuclei of the plurality of cells using a histology-based digital staining system by applying the one or more masks (Jung, page 4, right column; section Nuclei Segmentation; pages 5-6; FIG. 3: “Mask R-CNN [31] is a state-of-the-art object segmentation framework that can identify not only the location of any object but also its segmented mask. 
Mask R-CNN extends the object detection model Faster R-CNN [32] by adding a third branch for predicting segmentation masks to the existing branches for classification and bounding box regression. Mask R-CNN is a two-stage framework. In the first stage, it scans an input image and finds areas that may contain an object using a Region Proposal Network (RPN). It predicts the classes of proposed areas, refines the bounding box, and generates masks for an object at the pixel level in the next stage based on the proposed areas from the first stage … “While the original Mask R-CNN used 5 scales with box areas starting from 128², which is suitable for the COCO dataset, we modify the anchor sizes since nuclei are much smaller than the objects in the COCO dataset. We obtain segmentation results of Mask R-CNN on the top 1000 candidates to detect a large number of nuclei.”; “we apply Mask R-CNN as well as color normalization and multiple inference to segment nuclei in H&E stained histopathology images”; Mask R-CNN is used for simultaneous object detection and instance segmentation; in Mask R-CNN, the nucleus is segmented from the rest of the image; that is, the nucleus pixels will be assigned a color (say blue) and all the background pixels will be assigned yellow, which is a simultaneous segmentation and classification; [figure reproduced in the original record]). 
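Jung's nucleus-scale anchor modification can be illustrated with a minimal sketch of the RPN's first step, generating a dense anchor grid with scales shrunk below the COCO defaults. The stride, scale, and aspect-ratio values below are illustrative assumptions, not Jung's published configuration.

```python
import numpy as np

def generate_anchors(feat_h, feat_w, stride, scales, ratios):
    """Generate RPN-style anchor boxes (x1, y1, x2, y2) centered on each
    feature-map cell, one anchor per (scale, ratio) pair."""
    anchors = []
    for y in range(feat_h):
        for x in range(feat_w):
            cx, cy = (x + 0.5) * stride, (y + 0.5) * stride
            for s in scales:
                for r in ratios:
                    # width/height chosen so area ~= s^2 with aspect ratio r
                    w, h = s * np.sqrt(r), s / np.sqrt(r)
                    anchors.append([cx - w / 2, cy - h / 2,
                                    cx + w / 2, cy + h / 2])
    return np.array(anchors)

# COCO-style anchors start around 32 px and up; nuclei are far smaller,
# so the scales are reduced (illustrative values only).
nucleus_anchors = generate_anchors(feat_h=4, feat_w=4, stride=16,
                                   scales=[8, 16, 32], ratios=[0.5, 1.0, 2.0])
print(nucleus_anchors.shape)  # -> (144, 4): 4*4 cells x 9 anchors each
```

In the full framework these anchors are scored by the RPN, the top proposals are refined, and the classification and mask branches then operate on the surviving regions.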
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the method, as taught by Barnes, to include the step of generating, using a mask regional convolutional neural network (Mask R-CNN) comprising a region proposal network, a classification branch, and a mask-generation branch, one or more masks corresponding to nuclei of the plurality of cells in the pathological image, as taught by Jung; further, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the step of simultaneously segmenting and classifying nuclei of the plurality of cells using a histology-based digital staining system, as taught by Barnes, to be done by applying the one or more masks, as further taught by Jung. The suggestion/motivation for doing so would have been that using a Mask R-CNN allows for highly accurate instance segmentation, meaning it not only detects objects but also precisely delineates their boundaries, which allows for accurate identification of multiple instances of the same object in an image; this has particular application in medical imaging, where a tissue image can contain numerous cell nuclei. 
Barnes, in view of Jung, fails to teach the segmentation and classification of the nuclei of the plurality of cells producing nuclei segmentation results including spatial locations of a plurality of centroids corresponding to the nuclei; determining a composition and a spatial organization of a tumor microenvironment of the patient tissue based on one or more groups of nuclei by applying a triangulation algorithm to the spatial locations of the plurality of centroids of the nuclei segmentation results; and generating a prognostic model for the patient based on one or more features including an inter-cell connectivity, a density distribution, and a neighborhood diversity. Madabhushi teaches the segmentation and classification of the nuclei of the plurality of cells producing nuclei segmentation results including spatial locations of a plurality of centroids corresponding to the nuclei (Madabhushi, para. [0021]; para. [0035]; para. [0006]; FIG. 2: “Embodiments quantitatively evaluate the spatial arrangement of nuclei through the construction of a CG or CGs. A graph is a mathematical construct comprising of a finite sets of objects (nodes) that capture global and local relationships via pair-wise connections (edges) between the nodes. Graphs may be used to quantitatively characterize nuclear architecture in histopathological images by representing the nuclei as nodes and subsequently quantifying neighborhood relationships (e.g., proximity) and spatial arrangement between the nodes.”; “Operations 100 also includes, at 140, generating at least one nuclear cell graph (CG) based on the plurality of segmented cellular nuclei. In one embodiment, a node of the at least one nuclear CG is defined on a centroid of a member of the plurality of cellular nuclei. A first node is connected to a second, different node based on a Euclidean distance between the first node and the second node. 
In another embodiment, the centroid of a local nuclei cluster is used as a node, and a plurality of nodes is used to construct the global CG. The probability a first node will be linked with a second, different node is based on an exponentially decaying function of the Euclidean distance between the nodes.”; “FIG. 2 illustrates segmented cellular nuclei in NSCLC tissue”; [figure reproduced in the original record]); determining a composition and a spatial organization of a tumor microenvironment of the patient tissue based on one or more groups of nuclei by applying a triangulation algorithm to the spatial locations of the plurality of centroids of the nuclei segmentation results (Madabhushi, para. [0023]; para. [0045]: “Embodiments compute a set of cell graph features based on the CG. The set of cell graph features capture tumor morphology within the microenvironment of the tumor. These features may include first-order statistics (e.g. mean, mode, median) of the representative descriptors. In one embodiment, the set of cell graph features may include a Delaunay side length disorder of the cells feature. The set of cell graph features may also include a Delaunay ratio of the minimum and maximum triangular areas formed by cells feature. The set of cell graph features may also include a number of possible triangles formed from cells (i.e., nodes) of the cell graph feature. Other cell graph features may be computed”; “Operations 1100 also includes, at 1130, extracting a set of cellular graph (CG) features from the set of digitized images. In one embodiment, the set of CG features includes at least one of a Delaunay triangulation feature or a Voronoi feature. In one embodiment, the set of CG features includes a side length disorder of a Delaunay triangulation feature, a ratio of minimum and maximum triangular areas formed by nodes of the CG, and a number of possible polygons formed by nodes of the CG. 
In this embodiment, a polygon is a triangle.”); and generating a prognostic model for the patient based on one or more features including an inter-cell connectivity, a density distribution, and a neighborhood diversity (Madabhushi, para. [0038]; para. [0020]-[0021]: “Operations 100 also includes, at 150, providing the set of nuclear radiomic features and the set of CG features to a machine learning classifier … operations 100 also includes, at 160, receiving, from the machine learning classifier, a probability that the ROT will respond to immunotherapy. The machine learning classifier computes the probability based, at least in part, on the set of nuclear radiomic features and the set of CG features … Operations 100 also includes, at 170, generating a classification of the ROT as a responder or non-responder based on the probability. The classification is generated, based, at least in part, on the probability. For example, embodiments may classify the region of tissue as likely to respond to immunotherapy when the probability >=0.5, and may classify the region of tissue as unlikely to respond to immunotherapy when the probability <0.5. Other classification schemes may be employed.”; outputting the level of response of a subject to a treatment meets the broadest reasonable interpretation of the claim term “prognosis model” because predicting a patient's response to treatment is a crucial part of their overall prognosis (the likely course of a disease or ailment); “Embodiments further construct a nuclear cell graph (CG) based on the cellular nuclei represented in the digitized H&E stained image. In one embodiment, the cell graph is a global cell graph in which each nucleus represented in the digitized H&E stained image defines a node of the graph. Embodiments may define nodes on all the cellular nuclei represented in the digitized H&E image. Thus, embodiments may define nodes of the CG on different types of nuclei. 
For example, embodiments may define nodes on cancer cell nuclei and on tumor infiltrating lymphocytes, or on other types of cellular nuclei. Nodes may be connected based on distance metrics such as Euclidean Distance between nodes, or the L1 norm.”; “Embodiments quantitatively evaluate the spatial arrangement of nuclei through the construction of a CG or CGs. A graph is a mathematical construct comprising of a finite sets of objects (nodes) that capture global and local relationships via pair-wise connections (edges) between the nodes. Graphs may be used to quantitatively characterize nuclear architecture in histopathological images by representing the nuclei as nodes and subsequently quantifying neighborhood relationships (e.g., proximity) and spatial arrangement between the nodes.”; see FIG. 1 steps 130 for feature extraction and steps 160-170 for prognosis model). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to: 1) modify the segmentation and classification of the nuclei of the plurality of cells, as taught by Barnes, in view of Jung, to produce nuclei segmentation results including spatial locations of a plurality of centroids corresponding to the nuclei, as taught by Madabhushi; 2) modify the step of determining a composition and a spatial organization of a tumor microenvironment of the patient tissue based on one or more groups of nuclei, as taught by Barnes, in view of Jung, to include applying a triangulation algorithm to the spatial locations of the plurality of centroids of the nuclei segmentation results, as taught by Madabhushi; 3) modify the step of generating a prognostic model for the patient based on one or more features, as taught by Barnes, in view of Jung, to have features including an inter-cell connectivity, a density distribution, and a neighborhood diversity, as taught by Madabhushi. 
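The Delaunay-derived cell-graph features Madabhushi describes (side-length disorder, ratio of minimum to maximum triangle area, triangle count) can be computed directly from nucleus centroid coordinates. The sketch below uses SciPy's Delaunay routine; the disorder form 1 − 1/(1 + σ/μ) is a common convention and is an assumption here, not necessarily Madabhushi's exact definition.

```python
import numpy as np
from scipy.spatial import Delaunay

def delaunay_features(centroids):
    """Delaunay-based cell-graph features over (n, 2) nucleus centroids."""
    tri = Delaunay(centroids)
    pts = centroids[tri.simplices]            # (n_tri, 3, 2) triangle vertices
    # lengths of all three sides of every triangle
    sides = np.linalg.norm(pts - np.roll(pts, 1, axis=1), axis=2).ravel()
    # triangle areas via the cross-product (shoelace) formula
    v1, v2 = pts[:, 1] - pts[:, 0], pts[:, 2] - pts[:, 0]
    areas = 0.5 * np.abs(v1[:, 0] * v2[:, 1] - v1[:, 1] * v2[:, 0])
    return {
        # assumed disorder convention: 1 - 1/(1 + sigma/mu)
        "side_length_disorder": 1.0 - 1.0 / (1.0 + sides.std() / sides.mean()),
        "min_max_area_ratio": areas.min() / areas.max(),
        "num_triangles": len(tri.simplices),
    }

rng = np.random.default_rng(0)
feats = delaunay_features(rng.uniform(0, 100, size=(50, 2)))
```

The resulting scalar features would feed a downstream classifier or survival model alongside other morphologic descriptors.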
The suggestion/motivation for doing so would have been that “this technique improves on those employed by existing approaches to segmenting nuclei by being computationally simpler and faster; this technique also facilitates the adjustment and fine-tuning of parameters with greater simplicity than techniques used by existing approaches, thereby providing the technical effect of improving the performance of computers, systems, or other apparatus on which embodiments are implemented” (Madabhushi, para. [0033]); further suggestion/motivation for doing so would have been that a “personalized cancer treatment plan may be generated based, at least in part, on the classification and at least one of the probability, the set of nuclear radiomic features, the set of CG features, or the digitized image … defining a personalized cancer treatment plan facilitates delivering a particular treatment that will be therapeutically active to the patient, while minimizing negative or adverse effects experienced by the patient” (Madabhushi, para. [0051]-[0053]). Barnes, in view of Jung, and in view of Madabhushi, fails to teach generating a spatial connectivity matrix encoding one or more topological relationships among the plurality of cells; and one or more features derived from the spatial connectivity matrix, the one or more features including an inter-cell connectivity, a density distribution, and a neighborhood diversity. Svekolkin teaches generating a spatial connectivity matrix encoding one or more topological relationships among the plurality of cells (Svekolkin, para. [0222]-[0223]: “The computing device then computes, based on the graph representation of the tissue 776, the local cell features at step 752B. The local cell features can include information about the cells that can be determined based on the cell data 774 and/or the graph 776. 
For example, the local cell features can include, for each cell, a cell type, cell neighbors determined based on the edges of the graph 776, neighboring cell types, neighbor distance data (e.g., median distance to neighbors, mask-related data (e.g., a percentage of area filled with positive pixels for marker masks under each cell (e.g., a CD31 mask for blood vessels, etc.)), and/or the like. Each node can therefore have an associated set of local data points (e.g., represented as a vector). In some embodiments, the node data can include the cell type, which can be encoded using a plurality of variables. For example, if there are seven discovered cell types in the tissue sample, then “cell type 6” can be encoded as [0, 0, 0, 0, 0, 1, 0]. In some embodiments, the node data can include the median value of lengths of all node edges for the cell. In some embodiments, the node data can include the percentage of positive pixels of a given mask for a cell, which can be extended to include data for each of a plurality of masks (if present). In some embodiments, the data can include the percentage of the cells located within one or more masks of selected markers (e.g., a percentage of the area of the cell mask filled with positive cells). Such mask-based data can allow the computing device to leverage information about cells and/or structures that may otherwise be difficult to segment. As a result, in some embodiments the total number of data points for each node is L, which is the sum of (1) the number of cell types, (2) the number of masks to consider (if any), and (3) a value for the median distance of edges of given node. The graph 776 can be encoded for input into the graph neural network 772. In some embodiments, the node information can be stored in a matrix with dimensionality n by L, where n is a number of nodes and L is the number of node features. 
In some embodiments, the graph is encoded, such as into a sparse adjacency matrix (e.g., with dimensionality n by n nodes), into an adjacency list of edges, and/or the like.”); and one or more features derived from the spatial connectivity matrix, the one or more features including an inter-cell connectivity, a density distribution, and a neighborhood diversity (Svekolkin, para. [0222]; see above discussing local cell features such as inter-cell distance of nodes in the spatial graph; and neighborhood diversity; para. [0139]; para. [0228]: “The computing device 116 uses the information about the cell locations, cell types, and/or other information (e.g., information regarding physical parameters of the cells, such as cell area information, density information, etc.) to determine characteristics 106 of the tissue sample, including determining information regarding neighboring cells (e.g., neighboring cell types) and/or the organization of the cells in the tissue sample. For example, the computing device 116 can determine the neighboring cell types of cells of a cell type of interest. Such neighboring cell type information can be indicative of, for example, whether at least some of the cells of the cell type of interest are (a) closely clustered together in one or more clusters (e.g., if the cells of interest largely neighbor each other), (b) are distributed throughout the tissue sample (e.g., if the cells of interest mostly neighbor other types of cells in the tissue sample), (c) are grouped together with one or more other cell types in the tissue sample (e.g., if the cells of interest mostly neighbor the one or more other cell types in the tissue sample), and/or other cell neighbor information.”; “At step 756, the cell embeddings (including the neighborhood data) are clustered to determine one or more clusters 778. Various clustering techniques can be used to determine the clusters. 
For example, as described herein the techniques can include using a centroid-based clustering algorithm (e.g., K-means)”). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the method, as taught by Barnes, in view of Jung, and in view of Madabhushi, to include the step of generating a spatial connectivity matrix encoding one or more topological relationships among the plurality of cells, and deriving one or more features from the spatial connectivity matrix, the one or more features including an inter-cell connectivity, a density distribution, and a neighborhood diversity, as taught by Svekolkin. The suggestion/motivation for doing so would have been that using a spatial connectivity matrix in Graph Neural Networks (GNNs) captures underlying physical or logical relationships, allowing models to learn complex spatial patterns, integrate multimodal data (like gene expression with location), improve interpretability by revealing tissue organization or network hubs, and achieve better performance in tasks like predicting disease states by understanding how entities influence their neighbors. 
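Svekolkin's encoding, an n-by-n adjacency matrix plus an n-by-L node-feature matrix with one-hot cell types, reduces to a few array operations. The sketch below is a minimal illustration: the function name, the toy edge list, and the "distinct neighbor types" definition of neighborhood diversity are assumptions for illustration, not Svekolkin's exact scheme.

```python
import numpy as np

def build_graph_features(edges, cell_types, n_types, n_cells):
    """Encode a cell graph as an adjacency matrix and derive per-cell
    features: degree (inter-cell connectivity), one-hot cell type, and
    neighborhood diversity (distinct neighbor cell types)."""
    adj = np.zeros((n_cells, n_cells), dtype=int)
    for i, j in edges:                    # undirected topological relationships
        adj[i, j] = adj[j, i] = 1
    one_hot = np.eye(n_types, dtype=int)[cell_types]   # (n_cells, n_types)
    degree = adj.sum(axis=1)                           # inter-cell connectivity
    neighbor_type_counts = adj @ one_hot               # per-type neighbor counts
    diversity = (neighbor_type_counts > 0).sum(axis=1)
    return adj, one_hot, degree, diversity

# Toy 4-cell graph (illustrative; e.g. tumor=0, lymphocyte=1, stroma=2).
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
types = np.array([0, 1, 1, 2])
adj, one_hot, degree, diversity = build_graph_features(edges, types, 3, 4)
```

A sparse representation (adjacency list or CSR matrix) would replace the dense array at tissue scale, as the reference notes for n-by-n node graphs.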
Barnes, in view of Jung, in view of Madabhushi, and in view of Svekolkin, teaches generating a prognostic model for the patient based on one or more features derived from the spatial connectivity matrix, the one or more features including an inter-cell connectivity, a density distribution, and a neighborhood diversity (Madabhushi teaches generating a prognostic model for the patient based on one or more features including an inter-cell connectivity, a density distribution, and a neighborhood diversity; Svekolkin teaches generating a spatial connectivity matrix encoding one or more topological relationships among the plurality of cells, and deriving one or more features from the spatial connectivity matrix, the one or more features including an inter-cell connectivity, a density distribution, and a neighborhood diversity; the spatial connectivity matrix taught by Svekolkin is easily integrated into the prognosis of Madabhushi (they both teach creating cell/nucleus graphs from extracted cell/nucleus features with nodes and edges); the spatial connectivity matrix is simply a mathematical form for the extracted feature data). Therefore, it would have been obvious to combine Barnes, with Jung, Madabhushi, and Svekolkin, to obtain the invention as specified in claim 13. Regarding claim 14, Barnes, in view of Jung, in view of Madabhushi, and in view of Svekolkin teaches the method of claim 13, wherein a treatment for the patient is optimized based on the prognostic model (Barnes, para. [0006]: “Patients with localized (early stage, resectable) breast cancer undergoing curative surgery and/or therapy have an underlying risk of local or distant cancer recurrence while those patients who experience recurrence exhibit an increased mortality rate. Depending on the size of risk, different treatment options exist. Thus, an assay that allows one to reliably identify patients with a low or high risk of cancer recurrence is needed. 
Accordingly, technologies are also needed that can reliably discriminate between high and low risk patients and provide healthcare providers with additional information to consider when determining a patient's treatment options.”). Regarding claim 15, Barnes, in view of Jung, in view of Madabhushi, and in view of Svekolkin teaches the method of claim 13, wherein the composition and the spatial organization of a tumor microenvironment of the patient tissue is determined based on the image features extracted from the pathological image using the triangulation algorithm, the triangulation algorithm being a Delaunay triangulation (Madabhushi, para. [0023]; para. [0045]; see rejection of claim 13 above; Delaunay triangulation is used to extract features from the cell graph). Regarding claim 16, Barnes, in view of Jung, in view of Madabhushi, and in view of Svekolkin teaches the method of claim 15. Barnes, in view of Jung, in view of Madabhushi, and in view of Svekolkin fails to teach wherein the image features are associated with transcriptional activity of biological pathways. Svekolkin further teaches wherein the image features are associated with transcriptional activity of biological pathways (Svekolkin, para. [0184]; FIG. 11: “FIG. 11 is a diagram pictorially illustrating another exemplary use of a trained neural network 1100 to process immunofluorescence images to generate cell location/segmentation data 1102, according to some embodiments of the technology described herein. In this example, the MxIF image 1110 includes DAPI marker image 1112, and NaKATPase marker image 1114, where DAPI is a fluorescent DNA stain and NaKATPase is a membrane marker. It should be appreciated that DAPI, NaKATPase, and/or other markers can be used. For example, other markers can include a cytoplasm marker S6, a membrane marker PCK26, Carbonic anhydrase IX (CAIX), CD3, and/or the like. 
The computing device uses the trained neural network 1100 to process the DAPI marker image 1112 and the NaKATPase marker image 1114 to generate the cell location/segmentation information 1102.”; the claim term “transcriptional activity of biological pathways” refers to how gene expression (making RNA from DNA) within a pathway is regulated, controlled by Transcription Factors (TFs); a DAPI marker image stains DNA blue, marking cell nuclei, and while it shows chromatin density, changes in its intensity or distribution can indirectly relate to transcriptional activity by reflecting chromatin condensation; DAPI fluorescence lifetime imaging (FLIM) can reveal DNA-protein interactions, hinting at transcriptional states; therefore, extracting features from a DAPI marker image meets the broadest reasonable interpretation of the claim term “transcriptional activity of biological pathways”; [figure reproduced in the original record]). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the image features, as taught by Barnes, in view of Jung, in view of Madabhushi, and in view of Svekolkin, to be associated with transcriptional activity of biological pathways, as further taught by Svekolkin. The suggestion/motivation for doing so would have been that DAPI binds strongly to DNA, staining the nucleus blue, allowing researchers to easily find, count, and segment cells in images, a prerequisite for any gene expression study; when used with fluorescent in situ hybridization (FISH) for mRNA (to see active transcription) or antibodies for transcription factors, DAPI provides the nuclear boundary, showing where within the cell the transcription is happening (e.g., in the nucleus vs. cytoplasm). 
Therefore, it would have been obvious to combine Barnes, Jung, Madabhushi, and the further teachings of Svekolkin to obtain the invention as specified in claim 16. Regarding claim 17, Barnes, in view of Jung, in view of Madabhushi, and in view of Svekolkin teaches the method of claim 13, wherein the composition of the patient tissue includes one or more different cell types (Barnes, para. [0065]; para. [0057]; FIG. 4; FIG. 2A; para. [0013]; see rejection of claim 13 above; “In either case, whether the whole tumor or only “hot spots” are annotated, the corresponding regions on the remaining slides necessarily correspond to similar tissue types, assuming the magnification remains constant across the series”; Barnes, para. [0073]: “The trained classifier can be selected based at least in part on how best to handle training data variability, for example, in tissue type, staining protocol, and other features of interest, for slide interpretation. The system can analyze a specific region of an image based at least in part on information within that region, as well as information outside of that region. In some embodiments, a multi-stage binary classifier can identify positive and negative nuclei. The positive nuclei can be distinguished from the negative nuclei, lymphocytes, and stroma. Additionally, the negative cells and lymphocytes can be distinguished from stroma. Lymphocytes are then distinguished from the negative nuclei. In further classification, the positive cells can be distinguished from background cells. For example, if the positive cells have brown stained nuclei, the background cells may exhibit cytoplastmic blush that can be filtered out. Based at least in part on the number of positive/negative nuclei, a score (e.g., a whole-slide score) can be determined.”). 
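The percent-positivity scoring Barnes describes, the fraction of counted tumor nuclei staining positive for a given marker, is simple arithmetic. The sketch below uses illustrative counts (not taken from Barnes) and omits the H-score and IHC3/IHC4 combination steps that follow in the reference.

```python
def percent_positivity(n_positive, n_negative):
    """Percent positivity = positive tumor nuclei / all counted tumor nuclei.

    This mirrors Barnes' per-slide description; the regional-heterogeneity
    H-score and the slide-combination weighting are not reproduced here.
    """
    total = n_positive + n_negative
    if total == 0:
        raise ValueError("no nuclei counted")
    return 100.0 * n_positive / total

# Illustrative counts for one Ki67-stained field of view (hypothetical).
ki67_pp = percent_positivity(n_positive=180, n_negative=420)  # -> 30.0
```

Per-slide scores of this kind, one per biomarker, are what the Cox model then combines to stratify the patient into a risk group.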
Regarding claim 18, Barnes, in view of Jung, in view of Madabhushi, and in view of Svekolkin, teaches the method of claim 13, wherein the cell type includes at least one of tumor cells, stromal cells, macrophages, red blood cells, lymphocytes, or karyorrhexis (Barnes, para. [0065]; para. [0057]; FIG. 4; FIG. 2A; para. [0013]; see rejection of claim 1 above for the discussion of tumor cells).

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Barnes, in view of Jung, in view of Madabhushi, and in further view of U.S. Patent Application Publication No. 2013/0230230 (Ajemba et al.) (hereinafter Ajemba).

Regarding claim 8, Barnes, in view of Jung, in view of Madabhushi, teaches the one or more non-transitory computer-readable storage media of claim 7. Barnes, in view of Jung, in view of Madabhushi, fails to teach wherein each of the centroids of the nuclei of the plurality of cells is defined as a vertex on a feature graph and edges between sets of the vertices correspond to the connections for different cell types.

Ajemba teaches wherein each of the centroids of the nuclei of the plurality of cells is defined as a vertex on a feature graph and edges between sets of the vertices correspond to the connections for different cell types (Ajemba, para. [0160]-[0161]; FIG. 12G: “FIG. 12G is a flowchart of illustrative stages involved in ring segmentation by a graph process based upon clustering a triangulation of epithelial nuclei according to some embodiments of the present invention. Some embodiments of the present invention operate based on the principle that a key geometric property of a “ring” of points, possibly including some interior points not on the ring boundary is that the points are more closely spaced around the boundary than in the interior or exterior of the ring. In some embodiments of the present invention, a suitably initialized watershed process on a graph captures this property.
At stage 1254, Delaunay triangulation with epithelial nuclei centers as vertices is performed. In some embodiments of the present invention, the triangle connectivity graph is the Voronoi diagram. At stage 1256, a “depth” is assigned to each triangle, for example, equal to the length of the longest side. At stage 1258, a sort by depth is performed, and then starting from the deepest triangles, neighboring regions (e.g., 3 neighboring regions) are examined and regions are merged if the length of the common side is at least, for example, 90% of the depth of the neighbor, and if both regions touch the same epithelial units”; [image: media_image11.png]).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to modify each of the centroids of the nuclei of the plurality of cells, as taught by Barnes, in view of Jung, in view of Madabhushi, to be defined as a vertex on a feature graph with edges between sets of the vertices corresponding to the connections for different cell types, as taught by Ajemba. The suggestion/motivation for doing so would have been that “an advantage of a graph-based watershed method over a pixel-based algorithm is that it is more convenient to track region statistics within the algorithm and to apply fine-tuned region merging criteria” (Ajemba, para. [0161]). Therefore, it would have been obvious to combine Barnes, Jung, and Madabhushi, with Ajemba, to obtain the invention as specified in claim 8.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL ADAM SHARIFF, whose telephone number is 571-272-9741. The examiner can normally be reached M-F, 8:30-5 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sumati Lefkowitz, can be reached at 571-272-3638. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MICHAEL ADAM SHARIFF/
Examiner, Art Unit 2672

/SUMATI LEFKOWITZ/
Supervisory Patent Examiner, Art Unit 2672
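The depth-sorted merging stages (1256-1258) quoted from Ajemba in the claim 8 rejection above can be sketched in a few lines. This is an editor's illustration under stated assumptions: the Delaunay triangulation is assumed already computed, the "both regions touch the same epithelial units" condition is omitted for brevity, and all names and the toy geometry are hypothetical.

```python
def merge_triangles(depth, shared, threshold=0.9):
    """Greedy region merging over a triangle-connectivity graph.

    depth:  {triangle_id: length of its longest side (its "depth")}
    shared: {(a, b): length of the side that triangles a and b share}
    Starting from the deepest triangles, two neighboring regions merge
    when the shared side is at least `threshold` of the neighbor's
    depth (90% in the quoted passage). Returns a map from each
    triangle to its merged-region representative (union-find roots).
    """
    parent = {t: t for t in depth}

    def find(t):
        while parent[t] != t:
            parent[t] = parent[parent[t]]  # path compression
            t = parent[t]
        return t

    # Process triangles from deepest (longest-sided) to shallowest.
    for t in sorted(depth, key=depth.get, reverse=True):
        for (a, b), side in shared.items():
            if t not in (a, b):
                continue
            nbr = b if t == a else a
            if side >= threshold * depth[nbr]:
                parent[find(nbr)] = find(t)  # merge neighbor into t's region

    return {t: find(t) for t in depth}

# Three triangles: 0 and 1 share a long side (merges into one region),
# 1 and 2 share a short side (stays separate).
depths = {0: 10.0, 1: 8.0, 2: 9.0}
shared_sides = {(0, 1): 8.0, (1, 2): 2.0}
regions = merge_triangles(depths, shared_sides)
```

The point of the sketch is the examiner's quoted motivation: on a graph, region depths and merge criteria are explicit data, which is harder to track in a pixel-based watershed.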

Prosecution Timeline

Nov 06, 2022
Application Filed
Mar 22, 2025
Non-Final Rejection — §103, §112
May 30, 2025
Interview Requested
Jun 10, 2025
Applicant Interview (Telephonic)
Jun 11, 2025
Examiner Interview Summary
Aug 01, 2025
Response Filed
Aug 09, 2025
Final Rejection — §103, §112
Oct 15, 2025
Response after Non-Final Action
Nov 13, 2025
Request for Continued Examination
Nov 17, 2025
Response after Non-Final Action
Jan 08, 2026
Non-Final Rejection — §103, §112
Apr 15, 2026
Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602903
Method for Analyzing Image Information Using Assigned Scalar Values
2y 5m to grant Granted Apr 14, 2026
Patent 12579776
DISPLAY DEVICE, DISPLAY METHOD, AND COMPUTER-READABLE STORAGE MEDIUM
2y 5m to grant Granted Mar 17, 2026
Patent 12561959
METHOD, ELECTRONIC DEVICE, AND COMPUTER PROGRAM PRODUCT FOR TARGET IMAGE PROCESSING
2y 5m to grant Granted Feb 24, 2026
Patent 12548293
IMAGE DETECTION METHOD AND APPARATUS
2y 5m to grant Granted Feb 10, 2026
Patent 12541976
RELATIONSHIP MODELING AND ANOMALY DETECTION BASED ON VIDEO DATA
2y 5m to grant Granted Feb 03, 2026
Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
82%
Grant Probability
99%
With Interview (+22.3%)
2y 10m
Median Time to Grant
High
PTA Risk
Based on 115 resolved cases by this examiner. Grant probability derived from career allow rate.
