DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Specification

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed. The following title is suggested: QUANTIFICATION OF CONDITIONS ON BIOMEDICAL IMAGES ACROSS STAINING MODALITIES USING A MULTI-TASK DEEP LEARNING FRAMEWORK IN ORDER TO SEGMENT A SECOND BIOMEDICAL IMAGE FROM A FIRST BIOMEDICAL IMAGE TO IDENTIFY A REGION OF INTEREST AND CALCULATE A SCORE FOR A CONDITION IN THE REGION OF INTEREST.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 9, 10 and 12-14 are rejected under 35 U.S.C. 103 as being unpatentable over Yagi (WO-2019046774) in view of Kapil (US Pub 2019/0392580).

Re claim 9: (Original) Yagi discloses a method of quantifying conditions on biomedical images, comprising: identifying, by a computing system, a first biomedical image in a first staining modality, the first biomedical image having at least one region of interest (ROI) corresponding to a condition (e.g., a first biomedical image is identified in stained virtual slides; the regions within the slides can contain detected features that are associated with a condition, and those features lie within a region of interest in the slide, as taught in ¶ [100] and [101]):

[00100] In some implementations, the tissue segmentation engine 325 may segment the stained virtual slides. For example, for each stained virtual slice, the tissue segmentation engine 325 can be configured to segment the stained virtual slice to identify regions of interest as well as regions that may be considered insignificant (e.g., portions of the stained virtual slice that correspond to substrate material or empty space). It should be understood that, in some implementations, the tissue segmentation engine 325 can be configured to perform segmentation using other techniques.

[00101] The feature detector 335 can be configured to analyze the 3D model of the tissue block to detect one or more features that may be used to identify one or more conditions or to grade a severity of the one or more conditions. The feature detector 335 can be configured to analyze one or more regions of one or more individual stained virtual slices or the stained tissue block to detect certain features. The feature detector 335 can perform feature detection by identifying pixel intensity values or pixel color values of one or more clusters of pixels.
The feature detector 335 can use the identified values to determine if the cluster of pixels corresponds to a known feature. In some embodiments, the features can identify a tissue type or fluid type. In some embodiments, the features can correspond to a region that is to be annotated for review by a medical professional. In some embodiments, the features can correspond to a particular grade of a condition. In some embodiments, by analyzing a large number of tissue samples that have been annotated with certain conditions, a machine learning model can be trained to identify such features. In this way, the feature detector 335 can include or utilize a machine learning model to identify certain features, determine a grading of the feature and generate annotations that can be used to assist a medical professional reviewing the 3D model. In some embodiments, the feature detector 335 can be configured to identify tumor regions within prostate tissue samples and further be configured to grade the tumor region that can be used to determine a severity of the tumor. In some embodiments, the feature detector 335 can determine, through unsupervised learning, additional features that may correspond to a disease or condition that previously were unknown.

[00102] In some implementations, the feature detector 335 can execute a machine learning algorithm that has been trained to identify any feature that may be of interest for detecting one or more conditions or anomalies, such as tumor regions, based on one or more stained virtual slices. For example, such features can be or can include features that are indicative of one or more tumor cells. The feature detector 335 can be configured to execute a machine learning algorithm that can correlate characteristics of a stained virtual slice with a probability that the stained virtual slice includes one or more features of interest. In some implementations, the feature detector 335 can examine the characteristics of the stained virtual slice on a pixel-by-pixel basis to determine whether each individual pixel may indicate a feature of interest, such as a tumor. For example, the pixel characteristics can be a color, an intensity, a brightness, a size, a position, or any other characteristic. In some implementations, the feature detector 335 can also examine the characteristics of a stained virtual slice by examining groups of pixels together. For example, the feature detector 335 can examine groups of contiguous pixels to determine any characteristic of the group, such as changes in intensity or contrast levels, and can use the machine learning algorithm to determine whether such characteristics may indicate a feature of interest, such as a tumor cell.

[00103] In some implementations, the machine learning model can be trained based on annotations provided on images of physical slides or virtual slices. In some implementations, the annotations can be provided by a medical professional, such as a pathologist or a technician, among others. The machine learning model can receive images of slides or slices, a corresponding label indicating that the slide or slice includes a tumor or not, and annotations that may identify the region that includes the tumor. In some embodiments, the slide or slice may not include annotations but includes a slide level classification of being tumorous or not. The machine learning model can then identify one or more features from the images that are indicative of tumor regions and features that are not indicative of tumor regions. Once the machine learning model is sufficiently trained, the machine learning model can receive a new image and indicate whether or not the slide includes a tumor and, in some embodiments, can identify the region on the image including the tumor based on the features.
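For illustration only, the following is a minimal sketch (not Yagi's actual implementation) of the kind of pixel-cluster analysis ¶ [00101]-[00102] describe: contiguous pixels whose color values match a simple stain-response profile are grouped, and only clusters large enough to be candidate features are kept. The color thresholds, minimum cluster size, and function name are assumptions chosen for the example.

import numpy as np
from scipy import ndimage

def detect_candidate_clusters(slice_rgb, min_pixels=50):
    # slice_rgb: float array of shape (H, W, 3) with values in [0, 1].
    r, g, b = slice_rgb[..., 0], slice_rgb[..., 1], slice_rgb[..., 2]
    # Crude hematoxylin-like color test standing in for a learned feature profile.
    mask = (b > 0.5) & (r > 0.3) & (g < 0.4)
    # Group contiguous matching pixels into clusters.
    labels, n = ndimage.label(mask)
    if n == 0:
        return labels
    # Keep only clusters large enough to be candidate features.
    sizes = np.asarray(ndimage.sum(mask, labels, index=range(1, n + 1)))
    keep_ids = np.nonzero(sizes >= min_pixels)[0] + 1
    return np.where(np.isin(labels, keep_ids), labels, 0)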
[00104] In some implementations, the machine learning model can continuously learn based on feedback received on virtual slices on which the machine learning model provides an output. For example, the feature detector 335 can apply the machine learning algorithm to a stained virtual slice to determine whether the slice is likely to include a feature of interest such as a tumor cell (e.g., greater than 50% probability, based on the model). In some implementations, a pathologist can examine either the stained virtual slice or a corresponding stained physical slide to make an independent determination of whether the stained virtual slice does in fact include a feature of interest. This result can be compared with the result generated by the feature detector 335 using the machine learning algorithm. In some implementations, a large number of such comparisons can be performed to generate a large dataset that can be used as training data to refine the machine learning algorithm or model.

[00105] In some implementations, the feature detector 335 can also be configured to identify features of interest that may exist in the 3D image generated according to the method 360. Such a feature detection technique can be more effective than examining individual virtual slices of the 3D image. For example, some features, such as tumor islands, may be very difficult or impossible to detect by examining individual virtual slices, as the features themselves may extend through multiple adjacent virtual slices of the 3D image. Thus, the feature detector 335 can be configured to examine the reconstructed 3D image in a manner similar to that described above in connection with examining individual slices. For example, the feature detector 335 can be configured to examine the 3D image as a whole, or sections of the 3D image that contain more than one individual virtual slice. In some implementations, the feature detector 335 can separate the 3D image into a plurality of smaller 3D portions that each include image data corresponding to more than one virtual slice. The feature detector 335 can execute a machine learning algorithm to determine whether any of these 3D portions (or the 3D image as a whole) includes a feature of interest. Thus, the feature detector 335 may be more likely to correctly identify a 3D feature, such as a tumor island, in this manner, as compared to examining only a single virtual slice at a time.
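As an illustration of the ¶ [00105] idea of separating the 3D image into smaller 3D portions that each span multiple virtual slices, a minimal sketch follows; the portion depth, the stride, and the hypothetical model.predict call are assumptions for the example, not Yagi's parameters.

import numpy as np

def iter_3d_portions(volume, depth=8, stride=4):
    """Yield (z_start, sub_volume) pairs, each spanning `depth` adjacent
    virtual slices, so features that extend through multiple slices
    (e.g., tumor islands) stay intact within a portion."""
    z_max = volume.shape[0]
    for z in range(0, max(z_max - depth + 1, 1), stride):
        yield z, volume[z:z + depth]

# Usage with a hypothetical classifier, mirroring the ">50% probability"
# criterion quoted above:
# flagged = [z for z, sub in iter_3d_portions(volume) if model.predict(sub) > 0.5]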
[00106] In some implementations, the results obtained by the feature detector 335 can also be used to refine the machine learning algorithm that is applied to the 3D image. For example, the feature detector 335 can apply the machine learning algorithm to a 3D image to determine whether the 3D image is likely to include a feature of interest such as a tumor island (e.g., greater than 50% probability, based on the model). In some implementations, a pathologist can examine either the 3D image or the tissue block or tissue sample itself to make an independent determination of whether the 3D image includes a feature of interest. This result can be compared with the result generated by the feature detector 335 using the machine learning algorithm. In some implementations, a large number of such comparisons can be performed to generate a large dataset that can be used as training data to refine the machine learning algorithm, which can subsequently be more efficient at detecting features of interest in other 3D images. Examples of such 3D images, which can be examined by the feature detector 335 to identify various features of interest, are described further below in connection with FIGS. 4A-4D.

applying, by the computing system, a trained image segmentation model to the first biomedical image (e.g., the system applies a trained machine learning model to a slide image in order to detect a condition, as taught in ¶ [100]-[106] above), the image segmentation model configured to: generate a second biomedical image in a second staining modality using the first biomedical image in the first staining modality (e.g., a second biomedical image is created using a first biomedical image in order to create a second stained virtual slice that is used to detect conditions within the second or created image, as taught in ¶ [107] and [108]):

[00107] Referring now to FIG. 4A, illustrated is a view of a 3D model 400 of a tissue sample that can be generated according to the method 360. For example, the 3D model 400 can be formed by virtually stacking stained virtual slices of a physical tissue block. FIG. 4A shows two regions that may correspond to tumor tissue. Referring again to FIGS. 3A and 3B, in some implementations, the method 360 also can include using the 3D model to detect various features within the tissue block. For example, the tumor island detector 330 can be configured to detect "islands" of tumor cells within the 3D model. An island of tumor cells can be, for example, a cluster of tumor cells having a thickness in the vertical direction within the 3D model, such that it is only observable by examining non-coplanar portions of the 3D model (i.e., by examining portions of the 3D model corresponding to multiple virtual slices). The tumor island detector 330 can detect tumor islands, for example, by examining the color or intensity values of pixels in a set of contiguous virtual slides of the 3D model to determine whether such an island of tumor cells appears to be present. Similarly, the feature detector 335 can be configured to identify or locate other features that may be of interest. Such features can include any type of biological structure within the tissue block that may be relevant for diagnostic or analysis purposes. The feature detector 335 can detect such features in a manner similar to that of the tumor island detector 330. FIGS. 4B-4D show different portions of the 3D model of the tissue block 400 depicted in FIG. 4A. The views of FIGS. 4B-4D can help to highlight the three-dimensional nature of the tumor tissue 405 and 410 (e.g., tumor islands), which can extend through multiple virtual slices of the 3D model 400. For illustrative purposes, the tumor tissue 405 and 410 is not labeled in FIGS. 4B-4D.
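The following is a minimal, assumption-laden sketch (not the reference's code) of tumor-island detection as ¶ [00107] describes it: connected components are computed across stacked per-slice tumor masks, and only clusters spanning multiple non-coplanar slices are reported. The min_slices threshold and the input mask format are assumed for the example.

import numpy as np
from scipy import ndimage

def find_tumor_islands(tumor_mask_3d, min_slices=2):
    # tumor_mask_3d: boolean array (slices, H, W) from a per-slice detector.
    labels, n = ndimage.label(tumor_mask_3d)  # 3D connectivity spans adjacent slices
    islands = []
    for i in range(1, n + 1):
        z_extent = np.unique(np.nonzero(labels == i)[0])
        if z_extent.size >= min_slices:       # observable only across multiple slices
            islands.append(i)
    return labels, islands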
[00108] Referring again to FIGS. 3A and 3B, the method 360 can be used to produce a 3D image of a whole tissue block without sectioning the block into physical slices or applying a physical stain to the block, thereby saving substantial time and expense relative to the method 200 shown in FIG. 2. In addition, by dispensing with the need to physically section the tissue block into slides and instead allowing slices of the tissue block to be formed in an automated fashion, the method 360 can allow for a much larger number of virtual slices to be included in the 3D image, relative to the number of slides that can be used in the method 200. The increased number of virtual slices achieves an increase in spatial resolution of the 3D image produced by the method 360. As a result, smaller features (e.g., tumor islands) can be detected using the 3D image produced through the method 360 than by using the 3D image produced through the method 200. This can allow a physician, pathologist or other medical care provider to provide earlier and more accurate diagnosis of tumor cells or other structures in a patient.

generate a segmented biomedical image using the first biomedical image and the second biomedical image, the segmented biomedical image identifying one or more ROIs (e.g., the created virtual stained slides made from a plurality of stacked slides can be segmented in order to determine if regions of interest contain a group of pixels that coincide with a condition, as taught in ¶ [100], [107] and [108] above).

However, Yagi fails to specifically teach the features of the trained image segmentation model having a plurality of kernels, the plurality of kernels configured to: determining, by the computing system, a score for the condition in the first biomedical image based on the one or more ROIs identified in the segmented biomedical image; and providing, by the computing system, an output based on at least one of the second biomedical image, the score for the condition, or the segmented biomedical image. However, this is well known in the art as evidenced by Kapil. Similar to the primary reference, Kapil discloses calculating a score for the stained images evaluated (same field of endeavor or reasonably pertinent to the problem).

Kapil discloses the trained image segmentation model having a plurality of kernels (e.g., the system discloses that the CNN is trained and performs operations that correspond with kernels, as taught in ¶ [32]), the plurality of kernels configured to:

[0032] System 10 includes a convolutional neural network used for processing digital images and computing a score for the histopathological diagnosis of a cancer patient. The convolutional neural network is trained, wherein the training calculations are performed by a data processor 14. In one embodiment, data processor 14 is a specialized processor that can simultaneously perform multiple convolution operations between a plurality of pixel matrices and corresponding kernels. The logical operations of the model are implemented on data processor 14 as hardware, firmware, software, and/or a combination thereof to provide a means for characterizing regions of tissue in the digital image. Each trained model comprising an optimized set of parameters and associated mathematical operations is then stored in the database 13.
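For concreteness, a minimal sketch of the elementary operation ¶ [0032] attributes to the data processor: a convolution between a pixel matrix and a kernel. This is the textbook operation, not Kapil's implementation; the explicit loop form is chosen for clarity over speed.

import numpy as np

def conv2d(pixels, kernel):
    # Valid-mode 2D convolution of a pixel matrix with a kernel.
    kh, kw = kernel.shape
    h, w = pixels.shape
    flipped = kernel[::-1, ::-1]  # true convolution flips the kernel
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(pixels[i:i + kh, j:j + kw] * flipped)
    return out

# e.g., an edge-detecting kernel applied to a grayscale patch:
# conv2d(patch, np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]]))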
determining, by the computing system, a score for the condition in the first biomedical image based on the one or more ROIs identified in the segmented biomedical image (e.g., the invention discloses calculating a score for a tumor based on the segmented portion of an image of stained tissue converted into a digital image; the tumor cell score is calculated for the cancer patient, as taught in ¶ [30]-[35]); and

[0030] FIG. 1 shows a system 10 for generating a histopathological score for a cancer patient by computing a score for the histopathological diagnosis based on a total number of pixels determined to belong to tumor epithelial tissue that is positively stained by the diagnostic antibody. For example, the histopathological score is the Tumor Cell (TC) score. In one embodiment, the diagnostic antibody binds to the programmed death ligand 1 (PD-L1), and system 10 calculates a histopathological score for the cancer patient based on the total number of pixels of a digital image that have been determined to belong to tumor epithelium and to have been positively stained using the PD-L1 antibody. A high concentration of PD-L1 and thus a greater number of pixels determined to belong to tumor epithelium tissue that is positively stained by the PD-L1 antibody in solid tumors is indicative of a positive prognosis for patients treated by a PD-1/PD-L1 checkpoint inhibitor. Thus, system 10 analyzes digital images 11 to determine the total number of pixels that belong to first tissue that is positively stained using the diagnostic antibody PD-L1 and that is tumor epithelium. The first tissue is a specific group of tumor epithelial cells that are positively stained by the diagnostic antibody, in this embodiment by the PD-L1 antibody.

[0031] System 10 also identifies tissue that has been negatively stained by the diagnostic antibody. Tissue is considered to be "negatively stained" if the tissue is not positively stained by the diagnostic antibody. In this embodiment, the negatively stained tissue is tumor epithelial tissue that has not been positively stained by the PD-L1 antibody. The second tissue is a specific group of tumor epithelial cells that is negatively stained by the diagnostic antibody, in this embodiment by the PD-L1 antibody. In one embodiment, the first and second tissues that are positively and negatively stained by the diagnostic antibody belong to the same group of tumor epithelial cells. System 10 also identifies other tissue that belongs to different types of cells than the first and second tissues. The other tissue can be immune cells, necrotic cells, or any other cell type that is not the first or second tissue. The histopathological score computed by system 10 is displayed on a graphical user interface 15 of a user work station 16.
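A minimal sketch of a TC-style score consistent with ¶ [0030]-[0031]: the fraction of tumor-epithelium pixels that are positively stained. Kapil's quoted paragraphs do not recite an exact formula, so the ratio and the class encoding below are assumptions for illustration.

import numpy as np

POSITIVE, NEGATIVE = 1, 2  # assumed class codes; anything else = other tissue

def tumor_cell_score(class_map):
    # class_map: per-pixel classes output by the segmentation network.
    pos = np.count_nonzero(class_map == POSITIVE)  # positively stained tumor epithelium
    neg = np.count_nonzero(class_map == NEGATIVE)  # negatively stained tumor epithelium
    return pos / (pos + neg) if (pos + neg) else 0.0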
[0032] System 10 includes a convolutional neural network used for processing digital images and computing a score for the histopathological diagnosis of a cancer patient. The convolutional neural network is trained, wherein the training calculations are performed by a data processor 14. In one embodiment, data processor 14 is a specialized processor that can simultaneously perform multiple convolution operations between a plurality of pixel matrices and corresponding kernels. The logical operations of the model are implemented on data processor 14 as hardware, firmware, software, and/or a combination thereof to provide a means for characterizing regions of tissue in the digital image. Each trained model comprising an optimized set of parameters and associated mathematical operations is then stored in the database 13.

[0033] Once trained, system 10 reliably and precisely determines the total number of pixels that belong to tumor epithelium and that have been positively stained by the diagnostic antibody PD-L1. Training the convolutional neural network of system 10 by using a generative adversarial network obviates the need for extensive manual annotations of the digital images 11 that make up the training data set by transferring semi-automatically generated annotations on digital images of tissue stained with the epithelial cell marker CK to the PD-L1 domain. The biomarker CK specifically labels tumor epithelium, thereby allowing for a semi-automated segmentation of tumor epithelial regions based on the CK staining. After semantic segmentation, the digital images of tissue stained with the epithelial cell marker CK are transformed into the PD-L1 domain. During this step, synthetic or fake images are generated. The regions identified as epithelial cells (positive for CK staining) are labeled as being either positive for PD-L1 staining (PD-L1 expressing cells) or negative for PD-L1 staining (non-PD-L1 expressing cells). The resulting fake images of tissue stained with PD-L1 antibody that are generated based on the images of tissue stained using CK are then used in conjunction with a reduced number of images with manual annotations to train the convolutional neural network of system 10 to identify positively stained tissue in digital images of tissue stained with the PD-L1 antibody.

[0034] FIG. 2 illustrates tissue samples 18A and 18B being taken from cancer patients 17A and 17B, respectively. A tissue slice 19A from tissue sample 18A is placed on a slide 20A and stained with a diagnostic antibody, such as the PD-L1 antibody. The tissue slice 19B is placed on the slide 20B and stained with a helper antibody, such as the CK antibody. The helper antibody is used only to train system 10, whereas the diagnostic antibody is used to compute the histopathological score in a clinical application of system 10. High resolution digital images are acquired from slice 19A and slice 19B of the tissue from cancer patients 17A and 17B. The tissue samples 18A and 18B can be taken from a patient suffering from non-small-cell lung cancer, other types of lung cancer, breast cancer, prostate cancer, pancreatic cancer or another type of cancer. In one embodiment, the tissue sample 18A is taken from a solid tumor. In another embodiment, the tissue sample has been stained with a PD-L1 antibody, and in yet another embodiment with an HER2 antibody. In some embodiments, regions that include normal epithelial cells (as opposed to tumor epithelium) are excluded from the digital images acquired from slice 19A and slice 19B.

[0035] FIG. 3 shows a digital image acquired from slice 19B of the tissue from cancer patient 17B that has been stained using the CK antibody. Epithelial cells 21 are stained positive by the CK stain. Non-epithelial region 22 is not stained positively by the CK stain. The identification and segmentation of CK-positive epithelial cells and non-CK-positive other tissue is a first substep of the overall task of identifying epithelial cells that are positively stained using PD-L1.

providing, by the computing system, an output based on at least one of the second biomedical image, the score for the condition, or the segmented biomedical image (e.g., the score for the tumor cell is output on a display, as taught in ¶ [31] above).
Therefore, in view of Kapil, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have the feature of the trained image segmentation model having a plurality of kernels, the plurality of kernels configured to: determining, by the computing system, a score for the condition in the first biomedical image based on the one or more ROIs identified in the segmented biomedical image; and providing, by the computing system, an output based on at least one of the second biomedical image, the score for the condition, or the segmented biomedical image, incorporated in the device of Yagi, in order to determine and output a score associated with a condition based on the evaluation of a segmented area of a tissue image, which improves the calculation of the score in a repeatable manner and the precision of estimation regarding the determined protein (as stated in Kapil ¶ [05]).

Re claim 10: (Original) The method of claim 9, further comprising establishing, by the computing system, the trained model using a training dataset (e.g., the model is trained using a dataset, as taught in ¶ [103] and [104] above), the training dataset comprising (i) a plurality of unlabeled biomedical images in the corresponding plurality of staining modalities and (ii) a labeled biomedical image identifying at least one ROI in one of the plurality of unlabeled biomedical images (e.g., the invention discloses a training dataset including labeled images indicating whether a slide or slice contains a tumor or not; once the model is trained, unlabeled images can be input into the model in order to continuously learn and fine-tune the model to recognize if tumor cells are present, as taught in ¶ [103] and [104] above).

However, Yagi fails to specifically teach the features of the trained image segmentation model using a training dataset. However, this is well known in the art as evidenced by Kapil. Similar to the primary reference, Kapil discloses calculating a score for the stained images evaluated (same field of endeavor or reasonably pertinent to the problem). Kapil discloses the trained image segmentation model using a training dataset (e.g., the system discloses training a model for performing segmentation of images by using staining images, as taught in ¶ [33] and [34] above and ¶ [39] and [47]):

[0039] FIG. 7 illustrates a convolutional neural network 25 used in a clinical application for determining how many pixels of a first image patch belong to a first tissue that has been positively stained by the diagnostic antibody PD-L1 and that belongs to tumor epithelium, and a generative adversarial network 26 used for training the convolutional neural network 25. The convolutional neural network 25 acts as a discriminator in the framework of the domain adaptation and semantic segmentation generative adversarial network (DASGAN) 26. In one embodiment, the DASGAN includes two generative adversarial networks that operate as a cycle forming a CycleGAN 27. The tissue slides stained with CK and PD-L1 that are used to train network 25 need not be from tissue of the same patient. Thus, FIG. 2 shows the PD-L1-stained tissue slice 19A coming from patient 17A and the CK-stained tissue slice 19B coming from patient 17B. When network 25 is deployed to generate a score, only the images of tissue slices stained by the diagnostic antibody, such as PD-L1, are used. Images of tissue slices stained by the helper antibody, such as CK, are used only for training the algorithm of the convolutional neural network 25. Network 25 is trained to recognize pixels of a first tissue that is associated with both fake CK positive staining and real PD-L1 positive staining.
[0047] The first discriminator network 25 is then trained to segment PD-L1 images, whether fake or real, and to classify individual pixels as being stained by PD-L1 by using the fake PD-L1 images 39 generated by generator network 28 along with real PD-L1 images 40. In step S8, the fake image 39 output in step S7 is input into the first discriminator network 25 together with the real image 40 of the same stain domain, for example, a digital image that has been acquired from a tissue slice stained with PD-L1 antibody. For example, the first discriminator network 25 can be trained based on the fake PD-L1 images generated by the first generator network 28 and the associated ground-truth masks. The fake PD-L1 images generated by the first generator network 28 are then used for training in conjunction with manual annotations on real PD-L1 images acquired from tissue slices stained with PD-L1 antibody. In one embodiment, the complete DASGAN network 26, consisting of the network 27 (CycleGAN) and of the two SegNet networks 25 (PD-L1 SegNet) and 30 (CK SegNet), are trained simultaneously. Although "simultaneous" training still involves sequential steps on a computer, the individual optimizing steps of training both networks are interwoven.

Therefore, in view of Kapil, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have the feature of the trained image segmentation model using a training dataset, incorporated in the device of Yagi, in order to train a segmentation model using training data, where the segmentation is used to create data that improves the training of the model (as stated in Kapil ¶ [10]).
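A minimal PyTorch-style sketch, under stated assumptions, of the training idea in ¶ [0039] and [0047]: the segmentation network is trained on fake PD-L1 images produced by a CK-to-PD-L1 generator (with labels inherited from the CK segmentation) mixed with manually annotated real PD-L1 images. The names generator, segnet, and the batch tensors are assumed to exist; this is not the DASGAN code itself.

import torch
import torch.nn.functional as F

def train_step(segnet, generator, opt, ck_imgs, ck_masks, real_imgs, real_masks):
    opt.zero_grad()
    with torch.no_grad():
        fake_pdl1 = generator(ck_imgs)  # CK image -> fake PD-L1 image
    loss = (F.cross_entropy(segnet(fake_pdl1), ck_masks)       # inherited CK labels
            + F.cross_entropy(segnet(real_imgs), real_masks))  # manual annotations
    loss.backward()
    opt.step()
    return loss.item()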
Re claim 12: (Original) Yagi discloses the method of claim 9, wherein determining the score further comprises determining a plurality of scores for the plurality of staining modalities based on a plurality of segmented images corresponding to the plurality of staining modalities (e.g., the system calculates a probability of whether a tumor is present; the probability is a score that determines if a slice contains a tumor, as taught in ¶ [103]-[105] above).

Re claim 13: (Original) However, Yagi fails to specifically teach the features of the method of claim 9, wherein identifying the first biomedical image further comprises receiving the first biomedical image acquired from a tissue sample in accordance with immunostaining of the first staining modality, the first biomedical image having the at least one ROI corresponding to a feature associated with the condition in the tissue sample. However, this is well known in the art as evidenced by Kapil. Similar to the primary reference, Kapil discloses staining an image (same field of endeavor or reasonably pertinent to the problem). Kapil discloses wherein identifying the first biomedical image further comprises receiving the first biomedical image acquired from a tissue sample in accordance with immunostaining of the first staining modality, the first biomedical image having the at least one ROI corresponding to a feature associated with the condition in the tissue sample (e.g., the system discloses staining the tissue with an antibody and digitally scanning it; a region of the digitally stained image is identified as a region of interest where a tumor could be present, as taught in ¶ [33] and [34] above and ¶ [30] and [31]):

[0030] FIG. 1 shows a system 10 for generating a histopathological score for a cancer patient by computing a score for the histopathological diagnosis based on a total number of pixels determined to belong to tumor epithelial tissue that is positively stained by the diagnostic antibody. For example, the histopathological score is the Tumor Cell (TC) score. In one embodiment, the diagnostic antibody binds to the programmed death ligand 1 (PD-L1), and system 10 calculates a histopathological score for the cancer patient based on the total number of pixels of a digital image that have been determined to belong to tumor epithelium and to have been positively stained using the PD-L1 antibody. A high concentration of PD-L1 and thus a greater number of pixels determined to belong to tumor epithelium tissue that is positively stained by the PD-L1 antibody in solid tumors is indicative of a positive prognosis for patients treated by a PD-1/PD-L1 checkpoint inhibitor. Thus, system 10 analyzes digital images 11 to determine the total number of pixels that belong to first tissue that is positively stained using the diagnostic antibody PD-L1 and that is tumor epithelium. The first tissue is a specific group of tumor epithelial cells that are positively stained by the diagnostic antibody, in this embodiment by the PD-L1 antibody.

[0031] System 10 also identifies tissue that has been negatively stained by the diagnostic antibody. Tissue is considered to be "negatively stained" if the tissue is not positively stained by the diagnostic antibody. In this embodiment, the negatively stained tissue is tumor epithelial tissue that has not been positively stained by the PD-L1 antibody. The second tissue is a specific group of tumor epithelial cells that is negatively stained by the diagnostic antibody, in this embodiment by the PD-L1 antibody. In one embodiment, the first and second tissues that are positively and negatively stained by the diagnostic antibody belong to the same group of tumor epithelial cells. System 10 also identifies other tissue that belongs to different types of cells than the first and second tissues. The other tissue can be immune cells, necrotic cells, or any other cell type that is not the first or second tissue. The histopathological score computed by system 10 is displayed on a graphical user interface 15 of a user work station 16.

Therefore, in view of Kapil, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have the feature of wherein identifying the first biomedical image further comprises receiving the first biomedical image acquired from a tissue sample in accordance with immunostaining of the first staining modality, the first biomedical image having the at least one ROI corresponding to a feature associated with the condition in the tissue sample, incorporated in the device of Yagi, in order to train a segmentation model using training data, where the segmentation is used to create data that improves the training of the model (as stated in Kapil ¶ [10]).
Re claim 14: (Original) Yagi discloses the method of claim 9, wherein providing the output further comprises generating information to present based on the score for the condition and the segmented biomedical image, the segmented biomedical image identifying the one or more ROIs, the one or more ROIs corresponding to one of a presence of the condition or an absence of the condition (e.g., the system discloses that a machine learning model, once trained, can identify whether a tumor is present or not and a particular region of interest that contains the tumor, as taught in ¶ [103]-[105] above; the system may be able to further detect a tumor island).

Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Yagi, as modified by Kapil, as applied to claim 9 above, and further in view of Gadermayr (NPL document titled "Which Way Round? A Study on the Performance of Stain-Translation for Segmenting Arbitrarily Dyed Histological Images").

Re claim 11: (Original) The method of claim 9, wherein the first plurality of kernels of the first model is arranged across: to generate a corresponding plurality of second biomedical images corresponding to the first biomedical image, each of the plurality of second biomedical images in a staining modality different from the first staining modality (e.g., a second biomedical image is generated from an initial biomedical image that is stained in order to create slices of segments of the tissue evaluated; a stain can be applied to slices in order to have a different staining modality than an original image, as taught in ¶ [89]-[94]):

[0089] In some other implementations, the virtual slice generator 305 can generate virtual slices by imaging the whole tissue block in layers. For example, the virtual slice generator 305 can first image an upper surface of the tissue block to create a virtual slice corresponding to the upper surface. Then, the virtual slice generator 305 can image a layer of the tissue block just below the surface to create a second virtual slice.

[0090] In some implementations, the second virtual slice can be generated by scanning a portion of the tissue block positioned at a depth of about 1 micron to about 5 microns below the upper surface. In some implementations, other depths may be used. For example, the second virtual slice can be at a depth of between about 1 micron and about ten microns below the upper surface of the tissue block. In some implementations, the second virtual slice can be at a depth of at least 1 micron, at least 2 microns, at least 3 microns, at least 4 microns, at least 5 microns, at least 6 microns, at least 7 microns, at least 8 microns, at least 9 microns, at least 10 microns, at least 15 microns, at least 20 microns, at least 25 microns, at least 50 microns, at least 60 microns, at least 70 microns, at least 80 microns, at least 90 microns, or at least 100 microns below the upper surface of the tissue block. The virtual slice generator 305 can repeat this process at sequential depths throughout the thickness of the tissue block to generate an arbitrarily selected number of virtual slices. In some implementations, the virtual slice generator 305 can store all of the virtual slices, for example in the database 340. Thus, the virtual slice generator is able to produce images similar to those produced in step 220 of the method 200 described above. However, unlike in the method 200, there is no need to physically section the tissue block into slices and to create a slide corresponding to each slice. Thus, the method 360 can save time and expense relative to the method 200.
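A minimal sketch of the layer-wise virtual slicing loop described in ¶ [0089]-[0090]; scan_layer is a hypothetical imaging callback, and the 3-micron step is one of the depths the reference permits, chosen here as an assumption.

def generate_virtual_slices(scan_layer, block_depth_um, step_um=3.0):
    # Scan the block at successive depths; one virtual slice per depth,
    # with no physical sectioning of the tissue block.
    slices, depth = [], 0.0
    while depth <= block_depth_um:
        slices.append(scan_layer(depth))
        depth += step_um
    return slices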
[0091] In addition, producing virtual slices in this manner can help to overcome other limitations of physically slicing the tissue block to produce physical slides. For example, while the thickness of physical slices can be limited, for example, by the cutting tool used and/or the dexterity of the human operator or automated equipment, sequential scanning of layers to produce virtual slices can be done at resolutions that may not be achievable with physical slices. In some implementations, sequentially scanning the plurality of successive layers of the tissue block to produce images corresponding to virtual slices of the tissue block can include sequentially scanning at least 300 layers, at least 350 layers, at least 400 layers, at least 500 layers, or at least 1000 layers of the tissue block. In contrast, physical slices can typically be limited to around 200 or 300 slices per block, due to time constraints and physical limitations. Each of the plurality of successive layers scanned to produce the virtual slices can have a thickness between one micron and five microns, which can be substantially smaller than the typical thickness of physical slices of a tissue block. In some other implementations, each virtual slice may have a thickness between about 1 micron and about ten microns. In some implementations, each virtual slice may have a thickness of at least 1 micron, at least 2 microns, at least 3 microns, at least 4 microns, at least 5 microns, at least 6 microns, at least 7 microns, at least 8 microns, at least 9 microns, at least 10 microns, at least 15 microns, at least 20 microns, at least 25 microns, at least 50 microns, at least 60 microns, at least 70 microns, at least 80 microns, at least 90 microns, or at least 100 microns. In some implementations, each virtual slice may have a thickness of greater than 100 microns.

[0092] The method 360 also includes digitally staining the plurality of virtual slices (step 375). In some implementations, this step can be performed by the digital slice stainer 310 shown in FIG. 3A. Digitally staining the virtual slices can produce a plurality of stained virtual slices. The digital slice stainer 310 can be configured to alter each digital slice produced by the virtual slice generator 305 in a manner similar to the physical staining process described above. Thus, in some implementations, the digital slice stainer can apply a filter or can otherwise modify or alter each virtual slice to enhance the contrast and pixel intensity values of pixels corresponding to biological structures in the slice that may be of interest. For example, the digital slice stainer 310 can be configured to alter each virtual slice in a manner that mimics physical hematoxylin and eosin staining, as described above. In some implementations, the digital slice stainer 310 can retrieve the virtual slices from the database 340, apply the virtual stain to each slice, and then store the stained slices in the database 340.

[0093] In some implementations, the virtual slice stainer 310 can stain the virtual slices in an automated fashion. For example, an algorithm can be used to determine, for each pixel that makes up a virtual slice, a particular color for that pixel. In some implementations, each virtual slice can include a plurality of pixels.
Each pixel can have a pixel address corresponding to a location within the virtual slice. Each pixel can also have a particular intensity value. In some implementations, the virtual slice stainer 310 can be configured to assign, to each pixel of a virtual slice, a color value determined by converting the intensity value of the pixel to a particular color value. The virtual slice stainer 310 can include a machine learning algorithm that has been trained to convert intensity values to color values. Stated in another way, the color selected for a pixel can be related to a pixel intensity value of the pixel, and the virtual slice stainer 310 can be configured to execute the algorithm that can correlate pixel intensity values of the original virtual slice with colors to be selected for corresponding pixels of the stained virtual slice. An intensity value for a pixel can be a numerical representation of its brightness. For example, a white pixel can have a high intensity value, while a black pixel can have a low intensity value, and grey pixels can have intermediate intensity values. In some implementations, the virtual slice stainer 310 can also apply a normalization technique to the pixels of the original virtual slice, for example to ensure that the intensity values for all pixels of the virtual slice have intensity values within a predetermined range.

[0094] In some implementations, the virtual slice stainer 310 can use machine learning to virtually stain each slice. For example, the algorithm applied by the virtual slice stainer 310 can be a machine learning model. In some implementations, the results of virtually staining a slice can also be used to refine the machine learning model. For example, the virtual slice stainer 310 can stain each virtual slice by selecting a color for each pixel of the stained virtual slice. In some embodiments, the virtual slice stainer 310 can apply a digital stain to each slice using one or more color application techniques. Examples of some techniques include pseudocolor, density slicing and choropleths. In some implementations, the color for each pixel can be selected based on that pixel's intensity value, as described above. In some implementations, the color can be selected based on other factors as well. For example, the color selected for each pixel of a stained virtual slice can be chosen based on any characteristic or combination of characteristics of the corresponding pixel in the virtual slice. In some implementations, the virtual slice stainer 310 can select the color for a pixel based on a radiographic density characteristic of a corresponding pixel in the virtual slice. In some implementations, the color selected for an individual pixel can be based on a comparison of the pixel's intensity value to either a maximum pixel intensity value or a minimum pixel intensity value included in the original virtual slice. In some implementations, the color selected for an individual pixel can also be based on characteristics of other pixels, such as a comparison of the pixel's intensity value to the intensity values of other pixels, such as one or more adjacent pixels. This can help to provide similar or increased contrast levels between visual elements of the stained virtual slice and corresponding visual elements of the original virtual slice. Thus, in some implementations, pixels that appear darker in the original virtual slice (i.e., pixels having relatively low intensity values) can result in corresponding pixels having relatively darker appearances (e.g., pixels having a relatively dark purple hue) in the stained virtual slice.
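An illustrative sketch of intensity-to-color digital staining in the spirit of ¶ [0093]-[0094]: intensities are normalized to a fixed range and each pixel is mapped onto a ramp from a hematoxylin-like purple (dark pixels) to white (bright pixels). The linear ramp and the particular hue are assumptions for the example; the reference also contemplates a learned intensity-to-color model.

import numpy as np

def digital_stain(slice_gray):
    # slice_gray: float array (H, W); returns an RGB image (H, W, 3).
    lo, hi = slice_gray.min(), slice_gray.max()
    norm = (slice_gray - lo) / (hi - lo + 1e-8)  # normalization step per [0093]
    purple = np.array([0.48, 0.19, 0.58])        # assumed hematoxylin-like hue
    white = np.ones(3)
    # Darker input pixels map to darker purple; brighter pixels toward white.
    return norm[..., None] * white + (1.0 - norm[..., None]) * purple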
a plurality of second blocks corresponding to the plurality of staining modalities, the plurality of second blocks to generate a corresponding plurality of segmented biomedical images using the plurality of second biomedical images (e.g., the system discloses taking the stained slices and segmenting sections that belong to a particular slice area from areas that do not belong; this creates a plurality of segmented biomedical images using the stained images, as taught in ¶ [99]); and

[0099] The 3D reconstruction engine 320 can use the virtual slices and the registration information determined by the slice registration engine 315 to produce a 3D model of the tissue block. For example, once the registration information is determined, the 3D reconstruction engine 320 can create the 3D model of the tissue block by virtually stacking the virtual slices together in the correct order, and with the correct registration. To complete the model, the tissue segmentation engine 325 can be configured to identify portions of the resulting 3D image that correspond to tissue and portions that do not. Generally, each virtual slice may include some area corresponding to tissue and some area corresponding to empty space or the substrate material (e.g., paraffin) used to form the solid tissue block. Because the substrate material and empty space are not of interest for purposes of the 3D model, image information corresponding to these portions can be discarded. The tissue segmentation engine 325 can distinguish portions of the model corresponding to tissue from portions of the model not corresponding to tissue by examining color and intensity values for the pixels in the 3D model. For example, in some implementations, the digital slice stainer 310 can apply the digital stain to each virtual slice in such a manner that only alters the color of tissue cells, but does not change the color of the substrate material. In such implementations, the tissue segmentation engine 325 can examine the 3D model and discard portions of it corresponding to pixels whose color values indicate they have not been altered by the digital staining process.

[00100] In some implementations, the tissue segmentation engine 325 may segment the stained virtual slides. For example, for each stained virtual slice, the tissue segmentation engine 325 can be configured to segment the stained virtual slice to identify regions of interest as well as regions that may be considered insignificant (e.g., portions of the stained virtual slice that correspond to substrate material or empty space). It should be understood that, in some implementations, the tissue segmentation engine 325 can be configured to perform segmentation using other techniques.

a third block to generate the segmented biomedical image using the plurality of segmented biomedical images (e.g., the tissue block is created by stacking virtual slices and placing them in a correct order; next, the stacked slices are segmented into a biomedical image that is part of a region of interest, as taught in ¶ [98] and [99] above; the reconstruction engine can perform the feature of stacking the virtual slices to create the segmentation).
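A minimal sketch, under simplifying assumptions, of the ¶ [0099] reconstruction step: registered stained slices are stacked into a volume, and pixels the digital stain left unaltered (treated here as pure gray, R = G = B) are discarded as substrate or empty space. The registration-as-integer-shift and the gray test are assumptions for the example, not the reference's method.

import numpy as np

def reconstruct_3d(stained_slices, offsets):
    # stained_slices: list of (H, W, 3) arrays; offsets: per-slice (dy, dx) shifts.
    aligned = [np.roll(s, shift, axis=(0, 1)) for s, shift in zip(stained_slices, offsets)]
    volume = np.stack(aligned, axis=0)  # (slices, H, W, 3), in stacking order
    # Pixels unaltered by the digital stain stay gray; discard them.
    unstained = np.isclose(volume, volume[..., :1]).all(axis=-1)
    volume[unstained] = 0.0
    return volume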
However, Yagi fails to specifically teach the features of a plurality of first blocks corresponding to the plurality of staining modalities besides the first staining modality, the first plurality of blocks to generate, a plurality of second blocks corresponding to the plurality of staining modalities, the plurality of second blocks to generate, and a third block to generate. However, this is well known in the art as evidenced by Gadermayr et al. Similar to the primary reference, Gadermayr et al. discloses staining slide images (same field of endeavor or reasonably pertinent to the problem). Gadermayr discloses a plurality of first blocks corresponding to the plurality of staining modalities besides the first staining modality, the first plurality of blocks to generate, a plurality of second blocks corresponding to the plurality of staining modalities, the plurality of second blocks to generate, and a third block to generate (e.g., page 3 shows a plurality of blocks that perform staining of input image data; next, the figure shows a group of blocks that segment the stained data; the model G is used to create a segmented image, as seen on page 3 under Methods).

Therefore, in view of Gadermayr, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have the feature of a plurality of first blocks corresponding to the plurality of staining modalities besides the first staining modality, the first plurality of blocks to generate, a plurality of second blocks corresponding to the plurality of staining modalities, the plurality of second blocks to generate, and a third block to generate, incorporated in the device of Yagi, as modified by Kapil, in order to provide blocks to process the creation of stained biomedical images, segmenting the data and creating a segmented image, which can increase flexibility and save computation time when more than one segmentation is required (as stated in Gadermayr page 2).
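To make the recited block arrangement concrete, the following is a minimal sketch of a stain-translation pipeline of the kind discussed for Gadermayr: one first block (translator) per other staining modality, one second block (segmenter) per modality, and a third block that fuses the per-modality segmentations. All callables are assumed pre-trained models, and averaging as the fusion rule is an assumption for the example, not a finding about the claimed invention or the cited reference.

import numpy as np

def segment_across_modalities(image, translators, segmenters):
    # First blocks: translate the input image into each other staining modality.
    translated = {m: t(image) for m, t in translators.items()}
    # Second blocks: segment each translated image with its modality's model.
    seg_maps = [segmenters[m](img) for m, img in translated.items()]
    # Third block: fuse the per-modality segmentations into one map.
    return np.mean(seg_maps, axis=0)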