Prosecution Insights
Last updated: April 19, 2026
Application No. 18/271,203

APPARATUSES, SYSTEMS AND METHODS FOR GENERATING SYNTHETIC IMAGE SETS

Final Rejection (§§ 102, 103)
Filed: Jul 06, 2023
Examiner: SATCHER, DION JOHN
Art Unit: 2676
Tech Center: 2600 (Communications)
Assignee: UNIVERSITY OF WASHINGTON
OA Round: 2 (Final)

Grant Probability: 85% (Favorable)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 3y 0m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 85% (33 granted / 39 resolved) - above average, +22.6% vs TC avg
Interview Lift: +14.2% (moderate) among resolved cases with interview
Typical Timeline: 3y 0m average prosecution; 29 applications currently pending
Career History: 68 total applications across all art units

Statute-Specific Performance

§101: 14.2% (-25.8% vs TC avg)
§102: 15.1% (-24.9% vs TC avg)
§103: 61.9% (+21.9% vs TC avg)
§112: 8.3% (-31.7% vs TC avg)
Tech Center averages are estimates; based on career data from 39 resolved cases.
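For readers checking the arithmetic, the headline figures above follow directly from the reported counts. The sketch below is illustrative only; the 33/39 counts and the +22.6% delta are taken from the card itself, and the helper name is my own:

```python
def pct(numer, denom):
    """Percentage rounded to one decimal place."""
    return round(100.0 * numer / denom, 1)

granted, resolved = 33, 39
career_allow_rate = pct(granted, resolved)      # 84.6, displayed as 85%

# The card reports the rate as +22.6 points above the Tech Center average,
# which implies a TC baseline of roughly:
tc_average = round(career_allow_rate - 22.6, 1)  # about 62.0
```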

Office Action

Rejection bases: §102, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 01/16/2026 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Response to Amendment

Applicant's amendments filed on 01/14/2026 have been entered and made of record.

Currently pending claims: 1–8, 10–30
Independent claims: 1, 12, 17 and 23
Amended claims: 1, 4, 9, 12, 17, 23 and 27
Cancelled claims: 16

Response to Applicant's Arguments

This Office action is responsive to Applicant's Arguments/Remarks Made in an Amendment received on 01/14/2026. In view of the amendments to the specification filed on 01/14/2026, the objections to the specification are withdrawn. In view of Applicant's arguments/remarks and the amendments filed on 01/14/2026 with respect to independent claims 1, 12, 17 and 23 under 35 U.S.C. 102 and 103, the claim rejections have been fully considered, but the arguments are found not persuasive (see pages 10–12); the rejections under 35 U.S.C. 102 and 103 are therefore maintained.

Applicant argues, in summary, that the applied prior art (Stumpe, Johnson) does not disclose or suggest (see pages 10–12): "the synthetic images generated based on processing a selected slice of the first depth stack of images with neighboring slices of the first depth stack of images."

However, the Examiner respectfully disagrees with Applicant's line of reasoning. The Examiner has thoroughly reviewed Applicant's arguments but respectfully believes that the cited references reasonably and properly meet the claimed limitations. Johnson uses image stacks as input, and these image stacks can be stacks of image slices. See Johnson, ¶ [0074], "each image of the stack corresponding to a 2D slice of the three dimensional piece of tissue". These image stacks are input into a neural network, which processes the whole image stack. See Johnson, ¶ [0093], "may receive an input image stack in the form of a three dimensional image stack 552 containing images of slices using an unlabeled imaging modality". If Johnson processes a whole image stack, then it inherently processes each slice and its neighboring slices. Johnson passes the whole image stack through the machine learning model and processes all of the images. See Johnson, ¶ [0096], "In step 325, the processor can apply one or more trained statistical models to all of the series of images to generate a series of images having the fluorescent labeling or other labeling". Therefore, under this broad interpretation, Stumpe in combination with Johnson teaches, discloses or suggests Applicant's invention: training a neural network to generate images with a more specific labeling based on images with a less specific labeling, and segmenting the generated image. Thus, due to Applicant's broad claim language, Applicant's invention is not far removed from the art of record. As a result, it is respectfully submitted that the present application is not in condition for allowance.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of the appropriate paragraphs of 35 U.S.C.
102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless - (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 17–19 and 21 are rejected under 35 U.S.C. 102 as being unpatentable over Johnson et al. (US 20190384047 A1, hereafter "Johnson").

Regarding claim 17, Johnson discloses a method comprising: imaging a depth stack of images of a tissue using a first labelling technique (See Johnson, ¶ [0098], a processor, such as the processor 120 of FIG. 1B, may receive an input image stack in the form of a three-dimensional image stack 552 containing images of slices using an unlabeled imaging modality, for example, bright-field, DIC, phase contrast imaging, and/or the like); generating a synthetic depth stack of images from the imaged depth stack of images using a machine learning model (See Johnson, ¶ [0098], predicting the labeling of structures in an unlabeled testing data set by applying a trained statistical model to generate a predicted labeled data set); wherein the synthetic depth stack of images predicts an appearance of the tissue as if it was prepared using a second labelling technique (See Johnson, ¶ [0098], predicting the labeling of structures in an unlabeled testing data set by applying a trained statistical model to generate a predicted labeled data set can also be referred to as generating a fourth set of 3D images. Note: the Examiner is interpreting the predicted labeled data set as the second labelling technique); and wherein the second labelling technique is targeted to a tissue structure of interest and the first labelling technique is less specific to the tissue structure of interest (See Johnson, ¶ [0098], the fourth set of 3D images including an indication of the estimated location of the cellular structure. Predictive localization by applying a trained statistical model at step 555 is indicated by image transform functions f.sub.1, f.sub.2 . . . f.sub.m carried out by CPU/GPUs which may be part of a device 110 and a processor 120 of a system 100. The results of each transform function is a separate labelled data set which can be reassembled into a three-dimensional image stack as indicated. Image stack 556 is an example generated image stack containing predicted labeling of one or more structures); the synthetic depth stack of images generated based on processing a selected slice of the depth stack of images with neighboring slices of the depth stack of images (See Johnson, ¶ [0093], may receive an input image stack in the form of a three dimensional image stack 552 containing images of slices using an unlabeled imaging modality. Note: since the whole stack is processed, each slice and its adjacent slices are processed as well).

Regarding claim 18, Johnson discloses the method of claim 17, further comprising segmenting the tissue structure of interest based on the synthetic depth stack of images (See Johnson, ¶ [0126], Thus, this technique allows an image processing pipeline developed with one imaging modality to be leveraged to process data collected in another imaging modality. For example, 3D cell segmentation can be developed based upon fluorescent membrane markers, and then applied directly to predictions).

Regarding claim 19, Johnson discloses the method of claim 18, further comprising segmenting the tissue structure of interest based on the depth stack of images (See Johnson, ¶ [0040], In some aspects of the embodiments herein, the predictive labeling may be used to provide fast and efficient visualization of various sub-cellular structures (which are also referred to as intracellular structures), such as cell membranes, nucleus, organelles, and other structures. In some aspects of the embodiments herein, the predictive labeling may be used to assist in cell segmentation or to facilitate other aspects of performing cytometry).

Regarding claim 21, Johnson discloses the method of claim 17, further comprising training the machine learning model based on a second depth stack of images of a tissue prepared using the first labelling technique and the second labelling technique (See Johnson, ¶ [0076], At step 303 of the method 300, the processor can allocate a training data set for training a statistical model, such as a neural network. The training data can include image data of the cells/tissues captured with and without labelling. In an embodiment, the allocated training data set can be used to optimize a set of parameters of a statistical model designed to capture a desired target labeling, through an iterative training procedure).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.

Claim(s) 1–2, 5–6, 8, 10, 11–14, 15–16, 23, 26 and 28 are rejected under 35 U.S.C. 103 as being unpatentable over Stumpe et al. (US 20200394825 A1, hereafter "Stumpe") in view of Johnson et al. (US 20190384047 A1, hereafter "Johnson").

Regarding claim 1, Stumpe teaches a method comprising: labelling a first tissue sample with a first labelling technique and a second labelling technique, wherein the second labelling technique is targeted to a tissue structure of interest and the first labelling technique has a lower specificity to the tissue structure (See Stumpe, ¶ [0048], In the model training process of FIG. 7, we start at step 100 with obtaining a multitude (e.g., thousands) of pairs of images of a given tissue type, such as for example images of breast cancer tissue. The image pairs could be unstained + special stained, or stained (e.g., H&E stained) + special stained. Such pairs of images could be obtained from one or more private or publicly available pre-existing tissue image databases. ¶ [0042], An alternative is to take a section such as third section 40 of the tissue block, stain it with H&E, scan it with a whole slide scanner at different magnifications, resulting in a set of images, one of which is shown at 42, and then de-stain the specimen and then re-stain the specimen with an IHC or other special stain and generate a new set of images of the specimen with the IHC stain at different magnifications, one of which is shown at 44. Note: the Examiner is interpreting the pairs as the first and second labelling techniques); [collecting a first depth stack of images of the tissue]; training a machine learning model using the first depth stack of images to generate synthetic images of the tissue structure as they appear with the second labelling technique based on images using the first labelling technique (See Stumpe, ¶ [0051], At step 140, the precisely aligned pairs of images are supplied as training data to a machine learning predictor model. The training data is used to teach the model to predict a virtual stained image (in this example, a HER2 image) from the first or input image (the H&E image)); [the synthetic images generated based on processing a selected slice of the first depth stack of images with neighboring slices of the first depth stack of images; and segmenting the tissue structure of interest in a second depth stack of images of a second tissue sample prepared with the first labelling technique based on the trained machine learning model].
However, Stumpe fail(s) to teach: collecting a first depth stack of images of the tissue; the synthetic images generated based on processing a selected slice of the first depth stack of images with neighboring slices of the first depth stack of images; and segmenting the tissue structure of interest in a second depth stack of images of a second tissue sample prepared with the first labelling technique based on the trained machine learning model.

Johnson, working in the same field of endeavor, teaches: collecting a first depth stack of images of the tissue (See Johnson, ¶ [0098], a processor, such as the processor 120 of FIG. 1B, may receive an input image stack in the form of a three-dimensional image stack 552 containing images of slices using an unlabeled imaging modality, for example, bright-field, DIC, phase contrast imaging, and/or the like); the synthetic images generated based on processing a selected slice of the first depth stack of images with neighboring slices of the first depth stack of images (See Johnson, ¶ [0093], may receive an input image stack in the form of a three dimensional image stack 552 containing images of slices using an unlabeled imaging modality. Note: since the whole stack is processed, each slice and its adjacent slices are processed as well); segmenting the tissue structure of interest in a second depth stack of images of a second tissue sample prepared with the first labelling technique based on the trained machine learning model (See Johnson, ¶ [0040], In some aspects of the embodiments herein, the predictive labeling may be used to provide fast and efficient visualization of various sub-cellular structures (which are also referred to as intracellular structures), such as cell membranes, nucleus, organelles, and other structures. In some aspects of the embodiments herein, the predictive labeling may be used to assist in cell segmentation or to facilitate other aspects of performing cytometry).

Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Stumpe's method to collect a first depth stack of images of the tissue and to segment the tissue structure of interest in a second depth stack of images of a second tissue sample prepared with the first labelling technique based on the trained machine learning model, as taught by Johnson. The suggestion/motivation would have been to provide more accurate prediction and more complete information about the tissue (See Johnson, ¶ [0003–0007]). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Johnson with Stumpe to obtain the invention as specified in claim 1.

Regarding claim 2, Stumpe in view of Johnson teaches the method of claim 1, [wherein the machine learning model is configured to process a selected slice of the second depth stack of images along with slices adjacent to the selected slice]. However, Stumpe fail(s) to teach this limitation. Johnson, working in the same field of endeavor, teaches: wherein the machine learning model is configured to process a selected slice of the second depth stack of images along with slices adjacent to the selected slice (See Johnson, ¶ [0098], As indicated in the figure, a processor, such as the processor 120 of FIG. 1B, may receive an input image stack in the form of a three dimensional image stack 552 containing images of slices using an unlabeled imaging modality, for example, bright-field, DIC, phase contrast imaging, and/or the like. The input image stack 552 in FIG. 5 is depicted as a cube and represents a three-dimensional image stack. An example test data set can be of any suitable size, from as few as 15 images to as large as the entire set of training data/images. ¶ [0093], may receive an input image stack in the form of a three dimensional image stack 552 containing images of slices using an unlabeled imaging modality. Note: since the whole stack is processed, each slice and its adjacent slices are processed as well). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Stumpe's method so that the machine learning model processes a selected slice of the second depth stack of images along with slices adjacent to the selected slice, as taught by Johnson. The suggestion/motivation would have been to provide more accurate prediction and more complete information about the tissue (See Johnson, ¶ [0003–0007]). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Johnson with Stumpe to obtain the invention as specified in claim 2.

Regarding claim 5, Stumpe teaches the method of claim 1, wherein the first labelling technique, the second labelling technique, or combinations thereof include label-free imaging (See Stumpe, ¶ [0028], The input pairs of images could take the form of unstained image + special stained image, or stained image (typically, H&E stained image) + special stained image. Note: the Examiner is interpreting the unstained image as label-free imaging).
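The limitation disputed in this action (generating each synthetic image by processing a selected slice together with its neighboring slices) can be pictured as a sliding window over the depth stack. The sketch below is purely illustrative: the window radius, the edge clamping, and the toy averaging "model" are my assumptions, not taken from the claims or the cited references.

```python
def neighborhood(stack, i, radius=1):
    """Return the selected slice i plus its neighbors within `radius`,
    clamping at the top and bottom of the depth stack."""
    lo = max(0, i - radius)
    hi = min(len(stack), i + radius + 1)
    return stack[lo:hi]

def process_stack(stack, model, radius=1):
    """Generate one synthetic slice per input slice, each conditioned on
    the selected slice and its neighbors (a '2.5D' sliding window)."""
    return [model(neighborhood(stack, i, radius)) for i in range(len(stack))]

# Toy example: 'slices' are numbers and the 'model' just averages the window.
stack = [0, 10, 20, 30]
synthetic = process_stack(stack, model=lambda w: sum(w) / len(w))
# synthetic -> [5.0, 10.0, 20.0, 25.0]
```

Note the distinction the Applicant presses: a model that consumes a whole stack at once does not necessarily condition each output slice on a local neighborhood of slices, whereas the sliding window above makes that conditioning explicit.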
Regarding claim 6, Stumpe teaches the method of claim 1, wherein the second labelling technique is targeted to a biomarker associated with the tissue structure of interest (See Stumpe, ¶ [0002], More specific stains, known in the art as "special stains" (e.g. immunohistochemical stains, IHCs) exist to highlight specific targets, e.g. very specific tumor markers, or cellular or tissue structures. Loosely speaking, this can be regarded as a very specific kind of image recoloring. Examples of special stains include HER2 stain for detecting specific genetic mutation markers in breast cancer specimens. ¶ [0057], The input to the model is an RGB image, and the output is an RGB image with the same tissue morphology but different colors and contrast patterns, depending on the respective special stain that is predicted. Given that IHCs bind to very specific antigens and are indicators of local protein expressions (e.g. HER2 in the case of a ERBB2 breast cancer mutation)).

Regarding claim 8, Stumpe teaches the method of claim 1, further comprising diagnosing a condition, monitoring the condition, making a prediction about progression of the condition, making a prediction about treatment response, or combinations thereof based on the identified structure of interest in the second depth stack of images (See Stumpe, ¶ [0046], The virtual image of FIG. 6C may also be used for other purposes, such as providing visualizations of the tissue specimen and supporting explanations to supplement predictions made about the tissue specimen, such as tumor detections, diagnosis or classification of the tissue sample).

Regarding claim 10, Stumpe in view of Johnson teaches the method of claim 1, [further comprising generating a synthetic depth stack based on the second depth stack and the machine learning model, wherein the synthetic depth stack predicts the appearance of the second tissue if it were prepared with the second labelling technique]. However, Stumpe fail(s) to teach this limitation. Johnson, working in the same field of endeavor, teaches: generating a synthetic depth stack based on the second depth stack and the machine learning model, wherein the synthetic depth stack predicts the appearance of the second tissue if it were prepared with the second labelling technique (See Johnson, ¶ [0098], The results of each transform function is a separate labelled data set which can be reassembled into a three-dimensional image stack as indicated. Image stack 556 is an example generated image stack containing predicted labeling of one or more structures). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Stumpe's method to generate a synthetic depth stack based on the second depth stack and the machine learning model, wherein the synthetic depth stack predicts the appearance of the second tissue if it were prepared with the second labelling technique, as taught by Johnson. The suggestion/motivation would have been to provide more accurate prediction and more complete information about the tissue (See Johnson, ¶ [0003–0007]). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Johnson with Stumpe to obtain the invention as specified in claim 10.
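Johnson's ¶ [0098], as quoted above, describes per-slice transform functions f.sub.1 . . . f.sub.m whose outputs are reassembled into a three-dimensional stack. A minimal sketch of that reassembly pattern follows; the function names and toy transforms are my own illustration, not Johnson's implementation.

```python
def predict_labeled_stack(input_stack, transforms):
    """Apply one trained transform per slice (in the spirit of Johnson's
    f_1 ... f_m) and reassemble the per-slice predictions into a
    synthetic depth stack."""
    assert len(transforms) == len(input_stack)
    return [f(slice_) for f, slice_ in zip(transforms, input_stack)]

# Toy example: each 'slice' is a list of intensities, and each transform
# rescales its slice to stand in for a predicted labeling.
input_stack = [[1, 2], [3, 4]]
transforms = [lambda s: [2 * v for v in s], lambda s: [3 * v for v in s]]
synthetic_stack = predict_labeled_stack(input_stack, transforms)
# synthetic_stack -> [[2, 4], [9, 12]]
```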
Regarding claim 11, Stumpe in view of Johnson teaches the method of claim 10, [further comprising segmenting the tissue structure of interest in the second depth stack of images based on the synthetic depth stack]. However, Stumpe fail(s) to teach this limitation. Johnson, working in the same field of endeavor, teaches: segmenting the tissue structure of interest in the second depth stack of images based on the synthetic depth stack (See Johnson, ¶ [0040], In some aspects of the embodiments herein, the predictive labeling may be used to provide fast and efficient visualization of various sub-cellular structures (which are also referred to as intracellular structures), such as cell membranes, nucleus, organelles, and other structures. In some aspects of the embodiments herein, the predictive labeling may be used to assist in cell segmentation or to facilitate other aspects of performing cytometry). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Stumpe's method to segment the tissue structure of interest in the second depth stack of images based on the synthetic depth stack, as taught by Johnson. The suggestion/motivation would have been to provide more accurate prediction and more complete information about the tissue (See Johnson, ¶ [0003–0007]). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Johnson with Stumpe to obtain the invention as specified in claim 11.
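The segmentation step that the rejections map to Johnson ¶ [0040] could, in the simplest case, be a threshold over the synthetic (virtually labelled) stack. Thresholding is a stand-in of my choosing; neither the claims nor the references specify the segmentation algorithm.

```python
def segment_stack(synthetic_stack, threshold=0.5):
    """Binary mask of the structure of interest: mark voxels whose
    predicted-label intensity exceeds the threshold."""
    return [[1 if v > threshold else 0 for v in slice_]
            for slice_ in synthetic_stack]

# Toy 2-slice stack of predicted-label intensities in [0, 1].
stack = [[0.2, 0.9], [0.6, 0.1]]
mask = segment_stack(stack)   # -> [[0, 1], [1, 0]]
```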
Regarding claim 12, Stumpe discloses a method comprising: generating a first set of images of a tissue sample (See Stumpe, ¶ [0042], An alternative is to take a section such as third section 40 of the tissue block, stain it with H&E, scan it with a whole slide scanner at different magnifications, resulting in a set of images, one of which is shown at 42, and then de-stain the specimen and then re-stain the specimen with an IHC or other special stain and generate a new set of images of the specimen with the IHC stain at different magnifications, one of which is shown at 44. Note: the Examiner is interpreting the pairs as the first and second labelling techniques); generating a second set of images of the tissue, wherein the second set includes targeted labelling of a tissue structure of interest of the tissue sample, and wherein the first set of images is less specific to the tissue structure (See Stumpe, ¶ [0042], cited above; ¶ [0043], It is possible to repeat this process for several different sections of the tissue block and apply different special stains to the sections in order to build up a set of image pairs (unstained/stained) with different special stains. Likewise, the procedure of FIG. 3 can be repeated so as to generate sets of H&E/special stain image pairs for different special stains. Note: the Examiner is interpreting the pairs as the first and second labelling techniques); and training a machine learning model to generate synthetic images from the first set of images which predict an appearance of the second set of images (See Stumpe, ¶ [0051], At step 140, the precisely aligned pairs of images are supplied as training data to a machine learning predictor model. The training data is used to teach the model to predict a virtual stained image (in this example, a HER2 image) from the first or input image (the H&E image). ¶ [0047], Model training will now be described in conjunction with FIG. 7. In the present discussion we will focus on a multitude (thousands) of pairs of tissue images for model training of a given type, such as breast cancer tissue, with one image being an H&E image of the specimen and the other image being a HER2 (special stained) image of the same specimen); [the machine learning model based on iteratively processing a selected slice of the first set of images with neighboring slices of the first set of images].

However, Stumpe fail(s) to teach the machine learning model based on iteratively processing a selected slice of the first set of images with neighboring slices of the first set of images. Johnson, working in the same field of endeavor, teaches: the machine learning model based on iteratively processing a selected slice of the first set of images with neighboring slices of the first set of images (See Johnson, ¶ [0093], may receive an input image stack in the form of a three dimensional image stack 552 containing images of slices using an unlabeled imaging modality. Note: since the whole stack is processed, each slice and its adjacent slices are processed as well). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Stumpe's method so that the machine learning model iteratively processes a selected slice of the first set of images with neighboring slices of the first set of images, as taught by Johnson. The suggestion/motivation would have been to provide more accurate prediction and more complete information about the tissue (See Johnson, ¶ [0003–0007]). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Johnson with Stumpe to obtain the invention as specified in claim 12.

Regarding claim 13, Stumpe teaches the method of claim 12, further comprising: [generating a third set of images of a second tissue sample, wherein the third set of images are less specific to the tissue structure of interest; and segmenting the tissue structure of interest in the third set of images based on using the trained machine learning model on the third set of images]. However, Stumpe fail(s) to teach these limitations. Johnson, working in the same field of endeavor, teaches: generating a third set of images of a second tissue sample, wherein the third set of images are less specific to the tissue structure of interest (See Johnson, ¶ [0107], The training data set in the example in FIG. 8 includes pairs of three dimensional image stacks (A, B and C), each pair consisting of one image stack obtained through transmitted light imaging (e.g., stacks 862A, 862B, and 862C) and the other corresponding labelled image stack obtained through fluorescence imaging (e.g., 864A, 864B, and 864C), with a specific fluorescent tag); and segmenting the tissue structure of interest in the third set of images based on using the trained machine learning model on the third set of images (See Johnson, ¶ [0138], Thus, one aspect of the embodiments herein relates to training a statistical model, such as a U-net or other deep neural network, to predict nuclear compartments, cell membrane or cell compartments, or other cellular structures from an image that was captured without applying fluorescent dye, and then using that model to facilitate cell counting, segmentation, or categorization (e.g., sorting) from subsequent images of live or dead cells. Such a trained statistical model can facilitate a dye-free kinetic cell assay). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Stumpe's method to generate a third set of images of a second tissue sample, wherein the third set of images are less specific to the tissue structure of interest, and to segment the tissue structure of interest in the third set of images using the trained machine learning model, as taught by Johnson. The suggestion/motivation would have been to provide more accurate prediction and more complete information about the tissue (See Johnson, ¶ [0003–0007]). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.
Therefore, it would have been obvious to combine Johnson with Stumpe to obtain the invention as specified in claim 13. Regarding claim 14, Stumpe discloses the method of claim 12, further comprising training a general adversarial network (GAN) as the machine learning model (See Stumpe, ¶ [0013], Examples of machine learning predictor models that are suitable for the present purposes include generative adversarial networks). Regarding claim 15, Stumpe in view of Johnson teaches the method of claim 12, [wherein the first set of images and the second set of images are a depth stack of the tissue]. However, Stumpe fails to teach wherein the first set of images and the second set of images are a depth stack of the tissue. Johnson, working in the same field of endeavor, teaches: wherein the first set of images and the second set of images are a depth stack of the tissue (See Johnson, ¶ [0098], The input image data includes a set of image stacks 664 acquired through a labeled imaging method. The set 664 includes a first set of 3D images of multiple sets of 3D images, of which 654 is one. The input data also includes a set of image stacks 662, or second set of 3D images of multiple sets of 3D images, acquired through transmitted light imaging 662, of which 652 is one image stack). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Stumpe’s reference such that the first set of images and the second set of images are a depth stack of the tissue, based on the method of Johnson’s reference. The suggestion/motivation would have been to provide more accurate predictions and more complete information about the tissue (See Johnson, ¶ [0003–0007]). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.
Therefore, it would have been obvious to combine Johnson with Stumpe to obtain the invention as specified in claim 15. Regarding claim 16, Stumpe in view of Johnson teaches the method of claim 15, [further comprising training the machine learning model to generate the synthetic images based on iteratively processing a selected slice of the depth stack along with neighboring slices of the depth stack]. However, Stumpe fails to teach further comprising training the machine learning model to generate the synthetic images based on iteratively processing a selected slice of the depth stack along with neighboring slices of the depth stack. Johnson, working in the same field of endeavor, teaches: further comprising training the machine learning model to generate the synthetic images based on iteratively processing a selected slice of the depth stack along with neighboring slices of the depth stack (See Johnson, ¶ [0098], As indicated in the figure, a processor, such as the processor 120 of FIG. 1B, may receive an input image stack in the form of a three dimensional image stack 552 containing images of slices using an unlabeled imaging modality, for example, bright-field, DIC, phase contrast imaging, and/or the like. The input image stack 552 in FIG. 5 is depicted as a cube and represents a three-dimensional image stack. An example test data set can be of any suitable size, from as few as 15 images to as large as the entire set of training data/images). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Stumpe’s reference to include training the machine learning model to generate the synthetic images based on iteratively processing a selected slice of the depth stack along with neighboring slices of the depth stack, based on the method of Johnson’s reference.
The suggestion/motivation would have been to provide more accurate predictions and more complete information about the tissue (See Johnson, ¶ [0003–0007]). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Johnson with Stumpe to obtain the invention as specified in claim 16. Regarding claim 23, Stumpe in view of Johnson teaches an apparatus (See Stumpe, [Abstract], A machine learning predictor model is trained to generate a prediction of the appearance of a tissue sample stained with a special stain such as an IHC stain from an input image that is either unstained or stained with H&E) comprising: [a microscope configured to generate a depth stack of images of a tissue prepared with a first labelling technique]; a processor (See Stumpe, ¶ [0014], In another aspect, a computer system is disclosed comprising one or more processing units and memory implementing one or more (or more preferably a plurality: “a suite”) of machine learning predictor models, the models generating data in the form of a prediction of the appearance of a virtual special stained image of a tissue sample of a respective given tissue type from data representing an input unstained or H&E stained image of the given tissue sample); and a memory encoded with executable instructions which, when executed by the processor (See Stumpe, ¶ [0014], In another aspect, a computer system is disclosed comprising one or more processing units and memory implementing one or more (or more preferably a plurality: “a suite”) of machine learning predictor models, the models generating data in the form of a prediction of the appearance of a virtual special stained image of a tissue sample of a respective given tissue type from data representing an input unstained or H&E stained image of the
given tissue sample), cause the apparatus to: [generate a synthetic depth stack of images from the imaged depth stack of images using a machine learning model, wherein the synthetic depth stack of images predict an appearance of the tissue like it was prepared with a second labelling technique and wherein the second labelling technique is targeted to a tissue structure of interest and the first labelling technique is less specific to the tissue structure of interest, the synthetic depth stack of images generated based on processing a selected slice of the imaging depth stack with neighboring slices of the imaging depth stack]. However, Stumpe fails to teach a microscope configured to generate a depth stack of images of a tissue prepared with a first labelling technique; generate a synthetic depth stack of images from the imaged depth stack of images using a machine learning model, wherein the synthetic depth stack of images predict an appearance of the tissue like it was prepared with a second labelling technique and wherein the second labelling technique is targeted to a tissue structure of interest and the first labelling technique is less specific to the tissue structure of interest, the synthetic depth stack of images generated based on processing a selected slice of the imaging depth stack with neighboring slices of the imaging depth stack. Johnson, working in the same field of endeavor, teaches: a microscope configured to generate a depth stack of images of a tissue prepared with a first labelling technique (See Johnson, ¶ [0059], In an embodiment, the method includes step 201, in which the processor 120 receive a first set of three-dimensional (3D) microscopy images and a second set of 3D microscopy images.
In an embodiment, the first set of 3D microscopy images and the second set of 3D microscopy images are received via a communication interface, such as the I/O unit 140, from an image storage device or directly from an image sensor of a microscope); generate a synthetic depth stack of images from the imaged depth stack of images using a machine learning model, wherein the synthetic depth stack of images predict an appearance of the tissue like it was prepared with a second labelling technique and wherein the second labelling technique is targeted to a tissue structure of interest and the first labelling technique is less specific to the tissue structure of interest (See Johnson, ¶ [0098], Predicting the labeling of structures in an unlabeled testing data set by applying a trained statistical model to generate a predicted labeled data set can also be referred to as generating a fourth set of 3D images, the fourth set of 3D images including an indication of the estimated location of the cellular structure. Predictive localization by applying a trained statistical model at step 555 is indicated by image transform functions f.sub.1, f.sub.2 . . . f.sub.m carried out by CPU/GPUs which may be part of a device 110 and a processor 120 of a system 100. The results of each transform function is a separate labelled data set which can be reassembled into a three-dimensional image stack as indicated. Image stack 556 is an example generated image stack containing predicted labeling of one or more structures), the synthetic depth stack of images generated based on processing a selected slice of the imaging depth stack with neighboring slices of the imaging depth stack (See Johnson, ¶ [0093], may receive an input image stack in the form of a three dimensional image stack 552 containing images of slices using an unlabeled imaging modality. Note: Since the whole stack is processed, the selected slice and its adjacent slices are processed as well).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Stumpe’s reference to include a microscope configured to generate a depth stack of images of a tissue prepared with a first labelling technique, and to generate a synthetic depth stack of images from the imaged depth stack of images using a machine learning model, wherein the synthetic depth stack of images predict an appearance of the tissue like it was prepared with a second labelling technique and wherein the second labelling technique is targeted to a tissue structure of interest and the first labelling technique is less specific to the tissue structure of interest, the synthetic depth stack of images generated based on processing a selected slice of the imaging depth stack with neighboring slices of the imaging depth stack, based on the method of Johnson’s reference. The suggestion/motivation would have been to provide more accurate predictions and more complete information about the tissue (See Johnson, ¶ [0003–0007]). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Johnson with Stumpe to obtain the invention as specified in claim 23. Regarding claim 26, Stumpe in view of Johnson teaches the apparatus of claim 23, [wherein the machine learning model is trained on a depth stack of images of a second tissue prepared with the first labelling technique and the second labelling technique]. However, Stumpe fails to teach wherein the machine learning model is trained on a depth stack of images of a second tissue prepared with the first labelling technique and the second labelling technique.
Johnson, working in the same field of endeavor, teaches: wherein the machine learning model is trained on a depth stack of images of a second tissue prepared with the first labelling technique and the second labelling technique (See Johnson, ¶ [0098], Predicting the labeling of structures in an unlabeled testing data set by applying a trained statistical model to generate a predicted labeled data set can also be referred to as generating a fourth set of 3D images, the fourth set of 3D images including an indication of the estimated location of the cellular structure. Predictive localization by applying a trained statistical model at step 555 is indicated by image transform functions f.sub.1, f.sub.2 . . . f.sub.m carried out by CPU/GPUs which may be part of a device 110 and a processor 120 of a system 100. The results of each transform function is a separate labelled data set which can be reassembled into a three-dimensional image stack as indicated. Image stack 556 is an example generated image stack containing predicted labeling of one or more structures). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Stumpe’s reference such that the machine learning model is trained on a depth stack of images of a second tissue prepared with the first labelling technique and the second labelling technique, based on the method of Johnson’s reference. The suggestion/motivation would have been to provide more accurate predictions and more complete information about the tissue (See Johnson, ¶ [0003–0007]). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Johnson with Stumpe to obtain the invention as specified in claim 26.
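By way of illustration of the limitation in dispute, the “selected slice of the depth stack processed with neighboring slices” scheme recited in claims 12, 16 and 23 can be sketched as follows. This is a minimal sketch under assumed conventions (a stack indexed as (Z, H, W), a window of one neighboring slice on each side, and edge clamping); it is not code from Stumpe, Johnson, or the application.

```python
import numpy as np

def slice_with_neighbors(stack: np.ndarray, z: int, radius: int = 1) -> np.ndarray:
    """Return the selected slice z of a depth stack shaped (Z, H, W) together
    with its neighboring slices, stacked along the first axis as
    (2*radius + 1, H, W). Indices at the stack edges are clamped so the
    window always stays inside the stack."""
    zs = np.clip(np.arange(z - radius, z + radius + 1), 0, stack.shape[0] - 1)
    return stack[zs]

# Hypothetical transmitted-light depth stack of 8 slices, 64x64 pixels each.
stack = np.zeros((8, 64, 64))
# Iteratively process every selected slice together with its neighbors.
inputs = [slice_with_neighbors(stack, z) for z in range(stack.shape[0])]
```

Iterating z over the stack produces the per-slice inputs; in the paired-training setting addressed for claims 15 and 26, a stack acquired with the second labelling technique would supply the target slice for each input.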
Regarding claim 28, Stumpe teaches the apparatus of claim 23, wherein the machine learning model is trained on another processor (See Stumpe, ¶ [0069], The computer 300 is provided with processors which execute software instructions implementing the machine learning model(s) including model parameters, algorithms, etc. In use, via an application programming interface, the user operating the workstations 250 selects input images locally from memory or data stores local to the workstation and the images along with associated metadata, e.g., indicating tissue type, are provided over computer networks 304 to the computer system 300 implementing the machine learning predictor model(s)). Claim(s) 3 is rejected under 35 U.S.C. 103 as being unpatentable over Stumpe et al. (US 20200394825 A1, hereafter, "Stumpe") in view of Johnson et al. (US 20190384047 A1, hereafter, "Johnson") further in view of Wang et al. (See NPL attached, "Video-to-Video Synthesis", hereafter, "Wang"). Regarding claim 3, Stumpe in view of Johnson teaches the method of claim 2, [wherein the machine learning model is a vid2vid general adversarial network (GAN)]. However, Stumpe and Johnson fail to teach wherein the machine learning model is a vid2vid general adversarial network (GAN). Wang, working in the same field of endeavor, teaches: wherein the machine learning model is a vid2vid general adversarial network (GAN) (See Wang, [Pg. 3, ln. 34-35, 3 Video-to-Video Synthesis], We propose a conditional GAN framework for this conditional video distribution matching task. Let G be a generator that maps an input source sequence to a corresponding output frame sequence). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Stumpe’s reference such that the machine learning model is a vid2vid general adversarial network (GAN), based on the method of Wang’s reference.
The suggestion/motivation would have been to improve synthetic data generation (See Wang, [Pg. 8, ln. 1–40]). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Wang with Stumpe and Johnson to obtain the invention as specified in claim 3. Claim(s) 4 and 27 are rejected under 35 U.S.C. 103 as being unpatentable over Stumpe et al. (US 20200394825 A1, hereafter, "Stumpe") in view of Johnson et al. (US 20190384047 A1, hereafter, "Johnson") further in view of Kenny (US 20130044933 A1, hereafter, "Kenny") and further in view of Wu et al. (US 20190188446 A1, hereafter, "Wu"). Regarding claim 4, Stumpe in view of Johnson teaches the method of claim 1, wherein the first labelling technique includes labelling with H&E analogs, Mason's tri-chrome, periodic acid-Schiff (PAS), 4',6-diamidino-2-phenylindole (DAPI) or combinations thereof (See Stumpe, ¶ [0035], As noted above, the methods of this disclosure provide for generation of a virtual stained image of a tissue specimen showing the appearance of the specimen as if it were stained with a special stain such as an IHC stain, from an input image which may be either an unstained image or an image of a specimen stained with H&E. Note: The Examiner interprets the claim as requiring only one of these techniques), and [wherein the second labelling technique includes labelling with aptamers, antibodies, peptides, nanobodies, antibody fragments, enzyme-activated probes, and fluorescent in situ hybridization (FISH) probes].
However, Stumpe and Johnson fail to teach wherein the second labelling technique includes labelling with aptamers, antibodies, peptides, nanobodies, antibody fragments, enzyme-activated probes. Kenny, working in the same field of endeavor, teaches: wherein the second labelling technique includes labelling with aptamers, antibodies, peptides, nanobodies, antibody fragments, enzyme-activated probes (See Kenny, ¶ [0016], The term "binder" refers to a molecule that may bind to one or more targets in the biological sample. A binder may specifically bind to a target. Suitable binders may include one or more of natural or modified peptides, proteins (e.g., antibodies, affibodies, or aptamers), nucleic acids (e.g., polynucleotides, DNA, RNA, or aptamers); polysaccharides (e.g., lectins, sugars), lipids, enzymes, enzyme substrates or inhibitors, ligands, receptors, antigens, or haptens. Note: The Examiner interprets the claim, as written, as requiring a combination of all of these techniques). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Stumpe’s reference such that the second labelling technique includes labelling with aptamers, antibodies, peptides, nanobodies, antibody fragments, enzyme-activated probes, based on the method of Kenny’s reference. The suggestion/motivation would have been to more accurately distinguish and identify internal features of a cell (See Kenny, ¶ [0004–0009]). However, Stumpe, Johnson and Kenny fail to teach fluorescent in situ hybridization (FISH) probes. Wu, working in the same field of endeavor, teaches: fluorescent in situ hybridization (FISH) probes (See Wu, ¶ [0027], Alternatively, the first tissue sample may be stained with a molecular stain, such as CD68 IHC or CD163 IF.
Some examples of molecular staining methods that may be used to stain the first tissue sample include immunohistochemistry (IHC), immunofluorescence (IF), in situ hybridization (ISH), fluorescent in situ hybridization (FISH), and RNA (f)ISH. As additional examples, the first tissue sample may be stained with Giemsa stain or Picrosirius red. Note: The Examiner interprets the claim, as written, as requiring a combination of all of these techniques). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Stumpe’s reference to include fluorescent in situ hybridization (FISH) probes, based on the method of Wu’s reference. The suggestion/motivation would have been to combine the techniques to further enhance imaging of the cellular structure (See Wu, ¶ [0002–0009]). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Kenny and Wu with Stumpe and Johnson to obtain the invention as specified in claim 4. Regarding claim 27, claim 27 is rejected for the same reasons as claim 4; the arguments presented above for claim 4 apply equally to claim 27, and the remaining limitations, which parallel those of claim 4, are not repeated herein but are incorporated by reference. Claim(s) 7 is rejected under 35 U.S.C. 103 as being unpatentable over Stumpe et al. (US 20200394825 A1, hereafter, "Stumpe") in view of Johnson et al. (US 20190384047 A1, hereafter, "Johnson") further in view of Sase et al. (US 20200218054 A1, hereafter, “Sase”). Regarding claim 7, Stumpe in view of Johnson teaches the method of claim 1, further comprising: [collecting the first depth stack of images with a first microscope; and collecting the second depth stack of images with a second microscope].
However, Stumpe and Johnson fail to teach collecting the first depth stack of images with a first microscope; and collecting the second depth stack of images with a second microscope. Sase, working in the same field of endeavor, teaches: collecting the first depth stack of images with a first microscope; and collecting the second depth stack of images with a second microscope (See Sase, ¶ [0088], FIG. 6 illustrates an example of a display screen 63a of the display 63 that displays the Z stack images obtained by each of the first and second microscopes 30 and 40). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Stumpe’s reference to include collecting the first depth stack of images with a first microscope and collecting the second depth stack of images with a second microscope, based on the method of Sase’s reference. The suggestion/motivation would have been to integrate more of the tissue data and provide more complete information about the tissue for processing (See Sase, ¶ [0003–0007]). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Sase with Stumpe and Johnson to obtain the invention as specified in claim 7. Claim(s) 20 is rejected under 35 U.S.C. 103 as being unpatentable over Johnson et al. (US 20190384047 A1, hereafter, "Johnson") in view of Stumpe et al. (US 20200394825 A1, hereafter, "Stumpe"). Regarding claim 20, Johnson teaches the method of claim 17, [further comprising diagnosing a condition, monitoring the condition, making a prediction about progression of the condition or combinations thereof based on the synthetic depth stack of images].
However, Johnson fails to teach further comprising diagnosing a condition, monitoring the condition, making a prediction about progression of the condition or combinations thereof based on the synthetic depth stack of images. Stumpe, working in the same field of endeavor, teaches: further comprising diagnosing a condition, monitoring the condition, making a prediction about progression of the condition or combinations thereof based on the synthetic depth stack of images (See Stumpe, ¶ [0046], The virtual image of FIG. 6C may also be used for other purposes, such as providing visualizations of the tissue specimen and supporting explanations to supplement predictions made about the tissue specimen, such as tumor detections, diagnosis or classification of the tissue sample). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Johnson’s reference to include diagnosing a condition, monitoring the condition, making a prediction about progression of the condition or combinations thereof based on the synthetic depth stack of images, based on the method of Stumpe’s reference. The suggestion/motivation would have been to accurately diagnose tumors and enhance prediction (See Stumpe, ¶ [0002–0019]). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Stumpe with Johnson to obtain the invention as specified in claim 20. Claim(s) 22 is rejected under 35 U.S.C. 103 as being unpatentable over Johnson et al. (US 20190384047 A1, hereafter, "Johnson") in view of Glaser et al. (See NPL attached, "Multi-immersion open-top light-sheet microscope for high-throughput imaging of cleared tissues", hereafter, "Glaser").
Regarding claim 22, Johnson teaches the method of claim 17, further comprising imaging the depth stack of images (See Johnson, ¶ [0098], a processor, such as the processor 120 of FIG. 1B, may receive an input image stack in the form of a three-dimensional image stack 552 containing images of slices using an unlabeled imaging modality, for example, bright-field, DIC, phase contrast imaging, and/or the like) of the tissue using an [open top light sheet microscope]. However, Johnson fails to teach an open top light sheet microscope. Glaser, working in the same field of endeavor, teaches: an open top light sheet microscope (See Glaser, [Pg. 2, Col. 1, ln. 55-56], In order to improve the ease-of-use and throughput of LSFM, open-top light-sheet (OTLS) microscopes have been proposed). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Johnson’s reference to use an open top light sheet microscope based on the method of Glaser’s reference. The suggestion/motivation would have been to improve synthetic image generation (See Glaser, [Pg. 2, Results]). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Glaser with Johnson to obtain the invention as specified in claim 22. Claim(s) 24 is rejected under 35 U.S.C. 103 as being unpatentable over Stumpe et al. (US 20200394825 A1, hereafter, "Stumpe") in view of Johnson et al. (US 20190384047 A1, hereafter, "Johnson") and further in view of Thagaard et al. (US 11,276,165 B2, hereafter, "Thagaard"). Regarding claim 24, Stumpe in view of Johnson teaches the apparatus of claim 23, [wherein the tissue has a thickness of 5um or greater]. However, Stumpe and Johnson fail to teach wherein the tissue has a thickness of 5um or greater.
Thagaard, working in the same field of endeavor, teaches: wherein the tissue has a thickness of 5um or greater (See Thagaard, [Col. 7, ln. 14–17], Specimens are typically sliced at a range of 3 μm-50 μm. In one embodiment of the presently disclosed method for labelling of histopathological images the first and second seconds are thinner than 1 mm). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Stumpe’s reference such that the tissue has a thickness of 5um or greater, based on the method of Thagaard’s reference. The suggestion/motivation would have been to provide accurate data for the synthesis of data (See Thagaard, [Col. 1, ln. 17–59 and Col. 2, ln. 1–57]). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Thagaard with Stumpe and Johnson to obtain the invention as specified in claim 24. Claim(s) 25 is rejected under 35 U.S.C. 103 as being unpatentable over Stumpe et al. (US 20200394825 A1, hereafter, "Stumpe") in view of Johnson et al. (US 20190384047 A1, hereafter, "Johnson") and further in view of Glaser et al. (See NPL attached, "Multi-immersion open-top light-sheet microscope for high-throughput imaging of cleared tissues", hereafter, "Glaser"). Regarding claim 25, Stumpe in view of Johnson teaches the apparatus of claim 23, wherein the microscope is an [open top light sheet (OTLS) microscope]. However, Stumpe and Johnson fail to teach an open top light sheet (OTLS) microscope. Glaser, working in the same field of endeavor, teaches: an open top light sheet (OTLS) microscope (See Glaser, [Pg. 2, Col. 1, ln. 55-56], In order to improve the ease-of-use and throughput of LSFM, open-top light-sheet (OTLS) microscopes have been proposed).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Johnson’s reference to use an open top light sheet (OTLS) microscope based on the method of Glaser’s reference. The suggestion/motivation would have been to improve synthetic image generation (See Glaser, [Pg. 2, Results]). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Glaser with Stumpe and Johnson to obtain the invention as specified in claim 25. Claim(s) 29–30 are rejected under 35 U.S.C. 103 as being unpatentable over Stumpe et al. (US 20200394825 A1, hereafter, "Stumpe") in view of Johnson et al. (US 20190384047 A1, hereafter, "Johnson") further in view of Veidman et al. (US 20200372635 A1, hereafter, "Veidman"). Regarding claim 29, Stumpe in view of Johnson teaches the apparatus of claim 23, wherein the memory further includes instructions which, when executed by the processor (See Stumpe, ¶ [0014], In another aspect, a computer system is disclosed comprising one or more processing units and memory implementing one or more (or more preferably a plurality: “a suite”) of machine learning predictor models, the models generating data in the form of a prediction of the appearance of a virtual special stained image of a tissue sample of a respective given tissue type from data representing an input unstained or H&E stained image of the given tissue sample), [cause the apparatus to generate a segmentation mask based on the synthetic depth stack of images].
However, Stumpe fails to teach cause the apparatus to generate a segmentation based on the synthetic depth stack of images. Johnson, working in the same field of endeavor, teaches: cause the apparatus to generate a segmentation based on the synthetic depth stack of images (See Johnson, ¶ [0040], In some aspects of the embodiments herein, the predictive labeling may be used to provide fast and efficient visualization of various sub-cellular structures (which are also referred to as intracellular structures), such as cell membranes, nucleus, organelles, and other structures. In some aspects of the embodiments herein, the predictive labeling may be used to assist in cell segmentation or to facilitate other aspects of performing cytometry). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Stumpe’s reference to cause the apparatus to generate a segmentation based on the synthetic depth stack of images, based on the method of Johnson’s reference. The suggestion/motivation would have been to provide more accurate predictions and more complete information about the tissue (See Johnson, ¶ [0003–0007]). However, Stumpe and Johnson fail to teach a mask. Veidman, working in the same field of endeavor, teaches: a mask (See Veidman, ¶ [0186], The patch-level segmentation code may be implemented as a segmentation CNN (e.g., based on Unet). The patch-level segmentation may output the segmentation as a mask). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Stumpe’s reference to include a mask, based on the method of Veidman’s reference. The suggestion/motivation would have been to more accurately classify tissue samples (See Veidman, ¶ [0002–0006]).
Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Johnson and Veidman with Stumpe to obtain the invention as specified in claim 29.

Regarding claim 30, Stumpe in view of Johnson, further in view of Veidman, teaches the apparatus of claim 29, wherein the memory further includes instructions which, when executed by the processor (See Johnson, ¶ [0040], quoted above), [cause the apparatus to generate the segmentation mask based on the synthetic depth stack of images and the imaged depth stack of images].

However, Stumpe fails to teach causing the apparatus to generate the segmentation based on the synthetic depth stack of images and the imaged depth stack of images. Johnson, working in the same field of endeavor, teaches this limitation (See Johnson, ¶ [0040], quoted above).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Stumpe's reference to cause the apparatus to generate the segmentation based on the synthetic depth stack of images and the imaged depth stack of images, based on the method of Johnson's reference. The suggestion/motivation would have been to provide a more accurate prediction and more complete information about the tissue (See Johnson, ¶ [0003]–[0007]).

However, Stumpe and Johnson fail to teach a mask. Veidman, working in the same field of endeavor, teaches a mask (See Veidman, ¶ [0186], quoted above). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Stumpe's reference to include a mask, based on the method of Veidman's reference. The suggestion/motivation would have been to more accurately classify tissue samples (See Veidman, ¶ [0002]–[0006]).

Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Johnson and Veidman with Stumpe to obtain the invention as specified in claim 30.

Allowable Subject Matter

Claim 9 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. Claim 9 contains subject matter that is not disclosed or made obvious in the cited art.
In regard to claim 9, when considering claim 9 as a whole, the prior art of record fails to disclose or render obvious, alone or in combination: "The method of claim 1, further comprising taking a third depth stack of images of the second tissue sample and generating a mosaic image based on overlapping edges of the second and the third depth stack of images."

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Leshem et al. (US 20200302144 A1) teaches a microscope for adaptive sensing that may comprise an illumination assembly, an image capture device configured to collect light from a sample illuminated by the assembly, and a processor. The processor may be configured to execute instructions which cause the microscope to capture, using the image capture device, an initial image set of the sample; identify, in response to the initial image set, an attribute of the sample; determine, in response to identifying the attribute, a three-dimensional (3D) process for sensing the sample; and generate, using the determined 3D process, an output image set comprising more than one focal plane. Various other methods, systems, and computer-readable media are also disclosed.
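As an illustrative aid only (not part of the prosecution record), the mosaic-generation limitation quoted above for claim 9 can be sketched as follows, assuming two depth stacks that share a known overlap width along one edge and simple linear feather blending. The function name, array layout, and blending scheme are hypothetical, not taken from the application:

```python
import numpy as np

def mosaic_stacks(stack_a: np.ndarray, stack_b: np.ndarray, overlap: int) -> np.ndarray:
    """Stitch two depth stacks of shape (Z, H, W) that overlap by `overlap`
    columns along their shared edge, linearly feathering the overlap region."""
    z, h, wa = stack_a.shape
    _, _, wb = stack_b.shape
    out = np.zeros((z, h, wa + wb - overlap), dtype=np.float32)
    out[:, :, :wa - overlap] = stack_a[:, :, :wa - overlap]   # exclusive part of A
    out[:, :, wa:] = stack_b[:, :, overlap:]                  # exclusive part of B
    # Linear blend weights across the overlapping edge: A fades out, B fades in.
    w = np.linspace(1.0, 0.0, overlap, dtype=np.float32)
    out[:, :, wa - overlap:wa] = (w * stack_a[:, :, wa - overlap:] +
                                  (1.0 - w) * stack_b[:, :, :overlap])
    return out
```

A real implementation would first register the stacks to find the overlap; this sketch only shows the blending of already-aligned overlapping edges, with mosaic width wa + wb - overlap.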
Mazo (US 20180240235 A1) teaches a method for segmentation of an image of a target patient, comprising: providing a target 2D slice and nearest-neighbor 2D slice(s) of a 3D anatomical image, and computing, by a trained multi-slice fully convolutional neural network (multi-slice FCN), a segmentation region including a defined intra-body anatomical feature that extends spatially across the target 2D slice and the nearest-neighbor 2D slice(s). The target 2D slice and each of the nearest-neighbor 2D slice(s) are processed by a corresponding contracting component of sequential contracting components of the multi-slice FCN according to the order of the target 2D slice and the nearest-neighbor 2D slice(s) based on the sequence of 2D slices extracted from the 3D anatomical image, and the outputs of the sequential contracting components are combined and processed by a single expanding component that outputs a segmentation mask for the target 2D slice.

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DION J SATCHER, whose telephone number is (703) 756-5849.
The examiner can normally be reached Monday through Thursday, 5:30 am to 2:30 pm, and Friday, 5:30 am to 9:30 am PST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Henok Shiferaw, can be reached at (571) 272-4637. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DION J SATCHER/
Patent Examiner, Art Unit 2676

/Henok Shiferaw/
Supervisory Patent Examiner, Art Unit 2676
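As a non-authoritative aid for mapping the argued limitation ("processing a selected slice of the first depth stack of images with neighboring slices"), and the analogous multi-slice input described in Mazo, the following sketch assembles a selected slice together with its neighboring slices as a single multi-channel model input. The function name, window radius, and end-clamping policy are illustrative assumptions, not taken from any cited reference:

```python
import numpy as np

def neighbor_window(stack: np.ndarray, index: int, radius: int = 1) -> np.ndarray:
    """Return the slice at `index` of a depth stack (Z, H, W) stacked with its
    `radius` neighbors on each side as channels, clamping at the stack ends."""
    z = stack.shape[0]
    picks = [min(max(index + off, 0), z - 1) for off in range(-radius, radius + 1)]
    return np.stack([stack[i] for i in picks], axis=0)  # shape (2*radius+1, H, W)
```

Such a window would then be fed to a model (e.g., sequential contracting components as in Mazo) that predicts output for the selected slice; the sketch covers only input construction.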

Prosecution Timeline

Jul 06, 2023
Application Filed
Oct 15, 2025
Non-Final Rejection — §102, §103
Jan 12, 2026
Applicant Interview (Telephonic)
Jan 12, 2026
Examiner Interview Summary
Jan 14, 2026
Response Filed
Feb 28, 2026
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586218
MOTION ESTIMATION WITH ANATOMICAL INTEGRITY
2y 5m to grant Granted Mar 24, 2026
Patent 12579787
INSTRUMENT RECOGNITION METHOD BASED ON IMPROVED U2 NETWORK
2y 5m to grant Granted Mar 17, 2026
Patent 12573066
Depth Estimation Using a Single Near-Infrared Camera and Dot Illuminator
2y 5m to grant Granted Mar 10, 2026
Patent 12555263
SYSTEMS AND METHODS FOR TWO-STAGE OBJECT DETECTION
2y 5m to grant Granted Feb 17, 2026
Patent 12548140
DETERMINING PROCESS DEVIATIONS THROUGH VIDEO ANALYSIS
2y 5m to grant Granted Feb 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
85%
Grant Probability
99%
With Interview (+14.2%)
3y 0m
Median Time to Grant
Moderate
PTA Risk
Based on 39 resolved cases by this examiner. Grant probability derived from career allow rate.
