DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 11/15/2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 19 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 19 recites the limitation "the threshold value". There is insufficient antecedent basis for this limitation in the claim, as neither claim 19 nor claim 1, from which it depends, previously recites a threshold value.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-3, 5-8, 12 and 20-21 are rejected under 35 U.S.C. 103 as being unpatentable over Ho et al. (US PG-Pub US 20230036156 A1) in view of Chen et al. ("Deep Learning Based Automatic Immune Cell Detection for Immunohistochemistry Images") and further in view of Li et al. ("COUNTERFACTUAL HYPOTHESIS TESTING OF TUMOR MICROENVIRONMENT SCENARIOS THROUGH SEMANTIC IMAGE SYNTHESIS").
Regarding Claim 1, Ho teaches a system for counterfactual optimization comprising: non-transitory memory (¶[0191], “Features and functions described for the systems 100 and 500 may be stored on and implemented from one or more non-transitory computer-readable media 1212 of the computing device 1200”; ¶[0191] discloses non-transitory computer-readable media stored on a computing device) configured to store: executable instructions; and spatial omics training data comprising a plurality of training images each comprising a plurality of molecule channels (¶[0031], “providing the plurality of brightfield images and the plurality of corresponding fluorescence images to the trained model comprises: combining the plurality of brightfield images and the plurality of corresponding fluorescence images into one or more multi-channel images to be provided to the trained model.”); and a hardware processor in communication with the non-transitory memory (¶[0192], “The computer-readable media 1212 may include executable computer-readable code stored thereon for programming a computer (e.g., comprising a processor(s) and GPU(s)) to the techniques herein.”; ¶[0192] discloses a processor executing computer-readable code stored in memory), the hardware processor programmed by the executable instructions to perform: generating a training image label for each of the plurality of training images indicating presence of at least one T cell in the training image (¶[0026], “In a variation of this embodiment, the trained model comprises a segmentation model trained to detect immune cells and having a first machine learning algorithm trained using a plurality of training brightfield images and a plurality of corresponding training fluorescence images having immune cells labeled.”; ¶[0037], “In a variation of this embodiment, the immune cells include T cells and/or Natural Killer (NK) cells.”; ¶[0026] discloses using a machine learning model to detect T cells/immune cells in images in which the immune cells in the training data are previously labeled, and ¶[0037] discloses that the labeled immune cells are T cells); generating a plurality of masked training images from the plurality of training images with any T cell present in a training image of the plurality of training images masked in a masked training image of the plurality of masked training images generated (¶[0040], “providing the plurality of brightfield images to a trained model and, using the trained model, identifying and distinguishing cancer organoid cells and immune cells within the plurality of brightfield images, generating an organoid segmentation mask and an immune segmentation mask”; ¶[0040] discloses providing a plurality of brightfield images and generating a segmentation mask of the T cells in each image. ¶[0196] discloses inputting the brightfield images into a segmentation model to generate a plurality of mask images of immune cells, as further shown in Figure 17, Element 1114, the T-cell segmentation mask.);
training a model with the plurality of masked training images (¶[0148], “the paired brightfield and mask images were used for training at least one image segmentation model.”; ¶[0148] discloses using masked images to train a model).
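For clarity of the record, the labeling and masking operations recited in claim 1 can be sketched as follows. This is an illustrative example only; the array shapes, the mask value of 0, and the function name `label_and_mask` are assumptions of the illustration, not features drawn from Ho.

```python
import numpy as np

def label_and_mask(image, t_cell_mask, mask_value=0.0):
    """Generate a binary presence label and a masked copy of a training image.

    image: (H, W, C) array of molecule-channel intensities.
    t_cell_mask: (H, W) boolean array marking pixels belonging to T cells.
    Returns (label, masked_image): label is 1 if any T cell is present,
    else 0; masked_image has T-cell pixels set to mask_value in every channel.
    """
    label = int(t_cell_mask.any())
    masked = image.copy()
    masked[t_cell_mask] = mask_value
    return label, masked

# Example: a 4x4 training image with 2 molecule channels and one T-cell pixel.
img = np.ones((4, 4, 2))
tmask = np.zeros((4, 4), dtype=bool)
tmask[1, 2] = True
label, masked = label_and_mask(img, tmask)
```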
Ho does not explicitly teach training a model comprising a classifier with the plurality of training images as input and the training image label as output.
Chen teaches training a model comprising a classifier with the plurality of training images as input and the training image label as output (Page 22, 2.2 Cell Detection, Last Paragraph, “we first unmix the test RGB image ¯I to obtain ¯Idab, then apply the trained CNN classifier to the patches centered at each pixel of the test image ¯Idab. Let y = C(p) denote the CNN classifier that takes the patch p as input and produces the probabilistic label y for the patch, here y ∈ [0, 1]. Hence, a probability map M as shown in Fig.4 is created for each test image, in which higher probability means that the pixel is more likely to be the centroid of the immune cell”. As disclosed in this section of the prior art, a classifier is used to generate a probabilistic label indicating whether each patch in the image pertains to an immune cell.)
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the claimed invention as taught by Ho with Chen in order to train a classifier to label the image data. One skilled in the art would have been motivated to modify Ho in this manner in order to determine the distribution and localization of the differentially expressed biomarkers of immune cells (such as T-cells or B-cells) in cancerous tissue for an immune response study. (Chen, Abstract)
Ho and Chen do not explicitly teach performing a counterfactual optimization to determine a tumor perturbation using a second plurality of images with no T cell present in each of the plurality of images.
Li teaches performing a counterfactual optimization to determine a tumor perturbation using a second plurality of images with no T cell present in each of the plurality of images. (Page 5, Second Paragraph, “The first part of our biological discovery toolkit comprises of techniques for counterfactual hypothesis testing of cell-cell interactions (Table 2). These techniques quantify how one cell’s protein expression at a pixel level reacts to newly introduced adjacent cells (Fig. 2). We can ask questions such as “how would a tumor cell be affected by adding CD8 T cells next to it?” (Fig. 1B) by artificially inserting adjacent T cells in the segmentation patch and observing the change in predicted protein expression on the tumor cell. CCIGAN’s conditional and generative nature allows for hypothesis testing of user-manipulated cell patches with different cell types, location and morphology. CCIGAN can discover significantly correlative cell-cell interactions in a dataset but is not a replacement for in vivo or wet lab testing to establish causality”; in this section of the prior art, it is disclosed that a counterfactual hypothesis test is performed by adding T cells that were not present next to a tumor cell and determining the resulting changes in protein expression on the tumor cell. The examiner interprets the term “perturbation” to mean making small, controlled changes to data or model parameters, which is done in the cited section of the prior art and described in the abstract, where the author discloses “we develop a generative model that allows users to test hypotheses about the effect of cell-cell interactions on protein expression through in silico perturbation”.)
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the claimed invention as taught by Ho and Chen with Li in order to perform counterfactual optimization by perturbing the data with T cells. One skilled in the art would have been motivated to modify Ho and Chen in this manner in order to learn relationships between all imaging channels simultaneously and yield biological insights from multiple imaging technologies in silico, capturing known tumor-immune cell interactions missed by other state-of-the-art GAN models. (Li, Abstract)
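The in silico perturbation described in Li can be illustrated with the following minimal sketch. The toy stand-in model, the step size, and the 0.5 decision threshold are assumptions made purely for illustration; they do not represent Li's CCIGAN architecture.

```python
import numpy as np

def counterfactual_perturbation(image, model, channel, step=0.1, max_iters=50):
    """Gradient-free sketch of a counterfactual search.

    Repeatedly applies a small, controlled change to one molecule channel
    until the model's predicted T-cell probability crosses 0.5.
    model: callable mapping an image to a probability in [0, 1].
    Returns the perturbed image and the number of steps taken.
    """
    perturbed = image.copy()
    for i in range(max_iters):
        if model(perturbed) >= 0.5:
            return perturbed, i
        perturbed[..., channel] += step  # the perturbation itself
    return perturbed, max_iters

# Toy stand-in model: probability rises with the mean of channel 0.
toy_model = lambda im: min(1.0, float(im[..., 0].mean()))
img = np.full((4, 4, 2), 0.2)
out, steps = counterfactual_perturbation(img, toy_model, channel=0)
```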
Regarding Claim 2, the combination of Ho, Chen and Li teach the system of claim 1, where Ho further teaches wherein the T cell is a CD8+ T cell (¶[0098], “the molecule tagged by the dye could be a cell death marker, or the molecule could be a protein that is specific to a certain type of cell, etc. CD8 and CD4 are example markers of certain T-cells”; ¶[0098] discloses that the T cell is a CD8+ T cell).
Regarding Claim 3, the combination of Ho, Chen and Li teach the system of claim 1, where Ho further teaches wherein the spatial omics training data comprises spatial omics data generated from a tumor sample of a subject and/or a tumor sample for each of a plurality of subjects or from a plurality of tumor samples of a subject or a plurality of tumor samples for each of a plurality of subjects (¶[0005], “capturing, at different time points, a plurality of brightfield images and corresponding fluorescence images of the culturing well comprising the first combination of cancer organoid cells and immune cells; providing the plurality of brightfield images and the plurality of corresponding fluorescence images to a trained model”; ¶[0005] discloses capturing brightfield images of cancer cells and immune cells of a subject).
Regarding Claim 5, the combination of Ho, Chen and Li teach the system of claim 3, where Ho further teaches wherein a subject comprises a mammal, or a human (¶[0086] discloses that the tumor cells detected are from a human).
Regarding Claim 6, the combination of Ho, Chen and Li teach the system of claim 5, where Ho further teaches wherein the subject is a cancer subject (¶[0108] discloses the tumor organoids are cancer cells from a subject).
Regarding Claim 7, the combination of Ho, Chen and Li teach the system of claim 1, where Ho further teaches wherein the spatial omics data is generated using imaging mass cytometry (¶[0171], “Identification of organoid characteristics may be calculated by first growing the organoids and then performing Flow cytometry/Mass cytometry to identify characteristics such as surface and intracellular protein expression, which may be detected by antibody-conjugated stains.”; ¶[0171] discloses using mass cytometry to determine characteristics in the image data).
Regarding Claim 8, the combination of Ho, Chen and Li teach the system of claim 1, where Ho further teaches wherein the spatial omics data comprises proteomics data, transcriptomics data, or a combination thereof (¶[0090], “With the present techniques, various cancer (organoid) cell characteristics and immune cell based therapies characteristics may be measured and used to create co-cultures and/or to determine the efficacy of immune therapies. The various characteristics include cell mortality, transcriptional profile”; ¶[0090] discloses transcriptomic data of the cells is determined when imaging the cell sample).
Regarding Claim 12, the combination of Ho, Chen and Li teach the system of claim 1, where Li further teaches wherein each of the plurality of molecule channels corresponds to a different protein (Page 2, 2 Results, Paragraph 2, “CCIGAN takes as input labeled cell segmentations patches (subsection of a full deep-learning based cell segmentation usually included in the dataset) and generates a multiplexed cell image where each channel is a spatial prediction of a particular protein being expressed on the cells in the segmentation patch”; this section of the prior art discloses that the channels of the cell images pertain to different proteins expressed on the cell).
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the claimed invention as taught by Ho and Chen with Li in order to determine different types of protein in each molecule channel. One skilled in the art would have been motivated to modify Ho and Chen in this manner in order to learn relationships between all imaging channels simultaneously and yield biological insights from multiple imaging technologies in silico, capturing known tumor-immune cell interactions missed by other state-of-the-art GAN models. (Li, Abstract)
Regarding Claim 20, the combination of Ho, Chen and Li teach the system of claim 1, where Ho further teaches wherein the classifier comprises a neural network, a deep neural network, a convolutional neural network, a fully convolutional neural network, or a combination thereof (¶[0133], “model 314 is configured with a machine learning algorithm and, more particularly, as a convolution neural network, such as a Mask R-CNN model.”; ¶[0133] discloses a CNN is used as the segmentation model).
Regarding Claim 21, the combination of Ho, Chen and Li teach the system of claim 1, where Ho further teaches wherein the classifier comprises U-Net, ResNet-18, EfficientNet-B0, MedViT, or a combination thereof (¶[0203], “Both segmentation models 1108 and 1110 were implemented using a TensorFlow implementation of U-Net with 5 convolutional paired up-sampling and down-sampling layers, and 16 features in the first layer.”; ¶[0203] discloses a U-Net architecture for the segmentation model).
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Ho et al. (US PG-Pub US 20230036156 A1) in view of Chen et al. ("Deep Learning Based Automatic Immune Cell Detection for Immunohistochemistry Images") and Li et al. ("COUNTERFACTUAL HYPOTHESIS TESTING OF TUMOR MICROENVIRONMENT SCENARIOS THROUGH SEMANTIC IMAGE SYNTHESIS"), and further in view of Jackson et al. (US PG-Pub US 20210089750 A1).
Regarding Claim 13, while the combination of Ho, Chen and Li teach the system of claim 1, they do not explicitly teach wherein the training image label is a binary value, wherein 0 indicates absence of any T cell in a corresponding training image of the training image label, and/or wherein 1 indicates presence of at least one T cell in a corresponding training image of the training image label.
Jackson teaches wherein the training image label is a binary value, wherein 0 indicates absence of any T cell in a corresponding training image of the training image label, and/or wherein 1 indicates presence of at least one T cell in a corresponding training image of the training image label. (¶[0054], “As used herein, a “confluence mask” refers to a binary image in which pixels are identified as belonging to the one or more cells in the biological specimen such that pixels corresponding to the one or more cells are assigned a value of 1 and the remaining pixels corresponding to background are assigned a value of 0 or vice versa.” ¶[0057], “As used herein, a “cell-by-cell segmentation mask” refers to an image having binary pixelation (i.e., each pixel is assigned a value of 0 or 1 by the processor) such that the cells of the biological specimen 110 are each displayed as a distinct region-of-interest. The cell-by-cell segmentation mask may advantageously permit label-free counting of cells displayed therein, permit determination of the entire area of individual adherent cells, permit analysis based on cell texture metrics and cell shape descriptors, and/or permit detection of individual cell boundaries, including for adherent cells that tend to be formed in sheets, where each cell may contact a number of other adjacent cells in the biological specimen 110.”
¶[0054] discloses generating a binary value for the pixels in the image, assigning each pixel a value of 1 if it pertains to a cell of interest or 0 if it does not. ¶[0057] further discloses the process of binary pixelation, in which each cell pixel in a region of interest is assigned 0 or 1 by a processor.)
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the claimed invention as taught by Ho, Chen and Li with Jackson in order to determine a binary value when detecting cells in the image. One skilled in the art would have been motivated to modify Ho, Chen and Li in this manner in order to analyze images of a biological specimen using a computational model. (Jackson, ¶[0003])
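The binary pixel labeling described in Jackson ¶[0054] and ¶[0057] can be sketched as follows. The threshold-based rule and the example values are illustrative assumptions; Jackson's confluence mask may be produced by other means.

```python
import numpy as np

def confluence_mask(intensity, threshold):
    """Binary mask in the sense of Jackson ¶[0054]: pixels above the
    threshold are assigned 1 (cell of interest), the rest 0 (background)."""
    return (intensity > threshold).astype(np.uint8)

patch = np.array([[0.1, 0.9],
                  [0.8, 0.2]])
mask = confluence_mask(patch, 0.5)  # every pixel is assigned 0 or 1
```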
Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Ho et al. US PG-Pub(US 20230036156 A1) in view of Chen et al. ("Deep Learning Based Automatic Immune Cell Detection for Immunohistochemistry Images") in view of Li et al. ("COUNTERFACTUAL HYPOTHESIS TESTING OF TUMOR MICROENVIRONMENT SCENARIOS THROUGH SEMANTIC IMAGE SYNTHESIS") in further view of Blau et al. US PG-Pub(US 20230088271 A1).
Regarding Claim 14, while the combination of Ho, Chen and Li teach the system of claim 1, they do not explicitly teach wherein generating the training image label comprises: generating the training image label by clustering cells in the training image and one or more other training images of the plurality of training images.
Blau teaches wherein generating the training image label comprises generating the training image label by clustering cells in the training image and one or more other training images of the plurality of training images (Blau, ¶[0088], “The labels generated from unsupervised clustering are then used to in machine learning algorithm such as k-nearest neighbor-based label propagation in high dimensional space or training a neural network classifier to label the remaining lower confidence cells. This results in the abundance and spatial dynamics of 27 cell types (>90% of cells are annotated; cell type annotations were manually validated) during regeneration including myogenic cells, immune cells, and fibroblasts (FIG. 8). Specifically, FIG. 8 illustrates representative spatial maps of 27 cell types in uninjured and regenerating TA muscles (days 3 and 6 after injury) of wild type mice. Each dot represents a single cell.”; ¶[0088] discloses performing clustering of cell images to generate a label for the cell images).
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the claimed invention as taught by Ho, Chen and Li with Blau in order to cluster the images to generate a label. One skilled in the art would have been motivated to modify Ho, Chen and Li in this manner in order to identify cell linkages and rendering transcriptome profiles to spatial coordinates. (Blau, Abstract)
Claims 19 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Ho et al. (US PG-Pub US 20230036156 A1) in view of Chen et al. ("Deep Learning Based Automatic Immune Cell Detection for Immunohistochemistry Images") and Li et al. ("COUNTERFACTUAL HYPOTHESIS TESTING OF TUMOR MICROENVIRONMENT SCENARIOS THROUGH SEMANTIC IMAGE SYNTHESIS"), and further in view of Sahu et al. (US PG-Pub US 20210365420 A1).
Regarding Claim 19, the combination of Ho, Chen and Li teach the system of claim 1, where Ho further teaches wherein the hardware processor is programmed by the executable instructions to perform: (i) determining the threshold value (¶[0138], “Immune cell segmentation labels were then detected from these fluorescence images using a threshold-based segmentation method in CellProfiler.”; ¶[0138] discloses determining a threshold when performing segmentation).
Li teaches (ii) applying the tumor perturbation to a plurality of test images with a T-cell distribution to determine a perturbed T-cell distribution (Page 5, Second Paragraph, “The first part of our biological discovery toolkit comprises of techniques for counterfactual hypothesis testing of cell-cell interactions (Table 2). These techniques quantify how one cell’s protein expression at a pixel level reacts to newly introduced adjacent cells (Fig. 2). We can ask questions such as “how would a tumor cell be affected by adding CD8 T cells next to it?” (Fig. 1B) by artificially inserting adjacent T cells in the segmentation patch and observing the change in predicted protein expression on the tumor cell. CCIGAN’s conditional and generative nature allows for hypothesis testing of user-manipulated cell patches with different cell types, location and morphology. CCIGAN can discover significantly correlative cell-cell interactions in a dataset but is not a replacement for in vivo or wet lab testing to establish causality”; in this section of the prior art, it is disclosed that a counterfactual hypothesis test is performed by adding T cells that were not present next to a tumor cell and determining the resulting changes in protein expression on the tumor cell. The examiner interprets the term “perturbation” to mean making small, controlled changes to data or model parameters, which is done in the cited section of the prior art and described in the abstract, where the author discloses “we develop a generative model that allows users to test hypotheses about the effect of cell-cell interactions on protein expression through in silico perturbation”.)
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the claimed invention as taught by Ho and Chen with Li in order to perform counterfactual optimization by perturbing the data with T cells. One skilled in the art would have been motivated to modify Ho and Chen in this manner in order to learn relationships between all imaging channels simultaneously and yield biological insights from multiple imaging technologies in silico, capturing known tumor-immune cell interactions missed by other state-of-the-art GAN models. (Li, Abstract)
However, they do not explicitly teach optionally wherein determining the threshold value comprises determining the threshold value using root mean squared error (RMSE).
Sahu teaches optionally wherein determining the threshold value using root mean squared error (RMSE) (¶[0073], “The gradient descent method may include the steps of: [0074] providing an initial adjustment weight, e.g. 0.5, as adjustment weight, and repeating the following steps until a convergence condition is fulfilled, e.g., a difference, such as Root-Mean-Squared-Error (RMSE), between the median data correctness value for the respective bin and the bin data correctness value for the respective bin being below a recalibration threshold value” … ¶[0076], “determining a difference between the median data correctness value for the respective bin and the bin data correctness value, e.g., RMSE, determining a gradient of difference, e.g., the RMSE loss, with respect to the adjustment weight, [0077] adjusting the adjustment weight based on the adjustment weight, a learning rate, and the gradient.”; ¶[0073]-¶[0077] disclose using RMSE as the measure compared against a recalibration threshold value).
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the claimed invention as taught by Ho, Chen and Li with Sahu in order to use RMSE when determining a threshold. One skilled in the art would have been motivated to modify Ho, Chen and Li in this manner in order to improve recalibration of the ground truth dataset. (Sahu, ¶[0069])
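The RMSE-based convergence test described in Sahu ¶[0073] can be sketched as follows; the function names and the example threshold are illustrative assumptions, not Sahu's implementation.

```python
import math

def rmse(predictions, targets):
    """Root-mean-squared error between two equal-length sequences."""
    n = len(predictions)
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(predictions, targets)) / n)

def converged(predictions, targets, threshold):
    """Sahu-style stopping rule: converged once RMSE falls below threshold."""
    return rmse(predictions, targets) < threshold
```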
Regarding Claim 22, while the combination of Ho, Chen and Li teach the system of claim 1, they do not explicitly teach wherein training the model comprises training the model using stochastic gradient descent and/or T cell prediction loss.
Sahu teaches wherein training the model comprises training the model using stochastic gradient descent and/or T cell prediction loss (¶[0072], “The learning of an adjustment weight can for example be performed based on machine learning methods, such as neural networks or gradient descent method, e.g., stochastics gradient descent method.”; ¶[0072] discloses using stochastic gradient descent in training the model).
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the claimed invention as taught by Ho, Chen and Li with Sahu in order to train using stochastic gradient descent. One skilled in the art would have been motivated to modify Ho, Chen and Li in this manner in order to improve recalibration of the ground truth dataset. (Sahu, ¶[0069])
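Stochastic gradient descent, as referenced in Sahu ¶[0072], can be illustrated with a minimal sketch. The one-dimensional logistic model, learning rate, and toy data below are illustrative assumptions, not Sahu's method.

```python
import math
import random

def sgd_train(data, lr=0.1, epochs=20, seed=0):
    """Minimal stochastic gradient descent for a 1-D logistic classifier.

    data: list of (x, y) pairs with binary labels y in {0, 1}. Each update
    uses the log-loss gradient of a single randomly drawn example, which is
    what makes the method 'stochastic'.
    """
    rng = random.Random(seed)
    w, b = 0.0, 0.0
    for _ in range(epochs * len(data)):
        x, y = rng.choice(data)
        p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # predicted probability
        w -= lr * (p - y) * x                     # gradient step on weight
        b -= lr * (p - y)                         # gradient step on bias
    return w, b

# Toy separable data: negative x -> label 0, positive x -> label 1.
data = [(-2.0, 0), (-1.0, 0), (1.0, 1), (2.0, 1)]
w, b = sgd_train(data)
```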
Allowable Subject Matter
Claims 9, 15, 17, 23 and 26-27 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:
Regarding Claim 9, the primary reason for the allowance of the claims is the inclusion of the limitation, “wherein a training image of the plurality of training images is 48 pixels by 48 pixels in size, and/or a training image of the plurality of training images corresponds to a section of 48 μm by 48 μm in size”, in the claim which is not found in the prior art references. It is noted that the examiner has not found any other prior art to anticipate or obviate the quoted claim limitations supra, when read in light/combination of the other claimed limitations within the cited claims. Also, it is noted that the quoted limitations, in combination with the other claim limitations of the cited claim, deem the claim patentable, not just the consideration of the quoted limitations by themselves.
Regarding Claim 15, the primary reason for the allowance of the claims is the inclusion of the limitation, “wherein masked pixels in each of the plurality of masked training images comprise values of 0 or an average value of pixels in the masked training image or the corresponding training image, or wherein the plurality of masked training images comprises, for each of the plurality training images, the training image if no T cell is present in the training image, or the training image with any T cell present in the training image masked.”, in the claim which is not found in the prior art references. It is noted that the examiner has not found any other prior art to anticipate or obviate the quoted claim limitations supra, when read in light/combination of the other claimed limitations within the cited claims. Also, it is noted that the quoted limitations, in combination with the other claim limitations of the cited claim, deem the claim patentable, not just the consideration of the quoted limitations by themselves.
Regarding Claim 17, the primary reason for the allowance of the claims is the inclusion of the limitation, “wherein the model comprises a fully-connected layer connected to a last layer of the classifier, wherein the fully-connected layer outputs a value between 0 and 1, and/or wherein the model outputs a value of 0 if the output of the fully-connected layer is below a threshold value and a value of 1 if the output of the fully-connected layer is at least the threshold value.”, in the claim which is not found in the prior art references. It is noted that the examiner has not found any other prior art to anticipate or obviate the quoted claim limitations supra, when read in light/combination of the other claimed limitations within the cited claims. Also, it is noted that the quoted limitations, in combination with the other claim limitations of the cited claim, deem the claim patentable, not just the consideration of the quoted limitations by themselves.
Regarding Claim 23, the primary reason for the allowance of the claims is the inclusion of the limitation, “wherein (i) the spatial omics training data comprises the second plurality of images with no T cell present in each the image, (ii) the plurality of training images comprises the second plurality of images with no T cell present in the image; and/or (iii) the second plurality of images with no T cell present in the image comprises no training image of the plurality of training images.”, in the claim which is not found in the prior art references. It is noted that the examiner has not found any other prior art to anticipate or obviate the quoted claim limitations supra, when read in light/combination of the other claimed limitations within the cited claims. Also, it is noted that the quoted limitations, in combination with the other claim limitations of the cited claim, deem the claim patentable, not just the consideration of the quoted limitations by themselves.
Regarding Claim 26, the primary reason for the allowance of the claims is the inclusion of the limitation, “wherein the counterfactual optimization comprises a term corresponding to increasing predicted probability of T cells, a term corresponding to minimizing change, and/or a term corresponding to a shift closer to training data.”, in the claim which is not found in the prior art references. It is noted that the examiner has not found any other prior art to anticipate or obviate the quoted claim limitations supra, when read in light/combination of the other claimed limitations within the cited claims. Also, it is noted that the quoted limitations, in combination with the other claim limitations of the cited claim, deem the claim patentable, not just the consideration of the quoted limitations by themselves.
Regarding Claim 27, the primary reason for the allowance of the claims is the inclusion of the limitation, “wherein a tumor perturbation comprises, for each of the second plurality of images, a change in an intensity in each of one or more of the plurality of molecule channels.”, in the claim which is not found in the prior art references. It is noted that the examiner has not found any other prior art to anticipate or obviate the quoted claim limitations supra, when read in light/combination of the other claimed limitations within the cited claims. Also, it is noted that the quoted limitations, in combination with the other claim limitations of the cited claim, deem the claim patentable, not just the consideration of the quoted limitations by themselves.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HAN D HOANG whose telephone number is (571)272-4344. The examiner can normally be reached Monday-Friday 8-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, JOHN M VILLECCO can be reached at 571-272-7319. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HAN HOANG/Examiner, Art Unit 2661