DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments (see Remarks), filed 01/07/2025, with respect to claims 1-12, 15-17, 27, 29, and 43 have been fully considered but, respectfully, are not persuasive.
The applicant argues on page 10, “The independent claims have been amended to specify that the at least one image comprises an image of a physically expanded sample of a tissue or a cell specimen. This feature was previously recited by claim 13, which stood rejected in view of Weisenfeld. This reference mentions the concept of "expansion microscopy" because it discusses "a preparative step of expansion microscopy." (Weisenfeld at [0152]). This reference does not disclose receiving and processing a physically expanded sample as recited in the amended claims. Thus, one skilled in the art would not have been motivated to combine Weisenfeld with Yip or the other cited art. Claims 1, 15, and 29 are allowable for at least this reason.”
In response, the Office respectfully disagrees. Based on the breadth of the claim language, the prior art by WEISENFELD et al. (US 20210150707 A1) explicitly teaches receiving, with at least one processor, image data associated with at least one image at a first resolution, the at least one image comprising an image of a physically expanded sample of a tissue or a cell specimen (Fig. 1. Paragraph [0506]-WEISENFELD discloses a region of interest can be identified in a biological sample using a variety of different techniques, e.g., expansion microscopy, bright field microscopy, dark field microscopy, phase contrast microscopy, electron microscopy, fluorescence microscopy, reflection microscopy, interference microscopy, and confocal microscopy, and combinations thereof. Further in paragraph [0152]-WEISENFELD discloses a biological sample embedded in a hydrogel can be isometrically expanded. In paragraph [0154]-WEISENFELD discloses isometric expansion can be performed by anchoring one or more components of a biological sample to a gel, followed by gel formation, proteolysis, and swelling. In paragraph [0156]-WEISENFELD discloses isometric expansion of the sample can increase the spatial resolution of the subsequent analysis of the sample, and isometric expansion of the biological sample can result in increased resolution in spatial profiling (e.g., single-cell profiling). In paragraph [0157]-WEISENFELD discloses isometric expansion can enable three-dimensional spatial resolution of the subsequent analysis of the sample. Further in paragraph [0158]-WEISENFELD discloses a biological sample is isometrically expanded to a volume at least 2×, 2.1×, 2.2×, 2.3×, 2.4×, 2.5×, 2.6×, 2.7×, 2.8×, 2.9×, 3×, 3.1×, 3.2×, 3.3×, 3.4×, 3.5×, 3.6×, 3.7×, 3.8×, 3.9×, 4×, 4.1×, 4.2×, 4.3×, 4.4×, 4.5×, 4.6×, 4.7×, 4.8×, or 4.9× its non-expanded volume (wherein the sample is physically expanded for image analysis at a specific resolution through isometric expansion (i.e., a preparative step for expansion microscopy) and/or expansion microscopy itself)).
The applicant argues on page 11, “In view of the foregoing amendments and remarks, reconsideration and allowance of claims 1-12, 14-17, 27, 29, and 43 are respectfully requested.”
In response, the Office respectfully disagrees for reasons stated above and below.
The applicant is encouraged to amend the claims to overcome the current grounds of rejection and/or the prior art of record.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 8, 10, 12, 15, 27, 29, and 43 are rejected under 35 U.S.C. 103 as being unpatentable over YIP et al. (US 20200258223 A1), hereinafter referenced as YIP, in view of WEISENFELD et al. (US 20210150707 A1), hereinafter referenced as WEISENFELD, and in further view of MIKHNO et al. (US 20170039706 A1), hereinafter referenced as MIKHNO.
Regarding claim 1, YIP explicitly teaches a computer implemented method (Fig. 1. Paragraph [0084]-YIP discloses an imaging-based biomarker prediction system is formed of a deep learning framework configured and trained to directly learn from histopathology slides and predict the presence of biomarkers in medical images. The deep learning frameworks may be configured and trained to analyze medical images and identify biomarkers that indicate the presence of a tumor, a tumor state/condition, or information about a tumor of the tissue sample. In paragraph [0411]-YIP discloses FIG. 38 illustrates an example computing device 3800 for implementing the imaging-based biomarker prediction system 100 of FIG. 1.) comprising:
removing, with the at least one processor (Fig. 38, #3810 called processing units. Paragraph [0411]. Please also read paragraph [0111 and 0113]), a background from the at least one image (Fig. 1. Paragraph [0117]-YIP discloses the image pre-processing sub-system 114 may perform further image processing that removes artifacts and other noise from received images by doing preliminary tissue detection 114d, for example, to identify regions of the images corresponding to histopathology stained tissue for subsequent analysis, classification, and segmentation. In paragraph [0118]-YIP discloses in multiscale configuration where image data is to be analyzed on a tile-basis, image pre-processing includes receiving an initial histopathology image, and removing non-tissue objects from the image (wherein the background is non-tissue objects). Further in paragraph [0360]-YIP discloses a process 1508 displays images associated with a tissue masking step of process 1502. An assembled probability map generated by the process 1504 is passed through this tissue mask to remove background. Both background and marker area are removed by the masking algorithm of the process 1508. Please also read paragraph [0283]);
segmenting, with the at least one processor (Fig. 38, #3810 called processing units. Paragraph [0411]. Please also read paragraph [0111 and 0113]), the at least one image to define a plurality of single-cell images (Fig. 1. Paragraph [0130]-YIP discloses a histopathology image may be segmented. In paragraph [0131]-YIP discloses in system 100, the deep learning framework 150 further includes a trained image classifier module 170. In paragraph [0133]-YIP discloses the module 170 may further include a cell segmenter 176 that identifies cells within a histopathology image, including cell borders, interiors, and exteriors. Please also see Fig. 3 and read paragraph [0151, 0155-0156 and 0176]);
assigning, with the at least one processor (Fig. 38, #3810 called processing units. Paragraph [0411]. Please also read paragraph [0111 and 0113]), a label to the at least one single-cell image of the plurality of single-cell images (Fig. 1. Paragraph [0130]-YIP discloses a histopathology image may be segmented and each segment of the image may be labeled according to one or more data types that may be classified to that segment. The histopathology image may be labeled as a whole according to the one or more data types that may be classified to the image or at least one segment of the image (wherein data types may indicate one or more biomarkers and labeling a histopathology image or a segment with a data type may identify the biomarker). Further in paragraph [0176]-YIP discloses that training a tile-based deep learning network to predict a biomarker classification label for each tile utilizes a strongly supervised approach to generate biomarker labels to identify the HRD status (Positive or Negative) of individual cells (wherein single cell RNA sequencing may be used alone, or in combination with laser guided micro-dissection to extract one cell at a time, to achieve labels for each cell and may incorporate a cell segmentation model and artificial intelligence engine to classify the pixel values inside each of the cell contours according to biomarker status). Please also read paragraph [0124, 0155-0156 and 0190]);
training (Fig. 3. Paragraph [0123]-YIP discloses to analyze the received histopathology image data and other data, the imaging-based biomarker prediction system 102 includes a deep learning framework 150 that implements various machine learning techniques to generate trained classifier models for image-based biomarker analysis from received training sets of image data or sets of image data and other patient information. In paragraph [0125]-YIP discloses the deep learning framework 150 includes image data 162a. To train or use a multiscale PD-L1 biomarker classifier, this image data 162a may include pre-processed image data received from the sub-system 114, images from H&E slides or images from IHC slides (with or without human annotation), including IHC slides targeting PD-L1, PTEN, EGFR, Beta catenin/catenin beta1, NTRK, HRD, PIK3CA, and hormone receptors including HER2, AR, ER, and PR. To train or use other biomarker classifiers, whether multiscale classifiers or single-scale classifiers, the image data 162A may include images from other stained slides. Please also read paragraph [0190]), with the at least one processor (Fig. 38, #3810 called processing units. Paragraph [0411]. Please also read paragraph [0111 and 0113]), a machine learning model (Fig. 1, #150 called a Deep learning Framework. Paragraph [0124]) to predict a classification of the at least one single-cell image of the plurality of single-cell images based on inputting the plurality of single-cell images into the machine learning model (Fig. 1. Paragraph [0123]-YIP discloses with trained classifier models, the deep learning framework 150 is further used to analyze and diagnose the presence of image-based biomarkers in subsequent images collected from patients. 
In paragraph [0132]-YIP discloses the trained image classifier module 170 includes trained tissue classifiers 172, trained by the module 160 using one or more training image sets, to identify and classify tissue type in regions/areas of received image data. In some examples, these trained tissue classifiers are trained to identify biomarkers via the tissue classification, where these include single-scale configured classifiers 172a and multiscale classifiers 172b. Further in paragraph [0160]-YIP discloses with the cell segmentation in a histopathology image generated by the cell segmentation model 316 and the tissue classification from the tissue classification model 302, a biomarker classification model 322 receives data from both and determines a predicted biomarker presence in the histopathology image, and with the multiscale configuration, the prediction biomarker presence in each tile image of the histopathology image. Please also see Fig. 3 and read paragraph [0176]).
Although YIP explicitly teaches receiving (Fig. 1. Paragraph [0115]-YIP discloses the imaging-based biomarker prediction system 102 is communicatively coupled to receive medical images, for example of histopathology slides such as digital H&E stained slide images, IHC stained slide images, or digital images of any other staining protocols (wherein images are received from any number of medical image data sources such as physician clinical records systems 106 or histopathology image repositories 110), with at least one processor (Fig. 38, #3810 called processing units. Paragraph [0411]. Further in paragraph [0111]-YIP discloses FIG. 1 illustrates a prediction system 100 capable of analyzing digital images of histopathology slides of a tissue sample and determining the likelihood of biomarker presence in that tissue, where biomarker presence indicates a predictive tumor presence, a predicted tumor state/condition, or other information about a tumor of the tissue sample, such as a possibility of clinical response through the use of a treatment associated with the biomarker (wherein system 100 includes an imaging-based biomarker prediction system 102 that implements, image processing operations, deep learning frameworks, and may be implemented on one or more computing device that include a number of processors, controllers or other electronic components for processing or facilitating image capture, generation, or storage and image analysis, and deep learning tools for analysis of images. Please also read paragraph [0113]), image data associated with at least one image at a first resolution (Fig. 1. 
Paragraph [0115]-YIP discloses the imaging-based biomarker prediction system 102 is communicatively coupled to receive medical images, for example of histopathology slides such as digital H&E stained slide images, IHC stained slide images, or digital images of any other staining protocols (wherein images are received from any number of medical image data sources such as physician clinical records systems 106 or histopathology image repositories 110). In paragraph [0116]-YIP discloses in FIG. 1, the imaging-based biomarker prediction system 102 includes an image pre-processing sub-system 114 that performs initial image processing to enhance image data for faster processing in training a machine learning framework and for performing biomarker prediction using a trained deep learning framework. Further in paragraph [0118]-YIP discloses in a multiscale configuration where image data is to be analyzed on a tile-basis, image pre-processing includes receiving an initial histopathology image, at a first image resolution. Please also read paragraph [0139, 0313 and 0329-0330]);
YIP fails to explicitly teach receiving, with at least one processor, image data associated with at least one image at a first resolution, the at least one image comprising an image of a physically expanded sample of a tissue or a cell specimen.
However, WEISENFELD explicitly teaches receiving, with at least one processor, image data associated with at least one image at a first resolution, the at least one image comprising an image of a physically expanded sample of a tissue or a cell specimen (Fig. 1. Paragraph [0506]-WEISENFELD discloses a region of interest can be identified in a biological sample using a variety of different techniques, e.g., expansion microscopy, bright field microscopy, dark field microscopy, phase contrast microscopy, electron microscopy, fluorescence microscopy, reflection microscopy, interference microscopy, and confocal microscopy, and combinations thereof. Further in paragraph [0152]-WEISENFELD discloses a biological sample embedded in a hydrogel can be isometrically expanded. In paragraph [0154]-WEISENFELD discloses isometric expansion can be performed by anchoring one or more components of a biological sample to a gel, followed by gel formation, proteolysis, and swelling. In paragraph [0156]-WEISENFELD discloses isometric expansion of the sample can increase the spatial resolution of the subsequent analysis of the sample, and isometric expansion of the biological sample can result in increased resolution in spatial profiling (e.g., single-cell profiling). In paragraph [0157]-WEISENFELD discloses isometric expansion can enable three-dimensional spatial resolution of the subsequent analysis of the sample. Further in paragraph [0158]-WEISENFELD discloses a biological sample is isometrically expanded to a volume at least 2×, 2.1×, 2.2×, 2.3×, 2.4×, 2.5×, 2.6×, 2.7×, 2.8×, 2.9×, 3×, 3.1×, 3.2×, 3.3×, 3.4×, 3.5×, 3.6×, 3.7×, 3.8×, 3.9×, 4×, 4.1×, 4.2×, 4.3×, 4.4×, 4.5×, 4.6×, 4.7×, 4.8×, or 4.9× its non-expanded volume).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of YIP of having a computer implemented method with the teachings of WEISENFELD of receiving, with at least one processor, image data associated with at least one image at a first resolution, the at least one image comprising an image of a physically expanded sample of a tissue or a cell specimen.
YIP’s method would thereby receive, with at least one processor, image data associated with at least one image at a first resolution, the at least one image comprising an image of a physically expanded sample of a tissue or a cell specimen.
The motivation behind the modification would have been to obtain a method that improves machine learning model training, accuracy, and classification, as well as the resolution available for spatial analysis, since both YIP and WEISENFELD concern cellular image analysis. YIP’s systems and methods improve the accuracy, classification, and training of a machine learning model by disrupting shift invariance and improving the convergence speed and stability during training, while WEISENFELD’s systems and methods improve the capture of analytes and the resolution for spatial analysis. Please see YIP et al. (US 20200258223 A1), Paragraph [0094, 0365, 0370 and 0379] and WEISENFELD et al. (US 20210150707 A1), Abstract and Paragraph [0496, 0515, and 0564].
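For illustration only, the background-removal step mapped above (YIP at [0279]: grayscale conversion followed by a Gaussian blur and masking of non-tissue pixels) can be sketched as follows. This is a minimal sketch with hypothetical data, helper names, and thresholds, not YIP's actual implementation:

```python
# Minimal sketch of tissue masking: grayscale, Gaussian blur, then zero out
# near-white (non-tissue) background pixels. All names and thresholds are
# hypothetical illustrations, not taken from the YIP reference.
import numpy as np

def gaussian_kernel(size: int, sigma: float) -> np.ndarray:
    """Normalized 1-D Gaussian kernel."""
    ax = np.arange(size) - (size - 1) / 2.0
    k = np.exp(-0.5 * (ax / sigma) ** 2)
    return k / k.sum()

def blur(gray: np.ndarray, size: int = 5, sigma: float = 1.0) -> np.ndarray:
    """Separable Gaussian blur via two 1-D convolutions, edge-padded."""
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    g = np.pad(gray, pad, mode="edge")
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, g)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, out)

def remove_background(rgb: np.ndarray, thresh: float = 0.9) -> np.ndarray:
    """Keep pixels whose blurred grayscale value is below the background
    threshold (tissue is dark on a bright slide background)."""
    gray = rgb.mean(axis=2)        # grayscale conversion
    mask = blur(gray) < thresh     # True where tissue, False where background
    return rgb * mask[..., None]   # zero out background pixels

# Toy example: dark "tissue" square on a white slide background.
img = np.ones((32, 32, 3))
img[8:24, 8:24, :] = 0.3
cleaned = remove_background(img)
assert cleaned[0, 0].sum() == 0.0   # white background removed
assert cleaned[16, 16, 0] > 0.0     # tissue region retained
```

The same masking idea extends to the probability-map/tissue-mask interaction described at YIP [0360], where a separately computed mask is applied to downstream outputs rather than to the raw image.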
Although YIP explicitly teaches applying, with the at least one processor (Fig. 38, #3810 called processing units. Paragraph [0411]. Please also read paragraph [0113]), a filter to at least one single-cell image of the plurality of single-cell images (Fig. 3. Paragraph [0118]-YIP discloses in multiscale configuration where image data is to be analyzed on a tile-basis, image pre-processing includes receiving an initial histopathology image, at a first image resolution, downsampling that image to a second image resolution, and then performing a normalization on the downsampled histopathology image, such as color and/or intensity normalization, and removing non-tissue objects from the image. In paragraph [0271]-YIP discloses the pre-processing controller 302 can receive an image from the file having a resolution that is higher than the optimal resolution and downsample the image at a ratio that achieves the optimal resolution, at process 1106. In paragraph [0279]-YIP discloses the controller 302 removes these pixels (wherein the pixels represent artifacts, markings or blurred areas) by converting the image to a grayscale image, passing the grayscale image through a Gaussian blur filter that mathematically adjusts the original grayscale value of each pixel to a blurred grayscale value to create a blurred image. Other filters may be used to blur the image. Please also read paragraph [0146]);
YIP fails to explicitly teach applying, with the at least one processor, a filter to at least one single-cell image of the plurality of single-cell images by iteratively decreasing a kernel size of the filter, resulting in a second resolution.
However, MIKHNO explicitly teaches applying, with the at least one processor, a filter to at least one image of the plurality of images by iteratively decreasing a kernel size of the filter, resulting in a second resolution (Paragraph [0240]-MIKHNO discloses PSF-MLEM also introduces some specific artifacts. To prevent such noise reconstruction, it was proposed to use gradual PSF introduction, but from large to small kernel size, rather than the inverse. Using a large kernel first enables updating the non-noisy pre-approximation provided by the MLEM with a low-passed update map. The PSF kernel size support is then iteratively reduced to the size of the true PSF kernel. This iterative reduction enables the use of more detailed update maps with more spatial details at later iterations. Please also read paragraph [0239]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of YIP in view of WEISENFELD of having a computer implemented method with the teachings of MIKHNO of applying, with the at least one processor, a filter to at least one image of the plurality of images by iteratively decreasing a kernel size of the filter, resulting in a second resolution.
YIP’s method would thereby apply, with the at least one processor, a filter to at least one single-cell image of the plurality of single-cell images by iteratively decreasing a kernel size of the filter, resulting in a second resolution.
The motivation behind the modification would have been to obtain a method that improves machine learning model training, accuracy, and classification, as well as the resolution available for spatial analysis, since both YIP and MIKHNO concern biological image analysis, neural networks, and Gaussian filters. YIP’s systems and methods improve the accuracy, classification, and training of a machine learning model by disrupting shift invariance and improving the convergence speed and stability during training, while MIKHNO’s systems and methods improve clinical scanners and image quality, resolution, and object contrast while also reducing background noise and hotspot artifacts. Please see YIP et al. (US 20200258223 A1), Paragraph [0094, 0365, 0370 and 0379] and MIKHNO et al. (US 20170039706 A1), Abstract and Paragraph [0106, 0109, 0111].
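The large-to-small kernel schedule attributed to MIKHNO above can be illustrated with a minimal 1-D sketch. The helper names and the Gaussian stand-in for the PSF kernel are assumptions for illustration, not MIKHNO's actual PSF-MLEM code:

```python
# Minimal sketch of iteratively decreasing the kernel size of a filter:
# early passes use a large (strongly low-pass) kernel, later passes a
# smaller kernel, as in a large-to-small PSF-introduction schedule.
# All names, sizes, and the Gaussian kernel choice are hypothetical.
import numpy as np

def gaussian_kernel(size: int, sigma: float) -> np.ndarray:
    ax = np.arange(size) - (size - 1) / 2.0
    k = np.exp(-0.5 * (ax / sigma) ** 2)
    return k / k.sum()

def filter_1d(signal: np.ndarray, size: int) -> np.ndarray:
    """One smoothing pass with a Gaussian kernel of the given support."""
    k = gaussian_kernel(size, sigma=size / 4.0)
    padded = np.pad(signal, size // 2, mode="edge")
    return np.convolve(padded, k, mode="valid")

def iterative_filter(signal: np.ndarray, sizes=(9, 7, 5, 3)) -> np.ndarray:
    """Re-apply the filter while the kernel support shrinks each pass."""
    out = signal
    for size in sizes:   # kernel size iteratively decreased: 9 -> 7 -> 5 -> 3
        out = filter_1d(out, size)
    return out

noisy = np.zeros(31)
noisy[15] = 10.0                                  # impulse "hotspot"
smoothed = iterative_filter(noisy)
assert smoothed.shape == noisy.shape
assert smoothed.max() < noisy.max()               # hotspot energy spread out
assert abs(smoothed.sum() - noisy.sum()) < 1e-9   # total mass preserved
```

Each pass here preserves total signal mass while widening the support, which mirrors the cited rationale: coarse, low-passed updates first, finer spatial detail at later iterations.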
Regarding claim 8, YIP in view of WEISENFELD and in further view of MIKHNO explicitly teaches the computer implemented method of claim 1, YIP further teaches wherein the filter is a Gaussian filter (Fig. 3. Paragraph [0273]-YIP discloses at process 1106, after the pre-processing controller 302 obtains an image with an optimal resolution, it locates all parts of the image that depict tumor sample tissue and digitally eliminates debris, pen marks, and other non-tissue objects. In paragraph [0274]-YIP discloses at process 1106, the pre-processing controller 302 differentiates between tissue and non-tissue regions of the image and uses Gaussian blur removal to edit pixels with non-tissue objects. In an example, any control tissue on a slide that is not part of the tumor sample tissue can be detected and labeled as control tissue by the tissue detector or manually labeled by a human analyst as control tissue that should be excluded from the downstream tile grid projections. In paragraph [0278]-YIP discloses at process 1106, the controller 302 eliminates pixels in the image that have low local variability (wherein these pixels represent artifacts, markings, or blurred areas). In paragraph [0279]-YIP discloses at process 1106, the controller 302 removes these pixels by converting the image to a grayscale image, passing the grayscale image through a Gaussian blur filter that mathematically adjusts the original grayscale value of each pixel to a blurred grayscale value to create a blurred image. Other filters may be used to blur the image).
Regarding claim 10, YIP in view of WEISENFELD and in further view of MIKHNO explicitly teaches the computer implemented method of claim 1. YIP further teaches the method further comprising:
manipulating an image perspective of the at least one single-cell image of the plurality of single-cell images to provide a plurality of augmented single-cell images, wherein manipulating the image perspective comprises randomly flipping, randomly rotating, randomly shearing along an axis, and/or randomly translating at least one single-cell image of the plurality of single-cell images, and/or adding random noise to the at least one single-cell image of the plurality of single-cell images (Fig. 3. Paragraph [0336]-YIP discloses each histopathology image can exhibit large degrees of variation in visual features, including tumor appearance, so a training set may include digital slide images that are highly dissimilar to better train the model for the variety of slides that it may analyze. Images in training data may also be subjected to data augmentation (including rotating, scaling, color jitter, etc.), before being used to train the model).
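For illustration, a subset of the augmentation operations recited in claim 10 (random flips, random rotation, and additive random noise) can be sketched as follows; shearing and sub-pixel translation are omitted for brevity, and all helper names and parameter values are hypothetical rather than taken from YIP:

```python
# Minimal sketch of single-cell image augmentation: random flips,
# random 90-degree rotation, and additive Gaussian noise.
# Helper names and noise levels are hypothetical illustrations.
import numpy as np

def augment(cell: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Return one randomly perturbed copy of a single-cell image."""
    out = cell.copy()
    if rng.random() < 0.5:
        out = np.flip(out, axis=0)                 # random vertical flip
    if rng.random() < 0.5:
        out = np.flip(out, axis=1)                 # random horizontal flip
    out = np.rot90(out, k=rng.integers(0, 4))      # random 90-degree rotation
    out = out + rng.normal(0.0, 0.01, out.shape)   # additive random noise
    return np.clip(out, 0.0, 1.0)                  # keep valid pixel range

rng = np.random.default_rng(0)
cell = np.zeros((16, 16))
cell[4:12, 4:12] = 1.0                             # toy single-cell crop
batch = [augment(cell, rng) for _ in range(8)]     # augmented training copies
assert all(a.shape == cell.shape for a in batch)
assert all(0.0 <= a.min() and a.max() <= 1.0 for a in batch)
```

Augmenting in this way multiplies the effective training set, which is consistent with the cited rationale at YIP [0336] of exposing the model to the variation it will see across dissimilar slides.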
Regarding claim 12, YIP in view of WEISENFELD and in further view of MIKHNO explicitly teaches the computer implemented method of claim 1, YIP further teaches wherein the machine learning model comprises a convolutional neural network (CNN) (Fig. 1. Paragraph [0135]-YIP discloses the trained image classifier module 170 and associated classifiers may be configured with an image-analysis adapted machine learning techniques, including, for example, deep learning techniques, including, by way of example, a CNN model and, more particular, a tile-resolution CNN, that in some examples is implemented as a FCN model, and, more particularly still, implemented as a tile-resolution FCN model, etc. Please also read paragraph [0042, 0149-0150]).
Regarding claim 15, YIP explicitly teaches a system (Fig. 1, #100 called a prediction system. Paragraph [0111]. Further in paragraph [0084]-YIP discloses an imaging-based biomarker prediction system is formed of a deep learning framework configured and trained to directly learn from histopathology slides and predict the presence of biomarkers in medical images. The deep learning frameworks may be configured and trained to analyze medical images and identify biomarkers that indicate the presence of a tumor, a tumor state/condition, or information about a tumor of the tissue sample) comprising at least one processor (Fig. 38, #3810 called processing units. Paragraph [0411]) programmed or configured to:
remove a background from the at least one image (Fig. 1. Paragraph [0117]-YIP discloses the image pre-processing sub-system 114 may perform further image processing that removes artifacts and other noise from received images by doing preliminary tissue detection 114d, for example, to identify regions of the images corresponding to histopathology stained tissue for subsequent analysis, classification, and segmentation. In paragraph [0118]-YIP discloses in multiscale configuration where image data is to be analyzed on a tile-basis, image pre-processing includes receiving an initial histopathology image, and removing non-tissue objects from the image (wherein the background is non-tissue objects). Further in paragraph [0360]-YIP discloses a process 1508 displays images associated with a tissue masking step of process 1502. An assembled probability map generated by the process 1504 is passed through this tissue mask to remove background. Both background and marker area are removed by the masking algorithm of the process 1508. Please also read paragraph [0283]);
segment the at least one image to define a plurality of single-cell images (Fig. 3. Paragraph [0130]-YIP discloses a histopathology image may be segmented. In paragraph [0131]-YIP discloses in system 100, the deep learning framework 150 further includes a trained image classifier module 170. In paragraph [0133]-YIP discloses the module 170 may further include a cell segmenter 176 that identifies cells within a histopathology image, including cell borders, interiors, and exteriors. Please also read paragraph [0151, 0155-0156 and 0176]);
assign a label to the at least one single-cell image of the plurality of single-cell images (Fig. 1. Paragraph [0130]-YIP discloses a histopathology image may be segmented and each segment of the image may be labeled according to one or more data types that may be classified to that segment. The histopathology image may be labeled as a whole according to the one or more data types that may be classified to the image or at least one segment of the image (wherein data types may indicate one or more biomarkers and labeling a histopathology image or a segment with a data type may identify the biomarker). Further in paragraph [0176]-YIP discloses that training a tile-based deep learning network to predict a biomarker classification label for each tile utilizes a strongly supervised approach to generate biomarker labels to identify the HRD status (Positive or Negative) of individual cells (wherein single cell RNA sequencing may be used alone, or in combination with laser guided micro-dissection to extract one cell at a time, to achieve labels for each cell and may incorporate a cell segmentation model and artificial intelligence engine to classify the pixel values inside each of the cell contours according to biomarker status). Please also read paragraph [0124, 0155-0156 and 0190]);
and train (Fig. 3. Paragraph [0123]-YIP discloses to analyze the received histopathology image data and other data, the imaging-based biomarker prediction system 102 includes a deep learning framework 150 that implements various machine learning techniques to generate trained classifier models for image-based biomarker analysis from received training sets of image data or sets of image data and other patient information. In paragraph [0125]-YIP discloses the deep learning framework 150 includes image data 162a. To train or use a multiscale PD-L1 biomarker classifier, this image data 162a may include pre-processed image data received from the sub-system 114, images from H&E slides or images from IHC slides (with or without human annotation), including IHC slides targeting PD-L1, PTEN, EGFR, Beta catenin/catenin beta1, NTRK, HRD, PIK3CA, and hormone receptors including HER2, AR, ER, and PR. To train or use other biomarker classifiers, whether multiscale classifiers or single-scale classifiers, the image data 162A may include images from other stained slides. Please also read paragraph [0190]) a machine learning model (Fig. 1, #150 called a Deep learning Framework. Paragraph [0123]) to predict a classification of the at least one single- cell image of the plurality of single-cell images based on inputting the plurality of single-cell images into the machine learning model (Fig. 1. Paragraph [0123]-YIP discloses with trained classifier models, the deep learning framework 150 is further used to analyze and diagnose the presence of image-based biomarkers in subsequent images collected from patients. In paragraph [0132]-YIP discloses the trained image classifier module 170 includes trained tissue classifiers 172, trained by the module 160 using one or more training image sets, to identify and classify tissue type in regions/areas of received image data. 
In some examples, these trained tissue classifiers are trained to identify biomarkers via the tissue classification, where these include single-scale configured classifiers 172a and multiscale classifiers 172b. Further in paragraph [0160]-YIP discloses with the cell segmentation in a histopathology image generated by the cell segmentation model 316 and the tissue classification from the tissue classification model 302, a biomarker classification model 322 receives data from both and determines a predicted biomarker presence in the histopathology image, and with the multiscale configuration, the prediction biomarker presence in each tile image of the histopathology image. Please also see Fig. 3 and read paragraph [0176]).
Although YIP explicitly teaches receive image data associated with at least one image at a first resolution (Fig. 1. Paragraph [0115]-YIP discloses the imaging-based biomarker prediction system 102 is communicatively coupled to receive medical images, for example of histopathology slides such as digital H&E stained slide images, IHC stained slide images, or digital images of any other staining protocols (wherein images are received from any number of medical image data sources such as physician clinical records systems 106 or histopathology image repositories 110). In paragraph [0116]-YIP discloses in FIG. 1, the imaging-based biomarker prediction system 102 includes an image pre-processing sub-system 114 that performs initial image processing to enhance image data for faster processing in training a machine learning framework and for performing biomarker prediction using a trained deep learning framework. Further in paragraph [0118]-YIP discloses in a multiscale configuration where image data is to be analyzed on a tile-basis, image pre-processing includes receiving an initial histopathology image, at a first image resolution. Please also read paragraph [0139, 0313 and 0329-0330]);
YIP fails to explicitly teach receive image data associated with at least one image at a first resolution, the at least one image comprising an image of a physically expanded sample of a tissue or a cell specimen.
However, WEISENFELD explicitly teaches receive image data associated with at least one image at a first resolution, the at least one image comprising an image of a physically expanded sample of a tissue or a cell specimen (Fig. 1. Paragraph [0506]-WEISENFELD discloses a region of interest can be identified in a biological sample using a variety of different techniques, e.g., expansion microscopy, bright field microscopy, dark field microscopy, phase contrast microscopy, electron microscopy, fluorescence microscopy, reflection microscopy, interference microscopy, and confocal microscopy, and combinations thereof. Further in paragraph [0152]-WEISENFELD discloses a biological sample embedded in a hydrogel can be isometrically expanded. In paragraph [0154]-WEISENFELD discloses isometric expansion can be performed by anchoring one or more components of a biological sample to a gel, followed by gel formation, proteolysis, and swelling. In paragraph [0156]-WEISENFELD discloses isometric expansion of the sample can increase the spatial resolution of the subsequent analysis of the sample. Isometric expansion of the biological sample can result in increased resolution in spatial profiling (e.g., single-cell profiling). In paragraph [0157]-WEISENFELD discloses isometric expansion can enable three-dimensional spatial resolution of the subsequent analysis of the sample. Further in paragraph [0158]-WEISENFELD discloses a biological sample is isometrically expanded to a volume at least 2×, 2.1×, 2.2×, 2.3×, 2.4×, 2.5×, 2.6×, 2.7×, 2.8×, 2.9×, 3×, 3.1×, 3.2×, 3.3×, 3.4×, 3.5×, 3.6×, 3.7×, 3.8×, 3.9×, 4×, 4.1×, 4.2×, 4.3×, 4.4×, 4.5×, 4.6×, 4.7×, 4.8×, or 4.9× its non-expanded volume).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of YIP of having a system with the teachings of WEISENFELD of having receive image data associated with at least one image at a first resolution, the at least one image comprising an image of a physically expanded sample of a tissue or a cell specimen.
This combination results in YIP's system receiving image data associated with at least one image at a first resolution, the at least one image comprising an image of a physically expanded sample of a tissue or a cell specimen.
The motivation behind the modification would have been to obtain a system that improves machine learning model training, accuracy, and classification, as well as the resolution for spatial analysis, since both YIP and WEISENFELD concern cellular image analysis. YIP's systems and methods improve the accuracy, classification, and training of a machine learning model by disrupting shift invariance and improving the convergence speed and stability during training, while WEISENFELD's systems and methods improve the capture of analytes and the resolution for spatial analysis. Please see YIP et al. (US 20200258223 A1), Paragraph [0094, 0365, 0370 and 0379] and WEISENFELD et al. (US 20210150707 A1), Abstract and Paragraph [0496, 0515, and 0564].
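As an illustrative aside (not drawn from either reference and not part of the claim mapping), the resolution gain implied by WEISENFELD's isometric volume-expansion factors follows from simple geometry: a sample expanded N× by volume expands N^(1/3)× along each axis, so the same optics resolve proportionally finer structure. A minimal sketch in Python, with both function names being illustrative only:

```python
def linear_expansion_factor(volume_factor: float) -> float:
    """Isometric (uniform) expansion: the linear scale factor along
    each axis is the cube root of the volume expansion factor."""
    return volume_factor ** (1.0 / 3.0)


def effective_resolution(optical_resolution_nm: float, volume_factor: float) -> float:
    """Features separated by d before expansion sit d * linear_factor
    apart afterward, so effective resolution improves by that factor."""
    return optical_resolution_nm / linear_expansion_factor(volume_factor)


# The largest factor listed in WEISENFELD [0158], 4.9x by volume,
# corresponds to roughly a 1.7x linear expansion.
print(round(linear_expansion_factor(4.9), 2))  # -> 1.7
# A hypothetical 8x volume expansion (2x linear) would halve the
# effective resolution limit of 250 nm optics to 125 nm.
print(effective_resolution(250.0, 8.0))  # -> 125.0
```

This is only arithmetic on the expansion ratios the reference recites; it does not appear in WEISENFELD or YIP.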
Although YIP explicitly teaches apply a filter to at least one single-cell image of the plurality of single-cell images (Fig. 3. Paragraph [0118]-YIP discloses in a multiscale configuration where image data is to be analyzed on a tile-basis, image pre-processing includes receiving an initial histopathology image, at a first image resolution, downsampling that image to a second image resolution, and then performing a normalization on the downsampled histopathology image, such as color and/or intensity normalization, and removing non-tissue objects from the image. In paragraph [0271]-YIP discloses the pre-processing controller 302 can receive an image from the file having a resolution that is higher than the optimal resolution and downsample the image at a ratio that achieves the optimal resolution, at process 1106. In paragraph [0279]-YIP discloses the controller 302 removes these pixels (wherein the pixels represent artifacts, markings or blurred areas) by converting the image to a grayscale image, passing the grayscale image through a Gaussian blur filter. Other filters may be used to blur the image. Please also read paragraph [0146]);
YIP fails to explicitly teach apply a filter to at least one single-cell image of the plurality of single-cell images by iteratively decreasing a kernel size of the filter.
However, MIKHNO explicitly teaches apply a filter to at least one image of the plurality of images by iteratively decreasing a kernel size of the filter (Paragraph [0240]-MIKHNO discloses PSF-MLEM also introduces some specific artifacts. To prevent such noise reconstruction, it was proposed to use gradual PSF introduction, but from large to small kernel size, rather than the inverse. Using a large kernel first enables updating the non-noisy pre-approximation provided by the MLEM with a low-passed update map. The PSF kernel size support is then iteratively reduced to the size of the true PSF kernel. This iterative reduction enables the use of more detailed update maps with more spatial detail at later iterations. Please also read paragraph [0239]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of YIP in view of WEISENFELD of having a system with the teachings of MIKHNO of having apply a filter to at least one image of the plurality of images by iteratively decreasing a kernel size of the filter.
This combination results in YIP's system applying a filter to at least one single-cell image of the plurality of single-cell images by iteratively decreasing a kernel size of the filter.
The motivation behind the modification would have been to obtain a system that improves machine learning model training, accuracy, and classification, as well as the resolution for spatial analysis, since both YIP and MIKHNO concern biological image analysis, neural networks, and Gaussian filters. YIP's systems and methods improve the accuracy, classification, and training of a machine learning model by disrupting shift invariance and improving the convergence speed and stability during training, while MIKHNO's systems and methods improve clinical scanners and image quality, resolution, and object contrast while also reducing background noise and hotspot artifacts. Please see YIP et al. (US 20200258223 A1), Paragraph [0094, 0365, 0370 and 0379] and MIKHNO et al. (US 20170039706 A1), Abstract and Paragraph [0106, 0109, 0111].
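The large-to-small kernel schedule that MIKHNO's paragraph [0240] describes can be pictured mechanically. The sketch below is an illustration only, not MIKHNO's PSF-MLEM implementation: it omits the MLEM reconstruction update that the schedule wraps, and the kernel sizes and sigma rule are assumptions. It filters a signal with Gaussian kernels of iteratively decreasing size, so later passes retain more spatial detail:

```python
import numpy as np


def gaussian_kernel(size: int, sigma: float) -> np.ndarray:
    """Normalized 1-D Gaussian kernel of a given odd size."""
    x = np.arange(size) - size // 2
    k = np.exp(-(x ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()


def decreasing_kernel_passes(signal: np.ndarray, sizes=(15, 9, 5, 3)) -> list:
    """Filter the input with an iteratively decreasing kernel size,
    large-to-small as in MIKHNO's gradual PSF introduction. Each pass
    filters the original signal, so later (smaller-kernel) outputs
    preserve progressively more spatial detail."""
    return [
        np.convolve(signal, gaussian_kernel(size, sigma=size / 4.0), mode="same")
        for size in sizes
    ]
```

For an impulse input, the first (largest-kernel) output is the most heavily smoothed and the last is the sharpest, mirroring the reference's use of low-passed update maps early and more detailed update maps at later iterations.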
Regarding claim 27, YIP in view of WEISENFELD and in further view of MIKHNO explicitly teaches the system of claim 15. YIP fails to explicitly teach wherein the at least one image comprises an image of a physically expanded sample of a tissue or a cell specimen.
However, WEISENFELD explicitly teaches wherein the at least one image comprises an image of a physically expanded sample of a tissue or a cell specimen (Fig. 1. Paragraph [0506]-WEISENFELD discloses a region of interest can be identified in a biological sample using a variety of different techniques, e.g., expansion microscopy, bright field microscopy, dark field microscopy, phase contrast microscopy, electron microscopy, fluorescence microscopy, reflection microscopy, interference microscopy, and confocal microscopy, and combinations thereof. Further in paragraph [0152]-WEISENFELD discloses a biological sample embedded in a hydrogel can be isometrically expanded. In paragraph [0154]-WEISENFELD discloses isometric expansion can be performed by anchoring one or more components of a biological sample to a gel, followed by gel formation, proteolysis, and swelling. In paragraph [0156]-WEISENFELD discloses isometric expansion of the sample can increase the spatial resolution of the subsequent analysis of the sample. Isometric expansion of the biological sample can result in increased resolution in spatial profiling (e.g., single-cell profiling). In paragraph [0157]-WEISENFELD discloses isometric expansion can enable three-dimensional spatial resolution of the subsequent analysis of the sample. Further in paragraph [0158]-WEISENFELD discloses a biological sample is isometrically expanded to a volume at least 2×, 2.1×, 2.2×, 2.3×, 2.4×, 2.5×, 2.6×, 2.7×, 2.8×, 2.9×, 3×, 3.1×, 3.2×, 3.3×, 3.4×, 3.5×, 3.6×, 3.7×, 3.8×, 3.9×, 4×, 4.1×, 4.2×, 4.3×, 4.4×, 4.5×, 4.6×, 4.7×, 4.8×, or 4.9× its non-expanded volume).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of YIP in view of WEISENFELD and in further view of MIKHNO of having a system with the teachings of WEISENFELD of having wherein the at least one image comprises an image of a physically expanded sample of a tissue or a cell specimen.
This combination results in YIP's system wherein the at least one image comprises an image of a physically expanded sample of a tissue or a cell specimen.
The motivation behind the modification would have been to obtain a system that improves machine learning model training, accuracy, and classification, as well as the resolution for spatial analysis, since both YIP and WEISENFELD concern cellular image analysis. YIP's systems and methods improve the accuracy, classification, and training of a machine learning model by disrupting shift invariance and improving the convergence speed and stability during training, while WEISENFELD's systems and methods improve the capture of analytes and the resolution for spatial analysis. Please see YIP et al. (US 20200258223 A1), Paragraph [0094, 0365, 0370 and 0379] and WEISENFELD et al. (US 20210150707 A1), Abstract and Paragraph [0496, 0515, and 0564].
Regarding claim 29, YIP explicitly teaches a computer program product comprising at least one non-transitory computer readable medium including one or more instructions (Fig. 38. Paragraph [0411]-FIG. 38 illustrates an example computing device 3800 for implementing the imaging-based biomarker prediction system 100 of FIG. 1. As illustrated, the system 100 may be implemented on the computing device 3800 and in particular on one or more processing units 3810, which may represent Central Processing Units (CPUs), and/or on one or more Graphical Processing Units (GPUs) 3811, including clusters of CPUs and/or GPUs. The system 100 may be stored on and implemented from one or more non-transitory computer-readable media 3812 of the computing device 3800. The computer-readable media 3812 may include an operating system 3814 and the deep learning framework 3816 having elements corresponding to that of deep learning framework 300, including the pre-processing controller 302, classifier modules 304 and 306, and the post-processing controller 308. The computer-readable media 3812 may store trained deep learning models, executable code, etc. used for implementing the techniques herein) that, when executed by the at least one processor (Fig. 38, #3810 called processing units. Paragraph [0411]. Please also read paragraph [0111-0113]), cause the at least one processor to:
remove a background from the at least one image (Fig. 1. Paragraph [0117]-YIP discloses the image pre-processing sub-system 114 may perform further image processing that removes artifacts and other noise from received images by doing preliminary tissue detection 114d, for example, to identify regions of the images corresponding to histopathology stained tissue for subsequent analysis, classification, and segmentation. In paragraph [0118]-YIP discloses in a multiscale configuration where image data is to be analyzed on a tile-basis, image pre-processing includes receiving an initial histopathology image, and removing non-tissue objects from the image (wherein the background is non-tissue objects). Further in paragraph [0360]-YIP discloses a process 1508 displays images associated with a tissue masking step of process 1502. An assembled probability map generated by the process 1504 is passed through this tissue mask to remove background. Both background and marker area are removed by the masking algorithm of the process 1508. Please also read paragraph [0283]);
segment the at least one image to define a plurality of single-cell images (Fig. 1. Paragraph [0130]-YIP discloses a histopathology image may be segmented. In paragraph [0131]-YIP discloses in system 100, the deep learning framework 150 further includes a trained image classifier module 170. In paragraph [0133]-YIP discloses the module 170 may further include a cell segmenter 176 that identifies cells within a histopathology image, including cell borders, interiors, and exteriors. Please also see Fig. 3 and read paragraph [0151, 0155-0156 and 0176]);
assign a label to the at least one single-cell image of the plurality of single-cell images (Fig. 1. Paragraph [0130]-YIP discloses a histopathology image may be segmented and each segment of the image may be labeled according to one or more data types that may be classified to that segment. The histopathology image may be labeled as a whole according to the one or more data types that may be classified to the image or at least one segment of the image (wherein data types may indicate one or more biomarkers and labeling a histopathology image or a segment with a data type may identify the biomarker). Further in paragraph [0176]-YIP discloses that training a tile-based deep learning network to predict a biomarker classification label for each tile utilizes a strongly supervised approach to generate biomarker labels to identify the HRD status (Positive or Negative) of individual cells (wherein single cell RNA sequencing may be used alone, or in combination with laser guided micro-dissection to extract one cell at a time, to achieve labels for each cell and may incorporate a cell segmentation model and artificial intelligence engine to classify the pixel values inside each of the cell contours according to biomarker status). Please also read paragraph [0124, 0155-0156 and 0190]);
and train (Fig. 3. Paragraph [0123]-YIP discloses to analyze the received histopathology image data and other data, the imaging-based biomarker prediction system 102 includes a deep learning framework 150 that implements various machine learning techniques to generate trained classifier models for image-based biomarker analysis from received training sets of image data or sets of image data and other patient information. In paragraph [0125]-YIP discloses the deep learning framework 150 includes image data 162a. To train or use a multiscale PD-L1 biomarker classifier, this image data 162a may include pre-processed image data received from the sub-system 114, images from H&E slides or images from IHC slides (with or without human annotation), including IHC slides targeting PD-L1, PTEN, EGFR, Beta catenin/catenin beta1, NTRK, HRD, PIK3CA, and hormone receptors including HER2, AR, ER, and PR. To train or use other biomarker classifiers, whether multiscale classifiers or single-scale classifiers, the image data 162a may include images from other stained slides. Please also read paragraph [0190]) a machine learning model (Fig. 1, #150 called a Deep learning Framework. Paragraph [0123]) to predict a classification of the at least one single-cell image of the plurality of single-cell images based on inputting the plurality of single-cell images into the machine learning model (Fig. 1. Paragraph [0123]-YIP discloses with trained classifier models, the deep learning framework 150 is further used to analyze and diagnose the presence of image-based biomarkers in subsequent images collected from patients. In paragraph [0132]-YIP discloses the trained image classifier module 170 includes trained tissue classifiers 172, trained by the module 160 using one or more training image sets, to identify and classify tissue type in regions/areas of received image data.
In some examples, these trained tissue classifiers are trained to identify biomarkers via the tissue classification, where these include single-scale configured classifiers 172a and multiscale classifiers 172b. Further in paragraph [0160]-YIP discloses with the cell segmentation in a histopathology image generated by the cell segmentation model 316 and the tissue classification from the tissue classification model 302, a biomarker classification model 322 receives data from both and determines a predicted biomarker presence in the histopathology image, and with the multiscale configuration, the predicted biomarker presence in each tile image of the histopathology image. Please also see Fig. 3 and read paragraph [0176]).
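The background-removal step mapped to YIP above can be pictured with a minimal sketch. The code below is an illustrative simplification only, not YIP's masking algorithm: the threshold value is an assumption, and YIP's Gaussian-blur smoothing pass before masking is omitted. It converts an RGB histopathology-style image to grayscale and zeroes near-white, non-tissue pixels:

```python
import numpy as np


def remove_background(rgb: np.ndarray, threshold: float = 0.9) -> np.ndarray:
    """Zero out near-white background pixels, keeping darker stained-tissue
    regions. Pixel values are assumed to lie in [0, 1]."""
    # luminance-weighted grayscale conversion
    gray = rgb @ np.array([0.299, 0.587, 0.114])
    tissue_mask = gray < threshold  # stained tissue is darker than the slide background
    masked = rgb.copy()
    masked[~tissue_mask] = 0.0  # remove non-tissue (background) pixels
    return masked
```

A pipeline of the kind YIP describes would additionally smooth the grayscale image before masking so that isolated bright artifacts and pen markings are removed rather than surviving as speckle.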
Although YIP explicitly teaches receive image data associated with at least one image at a first resolution (Fig. 1. Paragraph [0115]-YIP discloses the imaging-based biomarker prediction system 102 is communicatively coupled to receive medical images, for example of histopathology slides such as digital H&E stained slide images, IHC stained slide images, or digital images of any other staining protocols (wherein images are received from any number of medical image data sources such as physician clinical records systems 106 or histopathology image repositories 110). In paragraph [0116]-YIP discloses in FIG. 1, the imaging-based biomarker prediction system 102 includes an image pre-processing sub-system 114 that performs initial image processing to enhance image data for faster processing in training a machine learning framework and for performing biomarker prediction using a trained deep learning framework. Further in paragraph [0118]-YIP discloses in a multiscale configuration where image data is to be analyzed on a tile-basis, image pre-processing includes receiving an initial histopathology image, at a first image resolution. Please also read paragraph [0139, 0313 and 0329-0330]);
YIP fails to explicitly teach receive image data associated with at least one image at a first resolution, the at least one image comprising an image of a physically expanded sample of a tissue or a cell specimen.
However, WEISENFELD explicitly teaches receive image data associated with at least one image at a first resolution, the at least one image comprising an image of a physically expanded sample of a tissue or a cell specimen (Fig. 1. Paragraph [0506]-WEISENFELD discloses a region of interest can be identified in a biological sample using a variety of different techniques, e.g., expansion microscopy, bright field microscopy, dark field microscopy, phase contrast microscopy, electron microscopy, fluorescence microscopy, reflection microscopy, interference microscopy, and confocal microscopy, and combinations thereof. Further in paragraph [0152]-WEISENFELD discloses a biological sample embedded in a hydrogel can be isometrically expanded. In paragraph [0154]-WEISENFELD discloses isometric expansion can be performed by anchoring one or more components of a biological sample to a gel, followed by gel formation, proteolysis, and swelling. In paragraph [0156]-WEISENFELD discloses isometric expansion of the sample can increase the spatial resolution of the subsequent analysis of the sample. Isometric expansion of the biological sample can result in increased resolution in spatial profiling (e.g., single-cell profiling). In paragraph [0157]-WEISENFELD discloses isometric expansion can enable three-dimensional spatial resolution of the subsequent analysis of the sample. Further in paragraph [0158]-WEISENFELD discloses a biological sample is isometrically expanded to a volume at least 2×, 2.1×, 2.2×, 2.3×, 2.4×, 2.5×, 2.6×, 2.7×, 2.8×, 2.9×, 3×, 3.1×, 3.2×, 3.3×, 3.4×, 3.5×, 3.6×, 3.7×, 3.8×, 3.9×, 4×, 4.1×, 4.2×, 4.3×, 4.4×, 4.5×, 4.6×, 4.7×, 4.8×, or 4.9× its non-expanded volume).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of YIP of having a computer program product with the teachings of WEISENFELD of having receiving, with at least one processor, image data associated with at least one image at a first resolution, the at least one image comprising an image of a physically expanded sample of a tissue or a cell specimen.
This combination results in YIP's computer program product receiving, with at least one processor, image data associated with at least one image at a first resolution, the at least one image comprising an image of a physically expanded sample of a tissue or a cell specimen.
The motivation behind the modification would have been to obtain a computer program product that improves machine learning model training, accuracy, and classification, as well as the resolution for spatial analysis, since both YIP and WEISENFELD concern cellular image analysis. YIP's systems and methods improve the accuracy, classification, and training of a machine learning model by disrupting shift invariance and improving the convergence speed and stability during training, while WEISENFELD's systems and methods improve the capture of analytes and the resolution for spatial analysis. Please see YIP et al. (US 20200258223 A1), Paragraph [0094, 0365, 0370 and 0379] and WEISENFELD et al. (US 20210150707 A1), Abstract and Paragraph [0496, 0515, and 0564].
Although YIP explicitly teaches apply a filter to at least one single-cell image of the plurality of single-cell images (Fig. 3. Paragraph [0118]-YIP discloses in a multiscale configuration where image data is to be analyzed on a tile-basis, image pre-processing includes receiving an initial histopathology image, at a first image resolution, downsampling that image to a second image resolution, and then performing a normalization on the downsampled histopathology image, such as color and/or intensity normalization, and removing non-tissue objects from the image. In paragraph [0271]-YIP discloses the pre-processing controller 302 can receive an image from the file having a resolution that is higher than the optimal resolution and downsample the image at a ratio that achieves the optimal resolution, at process 1106. In paragraph [0279]-YIP discloses the controller 302 removes these pixels (wherein the pixels represent artifacts, markings or blurred areas) by converting the image to a grayscale image, passing the grayscale image through a Gaussian blur filter that mathematically adjusts the original grayscale value of each pixel to a blurred grayscale value to create a blurred image. Other filters may be used to blur the image. Please also read paragraph [0146 and 0176]);
YIP fails to explicitly teach apply a filter to at least one single-cell image of the plurality of single-cell images by iteratively decreasing a kernel size of the filter.
However, MIKHNO explicitly teaches apply a filter to at least one image of the plurality of images by iteratively decreasing a kernel size of the filter (Paragraph [0240]-MIKHNO discloses PSF-MLEM also introduces some specific artifacts. To prevent such noise reconstruction, it was proposed to use gradual PSF introduction, but from large to small kernel size, rather than the inverse. Using a large kernel first enables updating the non-noisy pre-approximation provided by the MLEM with a low-passed update map. The PSF kernel size support is then iteratively reduced to the size of the true PSF kernel. This iterative reduction enables the use of more detailed update maps with more spatial detail at later iterations. Please also read paragraph [0239]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of YIP in view of WEISENFELD of having a computer program product with the teachings of MIKHNO of having apply a filter to at least one image of the plurality of images by iteratively decreasing a kernel size of the filter.
This combination results in YIP's computer program product applying a filter to at least one single-cell image of the plurality of single-cell images by iteratively decreasing a kernel size of the filter.
The motivation behind the modification would have been to obtain a computer program product that improves machine learning model training, accuracy, and classification, as well as the resolution for spatial analysis, since both YIP and MIKHNO concern biological image analysis, neural networks, and Gaussian filters. YIP's systems and methods improve the accuracy, classification, and training of a machine learning model by disrupting shift invariance and improving the convergence speed and stability during training, while MIKHNO's systems and methods improve clinical scanners and image quality, resolution, and object contrast while also reducing background noise and hotspot artifacts. Please see YIP et al. (US 20200258223 A1), Paragraph [0094, 0365, 0370 and 0379] and MIKHNO et al. (US 20170039706 A1), Abstract and Paragraph [0106, 0109, 0111].
Regarding claim 43, YIP in view of WEISENFELD and in further view of MIKHNO explicitly teaches the computer-implemented method of claim 1. YIP further teaches the method comprising:
receiving, with at least one processor (Fig. 38, #3810 called processing units. Paragraph [0411]. Please also read paragraph [0111-0113]), image data associated with at least one image at a first resolution (Fig. 1. Paragraph [0115]-YIP discloses the imaging-based biomarker prediction system 102 is communicatively coupled to receive medical images, for example of histopathology slides such as digital H&E stained slide images, IHC stained slide images, or digital images of any other staining protocols (wherein images are received from any number of medical image data sources such as physician clinical records systems 106 or histopathology image repositories 110). In paragraph [0116]-YIP discloses in FIG. 1, the imaging-based biomarker prediction system 102 includes an image pre-processing sub-system 114 that performs initial image processing to enhance image data for faster processing in training a machine learning framework and for performing biomarker prediction using a trained deep learning framework. Further in paragraph [0118]-YIP discloses in a multiscale configuration where image data is to be analyzed on a tile-basis, image pre-processing includes receiving an initial histopathology image, at a first image resolution);
removing, with the at least one processor (Fig. 38, #3810 called processing units. Paragraph [0411]. Please also read paragraph [0111-0113]), a background from the at least one image (Fig. 1. Paragraph [0117]-YIP discloses the image pre-processing sub-system 114 may perform further image processing that removes artifacts and other noise from received images by doing preliminary tissue detection 114d, for example, to identify regions of the images corresponding to histopathology stained tissue for subsequent analysis, classification, and segmentation. In paragraph [0118]-YIP discloses in a multiscale configuration where image data is to be analyzed on a tile-basis, image pre-processing includes receiving an initial histopathology image, and removing non-tissue objects from the image (wherein the background is non-tissue objects). Further in paragraph [0360]-YIP discloses a process 1508 displays images associated with a tissue masking step of process 1502. An assembled probability map generated by the process 1504 is passed through this tissue mask to remove background. Both background and marker area are removed by the masking algorithm of the process 1508. Please also read paragraph [0206 and 0283]);
segmenting, with the at least one processor (Fig. 38, #3810 called processing units. Paragraph [0411]. Please also read paragraph [0111-0113]), the at least one image to define a plurality of single-cell images (Fig. 3. Paragraph [0130]-YIP discloses a histopathology image may be segmented. In paragraph [0131]-YIP discloses in system 100, the deep learning framework 150 further includes a trained image classifier module 170. In paragraph [0133]-YIP discloses the module 170 may further include a cell segmenter 176 that identifies cells within a histopathology image, including cell borders, interiors, and exteriors. Please also read paragraph [0155-0156, 0176 and 0207]);
inputting, with the at least one processor (Fig. 38, #3810 called processing units. Paragraph [0411]. Please also read paragraph [0111-0113]), at least one single-cell image of the plurality of single-cell images into a trained machine learning model (Fig. 1. Paragraph [0123]-YIP discloses to analyze the received histopathology image data and other data, the imaging-based biomarker prediction system 102 includes a deep learning framework 150 that implements various machine learning techniques to generate trained classifier models for image-based biomarker analysis from received training sets of image data or sets of image data and other patient information. In paragraph [0125]-YIP discloses the deep learning framework 150 includes image data 162a. To train or use a multiscale PD-L1 biomarker classifier, this image data 162a may include pre-processed image data received from the sub-system 114, images from H&E slides or images from IHC slides (with or without human annotation), including IHC slides targeting PD-L1, PTEN, EGFR, Beta catenin/catenin beta1, NTRK, HRD, PIK3CA, and hormone receptors including HER2, AR, ER, and PR. To train or use other biomarker classifiers, whether multiscale classifiers or single-scale classifiers, the image data 162a may include images from other stained slides. Please also read paragraph [0190]), wherein the trained machine learning model outputs a classification value corresponding to a category of a plurality of categories (Fig. 1. Paragraph [0123]-YIP discloses with trained classifier models, the deep learning framework 150 is further used to analyze and diagnose the presence of image-based biomarkers in subsequent images collected from patients. In paragraph [0132]-YIP discloses the trained image classifier module 170 includes trained tissue classifiers 172, trained by the module 160 using one or more training image sets, to identify and classify tissue type in regions/areas of received image data.
Further in paragraph [0160]-YIP discloses with the cell segmentation in a histopathology image generated by the cell segmentation model 316 and the tissue classification from the tissue classification model 302, a biomarker classification model 322 receives data from both and determines a predicted biomarker presence in the histopathology image, and with the multiscale configuration, the predicted biomarker presence in each tile image of the histopathology image. Please also see Fig. 3 and read paragraph [0176 and 0207-0208]);
sorting, with the at least one processor (Fig. 38, #3810 called processing units. Paragraph [0411]. Please also read paragraph [0111-0113]), the at least one single-cell image of the plurality of single-cell images into one category of the plurality of categories based on the value output (Fig. 1. Paragraph [0208]-YIP discloses in FIG. 7, tissue classification is performed at a process 706, which receives the histopathology image from the process 704 and performs tissue classification on each received tile using a trained tissue classification model. The trained tissue classification model is configured to classify each tile into different tissue classes (e.g., tumor, stroma, normal epithelium, etc.). The process 706 may output, as a result, a list of lists. Each nested interior list serves as nested classification that describes a single tile and contains the position of the tile, the probabilities that the tile is each of the classes contained in the model, and the identity of the most probable class. Please also read paragraph [0209, 0214, 0351-0352 and 0404-0405]) by the trained machine learning model (Fig. 1, #150 called a Deep learning Framework. Paragraph [0123]);
generating, with the at least one processor (Fig. 38, #3810 called processing units. Paragraph [0411]. Please also read paragraph [0111-0113]), a communication comprising a predicted treatment outcome based on which category of the plurality of categories the at least one single-cell image of the plurality of single-cell images is sorted into (Fig. 28. Paragraph [0187]-YIP discloses the post-processing controller 308 is further configured to determine a number of different biomarker prediction metrics and a number of tumor prediction metrics. Example prediction metrics include: predicted patient survival and immunotherapy/therapy response. Further in paragraph [0404]-YIP discloses in FIG. 28, a process 2800 is provided for determining a proposed immunotherapy treatment for a patient using the imaging-based biomarker predictor system 102 of FIG. 1, and in particular the biomarker prediction of the deep learning framework 300 of FIG. 3. At a process 2806, the trained deep learning framework applies the images to a trained tissue classifier model and a trained biomarker segmentation model to determine biomarker status of the tissue regions of the image. A trained cell segmentation classifier model is further used by the process 2806. The process 2806 generates biomarker status and biomarker metrics for the image. As shown in FIG. 29, the output from the process 2806 may be provided to a process 2808 and implemented on a tumor therapy decision system 2900. Please also read paragraph [0405]); and
displaying, with the at least one processor, data associated with the communication via a graphical user interface (GUI) on a user device (Fig. 10. Paragraph [0245]-YIP discloses with the process 600, as shown in process 900 of FIG. 9, after prediction, the predicted biomarker classification from block 814 may be received at block 902. A clinical report for the histopathology image, and thus for the patient, may be generated at the block 904 including predicted biomarker status and, at the block 906, an overlay map may be generated showing the predicted biomarker status for display to a clinician or for providing to a pathologist for determining a preferred immunotherapy corresponding to the predicted biomarker. In paragraph [0246]-YIP discloses FIGS. 10A and 10B illustrate examples of a digital overlay maps created by the overlay map generator 324 of system 300, for example. These overlay maps may be generated as static digital reports displayed to clinicians or as dynamic reports allowing user interaction through a graphical user interface (GUI). FIG. 10A illustrates a tissue class overlay map generated by the overlay map generator 324. FIG. 10B illustrates a cell outer edge overlay map generated by the overlay map generator 324. Please also read paragraph [0356, 0362 and 0369]), wherein the trained machine learning model (Fig. 1, #150 called a Deep learning Framework. Paragraph [0123]) is trained by:
receiving, with at least one processor (Fig. 38, #3810 called processing units. Paragraph [0411]. Please also read paragraph [0111-0113]), image data associated with at least one image at a first resolution (Fig. 1. Paragraph [0115]-YIP discloses the imaging-based biomarker prediction system 102 is communicatively coupled to receive medical images, for example of histopathology slides such as digital H&E stained slide images, IHC stained slide images, or digital images of any other staining protocols (wherein images are received from any number of medical image data sources such as physician clinical records systems 106 or histopathology image repositories 110). In paragraph [0116]-YIP discloses in FIG. 1, the imaging-based biomarker prediction system 102 includes an image pre-processing sub-system 114 that performs initial image processing to enhance image data for faster processing in training a machine learning framework and for performing biomarker prediction using a trained deep learning framework. Further in paragraph [0118]-YIP discloses in a multiscale configuration where image data is to be analyzed on a tile-basis, image pre-processing includes receiving an initial histopathology image, at a first image resolution);
removing, with the at least one processor (Fig. 38, #3810 called processing units. Paragraph [0411]. Please also read paragraph [0111-0113]), a background from the at least one image (Fig. 1. Paragraph [0117]-YIP discloses the image pre-processing sub-system 114 may perform further image processing that removes artifacts and other noise from received images by doing preliminary tissue detection 114d, for example, to identify regions of the images corresponding to histopathology stained tissue for subsequent analysis, classification, and segmentation. In paragraph [0118]-YIP discloses in multiscale configuration where image data is to be analyzed on a tile-basis, image pre-processing includes receiving an initial histopathology image, and removing non-tissue objects from the image (wherein the background is non-tissue objects). Further in paragraph [0360]-YIP discloses a process 1508 displays images associated with a tissue masking step of process 1502. An assembled probability map generated by the process 1504 is passed through this tissue mask to remove background. Both background and marker area are removed by the masking algorithm of the process 1508. Please also read paragraph [0283]);
segmenting, with the at least one processor (Fig. 38, #3810 called processing units. Paragraph [0411]. Please also read paragraph [0111-0113]), the at least one image to define a plurality of single-cell images (Fig. 3. Paragraph [0130]-YIP discloses a histopathology image may be segmented. In paragraph [0131]-YIP discloses in system 100, the deep learning framework 150 further includes a trained image classifier module 170. In paragraph [0133]-YIP discloses the module 170 may further include a cell segmenter 176 that identifies cells within a histopathology image, including cell borders, interiors, and exteriors. Please also read paragraph [0151, 0155-0156 and 0176]);
applying, with the at least one processor (Fig. 38, #3810 called processing units. Paragraph [0411]. Please also read paragraph [0111-0113]), a filter to at least one single-cell image of the plurality of single-cell images, wherein the filter decreases a resolution of the at least one single-cell image as compared to the first resolution, to a second resolution (Fig. 3. Paragraph [0118]-YIP discloses in multiscale configuration where image data is to be analyzed on a tile-basis, image pre-processing includes receiving an initial histopathology image, at a first image resolution, downsampling that image to a second image resolution, and then performing a normalization on the downsampled histopathology image, such as color and/or intensity normalization, and removing non-tissue objects from the image. In paragraph [0271]-YIP discloses the pre-processing controller 302 can receive an image from the file having a resolution that is higher than the optimal resolution and downsample the image at a ratio that achieves the optimal resolution, at process 1106. In paragraph [0279]-YIP discloses the controller 302 removes these pixels (wherein the pixels represent artifacts, markings or blurred areas) by converting the image to a grayscale image, passing the grayscale image through a Gaussian blur filter that mathematically adjusts the original grayscale value of each pixel to a blurred grayscale value to create a blurred image. Other filters may be used to blur the image. Please also read paragraph [0146 and 0176]);
assigning, with the at least one processor (Fig. 38, #3810 called processing units. Paragraph [0411]. Please also read paragraph [0111-0113]), a label to the at least one single-cell image of the plurality of single-cell images (Fig. 1. Paragraph [0130]-YIP discloses a histopathology image may be segmented and each segment of the image may be labeled according to one or more data types that may be classified to that segment. The histopathology image may be labeled as a whole according to the one or more data types that may be classified to the image or at least one segment of the image (wherein data types may indicate one or more biomarkers and labeling a histopathology image or a segment with a data type may identify the biomarker). Further in paragraph [0176]-YIP discloses training a tile based deep learning network to predict a biomarker classification label for each tile utilizes a strongly supervised approach to generate biomarker labels to identify the HRD status (Positive or Negative) of individual cells (wherein single cell RNA sequencing may be used alone, or in combination with laser guided micro-dissection to extract one cell at a time, to achieve labels for each cell and may incorporate a cell segmentation model and artificial intelligence engine to classify the pixel values inside each of the cell contours according to biomarker status). Please also read paragraph [0124, 0155-0156 and 0190]); and
training (Fig. 1. Paragraph [0123]-YIP discloses to analyze the received histopathology image data and other data, the imaging-based biomarker prediction system 102 includes a deep learning framework 150 that implements various machine learning techniques to generate trained classifier models for image-based biomarker analysis from received training sets of image data or sets of image data and other patient information. In paragraph [0125]-YIP discloses the deep learning framework 150 includes image data 162a. To train or use a multiscale PD-L1 biomarker classifier, this image data 162a may include pre-processed image data received from the sub-system 114, images from H&E slides or images from IHC slides (with or without human annotation), including IHC slides targeting PD-L1, PTEN, EGFR, Beta catenin/catenin beta1, NTRK, HRD, PIK3CA, and hormone receptors including HER2, AR, ER, and PR. To train or use other biomarker classifiers, whether multiscale classifiers or single-scale classifiers, the image data 162A may include images from other stained slides. Please also read paragraph [0190]), with the at least one processor (Fig. 38, #3810 called processing units. Paragraph [0411]. Please also read paragraph [0111-0113]), a machine learning model (Fig. 1, #150 called a Deep learning Framework. Paragraph [0123]) to predict a classification of the at least one single-cell image of the plurality of single-cell images based on inputting the plurality of single-cell images into the machine learning model (Fig. 1. Paragraph [0123]-YIP discloses with trained classifier models, the deep learning framework 150 is further used to analyze and diagnose the presence of image-based biomarkers in subsequent images collected from patients. 
In paragraph [0132]-YIP discloses the trained image classifier module 170 includes trained tissue classifiers 172, trained by the module 160 using one or more training image sets, to identify and classify tissue type in regions/areas of received image data. Further in paragraph [0160]-YIP discloses with the cell segmentation in a histopathology image generated by the cell segmentation model 316 and the tissue classification from the tissue classification model 302, a biomarker classification model 322 receives data from both and determines a predicted biomarker presence in the histopathology image, and with the multiscale configuration, the prediction biomarker presence in each tile image of the histopathology image. Please also see Fig. 3 and read paragraph [0176 and 0207-0208]).
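For illustration only, and forming no part of the claim mapping above, the recited training pre-processing steps (removing a background, defining single-cell images, and applying a filter that decreases resolution from a first resolution to a second resolution) can be sketched in plain Python roughly as follows. All function names are hypothetical and do not appear in YIP or in the claims:

```python
# Illustrative sketch only: background removal, single-cell cropping, and a
# resolution-decreasing filter, operating on a grayscale image stored as a
# nested list of pixel values. All identifiers are hypothetical.

def remove_background(image, background=0):
    """Crop away rows and columns containing only background (non-tissue) pixels."""
    rows = [r for r in image if any(p != background for p in r)]
    if not rows:
        return []
    keep = [c for c in range(len(rows[0])) if any(r[c] != background for r in rows)]
    return [[r[c] for c in keep] for r in rows]

def crop_single_cell(image, center, half=1):
    """Define a single-cell image as a window around a cell's (row, col) location."""
    y, x = center
    return [row[max(x - half, 0):x + half + 1]
            for row in image[max(y - half, 0):y + half + 1]]

def downsample(image, factor=2):
    """Filter that decreases resolution (first -> second resolution) by striding."""
    return [row[::factor] for row in image[::factor]]
```

In this sketch, `downsample` stands in generically for any resolution-decreasing filter such as the downsampling described in YIP paragraph [0118].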
Claims 2-3 and 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over YIP et al. (US 20200258223 A1), hereinafter referenced as YIP in view of WEISENFELD et al. (US 20210150707 A1), hereinafter referenced as WEISENFELD, and in further view of MIKHNO et al. (US 20170039706 A1), hereinafter referenced as MIKHNO and in further view of SANDKUIJL et al. (US 20230013375 A1), hereinafter referenced as SANDKUIJL.
Regarding claim 2, YIP in view of WEISENFELD and in further view of MIKHNO explicitly teaches the computer implemented method of claim 1, YIP further teaches wherein the at least one image comprises an image of a plurality of cells (Fig. 1. Paragraph [0115]-YIP discloses the imaging-based biomarker prediction system 102 is communicatively coupled to receive medical images (wherein medical images may include histopathology slides such as digital H&E stained slide images, IHC stained slide images, or digital images of any other staining protocols)), and wherein segmenting the at least one image to define the plurality of single-cell images (Fig. 1. Paragraph [0130]-YIP discloses a histopathology image may be segmented. In paragraph [0131]-YIP discloses in system 100, the deep learning framework 150 further includes a trained image classifier module 170. In paragraph [0133]-YIP discloses the module 170 may further include a cell segmenter 176 that identifies cells within a histopathology image, including cell borders, interiors, and exteriors. Please also see Fig. 3 and read paragraph [0151, 0155-0156 and 0176]) comprises:
identifying a location of each cell of the plurality of cells of the at least one image based on a pixel coordinate of each cell of the plurality of cells of the at least one image (Fig. 3. Paragraph [0212]-YIP discloses the process 712 may access a stored image processing library and use that library to find contours around the cell interior class. The process 712 may perform a cell registration process. The cell border class (denoted by locations with a 1 value in each mask) ensures separation between neighboring cell interiors. This generates a list of every contour on each mask. The process 712 determines the coordinates of the contour's centroid (center of mass), from which the process 712 produces a centroid list. Next, to generate outputs that are in the coordinate space defined by the entire received image instead of the coordinate space that is specific to a single tile in the image, each coordinate in the contour lists and the centroid lists is shifted. The process 714 performs the same processes as the process 712, but on the lymphocyte classes. In paragraph [0217]-YIP discloses the process 716 has the coordinates for each cell centroid, the coordinates for the top-left corner of each tissue classification tile, and the size of each tissue classification tile, and is configured to determine the parent tile for each cell based on its centroid location. Please also read paragraph [0112 and 0152]); and
defining for each of the plurality of cells, a single-cell image based on the location of a cell of the plurality of cells (Fig. 3. Paragraph [0152]-YIP discloses the module 304 receives tiled, sub-images from the pipeline 315, and the cell segmentation model 316 determines the list of locations of all lymphocytes, and those locations are compared to the other three class model's list of all cells determined from the model 316. In paragraph [0155]-YIP discloses a UNet model can recognize the outer edges of many types of cells and may classify each cell according to cell shape or its location within a tissue class region assigned by the tissue classification module 320. Further in paragraph [0217]-YIP discloses the process 716, each cell is binned into one of the tissue classification tiles (from process 706) based on location. Please also read paragraph [0166 and 0188]).
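For illustration only, and forming no part of the claim mapping, the centroid registration YIP describes in paragraph [0212] (determining each contour's center of mass, then shifting tile-local coordinates into the coordinate space of the entire image) can be sketched in Python as follows; the function names are hypothetical:

```python
# Illustrative sketch only: locating a cell by the centroid of its binary
# cell-interior mask, then shifting from tile-local coordinates into
# whole-image coordinates. All identifiers are hypothetical.

def centroid(mask):
    """Center of mass (row, col) of the 1-valued pixels in a binary mask."""
    pts = [(y, x) for y, row in enumerate(mask)
           for x, v in enumerate(row) if v == 1]
    n = len(pts)
    return (sum(y for y, _ in pts) / n, sum(x for _, x in pts) / n)

def to_image_coords(tile_centroid, tile_top_left):
    """Shift a tile-local centroid into the whole image's coordinate space."""
    ty, tx = tile_top_left
    cy, cx = tile_centroid
    return (cy + ty, cx + tx)
```

Binning each cell into its parent tile, as in YIP paragraph [0217], would then compare the shifted centroid against each tile's top-left corner and size.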
YIP fails to explicitly teach wherein the single-cell image comprises the cell of the plurality of cells and a microenvironment of the cell of the plurality of cells.
However, SANDKUIJL explicitly teaches wherein the single-cell image comprises the cell of the plurality of cells and a microenvironment of the cell of the plurality of cells (Paragraph [0153]-SANDKUIJL discloses imaging mass cytometry may identify a plurality of immune cell types in a tumor microenvironment, and may further identify cell state (e.g., intracellular signalling and/or expression of receptors involved in activation or suppression of an immune response). In paragraph [0392]-SANDKUIJL discloses a segmentation program may be applied to segment cells in IMC images obtained from tissue stained with a segmentation panel; such a program may be a neural network, such as a convolutional neural network. In paragraph [0474]-SANDKUIJL discloses the IMC dataset may be structured such that single cell data can be used as an input to an algorithm to classify the sample (e.g., for diagnostic or prognostic applications)).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of YIP in view of WEISENFELD and in further view of MIKHNO of having a computer implemented method, with the teachings of SANDKUIJL of having wherein the single-cell image comprises the cell of the plurality of cells and a microenvironment of the cell of the plurality of cells.
YIP’s method, as so modified, would thereby have a single-cell image comprising the cell of the plurality of cells and a microenvironment of the cell of the plurality of cells.
The motivation behind the modification would have been to obtain a method with improved machine learning model training, accuracy, and classification, as well as improved resolution, since both YIP and SANDKUIJL concern cellular image analysis. YIP’s systems and methods improve the accuracy, classification, and training of a machine learning model by disrupting shift invariance and improving convergence speed and stability during training, while SANDKUIJL’s systems and methods improve the resolution of a sample for analysis. Please see YIP et al. (US 20200258223 A1), Paragraph [0094, 0365, 0370 and 0379] and SANDKUIJL et al. (US 20230013375 A1), Abstract and Paragraph [0338].
Regarding claim 3, YIP in view of WEISENFELD and in further view of MIKHNO and in further view of SANDKUIJL explicitly teaches the computer implemented method of claim 2, YIP further teaches wherein the image of the plurality of cells (Fig. 1. Paragraph [0115]-YIP discloses the imaging-based biomarker prediction system 102 is communicatively coupled to receive medical images (wherein medical images may include histopathology slides such as digital H&E stained slide images, IHC stained slide images, or digital images of any other staining protocols)) comprises a plurality of nuclei (Fig. 3. Paragraph [0151]-YIP discloses the cell segmentation model 316 may be configured as a first pixel-level FCN model, that identifies and assigns each pixel of image data into a cell-subunit class: (i) cell interior, (ii) a cell border, or (iii) a cell exterior. In paragraph [0166]-YIP discloses the cell segmentation model 316 may be trained to analyze an input image and assign one of the three classes to each pixel, define cells as a group of adjacent nucleus pixels and all cytoplasm pixels between the nucleus pixels and the next nearest border pixels, and then for each cell, the biomarker classification model 322 may be configured to calculate the nucleus (wherein image sets containing nuclei may also be annotated and used for training a machine learning model). Please also read paragraph [0163, 0187-0188 and 0209]), and wherein prior to identifying the location of each cell of the plurality of cells of the at least one image (Fig. 3. Paragraph [0116]-YIP discloses in FIG. 1, the imaging-based biomarker prediction system 102 includes an image pre-processing sub-system 114 that performs initial image processing to enhance image data for faster processing in training a machine learning framework and for performing biomarker prediction using a trained deep learning framework.
Further in paragraph [0152]-YIP discloses the module 304 receives tiled, sub-images from the pipeline 315, and the cell segmentation model 316 determines the list of locations of all lymphocytes, and those locations are compared to the other three class model's list of all cells determined from the model 316. Please also read paragraph [0210, 0212 and 0216]), the method further comprises:
blurring the at least one image by decreasing the resolution of the at least one image to facilitate identification of the plurality of nuclei (Fig. 3. Paragraph [0118]-YIP discloses in multiscale configuration where image data is to be analyzed on a tile-basis, image pre-processing includes receiving an initial histopathology image, at a first image resolution, downsampling that image to a second image resolution, and then performing a normalization on the downsampled histopathology image, such as color and/or intensity normalization, and removing non-tissue objects from the image. In paragraph [0271]-YIP discloses the pre-processing controller 302 can receive an image from the file having a resolution that is higher than the optimal resolution and downsample the image at a ratio that achieves the optimal resolution, at process 1106. In paragraph [0279]-YIP discloses the controller 302 removes these pixels (wherein the pixels represent artifacts, markings or blurred areas) by converting the image to a grayscale image, passing the grayscale image through a Gaussian blur filter. Other filters may be used to blur the image. Please also read paragraph [0146]).
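For illustration only, and forming no part of the claim mapping, the Gaussian blurring YIP describes in paragraph [0279] (mathematically adjusting each grayscale pixel toward a blurred value) can be sketched as a separable Gaussian filter in pure Python; all identifiers are hypothetical:

```python
# Illustrative sketch only: a separable Gaussian blur over a grayscale image
# (nested list of floats), of the kind used to suppress artifacts before
# nucleus/cell detection. All identifiers are hypothetical.
import math

def gaussian_kernel(sigma, radius):
    """Normalized 1-D Gaussian weights over [-radius, radius]."""
    k = [math.exp(-(i * i) / (2.0 * sigma * sigma))
         for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def blur_row(row, kernel):
    """Convolve one row with the kernel, clamping indices at the edges."""
    r = len(kernel) // 2
    out = []
    for i in range(len(row)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - r, 0), len(row) - 1)
            acc += w * row[idx]
        out.append(acc)
    return out

def gaussian_blur(image, sigma=1.0, radius=2):
    """Blur rows, then columns (the Gaussian is separable)."""
    kernel = gaussian_kernel(sigma, radius)
    rows = [blur_row(r, kernel) for r in image]
    cols = [blur_row(list(c), kernel) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]
```

A uniform image passes through unchanged (the kernel is normalized), while an isolated bright pixel is spread over its neighborhood, which is the sense in which blurring at a lower effective resolution can make nucleus-scale structure easier to isolate.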
Regarding claim 16, YIP in view of WEISENFELD and in further view of MIKHNO explicitly teaches the system of claim 15, YIP further teaches wherein the at least one image comprises an image of a plurality of cells (Fig. 3. Paragraph [0138]-YIP discloses FIG. 3 illustrates an example implementation of the imaging-based biomarker prediction system 102, and the deep learning framework 150 in the form of deep learning framework 300. The framework 300 may be communicatively coupled to receive histopathology image data and other data (molecular data, tumor response data, demographic data, etc.) from external systems, such as the physician clinical records system 106, the histopathology imaging system 108, the genomic sequencing system 112, the medical images repository 110, and/or the organoid modeling lab 116 of FIG. 1 and through the network 104. The organoid modeling lab 116 may collect various types of data, such as, for example, the sensitivity of an organoid to a drug, single-cell analysis data or detection of cellular products indicating the presence of specific cell populations, as well as organoid image data, any of which may be stored within the molecular data 162b. Please also read paragraph [0140-0141]), and wherein when segmenting the at least one image to define the plurality of single-cell images (Fig. 3. Paragraph [0130]-YIP discloses a histopathology image may be segmented. In paragraph [0131]-YIP discloses in system 100, the deep learning framework 150 further includes a trained image classifier module 170. In paragraph [0133]-YIP discloses the module 170 may further include a cell segmenter 176 that identifies cells within a histopathology image, including cell borders, interiors, and exteriors. Please also read paragraph [0151, 0155-0156 and 0176]), the at least one processor (Fig. 38, #3810 called processing units. Paragraph [0411]. Please also read paragraph [0111 and 0113]) is further programmed or configured to:
identify a location of each cell of the plurality of cells of the at least one image based on a pixel coordinate of each cell of the plurality of cells of the at least one image (Fig. 3. Paragraph [0212]-YIP discloses the process 712 may access a stored image processing library and use that library to find contours around the cell interior class. The process 712 may perform a cell registration process. The cell border class (denoted by locations with a 1 value in each mask) ensures separation between neighboring cell interiors. This generates a list of every contour on each mask. The process 712 determines the coordinates of the contour's centroid (center of mass), from which the process 712 produces a centroid list. Next, to generate outputs that are in the coordinate space defined by the entire received image instead of the coordinate space that is specific to a single tile in the image, each coordinate in the contour lists and the centroid lists is shifted. The process 714 performs the same processes as the process 712, but on the lymphocyte classes. In paragraph [0217]-YIP discloses the process 716 has the coordinates for each cell centroid, the coordinates for the top-left corner of each tissue classification tile, and the size of each tissue classification tile, and is configured to determine the parent tile for each cell based on its centroid location. Please also read paragraph [0112 and 0152]); and
define for each of the plurality of cells, a single-cell image based on the location of a cell of the plurality of cells (Fig. 3. Paragraph [0152]-YIP discloses the module 304 receives tiled, sub-images from the pipeline 315, and the cell segmentation model 316 determines the list of locations of all lymphocytes, and those locations are compared to the other three class model's list of all cells determined from the model 316. In paragraph [0155]-YIP discloses a UNet model can recognize the outer edges of many types of cells and may classify each cell according to cell shape or its location within a tissue class region assigned by the tissue classification module 320. Further in paragraph [0217]-YIP discloses the process 716, each cell is binned into one of the tissue classification tiles (from process 706) based on location. Please also read paragraph [0166 and 0188]).
YIP fails to explicitly teach wherein the single-cell image comprises the cell of the plurality of cells and a microenvironment of the cell of the plurality of cells.
However, SANDKUIJL explicitly teaches wherein the single-cell image comprises a microenvironment of the cell of the plurality of cells (Paragraph [0153]-SANDKUIJL discloses imaging mass cytometry may identify a plurality of immune cell types in a tumor microenvironment, and may further identify cell state (e.g., intracellular signalling and/or expression of receptors involved in activation or suppression of an immune response). In paragraph [0392]-SANDKUIJL discloses a segmentation program may be applied to segment cells in IMC images obtained from tissue stained with a segmentation panel; such a program may be a neural network, such as a convolutional neural network. In paragraph [0474]-SANDKUIJL discloses the IMC dataset may be structured such that single cell data can be used as an input to an algorithm to classify the sample (e.g., for diagnostic or prognostic applications)).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of YIP in view of WEISENFELD and in further view of MIKHNO of having a system, with the teachings of SANDKUIJL of having wherein the single-cell image comprises the cell of the plurality of cells and a microenvironment of the cell of the plurality of cells.
YIP’s system, as so modified, would thereby have a single-cell image comprising the cell of the plurality of cells and a microenvironment of the cell of the plurality of cells.
The motivation behind the modification would have been to obtain a system with improved machine learning model training, accuracy, and classification, as well as improved resolution, since both YIP and SANDKUIJL concern cellular image analysis. YIP’s systems and methods improve the accuracy, classification, and training of a machine learning model by disrupting shift invariance and improving convergence speed and stability during training, while SANDKUIJL’s systems and methods improve the resolution of a sample for analysis. Please see YIP et al. (US 20200258223 A1), Paragraph [0094, 0365, 0370 and 0379] and SANDKUIJL et al. (US 20230013375 A1), Abstract and Paragraph [0338].
Regarding claim 17, YIP in view of WEISENFELD and in further view of MIKHNO and in further view of SANDKUIJL explicitly teaches the system of claim 16, YIP further teaches wherein the image of the plurality of cells comprises a plurality of nuclei (Fig. 3. Paragraph [0151]-YIP discloses the cell segmentation model 316 may be configured as a first pixel-level FCN model, that identifies and assigns each pixel of image data into a cell-subunit class: (i) cell interior, (ii) a cell border, or (iii) a cell exterior. In paragraph [0166]-YIP discloses the cell segmentation model 316 may be trained to analyze an input image and assign one of the three classes to each pixel, define cells as a group of adjacent nucleus pixels and all cytoplasm pixels between the nucleus pixels and the next nearest border pixels, and then for each cell, the biomarker classification model 322 may be configured to calculate the nucleus (wherein image sets containing nuclei may also be annotated and used for training a machine learning model). Please also read paragraph [0163, 0187-0188 and 0209]), and wherein prior to identifying the location of each cell of the plurality of cells of the at least one image (Fig. 3. Paragraph [0149]-YIP discloses the deep learning multiscale classifier module 304 is configured to perform cell segmentation through a cell segmentation model 316, where cell segmentation may be a pixel-level process of the histopathology image from normalization process 310. Further in paragraph [0152]-YIP discloses the module 304 receives tiled, sub-images from the pipeline 315, and the cell segmentation model 316 determines the list of locations of all lymphocytes, and those locations are compared to the other three class model's list of all cells determined from the model 316. Please also read paragraph [0210, 0212 and 0216]), the at least one processor (Fig. 38, #3810 called processing units. Paragraph [0411].
Please also read paragraph [0111-0113]) is further programmed or configured to:
blur the at least one image by decreasing the resolution of the at least one image to facilitate identification of the plurality of nuclei (Fig. 3. Paragraph [0118]-YIP discloses in multiscale configuration where image data is to be analyzed on a tile-basis, image pre-processing includes receiving an initial histopathology image, at a first image resolution, downsampling that image to a second image resolution, and then performing a normalization on the downsampled histopathology image, such as color and/or intensity normalization, and removing non-tissue objects from the image. In paragraph [0271]-YIP discloses the pre-processing controller 302 can receive an image from the file having a resolution that is higher than the optimal resolution and downsample the image at a ratio that achieves the optimal resolution, at process 1106. In paragraph [0279]-YIP discloses the controller 302 removes these pixels (wherein the pixels represent artifacts, markings or blurred areas) by converting the image to a grayscale image, passing the grayscale image through a Gaussian blur filter that mathematically adjusts the original grayscale value of each pixel to a blurred grayscale value to create a blurred image. Other filters may be used to blur the image. Please also read paragraph [0146]).
Claims 4-7 are rejected under 35 U.S.C. 103 as being unpatentable over YIP et al. (US 20200258223 A1), hereinafter referenced as YIP in view of WEISENFELD et al. (US 20210150707 A1), hereinafter referenced as WEISENFELD, and in further view of MIKHNO et al. (US 20170039706 A1), hereinafter referenced as MIKHNO and in further view of AKILESH et al. (US 20210310058 A1), hereinafter referenced as AKILESH.
Regarding claim 4, YIP in view of WEISENFELD and in further view of MIKHNO explicitly teaches the computer implemented method of claim 1. YIP further teaches the method further comprising:
repeating, the steps of:
assigning the label to the at least one single-cell image of the plurality of single-cell images (Fig. 1. Paragraph [0130]-YIP discloses a histopathology image may be segmented and each segment of the image may be labeled according to one or more data types that may be classified to that segment. The histopathology image may be labeled as a whole according to the one or more data types that may be classified to the image or at least one segment of the image (wherein data types may indicate one or more biomarkers and labeling a histopathology image or a segment with a data type may identify the biomarker). Further in paragraph [0176]-YIP discloses training a tile based deep learning network to predict a biomarker classification label for each tile utilizes a strongly supervised approach to generate biomarker labels to identify the HRD status (Positive or Negative) of individual cells (wherein single cell RNA sequencing may be used alone, or in combination with laser guided micro-dissection to extract one cell at a time, to achieve labels for each cell and may incorporate a cell segmentation model and artificial intelligence engine to classify the pixel values inside each of the cell contours according to biomarker status). Please also read paragraph [0124, 0155-0156, 0177, and 0190]); and
training (Fig. 3. Paragraph [0123]-YIP discloses to analyze the received histopathology image data and other data, the imaging-based biomarker prediction system 102 includes a deep learning framework 150 that implements various machine learning techniques to generate trained classifier models for image-based biomarker analysis from received training sets of image data or sets of image data and other patient information. In paragraph [0124]-YIP discloses in system 100, the deep learning framework 150 includes a histopathology image-based classifier training module 160 that can access received and stored data from the external systems 106, 108, 110, 112, and 116, and any others, where that data may be parsed from received data streams and databased into different data types. In paragraph [0125]-YIP discloses the deep learning framework 150 includes image data 162a. For example, to train or use a multiscale PD-L1 biomarker classifier, this image data 162a may include pre-processed image data received from the sub-system 114, images from H&E slides or images from IHC slides (with or without human annotation), including IHC slides targeting PD-L1. To train or use other biomarker classifiers, whether multiscale classifiers or single-scale classifiers, the image data 162A may include images from other stained slides. Please also read paragraph [0155-0156, 0176 and 0190]) the machine learning model (Fig. 3, #150 called a Deep learning Framework. Paragraph [0159]) to predict a classification of the at least one single-cell image of the plurality of single-cell images (Fig. 1. Paragraph [0123]-YIP discloses with trained classifier models, the deep learning framework 150 is further used to analyze and diagnose the presence of image-based biomarkers in subsequent images collected from patients. 
In paragraph [0132]-YIP discloses the trained image classifier module 170 (wherein the trained image classifier module 170 may include trained tissue classifiers 172, trained by the module 160 using one or more training image sets, to identify and classify tissue type and biomarkers, trained cell classifiers 174 that identify biomarkers via cell classification, and a cell segmenter 176 that identifies cells within a histopathology image). Further in paragraph [0160]-YIP discloses with the cell segmentation in a histopathology image generated by the cell segmentation model 316 and the tissue classification from the tissue classification model 302, a biomarker classification model 322 receives data from both and determines a predicted biomarker presence in the histopathology image. Additionally, in paragraph [0222]-YIP discloses the process 718 may perform a training process repeated many times. Please also see Fig. 7, and read paragraph [0376, 0378, 0380 and 0385]);
Although YIP explicitly teaches applying the filter to the at least one single-cell image of the plurality of single-cell images (Fig. 3. Paragraph [0118]-YIP discloses in multiscale configuration where image data is to be analyzed on a tile-basis, image pre-processing includes receiving an initial histopathology image, at a first image resolution, downsampling that image to a second image resolution, and then performing a normalization on the downsampled histopathology image, such as color and/or intensity normalization, and removing non-tissue objects from the image), wherein the filter decreases the resolution of the at least one single-cell image of the plurality of single-cell images as compared to the second resolution and below the first resolution (Fig. 3. Paragraph [0264]-YIP discloses the process 1100 may be performed on each received image for analysis and biomarker prediction (wherein the process 1100 may be performed on received training images). In paragraph [0265]-YIP discloses when training a classifier model, each digital image file received by the pre-processing controller 302, at 1102, contains multiple versions of the same image content, and each version has a different resolution (wherein the file stores copies in stacked layers, arranged by resolution). In paragraph [0271]-YIP discloses the pre-processing controller 302 can receive an image from the file having a resolution that is higher than the optimal resolution and downsample the image at a ratio that achieves the optimal resolution, at process 1106. Please also read paragraph [0279 and 0330]);
YIP fails to explicitly teach wherein the filter increases the resolution of the at least one single-cell image of the plurality of single-cell images.
However, AKILESH explicitly teaches wherein the filter increases the resolution of the at least one single-cell image of the plurality of single-cell images (Fig. 31. Paragraph [0437]-AKILESH discloses the deconvolution process may improve the contrast and resolution of cell images for further analysis. The image analysis method may comprise an iterative deconvolution of the image. The image analysis method may comprise 1, 2, 3, 4, 5, 6, 7, 8, 9, or 10 iterations of deconvolving the image. The image analysis method may comprise more than 1, more than 2, more than 3, more than 4, more than 5, more than 6, more than 7, more than 8, more than 9, or more than 10 iterations of deconvolving the image. The deconvolution procedure may remove or reduce out-of-focus blur or other sources of noise in the epifluorescence images or super-resolution images, enhancing the signal-to-noise ratio (SNR) within ROIs).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the computer implemented method of YIP in view of WEISENFELD and MIKHNO with the teachings of AKILESH, such that the filter increases the resolution of the at least one single-cell image of the plurality of single-cell images.
YIP's method, as modified, would thus have the filter increase the resolution of the at least one single-cell image of the plurality of single-cell images.
The motivation behind the modification would have been to obtain a method that improves machine learning model training, accuracy, and classification, as well as the contrast and resolution of cell images for further analysis, since both YIP and AKILESH concern cellular image analysis. YIP's systems and methods improve the accuracy, classification, and training of a machine learning model by disrupting shift invariance and improving the convergence speed and stability during training, while AKILESH's systems and methods enable the detection of short nucleic acid sequences at high throughput and at a high signal-to-noise ratio and improve the contrast and resolution of cell images for further analysis. Please see YIP et al. (US 20200258223 A1), paragraphs [0094, 0365, 0370 and 0379], and AKILESH et al. (US 20210310058 A1), Abstract and paragraphs [0145, 0149 and 0473].
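As background for the AKILESH citation, an iterative deconvolution of the Richardson-Lucy type is one standard way to sharpen a blurred cell image over a chosen number of iterations. The sketch below is a minimal illustration under assumed conditions (a known, small point-spread function and a non-negative image); it is not AKILESH's implementation:

```python
import numpy as np

def convolve2d_same(img, kernel):
    # Naive 'same'-size 2D convolution with edge padding.
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode='edge')
    out = np.zeros_like(img, dtype=float)
    flipped = kernel[::-1, ::-1]
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * flipped)
    return out

def richardson_lucy(blurred, psf, iterations=5):
    # Start from a flat estimate and iteratively refine it: each
    # iteration re-blurs the current estimate, compares it to the
    # observed image, and multiplicatively corrects the estimate.
    estimate = np.full_like(blurred, 0.5, dtype=float)
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iterations):
        reblurred = convolve2d_same(estimate, psf)
        ratio = blurred / np.maximum(reblurred, 1e-12)
        estimate = estimate * convolve2d_same(ratio, psf_mirror)
    return estimate
```

Each pass tends to restore high-frequency detail lost to the blur, which is consistent with the contrast and resolution improvement the reference attributes to repeated deconvolution iterations.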
Regarding claim 5, YIP in view of WEISENFELD, MIKHNO, and AKILESH teaches the computer implemented method of claim 4. YIP further teaches repeating, one or more additional times, the steps of:
applying the filter to the at least one single-cell image of the plurality of single-cell images (Fig. 3. Paragraph [0118]-YIP discloses in multiscale configuration where image data is to be analyzed on a tile-basis, image pre-processing includes receiving an initial histopathology image, at a first image resolution, downsampling that image to a second image resolution, and then performing a normalization on the downsampled histopathology image, such as color and/or intensity normalization, and removing non-tissue objects from the image. In paragraph [0279]-YIP discloses filters may be used to blur the image (wherein blurring is performed by the preprocessing module). Please also read paragraph [0146]);
assigning the label to the at least one single-cell image of the plurality of single-cell images (Fig. 1. Paragraph [0130]-YIP discloses a histopathology image may be segmented and each segment of the image may be labeled according to one or more data types that may be classified to that segment. Further in paragraph [0176]-YIP discloses when training a tile based deep learning network to predict a biomarker classification label for each tile utilizes a strongly supervised approach to generate biomarker labels to identify the HRD status (Positive or Negative) of individual cells (wherein single cell RNA sequencing may be used alone, or in combination with laser guided micro-dissection to extract one cell at a time, to achieve labels for each cell and may incorporate a cell segmentation model and artificial intelligence engine to classify the pixel values inside each of the cell contours according to biomarker status). Further in paragraph [0377]-YIP discloses training may be performed with weakly supervised learning that involves only image level labeling. This process may be repeated many times, given enough collections and tiles as input to the neural network, it will learn to differentiate tiles with different classes with higher accuracy as more iterations are performed. Please also read paragraph [0155-0156 and 0380]); and
Although YIP explicitly teaches training the machine learning model (Fig. 3, #150 called a Deep learning Framework. Paragraph [0159]) to predict a classification of the at least one single-cell image of the plurality of single-cell images (Fig. 3. Paragraph [0123]-YIP discloses to analyze the received histopathology image data and other data, the imaging-based biomarker prediction system 102 includes a deep learning framework 150 that implements various machine learning techniques to generate trained classifier models for image-based biomarker analysis from received training sets of image data or sets of image data and other patient information. With trained classifier models, the deep learning framework 150 is further used to analyze and diagnose the presence of image-based biomarkers in subsequent images collected from patients. In paragraph [0116]-YIP discloses the imaging-based biomarker prediction system 102 includes an image pre-processing sub-system 114 that performs initial image processing to enhance image data for faster processing in training a machine learning framework and for performing biomarker prediction using a trained deep learning framework (wherein the pre-processing controller 302 can receive images at multiple resolutions, downsample the image to achieve the optimal resolution and then apply a filter such as a gaussian blur to remove artifacts or noise). Please also read paragraph [0140, 0176, 0190 and 0310-0312]).
YIP fails to explicitly teach wherein with each subsequent repeat of the steps, the resolution of the filter increases from a previous repetition towards or to the first resolution.
However, AKILESH explicitly teaches wherein with each subsequent repeat of the steps, the resolution of the filter increases from a previous repetition towards or to the first resolution (Fig. 31. Paragraph [0437]-AKILESH discloses the image analysis method may comprise a deconvolution of the image. The deconvolution process may improve the contrast and resolution of cell images for further analysis. The image analysis method may comprise an iterative deconvolution of the image. The image analysis method may comprise 1, 2, 3, 4, 5, 6, 7, 8, 9, or 10 iterations of deconvolving the image. The image analysis method may comprise more than 1, more than 2, more than 3, more than 4, more than 5, more than 6, more than 7, more than 8, more than 9, or more than 10 iterations of deconvolving the image. The deconvolution procedure may remove or reduce out-of-focus blur or other sources of noise in the epifluorescence images or super-resolution images, enhancing the signal-to-noise ratio (SNR) within ROIs).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify the computer implemented method of YIP in view of WEISENFELD, MIKHNO, and AKILESH with the teachings of AKILESH, such that with each subsequent repeat of the steps, the resolution of the filter increases from a previous repetition towards or to the first resolution.
YIP's method, as modified, would thus have the resolution of the filter increase from a previous repetition towards or to the first resolution with each subsequent repeat of the steps.
The motivation behind the modification would have been to obtain a method that improves machine learning model training, accuracy, and classification, as well as the contrast and resolution of cell images for further analysis, since both YIP and AKILESH concern cellular image analysis. YIP's systems and methods improve the accuracy, classification, and training of a machine learning model by disrupting shift invariance and improving the convergence speed and stability during training, while AKILESH's systems and methods enable the detection of short nucleic acid sequences at high throughput and at a high signal-to-noise ratio and improve the contrast and resolution of cell images for further analysis. Please see YIP et al. (US 20200258223 A1), paragraphs [0094, 0365, 0370 and 0379], and AKILESH et al. (US 20210310058 A1), Abstract and paragraphs [0145, 0149 and 0473].
Regarding claim 6, YIP in view of WEISENFELD, MIKHNO, and AKILESH teaches the computer implemented method of claim 5. YIP further teaches wherein the steps are repeated until the resolution of the at least one single-cell image of the plurality of single-cell images is at the first resolution of the plurality of single-cell images (Fig. 11. Paragraph [0264]-YIP discloses FIG. 11 illustrates a process 1100 for preparing digital images of histopathology slides for tissue classification, biomarker detection, and mapping analysis, as may be implemented using the system 300 (wherein the process 1100 may be performed on each received image for analysis and biomarker prediction and may be performed, in whole or in part, on initially received training images). In paragraph [0265]-YIP discloses when training a classifier model, each digital image file received by the pre-processing controller 302, at 1102, contains multiple versions of the same image content, and each version has a different resolution. The file stores these copies in stacked layers, arranged by resolution such that the highest resolution image containing the greatest number of bytes is the bottom layer (wherein the layers form a pyramidal structure and the highest resolution is the highest resolution achievable by the scanner or camera that created the digital image file). In paragraph [0271]-YIP discloses the pre-processing controller 302 can receive an image from the file having a resolution that is higher than the optimal resolution and downsample the image at a ratio that achieves the optimal resolution, at process 1106).
Regarding claim 7, YIP in view of WEISENFELD, MIKHNO, and AKILESH teaches the computer implemented method of claim 6. YIP fails to explicitly teach wherein the steps are repeated until the resolution of the at least one single-cell image of the plurality of single-cell images is a highest resolution.
However, AKILESH explicitly teaches wherein the steps are repeated until the resolution of the at least one single-cell image of the plurality of single-cell images is a highest resolution (Fig. 31. Paragraph [0437]-AKILESH discloses the image analysis method may comprise a deconvolution of the image. The deconvolution process may improve the contrast and resolution of cell images for further analysis. The image analysis method may comprise an iterative deconvolution of the image. The image analysis method may comprise 1, 2, 3, 4, 5, 6, 7, 8, 9, or 10 iterations of deconvolving the image. The image analysis method may comprise more than 1, more than 2, more than 3, more than 4, more than 5, more than 6, more than 7, more than 8, more than 9, or more than 10 iterations of deconvolving the image. The deconvolution procedure may remove or reduce out-of-focus blur or other sources of noise in the epifluorescence images or super-resolution images, enhancing the signal-to-noise ratio (SNR) within ROIs).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify the computer implemented method of YIP in view of WEISENFELD, MIKHNO, and AKILESH with the teachings of AKILESH, such that the steps are repeated until the resolution of the at least one single-cell image of the plurality of single-cell images is a highest resolution.
YIP's method, as modified, would thus repeat the steps until the resolution of the at least one single-cell image of the plurality of single-cell images is a highest resolution.
The motivation behind the modification would have been to obtain a method that improves machine learning model training, accuracy, and classification, as well as the contrast and resolution of cell images for further analysis, since both YIP and AKILESH concern cellular image analysis. YIP's systems and methods improve the accuracy, classification, and training of a machine learning model by disrupting shift invariance and improving the convergence speed and stability during training, while AKILESH's systems and methods enable the detection of short nucleic acid sequences at high throughput and at a high signal-to-noise ratio and improve the contrast and resolution of cell images for further analysis. Please see YIP et al. (US 20200258223 A1), paragraphs [0094, 0365, 0370 and 0379], and AKILESH et al. (US 20210310058 A1), Abstract and paragraphs [0145, 0149 and 0473].
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over YIP et al. (US 20200258223 A1), hereinafter referenced as YIP, in view of WEISENFELD et al. (US 20210150707 A1), hereinafter referenced as WEISENFELD, further in view of MIKHNO et al. (US 20170039706 A1), hereinafter referenced as MIKHNO, and further in view of DITTAMORE et al. (US 20220260574 A1), hereinafter referenced as DITTAMORE.
Regarding claim 9, YIP in view of WEISENFELD and further in view of MIKHNO teaches the computer implemented method of claim 1. YIP further teaches wherein the machine learning model (Fig. 3, #150 called a Deep Learning Framework. Paragraph [0123]) outputs a classification value for the at least one single-cell image of the plurality of single-cell images (Fig. 3. Paragraph [0138]-YIP discloses FIG. 3 illustrates an example implementation of the imaging-based biomarker prediction system 102, and the deep learning framework 150 in the form of deep learning framework 300. In paragraph [0139]-YIP discloses the framework 300 includes a pre-processing controller 302, a deep learning framework cell segmentation module 304, a deep learning framework multiscale classifier module 306, a deep learning framework single-scale classifier module 307, and a deep learning post-processing controller 308. In paragraph [0160]-YIP discloses with the cell segmentation in a histopathology image generated by the cell segmentation model 316 and the tissue classification from the tissue classification model 302, a biomarker classification model 322 receives data from both and determines a predicted biomarker presence in the histopathology image, and in particular, with the multiscale configuration, the predicted biomarker presence in each tile image of the histopathology image), and wherein training the machine learning model to predict a classification of the at least one single-cell image of the plurality of single-cell images (Fig. 3.
Paragraph [0190]-YIP discloses in training mode, where the deep learning frameworks in the system 300 are trained, various training data may be obtained (wherein training image data 401 is provided to the pre-processing controller and may be high resolution and low resolution histopathology images, digitally and/or manually annotated, annotated tissue image data from various tissue types, computer generated, synthetic image data, segmented cells, image data of labeled biomarkers (e.g., the biomarkers discussed herein), either slide-level label or tile-level labels). Please also read [0174, 0176 and 0198]) further comprises:
based on determining that the classification predicted for the at least one single-cell image does not match a classification of the at least one single-cell image, automatically correcting the classification of the at least one single-cell image to match the classification predicted for the at least one single-cell image (Fig. 3. Paragraph [0333]-YIP discloses the training set images are converted to input training image matrices and processed by the tissue classifier module 306 to assign a tissue class label to each tile image of the training image. If the tissue classifier module 306 does not accurately label the validation set of training images to match the corresponding annotations added by a human analyst, the weights of each layer of the deep learning network may be adjusted automatically by stochastic gradient descent through backpropagation until the tissue classifier module 306 accurately labels most of the validation set of training images. Please also read paragraph [0386]).
YIP fails to explicitly teach determining whether an accuracy value of the machine learning model is above a threshold value; based on determining that the accuracy value of the machine learning model is above the threshold value, determining whether the classification predicted for the at least one single-cell image matches a classification of the at least one single-cell image.
However, DITTAMORE explicitly teaches determining whether an accuracy value of the machine learning model (Paragraph [0184]-DITTAMORE discloses the disclosed methods encompass the use of a predictive model. The disclosed methods encompass comparing a measurable feature with a reference feature. Analyzing a measurable feature encompasses one or more of a support vector machine classification algorithm, a machine learning algorithm, or a combination thereof. In paragraph [0185]-DITTAMORE discloses an analytic classification process can use any one of a variety of statistical analytic methods to manipulate the quantitative data and provide for classification of the sample. Examples of useful methods include machine learning algorithms and other methods known to those skilled in the art) is above a threshold value (Paragraph [0187]-DITTAMORE discloses the predictive ability of a model can be evaluated according to its ability to provide a quality metric, e.g. AUROC (area under the ROC curve) or accuracy, of a particular value, or range of values. A desired quality threshold is a predictive model that will classify a sample with an accuracy of at least about 0.7, at least about 0.75, at least about 0.8, at least about 0.85, at least about 0.9, at least about 0.95, or higher. As an alternative measure, a desired quality threshold can refer to a predictive model that will classify a sample with an AUC of at least about 0.7, at least about 0.75, at least about 0.8, at least about 0.85, at least about 0.9, or higher);
based on determining that the accuracy value of the machine learning model is above the threshold value (Paragraph [0187]-DITTAMORE discloses ROC analysis can be used to select the optimal threshold under a variety of clinical circumstances, balancing the inherent tradeoffs that exist between specificity and sensitivity. In paragraph [0188]-DITTAMORE discloses the relative sensitivity and specificity of a predictive model can be adjusted to favor either the specificity metric or the sensitivity metric, where the two metrics have an inverse relationship), determining whether the classification predicted for the at least one single-cell image (Paragraph [0089]-DITTAMORE discloses the non-enrichment CTC analysis platform described herein enables the methods of the invention by allowing for single cell resolution and accurate genomic profiling of heterogeneous CTC populations (wherein the term CTC is a “circulating tumor cell” related to cancer that is present in a biological sample and can be present as single cells or clusters, the term biological sample can be any sample that contains CTCs and CTC data can be generated with any microscopic method known in the art). In paragraph [0120]-DITTAMORE discloses phenotypic parameters are analyzed by a classifier that utilizes the models and/or algorithms to predict 15 cell types. Based on the classifications of the cell types, a determination is made as to whether a sample contains or does not contain cell type K. In paragraph [0121]-DITTAMORE discloses after computation, each cell will have 15 probabilities of being one of the 15 cell types. Then each cell is ranked by its 15 probabilities, and the cell is determined as one cell type with the highest probability. 
Please also read paragraph [0143, 0153 and 0178]) matches a classification of the at least one single-cell image (Paragraph [0186]-DITTAMORE discloses classification can be made according to predictive modeling methods that set a threshold for determining the probability that a sample belongs to a given class. The probability preferably is at least 50%, or at least 60%, or at least 70%, or at least 80%, or at least 90% or higher. Classifications also can be made by determining whether a comparison between an obtained dataset and a reference dataset yields a statistically significant difference. If so, then the sample from which the dataset was obtained is classified as not belonging to the reference dataset class. Conversely, if such a comparison is not statistically significantly different from the reference dataset, then the sample from which the dataset was obtained is classified as belonging to the reference dataset class).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the computer implemented method of YIP in view of WEISENFELD and MIKHNO with the teachings of DITTAMORE, such that the method determines whether an accuracy value of the machine learning model is above a threshold value and, based on determining that the accuracy value is above the threshold value, determines whether the classification predicted for the at least one single-cell image matches a classification of the at least one single-cell image.
YIP's method, as modified, would thus determine whether the accuracy value of the machine learning model is above the threshold value and, if so, whether the predicted classification matches the classification of the at least one single-cell image.
The motivation behind the modification would have been to obtain a method that improves machine learning model training, accuracy, and classification, as well as enables accurate genomic profiling of heterogeneous CTC populations, since both YIP and DITTAMORE concern cellular image analysis. YIP's systems and methods improve the accuracy, classification, and training of a machine learning model by disrupting shift invariance and improving the convergence speed and stability during training, while DITTAMORE's systems and methods enable accurate genomic profiling of heterogeneous CTC populations by allowing for single cell resolution. Please see YIP et al. (US 20200258223 A1), paragraphs [0094, 0365, 0370 and 0379], and DITTAMORE et al. (US 20220260574 A1), Abstract and paragraph [0089].
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over YIP et al. (US 20200258223 A1), hereinafter referenced as YIP, in view of WEISENFELD et al. (US 20210150707 A1), hereinafter referenced as WEISENFELD, further in view of MIKHNO et al. (US 20170039706 A1), hereinafter referenced as MIKHNO, and further in view of NATAN et al. (US 20190120767 A1), hereinafter referenced as NATAN.
Regarding claim 11, YIP in view of WEISENFELD and further in view of MIKHNO teaches the computer implemented method of claim 1. YIP fails to explicitly teach wherein the at least one image has a resolution of 5 nm to 250 nm per pixel.
However, NATAN explicitly teaches wherein the at least one image (Fig. 1A-B. Paragraph [0031]-NATAN discloses super-resolution fluorescence microscopy is a type of light microscopy that provides images at a higher resolution than permitted by the limit of diffraction. Using visible light and high numerical aperture objectives, conventional microscopy images are limited to a resolution of about 250 nm. Super-resolution images can be taken at much higher resolution, currently as high as 5 nm. Super-resolution imaging includes any microscopy techniques that result in a resolution of at least about 250 nm, 200 nm, 150 nm, 100 nm, 50 nm, 25 nm, 20 nm, 15 nm, 10 nm, or 5 nm. In some embodiments, the resolution is from about 200 nm to 5 nm, 150 to 10 nm, 100 to 5 nm).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the computer implemented method of YIP in view of WEISENFELD and MIKHNO, which comprises receiving, with at least one processor, image data associated with at least one image at a first resolution, with the teachings of NATAN, such that the at least one image has a resolution of 5 nm to 250 nm per pixel.
YIP's method, as modified, would thus have the at least one image at a resolution of 5 nm to 250 nm per pixel.
The motivation behind the modification would have been to obtain a method that improves machine learning model training, accuracy, and classification, as well as improves the resolution of super-resolution microscopy, since both YIP and NATAN concern cellular image analysis. YIP's systems and methods improve the accuracy, classification, and training of a machine learning model by disrupting shift invariance and improving the convergence speed and stability during training, while NATAN's systems and methods improve the resolution achievable with super-resolution microscopy. Please see YIP et al. (US 20200258223 A1), paragraphs [0094, 0365, 0370 and 0379], and NATAN et al. (US 20190120767 A1), Abstract and paragraphs [0046 and 0104].
Allowable Subject Matter
Claim 14 is objected to as being dependent upon rejected base claim 1, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:
Regarding claim 14, the prior art fails to explicitly teach wherein the sample is expanded by permeating the sample with a polymer monomer composition comprising an α,β-unsaturated carbonyl monomer, comprising an acrylate, methacrylate, acrylamide, or methacrylamide monomer for producing a water-swellable (co)polymer, and an enal able to polymerize with the acrylate, methacrylate, acrylamide, or methacrylamide monomer; and polymerizing the polymer monomer composition with the enal to form a swellable material containing the cell or tissue sample, resulting in covalent linking of the enal to both the swellable material and a biomaterial in the sample, as claimed in claim 14.
Conclusion
Listed below is the prior art made of record and not relied upon, but considered pertinent to applicant's disclosure.
Hosseini et al. (US 20200349707 A1)-Various image diagnostic systems, and methods of operating thereof, are disclosed herein. Example embodiments relate to operating the image diagnostic system to identify one or more tissue types within an image patch according to a hierarchical histological taxonomy, identifying an image patch associated with normal tissue, generating a pixel-level segmented image patch for an image patch, generating an encoded image patch for an image patch of at least one tissue, searching for one or more histopathological images, and assigning an image patch to one or more pathological cases. Please see Fig. 2-3 and the Abstract.
BOYDEN et al. (US 20190064037 A1)-The invention provides a method for preparing an expanded biological specimen suitable for microscopic analysis. Expanding the biological sample can be achieved by binding, e.g., anchoring, key biomolecules to a polymer network and swelling, or expanding, the polymer network, thereby moving the biomolecules apart as further described below. As the biomolecules are anchored to the polymer network, isotropic expansion of the polymer network retains the spatial orientation of the biomolecules, resulting in an expanded, or enlarged, biological specimen. Please see Fig. 1 and the Abstract.
MOEN et al. (US 20200364857 A1)-Disclosed herein include systems and methods for biological object tracking and lineage construction. Also disclosed herein include cloud-based systems and methods for allocating computational resources for deep learning-enabled image analysis of biological objects. Also disclosed herein include systems and methods for annotating and curating biological object tracking-specific training datasets. Please see Fig. 1-2 and 5-6 and the Abstract.
KARAM et al. (US 20190303720 A1)-Embodiments of a deep learning enabled generative sensing and feature regeneration framework which integrates low-end sensors/low quality data with computational intelligence to attain a high recognition accuracy on par with that attained with high-end sensors/high quality data, or to optimize a performance measure for a desired task, are disclosed. Please see para. [0096-0100] and the Abstract.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Aaron Bonansinga, whose telephone number is (703) 756-5380. The examiner can normally be reached Monday-Friday, 9:00 a.m. - 6:00 p.m. ET.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chineyere Wills-Burns, can be reached by phone at (571) 272-9752. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AARON TIMOTHY BONANSINGA/Examiner, Art Unit 2673
/CHINEYERE WILLS-BURNS/Supervisory Patent Examiner, Art Unit 2673