Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
Claims 1-2, 4-8, 12-13, 20, and 26 are rejected under 35 U.S.C. 103 as being unpatentable over Eastwood et al. (US 20180137689 A1) in view of Georgescu et al. (US 20190206056 A1).
Regarding claim 1, Eastwood et al. disclose a method for generating a digital reconstruction of tissue (see para [0072]; “3D reconstruction from 2D histological tissue section images”), the method comprising: receiving, at a computing system, image data of a tissue sample, wherein one or more sections of the tissue sample are stained with hematoxylin and eosin (H&E) (see Abstract; “creating a 3D volume image of a tissue block from a series of images of histological sections taken from the tissue block”, see also para [0006]; “acquiring a three dimensional image of a tissue sample formed from a plurality of images tissue layers, processing sections of the imaged tissue sample”, see para [0041]; “the section is picked up on a glass slide and stained (usually with hematoxylin and eosin, the H&E stain)”); registering, by the computing system, the image data to generate registered image data based on mapping independent serial images of the image data to a common coordinate system using non-linear image registration (see para [0074]; “uses multi-scale image registration that utilizes image symmetry, overall image shape, and image content to compile serial sections into a 3D volume. 
The HIC software uses multiple stage, multiple resolution registration algorithms for each adjacent pair of section images in properly ordered series of section images, for example, as generated using section-ordering processes”, see para [0129]; “The present inventors have discovered that this correction is essential for successful 2D and 3D linear and nonlinear registration of sections and brains to reference brains (and their atlases) for varying sources”); determining, by the computing system, a digital volume of the tissue sample in three dimensional (3D) space based on the annotated image data (see para [0045]; “computer-implemented methods that allow a user to quickly and easily create a 3D histological image, or “3D composite histological image,” from images of multiple histological sections… algorithms for efficiently identifying and generating the 3D composite histological image”), and returning, by the computing system, the digital volume of the tissue sample in 3D space to be presented in a graphical user interface (GUI) display at a user computing device (see para [0047]; “The display of anatomical information from a tissue-block atlas may result from any one of user interaction with a displayed experimental section image, user interaction with a displayed atlas image, and user interaction with a graphically displayed ontological hierarchy of anatomical regions of the tissue block under consideration”). However, Eastwood et al. do not teach identifying, by the computing system, tissue subtypes based on application of a machine learning model to the registered image data.
In the same field of endeavor, Georgescu et al. teach identifying, by the computing system, tissue subtypes based on application of a machine learning model to the registered image data (see para [0063]; “The proposed computer-automated method for tumor finding, outlining and classifying uses a convolutional neural network (CNN) to find each nuclear pixel on the WSI and then to classify each such pixel into one of a non-tumor class and one of a plurality of tumor classes, in our current implementation breast tumor classes”); annotating, by the computing system, the identified tissue subtypes to generate annotated image data (see para [0078]; “The CNN thus labels each pixel as non-cancerous or belonging to one or more of several different cancer (tumor) types”, see also para [0098]; “with reference to Tile D, Tile C also shows results from our CNN method (first areas shaded pink with pink perimeter lines correspond to a first tumor type, i.e. the tumor type shown red in Tile D; second areas shaded yellow with pink perimeter lines correspond to a second tumor type, i.e. the tumor type shaded blue in Tile D). [0099] Tile D is a tumor probability heatmap generated by our CNN”). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Eastwood et al. for creating a 3D volume image of a tissue block from a series of images of histological sections taken from the tissue block, with the use of a convolutional neural network (CNN) to identify tumors in a histological image as taught by Georgescu et al., in order to automate and improve identification and annotation of tissue types in the reconstructed image (see para [0153]).
Regarding claim 2, the rejection of claim 1 is incorporated herein.
Georgescu et al. in the combination further teach wherein the tissue sample is at least one of a pancreatic tissue sample, a skin tissue sample, a breast tissue sample, a lung tissue sample, and a small intestines tissue sample (see para [0078]; “The cancer of particular interest is breast cancer, but the method is also applicable to histology images of other cancers, such as cancer of the bladder, colon, rectum, kidney, blood (leukemia), endometrium, lung, liver, skin, pancreas, prostate, brain, spine and thyroid”).
Regarding claim 4, the rejection of claim 1 is incorporated herein.
Eastwood et al. in the combination further teach wherein the image data is between 1x and 40x magnification, wherein lateral x and y resolution is between 0.2μm and 10μm and axial z resolution is between 0.5μm and 40μm (see para [0174]; “automatically move the stage, increase the magnification, and/or automatically focus the imaging system to effectively zoom into the contoured region. Those skilled in the art will readily understand that many variations of selection and zooming in and zooming out can be devised that are under the control of the AIM software in response to user inputs directed to any one of the displayed images and/or the graphically displayed ontological hierarchy. Automated annotation of anatomies on a live-view image of a specimen slide that is at a low resolution provided by AIM software of the present disclosure, and the AIM software's ability to keep track of location on a slide while switching objective lenses (e.g., to a higher magnification lens) enables precise measurements within objectively determined regions of the slide-mounted tissue section at issue”).
Regarding claim 5, the rejection of claim 1 is incorporated herein.
Eastwood et al. in the combination further teach wherein registering, by the computing system, the image data to generate registered image data further comprises: identifying, as a point of reference, a center image of the image data (see para [0075]; “a first pair of section images in the ordered series of section images, and the first section image is the fixed image (a/k/a “reference” or “source”) …this first registration stage… a centered rigid transform based on center of rotation, angle of rotation, and translation”); and calculating global registration for each of the image data based on the point of reference (see para [0134]; “Registration is run starting at each candidate location to adjust parameters of a centered Euler transform (3D translation and rotation about a point) at a coarse resolution (˜128 pixels along the shortest axis, in one example)”, see also para [0179], Note; a specific search strategy that can be used as a method to achieve global registration).
Regarding claim 6, the rejection of claim 5 is incorporated herein.
Eastwood et al. in the combination further teach wherein calculating global registration further comprises iteratively calculating registration angle and translation for each of the image data (see para [0137]-[0145]; “Registration is run starting at each candidate location to adjust parameters of a centered Euler transform (3D translation and rotation about a point) at a coarse resolution (˜128 pixels along the shortest axis, in one example)… This process is repeated until the spacing between candidate transforms drops below a threshold (e.g., 80 μm)… that refines favorable candidate transforms allowing for oblique angles and scale differences… Registration is run starting at each candidate location to adjust parameters of a scale-versor transform (translation, rotation, and non-uniform scaling)”).
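Note: the centered rigid (Euler) transform parameterization recited above (rotation about a point plus translation) can be illustrated in 2D as follows. This is an illustrative sketch only, not Eastwood et al.'s implementation; the function and parameter names are the examiner's, chosen for clarity.

```python
import numpy as np

def centered_rigid_transform(points, angle_rad, center, translation):
    """Apply a centered rigid transform: rotate `points` about `center`
    by `angle_rad`, then translate. This mirrors the rotation/translation
    parameters that the registration stage iteratively adjusts."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    rotation = np.array([[c, -s],
                         [s,  c]])
    return (points - center) @ rotation.T + center + translation
```

A registration search of the kind quoted would evaluate an image-similarity metric after applying candidate (angle, translation) pairs and keep the best-scoring candidates at progressively finer resolutions.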
Regarding claim 7, the rejection of claim 6 is incorporated herein.
Eastwood et al. in the combination further teach further comprising calculating elastic registration for each of the image data based on calculating rigid registration of cropped image tiles of each of the globally registered image data at intervals that range between 0.1mm and 5mm (see para [0116]; “register the individual brain image to the template using multiple stage registration up to a nonlinear (BSpline) transform”, see also para [0069]; “To create a series of section images from each slide image, the HIC software uses the contour(s) it draws around each section image to determine a crop-box sized to completely contain the image region(s) as defined by the contour(s). The HIC software then stores each cropped section image in a distinct image file and causes these section image files to be ordered in serial order using the selected ordering information described above. The resolution is limited to 1,000 pixels×2,000 pixels, but other resolutions can certainly be used… the image registration techniques described herein typically work very well using these limited-resolution images”, Note: nonlinear registration implies elastic registration, and nonlinear B-spline transforms are fundamentally interval-based).
Regarding claim 8, the rejection of claim 1 is incorporated herein.
Georgescu et al. in the combination further teach wherein the tissue sample includes at least one of normal human tissue, precancerous human tissue, and cancerous human tissue (see para [0033]; “There are various options for setting the tissue classes, but most if not all embodiments will have in common that a distinction will be made in the classes between non-tumorous and tumorous tissue. The non-tumorous tissue classes may include one, two or more classes. The tumorous tissue classes may also include one, two or more classes. For example, in our current implementation we have three tissue classes, one for non-tumorous tissue and two for tumorous tissue, wherein the two tumorous tissue classes are for invasive tumors and in situ tumors”).
Regarding claim 12, the rejection of claim 1 is incorporated herein.
Georgescu et al. in the combination further teach wherein the machine learning model was trained, by the computing system, with manual annotations of one or more tissue subtypes in a plurality of training tissue image data wherein the machine learning model is at least one of a deep learning semantic segmentation model, a convolutional neural network (CNN), and a U-net structure (see para [0124]; “FIG. 4 is a flow diagram showing the steps involved in training the CNN. [0125] In Step S40, training data is retrieved containing WSIs for processing which have been annotated by a clinician to find, outline and classify tumors. The clinician's annotations represent the ground truth data”, see also para [0088]; “Training the network may be done on a GPU, CPU or a FPGA using any one of several available deep learning frameworks”, and para [0129]; “each of a batch of input image patches is input into the CNN and processed to find, outline and classify the patches on a pixel-by-pixel basis as described further above with reference to FIGS. 1A and 1B”, which implies semantic segmentation).
Regarding claim 13, the rejection of claim 12 is incorporated herein.
Georgescu et al. in the combination further teach further comprising training, by the computing system, the machine learning model based on randomly overlaying extracted annotated regions of one or more tissue samples on a training image (see para [0129]-[0131]; “each of a batch of input image patches is input into the CNN and processed… the CNN output image patches are compared with the ground truth data… the probability map is presented on the display as a semi-transparent overlay to the WSI… the CNN then learns from this comparison and updated the CNN weights”) and cutting the training image into the plurality of training tissue image data (see para [0126]; “In Step S41, the WSIs are broken down into image patches, which are the input image patches for the CNN. That is, image patches are extracted from the WSI”, see also page 6, last para; “we create our training dataset by semi-randomly overlaying extracted annotated regions on a large image, then cutting this large image into many training and validation images”).
Regarding claim 20, the rejection of claim 1 is incorporated herein.
Georgescu et al. in the combination further teach further comprising classifying, by the computing system, the image data based on pixel resolution, annotation tissue classes, color definitions for labeling of tissue classes, and names of tissue subtypes corresponding to labels associated with each class of tissue subtypes (see para [0035]; “With the results from the CNN, the method may be extended to include a scoring process based on the pixel classification and the tumors that are defined from that classification with reference to the probability map. For example, the method may further comprise: defining areas in the histological image that correspond to tumors according to the probability map; scoring each tumor according to a scoring algorithm to assign a score to each tumor; and storing the scores into the record in the data repository. The scoring thus takes place on the histological image, but is confined to those areas identified by the probability map as containing tumorous tissue”, see also para [0036]; “The tumor scores may also be displayed in some convenient manner, e.g. with text labels on or pointing to the tumors, or alongside the image”, and para [0099]; “Tile D is a tumor probability heatmap generated by our CNN. It can be seen how our approach of pixel-level prediction produces areas with smooth perimeter outlines. For our heatmap, different (arbitrarily chosen) colors indicate different classes, namely green for non-tumor, red for a first tumor type and blue for a second tumor type”).
Regarding claim 26, the rejection of claim 1 is incorporated herein.
Georgescu et al. in the combination further teach further comprising generating, by the computing system, immune cell heatmaps of pancreatic cancer precursor lesions based on the digital volume of the tissue sample and using at least one of H&E, immunohistochemistry (IHC), immunofluorescence (IF), imaging mass cytometry (IMC), and spatial transcriptomics (see para [0099]; “For our heatmap, different (arbitrarily chosen) colors indicate different classes, namely green for non-tumor, red for a first tumor type and blue for a second tumor type”, see also para [0105]; “For example, there could be 5 adjacent sections, each with a different stain, such as ER, PR, p53, HER2, H&E and Ki-67”, and para [0010]; “On the glass slide, hormone-specific antibodies are applied using immunohistochemical (IHC) techniques to a formalin-fixed paraffin-embedded breast tissue section from the patient”).
Claims 3, 9-11, 14-19, and 21-25 are rejected under 35 U.S.C. 103 as being unpatentable over Eastwood et al. in view of Georgescu et al. as applied in claim 1, and further in view of Kiemen et al. NPL “In situ characterization of the 3D microanatomy of the pancreas and pancreatic cancer at single cell resolution”.
Regarding claim 3, the rejection of claim 1 is incorporated herein. The combination of Eastwood et al. and Georgescu et al. does not teach further comprising determining, by the computing system, 3D radial density of each identified tissue subtype and each cell in the digital volume of the tissue sample.
In the same field of endeavor, Kiemen et al. teach further comprising determining, by the computing system, 3D radial density of each identified tissue subtype and each cell in the digital volume of the tissue sample (see page 29, 1st para; “3D radial density of tissue subtypes and cells was calculated using the multi labelled and cell coordinate 3D matrices”). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Eastwood et al. for creating a 3D volume image of a tissue block from a series of images of histological sections taken from the tissue block, with the use of a convolutional neural network (CNN) to identify tumors in a histological image as taught by Georgescu et al., and the method of Kiemen et al. for reconstructing three-dimensional (3D) centimeter-scale tissues using deep learning approaches, in order to compare overall cell densities between samples (see page 29, 1st para).
Regarding claim 9, the rejection of claim 1 is incorporated herein.
Kiemen et al. in the combination further teach further comprising normalizing, by the computing system, the registered image data to generate normalized image data based on: correcting two dimensional (2D) serial cell counts based on in-situ measured nuclear diameter of cells in the tissue sample (see page 5, Fig. 1; “2D serial cell counts are corrected using the in-situ measured nuclear diameter of cells in different tissue bodies”); locating nuclei in each histological section of the registered image data based on color deconvolution (see page 6, 2nd para; “We further established an automated cell detection workflow to locate all nuclei in each histological section based on color deconvolution and a previously established algorithm”); for each located nuclei, measuring in-situ diameters of each cell type; mapping the nuclei in a serial 2D z plane; and extrapolating true cell counts from the serial 2D z plane (see page 6, 2nd para; “In situ diameters of each cell type were measured and incorporated to extrapolate true 3D cell counts from cell counts on serial 2D z-planes”). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Eastwood et al. for creating a 3D volume image of a tissue block from a series of images of histological sections taken from the tissue block, with the use of a convolutional neural network (CNN) to identify tumors in a histological image as taught by Georgescu et al., and the method of Kiemen et al. for reconstructing three-dimensional (3D) centimeter-scale tissues using deep learning approaches, in order to identify changes to cancer cell organization in the tissue region (see page 5, Fig. 1).
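Note: a correction of 2D serial-section cell counts by nuclear diameter can be illustrated with the classic Abercrombie factor T/(T + d), where T is section thickness and d is the measured nuclear diameter. Whether Kiemen et al. use exactly this form is an assumption; the sketch below is the examiner's illustration, not the reference's code.

```python
def corrected_cell_count(count_2d, section_thickness_um, nuclear_diameter_um):
    """Correct a 2D section cell count for nuclei split across adjacent
    sections (each split nucleus is otherwise counted more than once).
    Uses the Abercrombie factor T / (T + d) as an illustrative correction."""
    t, d = float(section_thickness_um), float(nuclear_diameter_um)
    return count_2d * t / (t + d)
```

For example, 100 nuclear profiles counted on a 4 μm section of cells with 6 μm nuclei corresponds to roughly 40 true cells per section volume under this model.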
Regarding claim 10, the rejection of claim 1 is incorporated herein.
Kiemen et al. in the combination further teach further comprising normalizing, by the computing system, the registered image data to generate normalized image data based on: extracting, using color deconvolution, a hemotoxylin channel from each of the image data depicting the one or more sections of the tissue samples stained with H&E (see page 24, 1st para; “First, the hemotoxylin channel of all H&E images was extracted using color deconvolution”); and for each of the image data depicting the one or more sections of the tissue samples stained with H&E: identifying a tissue region in the image data based on detecting regions of the image data with low green channel intensity and high red-green-blue (rgb) standard deviation; converting rgb channels in the image data to optical density (see page 24, 1st para; “For each image, the tissue region of the image was identified by finding regions of the image with low green channel intensity and high red-green-blue (rgb) standard deviation. Next, rgb channels were converted to optical density”); identifying clusters, based on kmeans clustering, to represent one or more optical densities of the image data (see page 24, 1st para; “Using kmeans clustering analysis, 100 clusters were identified to represent the optical densities of the image”); and deconvolving the image data, based on the one or more optical densities, into hemotoxylin, eosin, and background channel images (see page 24, 1st para; “deconvolve the rgb image in to hemotoxylin, eosin, and background channel images”). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Eastwood et al. for creating a 3D volume image of a tissue block from a series of images of histological sections taken from the tissue block, with the use of a convolutional neural network (CNN) to identify tumors in a histological image as taught by Georgescu et al., and the method of Kiemen et al. for reconstructing three-dimensional (3D) centimeter-scale tissues using deep learning approaches, in order to establish an automated cell detection workflow to locate all nuclei in each histological section (see page 24, 1st para).
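Note: the rgb-to-optical-density conversion recited above is the standard Beer-Lambert step that precedes stain deconvolution, OD = -log10(I / I0) with I0 the background (white) intensity. The sketch below is illustrative; the epsilon guard and parameter names are the examiner's, not Kiemen et al.'s.

```python
import numpy as np

def rgb_to_optical_density(rgb, background=255.0, eps=1.0):
    """Convert 8-bit rgb intensities to optical density, OD = -log10(I/I0).
    `eps` clamps near-zero intensities so saturated-dark pixels do not
    produce infinite optical density."""
    rgb = np.asarray(rgb, dtype=float)
    return -np.log10(np.maximum(rgb, eps) / background)
```

In the workflow quoted above, the per-pixel OD vectors would then be clustered (e.g., kmeans with 100 clusters) and unmixed into hematoxylin, eosin, and background channels.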
Regarding claim 11, the rejection of claim 10 is incorporated herein.
Georgescu et al. in the combination further teach further comprising: smoothing, for each of the image data, the hemotoxylin channel image; and identifying, for each of the image data, a nuclei in the smoothed hemotoxylin channel image (see para [0102]; “Tile B is a tumor probability heatmap generated by our CNN. For our heatmap, different (arbitrarily chosen) colors indicate different classes, namely green for non-tumor, reddish-brown for invasive tumor (shown pink in Tile A), and blue for in situ tumor (shown yellow in Tile A). Once again, it can be seen how our approach of pixel-level prediction produces areas with smooth perimeter outlines”, see also para [0061]; “We describe a computer-automated tumor finding method which detects and outlines invasive and in situ breast cancer cell nuclei automatically”).
Regarding claim 14, the rejection of claim 13 is incorporated herein.
Kiemen et al. in the combination further teach wherein training the machine learning model further comprises: identifying, by the computing system, bounding boxes around each annotated region of the one or more tissue samples; and randomly overlaying each identified bounding box containing a least represented tissue subtype on a blank image tile until the tile is at least 65% full of annotated regions of the one or more tissue samples (see page 25, 3rd para; “Bounding boxes of all annotations were identified and each annotated rgb image region was extracted and saved as a separate image file. A matrix was used to keep track of which bounding box images contained with annotation tissue types. Training images were built through creation of a 9000x9000x3, zero-value rgb image tile. Annotation bounding boxes containing the least represented deep learning class were randomly overlaid on the blank image tile until the tile was >65% full of annotations and such that the number of pixels of each deep learning class was approximately even”). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Eastwood et al. for creating a 3D volume image of a tissue block from a series of images of histological sections taken from the tissue block, with the use of a convolutional neural network (CNN) to identify tumors in a histological image as taught by Georgescu et al., and the method of Kiemen et al. for reconstructing three-dimensional (3D) centimeter-scale tissues using deep learning approaches, in order to build larger training and validation images (see page 25, 3rd para).
Regarding claim 15, the rejection of claim 14 is incorporated herein.
Georgescu et al. in the combination further teach wherein the image tile is an rgb image composed of overlaid manual annotations, and wherein the image tile is cut, by the computing system, into a plurality of image tiles for use with the machine learning model (see para [0080]; “as a preprocessing step, both for training and prediction, patches are extracted from the WSI which have the desired pixel dimensions, e.g. N×N×n pixels, where n=3 in the case that each physical location has three pixels associated with three primary colors—typically RGB”, see also para [0100]; “FIG. 3 is in color and shows an example of the input RGB image patch (Tile A on the left) and the final output tumor probability heat map (Tile B on the right)”).
Regarding claim 16, the rejection of claim 1 is incorporated herein.
Kiemen et al. in the combination further teach wherein the machine learning model is trained, by the computing system, to identify at least one of inflammation, cancer cells, and extracellular matrix (ECM) in the image data (see page 6, 3rd para; “Deep learning methods have been successfully used to identify many structures in H&E images, such as inflammation, (24) cancer cells, (25, 26) and extracellular matrix (ECM)”). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Eastwood et al. for creating a 3D volume image of a tissue block from a series of images of histological sections taken from the tissue block, with the use of a convolutional neural network (CNN) to identify tumors in a histological image as taught by Georgescu et al., and the method of Kiemen et al. for reconstructing three-dimensional (3D) centimeter-scale tissues using deep learning approaches, in order to visualize and quantify the architecture of the pancreas (see page 6, 3rd para).
Regarding claim 17, the rejection of claim 1 is incorporated herein.
Kiemen et al. in the combination further teach wherein the tissue subtypes include at least one of normal ductal epithelium, pancreatic intraepithelial neoplasia, intraductal papillary mucinous neoplasm, PDAC, smooth muscle and nerves, acini, fat, ECM, and islets of Langerhans (see page 6, 3rd para; “A total of eight tissue subtypes were identified: normal ductal epithelium, precursors (pancreatic intraepithelial neoplasia [PanIN] or intraductal papillary mucinous neoplasm [IPMN]), PDAC, smooth muscle & nerves, acini, fat, ECM, and islets of Langerhans (Figure 1D)”). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Eastwood et al. for creating a 3D volume image of a tissue block from a series of images of histological sections taken from the tissue block, with the use of a convolutional neural network (CNN) to identify tumors in a histological image as taught by Georgescu et al., and the method of Kiemen et al. for reconstructing three-dimensional (3D) centimeter-scale tissues using deep learning approaches, in order to visualize and quantify the architecture of the pancreas (see page 6, 3rd para).
Regarding claim 18, the rejection of claim 1 is incorporated herein.
Kiemen et al. in the combination further teach wherein determining, by the computing system, the digital volume of the tissue sample in 3D space based on the annotated image data comprises consolidating multi-labeled image data into a 3D matrix based on registering (i) the annotated image data and (ii) cell coordinates counted on unregistered histological sections of the annotated image data (see page 26, 3rd para; “Multi-labelled images created by the DeepLab portion of the CODA pipeline were consolidated into a 3D matrix using the H&E image registration results. Similarly, cellular coordinates counted on the unregistered histological sections were consolidated into a 3D cell matrix using the H&E image registration results”). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Eastwood et al. for creating a 3D volume image of a tissue block from a series of images of histological sections taken from the tissue block, with the use of a convolutional neural network (CNN) to identify tumors in a histological image as taught by Georgescu et al., and the method of Kiemen et al. for reconstructing three-dimensional (3D) centimeter-scale tissues using deep learning approaches, in order to maintain registration quality (see page 26, 3rd para).
Regarding claim 19, the rejection of claim 18 is incorporated herein.
Kiemen et al. in the combination further teach wherein the 3D matrix is subsampled, by the computing system, using nearest neighbor interpolation from original voxel dimensions of 2x2x12μm³/voxel to an isotropic 12x12x12μm³/voxel (see page 26, 3rd para; “For all calculations performed on the 3D labelled matrices of the tissues, the 3D matrix was subsampled using nearest neighbor interpolation from original voxel dimensions of 2x2x12μm³/voxel to an isotropic 12x12x12μm³/voxel”).
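Note: for a labeled matrix, nearest-neighbor subsampling from 2x2x12μm³ to isotropic 12x12x12μm³ voxels amounts to strided indexing (every 6th voxel in x and y, every voxel in z). The sketch below is the examiner's illustration of that step; array axis order (x, y, z) is an assumption.

```python
import numpy as np

def subsample_to_isotropic(volume, in_spacing=(2, 2, 12), out_spacing=12):
    """Nearest-neighbor subsample a labeled 3D matrix from anisotropic to
    isotropic voxels by strided indexing; with the defaults the strides
    are (6, 6, 1), i.e. keep every 6th voxel in x and y."""
    steps = [out_spacing // s for s in in_spacing]
    return volume[::steps[0], ::steps[1], ::steps[2]]
```

Strided indexing preserves the integer class labels exactly, which is why nearest-neighbor (rather than averaging) interpolation is appropriate for labeled matrices.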
Regarding claim 21, the rejection of claim 1 is incorporated herein.
Kiemen et al. in the combination further teach further comprising, for each tissue subtype: summing, by the computing system, pixels of the tissue sample in a z dimension; generating, by the computing system, a projection of a volume of the tissue sample on an xy axis (see page 26, last para; “The 3D labelled matrices of each patient case were used to construct z-projections of each tissue subtype. For each tissue subtype, the pixels of the 3D matrix corresponding to that subtype were summed in the z-dimension, creating a projection of the volume on the xy axis”); normalizing, by the computing system, the projection based on the projection's maximum; and visualizing, by the computing system, the projection using a same color scheme created for visualization of the tissue sample in the 3D space (see page 26, last para; “The projections were normalized by their maximum and visualized using the imagesc command in MATLAB 2020b using the same color scheme created for visualization of the 3D tissue”). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Eastwood et al. for creating a 3D volume image of a tissue block from a series of images of histological sections taken from the tissue block, with the use of a convolutional neural network (CNN) to identify tumors in a histological image as taught by Georgescu et al., and the method of Kiemen et al. for reconstructing three-dimensional (3D) centimeter-scale tissues using deep learning approaches, in order to create a visualization of the 3D tissue (see page 26, last para).
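Note: the per-subtype z-projection and max-normalization recited above can be sketched as follows. This is illustrative only (Kiemen et al. used MATLAB's imagesc for display); the integer label encoding and (x, y, z) axis order are assumptions of the sketch.

```python
import numpy as np

def subtype_z_projection(labeled_3d, subtype_label):
    """Sum the voxels belonging to one tissue subtype along the z axis and
    normalize by the projection's maximum, yielding an xy map in [0, 1]."""
    mask = (labeled_3d == subtype_label)
    proj = mask.sum(axis=2).astype(float)  # project onto the xy plane
    peak = proj.max()
    return proj / peak if peak > 0 else proj
```

The resulting map can then be rendered with the same per-subtype color scheme used for the 3D visualization.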
Regarding claim 22, the rejection of claim 1 is incorporated herein.
Kiemen et al. in the combination further teach further comprising calculating, by the computing system, cell density of each tissue subtype in the tissue sample using the digital volume of the tissue sample (see page 27, 4th para; “Cell density of each tissue subtype was calculated by combining the tissue subtype data in the multi labelled 3D matrix with cell coordinate data in the cell 3D matrix”). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Eastwood et al. for creating a 3D volume image of a tissue block from a series of images of histological sections taken from the tissue block, with the use of a convolutional neural network (CNN) to identify tumors in a histological image as taught by Georgescu et al., and the method of Kiemen et al. for reconstructing three-dimensional (3D) centimeter-scale tissues using deep learning approaches, in order to compare overall cell densities between samples (see page 27, 4th para).
Regarding claim 23, the rejection of claim 1 is incorporated herein.
Kiemen et al. in the combination further teach further comprising measuring, by the computing system, tissue connectivity in the tissue sample using the digital volume of the tissue sample (see page 28, 2nd para; “The 3D multi labelled matrices were used to determine tissue connectivity”). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Eastwood et al. for creating a 3D volume image of a tissue block from a series of images of histological sections taken from the tissue block in view of the convolutional neural network (CNN) applied to identifying tumors in a histological image of Georgescu et al. and the method of Kiemen et al. for reconstructing three-dimensional (3D) centimeter-scale tissues utilizing deep learning approaches in order to significantly increase our knowledge of the human tumor microenvironment (see page 28, 2nd para).
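One common way to quantify tissue connectivity from a labelled 3D matrix is to count connected components of a subtype's voxels; the sketch below assumes that interpretation (the reference does not specify its exact connectivity algorithm, and the sample volume is hypothetical).

```python
import numpy as np
from collections import deque

# Hypothetical binary 3D matrix: 1 where a given tissue subtype is present.
vol = np.zeros((3, 3, 3), dtype=int)
vol[0, 0, 0] = vol[0, 0, 1] = 1      # one connected region of two voxels
vol[2, 2, 2] = 1                     # a second, disconnected region

def count_components(vol):
    """Count 6-connected foreground components via breadth-first flood fill."""
    seen = np.zeros(vol.shape, dtype=bool)
    count = 0
    for start in zip(*np.nonzero(vol)):
        if seen[start]:
            continue
        count += 1
        queue = deque([start])
        seen[start] = True
        while queue:
            z, y, x = queue.popleft()
            for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                n = (z + dz, y + dy, x + dx)
                if all(0 <= n[i] < vol.shape[i] for i in range(3)) \
                        and vol[n] and not seen[n]:
                    seen[n] = True
                    queue.append(n)
    return count

print(count_components(vol))   # 2
```

A single component would indicate a fully connected structure; more components indicate fragmentation.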
Regarding claim 24, the rejection of claim 1 is incorporated herein.
Kiemen et al. in the combination further teach further comprising calculating, by the computing system, collagen fiber alignment in the tissue sample using the digital volume of the tissue sample (see page 14, last para; “Quantification of collagen fiber alignment using a method described in ref,(38)”, see also page 28, last para; “We measured the alignment index of the eosin channel to compare the degree of collagen alignment in axially and longitudinally sectioned regions of the ducts”). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Eastwood et al. for creating a 3D volume image of a tissue block from a series of images of histological sections taken from the tissue block in view of the convolutional neural network (CNN) applied to identifying tumors in a histological image of Georgescu et al. and the method of Kiemen et al. for reconstructing three-dimensional (3D) centimeter-scale tissues utilizing deep learning approaches in order to quantify collagen alignment around the normal pancreatic duct (see page 14, last para).
Regarding claim 25, the rejection of claim 1 is incorporated herein.
Kiemen et al. in the combination further teach further comprising calculating, by the computing system, a fibroblast aspect ratio of the tissue sample based on measuring a length of major and minor axis of nuclei in a ductal submucosa in the digital volume of the tissue sample (see page 28, last para; “Calculation of collagen fiber alignment and fibroblast aspect ratio … measured the length of major and minor axis of nuclei in the ductal submucosa to calculate aspect ratios using ImageJ. In total, we measured 1546 nuclei. Violin plots were constructed from data using code available in ref. (47)”). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Eastwood et al. for creating a 3D volume image of a tissue block from a series of images of histological sections taken from the tissue block in view of the convolutional neural network (CNN) applied to identifying tumors in a histological image of Georgescu et al. and the method of Kiemen et al. for reconstructing three-dimensional (3D) centimeter-scale tissues utilizing deep learning approaches in order to quantify collagen alignment around the normal pancreatic duct (see page 28, last para).
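The aspect-ratio computation described above (major axis length divided by minor axis length per nucleus) can be sketched as follows; the axis measurements here are fabricated illustrative values standing in for the ImageJ measurements in the reference.

```python
import numpy as np

# Hypothetical per-nucleus axis lengths (in pixels), standing in for the
# ImageJ major/minor axis measurements described by Kiemen et al.
major = np.array([12.0, 15.0, 9.0, 20.0])
minor = np.array([6.0, 5.0, 4.5, 4.0])

# Aspect ratio per nucleus; elongated fibroblast nuclei give high ratios.
aspect_ratios = major / minor
print(aspect_ratios.mean())   # 3.0 for these sample values
```

A ratio near 1 indicates a round nucleus; ratios well above 1 indicate the elongated nuclei typical of fibroblasts.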
Claims 27-30 are rejected under 35 U.S.C. 103 as being unpatentable over Eastwood et al. in view of Georgescu et al. as applied in claim 1, and further in view of Duric et al. (US 20220323043 A1).
Regarding claim 27, the rejection of claim 1 is incorporated herein.
Georgescu et al. in the combination further teach further comprising: retrieving, by the computing system and from a data store (see para [0019]; “receiving a histological image or set thereof from a record stored in a data repository”), one or more deep learning models that were trained using patient tissue training data (see para [0125]; “In Step S40, training data is retrieved containing WSIs for processing which have been annotated by a clinician to find, outline and classify tumors. The clinician's annotations represent the ground truth data”, see also para [0059]; “For the deepest convolutional layer C10, three stages of deconvolution are needed, via D1 and D2 to layer D3. The result is three arrays D3, D5, D6 of equal size to the input patch”), wherein the one or more deep learning models are configured to (i) generate multi-dimensional volumes of patient tissue from patient tissue image data; wherein the patient tissue training data is different than the tissue sample and wherein the patient tissue image data is different than the image data; generating, by the computing system, the digital volume of the tissue sample in 3D space based on applying the one or more deep learning models to the image data (see para [0132]-[0137]; “After training, the CNN can be applied to WSIs independently of any ground truth data, i.e. in live use for prediction. FIG. 5 is a flow diagram showing the steps involved in prediction using the CNN. In Step S50, one or more WSIs are retrieved for processing, e.g. from a laboratory information system (LIS) or other histological data repository. The WSIs are pre-processed, for example as described above. In Step S51, image patches are extracted from the or each WSI. The patches may cover the entire WSI or may be a random or non-random selection. In Step S52, the image patches are pre-processed, for example as described above.
In Step S53, each of a batch of input image patches is input into the CNN and processed to find, outline and classify the patches on a pixel-by-pixel basis as described further above with reference to FIGS. 1A and 1B”, see also para [0106]-[0107]; “The different images are then aligned, warped or otherwise pre-processed to map the coordinates of any given feature on one image to the same feature on the other images… a coordinate mapping between different WSIs of a set comprising differently stained adjacent sections, the WSIs can be merged into a single composite WSI from which composite patches may be extracted for processing by the CNN”). However, the combination of Eastwood et al. and Georgescu et al. does not teach (ii) determine stiffness measurements of tissue components in the multi-dimensional volumes of patient tissue; determining, by the computing system, stiffness measurements of the tissue components of the tissue sample based on applying the one or more deep learning models to the digital volume of the tissue sample; and returning, by the computing system, the determined stiffness measurements for the tissue components of the tissue sample.
In the same field of endeavor, Duric et al. in the combination further teach (ii) determine stiffness measurements of tissue components in the multi-dimensional volumes of patient tissue (see para [0087]; “Volumetric stiffness measurements may further stratify this risk, particularly for the denser fibroglandular/stromal tissues”, see also para [0199]; “UST stiffness measurements by SoftVue extracted information on the tissue bulk modulus which was then converted to an index of relative tissue stiffness (from 0=very soft to 1=extremely stiff)”); determining, by the computing system, stiffness measurements of the tissue components of the tissue sample based on applying the one or more deep learning models to the digital volume of the tissue sample (see Abstract; “method of analyzing an image of a volume of tissue to determine a risk of developing breast cancer”, see also para [0097]; “machine learning and/or employing a mask to selectively identify more grouped regions of stiffness >5 mm (i.e., representing potential mass for evaluation)”); and returning, by the computing system, the determined stiffness measurements for the tissue components of the tissue sample (see para [0267]; “SoftVue UST is unique in its ability to display a whole-breast distribution of tissue stiffness, including masses”, see also para [0276]; “Quantitative stiffness values of large and small masses, as displayed by the unfiltered and spatially filtered algorithms, are shown in Table 12”). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Eastwood et al. for creating a 3D volume image of a tissue block from a series of images of histological sections taken from the tissue block in view of the convolutional neural network (CNN) applied to identifying tumors in a histological image of Georgescu et al. and the method of Duric et al. for analyzing an image of a volume of tissue to determine a risk of developing breast cancer in order to monitor breast cancer during and/or after a treatment protocol (see page 28, last para).
Regarding claim 28, the rejection of claim 27 is incorporated herein.
Duric et al. in the combination further teach wherein the tissue sample is a breast tissue (see para [0015]; “stiffness of a region of interest within the volume of breast tissue”).
Regarding claim 29, the rejection of claim 27 is incorporated herein.
Duric et al. in the combination further teach wherein determining, by the computing system, stiffness measurements of the tissue components of the tissue sample comprises determining Pearson or Spearman correlation and statistical significance for each of the tissue components in the digital volume of the tissue sample (see para [0139]; “Pearson correlation coefficients, or analysis of variance (ANOVA) as appropriate”, see also para [0148]; “Spearman correlation coefficients were calculated to determine the strength of the correlations between the Volpara and UST assessment of breast density”).
Regarding claim 30, the rejection of claim 27 is incorporated herein.
Duric et al. in the combination further teach wherein the stiffness measurements correspond to at least one of (i) resistances of the tissue components of the tissue sample to deformation, (ii) elastic modulus, and (iii) Young's modulus (see para [0083]; “The primary method by which to assess breast density with ultrasound tomography is through the measurement of Sound Speed. The average speed of sound (s) through human tissue is related to tissue density and elasticity as: … In human breast tissue, the elastic constant scales in proportion to ρ³. Substitution into the above equation for sound speed allows us to factor out the dependence on elasticity”, see also para [0096]; “Tissue properties expressed by the bulk modulus thus describe material resistance to uniform compression and associated volume changes. The bulk modulus also has a larger dynamic range than either Young's or shear modulus, allowing greater likelihood of tissue differentiation”).
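The sound speed/elasticity relation underlying the quoted passages can be illustrated with the standard fluid-medium formula c = sqrt(K/ρ), where K is the bulk modulus and ρ the density; the numeric values below are illustrative, water-like figures and are not taken from Duric et al.

```python
import math

def sound_speed(bulk_modulus_pa, density_kg_m3):
    """Longitudinal sound speed in a fluid-like medium: c = sqrt(K / rho)."""
    return math.sqrt(bulk_modulus_pa / density_kg_m3)

# Illustrative, water-like values (roughly representative of soft tissue):
# K ~ 2.2 GPa, rho ~ 1000 kg/m^3.
c = sound_speed(2.2e9, 1000.0)
print(round(c))   # 1483 (m/s), close to the measured sound speed in water
```

This is why measured sound speed carries information about tissue stiffness: at a given density, a stiffer (higher bulk modulus) tissue transmits sound faster.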
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WINTA GEBRESLASSIE whose telephone number is (571)272-3475. The examiner can normally be reached Monday-Friday, 9:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Bee can be reached at 571-270-5180. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/WINTA GEBRESLASSIE/Examiner, Art Unit 2677