Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
Claims 1, 8, 9, and 14 have been amended.
Claims 2 and 10-11 have been cancelled.
Claims 1, 3-9, and 12-14 remain pending and are addressed below.
Response to Arguments
Applicant's arguments filed November 6, 2025 have been fully considered but they are not persuasive.
Applicant on page 2 of the “Remarks” asserts “Applicant respectfully asserts that cycleGAN is for normalizing pathological images. Image colors can vary depending on staining conditions, microscope illumination, scanner models, etc.; and cycleGAN in Agus is intended to normalize these variations, not to convert to different types of staining images. Agus extracts fingerprints by inputting the normalized images into a CNN afterward to classify specific phenotype (disease...) of a sample. …While a general deep net computes a general nonlinear function, a net with only layers of this form computes a nonlinear filter, which we call a deep filter or fully convolutional network… Applicant respectfully submits that neither Agus nor Che, whether considered individually or in combination, teaches or suggests a deep learning model that converts an H&E image into a CK image, as presently claimed. Applicant further respectfully submits that neither Agus nor Che, whether considered individually or in combination, teaches or suggests the process of calculating TSR as presently claimed”.
Response: Agus describes normalization of staining variations as only one application. Agus at para. [0079] discloses the use of neural style transfer with CycleGAN to recolor pathology images (i.e., to change stain appearance), transferring H&E staining coloration so that images appear as if prepared under different staining conditions, and further explains that CycleGAN exchanges texture between image sets while preserving structural information, analogous to transforming photographs into impressionist paintings and horses into zebras. This disclosure demonstrates that Agus teaches deep-learning-based image-to-image transformation beyond mere normalization of pixel intensities. Agus at para. [0072] further discloses that the CycleGAN framework includes a generator network that takes an input image (image A) and transforms it into an output image of a different style (image B) while preserving structural information. Agus at para. [0061] confirms that the process is implemented using a deep convolutional neural network operating on image inputs.
Agus expressly teaches that deep learning is used to extract biologically meaningful information from histopathology images, including predicting diagnostic, prognostic, and theragnostic features that are not visually discernible by a pathologist. Agus further discloses that the presence or absence of biomarkers is inferred from stained tissue samples, and explicitly identifies cytokeratin markers among the useful biomarkers whose status may be predicted. Agus explains that such predicted biomarker status may be used for treatment determination or prognosis, and positions deep-learning-based analysis of H&E morphology as a surrogate for conventional immunohistochemistry-based biomarker assessment (see para. [0060], [0120]-[0125]).
Accordingly, Agus teaches deep-learning-driven biological inference and prediction of biomarker status: not mere color normalization or cosmetic preprocessing, but a functional transformation of pathology information. Thus, Applicant's arguments are not persuasive.
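For illustration only, the following is a minimal sketch of the adversarial training scheme Agus describes at para. [0072]: a generator mapping style-A images toward style B, trained against a discriminator that distinguishes generated images from real ones. The network bodies, optimizer settings, and tensors here are simplified placeholders, not the implementation of any cited reference.

```python
# Minimal sketch (placeholders, not Agus's code) of the adversarial scheme:
# a generator restyles image A toward style B while a discriminator learns
# to separate real style-B images from generated ones.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a style-A image (e.g., one staining appearance) toward style B."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Scores whether an image is a real style-B image or a generated one."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.net(x)

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_a = torch.rand(1, 3, 64, 64)  # stand-in for a style-A patch
real_b = torch.rand(1, 3, 64, 64)  # stand-in for a style-B patch

# Discriminator step: push real style-B toward 1 and generated toward 0.
fake_b = G(real_a).detach()
pred_real, pred_fake = D(real_b), D(fake_b)
d_loss = bce(pred_real, torch.ones_like(pred_real)) + \
         bce(pred_fake, torch.zeros_like(pred_fake))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: learn a transformation that fools the discriminator.
pred_fake = D(G(real_a))
g_loss = bce(pred_fake, torch.ones_like(pred_fake))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```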
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
Claims 1, 3-4, 9, and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Agus et al. (US 20220180518 A1) in view of Che et al. (US 20240312604 A1) and further in view of Boucheron (US 20100111396 A1).
Regarding claim 1, Agus et al. teach the method comprising: receiving, by an analysis device, a first stained image of target tissue (see para [0056]; “the digital pathology slide image is obtained”); generating, by the analysis device, a second stained image by inputting the first stained image into a trained deep learning model (see para [0079]; “In the second two experiments, the style transfer algorithm CycleGAN.sup.17 was used to recolor images (FIG. 4), making them appear as if they were prepared at a different site… Here, we use CycleGAN to transfer the H&E staining coloration from a reference site to images from other sites”; see also para [0072]; “Briefly, the CycleGAN approach consists of training two neural nets, a generator which takes an image A transforms it into an image of style B, and a discriminator which is trained to distinguish between generated images and real ones”; and see para [0061]; “It should be appreciated that any deep convolutional neural network that operates on the pre-processed input can be utilized”); generating a second binary image by binarizing the virtual second stained image (see para [0107]; “Given the high AUC for the ER classifier, heatmaps showing the regions predicted to be highly ER-positive or negative across whole slides were made… Subsequently, the pre-trained ER network was used to make patch-level predictions across the entire slide. The predictions are shaded in grayscale. Black signifies a prediction of −1 (ER-negative), while white signifies +1 (ER-positive). Gray values correspond to scores close to 0 (indeterminate)”); wherein the first stained image is a hematoxylin & eosin (H&E) stained image (see para [0079]; “we use CycleGAN to transfer the H&E staining coloration from a reference site to images from other sites”) and the second stained image is a cytokeratin (CK) stained image (see para [0060]; “the diagnostic, prognostic, or theragnostic feature, is the presence or absence of a biomarker…Examples of useful biomarkers include, cytokeratin markers”). However, Agus et al. do not teach a method of predicting a tumor-stroma ratio (TSR) on the basis of a deep learning model; generating, by the analysis device, a first binary image of the first stained image; and calculating, by the analysis device, a TSR of the target tissue by subtracting the second binary image from the first binary image.
In the same field of endeavor, Che et al. teach a method of predicting a tumor-stroma ratio (TSR) on the basis of a deep learning model (see para [0084]; “a content evaluation unit, configured to calculate a tumor proportion and a tumor-stroma ratio of the digital pathology slide image”; see also para [0101]; “obtaining a digital pathology slide image, and determining an effective pathological region based on the digital pathology slide image; identifying a tumor cell region corresponding to the effective pathological region by using a deep learning-based pathology image classifier”); and generating, by the analysis device, a first binary image of the first stained image (see para [0047]; “the effective pathological region is a region dyed by a red dyeing reagent and a blue dyeing reagent….Therefore, binarization processing is performed on the digital pathology slide image, so that a color picture is converted into a grayscale image”). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the general histologic classification of pathology specimens through machine learning of Agus et al. in view of the automatic evaluation of tumor cell content in digital pathology slide images using a deep learning-based pathology image classifier of Che et al. in order to obtain a uniform and precise evaluation result, avoiding the subjective deviation of manual analysis (see para [0084]).
Additionally, Che et al. disclose calculating, by the analysis device, a TSR of the target tissue by subtracting the second binary image from the first binary image (see para [0048]; “Determine a first area of the tumor cell region, and determine a second area of the effective pathological region. [0050] Step 106B: Calculate a tumor proportion and a tumor-stroma ratio of the digital pathology slide image based on the first area and the second area… Specifically, the first area may be calculated based on the tumor cell region, and the second area may be calculated based on the effective pathological region. The tumor proportion and the tumor-stroma ratio are respectively calculated”; see also para [0037]; “The digital pathology slide image is segmented to determine the effective pathological region, where an image segmentation method includes but is not limited to an image binarization processing method, a machine learning-based image segmentation model”; Note: binary segmentation is used to isolate tumor and stroma components into distinct “binary image” masks, from which the respective pixel areas are calculated for the final ratio; that is, the total effective region and the tumor cells within it are identified, and the areas (pixel counts) are used to determine the ratio), but does not explicitly disclose calculating the ratio by subtracting one binary image from the other.
In the same field of endeavor, Boucheron teaches calculating, by the analysis device, a TSR of the target tissue by subtracting the second binary image from the first binary image (see para [0043]; “classifying one or more biological materials comprises classifying cytoplasm materials and stroma materials by subtracting out pixels related to background and nuclei”; see also para [0437]; “More specifically, we use the morphological opening of a binary image with a disk-shaped structuring element (SE). The radius of the SE is increased by 1 pixel each iteration and the residue of the image (the sum of the pixels) is calculated…Looking at the first derivative (the element-wise subtraction) of .phi.(k) will yield a local maximum for a structuring element with approximate size of a large number of objects in the original binary image”). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the general histologic classification of pathology specimens through machine learning of Agus et al. in view of the automatic evaluation of tumor cell content in digital pathology slide images using a deep learning-based pathology image classifier of Che et al., and further in view of the quantitative object and spatial arrangement-level analysis of nuclear material, cytoplasm material, and stromal material of Boucheron, in order to better coordinate clinical care of women presenting with breast masses (see para [0043]).
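For illustration only, the following sketch shows the subtraction-based TSR computation recited in claim 1, under the assumption that the H&E image has been binarized into a total-tissue mask and the virtual CK image into a tumor mask; the mask names, random stand-in data, and the expression of TSR as the stromal fraction of the effective region are hypothetical.

```python
# Minimal sketch (hypothetical masks, not from the cited references):
# subtract a binarized CK (tumor) mask from a binarized H&E (tissue) mask
# to isolate stroma, then compute a tumor-stroma ratio from pixel areas.
import numpy as np

tissue_mask = np.random.rand(512, 512) > 0.3                 # stand-in first binary image
tumor_mask = (np.random.rand(512, 512) > 0.7) & tissue_mask  # stand-in second binary image

stroma_mask = tissue_mask & ~tumor_mask   # "subtracting" the second image from the first
tumor_area = tumor_mask.sum()             # pixel counts serve as areas
stroma_area = stroma_mask.sum()

tsr = stroma_area / (tumor_area + stroma_area)  # stromal fraction of the effective region
print(f"TSR = {tsr:.3f}")
```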
Regarding claim 3, the rejection of claim 1 is incorporated herein.
The combination of Agus et al., Che et al. and Boucheron further teach wherein the deep learning model is trained with a generator configured to receive a real first stained image and generate a virtual second stained image, and a discriminator configured to discriminate whether the virtual second stained image output by the generator is real or not, wherein the discriminator is a patch generative adversarial network (PatchGAN) (see Agus et al. para [0072]; “Briefly, the CycleGAN approach consists of training two neural nets, a generator which takes an image A transforms it into an image of style B, and a discriminator which is trained to distinguish between generated images and real ones. The networks are trained simultaneously as adversaries. As the discriminator improves, the generator is challenged to learn better transformations from style A to B. Conversely, as the generator improves, the discriminator is challenged to learn better features that distinguish real and generated images”; Note: by definition, CycleGAN often utilizes PatchGAN as its discriminator architecture; the discriminators within CycleGAN typically follow the PatchGAN design to provide detailed feedback on the realism of generated image patches, which contributes to the overall quality of the image translation).
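For illustration only, the patch-level discrimination noted above may be visualized with the following sketch of a PatchGAN-style discriminator, which outputs a grid of real/fake logits (one per receptive-field patch) rather than a single image-level score; the layer sizes are illustrative assumptions.

```python
# Minimal sketch (illustrative, not from Agus): a PatchGAN-style
# discriminator that scores each image patch separately.
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    def __init__(self, in_channels=3):
        super().__init__()

        def block(c_in, c_out, norm=True):
            layers = [nn.Conv2d(c_in, c_out, 4, stride=2, padding=1)]
            if norm:
                layers.append(nn.InstanceNorm2d(c_out))
            layers.append(nn.LeakyReLU(0.2, inplace=True))
            return layers

        self.model = nn.Sequential(
            *block(in_channels, 64, norm=False),
            *block(64, 128),
            *block(128, 256),
            nn.Conv2d(256, 1, 4, padding=1),  # one logit per patch
        )

    def forward(self, x):
        return self.model(x)

d = PatchDiscriminator()
scores = d(torch.rand(1, 3, 256, 256))
print(scores.shape)  # torch.Size([1, 1, 31, 31]): a grid of patch-level logits
```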
Regarding claim 4, the rejection of claim 1 is incorporated herein.
The combination of Agus et al., Che et al. and Boucheron further teach wherein the deep learning model is trained with a generator configured to receive a real first stained image which is divided into a plurality of patches (see Agus et al. para [0084]; “Similar to other deep learning approaches, a patch-based classifier was trained to predict molecular marker status…First, each whole slide image was grossly segmented into foreground vs. background. The foreground areas were divided into non-overlapping squares of (112×112 microns), which were scaled to a final patch size of 224×224 pixels”, see also para [0086]; “Using the same image patches and cross validation splits from the control experiment (described above) and the previously trained fingerprint network, 512D fingerprints were extracted for each image patch and then trained a second “biomarker” neural network to predict marker status based on these fingerprints”) and generate a virtual second stained image by each patch unit (see Agus et al. para [0077]; “The region colors were determined as follows: each core image was divided into overlapping square patches (size 224×224 pixels, with 80% linear overlap). Each patch was passed to the neural network, and a probability vector was calculated predicting the identity of each core via Softmax… Each heat map shows the probability of predicting the correct patient and is shaded from 0 (blue-light) to 1 (red-dark)”); and a discriminator configured to discriminate the virtual second stained image by each patch unit using a pair of corresponding patches in the real first stained image and the virtual second stained image (see Agus et al. para [0072]; “a discriminator which is trained to distinguish between generated images and real ones. The networks are trained simultaneously as adversaries. As the discriminator improves, the generator is challenged to learn better transformations from style A to B. Conversely, as the generator improves, the discriminator is challenged to learn better features that distinguish real and generated images… Following style transfer, the objective loss function to promote style invariance was adapted. The new objective loss function has two components, a cross entropy loss (abbreviated ‘CE’) to predict the identity of each patch”).
Regarding claim 9, Agus et al. teach the analysis device comprising: an input device configured to receive a first stained image of target tissue (see para [0055]; “receiving a digital image 16′ of a histologic sample as input to a tissue fingerprinting function 18…. The untrained machine learning device 12 is trained with digital images 16.sup.n from a plurality of characterized or uncharacterized stained tissue samples”); a storage device (see Fig. 2, step 104, disclosing a storage device) configured to store a generative adversarial model for generating a second stained image on the basis of the first stained image (see para [0072]; “In this project, the open-source CycleGAN code was used without modification. The network was trained to transfer styles between images of BR20823 that were stained by the array manufacturer (US Biomax) or at our site (style transfer between slides 1 and 2, respectively, as shown in FIG. 4). Thus, the original set of 13,415 cores was augmented to three-fold its original size via neural style transfer (each core has an original image, a virtual USC stain and a virtual Biomax stain)”, see also para [0079]; “In the second two experiments, the style transfer algorithm CycleGAN.sup.17 was used to recolor images (FIG. 4), making them appear as if they were prepared at a different site… Here, we use CycleGAN to transfer the H&E staining coloration from a reference site to images from other sites”); generating a virtual second stained image by inputting the received first stained image to the generative adversarial model (see para [0084]; “To train the classifier, the entire set of TCGA patients was split into five groups. For each cross validation fold, three groups were used to train, one group (the “overfitting group”) was used to monitor overfitting and perform early stopping, and the remaining group was used to test the network's final performance. To the train the network, patches were assigned a binary label per the patient-level annotation”); generating a second binary image by binarizing the virtual second stained image (see para [0107]; “Given the high AUC for the ER classifier, heatmaps showing the regions predicted to be highly ER-positive or negative across whole slides were made… Subsequently, the pre-trained ER network was used to make patch-level predictions across the entire slide. The predictions are shaded in grayscale. Black signifies a prediction of −1 (ER-negative), while white signifies +1 (ER-positive). Gray values correspond to scores close to 0 (indeterminate)”); wherein the first stained image is a hematoxylin & eosin (H&E) stained image (see para [0079]; “we use CycleGAN to transfer the H&E staining coloration from a reference site to images from other sites”) and the second stained image is a cytokeratin (CK) stained image (see para [0060]; “the diagnostic, prognostic, or theragnostic feature, is the presence or absence of a biomarker…Examples of useful biomarkers include, cytokeratin markers”). However, Agus et al. do not teach predicting a tumor-stroma ratio (TSR) on the basis of a deep learning model; generating, by the analysis device, a first binary image of the first stained image; and calculating, by the analysis device, a TSR of the target tissue by subtracting the second binary image from the first binary image.
In the same field of endeavor, Che et al. teach an analysis device for predicting a tumor-stroma ratio (TSR) on the basis of a deep learning model (see para [0084]; “a content evaluation unit, configured to calculate a tumor proportion and a tumor-stroma ratio of the digital pathology slide image”; see also para [0101]; “obtaining a digital pathology slide image, and determining an effective pathological region based on the digital pathology slide image; identifying a tumor cell region corresponding to the effective pathological region by using a deep learning-based pathology image classifier”); and generating, by the analysis device, a first binary image of the first stained image (see para [0047]; “the effective pathological region is a region dyed by a red dyeing reagent and a blue dyeing reagent….Therefore, binarization processing is performed on the digital pathology slide image, so that a color picture is converted into a grayscale image”). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the general histologic classification of pathology specimens through machine learning of Agus et al. in view of the automatic evaluation of tumor cell content in digital pathology slide images using a deep learning-based pathology image classifier of Che et al. in order to obtain a uniform and precise evaluation result, avoiding the subjective deviation of manual analysis (see para [0084]).
Additionally, Che et al. disclose calculating, by the analysis device, a TSR of the target tissue by subtracting the second binary image from the first binary image (see para [0048]; “Determine a first area of the tumor cell region, and determine a second area of the effective pathological region. [0050] Step 106B: Calculate a tumor proportion and a tumor-stroma ratio of the digital pathology slide image based on the first area and the second area… Specifically, the first area may be calculated based on the tumor cell region, and the second area may be calculated based on the effective pathological region. The tumor proportion and the tumor-stroma ratio are respectively calculated”; see also para [0037]; “The digital pathology slide image is segmented to determine the effective pathological region, where an image segmentation method includes but is not limited to an image binarization processing method, a machine learning-based image segmentation model”; Note: binary segmentation is used to isolate tumor and stroma components into distinct “binary image” masks, from which the respective pixel areas are calculated for the final ratio; that is, the total effective region and the tumor cells within it are identified, and the areas (pixel counts) are used to determine the ratio), but does not explicitly disclose calculating the ratio by subtracting one binary image from the other.
In the same field of endeavor, Boucheron teaches calculating, by the analysis device, a TSR of the target tissue by subtracting the second binary image from the first binary image (see para [0043]; “classifying one or more biological materials comprises classifying cytoplasm materials and stroma materials by subtracting out pixels related to background and nuclei”; see also para [0437]; “More specifically, we use the morphological opening of a binary image with a disk-shaped structuring element (SE). The radius of the SE is increased by 1 pixel each iteration and the residue of the image (the sum of the pixels) is calculated…Looking at the first derivative (the element-wise subtraction) of .phi.(k) will yield a local maximum for a structuring element with approximate size of a large number of objects in the original binary image”). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the general histologic classification of pathology specimens through machine learning of Agus et al. in view of the automatic evaluation of tumor cell content in digital pathology slide images using a deep learning-based pathology image classifier of Che et al., and further in view of the quantitative object and spatial arrangement-level analysis of nuclear material, cytoplasm material, and stromal material of Boucheron, in order to better coordinate clinical care of women presenting with breast masses (see para [0043]).
Regarding claim 12, the rejection of claim 9 is incorporated herein.
The combination of Agus et al., Che et al. and Boucheron further teach wherein the generative adversarial model comprises: a discriminator discriminating whether each patch is real or not, wherein the discriminator is a patch generative adversarial network (PatchGAN) (see Agus et al. para [0072]; “Briefly, the CycleGAN approach consists of training two neural nets, a generator which takes an image A transforms it into an image of style B, and a discriminator which is trained to distinguish between generated images and real ones”).
The combination of Agus et al., Che et al. and Boucheron further teach a generator including a U-Net architecture without a dropout layer (see Che et al. para [0061]; “a cell segmentation algorithm of a U-Net model”).
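For illustration only, a generator including a U-Net architecture without a dropout layer may be sketched as below; this is a minimal assumed layout (encoder, decoder, and a skip connection, with no dropout layers), not the model of any cited reference.

```python
# Minimal sketch (assumed layout, not Che's model): a tiny U-Net-style
# generator with a skip connection and no dropout layers.
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    def __init__(self, in_ch=3, out_ch=3):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec = nn.Sequential(
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, out_ch, 1),
        )

    def forward(self, x):
        e1 = self.enc1(x)              # full-resolution features
        e2 = self.enc2(e1)             # downsampled features
        d = self.up(e2)                # upsample back to full resolution
        d = torch.cat([d, e1], dim=1)  # U-Net skip connection
        return self.dec(d)

y = TinyUNet()(torch.rand(1, 3, 64, 64))
print(y.shape)  # torch.Size([1, 3, 64, 64])
```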
Claims 5-7 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Agus et al. and Che et al. in view of Boucheron as applied to claims 1 and 9 above, and further in view of Ozcan et al. (US 20190333199 A1).
Regarding claim 5, the rejection of claim 1 is incorporated herein.
The combination of Agus et al., Che et al. and Boucheron further teach wherein the deep learning model is trained with training data, wherein the training data includes a real first stained image and a real second stained image of a same tissue section (see Agus et al. para [0072]; “Briefly, the CycleGAN approach consists of training two neural nets, a generator which takes an image A transforms it into an image of style B, and a discriminator which is trained to distinguish between generated images and real ones. The networks are trained simultaneously as adversaries. As the discriminator improves, the generator is challenged to learn better transformations from style A to B. Conversely, as the generator improves, the discriminator is challenged to learn better features that distinguish real and generated images”). However, the combination of Agus et al., Che et al., and Boucheron as a whole does not teach that the real first stained image and the real second stained image are data obtained by performing global alignment on the entire images at a pixel level and performing local alignment on a plurality of areas constituting the entire images.
In the same field of endeavor, Ozcan et al. teach that the real first stained image and the real second stained image are data obtained by performing global alignment on the entire images at a pixel level and performing local alignment on a plurality of areas constituting the entire images (see para [0199]; “FIG. 2 and FIG. 34 illustrate an example of the global and local registration operations used to co-register pairs of low-resolution images 20′ (or image patches) and high-resolution images 50 (or image patches)”, see also para [0165]; “To address this, the globally matched images are fed into a pyramidal elastic registration algorithm as illustrated in FIG. 34 to achieve sub-pixel level matching accuracy”). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the general histologic classification of pathology specimens through machine learning of Agus et al. and the automatic evaluation of tumor cell content in digital pathology slide images using a deep learning-based pathology image classifier of Che et al. in view of the quantitative object and spatial arrangement-level analysis of nuclear material, cytoplasm material, and stromal material of Boucheron, and further in view of a trained deep neural network co-registering pairs of high-resolution microscopy image patches and their corresponding low-resolution microscopy image patches of Ozcan et al., in order to increase the sparsity of the output images and reduce noise (see para [0199]).
Regarding claim 6, the rejection of claim 5 is incorporated herein.
The combination of Agus et al., Che et al., Boucheron and Ozcan et al. further teach wherein each of the global alignment and the local alignment comprises: performing color deconvolution on the two images to be aligned (see Ozcan et al. para [0045]; “FIGS. 18A-1 to 18A-5; 18B-1 to 18B-5; 18C-1 to 18C-5 illustrate a comparison of deep learning results against Lucy-Richardson and non-negative least squares (NNLS) image deconvolution algorithms for three different fluorescent stains/dyes (DAPI, FITC, TxRed). Inputs are seen in FIGS. 18A-1, 18B-1, 18C-1. Ground truth images are seen in FIGS. 18A-5, 18B-5, 18C-5”); binarizing the two images having undergone color deconvolution (see Ozcan et al. para [0103]; “A) Color images are converted to grayscale images (step not illustrated in FIG. 2)”); generating a cross-correlation heatmap of the two binarized images; and calculating a shift vector on the basis of a maximum value in the heatmap (see Ozcan et al. para [0061]; “The 2D cross-correlation map of each block pair from the corresponding two input images is calculated. (4) The shift of each block is calculated by fitting a 2D Gaussian function to the peak of the cross-correlation map. This shift map (N×N) is interpolated to the image size (e.g., 1024×1024 pixels) as a translation map. (5) Apply the translation map to the image to be registered by linear interpolation. If the maximum value of the translation map is greater than the tolerance value (e.g. 0.2 pixels), repeat steps (3-5). Else if the block size is larger than the minimum block size (e.g. 64×64), increase N and shrink the block size (e.g., 1.2 times), and repeat steps (2-5)”).
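For illustration only, the heatmap-and-maximum step recited in claim 6 may be sketched as follows: a cross-correlation heatmap of two binarized images is computed, and a shift vector is read off from the location of its maximum. The random stand-in images and the function name are hypothetical.

```python
# Minimal sketch (hypothetical data, not Ozcan's implementation): derive a
# shift vector from the maximum of a cross-correlation heatmap.
import numpy as np
from scipy.signal import correlate

def shift_from_heatmap(img_a, img_b):
    """Return the (dy, dx) shift that moves img_b into alignment with img_a."""
    heatmap = correlate(img_a, img_b, mode="full", method="fft")
    peak = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    return np.array(peak) - (np.array(img_b.shape) - 1)  # shift vector

a = (np.random.rand(128, 128) > 0.5).astype(float)  # stand-in binarized image
b = np.roll(a, shift=(3, -5), axis=(0, 1))          # displaced copy of a
print(shift_from_heatmap(a, b))                     # approx. [-3  5]
```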
Regarding claim 7, the rejection of claim 6 is incorporated herein.
The combination of Agus et al., Che et al., Boucheron and Ozcan et al. further teach wherein the local alignment is performed by aligning each of the plurality of areas on the basis of a sum of a mean value of shift vectors of the plurality of areas and the shift vector value of the global alignment (see Ozcan et al. para [0165]; “This registration step starts with a block size of 256×256 and stops at a block size of 64×64, while shrinking the block size by 1.2 times every 5 iterations with a shift tolerance of 0.2 pixels”, see also para [0195]; “Next, local registration was performed using a pyramid elastic registration algorithm as described herein. This algorithm breaks the images into iteratively smaller blocks (see e.g., FIG. 34), registering the local features within the blocks each time, achieving sub-pixel level agreement between the lower-resolution and higher-resolution SEM images 20′, 50”).
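For illustration only, one reading of the claim 7 limitation is sketched below: each area's local alignment shift is the sum of the mean of the per-area shift vectors and the global-alignment shift vector. The numeric values are hypothetical.

```python
# Minimal sketch (assumed reading of the claim language): combine the mean
# of per-area shift vectors with the global-alignment shift vector.
import numpy as np

global_shift = np.array([-3.0, 5.0])  # shift vector from the global alignment
area_shifts = np.array([
    [0.4, -0.2], [0.1, 0.3], [-0.5, 0.1], [0.2, -0.1],
])  # per-area shift vectors from the local step

local_alignment_shift = area_shifts.mean(axis=0) + global_shift
print(local_alignment_shift)  # [-2.95   5.025]
```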
Regarding claim 13, the rejection of claim 9 is incorporated herein.
The combination of Agus et al., Che et al., Boucheron and Ozcan et al. further teach wherein the generative adversarial model is trained with training data, wherein the training data includes a real first stained image and a real second stained image of a same tissue section (see Agus et al. para [0072]; “Briefly, the CycleGAN approach consists of training two neural nets, a generator which takes an image A transforms it into an image of style B, and a discriminator which is trained to distinguish between generated images and real ones. The networks are trained simultaneously as adversaries. As the discriminator improves, the generator is challenged to learn better transformations from style A to B. Conversely, as the generator improves, the discriminator is challenged to learn better features that distinguish real and generated images”).
The combination of Agus et al., Che et al., Boucheron and Ozcan et al. further teach that the real first stained image and the real second stained image are data obtained by performing global alignment on the entire images at a pixel level and performing local alignment on a plurality of areas constituting the entire images (see Ozcan et al. para [0199]; “FIG. 2 and FIG. 34 illustrate an example of the global and local registration operations used to co-register pairs of low-resolution images 20′ (or image patches) and high-resolution images 50 (or image patches)”, see also para [0165]; “To address this, the globally matched images are fed into a pyramidal elastic registration algorithm as illustrated in FIG. 34 to achieve sub-pixel level matching accuracy”). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the general histologic classification of pathology specimens through machine learning of Agus et al. and the automatic evaluation of tumor cell content in digital pathology slide images using a deep learning-based pathology image classifier of Che et al. in view of the quantitative object and spatial arrangement-level analysis of nuclear material, cytoplasm material, and stromal material of Boucheron, and further in view of a trained deep neural network co-registering pairs of high-resolution microscopy image patches and their corresponding low-resolution microscopy image patches of Ozcan et al., in order to increase the sparsity of the output images and reduce noise (see para [0199]).
Claims 8 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Agus et al. and Che et al. in view of Boucheron as applied to claims 1 and 9 above, and further in view of Srinidhi et al., “Deep neural network models for computational histopathology: A survey”.
Regarding claim 8, the rejection of claim 1 is incorporated herein. The combination of Agus et al., Che et al., and Boucheron as a whole does not teach wherein the deep learning model is trained additionally using a loss function of a hematoxylin-eosin-diaminobenzidine (DAB) (HED) color space.
In the same field of endeavor, Srinidhi et al. teach wherein the deep learning model is trained additionally using a loss function of a hematoxylin-eosin-diaminobenzidine (DAB) (HED) color space (see page 17 section 3.4.2; “One may combat staining variation by augmenting the training data by varying each pixel value per channel within a predefined range on transformed color spaces, such as HSV (hue, saturation and value) or HED (Hematoxylin, Eosin, and Diaminobenzidine)”, see also page 46 table 1; “Overview of supervised learning models. The acronyms for the staining stands for: H&E (haematoxylin and eosin); DAB-H (Diaminobenzidine Hematoxylin)”). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the general histologic classification of pathology specimens through machine learning of Agus et al. and the automatic evaluation of tumor cell content in digital pathology slide images using a deep learning-based pathology image classifier of Che et al. in view of the quantitative object and spatial arrangement-level analysis of nuclear material, cytoplasm material, and stromal material of Boucheron, and further in view of the deep neural network models for computational histopathology of Srinidhi et al., in order to develop novel techniques that may be applicable to medical images (see page 46, Table 1).
Regarding claim 14, the rejection of claim 9 is incorporated herein.
The combination of Agus et al., Che et al., Boucheron and Srinidhi et al. further teach wherein the deep learning model is trained additionally using a loss function of a hematoxylin-eosin-diaminobenzidine (DAB) (HED) color space (see page 17 section 3.4.2; “One may combat staining variation by augmenting the training data by varying each pixel value per channel within a predefined range on transformed color spaces, such as HSV (hue, saturation and value) or HED (Hematoxylin, Eosin, and Diaminobenzidine)”, see also page 46 table 1; “Overview of supervised learning models. The acronyms for the staining stands for: H&E (haematoxylin and eosin); DAB-H (Diaminobenzidine Hematoxylin)”, see also page 2 section 2; “The goal is to train a model fθ : x → y that best predicts the label for an unknown test image based on a loss function L”, and further page 7; “to improve the detection task by modifying the loss function or incorporating additional features into popular deep learning architectures”). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the general histologic classification of pathology specimens through machine learning of Agus et al. and the automatic evaluation of tumor cell content in digital pathology slide images using a deep learning-based pathology image classifier of Che et al. in view of the quantitative object and spatial arrangement-level analysis of nuclear material, cytoplasm material, and stromal material of Boucheron, and further in view of the deep neural network models for computational histopathology of Srinidhi et al., in order to develop novel techniques that may be applicable to medical images (see page 46, Table 1).
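For illustration only, a loss function computed in the HED (Hematoxylin, Eosin, Diaminobenzidine) color space may be sketched as below, using scikit-image's rgb2hed stain deconvolution; the L1 form of the loss and the random stand-in images are assumptions, not the training objective of any cited reference.

```python
# Minimal sketch (hypothetical loss, not from Srinidhi): compare two RGB
# images in the HED color space via scikit-image's stain deconvolution.
import numpy as np
from skimage.color import rgb2hed

def hed_l1_loss(pred_rgb, target_rgb):
    """Mean absolute difference between two RGB images after HED conversion."""
    pred_hed = rgb2hed(pred_rgb)      # (H, W, 3): Hematoxylin, Eosin, DAB channels
    target_hed = rgb2hed(target_rgb)
    return np.abs(pred_hed - target_hed).mean()

pred = np.random.rand(64, 64, 3)      # stand-in generated image
target = np.random.rand(64, 64, 3)    # stand-in real stained image
print(hed_l1_loss(pred, target))
```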
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WINTA GEBRESLASSIE whose telephone number is (571) 272-3475. The examiner can normally be reached Monday-Friday, 9:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Bee, can be reached at 571-270-5180. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/WINTA GEBRESLASSIE/Examiner, Art Unit 2677
/ANDREW W BEE/ Supervisory Patent Examiner, Art Unit 2677