Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claim 9 is objected to because of the following informalities:
Claim 9, line 2: “a total number of features types” should be “a total number of feature types”.
Appropriate correction is required.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-18 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 32-48 of U.S. Patent No. 11,842,488 B2. Although the claims at issue are not identical, they are not patentably distinct from each other because the scope of the patented claims anticipates the scope of the application claims.
Below is a table showing the conflicting claims:
Instant Application No. 18/502,867
U.S. Patent No. 11,842,488 B2
1.(Original) A method for performing explainable pathological analysis of medical images, the method comprising:
for a region of interest (ROI) in a whole slide image (WSI) of a tissue, identifying features of a plurality of feature types, wherein at least one feature type is at least partially indicative of a pathological condition of the tissue within the ROI;
using a classifier trained to classify an image using features of the plurality of feature types into one of a plurality of classes of tissue conditions:
(i) classifying the ROI into a class within the plurality of classes, and (ii) designating to the ROI a label indicating a tissue condition associated with the class and with the tissue in the ROI;
storing explanatory information about the designation of the label, the explanatory information comprising information about the identified features; and displaying:
(i) at least a portion of the WSI with boundary of the ROI highlighted, (ii) the label designated to the ROI, and (iii) a user interface (UI) comprising:
(a) a first UI element for providing to a user access to the stored explanatory information, and (b) one or more additional UI elements enabling the user to provide feedback on the designated label.
32. (Original) A system for performing explainable pathological analysis of medical images, the system comprising:
a first processor; and a first memory in electrical communication with the first processor, and comprising instructions that, when executed by a processing unit that comprises the first processor or a second processor, and that is in electronic communication with a memory module that comprises the first memory or a second memory, program the processing unit to:
for a region of interest (ROI) in a whole slide image (WSI) of a tissue, identify features of a plurality of feature types, wherein at least one feature type is at least partially indicative of a pathological condition of the tissue within the ROI;
operate as a classifier, trained to classify an image using features of the plurality of feature types into one of a plurality of classes of tissue conditions, to:
(i) classify the ROI into a class within the plurality of classes, and (ii) designate to the ROI a label indicating a tissue condition associated with the class and with the tissue in the ROI;
store explanatory information about the designation of the label, the explanatory information comprising information about the identified features; and display:
(i) at least a portion of the WSI with boundary of the ROI highlighted, (ii) the label designated to the ROI, and (iii) a user interface (UI) comprising:
(a) a first UI element for providing to a user access to the stored explanatory information, and (b) one or more additional UI elements enabling the user to provide feedback on the designated label.
2. (Original) The method of claim 1, wherein: the tissue comprises breast tissue; and the plurality of classes of tissue conditions comprises two or more of: invasive carcinoma, ductal carcinoma in situ (DCIS), high-risk benign, low-risk benign, atypical ductal hyperplasia (ADH), flat epithelial atypia (FEA), columnar cell change (CCC), and normal duct.
33. (Original) The system of claim 32, wherein: the tissue comprises breast tissue; and the plurality of classes of tissue conditions comprises two or more of: invasive carcinoma, ductal carcinoma in situ (DCIS), high-risk benign, low-risk benign, atypical ductal hyperplasia (ADH), flat epithelial atypia (FEA), columnar cell change (CCC), and normal duct.
3. (Original) The method of claim 1, wherein: the tissue comprises lung tissue; and the plurality of classes of tissue conditions comprises: idiopathic pulmonary fibrosis (IPF) and normal.
34. (Original) The system of claim 32, wherein: the tissue comprises lung tissue; and the plurality of classes of tissue conditions comprises: idiopathic pulmonary fibrosis (IPF) and normal.
4. (Original) The method of claim 1, wherein: the tissue comprises brain tissue; and the plurality of classes of tissue conditions comprises: classical cellular tumor and proneural cellular tumor.
35. (Original) The system of claim 32, wherein: the tissue comprises brain tissue; and the plurality of classes of tissue conditions comprises: classical cellular tumor and proneural cellular tumor.
5. (Original) The method of claim 1, wherein a feature type is cytological features or architectural features (AFs).
36. (Original) The system of claim 32, wherein a feature type is cytological features or architectural features (AFs).
6. (Original) The method of claim 5, wherein a feature of the feature type cytological features is of one of the subtypes: nuclear size, nuclear shape, nuclear morphology, or nuclear texture.
37. (Original) The system of claim 36, wherein a feature of the feature type cytological features is of one of the subtypes: nuclear size, nuclear shape, nuclear morphology, or nuclear texture.
7. (Original) The method of claim 5, a feature of the feature type architectural features is of one of the subtypes: an architectural feature based on a color of a group of superpixels in the ROI (AF-C); (ii) an architectural feature based on a cytological phenotype of nuclei in the ROI (AF-N); or (iii) a combined architectural feature (AF-CN) based on both a color of a group of superpixels in the ROI and a cytological phenotype of nuclei in the ROI.
38. (Original) The system of claim 36, a feature of the feature type architectural features is of one of the subtypes: an architectural feature based on a color of a group of superpixels in the ROI (AF-C); (ii) an architectural feature based on a cytological phenotype of nuclei in the ROI (AF-N); or (iii) a combined architectural feature (AF-CN) based on both a color of a group of superpixels in the ROI and a cytological phenotype of nuclei in the ROI.
8. (Original) The method of claim 5, a feature of the feature type architectural features is of one of the subtypes: nuclear arrangement, stromal cellularity, epithelial patterns in ducts, epithelial patterns in glands, cell cobblestoning, stromal density, or hyperplasticity.
39. (Original) The system of claim 36, a feature of the feature type architectural features is of one of the subtypes: nuclear arrangement, stromal cellularity, epithelial patterns in ducts, epithelial patterns in glands, cell cobblestoning, stromal density, or hyperplasticity.
9. (Original) The method of claim 1, wherein the information about the features comprises one or more of: a total number of features types that were detected in the ROI and that correspond to the tissue condition indicated by the label; a count of features of a particular feature type that were detected in the ROI; a measured density of features of the particular feature type in the ROI; or a strength of the particular feature type in indicating the tissue condition.
40. (Original) The system of claim 32, wherein the information about the features comprises one or more of: a total number of features types that were detected in the ROI and that correspond to the tissue condition indicated by the label; a count of features of a particular feature type that were detected in the ROI; a measured density of features of the particular feature type in the ROI; or a strength of the particular feature type in indicating the tissue condition.
10. (Original) The method of claim 1, wherein the explanatory information comprises a confidence score computed by the classifier in designating the label, wherein the confidence score is based on one or more of: a total number of feature types that were detected in the ROI and that correspond to the tissue condition indicated by the label; for a first feature type: (i) a strength of the first feature type in indicating the tissue condition, or (ii) a count of features of the first feature type that were detected in the ROI; or another total number of features types that were detected in the ROI but that correspond to a tissue condition different from the condition associated with the label.
41. (Original) The system of claim 32, wherein the explanatory information comprises a confidence score computed by the classifier in designating the label, wherein the confidence score is based on one or more of: a total number of feature types that were detected in the ROI and that correspond to the tissue condition indicated by the label; for a first feature type: (i) a strength of the first feature type in indicating the tissue condition, or (ii) a count of features of the first feature type that were detected in the ROI; or another total number of features types that were detected in the ROI but that correspond to a tissue condition different from the condition associated with the label.
11. (Original) The method of claim 1, further comprising: in response to the user interacting with the first UI element: generating explanatory description using a standard pathology vocabulary and the stored explanatory information; and displaying the explanatory description in an overlay window, a side panel, or a page.
42. (Original) The system of claim 32, wherein the instructions further program the processing unit to: in response to the user interacting with the first UI element: generate explanatory description using a standard pathology vocabulary and the stored explanatory information; and display the explanatory description in an overlay window, a side panel, or a page.
12. (Original) The method of claim 11, further comprising: highlighting in the ROI, features of a particular feature type, that at least partially indicates the tissue condition indicated by the label, using a color designated to the feature type; and displaying the highlighted ROI in the overlay window, the side panel, or the page.
43. (Original) The system of claim 42, wherein the instructions further program the processing unit to: highlight in the ROI, features of a particular feature type, that at least partially indicates the tissue condition indicated by the label, using a color designated to the feature type; and display the highlighted ROI in the overlay window, the side panel, or the page.
13. (Original) The method of claim 1, further comprising: repeating the identifying, designating, and storing steps for a plurality of different ROIs; and prior to the displaying step, (i) computing a respective risk metric for each of the ROIs, the risk metric of an ROI being based on: (a) designated label of the ROI, or (b) a confidence score for the ROI, and (ii) sequencing the ROIs according to the respective risk metrics thereof, wherein the displaying step comprises: displaying in one panel: (i) at least a portion of the WSI with boundary of the ROI having the highest risk metric highlighted, (ii) the label designated to that ROI, and (iii) a user interface (UI) providing to the user access to the stored explanation for the designated label of that ROI; and displaying in another panel thumbnails of the sequence of ROIs.
44. (Original) The system of claim 32, wherein: the instructions further program the processing unit to: repeat the identify, designate, and store operations for a plurality of different ROIs; and prior to the display operation, (i) compute a respective risk metric for each of the ROIs, the risk metric of an ROI being based on: (a) designated label of the ROI, or (b) a confidence score for the ROI, and (ii) sequencing the ROIs according to the respective risk metrics thereof; and to perform the display operation, the instructions program the processing unit to: display in one panel: (i) at least a portion of the WSI with boundary of the ROI having the highest risk metric highlighted, (ii) the label designated to that ROI, and (iii) a user interface (UI) providing to the user access to the stored explanation for the designated label of that ROI; and display in another panel thumbnails of the sequence of ROIs.
14. (Original) The method of claim 1, further comprising: obtaining the whole slide image (WSI); and identifying the ROI in the WSI, wherein identification of the ROI comprises: (i) marking in the WSI, superpixels of at least two types, one type corresponding to hematoxylin stained tissue and another type corresponding to eosin stained tissue; and (ii) marking segments of pixels of a first type to define an enclosed region as the ROI.
45. (Original) The system of claim 32, wherein the instructions further program the processing unit to: obtain the whole slide image (WSI); and identify the ROI in the WSI, wherein to identify the ROI, the instructions program the processing unit to: (i) mark in the WSI, superpixels of at least two types, one type corresponding to hematoxylin stained tissue and another type corresponding to eosin stained tissue; and (ii) mark segments of pixels of a first type to define an enclosed region as the ROI.
15. (Original) The method of claim 14, further comprising identifying a plurality of ROIs in the WSI.
46. (Original) The system of claim 45, wherein the instructions further program the processing unit to: identify a plurality of ROIs in the WSI.
16. (Original) The method of claim 1, further comprising updating a training dataset for the classifier, updating the training dataset comprising: receiving from the user via the one or more additional UI elements feedback for the label designated to the ROI, the feedback indicating correctness of the designated label; and storing a portion of the WSI associated with the ROI and the designated label in a training dataset.
47. (Original) The system of claim 32, wherein the instructions further program the processing unit to: update a training dataset for the classifier wherein, to update the training dataset, the instructions program the processing unit to: receive from the user via the one or more additional UI elements feedback for the label designated to the ROI, the feedback indicating correctness of the designated label; and store a portion of the WSI associated with the ROI and the designated label in a training dataset.
17. (Original) The method of claim 1, wherein the classifier is selected from a group consisting of: a decision tree, a random forest, a support vector machine, an artificial neural network, and a logistic regression based classifier.
48. (Original) The system of claim 32, wherein the classifier is selected from a group consisting of: a decision tree, a random forest, a support vector machine, an artificial neural network, and a logistic regression based classifier.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
Claims 1, 5-7, 9-12, 14-15, 32, 36-38, 40-43, and 45-46 are rejected under 35 U.S.C. 103 as being unpatentable over Ascierto et al. (US 20170270346 A1) in view of Faust et al. (US 20200272864 A1).
Regarding claim 1, Ascierto et al. teaches a method for performing explainable pathological analysis of medical images (Figs. 4, 8, and 9 disclose pathology WSIs with named region classes and on-image overlays (e.g., micro/macro-metastasis and peripheries) that provide an interpretable, clinician-facing review; see detail in paras. [0233] and [0263]), the method comprising: for a region of interest (ROI) in a whole slide image (WSI) of a tissue (see para [0007]; “apparatuses for automatically identifying fields of view (FOVs) for regions in melanoma digital image encompassing tumor cells”, see also para [0073]; “the tumor image can be a whole slide image. Each marker image can also be a whole slide image or a part thereof”), identifying features of a plurality of feature types, wherein at least one feature type is at least partially indicative of a pathological condition of the tissue within the ROI (see Fig. 4, which discloses a taxonomy and labeled examples; para [0048]; “identifying one or more features of each of the pixel blobs, the features comprising at least one of the diameter of the pixel blob, the shape of the pixel blob and/or distance of the pixel blob to the closest neighboring pixel blob in the tumor image; [0051] applying cancer-type specific rules on the determined one or more features of the pixel blobs for: [0052] determining to which one of a plurality of predefined, cancer-type specific intra-tumor region types the pixel blob belongs and using the identified pixel blobs the identified regions within one of the one or more tumors”, see also para [0038]; “the size and shape of inner-tumor regions, peri-tumor regions and/or of different types of metastasis and other forms of tumor cell clusters may depend on the cancer type. 
By providing cancer-type specific rules for identifying the regions in the tumor image”, Note: features include diameter, size, shape, and density, each indicative of the tumor condition); storing explanatory information about the designation of the label, the explanatory information comprising information about the identified features (see para [0263]; “An example of a region labeling result for melanoma is shown in FIGS. 4, 8 and 9. The regions of Isolated Melanoma, Micro-metastasis, Periphery of Micro-metastasis, Macro-metastasis, Periphery of Macro-metastasis, Group of Isolated Melanoma and/or and Periphery of Group of Isolated Melanoma are identified”); and displaying: (i) at least a portion of the WSI with boundary of the ROI highlighted, (ii) the label designated to the ROI (see Figs. 8 and 9, which disclose ROI overlays and boundaries around each labeled region; see also para [0184]; “the region generation module 117 outputs the extended region, corresponding to a boundary around the annotated tumor to a display. The extended region is a region in the periphery of an inner-tumor region…. the extended region is displayed on a graphical user interface in form of a visual boundary or data corresponding to a boundary around an inner-tumor region surrounded by said extended region and by the outer boundary of the extended region”). However, Ascierto et al. 
does not specifically disclose using a classifier trained to classify an image using features of the plurality of feature types into one of a plurality of classes of tissue conditions, (i) classifying the ROI into a class within the plurality of classes, and (ii) designating to the ROI a label indicating a tissue condition associated with the class and with the tissue in the ROI, and (iii) a user interface (UI) comprising: (a) a first UI element for providing to a user access to the stored explanatory information, and (b) one or more additional UI elements enabling the user to provide feedback on the designated label.
In the same field of endeavor, Faust et al. teaches using a classifier trained to classify an image using features of the plurality of feature types into one of a plurality of classes of tissue conditions (see para [0020]; “the convolutional neural network trained using pathology images ... generating output indications on the pathology image using the classification data”): (i) classifying the ROI into a class within the plurality of classes (see para [0005]; “determine a region of interest on a pathology slide and a predicted region of interest (ROI) type by classifying a plurality of pathology features abstracted from the pathology slide using the convolutional neural network”), and (ii) designating to the ROI a label indicating a tissue condition associated with the class and with the tissue in the ROI (see para [0021]; “generate output indications of the region of interest on a visual representation of the pathology slide and annotations of the predicted region of interest type on the visual representation of the pathology slide; and update an interface tool to display the output indications and the annotations on a display device”); and (iii) a user interface (UI) comprising: (a) a first UI element for providing to a user access to the stored explanatory information (see para [0023]; “a surface map showing the basis of a prediction for the predicted region of interest type, the surface map being a reduced dimensionality view of a classification for the predicted region of interest type”), and (b) one or more additional UI elements enabling the user to provide feedback on the designated label (see para [0054]; “receive, at the interface tool, an input indication that a specific region of interest is of an unknown type”). 
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the apparatus of Ascierto et al. for automatically identifying fields of view (FOVs) for regions in an image encompassing a tumor in view of the use of deep convolutional neural networks (CNNs) for digital pathology, which enable multi-level annotation and visualization of histopathologic slides, of Faust et al., in order to automate ROI classification and display explanatory results (see para [0004]).
Regarding claim 5, the rejection of claim 1 is incorporated herein.
Ascierto et al. in the combination further teach wherein a feature type is cytological features or architectural features (AFs) (see para [0171]; “the present invention is applicable to any biological specimen, for example a tissue specimen or cytology specimen”, see also claim 17; “associating at least one of the measured size and a label with each tumor-containing region based on the measured size of the tumor-containing region and generating tumor region characteristic data”, see also para [0030]; “The automated identification of immune cell types, their respective count and their cell densities in predefined tumor regions within the tumor or at the periphery of the tumor may be beneficial as the reproducibility of immune score computation is further increased. Each of said features is automatically identified based on reproducible, objective criteria”, Note: region characteristic data implies architectural features).
Regarding claim 6, the rejection of claim 5 is incorporated herein.
Ascierto et al. in the combination further teach wherein a feature of the feature type cytological features is of one of the subtypes: nuclear size, nuclear shape, nuclear morphology, or nuclear texture (see para [0209]; “The measurement information obtained in step 210, e.g. the diameter, size, number of pixels, shape, the type of the intra-tumor region and/or other features of the identified intra-tumor region in the tumor image is evaluated for automatically generating extended tumor regions in step 212 by the module 117”).
Regarding claim 7, the rejection of claim 5 is incorporated herein.
Ascierto et al. in the combination further teach a feature of the feature type architectural features is of one of the subtypes: an architectural feature based on a color of a group of superpixels in the ROI (AF-C); (ii) an architectural feature based on a cytological phenotype of nuclei in the ROI (AF-N); or (iii) a combined architectural feature (AF-CN) based on both a color of a group of superpixels in the ROI and a cytological phenotype of nuclei in the ROI (see para [0191]; “the FOVs are identified as sub-areas within the respective tumor regions or extended tumor regions in dependence on the intensity of groups of pixels in a respective marker image. For example, the regions may be assigned a color (via creation of a heat map) and ranked according to the appearance and/or staining intensity of the groups of pixels (i.e., candidate FOVs) in the marker image of the biological sample”).
Regarding claim 9, the rejection of claim 1 is incorporated herein.
Ascierto et al. in the combination further teach wherein the information about the features comprises one or more of: a total number of features types that were detected in the ROI and that correspond to the tissue condition indicated by the label; a count of features of a particular feature type that were detected in the ROI; a measured density of features of the particular feature type in the ROI; or a strength of the particular feature type in indicating the tissue condition (see para [0022]; “the calculation of the immune score comprises: [0023] for each of the fields of view in each of the two or more registered marker images: [0024] applying a cell detection algorithm on pixel intensity information of the marker image and automatically counting all detected cells within said field of view; [0025] determining the immune cell type of the detected cells; [0026] determining the immune cell density within said field of view; and/or [0027] determining the region type of the region of the tumor image to which said field of view belongs to in the common coordinate system and assigning the cell count, cell type and/or cell density information with the determined region type; [0028] processing the cell count, cell type, density and/or the assigned region type information of all fields of views of the two or more marker images, wherein the height of the immune score correlates with the density of immune cells in the identified regions”, Note: the limitation is recited in the alternative (“or”)).
Regarding claim 10, the rejection of claim 1 is incorporated herein.
Ascierto et al. in the combination further teach wherein the explanatory information comprises a confidence score computed by the classifier in designating the label, wherein the confidence score is based on one or more of: a total number of feature types that were detected in the ROI and that correspond to the tissue condition indicated by the label; for a first feature type: (i) a strength of the first feature type in indicating the tissue condition, or (ii) a count of features of the first feature type that were detected in the ROI; or another total number of features types that were detected in the ROI but that correspond to a tissue condition different from the condition associated with the label (see para [0206]; “assigning the classified grid points at least one of a high confidence score and a low confidence score, modifying a database of known characteristics of tissue types based on the grid points that were assigned a high confidence score, and generating a modified database, and reclassifying the grid points that were assigned a low confidence score based on the modified database, to segment the tissue (e.g., identify tissue regions in an image)”, Note: the limitation is recited in the alternative (“or”)).
Regarding claim 11, the rejection of claim 1 is incorporated herein.
Faust et al. in the combination further teach further comprising: in response to the user interacting with the first UI element: generating explanatory description using a standard pathology vocabulary and the stored explanatory information (see para [0169]; “CAM unit 186 can cause such class activation map, surface map, or “heatmap” to be stored in a database 187 or persistent storage 188 or transmitted over a network 140, for example, to digital pathology platform 110 or entities 150. CAM unit 186 can cause the class activation map, surface map, or “heatmap” to be presented to a user engaged with interface application 130. [0170] Visualization unit 183 can generate and present an integrated summary output or report containing information such as classification, predicted region of interest type of a hyperdimensional space, a surface map showing the basis of the prediction, a class activation map, a t-SNE plot that can show unknown cases classified as “undefined”, likelihood percentages or relative prediction scores associated with alternative predictions or classifications, visual annotations of images”, see also para [0009]; “an original pathology slide, a false colour slide showing the region of interest, an overall view of the original pathology slide and the false colour slide, and a legend indicating the predicted region of interest type and an associated false colour”); and displaying the explanatory description in an overlay window, a side panel, or a page (see para [0010]; “the processor executes the instructions to receive, at the interface tool, an input indication that a specific region of interest is of an unknown type”).
Regarding claim 12, the rejection of claim 11 is incorporated herein.
Faust et al. in the combination further teach further comprising: highlighting in the ROI, features of a particular feature type, that at least partially indicates the tissue condition indicated by the label (see para [0127]; “the class activation map can highlight, annotate, or otherwise depict discriminative image regions or features used by one or more convolutional neural networks to make the classification or prediction of a region of interest type”), using a color designated to the feature type (see para [0215]; “the classes may be colour-coded for visual display”, see also para [0009]; “a false colour slide showing the region of interest, an overall view of the original pathology slide and the false colour slide, and a legend indicating the predicted region of interest type and an associated false colour”); and displaying the highlighted ROI in the overlay window, the side panel, or the page (see para [0091]; “the interface tool can present a display that includes the original slide (e.g., hematoxylin and eosin (H&E) stained), a false colour slide showing the identified regions of interest, an overlay of these two, and/or a legend indicating the predicted ROI type and associated false colour”).
Regarding claim 14, the rejection of claim 1 is incorporated herein.
Ascierto et al. in the combination further teach further comprising: obtaining the whole slide image (WSI); and identifying the ROI in the WSI (see para [0073]; “For example, the tumor image can be a whole slide image. Each marker image can also be a whole slide image or a part thereof”), wherein identification of the ROI comprises: (i) marking in the WSI, superpixels of at least two types, one type corresponding to hematoxylin stained tissue and another type corresponding to eosin stained tissue; and (ii) marking segments of pixels of a first type to define an enclosed region as the ROI (see para [0115]; “from multiple slides of serial sections, and computing a tumor region mask from the tumor marker image or hematoxylin and eosin (H&E) stained slide. Based on the size and location of each individual tumor cell cluster, a set of regions of interest are defined. The slide image (whole slide or portion thereof) is divided into multiple areas, i.e., according to the identified region, for example, the inter-tumor area, peri-tumor area and intra-tumor area. FIG. 4 shows an example of a melanoma slide being partitioned into multiple regions”).
Regarding claim 15, the rejection of claim 14 is incorporated herein.
Ascierto et al. in the combination further teach further comprising identifying a plurality of ROIs in the WSI (see para [0117]; “the invention relates to a method which involves identifying regions, for example, tumor areas or regions around a tumor area, partitioning a whole slide image or portion of a whole slide image into multiple regions related to the tumor”).
Regarding claim 32, the scope of claim 32 is fully incorporated in claim 1, and the rejection analysis of claim 1 is equally applicable here.
Regarding claim 36, the rejection of claim 32 is incorporated herein.
Ascierto et al. in the combination further teach wherein a feature type is cytological features or architectural features (AFs) (see para [0171]; “the present invention is applicable to any biological specimen, for example a tissue specimen or cytology specimen”, see also claim 17; “associating at least one of the measured size and a label with each tumor-containing region based on the measured size of the tumor-containing region and generating tumor region characteristic data”, see also para [0030]; “The automated identification of immune cell types, their respective count and their cell densities in predefined tumor regions within the tumor or at the periphery of the tumor may be beneficial as the reproducibility of immune score computation is further increased. Each of said features is automatically identified based on reproducible, objective criteria”; Note: region characteristic data implies architectural features).
Regarding claim 37, the rejection of claim 36 is incorporated herein.
Ascierto et al. in the combination further teach wherein a feature of the feature type cytological features is of one of the subtypes: nuclear size, nuclear shape, nuclear morphology, or nuclear texture (see para [0209]; “The measurement information obtained in step 210, e.g. the diameter, size, number of pixels, shape, the type of the intra-tumor region and/or other features of the identified intra-tumor region in the tumor image is evaluated for automatically generating extended tumor regions in step 212 by the module 117”).
Regarding claim 38, the rejection of claim 36 is incorporated herein.
Ascierto et al. in the combination further teach wherein a feature of the feature type architectural features is of one of the subtypes: (i) an architectural feature based on a color of a group of superpixels in the ROI (AF-C); (ii) an architectural feature based on a cytological phenotype of nuclei in the ROI (AF-N); or (iii) a combined architectural feature (AF-CN) based on both a color of a group of superpixels in the ROI and a cytological phenotype of nuclei in the ROI (see para [0191]; “the FOVs are identified as sub-areas within the respective tumor regions or extended tumor regions in dependence on the intensity of groups of pixels in a respective marker image. For example, the regions may be assigned a color (via creation of a heat map) and ranked according to the appearance and/or staining intensity of the groups of pixels (i.e., candidate FOVs) in the marker image of the biological sample”).
Regarding claim 40, the rejection of claim 32 is incorporated herein.
Ascierto et al. in the combination further teach wherein the information about the features comprises one or more of: a total number of feature types that were detected in the ROI and that correspond to the tissue condition indicated by the label; a count of features of a particular feature type that were detected in the ROI; a measured density of features of the particular feature type in the ROI; or a strength of the particular feature type in indicating the tissue condition (see para [0022]; “the calculation of the immune score comprises: [0023] for each of the fields of view in each of the two or more registered marker images: [0024] applying a cell detection algorithm on pixel intensity information of the marker image and automatically counting all detected cells within said field of view; [0025] determining the immune cell type of the detected cells; [0026] determining the immune cell density within said field of view; and/or [0027] determining the region type of the region of the tumor image to which said field of view belongs to in the common coordinate system and assigning the cell count, cell type and/or cell density information with the determined region type; [0028] processing the cell count, cell type, density and/or the assigned region type information of all fields of views of the two or more marker images, wherein the height of the immune score correlates with the density of immune cells in the identified regions”; Note: the limitation recites alternatives joined by “or,” so teaching any one alternative satisfies the limitation).
Regarding claim 41, the rejection of claim 32 is incorporated herein.
Ascierto et al. in the combination further teach wherein the explanatory information comprises a confidence score computed by the classifier in designating the label, wherein the confidence score is based on one or more of: a total number of feature types that were detected in the ROI and that correspond to the tissue condition indicated by the label; for a first feature type: (i) a strength of the first feature type in indicating the tissue condition, or (ii) a count of features of the first feature type that were detected in the ROI; or another total number of feature types that were detected in the ROI but that correspond to a tissue condition different from the condition associated with the label (see para [0206]; “assigning the classified grid points at least one of a high confidence score and a low confidence score, modifying a database of known characteristics of tissue types based on the grid points that were assigned a high confidence score, and generating a modified database, and reclassifying the grid points that were assigned a low confidence score based on the modified database, to segment the tissue (e.g., identify tissue regions in an image)”; Note: the limitation recites alternatives joined by “or,” so teaching any one alternative satisfies the limitation).
Regarding claim 42, the rejection of claim 32 is incorporated herein.
Faust et al. in the combination further teach further comprising: in response to the user interacting with the first UI element: generating explanatory description using a standard pathology vocabulary and the stored explanatory information (see para [0169]; “CAM unit 186 can cause such class activation map, surface map, or “heatmap” to be stored in a database 187 or persistent storage 188 or transmitted over a network 140, for example, to digital pathology platform 110 or entities 150. CAM unit 186 can cause the class activation map, surface map, or “heatmap” to be presented to a user engaged with interface application 130. [0170] Visualization unit 183 can generate and present an integrated summary output or report containing information such as classification, predicted region of interest type of a hyperdimensional space, a surface map showing the basis of the prediction, a class activation map, a t-SNE plot that can show unknown cases classified as “undefined”, likelihood percentages or relative prediction scores associated with alternative predictions or classifications, visual annotations of images”, see also para [0009]; “an original pathology slide, a false colour slide showing the region of interest, an overall view of the original pathology slide and the false colour slide, and a legend indicating the predicted region of interest type and an associated false colour”); and displaying the explanatory description in an overlay window, a side panel, or a page (see para [0010]; “the processor executes the instructions to receive, at the interface tool, an input indication that a specific region of interest is of an unknown type”).
Regarding claim 43, the rejection of claim 42 is incorporated herein.
Faust et al. in the combination further teach further comprising: highlighting in the ROI, features of a particular feature type, that at least partially indicates the tissue condition indicated by the label (see para [0127]; “the class activation map can highlight, annotate, or otherwise depict discriminative image regions or features used by one or more convolutional neural networks to make the classification or prediction of a region of interest type”), using a color designated to the feature type (see para [0215]; “the classes may be colour-coded for visual display”, see also para [0009]; “a false colour slide showing the region of interest, an overall view of the original pathology slide and the false colour slide, and a legend indicating the predicted region of interest type and an associated false colour”); and displaying the highlighted ROI in the overlay window, the side panel, or the page (see para [0091]; “the interface tool can present a display that includes the original slide (e.g., hematoxylin and eosin (H&E) stained), a false colour slide showing the identified regions of interest, an overlay of these two, and/or a legend indicating the predicted ROI type and associated false colour”).
Regarding claim 45, the rejection of claim 32 is incorporated herein.
Ascierto et al. in the combination further teach further comprising: obtaining the whole slide image (WSI); and identifying the ROI in the WSI (see para [0073]; “For example, the tumor image can be a whole slide image. Each marker image can also be a whole slide image or a part thereof”), wherein identification of the ROI comprises: (i) marking in the WSI, superpixels of at least two types, one type corresponding to hematoxylin stained tissue and another type corresponding to eosin stained tissue; and (ii) marking segments of pixels of a first type to define an enclosed region as the ROI (see para [0115]; “from multiple slides of serial sections, and computing a tumor region mask from the tumor marker image or hematoxylin and eosin (H&E) stained slide. Based on the size and location of each individual tumor cell cluster, a set of regions of interest are defined. The slide image (whole slide or portion thereof) is divided into multiple areas, i.e., according to the identified region, for example, the inter-tumor area, peri-tumor area and intra-tumor area. FIG. 4 shows an example of a melanoma slide being partitioned into multiple regions”).
Regarding claim 46, the rejection of claim 45 is incorporated herein.
Ascierto et al. in the combination further teach further comprising identifying a plurality of ROIs in the WSI (see para [0117]; “the invention relates to a method which involves identifying regions, for example, tumor areas or regions around a tumor area, partitioning a whole slide image or portion of a whole slide image into multiple regions related to the tumor”).
Claims 2 and 33 are rejected under 35 U.S.C. 103 as being unpatentable over Ascierto et al. in view of Faust et al. as applied to claims 1 and 32 above, and further in view of Yu (US 20190147592 A1).
Regarding claim 2, the rejection of claim 1 is incorporated herein.
Ascierto et al. in the combination further teach wherein: the tissue comprises breast tissue (see para [0020]; “For example the system may be trained on segmented and classified images of breast or prostate images to assist in cancer screening”). However, the combination of Ascierto et al. and Faust et al. as a whole does not teach and the plurality of classes of tissue conditions comprises two or more of: invasive carcinoma, ductal carcinoma in situ (DCIS), high-risk benign, low-risk benign, atypical ductal hyperplasia (ADH), flat epithelial atypia (FEA), columnar cell change (CCC), and normal duct.
In the same field of endeavor, Yu teaches and the plurality of classes of tissue conditions comprises two or more of: invasive carcinoma, ductal carcinoma in situ (DCIS), high-risk benign, low-risk benign, atypical ductal hyperplasia (ADH), flat epithelial atypia (FEA), columnar cell change (CCC), and normal duct (see para [0011]; “The tissue may be breast, with the pathology states being cancerous vs. benign and the subtypes being ductal carcinoma vs. lobular carcinoma”, see also para [0037]; “the present invention was also applied to breast cancer patients. … In addition, breast cancer has two major subtypes, invasive ductal carcinoma (n=754) and invasive lobular carcinoma (n=203)”). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify an apparatus for automatically identifying fields of view (FOVs) for regions in an image encompassing a tumor of Ascierto et al. in view of the use of deep convolutional neural networks (CNNs) for digital pathology that enable multi-level annotation and visualization of histopathologic slides of Faust et al. and the preprocessing and postprocessing of histological images of Yu in order to achieve satisfactory performance (see para [0011]).
Regarding claim 33, the rejection of claim 32 is incorporated herein.
Ascierto et al. in the combination further teach wherein: the tissue comprises breast tissue (see para [0020]; “For example the system may be trained on segmented and classified images of breast or prostate images to assist in cancer screening”).
In the same field of endeavor, Yu in the combination further teaches and the plurality of classes of tissue conditions comprises two or more of: invasive carcinoma, ductal carcinoma in situ (DCIS), high-risk benign, low-risk benign, atypical ductal hyperplasia (ADH), flat epithelial atypia (FEA), columnar cell change (CCC), and normal duct (see para [0011]; “The tissue may be breast, with the pathology states being cancerous vs. benign and the subtypes being ductal carcinoma vs. lobular carcinoma”, see also para [0037]; “the present invention was also applied to breast cancer patients. … In addition, breast cancer has two major subtypes, invasive ductal carcinoma (n=754) and invasive lobular carcinoma (n=203)”; Note: the classes of tissue are subtypes of breast cancer). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify an apparatus for automatically identifying fields of view (FOVs) for regions in an image encompassing a tumor of Ascierto et al. in view of the use of deep convolutional neural networks (CNNs) for digital pathology that enable multi-level annotation and visualization of histopathologic slides of Faust et al. and the preprocessing and postprocessing of histological images of Yu in order to achieve satisfactory performance (see para [0011]).