Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
Claims 7-8, 11-12, 14, 16-17, 29-31, and 34-35 are rejected under 35 U.S.C. 103 as being unpatentable over Barnes et al. (US 20200321102 A1) in view of Matlock et al. (US 20200388033 A1).
Regarding claim 7, Barnes et al. teaches a method of processing medical image data to predict disease progression in non-small cell lung cancer (NSCLC) patients (see para [0015]; “the images are of biological samples stained for the detection of non-small lung cell cancer biomarkers”, see para [0111]; “data is obtained regarding the outcome being tracked (time to death, time to recurrence, or time to progression) and the feature metric for each biomarker being analyzed…. Prognostic Feature Derivation Module….[0113] In some embodiments, a machine learning algorithm may be utilized to determine a set of image feature metrics that are most relevant in predicting a patient outcome”, see also para [0040]; “imaging biomarkers can be used for cancer diagnosis, prognosis, and epidemiology”); the method comprising: receiving a multiplexed tissue image comprising a plurality of cells stained for one or more markers (see para [0054]; “the system and method comprising (a) an image acquisition module 202 to generate or receive simplex or multiplex images”, see also para [0136]; “the images received as input may be multiplex images, i.e. the image received is of a biological sample stained with more than one stain”); evaluating the multiplexed tissue image using a machine learning classifier model (see para [0011]; “the image analysis algorithm detects and classifies cells and/or nuclei within the input images”, see also para [0113]; “a machine learning algorithm may be utilized to determine a set of image feature metrics that are most relevant in predicting a patient outcome”), and predicting whether a patient's NSCLC will progress based on the evaluation of the multiplexed tissue image using the machine learning classifier model (see para [0077]; “a feature extraction module 205 is utilized to derive certain metrics from the received input images. 
In some embodiments, the derived metrics may be utilized by a classification module 206 such that cells, membranes, and/or nuclei may be identified and/or classified, e.g. as being a tumor cell or a non-tumor cell…a multivariate Cox model module 208 may use a plurality of derived image feature metrics computed using the image feature extraction module 205 or may use prognostic features determined to be most relevant through machine learning using the prognostic feature derivation module 209”, see also para [0107]; “Cox's proportional hazards model is analogous to a multiple regression model …. In this model, the response (dependent) variable is the ‘hazard’. The hazard is the instantaneous event probability at a given time, or the probability that an individual under observation experiences the event in a period centered around that point in time. In the context of survival analysis, the hazard is the probability of dying given that patients have survived up to a given point in time, or the risk for death at that moment”). Barnes et al. additionally discloses that the machine learning classifier model classifies cells as either stable or progressive (see para [0077]; “the derived metrics may be utilized by a classification module 206 such that cells, membranes, and/or nuclei may be identified and/or classified, e.g. as being a tumor cell or a non-tumor cell”), but does not specifically disclose per-cell classification.
In the same field of endeavor, Matlock et al. teaches wherein the machine learning classifier model classifies each of the plurality of cells as either stable or progressive (see para [0033]; “classification model 114 may output a labeled image, which may include the original received image that has been annotated with labels indicating the classification for each cell the model was able to identify/classify”, see also para [0035]; “The cells in the IF images may be automatically labeled by the classification assistant 102 to reflect a classification of each identified cell, similar to the classification described above (e.g., normal versus tumor cells, cell type, etc.”, and para [0052]; “Example types of classifications may include normal versus tumor cell classification, cell type classification, tissue classification, biomarker, or other classifications. Further, for normal versus tumor cell classification, the classification request may include an indication of the tumor type (e.g., non-small cell lung cancer)”, Note: normal versus tumor cells implies stable or progressive). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the automated system for retrospectively analyzing clinical trial data of Barnes et al. with the system for automatically classifying cells in a histologically stained image of Matlock et al. in order to assist in diagnosing or monitoring a patient (see para [0033]).
Regarding claim 8, the rejection of claim 7 is incorporated herein.
Barnes et al. in the combination further teach further comprising preprocessing the multiplexed tissue image prior to evaluating the multiplexed tissue image (see para [0063]; “the images received or acquired are RGB images or multispectral images. In some embodiments, in place of the captured raw images, any set of optional pre-processed images from the captured raw images can be used, either as an independent input image or in combination with the captured raw images. Accordingly, similar pre-processing step can be used when applying the trained network to an unlabeled image”), wherein preprocessing comprises at least one of: denoising the multiplexed tissue image using Otsu's method of automatic image thresholding; converting the multiplexed tissue image to grayscale; and tiling the multiplexed tissue image into a plurality of n pixel by m pixel frames, where n and m are integers greater than 0 (see para [0074]; “This identification may be enabled by image analysis operations such as edge detection, etc. A tissue region mask may be used to remove the non-tissue background noise in the image, for example the non-tissue regions”, see also para [0083]; “Otsu's method is used to determine an optimal threshold by minimizing the intra-class variance…. More specifically, Otsu's method is used to automatically perform clustering-based image thresholding or, the reduction of a gray level image to a binary image”).
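For illustration only, the preprocessing operations recited above (Otsu thresholding, grayscale conversion, and n-by-m tiling) can be sketched as follows; this is a minimal example, and the function names are illustrative rather than drawn from Barnes et al.:

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: choose the threshold that minimizes intra-class
    variance (equivalently, maximizes between-class variance)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    total = gray.size
    sum_all = np.dot(np.arange(256), hist)
    best_t, best_var = 0, -1.0
    w_bg, sum_bg = 0.0, 0.0
    for t in range(256):
        w_bg += hist[t]
        if w_bg == 0:
            continue
        w_fg = total - w_bg
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mu_bg = sum_bg / w_bg
        mu_fg = (sum_all - sum_bg) / w_fg
        between = w_bg * w_fg * (mu_bg - mu_fg) ** 2
        if between > best_var:
            best_var, best_t = between, t
    return best_t

def preprocess(rgb, n=64, m=64):
    """Grayscale conversion, Otsu-based background masking, and tiling
    into n-pixel by m-pixel frames."""
    gray = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    t = otsu_threshold(gray.astype(np.uint8))
    mask = gray > t  # crude tissue/background separation
    tiles = [gray[i:i + n, j:j + m]
             for i in range(0, gray.shape[0] - n + 1, n)
             for j in range(0, gray.shape[1] - m + 1, m)]
    return mask, tiles
```

In practice the references describe applying such a mask to suppress non-tissue background noise before feature extraction.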
Regarding claim 11, the rejection of claim 7 is incorporated herein.
Barnes et al. in the combination further teach wherein the multiplexed tissue image is received from one of a medical imaging device or a database (see para [0063]; “With reference to FIG. 2, the digital pathology system 200 runs an image acquisition module 202 to capture images or image data of a biological sample having one or more stains (i.e. the images may be simplex images or multiplex images) … the image acquisition module 202 is a database or memory comprising previously digitized and stored images from patient biological samples stained with one or more stains (or a plurality of digital images for each patient in a cohort of patients)”).
Regarding claim 12, the rejection of claim 7 is incorporated herein.
Matlock et al. in the combination further teach further comprising presenting an indication of the prediction to a user via a user interface (see para [0033]; “Classification model 114 may then output an indication of classified cells present in the stained tissue. For example, classification model 114 may output a labeled image, which may include the original received image that has been annotated with labels indicating the classification for each cell the model was able to identify/classify. The labeled image may be output for display (e.g., via display device 112) and/or saved in memory”).
Regarding claim 14, the rejection of claim 7 is incorporated herein.
Barnes et al. in the combination further teach wherein the machine learning classifier model is a support vector machine (SVM) (see para [0097]; “the classification module 206 comprises a Support Vector Machine (“SVM”)”).
Regarding claim 16, the rejection of claim 7 is incorporated herein.
Barnes et al. in the combination further teach further comprising: administering treatment to the patient based on the prediction of whether the patient's NSCLC will progress (see para [0014]; “the method further comprises stratifying the patients into diagnostic positive and diagnostic negative groups based on the determined cut point value. In some embodiments, the method further comprises generating Kaplan-Meier response curves. In some embodiments, the method further comprises calculating hazard ratios based on the generated Kaplan-Meier response curves”, see also para [0043]; “the clinical performance of the companion diagnostic is the ability of the test developed for a predictive biomarker (the companion diagnostic) to distinguish treatment responders from non-responders. Companion diagnostics can: (i) identify patients who are most likely to benefit from a particular therapeutic product; (ii) identify patients likely to be at increased risk for serious side effects as a result of treatment with a particular therapeutic product; and/or (iii) monitor response to treatment with a particular therapeutic product for the purpose of adjusting treatment to achieve improved safety or effectiveness”).
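As context for the Kaplan-Meier response curves quoted from Barnes et al. para [0014], the estimator S(t) = Π (1 − d_i/n_i) over event times can be sketched as follows; this is an illustrative example, not code from any cited reference:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate: at each distinct event time t_i,
    multiply the running survival by (1 - d_i / n_i), where d_i events
    occur among n_i subjects still at risk. Censored subjects (event=0)
    reduce the risk set without triggering a survival step."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    n_at_risk = len(times)
    surv, curve = 1.0, []
    i = 0
    while i < len(order):
        t = times[order[i]]
        d = removed = 0
        while i < len(order) and times[order[i]] == t:
            d += events[order[i]]   # count deaths/progressions at time t
            removed += 1            # deaths plus censored leave the risk set
            i += 1
        if d:
            surv *= 1.0 - d / n_at_risk
            curve.append((t, surv))
        n_at_risk -= removed
    return curve
```

Curves computed this way for diagnostic-positive and diagnostic-negative groups are what hazard ratios would then be derived from.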
Regarding claim 17, the rejection of claim 16 is incorporated herein.
Barnes et al. in the combination further teach wherein the administering treatment comprises starting, stopping, or altering an NSCLC treatment regimen (see para [0014]; “the method further comprises stratifying the patients into diagnostic positive and diagnostic negative groups based on the determined cut point value. In some embodiments, the method further comprises generating Kaplan-Meier response curves. In some embodiments, the method further comprises calculating hazard ratios based on the generated Kaplan-Meier response curves”, see also para [0043]; “the clinical performance of the companion diagnostic is the ability of the test developed for a predictive biomarker (the companion diagnostic) to distinguish treatment responders from non-responders. Companion diagnostics can: (i) identify patients who are most likely to benefit from a particular therapeutic product; (ii) identify patients likely to be at increased risk for serious side effects as a result of treatment with a particular therapeutic product; and/or (iii) monitor response to treatment with a particular therapeutic product for the purpose of adjusting treatment to achieve improved safety or effectiveness”).
Regarding claim 29, Barnes et al. teaches a system for processing medical image data related to non-small cell lung cancer (NSCLC) (see para [0015]; “the images are of biological samples stained for the detection of breast cancer biomarkers. In some embodiments, the images are of biological samples stained for the detection of non-small lung cell cancer biomarkers”), the system comprising: at least one processor; and memory having instructions stored thereon that, when executed by the at least one processor, cause the processor system to perform operations comprising (see para [0016]; “the system comprising: (i) one or more processors, and (iii) a memory coupled to the one or more processors, the memory to store computer-executable instructions that, when executed by the one or more processors, cause the one or more processors to perform operations comprising”): receiving a multiplexed tissue image comprising a plurality of cells stained for one or more markers (see para [0054]; “an image acquisition module 202 to generate or receive simplex or multiplex images, e.g. acquired images of a biological sample stained with one or more stains (step 300)”, see also para [0136]; “the images received as input may be multiplex images, i.e. the image received is of a biological sample stained with more than one stain”); evaluating the multiplexed tissue image using one of a support vector machine (SVM) classifier or a boosted regression tree (BRT) (see para [0097]; “the classification module 206 comprises a Support Vector Machine (“SVM”)”, see also para [0013]; “the diagnostic feature metric is a combination of multiple image feature metrics or expression scores which are determined based on machine learning, i.e. 
using a classifier trained to determine those image feature metrics that best stratify patients when presented with patient outcome data and image analysis data”), and predicting whether a patient's NSCLC will progress based on the evaluation of the multiplexed tissue image using the SVM classifier or the BRT (see para [0077]; “a feature extraction module 205 is utilized to derive certain metrics from the received input images. In some embodiments, the derived metrics may be utilized by a classification module 206 such that cells, membranes, and/or nuclei may be identified and/or classified, e.g. as being a tumor cell or a non-tumor cell…a multivariate Cox model module 208 may use a plurality of derived image feature metrics computed using the image feature extraction module 205 or may use prognostic features determined to be most relevant through machine learning using the prognostic feature derivation module 209”, see also para [0107]; “Cox's proportional hazards model is analogous to a multiple regression model …. In this model, the response (dependent) variable is the ‘hazard’. The hazard is the instantaneous event probability at a given time, or the probability that an individual under observation experiences the event in a period centered around that point in time. In the context of survival analysis, the hazard is the probability of dying given that patients have survived up to a given point in time, or the risk for death at that moment”), and wherein the BRT outputs a probability of NSCLC progression for each of a plurality of quadrants parsed from the multiplexed tissue image (see para [0107]; “In this model, the response (dependent) variable is the ‘hazard’. The hazard is the instantaneous event probability at a given time, or the probability that an individual under observation experiences the event in a period centered around that point in time. 
In the context of survival analysis, the hazard is the probability of dying given that patients have survived up to a given point in time, or the risk for death at that moment”, see also para [0145]; “In the regression model, the model predicts the probability of favorable response from a given patient data”). Barnes et al. additionally discloses that the SVM classifier classifies cells as either stable or progressive (see para [0077]; “the derived metrics may be utilized by a classification module 206 such that cells, membranes, and/or nuclei may be identified and/or classified, e.g. as being a tumor cell or a non-tumor cell”), but does not specifically disclose per-cell classification.
In the same field of endeavor, Matlock et al. teaches wherein the SVM classifier classifies each of the plurality of cells as either stable or progressive (see para [0033]; “classification model 114 may output a labeled image, which may include the original received image that has been annotated with labels indicating the classification for each cell the model was able to identify/classify”, see also para [0035]; “The cells in the IF images may be automatically labeled by the classification assistant 102 to reflect a classification of each identified cell, similar to the classification described above (e.g., normal versus tumor cells, cell type, etc.”, and para [0052]; “Example types of classifications may include normal versus tumor cell classification, cell type classification, tissue classification, biomarker, or other classifications. Further, for normal versus tumor cell classification, the classification request may include an indication of the tumor type (e.g., non-small cell lung cancer)”, Note: normal versus tumor cells implies stable or progressive). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the automated system for retrospectively analyzing clinical trial data of Barnes et al. with the system for automatically classifying cells in a histologically stained image of Matlock et al. in order to assist in diagnosing or monitoring a patient (see para [0033]).
Regarding claim 30, the rejection of claim 29 is incorporated herein.
Barnes et al. in the combination further teach wherein the operations further comprise preprocessing the multiplexed tissue image prior to evaluating the multiplexed tissue image (see para [0063]; “the images received or acquired are RGB images or multispectral images. In some embodiments, in place of the captured raw images, any set of optional pre-processed images from the captured raw images can be used, either as an independent input image or in combination with the captured raw images. Accordingly, similar pre-processing step can be used when applying the trained network to an unlabeled image”), wherein preprocessing comprises at least one of: denoising the multiplexed tissue image using Otsu's method of automatic image thresholding; converting the multiplexed tissue image to grayscale; and tiling the multiplexed tissue image into a plurality of n pixel by m pixel frames, where n and m are integers greater than 0 (see para [0074]; “This identification may be enabled by image analysis operations such as edge detection, etc. A tissue region mask may be used to remove the non-tissue background noise in the image, for example the non-tissue regions”, see also para [0083]; “Otsu's method is used to determine an optimal threshold by minimizing the intra-class variance…. More specifically, Otsu's method is used to automatically perform clustering-based image thresholding or, the reduction of a gray level image to a binary image”).
Regarding claim 31, the rejection of claim 30 is incorporated herein.
Matlock et al. in the combination further teach wherein the operations further comprise presenting an indication of the prediction to a user via a user interface (see para [0033]; “Classification model 114 may then output an indication of classified cells present in the stained tissue. For example, classification model 114 may output a labeled image, which may include the original received image that has been annotated with labels indicating the classification for each cell the model was able to identify/classify. The labeled image may be output for display (e.g., via display device 112) and/or saved in memory”).
Regarding claim 34, the rejection of claim 30 is incorporated herein.
Barnes et al. in the combination further teach wherein the multiplexed tissue image is received from one of a medical imaging device or a database (see para [0063]; “With reference to FIG. 2, the digital pathology system 200 runs an image acquisition module 202 to capture images or image data of a biological sample having one or more stains (i.e. the images may be simplex images or multiplex images)… the image acquisition module 202 is a database or memory comprising previously digitized and stored images from patient biological samples stained with one or more stains (or a plurality of digital images for each patient in a cohort of patients)”).
Regarding claim 35, the rejection of claim 30 is incorporated herein.
Barnes et al. in the combination further teach wherein the multiplexed tissue image comprises a plurality of individual image files each associated with a single biomarker (see para [0054]; “an image acquisition module 202 to generate or receive simplex or multiplex images, e.g. acquired images of a biological sample stained with one or more stains (step 300)”, see also para [0137]; “in a sample comprising one or more stains and hematoxylin, individual images may be produced for each channel of the one or more stains and hematoxylin”).
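For illustration, the per-biomarker image files described in Barnes et al. paras [0054] and [0137] (one unmixed channel per stain plus hematoxylin) could be assembled into a single multiplexed array as sketched below; `assemble_multiplex` is a hypothetical helper, not a function from the reference:

```python
import numpy as np

def assemble_multiplex(channel_images):
    """Stack per-biomarker single-channel images into one H x W x C
    multiplexed array, one channel per stain. All channels must share
    the same spatial dimensions."""
    first = channel_images[0]
    if not all(ch.shape == first.shape for ch in channel_images):
        raise ValueError("all channel images must have the same shape")
    return np.stack(channel_images, axis=-1)
```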
Claims 9 and 36-37 are rejected under 35 U.S.C. 103 as being unpatentable over Barnes et al. in view of Matlock et al. as applied to claims 7 and 30 above, and further in view of Yuan (US 20170365053 A1).
Regarding claim 9, the rejection of claim 7 is incorporated herein.
Barnes et al. in the combination further teach wherein evaluating the multiplexed tissue image further comprises, prior to classifying the plurality of cells: extracting cell segments from the multiplexed tissue image using a convolutional neural network (see para [0011]; “the image analysis algorithm detects and classifies cells and/or nuclei within the input images”, see also para [0081]; “the images received as input are processed such as to detect nucleus centers (seeds) and/or to segment the nuclei”); building a count matrix that compares the plurality of cells to the one or more markers from the extracted cell segments (see para [0011]; “the expression score is an H-score. In some embodiments, the expression score is biomarker percent positivity”, see also para [0100]; “counts of specific nuclei for each field of view may be used to determine various marker expression scores, such as percent positivity or an H-Score”). However, the combination of Barnes et al. and Matlock et al. as a whole does not teach clustering the count matrix to characterize cell type heterogeneity using a Gaussian mixture model; approximating tumor regions from the characterized cell types using multiple convex hulls; or identifying cellular neighborhoods based on the tumor regions.
In the same field of endeavor, Yuan teaches clustering the count matrix to characterize cell type heterogeneity using a Gaussian mixture model (see para [0079]; “The lymphocytes are then clustered according to their lymphocyte-to-cancer measurements. An unsupervised learning method, such as Gaussian mixture clustering, may be used to cluster lymphocytes according to their proximity to cancer”); approximating tumor regions from the characterized cell types using multiple convex hulls; and identifying cellular neighborhoods based on the tumor regions (see para [0034]; “the distance to the centroid of convex hull region formed by 10 nearby cancer cells d.sub.centroid E”, see also para [0088]; “an intra-tumour lymphocyte is on average 7 μm away from a cancer cell and 3 μm from the centroid of convex hull region formed by nearby cancer cells. An adjacent-tumour lymphocyte may be also close to the nearest cancer cells but would be further away from the centroid of convex hull region because it is not surrounded by cancer cells”). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the automated system for retrospectively analyzing clinical trial data of Barnes et al. in view of the system for automatically classifying cells in a histologically stained image of Matlock et al. and the analysis of a tumor image to calculate a metric of immune infiltration of Yuan in order to characterize the proximity of lymphocytes to cancer cells, an important property of immune infiltration (see para [0079]).
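The convex-hull centroid distance that Yuan's paras [0034] and [0088] describe (distance from a lymphocyte to the centroid of the hull formed by nearby cancer cells) can be sketched as below; this is an illustrative implementation, assuming cells are given as 2-D coordinate tuples, and the helper names are not drawn from Yuan:

```python
import math

def convex_hull(points):
    """Andrew's monotone chain convex hull over (x, y) tuples."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def centroid_distance(lymphocyte, cancer_cells, k=10):
    """Distance from a lymphocyte to the centroid of the convex hull
    formed by its k nearest cancer cells (cf. Yuan, para [0034]).
    Intra-tumour lymphocytes yield small distances; adjacent-tumour
    lymphocytes yield larger ones (cf. para [0088])."""
    nearest = sorted(cancer_cells, key=lambda c: math.dist(lymphocyte, c))[:k]
    hull = convex_hull(nearest)
    cx = sum(p[0] for p in hull) / len(hull)
    cy = sum(p[1] for p in hull) / len(hull)
    return math.dist(lymphocyte, (cx, cy))
```

Per the reference, such distances would then be clustered (e.g., by a Gaussian mixture model) to separate intra-tumour from adjacent-tumour lymphocytes.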
Regarding claim 36, the rejection of claim 30 is incorporated herein.
Barnes et al. in the combination further teach wherein evaluating the multiplexed tissue image further comprises, prior to classifying the plurality of cells: extracting cell segments from the multiplexed tissue image using a convolutional neural network (see para [0011]; “the image analysis algorithm detects and classifies cells and/or nuclei within the input images”, see also para [0081]; “the images received as input are processed such as to detect nucleus centers (seeds) and/or to segment the nuclei”); building a count matrix that compares the plurality of cells to the one or more markers from the extracted cell segments (see para [0011]; “the expression score is an H-score. In some embodiments, the expression score is biomarker percent positivity”, see also para [0100]; “counts of specific nuclei for each field of view may be used to determine various marker expression scores, such as percent positivity or an H-Score”).
Yuan in the combination further teach clustering the count matrix to characterize cell type heterogeneity using a Gaussian mixture model (see para [0079]; “The lymphocytes are then clustered according to their lymphocyte-to-cancer measurements. An unsupervised learning method, such as Gaussian mixture clustering, may be used to cluster lymphocytes according to their proximity to cancer”); approximating tumor regions from the characterized cell types using multiple convex hulls; and identifying cellular neighborhoods based on the tumor regions (see para [0034]; “the distance to the centroid of convex hull region formed by 10 nearby cancer cells d.sub.centroid E”, see also para [0088]; “an intra-tumour lymphocyte is on average 7 μm away from a cancer cell and 3 μm from the centroid of convex hull region formed by nearby cancer cells. An adjacent-tumour lymphocyte may be also close to the nearest cancer cells but would be further away from the centroid of convex hull region because it is not surrounded by cancer cells”). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the automated system for retrospectively analyzing clinical trial data of Barnes et al. in view of the system for automatically classifying cells in a histologically stained image of Matlock et al. and the analysis of a tumor image to calculate a metric of immune infiltration of Yuan in order to characterize the proximity of lymphocytes to cancer cells, an important property of immune infiltration (see para [0079]).
Regarding claim 37, the rejection of claim 36 is incorporated herein.
Barnes et al. in the combination further teach wherein the operations further comprise: training the SVM classifier or the BRT using the identified cellular neighborhoods (see para [0097]; “the classification module 206 comprises a Support Vector Machine (“SVM”)”).
Claims 10 and 33 are rejected under 35 U.S.C. 103 as being unpatentable over Barnes et al. in view of Matlock et al. as applied to claims 7 and 30 above, and further in view of Hofman et al. (NPL: “Multiplexed Immunohistochemistry for Molecular and Immune Profiling in Lung Cancer—Just About Ready for Prime-Time?”).
Regarding claim 10, the rejection of claim 7 is incorporated herein. The combination of Barnes et al. and Matlock et al. as a whole does not teach wherein the multiplexed tissue image is a 7-stain image.
In the same field of endeavor, Hofman et al. teach wherein the multiplexed tissue image is a 7-stain image (see page 15, 3rd para; “The specificity of the staining has been improved with the use of tyramide techniques allowing simultaneous staining with 7 to 9 colors in a same slide”). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the automated system for retrospectively analyzing clinical trial data of Barnes et al. in view of the system for automatically classifying cells in a histologically stained image of Matlock et al. and the multiplexed immunohistochemistry for molecular and immune profiling in lung cancer of Hofman et al. in order to determine the spatial distribution, activation state of immune cells, and the presence of immunoactive molecular expression (see page 15, 3rd para).
Regarding claim 33, the rejection of claim 30 is incorporated herein.
Hofman et al. in the combination further teach wherein the multiplexed tissue image is a 7-stain image (see page 15, 3rd para; “The specificity of the staining has been improved with the use of tyramide techniques allowing simultaneous staining with 7 to 9 colors in a same slide”). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the automated system for retrospectively analyzing clinical trial data of Barnes et al. in view of the system for automatically classifying cells in a histologically stained image of Matlock et al. and the multiplexed immunohistochemistry for molecular and immune profiling in lung cancer of Hofman et al. in order to determine the spatial distribution, activation state of immune cells, and the presence of immunoactive molecular expression (see page 15, 3rd para).
Claims 13 and 32 are rejected under 35 U.S.C. 103 as being unpatentable over Barnes et al. in view of Matlock et al. as applied to claims 7 and 30 above, and further in view of Frank (US 20210004650 A1).
Regarding claim 13, the rejection of claim 7 is incorporated herein. The combination of Barnes et al. and Matlock et al. as a whole does not teach further comprising generating a risk map that indicates a probability of NSCLC progression based on the prediction.
In the same field of endeavor, Frank teaches further comprising generating a risk map that indicates a probability of NSCLC progression based on the prediction (see para [0009]; “a probability map in accordance herewith color-codes the probabilities assigned to the examined regions of an image. Using a high degree of subimage overlap results in coverage of large, contiguous regions of the image and fine features. Each colored pixel represents the combined (e.g., averaged) classification probabilities over all sifted and classified tiles containing that pixel”, see also para [0010]; “probability maps …. they highlight all regions relevant to the classification and visually reveal the associated probability levels. In a medical-image context, this representation permits clinicians to readily determine whether a classification is based on the critical anatomy”). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the automated system for retrospectively analyzing clinical trial data of Barnes et al. in view of the system for automatically classifying cells in a histologically stained image of Matlock et al. and the tile-level classification and aggregation of Frank in order to highlight all regions relevant to the classification and visually reveal the associated probability levels (see para [0010]).
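The pixel-wise averaging of overlapping tile probabilities quoted from Frank's para [0009] can be sketched as follows; this is an illustrative example with an assumed tile format (origin, size, and a scalar classification probability per tile), not code from the reference:

```python
import numpy as np

def probability_map(shape, tiles):
    """Build a per-pixel probability map by averaging the classification
    probabilities of every (possibly overlapping) tile covering each
    pixel, as described in Frank para [0009]. `tiles` is a list of
    ((row, col, height, width), probability) pairs."""
    acc = np.zeros(shape, dtype=float)   # sum of tile probabilities
    cnt = np.zeros(shape, dtype=float)   # number of tiles covering pixel
    for (r, c, h, w), p in tiles:
        acc[r:r + h, c:c + w] += p
        cnt[r:r + h, c:c + w] += 1
    # uncovered pixels default to probability 0
    return np.where(cnt > 0, acc / np.maximum(cnt, 1), 0.0)
```

Such a map could then be color-coded so that clinicians can see which regions drive the classification.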
Regarding claim 32, the rejection of claim 30 is incorporated herein.
Frank in the combination further teach wherein the operations further comprise generating a risk map that indicates a probability of NSCLC progression based on the prediction (see para [0009]; “a probability map in accordance herewith color-codes the probabilities assigned to the examined regions of an image. Using a high degree of subimage overlap results in coverage of large, contiguous regions of the image and fine features. Each colored pixel represents the combined (e.g., averaged) classification probabilities over all sifted and classified tiles containing that pixel”, see also para [0010]; “probability maps …. they highlight all regions relevant to the classification and visually reveal the associated probability levels. In a medical-image context, this representation permits clinicians to readily determine whether a classification is based on the critical anatomy”). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the automated system for retrospectively analyzing clinical trial data of Barnes et al. in view of the system for automatically classifying cells in a histologically stained image of Matlock et al. and the tile-level classification and aggregation of Frank in order to highlight all regions relevant to the classification and visually reveal the associated probability levels (see para [0010]).
Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Barnes et al. in view of Matlock et al. as applied in claim 7 above, and further in view of Simpson et al. (US 20190019300 A1).
Regarding claim 15, the rejection of claim 7 is incorporated herein.
Barnes et al. in the combination further teaches further comprising: parsing the multiplexed tissue image into a plurality of quadrants (see para [0051]; “The digital images can also be divided into a matrix of pixels. The pixels can include a digital value of one or more bits, defined by the bit depth”, Note: a matrix of pixels implies quadrants). However, the combination of Barnes et al. and Matlock et al. as a whole does not teach evaluating each of the plurality of quadrants using a boosted regression tree (BRT), wherein the prediction of whether the patient's NSCLC will progress is further based on an output of the BRT, and wherein the BRT outputs a probability of NSCLC progression for each of the plurality of quadrants.
In the same field of endeavor, Simpson et al. teaches evaluating each of the plurality of quadrants using a boosted regression tree (BRT), wherein the prediction of whether the patient's NSCLC will progress is further based on an output of the BRT, and wherein the BRT outputs a probability of NSCLC progression for each of the plurality of quadrants (see para [0180]; “An exemplary prediction model was constructed with the pre-treatment CT texture of the index tumor as a predictor, and the index tumor volume change rate and the longest dimension change rate as the outcome. …. A least square boosted random forest regression model was tuned with learning rate of about 0.25, and the tree number of 40. A nine fold cross validation was used to construct the model, with nine regression models”, see also para [0105]; “can provide predictive and prognostic information, as shown in breast cancer by the immunohistochemical assessment of molecular markers such as estrogen receptor, progesterone receptor, and HER2”). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the automated system for retrospectively analyzing clinical trial data of Barnes et al. in view of the system for automatically classifying cells in a histological stained image of Matlock et al. and the quantifying of underlying pixel variation in imaging data of Simpson et al. in order to provide a prediction model (see para [0180]).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WINTA GEBRESLASSIE whose telephone number is (571)272-3475. The examiner can normally be reached Monday-Friday 9:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Bee, can be reached at 571-270-5180. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/WINTA GEBRESLASSIE/ Examiner, Art Unit 2677