Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy was filed on 24 March 2023.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 17 February 2026 has been entered.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 17 February 2026 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Response to Amendment
The amendment filed 17 February 2026 has been entered.
The amendment of claims 21, 23, 25-28, 31, and 32 has been acknowledged.
The cancellation of claims 22, 24, 30, and 33 has been acknowledged.
Response to Arguments
Applicant’s arguments, see page 6, section “Rejections under § 103”, filed 17 February 2026 with respect to the rejection of claims 21-36 under 35 U.S.C. § 103 have been fully considered and are persuasive.
Applicant states on page 7 of the reply filed 17 February 2026 that Fuchs et al. (U.S. Patent Publication No. 2019/0295252 A1, hereinafter “Fuchs”) fails to teach the amended claim limitation of “a second trained model comprising an attention mechanism”. Specifically, Fuchs discloses that the aggregation model may include a set of transform layers (e.g., input layer, context layer, state layer, and hidden layer), which refers to the structure of a recurrent neural network or long short-term memory (LSTM) type model. The examiner concedes that Fuchs does not teach the second network comprising an attention mechanism; however, the secondary reference Yip et al. (U.S. Patent Publication No. 2020/0258223 A1, hereinafter “Yip”) does explicitly teach using an attention network to determine biomarkers in ¶ 0093: “Biomarkers may be identified through any of the following models. Any models referenced herein may be implemented as artificial intelligence engines and may include gradient boosting models, random forest models, neural networks (NN), regression models, Naive Bayes models, or machine learning algorithms (MLA)… NNs include conditional random fields, convolutional neural networks, attention based neural networks... (emphasis added)”.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 21, 23-29, 31, 32, and 34-36 are rejected under 35 U.S.C. 103 as being unpatentable over Fuchs et al. (U.S. Patent Publication No. 2019/0295252 A1, hereinafter “Fuchs”) in view of Yip et al. (U.S. Patent Publication No. 2020/0258223 A1, hereinafter “Yip”).
Regarding claim 21, Fuchs teaches a computer implemented method of processing an image of tissue, comprising:
obtaining a first set of image portions from an input image of tissue (¶ 0157: For each biomedical image 3232, the tile generator 3216 may generate a set of tiles 3236A-N (hereinafter referred as tiles 3236) from the biomedical image 3232. Each tile 3236 may correspond to a portion of the biomedical image 3232. In some embodiments, the tile generator 3215 may partition or divide the biomedical image 3232 into the set of tiles 3236.);
selecting a second set of two or more image portions from the first set of image portions (¶ 0173: Based on the scores determined for the tiles 3236 from the application of the inference model 3212, the model applier 3218 may select a subset from the set of tiles 3236 to form a subset 3238A-N (hereinafter generally referred to as subset 3238 or selected tiles 3238). In some embodiments, the model applier 3218 may select the tiles 3236 with the highest scores to form the subset 3238. The selected tiles 3238 may represent the tiles 3236 with the highest likelihood of including a feature correlated with or corresponding to the presence of the condition.), the selecting comprising inputting image data of an image portion from the first set into a first trained model comprising a first convolutional neural network (¶ 0170: In some embodiments, the inference model 3212 may be a convolutional neural network (CNN) and a deep convolutional network (DCN), among others, with the set of transform layers.), the first trained model generating an indication of whether the image portion is associated with a molecular biomarker (¶ 0172: By applying the inference model, the model applier 3218 may determine the score for each tile 3236. In some embodiments, the model applier 3218 may determine the score for each condition for each tile 3236. For example, one tile 3236 may be associated with a score indicating likelihood of presence of prostate cancer and another score indicating likelihood of bruising to the organ tissue on the tile 3236.), wherein the second set of the two or more image portions is selected based on the indication of whether the image portion is associated with the molecular biomarker (¶ 0173: Based on the scores determined for the tiles 3236 from the application of the inference model 3212, the model applier 3218 may select a subset from the set of tiles 3236 to form a subset 3238A-N (hereinafter generally referred to as subset 3238 or selected tiles 3238). In some embodiments, the model applier 3218 may select the tiles 3236 with the highest scores to form the subset 3238. The selected tiles 3238 may represent the tiles 3236 with the highest likelihood of including a feature correlated with or corresponding to the presence of the condition.); and
determining an indication of whether the input image is associated with the molecular biomarker (¶ 0176: The label 3234 may indicate the presence of the condition on the corresponding medical image 3232… The threshold value when the label 3234 indicates lack of the condition may be the same or may differ from the threshold value when the label 3235 indicates the presence of the condition.), wherein the determining comprises inputting first data corresponding to the second set of two or more image portions into a second trained model (¶ 0173: The number of tiles 3238 selected from the original set of tiles 3236 may be in accordance to a predefined number, and may range from 1 to 50.; ¶ 0173: Under the runtime mode, with the selection from the tiles 3236, the model applier 3218 may apply the aggregation model 3214 onto the selected tiles 3238, and feed the selected tiles 3238 into the input of the aggregation model 3214.).
Fuchs does not explicitly teach wherein the biomarker is a molecular biomarker and wherein the determining comprises a second trained model using an attention mechanism.
However, Yip does teach wherein the biomarker is a molecular biomarker (¶ 0007: Yet another tumor characteristic is the presence of specific molecules as a biomarker, including the molecule known as programmed death ligand 1 (PD-L1).; ¶ 0086: In some examples, the multiscale configurations contain pixel-level cell classifiers and cell segmentation models.; ¶ 0087: In some examples, the deep learning frameworks herein include a single-scale configuration trained using a multiple instance learning (MIL) strategy to predict biomarkers presence in histopathology images… In some examples, the single-scale configurations contain slide-level classifiers trained using gene sequencing data, such as RNA sequencing data, and trained to analyze histopathology images having slide-level labels, not tile-level labels.); and wherein the determining comprises a second trained model using an attention mechanism (¶ 0093: Biomarkers may be identified through any of the following models. Any models referenced herein may be implemented as artificial intelligence engines and may include gradient boosting models, random forest models, neural networks (NN), regression models, Naive Bayes models, or machine learning algorithms (MLA)… NNs include conditional random fields, convolutional neural networks, attention based neural networks... (emphasis added)).
Fuchs and Yip are considered to be analogous art as both pertain to determining biomarkers in images. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system for multiple instance learning for classification and localization in biomedical imaging (as taught by Fuchs) with the system for determining biomarkers from histopathology slide images (as taught by Yip). The motivation for this combination of references would be that the system of Yip includes a stack of convolutional layers interleaved with “shortcut connections,” which skip intermediate layers. These shortcut connections use earlier layers as a reference point to guide deeper layers to learn the residual between layer outputs rather than learning an identity mapping between layers. This innovation improves convergence speed and stability during training, and allows deeper networks to perform better than their shallower counterparts (see ¶ 0365). This motivation for the combination of Fuchs and Yip is supported by KSR exemplary rationale (G): some teaching, suggestion, or motivation in the prior art that would have led one of ordinary skill to modify the prior art reference or to combine prior art reference teachings to arrive at the claimed invention. MPEP 2141(III).
Regarding claim 23, the Fuchs and Yip combination teaches the method of claim 21.
Additionally, Fuchs teaches wherein the second trained model further comprises a recurrent neural network (¶ 0195: In some embodiments, the aggregation model 3214 may be a recurrent neural network (RNN).).
Regarding claim 24, the Fuchs and Yip combination teaches the method of claim 22.
Additionally, Fuchs teaches wherein the second trained model comprises an attention mechanism (¶ 0195: The aggregation model 3214 may have one or more parameters to determine the classification result for the biomedical image 3232. The aggregation model 3214 may include a set of transform layers (e.g., input layer, context layer, state layer, and hidden layer). Each transform layer may include at least one of the one or more parameters to convert the set of tiles 3238 to a set of feature maps and to determine the classification result for the entire biomedical image 3232.).
Regarding claim 25, the Fuchs and Yip combination teaches the method of claim 23.
Additionally, Fuchs teaches wherein determining an indication of whether the input image is associated with the biomarker from the second set of image portions comprises:
inputting the first data for each image portion in the second set into the attention mechanism (¶ 0173: Under the runtime mode, with the selection from the tiles 3236, the model applier 3218 may apply the aggregation model 3214 onto the selected tiles 3238, and feed the selected tiles 3238 into the input of the aggregation model 3214.; ¶ 0195: Each transform layer may include at least one of the one or more parameters to convert the set of tiles 3238 to a set of feature maps and to determine the classification result for the entire biomedical image 3232.), wherein the attention mechanism is configured to output an indication of the importance of each image portion (¶ 0194: The classification result may be, for example, a binary value (e.g., 0 and 1 or true and false) or one of an enumerate value or indicator (e.g., "high", "medium," or "low"), among others.);
selecting a third set of image portions based on the indication of the importance of each image portion (¶ 0197: In some embodiments, prior to feeding the subset 3238 from the different magnification factors, the model applier 3218 may generate an aggregate subset using a combination of the selected tiles 3238.); and
for each image portion in the third set, inputting the first data into the recurrent neural network (¶ 0197: Once generated, the model applier 3218 may feed the aggregate subset to the aggregation model 3214.), the recurrent neural network generating the indication of whether the input image is associated with the biomarker (¶ 0199: The model applier 3218 may identify the classification result for the condition from the last transform layer of the aggregation model 3214. The identification of the classification result may be repeated for multiple conditions (e.g., prostate tumor, breast lesion, and bruised tissue).).
Regarding claim 26, the Fuchs and Yip combination teaches the method of claim 21.
Additionally, Fuchs teaches wherein the indication of whether the image portion is associated with the biomarker is a probability that the image portion is associated with the biomarker (¶ 0170: The inference model 212 may have one or more parameters to determine the score for each tile 3236… Each transform layer may be of a predefined size to generate the feature maps of a predefined size. In some embodiments, the inference model 3212 may be a convolutional neural network (CNN) and a deep convolutional network (DCN), among others, with the set of transform layers.), wherein selecting the second set comprises selecting the k image portions having the highest probability (¶ 0173: Based on the scores determined for the tiles 3236 from the application of the inference model 3212, the model applier 3218 may select a subset from the set of tiles 3236 to form a subset 3238A-N (hereinafter generally referred to as subset 3238 or selected tiles 3238). In some embodiments, the model applier 3218 may select the tiles 3236 with the highest scores to form the subset 3238.), wherein k is a pre-defined integer greater than 1 (¶ 0173: The number of tiles 3238 selected from the original set of tiles 3236 may be in accordance to a predefined number, and may range from 1 to 50.).
Regarding claim 27, the Fuchs and Yip combination teaches the method of claim 21.
Additionally, Fuchs teaches wherein the first convolutional neural network comprises a first portion comprising at least one convolutional layer and a second portion (Figure 19, Classifier CNN; ¶ 0170: The inference model 3212 may include a set of transform layers (e.g., convolutional layer, pooling layer, rectified layer, and normalization layer)… The inference model 3212 may have any number of transform layers.), wherein the second portion takes as input a one dimensional vector (Figure 19, See Tile Probability and Ranked Tile Section of Inference Network);
wherein determining the indication of whether the input image is associated with the biomarker from the second set of image portions further comprises:
generating the first data for each of the second set of image portions (¶ 0170: The inference model 212 may have one or more parameters to determine the score for each tile 3236… Each transform layer may be of a predefined size to generate the feature maps of a predefined size. In some embodiments, the inference model 3212 may be a convolutional neural network (CNN) and a deep convolutional network (DCN), among others, with the set of transform layers.), generating the first data for an image portion comprising inputting the image data of the image portion into the first portion of the first convolutional neural network (¶ 0170: The inference model 212 may have one or more parameters to determine the score for each tile 3236… Each transform layer may be of a predefined size to generate the feature maps of a predefined size. In some embodiments, the inference model 3212 may be a convolutional neural network (CNN) and a deep convolutional network (DCN), among others, with the set of transform layers.).
Regarding claim 28, the Fuchs and Yip combination teaches the method according to claim 21.
Additionally, Fuchs teaches further comprising:
selecting a fourth set of one or more image portions from the first set of image portions (¶ 0197: In some embodiments, prior to feeding the subset 3238 from the different magnification factors, the model applier 3218 may generate an aggregate subset using a combination of the selected tiles 3238.),
wherein the indication of whether the input image is associated with the biomarker is determined from the fourth set of one or more image portions and the second set of two or more image portions (¶ 0197: Once generated, the model applier 3218 may feed the aggregate subset to the aggregation model 3214.; ¶ 0198: The model applier 3218 may identify the classification result for the condition from the last transform layer of the aggregation model 3214. The identification of the classification result may be repeated for multiple conditions (e.g., prostate tumor, breast lesion, and bruised tissue).; Examiner’s note: As the aggregate subset is compiled using specific parameters of the second set, it is understood by the examiner that using this dataset to detect a biomarker relies on both the fourth (aggregate) set, and the second set.).
Additionally, Yip teaches the selecting comprising inputting image data of an image portion from the first set into a third trained model comprising a second convolutional neural network (¶ 0029: In some examples, wherein the one of the one or more trained deep learning multiscale classifier models are each configured as a tile-resolution Fully Convolutional Network (FCN) classification model.; ¶ 0180: For biomarker detection, the biomarker classification model 319 may be trained with a CMS model that predicts for each tile, a CMS classification, identifying different tissue types (e.g., stroma) for that classification, in place of trying to mere average CMS classification across all tiles. In an example, each tile would be processed, and the CMS model would generate a compressed representation with each tile's associated pixel data and each tile would be assigned into a class (cluster 1, cluster 2, etc.), based on patterns in each tile's pixel data and similarities among the tiles).
Regarding claim 29, the Fuchs and Yip combination teaches the method of claim 21.
Additionally, Fuchs teaches wherein the biomarker is a cancer biomarker (¶ 0172: For example, one tile 3236 may be associated with a score indicating likelihood of presence of prostate cancer…) and wherein obtaining the first set of image portions from an input image of tissue comprises:
splitting the input image of tissue into image portions (¶ 0157: For each biomedical image 3232, the tile generator 3216 may generate a set of tiles 3236A-N (hereinafter referred as tiles 3236) from the biomedical image 3232. Each tile 3236 may correspond to a portion of the biomedical image 3232. In some embodiments, the tile generator 3215 may partition or divide the biomedical image 3232 into the set of tiles 3236.);
inputting image data of an image portion into a fifth trained model (¶ 0172: The model applier 3218 may apply the inference model 3212 to the set of tiles 3236 for each biomedical image 3232.), the fifth trained model generating an indication of whether the image portion is associated with cancer tissue (¶ 0172: For example, one tile 3236 may be associated with a score indicating likelihood of presence of prostate cancer…; ¶ 0176: The label 3234 may indicate the presence of the condition on the corresponding medical image 3232… The threshold value when the label 3234 indicates lack of the condition may be the same or may differ from the threshold value when the label 3235 indicates the presence of the condition.; Examiner’s note: As the inference model can already determine biomarkers related to cancer, the examiner interprets the inference model of Fuchs as performing the role of this claim’s fifth trained model.); and
selecting the first set of image portions based on the indication of whether the image portion is associated with cancer tissue (¶ 0172: For example, one tile 3236 may be associated with a score indicating likelihood of presence of prostate cancer…; ¶ 0173: Based on the scores determined for the tiles 3236 from the application of the inference model 3212, the model applier 3218 may select a subset from the set of tiles 3236 to form a subset 3238A-N (hereinafter generally referred to as subset 3238 or selected tiles 3238). In some embodiments, the model applier 3218 may select the tiles 3236 with the highest scores to form the subset 3238. The selected tiles 3238 may represent the tiles 3236 with the highest likelihood of including a feature correlated with or corresponding to the presence of the condition.).
Regarding claim 31, the Fuchs and Yip combination teaches a system for processing an image of tissue, comprising:
an input configured to receive an input image of tissue (¶ 0238: Additional devices 3430a-3430n have both input and output capabilities, including, e.g., haptic feedback devices, touchscreen displays, or multitouch displays.; ¶ 0245: In some embodiments, the computing device 3400 may have different processors, operating systems, and input devices consistent with the device.);
an output configured to output an indication of whether the input image is associated with a biomarker (¶ 0238: Additional devices 3430a-3430n have both input and output capabilities, including, e.g., haptic feedback devices, touchscreen displays, or multitouch displays.); and
one or more processors (¶ 0245: In some embodiments, the computing device 3400 may have different processors, operating systems, and input devices consistent with the device.), configured to:
obtain a first set of image portions from an input image of tissue received by way of the input (¶ 0157: For each biomedical image 3232, the tile generator 3216 may generate a set of tiles 3236A-N (hereinafter referred as tiles 3236) from the biomedical image 3232. Each tile 3236 may correspond to a portion of the biomedical image 3232. In some embodiments, the tile generator 3215 may partition or divide the biomedical image 3232 into the set of tiles 3236.; ¶ 0165: In some embodiments, the tile generator 3216 may receive the biomedical images 3232 from the imaging device 3204.);
select a second set of two or more image portions from the first set of image portions (¶ 0173: Based on the scores determined for the tiles 3236 from the application of the inference model 3212, the model applier 3218 may select a subset from the set of tiles 3236 to form a subset 3238A-N (hereinafter generally referred to as subset 3238 or selected tiles 3238). In some embodiments, the model applier 3218 may select the tiles 3236 with the highest scores to form the subset 3238. The selected tiles 3238 may represent the tiles 3236 with the highest likelihood of including a feature correlated with or corresponding to the presence of the condition.), the selecting comprising inputting image data of an image portion from the first set into a first trained model comprising a first convolutional neural network (¶ 0170: The inference model 212 may have one or more parameters to determine the score for each tile 3236… Each transform layer may be of a predefined size to generate the feature maps of a predefined size. In some embodiments, the inference model 3212 may be a convolutional neural network (CNN) and a deep convolutional network (DCN), among others, with the set of transform layers.), the first trained model generating an indication of whether the image portion is associated with a molecular biomarker (¶ 0172: By applying the inference model, the model applier 3218 may determine the score for each tile 3236. In some embodiments, the model applier 3218 may determine the score for each condition for each tile 3236. For example, one tile 3236 may be associated with a score indicating likelihood of presence of prostate cancer and another score indicating likelihood of bruising to the organ tissue on the tile 3236.) 
wherein the second set of the two or more image portions is selected based on the indication of whether the image portion is associated with the molecular biomarker (¶ 0173: Based on the scores determined for the tiles 3236 from the application of the inference model 3212, the model applier 3218 may select a subset from the set of tiles 3236 to form a subset 3238A-N (hereinafter generally referred to as subset 3238 or selected tiles 3238). In some embodiments, the model applier 3218 may select the tiles 3236 with the highest scores to form the subset 3238. The selected tiles 3238 may represent the tiles 3236 with the highest likelihood of including a feature correlated with or corresponding to the presence of the condition.);
determine an indication of whether the input image is associated with the molecular biomarker (¶ 0176: The label 3234 may indicate the presence of the condition on the corresponding medical image 3232… The threshold value when the label 3234 indicates lack of the condition may be the same or may differ from the threshold value when the label 3235 indicates the presence of the condition.), wherein the determining comprises inputting first data corresponding to the second set of two or more image portions into a second trained model (¶ 0173: The number of tiles 3238 selected from the original set of tiles 3236 may be in accordance to a predefined number, and may range from 1 to 50.; ¶ 0173: Under the runtime mode, with the selection from the tiles 3236, the model applier 3218 may apply the aggregation model 3214 onto the selected tiles 3238, and feed the selected tiles 3238 into the input of the aggregation model 3214.);
and output the indication by way of the output (¶ 0176: The label 3234 may indicate the presence of the condition on the corresponding medical image 3232… The threshold value when the label 3234 indicates lack of the condition may be the same or may differ from the threshold value when the label 3235 indicates the presence of the condition.).
Additionally, Yip teaches wherein the biomarker is a molecular biomarker (¶ 0007: Yet another tumor characteristic is the presence of specific molecules as a biomarker, including the molecule known as programmed death ligand 1 (PD-L1).; ¶ 0086: In some examples, the multiscale configurations contain pixel-level cell classifiers and cell segmentation models.; ¶ 0087: In some examples, the deep learning frameworks herein include a single-scale configuration trained using a multiple instance learning (MIL) strategy to predict biomarkers presence in histopathology images… In some examples, the single-scale configurations contain slide-level classifiers trained using gene sequencing data, such as RNA sequencing data, and trained to analyze histopathology images having slide-level labels, not tile-level labels.); and wherein the determining comprises a second trained model using an attention mechanism (¶ 0093: Biomarkers may be identified through any of the following models. Any models referenced herein may be implemented as artificial intelligence engines and may include gradient boosting models, random forest models, neural networks (NN), regression models, Naive Bayes models, or machine learning algorithms (MLA)… NNs include conditional random fields, convolutional neural networks, attention based neural networks... (emphasis added)).
Regarding claim 32, the Fuchs and Yip combination teaches a computer implemented method of training, comprising:
obtaining a first set of image portions from an input image of tissue (Fuchs ¶ 0157: For each biomedical image 3232, the tile generator 3216 may generate a set of tiles 3236A-N (hereinafter referred as tiles 3236) from the biomedical image 3232. Each tile 3236 may correspond to a portion of the biomedical image 3232. In some embodiments, the tile generator 3215 may partition or divide the biomedical image 3232 into the set of tiles 3236.);
inputting image data of an image portion from the first set into a first model comprising a first convolutional neural network (Fuchs ¶ 0165: In some embodiments, the tile generator 3216 may receive the biomedical images 3232 from the imaging device 3204.; ¶ 0170: In some embodiments, the inference model 3212 may be a convolutional neural network (CNN) and a deep convolutional network (DCN), among others, with the set of transform layers.), the first model generating an indication of whether the image portion is associated with a molecular biomarker (Fuchs ¶ 0172: By applying the inference model, the model applier 3218 may determine the score for each tile 3236. In some embodiments, the model applier 3218 may determine the score for each condition for each tile 3236. For example, one tile 3236 may be associated with a score indicating likelihood of presence of prostate cancer and another score indicating likelihood of bruising to the organ tissue on the tile 3236.); and
adapting the first model based on a label associated with the input image of tissue indicating whether the input image is associated with the biomarker (Fuchs ¶ 0166: In some embodiments, the tile generator 3216 may access the training database 3224 to retrieve the biomedical images 3232… The training database 3324 may maintain a set of biomedical images 3232 with the label 3234 for training the inference model 3212 and the aggregation model 3214. The label 3234 may indicate a presence or a lack of a condition on the biomedical image 3232.; ¶ 0174: The threshold value for the label 3234 may correspond to the occurrence of the condition specified by the label 3234, and may indicate a score at which to modify one or more parameters of the inference model 3212.).
selecting a second set of two or more image portions from the first set of image portions based on the indication of whether the image portion is associated with a molecular biomarker (Fuchs ¶ 0173: Based on the scores determined for the tiles 3236 from the application of the inference model 3212, the model applier 3218 may select a subset from the set of tiles 3236 to form a subset 3238A-N (hereinafter generally referred to as subset 3238 or selected tiles 3238). In some embodiments, the model applier 3218 may select the tiles 3236 with the highest scores to form the subset 3238. The selected tiles 3238 may represent the tiles 3236 with the highest likelihood of including a feature correlated with or corresponding to the presence of the condition.);
determining an indication of whether the input image is associated with the molecular biomarker (Fuchs ¶ 0176: The label 3234 may indicate the presence of the condition on the corresponding medical image 3232… The threshold value when the label 3234 indicates lack of the condition may be the same or may differ from the threshold value when the label 3235 indicates the presence of the condition.), wherein the determining comprises inputting first data corresponding to the second set of two or more image portions into a second trained model (Fuchs ¶ 0173: The number of tiles 3238 selected from the original set of tiles 3236 may be in accordance to a predefined number, and may range from 1 to 50.; ¶ 0173: Under the runtime mode, with the selection from the tiles 3236, the model applier 3218 may apply the aggregation model 3214 onto the selected tiles 3238, and feed the selected tiles 3238 into the input of the aggregation model 3214.); and
adapting the second model based on the label associated with the input image of tissue indicating whether the input image is associated with the molecular biomarker.
Additionally, Yip teaches wherein the biomarker is a molecular biomarker (¶ 0007: Yet another tumor characteristic is the presence of specific molecules as a biomarker, including the molecule known as programmed death ligand 1 (PD-L1).; ¶ 0086: In some examples, the multiscale configurations contain pixel-level cell classifiers and cell segmentation models.; ¶ 0087: In some examples, the deep learning frameworks herein include a single-scale configuration trained using a multiple instance learning (MIL) strategy to predict biomarkers presence in histopathology images… In some examples, the single-scale configurations contain slide-level classifiers trained using gene sequencing data, such as RNA sequencing data, and trained to analyze histopathology images having slide-level labels, not tile-level labels.); and wherein the second trained model comprises an attention mechanism (¶ 0093: Biomarkers may be identified through any of the following models. Any models referenced herein may be implemented as artificial intelligence engines and may include gradient boosting models, random forest models, neural networks (NN), regression models, Naive Bayes models, or machine learning algorithms (MLA)… NNs include conditional random fields, convolutional neural networks, attention based neural networks... (emphasis added)); and
adapting the second model based on the label associated with the input image of tissue indicating whether the input image is associated with the molecular biomarker (¶ 0093: A MLA or a NN may be trained from a training data set. In an exemplary prediction profile, a training data set may include imaging, pathology, clinical, and/or molecular reports and details of a patient, such as those curated from an EHR or genetic sequencing reports. MLAs include supervised algorithms (such as algorithms where the features/classifications in the data set are annotated) using linear regression, logistic regression, decision trees, classification and regression trees, Naive Bayes, nearest neighbor clustering… Training may include providing optimized datasets, labeling these traits as they occur in patient records, and training the MLA to predict or classify based on new inputs.; ¶ 0123: To analyze the received histopathology image data and other data, the imaging-based biomarker prediction system 102 includes a deep learning framework 150 that implements various machine learning techniques to generate trained classifier models for image-based biomarker analysis from received training sets of image data or sets of image data and other patient information. With trained classifier models, the deep learning framework 150 is further used to analyze and diagnose the presence of image-based biomarkers in subsequent images collected from patients.).
Regarding claim 34, the Fuchs and Yip combination teaches a system comprising a first model and a second model trained according to the method of claim 32 (Figure 32A, Ref. No. 3212 and 3214; ¶ 0164: The image classification system 3202 may include at least one feature classifier 3208, at least one model trainer 3210, at least one inference model 3212 (sometimes referred herein as an inference system), and at least one aggregation model 3214 (sometimes referred herein as an aggregation system), among others.).
Regarding claim 35, the Fuchs and Yip combination teaches a non-transitory computer readable storage medium comprising computer readable code configured to cause a computer to perform the method of claim 21 (¶ 0233: The central processing unit 3421 is any logic circuitry that responds to and processes instructions fetched from the main memory unit 3422.; ¶ 0248: Modules may be implemented in hardware and/or as computer instructions on a non-transient computer readable storage medium, and modules may be distributed across various hardware or computer based components.).
Regarding claim 36, the Fuchs and Yip combination teaches a non-transitory computer readable storage medium comprising computer readable code configured to cause a computer to perform the method of claim 32 (¶ 0233: The central processing unit 3421 is any logic circuitry that responds to and processes instructions fetched from the main memory unit 3422.; ¶ 0248: Modules may be implemented in hardware and/or as computer instructions on a non-transient computer readable storage medium, and modules may be distributed across various hardware or computer based components.).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDREW JONES whose telephone number is (703) 756-4573. The examiner can normally be reached Monday - Friday, 8:00-5:00 EST, and is off every other Friday.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matthew Bella, can be reached at (571) 272-7778. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ANDREW B. JONES/Examiner, Art Unit 2667
/MATTHEW C BELLA/Supervisory Patent Examiner, Art Unit 2667