Prosecution Insights
Last updated: April 19, 2026
Application No. 18/580,323

METHOD FOR CREATING MACHINE LEARNING MODEL FOR OUTPUTTING FEATURE MAP

Status: Non-Final Office Action (§101, §102, §103)
Filed: Jan 18, 2024
Examiner: FITZPATRICK, ATIBA O
Art Unit: 2677
Tech Center: 2600 — Communications
Assignee: National Institute of Advanced Industrial Science and Technology
OA Round: 1 (Non-Final)

Predictions
Grant probability: 88% (favorable)
Expected OA rounds: 1-2
Expected time to grant: 2y 8m
Grant probability with interview: 93%
Examiner Intelligence

Career allow rate: 88% (775 granted / 881 resolved; +26.0% vs. Tech Center average; above average)
Interview lift: +4.9% across resolved cases with an interview (a minimal ~+5% lift)
Typical timeline: 2y 8m average prosecution; 27 applications currently pending
Career history: 908 total applications across all art units

Statute-Specific Performance

§101: 12.3% (-27.7% vs. TC avg)
§103: 34.9% (-5.1% vs. TC avg)
§102: 22.8% (-17.2% vs. TC avg)
§112: 20.1% (-19.9% vs. TC avg)

Tech Center averages are estimates; based on career data from 881 resolved cases.
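The headline figures above are simple ratios over the examiner's resolved cases. A quick check of the arithmetic, using only the numbers quoted in the dashboard (the implied Tech Center average is derived here for illustration, not reported by the source):

```python
# Career figures quoted in the dashboard above.
granted, resolved = 775, 881

allow_rate = granted / resolved    # career allow rate
tc_avg = allow_rate - 0.26         # examiner is +26.0 points vs. TC average
print(f"allow rate {allow_rate:.1%}, implied TC average {tc_avg:.1%}")
# allow rate 88.0%, implied TC average 62.0%
```
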

Office Action

Rejections under §101, §102, and §103.
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in claim 45 of this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim 46 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim does not fall within at least one of the four categories of patent eligible subject matter because the limitation “computer readable storage medium for storing a program” can be reasonably interpreted as a program stored on a transitory computer readable medium, including a transitory signal.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 27, 28, 30-38 and 40-46 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by US 20200388029 A1 (Saltz).
As per claim 27, Saltz teaches a method of creating a machine learning model comprising: receiving a plurality of data for learning (Saltz: Fig. 1A: mainly 1-4; Fig. 1B: mainly 38; Fig. 1C: mainly 51, 57; Fig. 3B: mainly 531-533; Fig. 3C: mainly 500-501; Fig. 18: mainly 407; Fig. 19: mainly 437); classifying each of the plurality of data for learning into a respective initial cluster of a plurality of initial clusters by using an initial machine learning model, the initial machine learning model being caused to at least learn to output a feature of an inputted datum from the datum (Saltz: para 19: “deep classification learning models (for example, lymphocyte infiltration classification CNN and a necrosis segmentation algorithm) are implemented to generate tumor infiltrating lymphocyte maps that are useful in generating prognostic values in diagnosis and/or related classification”; Para 79: “trains a classification model in order to predict the respective labeling of TILs associated with computationally stained and digitized whole slide images of Hematoxylin and Eosin (H&E) stained pathology specimens obtained from biopsied tissue, and spatially characterizing TIL Maps that are generated by the system and method.”; Para 83: “trains the unsupervised autoencoder on image patches with nuclei in the center”; Para 104: “the system produces a probability map of TILs using an embodiment of a Convolutional Neural Network (CNN) model, that specifically identifies lymphocyte-infiltrated regions in whole slide tissue images (WSIs) of Hematoxylin and Eosin (H&E) stained tissue specimens”; Fig. 1A (shown below): mainly 5; Fig. 
5 (shown below): mainly 151-155; Para 178: “computational staining for TILs consists of three distinct and independent stages: 1) training the CNN model; 2) refining the CNN model and 3) deploying and using a model to identify TILs and produce a TIL MAP (or a heatmap that shows the areas of an image that are TILs).”); reclassifying the plurality of initial clusters into a plurality of secondary clusters based on the plurality of data for learning classified into the respective plurality of initial clusters; and creating a machine learning model by causing the initial machine learning model to learn a relationship between the plurality of initial clusters and the plurality of secondary clusters, the machine learning model classifying an inputted datum into one of the plurality of secondary clusters (Saltz: para 16: “This novel approach combines novel deep learning algorithms, as well as methodological optimizations that also incorporates and automates implementation of the intelligence and expert feedback from pathologists, without being overly disruptive or burdensome, and yet, is proven to be effective and useful in more refined cancer classifications and/or diagnosis”; para 20: “a classifier is then trained with the features and the labels assigned by the pathologist. At the end of this process, a classification model is generated, trained and even further re-trained. The classification step applies the classification model to unlabeled test images. Each test image is partitioned into patches. The classification model is then applied to each patch to predict the patch's label with respect to identified TILs and respective threshold levels.”; para 104: “Next, this probability map of TILs undergoes further review through a stratified sampling and/or possibly human feedback.
In alternate embodiment(s), this step 6 is fully automated through predictive and deep learning CNN models that automate any potential human feedback once learned by the CNN model.”; para 105: “FIG. 1A also incorporates human feedback and improves its learning as it undergoes further implementations, so that it becomes fully automated and/or partially relies on human feedback”; para 108: “The initial training steps 34-35 are followed by an iterative cycle of review and refinement steps in step 36 in order to improve the prediction accuracy of the lymphocyte CNN. This prediction step 36 generates a probability value of lymphocyte infiltration for each patch in the images set. The patch-level predictions for an image are combined and represented to pathologists as a heatmap (for example, as shown in FIG. 7B further described hereinbelow) for review and visual editing for example, using a TIL-Map editor tool. The pathologists refine the CNN predictions for an image in step 36 by first adjusting the probability value threshold (which globally updates the labels of the patches in the image). If the probability value of a patch exceeds the adjusted threshold, the patch is labeled a TIL patch. Next the system edits the heatmap to correct any identified prediction errors for individual or groups of patches. At the end of the editing step, the updated heatmaps are processed to augment the training dataset. The lymphocyte CNN is re-trained with the updated training dataset.
This iterative process continues until adequate prediction accuracy is achieved, as determined by the pathologist feedback (or alternatively automated feedback by the system once trained), whether automated or manual.”; para 120: “If the expert pathologists (or alternatively, the expert system deems the quality unsatisfactory) are not satisfied with the predictions, then additional training data is collected for this cancer type, and the CNNs are re-trained until the expert pathologists are satisfied with the prediction quality. Once deemed satisfactory, then the CNN model is considered trained and is saved to TIL-ModelDB.”; [0126] Referring further to FIG. 1C (and also as described in connection with FIGS. 3B-3C hereinbelow), the system implements an iterative workflow in order to train the CNN models. Iterative model training and respective data labeling is performed in the example embodiment. In an example embodiment, first, an unsupervised image analysis of WSIs is executed to initialize a CNN model. This model is refined in an iterative process in which CNN predictions are reviewed, corrected and refined by expert pathologists. The CNN model is re-trained with the updated data in order to improve its classification performance. After a training phase, the CNN model is applied to patches in the test set. For each test patch, the lymphocyte CNN produces a probability of the patch being a lymphocyte-infiltrated patch. The label of the patch is decided by simple thresholding as shown for example, in FIG. 3C and FIGS. 5A-5B workflows. If the probability value is above a predefined threshold, the patch is classified as lymphocyte-infiltrated, as shown for example in FIG. 1C and FIG. 3D workflows. [0165] Referring to overview of the system and method of determining probability maps associated with TILs, shown in FIG. 3A is an overview of the model, in accordance with an embodiment. Phase (1) 101 comprises initial training and model development. 
Phase (2) 102 comprises an iterative cycle of prediction, editing of spatial prediction map, and re-training. Phase (3) 103 comprises expert refinement of patches for final thresholding and refinement of the generated maps. In certain embodiments such expert refinement phase is automated as the system (and respective CNN model) is iteratively trained using copious image samples. In alternate embodiments, the system can receive such stratified sampling via human feedback by expert pathologist review of patches for final thresholding and refinement of maps. A user can edit the respective heatmap using the “Lymphocyte Sensitivity,” “Necrosis Specificity,” and “Smoothness” tools. For finer-grain editing, the user can use the “Markup Edit” tool to mark-up specific patches and label them as lymphocyte infiltrated or not. One example implementation provides for edited tumor infiltrating lymphocyte maps by pathologists. The red pixels represent lymphocyte infiltrated patches and the blue pixels indicate patches that are not infiltrated by lymphocytes. Paras 166, 167, 169, 178, 202, 204, 205, 210, 214, 218, 222-225, 228, 230, 234, 237, 240, 260, 265, 266, 274, 298-300; [0170] After this evaluation, another exemplary implementation applied the CNNs to diagnostic H&E WSIs from 13 TCGA tumor types (see below list of tumor types and acronyms) in which lymphocytes are known to be present and including uveal melanoma (UVM) as a type of negative control, as it has the fewest immune cells among TCGA tumors. Each image was partitioned into patches of 50×50 square microns and each patch was classified by the lymphocyte CNN. One or more pathologists examined the prediction results and annotated patches that had been classified incorrectly by the lymphocyte CNN. The lymphocyte CNN was re-trained with the updated dataset. This labeling and retraining process was repeated until the pathologists agreed that the prediction results are reasonable. 
The disclosed method incorporates a region segmentation phase using the necrosis segmentation CNN to account for necrosis regions. If the necrosis segmentation output was not adequate to support further analysis, the pathologists circled necrosis vs. non-necrosis regions and the necrosis CNN was retrained. In a final step, probability thresholds for determining tumor-infiltrating-lymphocyte (TIL) positive patches were set on an individual-slide basis. TIL maps were generated for 5156 TCGA tumor images from the 13 TCGA cancer types. Tumor types were selected on the basis of known positive involvement of lymphocytes and immunogenicity based on literature. [0226] In an example implementation, a team of three pathologists refined 10 to 20 WSIs in each cancer type using the TIL-Map editor. Each image was assigned to two pathologists. Each pathologist separately adjusted the “Lymphocyte Sensitivity,” “Necrosis Specificity,” “Smoothness” thresholds and manually edited regions in the images using the “Markup Edit” tool in order to generate an accurate patch-level classification in the entire image. Depending on the pathologists' consensus, if re-training was needed, the pathologists collaboratively generated a consensus lymphocyte heatmap for each image. Data from these consensus heatmaps was input back into the lymphocyte CNN in a training step to further improve its performance. [0230] Generally, the lym-CNN requires further retraining and the necrosis-CNN is generally well-trained. Additional training data for the lym-CNN may be collected in different ways. When implementing analysis of the LUAD and BRCA, pathologists label all patches in about 10 whole slide images, by correcting thresholded heatmaps. Then the system randomly sampled patches. Some of the retraining patches were collected via a patch-label website. One pathologist may process approximately 20 slides, and circle a representative region for each slide, overloading the “tumor” marker.
[0231] Pathologists may collectively label all patches in each region. However, an even more targeted approach by the system is to threshold the TIL heatmap on caMicroscope, and only correct miss-predicted patches, in an alternative embodiment. [0260] FIG. 5 illustrates a flowchart providing the training of a CNN based algorithm for generating tumor infiltrating lymphocyte map, in accordance with an embodiment of the disclosed system and method. The workflow involves an iterative process in which CNN predictions are reviewed, refined, and corrected, if necessary, further by expert pathologists. Manual corrections and refinements are then used to retrain the CNN models in order to improve their performance. In alternative embodiments, such refinements are part of the training models and already automated by prior training and human feedback. [0261] An exemplary iterative workflow 150 for model training and targeted data labeling is shown in FIG. 5. The iterative workflow 150 as illustrated in FIG. 5, is implemented to train respective CNN models. First, an unsupervised image analysis of WSIs is executed to initialize a CNN model. During step 151, the system trains an unsupervised convolutional autoencoder (CAE) (for example, as described hereinabove in exemplary embodiments of FIGS. 2A-2C). An algorithm (CNN) is first trained on image patches in step 151. The system initializes the lymphocyte CNN with the unsupervised CAE in step 152 and may further initialize the necrosis segmentation CNN randomly in step 153. The system next trains the CNNS on an initial supervised dataset in step 154. The system next applies the CNNs to generate probability heat maps 103, for lymphocyte regions (for example, as shown in FIG. 3A, 6C or 7B). This CNN model is further refined in an iterative process in which CNN predictions are reviewed, corrected, and refined by expert pathologists in step 156, in certain embodiments. 
In alternate embodiments, an expert system that is already trained by human pathologist review, correction and refinement is implemented in step 156 with a fully automated and trained module for refinement in place. The corrected and/or further refined heat maps are used to generate new predictions. The CNN model is re-trained with the updated data in order to improve its classification performance during step 158 and 157. The system extracts new training data for the edited heatmaps 148 and uses that data to retain the CNNs with all supervised data in step 157. Such retrained CNN is now applied to generate further refined probability maps for lymphocytes in step 155. Once the training model is perfected the system ends the refinements phase in step 156 and proceeds to generate and store the trained model in step 159. [0267] In an example implementation, a team of three pathologists reviewed and refined 10 to 20 WSIs in each cancer type by using the TIL-map editing tool. Each image was assigned to two pathologists. Each pathologist separately adjusted the “Lymphocyte Sensitivity,” “Necrosis Specificity,” and “Smoothness” thresholds (for example, as shown in FIGS. 7D-7F) and manually edited regions (see example mark-up 249 as shown in FIG. 7G; see example mark-up 254 as shown in FIG. 7H) in the images using the “Markup Edit” tool in order to generate an accurate patch-level classification in the entire image. Depending on the pathologists consensus, if retraining was needed, the pathologists collaboratively generated a consensus lymphocyte heatmap for each image. Data from these consensus heatmaps was inputted back to the lymphocyte CNN in a training step to further improve its performance. 
In an alternative embodiment, such pathologist expert input can also be implemented in an automated module that is trained to provide automated feedback via a mark-up module that was trained by prior pathologist expert input, thus eliminating the necessity for human feedback during implementation of the trained and tested CNN that is generated during step 159 of example workflow 150 illustrated in FIG. 5. Fig. 1A (shown below): mainly 6; Fig. 1B (shown below): mainly 36, 44, “Retrain CNN after pathologist review and correct predicted TILs”; Fig. 5 (shown below): mainly 156-158; Fig. 3C (shown below): mainly 50-508 [embedded figure images omitted]).

As per claim 28, Saltz teaches the method of claim 27, wherein the reclassifying comprises: presenting the plurality of data for learning classified into the respective plurality of initial clusters to a user; receiving a user input that associates each of the plurality of initial clusters to one of the plurality of secondary clusters; and reclassifying the plurality of initial clusters into a plurality of secondary clusters based on the user input (Saltz: See arguments and citations offered in rejecting claim 27 above).

As per claim 30, Saltz teaches the method of claim 27, wherein the plurality of secondary clusters are determined in accordance with a resolution of the plurality of data for learning (Saltz: See arguments and citations offered in rejecting claim 27 above; para 107: “the lymphocyte CNN is trained with 50×50 μm2 patches (equivalent to 100×100 square pixel patches in tissue images acquired at 20× magnification level) from WSIs.
The necrosis CNN is trained with larger patches of size 500×500 μm2, as more contextual information results in superior prediction of patches being necrotic.”; Para 114: “The CNN and the CAE are designed to have relatively high resolution input such that one can recognize individual lymphocytes… The CAE encodes (compresses) an input image patch of 50×50 μm2 (100×100 square pixels, corresponding to 20× magnification) into several vectors of length 100, and then reconstructs the input image patch using these encoding vectors”; Para 117: “the system models this as a segmentation problem with larger input patches at a relatively lower resolution: 500×500 μm2 patches are extracted from the image and downsampled 3 times. The resulting patch is 333×333 pixels at 20× magnification. The necrosis segmentation CNN outputs pixel-wise segmentation results. An example module to implement this task is DeconvNet because it is designed to predict pixel-wise class labels and handle structures and objects at multiple scales (which is more suitable for segmentation than patch-level classification). A necrosis segmentation CNN has been shown to achieve high prediction accuracy with several benchmark image datasets. Further the system trains a necrosis segmentation CNN to classify each pixel as inside or outside a necrosis region. The output of the necrosis segmentation CNN is re-sized to match the output resolution of the lymphocyte CNN. If over half of a 50×50 patch intersects with a necrotic region, the patch is classified as non-lymphocyte-infiltrated”; Fig. 1C (shown below): mainly 53, 58). As per claim 31, Saltz teaches the method of claim 27, wherein the plurality of data is a plurality of images (Saltz: See arguments and citations offered in rejecting claim 27 above). 
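The two-stage scheme recited in claims 27, 28, and 30 (classify data into fine-grained initial clusters with an initial model, merge the initial clusters into coarser secondary clusters, then create a model that classifies new data directly into secondary clusters) can be sketched as below. Everything here is an illustrative stand-in, not taken from the application or from Saltz: the synthetic data, the fixed centroids playing the role of the "initial machine learning model," and the hard-coded initial-to-secondary mapping standing in for the user input of claim 28.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data standing in for learned features of the data for learning.
data = np.vstack([rng.normal(c, 0.1, size=(20, 2)) for c in (0.0, 1.0, 2.0, 3.0)])

# Step 1: classify each datum into an initial cluster. A nearest-centroid
# rule over fixed centroids stands in for the initial ML model.
initial_centroids = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
initial_labels = np.argmin(
    np.linalg.norm(data[:, None, :] - initial_centroids[None, :, :], axis=2), axis=1)

# Step 2: reclassify initial clusters into secondary clusters; in claim 28
# this association comes from user input (here, a fixed illustrative map).
initial_to_secondary = {0: 0, 1: 0, 2: 1, 3: 1}
secondary_labels = np.vectorize(initial_to_secondary.get)(initial_labels)

# Step 3: "create" the final model by learning the initial->secondary
# relationship; a nearest-centroid rule over secondary clusters suffices here.
secondary_centroids = np.array(
    [data[secondary_labels == k].mean(axis=0) for k in (0, 1)])

def classify(x):
    """Classify an inputted datum into one of the secondary clusters."""
    return int(np.argmin(np.linalg.norm(secondary_centroids - x, axis=1)))

print(classify(np.array([0.5, 0.5])), classify(np.array([2.5, 2.5])))  # 0 1
```

In Saltz the analogous steps are an unsupervised CAE initialization followed by supervised retraining against pathologist-refined labels, rather than centroid rules.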
As per claim 32, Saltz teaches the method of claim 31, wherein the plurality of images for learning comprise at least one of (only one of the following is required) a plurality of partial images from fragmenting an image at a predefined resolution (Saltz: See arguments and citations offered in rejecting claims 27 and 30 above; para 107: “the lymphocyte CNN is trained with 50×50 μm2 patches (equivalent to 100×100 square pixel patches in tissue images acquired at 20× magnification level) from WSIs. The necrosis CNN is trained with larger patches of size 500×500 μm2, as more contextual information results in superior prediction of patches being necrotic.”; Fig. 1C (shown below): mainly 53, 58), an image (the immediately following is recited as intended use and is not required) for a pathological diagnosis (Saltz: See arguments and citations offered in rejecting claim 27 above: diagnos*; para 167: “ready for use in connection with classification and/or diagnosis”), an image of tissue of a subject with interstitial pneumonia, an image of tissue of a subject without interstitial pneumonia, or a plurality of images of subjects with different diseases (Saltz: See arguments and citations offered in rejecting claim 27 above; para 86: “tumor-infiltrating lymphocytes (TILs) are identified from standard pathology cancer images by a deep-learning-derived “computational stain”. The disclosed system and method processed 5,202 digital images from 13 cancer types”). As per claim 33, Saltz teaches the method of claim 27 further comprising repeating the receiving, the classifying, and the reclassifying of datum within at least one secondary cluster among the plurality of secondary clusters as the plurality of data for learning (Saltz: See arguments and citations offered in rejecting claim 27 above; Para 108: “The patch-level predictions for an image are combined and represented to pathologists as a heatmap (for example, as shown in FIG. 
7B further described hereinbelow) for review and visual editing for example, using a TIL-Map editor tool. The pathologists refine the CNN predictions for an image in step 36 by first adjusting the probability value threshold (which globally updates the labels of the patches in the image). If the probability value of a patch exceeds the adjusted threshold, the patch is labeled a TIL patch. Next the system edits the heatmap to correct any identified prediction errors for individual or groups of patches. At the end of the editing step, the updated heatmaps are processed to augment the training dataset. The lymphocyte CNN is re-trained with the updated training dataset. This iterative process continues until adequate prediction accuracy is achieved, as determined by the pathologist feedback (or alternatively automated feedback by the system once trained), whether automated or manual.”; Para 120: “If the expert pathologists (or alternatively, the expert system deems the quality unsatisfactory) are not satisfied with the predictions, then additional training data is collected for this cancer type, and the CNNs are re-trained until the expert pathologists are satisfied with the prediction quality. Once deemed satisfactory, then the CNN model is considered trained and is saved to TIL-ModelDB.”).

As per claim 34, Saltz teaches the method of claim 27, wherein the created machine learning model is used for outputting a feature map (Saltz: See arguments and citations offered in rejecting claim 27 above; Para 81: “the CAE detects and encodes nuclei in image patches in tissue images into sparse feature maps”; Para 88: “These TIL maps are derived through computational staining using a convolutional neural network trained to classify patches of images. In accordance with an embodiment, affinity propagation revealed local spatial structure in TIL patterns and correlation with overall survival. TIL map structural patterns were grouped using standard histopathological parameters.
These patterns are enriched in particular T cell subpopulations derived from molecular measures. TIL densities and spatial structure were differentially enriched among tumor types, immune subtypes, and tumor molecular subtypes, implying that spatial infiltrate state could reflect particular tumor cell aberration states.”; [embedded figure images omitted] Fig. 1B (shown above): mainly 32, 33, 38-47; Para 119: “In accordance with the embodiment of FIG. 1C, the approach implements deep learning models. The outputs of the two CNNs are combined to predict TILs and generate a probability map of TIL positive and TIL negative patches (TIL Maps). Next the lym-CNN and necrosis-CNN are used to first assign prediction values for each patch (100×100 pixels in 20×) in representative whole slide images from the training data. Additionally, computed is the color variance for each patch to eliminate white, non-tissue background. Expert pathologists examine the TIL Maps predictions or alternatively, an expert system is trained to automate such review of TIL Maps predictions. If the expert pathologists or system are satisfied with the quality of the predictions (by adjusting the lymphocyte sensitivity and necrosis specificity, referring to example shown in FIGS. 7D-7F) then the model is considered trained and is saved to TIL-ModelDB”; Para 122: “The CAE 72 detects and encodes nuclei in image patches in tissue images into sparse feature maps that encode both the location and appearance of nuclei”; Para 125: “A large input H&E patch 57 is processed to yield a TIL map, which is shown as final predicted TIL map 56. A sparse auto encoder as shown in 72 processes encoding of the input image patch 57 and smaller patches 51.
The lymphocyte CNN 53 first processes received smaller patches of 50×50 microns 51 within the large patch 57 at 20× magnification and predicts if those patches 51 are lymphocyte-infiltrated. Next the system displays predictions as a “heatmap” (for example as shown in FIG. 6C; FIG. 7B; FIG. 9B), superimposed on the H&E image (upper middle, TIL positive patches 54 shown in dark orange or darker shaded areas 52). The necrosis CNN 58 (lower left and lower middle) takes the larger region with more contextual information 59 to predict if patches are mostly necrotic (shown in light orange or lighter shaded areas of grey 49). The two results are then combined in step 55 as lymphocyte with necrosis filtering to generate the predicted TIL map 56 (shown as upper right predicted TIL map 56) for the final tumor infiltration lymphocyte prediction (TIL-positive patches 56 shown in dark orange or dark shaded areas 52)”; Fig. 3C (shown above): 500-505, 511). As per claim 35, Saltz teaches a method of creating a machine learning model, comprising: receiving a plurality of data classified into at least one secondary cluster by a machine learning model created in accordance with the method of claim 27; classifying each of the plurality of received data into a respective initial cluster of a plurality of initial clusters by using an initial machine learning model, the initial machine learning model being caused to at least learn to output a feature of an inputted datum from the datum; reclassifying the plurality of initial clusters into a plurality of secondary clusters based on the plurality of received data classified into the respective plurality of initial clusters; and creating a machine learning model by causing the initial machine learning model to learn a relationship between the plurality of initial clusters and the plurality of secondary clusters, the machine learning model classifying an inputted datum into one of the plurality of secondary clusters (Saltz: See arguments and 
citations offered in rejecting claims 27 and 33 above). As per claim 36, Saltz teaches a method of creating a feature map, comprising: receiving a target image; fragmenting the target image into a plurality of regional images (Saltz: See arguments and citations offered in rejecting claims 27 and 34 above; para 18: “The system detect and encodes nuclei in image patches into feature maps that encode both the location and appearance of nuclei.”; para 20: “The classification step applies the classification model to unlabeled test images. Each test image is partitioned into patches. The classification model is then applied to each patch to predict the patch's label with respect to identified TILs and respective threshold levels.”; Fig. 1B (shown above): mainly 34); classifying each of the plurality of regional images into a respective secondary cluster of the plurality of secondary clusters by inputting the plurality of regional images into a machine learning model created by the method of claim 31 (Saltz: See arguments and citations offered in rejecting claims 27 and 34 above; para 20: “The classification step applies the classification model to unlabeled test images. Each test image is partitioned into patches. The classification model is then applied to each patch to predict the patch's label with respect to identified TILs and respective threshold levels.”); and creating a feature map by separating each of the plurality of regional images in the target image in accordance with respective classifications (Saltz: See arguments and citations offered in rejecting claim 34 above). 
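The fragment-classify-assemble flow recited in claim 36 can be sketched as follows. This is a minimal editorial illustration, not the applicant's or Saltz's actual pipeline; `classify_patch` and the toy intensity-threshold classifier are hypothetical stand-ins for a trained secondary-cluster model.

```python
import numpy as np

def create_feature_map(image: np.ndarray, patch: int, classify_patch) -> np.ndarray:
    """Fragment `image` (H x W x C) into non-overlapping patch x patch
    regional images, classify each into a secondary-cluster label, and
    assemble the labels into a coarse feature map."""
    h, w = image.shape[0] // patch, image.shape[1] // patch
    fmap = np.zeros((h, w), dtype=int)
    for i in range(h):
        for j in range(w):
            region = image[i * patch:(i + 1) * patch,
                           j * patch:(j + 1) * patch]
            fmap[i, j] = classify_patch(region)  # secondary-cluster index
    return fmap

# Hypothetical stand-in classifier: label a patch by its mean intensity.
toy = lambda region: int(region.mean() > 0.5)

img = np.zeros((100, 100, 3))
img[:50] = 1.0  # bright upper half
print(create_feature_map(img, 50, toy))  # [[1 1]
                                         #  [0 0]]
```

Each cell of the returned map corresponds to one regional image, which is what makes the later per-cluster coloring and frequency steps straightforward.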
As per claim 37, Saltz teaches the method of claim 36, wherein the separating comprises coloring regional images belonging to the same classification among the plurality of regional images with the same color (Saltz: See arguments and citations offered in rejecting claims 34 and 36 above: computerized staining is coloring; Para 119: “The outputs of the two CNNs are combined to predict TILs and generate a probability map of TIL positive and TIL negative patches (TIL Maps). Next the lym-CNN and necrosis-CNN are used to first assign prediction values for each patch (100×100 pixels in 20×) in representative whole slide images from the training data. Additionally, computed is the color variance for each patch to eliminate white, non-tissue background”). As per claim 38, Saltz teaches a method of estimating a state related to a disease of a subject, comprising: obtaining a feature map created in accordance with the method of claim 36, the target image being an image of tissue of the subject; and estimating a state related to a disease of the subject based on the feature map (Saltz: See arguments and citations offered in rejecting claims 34 and 36 above). As per claim 40, Saltz teaches the method of claim 38, wherein the estimating a state related to a disease of the subject based on the created feature map comprises: calculating a frequency of each of the plurality of secondary clusters from the feature map; and estimating a state related to the disease based on the frequency (Saltz: See arguments and citations offered in rejecting claim 38 above; para 105: “The system next in step 8 incorporates clinical data 13 and any molecular data, in a further integrated and refined TIL Map to produce prognostic caliber numerical statistics and patient level summaries in step 9. 
These statistics and summaries have been shown to possess diagnostic and prognostic value and may include digitized images, TIL Maps, patient level summaries and/or more targeted classifications of relevant regions of tumoral tissue samples in step 9”; Fig. 3C (shown above): mainly 512; [0328] Examples of Kaplan-Meier curves for median-split clustering indices are shown in FIGS. 13C (BRCA) and 13D (SKCM). In SKCM, increased Banfield Raftery-index (“cluster count”) associates with superior survival, while in BRCA increased Ball-Hall index (“cluster extent”) associates with inferior survival, both adjusted for overall TIL density. Of interest, checkpoint inhibition immunotherapy has been successfully applied to melanoma, while breast cancer tumors have generally been unresponsive to check-point blockade therapy. The association of structure with survival, as evidenced by less favorable survival in tumors with elevated adjusted Ball-Hall index (“cluster extent”) could be worthy of further investigation as a stratification factor for patient tumors in clinical studies of response.). As per claim 41, Saltz teaches the method of claim 38, wherein creating the feature map comprises creating a plurality of feature maps, the plurality of feature maps having different resolutions from one another (Saltz: See arguments and citations offered in rejecting claim 38 above; Para 116: “The system implements two different CNNs for classification of necrosis regions and TILs, because evaluation results showed necrosis regions and lymphocytes are better recognized and classified at different image scales. The necrosis CNN model 58 performs better with larger input tissue regions, whereas the lymphocyte CNN model 53 achieves the better targeted results with local, high-resolution image patches (referring to example shown in FIG. 1C).”). 
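The frequency calculation recited in claims 40 and 42, counting how often each secondary cluster appears in a feature map, reduces to a normalized histogram, and claim 41's multi-resolution variant simply repeats it on maps of different resolutions. A hedged sketch under those assumptions (the strided subsampling is only a hypothetical way to obtain a lower-resolution map):

```python
import numpy as np

def cluster_frequencies(fmap: np.ndarray, n_clusters: int) -> np.ndarray:
    """Fraction of regional images assigned to each secondary cluster."""
    counts = np.bincount(fmap.ravel(), minlength=n_clusters)
    return counts / counts.sum()

fmap = np.array([[0, 1, 1],
                 [2, 1, 0]])
print(cluster_frequencies(fmap, 3))  # [2/6, 3/6, 1/6]

# Claim 41-style lower-resolution map (here: naive strided subsampling);
# the same frequency calculation then applies at each resolution.
coarse = fmap[::2, ::2]
print(cluster_frequencies(coarse, 3))
```

The resulting frequency vector is the kind of per-slide summary statistic on which a downstream state-of-disease estimate could operate.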
As per claim 42, Saltz teaches the method of claim 41, wherein the estimating a state related to a disease based on the created feature map comprises: calculating a frequency of each of the plurality of secondary clusters from each of the plurality of feature maps; and estimating a state related to the disease based on the frequency (Saltz: See arguments and citations offered in rejecting claim 40 above). As per claim 43, Saltz teaches the method of claim 41, wherein the estimating a state related to a disease based on the created feature map comprises: identifying an error in at least one of the plurality of feature maps by using the plurality of feature maps; and estimating a state related to the disease based on at least one feature map excluding the at least one feature map in which an error has been identified (Saltz: See arguments and citations offered in rejecting claim 41 above; para 129: “At each step, a set of slides is excluded from further processing, as follows in step 67 the set is excluded for various reasons such as: 1) no TIL Map was generated Corrupt Image File: either the image file is corrupted, unable to be read, or the image only contains a small portion of the whole slide; 2) Low Resolution: The image does not have enough high resolution (of at least 20×) to be processed by the CNN model; 3) Out of focus: The image is out of focus; 4) Bad Image File: The image is either captured with bad quality, or marked by markers; 5) Processing/Prediction Failed: Either the pipeline failed processing those slides because of malfunctions such as process being terminated or ended in the middle of the process, or the lymphocyte predictions are not good (i.e., a visual inspection of the images showed too many incorrectly labeled patches—results for some of the images, for example, had a high false positive rate due to the cytology of the tumor cells that closely resembled lymphocytes); or 6) Duplicated Image File: there is another image file corresponding to 
the same diagnostic slide barcode. In step 68 image files with TIL maps are excluded for another set of reasons: no Cluster File was Generated—in clustering indices process, some of the slides have too many TIL patches. As a result, either the clustering indices algorithm cannot fit them into memory to process or it may take too long to finish clustering those slides. Those slides do not yield cluster file results.”; para 130: “image files including cluster output may be excluded for other reasons as follows: exclusions to Create Final List of Single Slide per Participant. For each participant a single slide is selected where multiple slides are available as follows—only the slide containing label DX1 is selected (not labels DX2, DX3, . . . ). In 15 cases there were two DX1 slides for each patient, and one was slide chosen by random sampling. Finally, only slides from TCGA participants with data included in the PanCancer Atlas cohort (the PanCancer Atlas whitelist) are retained for final integrative analysis work in the example embodiment.”). As per claim 44, Saltz teaches the method of claim 38, further comprising: analyzing survival time of the subject whose state related to the disease has been estimated based on the created feature map; and identifying at least one secondary cluster contributing to the estimated state among a plurality of secondary clusters in the feature map (Saltz: See arguments and citations offered in rejecting claim 38 above; para 11: “many observations suggest that high densities of tumor-infiltrating lymphocytes (TILs) correlate with favorable clinical outcomes, such as longer disease-free survival or improved overall survival (OS) in multiple cancer types.”; para 321: “Clustering indices vary widely over slides, as illustrated in FIG. 13A for the Ball-Hall index. 
Tumors with relatively high values of this index, such as BRCA and PRAD, are not among those with highest overall infiltrate (referring to FIG. 10A; Panel A). Since the Ball-Hall index scales with approximately cluster extent, this implies that, in some of these tumor types of moderate infiltrate mass, TIL clusters of relatively large spatial extent are formed. In summary, this implies that, in some tumor types, local clustering of TILs may be a more distinctive feature than overall TIL infiltrate, in comparison with other tumor types.”; para 322: “Shown in FIGS. 13A-D, are the Associations of TIL Local Spatial Structure with Cancer Type and Survival. Associations are shown with respective cluster indices, which summarize properties of clusters derived from affinity propagation clusters of the TIL map—properties that provide details on local structure beyond simple densities.”; [0325] Graphical representation in FIG. 13C, provides overall survival for median-stratified TIL fraction-adjusted Ball-Hall index in breast cancer. Significance test p value is shown in the lower left of the FIG. 13C. [0326] In FIG. 13D, shown is a graphical representation similar to FIG. 13C, but for adjusted Banfield-Raftery index in skin cutaneous melanoma. The Banfield-Raftery index is the weighted sum of the logarithms of the mean cluster dispersion and, in the data, often correlates with the number of clusters. Referring also to related FIG. 14, shown is a graphical representation of relation among scores of local spatial structure of the tumor immune with infiltrate Pearson correlation coefficients relating each cluster characterization to all others. The colorbar (or gradient grey scale in black and white versions) shows the correlation coefficient value. [0328] Examples of Kaplan-Meier curves for median-split clustering indices are shown in FIGS. 13C (BRCA) and 13D (SKCM). 
In SKCM, increased Banfield Raftery-index (“cluster count”) associates with superior survival, while in BRCA increased Ball-Hall index (“cluster extent”) associates with inferior survival, both adjusted for overall TIL density. Of interest, checkpoint inhibition immunotherapy has been successfully applied to melanoma, while breast cancer tumors have generally been unresponsive to check-point blockade therapy. The association of structure with survival, as evidenced by less favorable survival in tumors with elevated adjusted Ball-Hall index (“cluster extent”) could be worthy of further investigation as a stratification factor for patient tumors in clinical studies of response. Figs. 13C-D: survival probability, time). 45. (new): A system for creating a machine learning model, comprising: receiving means for receiving a plurality of data for learning; classifying means for classifying each of the plurality of data for learning into a respective cluster of a plurality of initial clusters by using an initial machine learning model, the initial machine learning model being caused to at least learn to output a feature of an inputted datum from the datum; reclassifying means for reclassifying the plurality of initial clusters into a plurality of secondary clusters based on the plurality of data for learning classified into the respective plurality of initial clusters; and creating means for creating a machine learning model by causing the initial machine learning model to learn a relationship between the plurality of initial clusters and the plurality of secondary clusters, the machine learning model classifying an inputted datum into one of the plurality of secondary clusters. 46. 
(new): A computer readable storage medium for storing a program for creating a machine learning model, the program being executed in a computer system comprising a processing unit, the program causing the processing unit to execute processing comprising: receiving a plurality of data for learning; classifying each of the plurality of data for learning into a respective cluster of a plurality of initial clusters by using an initial machine learning model, the initial machine learning model being caused to at least learn to output a feature of an inputted datum from the datum; reclassifying the plurality of initial clusters into a plurality of secondary clusters based on the plurality of data for learning classified into the respective plurality of initial clusters; and creating a machine learning model by causing the initial machine learning model to learn a relationship between the plurality of initial clusters and the plurality of secondary clusters, the machine learning model classifying an inputted datum into one of the plurality of secondary clusters. Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claim(s) 29 is rejected under 35 U.S.C. 103 as being unpatentable over Saltz as applied to claim 38 above, and further in view of US 20210158094 A1 (Ji). As per claim 29, Saltz teaches the method of claim 28. Saltz does not teach the plurality of secondary clusters are defined by the user. Ji teaches the plurality of secondary clusters are defined by the user (Ji: Para 39: “In splitting the first category of images into a first subcategory and a second subcategory, the system can generate a new training data set with images in the first area in the feature map retaining a tag specifying the corresponding user-defined category and images in the second area in the feature map being tagged with a different label. This different label may be derived from the user-defined category. 
Additionally, based on the label generated for the images in the second area, the system can generate a mapping between internal categories used in retraining the convolutional neural network (or other machine learning model) and the user-defined categories originally included in the training data set so that an output of an image classifier using the retrained convolutional neural network corresponds to one of the user-defined categories rather than an internal category.”; Para 50: “Based on the determination that user-defined category A should be split into internal categories A and A′, the system can generate a new training data set or edit the current training data set to assign a label corresponding to internal category A′ to the images having features mapped to feature space A′ in feature map 400. Additionally, the system can generate a map 410 between the internal categories generated by the system and used to retrain the convolutional neural network and the user-defined categories, which, as discussed, may be used after the convolutional neural network has categorized an image to one of the internal categories, to identify the user-defined category to which the image belongs.”). Thus, it would have been obvious for one of ordinary skill in the art, prior to filing, to implement the teachings of Ji into Saltz since both Saltz and Ji suggest a practical solution of user feedback in correcting machine learning model classifications in improving training of the model in general and Ji additionally provides teachings that can be incorporated into Saltz in that the classifications are user-defined since “Some anomalies may be relatively common, while other anomalies may be less common” (Ji: para 4) “in a user-defined category such that the machine learning model classifies a received image as one of the sub-categories of images” (Ji: para 21). The teachings of Ji can be incorporated into Saltz in that the classifications are user-defined. 
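Ji's mapping between internal categories and user-defined categories (paras 39 and 50 as quoted above) amounts to a lookup applied after classification, so that a category split for retraining is still reported to the user under its original label. A minimal sketch with hypothetical category names:

```python
# Hypothetical category names: user-defined category "A" was split into
# internal sub-categories "A" and "A_prime" for retraining (per Ji);
# "B" was left unsplit.
internal_to_user = {"A": "A", "A_prime": "A", "B": "B"}

def report(internal_label: str) -> str:
    """Translate the model's internal category into the user-defined one."""
    return internal_to_user[internal_label]

print(report("A_prime"))  # -> "A"
```

The lookup is what lets the classifier's output "correspond to one of the user-defined categories rather than an internal category," in Ji's phrasing.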
Furthermore, one of ordinary skill in the art could have combined the elements as claimed by known methods and, in combination, each component functions the same as it does separately. One of ordinary skill in the art would have recognized that

Prosecution Timeline

Jan 18, 2024
Application Filed
Dec 02, 2025
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602854
SYSTEM AND METHOD FOR MEDICAL IMAGING
2y 5m to grant Granted Apr 14, 2026
Patent 12586195
OPHTHALMIC INFORMATION PROCESSING APPARATUS, OPHTHALMIC APPARATUS, OPHTHALMIC INFORMATION PROCESSING METHOD, AND RECORDING MEDIUM
2y 5m to grant Granted Mar 24, 2026
Patent 12579649
RADIATION IMAGE PROCESSING APPARATUS AND OPERATION METHOD THEREOF
2y 5m to grant Granted Mar 17, 2026
Patent 12555237
CLOSEUP IMAGE LINKING
2y 5m to grant Granted Feb 17, 2026
Patent 12548221
SYSTEMS AND METHODS FOR AUTOMATIC QUALITY CONTROL OF IMAGE RECONSTRUCTION
2y 5m to grant Granted Feb 10, 2026
Based on the 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
88%
Grant Probability
93%
With Interview (+4.9%)
2y 8m
Median Time to Grant
Low
PTA Risk
Based on 881 resolved cases by this examiner. Grant probability derived from career allow rate.
