Prosecution Insights
Last updated: April 19, 2026
Application No. 18/627,251

BIOLOGICAL CONTEXT FOR ANALYZING WHOLE SLIDE IMAGES

Non-Final OA: §101, §103, §112
Filed: Apr 04, 2024
Examiner: ROSARIO, DENNIS
Art Unit: 2676
Tech Center: 2600 — Communications
Assignee: Genentech Inc.
OA Round: 1 (Non-Final)

Grant Probability: 69% (Favorable)
OA Rounds: 1-2
To Grant: 3y 8m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 69% — above average (385 granted / 557 resolved; +7.1% vs TC avg)
Interview Lift: +28.6% — strong (measured over resolved cases with an interview)
Typical Timeline: 3y 8m avg prosecution; 34 currently pending
Career History: 591 total applications across all art units
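The career figures above are internally consistent, which a few lines of arithmetic can confirm (a reader's sanity check on the page's own numbers, not new data):

```python
# Sanity-check the dashboard's examiner statistics for internal consistency.
# All figures below are taken directly from the page.

granted = 385       # career grants
resolved = 557      # career resolved cases
total_apps = 591    # total applications across all art units

# Career allow rate: 385 / 557 = 69.1%, matching the 69% headline.
allow_rate = 100 * granted / resolved
print(f"allow rate: {allow_rate:.1f}%")     # 69.1%

# Pending count should equal total minus resolved: 591 - 557 = 34,
# matching the "34 currently pending" figure.
pending = total_apps - resolved
print(f"pending: {pending}")                # 34
```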

Statute-Specific Performance

§101: 16.5% (-23.5% vs TC avg)
§103: 40.3% (+0.3% vs TC avg)
§102: 24.6% (-15.4% vs TC avg)
§112: 13.6% (-26.4% vs TC avg)
Tech Center averages are estimates • Based on career data from 557 resolved cases
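Each per-statute delta reconstructs the same Tech Center baseline, which suggests a single TC-average estimate of 40.0% underlies every comparison (a reader's arithmetic check; the 40.0% figure is inferred, not stated on the page):

```python
# Reconstruct the implied Tech Center baseline from each statute's
# rate and its "vs TC avg" delta: baseline = rate - delta.
stats = {
    "§101": (16.5, -23.5),
    "§103": (40.3, +0.3),
    "§102": (24.6, -15.4),
    "§112": (13.6, -26.4),
}
for statute, (rate, delta) in stats.items():
    baseline = round(rate - delta, 1)
    print(statute, baseline)   # every statute implies the same 40.0% baseline
```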

Office Action

§101 §103 §112
DETAILED ACTION

Claims 12 and 14 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea (math and mental process) without significantly more.

All obviousness rejections are under 35 U.S.C. 103 and build on SALTZ et al. (US 2020/0388029 A1) in view of CASALE et al. (US 2023/0360758 A1):
- Claims 1, 15, 16, 17, 18, and 19: over SALTZ in view of CASALE.
- Claim 2: further in view of Tu et al. (US 2023/0070286 A1).
- Claims 3 and 4: further in view of Tu, further in view of Martinez Manzano et al. (US 2023/0401590 A1).
- Claims 5 and 9: further in view of Chiu et al. (US 2020/0357143 A1).
- Claim 6: further in view of Chiu, further in view of Xiao et al. (US 2023/0274248 A1).
- Claim 7: further in view of Chiu, further in view of Bassi (US 2006/0050074 A1).
- Claim 8: further in view of Chiu, further in view of Bassi, further in view of OGASAWARA et al. (US 2023/0316489 A1).
- Claim 10: further in view of Chiu (as applied to claims 5 and 19), further in view of Hossain et al. (Bi-SAN-CAP: Bi-Directional Self-Attention for Image Captioning).
- Claim 11: further in view of Chiu, further in view of Bai et al. (US 2022/0270353 A1).
- Claim 12: further in view of Chiu, further in view of Bai, further in view of Geng et al. (US 2023/0153531 A1).
- Claim 13: further in view of Chiu, further in view of KAZUHIRO (JP 2016-158059 A, with SEARCH machine translation).
- Claim 14: further in view of Chiu, further in view of KAZUHIRO, further in view of Jaiswal et al. (US 11,775,617 B1).
- Claim 20: further in view of Chiu, further in view of LIU et al. (CN 112163608 A, with SEARCH machine translation).

Response to Amendment

The preliminary amendment was received 4/25/2024. Claims 1-20 are pending.

Priority

Applicant's claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged. Applicant has not complied with one or more conditions for receiving the benefit of an earlier filing date under 35 U.S.C. 120, 121, 365(c), or 386(c) as follows: The later-filed application must be an application for a patent for an invention which is also disclosed in the prior application (the parent or original nonprovisional application or provisional application). The disclosure of the invention in the parent application and in the later-filed application must be sufficient to comply with the requirements of 35 U.S.C. 112(a) or the first paragraph of pre-AIA 35 U.S.C. 112, except for the best mode requirement. See Transco Products, Inc. v. Performance Contracting, Inc., 38 F.3d 551, 32 USPQ2d 1077 (Fed. Cir. 1994). The disclosure of the prior-filed application, Application No.
PRO 63/253,514, filed 10/07/2021, fails to provide adequate support or enablement in the manner provided by 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph, for one or more claims of this application:
- The claimed "global pattern" of claims 1, 18, and 19 is not in Application No. PRO 63/253,514.
- The claimed "histological feature" of claims 1, 18, 19, and 20 is not in Application No. PRO 63/253,514.
- The claimed "the spatial attention models attention to a microscopic visual pattern" of claim 20 is not in Application No. PRO 63/253,514.
- The claimed "the semantic attention models attention to a macroscopic visual pattern" of claim 20 is not in Application No. PRO 63/253,514.

Accordingly, claims 1-20 are not entitled to the benefit of the prior application (Application No. PRO 63/253,514, filed 10/07/2021).

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 12 and 14 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 12 recites the limitation "the encoded learnable token" in the last line. There is insufficient antecedent basis for this limitation in the claim. "The encoded learnable token" is interpreted as --[[the]] an encoded learnable token--. Claim 14 recites the limitation "the transformer model" in the last line. There is insufficient antecedent basis for this limitation in the claim. "The transformer model" is interpreted as --[[the]] a transformer model--.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea (math, mental process, and human activity) without significantly more. The claims recite "an embedding"; "the embedding"; "encoding the corresponding embedding"; "generating a representation"; "combining the encoded patch embeddings"; "performing a pathological task"; "the transformer model". This judicial exception is not integrated into a practical application because the additional elements (such as "image", "patches", "features", "context", "pattern" in claim 1, representative of claims 1, 18, 19, and 20; "the transformer" in claim 14) do not improve technology, a technical field, or the functioning of a computer, nor reflect such an improvement, in view of applicant's disclosure (paragraphs [2][33][34][35][59]). The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements adhere to conventional practices in view of applicant's disclosure (BACKGROUND [3]):

Claim 1.
(Original) A computer-implemented method for analyzing a whole slide image (WSI) in light of biological context, comprising: extracting an embedding for each of a set of patches sampled from a WSI, wherein the embedding represents one or more histological features of the respective patch of the WSI; for each of the patches, encoding the corresponding embedding with a spatial context and a semantic context, wherein the spatial context represents a local pattern related to the one or more histological features, the local visual pattern spanning a region in the WSI beyond the corresponding patch, and wherein the semantic context represents a global pattern over the WSI as a whole; generating a representation for the WSI by combining the encoded patch embeddings; and performing a pathological task based on the representation for the WSI.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 15, 16, 17, 18, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over SALTZ et al. (US 2020/0388029 A1) in view of CASALE et al. (US 2023/0360758 A1).

Re 1. (Original), SALTZ discloses A computer-implemented method for analyzing a whole slide image (WSI) (or "representative H&E diagnostic whole-slide images (WSIs)" [0033]: fig. 1C:57, a patch 57 with smaller patches 51, is of the WSIs in fig. 1B:38) in light of biological context, comprising: extracting (via "a Convolutional Autoencoder (CAE) for fully unsupervised, simultaneous nucleus detection and feature extraction in histopathology tissue images" [0080]: fig. 1C:72: "Sparse Auto Encoder": fig. 2A:72: "Sparse Autoencoder": fig. 5:151: "Train an unsupervised Convolutional Autoencoder (CAE)") an embedding (fig. 8G's mapping from data points to an image at "Step 5" & "an identity mapping layer" [0379]: fig. 2B: "Part 6" is the closest to the claimed "embedding"; however, SALTZ does not teach "embedding") for ("the center of") each ("patch" [0098] 1st S: fig. 1C:51: squares) of a set (or "four datasets" [0081] penult S) of patches sampled (via "tumoral tissue samples" [0079] last S) from a WSI (or "computationally stained and digitized whole slide images of Hematoxylin and Eosin (H&E) stained pathology specimens obtained from biopsied tissue" [0079] penult S), wherein the embedding (mapping layer) represents one or more histological features (or "histopathology tissue image" "feature" [0080] 3rd S) of the respective (center) patch (51) of the WSI; for each of the patches (51), encoding (via said CAE) the corresponding (layer) embedding with a ("larger" [0107] last S) spatial context (fig. 1C:59 relative to fig. 1C:51 via "Recent studies further suggest that the spatial context and the nature of cellular heterogeneity of the tumor microenvironment, in terms of the immune infiltrate into the tumor center and/or invasive margin, are important and correlate with cancer prognosis." [0011] 4th S) and a semantic context (i.e., a "training data" "set" --i.e., an interpreting dataset-- [0097] 3rd S, for a neural network comprising "more contextual information" [0107] last S {of the term "spatial context" [0011] 4th S: the "spatial context" or the larger patches having as its semantic element "the immune infiltrate into the tumor center and/or invasive margin" SALTZ [0011] penult S}: represented as fig. 1C:59: "larger region with more context" [0125] 5th S surrounding a patch 51), wherein the (larger) spatial (tumor-cell infiltrate) context (59) represents a local pattern (via "local patterns…represented among tumor types, immune subtypes, and tumor molecular subtypes" [0339] 2nd S, comprising said infiltrated tumor center) related to the one or more histological features, the local visual (tumor-center) pattern spanning (resulting in "a local…spread out pattern…across…tumor…boundaries" [0318] 1st, 2nd Ss & "spanning tumor types" [0091] penult S) a region (or "different tumor regions" [0090] 2nd to last S: fig. 1B:34: "Extract patches from marked regions") in the WSI beyond the corresponding ("to 20x20 magnification", [0114] last S, center) patch (given that the WSI is understood to be more than (i.e., beyond) a corresponding patch and is rather a plurality of corresponding patches), and wherein the semantic (spatial) context (via "The CNN first uses a "learn"-"training data" "set", [0097] 3rd S, "as more contextual information results in superior prediction of patches being necrotic", [0107] last S {of semantic elements: --the immune infiltrate into the tumor center and/or invasive margin--}: fig. 1B:38: "Unlabeled set of WSI H&E Images (5455 images, 13 cancer types)": one of which is shown in fig. 12A as a pattern) represents a global pattern ("for the four examples provided in FIGS. 12A-D" [0313]: figs. 12A-D: "TABLE 4": "Global Pattern" column, such that the "overall structural patterns are differentially represented among…the tumor", [0339] 2nd S {, wherein tumor is a semantic element of said spatial context}) over (as shown in fig. 1B:38: dark shape form inside a square) the WSI as a whole; generating a representation (via "generate the foreground reconstructed image 87 and background reconstructed image 89" [0145] penult S) for the WSI by combining (via "Finally the two intermediate images 87, 89 are summed to form the final reconstructed image at step 90" [0145] last S) the encoded (via said CAE) patch (layer) embeddings; and performing a pathological ("pipeline" [0084] 1st S) task (via a "pathology" "classification model" [0079] 2nd S: fig. 1A:5: "Trained Model") based on the representation for the WSI.

SALTZ does not teach the difference of claim 1 of: A) an embedding… B) the embedding represents (one or more histological features)… C) the corresponding embedding… D) the embeddings.

CASALE teaches the difference of claim 1 of: A) ("Contrastive learning models can extract" [0193] 2nd S) an embedding (fig. 1: "STAGE 1": "GENERATING EMBEDDINGS FROM IMAGE DATA")… B) the embedding represents ("a progression of the disease of interest" [0232] 4th S: fig. 1: "STAGE 3": "GENERATING A VISUALIZATION OF PHENOTYPIC EFFECTS") (one or more histological features)… C) the corresponding ("tile" [0232] 2nd to last S) embedding (fig. 3A:308: "TILE EMBEDDING")… D) the embeddings ("are linearly predictive of biological endpoints or labels (e.g., progression of the disease of interest) that may otherwise be assigned to such data" [0193] 2nd S).
Since SALTZ teaches a neural network (CAE) and "complex diseases": [0080] "Historically, histopathology images are crucial to the study of complex diseases such as cancer. The histologic characteristics of nuclei play a key role in disease diagnosis, prognosis and analysis. In accordance with an embodiment, disclosed is a Convolutional Autoencoder (CAE) for fully unsupervised, simultaneous nucleus detection and feature extraction in histopathology tissue images that is proven useful for the pipeline of digital image pathology analysis, including the quantification of TILs used in the classification and prognosis of cancer cells.", one of skill in the art of neural networks (CAE) and complex diseases can make SALTZ's be as CASALE's, seeing the change "can be applied to complex diseases such as polygenic diseases to enable target identification, cross-clinical trial analysis, and enhance interpretability.", CASALE [0191] 2nd S, or enhance CAE-CNN training.

Re 15. (Original), SALTZ of the combination (illustrated above) of SALTZ, CASALE teaches The method of claim 1, wherein combining the encoded patch embeddings comprises taking an average (via "averaging the tile embeddings", CASALE [0033]) of the encoded embeddings.

Re 16. (Original), SALTZ of the combination (illustrated above) of SALTZ, CASALE teaches The method of claim 1, wherein performing a pathological task based on the annotation of the WSI comprises: classifying the one or more histological features extracted from the WSI; classifying a pathological type ("in clinical analysis and/or prognosis and result in more accurate patient summaries including more accurate classification of the respective cancer type" [0205] 4th S) of the WSI; predicting a progression risk of a disease associated with the one or more histological features; or determining a diagnosis (via "extract, quantify, characterize and correlate TIL Maps using digitized H&E stained diagnostic tissue slides that are routinely obtained as part of cancer diagnosis", SALTZ [0016]) of a patient associated with the WSI.

Re 17. (Original), SALTZ of the combination (illustrated above) of SALTZ, CASALE teaches The method of claim 16, wherein the pathological task may be performed using a classifier model ("in which deep classification learning models (for example, lymphocyte infiltration classification CNN and a necrosis segmentation algorithm) are implemented to generate tumor infiltrating lymphocyte maps that are useful in generating prognostic values in diagnosis and/or related classification", SALTZ [0019] last S) or a regressor model.

Claim 18 is rejected like claim 1:

Re 18. (Currently Amended), SALTZ of the combination (illustrated above) of SALTZ, CASALE teaches One or more computer-readable non-transitory storage media (SALTZ: fig. 16:320: "Machine-readable medium(s)") embodying software for analyzing a whole slide image (WSI) in light of biological context, the software comprising instructions operable when executed to: extract an embedding for each of a set of patches sampled from a WSI, wherein the embedding represents one or more histological features of the respective patch of the WSI; for each of the patches, encode the corresponding embedding with a spatial context and a semantic context, wherein the spatial context represents a local pattern related to the one or more histological features, the local visual pattern spanning a region in the WSI beyond the corresponding patch, and wherein the semantic context represents a global pattern over the WSI as a whole; generate a representation for the WSI by combining the encoded patch embeddings; and perform a pathological task based on the representation for the WSI.

Claim 19 is rejected like claims 1 and 18:

Re 19. (Currently Amended), SALTZ of the combination (illustrated above) of SALTZ, CASALE teaches A system for analyzing a whole slide image (WSI) in light of biological context comprising one or more processors (SALTZ: fig. 16:304: "Processing Device(s)") and a memory coupled to the processors comprising instructions executable by the processors, the processors being operable when executing the instructions to: extract an embedding for each of a set of patches sampled from a WSI, wherein the embedding represents one or more histological features of the respective patch of the WSI; for each of the patches, encode the corresponding embedding with a spatial context and a semantic context, wherein the spatial context represents a local pattern related to the one or more histological features, the local visual pattern spanning a region in the WSI beyond the corresponding patch, and wherein the semantic context represents a global pattern over the WSI as a whole; generate a representation for the WSI by combining the encoded patch embeddings; and perform a pathological task based on the representation for the WSI.

Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over SALTZ et al. (US 2020/0388029 A1) in view of CASALE et al. (US 2023/0360758 A1) as applied in claims 1, 15, 16, 17, 18, and 19, further in view of Tu et al. (US 2023/0070286 A1):

Re 2. (Original), SALTZ of the combination (illustrated above) of SALTZ, CASALE teaches The method of claim 1, wherein the patches are sampled (resulting in "ten patches sampled from each slide" [0166] last S: fig. 3A: step B) by applying a hierarchical sampling strategy ("in step 182" [0269] 3rd S: fig. 5A) to a randomly selected plurality ("8 slides" [0220]) of clusters of the patches. SALTZ of the combination (illustrated above) of SALTZ, CASALE does not teach the difference of claim 2 of "hierarchical" (sampling strategy). Tu teaches the difference of claim 2 of: hierarchical ("sampling iterations" [0019] last S) (sampling strategy).
Since SALTZ of the combination (illustrated above) of SALTZ, CASALE teaches sampling, one of skill in the art of sampling can make SALTZ's of the combination (illustrated above) of SALTZ, CASALE be as Tu's, seeing in the change "hierarchical sampling to iteratively refine clusters and better preserve shape details and structural relationships", Tu [0037] penult S.

Claims 3 and 4 are rejected under 35 U.S.C. 103 as being unpatentable over SALTZ et al. (US 2020/0388029 A1) in view of CASALE et al. (US 2023/0360758 A1) as applied in claims 1, 15, 16, 17, 18, and 19, further in view of Tu et al. (US 2023/0070286 A1) as applied in claim 2, further in view of Martinez Manzano et al. (US 2023/0401590 A1):

Re 3. (Original), SALTZ of the combination (illustrated above) of SALTZ, CASALE, Tu teaches The method of claim 2, further comprising applying the hierarchical sampling strategy by: for each of the randomly selected ("8 slides" SALTZ [0220]: fig. 3D:556: "Randomly select 8 slide images from each group (A-G)" & "examples 135 of unsupervised nucleus detection 137 and foreground 138, background image representation 139 and reconstruction results 140 of using crosswise sparse CAEs" [0252] 1st S: fig. 4A) clusters ("of TIL patches derived from the affinity propagation clustering of the TIL patches" [0312] penult S): randomly sampling a centroid of the (TIL: Tumor-Infiltrating Lymphocytes) cluster (comprising a "cluster" "central representative" [0312] last S: figs. 12A-D: right cluster column); for each of the (TIL) patches in the (central representative) cluster, determining a distance of the patch to (i.e., "variance" "dispersion" "In terms of TIL patch distances to a given cluster center" [0320] 3rd S) the centroid; and randomly sampling (resulting in "randomly sampled patches" [0230] 4th S) all patches in the (TIL) cluster having a (center-patch distance variance/dispersion) distance to the centroid within a threshold distance.

SALTZ of the combination (illustrated above) of SALTZ, CASALE, Tu does not teach the difference of claim 3 of: clusters… a centroid… the centroid… the centroid within a threshold distance. Martinez teaches the difference of claim 3 of: ("randomly select a number of" [0033] 6th S) clusters (fig. 5:502,506,510,514,518: smaller circles)… a centroid ("data point for each of the groups generated by the clustering technique" [0033] penult S: fig. 5:540,508,512,516,520)… the centroid ("will be calculated" [0033] last S)… the centroid (clusters) within a threshold distance ("of the…closest cluster" [0155] 2nd S: fig. 5:526,528,530,532: distances). Since SALTZ of the combination (illustrated above) of SALTZ, CASALE, Tu teaches a cluster, one of skill in the art of clusters can make SALTZ's of the combination (illustrated above) of SALTZ, CASALE, Tu be as Martinez's, seeing the change resulting in reliable and "stabilized" "clusters", Martinez [0094] 3rd S.

Re 4. (Original), SALTZ of the combination (illustrated above) of SALTZ, CASALE, Tu, Martinez teaches The method of claim 3, wherein the threshold distance is based on (via the combination (illustrated above) of SALTZ, CASALE, Tu, Martinez) the pathological task.

Claims 5 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over SALTZ et al. (US 2020/0388029 A1) in view of CASALE et al. (US 2023/0360758 A1) as applied in claims 1, 15, 16, 17, 18, and 19, further in view of Chiu et al.
(US 2020/0357143 A1):

Re 5. (Original), SALTZ of the combination (illustrated above) of SALTZ, CASALE teaches The method of claim 1, wherein encoding the embedding with the spatial context comprises using a spatial encoder (via a "spatial information" "CAE" [0134]: spatial-information Convolutional Autoencoder) to encode the embedding with spatial attention by attending to embeddings of one or more nearby ("local population" [0171]) patches in the set (or "four datasets" [0081] penult S). SALTZ of the combination (illustrated above) of SALTZ, CASALE does not teach the difference of claim 5 of: spatial attention by attending to (embeddings). Chiu teaches the difference of claim 5 of: spatial attention by attending to ("the transformed maps from Equation (1), above" [0042]) (embeddings). Since SALTZ of the combination (illustrated above) of SALTZ, CASALE teaches feature extraction, one of skill in the art of feature extraction can make SALTZ's of the combination (illustrated above) of SALTZ, CASALE be as Chiu's, seeing the change "improved visual localization", Chiu [0010] 1st S.

Claim 9 is rejected like claim 5:

Re 9. (Original), SALTZ of the combination (illustrated above) of SALTZ, CASALE, Chiu teaches The method of claim 1, wherein encoding (via said CAE) the (layer) embedding with a ("larger" [0107] last S) semantic context of the corresponding ("to 20x20 magnification", [0114] last S, center) patch comprises using a semantic encoder (via said CAE) to encode the embedding with semantic attention (Chiu: fig. 2:230: Semantic Spatial Attention Module) by attending to embeddings of other patches (via "each patch" SALTZ [0098]) in the set (of "four datasets" SALTZ [0081] penult S).

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over SALTZ et al. (US 2020/0388029 A1) in view of CASALE et al. (US 2023/0360758 A1) as applied in claims 1, 15, 16, 17, 18, and 19, further in view of Chiu et al. (US 2020/0357143 A1) as applied in claim 5, further in view of Xiao et al. (US 2023/0274248 A1):

Re 6. (Original), SALTZ of the combination (illustrated above) of SALTZ, CASALE, Chiu teaches The method of claim 5, wherein the one or more nearby ("local population" [0171]) patches are defined (as shown by the patch outlines in fig. 1C:51) as those within a maximum relative distance corresponding to a specified pathological type (of "13 cancer types" [0086] 2nd S) of the WSI. SALTZ of the combination (illustrated above) of SALTZ, CASALE, Chiu does not teach the difference of claim 6 of: (patches are defined) as… within a maximum relative distance corresponding to (a specified pathological type). Xiao teaches the difference of claim 6 of: (patches are defined) as… within a ("local" [0041] 1st S) maximum (fig. 8:804b,804c) relative distance (fig. 8:810) corresponding to ("an item" [0041] 1st S: fig. 8:302) (a specified pathological type). Since SALTZ of the combination (illustrated above) of SALTZ, CASALE, Chiu teaches counting, one of skill in the art of counting can make SALTZ's of the combination (illustrated above) of SALTZ, CASALE, Chiu be as Xiao's, seeing the change "accurate, real-time counts", Xiao [0018] penult S.

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over SALTZ et al. (US 2020/0388029 A1) in view of CASALE et al. (US 2023/0360758 A1) as applied in claims 1, 15, 16, 17, 18, and 19, further in view of Chiu et al. (US 2020/0357143 A1) as applied in claim 5, further in view of Bassi (US 2006/0050074 A1):

Re 7.
(Original), SALTZ of the combination (illustrated above) of SALTZ, CASALE,Chiu teaches The method of claim 5, wherein input for the spatial encoder comprises: a (“center” [0098] 1st S) position of the corresponding patch (fig. 1C:51,57) and a sequence (fig. 4A: “1st”, [0241] 1st S, to 5th images) of absolute (“center” [0098] 1st S) positions of the nearby (“local population” [0171]) patches. SALTZ of the combination (illustrated above) of SALTZ, CASALE,Chiu does not teach the difference of claim 7 of “absolute”. Bassi teaches the difference of claim 7 of absolute (“position of patch origin”, Bassi [0056]). Since SALTZ of the combination (illustrated above) of SALTZ, CASALE,Chiu teaches a patch, one of skill in the art of patches can make SALTZ’s of the combination (illustrated above) of SALTZ, CASALE,Chiu be as Bassi’s seeing in the change “correcting small distortions in projectors, cameras, and display devices, to correcting for perspectives like keystone or special wide-angle lens corrections, and to a complete change in image geometry such as forming rectangular panoramas from circular 360 degree images, or other rectangular to polar type mappings.”, Bassi [0028] last S, such as corrections to the lens of microscopes and “optical see-through display”, SALTZ [0391] 2nd S, thereof. Claim(s) 8 is/are rejected under 35 U.S.C. 103 as being unpatentable over SALTZ et al. (US 2020/0388029 A1) in view of CASALE et al. (US 2023/0360758 A1) as applied in claims 1,15,16,17 and 18 and 19 further in view of Chiu et al. (US 2020/0357143 A1) as applied in claim 5 further in view of Bassi (US 2006/0050074 A1) as applied in claim 7 further in view of OGASAWARA et al. (US 2023/0316489 A1): PNG media_image13.png 726 810 media_image13.png Greyscale Re 8. 
(Original), SALTZ of the combination (illustrated above) of SALTZ, CASALE, Chiu, Bassi teaches The method of claim 7, wherein the absolute positions (“position of patch origin”, Bassi [0056]) are normalized to correspond to a standard level⁴⁰ of (“20x”) magnification (“level” SALTZ [0107] 1st S).

SALTZ of the combination (illustrated above) of SALTZ, CASALE, Chiu, Bassi does not teach the difference of claim 8 of: (the absolute positions) are normalized to correspond to a standard level⁴¹ of (magnification).

OGASAWARA teaches the difference of claim 8 of: (the absolute positions) (“images” [0094], annotated below) are normalized to correspond to a standard (“magnification ratio” [0094] annotated below) level⁴² of (magnification).

Since SALTZ of the combination (illustrated above) of SALTZ, CASALE, Chiu, Bassi teaches magnification, one of skill in the art of magnification can make SALTZ’s of the combination (illustrated above) of SALTZ, CASALE, Chiu, Bassi be as OGASAWARA’s seeing in the change “variations in image resolution…suppressed, and discrimination accuracy…improved”, OGASAWARA [0094], below:

[Examiner's annotated image omitted]

Claim(s) 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over SALTZ et al. (US 2020/0388029 A1) in view of CASALE et al. (US 2023/0360758 A1) as applied in claims 1, 15, 16, 17, 18, and 19, further in view of Chiu et al. (US 2020/0357143 A1) as applied to claims 5, 19, further in view of Hossain et al. (Bi-SAN-CAP: Bi-Directional Self-Attention for Image Captioning):

[Examiner's annotated image omitted]

Re 10.
(Original), SALTZ of the combination (illustrated above) of SALTZ, CASALE, Chiu teaches The method of claim 9, wherein the semantic encoder (via said CAE) is a bidirectional self-attention (CAE) encoder with multi-head attention layers, and wherein the semantic encoder (via said CAE) attends embeddings of the other patches (via “each patch” SALTZ [0098]) in the set (of “four datasets” [0081] penult S).

SALTZ of the combination (illustrated above) of SALTZ, CASALE, Chiu does not teach the difference of claim 10 of: a bidirectional self-attention (encoder) with multi-head attention (layers).

Hossain teaches the difference of claim 10 of: a (“modified”, 3rd pg.: III Model Architecture, 1st para, 2nd S: fig. 2) bidirectional self-attention (encoder) with multi-head attention (“Multi-Head Attention” 4th pg.) (layers).

Since SALTZ of the combination (illustrated above) of SALTZ, CASALE, Chiu teaches a query, SALTZ:

[0224] It is noted that each patch in a WSI is represented as a rectangle and associated with a classification label and the probability value computed by the CNN. This information is stored as a data element (document) in FeatureDB and indexed to speed up queries by the TIL-Map editor to retrieve and display subsets of patches. After classification results for a set of WSIs have been loaded to the database, a pathologist can use a web browser to view and update the classification results. The pathologist may implement the TIL-Map editor to examine an image, query FeatureDB to retrieve patches visible within the view point and zoom level and display them as a two-color heatmap. The pathologist can edit the heatmap using the “Lymphocyte Sensitivity,” “Necrosis Specificity,” “Smoothness” sliders in a panel 255 (for example, as shown in FIGS. 7E-7F). These slides 255 permit the pathologist to change the threshold value which determines if a patch should be classified as lymphocyte-infiltrated or not.
, one of skill in the art of queries can make SALTZ’s of the combination (illustrated above) of SALTZ, CASALE, Chiu be as Hossain’s seeing the change being “useful to…search…queries”, Hossain, 1st page, I. Introduction, 1st para, 2nd S.

Claim(s) 11 is/are rejected under 35 U.S.C. 103 as being unpatentable over SALTZ et al. (US 2020/0388029 A1) in view of CASALE et al. (US 2023/0360758 A1) as applied in claims 1, 15, 16, 17, 18, and 19, further in view of Chiu et al. (US 2020/0357143 A1) as applied in claims 5, 9, further in view of Bai et al. (US 2022/0270353 A1):

[Examiner's annotated image omitted]

Re 11. (Original), SALTZ of the combination (illustrated above) of SALTZ, CASALE, Chiu teaches The method of claim 9, wherein input for the semantic encoder comprises the embeddings of the other patches in the set and a learnable token.

SALTZ of the combination (illustrated above) of SALTZ, CASALE, Chiu does not teach the difference of claim 11 of: “a learnable token”.

Bai teaches the difference of claim 11: a learnable token (“during the self-attention operation” [0038] 2nd to last S: fig. 4:436: “CLASS TOKEN”).

Since SALTZ of the combination (illustrated above) of SALTZ, CASALE, Chiu teaches a classifier, one of skill in the art of classifiers can make SALTZ’s of the combination (illustrated above) of SALTZ, CASALE, Chiu be as Bai’s seeing the change “most useful to the final classifier”, Bai [0042] 1st S.

Claim(s) 12 is/are rejected under 35 U.S.C. 103 as being unpatentable over SALTZ et al. (US 2020/0388029 A1) in view of CASALE et al. (US 2023/0360758 A1) as applied in claims 1, 15, 16, 17, 18, and 19, further in view of Chiu et al. (US 2020/0357143 A1) as applied in claims 5, 9, further in view of Bai et al. (US 2022/0270353 A1) as applied in claim 11, further in view of Geng et al. (US 2023/0153531 A1):

[Examiner's annotated image omitted]

Re 12.
(Original), SALTZ of the combination (illustrated above) of SALTZ, CASALE, Chiu, Bai teaches The method of claim 11, wherein, during a training phase, generating a representation (or “spatially characterizing TIL Maps” [0079] penult S) of the WSI based on the encoded patch embeddings comprises generating an auxiliary representation (via Bai’s fig. 2:140: “MIXED IMAGE” generated from a second/auxiliary cat image) based on the encoded learnable token (“during the self-attention operation” Bai: [0038] 2nd to last S: fig. 4:436: “CLASS TOKEN”).

SALTZ of the combination (illustrated above) of SALTZ, CASALE, Chiu, Bai does not teach the difference of claim 12 of: encoded (learnable token)⁴³.

Geng teaches the difference of claim 12: encoded (learnable token)⁴⁴ (“in the document”):

[0040] The various embodiments provide numerous performance benefits over conventional approaches to DocVQA. In contrast to conventional DocVQA approaches, the embodiments described herein include a hierarchical layout graph model that enables both top-down and bottom-up reasoning to locate where an answer (e.g., a reponse to a query) is in a document based on both global and local contexts. In further contrast to conventional approaches, the embodiments include learnable token embeddings based on both layout of the document and semantic information encoded in the document. Furthermore, the embodiments include data augmentation training methods where OCR’ed lines (of the document) are reordered based on the nesting of the lines withing larger document structures. Such features provide performance advantages over conventional DocVQA solutions.

Since SALTZ of the combination (illustrated above) of SALTZ, CASALE, Chiu, Bai teaches a query document, SALTZ:

[0224] It is noted that each patch in a WSI is represented as a rectangle and associated with a classification label and the probability value computed by the CNN.
This information is stored as a data element (document) in FeatureDB and indexed to speed up queries by the TIL-Map editor to retrieve and display subsets of patches. After classification results for a set of WSIs have been loaded to the database, a pathologist can use a web browser to view and update the classification results. The pathologist may implement the TIL-Map editor to examine an image, query FeatureDB to retrieve patches visible within the view point and zoom level and display them as a two-color heatmap. The pathologist can edit the heatmap using the “Lymphocyte Sensitivity,” “Necrosis Specificity,” “Smoothness” sliders in a panel 255 (for example, as shown in FIGS. 7E-7F). These slides 255 permit the pathologist to change the threshold value which determines if a patch should be classified as lymphocyte-infiltrated or not.

one of skill in the art of documents can make SALTZ’s of the combination (illustrated above) of SALTZ, CASALE, Chiu, Bai be as Geng’s seeing in the change “an improvement…queries”, Geng [0022] last S.

Claim(s) 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over SALTZ et al. (US 2020/0388029 A1) in view of CASALE et al. (US 2023/0360758 A1) as applied in claims 1, 15, 16, 17, 18, and 19, further in view of Chiu et al. (US 2020/0357143 A1) as applied in claims 5, 9, further in view of KAZUHIRO (JP 2016-158059 A) with SEARCH machine translation:

[Examiner's annotated image omitted]

Re 13.
(Original), SALTZ of the combination (illustrated above) of SALTZ, CASALE, Chiu teaches The method of claim 9, further comprising: enhancing the semantic context (i.e., a “training⁴⁵ data” “set--i.e., interpreting-dataset-- SALTZ [0097] 3rd S, for a neural network comprising “more contextual information” SALTZ [0107] last S {of a “term”⁴⁶-“spatial context” [0011] 4th S: the “spatial context” or the larger patches having as its semantic element “the immune infiltrate into the tumor center and/or invasive margin” SALTZ [0011] penult S}: represented as fig. 1C:59: “larger region with more context” SALTZ [0125] 5th S surrounding a patch 51) by regularizing (via “normalized”-“embeddings” Chiu [0045] last S: fig. 1:150) the semantic attention (Chiu: fig. 1:130: fig. 2:230: “Semantic Spatial Attention Module”) to reduce overemphasis on a few (“TCGA tumor types” (The Cancer Genome Atlas) SALTZ [0090]) of the patches (via SALTZ: fig. 1B:34: “Extract patches from marked regions”) to generate the representation (via “generate the foreground reconstructed image⁴⁷ 87 and background reconstructed image 89” SALTZ [0145] penult S) for the WSI.

SALTZ of the combination (illustrated above) of SALTZ, CASALE, Chiu does not teach the difference of claim 13 of: enhancing (the semantic context)⁴⁸ …⁴⁹ to reduce overemphasis on (a few).

KAZUHIRO teaches the difference of claim 13 of: enhancing (“for correcting the contrast enhancement”, pg. 16, penult txt blk) (the semantic context)⁵⁰ …⁵¹ to reduce overemphasis (“according to the overemphasis suppression”, pg. 16, penult txt blk) on (a few).

Since SALTZ of the combination (illustrated above) of SALTZ, CASALE, Chiu teaches an image, one of skill in the art of images can make SALTZ’s of the combination (illustrated above) of SALTZ, CASALE, Chiu be as KAZUHIRO’s seeing in the change “image quality…improved”, KAZUHIRO, pg. 18, 1st txt blk.

Claim(s) 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over SALTZ et al.
(US 2020/0388029 A1) in view of CASALE et al. (US 2023/0360758 A1) as applied in claims 1, 15, 16, 17, 18, and 19, further in view of Chiu et al. (US 2020/0357143 A1) as applied in claims 5, 9, further in view of KAZUHIRO (JP 2016-158059 A) with SEARCH machine translation as applied in claim 13, further in view of Jaiswal et al. (US 11,775,617 B1):

[Examiner's annotated image omitted]

Re 14. (Original), SALTZ of the combination (illustrated above) of SALTZ, CASALE, Chiu, KAZUHIRO teaches The method of claim 13, wherein regularizing (via “normalized”-“embeddings” Chiu [0045] last S: fig. 1:150) the semantic attentions (Chiu: fig. 1:130: fig. 2:230: “Semantic Spatial Attention Module”) comprises: calculating an attention map (comprised by “The spatial attention maps” Chiu [0042] last S) over the semantic attentions (“denoted by M” [0040] annotated below) encoded (via “encoding” “CNNs” [0104] 4th S: fig. 1:110, 120) for the embeddings (CASALE: fig. 1: “STAGE 1”: “GENERATING EMBEDDINGS FROM IMAGE DATA”) corresponding to patches sampled from the WSI; and adding a negative entropy of the attention map (comprised by “The spatial attention maps” Chiu [0042] last S) to a training objective (“to train the semantic embedding space”, Chiu [0047] 1st S, as an aimed training goal: fig. 1:140: “to generate image embeddings in a semantically aware embedding space” [0045] 1st S) of the transformer model (via: [Examiner's annotated image omitted]).

SALTZ of the combination (illustrated above) of SALTZ, CASALE, Chiu, KAZUHIRO does not teach the difference of claim 14 of: a negative entropy… the transformer model.

Jaiswal teaches the difference of claim 14: a negative entropy (“of discriminator predictions is minimized”, c.6, ll. 30-35, “may be weighted with a multiplier a (tuned on 0.1, 1) in the overall objective”, c.6, ll. 45-50, “of the adversarial object-type discriminator 116”, c.7, ll. 60-65: fig.
1:116: “Adversarial Objective-type Discriminator”: fig. 2:105: “Class-agnostic object detector”)… the transformer (“-based”, c.10, ll. 35-40) model (fig. 2:260: “Natural Language”).

Since SALTZ of the combination (illustrated above) of SALTZ, CASALE, Chiu, KAZUHIRO teaches extraction, one of skill in the art of extraction can make SALTZ’s of the combination (illustrated above) of SALTZ, CASALE, Chiu, KAZUHIRO be as Jaiswal’s seeing the change “beneficial to downstream applications (e.g., application-specific object classification, visual search (object retrieval from large databases), computer vision-based speech processing, etc.) that can use such class-agnostic detections as inputs.”, Jaiswal, c.2, ll. 38-43.

Claim(s) 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over SALTZ et al. (US 2020/0388029 A1) in view of CASALE et al. (US 2023/0360758 A1) as applied in claims 1, 15, 16, 17, 18, 19, further in view of Chiu et al. (US 2020/0357143 A1) further in view of LIU et al. (CN 112163608 A) with SEARCH machine translation:

[Examiner's annotated image omitted]

Claim 20 is rejected like claims 1, 18, 19:

Re 20. (Original), SALTZ of the combination (illustrated above) of SALTZ, CASALE teaches A computer-implemented method for analyzing a whole slide image (WSI) in light of biological context, comprising: extracting an embedding for each of a set of patches sampled from a WSI, wherein the embedding represents one or more histological features of the respective patch of the WSI; for each of the patches: encoding, by a spatial encoder,⁵² the embedding corresponding to the patch with a spatial attention by attending to the embeddings of nearby patches in the set, wherein the spatial attention models attention to a microscopic (via a “caMicroscope interface”, SALTZ [0223] 3rd S) visual pattern (as visually seen in SALTZ’s figures 12A-D) related to the one or more histological features, the (ca-) microscopic visual pattern (fig.
12A-D) spanning a region in the WSI beyond the corresponding patch; and encoding, by a semantic encoder,⁵³ the embedding corresponding to the patch with a semantic attention by attending to the embeddings of all other patches (via “each patch”, SALTZ [0098], considered one at a time) in the set, wherein the semantic attention models attention to a macroscopic visual pattern over the WSI as a whole; generating a representation for the WSI by combining the encoded patch embeddings; and performing a pathological task based on the representation for the WSI.

[Examiner's annotated image omitted]

SALTZ of the combination (illustrated above) of SALTZ, CASALE does not teach the difference of claim 20 of: A) a spatial attention by attending to (the embeddings)⁵⁴ …⁵⁵ B) the spatial attention models attention… C) a semantic attention by attending to (the embeddings) … D) the semantic attention models attention to a macroscopic (visual pattern) …

Chiu teaches the difference of claim 20 of: A) a spatial attention (via fig. 2:220:230: “RGB-Spatial Attention Module”: “Semantic-Spatial Attention Module”) by attending to (“the transformed maps” [0042] 3rd S) (the embeddings)⁵⁶ …⁵⁷ B) the spatial attention models (via “a model to focus⁵⁸ on informative and stable image regions” [0031] 3rd S) attention… C) a semantic attention by attending to (“the transformed maps” [0042] 3rd S) (the embeddings) … D) the semantic attention (via fig. 2:220:230: “RGB-Spatial Attention Module”: “Semantic-Spatial Attention Module”) models attention to (via “semantic information…to guide a model to focus on informative and stable image regions” [0031] 2nd S) a macroscopic (visual pattern).
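[Editor's note, not part of the Office Action: the claim-20 flow recited above (spatial attention over nearby patches, semantic attention over all patches in the set, then combining the encoded patch embeddings into one WSI representation) can be sketched in plain Python. All names are hypothetical, and this toy uses a single attention head, whereas the claims recite multi-head attention layers.]

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attend(query, keys, values):
    # Scaled dot-product attention for a single query vector.
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(d)]

def encode_wsi(embeddings, positions, max_dist):
    """Toy single-head sketch of the claimed dual encoding.

    embeddings: per-patch feature vectors; positions: (x, y) patch centers.
    The spatial step attends only to patches within max_dist of the patch;
    the semantic step attends to every patch in the set.
    """
    encoded = []
    for emb, pos in zip(embeddings, positions):
        nearby = [e for e, p in zip(embeddings, positions)
                  if math.dist(pos, p) <= max_dist]
        spatial = attend(emb, nearby, nearby)
        semantic = attend(emb, embeddings, embeddings)
        # Combine the two encodings for this patch (simple average here).
        encoded.append([(s + t) / 2 for s, t in zip(spatial, semantic)])
    # Mean-pool the encoded patch embeddings into one WSI representation.
    d = len(encoded[0])
    return [sum(e[k] for e in encoded) / len(encoded) for k in range(d)]
```

In the claimed method the pooled representation then feeds the downstream pathological task; here mean pooling stands in for whatever combination the application actually uses.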
Since SALTZ of the combination (illustrated above) of SALTZ, CASALE teaches feature extraction, one of skill in the art of feature extraction can make SALTZ’s of the combination (illustrated above) of SALTZ, CASALE be as Chiu’s seeing the change “improved visual localization”, Chiu [0010] 1st S:

[Examiner's annotated image omitted]

SALTZ of the combination (illustrated above) of SALTZ, CASALE, Chiu does not teach the last difference of claim 20 of: macroscopic (visual pattern).

LIU teaches the last difference of claim 20 of: macroscopic (via “macroscopic integral structure⁵⁹ information”, pg. 3, 3rd txt blk) (visual pattern).

Since SALTZ of the combination (illustrated above) of SALTZ, CASALE, Chiu teaches identification, one of skill in the art can make SALTZ’s of the combination (illustrated above) of SALTZ, CASALE, Chiu be as LIU’s seeing the change “to improve the accuracy of visual relationship identification”, LIU, pg. 2, 6th txt blk:

[Examiner's annotated image omitted]

Conclusion

The prior art “nearest to the subject matter defined in the claims” (MPEP 707.05) made of record and not relied upon is considered pertinent to applicant's disclosure. The following table lists several references that are relevant to the subject matter claimed and disclosed in this Application. The references are not relied on by the Examiner, but are provided to assist the Applicant in responding to this Office action.

Citation / Relevance

Regev et al. (US 2022/0180975 A1): Regev teaches a contextual global pattern: [0579] Operating directly on the image data allows natural integration of spatial gene expression patterns of surrounding cells and global gene expression patterns like gradients (which are quite important, especially in the context of the brain). as the closest to the claimed “the semantic context represents a global pattern” of claim 1.

Juppet et al.
(Deep Learning Enables Individual Xenograft Cell Classification in Histological Images by Analysis of Contextual Features): Juppet teaches a color⁶⁰-texture⁶¹ feature is a local pixel pattern: --Such features can be divided between features related to the color and features related to the texture itself, i.e. the local pattern and spatial organisation of pixel intensities.-- as the closest to the claimed “local pattern” or “the local visual pattern” of claim 1.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DENNIS ROSARIO whose telephone number is (571) 272-7397. The examiner can normally be reached Monday-Friday, 9AM-5PM EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Henok Shiferaw, can be reached at 571-272-4637. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DENNIS ROSARIO/
Examiner, Art Unit 2676

/Henok Shiferaw/
Supervisory Patent Examiner, Art Unit 2676

1 embedding: Mathematics. the mapping of one set into another. (Dictionary.com)
2 encoding: to convert (a nerve signal) into a form that can be received by the brain (Dictionary.com)
3 generating: Mathematics. to trace (a figure) by the motion of a point, straight line, or curve. (Dictionary.com)
4 representation: a mental image or idea so presented; concept. (Dictionary.com)
5 task: a definite piece of work assigned to, falling to, or expected of a person; duty. (Dictionary.com)
6 model: a simplified representation of a system or phenomenon, as in the sciences or economics, with any hypotheses required to describe the system or explain the phenomenon, often mathematically, wherein representation is defined: the expression or designation by some term, character, symbol, or the like, wherein expression is defined: Mathematics. a symbol or a combination of symbols representing a value, relation, or the like, wherein relation is defined: Mathematics. A. a property that associates two quantities in a definite order, as equality or inequality. B. a single- or multiple-valued function. (Dictionary.com)
7 “embedding” is receiving the action of “extracting”
8 embed: Mathematics. to map (a set) into another set. (Dictionary.com)
9 embed: Mathematics. to map (a set) into another set. (Dictionary.com)
10 “embedding” is receiving the action of “extracting”
11 BROAD CLAIM LANGUAGE: semantic: of or relating to semantics, wherein semantics is defined: the meaning, or an interpretation of the meaning, of a word, sign, sentence, etc. (“context”) (Dictionary.com)
12 “context” was originally interpreted to mean: the parts of a written or spoken statement that precede or follow a specific word or passage, usually influencing its meaning or effect.
(Dictionary.com) hence SALTZ’s teaching of term-spatial context; however, this dictionary sense of “context” is {not consistent} with applicant’s disclosure of “context”; hence SALTZ’s teaching of a grammatical term-spatial context is {not consistent but oddly reads on claim 1} & not evident/obvious/plain to see in applicant’s disclosure. Rather, this sense of “context” is taken by the examiner under the broadest reasonable interpretation consistent with applicant’s disclosure: BROAD CLAIM LANGUAGE: context: the set of circumstances or facts (e.g., data) that surround a particular event, situation, etc. (Dictionary.com): i.e., (interpreting) a dataset surrounding something.
13 training: intended for use during an introductory, learning, or transitional period, wherein learning is defined: the act or process of acquiring knowledge or skill, wherein knowledge: the fact or state of knowing; the perception of fact or truth; clear and certain mental apprehension, wherein perception is defined: the act or faculty of perceiving, or apprehending by means of the senses or of the mind; cognition; understanding, wherein perceive is defined: to recognize, discern, envision, or understand, wherein understand is defined: to assign a meaning to; interpret.
(Dictionary.com)
14 term: a word or group of words (“the spatial context and the nature of cellular heterogeneity of the tumor microenvironment”, SALTZ [0011] penult S) designating something (“the immune infiltrate into the tumor center and/or invasive margin”, SALTZ [0011] penult S), especially in a particular field (“clinically processing, analyzing, and analyzing tumor-infiltrating lymphocytes (TILs)”, SALTZ [0003] 1st S), as atom in physics, quietism in theology, adze in carpentry, or district leader in politics, wherein designate is defined: to denote; indicate; signify, wherein denote is defined: to be a name or designation for; mean, wherein mean is defined: to have as its (“the spatial context and the nature of cellular heterogeneity of the tumor microenvironment”, SALTZ [0011] penult S) sense or signification; signify, wherein sense is defined: the meaning of a word or phrase in a specific context, especially as isolated in a dictionary or glossary; the semantic element in a word or group of words (via fig. 1A:7: “Refined TIL Map” wherein “T” (Tumor) and “I” (Infiltrate) are the semantic elements of “spatial context”) (Dictionary.com)
15 represent: to be the equivalent of; correspond to (Dictionary.com)
16 verb
17 verb used as an adjective
18 “region” is receiving the action of “spanning”
19 Regarding “beyond”’s CLAIM SCOPE: Applicant’s disclosure [104]: 2nd S: “The scope of this disclosure is not limited to the example embodiments described or illustrated herein.”, wherein scope is defined: Linguistics, Logic. the range of words (“spanning a region in the WSI”) or elements of an expression over which a modifier (e.g., a patent examiner) or operator (e.g., a human reading this) has control.
(Dictionary.com)
20 beyond: a preposition/adverb (i.e., a modifier: i.e., a patent examiner or a human grammatically reading claim 1)
21 CLAIM SCOPE: --pattern spanning… beyond the corresponding patch-- (adverb)
22 CLAIM SCOPE: --the WSI beyond the corresponding patch-- (preposition): wherein beyond is defined: more than; in excess of; over and above (Dictionary.com): this sense is taken by the examiner
23 BROAD CLAIM LANGUAGE: semantic: of or relating to semantics, wherein semantics is defined: the meaning, or an interpretation of the meaning, of a word, sign, sentence, etc. (Dictionary.com)
24 BROAD CLAIM LANGUAGE: context: the set of circumstances or facts (e.g., data) that surround a particular event, situation, etc. (Dictionary.com): i.e., (interpreting) a dataset surrounding something.
25 represent: to be the equivalent of; correspond to (Dictionary.com)
26 image: a physical likeness or representation of a person, animal, or thing, photographed, painted, sculptured, or otherwise made visible. (Dictionary.com)
27 “embedding” is receiving the action of “extracting”
28 embed: Mathematics. to map (a set) into another set. (Dictionary.com)
29 ellipses (…) represent claim limitations already taught
30 (italics) represent claim limitations already taught
31 “embedding” is receiving the action of “extracting”
32 embed: Mathematics. to map (a set) into another set.
(Dictionary.com)
33 Applicant’s disclosure: [102] Herein, a computer-readable non-transitory storage medium or media may include one or more semiconductor-based or other integrated circuits (ICs) (such, as for example, field-programmable gate arrays (FPGAs) or application-specific ICs (ASICs)), hard disk drives (HDDs), hybrid hard drives (HHDs), optical discs, optical disc drives (ODDs), magneto-optical discs, magneto-optical drives, floppy diskettes, floppy disk drives (FDDs), magnetic tapes, solid-state drives (SSDs), RAM-drives, SECURE DIGITAL cards or drives, any other suitable computer-readable non-transitory storage media, or any suitable combination of two or more of these, where appropriate. A computer-readable non-transitory storage medium may be volatile, non-volatile, or a combination of volatile and non-volatile, where appropriate, wherein volatile is defined: Computers. of or relating to storage that does not retain data when electrical power is turned off or fails, wherein non-volatile is defined: (of computer memory) having the property of retaining data when electrical power fails or is turned off. (Dictionary.com)
34 “clusters” is the object of “randomly selected”
35 “clusters” is the object of “randomly selected”
36 “clusters” is the object of “randomly selected”
37 stabilized: to make or hold stable, firm, or steadfast, wherein stable is defined: steadfast; not wavering or changeable, as in character or purpose; dependable, wherein dependable is defined: capable of being depended on; worthy of trust; reliable. (Dictionary.com)
38 (italics) represent claim limitations already taught
39 (italics) represent claim limitations already taught
40 BROAD CLAIM LANGUAGE: level: an extent, measure, or degree of intensity, achievement, etc. (Dictionary.com)
41 BROAD CLAIM LANGUAGE: level: an extent, measure, or degree of intensity, achievement, etc.
(Dictionary.com)
42 BROAD CLAIM LANGUAGE: level: an extent, measure, or degree of intensity, achievement, etc. (Dictionary.com)
43 (italics) represent claim limitations already taught
44 (italics) represent claim limitations already taught
45 training: intended for use during an introductory, learning, or transitional period, wherein learning is defined: the act or process of acquiring knowledge or skill, wherein knowledge: the fact or state of knowing; the perception of fact or truth; clear and certain mental apprehension, wherein perception is defined: the act or faculty of perceiving, or apprehending by means of the senses or of the mind; cognition; understanding, wherein perceive is defined: to recognize, discern, envision, or understand, wherein understand is defined: to assign a meaning to; interpret. (Dictionary.com)
46 term: a word or group of words (“the spatial context and the nature of cellular heterogeneity of the tumor microenvironment”, SALTZ [0011] penult S) designating something (“the immune infiltrate into the tumor center and/or invasive margin”, SALTZ [0011] penult S), especially in a particular field (“clinically processing, analyzing, and analyzing tumor-infiltrating lymphocytes (TILs)”, SALTZ [0003] 1st S), as atom in physics, quietism in theology, adze in carpentry, or district leader in politics, wherein designate is defined: to denote; indicate; signify, wherein denote is defined: to be a name or designation for; mean, wherein mean is defined: to have as its (“the spatial context and the nature of cellular heterogeneity of the tumor microenvironment”, SALTZ [0011] penult S) sense or signification; signify, wherein sense is defined: the meaning of a word or phrase in a specific context, especially as isolated in a dictionary or glossary; the semantic element in a word or group of words (via fig.
1A:7: “Refined TIL Map” wherein “T” (Tumor) and “I” (Infiltrate) are the semantic elements of “spatial context”) (Dictionary.com)
47 image: a physical likeness or representation of a person, animal, or thing, photographed, painted, sculptured, or otherwise made visible. (Dictionary.com)
48 (italics) represent claim limitations already taught
49 ellipses (…) represent claim limitations already taught
50 (italics) represent claim limitations already taught
51 ellipses (…) represent claim limitations already taught
52 the non-restrictive phrase “, by a spatial encoder,” does not limit claim 20 under the broadest reasonable interpretation
53 the non-restrictive phrase “, by a semantic encoder,” does not limit claim 20 under the broadest reasonable interpretation
54 (italics) represent claim limitations already taught
55 ellipses (…) represent claim limitations already taught
56 (italics) represent claim limitations already taught
57 ellipses (…) represent claim limitations already taught
58 focus: to be or become focused, where focus is defined: to bring to a focus or into focus; cause to converge on a perceived point, wherein focus is defined: a central point, as of attraction, attention, or activity. (Dictionary.com)
59 structure: mode of building, construction, or organization; arrangement of parts, elements, or constituents, wherein mode is defined: a particular type or form of something, wherein form is defined: something that gives or determines shape; a mold, wherein shape is defined: something used to give form, as a mold or a pattern.
(Dictionary.com)
60 color: the quality of an object or substance with respect to light reflected by the object, usually determined visually by measurement of hue, saturation, and brightness of the reflected light; saturation or chroma; hue, wherein quality is defined: character or nature, as belonging to or distinguishing a thing, wherein character is defined: the aggregate of features and traits that form the individual nature of some person or thing, wherein form is defined: to take a particular form or arrangement, wherein form is defined: the shape of a thing or person, wherein shape is defined: something used to give form, as a mold or a pattern. (Dictionary.com)
61 texture: the characteristic physical structure given to a material, an object, etc., by the size, shape, arrangement, and proportions of its parts, wherein shape is defined: something used to give form, as a mold or a pattern. (Dictionary.com)
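[Editor's note, not part of the Office Action: the regularizer recited in claim 14 (adding a negative entropy of the attention map to a training objective) can be illustrated in plain Python. Minimizing a loss that includes the negative entropy of the attention weights pushes those weights toward higher entropy, i.e., it discourages attention from collapsing onto a few patches. Names and the scaling factor below are hypothetical.]

```python
import math

def negative_attention_entropy(attention, weight=0.1):
    """Sketch of a negative-entropy regularizer over an attention map.

    attention: attention weights over patches, summing to 1.
    Peaked (low-entropy) attention yields a larger value, so adding this
    term to the training loss penalizes overemphasis on a few patches.
    """
    entropy = -sum(p * math.log(p) for p in attention if p > 0.0)
    return -weight * entropy  # negative entropy, scaled

# Peaked attention is penalized more than uniform attention.
uniform = [0.25, 0.25, 0.25, 0.25]
peaked = [0.97, 0.01, 0.01, 0.01]
penalty_uniform = negative_attention_entropy(uniform)
penalty_peaked = negative_attention_entropy(peaked)
```

In practice this term would be added to the transformer model's overall training objective alongside the task loss; the weight would be a tuned hyperparameter.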

Prosecution Timeline

Apr 04, 2024
Application Filed
Jan 22, 2026
Non-Final Rejection — §101, §103, §112
Apr 14, 2026
Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586184
METHODS AND APPARATUS FOR ANALYZING PATHOLOGY PATTERNS OF WHOLE-SLIDE IMAGES BASED ON GRAPH DEEP LEARNING
2y 5m to grant Granted Mar 24, 2026
Patent 12585733
SYSTEMS AND METHODS OF SENSOR DATA FUSION
2y 5m to grant Granted Mar 24, 2026
Patent 12536786
IMAGE LOCALIZATION USING A DIGITAL TWIN REPRESENTATION OF AN ENVIRONMENT
2y 5m to grant Granted Jan 27, 2026
Patent 12518519
PREDICTOR CREATION DEVICE AND PREDICTOR CREATION METHOD
2y 5m to grant Granted Jan 06, 2026
Patent 12518404
SYSTEMS AND METHODS FOR MACHINE LEARNING BASED PHYSIOLOGICAL MOTION MEASUREMENT
2y 5m to grant Granted Jan 06, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
69%
Grant Probability
98%
With Interview (+28.6%)
3y 8m
Median Time to Grant
Low
PTA Risk
Based on 557 resolved cases by this examiner. Grant probability derived from career allow rate.
