Prosecution Insights
Last updated: April 19, 2026
Application No. 18/254,701

SYSTEMS AND METHODS FOR DETERMINING REGIONS OF INTEREST IN HISTOLOGY IMAGES

Final Rejection — §102, §103
Filed: May 26, 2023
Examiner: KRETZER, CASEY L
Art Unit: 2635
Tech Center: 2600 — Communications
Assignee: Owkin Inc.
OA Round: 2 (Final)
Grant Probability: 87% (Favorable)
OA Rounds: 3-4
To Grant: 2y 2m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 87% (608 granted / 700 resolved; +24.9% vs TC avg, above average)
Interview Lift: +12.2% (moderate), measured on resolved cases with vs. without an interview
Avg Prosecution: 2y 2m (fast prosecutor); 29 applications currently pending
Career History: 729 total applications across all art units

Statute-Specific Performance

§101: 5.5% (-34.5% vs TC avg)
§102: 15.8% (-24.2% vs TC avg)
§103: 45.9% (+5.9% vs TC avg)
§112: 28.3% (-11.7% vs TC avg)

Tech Center average is an estimate • Based on career data from 700 resolved cases
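One consistency check worth noting (my arithmetic, not data shown on the page): subtracting each "vs TC avg" delta from its rate recovers the same implied Tech Center baseline, 40.0%, for all four statutes.

```python
# Consistency check: rate - delta should recover the TC average estimate.
rows = {"§101": (5.5, -34.5), "§102": (15.8, -24.2),
        "§103": (45.9, +5.9), "§112": (28.3, -11.7)}
for statute, (rate, delta) in rows.items():
    print(statute, round(rate - delta, 1))  # prints 40.0 for every statute
```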

Office Action

§102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

In the Reply filed 01/06/2026, Applicant amended claims 1 and 14 to include applying batches of tiles to a feature extractor (in order to extract sets of features) and argues that those limitations were not taught by the references cited in the previous action dated 11/10/2025. However, the Examiner respectfully disagrees for the reasons laid out below.

Response to Arguments

Applicant's arguments filed 01/06/2026 have been fully considered but they are not persuasive. As noted above, Applicant has amended both independent claims 1 and 14 to in effect recite "extracting a first set of features from the first batch of tiles by applying the first batch of tiles to the feature extractor; extracting a second set of features from the second batch of tiles by applying the second batch of tiles to the feature extractor" (emphasis maintained to amended portions) and argues those features were not taught by prior art reference Tellez. This is not persuasive because Tellez section 2.1, cited on page 6 of the previous action, explicitly states "We extracted relevant information from tissue images using a CNN-based encoder. This network mapped tissue patches into embedding vectors," wherein the CNN-based encoder was analogized with the claimed feature extractor. This is further demonstrated in Figure 2 of the reference, which shows two sets of tiles input into an encoder with outputs fed into "Concatenation," and whose caption clearly recites training a feature extractor. Therefore, the quoted language is taught by Tellez.

Further regarding the Objections to the Specification: this box was checked on the previous Office Action Summary because there were objections (i.e., related to typographical issues) to the claims. Since box 8) of the Summary generally refers to claims which have been objected to due to no prior art or other rejections being made, the Examiner checked box 10.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1 and 14 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Tellez et al., "Gigapixel Whole-Slide Image Classification Using Unsupervised Image Compression And Contrastive Training" (published April 2018, cited on the IDS filed 06/22/2023), as further evidenced by Tellez et al., "Whole-slide mitosis detection in H&E breast histology using PHH3 as a reference to train distilled stain-invariant convolutional networks" (published March 2018, cited on the IDS filed 06/22/2023, hereafter referred to as Tellez 2) (see MPEP 2131.01, III regarding multiple references in a 35 U.S.C. 102 rejection).

Regarding claim 14, Tellez teaches a system for training a feature extractor, comprising an image processor within a processing device (see Tellez section 1, "We propose a CNN-based method that can make predictions at whole-slide level by transforming gigapixel images into compact representations that fit in the GPU memory") configured to:

receive a training set of histology images, wherein each image in the training set of histology images is annotation-free (see section 3, "We used Camelyon16 data [4] to train and evaluate our methodology. We divided the set of slides into training (180), validation (90) and test (128). Each slide is associated with a binary label indicating the presence of tumor metastasis". While a binary label is mentioned, the present published application distinguishes these from an "annotation" in paragraphs [0031] and [0035]);

tile the training set of histology images into a set of tiles (see section 3, "We trained instances of the five different encoders explained in Sec. 2.1 using a patch size of 128x128 px extracted at 0.5 um/px resolution"); and

perform data augmentation on the set of tiles (see section 2.1, "We investigated the effectiveness of several types of encoders trained in an unsupervised manner, using tissue patches that were heavily augmented with the data augmentation routines detailed in [1]", wherein [1] is Tellez 2) to generate at least two batches of tiles (see section 2.1, "We created an artificial training dataset consisting of pairs of tissue patches representing either the same or different tissue morphology. Positive pairs consisted of patches extracted from the exact same WSI location (although different augmentation). Negative pairs consisted of patches from: a) different WSI locations, and b) neighbor locations but non-overlapping tissue"), wherein each batch of tiles includes randomly augmented views of the original set of tiles (see Tellez 2, page 2129, "We used this annotated set of samples to train CNN2 to distinguish PHH3 candidates among mitotic and non-mitotic patches. During training, we randomly applied several techniques to augment the data and prevent overfitting, namely: rotations, vertical and horizontal mirroring, elastic deformation [32], Gaussian blurring, and translations").

Tellez further teaches at least one feature extractor configured to extract a first set of features from the first batch of tiles by applying the first batch of tiles to the feature extractor and extract a second set of features from the second batch of tiles by applying the second batch of tiles to the feature extractor (see section 2.1, "We extracted relevant information from tissue images using a CNN-based encoder. This network mapped tissue patches into embedding vectors". See also Figure 2 and the arguments noted above); wherein the processor is further configured to train the feature extractor using a contrastive loss between pairs of the first set of features and the second set of features to bring matching pairs of tiles closer and different pairs of tiles further apart (see section 2.1, "Third, we proposed and trained a novel contrastive encoding scheme…A model composed of two encoders sharing weights, followed by a feature-wise concatenation operation and an MLP, was trained to distinguish between the two classes (see Fig. 2). Because two same patches present the same tissue morphology with heavily altered appearance, the encoder learns to extract high-level semantic features instead of low-level pixel ones, an advantage over encoders based on reconstruction error").

Method claim 1 recites similar limitations as claim 14, and is rejected under similar rationale.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 8 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Rony et al., "Deep weakly-supervised learning methods for classification and localization in histology images: a survey" (published at https://arxiv.org/abs/1909.03354v1, September 2019) in view of Tellez et al., "Gigapixel Whole-Slide Image Classification Using Unsupervised Image Compression And Contrastive Training" (published April 2018, cited on the IDS filed 06/22/2023).

Regarding claim 21, Rony teaches a system for training a weakly-supervised machine learning model, the system comprising: an input (see Rony Figure 8, input image) for receiving a first set of histology images (see section 1, "The advent of Whole Slide Imaging (WSI) scanners (He et al., 2012), which can perform cost effective and high-throughput digitization of histology slides") having global labels (see section 3, "This section presents a review of state-of-the-art deep WSL models that can be trained to simultaneously perform two tasks– image classification and object localization– using only WSIs annotated with global labels"); a trained feature extractor configured to generate a plurality of extracted features from the first set of histology images (see section 3.2.1, "1) Spatial pooling. In this category, the beginning of the pipeline is usually the same for all techniques: a CNN extracts K feature maps F ∈ R^(K×H×W), where K is the number of feature maps which is architecture-dependent"); wherein the weakly-supervised machine learning model is trained using the plurality of extracted features extracted from the first set of histology images having global labels (see section 3.4, "Among bottom-up methods, we find weakly supervised localization methods based on a spatial pooling allowing localization of objects after being trained using using [sic] global labels only").

Rony does not expressly teach wherein the trained feature extractor is trained using the method of claim 1; and wherein the trained feature extractor is trained using a contrastive loss between pairs of a first set of features and a second set of features extracted from a second set of histology images and the second set of histology images are annotation-free.
However, as noted above, Tellez, in a similar invention in the same field of endeavor, teaches training a feature extractor using the method of claim 1 and further teaches wherein the trained feature extractor is trained using a contrastive loss between pairs of a first set of features and a second set of features extracted from a second set of histology images (see section 2.1, "Third, we proposed and trained a novel contrastive encoding scheme…A model composed of two encoders sharing weights, followed by a feature-wise concatenation operation and an MLP, was trained to distinguish between the two classes (see Fig. 2). Because two same patches present the same tissue morphology with heavily altered appearance, the encoder learns to extract high-level semantic features instead of low-level pixel ones, an advantage over encoders based on reconstruction error") and the second set of histology images are annotation-free (see section 3, "We used Camelyon16 data [4] to train and evaluate our methodology. We divided the set of slides into training (180), validation (90) and test (128). Each slide is associated with a binary label indicating the presence of tumor metastasis". While a binary label is mentioned, the present published application distinguishes these from an "annotation" in paragraphs [0031] and [0035]). One of ordinary skill in the art before the effective filing date of the invention would have found it obvious, as a matter of simple substitution, to replace the feature extractor of Rony with that of Tellez to yield the predictable results of successfully analyzing the histology images.

Method claim 8 recites similar limitations as claim 21, and is rejected under similar rationale.

Claims 27-36, 38, 40, and 52 are rejected under 35 U.S.C. 103 as being unpatentable over You et al., "Real-time intraoperative diagnosis by deep neural network driven multiphoton virtual histology" (published in npj Precision Oncology, Vol. 3, Article No. 33, December 17, 2019) in view of Tellez et al., "Gigapixel Whole-Slide Image Classification Using Unsupervised Image Compression And Contrastive Training" (published April 2018, cited on the IDS filed 06/22/2023).

Regarding claim 52, You teaches a system for determining a plurality of regions of interest in an input histology image, comprising: an image processor within a processing device (see You Figure 1, computer), the image processor configured to receive an input histology image and tile the input histology image into a set of tiles (see Figure 2, input image which is tiled, and caption); a trained feature extractor for extracting features from each tile (see Figure 2, DNN, and page 5, first column, "In search of an intuitive understanding of the image features used by the trained DNN, we first extracted for each sample tile the neuron activity profile in the penultimate layer of the network. This 512-dimensional vector acts as input to the final neuron that makes the decision to classify the image as cancer or normal, and may thus be considered as a compact representation of the image that captures its salient features for discerning its class"); a clustering module within the processing device, the clustering module configured to cluster the extracted features to assign each tile to one of a plurality of regions of interest (see Figure 5 and page 5, first column, "This allows us to visualize the collection of images on a 'canvas' where images are clustered by their mutual similarity as defined by the DNN. We can see in the resulting plot (Fig. 5) that the DNN tends to cluster tiles with similar optical signatures and shapes", wherein the DNN also acts as the clustering module); and an output device to output the plurality of regions of interest (see Figure 3 and page 3, first column, "The DNN predicts a cancer versus normal probability score for each tile, allowing us to create a heatmap that highlights regions likely to be cancerous in each image (Fig 3a, b). It is to be noted that a significant portion of breast tissue is adipocytes").

You does not expressly teach the trained feature extractor [is] trained by using a method according to claim 1 [and] trained with an unsupervised machine learning algorithm using a set of training images. However, Tellez, in a similar invention in the same field of endeavor, teaches a trained feature encoder (see Tellez section 2.1, "We extracted relevant information from tissue images using a CNN-based encoder. This network mapped tissue patches into embedding vectors") configured to act on tiled (see section 3, "We trained instances of the five different encoders explained in Sec. 2.1 using a patch size of 128x128 px extracted at 0.5 um/px resolution") histology images (see Abstract) as taught in You, wherein the trained feature extractor [is] trained by using a method according to claim 1 (see rejection above); and the trained feature extractor [is] trained with an unsupervised machine learning algorithm using a set of training images (see Abstract). One of ordinary skill in the art before the effective filing date of the invention would have found it obvious to combine the teaching of using an unsupervised algorithm as taught in Tellez with the system taught in You, the motivation being to save processing resources by not using a supervisor in the training process.

Method claim 27 recites similar limitations as claim 52, and is rejected under similar rationale.

Regarding claim 40, the claim recites a non-transitory machine-readable medium with a memory storing code instructions which, when executed by a processor, cause the processor to perform operations as those recited in claim 27, which You in view of Tellez further teaches (see You Figure 1, computer, which is well known to contain instructions in a memory for a processor).

Regarding claim 28, You in view of Tellez teaches all the limitations of claim 27, and further teaches wherein each of the images in the training set of images is annotation-free (see Tellez section 3, "We used Camelyon16 data [4] to train and evaluate our methodology. We divided the set of slides into training (180), validation (90) and test (128). Each slide is associated with a binary label indicating the presence of tumor metastasis". While a binary label is mentioned, the present published application distinguishes these from an "annotation" in paragraphs [0031] and [0035]).

Regarding claim 29, You in view of Tellez teaches all the limitations of claim 27, and further teaches wherein the input histology image and the training set of images are from the same domain (see Tellez section 3, "We used Camelyon16 data [4] to train and evaluate our methodology. We divided the set of slides into training (180), validation (90) and test (128). Each slide is associated with a binary label indicating the presence of tumor metastasis", wherein the [4] citation on page 3 shows the training images are for breast cancer, and see You Abstract).
Regarding claim 30, You in view of Tellez teaches all the limitations of claim 27, but does not expressly teach wherein the clustering is a K-Means clustering. However, one of ordinary skill in the art before the effective filing date of the invention would have found it obvious, as a matter of simple substitution, to replace the clustering of You in view of Tellez with the K-Means clustering claimed to yield the predictable results of successfully separating the tiles appropriately.

Regarding claim 31, You in view of Tellez teaches all the limitations of claim 27, and further teaches wherein the input histology image is a whole slide image (see You caption for Figure 2 and Tellez Abstract).

Regarding claim 32, You in view of Tellez teaches all the limitations of claim 27, and further teaches wherein the input histology image is derived from a patient tissue sample (see You caption for Figure 1).

Regarding claim 33, You in view of Tellez teaches all the limitations of claim 32, and further teaches wherein the patient tissue sample is known or suspected to contain a tumor (see You Abstract).

Regarding claim 34, You in view of Tellez teaches all the limitations of claim 27, but does not expressly teach wherein the unsupervised machine learning algorithm is a self-supervised machine learning algorithm. However, one of ordinary skill in the art before the effective filing date of the invention would have found it obvious, as a matter of simple substitution, to replace the unsupervised machine learning algorithm of You in view of Tellez with a self-supervised machine learning algorithm as claimed to yield the predictable results of successfully training and using the feature extractor.

Regarding claim 35, You in view of Tellez teaches all the limitations of claim 27, and further teaches wherein the unsupervised machine learning algorithm is a contrastive loss machine learning algorithm (see Tellez section 2.1, "Third, we proposed and trained a novel contrastive encoding scheme…A model composed of two encoders sharing weights, followed by a feature-wise concatenation operation and an MLP, was trained to distinguish between the two classes (see Fig. 2). Because two same patches present the same tissue morphology with heavily altered appearance, the encoder learns to extract high-level semantic features instead of low-level pixel ones, an advantage over encoders based on reconstruction error"). You in view of Tellez does not expressly teach the algorithm includ[es] one of Momentum Contrast or Momentum Contrast v2. However, one of ordinary skill in the art before the effective filing date of the invention would have found it obvious, as a matter of simple substitution, to replace the algorithm of You in view of Tellez with those claimed to yield the predictable results of successfully training and using the feature extractor.

Regarding claim 36, You in view of Tellez teaches all the limitations of claim 27, but does not expressly teach wherein the trained feature extractor is a ResNet type of feature extractor. However, one of ordinary skill in the art before the effective filing date of the invention would have found it obvious, as a matter of simple substitution, to replace the trained feature extractor of You in view of Tellez with that claimed to yield the predictable results of successfully training and using the feature extractor.
Regarding claim 38, You in view of Tellez teaches all the limitations of claim 27, and further teaches annotating at least one cluster of extracted features (see You Figure 5 and caption).

Claim 37 is rejected under 35 U.S.C. 103 as being unpatentable over You et al., "Real-time intraoperative diagnosis by deep neural network driven multiphoton virtual histology" (published in npj Precision Oncology, Vol. 3, Article No. 33, December 17, 2019) in view of Tellez et al., "Gigapixel Whole-Slide Image Classification Using Unsupervised Image Compression And Contrastive Training" (published April 2018, cited on the IDS filed 06/22/2023) and Hall et al., U.S. Publication No. 2018/0180590.

Regarding claim 37, You in view of Tellez teaches all the limitations of claim 27, but does not expressly teach removing background segments from the input histology image. However, Hall, in a similar invention in the same field of endeavor, teaches a method of analyzing an input histology image (see Hall paragraph [0049]) as taught in You in view of Tellez, further comprising removing background segments from the input histology image (see paragraph [0049]). One of ordinary skill in the art before the effective filing date of the invention would have found it obvious to combine the teaching of removing background from a histology image as taught in Hall with the method taught in You in view of Tellez, the motivation being to save processing resources by not analyzing background.

Claim 39 is rejected under 35 U.S.C. 103 as being unpatentable over You et al., "Real-time intraoperative diagnosis by deep neural network driven multiphoton virtual histology" (published in npj Precision Oncology, Vol. 3, Article No. 33, December 17, 2019) in view of Tellez et al., "Gigapixel Whole-Slide Image Classification Using Unsupervised Image Compression And Contrastive Training" (published April 2018, cited on the IDS filed 06/22/2023) and Naylor et al., "PREDICTING RESIDUAL CANCER BURDEN IN A TRIPLE NEGATIVE BREAST CANCER COHORT" (published in 2019 IEEE 16th International Symposium on Biomedical Imaging, pages 933-937, April 2019).

Regarding claim 39, You in view of Tellez teaches all the limitations of claim 27, but does not expressly teach quantifying the input histology image by a level of expression of a plurality of clusters. However, Naylor, in a similar invention in the same field of endeavor, teaches a method comprising tiling an input histology image (see Naylor Abstract) into a plurality of tiles, extracting features from the plurality of tiles (see section 4.2, "This mapping can be divided into 3 steps: 1) finding tissue areas in the WSI, 2) overlaying a grid on this tissue area and 3) encoding each tile of size 224 × 224 to a vector"), and clustering the extracted features (see section 4.2.2, "2. cluster-based down sampling: we first cluster all feature vectors from one patient into n_i = 40 clusters and then sample the same (small) number of feature vectors from each cluster so that the amount of feature vectors is constant across patients") as taught in You in view of Tellez, further comprising quantifying the input histology image by a level of expression of a plurality of clusters (see section 4.2.2, "Once each tile is clustered, we thus represent a WSI by the percentage of patches belonging to each of the k clusters. Hence, we represent a patient's biopsy by a vector z(i) of size k"). One of ordinary skill in the art before the effective filing date of the invention would have found it obvious to combine the teaching of quantifying input histology images based on clusters as taught in Naylor with the method taught in You in view of Tellez, the motivation being to more easily analyze the results of the clustering.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CASEY L KRETZER, whose telephone number is (571) 272-5639. The examiner can normally be reached M-F, 10:00 AM-7:00 PM Pacific Time. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, David Payne, can be reached at (571) 272-3024. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CASEY L KRETZER/
Primary Examiner, Art Unit 2635
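To make the disputed claim 1/14 limitation concrete: the claimed training loop tiles the images, builds two randomly augmented batches of the same tiles, runs both batches through a single feature extractor, and applies a contrastive loss that pulls matching pairs together and pushes mismatched pairs apart. The sketch below is a minimal illustration of that pattern in PyTorch, assuming a SimCLR-style NT-Xent loss; it is not the applicant's implementation, and Tellez's own scheme differs (two weight-sharing encoders feeding a concatenation and an MLP classifier). The augment routine, the toy network, and all hyperparameters are placeholders.

```python
# Illustrative sketch only (assumed SimCLR-style setup, not the claimed
# implementation). Two augmented views of the same tiles, one shared
# extractor, contrastive (NT-Xent) loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

def augment(tiles: torch.Tensor) -> torch.Tensor:
    """Random view of a batch of tiles (flips plus noise). A lightweight
    stand-in for the routines Tellez 2 lists: rotations, mirroring,
    elastic deformation, Gaussian blurring, translations."""
    out = tiles
    if torch.rand(1).item() < 0.5:
        out = torch.flip(out, dims=[-1])   # horizontal mirror
    if torch.rand(1).item() < 0.5:
        out = torch.flip(out, dims=[-2])   # vertical mirror
    return out + 0.05 * torch.randn_like(out)

def nt_xent(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """Matching pairs (z1[i], z2[i]) are pulled together; every other
    pair in the two batches is pushed apart."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)          # (2N, D)
    sim = z @ z.t() / tau                                # cosine similarities
    n = z1.size(0)
    sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool), float("-inf"))
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# Toy feature extractor over 128x128 tiles (the patch size Tellez reports).
extractor = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 64),
)
opt = torch.optim.Adam(extractor.parameters(), lr=1e-3)

tiles = torch.rand(8, 3, 128, 128)            # one batch of unlabeled tiles
for _ in range(3):                            # a few illustrative steps
    b1, b2 = augment(tiles), augment(tiles)   # two randomly augmented batches
    loss = nt_xent(extractor(b1), extractor(b2))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The key property is visible in nt_xent: row i of the similarity matrix treats view i's counterpart as the correct class, so training increases similarity for matched augmented views and decreases it for all others, which is the "bring matching pairs closer, push different pairs apart" language of the claims.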
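The claim 52 pipeline (tile the input image, extract per-tile features, cluster the features so each tile is assigned to a region of interest) and the Naylor-style quantification cited against claim 39 (describe a slide by the fraction of tiles falling in each cluster) are similarly compact. Below is a hedged sketch assuming scikit-learn's KMeans over stand-in random features; k and every name here are arbitrary illustrative choices, not values from the record.

```python
# Illustrative sketch only: K-Means over per-tile features (claims 30/52),
# then a Naylor-style per-cluster percentage descriptor (claim 39).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
features = rng.normal(size=(500, 64))   # stand-in: one embedding per tile

k = 8                                   # number of candidate regions (arbitrary)
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(features)
# Each tile is now assigned to one of k clusters, i.e. candidate regions
# of interest on the slide.

# Slide-level descriptor z: fraction of tiles belonging to each cluster.
z = np.bincount(labels, minlength=k) / len(labels)
print(z, z.sum())                       # z sums to 1 by construction
```

In Naylor's formulation this z (a vector of size k) is the per-patient slide descriptor; the "level of expression" of a cluster in the claim language corresponds to its entry in z.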

Prosecution Timeline

May 26, 2023
Application Filed
Nov 05, 2025
Non-Final Rejection — §102, §103
Jan 06, 2026
Response Filed
Jan 26, 2026
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602894
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM
Granted Apr 14, 2026 • 2y 5m to grant

Patent 12593971
SYSTEMS FOR TRACKING DISEASE PROGRESSION IN A PATIENT
Granted Apr 07, 2026 • 2y 5m to grant

Patent 12597285
IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM
Granted Apr 07, 2026 • 2y 5m to grant

Patent 12592088
ANCHOR FOR LINE RECOGNITION
Granted Mar 31, 2026 • 2y 5m to grant

Patent 12591970
METHODS AND SYSTEMS FOR DETERMINING HEMODYNAMIC PARAMETERS
Granted Mar 31, 2026 • 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 87%
With Interview: 99% (+12.2%)
Median Time to Grant: 2y 2m
PTA Risk: Moderate

Based on 700 resolved cases by this examiner. Grant probability derived from career allow rate.
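These headline figures reduce to simple arithmetic on the career data shown above (608 granted of 700 resolved). The snippet shows the apparent derivation, assuming the interview lift is additive in percentage points and capped at 100%; that is an assumption about this page's methodology, not a documented formula.

```python
# Apparent derivation of the headline numbers (assumed methodology).
granted, resolved = 608, 700
base = 100 * granted / resolved            # 86.86% -> displayed as 87%
with_interview = min(base + 12.2, 100.0)   # +12.2 pt lift -> 99.06% -> 99%
print(round(base), round(with_interview))  # 87 99
```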
