Prosecution Insights
Last updated: April 19, 2026
Application No. 18/294,059

ORGAN IDENTIFICATION USING AI

Non-Final OA: §101, §103
Filed: Jan 31, 2024
Examiner: SHIFERAW, HENOK ASRES
Art Unit: 2676
Tech Center: 2600 — Communications
Assignee: Hoffmann-La Roche, Inc.
OA Round: 1 (Non-Final)
Grant Probability: 90% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 1y 10m
With Interview: 91%

Examiner Intelligence

Career Allow Rate: 90%, above average (518 granted / 578 resolved; +27.6% vs TC avg)
Interview Lift: minimal (~+2%); +1.5% across resolved cases with interview
Avg Prosecution: 1y 10m (fast prosecutor); 19 applications currently pending
Career history: 597 total applications across all art units

Statute-Specific Performance

§101: 12.3% (-27.7% vs TC avg)
§103: 72.7% (+32.7% vs TC avg)
§102: 6.2% (-33.8% vs TC avg)
§112: 4.0% (-36.0% vs TC avg)
Tech Center averages are estimates. Based on career data from 578 resolved cases.

Office Action

Rejections: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy for EP22190786.0, filed 08/11/2021, has been filed in the instant application.

Preliminary Amendment

The preliminary amendment filed on 01/31/2024 has been entered and made of record. Claims 1-10 and 12 are amended. Claim 11 is cancelled. Claims 1-10 and 12-13 are pending.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 01/31/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement has been considered by the examiner.

Claim Rejections - 35 USC § 101

Claim 12 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. Claim 12 is drawn to “a recording medium readable by a computer and having recorded thereon a computer program including instructions.” A “computer readable medium” is defined in the specification to include: “the computer-readable storage media can be non-transitory, e.g., as one or more instructions executable by a cloud computing platform and stored on a tangible storage device” [¶0034]. The broadest reasonable interpretation of a claim drawn to a computer readable medium (also called machine readable medium and other such variations) typically covers forms of non-transitory tangible media and transitory propagating signals per se in view of the ordinary and customary meaning of computer readable media. See MPEP 2111.01. When the broadest reasonable interpretation of a claim covers a signal per se, the claim covers non-statutory subject matter. The claims, as defined in the specification, cover both non-statutory subject matter and statutory subject matter. A claim drawn to such a computer readable medium that covers both transitory and non-transitory embodiments may be amended to narrow the claim to cover only statutory embodiments by adding the limitation "non-transitory" to the claim.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 6-10, and 12-13 are rejected under 35 U.S.C. 103 as being unpatentable over Wetteland et al. (Wetteland, Rune, et al. "A multiscale approach for whole-slide image segmentation of five tissue classes in urothelial carcinoma slides." Technology in Cancer Research & Treatment 19 (2020): 1533033820946787) (hereafter, “Wetteland”), disclosed in IDS, in view of Fuchs et al. (US 2019/0295252 A1) (hereafter, “Fuchs”).

Regarding claim 1, Wetteland discloses a method of identifying a tissue type [this paper proposes an automatic method for classifying WSI tiles from urothelial carcinoma cases into the following categories: urothelium, stroma, muscle, damaged tissue, blood, and background, pg. 2, right column, Introduction, second paragraph] in digital histological images of human or animal tissue [the data material consists of digital whole-slide images from patients diagnosed with primary papillary urothelial carcinoma, pg. 4, right column, Data Material, first paragraph ... the prepared tissue samples are scanned at 400x magnification using the Leica SCN400 slide scanner, producing image files in Leica’s SCN file format, pg. 5, left column, Data Material, first paragraph], the method comprising: training a convolutional neural network [this paper proposes an automatic multiscale system, merging inputs of 25x, 100x, and 400x magnification, based on a CNN for classification of whole-slide histological images into six classes, pg. 4, right column, Aims and Contributions, first paragraph] to identify a particular target tissue type [an expert pathologist carefully annotated selected regions in the WSI, where each region includes one of the six classes. A total of 239 regions belonging to the five foreground classes was annotated in WSI from 32 unique patients ... the raw pixel intensity is used to train the models, pg. 5, left column, CV dataset, first paragraph; right column, second paragraph ... all models were trained using the SGD optimizer ... the models were trained in a stratified 5-fold cross-validation fashion, pg. 7, right column, Training procedure and model selection, first paragraph; second paragraph] in a plurality of training data sets of digital histological images of human or animal tissue [to maximize the number of valid tiles, an automatic search algorithm was developed ... tile sizes of 64 x 64, 128 x 128, and 256 x 256 pixels were tested when extracting tiles with the automatic program, pg. 5, left column, CV dataset, second paragraph; third paragraph], inputting a test data set of digital histological images of human or animal tissue into the trained convolutional neural network [Figure 3; the system accepts input WSI of any size and outputs a corresponding segmentation image from the input. The system is tested on the seven WSIs in the inference dataset ... tiles selected as non-background are then extracted and fed to the multiscale model for further classification, pg. 6, left column, Proposed System, first paragraph, second paragraph], and receiving as an output result of the convolutional neural network a probability value that the inputted test data set corresponds to the target tissue type [Figure 3; tiles are classified according to the highest prediction score, pg. 6, right column, Proposed System, second paragraph ... each class is given a separate color, and the final segmentation image is saved, pg. 7, left column, Proposed System, second paragraph], wherein the training of the convolutional neural network [Figure 4; the TRI-CNN model has only one configuration: TRI-25x-100x-400x, and is depicted in Figure 4, pg. 7, left column, Multiscale model structure, first paragraph ... all models were trained using the SGD optimizer, pg. 7, right column, Training procedure and model selection, first paragraph] comprises performing with the plurality of training data sets [the prepared tissue samples are scanned at 400x magnification using the Leica SCN400 slide scanner, producing image files in Leica’s SCN file format. The images are stored as a pyramidal tiled image with several down-sampled versions of the base image ... the Vips library is capable of extracting the base image as well as the down-sampled versions, making it easy to extract the dataset at each resolution, pg. 5, left column, Data Material, first paragraph] of digital histological images of human or animal tissue [the data material consists of digital whole-slide images from patients, pg. 4, right column, Data Material, first paragraph] the steps of: selecting a target tissue area of a training data set [Figure 2; an expert pathologist carefully annotated selected regions in the WSI, where each region includes one of the six classes, pg. 5, left column, CV dataset, first paragraph ... the annotated region (marked with red at level 0) determines which tiles to extract. Tiles are then extracted at the desired location from all three levels, pg. 4, Figure 2 citation of Related Work section], dividing the target tissue area into a first set of tiles of constant size and having a first image magnification [Figure 2 & 3; when extracting tiles from the WSI, a grid of nonoverlapping tiles was superimposed upon the annotated region at 400x magnification level, pg. 5, left column, CV dataset, second paragraph], dividing the target tissue area into at least one second set of tiles of constant size and having a second image magnification different from the first image magnification [25x magnification: Figure 2 & 3; the algorithm checks the number of valid tiles for all possible positions of the grid. The grid location with the highest number of valid tiles was used to extract the dataset from that region ... when a tile is saved from the region, the corresponding tiles from 25x and 100x magnification were also extracted in such a manner that the center pixel is the same in all three magnification levels, pg. 5, left column, CV dataset, second paragraph; third paragraph], inputting the first set of tiles and the at least one second set of tiles into the convolutional neural network [Figure 4; each input is fixed at 128 x 128 x 3 pixels, which is the size of each tile. The input is fed into a pre-trained VGG16 network which acts as a feature extractor, pg. 7, left column, Multiscale model structure, second paragraph], wherein the convolutional neural network is an at least two-headed convolutional neural network in which the first set of tiles and the at least one second set of tiles are processed in parallel whereby features of the first set of tiles and the at least one second set of tiles are concatenated [the DI- and TRI-CNN models have two and three parallel VGG16 branches, respectively, resulting in multiple feature vectors. These feature vectors are concatenated before entering the classification network, pg. 7, right column, Multiscale model structure, first paragraph], and labelling output results of the convolutional neural network with respect to the target tissue type [Figure 2 & 4; followed by a global average pooling (GAP) layer providing a feature vector representation of the input. This feature vector is then fed into a classification network consisting of two fully-connected (FC) layers, each followed by a dropout layer, and a final softmax layer with one output node for each class ... tiles are classified according to the highest prediction score, pg. 7, left column, Multiscale model structure, second paragraph; pg. 6, right column, Proposed System, second paragraph].

Wetteland fails to explicitly disclose a computer-implemented method. However, Fuchs teaches a computer-implemented method [the system may include an inference system maintainable on one or more processors ... the feature classifier may select a subset of tiles from the plurality of tiles for the biomedical image by applying the inference system to the plurality of tiles, para 0012]. It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to modify Wetteland’s reference in view of Fuchs to provide sufficient processor power, as recognized by Fuchs. Further, one skilled in the art could have combined the elements as described above with known method with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Fuchs with Wetteland to obtain the invention as specified in claim 1.

Regarding claim 2, which claim 1 is incorporated, Wetteland discloses wherein the size of the tiles of all the sets of tiles are identical [a tile size of 128 x 128 was thus chosen, pg. 5, left column, CV dataset, third paragraph ... each input is fixed at 128 x 128 x 3 pixels which is the size of each tile, pg. 7, left column, Multiscale model structure, second paragraph].

Regarding claim 3, which claim 1 is incorporated, Wetteland discloses wherein the centroids of the first set of tiles and the at least one second set of tiles are identical [Figure 2; when a tile is saved from the region, the corresponding tiles from 25x and 100x magnification were also extracted in such a manner that the center pixel is the same in all three magnification levels, pg. 5, left column, CV dataset, third paragraph].

Regarding claim 4, which claim 1 is incorporated, Wetteland discloses wherein the training data sets and test data set of digital histological images of human or animal tissue are whole slide images [Wetteland, two datasets were collected from the described data material, referred to as the CV dataset and the inference dataset, pg. 5, left column, Data Material, second paragraph ... an expert pathologist carefully annotated selected regions in the WSI, where each region includes one of the six classes. A total of 239 regions belonging to the five foreground classes was annotated in WSI from 32 unique patients, pg. 5, left column, CV dataset, first paragraph ... in addition to the CV dataset, seven WSIs were selected to be used as inference, pg. 5, right column, Inference dataset, first paragraph].

Regarding claim 6, which claim 1 is incorporated, Wetteland discloses wherein dividing the target tissue area into the first tile set and the at least one second tile set comprises: extracting a foreground mask of the target tissue area [Figure 3; an expert pathologist carefully annotated selected regions in the WSI, where each region includes one of the six classes, pg. 5, left column, CV dataset, first paragraph ... a binary background mask is produced from the 25x level of the WSI, generated by checking the pixel intensity value and splitting them into background or non-background tiles, pg. 6, left column, Proposed System, second paragraph], providing annotations classifying areas of the target tissue area [Figure 3; a total of 239 regions belonging to the five foreground classes was annotated in WSI, pg. 5, left column, CV dataset, first paragraph ... tiles are classified according to the highest prediction score, pg. 6, right column, Proposed System, second paragraph ... each class is given a separate color, pg. 7, left column, Proposed System, second paragraph], and merging the annotations with the foreground mask [Figure 3; a grid of nonoverlapping tiles was superimposed upon the annotated region, pg. 5, left column, CV dataset, second paragraph ... the final segmentation image is saved, pg. 7, left column, Proposed System, second paragraph].

Regarding claim 7, which claim 1 is incorporated, Wetteland fails to explicitly disclose wherein the first set of tiles and the at least one second set of tiles correspond to image magnification factors of 1.25, 5, and 10. However, Fuchs discloses wherein the first set of tiles and the at least one second set of tiles correspond to image magnification factors of 1.25, 5, and 10 [magnification factors of 5 and 10 (the examiner interprets the claim limitation to require only 2 magnification factors as recited): Figure 5; tiling can be performed at different magnification levels and with various levels of overlap between adjacent tiles. In this work three magnification levels (5×, 10× and 20×) were investigated, para 0074]. It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to modify Wetteland’s reference in view of Fuchs to improve accuracy by integrating information at different magnifications, as recognized by Fuchs. Further, one skilled in the art could have combined the elements as described above with known method with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Fuchs with Wetteland to obtain the invention as specified in claim 7.

Regarding claim 8, which claim 1 is incorporated, Wetteland discloses further comprising applying a binary training model for annotation of the target tissue type [Figure 5 & 7; Each model was therefore also tested with this binary-class approach to see if it improved classification results for urothelium tissue, pg. 8, left column, Training procedure and model selection, second paragraph ... the binary-class segmentation image only outlines the urothelium class, pg. 11, right column, Binary-class vs. multiclass segmentation images, first paragraph].

Regarding claim 9, which claim 1 is incorporated, Wetteland fails to explicitly disclose wherein the training of the convolutional neural network comprises random horizontal and/or vertical flips of the first set of tiles and the at least one second set of tiles. However, Fuchs teaches wherein the training of the convolutional neural network comprises random horizontal and/or vertical flips of the first set of tiles and the at least one second set of tiles [during training, tiles are augmented on the fly with random horizontal flips, para 0070]. It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to modify Wetteland’s reference in view of Fuchs to determine if augmentation during training could lower generalization error, as recognized by Fuchs. Further, one skilled in the art could have combined the elements as described above with known method with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Fuchs with Wetteland to obtain the invention as specified in claim 9.

Regarding claim 10, which claim 1 is incorporated, Wetteland discloses wherein the training of the convolutional neural network comprises variations of the color, hue, saturation, brightness and/or contrast of the tile images [no normalization of the stain color is performed on the data, and the raw pixel intensity is used to train the models, pg. 5, right column, CV dataset, second paragraph].

Regarding claim 12, which claim 1 is incorporated, Wetteland fails to explicitly disclose a recording medium readable by a computer and having recorded thereon a computer program including instructions for executing the steps of the method. However, Fuchs teaches a recording medium readable by a computer and having recorded thereon a computer program including instructions for executing the steps of the method [modules may be implemented in hardware and/or as computer instructions on a non-transient computer readable storage medium, para 0248]. It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to modify Wetteland’s reference in view of Fuchs for structural independence, as recognized by Fuchs. Further, one skilled in the art could have combined the elements as described above with known method with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Fuchs with Wetteland to obtain the invention as specified in claim 12.

Regarding claim 13, which claim 1 is incorporated, Wetteland fails to explicitly disclose a processing device comprising a storage unit having stored thereon a trained convolutional neural network. However, Fuchs teaches a processing device comprising a storage unit having stored thereon a trained [the model applier 3218 may establish the inference model 3212. Under training mode for the image classification system, the model applier 3218 may initialize the inference model 3212. Under runtime mode, the model applier 3218 may identify the previously established inference model 3212, para 0169] convolutional neural network [the image classification system 3202 may include at least one feature classifier 3208, at least one model trainer 3210, at least one inference model 3212 (sometimes referred herein as an inference system) ... each of the components of system 3200 may be implemented using hardware (e.g., processing circuitry and memory) ... the inference model 3212 may be a convolutional neural network (CNN), para 0164, 0170]. It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to modify Wetteland’s reference in view of Fuchs to allow for modification of the model, as recognized by Fuchs. Further, one skilled in the art could have combined the elements as described above with known method with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Fuchs with Wetteland to obtain the invention as specified in claim 13.

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Wetteland ("A multiscale approach for whole-slide image segmentation of five tissue classes in urothelial carcinoma slides.") in view of Fuchs (US 2019/0295252 A1), as applied above, and further in view of Khorshed et al. (Khorshed, Tarek, Mohamed N. Moustafa, and Ahmed Rafea. "Multi-tissue cancer classification of gene expressions using deep learning." 2020 IEEE Sixth International Conference on Big Data Computing Service and Applications (BigDataService). IEEE, 2020) (hereafter, “Khorshed”).

Regarding claim 5, which claim 1 is incorporated, neither Wetteland nor Fuchs appears to explicitly disclose wherein the target tissue type is one of identified tissue types for tissues of different organs. However, Khorshed teaches wherein the target tissue type is one of identified tissue types for tissues of different organs [the first stage depends on collecting human samples representing multiple types of cancer tumors collected from multiple tissues spanning different organs across the body, pg. 129, right column, B. Deep Learning System Architecture, first paragraph ... the second stage represents building and training a deep CNN to automatically learn the molecular signatures of the full set of whole-transcriptome gene expressions and produce a trained model which can be used for classification of cancer tumors, pg. 130, left column, B. Deep Learning System Architecture, first paragraph]. It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to modify Wetteland’s reference in view of Fuchs and further in view of Khorshed to amplify the discrimination score for classification and detecting more complex genomic alterations at multiple organ sites of origin, as recognized by Khorshed. Further, one skilled in the art could have combined the elements as described above with known method with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Khorshed with Wetteland and Fuchs to obtain the invention as specified in claim 5.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

US 2024/0037747 A1 to Raedt et al. discloses a computer-implemented system for determining an overall-classifier for source-histological images that generates first tiles and second tiles and uses machine learning networks to process a first-classifier and a second-classifier for the first tiles and second tiles, respectively, with a classifier combiner that combines the first-classifier and second-classifier to determine an overall-classifier.

US 2020/0250398 A1 to Courtiol et al. discloses a method and apparatus that classifies an image, including tiling a region of interest of an input image into a set of tiles and extracting a feature vector for each tile with a convolutional neural network.

US 2021/0192732 A1 to Al-Qaisi et al. discloses systems and methods for a machine learning model to segment an optical coherence tomography (OCT) image by labeling different tissues in the first OCT image using a graph search algorithm and extracting first image tiles, extracting second image tiles by manipulating at least one image tile from the first image tiles through rotating and/or flipping, and training the machine learning model using the first image tiles and second image tiles to perform segmentation of a second OCT image with a trained machine learning model.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TOLUWANI MARY-JANE IJASEUN, whose telephone number is (571) 270-1877. The examiner can normally be reached Monday - Friday, 7:30 AM - 4 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Henok Shiferaw, can be reached at (571) 272-4637. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TOLUWANI MARY-JANE IJASEUN/
Examiner, Art Unit 2676

/Henok Shiferaw/
Supervisory Patent Examiner, Art Unit 2676
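For orientation, the multiscale pipeline that the rejection reads onto claim 1 (co-centered tiles extracted at several magnification levels, parallel VGG16 feature extractors, global average pooling, feature concatenation, and a small fully-connected head) can be sketched roughly as follows. This is a minimal illustration assuming Python with PyTorch/torchvision and numpy-style arrays; the helper names extract_co_centered_tiles, background_mask, and MultiscaleCNN are hypothetical and do not come from the application or the cited references.

# Hedged sketch of the multiscale classification idea described in the
# rejection (co-centered tiles at several magnifications, parallel VGG16
# branches, concatenated features). Illustrative only; all details below
# are assumptions, not the applicant's or the cited references' code.
import torch
import torch.nn as nn
from torchvision.models import vgg16


def extract_co_centered_tiles(levels, center_xy, tile=128):
    """levels: {downsample_factor: HxWx3 array}, e.g. 1 = 400x base image,
    4 = 100x, 16 = 25x. Returns one tile per level sharing the same
    physical center pixel (boundary handling omitted for brevity)."""
    cx, cy = center_xy                      # center in base-image coordinates
    half = tile // 2
    out = {}
    for factor, img in levels.items():
        x, y = cx // factor, cy // factor   # same physical location at each level
        out[factor] = img[y - half:y + half, x - half:x + half]
    return out


def background_mask(level_25x, thresh=210):
    """Crude binary background mask from the lowest-magnification level by
    intensity thresholding (cf. the background/non-background split cited
    for claim 6); expects an HxWx3 numpy array."""
    return level_25x.mean(axis=2) < thresh  # True where tissue (darker pixels)


class MultiscaleCNN(nn.Module):
    """One VGG16 feature extractor per magnification level; features are
    globally average-pooled, concatenated, and passed to a small
    fully-connected classification head (cf. the DI-/TRI-CNN structure
    quoted in the Office Action)."""

    def __init__(self, n_branches=3, n_classes=6):
        super().__init__()
        # weights=None keeps the sketch self-contained; the cited work is
        # described as using pre-trained VGG16 feature extractors.
        self.branches = nn.ModuleList(
            [vgg16(weights=None).features for _ in range(n_branches)]
        )
        self.gap = nn.AdaptiveAvgPool2d(1)  # global average pooling per branch
        self.head = nn.Sequential(          # two FC layers, each with dropout
            nn.Linear(512 * n_branches, 256), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(256, 128), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(128, n_classes),      # softmax applied at loss/inference time
        )

    def forward(self, tiles):               # tiles: list of (N, 3, 128, 128) tensors
        feats = [self.gap(b(t)).flatten(1) for b, t in zip(self.branches, tiles)]
        return self.head(torch.cat(feats, dim=1))

As a usage note, MultiscaleCNN(3, 6)([torch.randn(8, 3, 128, 128) for _ in range(3)]) returns an (8, 6) logit tensor; applying a softmax to it yields the per-class probability values that claim 1 recites as the network's output.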

Prosecution Timeline

Jan 31, 2024: Application Filed
Jan 07, 2026: Non-Final Rejection — §101, §103
Mar 27, 2026: Interview Requested
Apr 06, 2026: Applicant Interview (Telephonic)
Apr 06, 2026: Examiner Interview Summary

Precedent Cases

Applications with similar technology granted by this examiner

Patent 12597117
METHOD, PROGRAM, APPARATUS, AND SYSTEM FOR ABNORMALITY DETECTION SUCH AS FOR DETERMINING WHETHER A PLURALITY OF CONTAINERS TO BE STACKED ON A PALLET IS NORMAL OR ABNORMAL
2y 5m to grant • Granted Apr 07, 2026
Patent 12555231
DETECTING ISCHEMIC STROKE MIMIC USING DEEP LEARNING-BASED ANALYSIS OF MEDICAL IMAGES
2y 5m to grant • Granted Feb 17, 2026
Patent 12536796
REMOTE SOIL AND VEGETATION PROPERTIES DETERMINATION METHOD AND SYSTEM
2y 5m to grant • Granted Jan 27, 2026
Patent 12525056
METHOD AND DEVICE FOR MULTI-DNN-BASED FACE RECOGNITION USING PARALLEL-PROCESSING PIPELINES
2y 5m to grant • Granted Jan 13, 2026
Patent 12499506
INFERENCE MODEL CONSTRUCTION METHOD, INFERENCE MODEL CONSTRUCTION DEVICE, RECORDING MEDIUM, CONFIGURATION DEVICE, AND CONFIGURATION METHOD
2y 5m to grant • Granted Dec 16, 2025
Study what changed to get past this examiner. Based on the 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.
Powered by AI — typically takes 5-10 seconds

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 90%
With Interview (+1.5%): 91%
Median Time to Grant: 1y 10m
PTA Risk: Low
Based on 578 resolved cases by this examiner. Grant probability derived from career allow rate.
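As a rough check of how these headline figures line up with the career data above (an assumed derivation, not the tool's documented methodology):

# Assumed back-of-the-envelope reading of the projections shown above.
granted, resolved = 518, 578
base = granted / resolved          # 0.896 -> displayed as 90% grant probability
with_interview = base + 0.015      # +1.5% interview lift -> roughly 91%
print(f"{base:.1%} baseline, {with_interview:.1%} with interview")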
