Prosecution Insights
Last updated: April 19, 2026
Application No. 17/637,023

SEMANTIC IMAGE RETRIEVAL FOR WHOLE SLIDE IMAGES

Status: Non-Final Office Action (§103), OA Round 3
Filed: Feb 21, 2022
Examiner: WINDSOR, COURTNEY J
Art Unit: 2661
Tech Center: 2600 (Communications)
Assignee: Memorial Sloan Kettering Cancer Center

Outlook: Favorable. Grant probability 86% (96% with examiner interview); expected 3-4 OA rounds; estimated 2y 7m to grant.
Examiner Intelligence

Career allowance rate: 86%, above average (217 granted / 252 resolved; +24.1% vs Tech Center average)
Interview lift: moderate, +9.4% for resolved cases with an interview vs without
Typical timeline: 2y 7m average prosecution; 32 applications currently pending
Career history: 284 total applications across all art units

Statute-Specific Performance

§101:  5.4% (-34.6% vs TC avg)
§103: 51.1% (+11.1% vs TC avg)
§102: 20.5% (-19.5% vs TC avg)
§112: 17.9% (-22.1% vs TC avg)

Tech Center averages are estimates. Based on career data from 252 resolved cases.

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on December 8, 2025 has been entered.

Response to Amendment

Claims 8, 10, 12-15, 17, 19, 21, 23 and 25-27 have been amended, changing the scope and contents of the claims.

Response to Arguments

Applicant's arguments with respect to claims 8, 15 and 21 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Objections

Claim 21 is objected to because of the following informality: in claim 21, "the image retrieval model comprising" should read "the convolutional image retrieval model comprising". Appropriate correction is required.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

Claims 8-13, 15-19 and 21-26 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Publication No. 2007/0217676 to Grauman et al. (hereinafter Grauman), and further in view of CN109800768A (a machine translation obtained from Google Patents; hereinafter CN ‘768) and CN-109145958-A (a machine translation obtained from SEARCH; hereinafter CN ‘958).

Regarding independent claim 8, Grauman discloses A method (abstract, “A method for classifying or comparing objects”), comprising: identifying, by a computing system, a first biomedical image with which to find at least one of a plurality of second biomedical images (paragraph 0100, “In a classifying a novel image process 70, a novel test image is provided in step 71 to the image pre-processing process”); an encoder having a second plurality of kernel parameters to generate a first hash code for the first biomedical image based on the feature map (paragraph 0182, “We developed an efficient embedding function for the normalized partial matching similarity between sets, and show how to exploit random hyperplane properties to construct hash functions that satisfy locality-sensitive constraints. The result is a bounded approximate similarity search algorithm that finds (1+ε)-approximate nearest neighbor images in O(N^(1/(1+ε))) time for a database containing N images represented by (varying numbers of) local features;” paragraph 0186, “The technique is a novel embedding for a set of vectors that enables sub-linear time approximate similarity search over partial correspondences with random hyperplane hash functions.
The idea is to encode a point set with a weighted multi-resolution histogram in such a way that a dot product between any two such encodings will reflect the similarity of the original point sets according to an approximate, normalized partial matching between their component feature vectors.”) ; selecting, by the computing system, from the plurality of second biomedical images corresponding to a plurality of second hash codes, a subset of second biomedical images using the first hash code for the first biomedical image (paragraph 0182, “ Matching local features across images is often useful when comparing or recognizing objects or scenes, and efficient techniques for obtaining image-to-image correspondences have been developed. However, given a query image, searching a very large image database with such measures remains impractical. We introduce a sublinear time randomized hashing algorithm for indexing sets of feature vectors under their partial correspondences. We developed an efficient embedding function for the normalized partial matching similarity between sets, and show how to exploit random hyperplane properties to construct hash functions that satisfy locality-sensitive constraints;” paragraph 0187, “In image retrieval terms, this means we first take a collection of images, each one of which is represented in some fashion by a set of feature vectors. For example, each could be described by a set of SIFT descriptors extracted at salient points, or a set of shape context histograms or geometric blur descriptors extracted at edge points, or a set of color distributions, etc. The database items are prepared by mapping every set of vectors to a single high-dimensional vector via the embedding function. After this embedding, the dot product between any two examples would reflect the partial matching similarity between the original feature sets, that is, the strength of the correspondence between their local parts. 
All embedded database examples are next encoded as binary hash key strings, with each bit determined with a random hash function designed to probabilistically give similar responses for examples with similar dot products.”); and providing, by the computing system, the subset of second biomedical images identified using the first biomedical image (paragraph 0141, “Items in the cluster are then ranked according to their flow magnitudes, and examples falling within a specified top percentile of this ranking are identified as candidate prototypes. In our implementation we have evaluated the categories learned;” paragraph 0185, “Our framework applies to general matchings not only between object instances, but also between textures or categories, which often exhibit stronger appearance variation and may not be isolated from a database on the basis of a few discriminative features alone. Instead, the joint matching of all component features may be preferable and such matchings have been shown to yield good category level comparisons;” Figs 10-14 are read as being biomedical images in that they contain images of people; paragraph 0211, “A query hash key is indexed into each sorted order with a binary search, and the 2M nearest examples found this way are the approximate nearest neighbors. See Charikar as cited above for details. Having pulled up these nearest bit vectors, we then compute the actual pyramid match similarity values between their associated database pyramids and the query's pyramid. The retrieved neighbors are ranked according to these scores, and this ranked list is the final output of the algorithm.”). Grauman fails to explicitly disclose as further recited. 
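The hashing scheme quoted above (random-hyperplane hash functions over embedded feature vectors, with approximate nearest-neighbor search in Hamming space) can be illustrated with a short, generic locality-sensitive-hashing sketch. This is a toy under assumed shapes and names, not the implementation disclosed in Grauman:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_hasher(dim, n_bits):
    """One sign bit per random hyperplane (Charikar-style LSH):
    vectors with a high dot-product similarity agree on most bits."""
    planes = rng.standard_normal((n_bits, dim))
    return lambda v: (planes @ v > 0).astype(np.uint8)

def hamming(a, b):
    """Hamming distance between two binary hash keys."""
    return int(np.count_nonzero(a != b))

# Hash every (already-embedded) database vector to a binary key, then
# rank candidates for a query by Hamming distance between keys.
hasher = make_hasher(dim=64, n_bits=32)
database = rng.standard_normal((1000, 64))    # stand-ins for embedded images
keys = np.array([hasher(v) for v in database])

query = database[42] + 0.01 * rng.standard_normal(64)  # near-duplicate query
qkey = hasher(query)
dists = np.array([hamming(qkey, k) for k in keys])
candidates = np.argsort(dists)[:5]            # approximate nearest neighbors
```

Because a near-duplicate query crosses few of the random hyperplanes, its key differs from the target's key in few bits; the sorted-permuted-key lookup Grauman describes at paragraph 0211 is what makes the candidate search sublinear, rather than the linear scan used in this sketch.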
However, CN ‘768 discloses applying, by the computing system, a convolutional image retrieval model to the first biomedical image (abstract, “The invention discloses a method for learning the hash feature representation of semi-supervised GAN.”… “According to the similarity between the images contained in the image annotation information, the invention determines the image retrieval-oriented depth hash optimization target, mines the semantic information of the image annotation, and improves the accuracy of the image retrieval;” page 5, “Figure 7 is a schematic diagram of the convolutional network of the semi-supervised GANs of the present invention.”), the convolutional image retrieval model comprising: a convolution block having a first plurality of kernel parameters and a convolutional neural network (page 5, “Figure 7 is a schematic diagram of the convolutional network of the semi-supervised GANs of the present invention.”) to generate a feature map using the first biomedical image (page 6, “Extract the abstract semantic features of images using unsupervised SGANs, see Figure 1, and then map the features of the image to a low-dimensional Hamming space through a hashing method to obtain the hash feature representation of the image.;” page 13, “Conv refers to convolutional layers. in_channels is the number of input feature maps, and out_channels is the number of output feature maps. The third parameter of Conv is the size of the convolution kernel (kernel), the fourth parameter refers to the size of the stride (stride), and the last parameter is the size of the padding (padding).”), the discriminator having been trained on unlabeled images (page 8, “Figure 7 shows the structure of the discriminative network in DSH-SGANs. 
The inputs of the discriminant network are labeled images, unlabeled images, and the generated labeled images and unlabeled images, respectively.”… “ All images, both labeled and unlabeled, are used for adversarial learning with generative adversarial networks.”).

Grauman is directed toward object analysis within images (abstract, paragraph 0005) and discloses at paragraph 0182, “We demonstrate our approach applied to image retrieval for images represented by sets of local appearance features, and show that searching over correspondences is now scalable to large image databases.” CN ‘768 is directed toward “The invention discloses a method for learning the hash feature representation of semi-supervised GAN.” … “ According to the similarity between the images contained in the image annotation information, the invention determines the image retrieval-oriented depth hash optimization target, mines the semantic information of the image annotation, and improves the accuracy of the image retrieval (abstract).” As can be easily seen by one of ordinary skill in the art before the effective filing date of the claimed invention, both Grauman and CN ‘768 are directed toward similar fields of endeavor of image retrieval based on image features. Further, Grauman utilizes the work of neural networks in various forms to carry out their methodology (paragraph 0051, 0118). One well known method of training networks is to utilize unsupervised training, where training occurs based on unlabeled data. One of ordinary skill in the art before the effective filing date of the claimed invention would be aware there are often not enough labeled data sets to use for training in a supervised manner; thus, unsupervised learning allows training to occur when there are not enough labeled samples to use.
Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of CN ‘768 to ensure networks can still be trained even when there are not enough labeled data, making the network more applicable across a variety of data samples.

Grauman and CN ‘768 fail to explicitly disclose as further recited. However, CN ‘958 discloses the first plurality of kernel parameters transferred from a preliminary model (page 6, “the network parameters of the discriminator network using the pre-trained model on the Image-Net data set”), wherein the first plurality of kernel parameters are set based on assigned values learned by a discriminator within the preliminary model (page 6, “the network parameters of the discriminator network using the pre-trained model on the Image-Net data set”), the preliminary model being a separate model from the convolutional image retrieval model (page 6, “the network parameters of the discriminator network using the pre-trained model on the Image-Net data set;” the pre-trained model is read as different from the implemented model).

As noted above, Grauman and CN ‘768 are directed toward image retrieval using neural networks. Further, Grauman is directed toward object analysis within images (abstract, paragraph 0005) and discloses at paragraph 0182, “We demonstrate our approach applied to image retrieval for images represented by sets of local appearance features, and show that searching over correspondences is now scalable to large image databases.” CN ‘958 is directed toward utilizing neural networks for image processing tasks specifically related to object detection (pages 2-3). As can be easily seen by one of ordinary skill in the art, Grauman, CN ‘768 and CN ‘958 are directed toward similar fields of endeavor of image processing. Further, Grauman utilizes the work of neural networks in various forms to carry out their methodology (paragraph 0051, 0118).
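The kernel-parameter transfer read out of CN ‘958 above (initializing a new model's convolution kernels from a discriminator pre-trained on ImageNet) amounts to copying matching weight arrays from one model into another before training. A minimal sketch with hypothetical layer names and shapes, not code from the reference:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "preliminary model": a discriminator whose learned conv
# kernels will be reused. Parameters are plain arrays keyed by layer name.
preliminary = {
    "conv1.kernel": rng.standard_normal((32, 3, 3, 3)),   # learned on unlabeled data
    "conv2.kernel": rng.standard_normal((64, 32, 3, 3)),
    "head.weight":  rng.standard_normal((2, 64)),         # task-specific, not reused
}

# New retrieval model: same conv-block shapes, fresh encoder on top.
retrieval = {
    "conv1.kernel": np.zeros((32, 3, 3, 3)),
    "conv2.kernel": np.zeros((64, 32, 3, 3)),
    "encoder.weight": rng.standard_normal((48, 64)),      # trained from scratch
}

def transfer_kernels(src, dst, prefix="conv"):
    """Copy conv-kernel parameters whose names and shapes match."""
    for name, value in src.items():
        if name.startswith(prefix) and name in dst and dst[name].shape == value.shape:
            dst[name] = value.copy()
    return dst

retrieval = transfer_kernels(preliminary, retrieval)
```

Only the convolution kernels are carried over; the preliminary model's task-specific head is discarded and the retrieval model's encoder is trained from scratch, mirroring the claim's split between transferred and newly learned parameters.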
One well known method of optimizing training for networks is to independently train the discriminator, which provides the benefits of allowing the discriminator to be more robust and of reducing the processing power needed to train both networks together. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of CN ‘958 to ensure the most accurate and efficient network.

Regarding dependent claim 9, the rejection of claim 8 is incorporated herein. Additionally, Grauman in the combination further discloses wherein selecting the subset of second biomedical images further comprises: generating a distance metric between the first hash code and a corresponding second hash code of the plurality of second hash codes for a corresponding second biomedical image of the plurality of second biomedical images (Figure 25, Step 209; paragraph 0213, “Continue with step 209 to process hash keys for Hamming space approximate-NN search according to Charikar as described above and generate M=O(N^(1/(1+ε))) random k-dimensional permutations, permute all database hash keys by each one, and sort each list of permuted keys;” Hamming space is read as the distance metric); determining that the distance metric between the first hash code and corresponding second hash code is within a threshold metric (paragraph 0186, “Approximate similarity search in the Hamming space of the hash keys then identifies the approximate nearest neighbors according to the approximate matching score, in sub-linear time in the number of database examples;” the approximate matching score is read as a threshold in that the neighbors have to be of an “appropriate” value); and including, into the subset of second biomedical images, the second biomedical image corresponding to the second hash code (being that the images match and have their hash codes, they are read as already being in the subset corresponding to the second biomedical image).

Regarding dependent claim 10, the rejection of claim 8 is incorporated herein. Additionally, Grauman in the combination further discloses wherein the encoder of the convolutional image retrieval model further comprises a threshold layer having at least a subset of the second plurality of kernel parameters to generate a first discrete value for the first hash code when an input value to the threshold layer satisfies a threshold and generate a second discrete value for the first hash code when the input value to the threshold layer does not satisfy the threshold (paragraph 0182, “We introduce a sublinear time randomized hashing algorithm for indexing sets of feature vectors under their partial correspondences;” paragraph 0187, “For example, each could be described by a set of SIFT descriptors extracted at salient points, or a set of shape context histograms or geometric blur descriptors extracted at edge points, or a set of color distributions, etc;” paragraph 0185, “Our framework applies to general matchings not only between object instances, but also between textures or categories, which often exhibit stronger appearance variation and may not be isolated from a database on the basis of a few discriminative features alone;” paragraph 0184, “our embedding allows input feature sets to have varying cardinalities, and provides for hashing over a normalized partial match. This is an important advantage for handling outlier “unmatchable” features, as we will demonstrate hereinbelow;” paragraph 0185, “given a set of feature vectors, to efficiently retrieve the most similar sets from a database of sets, with similarity defined in terms of one-to-one correspondences (a matching);” being a 1:1 match is read as the threshold).

Regarding dependent claim 11, the rejection of claim 8 is incorporated herein.
Additionally, Grauman in the combination further discloses wherein each hash code of the plurality of second hash codes has a set of values defining one or more features of a corresponding labeled image, the set of values of the plurality of second hash codes corresponding to at least one of a color, a texture, an object type, and semantic information (Paragraph 0182, “We introduce a sublinear time randomized hashing algorithm for indexing sets of feature vectors under their partial correspondences;” paragraph 0187, “For example, each could be described by a set of SIFT descriptors extracted at salient points, or a set of shape context histograms or geometric blur descriptors extracted at edge points, or a set of color distributions, etc;” paragraph 0185, “Our framework applies to general matchings not only between object instances, but also between textures or categories, which often exhibit stronger appearance variation and may not be isolated from a database on the basis of a few discriminative features alone.”). Regarding dependent claim 12, the rejection of claim 8 is incorporated herein. 
Additionally, Grauman in the combination further discloses wherein the convolution block of the convolutional image retrieval model comprises the first plurality of kernel parameters transferred from the preliminary model, the preliminary model established using a training dataset having a plurality of unlabeled images different from a plurality of labeled images used to establish the convolutional image retrieval model (paragraph 0116, “It should be appreciated that current approaches to object and scene recognition typically require some amount of supervision, whether it is in the form of class labels for training examples, foreground-background segmentations, or even a detailed labeling of objects' component parts”… “In this invention, we offer an efficient method to automatically learn groupings over sets of unordered local features by embedding the sets into a space where they cluster according to their partial-match correspondences. Each [edge between] two nodes (sets) is weighted according to how well some subset of the two sets' features may be put into correspondence, with correspondence quality determined by descriptor similarity.”… “To improve specificity, and to develop a predictive classifier that can label unseen images, we develop a method to find prototypical examples in each cluster that are more likely to be class inliers, and then use these prototypes to train a predictive model.
We detect prototype examples by examining the pattern of partial match correspondences within a cluster;” paragraph 0121, “Given a collection of unlabeled images, our method produces a partition of the data into a set of learned categories, as well as a set of classifiers trained from these ranked partitions which can recognize the categories in novel images”… “A thresholded subset of the refined groupings compose the learned categories, which are used to train a set of predictive classifiers for labeling unseen examples;” paragraph 0146, “We trained support vector machines with the pyramid match kernel using the labels produced with varying amounts of semi-supervision.”). Regarding dependent claim 13, the rejection of claim 8 is incorporated herein. Additionally, Grauman in the combination further discloses wherein the convolutional image retrieval model lacks a classifier used to update at least one of the first plurality of kernel parameters of the convolution block and the second plurality of kernel parameters of the encoder based on a comparison between a classification for a sample biomedical image generated by the classifier and a labeled classifier for the sample biomedical image as identified in a training dataset (this is read as a feedback loop of the network, which Grauman lacks (i.e. reads on the claim that it doesn’t have the feedback loop)). Regarding independent claim 15, the rejection of claim 8 applies directly. 
Additionally, Grauman in the combination further discloses a system (paragraph 0003, “This invention relates generally to computer searching and retrieval systems and more particularly to systems and techniques to identify and match objects.”), comprising: a computing system having one or more processors coupled with memory (paragraph 0003, “This invention relates generally to computer searching and retrieval systems and more particularly to systems and techniques to identify and match objects;” paragraph 0218, “Our implementation of the pyramid match requires on average 0.1 ms to compare two sets averaging 1400 features each, on a machine with a 2.4 GHz processor and 2 GB of memory.”), configured to: identify a first biomedical image with which to find at least one of a plurality of second biomedical images (see claim 8 analysis); apply a convolutional image retrieval model to the first biomedical image, the convolutional image retrieval model (see claim 8 analysis) comprising: a convolution block having a first plurality of kernel parameters and a convolutional neural network to generate a feature map using the first biomedical image, the first plurality of kernel parameters transferred from a preliminary model, wherein the first plurality of kernel parameters are set based on assigned values learned by a discriminator within the preliminary model, the discriminator having been trained on unlabeled images, the preliminary model being a separate model from the convolutional image retrieval model (see claim 8 analysis); and an encoder having a second plurality of kernel parameters to generate a first hash code for the first biomedical image based on the feature map (see claim 8 analysis); identify, from the plurality of second biomedical images corresponding to a plurality of second hash codes, a subset of second biomedical images using the first hash code for the first biomedical image (see claim 8 analysis); and provide the subset of second biomedical images 
identified using the first biomedical image (see claim 8 analysis). Regarding dependent claim 16, the rejection of claim 15 is incorporated herein. Additionally, the rejection of claim 9 applies directly. Regarding dependent claim 17, the rejection of claim 15 is incorporated herein. Additionally, the rejection of claim 10 applies directly. Regarding dependent claim 18, the rejection of claim 15 is incorporated herein. Additionally, the rejection of claim 11 applies directly. Regarding dependent claim 19, the rejection of claim 15 is incorporated herein. Additionally, the rejection of claim 12 applies directly. Regarding independent claim 21, the rejection of claim 8 applies directly. Additionally, Grauman in the combination further discloses A non-transitory computer readable medium configured to store processor-readable instructions, wherein when executed by a processor (paragraph 0180, “The flow diagram does not depict the syntax of any particular programming language. Rather, the flow diagrams herein illustrate the functional information one of ordinary skill in the art requires to fabricate circuits or to generate computer software to perform the processing required of the particular apparatus. It should be noted that many routine program elements, such as initialization of loops and variables and the use of temporary variables are not shown. It will be appreciated by those of ordinary skill in the art that unless otherwise indicated herein, the particular sequence of steps described is illustrative only and can be varied without departing from the spirit of the invention. 
Thus, unless otherwise stated the steps described are unordered meaning that, when possible, the steps can be performed in any convenient or desirable order implementing the concepts as described herein;” see also Figure 1), the instructions perform operations comprising: identifying, by a computing system, a first biomedical image with which to find at least one of a plurality of second biomedical images (see claim 8 analysis); applying, by the computing system, a convolutional image retrieval model to the first biomedical image (see claim 8 analysis), the image retrieval model comprising: a convolution block having a first plurality of kernel parameters and a convolutional neural network to generate a feature map using the first biomedical image, the first plurality of kernel parameters transferred from a preliminary model, wherein the first plurality of kernel parameters are set based on assigned values learned by a discriminator within the preliminary model, the discriminator having been trained on unlabeled images, the preliminary model being a separate model from the convolutional image retrieval model (see claim 8 analysis); and an encoder having a second plurality of kernel parameters to generate a first hash code for the first biomedical image based on the feature map (see claim 8 analysis); selecting, by the computing system, from the plurality of second biomedical images corresponding to a plurality of second hash codes, a subset of second biomedical images using the first hash code for the first biomedical image (see claim 8 analysis); and providing, by the computing system, the subset of second biomedical images identified using the first biomedical image (see claim 8 analysis). Regarding dependent claim 22, the rejection of claim 21 is incorporated herein. Additionally, the rejection of claim 9 applies directly. Regarding dependent claim 23, the rejection of claim 21 is incorporated herein. Additionally, the rejection of claim 10 applies directly. 
Regarding dependent claim 24, the rejection of claim 21 is incorporated herein. Additionally, the rejection of claim 11 applies directly. Regarding dependent claim 25, the rejection of claim 21 is incorporated herein. Additionally, the rejection of claim 12 applies directly. Regarding dependent claim 26, the rejection of claim 21 is incorporated herein. Additionally, the rejection of claim 13 applies directly.

Claims 14, 20 and 27 are rejected under 35 U.S.C. 103 as being unpatentable over Grauman further in view of CN ‘768 and CN ‘958 as applied to claims 8, 15 and 21 respectively above, and further in view of WO 2009017483 (hereinafter WO ‘483).

Regarding dependent claim 14, the rejection of claim 8 is incorporated herein. Additionally, Grauman, CN ‘768 and CN ‘958 in the combination as a whole fail to explicitly disclose wherein identifying the first biomedical image further comprises receiving the first biomedical image derived from a tissue sample via a histopathological image preparer. However, WO ‘483 discloses wherein identifying the first biomedical image further comprises receiving the first biomedical image derived from a tissue sample via a histopathological image preparer (abstract, “This invention relates to computer-aided diagnostics using content-based retrieval of histopathological image features. Specifically, the invention relates to the extraction of image features from a histopathological image based on predetermined criteria and their analysis for malignancy determination;” page 16, line 8, “In one embodiment, the systems described herein further comprise means for sorting retrieved images according to their image content similarity to the query image; and displaying said retrieved images to the user in the order of said sorting, whereby, in one embodiment, the first displayed image is most similar to said query image;” page 16, line 1, “In one embodiment, provided herein is a content-based image retrieval system for the comparison of novel histopathological images with a database of histopathological images of known clinical significance, comprising: obtaining a histological image”).

As seen above, Grauman, CN ‘768 and CN ‘958 are directed toward image processing with neural networks. Further, Grauman discloses “Still another embodiment includes a method for matching objects comprising creating a set of feature vectors for each object of interest, mapping each set of feature vectors to a single high-dimensional vector to create an embedding vector and encoding each embedding vector with a binary hash string (abstract)” and “relates generally to computer searching and retrieval systems and more particularly to systems and techniques to identify and match objects (paragraph 0003).” WO ‘483 is directed toward “computer-aided diagnostics using content-based retrieval of histopathological image features. Specifically, the invention relates to the extraction of image features from a histopathological image based on predetermined criteria and their analysis for malignancy determination (abstract).” As can be easily seen by one of ordinary skill in the art at the time of filing the claimed invention, Grauman, CN ‘768, CN ‘958 and WO ‘483 are directed toward similar fields of endeavor of image analysis.
Further, both Grauman and WO ‘483 allow for retrieving images with similar characteristics. Though Grauman, CN ‘768 and CN ‘958 fail to disclose applying their method of image retrieval to histological images, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of WO ‘483 in order to permit retrieving images similar to a patient’s tissues in question that have been imaged. Permitting retrieval of similar tissue images allows physicians to compare a current patient’s treatment options to those of a previous patient in a similar medical situation, ideally creating better patient outcomes.

Regarding dependent claim 20, the rejection of claim 15 is incorporated herein. Additionally, Grauman, CN ‘768 and CN ‘958 in the combination as a whole fail to explicitly disclose wherein identifying the first biomedical image further comprises receiving the first biomedical image derived from a tissue sample via a histopathological image preparer. However, WO ‘483 discloses wherein identifying the first biomedical image further comprises receiving the first biomedical image derived from a tissue sample via a histopathological image preparer (abstract, “This invention relates to computer-aided diagnostics using content-based retrieval of histopathological image features. Specifically, the invention relates to the extraction of image features from a histopathological image based on predetermined criteria and their analysis for malignancy determination;” page 16, line 8, “In one embodiment, the systems described herein further comprise means for sorting retrieved images according to their image content similarity to the query image; and displaying said retrieved images to the user in the order of said sorting, whereby, in one embodiment, the first displayed image is most similar to said query image;” page 16, line 1, “In one embodiment, provided herein is a content-based image retrieval system for the comparison of novel histopathological images with a database of histopathological images of known clinical significance, comprising: obtaining a histological image”).

As seen above, Grauman, CN ‘768 and CN ‘958 are directed toward image processing with neural networks. Further, Grauman discloses “Still another embodiment includes a method for matching objects comprising creating a set of feature vectors for each object of interest, mapping each set of feature vectors to a single high-dimensional vector to create an embedding vector and encoding each embedding vector with a binary hash string (abstract)” and “relates generally to computer searching and retrieval systems and more particularly to systems and techniques to identify and match objects (paragraph 0003).” WO ‘483 is directed toward “computer-aided diagnostics using content-based retrieval of histopathological image features. Specifically, the invention relates to the extraction of image features from a histopathological image based on predetermined criteria and their analysis for malignancy determination (abstract).” As can be easily seen by one of ordinary skill in the art at the time of filing the claimed invention, Grauman, CN ‘768, CN ‘958 and WO ‘483 are directed toward similar fields of endeavor of image analysis.
Further, both Grauman and WO ‘483 allow for retrieving images with similar characteristics. Though Grauman, CN ‘758 and CN ‘958 fail to disclose applying their methods of image retrieval to histological images, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of WO ‘483 in order to permit retrieving images similar to a patient’s tissues in question that have been imaged. Permitting retrieval of similar tissue images allows physicians to compare a current patient’s treatment options to those of a previous patient in a similar medical situation, ideally leading to better patient outcomes.

Regarding dependent claim 27, the rejection of claim 21 is incorporated herein. Additionally, Grauman, CN ‘758 and CN ‘958 in the combination as a whole fail to explicitly disclose wherein identifying the first biomedical image further comprises receiving the first biomedical image derived from a tissue sample via a histopathological image preparer. However, WO ‘483 discloses wherein identifying the first biomedical image further comprises receiving the first biomedical image derived from a tissue sample via a histopathological image preparer (abstract, “This invention relates to computer-aided diagnostics using content-based retrieval of histopathological image features. Specifically, the invention relates to the extraction of image features from a histopathological image based on predetermined criteria and their analysis for malignancy determination;” page 16, line 8, “In one embodiment, the systems described herein further comprise means for sorting retrieved images according to their image content similarity to the query image; and displaying said retrieved images to the user in the order of said sorting, whereby, in one embodiment, the first displayed image is most similar to said query image;” page 16, line 1, “In one embodiment, provided herein is a content-based image retrieval system for the comparison of novel histopathological images with a database of histopathological images of known clinical significance, comprising: obtaining a histological image”).

As seen above, Grauman, CN ‘758 and CN ‘958 are directed toward image processing with neural networks. Further, Grauman discloses “Still another embodiment includes a method for matching objects comprising creating a set of feature vectors for each object of interest, mapping each set of feature vectors to a single high-dimensional vector to create an embedding vector and encoding each embedding vector with a binary hash string (abstract)” and “relates generally to computer searching and retrieval systems and more particularly to systems and techniques to identify and match objects (paragraph 0003).” WO ‘483 is directed toward “computer-aided diagnostics using content-based retrieval of histopathological image features. Specifically, the invention relates to the extraction of image features from a histopathological image based on predetermined criteria and their analysis for malignancy determination (abstract).” As would be readily apparent to one of ordinary skill in the art at the time of filing the claimed invention, Grauman, CN ‘758, CN ‘958 and WO ‘483 are directed toward the similar field of endeavor of image analysis.
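The sorting step WO ‘483 describes, displaying retrieved images from most- to least-similar to the query, amounts to ranking a database of feature vectors by a similarity score. The sketch below uses cosine similarity over toy feature vectors; the feature representation and metric are illustrative assumptions, not details from the reference:

```python
import numpy as np

def rank_by_similarity(query: np.ndarray, database: np.ndarray) -> np.ndarray:
    """Return database indices ordered most- to least-similar to the
    query, using cosine similarity over image feature vectors."""
    q = query / np.linalg.norm(query)
    db = database / np.linalg.norm(database, axis=1, keepdims=True)
    scores = db @ q                 # cosine similarity of each row to q
    return np.argsort(-scores)      # descending: first index is most similar

# Toy database of 4 feature vectors; row 2 is a near-copy of the query.
query = np.array([1.0, 0.0, 1.0])
database = np.array([
    [0.0, 1.0, 0.0],
    [1.0, 1.0, 0.0],
    [0.9, 0.1, 1.1],
    [0.0, 0.0, 1.0],
])
order = rank_by_similarity(query, database)
print(order[0])  # 2 -- the near-copy of the query ranks first
```

A content-based retrieval system would then display the images in `order`, which matches the claimed behavior of showing the most similar image first.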
Further, both Grauman and WO ‘483 allow for retrieving images with similar characteristics. Though Grauman, CN ‘758 and CN ‘958 fail to disclose applying their methods of image retrieval to histological images, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of WO ‘483 in order to permit retrieving images similar to a patient’s tissues in question that have been imaged. Permitting retrieval of similar tissue images allows physicians to compare a current patient’s treatment options to those of a previous patient in a similar medical situation, ideally leading to better patient outcomes.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: K. G. Dizaji, F. Zheng, N. S. Nourabadi, Y. Yang, C. Deng and H. Huang, "Unsupervised Deep Generative Adversarial Hashing Network," 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 2018, pp. 3664-3673, doi: 10.1109/CVPR.2018.00386, which discloses, “we propose a new deep unsupervised hashing function, called HashGAN, which efficiently obtains binary representation of input images without any supervised pretraining (abstract).”

Contact

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Courtney J. Nelson, whose telephone number is (571) 272-3956. The examiner can normally be reached Monday - Friday, 8:00 - 4:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, John Villecco, can be reached at 571-272-7319.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /COURTNEY JOAN NELSON/Primary Examiner, Art Unit 2661

Prosecution Timeline

Feb 21, 2022
Application Filed
Mar 13, 2025
Non-Final Rejection — §103
Jun 11, 2025
Applicant Interview (Telephonic)
Jun 11, 2025
Examiner Interview Summary
Jun 18, 2025
Response Filed
Aug 06, 2025
Final Rejection — §103
Nov 07, 2025
Response after Non-Final Action
Dec 08, 2025
Request for Continued Examination
Dec 10, 2025
Response after Non-Final Action
Jan 13, 2026
Non-Final Rejection — §103
Apr 07, 2026
Applicant Interview (Telephonic)
Apr 07, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603175
METHOD AND APPARATUS FOR DETERMINING DIAGNOSIS RESULT DATA
2y 5m to grant Granted Apr 14, 2026
Patent 12597188
SYSTEMS AND METHODS FOR PROCESSING ELECTRONIC IMAGES FOR PHYSIOLOGY-COMPENSATED RECONSTRUCTION
2y 5m to grant Granted Apr 07, 2026
Patent 12597494
METHOD AND APPARATUS FOR TRAINING MEDICAL IMAGE REPORT GENERATION MODEL, AND IMAGE REPORT GENERATION METHOD AND APPARATUS
2y 5m to grant Granted Apr 07, 2026
Patent 12588881
PROVIDING A RESULT DATA SET
2y 5m to grant Granted Mar 31, 2026
Patent 12592016
Material-Specific Attenuation Maps for Combined Imaging Systems
2y 5m to grant Granted Mar 31, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
86%
Grant Probability
96%
With Interview (+9.4%)
2y 7m
Median Time to Grant
High
PTA Risk
Based on 252 resolved cases by this examiner. Grant probability derived from career allow rate.
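The panel’s headline figures can be reproduced from resolved-case records. In the sketch below, the record format and the interview/no-interview split are hypothetical; only the overall total of 217 grants out of 252 resolved cases comes from the page above:

```python
from dataclasses import dataclass

@dataclass
class ResolvedCase:
    granted: bool
    had_interview: bool

def allow_rate(cases):
    """Fraction of resolved cases that ended in a grant."""
    return sum(c.granted for c in cases) / len(cases)

def interview_lift(cases):
    """Allow-rate difference between cases with and without an interview."""
    with_iv = [c for c in cases if c.had_interview]
    without = [c for c in cases if not c.had_interview]
    return allow_rate(with_iv) - allow_rate(without)

# Hypothetical split consistent with 217 grants across 252 resolved cases.
cases = ([ResolvedCase(True, True)] * 96 + [ResolvedCase(False, True)] * 4
         + [ResolvedCase(True, False)] * 121 + [ResolvedCase(False, False)] * 31)

print(f"allow rate: {allow_rate(cases):.0%}")          # 86%
print(f"interview lift: {interview_lift(cases):+.1%}")
```

A production model would adjust for art unit, claim type, and case age rather than taking raw rate differences, which is why the displayed +9.4% lift is described as an estimate over this examiner’s resolved cases.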
