Prosecution Insights
Last updated: April 19, 2026
Application No. 17/991,618

SYSTEMS AND METHODS FOR SPECIMEN INTERPRETATION USING ONE OR MORE MACHINE LEARNING MODELS AND CELL-LEVEL FEATURES OF INDIVIDUAL CELLS

Final Rejection — §103, §DP

Filed: Nov 21, 2022
Examiner: DICKERSON, CHAD S
Art Unit: 2683
Tech Center: 2600 — Communications
Assignee: UPMC
OA Round: 2 (Final)

Grant Probability: 63%
PTA Risk: Moderate
OA Rounds: 3-4
Time to Grant: 2y 9m
With Interview: 86%

Examiner Intelligence

Career Allow Rate: 63% (376 granted / 600 resolved; +0.7% vs TC avg)
Interview Lift: +23.0% (strong lift among resolved cases with an interview)
Typical Timeline: 2y 9m average prosecution; 35 applications currently pending
Career History: 635 total applications across all art units

Statute-Specific Performance

§101: 8.8% (-31.2% vs TC avg)
§103: 55.5% (+15.5% vs TC avg)
§102: 14.9% (-25.1% vs TC avg)
§112: 18.1% (-21.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 600 resolved cases.
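The "vs TC avg" figures read as percentage-point differences from an estimated Tech Center baseline; the page does not define the metric, so the following is only an assumption-checking sketch in Python:

```python
# Hypothetical reading of the "vs TC avg" deltas above: assume each delta is a
# simple percentage-point difference, delta = examiner_rate - tech_center_average.
examiner_rates = {"101": 8.8, "103": 55.5, "102": 14.9, "112": 18.1}  # percent
deltas = {"101": -31.2, "103": 15.5, "102": -25.1, "112": -21.9}      # percentage points

for statute, rate in examiner_rates.items():
    implied_tc_avg = rate - deltas[statute]
    print(f"§{statute}: examiner {rate:.1f}%, implied TC average {implied_tc_avg:.1f}%")
    # Under this assumption, every implied TC average works out to 40.0%.
```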

Office Action

§103 §DP
DETAILED ACTION Notice of Pre-AIA or AIA Status The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA . Response to Arguments Applicant’s arguments, see page 8, filed 8/8/2025, with respect to specification objection have been fully considered and are persuasive. The objection of the specification has been withdrawn. Applicant’s arguments, see page 8, filed 8/8/2025, with respect to claim objections have been fully considered and are persuasive. The objection of the claims has been withdrawn. Applicant’s arguments, see page 8, filed 8/8/2025, with respect to the Double Patenting rejection have been fully considered and are persuasive. The Double Patenting rejection of the claims has been withdrawn. Applicant’s arguments with respect to claim(s) 27-49 have been considered but are moot because the new ground of rejection does not rely on all references applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. The arguments state that the applied reference does not perform the aspect of “identifying, based on the first data structure, a second data structure including a set of metrics representing the specimen slide image, wherein the set of metrics is calculated based on an aggregation of the feature scores corresponding to the one or more individual cells”. The reference of Chukka is used to cure the deficiency of the primary reference. The reference of Chukka discloses determining the presence of a cancer cells that are being evaluated, which is taught in ¶ [42]. The system identifies the features values of cells evaluated and performs a statistical analysis of the features, such as averaging or performing a mean of the feature values. This is taught in ¶ [32], [33], [57], [167] and [173]. Based on the teaching of aggregating feature values to calculate a set of metrics used to be inputs into a machine learning model, this performs the features of the contended features above. Therefore, based on the above, the combination of Chukka with the primary reference performs the features of the claims. Thus, based on the above, the features of the claims are disclosed below. Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. Claim(s) 27-34, 36-41, 45, 46, 48 and 49 is/are rejected under 35 U.S.C. 103 as being unpatentable over Ventana Medical (US Pub 2018/0204048) in view of Chukka (US Pub 2017/0243051). 
Re claim 27: Ventana Medical discloses a method, comprising: identifying a first data structure including feature scores indicative of cell-level features for each of one or more individual cells within a plurality of cells in at least a portion of a specimen slide image (e.g. a feature vector is generated for the grid points for the cells that are analyzed. The feature vector is developed for each grid point where features are extracted within the neighborhoods of points, which is taught in ¶ [76], [92], [104]-[108] and [110].); [0076] The term “cytoplasmic staining” shall refer to a group of pixels arranged in a pattern bearing the morphological characteristics of a cytoplasmic region of a cell. [0092] The stained cellular samples are visualized under a microscope or scanned by a whole slide scanner and a digital image thereof is captured. VI. Pattern Segmentation [0104] After the digital image 101 has been acquired, a pattern segmentation function 102 is performed, dividing the image into regions having distinct patterns of staining. [0105] In an exemplary embodiment, the pattern segmentation function comprises whole slide tissue segmentation based on predefined staining patterns. The segmentation is performed by extracting features from the neighborhood of a grid of points (GPs) sampled on the input image 201 and classifying them into different staining pattern types. An exemplary workflow for a pattern segmentation function for whole slide tissue segmentation is illustrated at FIG. 2. [0106] One or more processors 200 implement an image channel extraction (ICE) module 211 to execute a channel extraction function 221 on the input image 201 to separate the input image into different image channels. The image channel or channels corresponding to the features to be extracted is selected, and a feature-mapped image 202 is generated consisting pixels corresponding to the features that are relevant to the pattern segmentation. For example, where both analyte-related features and structure-related features are relevant, separate channels representing the local amounts of stains correlating with those features may be generated by ICE module 211. For example, where hematoxylin and DAB staining are relevant to the pattern analysis, a color deconvolution or unmixing method such as the method described in Ruifrok, A. and Johnston, D., “Quantification of histochemical staining by color de-convolution,” Analyt. Quant. Cytol. Histol. 23, 291-299 (2001) is applied to decompose the original RGB image into Hematoxylin (HTX) and DAB channels. These channels highlight different tissue structures in the tissue image, thus, they may be referred to as structural image channels. More precisely, the HTX channel highlights nuclei regions, the DAB channel highlights target compartments, Therefore, features extracted from these channels are useful in describing the tissue structures. Likewise, the DAB channel highlights regions where an analyte of interest is located, and thus can be useful in describing the staining pattern. The selection of structural image channels and analyte image channels can be adjusted for each segmentation problem. For example, for chromogenically-stained images, structural image channels can include the counterstain channel, one or more chromogen channels, hue, and/or luminance. In an exemplary embodiment, the staining patterns are classified according to: (1) tumor or non-tumor regions; and (2) the pattern of analyte staining. 
In this example, the hematoxylin channel is selected to identify features relevant to the presence or absence of tumor regions, and the channel corresponding to the label for the analyte of interest is selected to identify features relevant to particular analyte staining patterns. [0107] One or more processors 200 implement a grid point module 212 to execute a grid point function 222 on the feature mapped image 202 to divide the feature mapped image 202 into a plurality of patches by sampling a uniform grid of seed points in the image and specifying an interval or neighborhood for each seed point. For example, a grid of points (GPs) with an interval of d=15 pixels may be overlaid on the WS image, enabling feature extraction module 213 to extract features from the neighborhood of these GPs and classification module 214 to classify the features and therefore GPs into different staining patterns and/or tissue types. The interval size is not limited to 15 pixels, and may vary. Further, the grid may be in any shape, such as square, rectangular, hexagonal, etc. [0108] One or more processors 200 implement a feature extraction module 213 to execute a feature extraction function 223 on one or more of the image channels. For each GP associated with each image channel, feature extraction module 213 extracts image features in the neighborhood of these points, and different types of image texture features are extracted. For example, given a neighborhood size s, and image channel c, let Ω.sub.s,c denote a neighborhood of size s×s, at channel c, from which features are extracted. Features computed for all Ω.sub.s,c ∀s∈S, c∈ C (where S, C denote the sets of selected neighborhood sizes, and selected channels, respectively) are concatenated to generate a feature vector containing rich information to represent the GP. In one experimental embodiment, for instance, S = [50, 100; 150] pixels and C={HTX, DAB}. identifying a second data structure including a set of metrics representing the specimen slide image, wherein the set of metrics is calculated based on an aggregation of the feature scores corresponding to the one or more individual cells (e.g. the features are computed into a matrix of pixel intensity and further calculates features from the matrix into a sum. The computed values can be used to determine distances between parts within the image, which is taught in ¶ [109] and [110]. A score is calculated to determine if a part of the features belong to a specific pattern, which is taught in ¶ [110].); and [0109] The texture features being computed are co-occurrence features. For co-occurrence features, feature extraction module 213 may compute the co-occurrence matrix (CM) of pixel intensity, and compute 13 Haralick features from this CM [see Haralick, R., et al.: Textural Features for Image Classification. IEEE Trans. Sys., Man., Cyber. 3 (6), 610-621 (1973)], including energy, correlation, inertia, entropy, inverse difference moment, sum average, sum variance, sum entropy, difference average, difference variance, difference entropy, and two information measures of correlation. In addition to the conventional gray-level CM (GLCM), which may be computed for each channel individually, the inter-channel or color co-occurrence matrix (CCM) may additionally be used. 
The CCM is created from the co-occurrence of pixel intensities in two different image channels, i.e., to compute the CCM from two channels Ci; Cj using a displacement vector d=[dx; dy], the co-occurrence of the pixel intensity is computed at location (x; y) in Ci and the pixel intensity at location (x+dx; y+dy) in Cj. The advantage of the CCM is that it captures the spatial relationship between different tissue structures (highlighted in different channels), without the need of explicitly segmenting them. Further, Haralick features may be computed from the GLCMs of all two channels, and Haralick features computed from the CCMs of all pairs of channels (HTX-DAB). In an experimental embodiment, the total number of features may be 13×3×3=117. [0110] Subsequent to feature extraction, one or more processors 200 implement a classifier module 214 that executes a trained pattern recognition algorithm 224 to classify each patch according to the patterns being investigated. The output of the classifier module is a confidence score indicating the likelihood that the patch belongs to one of the patterns being investigated. The patch is assigned to the pattern with the highest score and pattern map 203 is built based on the pattern to which each patch is assigned. providing the second data structure to a machine learning model configured to determine, based on the second data structure, a presence or absence of a disease or disease type identified in the specimen slide image (e.g. a classifier evaluates the acquired grid point data of the feature mapped image are evaluated to determine if the rea or pattern matches a pattern associated with a specific type of tissue, tumor or non-tumor region. This is explained in ¶ [106]-[110] above, [111] and [112].); and [0111] The trained pattern recognition algorithm 224 is built by causing the one or more processors 200 implement the classifier module 214 to execute a training function 225 on a set of training images stored in a training database 216. The images of the training database are annotated on the basis of the particular staining pattern present therein. By evaluating images with known patterns, the classifier module can identify particular features that signify membership in a particular pattern category. [0112] Various different pattern recognition algorithms can be implemented, including supervised learning algorithms. In an embodiment, the classifier is a supervised learning classifier. In another embodiment, the supervised learning classifier is selected from the group consisting of decision tree, ensemble, k-nearest neighbor, linear regression, naive Bayes, neural network, logistic regression, perceptron, support vector machine (SVM), and relevance vector machine (RVM). In another embodiment, the classifier is selected from the group consisting of SVM, random forest, and k-nearest neighbor classification. determining, using the machine learning model, the presence or absence of the disease or disease type (e.g. the classifier outputs a patch or area with the highest score with the highest likelihood of being associated with a patch or pattern associated with a tumor or type of tissue, which is taught in ¶ [106]-[112] above.). 
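For orientation, the grid-point workflow the rejection cites from Ventana ¶¶ [0105]-[0110] (channel extraction, neighborhood texture features, pattern classification with a confidence score) can be sketched in a few lines. This is an illustrative reconstruction, not code from either reference; the channel names, grid spacing, neighborhood size, GLCM statistics, and the SVM choice are all assumptions.

```python
# Illustrative sketch of a grid-point texture-feature pipeline: sample a uniform
# grid over an image channel, compute co-occurrence (Haralick-style) statistics
# in a neighborhood around each grid point, concatenate features across channels,
# and classify each point into a staining pattern with a confidence score.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def grid_point_features(channel, spacing=15, half_size=25, levels=32):
    """Return (grid points, feature matrix) for one image channel."""
    quantized = (channel / channel.max() * (levels - 1)).astype(np.uint8)
    points, feats = [], []
    props = ("energy", "correlation", "contrast", "homogeneity")  # GLCM statistics
    for y in range(half_size, channel.shape[0] - half_size, spacing):
        for x in range(half_size, channel.shape[1] - half_size, spacing):
            patch = quantized[y - half_size:y + half_size, x - half_size:x + half_size]
            glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                                levels=levels, symmetric=True, normed=True)
            feats.append(np.hstack([graycoprops(glcm, p).ravel() for p in props]))
            points.append((y, x))
    return np.array(points), np.array(feats)

# Toy demo with synthetic "HTX" and "DAB" channels; per-point features from both
# channels are concatenated into one feature vector per grid point.
rng = np.random.default_rng(0)
htx, dab = rng.random((200, 200)), rng.random((200, 200))
pts, f_htx = grid_point_features(htx)
_, f_dab = grid_point_features(dab)
features = np.hstack([f_htx, f_dab])

labels = rng.integers(0, 2, len(features))               # placeholder pattern labels
pattern_classifier = SVC(probability=True).fit(features, labels)
confidence = pattern_classifier.predict_proba(features)  # per-point pattern scores
```

Note that this per-point feature vector corresponds to the claimed "first data structure"; the limitation actually in dispute is how those scores are aggregated into a second, slide-level data structure, which the Action addresses next through Chukka.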
However, Ventana Medical fails to specifically teach the features of identifying, based on the first data structure, a second data structure including a set of metrics representing the specimen slide image, wherein the set of metrics is calculated based on an aggregation of the feature scores corresponding to the one or more individual cells, and providing the second data structure to a machine learning model configured to determine, based on the first data structure and the second data structure, an output indicating a presence or absence of a disease or disease type identified in the specimen slide image; and determining, using the output from the machine learning model, the presence or absence of the disease or disease type. However, this is well known in the art as evidenced by Chukka. Similar to the primary reference, Chukka discloses determining a specimen as a part of a disease class utilizing vectors and a machine learning (same field of endeavor or reasonably pertinent to the problem). Chukka discloses identifying, based on the first data structure, a second data structure including a set of metrics representing the specimen slide image, wherein the set of metrics is calculated based on an aggregation of the feature scores corresponding to the one or more individual cells (e.g. features of the cells within the system involves determining certain characteristics of the cells, which is taught in ¶ [43] and [44]. These features are used for statistical analysis, such as an average, mean or media, that is used to determine a contextual feature value, which is taught in ¶ [55] and [56].), and [0043] According to embodiments, the first object feature is one of: i. an intensity value of the object, the intensity value correlating with the amount of a stain or a biomarker bound to the object represented by the object; ii. a diameter of the object; iii. a size of the object, e.g. the area or number of pixels covered by the object; iv. a shape property of the object; v. a texture property of the object vi. a distance of an object to the next neighbor object. [0044] In case a second and/or a further object feature is analyzed according to said embodiment, the second object feature and/or the further object feature is a remaining one of the properties i-vi. A plurality of other object features may also be used by the classifiers of these and other embodiments. [0055] According to embodiments, the computing of the first context feature value comprises computing a statistical average of the first object feature values of the plurality of objects in the digital image. In addition or alternatively, the computing of the second context feature value comprises computing a statistical average of the second object feature values of the plurality of objects in the digital image. In addition or alternatively, the computing of the each further context feature value comprises computing a statistical average of the respective further object feature values of the plurality of objects in the digital image. [0056] The statistical average can be, for example, the arithmetic mean, a median, a mid-range, an expectation value or any other form of average derived from the object feature values of the totality or sub group of objects in the area of the digital image. 
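A minimal sketch of the aggregation reading the examiner gives Chukka ¶¶ [0055]-[0056]: per-cell feature scores (the claimed first data structure) reduced to slide-level summary metrics (the claimed second data structure) by statistical averaging. The values and column meanings below are illustrative only.

```python
# Minimal sketch of the contested aggregation step: reduce a per-cell feature
# table to a fixed-length slide-level metric vector via mean / median / std.
import numpy as np

# First data structure: one row of feature scores per detected cell
# (e.g. nucleus size, stain intensity, shape score).
cell_features = np.array([
    [12.1, 0.83, 0.40],
    [15.7, 0.91, 0.55],
    [ 9.4, 0.62, 0.31],
    [14.0, 0.88, 0.47],
])

# Second data structure: slide-level metrics computed by aggregating the
# per-cell scores, analogous to Chukka's "statistical average" context features.
slide_metrics = np.concatenate([
    cell_features.mean(axis=0),
    np.median(cell_features, axis=0),
    cell_features.std(axis=0),
])
print(slide_metrics)  # one fixed-length vector describing the whole slide
```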
providing the second data structure to a machine learning model configured to determine, based on the first data structure and the second data structure, an output indicating a presence or absence of a disease or disease type identified in the specimen slide image (e.g. the features and the context feature values are provided to a SVM, which is a form of a machine learning model. The SVM outputs information indicating a likelihood of being a tumor cell or lymphocyte. This is explained in ¶ [32], [33], [57], [167] and [173].); and [0032] For example, the first classifier may be a support vector machine (SVM) having been trained on a first object feature “cancer cell size” and a second classifier may be an SVM having been trained on a second object feature “intensity of blue color” of a nucleus stained with hematoxylin. [0033] SVMs are supervised learning models with associated learning algorithms that analyze data and recognize patterns, used for classification and regression analysis. Given a set of training digital images with pixel blobs, each marked for belonging to one of two categories, a SVM training algorithm builds a model that assigns new examples into one category or the other, making it a non-probabilistic binary classifier. The SVM classifier may be a linear classifier. Alternatively, if a non-linear SVM kernel function is used, the SVM classifier can be a non-linear classifier. An SVM model is a representation of the examples as points in space, mapped so that the examples of the separate categories are divided by a clear gap that is as wide as possible. New examples are then mapped into that same space and predicted to belong to a category based on which side of the gap they fall on. [0057] According to embodiments, the method further comprises generating the first classifier. 
The first classifier is generated by: reading, by an untrained version of the first classifier, a plurality of digital training images from a storage medium, each training digital image comprising a plurality of pixel blobs respectively representing objects of one or more different object classes, each pixel blob being annotated as a member or as a non-member of the object class; analyzing each of the training digital images for identifying, for each annotated pixel blob, a training first object feature value of the first object feature of said pixel blob; analyzing each of the training digital images for computing one or more training first context feature values, each training first context feature value being a derivative of the training first object feature values or of other training object feature values of a plurality of pixel blobs in said training digital image or being a derivative of a plurality of pixels of the training digital image; training the untrained version of the first classifier by inputting, for each of the pixel blobs, at least the annotation, the training first object feature value and the one or more training first context feature values to the untrained version of the first classifier, thereby creating the first classifier, the first classifier being configured to calculate a higher likelihood for an object of being member in a particular object class in case the first object feature value of said object is more similar to the training first object feature values of the pixel blobs annotated as being a member of said particular object class than to the training first object feature values of pixel blobs annotated as not being a member of said particular object class, whereby the likelihood further depends on intra-image context information contained in the first or other context feature value. [0167] Depending on the embodiment, the first and second classifiers may both be SVMs, neuronal networks, or any other type of classifier. According to embodiments, the type of the first and the second classifier differs. In some embodiments, a “super-classifier” or “end-classifier” is provided that takes the likelihoods 714, 716 output by each of the object feature-specific classifiers as input for calculating a final, combined likelihood 718 of a particular object to belong to a particular class (e.g. “lymphocyte cell”). For example, the end-classifier could be a nonlinear SVM classifier, e.g. a Gaussian kernel SVM. The likelihoods 714, 716 could be percentage values or other numerical values which are indicative of a likelihood of an object to be a member of a particular class. [0173] According to some embodiments, the combined likelihood 718 may be calculated as the arithmetic mean of all object feature specific likelihoods, e.g. (size-based-based likelihood 714+blue-intensity-based membership likelihood 716)/2. According to other embodiments, the computation of the combined likelihood 718 may be more complex. For example, the object feature specific likelihoods 710, 712 may be weighted in accordance with the predictive power of the respective object feature, whereby the weights may be predefined or may be automatically determined during a training phase of an end-classifier that shall compute the combined likelihood. It is also possible that the data values 714, 716 are not likelihood values but other forms of numerical values being indicative of a likelihood an object belongs to a particular class. 
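The end-classifier idea cited from Chukka ¶¶ [0167] and [0173] reduces to simple arithmetic: per-feature classifier likelihoods are combined by an arithmetic mean or a weighted sum. A small sketch, with assumed weights:

```python
# Sketch of combining object-feature-specific likelihoods into one score,
# as in Chukka ¶ [0173] (arithmetic mean) or a weighted variant. The example
# likelihoods and weights are assumptions for illustration.
import numpy as np

def combined_likelihood(per_feature_likelihoods, weights=None):
    """Combine per-feature classifier likelihoods into one score in [0, 1]."""
    scores = np.asarray(per_feature_likelihoods, dtype=float)
    if weights is None:
        return float(scores.mean())                             # plain arithmetic mean
    weights = np.asarray(weights, dtype=float)
    return float(np.dot(scores, weights) / weights.sum())       # weighted variant

size_based = 0.70       # likelihood from a classifier trained on cell size
intensity_based = 0.90  # likelihood from a classifier trained on stain intensity
print(combined_likelihood([size_based, intensity_based]))              # 0.80
print(combined_likelihood([size_based, intensity_based], [1.0, 2.0]))  # ~0.833
```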
determining, using the output from the machine learning model, the presence or absence of the disease or disease type (e.g. an end classifier can be used to take the likelihoods to determine a particular tumor or lymphocyte cell is present, which is taught in ¶ [167] above.). Therefore, in view of Chukka, it would have been obvious to one of ordinary skill at the time the invention was made to have the feature of identifying, based on the first data structure, a second data structure including a set of metrics representing the specimen slide image, wherein the set of metrics is calculated based on an aggregation of the feature scores corresponding to the one or more individual cells, and providing the second data structure to a machine learning model configured to determine, based on the first data structure and the second data structure, an output indicating a presence or absence of a disease or disease type identified in the specimen slide image; and determining, using the output from the machine learning model, the presence or absence of the disease or disease type, incorporated in the device of Ventana Medical, in order to utilize multiple inputs into a machine learning model to predict the disease of the cells, which can improve the accuracy of the system (as stated in Chukka ¶ [17] and [18]). Re claim 28: (New) Ventana Medical discloses the method of claim 27, further comprising, prior to identifying the first data structure: receiving an image of a specimen slide comprising a plurality of biological cells (e.g. an image of a specimen slide is acquired that contains a plurality of cells, which is taught in ¶ [104], [105] above and [72]-[74].); [0072] By “cellular sample” is meant a collection of cells obtained from a subject or patient. A biological sample can be a tissue or a cell sample. The source of the tissue or cell sample may be solid tissue as from a fresh, frozen and/or preserved organ or tissue sample or biopsy or aspirate; blood or any blood constituents; bodily fluids such as cerebral spinal fluid, amniotic fluid, peritoneal fluid, or interstitial fluid; cells from any time in gestation or development of the subject. The cellular sample can also be obtained from in vitro tissue or cell culture. The cellular sample may contain compounds which are not naturally intermixed with the cells in nature such as preservatives, anticoagulants, buffers, fixatives, nutrients, antibiotics, or the like. Examples of cellular samples herein include, but are not limited to, tumor biopsies, circulating tumor cells, serum or plasma, primary cell cultures or cell lines derived from tumors or exhibiting tumor-like properties, as well as preserved tumor samples, such as formalin-fixed, paraffin-embedded tumor samples or frozen tumor samples. [0073] As used herein, the term “tissue sample” shall refer to a cellular sample that preserves the cross-sectional spatial relationship between the cells as they existed within the subject from which the sample was obtained. “Tissue sample” shall encompass both primary tissue samples (i.e. cells and tissues produced by the subject) and xenografts (i.e. foreign cellular samples implanted into a subject). [0074] As used herein, the term “cytological sample” refers to a cellular sample comprising cells derived directly from a subject that have been partially or completely disaggregated, such that the sample no longer reflects the spatial relationship of the cells as they existed in the subject from which the cellular sample was obtained. 
Examples of cytological samples include tissue scrapings (such as a cervical scraping), fine needle aspirates, samples obtained by lavage of a subject, et cetera. detecting each of the one or more individual cells within the plurality of cells (e.g. a plurality of cells is captured within the image with a biological analysis device, which is taught in ¶ [93].); and [0093] The present methods, systems, and apparatuses all may include a biological image analysis device, which functions to analyze the image of the cellular sample according to the presently disclosed methods. The biological image analysis device includes at least a processor and a memory coupled to the processor, the memory to store computer-executable instructions that, when executed by the processor, cause the processor to perform operations. determining coordinates for each of the one or more individual cells (e.g. the spatial relationship is determined between the different tissue structures in the cell sample, which is taught in ¶ [109] above.). Re claim 29: (New) Ventana Medical discloses the method of claim 28, further comprising extracting, for each of the one or more individual cells, an image of the individual cell, wherein the individual cell is centered on the extracted image of the individual cell, each extracted image representing an independent individual cell (e.g. for an individual cell, an extracted image of the cell is captured, with the image centered on membrane area of the cell, which is taught in ¶ [162] and illustrated in figure 10.). [0162] Exemplary resulting pattern maps 710 are shown at FIG. 9 and FIG. 10. FIG. 9 shows an input image (A) and a pattern segmentation map (B) for a tumor region containing membrane staining (red) (exemplarily indicated by an arrow labeled with “R” in FIG. 9(B)), cytoplasmic staining (blue) (exemplarily indicated by an arrow labeled with “B” in FIG. 9(B)), and membrane-punctate staining (pink) (exemplarily indicated by an arrow labeled with “P” in FIG. 9(B)). FIG. 10 shows an input image (A) and a pattern segmentation map (B) for a tumor region containing membrane staining (red) (exemplarily indicated by an arrow labeled with “R” in FIG. 10(B)), punctate staining (green) (exemplarily indicated by an arrow labeled with “G” in FIG. 10(B)), cytoplasmic staining (blue) (exemplarily indicated by an arrow labeled with “B” in FIG. 10(B)), and membrane-punctate staining (yellow) (exemplarily indicated by an arrow labeled with “Y” in FIG. 10(B)). Re claim 30: (New) Ventana Medical discloses the method of claim 29, further comprising: processing each of the extracted images to generate a cell type score (e.g. the output of the classifier can include a score associated with a patch related to predetermined pattern areas. The areas are associated with a cell or tissue type, which is taught in ¶ [107] and [110] above.); and identifying a set of one or more of the extracted images having a cell type score within a predetermined range, wherein the cell type score indicates a likelihood that the cell is a target cell type (e.g. the score associated with a patch determines that the area belongs to a particular pattern that is correlated with a tissue type. The higher the score, the greater the likelihood it belongs to the tissue type, which is taught in ¶ [107] and [110] above.). Re claim 31: (New) Ventana Medical discloses the method of claim 27, further comprising ranking the one or more individual cells based on the feature scores (e.g. 
a patch is assigned to the pattern with the highest score, which means the system selects the between various scores to determine the highest value. This positioning of a score in reference to others serves as a ranking. See ¶ [110] for the information regarding selecting the highest score.). Re claim 32: (New) Ventana Medical discloses the method of claim 27, further comprising classifying each of the one or more individual cells into one of a plurality of clusters based on the feature scores (e.g. the system discloses candidate compartment segmentation that segments or separates the regions of the cells into clusters based on the features of analytically relevant biological characteristics, which is taught in ¶ [85].). [0085] The present systems and methods address this problem by performing three distinct segmentation functions on a digital image of the tissue sample, and then identify different types of compartments on the basis of the three segmentations: [0086] (1) a “pattern segmentation,” which segments the digital image into single staining regions (i.e. regions contains only a single analytically distinct pattern of analyte staining) and compound staining regions (i.e., contains two or more analytically distinct staining patterns intermixed with one another); [0087] (2) a “candidate compartment segmentation,” which segments the digital image into “candidate biological compartment regions” (i.e. sets of pixel clusters corresponding to the analyte of interest that have the characteristics of an analytically relevant biological compartment) and “non-candidate compartment regions”; and [0088] (3) an “analyte intensity segmentation,” which segments the digital image into separate intensity bins on the basis of analyte staining intensity (typically into “high,” “low,” and “background” intensity bins, although others may be used if appropriate). Re claim 33: (New) Ventana Medical discloses the method of claim 27, further comprising obtaining the feature scores of the first data structure using one or more additional machine learning models that are distinct from the machine learning model configured to determine the presence or absence of the disease or disease type identified in the specimen slide image (e.g. a trained pattern recognition algorithm (214) is used to calculate a score pertaining to the likelihood of the patch belonging to a particular pattern, which is taught in ¶ [110] above. ¶ [107] discloses classifying different features into patterns associated with tissue types, which reflects different models.). Re claim 34: (New) Ventana Medical discloses the method of claim 27, wherein at least a portion of the cell-level features are representative of at least one of a group comprising cytomorphologic criteria and histologic criteria (e.g. ¶ [106] discloses criteria that is used to detect tumor or non-tumor regions.). [0003] The present disclosure relates, among other things, to automated analysis of histochemical or cytological samples stained for analytes having complex staining patterns, including samples in which analytically distinct patterns of analyte staining are intermixed. Re claim 36: Ventana Medical discloses the method of claim 27, further comprising generating summary statistics based on the first data structure (e.g. the system calculates summary statistics based on the features detected with the image, which is seen in ¶ [119], [127] and [134].). [0119] One way to smooth the image is to apply a Gaussian smoothing operator. 
A Gaussian smoothing operator uses a two-dimensional (2D) Gaussian distribution to assign a weighted intensity value to each pixel on the basis of that pixel's neighborhood. The formula for a one dimensional Gaussian distribution is G(x) = (1 / (σ√(2π))) · e^(−x² / (2σ²)), where x is the distance from the origin pixel in the selected axis and σ is the standard deviation of the Gaussian distribution. [0127] Other smoothing and/or ridge detections can also be chosen. For example, for image smoothing, median filtering or mean filtering may be used. For edge detection, a structure tensor method may be used. [0133] The purpose of the k-means clustering algorithm is to group the pixels into different clusters based on their intensity values, i.e., pixels with similar intensity values will be grouped into one single cluster. The algorithm will result in k clusters, corresponding to k different levels/bins of intensity. The algorithm first initializes k mean values for the k clusters, denoted by m_1, m_2, …, m_k. Next the following two steps are performed alternatively: [0134] 1. Assignment: Assign each pixel to the cluster whose mean value is closest to the pixel intensity (compared to other clusters). The following formula may be used for the assignment function: S_i^(t) = { χ_p : ‖χ_p − m_i^(t)‖² ≤ ‖χ_p − m_j^(t)‖² ∀ j, 1 ≤ j ≤ k }, where each χ_p is assigned to exactly one S^(t), even if it could be assigned to two or more of them. Re claim 37: Ventana Medical discloses the method of claim 36, wherein the summary statistics are selected from the group consisting of mean, median, standard deviation, variance, kurtosis, skew, histograms, principal components analysis, and combinations thereof (e.g. as seen in ¶ [119], [127] and [134] above, the invention uses mean, median and other calculations that can be associated with summary statistics of the image.). Re claim 38: (New) Ventana Medical discloses the method of claim 27, further comprising: providing one or more outputs indicative of the presence or absence of the disease or disease type (e.g. an output of features that are associated with a tissue type or feature score is performed in order to determine the cell area evaluated, which is seen in ¶ [107]-[111] above.), wherein the one or more outputs are selected from the group consisting of summary statistics, a cell type cluster score, one or more feature scores, an image of one or more cells, a composite image having a plurality of images of multiple cells, and combinations thereof (e.g. summary statistics are calculated regarding the evaluated image, which is taught in ¶ [119], [127] and [134] above. The feature score is used to determine the likelihood of a patch belonging to a specific pattern, which is output and explained in ¶ [107]-[111] above.). Re claim 39: (New) Ventana Medical discloses the method of claim 27, wherein the machine learning model is trained using at least one of a group comprising cytomorphologic training data and histologic training data (e.g. the pattern recognition algorithm is trained to determine cytomorphological data of the patch in comparison to the patterns, which is taught in ¶ [110] and [111] above.). Re claim 40: (New) Ventana Medical discloses the method of claim 27, wherein the machine learning model is trained using histologic data when available and cytomorphologic data when the histological data is not available (e.g.
the invention discloses classifying the features according to tissue types, which is taught in ¶ [107] above and associated with the cytomorphologic data. The cells can then be classified by the classifier, which is taught in ¶ [151]. Since both are available one or both methods can occur to use the histological data or the cytomorphological data, which the different methods are disclosed in ¶ [03], [150], [151], [160]-[162].). [0003] The present disclosure relates, among other things, to automated analysis of histochemical or cytological samples stained for analytes having complex staining patterns, including samples in which analytically distinct patterns of analyte staining are intermixed. XII. Cell Quantification [0150] In certain analyses, it may be useful to normalize the pixel quantification so that it can be compared across images. One way to do this is to quantify cell nuclei. A method of quantifying cell nuclei is disclosed in Nguyen et al., Using contextual information to classify nuclei in histology images, 2015 IEEE 12th International Symposium on Biomedical Imaging (ISBI), p 995-998 (Apr. 16-19, 2015), the contents of which are incorporated by reference in its entirety. Other methods of quantifying nuclei may be used as well. Each nucleus may be considered to correlate with a single cell. [0151] If desired, the cells can then be classified into different types corresponding to the compartments in its proximity, satisfying a distance constraint, e.g., a cell is classified as a stained membrane cell if there is a pixel p belonging to the membrane compartment, which is within a distance d to the cell. [0160] The texture features being computed are co-occurrence features. For co-occurrence features, feature extraction module 813 may compute the co-occurrence matrix (CM) of pixel intensity, and compute 13 Haralick features from this CM [see Haralick, R., et al.: Textural Features for Image Classification. IEEE Trans. Sys., Man., Cyber. 3 (6), 610-621 (1973)], including energy, correlation, inertia, entropy, inverse difference moment, sum average, sum variance, sum entropy, difference average, difference variance, difference entropy, and two information measures of correlation. In addition to the conventional gray-level CM (GLCM), which may be computed for each channel individually, the inter-channel or color co-occurrence matrix (CCM) may additionally be used. The CCM is created from the co-occurrence of pixel intensities in two different image channels, i.e., to compute the CCM from two channels Ci;Cj using a displacement vector d=[dx; dy], the co-occurrence of the pixel intensity is computed at location (x; y) in Ci and the pixel intensity at location (x+dx; y+dy) in Cj. The advantage of the CCM is that it captures the spatial relationship between different tissue structures (highlighted in different channels), without the need of explicitly segmenting them. Further, Haralick features may be computed from the GLCMs of all two channels, and Haralick features computed from the CCMs of all pairs of channels (HTX-DAB). In this example, the total number of features is 13×3×3=117. [0161] Subsequent to feature extraction, one or more processors 800 implement an SVM classifier module 814 that executes a trained support vector machine (SVM) algorithm 824 that was trained 825 on a training database 816 consisting of 10 images. [0162] Exemplary resulting pattern maps 710 are shown at FIG. 9 and FIG. 10. FIG. 
9 shows an input image (A) and a pattern segmentation map (B) for a tumor region containing membrane staining (red) (exemplarily indicated by an arrow labeled with “R” in FIG. 9(B)), cytoplasmic staining (blue) (exemplarily indicated by an arrow labeled with “B” in FIG. 9(B)), and membrane-punctate staining (pink) (exemplarily indicated by an arrow labeled with “P” in FIG. 9(B)). FIG. 10 shows an input image (A) and a pattern segmentation map (B) for a tumor region containing membrane staining (red) (exemplarily indicated by an arrow labeled with “R” in FIG. 10(B)), punctate staining (green) (exemplarily indicated by an arrow labeled with “G” in FIG. 10(B)), cytoplasmic staining (blue) (exemplarily indicated by an arrow labeled with “B” in FIG. 10(B)), and membrane-punctate staining (yellow) (exemplarily indicated by an arrow labeled with “Y” in FIG. 10(B)). Re claim 41: (New) Ventana Medical discloses the method of claim 27, wherein the machine learning model is trained by combining a histological test with a cytomorphologic test (e.g. the invention discloses uses staining to determine the different areas regarding specific cells and if an area belongs to tumor tissue, which is taught in ¶ [03], [106] and [150] above.). Re claim 48: (New) Ventana Medical discloses the method of claim 27, further comprising: displaying a single composite displayed image comprising a plurality of selected individual cell images extracted from the specimen slide image (e.g. regions are within a cell image that can be together as a whole in order to show different parts of the cell or tissue, which is taught in ¶ [141] and [142]. The overall specimen captured with different compartments can be displayed on a user interface, which is taught in ¶ [110], [139], [157] and [159] above.). [0141] Optionally, it may be desirable to perform a regional segmentation function 106 on the input image to segment the image into different regions, such as to divide tissue samples according to the predominant tissue type (i.e. tumor, stroma, immune, etc.) found in a particular region or identify regions corresponding to different types of organelles (such as to identify pixels corresponding to nuclei or membranes). The output of this function is a regional map 116 that categorizes pixels according to the region type that they are associated with. The regional map can be used for positive selection or for negative selection. That is, the regional map can be used to identify particular regions of the cellular sample in which the analysis and/or quantification should be focused (positive selection), or it could be used for identifying regions of the cellular sample that can be ignored in analysis and/or quantification (negative selection). [0142] In an example of positive selection, assume that only tumor regions are relevant to an analysis and/or quantification of a tissue sample. A tissue segmentation can be performed to identify tumor regions within the tissue sample. A tumor map can then be made of the tissue sample. This tissue map can then be used as the input image for the pattern segmentation function 102, the candidate compartment segmentation function 103, and/or the analyte intensity segmentation function 104 (represented generally by the dotted line 107). Re claim 49: (New) Ventana Medical discloses the method of claim 27, wherein the first data structure is a first feature vector, wherein the second data structure is a second feature vector indicative of slide-level features (e.g. 
the first vector can be associated with patterns corresponding to tissue types while a matrix is used in correlation with a vector to determine pixel intensity related to cell or slide-level features, which is taught in ¶ [106]-[110] above.), and wherein the aggregation of the feature scores corresponding to the one or more individual cells comprises an aggregation of the cell-level features for each of the one or more individual cells (e.g. the gathering of scores in association with the patches or patterns within the captured image are used to analyze the cell level features of whether a pattern is matched. The scores are gathered onto an image and a highest score is determined, which is taught in ¶ [106]-[111] above.). Re claim 45: (New) Ventana Medical discloses the method of claim 27, further comprising: displaying a generated image of the specimen slide (e.g. a user interface is used to display a generated image of a specimen on a slide, which is taught in ¶ [105] above, [124], [125] and [189]), [0124] FIG. 3 demonstrates a membrane candidate segmentation processes in which image smoothing and ridge detection are performed separately. One or more processors 300 implement an image channel extraction (ICE) module 311 to execute a channel extraction function 321 on the input image 301 to separate the input image into different image channels. The image channel corresponding to the detectable label for the analyte of interest is selected, and an analyte-mapped image 302 is generated consisting pixels clusters corresponding to the analyte of interest. The operator may interact with the ICE module 311 and/or the analyte-mapped image 303 via a user interface 306 to, for example, select a channel or channels to map. One or more processors 300 then implement a smoothing module 312 to execute a smoothing function 322 (such as a Gaussian smoothing operator) on the analyte-mapped image to generate a smoothed image 303. The operator may interact with the smoothing module 312 and/or the smoothed image 303 via a user interface 306 whereby the operator may vary the parameters, such as by manually selecting a standard deviation 6 and/or kernel size. One or more processors 300 then implements a Laplacian module 313 to execute a Laplacian function 323 on the smoothed image to generate a Laplacian image 304. The operator may interact with the Laplacian module 313 and/or the Laplacian image 304 via a user interface 306 whereby the operator may vary the parameters, such as by manually selecting a kernel size. One or more processors 300 may then implement a threshold module 314 to execute a thresholding function 324 on the Laplacian image to adjust threshold levels for ridge identifications in the Laplacian image and to generate a thresholded image 305. The operator may interact with the threshold module 314 and/or the thresholded image 305 via a user interface 306 that allows for selection of an intensity threshold. [0125] FIG. 4 demonstrates a membrane candidate segmentation process in which image smoothing and ridge detection are performed simultaneously. One or more processors 400 implement an image channel extraction (ICE) module 411 to execute a channel extraction function 421 on the input image 401 to separate the input image into different image channels. The image channel corresponding to the detectable label for the analyte of interest is selected, and an analyte-mapped image 402 is generated consisting pixels clusters corresponding to the analyte of interest. 
The operator may interact with the ICE module 411 and/or the analyte-mapped image 404 via a user interface 406 to, for example, select a channel or channels to map, set cutoffs, etc. One or more processors 400 implement a Laplacian of Gaussian (LoG) module 415 to execute a combined Laplacian-Gaussian function 425 on the analyte-mapped image 402 to generate an LoG image 407. The operator may interact with the LoG module 415 and/or the LoG image 407 via a user interface 406 whereby the operator may vary the parameters, such as by manually selecting a standard deviation 6 and/or kernel size. One or more processors 400 may then implement a threshold module 414 to execute a thresholding function 424 on the LoG image 407 to adjust threshold levels for ridge identifications in the LoG image 407 and to generate a thresholded image 405. The operator may interact with the threshold module 414 and/or the thresholded image 405 via a user interface 406 that allows for intensity thresholding. the generated image including a visual representation of a prediction score for each of the one or more individual cells (e.g. a visual representation of a score is performed for the candidate compartment map and analyte intensity map, which is taught in ¶ [110] above, ¶ [139], [157] and [159].). [0139] Referring back to FIG. 1, once the pattern map 112, the candidate compartment map 113, and the analyte intensity map 114 have been generated, a true compartment identification function is performed to identify true compartments in the compound staining regions. This process involves overlaying the pattern map 112, the candidate compartment map 113, and the analyte intensity map 114. At least within the compound staining areas, the candidate compartments are matched with the appropriate intensity bin for that compartment in that compound staining region. Pixels that fall within both a candidate compartment and an appropriate intensity bin are classified as “true compartment” pixels. A true compartment map 115 is then generated for each compartment of interest, composed of: (1) all pixels classified as true compartment pixels from compound staining regions; and (2) all pixels corresponding to analyte of interest from single stain regions for the compartment of interest. [0157] Input images 701 were visually analyzed and 9 distinct staining patterns were identified: (1) ligand negative nontumor tumor; (2) ligand negative tumor tumor; (3) ligand positive cytoplasmic tumor; (4) ligand positive punctate tumor; (5) ligand positive membrane tumor; (6) ligand positive membrane-cytoplasmic tumor; (7) ligand positive membrane-punctate tumor; (8) ligand positive cytoplasmic-punctate tumor; and (9) ligand positive non-tumor. [0159] The one or more processors 800 implement a grid point module 812 to execute a grid point function 822 on the feature mapped image 802 to overlay a grid of points (GPs) with an interval of d=15 pixels on the feature mapped image, followed by a feature extraction module 813 to execute a feature extraction function 823 on the image image channels. For each GP associated with each image channel, feature extraction module 813 extracts image features in the neighborhood of these points, and different types of image texture features are extracted. For example, gi

Prosecution Timeline

Nov 21, 2022
Application Filed
May 03, 2025
Non-Final Rejection — §103, §DP
Aug 08, 2025
Response Filed
Nov 12, 2025
Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602908
INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND RECORDING MEDIUM
2y 5m to grant; granted Apr 14, 2026
Patent 12603960
IMAGE ANALYSIS APPARATUS, IMAGE ANALYSIS SYSTEM, IMAGE ANALYSIS METHOD, PROGRAM, AND NON-TRANSITORY COMPUTER READABLE RECORDING MEDIUM COMPRISING READING A PRINTED MATTER, ANALYZING CONTENT RELATED TO READING OF THE PRINTED MATTER AND ACQUIRING SUPPORT INFORMATION BASED ON AN ANALYSIS RESULT OF THE CONTENT FOR DISPLAY TO ASSIST A USER IN FURTHER READING OPERATIONS
2y 5m to grant; granted Apr 14, 2026
Patent 12579817
Vehicle Control Device and Control Method Thereof for Camera View Control Based on Surrounding Environment Information
2y 5m to grant; granted Mar 17, 2026
Patent 12522110
APPARATUS AND METHOD OF CONTROLLING THE SAME COMPRISING A CAMERA AND RADAR DETECTION OF A VEHICLE INTERIOR TO REDUCE A MISSED OR FALSE DETECTION REGARDING REAR SEAT OCCUPATION
2y 5m to grant; granted Jan 13, 2026
Patent 12519896
IMAGE READING DEVICE COMPRISING A LENS ARRAY INCLUDING FIRST LENS BODIES AND SECOND LENS BODIES, A LIGHT RECEIVER AND LIGHT BLOCKING PLATES THAT ARE BETWEEN THE LIGHT RECEIVER AND SECOND LENS BODIES, THE THICKNESS OF THE LIGHT BLOCKING PLATES EQUAL TO OR GREATER THAN THE SECOND LENS BODIES THICKNESS
2y 5m to grant; granted Jan 06, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 63%
With Interview: 86% (+23.0%)
Median Time to Grant: 2y 9m
PTA Risk: Moderate
Based on 600 resolved cases by this examiner. Grant probability derived from career allow rate.
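A rough sketch of the arithmetic behind these projections, assuming the page simply takes the career allow rate as the base grant probability and adds the interview lift as percentage points (the actual model behind the dashboard is not disclosed):

```python
# Assumption-laden approximation of the headline projections from the
# examiner's career statistics shown above.
granted, resolved = 376, 600
career_allow_rate = granted / resolved                # ~0.627, displayed as 63%
interview_lift = 0.23                                 # +23.0 percentage points
with_interview = career_allow_rate + interview_lift   # ~0.857, displayed as 86%
print(f"{career_allow_rate:.0%} base, {with_interview:.0%} with interview")
```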
