Prosecution Insights
Last updated: April 18, 2026
Application No. 18/949,828

Devices, Systems, and Methods for Digital Microscopy

Status: Non-Final OA (§103)
Filed: Nov 15, 2024
Examiner: FINDLEY, CHRISTOPHER G
Art Unit: 2482
Tech Center: 2400 — Computer Networks
Assignee: IDEXX Laboratories, Inc.
OA Round: 1 (Non-Final)
Grant Probability: 77% (Favorable)
Expected OA Rounds: 1-2
Median Time to Grant: 2y 7m
Grant Probability with Interview: 89%

Examiner Intelligence

Career Allow Rate: 77% (above average; 580 granted / 752 resolved; +19.1% vs TC avg)
Interview Lift: +11.8% on resolved cases with interview (moderate, roughly +12%)
Typical Timeline: 2y 7m average prosecution; 28 applications currently pending
Career History: 780 total applications across all art units

Statute-Specific Performance

§101: 4.1% (-35.9% vs TC avg)
§102: 25.5% (-14.5% vs TC avg)
§103: 52.6% (+12.6% vs TC avg)
§112: 5.0% (-35.0% vs TC avg)
Tech Center averages are estimates. Based on career data from 752 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status: The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 U.S.C. § 103: In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: "A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made."

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Tahara (US 20240219305 A1) in view of Arar et al. (US 20190080450 A1).

Re claim 1, Tahara discloses a method for interrogating a sample with a microscopy analyzer, the method comprising: capturing one or more first images from an imaging sensor (Tahara: paragraph [0065], the detection unit 6102 may include an image sensor such as a CCD or a CMOS, with which it can acquire an image of biological particles, such as a bright-field image, a dark-field image, or a fluorescent image); determining a stain intensity (Tahara: paragraph [0067], the light data may be data of light intensity, and the light intensity may be light intensity data of light including fluorescence, where the light intensity data may include feature quantities such as area, height, and width; paragraph [0068], in the case of a spectral flow cytometer, the processing unit also performs a fluorescence separation process on the light data and acquires the light intensity data corresponding to the fluorescent dye); modifying an intensity of a light source based at least in part on the determined stain intensity (Tahara: paragraph [0076], in step S101 the information processing unit 103 starts the output adjustment process, which may be performed in a device setting stage before an analysis process of the biological sample by the biological sample analyzer is started, or in the middle of that analysis process); and, in response to modifying the intensity of the light source, capturing one or more second images from the imaging sensor (Tahara: paragraph [0076]).
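Before turning to the Arar combination, note that the four steps Tahara is cited for amount to a closed-loop illumination adjustment. A minimal sketch follows; the `sensor` and `light_source` interfaces, the mean-intensity stain proxy, and the proportional control law are illustrative assumptions, not features taken from Tahara.

```python
import numpy as np

def stain_intensity(image: np.ndarray) -> float:
    # Illustrative proxy: mean pixel value of a grayscale frame.
    # Tahara's light-intensity data (area/height/width feature
    # quantities, fluorescence separation) is richer than this.
    return float(image.mean())

def interrogate(sensor, light_source, target=128.0, gain=0.01):
    first = sensor.capture()            # capture first image(s)
    measured = stain_intensity(first)   # determine a stain intensity
    # Modify the light source based on the determined stain intensity
    # (hypothetical proportional adjustment).
    light_source.set_power(light_source.power + gain * (target - measured))
    second = sensor.capture()           # capture second image(s)
    return first, second
```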
Tahara does not specifically disclose inputting the one or more first images and the one or more second images into one or more machine learning models; identifying, via the one or more machine learning models, one or more characteristics of the sample in the one or more first images and one or more second images; and transmitting instructions that cause a graphical user interface to display the one or more characteristics of a fluid sample in the one or more first images and one or more second images.

However, Arar discloses that generating the first and/or second classifier by training a machine-learning algorithm may be advantageous, because the classifiers are created automatically in a data-driven manner (Arar: paragraph [0081]). Thus, the classifiers can automatically learn to identify the extended tissue type and/or contrast level from one or more image features which were automatically identified during the training as features having predictive power in respect to the tissue type or contrast level class membership (Arar: paragraph [0081]). Arar discloses an image analysis method for automatically determining the staining quality of an IHC stained biological sample, the method comprising: receiving a digital image of an IHC stained tissue sample of a patient, the pixel intensities of the image correlating with the amount of a tumor-marker-specific stain; extracting a plurality of features from the received digital image; inputting the extracted features into a first classifier configured to identify the extended tissue type of the tissue depicted in the digital image as a function of at least some first ones of the extracted features, the extended tissue type being a tissue type with a defined expression level of the tumor marker; inputting the extracted features into a second classifier configured to identify a contrast level of the tissue depicted in the digital image as a function of at least some second ones of the extracted features, the contrast level indicating the intensity contrast of pixels of the stained tissue; and computing a staining quality score for the tissue depicted in the digital image as a function of the identified extended tissue type and the identified contrast level (Arar: paragraphs [0331]-[0336]). The combined output of the first and second classifiers 622 and 624 in Arar is used by a prediction logic 626 for predicting the staining quality of an image or image region (Arar: paragraph [0239]). The predicted staining quality can be used by a range extraction logic 628 for automatically identifying staining parameter value ranges which are particular to the respective extended tissue types and which can safely be assumed to yield high quality staining (Arar: paragraph [0239]). The results computed by the range extraction logic 628 can be visualized and displayed to a user via a screen by a parameter range plotter 638 (Arar: paragraph [0239]). Since Tahara and Arar both relate to observation and analysis of samples, one of ordinary skill in the art before the effective filing date would have found it obvious to combine the machine learning of Arar with the system of Tahara in order to improve classification of detected image features (Arar: paragraph [0066]).

Re claim 2, Tahara discloses that the fluid sample comprises a biological sample (Tahara: paragraph [0061]).
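As a concrete reading of the Arar pipeline quoted above, the sketch below wires two classifiers and a prediction logic together. It is a schematic only: the feature extractor, the use of scikit-learn SVMs, and the lookup-table "prediction logic" are assumptions for illustration, not Arar's actual implementation.

```python
import numpy as np
from sklearn.svm import SVC

def extract_features(image: np.ndarray) -> np.ndarray:
    # Placeholder features: mean intensity, intensity spread, and the
    # fraction of dark (heavily stained) pixels. Arar's real feature
    # set is not reproduced here.
    return np.array([image.mean(), image.std(), (image < 64).mean()])

def staining_quality(image: np.ndarray,
                     tissue_clf: SVC,
                     contrast_clf: SVC,
                     score_table: dict) -> float:
    feats = extract_features(image).reshape(1, -1)
    tissue_type = tissue_clf.predict(feats)[0]       # first classifier
    contrast_level = contrast_clf.predict(feats)[0]  # second classifier
    # "Prediction logic": map the (tissue type, contrast level) pair to
    # a staining quality score; this table stands in for Arar's logic 626.
    return score_table[(tissue_type, contrast_level)]
```

Both classifiers are assumed to be already fitted; the score table is a stand-in for whatever combination function the prediction logic actually applies.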
Re claim 3, Tahara discloses that the biological sample comprises one or more of the following: (i) blood (Tahara: paragraph [0061]); (ii) urine; (iii) saliva; (iv) ear wax; (v) fine needle aspirates; (vi) lavage fluids; (vii) body cavity fluids; and (viii) fecal matter.

Re claim 4, Tahara does not specifically disclose that the one or more machine learning models comprise one or more of the following: (i) an artificial neural network, (ii) a support vector machine, (iii) a regression tree, or (iv) an ensemble of regression trees. However, Arar discloses use of "machine-learning logic," which is a computer-executable program logic adapted to "learn" from training data, i.e., to process training data and automatically adapt an internal model of the world such that the model better fits the training data (Arar: paragraph [0070]). For example, a machine-learning logic can be a classifier or a regressor that analyzes the training images for the specific tissue type with available quality annotations and that outputs probability maps from the trained tissue-type and contrast classifiers (Arar: paragraph [0070]). A machine-learning logic can be, for example, an artificial neural network (ANN), a support vector machine (SVM), or the like (Arar: paragraph [0070]). Since Tahara and Arar both relate to observation and analysis of samples, one of ordinary skill in the art before the effective filing date would have found it obvious to combine the machine learning of Arar with the system of Tahara in order to improve classification of detected image features (Arar: paragraph [0066]).

Re claim 5, Tahara discloses, prior to inputting the one or more first images and the one or more second images into the one or more machine learning models, applying one or more image enhancements to at least one of the one or more first images and the one or more second images (Tahara: paragraph [0076], in step S101 the information processing unit 103 starts the output adjustment process, which may be performed in a device setting stage before an analysis process of the biological sample is started; paragraphs [0119]-[0121], correcting scattered light data).

Re claim 6, Tahara does not specifically disclose, prior to inputting the one or more first images and the one or more second images into the one or more machine learning models, training the one or more machine learning models with one or more training images that share a characteristic with at least one of the one or more first images or the one or more second images. However, Arar discloses that a "machine-learning logic" is a computer-executable program logic adapted to "learn" from training data, i.e., to process training data and automatically adapt an internal model of the world such that the model better fits the training data (Arar: paragraph [0070]). For example, a machine-learning logic can be a classifier or a regressor that analyzes the training images for the specific tissue type with available quality annotations and that outputs probability maps from the trained tissue-type and contrast classifiers (Arar: paragraph [0070]). Since Tahara and Arar both relate to observation and analysis of samples, one of ordinary skill in the art before the effective filing date would have found it obvious to combine the machine learning of Arar with the system of Tahara in order to improve classification of detected image features (Arar: paragraph [0066]).
Re claim 7, Tahara does not specifically disclose that training the one or more machine learning models comprises, based on inputting the one or more training images into the one or more machine learning models: (i) predicting, by the one or more machine learning models, an outcome of a determined condition of the one or more training images; (ii) comparing the outcome to the characteristic of the one or more training images; and (iii) adjusting, based on comparing the outcome to the characteristic of the one or more training images, the one or more machine learning models. However, Arar discloses that generating the first and/or second classifier by training a machine-learning algorithm may be advantageous, because the classifiers are created automatically in a data-driven manner (Arar: paragraph [0081]). Thus, the classifiers can automatically learn to identify the extended tissue type and/or contrast level from one or more image features which were automatically identified during the training as features having predictive power in respect to the tissue type or contrast level class membership (Arar: paragraph [0081]). Arar discloses the image analysis method for automatically determining the staining quality of an IHC stained biological sample summarized for claim 1 above: features are extracted from the received digital image, input into the first (extended tissue type) and second (contrast level) classifiers, and a staining quality score is computed as a function of the identified extended tissue type and contrast level (Arar: paragraphs [0331]-[0336]). The combined output of the first and second classifiers 622 and 624 is used by a prediction logic 626 for predicting the staining quality of an image or image region; the predicted staining quality can be used by a range extraction logic 628 for automatically identifying staining parameter value ranges which are particular to the respective extended tissue types and which can safely be assumed to yield high quality staining; and the results computed by the range extraction logic 628 can be visualized and displayed to a user via a screen by a parameter range plotter 638 (Arar: paragraph [0239]). Since Tahara and Arar both relate to observation and analysis of samples, one of ordinary skill in the art before the effective filing date would have found it obvious to combine the machine learning of Arar with the system of Tahara in order to improve classification of detected image features (Arar: paragraph [0066]).
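The claim 7 language describes a generic predict/compare/adjust training cycle. A schematic sketch under stated assumptions follows; the `model` object (with a `predict` method and a `weights` vector) and the perceptron-style update rule are hypothetical, chosen only to make the three steps concrete, and are not Arar's training procedure.

```python
import numpy as np

def train(model, training_images, labels, lr=0.1, epochs=10):
    for _ in range(epochs):
        for image, label in zip(training_images, labels):
            x = image.ravel()
            outcome = model.predict(x)   # (i) predict an outcome
            error = label - outcome      # (ii) compare to the known characteristic
            model.weights += lr * error * x  # (iii) adjust the model
    return model
```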
Re claim 8, Tahara does not specifically disclose that training the one or more machine learning models comprises one or more of supervised learning, semi-supervised learning, reinforcement learning, or unsupervised learning. However, Arar discloses that classification tasks can be performed by a first trained classifier having been trained in a supervised manner on a set of training images annotated with ground-truth labels for tissue type and tumor marker expression status (Arar: paragraph [0275]). In case a sufficient number of training images with annotated staining quality scores or staining quality labels is not available, an alternative approach for generating the prediction logic, based on unsupervised learning, can be applied (Arar: paragraph [0290]). Since Tahara and Arar both relate to observation and analysis of samples, one of ordinary skill in the art before the effective filing date would have found it obvious to combine the machine learning of Arar with the system of Tahara in order to improve classification of detected image features (Arar: paragraph [0066]).

Re claim 9, Tahara does not specifically disclose adjusting a contrast level of the one or more first images or the one or more second images based on a normalization of the one or more first images or the one or more second images. However, Arar discloses that, in some embodiments, the grayscale image is further processed, e.g., by adjusting the contrast of the inverted image via a round of histogram equalization and/or by smoothing the adjusted image by convolving it with a low-pass filter kernel (Arar: paragraph [0114]). The same rationale for combining Tahara and Arar applies (Arar: paragraph [0066]).

Re claim 10, Tahara does not specifically disclose that adjusting the contrast level comprises using an automatic gain control feature. However, Arar discloses the same further processing of the grayscale image via histogram equalization and low-pass smoothing (Arar: paragraph [0114]). The same rationale for combining Tahara and Arar applies (Arar: paragraph [0066]).

Re claim 11, Tahara does not specifically disclose that adjusting the contrast level is based on the determined stain intensity. However, Arar discloses that, according to embodiments, the identification of the primary background comprises receiving the digital image as an RGB image, in which the stained tissue regions correspond to low-intensity image regions and unstained tissue regions correspond to high-intensity image regions (Arar: paragraph [0113]). The method further comprises converting the RGB image into a grayscale image and inverting the grayscale image such that stained tissue regions correspond to high-intensity image regions and unstained tissue regions correspond to low-intensity image regions (Arar: paragraph [0113]). In some embodiments, the grayscale image is further processed, e.g., by adjusting the contrast of the inverted image via a round of histogram equalization and/or by smoothing the adjusted image by convolving it with a low-pass filter kernel (Arar: paragraph [0114]). The same rationale for combining Tahara and Arar applies (Arar: paragraph [0066]).

Re claim 12, Tahara does not specifically disclose that adjusting the contrast level is based on a command received from a controller. However, Arar discloses generating, for each of the extended tissue types depicted in the received digital images, a respective multi-dimensional staining quality plot, at least two plot dimensions respectively representing one of the staining protocol parameters used for staining the tissues depicted in the received images, with the staining quality scores computed for each staining protocol value graphically represented in the form of a grey-level scale or color scale or as a further dimension of the staining quality plot (Arar: paragraph [0427]); and presenting the staining quality plot on a display screen for enabling a user to manually select, selectively for the extended tissue type for which the plot was generated and for each of the staining protocol parameters, a parameter value range that corresponds to high quality tissue staining (Arar: paragraph [0428]). The same rationale for combining Tahara and Arar applies (Arar: paragraph [0066]).

Re claim 13, Tahara does not specifically disclose that the method further comprises: determining, via the one or more machine learning models, an image enhancement for the one or more first images or the one or more second images; applying, based on the determined image enhancement, the image enhancement to the one or more first images or the one or more second images; and outputting, via the graphical user interface, the one or more enhanced images. However, Arar discloses generating and presenting the multi-dimensional staining quality plot described for claim 12 (Arar: paragraphs [0427]-[0428]), and further discloses that, in some embodiments, the grayscale image is processed by adjusting the contrast of the inverted image via a round of histogram equalization and/or by smoothing the adjusted image by convolving it with a low-pass filter kernel (Arar: paragraph [0114]). Since Tahara and Arar both relate to observation and analysis of samples, one of ordinary skill in the art before the effective filing date would have found it obvious to combine the machine learning of Arar with the system of Tahara in order to improve classification of detected image features (Arar: paragraph [0066]).
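The Arar preprocessing cited for claims 9-13 (grayscale conversion, inversion, histogram equalization, low-pass smoothing; Arar: paragraphs [0113]-[0114]) translates almost directly into a few OpenCV calls. This is a minimal sketch assuming OpenCV and 8-bit images, not code from either reference:

```python
import cv2
import numpy as np

def preprocess(rgb: np.ndarray) -> np.ndarray:
    # RGB -> grayscale; stained tissue is dark in the grayscale image.
    gray = cv2.cvtColor(rgb, cv2.COLOR_RGB2GRAY)
    # Invert so stained regions become high-intensity (Arar [0113]).
    inverted = 255 - gray
    # Contrast adjustment via one round of histogram equalization.
    equalized = cv2.equalizeHist(inverted)
    # Smooth by convolving with a low-pass (Gaussian) filter kernel.
    return cv2.GaussianBlur(equalized, (5, 5), 0)
```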
Re claim 14, Tahara does not specifically disclose that applying the image enhancement to the one or more first images or the one or more second images comprises applying one or more of the following to the one or more images: (i) a saturation enhancement; (ii) a brightness enhancement; (iii) a contrast enhancement; and (iv) a focal setting enhancement. However, Arar discloses that, in some embodiments, the grayscale image is further processed, e.g., by adjusting the contrast of the inverted image via a round of histogram equalization and/or by smoothing the adjusted image by convolving it with a low-pass filter kernel (Arar: paragraph [0114]). The same rationale for combining Tahara and Arar applies (Arar: paragraph [0066]).

Re claim 15, Tahara discloses that the sample is disposed on a glass slide of the microscopy analyzer (Tahara: paragraph [0062], the flow channel C and the flow channel structure including the flow channel C may be made of a material such as plastic or glass).

Re claim 16, Tahara discloses that the sample is disposed on a plastic slide of the microscopy analyzer (Tahara: paragraph [0062]).

Re claim 17, Tahara discloses that the light source is a brightfield light source (Tahara: paragraph [0065], the detection unit 6102 can acquire an image, such as a bright-field image, a dark-field image, or a fluorescent image, of biological particles).

Re claim 18, Tahara does not specifically disclose that determining the stain intensity comprises inputting the one or more first images into the one or more machine learning models and determining, via the one or more machine learning models, the stain intensity. However, Arar discloses that generating the first and/or second classifier by training a machine-learning algorithm may be advantageous, because the classifiers are created automatically in a data-driven manner (Arar: paragraph [0081]). Thus, the classifiers can automatically learn to identify the extended tissue type and/or contrast level from one or more image features which were automatically identified during the training as features having predictive power in respect to the tissue type or contrast level class membership (Arar: paragraph [0081]).
Arar further discloses the image analysis method for automatically determining the staining quality of an IHC stained biological sample summarized for claim 1 above: features are extracted from the received digital image, input into the first (extended tissue type) and second (contrast level) classifiers, and a staining quality score is computed as a function of the identified extended tissue type and the identified contrast level (Arar: paragraphs [0331]-[0336]). Since Tahara and Arar both relate to observation and analysis of samples, one of ordinary skill in the art before the effective filing date would have found it obvious to combine the machine learning of Arar with the system of Tahara in order to improve classification of detected image features (Arar: paragraph [0066]).

Claim 19 recites the corresponding non-transitory, computer-readable medium having instructions stored thereon, wherein the instructions, when executed by one or more processors, cause the one or more processors to perform the method of claim 1. Therefore, arguments analogous to those presented for claim 1 are applicable to claim 19. Arar discloses that the invention may be a system, a method, and/or a computer program product (Arar: paragraph [0320]). The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the invention (Arar: paragraph [0321]). Accordingly, claim 19 has been analyzed and rejected with respect to claim 1 above.

Claim 20 recites the corresponding microscopy analyzer system configured to perform the method of claim 1. Therefore, arguments analogous to those presented for claim 1 are applicable to claim 20. Tahara discloses that the detection unit 6102 includes at least one photodetector that detects light generated by emitting light onto biological particles, and may also include an image sensor such as a CCD or a CMOS, with which the detection unit 6102 can acquire an image (such as a bright-field image, a dark-field image, or a fluorescent image) of biological particles (Tahara: paragraph [0065]). The flow channel C and the flow channel structure including the flow channel C may be made of a material such as plastic or glass (Tahara: paragraph [0062]). Arar discloses that the invention may be a system, a method, and/or a computer program product including a computer readable storage medium having computer readable program instructions for causing a processor to carry out aspects of the invention (Arar: paragraphs [0320]-[0321]).
Accordingly, claim 20 has been analyzed and rejected with respect to claim 1 above.

Contact

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRISTOPHER G FINDLEY, whose telephone number is (571) 270-1199. The examiner can normally be reached Monday-Friday, 9 AM-5 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chris Kelley, can be reached at (571) 272-7331. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/CHRISTOPHER G FINDLEY/
Primary Examiner, Art Unit 2482

Prosecution Timeline

Nov 15, 2024: Application Filed
Apr 02, 2026: Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604018: CONVENTIONAL AND NEURAL NETWORK CODECS FOR RANDOM ACCESS VIDEO CODING
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12590799: Systems and Methods for Estimating Depth from Projected Texture using Camera Arrays
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12593031: IMAGE ENCODING/DECODING METHOD, DEVICE, AND RECORDING MEDIUM HAVING BITSTREAM STORED THEREIN
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12574546: METHOD AND DEVICE FOR ENCODING OR DECODING IMAGE ON BASIS OF INTER MODE
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12574504: IMAGE ENCODING/DECODING METHOD, DEVICE, AND RECORDING MEDIUM HAVING BITSTREAM STORED THEREIN
Granted Mar 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 77%
With Interview: 89% (+11.8%)
Median Time to Grant: 2y 7m
PTA Risk: Low
Based on 752 resolved cases by this examiner. Grant probability derived from career allow rate.
