Prosecution Insights
Last updated: April 19, 2026
Application No. 17/915,717

CUSTOMIZING VIRTUAL STAIN

Non-Final OA — §102, §103
Filed
Sep 29, 2022
Examiner
HSU, JONI
Art Unit
2611
Tech Center
2600 — Communications
Assignee
Carl Zeiss Microscopy GmbH
OA Round
3 (Non-Final)
Grant Probability: 87% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 9m
With Interview: 95%

Examiner Intelligence

Career Allow Rate: 87% (above average; 741 granted / 848 resolved; +25.4% vs TC avg)
Interview Lift: +7.2% (moderate lift, across resolved cases with interview)
Avg Prosecution: 2y 9m (typical timeline; 34 currently pending)
Total Applications: 882 (career history, across all art units)

Statute-Specific Performance

§101: 8.4% (-31.6% vs TC avg)
§102: 11.4% (-28.6% vs TC avg)
§103: 59.7% (+19.7% vs TC avg)
§112: 3.1% (-36.9% vs TC avg)
Tech Center averages shown for comparison • Based on career data from 848 resolved cases

Office Action

§102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments, see p. 9, filed December 16, 2025, with respect to Claims 11 and 24 have been fully considered and are persuasive. The 35 U.S.C. 102 rejections of Claims 11 and 24 have been withdrawn. Applicant's arguments with respect to Claims 1-3, 8, 10, 12-16, 21, 23, 25, and 26 have been considered but are moot because new grounds of rejection are made in view of Stumpe (US 20200394825A1).

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 10, 12-14, 23, 25, and 26 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Stumpe (US 20200394825A1).
As per Claim 1, Stumpe teaches a method of virtual staining of a tissue sample, the method comprising: obtaining imaging data depicting the tissue sample, processing the imaging data in at least one machine-learning logic (202), the at least one machine-learning logic being configured to provide multiple output images all depicting the tissue sample comprising a given virtual stain, the multiple output images depicting the tissue sample comprising the given virtual stain at different colorings associated with different staining laboratory processes, and obtaining, from the at least one machine-learning logic, at least one output image of the multiple output images (special stained images of the same tissue specimens in four different staining protocols or regimes, such as a suite of images of the tissue specimen stained with four different special stains, in use, a pathologist considering an H&E stained image 200 of a lung specimen supplies the image to the model 202 and it returns four different virtual stain images of the lung specimen stained with the suite of four different special stains 204, [0067], machine learning predictor model 202, [0066], output is an RGB image with the same tissue morphology but different colors, depending on the respective special stain that is predicted, [0057], user views the H&E stained image and activates an icon to switch between different virtual stains, essentially recoloring the H&E image into the respective stain image, [0066]). 
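[Editor's note] The Stumpe behavior cited above (output images sharing the same tissue morphology but differing in color depending on the predicted special stain, [0057]) can be illustrated with a deliberately toy sketch. This is not Stumpe's trained predictor model 202: the learned mapping is replaced here with fixed, made-up palettes, and the names `STAIN_PALETTES`, `recolor`, and `virtual_stain_suite` are hypothetical. It only shows the input/output shape of the cited behavior, namely one input producing multiple stain-specific renderings of the same structure.

```python
import numpy as np

# Hypothetical per-stain color palettes (RGB for "background" and "tissue").
# The hues loosely evoke different special stains; the values are illustrative only.
STAIN_PALETTES = {
    "masson_trichrome": (np.array([240, 240, 235]), np.array([60, 80, 160])),
    "pas":              (np.array([245, 240, 240]), np.array([170, 40, 110])),
}

def recolor(morphology: np.ndarray, stain: str) -> np.ndarray:
    """Map a [0, 1] grayscale morphology image to an RGB rendering of one stain.

    The morphology (structure) is shared across all stains; only the coloring
    changes, which is the property the Office Action quotes from Stumpe [0057].
    """
    bg, fg = STAIN_PALETTES[stain]
    m = morphology[..., None]  # (H, W) -> (H, W, 1) so it broadcasts over RGB
    return ((1.0 - m) * bg + m * fg).astype(np.uint8)

def virtual_stain_suite(morphology: np.ndarray) -> dict:
    """Return one output image per supported stain for the same input."""
    return {stain: recolor(morphology, stain) for stain in STAIN_PALETTES}
```

Feeding one morphology array through `virtual_stain_suite` yields a dictionary of same-shaped RGB images, one per stain, mirroring the "suite of four different virtual stain images" workflow described in [0067].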
As per Claim 10, Stumpe teaches wherein the imaging data (200) of the tissue sample comprises one or more input images depicting at least one of the tissue sample not comprising a chemical stain associated with the virtual stain or comprising a further chemical stain that is not associated with the virtual stain (pathologist considering an H&E stained image 200 of a lung specimen supplies the image to the model 202 and it returns four different virtual stain images of the lung specimen stained with the suite of four different special stains 204, [0067]).

As per Claim 12, Stumpe teaches a method of training at least one machine-learning logic (202) for virtual staining of a tissue sample, the method comprising: obtaining training imaging data depicting one or more tissue samples, obtaining multiple reference output images, the multiple reference output images depicting the one or more tissue samples, or one or more further tissue samples, all comprising a given chemical stain, wherein different reference output images depict the one or more tissue samples or the one or more further tissue samples comprising the given chemical stain at different colorings provided by different staining laboratory processes, and training the at least one machine-learning logic based on the training imaging data and the multiple reference output images (model 202 is trained from H&E stained lung cancer tissue specimens and corresponding special stained images of the same tissue specimens in four different staining protocols or regimes, such as a suite of images of the tissue specimen stained with four different special stains; in use, a pathologist considering an H&E stained image 200 of a lung specimen supplies the image to the model 202 and it returns four different virtual stain images of the lung specimen stained with the suite of four different special stains 204, [0067]; machine learning predictor model 202, [0066]; output is an RGB image with the same tissue morphology but different colors, depending on the respective special stain that is predicted, [0057]; user views the H&E stained image and activates an icon to switch between different virtual stains, essentially recoloring the H&E image into the respective stain image, [0066]).

As per Claim 13, Claim 13 is similar in scope to Claim 12, and therefore is rejected under the same rationale.

As per Claims 14, 23, 25, and 26, these claims are similar in scope to Claims 1, 10, 12, and 13 respectively, and therefore are rejected under the same rationale.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 2, 3, 15, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Stumpe (US 20200394825A1) in view of Xu (see citation below).

As per Claim 2, Stumpe is relied upon for the teachings as discussed above relative to Claim 1.
However, Stumpe does not teach wherein the at least one machine-learning logic comprises a single machine-learning logic, the single machine-learning logic comprising a conditional input, wherein setting the conditional input selects between the colorings associated with the different staining laboratory processes, to thereby obtain the respective output image of the multiple output images from the single machine-learning logic.

However, Xu teaches wherein the at least one machine-learning logic comprises a single machine-learning logic, the single machine-learning logic comprising a conditional input, wherein setting the conditional input selects between the colorings associated with the different staining laboratory processes, to thereby obtain the respective output image of the multiple output images from the single machine-learning logic (conditional CycleGAN network to transform the H&E stained images into IHC stained images, facilitating virtual IHC staining on the same slide, p. 1, Abstract; enforce the model to learn mutual representation with conditional query, p. 6, last paragraph; virtual staining can be exploited bi-directionally, H&E staining can also be faithfully reproduced from an IHC stained slide, p. 14, 2nd paragraph).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Stumpe so that the at least one machine-learning logic comprises a single machine-learning logic, the single machine-learning logic comprising a conditional input, wherein setting the conditional input selects between the colorings associated with the different staining laboratory processes, to thereby obtain the respective output image of the multiple output images from the single machine-learning logic, as suggested by Xu.

Xu suggests that it is advantageous to produce an image virtually stained with one dye from another, for example producing IHC images from images dyed with relatively cheaper and widely available staining techniques such as H&E. This may be very useful in many clinical diagnostic and AI-based applications, such as improving the effectiveness of pathological examination by reducing the eyeballing time in visual screening of the slides, and increasing the segmentation and classification performance of AI models: if the H&E slides can be virtually IHC stained, then with a few simple post-processing steps the generated IHC images can provide highly precise segmentation of tissue regions of interest (p. 3, 2nd paragraph). In some instances, it is advantageous to faithfully reproduce H&E staining from an IHC stained slide. The IHC staining can then be easily segmented and used as a mask to automatically extract annotations from the virtual H&E staining with pixel-level accuracy, reducing the need for manual annotation and overcoming the intrinsic limitation of inter-slide variation in serial tissue slides. This also improves the accuracy of patch-based training data (p. 14, 2nd paragraph).

15. As per Claim 3, Stumpe does not teach wherein the conditional input selects between the colorings from a predefined set of candidate colorings associated with predefined staining laboratory processes. However, Xu teaches wherein the conditional input selects between the colorings from a predefined set of candidate colorings associated with predefined staining laboratory processes (p. 1, Abstract; p. 6, last paragraph; p. 14, 2nd paragraph). This would be obvious for the reasons given in the rejection of Claim 2.

16. As per Claims 15-16, these claims are similar in scope to Claims 2-3 respectively, and therefore are rejected under the same rationale.

17. Claims 8 and 21 are rejected under 35 U.S.C.
103 as being unpatentable over Stumpe (US 20200394825A1) in view of Detone (US 20220028110A1).

18. As per Claim 8, Stumpe is relied upon for the teachings as discussed above relative to Claim 1. However, Stumpe does not teach wherein the at least one machine-learning logic comprises a neural network comprising a decoder branch and multiple decoder heads for the decoder branch, wherein different ones of the multiple decoder heads are used to obtain different ones of the multiple output images.

However, Detone teaches wherein the at least one machine-learning logic comprises a neural network comprising a decoder branch and multiple decoder heads for the decoder branch, wherein different ones of the multiple decoder heads are used to obtain different ones of the multiple output images (subnetworks 112, 114 can provide different outputs based on the same input, and represent different branches of neural network 100, [0050]; neural network 100 includes a single shared encoder that processes and reduces the input image dimensionality; once processed by the encoder, the architecture splits into two decoder heads, which learn task-specific weights, [0078]; the neural network determines, for each image, a respective set of interest points and a respective descriptor; interest points can be determined by the interest point detection decoder head, [0133]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Stumpe so that the at least one machine-learning logic comprises a neural network comprising a decoder branch and multiple decoder heads for the decoder branch, wherein different ones of the multiple decoder heads are used to obtain different ones of the multiple output images, as suggested by Detone. It is well known in the art that decoder heads are typically used when a machine learning model needs to generate a different type of output for each task.

19. As per Claim 21, Claim 21 is similar in scope to Claim 8, and therefore is rejected under the same rationale.

Allowable Subject Matter

20. Claims 4-7, 9, 11, 17-20, 22, and 24 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

21. The following is a statement of reasons for the indication of allowable subject matter: The prior art, taken singly or in combination, does not teach or suggest the combination of all the limitations of Claim 11 and base Claim 1, and in particular does not teach wherein the imaging data of the tissue sample comprises one or more input images depicting the tissue sample comprising the given virtual stain at a first coloring, wherein the at least one output image depicting the tissue sample comprises a second coloring different from the first coloring. Claim 24 is similar in scope to Claim 11, and therefore also contains allowable subject matter.

Prior Art of Record

Xu, Zhaoyang et al.; "GAN-based Virtual Re-Staining: A Promising Solution for Whole Slide Image Analysis"; 13 January 2019; pp. 1-14; https://arxiv.org/pdf/1901.04059v1

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JONI HSU, whose telephone number is (571) 272-7785. The examiner can normally be reached M-F 10am-6:30pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kee Tung, can be reached at (571) 272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/JONI HSU/
Primary Examiner, Art Unit 2611
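[Editor's note] For readers less familiar with the Detone architecture relied on for Claims 8 and 21, the shared-encoder, multiple-decoder-head pattern can be sketched in a few lines. This is an illustrative toy using dense layers and hypothetical names (`W_enc`, `HEADS`, `forward`), not Detone's actual network: one encoder produces a shared representation, and each task-specific head decodes it into its own output.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions; a real network would use convolutional layers, not dense ones.
D_IN, D_LATENT, D_OUT = 8, 4, 3

# Single shared encoder, as in the Detone passage cited in the Office Action:
# one set of weights reduces the input dimensionality for all tasks.
W_enc = rng.normal(size=(D_IN, D_LATENT))

# One decoder head per desired output (e.g. per virtual stain); each head
# holds its own task-specific weights on top of the shared representation.
HEADS = {
    "head_a": rng.normal(size=(D_LATENT, D_OUT)),
    "head_b": rng.normal(size=(D_LATENT, D_OUT)),
}

def forward(x: np.ndarray) -> dict:
    """Run the shared encoder once, then every decoder head on its output."""
    z = np.tanh(x @ W_enc)          # shared latent representation
    return {name: z @ W for name, W in HEADS.items()}
```

The design point the rejection leans on is visible here: the encoder runs once per input, while the per-head weights are what distinguish the multiple outputs, so adding another output type is just adding another head.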

Prosecution Timeline

Sep 29, 2022 — Application Filed
Apr 11, 2025 — Non-Final Rejection (§102, §103)
Jul 10, 2025 — Response Filed
Oct 20, 2025 — Final Rejection (§102, §103)
Dec 09, 2025 — Interview Requested
Dec 12, 2025 — Applicant Interview (Telephonic)
Dec 12, 2025 — Examiner Interview Summary
Dec 16, 2025 — Response after Non-Final Action
Jan 12, 2026 — Non-Final Rejection (§102, §103)
Apr 09, 2026 — Interview Requested
Apr 15, 2026 — Examiner Interview Summary
Apr 15, 2026 — Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592028 — METHODS AND DEVICES FOR IMMERSING A USER IN AN IMMERSIVE SCENE AND FOR PROCESSING 3D OBJECTS
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12586306 — METHOD, ELECTRONIC DEVICE, AND COMPUTER PROGRAM PRODUCT FOR MODELING OBJECT
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12586260 — CREATING IMAGE ENHANCEMENT TRAINING DATA PAIRS
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12581168 — A METHOD FOR A MEDIA FILE GENERATING AND A METHOD FOR A MEDIA FILE PROCESSING
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12561850 — IMAGE GENERATION WITH LEGIBLE SCENE TEXT
Granted Feb 24, 2026 (2y 5m to grant)
Based on this examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 87%
With Interview: 95% (+7.2%)
Median Time to Grant: 2y 9m
PTA Risk: High
Based on 848 resolved cases by this examiner. Grant probability derived from career allow rate.
