Prosecution Insights
Last updated: April 19, 2026
Application No. 18/253,069

IMAGING METHOD AND SYSTEM FOR GENERATING A DIGITALLY STAINED IMAGE, TRAINING METHOD FOR TRAINING AN ARTIFICIAL INTELLIGENCE SYSTEM, AND NON-TRANSITORY STORAGE MEDIUM

Final Rejection §103
Filed: May 16, 2023
Examiner: SALEH, ZAID MUHAMMAD
Art Unit: 2668
Tech Center: 2600 (Communications)
Assignee: Prospective Instruments GmbH
OA Round: 2 (Final)
Grant Probability: 65% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 1m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 65% (28 granted / 43 resolved; +3.1% vs TC avg, above average)
Interview Lift: strong, +48.4% across resolved cases with interview
Typical Timeline: 3y 1m average prosecution; 30 applications currently pending
Career History: 73 total applications across all art units
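The headline numbers in this panel are simple ratios of the counts shown. A minimal sketch, assuming the dashboard's "Career Allow Rate" is just grants divided by resolved cases (the tool's exact methodology is not published here):

```python
# Sketch of the dashboard's headline arithmetic.
# Assumption: "Career Allow Rate" = granted / resolved; the tool's
# exact methodology is not stated on this page.
granted = 28
resolved = 43

allow_rate = granted / resolved
print(f"Career Allow Rate: {allow_rate:.1%}")  # 65.1%, displayed as 65%

# The "+3.1% vs TC avg" delta implies a Tech Center average near 62%.
implied_tc_avg = allow_rate - 0.031
print(f"Implied TC average: {implied_tc_avg:.1%}")
```

The interview-lift figure (+48.4%) is reported only for resolved cases that had an interview, so it cannot be reconstructed from the counts shown here.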

Statute-Specific Performance

§101: 5.7% (-34.3% vs TC avg)
§103: 58.5% (+18.5% vs TC avg)
§102: 28.0% (-12.0% vs TC avg)
§112: 4.4% (-35.6% vs TC avg)
Tech Center averages are estimates • Based on career data from 43 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

Claims 14, 15, 17, and 25-29 remain pending. Claims 14, 25, and 28 are amended. Claims 1-13 have been canceled.

Response to Amendment

The amendment filed 01/21/2026 overcomes the § 112(d) rejection.

Response to Arguments

Applicant's arguments filed January 21, 2026 with respect to claims 14, 15, 17, and 25-29 have been considered but are moot because the new grounds of rejection do not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 14, 26, and 27 are rejected under 35 U.S.C. 103 as being unpatentable over Navid, "Digital staining through the application of deep neural networks to multi-modal multi-photon microscopy" (hereinafter Navid), in view of Freytag, Patent Application Publication No. WO-2021198252-A1 (hereinafter Freytag).

Regarding claim 14, Navid discloses an imaging method for generating a digitally stained image of a biological tissue probe from a physical image of an unstained biological tissue probe (Navid, Abstract: "Deep neural networks have been used to map multi-modal, multi-photon microscopy measurements of a label-free tissue sample to its corresponding histologically stained brightfield microscope colour image"), the method comprising:

G1) obtaining a physical image of an unstained biological tissue probe by optical microscopy (Navid, Section 2.1: "It was first observed with an integrated multi-modal microscope capable of recording spatially co-registered TPEF, FLIM, SHG, and optical coherence tomography (OCT) modalities");

G2) generating a digitally stained image from the physical image by using an artificial intelligence system (Navid, Section 1, last paragraph: "These DNNs were used to produce qualitatively accurate visual reconstructions of the stained images from label-free observations using two different MPM techniques"), wherein the system is trained to predict a digitally stained image obtainable by staining the probe in a physical staining method (Navid, Section 1, last paragraph: "A combination of TPEF and FLIM was used as the source dataset to train the DNNs. An H&E-stained brightfield microscope image of the same tissue sample was used as the target dataset"),

wherein step G1) comprises obtaining the physical image of the unstained probe by simultaneous multi-modal microscopy (Navid, Section 2.2.1: "TPEF and FLIM modes were already co-registered since they were recorded simultaneously on the same imaging instrument").

Navid does not disclose the limitation indicated via strike-through above. Freytag discloses without spatial co-registration of different imaging modalities (Freytag, Page 4, Paragraph 1: "training the cycle generative adversarial network may be performed without a registration of the training imaging data and the reference images on a global scale. Alternatively, or in addition, the training may also be performed free of a registration on a local scale").

It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to integrate the technique of Freytag into the system of Navid because it would make the computation of the system faster by reducing the computational burden.

Summary of Citations (Freytag)
- Page 4, Paragraph 1: "training the cycle generative adversarial network may be performed without a registration of the training imaging data and the reference images on a global scale. Alternatively, or in addition, the training may also be performed free of a registration on a local scale".

Summary of Citations (Navid)
- Abstract: "Deep neural networks have been used to map multi-modal, multi-photon microscopy measurements of a label-free tissue sample to its corresponding histologically stained brightfield microscope colour image".
- Section 1, last paragraph: "These DNNs were used to produce qualitatively accurate visual reconstructions of the stained images from label-free observations using two different MPM techniques. A combination of TPEF and FLIM was used as the source dataset to train the DNNs. An H&E-stained brightfield microscope image of the same tissue sample was used as the target dataset".
- Section 2.1: "The tissue section was a 10 μm thick slice of ex vivo label-free fixed rat liver tissue mounted on a glass microscope slide. It comprised hepatic cells to which capillaries deliver blood. It was first observed with an integrated multi-modal microscope capable of recording spatially co-registered TPEF, FLIM, SHG, and optical coherence tomography (OCT) modalities".
- Section 2.2.1: "TPEF and FLIM modes were already co-registered since they were recorded simultaneously on the same imaging instrument".

Regarding claims 26 and 27, the grounds of rejection from the last Office Action with respect to Navid in view of Freytag apply here.

Claims 15, 28, and 29 are rejected under 35 U.S.C. 103 as being unpatentable over Navid in view of Freytag and further in view of Nelson, Patent Application Publication No. WO-2017146813-A1 (hereinafter Nelson).

Regarding claim 15, the ground of rejection based on Nelson from the previous non-final Office Action of 10/21/2025 applies here.
Regarding claim 28, Navid discloses a system for generating a digitally stained image of a biological tissue probe and/or for training an artificial intelligence system, the system comprising (Navid, Abstract: "Deep neural networks have been used to map multi-modal, multi-photon microscopy measurements of a label-free tissue sample to its corresponding histologically stained brightfield microscope colour image"):

- an optical microscopic system for obtaining physical images of biological tissue probes by simultaneous multi-modal microscopy (Navid, Section 2.1, Paragraph 2: "It was first observed with an integrated multi-modal microscope capable of recording spatially co-registered TPEF, FLIM, SHG, and optical coherence tomography (OCT) modalities"), each pair comprising
- a physical image of an unstained biological tissue probe obtained by simultaneous multi-modal microscopy (Navid, Section 2.1, Paragraph 1: "The multi-modal dataset used for this study comprised a 16 spectral channel TPEF mode, a single channel FLIM mode, and a 3 channel stained brightfield microscope image of the same rat liver tissue sample"),
- a stained image of said probe obtained in a physical staining method (Navid, Section 2.1, Paragraph 3: "After measuring its TPF characteristics, the tissue was stained using H&E and observed under a brightfield microscope"); and
- a processing unit for performing the imaging method according to claim 14.

Nelson further discloses a data storage for storing a multitude of image pairs (Nelson, [0034]: "training example including one or more images of one or more cells, and, for each training example, one or more corresponding stained images. In some cases, for each training example, the corresponding stained images can depict the cells being stained with a variety of stains").

Freytag further discloses without spatial co-registration of different imaging modalities (Freytag, Page 4, Paragraph 1: "training the cycle generative adversarial network may be performed without a registration of the training imaging data and the reference images on a global scale. Alternatively, or in addition, the training may also be performed free of a registration on a local scale").

Summary of Citations (Navid)
- Abstract: "Deep neural networks have been used to map multi-modal, multi-photon microscopy measurements of a label-free tissue sample to its corresponding histologically stained brightfield microscope colour image".
- Section 1, last paragraph: "These DNNs were used to produce qualitatively accurate visual reconstructions of the stained images from label-free observations using two different MPM techniques. A combination of TPEF and FLIM was used as the source dataset to train the DNNs. An H&E-stained brightfield microscope image of the same tissue sample was used as the target dataset".
- Section 2.1: "The tissue section was a 10 μm thick slice of ex vivo label-free fixed rat liver tissue mounted on a glass microscope slide. It comprised hepatic cells to which capillaries deliver blood. It was first observed with an integrated multi-modal microscope capable of recording spatially co-registered TPEF, FLIM, SHG, and optical coherence tomography (OCT) modalities".
- Section 2.2.1: "TPEF and FLIM modes were already co-registered since they were recorded simultaneously on the same imaging instrument".

Summary of Citations (Freytag)
- Page 4, Paragraph 1: "training the cycle generative adversarial network may be performed without a registration of the training imaging data and the reference images on a global scale. Alternatively, or in addition, the training may also be performed free of a registration on a local scale".

Summary of Citations (Nelson)
- Paragraph [0034]: "The system obtains training data for the stained cell neural network (step 302). The training data includes multiple training examples, with each training example including one or more images of one or more cells, and, for each training example, one or more corresponding stained images. In some cases, for each training example, the corresponding stained images can depict the cells being stained with a variety of stains".

Regarding claim 29, the ground of rejection based on Nelson from the previous non-final Office Action of 10/21/2025 applies here.

Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Navid in view of Freytag and further in view of Sue, Patent Publication No. US-11928820-B2 (hereinafter Sue), Wendel, Patent Application Publication No. WO-2016149542-A1 (hereinafter Wendel), and Lefkofsky, US Patent Application Publication No. US-20210118559-A1 (hereinafter Lefkofsky). The ground of rejection based on Sue, Wendel, and Lefkofsky from the previous non-final Office Action of 10/21/2025 applies here.

Claim 25 is rejected under 35 U.S.C. 103 as being unpatentable over Navid in view of Freytag and further in view of Zhaoyang, "GAN-based Virtual Re-Staining: A Promising Solution for Whole Slide Image Analysis" (hereinafter Zhaoyang), applicant-submitted prior art.

Regarding claim 25, Navid in the combination discloses the method as claimed in claim 14, further comprising training the system with a training architecture that comprises a generator-discriminator network (Zhaoyang, Section 1.2: "A significant amount of investigations have been made to explore the potential of GAN in natural images related tasks like image synthesizing, image super-resolution, and style transfer").
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to integrate the teaching of Zhaoyang into the system of Navid to improve the original network and to improve the translation accuracy of cancer diagnosis (Abstract).

Summary of Citations (Zhaoyang)
- Section 1.2: "A significant amount of investigations have been made to explore the potential of GAN in natural images related tasks like image synthesizing, image super-resolution, and style transfer".

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Contact

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZAID MUHAMMAD SALEH, whose telephone number is (703) 756-1684. The examiner can normally be reached M-F, 8 am - 5 pm ET. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Vu Le, can be reached at (571) 272-7332. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ZAID MUHAMMAD SALEH/
Examiner, Art Unit 2668
2/05/2026

/VU LE/
Supervisory Patent Examiner, Art Unit 2668

Prosecution Timeline

May 16, 2023
Application Filed
Oct 13, 2025
Non-Final Rejection — §103
Jan 21, 2026
Response Filed
Feb 05, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602944
AUTHENTICATION OF DENDRITIC STRUCTURES
Granted Apr 14, 2026 • 2y 5m to grant
Patent 12586501
DISPLAY DEVICE, DISPLAY METHOD, AND STORAGE MEDIUM
Granted Mar 24, 2026 • 2y 5m to grant
Patent 12586396
INFORMATION PROCESSING APPARATUS AND SYSTEM
Granted Mar 24, 2026 • 2y 5m to grant
Patent 12562535
METHOD FOR DETECTING UNDESIRED CONNECTION ON PRINTED CIRCUIT BOARD
Granted Feb 24, 2026 • 2y 5m to grant
Patent 12555344
METHOD AND APPARATUS FOR IMPROVING VIDEO TARGET DETECTION PERFORMANCE IN SURVEILLANCE EDGE COMPUTING
Granted Feb 17, 2026 • 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 65%
With Interview: 99% (+48.4%)
Median Time to Grant: 3y 1m
PTA Risk: Moderate
Based on 43 resolved cases by this examiner. Grant probability derived from career allow rate.
