Prosecution Insights
Last updated: April 19, 2026
Application No. 18/597,385

Diagnostic image translation using a deep neural network with both imaging parameters and diagnostic images as input

Non-Final OA §103
Filed
Mar 06, 2024
Examiner
HARANDI, SIAMAK
Art Unit
2662
Tech Center
2600 — Communications
Assignee
The Board Of Trustees Of The Leland Stanford Junior University Office Of The General Counsel
OA Round
1 (Non-Final)
Grant Probability: 91% (Favorable)
OA Rounds: 1-2
To Grant: 2y 3m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 91% (above average; 669 granted / 738 resolved; +28.7% vs TC avg)
Interview Lift: +7.5% (moderate, roughly +8%) across resolved cases with interview
Typical Timeline: 2y 3m average prosecution; 18 applications currently pending
Career History: 756 total applications across all art units

Statute-Specific Performance

§101: 16.6% (-23.4% vs TC avg)
§103: 37.4% (-2.6% vs TC avg)
§102: 17.6% (-22.4% vs TC avg)
§112: 14.2% (-25.8% vs TC avg)
Comparisons are against the Tech Center average estimate; based on career data from 738 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Priority
Acknowledgement is made of Applicant's claim of priority to, and the benefit of, U.S. Provisional Patent Application No. 63/450,225, filed on March 6, 2023.

Information Disclosure Statement
The information disclosure statement ("IDS") filed on 07/18/2025 has not been considered because it does not include the required IDS Size Fee Assertion under 37 C.F.R. § 1.17(v).

Drawings
The 24-page drawings have been considered and placed on record in the file.

Status of Claims
Claims 1-11 are pending.

Claim Objections
Claim 7 is objected to because of the following informalities: it appears that the recited list of diagnostic images in Claim 7 should end with the term "or" before "adiabatic". In addition, please change the presentation format of Claim 7 so that the second line starts at the beginning of the line without a tab spacing. Appropriate correction is required.

Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-5 and 7-9 are rejected under 35 U.S.C. 103 as being unpatentable over Wu et al. ("Quantitative Parametric Mapping of Tissues Properties from Standard Magnetic Resonance Imaging Enabled by Deep Learning", 10 August 2021) in view of Nakamura et al. (US 2018/0313917).

Consider Claim 1. Wu discloses "A method for diagnostic imaging" (Wu, Abstract) "comprising" (Wu, Fig. 1(b), where it is disclosed: "(b) In single parametric mapping, a quantitative tissue relaxation parametric map (e.g., T1) is extracted from several images acquired using the same pulse sequence but different acquisition parameters (e.g., flip angle), and the result is further compensated by field map (e.g., B1). Here, the field map is specifically measured, and parameter quantification is typically implemented using least squares fitting"; and Fig. 1(d), where the specialized deep-learning model is disclosed); "wherein generating the translated diagnostic images comprises applying both the predetermined image acquisition parameters and the diagnostic images as input to an input layer of the deep neural network; wherein the predetermined image acquisition parameters are input to the deep neural network in the form of parameter image maps with imaging parameter values at each pixel of the parameter image maps; wherein the translated diagnostic images are produced as output from an output layer of the deep neural network" (Wu, 2nd page, Fig. 1(d): "(d) In the proposed Q2MRI method, tissue relaxation properties (e.g., T1, T2) and field maps (e.g., B0, B1) are derived from one or a few MR images with conventional T1/T2 weighting, which are obtainable in clinical practice. Other biophysical and biochemical parametric maps (e.g., proton density fat fraction map) can also be derived in the same deep learning framework without further processing." [reproduced figure omitted]).

Although Wu discloses standard diagnostic images for its study (Wu, 10th page, 1st paragraph, acquisition of the T1- and T2-weighted knee images; and 2nd page, Figs. 1(a), 1(b), wherein standard diagnostic MRI images of the knee of a patient are acquired), it does not explicitly disclose "performing a diagnostic imaging scan using predetermined image acquisition parameters prescribed in an imaging protocol to produce diagnostic images". However, in an analogous field of endeavor, Nakamura discloses "the MRI apparatus 1A according to the second embodiment performs a main imaging (also referred to as 'main scan') to generate the diagnostic image in accordance with the line length and the imaging condition (also referred to as 'imaging parameter') selected by the pre-scan" (Nakamura, Paragraph [0097]). Accordingly, before the effective filing date of the instant application, it would have been obvious to one of ordinary skill in the art to combine Wu with the teachings of Nakamura to perform a diagnostic MRI scan using predetermined image acquisition parameters. One of ordinary skill in the art could have combined the two references by acquiring MRI scans according to Nakamura and then applying the techniques of Wu, in order to create robust, quantitative, and efficient MR images without additional data acquisition. Accordingly, it would have been obvious to combine Wu and Nakamura to obtain the invention in claim 1.
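For readers unfamiliar with the claim language mapped to Wu's Fig. 1(d), the following is a minimal sketch of how scalar acquisition parameters could be expanded into per-pixel "parameter image maps" and supplied to a network's input layer together with the diagnostic images. The framework (PyTorch), the choice of flip angles as the parameters, and the toy network architecture are assumptions for illustration, not the applicant's or Wu's actual implementation.

```python
# Illustrative sketch only: acquisition parameters broadcast to per-pixel
# "parameter image maps" and concatenated with diagnostic images as the
# input to a deep network. Architecture and parameter choice are assumptions.
import torch
import torch.nn as nn

H, W = 256, 256
images = torch.randn(1, 2, H, W)             # e.g., two T1-weighted acquisitions
flip_angles_deg = torch.tensor([5.0, 20.0])  # one nominal flip angle per image

# Each scalar acquisition parameter becomes a constant image map (same value
# at every pixel), matching the spatial size of the diagnostic images.
param_maps = flip_angles_deg.view(1, -1, 1, 1).expand(1, -1, H, W)

# Images and parameter maps enter the same input layer as separate channels.
net_input = torch.cat([images, param_maps], dim=1)   # shape: (1, 4, H, W)

translator = nn.Sequential(
    nn.Conv2d(4, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, kernel_size=3, padding=1),       # e.g., a T1 map as output
)
translated = translator(net_input)   # the "translated diagnostic image"
print(translated.shape)              # torch.Size([1, 1, 256, 256])
```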
Consider Claim 2. The combination of Wu and Nakamura discloses "The method of claim 1 wherein the diagnostic imaging is magnetic resonance imaging, wherein the diagnostic images are T1-weighted images acquired using variable flip angles, wherein the predetermined image acquisition parameters comprise variable flip angles, and wherein the translated diagnostic images comprise a T1 map" (Wu, 3rd page, Fig. 2, where a T1 map is shown, and 2nd paragraph: "T1 and T2* Mapping of the Knee. For T1 mapping of the knee, T1, ρ, and B1 maps were predicted from single T1-weighted images. Here, the ground truth T1 map was extracted from four T1-weighted images (acquired using variable flip angles of 5°, 10°, 20°, and 30°, respectively) via least squares fitting, and further corrected by the measured B1 map; the ρ map was calculated from a T1-weighted image and the T1 map").
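The "least squares fitting" cited above as Wu's ground-truth route to the T1 map corresponds, in the conventional variable-flip-angle setting, to a linear fit of the spoiled gradient-echo signal equation. Below is a minimal sketch of that textbook DESPOT1-style fit; the TR, flip angles, and per-pixel B1 value are assumptions for demonstration and are not taken from Wu or the application.

```python
# Illustrative sketch only: conventional variable-flip-angle (VFA) T1 mapping
# by linear least squares (DESPOT1-style). TR, flip angles, and the B1
# handling here are assumptions for demonstration.
import numpy as np

TR = 0.010                                    # repetition time in seconds (assumed)
flip_deg = np.array([5.0, 10.0, 20.0, 30.0])  # nominal flip angles
b1 = 0.95                                     # relative B1 at this pixel (assumed)
alpha = np.deg2rad(flip_deg) * b1             # B1-corrected actual flip angles

# Simulate SPGR signals for a known T1/M0 so the fit can be checked.
T1_true, M0 = 1.2, 1000.0
E1 = np.exp(-TR / T1_true)
S = M0 * np.sin(alpha) * (1 - E1) / (1 - E1 * np.cos(alpha))

# Linearized model: S/sin(a) = E1 * (S/tan(a)) + M0 * (1 - E1)
y = S / np.sin(alpha)
x = S / np.tan(alpha)
slope, intercept = np.polyfit(x, y, 1)
T1_fit = -TR / np.log(slope)
print(round(T1_fit, 3))   # ~1.2, recovering the simulated T1
```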
Consider Claim 3. The combination of Wu and Nakamura discloses "The method of claim 2 wherein the diagnostic images are T1-weighted images acquired with two distinct flip angles, wherein the translated diagnostic images comprise an uncompensated T1 map" (Wu, 3rd page, Fig. 2 shows an uncompensated T1 mapping, and 2nd paragraph lists various distinct variable flip angles).

Consider Claim 4. Wu discloses "The method of claim 2 wherein the predetermined image acquisition parameters are combined with a B1 map to produce actual variable flip angles; wherein the actual variable flip angles are input into the neural network in the form of a nominal flip angle modulated by the B1 map; wherein the translated diagnostic images comprise a compensated T1 map that takes in account B1 inhomogeneity" (Wu, 3rd page, Fig. 3 and 3rd paragraph: "High fidelity mapping was achieved consistently across patients. A representative case is shown in Fig. 3. In the resultant T1 maps, compensation for B1 inhomogeneity was automatically achieved without use of a measured B1 map. It is intriguing that the accurately predicted B1 map was implicitly incorporated into the T1 mapping models, which mitigated the need for actual B1 measurement").

Consider Claim 5. The combination of Wu and Nakamura discloses "The method of claim 2 wherein the translated diagnostic images comprise a ρ map" (Wu, 3rd page, Fig. 2(a)).

Consider Claim 7. The combination of Wu and Nakamura discloses the method of claim 1 wherein the diagnostic images are T2*-weighted images (Wu, 3rd page, Fig. 2(b), and 5th page, second paragraph, where T2*-weighted images are disclosed).

Consider Claim 8. The combination of Wu and Nakamura discloses "The method of claim 1 wherein the predetermined image acquisition parameters are inversion times, spin-lock times, echo times, spin-lock times, or number of adiabatic inversion recovery pulses" (emphasis added) (Wu, 5th page, where certain acquired echo times are disclosed).

Consider Claim 9. The combination of Wu and Nakamura discloses "The method of claim 1 wherein the deep neural network is a convolutional network, attention convolutional network, pure attention network, or generative adversarial network" (Wu, 7th page, 1st paragraph, 1st line, the convolutional neural network).

Claims 6 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Wu et al. ("Quantitative Parametric Mapping of Tissues Properties from Standard Magnetic Resonance Imaging Enabled by Deep Learning", 10 August 2021) in view of Nakamura et al. (US 2018/0313917), and in further view of Kenneth L. Weiss (US 11,580,626).

Consider Claim 6. The combination of Wu and Nakamura is not relied on for disclosing "The method of claim 1 wherein the diagnostic imaging is chemical shift encoded magnetic resonance imaging (MRI) using dual echo image acquisition; wherein the diagnostic images are in-phase and out-of-phase complex MRI images, wherein the predetermined image acquisition parameters are echo times, and wherein the translated diagnostic images are water and fat images." However, in an analogous field of endeavor, Weiss discloses these limitations (Weiss, column 2, lines 36-45 [reproduced excerpt omitted]; in addition, Weiss, Claim 17, discloses that the "multi-echo chemical shift encoded (CSE) MRI sequence is a dual echo (two point) Dixon sequence"). Accordingly, before the effective filing date of the instant application, it would have been obvious to one of ordinary skill in the art to combine the combination of Wu and Nakamura with the teachings of Weiss to use a dual-echo chemical shift MRI. One of ordinary skill in the art could have substituted the MRI scanner of the combination of Wu and Nakamura with the MRI system of Weiss, and the results would have been predictable, i.e., generation of diagnostic images. Accordingly, it would have been obvious to combine Wu, Nakamura, and Weiss to obtain the invention in claim 6.
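For context on the dual-echo chemical-shift-encoded imaging addressed in claim 6, the following is a minimal sketch of the textbook two-point Dixon relation between in-phase/out-of-phase images and water/fat images. It omits the B0 phase correction that practical reconstructions (including the "projected power approach" cited for claim 10) perform, and the synthetic input arrays are assumptions.

```python
# Illustrative sketch only: the textbook two-point Dixon relation underlying
# dual-echo chemical-shift-encoded water/fat separation. Real implementations
# additionally handle B0 phase errors; that step is omitted here.
import numpy as np

water_true = np.array([[0.9, 0.8], [0.7, 0.2]])
fat_true = np.array([[0.1, 0.2], [0.3, 0.8]])

in_phase = water_true + fat_true      # echo where water and fat signals add
out_phase = water_true - fat_true     # echo where water and fat signals oppose

water = 0.5 * (in_phase + out_phase)
fat = 0.5 * (in_phase - out_phase)
print(np.allclose(water, water_true), np.allclose(fat, fat_true))  # True True
```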
Consider Claim 10. The combination of Wu, Nakamura, and Weiss discloses "The method of claim 1 wherein the deep neural network is trained using training diagnostic images and corresponding translated images generated using least square fitting for generating quantitative parametric maps" (Wu, 2nd page, Fig. 1(b), least squares fitting) "and projected power approach for generating water and fat images" (Weiss, column 2, lines 36-45, where the water and fat contents are disclosed). The proposed combination, as well as the motivation for combining the Wu, Nakamura, and Weiss references, presented in the rejection of Claim 6 applies to Claim 10 and is incorporated herein by reference. Thus, the method recited in Claim 10 is met by Wu, Nakamura, and Weiss.

Allowable Subject Matter
Claim 11 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter: consider Claim 11, Wu discloses "The method of claim 1 wherein the deep neural network is trained using training diagnostic images via self-supervised learning technique comprising inputting the training diagnostic images to the deep neural network to produce estimated translated images as output" (Wu, 7th page, first paragraph, the self-attention convolutional neural network, and Fig. 1(d)). However, none of the cited prior art references, alone or in combination, provides a motivation to teach the ordered combination of the above-listed limitation with "generating from the estimated translated images synthetic images using a model-based calculation, and computing a loss function by comparing the synthetic images to the training diagnostic images".
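The limitation indicated as allowable in claim 11 describes a physics-based self-supervised loop: estimate parameter maps from training images, re-synthesize images with a model-based calculation, and penalize the mismatch with the training images. The following is a minimal sketch of such a loop; the network, the SPGR forward model used for synthesis, and all hyperparameters are illustrative assumptions rather than the applicant's disclosed implementation.

```python
# Illustrative sketch only: a self-supervised, physics-based training loop of
# the kind described for claim 11 (predict parameter maps, re-synthesize the
# input images with a signal model, and penalize the mismatch). Network size,
# the SPGR signal model, and all hyperparameters are assumptions.
import torch
import torch.nn as nn

TR = 0.010
flip = torch.deg2rad(torch.tensor([5.0, 20.0])).view(1, 2, 1, 1)

net = nn.Sequential(
    nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 2, 3, padding=1), nn.Softplus(),  # outputs: T1 map, M0 map (positive)
)
optim = torch.optim.Adam(net.parameters(), lr=1e-3)

def spgr_forward(t1, m0):
    """Model-based calculation: synthesize T1-weighted images from parameter maps."""
    e1 = torch.exp(-TR / t1.clamp(min=1e-3))
    return m0 * torch.sin(flip) * (1 - e1) / (1 - e1 * torch.cos(flip))

training_images = torch.rand(8, 2, 64, 64)    # stand-in for acquired VFA images

for step in range(10):
    maps = net(training_images)               # estimated translated images (T1, M0)
    synthetic = spgr_forward(maps[:, :1], maps[:, 1:])
    loss = nn.functional.mse_loss(synthetic, training_images)
    optim.zero_grad()
    loss.backward()
    optim.step()
```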
Conclusion and Contact Information
The prior art made of record and not relied upon is considered pertinent to Applicant's disclosure: Litwiller et al. (US 2022/0130084).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Siamak HARANDI, whose telephone number is (571) 270-1832. The examiner can normally be reached Monday - Friday, 9:30 - 6:00 ET. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Amandeep Saini, can be reached at (571) 272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Siamak Harandi/
Primary Examiner, Art Unit 2662

Prosecution Timeline

Mar 06, 2024
Application Filed
Mar 04, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12599350
COMPUTED TOMOGRAPHY BASED IMAGING OF VASA VASORUM DENSITY FOR DETECTION AND MONITORING OF INFLAMMATION AND ANGIOGENESIS IN VASCULAR WALL
2y 5m to grant; granted Apr 14, 2026
Patent 12593012
MEDICAL IMAGE PROCESSING DEVICE, MEDICAL IMAGE PROCESSING METHOD, AND ENDOSCOPE SYSTEM
2y 5m to grant; granted Mar 31, 2026
Patent 12582330
SYSTEMS AND METHODS FOR COMPUTER-ASSISTED SHAPE MEASUREMENTS IN VIDEO
2y 5m to grant; granted Mar 24, 2026
Patent 12586228
DEVICE AND METHOD FOR CALCULATING ATRIAL WALL THICKNESS
2y 5m to grant; granted Mar 24, 2026
Patent 12571747
OVERLAY MEASURING METHOD
2y 5m to grant; granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 91%
With Interview: 98% (+7.5%)
Median Time to Grant: 2y 3m
PTA Risk: Low
Based on 738 resolved cases by this examiner. Grant probability derived from career allow rate.
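If the grant probability is taken directly from the career allow rate, the arithmetic is straightforward: 669 granted of 738 resolved cases gives 669/738 ≈ 90.7%, which rounds to the 91% shown, and adding the +7.5 percentage-point interview lift yields approximately 98%.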
