Prosecution Insights
Last updated: April 19, 2026
Application No. 17/982,023

SYSTEMS AND METHODS FOR PROCESSING REAL-TIME CARDIAC MRI IMAGES

Status: Final Rejection (§103)
Filed: Nov 07, 2022
Examiner: CADEAU, WEDNEL
Art Unit: 2632
Tech Center: 2600 — Communications
Assignee: Shanghai United Imaging Intelligence Co. Ltd.
OA Round: 2 (Final)
Grant Probability: 72% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 9m
With Interview: 91%

Examiner Intelligence

Career Allow Rate: 72%, above average (381 granted / 532 resolved; +9.6% vs TC avg)
Interview Lift: +19.6%, strong (allow rate in resolved cases with vs. without an interview)
Typical Timeline: 2y 9m average prosecution; 42 applications currently pending
Career History: 574 total applications across all art units
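The headline figures are simple ratios over the examiner's resolved cases. A minimal sketch of the arithmetic, using the granted/resolved counts and the with-interview rate reported above (the underlying per-case data is not in this report, and small discrepancies in the lift come from rounding of the displayed percentages):

```python
# Career allow rate and interview lift from the dashboard's counts.
# 381 granted / 532 resolved and the 91% with-interview figure come
# from the report above; the "lift" is the difference from baseline.

def allow_rate(granted: int, resolved: int) -> float:
    """Share of resolved applications that were granted, in percent."""
    return 100.0 * granted / resolved

career = allow_rate(381, 532)             # shown rounded as 72%
with_interview = 91.0                     # dashboard figure
interview_lift = with_interview - career  # close to the +19.6 shown

print(f"career allow rate: {career:.1f}%")
print(f"interview lift:    {interview_lift:+.1f} points")
```

The dashboard's +19.6% is presumably computed from unrounded with/without rates, so this reconstruction lands within a few tenths of a point.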

Statute-Specific Performance

§101: 2.5% (-37.5% vs TC avg)
§103: 75.6% (+35.6% vs TC avg)
§102: 3.5% (-36.5% vs TC avg)
§112: 16.5% (-23.5% vs TC avg)
Tech Center averages are estimates • Based on career data from 532 resolved cases
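The per-statute deltas imply the Tech Center baselines directly. A quick sketch, assuming the deltas are percentage-point differences (examiner rate minus TC average), which is how the chart appears to compute them:

```python
# Recover the implied Tech Center average for each statute from the
# examiner's rejection rate and the reported delta (examiner - TC avg).
# All four rates and deltas are taken from the table above.
examiner = {"§101": 2.5, "§103": 75.6, "§102": 3.5, "§112": 16.5}
delta = {"§101": -37.5, "§103": +35.6, "§102": -36.5, "§112": -23.5}

tc_avg = {s: examiner[s] - delta[s] for s in examiner}
for statute, avg in tc_avg.items():
    print(f"{statute}: implied TC average = {avg:.1f}%")
```

Notably, the implied baseline works out to 40.0% for every statute, which suggests the chart compares against a single flat Tech Center estimate rather than genuine per-statute averages.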

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Prior art cited in this Office action:
Shalhon Livne et al. (US 20230252632 A1, hereinafter “Shalhon”)
Lu et al. (US 20150091563 A1, hereinafter “Lu”)

Response to Arguments

Applicant's arguments filed 06/24/2025 have been fully considered but they are not persuasive.

Applicant’s Arguments/Remarks: Applicant argues that Shalhon was filed on Feb. 9, 2023, after the November 7, 2022 filing date of the present application. While Shalhon claims priority to U.S. Provisional App. No. 63/308,550, filed Feb. 10, 2022 (hereinafter “the Shalhon Provisional”), the passages cited by the Office Action from Shalhon never appeared in the Shalhon Provisional. As such, Shalhon is not entitled to the Feb. 10, 2022 priority date and cannot be used as prior art against the pending claims. In fact, the terms “subset” and “mask score” never appeared in the Shalhon Provisional, so Shalhon cannot claim priority to the Shalhon Provisional and does not qualify as prior art against the pending claims.

Examiner’s Response: The examiner disagrees with applicant's assertion that Shalhon cannot be used as prior art on the ground that the provisional application carrying the earlier priority date does not teach applicant's invention as claimed. While the teaching of the limitation relied upon by the examiner is more detailed, and easier to understand, in the published application, the provisional application also teaches the same limitation. For example, at page 20, lines 6-13, the Shalhon Provisional teaches: “Another method of assessing vascular filling, used in some embodiments, is to examine vascular masks generated from vascular segmentation, and select the frames where the detected vascular lengths are the longest.”
In other words, the length serves as a mask score, and selecting the frames where the vascular lengths are the longest clearly shows that a subset of the frames is selected based on the mask score (length). Shalhon further teaches that “in some embodiments, mask-type ML-based vascular identifier outputs are filtered according to the size of their connected components (blob sizes)” (Shalhon, page 21, lines 12-13). Therefore, applicant's argument that the provisional application of Shalhon does not qualify as prior art has no merit.

Applicant’s Arguments/Remarks: While the Office Action tries to cure the deficiency of Shalhon with Lu, the disclosures cited from Lu are also lacking. For example, the Office Action cites paragraphs [0087] and [0093] of Lu, but those paragraphs never contemplate using ML models to automatically recognize the slices of cardiac images, much less grouping them based on the recognized slices for a cardiac analysis task. Accordingly, the pending claims should be patentable over Shalhon and Lu even if Shalhon is still deemed prior art.

Examiner’s Response: The examiner disagrees with applicant's assertion that the combination of the cited prior art does not teach or suggest applicant's invention as claimed. Shalhon teaches: An ML product may also be a classifier, in the sense that it is configured to classify its input (in whole or in part) as belonging or not belonging to a certain class, and/or as having a certain estimated likelihood of belonging to a certain class. Optionally this classification considers just one class, or there may be a plurality of classes tested at once by a single classifier. With respect to image classifications in particular, classifying certain parts of the image as belonging and/or being likely to belong to a certain structure is also referred to herein as “identifying” that structure in the image.
More particularly, in some embodiments, the identifying comprises segmenting the image into two or more groups, with at least one of the segmented groups being identified as representing the structure in the image (Shalhon, page 11, lines 1-5). A vascular identifier may optionally produce a data structure which performs classification of an image as a whole. For example, the data structure may classify the image according to its likelihood of being an image obtained near a certain phase of the heartbeat cycle (page 12, lines 5-10). The classification here can be interpreted as grouping. The mask-type ML-based vascular identifier may be one trained using mask data, with the mask data identifying which portions of a vascular image should be considered as vascular portions. For formula-based vascular identifiers, the parameters available for adjustment are not necessarily suited to making anatomical selections of the type needed (page 26, lines 1-7). It is a potential advantage to select images taken from the same phase of the heartbeat cycle. Images taken during diastole in particular have the potential advantage of being near the time when the ventricles of the heart are most expanded, and thus the blood vessels at their largest (page 19, lines 22-31). Shalhon does not use the word “slice”; however, relying on Shalhon and considering the frames as slices (under the broadest reasonable interpretation), one of ordinary skill in the art can see that frames (slices) are chosen and classified (grouped) based on the cardiac phase such that the network can learn from the phase information. Lu further teaches: Once the anchor slice(s) are selected, a subset of the source slices of the plurality of source slices are selected in act 72 based on a correlation of the source slice data and the anchor slice data of the selected anchor slice(s). The subset of the source slices may be for a respective phase of the repetitive motion.
The source slices may be organized or grouped by motion phase before or after the slice selection of the act 72. For example, the source slices may be grouped by motion phase in accordance with the timestamp data and/or the motion phase to which the source slice data is normalized based on the timestamp data. In the embodiment of FIG. 2, the source slices are binned by motion phase before the selection procedure for ease in illustrating the assembly of a respective volume for each motion phase. Lu teaches that the system can be implemented using machine learning. Furthermore, Shalhon has already been cited to teach using machine learning to select frames with a particular cardiac phase. Therefore, contrary to applicant's assertion, one of ordinary skill in the art would have known how to use machine learning not only to detect frames (slices) with a particular desired anatomical structure but also to classify (group) them such that certain conditions of the heart can be determined. Furthermore, applicant is reminded that the test for obviousness is not whether the features of a secondary reference may be bodily incorporated into the structure of the primary reference; nor is it that the claimed invention must be expressly suggested in any one or all of the references. Rather, the test is what the combined teachings of the references would have suggested to those of ordinary skill in the art. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981). In this case, the combined teachings suggest grouping or classifying images or segments by cardiac phase to determine any cardiac anomaly. Therefore, claims 1-20 are not allowable over the cited prior art.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Shalhon Livne et al. (US 20230252632 A1, hereinafter “Shalhon”) in view of Lu et al. (US 20150091563 A1, hereinafter “Lu”).

Regarding claims 1 and 11: Shalhon teaches an apparatus (Shalhon [0003], [0055], fig. 1, where Shalhon teaches in some embodiments thereof, relates to the field of vascular imaging and more particularly, but not exclusively, to vascular computer modelling and segmentation), comprising: at least one processor (Shalhon [0003], [0055], fig.
1, where Shalhon teaches the system 102 may represent, for example, a system of one or more computers or processors which implements the techniques described herein) configured to: obtain a plurality of medical images of a heart (Shalhon [0027], [0050], claim 29, where Shalhon teaches accessing an image sequence comprising a plurality of vascular images, the vascular images depicting a portion of a heart from a particular viewpoint and the vascular images being associated with different times within a time range); determine, based on one or more machine-learned (ML) image recognition models, a slice and a cardiac phase associated with each of the plurality of medical images (Shalhon [0050], [0064], [0124] Abstract, where Shalhon teaches a vascular identifier may optionally produce a data structure which performs classification of an image as a whole. For example, the data structure may classify the image according to its likelihood of being an image obtained near a certain phase of the heartbeat cycle. 
The neural network may also be trained to assign a classification associated with a cardiac phase); select a first group of medical images from the plurality of medical images based at least on the slice and cardiac phase associated with each of the plurality of medical images and a requirement of a cardiac analysis task (Shalhon [0050], [0064], [0124] Abstract, claims 29 and 58, where Shalhon teaches generate segmentation masks associated with the subset, wherein the segmentation masks segment vessels included in the respective vascular images which form the subset, wherein mask scores are determined for the segmentation masks which are indicative of a size or length associated with a vessel included in a segmentation mask, and wherein the subset is filtered to remove one or more vascular images based on the mask scores; and determine a particular vascular image included in the filtered subset, wherein the determination is based analyzing one or more image quality measures determined for each vascular image included in the filtered subset, wherein the particular vascular image is configured for inclusion in an interactive user interface); and provide the first group of medical images for performing the cardiac analysis task (Shalhon [0046]-[0048], fig. 12, where Shalhon teaches the system may leverage machine learning techniques to identify images in the image sequence which depict the heart in a particular cardiac phase. the system may then use machine learning techniques to output segmentation masks for these identified images. The segmentation masks may then be analyzed to identify size or length metrics associated with vessels (herein referred to as ‘mask scores’). For example, a mask score may indicate a length associated with a vessel. 
In this example, the length may indicate a length associated with a centerline from a first end of a vessel to a second end of the vessel).

Shalhon fails to explicitly teach determining a slice associated with each of the plurality of medical images. However, Lu teaches: the subset of the source slices may be for a respective phase of the repetitive motion. The source slices may be organized or grouped by motion phase before or after the slice selection of the act 72. For example, the source slices may be grouped by motion phase in accordance with the timestamp data and/or the motion phase to which the source slice data is normalized based on the timestamp data (Lu [0087], [0093]). Therefore, taking the teachings of Shalhon and Lu as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the application to determine a slice and a cardiac phase associated with each of the plurality of medical images, since each slice and each phase can provide extra information about the condition of the heart by being able to properly reconstruct the heart and heart tissues such that diseases and/or anomalies can be better detected (Lu [0003]).

Regarding claims 2 and 12: Shalhon in view of Lu teaches wherein the plurality of medical images is captured based on a real-time magnetic resonance imaging (MRI) technique and spans multiple cardiac phases and multiple slices of the heart (Shalhon [0027]; Lu [0002]-[0007], [0087], [0093]).
Regarding claims 3 and 13: Shalhon in view of Lu teaches wherein the plurality of medical images includes a first medical image of the heart captured consecutively with a second medical image of the heart, the first and second medical images being associated with respective cardiac phases and slices, and wherein the first and second medical images differ from each other with respect to at least one of the cardiac phases or the slices associated with the first and second medical images (Shalhon [0027], [0043], claim 29; Lu [0002]-[0007], [0087], [0093]).

Regarding claims 4 and 14: Shalhon in view of Lu teaches wherein the at least one processor is further configured to determine, automatically, a view associated with each of the plurality of medical images based on the one or more ML image recognition models, and select the first group of medical images further based on the view associated with each of the plurality of medical images (Shalhon [0027], [0031], [0043], claim 29; Lu [0002]-[0007], [0028], [0087], [0093]).

Regarding claims 5 and 15: Shalhon in view of Lu teaches wherein the view includes a short-axis view, a 2-chamber long-axis view, a 3-chamber long-axis view, or a 4-chamber long-axis view of the heart (Lu [0021], [0028], [0051], [0110]).
Regarding claims 6 and 16: Shalhon in view of Lu teaches wherein the first group of medical images is associated with a first cardiac cycle, and wherein the at least one processor is further configured to: select a second group of medical images from the plurality of medical images based at least on the requirement of the cardiac analysis task and the slice and cardiac phase associated with each of plurality of medical images, wherein the second group of medical images is associated with a second cardiac cycle and is misaligned with the first group of medical images with respect to one or more time spots; generate one or more additional medical images of the heart for the second group of medical images; and add the one or more additional medical images to the second group of medical images such that the second group of medical images is aligned with the first group of medical images with respect to the one or more time spots (Shalhon [0064], [0126], [0161]; Lu [0009]-[0011], [0038], [0057]).

Regarding claims 7 and 17: Shalhon in view of Lu teaches wherein the at least one processor is further configured to determine respective timestamps of the medical images comprised in the first group of medical images and the second group of medical images, and wherein the one or more additional medical images are generated for the second group of medical images based at least on the determined timestamps (Lu [0058], [0067], [0083], [0093]).

Regarding claims 8 and 18: Shalhon in view of Lu teaches wherein the one or more additional medical images are generated based on an interpolation technique or a machine-learned image synthesis model (Shalhon [0090]; Lu [0083], [0098]-[0099]).
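Claims 6-8 and 16-18 concern synthesizing additional frames so that two cardiac-cycle groups line up on common time spots. A minimal illustration of the interpolation route (linear blending between the two temporally nearest frames, keyed on the timestamps the claims recite); this is a sketch of one of the two claimed options, not the applicant's actual method, and the function and variable names below are hypothetical:

```python
from bisect import bisect_left

def interpolate_frame(timestamps, frames, t):
    """Synthesize a frame at time t by linearly blending the two
    acquired frames nearest in time.

    timestamps: sorted acquisition times for one slice/phase group.
    frames: matching list of frames (here, flattened pixel lists).
    """
    i = bisect_left(timestamps, t)
    if i == 0:                       # before the first acquisition
        return list(frames[0])
    if i == len(timestamps):         # after the last acquisition
        return list(frames[-1])
    t0, t1 = timestamps[i - 1], timestamps[i]
    w = (t - t0) / (t1 - t0)         # blend weight: 0 at t0, 1 at t1
    return [(1 - w) * a + w * b for a, b in zip(frames[i - 1], frames[i])]

# Synthesize a frame halfway between two acquisitions (toy 2-pixel images).
ts = [0.0, 100.0]
fs = [[0.0, 10.0], [4.0, 20.0]]
print(interpolate_frame(ts, fs, 50.0))  # [2.0, 15.0]
```

The ML image-synthesis alternative named in claims 8/18 would replace the blend with a learned generator, but the timestamp-driven alignment logic stays the same.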
Regarding claims 9 and 19: Shalhon in view of Lu teaches wherein the at least one processor is further configured to register a first medical image of the first group of medical images with a second medical image of the first group of medical images, the registration compensating for a respiratory motion associated with the first medical image or the second medical image (Shalhon [0141], [0161]; Lu [0090], [0105]).

Regarding claims 10 and 20: Shalhon in view of Lu teaches wherein the at least one processor is further configured to perform the cardiac analysis task based on the first group of medical images (Shalhon [0004], [0046]-[0048], fig. 12; Lu [0009]-[0010], [0015]-[0016], [0026], figs. 4 and 5).

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to WEDNEL CADEAU whose telephone number is (571)270-7843. The examiner can normally be reached Mon-Fri 9:00-5:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chieh Fan, can be reached at 571-272-3042. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/WEDNEL CADEAU/
Primary Examiner, Art Unit 2632
September 8, 2025
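The core dispute in this action is whether scoring segmentation masks by vessel length and keeping the best-scoring frames was taught by the Shalhon Provisional. As a plain-code reading of that technique, here is a sketch: a crude longest-run stand-in for "vessel length" as the mask score, then top-k frame selection. The references describe centerline length and blob-size filtering; the scoring rule and all names below are illustrative assumptions, not taken from either reference:

```python
def mask_score(mask):
    """Score a binary segmentation mask by its longest horizontal run
    of vessel pixels -- a crude stand-in for detected vessel length."""
    best = 0
    for row in mask:
        run = 0
        for px in row:
            run = run + 1 if px else 0
            best = max(best, run)
    return best

def select_frames(masks, keep):
    """Return the indices of the `keep` frames whose masks score
    highest, i.e. the frames where detected vascular lengths are
    longest (the selection step the examiner attributes to Shalhon)."""
    ranked = sorted(range(len(masks)),
                    key=lambda i: mask_score(masks[i]), reverse=True)
    return sorted(ranked[:keep])

# Three toy 2x3 masks with vessel lengths 2, 3, and 1.
masks = [
    [[1, 1, 0], [0, 0, 0]],
    [[1, 1, 1], [0, 1, 0]],
    [[0, 0, 0], [1, 0, 0]],
]
print(select_frames(masks, keep=2))  # [0, 1]
```

A real pipeline would score masks produced by an ML segmenter and add the blob-size filtering step Shalhon mentions, but the subset-by-mask-score selection at issue reduces to this shape.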

Prosecution Timeline

Nov 07, 2022
Application Filed
Mar 19, 2025
Non-Final Rejection — §103
Jun 24, 2025
Response Filed
Sep 08, 2025
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586241
POSITION DETERMINATION METHOD, DEVICE, AND SYSTEM, AND COMPUTER-READABLE STORAGE MEDIUM
2y 5m to grant Granted Mar 24, 2026
Patent 12573052
METHOD AND APPARATUS FOR IMAGE SEGMENTATION
2y 5m to grant Granted Mar 10, 2026
Patent 12573022
ANOMALY DETECTION FOR COMPONENT THROUGH MACHINE-LEARNING BASED IMAGE PROCESSING AND CONSIDERING UPPER AND LOWER BOUND VALUES
2y 5m to grant Granted Mar 10, 2026
Patent 12573076
POSITION MEASUREMENT SYSTEM
2y 5m to grant Granted Mar 10, 2026
Patent 12567178
THREE-DIMENSIONAL DATA ENCODING METHOD, THREE-DIMENSIONAL DATA DECODING METHOD, THREE-DIMENSIONAL DATA ENCODING DEVICE, AND THREE-DIMENSIONAL DATA DECODING DEVICE
2y 5m to grant Granted Mar 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 72%
With Interview: 91% (+19.6%)
Median Time to Grant: 2y 9m
PTA Risk: Moderate

Based on 532 resolved cases by this examiner. Grant probability derived from career allow rate.
