Prosecution Insights
Last updated: April 19, 2026
Application No. 18/499,071

SYSTEM AND METHOD FOR MOTION GUIDED RETROSPECTIVE GATING

Status: Final Rejection (§103)
Filed: Oct 31, 2023
Examiner: PEHLKE, CAROLYN A
Art Unit: 3799
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: GE Precision Healthcare LLC
OA Round: 2 (Final)

Grant Probability: 62% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 3y 7m
Grant Probability With Interview: 91%

Examiner Intelligence

Career Allow Rate: 62% (294 granted / 478 resolved cases; -8.5% vs TC avg)
Interview Lift: +29.2% (strong; allowance rate among resolved cases with an interview vs without)
Typical Timeline: 3y 7m average prosecution; 39 applications currently pending
Career History: 517 total applications across all art units
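The headline figures above can be reproduced with simple arithmetic. The sketch below is illustrative only, not the tool's actual implementation; the counts are the ones shown on this page, and the per-group interview rates are assumed values consistent with the stated +29.2% lift.

```python
# Illustrative derivation of the examiner statistics shown above.
# Counts come from this page; per-group rates are assumptions.

granted = 294          # applications granted by this examiner
resolved = 478         # resolved cases (granted + abandoned)

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.0%}")

# Interview lift: allowance rate of resolved cases that had at least one
# interview, minus the rate of those that did not. The two rates below
# are hypothetical values consistent with the +29.2% lift shown above.
rate_with_interview = 0.905
rate_without_interview = 0.613
lift = rate_with_interview - rate_without_interview
print(f"Interview lift: {lift:+.1%}")
```

Note that 294 / 478 rounds to 62%, matching the career allow rate displayed in the panel.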

Statute-Specific Performance

§101: 4.8% (-35.2% vs TC avg)
§102: 17.5% (-22.5% vs TC avg)
§103: 41.3% (+1.3% vs TC avg)
§112: 30.0% (-10.0% vs TC avg)

Tech Center averages are estimates. Based on career data from 478 resolved cases.
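A quick consistency check on the figures above: each Tech Center average can be recovered by subtracting the stated delta from the examiner's rate. The sketch below assumes the percentages are per-statute rejection rates, as the panel implies; the data layout is hypothetical.

```python
# Recovering the TC average implied by each per-statute figure above.
# Each entry is (examiner rate, delta vs TC average), as shown on this page.
stats = {
    "§101": (0.048, -0.352),
    "§102": (0.175, -0.225),
    "§103": (0.413, +0.013),
    "§112": (0.300, -0.100),
}

for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta  # examiner rate minus delta gives the TC average
    print(f"{statute}: examiner {rate:.1%} vs TC avg ~{tc_avg:.1%}")
```

Every statute implies the same ~40% Tech Center average, which suggests the deltas were all computed against a single TC-wide baseline.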

Office Action — §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 
112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. 
Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “movement data processing unit” in claim 10 and its dependents, and “data correction unit” in claims 6 and 15.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 
102(a)(2) prior art against the later invention.

Claim(s) 1-5, 7, and 10-15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Huang et al. (US 2022/0117494 A1, Apr. 21, 2022) (hereinafter “Huang”) in view of Sachs et al. (US 2013/0310655 A1, Nov. 21, 2013) (hereinafter “Sachs”).

Regarding claim 1: Huang discloses a method of identifying movement of a patient during a medical imaging scan, the method comprising: initiating motion detection data acquisition using at least one motion detection apparatus configured to obtain patient contour data ([0120]-[0121], where these paragraphs refer back to previously described steps associated with figs. 6-8; [0097] - a 2D or 3D image of the patient is "patient contour data"; [0044]-[0046] - the "motion detection apparatus" is the laser ultrasonic component of a medical device); initiating a medical imaging scan of the patient to acquire scan data ([0121] - "during the scan" means that the medical imaging scan is also initiated, [0124]); computing a numeric motion score based on the patient contour data and embedding the motion score with the corresponding scan data ([0125] - value 1 and value 0 are "a numeric motion score" where the data is synchronized ["embedded"] with the scan data); determining a motion score curve for duration of the medical imaging scan based on the motion detection data ([0125] - retrospective gating curve); removing portions of the scan data corresponding to a motion score curve outside of an acceptable range ([0125]); and selecting the scan data corresponding to a motion score curve within the acceptable range for reconstruction ([0125]).

While the numeric score of Huang encompasses all of the acquired image data, Huang is silent on the numeric score being calculated for each view of the scan. 
Sachs, in the same field of endeavor, discloses acquiring medical image scan data comprising a plurality of frames (“views”) and having a time stamp value ([0025]) along with motion detection data of a patient contour ([0040], [0042]), having a time stamp value corresponding to the image timing information ([0027], [0033], [0046]), that may be from various sources including a non-contact motion detection apparatus ([0028], [0039]). Sachs further discloses determining whether the magnitude of the motion data meets or exceeds a threshold ([0049]) and that the image data and motion data are associated (“embedded”) ([0051]). Sachs further discloses that operating on individual frames (and corresponding motion data) reduces artifacts, such as residual blur or “before and after” motion, caused by time window division ([0005]).

It would have been prima facie obvious for one having ordinary skill in the art prior to the effective filing date of the claimed invention to determine the numeric motion score of Huang based on individual frames rather than time windows in order to reduce motion artifacts in view of the teachings of Sachs.

Regarding claim 2: Huang further discloses wherein the motion score curve is a compilation of motion scores calculated for each view for the duration of the scan ([0125] - "synchronously").

Regarding claim 3: Huang further discloses determining a baseline position of the patient used to determine the motion score for each view ([0125] – this is considered to be implicitly disclosed as the image-based motion is measured as a displacement/change in position over time; there must be an initial determination of the position - "baseline" - in order to measure the subsequent motion, where the subsequent motion measurements are the "motion score" for each frame). 
Regarding claims 4 and 5: Huang further discloses wherein the motion score curve is determined for a contour of the patient or for a surface of the patient ([0121]-[0124] refer back to previously described steps associated with figs. 6-8; [0089]-[0092], [0095] - a surface, a contour, or both may be used).

Regarding claim 7: Huang further discloses wherein the motion detection data acquisition uses one or more motion detection apparatuses to detect motion in real-time during the scan ([0125]).

Regarding claim 10: Huang discloses a medical imaging system ([0048]), comprising: a computed tomography (CT) imaging system ([0041]-[0042], [0049]), comprising: a gantry having a bore, rotatable about an axis of rotation; an X-ray source mounted on the gantry and configured to emit an X-ray beam ([0042]); an X-ray controller to operate the X-ray source ([0042], [0048]); and an X-ray detector configured to detect the X-ray beam emitted by the X-ray source ([0042]); a motion detection system coupled to the CT imaging system, wherein the motion detection system includes: a motion detection apparatus mounted on the gantry and configured to obtain patient contour data representing a surface contour of a patient (laser ultrasonic component 160, 420, 520; [0046] – a reconstructed image is “patient contour data”; [0048], [0064], [0074]-[0077], figs. 4-5; [0089] – a position or a region on the surface of a patient including “surface of the skin above the sternum” is a “surface contour of a patient”); a controller to control operation of the motion detection apparatus ([0048], processing device 120, [0068], [0079]); and a movement data processing unit to obtain data from the motion detection apparatus, wherein the movement data processing unit obtains real-time patient contour data during an imaging scan to generate contour-based motion information ([0121]-[0124] refer back to previously described steps associated with figs. 
6-8; [0095]-[0096]; [0125]); and a processor to determine a numeric motion score of the patient based on the contour data, embed the motion score with the corresponding view of scan data, and select views of scan data in which the corresponding motion score is in an acceptable range for image reconstruction ([0125] - value 1 and value 0 are "a numeric motion score" where the data is synchronized ["embedded"] with the scan data).

Obtaining a baseline position of the patient is considered to be implicitly disclosed, as the image-based motion is measured as a displacement/change in position over time; there must be an initial determination of the position - "baseline" - in order to measure the subsequent motion. However, Huang is silent on the details of how the motion calculation is performed and does not describe wherein the real-time movement data is compared to the baseline position of the patient. Additionally, while the numeric score of Huang encompasses all of the acquired image data, Huang is silent on the numeric score being calculated for each view of the scan.

Sachs, in the same field of endeavor, discloses acquiring medical image scan data comprising a plurality of frames (“views”) and having a time stamp value ([0025]) along with motion detection data of a patient contour ([0040]-[0042]; ribcage, “outline of the patient”), having a time stamp value corresponding to the image timing information ([0027], [0033], [0046]), that may be from various sources including a non-contact motion detection apparatus ([0028], [0039]) and is relative to a baseline position ([0041] – “base position”). Sachs further discloses determining whether the magnitude of the motion data meets or exceeds a threshold ([0049]) and that the image data and motion data are associated (“embedded”) ([0051]). 
Sachs further discloses that operating on individual frames (and corresponding motion data) reduces artifacts, such as residual blur or “before and after” motion, caused by time window division ([0005]); and that measurement relative to a baseline position allows the magnitude of the motion to be measured.

It would have been prima facie obvious for one having ordinary skill in the art prior to the effective filing date of the claimed invention to measure the motion relative to the baseline position (“base position”) as disclosed by Sachs in order to easily determine the magnitude of the motion. It would further have been prima facie obvious for one having ordinary skill in the art prior to the effective filing date of the claimed invention to determine the numeric motion score of Huang based on individual frames rather than time windows in order to reduce motion artifacts in view of the teachings of Sachs.

Regarding claim 11: Huang further discloses determining a motion score curve from a compilation of the motion scores calculated for each view for the duration of the scan ([0121]-[0124] refer back to previously described steps associated with figs. 6-8; [0095]-[0096]; [0125]).

Regarding claim 12: Huang further discloses wherein the motion score is based on the baseline position of the patient and the real-time movement data corresponding to each view during the scan ([0125]).

Regarding claims 13 and 14: Huang further discloses wherein the motion score curve is determined for a contour of the patient or for a surface of the patient ([0121]-[0124] refer back to previously described steps associated with figs. 6-8; [0089]-[0092], [0095] - a surface, a contour, or both may be used).

Regarding claim 15: Huang further discloses wherein the processor includes a data correction unit to automatically select the views of the scan data corresponding to the motion score within the acceptable range ([0125], processing device 120).

Claim(s) 6 is/are rejected under 35 U.S.C. 
103 as being unpatentable over Huang and Sachs as applied to claim 1 above, and further in view of Kaufman et al. (US 2003/0016782 A1, Jan. 23, 2003) (hereinafter “Kaufman”).

Regarding claim 6: Huang and Sachs disclose the method of claim 1. Huang further discloses wherein selecting the scan data corresponding to a motion score curve within the acceptable range is automatically determined by a data correction unit ([0125]) but does not disclose wherein a user confirms the selection. Kaufman, in the same field of endeavor, teaches a retrospective gating technique where the user can revise the gating slice selections made by the system in order to remove any slices that may be poor quality (fig. 1, [0060]). It would have been prima facie obvious for one having ordinary skill in the art prior to the effective filing date of the claimed invention to modify the method of Huang and Sachs by providing the user with an opportunity to revise (“confirm”) the automated selections as taught by Kaufman in order to ensure a quality reconstruction.

Claim(s) 8-9 is/are rejected under 35 U.S.C. 103 as being unpatentable over Huang and Sachs as applied to claim 7 above, and further in view of Anthony et al. (US 2021/0076944 A1, Mar. 18, 2021) (hereinafter “Anthony”).

Regarding claims 8 and 9: Huang and Sachs disclose the method of claim 7. Huang further discloses wherein the motion detection apparatus(es) comprises a laser ultrasound device, but is silent on the motion detection apparatus(es) including a LiDAR scanner or a 3D camera. Anthony, in the same problem solving area of laser ultrasound, teaches that laser ultrasound can be optimized for use on a patient by incorporating additional point tracking, such as LiDAR and/or a camera, to compensate for irregularities of the patient's skin surface and body shape (fig. 5, [0070], [0072]-[0073], where at least stereo and structured light cameras are "3D camera[s]"). 
It would have been prima facie obvious for one having ordinary skill in the art prior to the effective filing date of the claimed invention to modify the method of Huang and Sachs by including a LiDAR and/or 3D camera with the laser ultrasound device as taught by Anthony in order to optimize the localization and compensate for irregularities of the patient’s skin surface.

Claim(s) 16-17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Huang and Sachs as applied to claim 10 above, and further in view of Anthony et al. (US 2021/0076944 A1, Mar. 18, 2021) (hereinafter “Anthony”).

Regarding claims 16-17: Huang and Sachs disclose the system of claim 10, wherein the motion detection apparatus(es) comprises a laser ultrasound device, but are silent on the motion detection apparatus(es) including a LiDAR scanner or a 3D camera. Anthony, in the same problem solving area of laser ultrasound, teaches that laser ultrasound can be optimized for use on a patient by incorporating additional point tracking, such as LiDAR and/or a camera, to compensate for irregularities of the patient's skin surface and body shape (fig. 5, [0070], [0072]-[0073], where at least stereo and structured light cameras are "3D camera[s]"). It would have been prima facie obvious for one having ordinary skill in the art prior to the effective filing date of the claimed invention to modify the system of Huang and Sachs by including a LiDAR and/or 3D camera with the laser ultrasound device as taught by Anthony in order to optimize the localization and compensate for irregularities of the patient’s skin surface.

Response to Arguments

Applicant’s arguments, filed 01/20/2026, have been fully considered but are moot in view of the updated grounds of rejection necessitated by amendment. However, in the interest of advancing prosecution, certain arguments will be addressed. 
Applicant argues that the instant disclosure describes “contour” as the surface shape or outline of a subject and includes data that defines geometry, edges or surface models. Applicant further asserts that a region is not inherently equivalent to the described contour.

Examiner respectfully disagrees and notes that the instant disclosure describes measuring motion data from a "contour" in paragraphs [0053]-[0055], where eqns. 1 and 2 calculate the motion score from a single point (x,y) or a surface (x,y,z), respectively. While the Huang reference does not use the term "contour," this description does not appear to meaningfully differ from the disclosure of measuring a motion value from the displacement of either a point or a region on the surface of a patient (see at least [0089]). Huang also discloses that the ultrasound signal 120 (from which the motion measurement is derived) may be a reconstructed ultrasound image ([0097]), where an image would also reasonably be considered "patient contour data." Examiner further notes that “surface models” do not appear to be described in the instant disclosure.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. 
In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CAROLYN A PEHLKE whose telephone number is (571) 270-3484. The examiner can normally be reached 9:00am - 5:00pm (Central Time), Monday - Friday.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chris Koharski, can be reached at (571) 272-7230. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CAROLYN A PEHLKE/
Primary Examiner, Art Unit 3799
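The technique at the heart of the rejected claims — computing a per-view numeric motion score from patient contour data, embedding it with the scan data as a motion score curve, and retrospectively keeping only the views inside an acceptable range for reconstruction — can be sketched in a few lines. This is a minimal illustration of the general gating idea as recited in claim 1, not the application's (or Huang's) actual algorithm; the scoring function, threshold, and data shapes are hypothetical.

```python
# Minimal sketch of motion-guided retrospective gating (claim 1's steps).
# Scoring choice, threshold, and data structures are hypothetical.

def motion_score(baseline, contour):
    """Numeric motion score: mean absolute displacement of the sampled
    patient contour from its baseline position (one possible choice)."""
    return sum(abs(b - c) for b, c in zip(baseline, contour)) / len(baseline)

def retrospective_gate(views, contours, baseline, max_score=1.0):
    """Embed a motion score with each view, then select only the views
    whose score falls within the acceptable range for reconstruction."""
    scored = [(v, motion_score(baseline, c)) for v, c in zip(views, contours)]
    curve = [s for _, s in scored]              # motion score curve over the scan
    kept = [v for v, s in scored if s <= max_score]
    return kept, curve

# Hypothetical scan: 4 views, contour sampled at 3 surface points (mm).
baseline = [0.0, 0.0, 0.0]
contours = [[0.1, 0.0, 0.1], [2.0, 2.5, 1.5], [0.2, 0.1, 0.0], [0.0, 0.3, 0.2]]
views = ["v0", "v1", "v2", "v3"]
kept, curve = retrospective_gate(views, contours, baseline)
print(kept)   # the high-motion view v1 is gated out
```

The per-view scoring is exactly the distinction the examiner draws: Huang's score covers the whole acquisition, while Sachs supplies the frame-by-frame (view-by-view) treatment.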

Prosecution Timeline

Oct 31, 2023: Application Filed
Oct 15, 2025: Non-Final Rejection — §103
Jan 08, 2026: Interview Requested
Jan 15, 2026: Applicant Interview (Telephonic)
Jan 15, 2026: Examiner Interview Summary
Jan 20, 2026: Response Filed
Feb 23, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12599362: IMAGE PROCESSING DEVICE, IMAGE PROCESSING SYSTEM, IMAGE DISPLAY METHOD, AND IMAGE PROCESSING PROGRAM (granted Apr 14, 2026; 2y 5m to grant)
Patent 12582849: DETERMINING ULTRASOUND-BASED BLOOD-BRAIN BARRIER OPENING OR INCREASED PERMEABILITY USING PHYSIOLOGIC SIGNALS (granted Mar 24, 2026; 2y 5m to grant)
Patent 12558063: Triphalangeal Ultrasound Probe Stabilization Feature (granted Feb 24, 2026; 2y 5m to grant)
Patent 12551297: SYSTEMS AND METHODS OF REGISTRATION COMPENSATION IN IMAGE GUIDED SURGERY (granted Feb 17, 2026; 2y 5m to grant)
Patent 12543952: IMAGING SYSTEM AND METHOD FOR FLUORESCENCE GUIDED SURGERY (granted Feb 10, 2026; 2y 5m to grant)

Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 62%
With Interview: 91% (+29.2%)
Median Time to Grant: 3y 7m
PTA Risk: Moderate

Based on 478 resolved cases by this examiner. Grant probability derived from career allow rate.
