Prosecution Insights
Last updated: April 19, 2026
Application No. 18/800,174

Surgical Guidance with Compounded Ultrasound Imaging

Non-Final OA: §102, §103, §112
Filed: Aug 12, 2024
Examiner: FARAG, AMAL ALY
Art Unit: 3798
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: Siemens Healthcare
OA Round: 1 (Non-Final)
Grant Probability: 66% (Favorable)
OA Rounds: 1-2
To Grant: 3y 1m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 66% — above average (131 granted / 197 resolved; -3.5% vs TC avg)
Interview Lift: +38.3% among resolved cases with interview — strong
Typical Timeline: 3y 1m average prosecution; 30 currently pending
Career History: 227 total applications across all art units
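The headline figures above are simple ratios of the raw career counts. As a sanity check, a minimal sketch (the helper name is ours, not the site's):

```python
def career_stats(granted: int, resolved: int, total: int) -> dict:
    """Recompute the dashboard's headline figures from its raw counts."""
    return {
        "allow_rate_pct": round(100 * granted / resolved, 1),  # 131 / 197
        "pending": total - resolved,                           # 227 - 197
    }

print(career_stats(granted=131, resolved=197, total=227))
# {'allow_rate_pct': 66.5, 'pending': 30}
```

66.5% rounds to the 66% shown on the page, and 227 - 197 matches the 30 currently pending applications.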

Statute-Specific Performance

§101: 10.6% (-29.4% vs TC avg)
§103: 47.0% (+7.0% vs TC avg)
§102: 12.2% (-27.8% vs TC avg)
§112: 25.2% (-14.8% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 197 resolved cases
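Each statute's rate is listed with its delta versus the Tech Center average, so the baseline can be recovered as rate minus delta. A quick check (function name illustrative) shows every row implies the same 40.0% TC average:

```python
def implied_tc_avg(rate_pct: float, delta_vs_tc: float) -> float:
    """Recover the Tech Center average from a rate and its stated delta."""
    return round(rate_pct - delta_vs_tc, 1)

rows = {"101": (10.6, -29.4), "103": (47.0, 7.0),
        "102": (12.2, -27.8), "112": (25.2, -14.8)}
for statute, (rate, delta) in rows.items():
    print(statute, implied_tc_avg(rate, delta))   # each row yields 40.0
```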

Office Action

§102 §103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Election/Restrictions

Claims 13-21 are withdrawn from further consideration pursuant to 37 CFR 1.142(b) as being drawn to nonelected Invention II (a medical system, claims 13-15) and Invention III (a method for surgical guidance by a medical system, claims 16-21), there being no allowable generic or linking claim. Election was made without traverse in the reply filed on 11/25/2025.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 11-12 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Regarding Claim 11, for the limitation "…wherein compounding comprises providing the 3D representation as a deformation field…", the correlation, if any, of the deformation field with the neural field of claim 1 is unclear. It is unclear whether the deformation field replaces the neural field representation, is in addition to it, or is something else. The metes and bounds of the claim are unclear.
Regarding Claim 12, for the limitation "…wherein compounding comprises compounding with the neural field…", the correlation, if any, of the deformation field of claim 11 with the neural field of claim 1 is unclear. The metes and bounds of the claim are unclear.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-7 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Toporek et al. (U.S. 20220207743, June 30, 2022) (hereinafter "Toporek").

Regarding Claim 1, Toporek teaches:

A method for surgical guidance with compounded ultrasound imaging by an ultrasound system (Fig. 9, [0067]), the method comprising:

scanning, by the ultrasound system, tissue of a patient, the scanning resulting in two-dimensional (2D) representations ("Acoustic probe 220 includes an array of acoustic transducer elements 422, a beamformer 424, a signal processor 426, and a probe communication interface 428." [0066]; "The acoustic imaging system preferably acquires a sequence of 2D acoustic images of a region of interest, which may include an organ of interest, in the human body." [0105]);

tracking positions of the 2D representations ("…acoustic probe 220 may include or be associated with an inertial measurement unit 421 or another tracking device for obtaining relative orientation and position information for acoustic probe 220, and the 2D acoustic images obtained by acoustic imaging system 200 via acoustic probe 220 include or have associated therewith pose or tracking information for acoustic probe 220 while the 2D acoustic images are being acquired." [0067]);

compounding the 2D representations by input of the positions and 2D representations to a neural field, the neural field trained with a joint optimization of poses and parameters of the neural field, the compounding providing a three-dimensional (3D) representation of the patient ("FIG. 1 illustrates generation of a three dimensional (3D) volumetric acoustic image 102 by compounding a series of two dimensional (2D) acoustic images…" [0039]; "An operation 920 includes constructing a three dimensional acoustic image of the ROI in the subject from the series of spatially tracked two dimensional acoustic images of the ROI, wherein the three dimensional acoustic image of the ROI in the subject is in an acoustic three dimensional coordinate system." [0123]; "An operation 935 includes determining, for each of the spatially tracked two dimensional acoustic images (obtained in operation 915) of the ROI in the subject its actual pose in the standardized three dimensional coordinate system (defined in operation 905) using: a pose of the spatially tracked two dimensional acoustic image in the acoustic image three dimensional coordinate system (defined in operation 930) corresponding to the spatially tracked two dimensional acoustic image, and a coordinate system transformation from the corresponding acoustic image three dimensional coordinate system to the standardized three dimensional coordinate system." [0125]; "An operation 945 includes performing an optimization process on a convolutional neural network (CNN) by providing the spatially tracked two dimensional acoustic images to the CNN and adjusting parameters of the CNN to minimize differences between predicted poses generated by the CNN for the spatially tracked two dimensional acoustic images and the actual poses of the spatially tracked two dimensional acoustic images." [0126]. See Figs. 1 and 9);

rendering a first image from the 3D representation ("A volume reconstruction controller (VRC) is configured to reconstruct a 3D acoustic image of the ROI or a reference structure (e.g., an organ) in the ROI from the sequence of 2D acoustic images and their poses predicted by the convolutional neural network…" [0108]); and

displaying the first image ("The display device 216 may display the 3D volumetric acoustic image to a user…" [0112]).

Regarding Claim 2, Toporek teaches the claim limitations as noted above. Toporek further teaches: wherein tracking comprises tracking with a camera, probe detection, or electromagnetic sensing ("…FIG. 9, using an embodiment of acoustic imaging system 200 which includes and utilizes IMU 421 and/or another tracking device or system (e.g., electromagnetic or optical) which allows acoustic imaging system 200 to capture or acquire sets of spatially tracked two dimensional (2D) acoustic images." [0071]; "An operation 915 includes obtaining a series of spatially tracked two dimensional acoustic images of the ROI in the subject using a tracking device, such as an EM or optical tracker." [0121]).

Regarding Claim 3, Toporek teaches the claim limitations as noted above. Toporek further teaches: wherein scanning comprises scanning while moving a transducer probe relative to the patient ("The acoustic imaging system preferably acquires a sequence of 2D acoustic images of a region of interest, which may include an organ of interest, in the human body. The acoustic imaging system employs an acoustic probe, which in some embodiments may be a hand-held transrectal ultrasound (TRUS) or transthoracic echocardiography (TTE) transducer." [0105]).

Regarding Claim 4, Toporek teaches the claim limitations as noted above.
Toporek further teaches: wherein the 2D representations comprise images of ultrasound intensity ("The acoustic imaging system preferably acquires a sequence of 2D acoustic images of a region of interest, which may include an organ of interest, in the human body. The acoustic imaging system employs an acoustic probe, which in some embodiments may be a hand-held transrectal ultrasound (TRUS) or transthoracic echocardiography (TTE) transducer." [0105]), wherein compounding comprises providing the 3D representation as voxels representing intensity distribution in three dimensions, and wherein rendering comprises rendering the first image as ultrasound intensities as a function of location in two dimensions ("The DPC is configured to: load a single case from the training dataset, segment the area of interest or organ of interest from the 3D acoustic images; based on the segmented mask create a mesh using, e.g., a marching cubes algorithm that is known in the art; and based on the mesh define a standardized 3D coordinate system…" [0097]; "A volume reconstruction controller (VRC) is configured to reconstruct a 3D acoustic image of the ROI or a reference structure (e.g., an organ) in the ROI from the sequence of 2D acoustic images and their poses predicted by the convolutional neural network…" [0108]; "An operation 930 includes defining an acoustic image three dimensional coordinate system from the three dimensional volumetric acoustic image of the ROI in the subject, based on the segmentation of the acoustic images of the actual reference structure (e.g., an actual organ) in the subject in operation 925." [0124]).

Regarding Claim 5, Toporek teaches the claim limitations as noted above.
Toporek further teaches: wherein the 2D representations comprise 2D segmentations of an object ("An operation 925 includes segmenting a reference structure in the three dimensional volumetric image of the ROI in the subject." [0123]), wherein compounding comprises providing, by the neural field, a signed distance field representing a 3D segmentation of the object as the 3D representation, and wherein rendering the first image comprises rendering the first image from the 3D segmentation of the object ("The DPC is configured to: load a single case from the training dataset, segment the area of interest or organ of interest from the 3D acoustic images; based on the segmented mask create a mesh using, e.g., a marching cubes algorithm that is known in the art; and based on the mesh define a standardized 3D coordinate system…" [0097]; "A volume reconstruction controller (VRC) is configured to reconstruct a 3D acoustic image of the ROI or a reference structure (e.g., an organ) in the ROI from the sequence of 2D acoustic images and their poses predicted by the convolutional neural network…" [0108]; "An operation 930 includes defining an acoustic image three dimensional coordinate system from the three dimensional volumetric acoustic image of the ROI in the subject, based on the segmentation of the acoustic images of the actual reference structure (e.g., an actual organ) in the subject in operation 925." [0124]).

Regarding Claim 6, Toporek teaches the claim limitations as noted above.
Toporek further teaches: wherein compounding comprises compounding by the neural field, the neural field trained with a loss based on comparison of a rendering of an output with one of the 2D representations ("Training a convolutional neural network (CNN) to predict the 2D acoustic frame positions in the standardized 3D coordinates, by providing to the network input/output pairs (each 2D acoustic image Si paired with its pose Ti in standardized 3D coordinates) and performing an optimization of the parameters/weights of the CNN until the predicted poses are optimally predicted compared to the actual poses based on the 2D acoustic image input." [0082]).

Regarding Claim 7, Toporek teaches the claim limitations as noted above. Toporek further teaches: wherein compounding comprises compounding by the neural field, the neural field comprising a coordinate-based neural network ("The DPC is configured to: load a single case from the training dataset, segment the area of interest or organ of interest from the 3D acoustic images; based on the segmented mask create a mesh using, e.g., a marching cubes algorithm that is known in the art; and based on the mesh define a standardized 3D coordinate system…" [0097]; "A volume reconstruction controller (VRC) is configured to reconstruct a 3D acoustic image of the ROI or a reference structure (e.g., an organ) in the ROI from the sequence of 2D acoustic images and their poses predicted by the convolutional neural network…" [0108]; "An operation 930 includes defining an acoustic image three dimensional coordinate system from the three dimensional volumetric acoustic image of the ROI in the subject, based on the segmentation of the acoustic images of the actual reference structure (e.g., an actual organ) in the subject in operation 925." [0124]).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Toporek in view of Keller et al. (U.S. 20230052645, February 16, 2023) (hereinafter "Keller").

Regarding Claim 8, Toporek teaches the claim limitations as noted above. Toporek does not teach: wherein compounding comprises compounding where the coordinate-based neural network comprises sinusoidal or multiresolution hash positional encodings. Keller, in the field of neural network-based systems, teaches a system and method that augments a neural network with a multiresolution hash encoding ([0021], [0026]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the coordinate-based neural network of Toporek to comprise multiresolution hash encoding as taught in Keller, improving "…accuracy and performance while being agnostic to the application implemented by the neural network." (Keller, [0021]).

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Toporek as applied to claim 1 above, and further in view of Munkberg et al. (U.S. 20200126191, April 23, 2020) (hereinafter "Munkberg").

Regarding Claim 9, Toporek teaches the claim limitations as noted above. Toporek does not explicitly teach: wherein rendering comprises sampling the neural field directly. Munkberg, in the field of neural network-based systems, teaches: "FIG. 2D illustrates a block diagram of a temporal adaptive sampling and denoising system 200…The warped external recurrent neural network 200 includes a sample map estimator neural network model 210, a renderer 205, a denoiser neural network model and combiner 220, and a temporal warp function 215." [0065]; "The denoiser neural network model and combiner 220 is applied to adaptively sampled rendered images that may include artifacts. The denoiser neural network model and combiner 220 receives rendered image frames (e.g., adaptive samples) output by the renderer 205 or receives rendered image frames from other sources to produce reconstructed images." [0067]. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the rendering in Toporek to comprise sampling the neural field directly as taught in Munkberg "…to achieve significantly improved image quality and temporal stability…" (Munkberg, Abstract).

Claims 10-12 are rejected under 35 U.S.C. 103 as being unpatentable over Toporek as applied to claim 1 above, and further in view of Lipnik et al. (U.S. 20230149135, May 18, 2023) (hereinafter "Lipnik").

Regarding Claim 10, Toporek teaches the claim limitations as noted above. Toporek does not teach: wherein rendering comprises view optimization with differentiable rendering.
Lipnik, in the field of image rendering-based systems, teaches: "Differentiable rendering can be used as a 'reconstruction free' alternative to the construction of the first 3D model and subsequent registration of the first 3D model with the initial 3D surface model. Differentiable rendering may be used to perform optimizations using a gradient descent…" [0075]; "…differentiable rendering may be employed in order to make the optimization amenable to gradient descent, which can be used to estimate the tooth motions by solving the optimization program. In some cases, the optimization program may operate based on an assumption that silhouette renderings are sufficient, and binary masks may be extracted from the video frames accordingly. Separately, the camera poses may be derived or estimated from the video frames... The estimated tooth motions may then be used to update the 3D mesh by applying any one or more suitable mesh deformation algorithms…" [0076]. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the rendering in Toporek to comprise view optimization with differentiable rendering as taught in Lipnik, enabling end-to-end optimization and data training efficiency.

Regarding Claim 11, Toporek teaches the claim limitations as noted above. With regard to the limitations: wherein compounding comprises providing the 3D representation as a deformation field based on the scanning as real-time ultrasound, and wherein rendering comprises rendering a pre-operative image based on the deformation field, Toporek further teaches: "FIG. 1 illustrates generation of a three dimensional (3D) volumetric acoustic image 102 by compounding a series of two dimensional (2D) acoustic images…" [0039]; "An operation 920 includes constructing a three dimensional acoustic image of the ROI in the subject from the series of spatially tracked two dimensional acoustic images of the ROI, wherein the three dimensional acoustic image of the ROI in the subject is in an acoustic three dimensional coordinate system." [0123]; "An operation 935 includes determining, for each of the spatially tracked two dimensional acoustic images (obtained in operation 915) of the ROI in the subject its actual pose in the standardized three dimensional coordinate system (defined in operation 905) using: a pose of the spatially tracked two dimensional acoustic image in the acoustic image three dimensional coordinate system (defined in operation 930) corresponding to the spatially tracked two dimensional acoustic image, and a coordinate system transformation from the corresponding acoustic image three dimensional coordinate system to the standardized three dimensional coordinate system." [0125]; "An operation 945 includes performing an optimization process on a convolutional neural network (CNN) by providing the spatially tracked two dimensional acoustic images to the CNN and adjusting parameters of the CNN to minimize differences between predicted poses generated by the CNN for the spatially tracked two dimensional acoustic images and the actual poses of the spatially tracked two dimensional acoustic images." [0126].

Toporek does not explicitly teach providing the 3D representation as a deformation field. Lipnik, in the field of image rendering-based systems, teaches: "Differentiable rendering can be used as a 'reconstruction free' alternative to the construction of the first 3D model and subsequent registration of the first 3D model with the initial 3D surface model. Differentiable rendering may be used to perform optimizations using a gradient descent…" [0075]; "…differentiable rendering may be employed in order to make the optimization amenable to gradient descent, which can be used to estimate the tooth motions by solving the optimization program. In some cases, the optimization program may operate based on an assumption that silhouette renderings are sufficient, and binary masks may be extracted from the video frames accordingly. Separately, the camera poses may be derived or estimated from the video frames... The estimated tooth motions may then be used to update the 3D mesh by applying any one or more suitable mesh deformation algorithms…" [0076]. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the representation in Toporek to be represented as a deformation field as taught in Lipnik, enabling end-to-end optimization and data training efficiency.

Regarding Claim 12, Toporek teaches the claim limitations as noted above. Toporek further teaches: wherein compounding comprises compounding with the neural field: "FIG. 1 illustrates generation of a three dimensional (3D) volumetric acoustic image 102 by compounding a series of two dimensional (2D) acoustic images…" [0039]; "An operation 920 includes constructing a three dimensional acoustic image of the ROI in the subject from the series of spatially tracked two dimensional acoustic images of the ROI, wherein the three dimensional acoustic image of the ROI in the subject is in an acoustic three dimensional coordinate system." [0123]; "An operation 935 includes determining, for each of the spatially tracked two dimensional acoustic images (obtained in operation 915) of the ROI in the subject its actual pose in the standardized three dimensional coordinate system (defined in operation 905) using: a pose of the spatially tracked two dimensional acoustic image in the acoustic image three dimensional coordinate system (defined in operation 930) corresponding to the spatially tracked two dimensional acoustic image, and a coordinate system transformation from the corresponding acoustic image three dimensional coordinate system to the standardized three dimensional coordinate system." [0125]; "An operation 945 includes performing an optimization process on a convolutional neural network (CNN) by providing the spatially tracked two dimensional acoustic images to the CNN and adjusting parameters of the CNN to minimize differences between predicted poses generated by the CNN for the spatially tracked two dimensional acoustic images and the actual poses of the spatially tracked two dimensional acoustic images." [0126].

Toporek does not teach: the neural field trained using differentiable deformable volume rendering. Lipnik, in the field of image rendering-based systems, teaches: "Differentiable rendering can be used as a 'reconstruction free' alternative to the construction of the first 3D model and subsequent registration of the first 3D model with the initial 3D surface model. Differentiable rendering may be used to perform optimizations using a gradient descent…" [0075]; "…differentiable rendering may be employed in order to make the optimization amenable to gradient descent, which can be used to estimate the tooth motions by solving the optimization program. In some cases, the optimization program may operate based on an assumption that silhouette renderings are sufficient, and binary masks may be extracted from the video frames accordingly. Separately, the camera poses may be derived or estimated from the video frames... The estimated tooth motions may then be used to update the 3D mesh by applying any one or more suitable mesh deformation algorithms…" [0076]. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the neural field in Toporek to be trained using differentiable deformable volume rendering as taught in Lipnik, enabling end-to-end optimization and data training efficiency.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Planche et al. (U.S. 20230111048) teaches a system for differentiable networks used for object recognition.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AMAL FARAG, whose telephone number is (571) 270-3432. The examiner can normally be reached 8:30-5:30 M-F.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Keith Raymond, can be reached at (571) 270-1790. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AMAL ALY FARAG/
Primary Examiner, Art Unit 3798
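Claim 1's central limitation (a neural field "trained with a joint optimization of poses and parameters") can be sketched in miniature. Everything below is an invented toy, not from the application or from Toporek: the "field" is a 1D line a*x + b, each "frame" is two samples taken at an unknown probe pose, and plain gradient descent fits the frame poses and the field parameters together, with the first frame pinned as the reference to fix the gauge:

```python
def joint_fit(obs, u, init_poses, iters=20000, lr=0.02):
    """Jointly fit per-frame poses and the parameters (a, b) of a toy 1D
    'field' f(x) = a*x + b by gradient descent, so the field explains every
    sample obs[i][j] taken at position poses[i] + u[j].
    Pose 0 stays fixed as the reference frame."""
    a, b = 0.0, 0.0
    poses = list(init_poses)
    for _ in range(iters):
        ga = gb = 0.0
        gp = [0.0] * len(poses)
        for i, p in enumerate(poses):
            for j, uu in enumerate(u):
                x = p + uu
                r = a * x + b - obs[i][j]   # residual at this sample
                ga += 2 * r * x             # d(loss)/da
                gb += 2 * r                 # d(loss)/db
                gp[i] += 2 * r * a          # d(loss)/d(pose_i)
        a -= lr * ga
        b -= lr * gb
        for i in range(1, len(poses)):      # pose 0 is pinned
            poses[i] -= lr * gp[i]
    loss = sum((a * (p + uu) + b - obs[i][j]) ** 2
               for i, p in enumerate(poses) for j, uu in enumerate(u))
    return a, b, poses, loss

# Synthetic "frames": true field 0.5*x + 1.0, true poses 0, 1, 2.
u = [0.0, 0.5]
obs = [[0.5 * (p + uu) + 1.0 for uu in u] for p in [0.0, 1.0, 2.0]]
a, b, poses, loss = joint_fit(obs, u, init_poses=[0.0, 1.1, 1.9])
```

With the mildly perturbed initial poses, the fit recovers a ≈ 0.5, b ≈ 1.0 and poses ≈ [0, 1, 2]. The same joint-descent structure, scaled up to 3D coordinates with a network in place of (a, b), is what the claim language describes.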

Prosecution Timeline

Aug 12, 2024
Application Filed
Jan 06, 2026
Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12575744: DATA PROCESSING DEVICE AND METHOD (granted Mar 17, 2026; 2y 5m to grant)
Patent 12569220: BLOOD FLOW MEASUREMENT SYSTEM (granted Mar 10, 2026; 2y 5m to grant)
Patent 12564373: Spatially Aware Medical Device Configured for Performance of Insertion Pathway Approximation (granted Mar 03, 2026; 2y 5m to grant)
Patent 12564386: PROCESSING APPARATUS AND CONTROL METHOD (granted Mar 03, 2026; 2y 5m to grant)
Patent 12564387: ULTRASOUND DIAGNOSTIC APPARATUS AND ULTRASOUND DIAGNOSTIC SYSTEM (granted Mar 03, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 66%
With Interview: 99% (+38.3%)
Median Time to Grant: 3y 1m
PTA Risk: Low
Based on 197 resolved cases by this examiner. Grant probability derived from career allow rate.
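The "With Interview" projection is consistent with simply adding the +38.3-point interview lift to the base grant probability and capping the result. The page does not state its exact formula, so the additive model and the 99% cap below are our assumptions:

```python
def with_interview(base_pct: float, lift_pts: float, cap_pct: float = 99.0) -> float:
    """Assumed model: additive percentage-point lift, capped at cap_pct."""
    return min(base_pct + lift_pts, cap_pct)

print(with_interview(66.0, 38.3))  # 99.0  (66.0 + 38.3 = 104.3, capped)
```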
