Prosecution Insights
Last updated: April 19, 2026
Application No. 19/113,335

METHODS OF GENERATING IMAGE DATA FOR THREE-DIMENSIONAL TOPOGRAPHICAL VOLUMES, INCLUDING DICOM-COMPLIANT IMAGE DATA FOR SURGICAL NAVIGATION, AND ASSOCIATED SYSTEMS, DEVICES, AND METHODS

Non-Final OA (§103, §112)
Filed: Mar 19, 2025
Examiner: SHENG, CHAO
Art Unit: 3797
Tech Center: 3700 (Mechanical Engineering & Manufacturing)
Assignee: Montefiore Medical Center
OA Round: 1 (Non-Final)

Grant Probability: 62% (Moderate)
Estimated OA Rounds: 1-2
Estimated Time to Grant: 3y 4m
Grant Probability With Interview: 91%

Examiner Intelligence

Career Allow Rate: 62% (170 granted / 276 resolved; -8.4% vs TC avg)
Interview Lift: +29.2% in resolved cases with an interview
Typical Timeline: 3y 4m average prosecution; 32 applications currently pending
Career History: 308 total applications across all art units

Statute-Specific Performance

§101: 2.7% (-37.3% vs TC avg)
§102: 16.5% (-23.5% vs TC avg)
§103: 46.8% (+6.8% vs TC avg)
§112: 31.4% (-8.6% vs TC avg)

Tech Center averages are estimates. Based on career data from 276 resolved cases.

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claims 1, 4–6, 8, 9, 11–13 and 15–19 are objected to because of the following informalities:

- Claim 1, lines 9–10: "wherein conforming the sequence includes" should read "wherein the step of conforming the sequence includes".
- Claim 4, line 2: "obtaining the 3D topographical volume of the patient includes" should read "the step of obtaining the 3D topographical volume of the patient includes".
- Claim 4, line 4: "voxelating the 3D topographical volume includes" should read "the step of voxelating the 3D topographical volume includes".
- Claim 5, lines 1–2: "wherein volume rendering the 3D voxelated volume includes" should read "wherein the step of volume rendering the 3D voxelated volume includes".
- Claim 6, lines 1–2: "wherein volume rendering the 3D voxelated volume includes" should read "wherein the step of volume rendering the 3D voxelated volume includes".
- Claim 8, lines 1–2: "wherein: volume rendering the 3D voxelated volume includes" should read "wherein: the step of volume rendering the 3D voxelated volume includes".
- Claim 9, lines 1–2: "wherein voxelating the 3D topographical volume includes" should read "wherein the step of voxelating the 3D topographical volume includes".
- Claim 11, lines 1–2: "wherein processing the image data includes" should read "wherein the step of processing the image data includes".
- Claim 12, line 1: "wherein processing the image data includes" should read "wherein the step of processing the image data includes".
- Claim 12, lines 2–3: "real word units" should read "real world units".
- Claim 12, line 4: "real word units" should read "real world units".
- Claim 13, lines 1–2: "wherein conforming the sequence of 2D cross-sectional images to the imaging standard includes" should read "wherein the step of conforming the sequence of 2D cross-sectional images to the imaging standard includes".
- Claim 15, lines 1–2: "wherein registering the volumetric dataset to the patient includes" should read "wherein the step of registering the volumetric dataset to the patient includes".
- Claim 16, lines 1–2: "wherein registering the volumetric dataset to the patient further includes" should read "wherein the step of registering the volumetric dataset to the patient further includes".
- Claim 17, lines 1–2: "wherein registering the volumetric dataset to the patient includes" should read "wherein the step of registering the volumetric dataset to the patient includes".
- Claim 18, line 1: "wherein displaying the 3D volume includes" should read "wherein the step of displaying the 3D volume includes".
- Claim 19, lines 1–2: "wherein displaying the 3D volume includes" should read "wherein the step of displaying the 3D volume includes".

Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 7 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 7 recites the limitation "wherein volume rendering includes" in line 1. It is unclear whether this limitation refers to the step of volume rendering the 3D voxelated volume, introduced in claim 1, the step of volume rendering the first 3D voxelated volume, introduced in claim 5, or the step of volume rendering the second 3D voxelated volume, introduced in claim 5. Claim 7 depends from claim 5, which depends from claim 1. Without further description of the volume rendering step, it is unclear which specific step is recited, since three different rendering steps are introduced earlier. The limitation therefore renders the claim indefinite. For the purpose of examination, the limitation is interpreted as any reasonable volume rendering step introduced.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1, 2, 8 and 10–13 are rejected under 35 U.S.C. 103 as being unpatentable over Hill et al. (US 2012/0169712 A1; published 07/05/2012) (hereinafter "Hill") in view of Crawford et al. (US 2022/0270762 A1; published 08/25/2022) (hereinafter "Crawford").

Regarding claim 1, Hill teaches a method of generating a volumetric dataset for surgical navigation ("… to provide a system and method that generate a volumetric rendering of the region of interest while simultaneously displaying the position of the medical device within the region of interest." [0007]), the method comprising: obtaining a three-dimensional (3D) topographical volume of a patient ("… registering a volumetric data set 34 within coordinate system 32, generating a volumetric rendering 36 of the region of interest from data set 34 …" [0021]) representing one or more surface contours of patient anatomy ("… as well as identify structures of interest within the region of interest." [0010]; "Volumetric data set 34 may consist of a set of intensities, gradients or derived statistical properties for each of several volumetric data elements within the region of interest ... volumetric data set 34 may be computed by ECU 20 from a series of intracardiac echocardiography images obtained using an ICE catheter 56 or 72." [0026]); voxelating the 3D topographical volume to generate a 3D voxelated volume ("Data set 34 may be resolved into voxels as described hereinbelow." [0023]); volume rendering the 3D voxelated volume into a sequence of two-dimensional (2D) cross-sectional images ("… rendering 36 may further includes images 42, 44, 46 taken through planar cross-section through volumetric data set 34." [0027]), wherein each 2D cross-sectional image of the sequence includes a 2D slice of the 3D voxelated volume (see Fig. 2).

Hill fails to explicitly teach the step of conforming the sequence of 2D cross-sectional images to an imaging standard, wherein conforming the sequence includes processing image data included in the 2D cross-sectional images of the sequence. However, in the same field of endeavor, Crawford teaches conforming the sequence of 2D cross-sectional images to an imaging standard ("The medical images may be formatted in a standard compliant manner such as with DICOM." [0066]), wherein conforming the sequence includes processing image data included in the 2D cross-sectional images of the sequence ("For example, the results of the automatic image segmentation may take the form of a series of binary pixel arrays contained in medical images, e.g., DICOM files. When assembled into a volume, the binary pixel arrays may be used to mask the areas of the source pixel volume that are not relevant to the identified anatomy." [0110]).

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the volumetric dataset processing as taught by Hill with the additional patient-specific metadata and image processing as taught by Crawford. Doing so would make it possible "for analyzing medical images of a patient to create 3D models to assist in diagnosis, planning, and/or treatment" (see Crawford; [0005]).
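The claim 1 pipeline at issue (obtain a 3D topographical volume, voxelate it, then render it as a sequence of 2D cross-sectional slices) can be sketched in Python with NumPy. This is an illustrative reconstruction only, not code from the application or the cited references; the grid size, the point-cloud "anatomy," and the helper names are invented for the example.

```python
import numpy as np

def voxelate_surface(points: np.ndarray, grid_shape=(32, 32, 32)) -> np.ndarray:
    """Voxelate a 3D point cloud (surface contours) into a binary occupancy volume."""
    mins = points.min(axis=0)
    spans = np.maximum(points.max(axis=0) - mins, 1e-9)
    # Map each point into integer voxel indices within the grid.
    idx = ((points - mins) / spans * (np.array(grid_shape) - 1)).astype(int)
    volume = np.zeros(grid_shape, dtype=np.uint8)
    volume[idx[:, 0], idx[:, 1], idx[:, 2]] = 1
    return volume

def slice_sequence(volume: np.ndarray, axis: int = 2) -> list:
    """Advance a frame through the volume at equally spaced, non-overlapping
    intervals, producing one 2D cross-sectional image per slice."""
    return [np.take(volume, i, axis=axis) for i in range(volume.shape[axis])]

# Toy "topographical" surface: random points on a unit sphere.
rng = np.random.default_rng(0)
theta = rng.uniform(0, np.pi, 5000)
phi = rng.uniform(0, 2 * np.pi, 5000)
pts = np.stack([np.sin(theta) * np.cos(phi),
                np.sin(theta) * np.sin(phi),
                np.cos(theta)], axis=1)

vol = voxelate_surface(pts)
slices = slice_sequence(vol)
print(len(slices), slices[0].shape)  # 32 slices, each 32x32
```

Each element of `slices` corresponds to one 2D cross-sectional image of the sequence; because consecutive indices along the slicing axis are adjacent and distinct, the slices are equally spaced and non-overlapping, mirroring the claim 8 language discussed below.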
Regarding claim 2, Hill in view of Crawford teaches all claim limitations, as applied in claim 1, and Hill further teaches obtaining one or more 3D topographical images of the patient anatomy ("… volumetric data set 34 may be computed by ECU 20 from a series of intracardiac echocardiography images … Alternatively, however, volumetric data set 34 may comprise, for example, a magnetic resonance volumetric data set or a computed tomography volumetric data set." [0026]); and generating, based at least in part on the one or more 3D topographical images, a 3D model of the patient anatomy ("ECU 20 may generate rendering 36 by mapping data from the volumetric data set 34 into a three-dimensional voxel model." [0027]).

Regarding claim 8, Hill in view of Crawford teaches all claim limitations, as applied in claim 1, and Hill further teaches wherein: volume rendering the 3D voxelated volume includes advancing a frame through the 3D voxelated volume at equally spaced, non-overlapping intervals ("ECU 20 then projects the voxel model directly into a two-dimensional image 40 to form a three-dimensional representation of the region of interest by casting a ray of light from each of the pixels in image 40 in a direction normal to the viewing plane." [0027]; here the pixel-to-pixel interval is equally spaced and non-overlapping); and the 2D slices included in the 2D cross-sectional images of the sequence are non-overlapping slices of the 3D voxelated volume (the 2D slices are based on the pixel-to-pixel projection of image 40; since the pixels do not overlap, the projections in the normal direction do not overlap).

Regarding claim 10, Hill in view of Crawford teaches all claim limitations, as applied in claim 1, and Crawford further teaches wherein the imaging standard is a Digital Imaging and Communications in Medicine (DICOM) Standard ("The medical images may be formatted in a standard compliant manner such as with DICOM." [0066]).

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the volumetric dataset processing as taught by Hill with the additional patient-specific metadata and image processing as taught by Crawford. Doing so would make it possible "for analyzing medical images of a patient to create 3D models to assist in diagnosis, planning, and/or treatment" (see Crawford; [0005]).

Regarding claim 11, Hill in view of Crawford teaches all claim limitations, as applied in claim 10, and Crawford further teaches wherein processing the image data includes assigning, consistent with different tissue types as described in the DICOM Standard for grayscale images, new pixel values to pixels of the 2D slices ("… the separate anatomical features are mapped to the original medical images, such that only the original grey scale values or Hounsfield units for the separate anatomical features are shown in the medical images … and the background may be removed from the medical images ..." [0111]; here the background and non-relevant tissue are assigned a value of 0, i.e., the pixel value is removed).

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the volumetric dataset processing as taught by Hill with the additional patient-specific metadata and image processing as taught by Crawford. Doing so would make it possible "for analyzing medical images of a patient to create 3D models to assist in diagnosis, planning, and/or treatment" (see Crawford; [0005]).
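The pixel-value reassignment the examiner maps to Crawford's masking step (claim 11) amounts to a per-tissue lookup. The sketch below is a minimal illustration, assuming a simple label-to-grayscale table; the label values and grayscale targets are invented and come from neither the claims nor the cited references.

```python
import numpy as np

# Hypothetical tissue labels for one 2D slice: 0 = background, 1 = soft tissue, 2 = bone.
labels = np.array([[0, 1, 1],
                   [0, 2, 1],
                   [0, 0, 2]])

# Assumed mapping from tissue type to new grayscale pixel values:
# background removed to 0, each tissue given a distinct gray level.
tissue_to_gray = np.array([0, 120, 255], dtype=np.uint16)

# Assign new pixel values to pixels of the slice by tissue type.
new_pixels = tissue_to_gray[labels]
print(new_pixels)
```

The fancy-indexing lookup replaces every labeled pixel in one vectorized step, which is how a background-removal/masking pass like Crawford's [0111] is typically implemented over a whole slice sequence.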
Regarding claim 12, Hill in view of Crawford teaches all claim limitations, as applied in claim 10, and Crawford further teaches wherein processing the image data includes performing one or more digital measurements to determine (a) widths and heights in real word units of image frames or of the 2D slices included in the 2D cross-sectional images, and (b) one or more lengths in real word units of the 3D voxelated volume ("… physical measurements may be generated of the mesh, or any sub-mesh, or otherwise delineated region in the physical scene, which may include: length, breadth, height, angles, curvature, tortuosity of a mesh, etc." [0115]).

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the volumetric dataset processing as taught by Hill with the additional patient-specific metadata and image processing as taught by Crawford. Doing so would make it possible "for analyzing medical images of a patient to create 3D models to assist in diagnosis, planning, and/or treatment" (see Crawford; [0005]).

Regarding claim 13, Hill in view of Crawford teaches all claim limitations, as applied in claim 1, and Hill further teaches wherein conforming the sequence of 2D cross-sectional images to the imaging standard includes: calculating one or more dimensional attributes consistent with the imaging standard, the one or more dimensional attributes including image position relative to the patient ("These images are affiliated with the position and orientation data and processed by an ECU 20 unit and combined, based on registering multiple two-dimensional images at known positions and orientations (e.g., two-dimensional images captured by ICE catheter 56 or 72), into a three-dimensional volumetric data set 34." [0023]), or slice location ("A volumetric data element contains coordinates for locating the element in space as well as one or more properties such as intensity, intensity gradient or derived statistical properties." [0027]; here the slice location is determined from the locations of the voxels that construct the slice).

In addition, Crawford further teaches obtaining one or more identifiers consistent with the imaging standard, the one or more identifiers including an identifier of the patient ("The medical images may be formatted in a standard compliant manner such as with DICOM. The medical images may include metadata embedded therein indicative of a patient specific pathology associated with the patient specific anatomical features in the medical images." [0066]); or processing the 2D cross-sectional images of the sequence such that the 2D cross-sectional images have pixel padding that is consistent with the imaging standard ("determining start and end points of the isolated anatomical feature; taking slices at predefined intervals along an axis from the start point to the end point; calculating a cross-sectional area of each slice defined by a perimeter of the isolated anatomical feature ... calculating an overall 3D volume of the isolated anatomical feature ..." [0012]).

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the volumetric dataset processing as taught by Hill with the additional patient-specific metadata and image processing as taught by Crawford. Doing so would make it possible "for analyzing medical images of a patient to create 3D models to assist in diagnosis, planning, and/or treatment" (see Crawford; [0005]).

Claims 3–7 are rejected under 35 U.S.C. 103 as being unpatentable over Hill in view of Crawford, as applied in claim 2, and further in view of Bojarski et al. (US 2016/0143744 A1; published 05/26/2016) (hereinafter "Bojarski").
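The real-world-unit measurements discussed for claim 12 above reduce to multiplying pixel and slice counts by physical spacing, in the style of DICOM pixel-spacing metadata. The numbers below are invented for illustration and do not come from the application or the references.

```python
# Hypothetical DICOM-style geometry for a slice sequence; the values are
# invented for illustration only.
pixel_spacing_mm = (0.5, 0.5)   # (row spacing, column spacing) in mm per pixel
slice_thickness_mm = 1.0
rows, cols, n_slices = 256, 256, 120

# Digital measurements in real-world units (mm), as in claim 12:
width_mm = cols * pixel_spacing_mm[1]       # (a) width of each 2D slice
height_mm = rows * pixel_spacing_mm[0]      # (a) height of each 2D slice
depth_mm = n_slices * slice_thickness_mm    # (b) length of the 3D voxelated volume
print(width_mm, height_mm, depth_mm)  # 128.0 128.0 120.0
```

In an actual DICOM series these spacings would be read from per-slice attributes rather than hard-coded, but the arithmetic converting pixel dimensions to physical dimensions is the same.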
Regarding claim 3, Hill in view of Crawford teaches all claim limitations, as applied in claim 2, and Hill further teaches wherein the 3D model is a composite 3D model that includes (a) a first 3D topographical volume representing actual topographical anatomy of the patient and a representation of a device ("… generating a volumetric rendering 36 of the region of interest from data set 34 and superimposing a representation of device 12 on rendering 36 based on the position signal generated by sensor 24." [0021]).

Although Hill in view of Crawford fails to explicitly teach a composite 3D model that includes first and second 3D topographical volumes, Hill teaches the capability of superimposing two sets of image data to generate a composite 3D model. In the same field of endeavor, Bojarski teaches wherein the 3D model is a composite 3D model (see Figs. 30A and 30B; the 3D models illustrate the desired surface to match the current bone anatomy structure) that includes (a) a first 3D topographical volume representing actual topographical anatomy of the patient ("Once the practitioner has obtained the necessary measurements, the information can be used to generate a model representation of the target joint being assessed 2730. This model representation can be in the form of a topographical map or image. The model representation of the joint can be in one, two, or three dimensions." [0529]) and (b) a second 3D topographical volume representing a desired change to the actual topographical anatomy of the patient ("… the practitioner optionally can generate a projected model representation of the target joint in a corrected condition 2740 … the practitioner can then select a joint implant 2750 that is suitable to achieve the corrected joint anatomy." [0530]).

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the composite 3D model as taught by Hill with the combined target and projected 3D model as taught by Bojarski. By providing images of the patient's joint, "patient-adapted implant components can be selected and/or designed to include features (e.g., surface contours, curvatures, widths, lengths, thicknesses, and other features) that match existing features in the single, individual patient's joint as well as features that approximate an ideal and/or healthy feature that may not exist in the patient prior to a procedure" (see Bojarski; [0011]).

Regarding claim 4, Hill in view of Crawford and Bojarski teaches all claim limitations, as applied in claim 3, and Hill further teaches wherein: obtaining the 3D topographical volume of the patient includes obtaining the first 3D topographical volume and obtaining the second 3D topographical volume ("… registering a volumetric data set 34 within coordinate system 32, generating a volumetric rendering 36 of the region of interest from data set 34 …" [0021]; this process can be applied to any acquired dataset, including the target joint data and projected joint data as taught by Bojarski); and voxelating the 3D topographical volume includes voxelating the first 3D topographical volume into a first 3D voxelated volume and voxelating the second 3D topographical volume into a second 3D voxelated volume ("Data set 34 may be resolved into voxels as described hereinbelow." [0023]; this process can be applied to any acquired dataset, including the target joint data and projected joint data as taught by Bojarski).

Regarding claim 5, Hill in view of Crawford and Bojarski teaches all claim limitations, as applied in claim 4, and Hill further teaches wherein volume rendering the 3D voxelated volume includes volume rendering the first 3D voxelated volume into a first sequence of 2D cross-sectional images and volume rendering the second 3D voxelated volume into a second sequence of 2D cross-sectional images ("… rendering 36 may further includes images 42, 44, 46 taken through planar cross-section through volumetric data set 34." [0027]; this process can be applied to any acquired dataset, including the target joint data and projected joint data as taught by Bojarski).

Regarding claim 6, Hill in view of Crawford and Bojarski teaches all claim limitations, as applied in claim 4, and Hill further teaches wherein volume rendering the 3D voxelated volume includes volume rendering the first 3D voxelated volume and the second 3D voxelated volume into a single sequence of 2D cross-sectional images ("… generating a volumetric rendering 36 of the region of interest from data set 34 and superimposing a representation of device 12 on rendering 36 based on the position signal generated by sensor 24." [0021]; "… rendering 36 may further includes images 42, 44, 46 taken through planar cross-section through volumetric data set 34." [0027]; since the two datasets are superimposed as one, the rendering results in a single sequence).
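The single-sequence rendering the examiner reads onto claim 6 (two voxelated volumes superimposed, then sliced once) can be sketched as follows. This is an illustrative sketch only; the toy volumes standing in for "actual" and "desired" anatomy, and the overlay-by-maximum choice, are assumptions for the example rather than anything taught by Hill or Bojarski.

```python
import numpy as np

def render_single_sequence(vol_a: np.ndarray, vol_b: np.ndarray, axis: int = 2):
    """Superimpose two voxelated volumes and slice the composite into one
    sequence of 2D cross-sectional images."""
    # Overlay: a composite voxel is set if it is set in either volume.
    composite = np.maximum(vol_a, vol_b)
    return [np.take(composite, i, axis=axis) for i in range(composite.shape[axis])]

# Toy binary volumes invented for illustration: an "actual" block of anatomy
# and a partially overlapping "desired" change.
actual = np.zeros((8, 8, 8), dtype=np.uint8)
actual[2:6, 2:6, 2:6] = 1
desired = np.zeros((8, 8, 8), dtype=np.uint8)
desired[4:7, 4:7, 4:7] = 1

seq = render_single_sequence(actual, desired)
print(len(seq))  # 8
```

Rendering each volume separately (the claim 5 reading) would instead call the slicing loop on `actual` and `desired` individually, producing two sequences rather than one.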
Regarding claim 7, Hill in view of Crawford and Bojarski teaches all claim limitations, as applied in claim 5, and Crawford further teaches wherein volume rendering includes: assigning pixels included in 2D slices that describe the first 3D voxelated volume, first pixel values in a first range of values that correspond to a first range of colors; and assigning pixels included in 2D slices that describe the second 3D voxelated volume, second pixel values in a second range of values that correspond to a second range of colors different from the first range of colors ("Additionally or alternatively, the specific colors of transparency values may be mapped to labeled 3D surface mesh model to generate a volumetric render, as shown in 2616 of FIG. 26 and FIG. 27D. For example, a color map of the pixel intensities may be mapped directly to the 3D voxel intensities within only the segmentation to allow for specific volumetric visualization of the isolated anatomical feature." [0112]; here different area of interest are assigned with different color, considering the Figs.30A and 30B of Bojarski showing two different interested area). It would have been prima facie obvious to one ordinary skilled in the art before the effective filing date of the invention to modify the volumetric dataset processing as taught by Hill with the additional patient specific metadata and image processing as taught by Crawford. Doing so would make it possible "for analyzing medical images of a patient to create 3D models to assist in diagnosis, planning, and/or treatment" (see Crawford; [0005]). Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Hill in view of Crawford, as applied in claim 1, and further in view of Bojarski. Regarding claim 9, Hill in view of Crawford teaches all claim limitations, as applied in claim 1, except wherein voxelating the 3D topographical volume includes smoothing the 3D voxelated volume. 
However, in the same field of endeavor, Bojarski teaches wherein voxelating the 3D topographical volume includes smoothing the 3D voxelated volume ("Optionally, the 3D representation of the biological structure can be generated or manipulated, for example, smoothed or corrected …" [0328]). It would have been prima facie obvious to one ordinary skilled in the art before the effective filing date of the invention to modify the composite 3D model as taught by Hill with the combined target and projected 3D model as taught by Bojarski. By providing images of the patient's joint, "patient-adapted implant components can be selected and/or designed to include features (e.g., surface contours, curvatures, widths, lengths, thicknesses, and other features) that match existing features in the single, individual patient's joint as well as features that approximate an ideal and/or healthy feature that may not exist in the patient prior to a procedure" (see Bojarski; [0011]). Claim 14 – 20 are rejected under 35 U.S.C. 103 as being unpatentable over Hill in view of Bojarski and Crawford. Regarding claim 14, Hill teaches a method of providing surgical navigation ("… to provide a system and method that generate a volumetric rendering of the region of interest while simultaneously displaying the position of the medical device within the region of interest." [0007]; "As a result, the clinician can more readily navigate medical devices within the region of interest as well as identify structures of interest within the region of interest." 
[0010]), the method comprising: obtaining a volumetric dataset that includes a first sub-volume representing actual topographical anatomy of a patient ("… registering a volumetric data set 34 within coordinate system 32, generating a volumetric rendering 36 of the region of interest from data set 34 …" [0021]; "… volumetric data set 34 may be computed by ECU 20 from a series of intracardiac echocardiography images obtained using an ICE catheter 56 or 72." [0026]), and wherein the volumetric dataset (b) is based, at least in part, on one or more three-dimensional (3D) topographical images of the patient ("… as well as identify structures of interest within the region of interest." [0010]; "Volumetric data set 34 may consist of a set of intensities, gradients or derived statistical properties for each of several volumetric data elements within the region of interest." [0026]); registering the volumetric dataset to the patient ("ECU 20 provides a means for registering a volumetric data set 34 within coordinate system 32 …" [0021]; see Fig.1, the coordinate system 32 is refer to the patient coordinate system); and displaying a 3D volume reconstructed based, at least in part, on two-dimensional (2D) cross-sectional images included in the volumetric dataset ("Referring to FIG. 2, in accordance with the present teachings, display 22 is also configured to display a volumetric rendering 36 of the region of interest and an indication of the position of device 12 within the region of interest." [0024]). Hill fails to explicitly teach obtaining a volumetric dataset that includes a second sub-volume representing desired topographical anatomy of the patient, wherein the desired topographical anatomy of the patient represents a desired change to the actual topographical anatomy of the patient, and wherein the volumetric dataset (a) is compliant with a Digital Imaging and Communications in Medicine (DICOM) Standard. 
However, in the same field of endeavor, Bojarski teaches obtaining a volumetric dataset that includes a first sub-volume representing actual topographical anatomy of a patient ("Once the practitioner has obtained the necessary measurements, the information can be used to generate a model representation of the target joint being assessed 2730. This model representation can be in the form of a topographical map or image. The model representation of the joint can be in one, two, or three dimensions." [0529]) and a second sub-volume representing desired topographical anatomy of the patient, wherein the desired topographical anatomy of the patient represents a desired change to the actual topographical anatomy of the patient ("… the practitioner optionally can generate a projected model representation of the target joint in a corrected condition 2740 … the practitioner can then select a joint implant 2750 that is suitable to achieve the corrected joint anatomy." [0530]). It would have been prima facie obvious to one ordinary skilled in the art before the effective filing date of the invention to modify the composite 3D model as taught by Hill with the combined target and projected 3D model as taught by Bojarski. By providing images of the patient's joint, "patient-adapted implant components can be selected and/or designed to include features (e.g., surface contours, curvatures, widths, lengths, thicknesses, and other features) that match existing features in the single, individual patient's joint as well as features that approximate an ideal and/or healthy feature that may not exist in the patient prior to a procedure" (see Bojarski; [0011]). In addition, in the same field of endeavor, Crawford teaches wherein the volumetric dataset (a) is compliant with a Digital Imaging and Communications in Medicine (DICOM) Standard ("The medical images may be formatted in a standard compliant manner such as with DICOM." [0066]). 
It would have been prima facie obvious to one ordinary skilled in the art before the effective filing date of the invention to modify the volumetric dataset processing as taught by Hill with the additional patient specific metadata and image processing as taught by Crawford. Doing so would make it possible "for analyzing medical images of a patient to create 3D models to assist in diagnosis, planning, and/or treatment" (see Crawford; [0005]). Regarding claim 15, Hill in view of Bojarski and Crawford teaches all claim limitations, as applied in claim 14, and Hill further teaches wherein registering the volumetric dataset to the patient includes registering the first sub-volume to existing topographical anatomy of the patient ("ECU 20 provides a means for registering a volumetric data set 34 within coordinate system 32 …" [0021]; see Fig.1, the coordinate system 32 is refer to the patient coordinate system). Regarding claim 16, Hill in view of Bojarski and Crawford teaches all claim limitations, as applied in claim 15, and Bojarski further teaches wherein registering the volumetric dataset to the patient further includes overlaying the second sub-volume onto the first sub-volume using a best fit algorithm ("Using the difference between the topographical condition of the joint and the projected image of the joint, the practitioner can then select a joint implant 2750 that is suitable to achieve the corrected joint anatomy." [0530]; see Figs. 30A and 30B; the 3D models illustrate the desired surface to match the current bone anatomy structure). It would have been prima facie obvious to one ordinary skilled in the art before the effective filing date of the invention to modify the composite 3D model as taught by Hill with the combined target and projected 3D model as taught by Bojarski. 
By providing images of the patient's joint, "patient-adapted implant components can be selected and/or designed to include features (e.g., surface contours, curvatures, widths, lengths, thicknesses, and other features) that match existing features in the single, individual patient's joint as well as features that approximate an ideal and/or healthy feature that may not exist in the patient prior to a procedure" (see Bojarski; [0011]).

Regarding claim 17, Hill in view of Bojarski and Crawford teaches all claim limitations, as applied in claim 14, and Hill further teaches wherein registering the volumetric dataset to the patient includes registering the volumetric dataset to the patient such that a bulk of the volumetric dataset is positioned internal to the patient ("ECU 20 provides a means for registering a volumetric data set 34 within coordinate system 32, generating a volumetric rendering 36 of the region of interest from data set 34 and superimposing a representation of device 12 on rendering 36 based on the position signal generated by sensor 24." [0021]; "... within a region of interest in a body 14 such as a heart 16." [0016]; when the region of interest is the heart, the registered volume is within the patient's body).

Regarding claim 18, Hill in view of Bojarski and Crawford teaches all claim limitations, as applied in claim 14, and Bojarski further teaches wherein displaying the 3D volume includes displaying a difference between the first sub-volume and the second sub-volume (see Figs.
30A and 30B; the mesh illustrates the difference (the implant needed to correct the defect) between the desired surface and the current bone anatomy structure), and wherein the difference represents a depth between the patient's actual topographical anatomy and the patient's desired topographical anatomy ("Using the difference between the topographical condition of the joint and the projected image of the joint, the practitioner can then select a joint implant 2750 that is suitable to achieve the corrected joint anatomy." [0530]).

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the composite 3D model as taught by Hill with the combined target and projected 3D model as taught by Bojarski. By providing images of the patient's joint, "patient-adapted implant components can be selected and/or designed to include features (e.g., surface contours, curvatures, widths, lengths, thicknesses, and other features) that match existing features in the single, individual patient's joint as well as features that approximate an ideal and/or healthy feature that may not exist in the patient prior to a procedure" (see Bojarski; [0011]).

Regarding claim 19, Hill in view of Bojarski and Crawford teaches all claim limitations, as applied in claim 14, and Hill further teaches wherein displaying the 3D volume includes: tracking a position of a physical instrument ("… provide position and orientation data to ECU 20 such that the position and orientation of catheter 72—and the image data provided by array 58—can be determined in six degrees of freedom." [0017]); and displaying a composite volume at a location that the physical instrument contacts the patient ("The method may continue with the step 54 of superimposing a representation of medical device 12 on images 40, 42, 44, 46 responsive to position signal generated by sensor 24 on device 12 ...
ECU 20 therefore includes a superimposition module for superimposing a representation of device 12 on rendering 36." [0028]). In addition, Bojarski further teaches the composite volume includes a difference between the first sub-volume and the second sub-volume (see Figs. 30A and 30B; the mesh illustrates the difference (the implant needed to correct the defect) between the desired surface and the current bone anatomy structure).

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the composite 3D model as taught by Hill with the combined target and projected 3D model as taught by Bojarski. By providing images of the patient's joint, "patient-adapted implant components can be selected and/or designed to include features (e.g., surface contours, curvatures, widths, lengths, thicknesses, and other features) that match existing features in the single, individual patient's joint as well as features that approximate an ideal and/or healthy feature that may not exist in the patient prior to a procedure" (see Bojarski; [0011]).

Regarding claim 20, Hill teaches a modeling and navigation system ("… to provide a system and method that generate a volumetric rendering of the region of interest while simultaneously displaying the position of the medical device within the region of interest." [0007]; "As a result, the clinician can more readily navigate medical devices within the region of interest as well as identify structures of interest within the region of interest." [0010]), comprising: a three-dimensional (3D) imaging device configured to obtain one or more 3D topographical images of patient anatomy ("… volumetric data set 34 may be computed by ECU 20 from a series of intracardiac echocardiography images obtained using an ICE catheter 56 or 72."
[0026]); a computing device ("… may be computed by ECU 20 …" [0026]) configured to generate one or more 3D models based, at least in part, on the one or more 3D topographical images ("… registering a volumetric data set 34 within coordinate system 32, generating a volumetric rendering 36 of the region of interest from data set 34 …" [0021]), wherein the one or more 3D models include a representation of actual topographical patient anatomy ("… registering a volumetric data set 34 within coordinate system 32, generating a volumetric rendering 36 of the region of interest from data set 34 …" [0021]), voxelate the one or more 3D models into one or more 3D voxelated volumes ("Data set 34 may be resolved into voxels as described hereinbelow." [0023]), volume render the one or more 3D voxelated volumes into one or more sequences of 2D cross-sectional images ("… generating a volumetric rendering 36 of the region of interest from data set 34 and superimposing a representation of device 12 on rendering 36 based on the position signal generated by sensor 24." [0021]; "… rendering 36 may further includes images 42, 44, 46 taken through planar cross-section through volumetric data set 34." [0027]), process image data included in the 2D cross-sectional images ("The method may continue with the step 50 of registering volumetric data set 34 in coordinate system 32. Volumetric data set 34 may consist of a set of intensities, gradients or derived statistical properties for each of several volumetric data elements within the region of interest." 
[0026]); and a surgical navigation system ("… system 18 may comprise a system that employs magnetic fields to detect the position of device 12 within body 14 …" [0020]) configured to reconstruct a 3D volume based, at least in part, on the one or more sequences of 2D cross-sectional images ("ECU 20 then projects the voxel model directly into a two-dimensional image 40 to form a three-dimensional representation of the region of interest by casting a ray of light from each of the pixels in image 40 in a direction normal to the viewing plane." [0027]), register the 3D volume to the patient ("ECU 20 provides a means for registering a volumetric data set 34 within coordinate system 32 …" [0021]; see Fig. 1, coordinate system 32 refers to the patient coordinate system), track a position of a physical instrument ("… provide position and orientation data to ECU 20 such that the position and orientation of catheter 72—and the image data provided by array 58—can be determined in six degrees of freedom." [0017]), and display a composite 3D volume at a location that the physical instrument contacts the patient ("The method may continue with the step 54 of superimposing a representation of medical device 12 on images 40, 42, 44, 46 responsive to position signal generated by sensor 24 on device 12 ... ECU 20 therefore includes a superimposition module for superimposing a representation of device 12 on rendering 36." [0028]).

In addition, in the same field of endeavor, Bojarski teaches wherein the one or more 3D models include a representation of actual topographical patient anatomy ("Once the practitioner has obtained the necessary measurements, the information can be used to generate a model representation of the target joint being assessed 2730. This model representation can be in the form of a topographical map or image. The model representation of the joint can be in one, two, or three dimensions."
[0529]) and a representation of desired topographical patient anatomy ("… the practitioner optionally can generate a projected model representation of the target joint in a corrected condition 2740 … the practitioner can then select a joint implant 2750 that is suitable to achieve the corrected joint anatomy." [0530]), and the composite 3D volume includes a difference between the first sub-volume and the second sub-volume (see Figs. 30A and 30B; the mesh illustrates the difference (the implant needed to correct the defect) between the desired surface and the current bone anatomy structure).

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the composite 3D model as taught by Hill with the combined target and projected 3D model as taught by Bojarski. By providing images of the patient's joint, "patient-adapted implant components can be selected and/or designed to include features (e.g., surface contours, curvatures, widths, lengths, thicknesses, and other features) that match existing features in the single, individual patient's joint as well as features that approximate an ideal and/or healthy feature that may not exist in the patient prior to a procedure" (see Bojarski; [0011]).

In addition, in the same field of endeavor, Crawford teaches a computing device configured to conform the one or more sequences of 2D cross-sectional images to an imaging standard ("The medical images may be formatted in a standard compliant manner such as with DICOM." [0066]).

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the volumetric dataset processing as taught by Hill with the additional patient-specific metadata and image processing as taught by Crawford. Doing so would make it possible "for analyzing medical images of a patient to create 3D models to assist in diagnosis, planning, and/or treatment" (see Crawford; [0005]).
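The volume-rendering step the examiner maps from Hill (casting a ray from each pixel of the image plane in a direction normal to the viewing plane, [0027]) can be sketched in a few lines. This is an orthographic maximum-intensity projection, which is only one possible compositing rule; Hill's quoted passage does not specify which rule is used, so the choice here is an assumption for illustration.

```python
# Hypothetical sketch of the ray-casting projection Hill describes:
# for each pixel of a 2D image plane, march a ray normal to that plane
# through the voxel grid. Here the image plane is the x-y plane, rays
# run along z, and each pixel keeps the maximum intensity encountered
# (a maximum-intensity projection; the compositing rule is assumed).

def max_intensity_projection(volume):
    """Orthographic MIP of volume[z][y][x] onto the x-y image plane."""
    depth = len(volume)
    rows = len(volume[0])
    cols = len(volume[0][0])
    image = [[0] * cols for _ in range(rows)]
    for y in range(rows):
        for x in range(cols):
            # One ray per pixel, stepping through every z slice.
            image[y][x] = max(volume[z][y][x] for z in range(depth))
    return image

vol = [[[1, 0], [0, 2]],
       [[3, 0], [0, 1]]]
img = max_intensity_projection(vol)   # → [[3, 0], [0, 2]]
```

Production renderers instead accumulate opacity-weighted samples along each ray (alpha compositing), but the per-pixel ray structure is the same.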
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Isobe et al. (US 5,995,108; published on 11/30/1999) teach an apparatus and method for composing and displaying a 3D image from a plurality of 3D image datasets.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHAO SHENG whose telephone number is (571)272-8059. The examiner can normally be reached Monday to Friday, 8:30 am to 5:00 pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Anne M. Kozak, can be reached at (571) 270-0552. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CHAO SHENG/
Primary Examiner, Art Unit 3797

Prosecution Timeline

Mar 19, 2025
Application Filed
Jan 22, 2026
Non-Final Rejection — §103, §112
Apr 15, 2026
Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594001
APPARATUS FOR RECORDING PROBE MOVEMENT
2y 5m to grant Granted Apr 07, 2026
Patent 12578825
ANCHOR CONFIGURATIONS FOR AN ARRAY OF ULTRASONIC TRANSDUCERS
2y 5m to grant Granted Mar 17, 2026
Patent 12569152
Method to Non-Invasively Assess Elevated Left Ventricular End-Diastolic Pressure
2y 5m to grant Granted Mar 10, 2026
Patent 12564354
CARTILAGE DEGENERATION ANALYSIS DEVICE, DEVICE FOR DIAGNOSING OR AIDING DIAGNOSIS WHICH CONTAINS SAME, METHOD FOR DETERMINING DEGREE OF DEGENERATION OF CARTILAGE, AND METHOD FOR EVALUATING DRUG EFFICACY OF TEST SUBSTANCE
2y 5m to grant Granted Mar 03, 2026
Patent 12564447
SYSTEMS, METHODS, AND DEVICES FOR DEVELOPING PATIENT-SPECIFIC SPINAL IMPLANTS, TREATMENTS, OPERATIONS, AND/OR PROCEDURES
2y 5m to grant Granted Mar 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
62%
Grant Probability
91%
With Interview (+29.2%)
3y 4m
Median Time to Grant
Low
PTA Risk
Based on 276 resolved cases by this examiner. Grant probability derived from career allow rate.
