Prosecution Insights
Last updated: April 19, 2026

Application No. 18/427,758 — METHODS AND SYSTEMS FOR REGISTRATION
Status: Non-Final Office Action (§102, §103)

Filed: Jan 30, 2024
Examiner: TRAN, DUY ANH
Art Unit: 2674
Tech Center: 2600 — Communications
Assignee: Wuhan United Imaging Surgical Co. Ltd.
OA Round: 1 (Non-Final)

Predictions
Grant probability: 81% (favorable)
Expected OA rounds: 1-2
Expected time to grant: 3 years 1 month
Grant probability with interview: 99%
Examiner Intelligence

Career allow rate: 81% (104 granted / 128 resolved), +19.3% vs. Tech Center average — above average
Interview lift: +17.5% among resolved cases with an interview — strong
Typical timeline: 3 years 1 month average prosecution
Currently pending: 29 applications
Career history: 157 total applications across all art units

Statute-Specific Performance

§101: 12.9% (-27.1% vs. TC avg)
§102: 26.7% (-13.3% vs. TC avg)
§103: 42.0% (+2.0% vs. TC avg)
§112: 11.3% (-28.7% vs. TC avg)

Tech Center averages are estimates. Based on career data from 128 resolved cases.

Office Action

Rejections: §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application No. CN-2021108756746, filed on 04/18/2024, and Application No. CN-2022101471976, filed on 04/18/2024.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 04/28/2024, 11/11/2024, and 07/01/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Status

Claims 1-4, 7-10, 23-27, and 29-30 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Ketcha et al. (“Multi-stage 3D–2D registration for correction of anatomical deformation in image-guided spine surgery”; “Ketcha”). Claims 5-6, 11, 28, and 31 are rejected under 35 U.S.C. 103 as being unpatentable over Ketcha in view of Frank et al. (U.S. 2012/0027261 A1; “Frank”).

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of the appropriate paragraphs of 35 U.S.C.
102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-4, 7-10, 23-27, and 29-30 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Ketcha.

Regarding claim 1, Ketcha discloses a method for registration (Abstract: “A multi-stage image-based 3D–2D registration method is presented that maps annotations in a 3D image”) implemented by at least one processor (2.1.4. Optimization: “computation time and reduce GPU memory,”), the method comprising: obtaining a three-dimensional (3D) image (preoperative CT image) of a target object captured before a surgery (Fig. 2: Automatically Masked CT volume) and at least one two-dimensional (2D) image (intraoperative radiographs) captured during the surgery (Fig. 2: Intra-Op Projection Radiograph (p); 2.1. Rigid 3D–2D registration framework: “the method intends to aid target localization by mapping vertebral labels defined in the preoperative CT image (or MR image (De Silva et al 2017)) to the intraoperative radiograph via image-based 3D–2D registration.”; 2.3.1. Single-stage registration with sub-image extent n1: “retrospective clinical data set of 24 patients undergoing thoracolumbar spine surgery, consisting of 24 CT images and 61 intraoperative radiographs.
Preoperative CT included data from three scanner manufacturers … Intraoperative radiographs were all acquired with a mobile radiography system”) obtaining a registered 3D image and a first transformation matrix between a 3D image coordinate system corresponding to the 3D image and a surgical space coordinate system corresponding to the surgery by performing, based on the at least one 2D image, posture transformation on the 3D image; (2.1.2. Projection geometry and DRR formation: “rward projections were computed within a fixed camera geometry with a virtual detector centered at the origin and an x-ray point source positioned at (xs, ys, zs) with zs defined to be perpendicular to the detector plane… A rigid 6DOF transformation,Tr, consisting of 3 translations (xr, yr, zr) and 3 rotations (ηr, θr, φr) defined the pose of the CT volume within this camera geometry. Given the geometry and CT position, a projective transformation matrix, T3×4, was defined to map a location in the CT coordinate system (x, y, z) to its projected location in the DRR (u, v) according to equation (1): … where c is a constant that normalizes the third element of the 2D position vector. The DRR was generated via ray-tracing (Cabral et al 1994) with line integrals computed using trilinear interpolation.”; 2.1.4. Optimization.: “To distribute these initializations, a plane-splitting kD tree partitioning of the search space (Bentley 1975) was implemented where the search space was divided by iteratively splitting the largest current subspace in half. The number of multi-starts (MS = 50), the population sampling size (λ = 125), and the search range (SR) along each 6DOF dimension (±[100 mm, 200 mm, 75 mm, 15°, 10°, 10°]) were selected based on a sensitivity study using a clinical image dataset, considering trade-offs in computation time, robustness, and initialization error. 
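As an aside, the projective transformation matrix T3×4 cited above (Ketcha's equation (1)) maps a CT-space location (x, y, z) to its projected DRR location (u, v) up to a normalizing constant c. The NumPy sketch below illustrates one such mapping; the specific geometry (virtual detector in the z = 0 plane, x-ray source at (xs, ys, zs)) and all numeric values are this sketch's assumptions, not Ketcha's actual calibration:

```python
import numpy as np

def projection_matrix(xs, ys, zs):
    """3x4 projective matrix T for an x-ray point source at (xs, ys, zs)
    and a virtual detector in the z = 0 plane centered at the origin.
    T maps homogeneous CT coordinates (x, y, z, 1) to (c*u, c*v, c)."""
    return np.array([
        [zs, 0.0, -xs, 0.0],
        [0.0, zs, -ys, 0.0],
        [0.0, 0.0, -1.0, zs],   # third row yields c = zs - z
    ])

def project(T, pts):
    """Project an (N, 3) array of CT-space points to (N, 2) DRR coordinates."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous (x, y, z, 1)
    q = pts_h @ T.T                                   # (N, 3): (c*u, c*v, c)
    return q[:, :2] / q[:, 2:3]                       # normalize third element

T = projection_matrix(xs=0.0, ys=0.0, zs=1000.0)
uv = project(T, np.array([[10.0, -20.0, 500.0]]))
# a point halfway between source and detector projects with 2x magnification:
# uv == [[20., -40.]]
```

The division by the third homogeneous element plays the role of the constant c that, per the quoted passage, normalizes the 2D position vector.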
The chosen SR values reflect the assumption of a coarse estimate longitudinal initialization but fairly accurate rotational initialization that comes from knowledge of patient positioning (e.g. knowing that the image is a lateral radiograph).”) determining a 2D target image corresponding to a target part of the target object in each of the at least one 2D image and determining a 3D target image corresponding to the target part in the registered 3D image; (2.1.1. Binary volume masking; 2.2.1. Multi-stage progression. The key feature of msLevelCheck is that at each subsequent stage, k, the 3D image is divided into multiple 3D sub-images, each focusing on (possibly overlapping) local regions and are independently registered to the p (2D image) using the outputs from the previous stage (Tr;k−1) to determine the initialization. the sub-images are further divided to focus on smaller, increasingly local 3D regions until the final stage at which the output registration transforms are used to compute annotation locations on the 2D image. the multi-stage framework yields a transformation of the annotations from the 3D CT to the 2D radiograph that is globally deformable yet locally rigid to improve the registration accuracy at each annotation”) and obtaining a target transformation matrix by performing, based on the 3D target image and the 2D target image of each 2D image, posture transformation on the registered 3D image to optimize the first transformation matrix. (2.2.4. Scaling optimization parameters. The accuracy is expected to gradually improve as the multi-stage registration progresses, and registration parameters are accordingly adjusted to a finer range and scale. 
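The multi-start initialization scheme quoted from §2.1.4 — a plane-splitting kD-tree partition that repeatedly halves the largest current subspace — can be sketched as below. Normalizing each dimension by its initial range (so that translations in mm and rotations in degrees are comparable) is an assumption of this sketch, not a detail from the quoted text:

```python
import numpy as np

def multistart_points(lo, hi, n_starts):
    """Distribute n_starts initializations over the search box [lo, hi] by
    iteratively splitting the largest current subspace in half along its
    widest (range-normalized) dimension, then taking subspace centers."""
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    scale = hi - lo                                  # per-dimension normalization
    boxes = [(lo, hi)]
    while len(boxes) < n_starts:
        # pick the largest current subspace by normalized volume
        i = int(np.argmax([np.prod((b - a) / scale) for a, b in boxes]))
        a, b = boxes.pop(i)
        d = int(np.argmax((b - a) / scale))          # widest normalized dimension
        mid = 0.5 * (a[d] + b[d])
        b1, a2 = b.copy(), a.copy()
        b1[d], a2[d] = mid, mid                      # split in half along d
        boxes += [(a, b1), (a2, b)]
    return np.array([0.5 * (a + b) for a, b in boxes])

# e.g. MS = 50 starts over SR = +/-[100 mm, 200 mm, 75 mm, 15, 10, 10 deg]
sr = np.array([100.0, 200.0, 75.0, 15.0, 10.0, 10.0])
starts = multistart_points(-sr, sr, n_starts=50)     # starts.shape == (50, 6)
```

Each returned center would seed one CMA-ES run in the cited framework.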
As the transformation estimate approaches the solution at each stage, parameters governing the search range (SR, as outlined in section 2.1.4) are scaled to better suit the smaller region of interest and improve registration runtime.” Regarding claim 2, Ketcha discloses wherein the obtaining a registered 3D image and a first transformation matrix between a 3D image coordinate system corresponding to the 3D image and a surgical space coordinate system corresponding to the surgery by performing, based on the at least one 2D image, posture transformation on the 3D image includes: obtaining a down sampled 3D image and at least one down sampled 2D image by down sampling, based on a preset multiplier, the 3D image and the at least one 2D image; (2.1.2. Projection geometry and DRR formation: “A rigid 6DOF transformation, Tr, consisting of 3 translations (xr, yr, zr) and 3 rotations (ηr, θr, φr) defined the pose of the CT volume within this camera geometry … To achieve pixel-wise correspondence, the virtual detector was defined to have dimensions and pixel size identical to the projection image, which has been resampled to a specified isotropic pixel size (apix) and rectangularly cropped to exclude collimator edges … the volume was downsampled isotropically to apix’2. The step length for ray casting was chosen to be 2 voxels (equivalently, apix)”; 2.2.5. Enhancing structural image features: “the downsampling of p is reduced (by decreasing apix) along with the kernel width σ (characteristic width of the Gaussian smoothing kernel) for the image gradient calculation”) obtaining an adjusted down sampled 3D image by adjusting, based on a preset step size, a posture corresponding to the down sampled 3D image; (2.1.2. 
Projection geometry and DRR formation: “To achieve pixel-wise correspondence, the virtual detector was defined to have dimensions and pixel size identical to the projection image, which has been resampled to a specified isotropic pixel size (apix) and rectangularly cropped to exclude collimator edges … a soft tissue threshold of 150 HU (shown previously to be insensitive to the particular threshold choice in the range 50–300 HU) was applied to the CT image (setting the value to 0 if below) to remove low attenuation regions in the forward projection”) for each of the at least one 2D image, obtaining a first projection image corresponding to the 2D image by projecting, based on a capturing posture of the 2D image, the adjusted down sampled 3D image; (2.2.5. Enhancing structural image features: “the downsampling of p is reduced (by decreasing apix) along with the kernel width σ (characteristic width of the Gaussian smoothing kernel) for the image gradient calculation … the p image is cropped to contain only the region that is defined by the search range and sub-image extent of the current registration.”) in response to determining that the at least one first projection image and the at least one downsampled 2D image satisfy a first preset condition, obtaining the registered 3D image by upsampling, based on the preset multiplier, the adjusted downsampled 3D image; and determining the first transformation matrix based on the posture transformation process from the 3D image to the registered 3D image. (2.1.2. Projection geometry and DRR formation: “The DRR was generated via ray-tracing (Cabral et al 1994) with line integrals computed using trilinear interpolation.
To achieve pixel-wise correspondence, the virtual detector was defined to have dimensions and pixel size identical to the projection image, which has been resampled to a specified isotropic pixel size (apix) and rectangularly cropped to exclude collimator edges and burnt-in text annotations. … was applied to the CT image (setting the value to 0 if below) to remove low attenuation regions in the forward projection, and basic overlap of the DRR with the projection image was ensured by translating the CT volume along the longitudinal direction of the patient,”) Regarding claim 3, Ketcha discloses further comprising: determining a first similarity degree based on the at least one first projection image and the at least one downsampled 2D image; and in response to determining that the first similarity degree is larger than a similarity threshold, determining that the at least one first projection image and the at least one down sampled 2D image satisfy the first preset condition. (2.1.3. Similarity metric. Similarity between the DRR and the intraoperative radiograph was evaluated using Gradient Orientation (GO), The GO similarity was defined as: (Eq2) and reflects the pixel-wise similarity in gradient direction, w′, among pixels whose gradient magnitude passes a threshold t in both images, defined as the median gradient intensity. Here, θi is the angle difference (radians) in gradient direction between the DRR and p at pixel i.”) Regarding claim 4, Ketcha discloses after for each of the at least one 2D image, obtaining the first projection image corresponding to the 2D image by projecting, based on the capturing posture of the 2D image, the adjusted downsampled 3D image, the method further comprises: in response to determining that the at least one first projection image and the at least one downsampled 2D image do not satisfy the first preset condition,(2.2.5. 
Enhancing structural image features: “the p image is cropped to contain only the region that is defined by the search range and sub-image extent of the current registration. adaptive histogram equalization is applied to the radiograph to locally enhance the contrast and thereby accentuate structures that may otherwise fall beneath the gradient threshold applied during GO calculation, an effect that becomes increasingly likely as the impact of noise rises due to the reduction in down-sampling and gradient kernel width.”) adjusting the posture corresponding to the adjusted downsampled 3D image based on the preset step size; (2.1.2. Projection geometry and DRR formation: “To achieve pixel-wise correspondence, the virtual detector was defined to have dimensions and pixel size identical to the projection image, which has been resampled to a specified isotropic pixel size (apix) and rectangularly cropped to exclude collimator edges … a soft tissue threshold of 150 HU (shown previously to be insensitive to the particular threshold choice in the range 50–300 HU) was applied to the CT image (setting the value to 0 if below) to remove low attenuation regions in the forward projection”) and repeating the operation of obtaining the first projection image corresponding to each of the at least one 2D image until the at least one first projection image and the at least one downsampled 2D image satisfy the first preset condition. (2.1.3. Similarity metric. Similarity between the DRR and the intraoperative radiograph was evaluated using Gradient Orientation (GO), to provide a high degree of robustness against image content mismatch (e.g. presence of surgical tools in the radiograph but not the CT) as well as poor radiographic image quality.”; 2.1.4. 
Optimization: “CMA-ES is a stochastic, derivative-free optimization method where, to provide robustness against local minima, the update (which includes both the parameter estimate and covariance matrix) at each iteration is determined by sampling a total of λ points in the parameter space. Sampling is performed according to a Gaussian distribution defined by the covariance matrix and the current parameter estimate.”) Regarding claim 7, Ketcha discloses the determining the first transformation matrix based on the posture transformation process from the 3D image to the registered 3D image includes: obtaining a second transformation matrix between a 2D imaging device coordinate system and the surgical space coordinate system; (Fig.5 and 2.3.1. Single-stage registration with sub-image extent n1 : “The robustness of single-stage registration for such small sub-images was evaluated in an IRB approved retrospective clinical data set of 24 patients undergoing thoracolumbar spine surgery, consisting of 24 CT images and 61 intraoperative radiographs … Intraoperative radiographs were all acquired with a mobile radiography system .. Binary Volumetric masks were automatically generated with the number of adjacent vertebrae (n1) ranging from 7 down to 5, 3, and 1, centered on a central vertebra in the radiograph.”)obtaining a third transformation matrix between the 3D image coordinate system and the 2D imaging device coordinate system based on the posture transformation process from the 3D image to the registered 3D image; (2.1: “Rigid registration is performed by determining a rigid 6 degree-of-freedom (6DOF) transformation of the CT image that optimizes the similarity between the digitally reconstructed radiograph (DRR) and the intraoperative radiograph (p).”) and determining the first transformation matrix based on the second transformation matrix and the third transformation matrix. (Fig.2 and 4. 
Discussion and conclusion: “The multi-stage approach amounts to 3D–2D registration that is locally rigid (at progressively finer scales) and yet globally deformable (with respect to mapping of label annotations from the 3D image to the radiograph). … it produces a series of rigid transformations by which point annotation landmarks are transformed independently, and thus deformably with respect to the underlying image.”) Regarding claim 8, Ketcha discloses the determining a 2D target image corresponding to a target part of the target object in each of the at least one 2D image and determining a 3D target image corresponding to the target part in the registered 3D image includes: determining the 3D target image corresponding to the target part in the registered 3D image; for each of the at least one 2D image, obtaining a fourth transformation matrix between a 2D image coordinate system corresponding to the 2D image and the surgical space coordinate system; Fig.5 and 2.3.1. Single-stage registration with sub-image extent n1 : “The robustness of single-stage registration for such small sub-images was evaluated in an IRB approved retrospective clinical data set of 24 patients undergoing thoracolumbar spine surgery, consisting of 24 CT images and 61 intraoperative radiographs … Intraoperative radiographs were all acquired with a mobile radiography system .. Binary Volumetric masks were automatically generated with the number of adjacent vertebrae (n1) ranging from 7 down to 5, 3, and 1, centered on a central vertebra in the radiograph.”) and determining the 2D target image in the 2D image based on the 3D target image, the first transformation matrix, and the fourth transformation matrix. (Figs.2-3 and 4. Discussion and conclusion: “a multi-stage 3D–2D registration algorithm (msLevelCheck) for mapping label annotations (e.g. 
vertebral labels or other point features demarked in preoperative 3D images as part of existing clinical workflow) to intraoperative radiographs under conditions of strong anatomical deformation. The multi-stage approach amounts to 3D–2D registration that is locally rigid (at progressively finer scales) and yet globally deformable (with respect to mapping of label annotations from the 3D image to the radiograph). … it produces a series of rigid transformations by which point annotation landmarks are transformed independently, and thus deformably with respect to the underlying image.”) Regarding claim 9, Ketcha discloses the determining the 2D target image in the 2D image based on the 3D target image, the first transformation matrix, and the fourth transformation matrix includes: determining, based on the 3D target image, at least one 3D coordinate of at least one representative point of the target part in the 3D image coordinate system and a size parameter of the target part; (2.2.2. Definition of sub-images: “subsets of the 3D preoperative vertebral labels are used to generate 3D binary masks around local regions using the same principle of binary volumetric masking as described in section 2.1.1. The size of the sub-images at each stage k is set by nk, the number of labels chosen to generate each mask (represented as the number of dots in the binary volume mask of figure 5) … binary masking provides a segmentation-free region of interest for various locations along the spinal column”) determining at least one 2D coordinate of the at least one representative point in the 2D image coordinate system based on the at least one 3D coordinate, the first transformation matrix, and the fourth transformation matrix; and determining the 2D target image in the 2D image based on the at least one 2D coordinate and the size parameter. (Figs.2-3 and 4. Discussion and conclusion: “a multi-stage 3D–2D registration algorithm (msLevelCheck) for mapping label annotations (e.g. 
vertebral labels or other point features demarked in preoperative 3D images as part of existing clinical workflow) to intraoperative radiographs under conditions of strong anatomical deformation. The multi-stage approach amounts to 3D–2D registration that is locally rigid (at progressively finer scales) and yet globally deformable (with respect to mapping of label annotations from the 3D image to the radiograph). … it produces a series of rigid transformations by which point annotation landmarks are transformed independently, and thus deformably with respect to the underlying image.”) Regarding claim 10, Ketcha discloses the obtaining a fourth transformation matrix between a 2D image coordinate system corresponding to the 2D image and the surgical space coordinate system includes: obtaining a second transformation matrix between a 2D imaging device coordinate system and the surgical space coordinate system; obtaining a fifth transformation matrix between the 2D imaging device coordinate system and the 2D image coordinate system; (Fig.5 and 2.3.1. Single-stage registration with sub-image extent n1 : “The robustness of single-stage registration for such small sub-images was evaluated in an IRB approved retrospective clinical data set of 24 patients undergoing thoracolumbar spine surgery, consisting of 24 CT images and 61 intraoperative radiographs … Intraoperative radiographs were all acquired with a mobile radiography system .. Binary Volumetric masks were automatically generated with the number of adjacent vertebrae (n1) ranging from 7 down to 5, 3, and 1, centered on a central vertebra in the radiograph.”) and determining the fourth transformation matrix based on the second transformation matrix and the fifth transformation matrix. (Figs.2-3 and 4. Discussion and conclusion: “a multi-stage 3D–2D registration algorithm (msLevelCheck) for mapping label annotations (e.g. 
vertebral labels or other point features demarked in preoperative 3D images as part of existing clinical workflow) to intraoperative radiographs under conditions of strong anatomical deformation. The multi-stage approach amounts to 3D–2D registration that is locally rigid (at progressively finer scales) and yet globally deformable (with respect to mapping of label annotations from the 3D image to the radiograph). … it produces a series of rigid transformations by which point annotation landmarks are transformed independently, and thus deformably with respect to the underlying image.”) Regarding claim 23, Ketcha discloses a non-transitory computer-readable storage medium storing one or more computer instructions, wherein when a computer reads the one or more computer instructions from the storage medium, (2.1.4: Optimization: “ computation time and reduce GPU memory,”) the computer executes a method for registration, (Abstract: “A multi-stage image-based 3D–2D registration method is presented that maps annotations in a 3D image”) the method comprising: obtaining a three-dimensional (3D) image (Preoperative CT image) of a target object captured before a surgery (Fig.2 : Automatically Masked CT volume) and at least one two-dimensional (2D) image (Intraoperative radiographs) captured during the surgery (Fig.2 Intra-Op Projection Radiograph(p); (2.1. Rigid 3D–2D registration framework: “the method intends to aid target localization by mapping vertebral labels defined in the preoperative CT image (or MR image (De Silva et al 2017)) to the intraoperative radiograph via image-based 3D–2D registration.”; 2.3.1. Single-stage registration with sub-image extent n1.: “retrospective clinical data set of 24 patients undergoing thoracolumbar spine surgery, consisting of 24 CT images and 61 intraoperative radiographs. 
Preoperative CT included data from three scanner manufacturers … Intraoperative radiographs were all acquired with a mobile radiography system”) obtaining a registered 3D image and a first transformation matrix between a 3D image coordinate system corresponding to the 3D image and a surgical space coordinate system corresponding to the surgery by performing, based on the at least one 2D image, posture transformation on the 3D image; (2.1.2. Projection geometry and DRR formation: “[Fo]rward projections were computed within a fixed camera geometry with a virtual detector centered at the origin and an x-ray point source positioned at (xs, ys, zs) with zs defined to be perpendicular to the detector plane… A rigid 6DOF transformation, Tr, consisting of 3 translations (xr, yr, zr) and 3 rotations (ηr, θr, φr) defined the pose of the CT volume within this camera geometry. Given the geometry and CT position, a projective transformation matrix, T3×4, was defined to map a location in the CT coordinate system (x, y, z) to its projected location in the DRR (u, v) according to equation (1): … where c is a constant that normalizes the third element of the 2D position vector. The DRR was generated via ray-tracing (Cabral et al 1994) with line integrals computed using trilinear interpolation.”; 2.1.4. Optimization: “To distribute these initializations, a plane-splitting kD tree partitioning of the search space (Bentley 1975) was implemented where the search space was divided by iteratively splitting the largest current subspace in half. The number of multi-starts (MS = 50), the population sampling size (λ = 125), and the search range (SR) along each 6DOF dimension (±[100 mm, 200 mm, 75 mm, 15°, 10°, 10°]) were selected based on a sensitivity study using a clinical image dataset, considering trade-offs in computation time, robustness, and initialization error.
The chosen SR values reflect the assumption of a coarse estimate longitudinal initialization but fairly accurate rotational initialization that comes from knowledge of patient positioning (e.g. knowing that the image is a lateral radiograph).”) determining a 2D target image corresponding to a target part of the target object in each of the at least one 2D image and determining a 3D target image corresponding to the target part in the registered 3D image; (2.1.1. Binary volume masking; 2.2.1. Multi-stage progression. The key feature of msLevelCheck is that at each subsequent stage, k, the 3D image is divided into multiple 3D sub-images, each focusing on (possibly overlapping) local regions and are independently registered to the p (2D image) using the outputs from the previous stage (Tr;k−1) to determine the initialization. the sub-images are further divided to focus on smaller, increasingly local 3D regions until the final stage at which the output registration transforms are used to compute annotation locations on the 2D image. the multi-stage framework yields a transformation of the annotations from the 3D CT to the 2D radiograph that is globally deformable yet locally rigid to improve the registration accuracy at each annotation”) and obtaining a target transformation matrix by performing, based on the 3D target image and the 2D target image of each 2D image, posture transformation on the registered 3D image to optimize the first transformation matrix. (2.2.4. Scaling optimization parameters. The accuracy is expected to gradually improve as the multi-stage registration progresses, and registration parameters are accordingly adjusted to a finer range and scale. 
As the transformation estimate approaches the solution at each stage, parameters governing the search range (SR, as outlined in section 2.1.4) are scaled to better suit the smaller region of interest and improve registration runtime.”) Regarding claim 24, Ketcha discloses a system for registration, (Abstract: “A multi-stage image-based 3D–2D registration method is presented that maps annotations in a 3D image”) comprising: at least one storage device including a set of instructions; and at least one processor configured to communicate with the at least one storage device, wherein when executing the set of instructions, the at least one processor is configured to direct the system to perform operations (2.1.4: Optimization: “ computation time and reduce GPU memory,”) including: obtaining a three-dimensional (3D) image (Preoperative CT image) of a target object captured before a surgery (Fig.2 : Automatically Masked CT volume) and at least one two-dimensional (2D) image (Intraoperative radiographs) captured during the surgery (Fig.2 Intra-Op Projection Radiograph(p); (2.1. Rigid 3D–2D registration framework: “the method intends to aid target localization by mapping vertebral labels defined in the preoperative CT image (or MR image (De Silva et al 2017)) to the intraoperative radiograph via image-based 3D–2D registration.”; 2.3.1. Single-stage registration with sub-image extent n1.: “retrospective clinical data set of 24 patients undergoing thoracolumbar spine surgery, consisting of 24 CT images and 61 intraoperative radiographs. Preoperative CT included data from three scanner manufacturers … Intraoperative radiographs were all acquired with a mobile radiography system”) obtaining a registered 3D image and a first transformation matrix between a 3D image coordinate system corresponding to the 3D image and a surgical space coordinate system corresponding to the surgery by performing, based on the at least one 2D image, posture transformation on the 3D image; (2.1.2. 
Projection geometry and DRR formation: “[Fo]rward projections were computed within a fixed camera geometry with a virtual detector centered at the origin and an x-ray point source positioned at (xs, ys, zs) with zs defined to be perpendicular to the detector plane… A rigid 6DOF transformation, Tr, consisting of 3 translations (xr, yr, zr) and 3 rotations (ηr, θr, φr) defined the pose of the CT volume within this camera geometry. Given the geometry and CT position, a projective transformation matrix, T3×4, was defined to map a location in the CT coordinate system (x, y, z) to its projected location in the DRR (u, v) according to equation (1): … where c is a constant that normalizes the third element of the 2D position vector. The DRR was generated via ray-tracing (Cabral et al 1994) with line integrals computed using trilinear interpolation.”; 2.1.4. Optimization: “To distribute these initializations, a plane-splitting kD tree partitioning of the search space (Bentley 1975) was implemented where the search space was divided by iteratively splitting the largest current subspace in half. The number of multi-starts (MS = 50), the population sampling size (λ = 125), and the search range (SR) along each 6DOF dimension (±[100 mm, 200 mm, 75 mm, 15°, 10°, 10°]) were selected based on a sensitivity study using a clinical image dataset, considering trade-offs in computation time, robustness, and initialization error. The chosen SR values reflect the assumption of a coarse estimate longitudinal initialization but fairly accurate rotational initialization that comes from knowledge of patient positioning (e.g. knowing that the image is a lateral radiograph).”) determining a 2D target image corresponding to a target part of the target object in each of the at least one 2D image and determining a 3D target image corresponding to the target part in the registered 3D image; (2.1.1. Binary volume masking; 2.2.1. Multi-stage progression.
The key feature of msLevelCheck is that at each subsequent stage, k, the 3D image is divided into multiple 3D sub-images, each focusing on (possibly overlapping) local regions, and are independently registered to the p (2D image) using the outputs from the previous stage (Tr;k−1) to determine the initialization. The sub-images are further divided to focus on smaller, increasingly local 3D regions until the final stage at which the output registration transforms are used to compute annotation locations on the 2D image. The multi-stage framework yields a transformation of the annotations from the 3D CT to the 2D radiograph that is globally deformable yet locally rigid to improve the registration accuracy at each annotation”) and obtaining a target transformation matrix by performing, based on the 3D target image and the 2D target image of each 2D image, posture transformation on the registered 3D image to optimize the first transformation matrix. (2.2.4. Scaling optimization parameters. “The accuracy is expected to gradually improve as the multi-stage registration progresses, and registration parameters are accordingly adjusted to a finer range and scale. As the transformation estimate approaches the solution at each stage, parameters governing the search range (SR, as outlined in section 2.1.4) are scaled to better suit the smaller region of interest and improve registration runtime.”) Regarding claim 25, Ketcha discloses wherein the obtaining a registered 3D image and a first transformation matrix between a 3D image coordinate system corresponding to the 3D image and a surgical space coordinate system corresponding to the surgery by performing, based on the at least one 2D image, posture transformation on the 3D image includes: obtaining a down sampled 3D image and at least one down sampled 2D image by down sampling, based on a preset multiplier, the 3D image and the at least one 2D image; (2.1.2.
Projection geometry and DRR formation: “A rigid 6DOF transformation, Tr, consisting of 3 translations (xr, yr, zr) and 3 rotations (ηr, θr, φr) defined the pose of the CT volume within this camera geometry … To achieve pixel-wise correspondence, the virtual detector was defined to have dimensions and pixel size identical to the projection image, which has been resampled to a specified isotropic pixel size (apix) and rectangularly cropped to exclude collimator edges … the volume was downsampled isotropically to apix/2. The step length for ray casting was chosen to be 2 voxels (equivalently, apix)”; 2.2.5. Enhancing structural image features: “the downsampling of p is reduced (by decreasing apix) along with the kernel width σ (characteristic width of the Gaussian smoothing kernel) for the image gradient calculation”) obtaining an adjusted down sampled 3D image by adjusting, based on a preset step size, a posture corresponding to the down sampled 3D image; (2.1.2. Projection geometry and DRR formation: “To achieve pixel-wise correspondence, the virtual detector was defined to have dimensions and pixel size identical to the projection image, which has been resampled to a specified isotropic pixel size (apix) and rectangularly cropped to exclude collimator edges … a soft tissue threshold of 150 HU (shown previously to be insensitive to the particular threshold choice in the range 50–300 HU) was applied to the CT image (setting the value to 0 if below) to remove low attenuation regions in the forward projection”) for each of the at least one 2D image, obtaining a first projection image corresponding to the 2D image by projecting, based on a capturing posture of the 2D image, the adjusted down sampled 3D image; (2.2.5. Enhancing structural image features: “the downsampling of p is reduced (by decreasing apix) along with the kernel width σ (characteristic width of the Gaussian smoothing kernel) for the image gradient calculation … the p image is cropped to contain only the region that is defined by the search range and sub-image extent of the current registration.”) in response to determining that the at least one first projection image and the at least one downsampled 2D image satisfy a first preset condition, obtaining the registered 3D image by upsampling, based on the preset multiplier, the adjusted downsampled 3D image; and determining the first transformation matrix based on the posture transformation process from the 3D image to the registered 3D image. (2.1.2. Projection geometry and DRR formation: “The DRR was generated via ray-tracing (Cabral et al 1994) with line integrals computed using trilinear interpolation. To achieve pixel-wise correspondence, the virtual detector was defined to have dimensions and pixel size identical to the projection image, which has been resampled to a specified isotropic pixel size (apix) and rectangularly cropped to exclude collimator edges and burnt-in text annotations. … was applied to the CT image (setting the value to 0 if below) to remove low attenuation regions in the forward projection, and basic overlap of the DRR with the projection image was ensured by translating the CT volume along the longitudinal direction of the patient,”) Regarding claim 26, Ketcha discloses further comprising: determining a first similarity degree based on the at least one first projection image and the at least one downsampled 2D image; and in response to determining that the first similarity degree is larger than a similarity threshold, determining that the at least one first projection image and the at least one down sampled 2D image satisfy the first preset condition. (2.1.3. Similarity metric.
Similarity between the DRR and the intraoperative radiograph was evaluated using Gradient Orientation (GO). The GO similarity was defined in equation (2) and reflects the pixel-wise similarity in gradient direction among the pixels w′ whose gradient magnitude passes a threshold t in both images, defined as the median gradient intensity. Here, θi is the angle difference (radians) in gradient direction between the DRR and p at pixel i.”) Regarding claim 27, Ketcha discloses after for each of the at least one 2D image, obtaining the first projection image corresponding to the 2D image by projecting, based on the capturing posture of the 2D image, the adjusted downsampled 3D image, the method further comprises: in response to determining that the at least one first projection image and the at least one downsampled 2D image do not satisfy the first preset condition, (2.2.5. Enhancing structural image features: “the p image is cropped to contain only the region that is defined by the search range and sub-image extent of the current registration. Adaptive histogram equalization is applied to the radiograph to locally enhance the contrast and thereby accentuate structures that may otherwise fall beneath the gradient threshold applied during GO calculation, an effect that becomes increasingly likely as the impact of noise rises due to the reduction in down-sampling and gradient kernel width.”) adjusting the posture corresponding to the adjusted downsampled 3D image based on the preset step size; (2.1.2.
Projection geometry and DRR formation: “To achieve pixel-wise correspondence, the virtual detector was defined to have dimensions and pixel size identical to the projection image, which has been resampled to a specified isotropic pixel size (apix) and rectangularly cropped to exclude collimator edges … a soft tissue threshold of 150 HU (shown previously to be insensitive to the particular threshold choice in the range 50–300 HU) was applied to the CT image (setting the value to 0 if below) to remove low attenuation regions in the forward projection”) and repeating the operation of obtaining the first projection image corresponding to each of the at least one 2D image until the at least one first projection image and the at least one downsampled 2D image satisfy the first preset condition. (2.1.3. Similarity metric. Similarity between the DRR and the intraoperative radiograph was evaluated using Gradient Orientation (GO), to provide a high degree of robustness against image content mismatch (e.g. presence of surgical tools in the radiograph but not the CT) as well as poor radiographic image quality.”; 2.1.4. Optimization: “CMA-ES is a stochastic, derivative-free optimization method where, to provide robustness against local minima, the update (which includes both the parameter estimate and covariance matrix) at each iteration is determined by sampling a total of λ points in the parameter space. Sampling is performed according to a Gaussian distribution defined by the covariance matrix and the current parameter estimate.”) Regarding claim 29, Ketcha discloses the determining the first transformation matrix based on the posture transformation process from the 3D image to the registered 3D image includes: obtaining a second transformation matrix between a 2D imaging device coordinate system and the surgical space coordinate system; (Fig.5 and 2.3.1. 
Single-stage registration with sub-image extent n1: “The robustness of single-stage registration for such small sub-images was evaluated in an IRB approved retrospective clinical data set of 24 patients undergoing thoracolumbar spine surgery, consisting of 24 CT images and 61 intraoperative radiographs … Intraoperative radiographs were all acquired with a mobile radiography system … Binary volumetric masks were automatically generated with the number of adjacent vertebrae (n1) ranging from 7 down to 5, 3, and 1, centered on a central vertebra in the radiograph.”) obtaining a third transformation matrix between the 3D image coordinate system and the 2D imaging device coordinate system based on the posture transformation process from the 3D image to the registered 3D image; (2.1: “Rigid registration is performed by determining a rigid 6 degree-of-freedom (6DOF) transformation of the CT image that optimizes the similarity between the digitally reconstructed radiograph (DRR) and the intraoperative radiograph (p).”) and determining the first transformation matrix based on the second transformation matrix and the third transformation matrix. (Fig.2 and 4. Discussion and conclusion: “The multi-stage approach amounts to 3D–2D registration that is locally rigid (at progressively finer scales) and yet globally deformable (with respect to mapping of label annotations from the 3D image to the radiograph).
… it produces a series of rigid transformations by which point annotation landmarks are transformed independently, and thus deformably with respect to the underlying image.”) Regarding claim 30, Ketcha discloses the determining a 2D target image corresponding to a target part of the target object in each of the at least one 2D image and determining a 3D target image corresponding to the target part in the registered 3D image includes: determining the 3D target image corresponding to the target part in the registered 3D image; for each of the at least one 2D image, obtaining a fourth transformation matrix between a 2D image coordinate system corresponding to the 2D image and the surgical space coordinate system; (Fig.5 and 2.3.1. Single-stage registration with sub-image extent n1: “The robustness of single-stage registration for such small sub-images was evaluated in an IRB approved retrospective clinical data set of 24 patients undergoing thoracolumbar spine surgery, consisting of 24 CT images and 61 intraoperative radiographs … Intraoperative radiographs were all acquired with a mobile radiography system … Binary volumetric masks were automatically generated with the number of adjacent vertebrae (n1) ranging from 7 down to 5, 3, and 1, centered on a central vertebra in the radiograph.”) and determining the 2D target image in the 2D image based on the 3D target image, the first transformation matrix, and the fourth transformation matrix. (Figs.2-3 and 4. Discussion and conclusion: “a multi-stage 3D–2D registration algorithm (msLevelCheck) for mapping label annotations (e.g. vertebral labels or other point features demarked in preoperative 3D images as part of existing clinical workflow) to intraoperative radiographs under conditions of strong anatomical deformation.
The multi-stage approach amounts to 3D–2D registration that is locally rigid (at progressively finer scales) and yet globally deformable (with respect to mapping of label annotations from the 3D image to the radiograph). … it produces a series of rigid transformations by which point annotation landmarks are transformed independently, and thus deformably with respect to the underlying image.”) Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. Claim(s) 5-6, 11, 28 and 31 is/are rejected under 35 U.S.C. 
103 as being unpatentable over Ketcha et al (“Multi-stage 3D–2D registration for correction of anatomical deformation in image-guided spine surgery.”; Ketcha), and in view of Frank et al (U.S. 20120027261 A1; Frank). Regarding claim 5, Ketcha discloses the claimed invention except for the limitation wherein the in response to determining that at least one first projection image and the at least one downsampled 2D image satisfy a first preset condition, obtaining the registered 3D image by upsampling, based on the preset multiplier, the adjusted downsampled 3D image includes: in response to determining that the at least one first projection image and the at least one downsampled 2D image satisfy the first preset condition, and the preset multiplier does not satisfy a second preset condition, updating the 3D image by upsampling, based on the preset multiplier, the adjusted downsampled 3D image; decreasing the preset multiplier; and downsampling, based on the decreased preset multiplier, the updated 3D image and the at least one 2D image to repeat the operation of obtaining the first projection image corresponding to each of the at least one 2D image until the at least one first projection image and the at least one downsampled 2D image satisfy the first preset condition and the decreased preset multiplier satisfies the second preset condition.
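For orientation only, the coarse-to-fine control flow recited in this limitation (register at a downsampled scale, upsample, decrease the multiplier, and repeat until the multiplier satisfies the second preset condition) can be sketched as follows. All names are illustrative, and the `register_at_scale` callback is a hypothetical stand-in for downsampling the 3D/2D images by the current multiplier and adjusting the posture until the similarity condition is met; this is a reading of the claim language, not any party's implementation:

```python
def coarse_to_fine_register(register_at_scale, multiplier, min_multiplier=1):
    """Repeat registration at decreasing downsampling multipliers.

    register_at_scale(m) stands in for: downsample the 3D and 2D images by m,
    adjust the pose until the projection/image similarity condition (the
    "first preset condition") is met, and return the resulting pose estimate.
    """
    history = []
    while True:
        pose = register_at_scale(multiplier)  # satisfy the first preset condition
        history.append((multiplier, pose))
        if multiplier <= min_multiplier:      # the "second preset condition"
            return pose, history
        multiplier //= 2                      # decrease the preset multiplier
```

Run with, say, an initial multiplier of 8, the loop visits scales 8, 4, 2, 1 and returns the pose estimated at full resolution, mirroring the claimed upsample/decrease/repeat sequence.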
Frank discloses the in response to determining that at least one first projection image and the at least one downsampled 2D image satisfy a first preset condition, obtaining the registered 3D image by upsampling, based on the preset multiplier, the adjusted downsampled 3D image includes: in response to determining that the at least one first projection image and the at least one downsampled 2D image satisfy the first preset condition, and the preset multiplier does not satisfy a second preset condition, updating the 3D image by upsampling, based on the preset multiplier, the adjusted downsampled 3D image; decreasing the preset multiplier; and downsampling, based on the decreased preset multiplier, the updated 3D image and the at least one 2D image to repeat the operation of obtaining the first projection image corresponding to each of the at least one 2D image until the at least one first projection image and the at least one downsampled 2D image satisfy the first preset condition and the decreased preset multiplier satisfies the second preset condition. (Fig.3 and block 101, 102, 103 and Paragraphs 59-61: “ Once the optional intensity adjustment at block 101 has been performed, the method may proceed to optional block 102, where the region of interest in the AP image 82 and the lateral image 84 may be adjusted. … Should the predefined box or window require adjustment, the surgeon may simply adjust the size and positioning of this region or box on the desired area of interest. … Once the region of interest, box or window in these views has been adjusted at block 102, the method may proceed to optional block 103, where alignment of the acquired lateral image with the lateral DRR may be performed using a similarity/cost measure on the adjusted region of interest from block 102. 
… By lining up both the 3D image data from the CT scan and the 2D image data from the fluoroscope, via aligning the lateral image with lateral DRR at optional step 103, the image data is already lined up and adjusted to perform the 2D to 3D registration in the refinement step.”; this shows that a person of ordinary skill in the art would understand that adjusting the intensity and the size of the region of interest to align the 3D image and the 2D image using a similarity/cost measure can be interpreted as upsampling and downsampling the 3D image and the 2D image based on the preset condition.) Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Ketcha to include adjusting the intensity and the region of interest and aligning the 3D image and the 2D image, as taught by Frank, to provide a method and apparatus for performing 2D to 3D registration; one of ordinary skill in the art would have been motivated to combine the references because doing so would improve the accuracy and efficiency of 2D to 3D registration and enhance the overall accuracy of the digitally reconstructed radiographs (DRRs). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention. Regarding claim 6, Ketcha, as modified by Frank, discloses the claimed invention. Frank further discloses: in response to determining that the preset multiplier is equal to a threshold, determining that the at least one first projection image and the at least one downsampled 2D image satisfy the second preset condition.
(Paragraph 60: “Once the region of interest, box or window in these views has been adjusted at block 102, the method may proceed to optional block 103, where alignment of the acquired lateral image with the lateral DRR may be performed using a similarity/cost measure on the adjusted region of interest from block 102. … By aligning both the two-dimensional DRR and the two-dimensional fluoro lateral image, updated or refined rotation and X/Y translation information (orientation and position) is generated for use later during the refinement process. … The refinement process uses at least two similarity and/or cost measures (similarity/cost measures) selected from normalized mutual information algorithms, mutual information algorithms, gradient difference algorithms, gradient algorithms, line contour algorithms, surface contour algorithms, and pattern intensity algorithms”; Paragraph 74: “This process changes position and orientation parameters simultaneously to maximize the similarity between the DRRs and the actual radiographs until an acceptable accuracy is met.”) Regarding claim 11, Ketcha discloses the claimed invention except for the limitation wherein the obtaining a target transformation matrix by performing, based on the 3D target image and the 2D target image in each of the at least one 2D image, posture transformation on the registered 3D image to optimize the first transformation matrix includes: for each of the at least one 2D image, obtaining a second projection image corresponding to the 2D image by projecting, based on a capturing posture of the 2D image, the 3D target image; determining a second similarity degree based on the at least one second projection image and the at least one 2D target image; and obtaining the target transformation matrix by adjusting, based on the second similarity degree, a posture corresponding to the registered 3D image to optimize the first transformation matrix.
Frank discloses the obtaining a target transformation matrix by performing, based on the 3D target image and the 2D target image in each of the at least one 2D image, posture transformation on the registered 3D image to optimize the first transformation matrix (Fig.3 and Paragraph 67: “the registration at block 110 is part of the refinement process, where the software uses image matching algorithms to refine the initial registration. … In this refinement process, similarity and/or cost measures are used to tell how well the image is matched. The iterative refinement algorithm changes position and orientation parameters simultaneously to maximize the similarity between the DRRs and the actual radiographs.”) includes: for each of the at least one 2D image, obtaining a second projection image corresponding to the 2D image by projecting, based on a capturing posture of the 2D image, the 3D target image; (Paragraph 12: “ acquiring the three-dimensional image data having first patient orientation information, acquiring a first two-dimensional image having second patient orientation information and acquiring a second two-dimensional image having third patient orientation information. 
The method further includes identifying a center of the body of interest in the first and second images, generating first and second digitally reconstructed radiographs, identifying the center of the body of interest in the first and second digitally reconstructed radiographs and registering the first and second two-dimensional images with the three-dimensional image data using at least a first similarity/cost measure and a second similarity/cost measure.”) determining a second similarity degree based on the at least one second projection image and the at least one 2D target image; (Paragraph 12: “identifying the center of the body of interest in the first and second digitally reconstructed radiographs and registering the first and second two-dimensional images with the three-dimensional image data using at least a first similarity/cost measure and a second similarity/cost measure.”; Paragraph 62: “From the initialization phase, the process proceeds to block 110, where refined 2D to 3D registration is performed using the position and orientation information calculated in the initialization phase as a starting point. … The similarity/cost measures are selected from generally two types of registration processes, which include the above-noted algorithms. These processes are either image-based or model-based. The image-based registration process is based on the two-dimensional fluoroscopic images and utilizes pattern or image intensity algorithms or is based upon identifying features within the objects. These features utilize known line contour algorithms or gradient algorithms. The model-based registration process is based upon the 3D captured data, such as the CT or MRI data.
This process generally includes obtaining surface models of the area of interest, based upon the 3D data and then generating projection rays based on the contour in the fluoroscopic images.”) and obtaining the target transformation matrix by adjusting, based on the second similarity degree, a posture corresponding to the registered 3D image to optimize the first transformation matrix. (Fig.3 and Paragraph 62, Paragraph 67: “the registration at block 110 is part of the refinement process, where the software uses image matching algorithms to refine the initial registration. … In this refinement process, similarity and/or cost measures are used to tell how well the image is matched. The iterative refinement algorithm changes position and orientation parameters simultaneously to maximize the similarity between the DRRs and the actual radiographs.”) Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Ketcha to include an initial orientation, an initial position, and image matching algorithms used to refine the registration, as taught by Frank, to provide a method and apparatus for performing 2D to 3D registration; one of ordinary skill in the art would have been motivated to combine the references because doing so would improve the accuracy and efficiency of 2D to 3D registration and enhance the overall accuracy of the digitally reconstructed radiographs (DRRs). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
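As an illustration of the refine-by-similarity scheme quoted above from Frank (project the 3D data at a candidate pose, score the projection against the 2D image, and adjust the pose parameters to maximize similarity), the following is a minimal sketch. The toy point projection, the negative sum-of-squared-distances similarity, and the coordinate-descent optimizer are simplified stand-ins (the references use DRR formation and similarity/cost measures such as mutual information or gradient orientation), and every name is illustrative:

```python
import math

def project(volume_points, pose):
    """Project 3D points to 2D after a rigid in-plane pose (tx, ty, theta).

    The projection simply rotates about z, translates, and drops z; this is a
    toy stand-in for DRR formation under a fixed camera geometry.
    """
    tx, ty, theta = pose
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y + tx, s * x + c * y + ty) for (x, y, _z) in volume_points]

def similarity(proj, target):
    """Negative sum of squared 2D distances; higher means more similar."""
    return -sum((u - a) ** 2 + (v - b) ** 2 for (u, v), (a, b) in zip(proj, target))

def refine_pose(volume_points, target_2d, init_pose, step=0.5, min_step=1e-3):
    """Coordinate-descent refinement: perturb each pose parameter, keep
    changes that improve similarity, and shrink the step when stuck
    (loosely analogous to scaling the search range at each stage)."""
    pose = list(init_pose)
    best = similarity(project(volume_points, pose), target_2d)
    while step > min_step:
        improved = False
        for i in range(3):
            for delta in (step, -step):
                trial = list(pose)
                trial[i] += delta
                score = similarity(project(volume_points, trial), target_2d)
                if score > best:
                    pose, best, improved = trial, score, True
        if not improved:
            step *= 0.5  # refine the search at a finer scale
    return tuple(pose), best
```

Starting from a zero pose and a target generated at a known translation, the loop recovers the pose to within the final step size, which is the behavior the quoted "changes position and orientation parameters simultaneously to maximize the similarity" passage describes at a high level.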
Regarding claim 28, Ketcha discloses the claimed invention except for the limitation wherein the in response to determining that at least one first projection image and the at least one downsampled 2D image satisfy a first preset condition, obtaining the registered 3D image by upsampling, based on the preset multiplier, the adjusted downsampled 3D image includes: in response to determining that the at least one first projection image and the at least one downsampled 2D image satisfy the first preset condition, and the preset multiplier does not satisfy a second preset condition, updating the 3D image by upsampling, based on the preset multiplier, the adjusted downsampled 3D image; decreasing the preset multiplier; and downsampling, based on the decreased preset multiplier, the updated 3D image and the at least one 2D image to repeat the operation of obtaining the first projection image corresponding to each of the at least one 2D image until the at least one first projection image and the at least one downsampled 2D image satisfy the first preset condition and the decreased preset multiplier satisfies the second preset condition.
Frank discloses the in response to determining that at least one first projection image and the at least one downsampled 2D image satisfy a first preset condition, obtaining the registered 3D image by upsampling, based on the preset multiplier, the adjusted downsampled 3D image includes: in response to determining that the at least one first projection image and the at least one downsampled 2D image satisfy the first preset condition, and the preset multiplier does not satisfy a second preset condition, updating the 3D image by upsampling, based on the preset multiplier, the adjusted downsampled 3D image; decreasing the preset multiplier; and downsampling, based on the decreased preset multiplier, the updated 3D image and the at least one 2D image to repeat the operation of obtaining the first projection image corresponding to each of the at least one 2D image until the at least one first projection image and the at least one downsampled 2D image satisfy the first preset condition and the decreased preset multiplier satisfies the second preset condition. (Fig.3 and block 101, 102, 103 and Paragraphs 59-61: “ Once the optional intensity adjustment at block 101 has been performed, the method may proceed to optional block 102, where the region of interest in the AP image 82 and the lateral image 84 may be adjusted. … Should the predefined box or window require adjustment, the surgeon may simply adjust the size and positioning of this region or box on the desired area of interest. … Once the region of interest, box or window in these views has been adjusted at block 102, the method may proceed to optional block 103, where alignment of the acquired lateral image with the lateral DRR may be performed using a similarity/cost measure on the adjusted region of interest from block 102. 
… By lining up both the 3D image data from the CT scan and the 2D image data from the fluoroscope, via aligning the lateral image with lateral DRR at optional step 103, the image data is already lined up and adjusted to perform the 2D to 3D registration in the refinement step.”; this shows that a person of ordinary skill in the art would understand that adjusting the intensity and the size of the region of interest to align the 3D image and the 2D image using a similarity/cost measure can be interpreted as upsampling and downsampling the 3D image and the 2D image based on the preset condition.) Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Ketcha to include adjusting the intensity and the region of interest and aligning the 3D image and the 2D image, as taught by Frank, to provide a method and apparatus for performing 2D to 3D registration; one of ordinary skill in the art would have been motivated to combine the references because doing so would improve the accuracy and efficiency of 2D to 3D registration and enhance the overall accuracy of the digitally reconstructed radiographs (DRRs). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention. Regarding claim 31, Frank discloses wherein the obtaining a target transformation matrix by performing, based on the 3D target image and the 2D target image in each of the at least one 2D image, posture transformation on the registered 3D image to optimize the first transformation matrix (Fig.3 and Paragraph 67: “the registration at block 110 is part of the refinement process, where the software uses image matching algorithms to refine the initial registration. … In this refinement process, similarity and/or cost measures are used to tell how well the image is matched.
The iterative refinement algorithm changes position and orientation parameters simultaneously to maximize the similarity between the DRRs and the actual radiographs.”) includes: for each of the at least one 2D image, obtaining a second projection image corresponding to the 2D image by projecting, based on a capturing posture of the 2D image, the 3D target image; (Paragraph 12: “ acquiring the three-dimensional image data having first patient orientation information, acquiring a first two-dimensional image having second patient orientation information and acquiring a second two-dimensional image having third patient orientation information. The method further includes identifying a center of the body of interest in the first and second images, generating first and second digitally reconstructed radiographs, identifying the center of the body of interest in the first and second digitally reconstructed radiographs and registering the first and second two-dimensional images with the three-dimensional image data using at least a first similarity/cost measure and a second similarity/cost measure.”) determining a second similarity degree based on the at least one second projection image and the at least one 2D target image; (Paragraph 12: “identifying the center of the body of interest in the first and second digitally reconstructed radiographs and registering the first and second two-dimensional images with the three-dimensional image data using at least a first similarity/cost measure and a second similarity/cost measure.”; Paragraph 62: “From the initialization phase, the process proceeds to block 110, where refined 2D to 3D registration is performed using the position and orientation information calculated in the initialization phase as a starting point. … he similarity/cost measures are selected from generally two types of registration processes, which include the above-noted algorithms. These processes are either image-based or model-based. 
The image-based registration process is based on the two-dimensional fluoroscopic images and utilizes pattern or image intensity algorithms or is based upon identifying features within the objects. These features utilize known line contour algorithms or gradient algorithms. The model-based registration process is based upon the 3D captured data, such as the CT or MRI data. This process generally includes obtaining surface models of the area of interest, based upon the 3D data and then generating projection rays based on the contour in the fluoroscopic images.”) and obtaining the target transformation matrix by adjusting, based on the second similarity degree, a posture corresponding to the registered 3D image to optimize the first transformation matrix. (Fig. 3 and Paragraph 62, Paragraph 67: “the registration at block 110 is part of the refinement process, where the software uses image matching algorithms to refine the initial registration. … In this refinement process, similarity and/or cost measures are used to tell how well the image is matched. The iterative refinement algorithm changes position and orientation parameters simultaneously to maximize the similarity between the DRRs and the actual radiographs.”)

Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Ketcha by including an initial orientation, an initial position, and image matching algorithms used to refine registration, as taught by Frank, to arrive at a method and apparatus for performing 2D to 3D registration; one of ordinary skill in the art would have been motivated to combine the references because this would improve the accuracy and efficiency of 2D to 3D registration as well as enhance the overall accuracy of the digitally reconstructed radiographs (DRRs).
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Liu et al. (U.S. 20120069176 A1), “Marker-Free Tracking Registration and Calibration for EM-Tracked Endoscopic System”, teaches a system and method for image-based registration between images that includes locating a feature in a pre-operative image and comparing real-time images taken with a scope with the pre-operative image taken of the feature to find a real-time image that closely matches the pre-operative image. A closest-match real-time image is registered to the pre-operative image to determine a transformation matrix between a position of the pre-operative image and a position of the real-time image such that the transformation matrix permits tracking real-time image coordinates in pre-operative image space. Derda et al. (U.S. 20180150960 A1), “Registering Three-Dimensional Image Data of an Imaged Object with a Set of Two-Dimensional Projection Images of the Object”, teaches three-dimensional image data of an imaged object, such as the bone structure of a patient, comprising first and second rigid parts movably connected to each other. Sub-regions within the three-dimensional image data are divided into at least first image data and second image data. A set of two-dimensional projection images of the imaged object are taken from first and second different projection directions, while the first and the second rigid parts are in a second state of position and orientation.
A processing device registers the first image data with the set of two-dimensional projection images and separately registers the second image data with the set of two-dimensional projection images to obtain first and second registration information, respectively, which is used to determine the position and orientation of the first and second rigid parts in the second state. Carrell et al. (U.S. 20190304108 A1), “Deformation Correction”, teaches a method for adapting 3D image datasets so that they can be registered and combined with 2D images of the same subject, wherein deformation or movement of parts of the subject has occurred between obtaining the 3D image and the 2D image. 2D-3D registrations of the images with respect to multiple features visible in both images are used to provide point correspondences between the images in order to provide an interpolation function that can be used to determine the position of a feature visible in the first image but not the second image and thus mark the location of the feature on the second image.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Duy A Tran, whose telephone number is (571) 272-4887. The examiner can normally be reached Monday-Friday, 8:00 am - 5:00 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ONEAL R MISTRY, can be reached at (313) 446-4912. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center.
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/DUY TRAN/
Examiner, Art Unit 2674

/ONEAL R MISTRY/
Supervisory Patent Examiner, Art Unit 2674
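The iterative refinement the Office Action quotes from Frank (pose parameters adjusted to maximize a similarity/cost measure between the DRRs and the actual radiographs) can be sketched as follows. This is a minimal toy illustration only, not the reference's actual algorithm: it uses a translation-only pose, a sum-projection as a stand-in DRR, normalized cross-correlation as the similarity measure, and greedy hill-climbing as the optimizer; the function names (`ncc`, `project`, `refine`) and the synthetic volume are assumptions for the sketch.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation: the similarity measure scored each iteration."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def project(volume, dy=0, dx=0):
    """Toy DRR: sum the volume along one axis, then apply an in-plane shift."""
    drr = volume.sum(axis=0)
    return np.roll(np.roll(drr, dy, axis=0), dx, axis=1)

def refine(volume, radiograph, steps=50):
    """Greedy iterative refinement: nudge each pose parameter (here just an
    in-plane translation) in whichever direction raises the similarity,
    stopping when no single-step change improves the score."""
    pose = [0, 0]
    best = ncc(project(volume, *pose), radiograph)
    for _ in range(steps):
        improved = False
        for i in (0, 1):
            for delta in (-1, 1):
                trial = list(pose)
                trial[i] += delta
                score = ncc(project(volume, *trial), radiograph)
                if score > best:
                    pose, best, improved = trial, score, True
        if not improved:
            break
    return tuple(pose), best

# Synthetic example: a bright block inside an otherwise near-empty volume.
rng = np.random.default_rng(0)
volume = rng.normal(0, 0.01, (16, 32, 32))
volume[4:12, 10:18, 8:16] += 1.0

# "Actual radiograph": the projection at an unknown true offset.
radiograph = project(volume, dy=3, dx=-2)

pose, score = refine(volume, radiograph)
print(pose, round(score, 3))  # recovered offset should match (3, -2)
```

A real implementation would optimize all six rigid-body parameters (three rotations, three translations) with a gradient-free or gradient-based optimizer over physically ray-cast DRRs, but the loop structure (project, score similarity, adjust pose, repeat) is the same one the quoted passages describe.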

Prosecution Timeline

Jan 30, 2024
Application Filed
Feb 07, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12573024
IMAGE AUGMENTATION FOR MACHINE LEARNING BASED DEFECT EXAMINATION
2y 5m to grant Granted Mar 10, 2026
Patent 12561934
AUTOMATIC ORIENTATION CORRECTION FOR CAPTURED IMAGES
2y 5m to grant Granted Feb 24, 2026
Patent 12548284
METHOD FOR ANALYZING ONE OR MORE ELEMENT(S) OF ONE OR MORE PHOTOGRAPHED OBJECT(S) IN ORDER TO DETECT ONE OR MORE MODIFICATION(S), AND ASSOCIATED ANALYSIS DEVICE
2y 5m to grant Granted Feb 10, 2026
Patent 12530798
LEARNED FORENSIC SOURCE SYSTEM FOR IDENTIFICATION OF IMAGE CAPTURE DEVICE MODELS AND FORENSIC SIMILARITY OF DIGITAL IMAGES
2y 5m to grant Granted Jan 20, 2026
Patent 12505539
CELL BODY SEGMENTATION USING MACHINE LEARNING
2y 5m to grant Granted Dec 23, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
81%
Grant Probability
99%
With Interview (+17.5%)
3y 1m
Median Time to Grant
Low
PTA Risk
Based on 128 resolved cases by this examiner. Grant probability derived from career allow rate.
