Prosecution Insights
Last updated: April 19, 2026
Application No. 18/316,276

METHODS AND APPARATUS FOR THREE-DIMENSIONAL RECONSTRUCTION

Non-Final OA (§102, §103)
Filed: May 12, 2023
Examiner: COFINO, JONATHAN M
Art Unit: 2614
Tech Center: 2600 — Communications
Assignee: Jointvue LLC
OA Round: 1 (Non-Final)

Grant Probability: 62% (Moderate)
Predicted OA Rounds: 1-2
Predicted Time to Grant: 2y 4m
Grant Probability With Interview: 94%

Examiner Intelligence

Career Allow Rate: 62% (grants 62% of resolved cases; 130 granted / 210 resolved; at TC average)
Interview Lift: +32.2% for resolved cases with interview (strong)
Avg Prosecution: 2y 4m (typical timeline); 13 applications currently pending
Total Applications: 223 across all art units (career history)

Statute-Specific Performance

§101: 6.4% (-33.6% vs TC avg)
§103: 64.7% (+24.7% vs TC avg)
§102: 10.2% (-29.8% vs TC avg)
§112: 12.3% (-27.7% vs TC avg)

Tech Center averages are estimates. Based on career data from 210 resolved cases.
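As a sanity check, the headline metrics above can be reproduced from the reported counts. This is a hypothetical back-of-the-envelope script; the formulas are assumptions about how the tool derives its figures, not documented behavior of the analytics product.

```python
# Reported counts from the examiner profile above.
granted, resolved = 130, 210

# Career allow rate: granted / resolved, shown rounded to 62% in the report.
career_allow_rate = granted / resolved
print(f"{career_allow_rate:.1%}")  # 61.9%

# "With Interview" figure, assuming it is simply the base grant
# probability plus the reported interview lift (an assumption).
base_probability = 0.62
interview_lift = 0.322
print(f"{base_probability + interview_lift:.0%}")  # 94%
```

The rounding is consistent: 130/210 ≈ 61.9% displays as 62%, and 62% + 32.2% ≈ 94% matches the "With Interview" headline.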

Office Action

§102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on/after Mar. 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-4, 23-25, 27, 62-64, 84-86, and 88 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Mahfouz et al. (U.S. PG-PUB 2013/0144135, 'MAHFOUZ-2013').

Regarding claim 1 and claim 62, MAHFOUZ-2013 discloses a method of generating a virtual 3-D patient-specific bone/anatomical model, the method comprising: obtaining a preliminary virtual 3-D bone model of a first bone (MAHFOUZ-2013; FIG. 7, element 230=> ‘LOAD AVERAGE BONE MODEL FROM STATISTICAL ATLAS’, element 232=> ‘LOAD BONE MODEL FROM STATISTICAL ATLAS BASED ON DEMOGRAPHICS OF PATIENT’; ¶ 0107; “… the average model ("AVERAGE" branch of Block 210) is loaded (Block 230) or a subset model is selected ("SELECTED" branch of Block 210) from the statistical atlas based on demographics that are similar to the patient and loaded (Block 232) for optimization.”); obtaining a supplemental image of the first bone (MAHFOUZ-2013; FIG.
7; ¶ 0070; “… construction of a 3-D patient-specific bone model … is described. The method begins with acquiring … RF signals from an A-mode ultrasound beam scan of a bone. To acquire the RF signals for creating the 3-D patient-specific model of the knee joint 114, the patient's knee joint 114 is positioned and held in one of the two or more degrees of flexion (Block 152).”); registering the preliminary virtual 3-D bone model of the first bone with the supplemental image of the first bone (MAHFOUZ-2013; FIG. 7; ¶ 0096; “With the bone contours isolated from each of the RF signals, the bone contours may now be transformed into a point cloud. … the resultant bone contours 180 may then undergo registration with the optical system to construct a bone point cloud 194 representing the surface of at least a portion of each scanned bone (Block 186), which is described herein as a multiple step registration process. … the process is a two-step registration process. The registration step (Block 186) begins by transforming the resultant bone contour 180 from a 2D contour in the ultrasound frame into a 3-D contour in the world frame (Block 188). This transformation is applied to all resultant bone contours 180 extracted from all of the acquired RF signals 142.” FIG. 7; ¶ 0107; “The bone point cloud 194 is then applied to the loaded model (Block 234) so that the shape descriptors of the loaded model may be changed to create the 3-D patient-specific model. If desired, … shape descriptor(s) may be constrained ("YES" branch of Block 254) so that the 3-D patient-specific model will have the same anatomical characteristics as the loaded model.”); extracting geometric information about the first bone from the supplemental image of the first bone (MAHFOUZ-2013; FIGS. 7, 16B; ¶ 0074; “After all data and RF signal acquisition is complete, the computer 54 … automatically isolates that portion of the RF signal, i.e., the bone contour, from each of the … RF signals. 
… the computer 54 may sample the echoes comprising the RF signals to extract a bone contour for generating a 3-D point cloud 165 (Block 164).”); and generating a refined virtual 3-D patient-specific bone model of the first bone (MAHFOUZ-2013; FIG. 7, Block 260; ¶ 0119; “An ultrasound procedure … may … generate approximately 5000 ultrasound images. The generated 3-D patient-specific models …, when compared against CT-based segmented models, yielded an average error of approximately 2 mm.”) by refining the preliminary virtual 3-D bone model of the first bone using the geometric information about the first bone from the supplemental image of the first bone (MAHFOUZ-2013; FIG. 7; ¶ 0107; “The bone point cloud 194 is then applied to the loaded model (Block 234) so that the shape descriptors of the loaded model may be changed to create the 3-D patient-specific model. If desired, … shape descriptor(s) may be constrained ("YES" branch of Block 254) so that the 3-D patient-specific model will have the same anatomical characteristics as the loaded model. … the … shape descriptor(s) are set (Block 238). With the constraints set, the loaded model may be deformed (or optimized [‘refining’]) (Block 240) into a model that resembles the appropriate bone and not an irregularly, randomly shaped model. If no constraints are desired ("NO" branch of Block 240) and then the loaded model is optimized (Block 240).” FIG. 19; ¶ 0108; “Changing the shape descriptors to optimize the loaded model (Block 240) may be carried out by … optimization algorithm(s) [‘refining’], … to find the values of the principal components coefficients to create the 3-D patient-specific new model … The illustrated optimization algorithm includes a two-step optimization method of successively-applied algorithms to obtain the 3-D patient-specific model that best fits the bone point cloud 194 …”). 
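The atlas-based deformation cited above (MAHFOUZ-2013, ¶¶ 0107-0108: adjusting the principal-component "shape descriptor" coefficients so the loaded model best fits the bone point cloud) can be sketched as a least-squares fit. This is an illustrative simplification, not the reference's actual two-step optimization; all function and variable names are hypothetical, and point correspondence between model and cloud is assumed known.

```python
import numpy as np

def fit_statistical_shape_model(mean_shape, modes, target_points):
    """Solve for PCA coefficients that best fit a corresponded point cloud.

    mean_shape:    (3N,) flattened average bone model from the atlas
    modes:         (3N, k) principal-component modes of the atlas
    target_points: (3N,) flattened, corresponded bone point cloud
    """
    residual = target_points - mean_shape
    # Least-squares solve for the mode coefficients explaining the residual.
    coeffs, *_ = np.linalg.lstsq(modes, residual, rcond=None)
    refined_model = mean_shape + modes @ coeffs
    return refined_model, coeffs

# Demo with a synthetic atlas: recover known coefficients exactly.
rng = np.random.default_rng(0)
mean = rng.normal(size=30)                         # 10 points, flattened
modes, _ = np.linalg.qr(rng.normal(size=(30, 3)))  # 3 orthonormal modes
target = mean + modes @ np.array([1.5, -0.5, 2.0])
refined, coeffs = fit_statistical_shape_model(mean, modes, target)
```

In the reference, the coefficients may additionally be constrained (the "YES" branch of Block 254) so the deformed model keeps the anatomical character of the loaded model rather than drifting to an irregular, randomly shaped result.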
Regarding claim 2 and claim 63, MAHFOUZ-2013 discloses the method of claim 1 and the method of claim 62, wherein obtaining the preliminary 3-D bone model comprises obtaining a point cloud of the first bone and reconstructing the preliminary 3-D bone model by morphing a generalized 3-D bone model using the point cloud of the first bone (MAHFOUZ; FIG. 7; ¶ 0107; “The bone point cloud 194 is then applied to the loaded model (Block 234) so that the shape descriptors of the loaded model may be changed to create the 3-D patient-specific model. If desired, … shape descriptor(s) may be constrained ("YES" branch of Block 254) so that the 3-D patient-specific model will have the same anatomical characteristics as the loaded model. … the … shape descriptor(s) are set (Block 238). With the constraints set, the loaded model may be deformed … (Block 240) into a model that resembles the appropriate bone [‘reconstructing the preliminary 3-D bone model by morphing a generalized 3-D bone model’] and not an irregularly, randomly shaped model.”). 
Regarding claim 3 and claim 64, MAHFOUZ-2013 discloses the method of claim 2 and the method of claim 63, wherein obtaining the point cloud of the first bone utilizes a first imaging modality (MAHFOUZ; ¶ 0053; “To generate the 3-D patient-specific model, … raw RF signals [are] acquired using A-mode ultrasound … A bone contour is then isolated in each of the … RF signals and transformed into a point cloud [which] [is] used to optimize a 3-D model of the bone such that the patient-specific model [is] generated.”); wherein obtaining the supplemental image of the first bone utilizes a second imaging modality (MAHFOUZ; ¶ 0004; “One … method of forming patient-specific models is the use of previously-acquired X-Ray images as a priori information to guide the morphing of a template bone model whose projection matches the X-Ray images.”); and wherein the first imaging modality is different than the second imaging modality ([The Examiner notes that ultrasound and x-ray are different imaging modalities.]). Regarding claim 4, MAHFOUZ-2013 discloses the method of claim 3, wherein the first imaging modality comprises ultrasound (MAHFOUZ; FIG. 7; ¶ 0070; “… one method 150 of acquiring data for construction of a 3-D patient-specific bone model … is described. The method begins with acquiring … RF signals from an A-mode ultrasound beam scan of a bone.”). 
Regarding claim 23 and claim 84, MAHFOUZ-2013 discloses the method of claim 1 and the method of claim 62, wherein registering the preliminary 3-D bone model of the first bone with the supplemental image of the first bone comprises solving for a pose of the preliminary 3-D bone model which produces a 2-D projection corresponding to a projection of the supplemental image (MAHFOUZ; ¶ 0004; “One alternative method of forming patient-specific models [‘solving for a pose’] is the use of previously-acquired X-Ray images as a priori information to guide the morphing of a template bone model [‘preliminary 3-D bone model’] whose projection matches the X-Ray images [‘projection of the supplemental image’].”). Regarding claim 24 and claim 85, MAHFOUZ-2013 discloses the method of claim 1 and the method of claim 62, wherein obtaining a supplemental image of the first bone comprises obtaining … supplemental images of the first bone (MAHFOUZ-2013; FIG. 7, elements 152, 154, 156, 158; ¶ 0070-72); wherein registering the preliminary virtual 3-D bone model of the first bone with the supplemental image of the first bone comprises registering the preliminary virtual 3-D bone model of the first bone with the … supplemental images of the first bone (MAHFOUZ-2013; FIG. 7, element 234; ¶ 0107; “The bone point cloud 194 is then applied to the loaded model (Block 234) so that the shape descriptors of the loaded model may be changed to create the 3-D patient-specific model.”); wherein extracting geometric information about the first bone from the supplemental images of the first bone comprises extracting geometric information about the first bone from the … supplemental images of the first bone (MAHFOUZ-2013; ¶ 0074; “… computer 54 may sample the echoes comprising the RF signals to extract a bone contour for generating a 3-D point cloud 165 (FIG. 
16B) (Block 164).”); and wherein refining the preliminary virtual 3-D bone model of the first bone using the geometric information about the first bone from the supplemental image of the first bone comprises refining the preliminary virtual 3-D bone model of the first bone using the geometric information about the first bone from the … supplemental images of the first bone (MAHFOUZ-2013; FIG. 7, ‘Block 240’; ¶ 0107-108; “… the loaded model may be deformed (or optimized) (Block 240) into a model that resembles the appropriate bone and not an irregularly, randomly shaped model.”). Regarding claim 25 and claim 86, MAHFOUZ-2013 discloses the method of claim 1 and the method of claim 62, further comprising: obtaining a preliminary virtual 3-D bone model of a second bone (MAHFOUZ-2013; FIG. 7, elements 230, 232; ¶ 0107); obtaining a supplemental image of the second bone (MAHFOUZ-2013; FIG. 7; ¶ 0070); registering the preliminary virtual 3-D bone model of the second bone with the supplemental image of the second bone (MAHFOUZ-2013; FIG. 7; ¶ 0096; FIG. 7; ¶ 0107); extracting geometric information about the second bone from the supplemental image of the second bone (MAHFOUZ-2013; FIGS. 7, 16B; ¶ 0074); and generating a refined virtual 3-D patient-specific bone model of the second bone by refining the preliminary virtual 3-D bone model of the second bone using the geometric information about the second bone from the supplemental image of the second bone (MAHFOUZ; FIG. 7; ¶ 0107; FIG. 19; ¶ 0108). Regarding claim 27 and claim 88, MAHFOUZ-2013 discloses the method of claim 1 and the method of claim 62, wherein extracting geometric information from the supplemental image of the first bone comprises extracting (MAHFOUZ-2013; ¶ 0074; “After all data and RF signal acquisition is complete, the computer … automatically isolates that portion of the RF signal, i.e., the bone contour [‘curvature of the first bone’], from each of the … RF signals. 
… the computer 54 may sample the echoes comprising the RF signals to extract a bone contour for generating a 3-D point cloud 165 (FIG. 16B) (Block 164).”). Claims 49-51 and 53-57 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by McKinnon et al. (U.S. PG-PUB 2022/0039864, 'MCKINNON'). Regarding claim 49, MCKINNON discloses a method of generating a virtual 3-D patient-specific model of a ligament, the method comprising: obtaining a virtual 3-D patient-specific bone model of a joint (MCKINNON; ¶ 0025; “… the computer system … updates the surgical plan based on the position of the ultrasound probe by registering the image to a preoperative anatomical model associated with the patient.” ¶ 0073; “The registration process that registers the CASS 100 to the relevant anatomy of the patient also can involve the use of anatomical landmarks, such as landmarks on a bone or cartilage. … the CASS 100 can include a 3D model of the relevant bone or joint and the surgeon can intraoperatively collect data regarding the location of bony landmarks on the patient's actual bone using a probe that is connected to the CASS.”); detecting at least one ligament loci on the virtual 3-D patient-specific bone model (MCKINNON; ¶ 0216; “… the elastography systems … can be useful for a variety of other preoperative diagnostic applications … The systems … could be used in a clinic to assess ligament health or injury … Elastography could be used to detect small tears or strains in a patient's ligaments that are not easily detectable from static images.”); obtaining ultrasound data pertaining to a ligament associated with the at least one ligament loci by scanning, using ultrasound, the ligament (MCKINNON; FIG. 8; ¶ 0180; “… the ultrasound system 505 may produce medical images of a portion of a patient's anatomy upon which a surgical procedure is … performed. … The ultrasound system 505 may be used to retrieve medical images of a patient's knee. 
… The ultrasound system 505 may retrieve images that depict … a patient's bony anatomy and a patient's soft tissues, including the patient's ligaments.”); and reconstructing a virtual 3-D model of the ligament using the ultrasound data (MCKINNON; FIG. 9; ¶ 0185-186; “… the … image(s) and/or information determinable from the … image(s) may be provided 560 to a surgical planning system. The … image(s) may be used to determine information pertaining to the location of the surgical site. … a stiffness measure associated with … ligament(s) at or near the surgical site may be determinable from the ultrasound image using elastography. Such stiffness values (or other information retrieved from an ultrasound image) may be determined based on the provided images. … the surgical planning system may perform 565 … musculoskeletal simulation(s) using the … image(s) and/or information retrieved from the … image(s). … the ligament stiffness values may be provided to the surgical planning system as inputs for a musculoskeletal simulation. The … musculoskeletal simulation(s) may be used to estimate the position and orientation of a patient's bony anatomy based on the … image(s). … the surgical planning system may perform 565 … simulation(s) to reconstruct the patient's bony anatomy. … the surgical planning system may estimate generation and attachment sites for … ligament(s) as a result of the … simulation(s).”). Regarding claim 50, MCKINNON discloses the method of claim 49, wherein obtaining the ultrasound data pertaining to the ligament is performed at … joint angles of the joint across the joint's range of motion (MCKINNON; ¶ 0189; “… the type of musculoskeletal simulations performed 565 by the surgical planning system may vary among systems. … the surgical planning system may perform 565 … simulation(s) that include a virtual test rig to simulate a deep knee bend [‘performed at … joint angles of the joint across the joint's range of motion’] performed by a patient.”). 
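The ligament-stiffness values MCKINNON derives by shear-wave elastography rest on a standard relation: for soft tissue modeled as linear, isotropic, and nearly incompressible, the shear modulus is G = ρc² and Young's modulus E ≈ 3G, where c is the measured shear-wave speed and ρ the tissue density. A minimal sketch of that conversion (the function names and kPa convention are illustrative, not from the reference):

```python
def shear_modulus_kpa(shear_wave_speed_m_s, density_kg_m3=1000.0):
    """Shear modulus G = rho * c^2, in kPa.

    Assumes linear, isotropic tissue; density defaults to ~water.
    """
    return density_kg_m3 * shear_wave_speed_m_s ** 2 / 1000.0

def youngs_modulus_kpa(shear_wave_speed_m_s, density_kg_m3=1000.0):
    """Young's modulus E ~= 3G for nearly incompressible soft tissue."""
    return 3.0 * shear_modulus_kpa(shear_wave_speed_m_s, density_kg_m3)

# Example: a 3 m/s shear wave implies G = 9 kPa, E ~= 27 kPa.
g = shear_modulus_kpa(3.0)
e = youngs_modulus_kpa(3.0)
```

Stiffness values like these are what the surgical planning system would ingest as inputs to the musculoskeletal simulation described in ¶ 0186.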
Regarding claim 51, MCKINNON discloses the method of claim 49, wherein obtaining the virtual 3-D patient-specific bone model of the joint comprises reconstructing the joint using ultrasound (MCKINNON; ¶ 0184; “FIG. 9 depicts a flow diagram of an illustrative method of performing preoperative planning using ultrasound elastography … [A] SWEI-capable ultrasound system may be used to retrieve 555 … image(s) of a location of a patient that is proximate to a surgical site. … the SWEI-capable ultrasound system may retrieve 555 … image(s) that are used to characterize the stiffness of collateral ligaments. … ligament structures may be identified in an ultrasound image …” ¶ 0189; “… the type of musculoskeletal simulations performed 565 by the surgical planning system may vary among systems. … the surgical planning system may perform 565 … simulation(s) that include a virtual test rig to simulate a deep knee bend performed by a patient.”). Regarding claim 53, MCKINNON discloses the method of claim 49, wherein detecting the at least one ligament loci on the patient-specific virtual 3-D bone model comprises determining … insertion location(s) of the ligament (MCKINNON; ¶ 0209; “The ultrasound probe may be used for preoperative registration of a bone model [which] may be performed for either image-based or imageless systems. … Because the bone model is registered preoperatively and the ultrasound probe is tracked, the collateral ligaments may be identified and characterized intraoperatively using automated techniques.”). Regarding claim 54, MCKINNON discloses the method of claim 49, wherein scanning, using ultrasound, the ligament comprises providing automated guidance information (MCKINNON; ¶ 0130; “… the Surgical Computer 150 may use data from … machine learning models, etc. 
to help guide the surgical procedure.” ¶ 0143; “… a data-driven tool that more accurately models anatomical response and guides the surgical plan can improve the existing approach.” ¶ 0170; “… the point probe is used to paint the femoral neck to provide high-resolution data that allows the surgeon to … understand where to make the neck cut. The navigation system … then guides the surgeon as they perform the neck cut.”). Regarding claim 55, MCKINNON discloses the method of claim 54, wherein providing the automated guidance information comprises providing a display comprising a current position of an ultrasound probe relative to … anatomical structure(s) (MCKINNON; FIGS. 1, 5A; ¶ 0075; “The Display 125 provides … (GUIs) that display images collected by the Tissue Navigation System 120 as well other information relevant to the surgery. … the Display 125 overlays image information collected from various modalities (e.g., CT, MRI, X-ray, fluorescent, ultrasound, etc.) collected pre-operatively or intra-operatively to give the surgeon various views of the patient's anatomy as well as real-time conditions. … As an alternative or supplement to the Display 125, … member(s) of the surgical staff may wear an … (AR) … (HMD). … the Surgeon 111 is wearing an AR HMD 155 that may … overlay pre-operative image data on the patient or provide surgical planning suggestions.”). Regarding claim 56, MCKINNON discloses the method of claim 54, wherein providing the automated guidance information comprises providing a display comprising an indication of a desired location (MCKINNON; FIG. 9; ¶ 0190; “During or after the preoperative planning process, the surgical planning system may display 570 … step(s) for a surgical plan. The … step(s) may be presented 570 in text, through the use of pictures depicting where and how to perform a particular step, and/or the like. 
… using pictorial representations, … ultrasound image(s) … may be displayed 570 over a portion of an image displaying the patient's anatomy to connote a characteristic. … a ligament may be highlighted in a particular color to designate that the ligament is loose [‘indication of a desired location of scanning’] based upon the biomechanical modeling … The … step(s) for the surgical plan may be displayed … 570 to a surgeon … preoperatively and/or intraoperatively.”). Regarding claim 57, MCKINNON discloses the method of claim 54, wherein providing the automated guidance information comprises providing a display comprising (MCKINNON; FIG. 9, element 570; ¶ 0190-192; ¶ 0212; “… acoustic force radiation impulse imaging with B-mode ultrasound may be used in place of a SWEI-enabled ultrasound system and/or an SSI-enabled ultrasound system.”).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 USC 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 26 and 87 are rejected under 35 U.S.C. 103 as being unpatentable over MAHFOUZ as applied to claims 25 and 86 above, respectively, and further in view of Mahfouz (U.S. PG-PUB 2016/0361071, 'MAHFOUZ-2016').

Regarding claim 26 and claim 87, MAHFOUZ discloses the method of claim 25 and the method of claim 86, wherein obtaining the point cloud of the second bone comprises performing an ultrasound scan of the second bone (MAHFOUZ; FIG.
7, ‘Block 152’; ¶ 70; “The method begins with acquiring … RF signals from an A-mode ultrasound beam scan of a bone.”); wherein obtaining the supplemental image of the second bone comprises: obtaining a 2-D X-ray of the second bone (MAHFOUZ; ¶ 0004; “One alternative method of forming patient-specific models is the use of previously-acquired X-Ray images as a priori information to guide the morphing of a template bone model whose projection matches the X-Ray images.”). MAHFOUZ does not explicitly disclose that the 2-D X-ray of the second bone includes … portion(s) of the second bone that was not visible on the ultrasound scan of the second bone, which MAHFOUZ-2016 discloses (MAHFOUZ-2016; ¶ 0049; “To increase the accuracy of the X-ray reconstruction (taking 2D images and creating a virtual 3D model), a hybrid approach may be utilized that makes use of ultrasound imaging to capture the surface of the non-occluded bone.”). Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify the method of claim 25 and the method of claim 86 of MAHFOUZ to include the disclosure that the 2-D X-ray of the second bone includes … portion(s) of the second bone that was not visible on the ultrasound scan of the second bone of MAHFOUZ-2016. The motivation for this modification is to utilize ultrasound to capture the areas where the patient-specific instrument will mate with the patient bone. This approach can enhance the accuracy of the interface between the patient anatomy and the generated patient-specific instruments (MAHFOUZ-2016; ¶ [0049]). Claim 31 is rejected under 35 U.S.C. 103 as being unpatentable over MAHFOUZ-2013 in view of Chiou et al. (U.S. Patent 12,211,151; 'CHIOU'). 
Regarding claim 31, MAHFOUZ-2013 discloses a method of generating a virtual 3-D patient-specific bone model, the method comprising: obtaining ultrasound data pertaining to an exterior surface of a first bone (MAHFOUZ-2013; ¶ 0011; “… a method for 3-D reconstruction of a bone surface includes imaging the bone with A-mode ultrasound. … RF signals [are] acquired while imaging. Imaging of the bone is also tracked.”). MAHFOUZ-2013 does not explicitly disclose: obtaining X-ray data pertaining to … an internal feature … and/or an occluded feature of the first bone, which CHIOU discloses (CHIOU; Col. 174, Lines 19-23; “… a standard model of the bone can be used and can be deformed using … the known landmarks, distances, dimensions, surfaces, edges, angles, axes, curvatures, shapes, lengths, widths, depths and/or other features derived from the x-ray images …” Col. 98, Lines 16-25; “The patient specific marker or template can be developed from CT, MRI or ultrasound scans as well as x-ray imaging. … any multi-planar 2D or 3D imaging modality is applicable, in particular when it provides information on surface shape or provides information to derive estimates of surface shape of an anatomic region. The patient specific marker or template can include … surface(s) that are designed or manufactured to fit in any joint or in a spine or other anatomic locations a corresponding Cartilage surface of a patient; Subchondral bone surface of a patient …”); or generating a 3-D patient-specific bone model of the first bone using the ultrasound data and the X-ray data, the 3-D patient-specific bone model representing the exterior surface of the first bone and the … internal feature … and/or the occluded feature of the first bone, which CHIOU also discloses (CHIOU; Col. 45, Lines 1-21; “With any of the optical imaging and/or 3D scanner techniques, if there are holes in the acquisition and/or scan and/or 3D surface, repeat scanning can be performed to fill the holes. 
The scanned surface can also be compared against a 3D surface or 3D model of the surgical site … obtained from an imaging study, e.g. an ultrasound, a CT or MRI scan, or obtained via bone morphing from x-rays … Discrepancies in surface geometry between the 3D model or 3D surface generated with the optical imaging system and/or the 3D scanner and the 3D surface or 3D model obtained from an imaging study or bone morphing from x-rays, can be determined; similarly, it can be determined if the surfaces or 3D models display sufficient commonality to allow for registration of the intra-operative 3D surface or 3D model obtained with the optical imaging system and/or 3D scanner and the 3D surface or 3D model obtained from the pre-operative imaging study or bone morphing from x-rays.”). Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify method of generating a virtual 3-D patient-specific bone model of MAHFOUZ-2013 to include the obtaining X-ray data pertaining to an internal feature and/or an occluded feature of the first bone and the generating a 3-D patient-specific bone model of the first bone using the ultrasound data and the X-ray data, the 3-D patient-specific bone model representing the exterior surface of the first bone and the internal feature and/or the occluded feature of the first bone of CHIOU. The motivation for this modification is to combine the use the ultrasound modality (typically yielding 3-D imaging data, but may not penetrate bone tissue) and the x-ray modality (typically yielding 2-D imaging data, but tends to penetrate bone tissue) to achieve a fuller, more complete picture of the entire three-dimensional bone object. Claims 35-37 and 39 are rejected under 35 U.S.C. 103 as being unpatentable over Grupp, Jr. et al. (U.S. PG-PUB 2023/0355309, 'GRUPP') in view of Thompson et al. (US PG-PUB 2022/0192844, 'THOMPSON'). 
Regarding claim 35, claim 37, and claim 39, GRUPP discloses a method of determining a spine-pelvis tilt, the method comprising: ([THOMPSON discloses this limitation.]); obtaining a first/second/third ultrasound point cloud of the pelvis and a first/second/third ultrasound point cloud of a lumbar spine with the pelvis and the lumbar spine in a first/second/third functional position (GRUPP; FIGS. 4A, 5; ¶ 0047; “At block 531, the method 530 can include receiving initial image data of a spine (e.g., the spine 309 …) including multiple vertebrae [The Examiner notes that the bottom of the spine includes the sacrum, containing five vertebrae, which is also part of the pelvis.]. … the initial image data is preoperative image data [which] can be … medical scan data representing a 3D volume of a patient, such as … (CT) scan data, … (MRI) scan data, ultrasound images … The initial image data can be captured intraoperatively … [and] can comprise 2D or 3D X-ray images, … CT images, MRI images, … captured of the patient … The initial image data comprises a point cloud … [and] comprises segmented 3D CT scan data of … spine 309 (e.g., segmented on a per-vertebra basis).”); registering the virtual 3-D model of the pelvis to the first/second/third point cloud of the pelvis (GRUPP; FIG. 4A; [The Examiner notes that FIG. 4A depicts five spinal columns, with the second-from-right spinal column depicting a lumbar deformity. The Examiner further notes that all of the spinal columns are depicted with a sacrum, which is essentially the bottom five vertebrae of the spinal column.]; FIG. 5; ¶ 50; “… at block 534, the method 530 can include registering the initial image data to the intraoperative image data to … establish a transform/mapping/transformation between the intraoperative image data and the initial image data such that these data sets can be represented in the same coordinate system.
… the registration process matches (i) 3D points in a point cloud … representing the initial image data to (ii) 3D points in a point cloud … representing the intraoperative image data. … the registration processing device 105 generates a 3D point cloud … from the intraoperative image data … and registers the point cloud … to the initial image data by detecting positions of fiducial markers and/or feature points visible in both data sets. … where the initial image data comprises CT scan data, rigid bodies of bone surface calculated from the CT scan data [are] registered to the corresponding points/surfaces of the point cloud …” [The Examiner asserts that by capturing a whole/lumbar spine, at least the sacrum of the spine would be captured, and therefore at least some of the pelvis would be captured. The Examiner asserts that at least one of the spinal columns depicted may represent a ‘first/second/third functional position’.]); and determining a first/second/third spine-pelvis tilt in the first/second/third functional position using a first/second/third relative angle of the first/second/third point cloud of the lumbar spine to the 3-D model of the pelvis (GRUPP; FIGS. 4-12; ¶ 32; “… the alignment processing device 109 determines … alignment parameters (e.g., geometric parameters) for a surgical procedure, such as … angle(s), … For a spinal surgical procedure, the alignment processing device 109 can determine … a Cobb angle …, pelvic tilt, pelvic angle, sacral slope, pelvic incidence, … center sacral vertical line (CSVL), … T1 pelvic angle, L1 tilt, L1 pelvic angle, etc., of the spine. … The alignment processing device 109 can determine the alignment parameters of the spine in real-time … based on (i) initial image data of the spine, (ii) intraoperative image data of the spine captured by the camera array 110, and (iii) the registration between the initial image data and the intraoperative image data of the spine. 
… the system 100 can compute formulas and/or scores based on the alignment parameters, such as a global alignment and proportion (GAP) score.”). GRUPP does not explicitly disclose obtaining a virtual 3-D model of a pelvis, which THOMPSON discloses (THOMPSON; FIG. 3; ¶ 0063; “At step 301, medical images of the hip joint are received and segmented to generate a virtual bone model of the pelvis. … the medical images may be collected using CT technology, MRI technology, or some other medical imaging modality. The images are then segmented, i.e., processed to differentiate areas of the images that correspond to the pelvis …”). Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify the method of determining a spine-pelvis tilt of GRUPP to include the obtaining a virtual 3-D model of a pelvis of THOMPSON. The motivation for this modification is to implement a surgical plan, which is typically established prior to performing a surgical procedure with a robotically-assisted surgical system. Based on the surgical plan, the surgical system guides, controls, or limits movements of the surgical tool during portions of the surgical procedure. Guidance and/or control of the surgical tool serves to protect the patient and to assist the surgeon during implementation of the surgical plan (THOMPSON; ¶ [0004]). Regarding claim 36, GRUPP-THOMPSON disclose the method of claim 35, further comprising positioning … the pelvis (THOMPSON; ¶ 0076; “At step 306, a registration process is executed to register the relative positions of the patient's pelvis … A probe may be tracked by the tracking system 222 and touched to various points on the pelvis to determine a pose of the pelvis.”). Claim 38 is rejected under 35 U.S.C. 103 as being unpatentable over GRUPP in view of THOMPSON as applied to claim 37 above, and further in view of Mahfouz (U.S. PG-PUB 2019/0133693, 'MAHFOUZ-2019'). 
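[Editor's note: the point-to-point registration GRUPP describes, matching 3-D points in one cloud to corresponding points (e.g., fiducial markers) in another and establishing a transform between coordinate systems, is commonly realized with the Kabsch/SVD method. The sketch below is illustrative only and is not GRUPP's actual implementation; it assumes matched correspondences are already known.]

```python
import numpy as np

def rigid_register(source, target):
    """Estimate the rigid transform (R, t) aligning matched 3-D point
    clouds via the Kabsch/SVD method. Assumes source[i] corresponds to
    target[i] (e.g., paired fiducial markers visible in both data sets)."""
    src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_c - R @ src_c
    return R, t                                 # target ~= source @ R.T + t
```

With noise-free matched points this recovers the transform exactly; real intraoperative data would use a robust variant (e.g., ICP with outlier rejection).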
Regarding claim 38, GRUPP-THOMPSON disclose the method of claim 37; however, GRUPP-THOMPSON do not explicitly disclose that the method of claim 37 further comprises positioning … the pelvis, which MAHFOUZ-2019 discloses (MAHFOUZ-2019; ¶ 0191; “… the surgical navigation computer provides a [GUI] … that may display virtual models of the patient's pelvis and a virtual model of the surgical tool in question, … and may update the orientation of the pelvis and surgical tool in real time via the display providing position and orientation information to the surgeon. … After resurfacing using the cup reamer is complete, the IMU may be removed from the cup reamer and fixed rigidly to a cup inserter with a known orientation relative to the inserter direction. The cup inserter may then be utilized to place the cup implant, with the IMUs continuing to provide acceleration feedback that the software utilizes to calculate position to provide real time feedback as to the position of the pelvis with respect to the cup inserter.”). Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify the method of claim 37 of GRUPP-THOMPSON to include the positioning the pelvis into the second functional position of MAHFOUZ-2019. The motivation for this modification is to couple intraoperative radiography imaging with an inertial tracking system, so that the patient can be registered in the operating room without the overhead of manufacturing patient-specific instruments or manually identifying landmarks (MAHFOUZ-2019; ¶ [0005]). Claim 52 is rejected under 35 U.S.C. 103 as being unpatentable over MCKINNON as applied to claim 51 above, and further in view of MAHFOUZ-2013. 
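[Editor's note: the "relative angle" determination cited from GRUPP for claims 35-39 can be illustrated with a simple proxy: fit a principal axis to the lumbar-spine point cloud and measure its angle against a pelvis reference axis. This is a sketch under that simplifying assumption, not GRUPP's actual parameter definitions (pelvic tilt, sacral slope, etc. have specific clinical constructions).]

```python
import numpy as np

def principal_axis(points):
    """Dominant direction of a 3-D point cloud: first right singular
    vector of the centered points (i.e., PCA)."""
    centered = points - points.mean(axis=0)
    return np.linalg.svd(centered, full_matrices=False)[2][0]

def relative_angle_deg(spine_cloud, pelvis_axis):
    """Angle in degrees between the lumbar-spine principal axis and a
    pelvis reference axis."""
    a = principal_axis(spine_cloud)
    b = pelvis_axis / np.linalg.norm(pelvis_axis)
    cosang = abs(np.dot(a, b))          # axis sign is arbitrary
    return np.degrees(np.arccos(np.clip(cosang, 0.0, 1.0)))
```

Repeating the measurement for each functional position yields the first/second/third tilt values the claims recite.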
Regarding claim 52, MCKINNON discloses the method of claim 51; however, MCKINNON does not explicitly disclose that reconstructing the joint using ultrasound comprises obtaining … point cloud(s) associated with … bone(s) of the joint, which MAHFOUZ-2013 discloses (MAHFOUZ-2013; FIG. 7; ¶ 0096; “With the bone contours isolated from each of the RF signals, the bone contours may now be transformed into a point cloud. … the resultant bone contours 180 may then undergo registration with the optical system to construct a bone point cloud 194 representing the surface of at least a portion of each scanned bone (Block 186), which is … a multiple step registration process.”). Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify the method of claim 51 of MCKINNON to include the obtaining [a] point cloud associated with … bone(s) of the joint of MAHFOUZ-2013. The motivation for this modification is that, to generate the 3-D patient-specific model, raw RF signals are acquired using A-mode ultrasound acquisition methodologies; a bone contour is then isolated in each of the plurality of RF signals and transformed into a point cloud, which may then be used to optimize a 3-D model of the bone such that the patient-specific model may be generated (MAHFOUZ-2013; ¶ [0053]).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JONATHAN M COFINO whose telephone number is (303) 297-4268. The examiner can normally be reached Monday-Friday 10A-4P MT. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. 
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kent Chang can be reached at 571-272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /JONATHAN M COFINO/Examiner, Art Unit 2614 /KENT W CHANG/Supervisory Patent Examiner, Art Unit 2614
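[Editor's note: the contour-to-point-cloud step cited from MAHFOUZ-2013 for claim 52 (Block 186) can be sketched as follows. Each 2-D bone contour extracted from the RF signals is lifted into the tracker's world frame using the probe pose at acquisition time. The (R, t) pose format is an assumption for illustration; MAHFOUZ-2013's actual registration is described as a multiple-step process.]

```python
import numpy as np

def contours_to_point_cloud(contours, poses):
    """Merge tracked 2-D bone contours into one 3-D point cloud.
    Each contour is an (N, 2) array of in-plane samples; each pose is
    an (R, t) pair placing that ultrasound image plane in world
    coordinates."""
    pieces = []
    for contour, (R, t) in zip(contours, poses):
        # Lift in-plane samples to 3-D (z = 0 in the image frame),
        # then map them into the tracker/world frame.
        planar = np.column_stack([contour, np.zeros(len(contour))])
        pieces.append(planar @ np.asarray(R).T + t)
    return np.vstack(pieces)
```

The resulting cloud is what would then drive optimization of the statistical-atlas bone model.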

Prosecution Timeline

May 12, 2023
Application Filed
Nov 01, 2025
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597201
INTERACTIVE METHOD AND SYSTEM FOR DISPLAYING MEASUREMENTS OF OBJECTS AND SURFACES USING CO-REGISTERED IMAGES AND 3D POINTS
2y 5m to grant Granted Apr 07, 2026
Patent 12597202
GEOLOGICALLY MEANINGFUL SUBSURFACE MODEL GENERATION BASED ON A TEXT DESCRIPTION
2y 5m to grant Granted Apr 07, 2026
Patent 12536207
METHOD AND APPARATUS FOR RETRIEVING THREE-DIMENSIONAL (3D) MAP
2y 5m to grant Granted Jan 27, 2026
Patent 12511829
MAP GENERATION APPARATUS, MAP GENERATION METHOD, AND NON-TRANSITORY COMPUTER-READABLE MEDIUM STORING PROGRAM
2y 5m to grant Granted Dec 30, 2025
Patent 12505605
SOLVING LOW EFFICIENCY OF MOVING ADJUSTMENT CAUSED BY CONTROLLING MOVEMENT OF IMAGE USING MODEL PARAMETERS
2y 5m to grant Granted Dec 23, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

1-2
Expected OA Rounds
62%
Grant Probability
94%
With Interview (+32.2%)
2y 4m
Median Time to Grant
Low
PTA Risk
Based on 210 resolved cases by this examiner. Grant probability derived from career allow rate.
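The "With Interview" figure appears to be simple percentage-point addition of the career allow rate and the observed interview lift (an assumption about how this dashboard combines its statistics, not a causal claim):

```python
career_allow_rate = 62.0   # % career allow rate (130 granted / 210 resolved)
interview_lift = 32.2      # observed interview lift, in percentage points
with_interview = career_allow_rate + interview_lift
print(round(with_interview))   # 94, matching the "With Interview" projection
```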
