DETAILED ACTION
Summary
Claims 1-10, 13-15, 26, and 29-31 are pending in the application. Claims 1-10, 13-15, 26, and 29-31 are rejected under 35 U.S.C. 103.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/21/2025 has been entered.
Claim Interpretation
Claim 31 recites “in accordance with a determination that the anatomy of interest is mis-oriented in the two-dimensional imaging… in accordance with a determination that the anatomy of interest is correctly oriented in the two-dimensional imaging”. These are contingent limitations. The broadest reasonable interpretation of a method (or process) claim having contingent limitations requires only those steps that must be performed and does not include steps that are not required to be performed because the condition(s) precedent are not met (MPEP 2111.04(II)). Therefore, by the broadest reasonable interpretation, only one of the contingent clauses needs to be met in order for the art to read on the claim (as the anatomy of interest is necessarily either mis-oriented or correctly oriented).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-9, 13, 26, and 29-30 are rejected under 35 U.S.C. 103 as being unpatentable over Fouts et al. (U.S. PGPub 2020/0253667 A1) in view of Mosnier et al. (U.S. PGPub 2021/0201483 A1) and Veilleux et al. (U.S. PGPub 2019/0298452 A1).
Regarding Claim 1, Fouts teaches a method of generating and displaying a resection curve from two-dimensional imaging associated with the anatomy of interest (Fig. 20), comprising:
receiving the two-dimensional imaging associated with the anatomy of interest (Fig. 20, Step 1) [0008]+[0226];
detecting a plurality of anatomical features of the anatomy of interest (anatomy of interest is femur and femur head and neck are the plurality of features) in the two-dimensional imaging (Fig. 20, Steps 11-12) [0278]+[0281];
determining characteristics of the plurality of anatomical features based on the detection of the plurality of anatomical features [0281]-[0283] (the midline and location of cam pathology are both characteristics);
generating at least one measurement of the anatomy of interest based on at least some of the characteristics of the plurality of anatomical features [0284]-[0289];
generating a resection curve for guiding bone removal based on the at least one measurement (Fig. 20, Step 15) [0300]+[0302];
and displaying a graphical user interface comprising the resection curve overlaid on the two- dimensional imaging (Fig. 20, Step 17) [0319]+[0325].
Fouts fails to explicitly teach wherein the plurality of anatomical features are automatically detected anatomical features using at least one object detection machine learning model.
Mosnier teaches a method for identifying body parts in a medical image (Abstract). This system automatically detects a plurality of anatomical features using at least one object detection machine learning model [0030]+[0040] (the system is identifying objects, i.e., the objects are vertebra endpoints).
It would have been obvious to one of ordinary skill in the art before the effective filing date to substitute the method of detecting the features in Fouts with automatically detecting the features using an object detection machine learning model, as taught by Mosnier, as the substitution of one known method of identifying features in an image for another yields predictable results to one of ordinary skill in the art. One of ordinary skill would have been able to carry out such a substitution, and the results of using an object detection machine learning model to detect the features in an image are reasonably predictable.
The combination fails to explicitly teach without a user input associated with identifying the anatomy of interest.
Veilleux teaches a method for identifying landmarks in the body (Abstract). This system uses an automated (i.e. without a user input) algorithm in order to identify major landmarks in an image [0010]+[0012].
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the combined system to detect anatomical features without a user input, as taught by Veilleux, because this reduces errors in the identification based on human input, as recognized by Veilleux [0010].
Regarding Claim 2, the combination teaches the invention as claimed. Fouts further discloses wherein determining the characteristics of the plurality of anatomical features comprises determining an initial estimate of a characteristic of a first anatomical feature based on the detection of the plurality of anatomical features (“good guess” of where femur is) and determining a final estimate of the characteristic of the first anatomical feature based on the initial estimate (reducing the error from the good guess) [0278].
Regarding Claim 3, the combination teaches the invention substantially as claimed. Fouts further discloses wherein the initial estimate of the characteristic of the first anatomical feature comprises an estimate of at least one of a location and a size of the first anatomical feature [0278], and determining the final estimate comprises searching for a perimeter of the first anatomical feature based on the estimate of at least one of the location and the size of the first anatomical feature [0278] (overlaying the set of points on the strong edges is considered searching for the perimeter).
Regarding Claim 4, the combination teaches the invention as claimed. Fouts further discloses wherein the plurality of anatomical features comprises a head and neck of a femur [0278]+[0281] and the characteristics comprise a location of mid-line of the neck [0279]+[0281].
Regarding Claim 5, the combination teaches the invention as claimed. Fouts further discloses wherein the at least one measurement comprises an Alpha Angle generated based on the location of a mid-line of the anatomy of interest [0285]-[0289].
Regarding Claim 6, the combination teaches the invention as claimed. Fouts further discloses automatically generating the resection curve based on the Alpha Angle [0292]-[0294].
Regarding Claim 7, the combination teaches the invention substantially as claimed. Fouts further teaches wherein the plurality of anatomical features detected comprises a plurality of features of a femur [0278]+[0281] and the at least one measurement comprises an orientation of the femur relative to a predefined femur orientation [0327] (alpha angle compared to “normal” alpha angle).
Regarding Claim 8, the combination teaches the invention as claimed. Fouts further teaches determining a three-dimensional model of the femur with the two-dimensional imaging based on the orientation of the femur [0332].
Fouts fails to explicitly teach the alignment of the 3D model.
Veilleux teaches a system for diagnosing the hip (Abstract). This system determines the alignment of 3D models [0045].
It would have been obvious to one of ordinary skill in the art before the effective filing date to determine the alignment of the 3D model, as taught by Veilleux, because this ensures that the model is consistent with a reference, thereby increasing the accuracy of the measurements derived from the model, as recognized by Veilleux [0045].
Regarding Claim 9, the combination teaches the invention substantially as claimed. Fouts further teaches further comprising comparing the orientation to a predefined orientation threshold and, in response to determining that the orientation is beyond the predefined orientation threshold, notifying the user [0328] (the alpha angle is compared to a normal range; if it is outside the range, the system notifies the user by changing color).
Regarding Claim 13, the combination teaches the invention as claimed. Fouts further teaches wherein the plurality of anatomical features detected comprises a plurality of features of a pelvis [0008]+[0350] and the at least one measurement comprises an orientation of the pelvis [0351]-[0352] relative to a predefined pelvis orientation [0353]-[0354].
Fouts is silent regarding the pelvic detection occurring in the same embodiment as the femur detection. However, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify the system of Fouts to also look at the pelvic orientation, as also taught by Fouts, as the substitution of identifying one known anatomical feature for another yields predictable results to one of ordinary skill in the art. One of ordinary skill would have been able to carry out such a substitution, and the results of detecting features of the pelvis are reasonably predictable.
Regarding Claim 26, Fouts teaches a system for generating and displaying a resection curve from two-dimensional imaging associated with the anatomy of interest (Fig. 20), the system comprising one or more processors, memory, and one or more programs stored in the memory for execution by the one or more processors [0223]+[0400] for causing the system to:
receive the two-dimensional imaging associated with the anatomy of interest (Fig. 20, Step 1) [0008]+[0226];
detect a plurality of anatomical features of the anatomy of interest (anatomy of interest is femur and femur head and neck are the plurality of features) in the two-dimensional imaging (Fig. 20, Steps 11-12) [0278]+[0281];
determine characteristics of the plurality of anatomical features based on the detection of the plurality of anatomical features [0281]-[0283] (the midline and location of cam pathology are both characteristics);
generate at least one measurement of the anatomy of interest based on at least some of the characteristics of the plurality of anatomical features [0284]-[0289];
generate a resection curve for guiding bone removal based on the at least one measurement (Fig. 20, Step 15) [0300]+[0302];
and display a graphical user interface comprising the resection curve overlaid on the two-dimensional imaging (Fig. 20, Step 17) [0319]+[0325].
Fouts fails to explicitly teach wherein the plurality of anatomical features are automatically detected using at least one object detection machine learning model.
Mosnier teaches a method for identifying body parts in a medical image (Abstract). This system automatically detects a plurality of anatomical features using at least one object detection machine learning model [0030]+[0040] (the system is identifying objects, i.e., the objects are vertebra endpoints).
It would have been obvious to one of ordinary skill in the art before the effective filing date to substitute the method of detecting the features in Fouts with automatically detecting the features using an object detection machine learning model, as taught by Mosnier, as the substitution of one known method of identifying features in an image for another yields predictable results to one of ordinary skill in the art. One of ordinary skill would have been able to carry out such a substitution, and the results of using an object detection machine learning model to detect the features in an image are reasonably predictable.
The combination fails to explicitly teach without a user input associated with identifying the anatomy of interest.
Veilleux teaches a method for identifying landmarks in the body (Abstract). This system uses an automated (i.e. without a user input) algorithm in order to identify major landmarks in an image [0010]+[0012].
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the combined system to detect anatomical features without a user input, as taught by Veilleux, because this reduces errors in the identification based on human input, as recognized by Veilleux [0010].
Regarding Claim 29, the combination of references teaches the invention substantially as claimed. Fouts further teaches wherein the resection curve comprises at least one of one or more curves, one or more lines, or a spline [0295].
Regarding Claim 30, the combination of references teaches the invention substantially as claimed. Fouts further teaches wherein the resection curve comprises at least one of one or more curves, one or more lines, or a spline [0295].
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Fouts in view of Mosnier and Veilleux as applied to claim 1 above, and further in view of Savvides et al. (U.S. PGPub 2018/0096457 A1).
Regarding Claim 10, the combination teaches the invention substantially as claimed. Fouts fails to explicitly teach wherein the at least one object detection machine learning model generates a plurality of scored bounding boxes for the plurality of anatomical features and the characteristics of the plurality of anatomical features are determined based on bounding boxes that have scores that are above a predetermined threshold.
Savvides teaches a method of using machine learning to analyze images (Abstract). This system generates a plurality of scored bounding boxes for the plurality of anatomical features [0006] and the characteristics of the plurality of anatomical features are determined based on bounding boxes that have scores that are above a predetermined threshold [0006]+[0020].
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the machine learning algorithm of the combination to use scored bounding boxes to determine the characteristics, as taught by Savvides, because this algorithm provides better identification of the features when image quality is low, as recognized by Savvides [0017]. One of ordinary skill in the art would recognize that, in the combination, the scored bounding boxes would be applied to the medical images to obtain the characteristics of the anatomical features.
Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Fouts in view of Mosnier and Veilleux as applied to claim 13 above, and further in view of Chakraverty et al. (Chakraverty, Julian K., et al. "Cam and pincer femoroacetabular impingement: CT findings of features resembling femoroacetabular impingement in a young population without symptoms." American Journal of Roentgenology 200.2 (2013): 389-395).
Regarding Claim 14, the combination of references teaches the invention substantially as claimed. Fouts further teaches notifying a user based on the center edge angle [0354].
Fouts fails to explicitly teach comparing the orientation to a predefined orientation threshold and notifying the user in response to determining that the orientation is beyond the predefined orientation threshold.
Chakraverty teaches a system for determining FAI (Abstract). This system compares the acetabular center edge angle to a predefined threshold (Pg. 394, Table 4, angle greater than 40°).
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Fouts to compare the center edge angle to a threshold, as taught by Chakraverty, and to notify a user if it is over the threshold, because a center edge angle greater than 40° indicates impingement, as recognized by Chakraverty (Pg. 391, Col 1, Lateral edge angle). By notifying the user when the angle is over the threshold, the system can thereby better indicate that the patient has FAI, as recognized by Chakraverty (Abstract).
Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Fouts in view of Mosnier and Veilleux as applied to claim 1 above, and further in view of Rappaport et al. (U.S. PGPub 2008/0075348 A1).
Regarding Claim 15, the combination teaches the invention as claimed. Fouts fails to explicitly teach wherein the at least one measurement of the anatomy of interest is generated using a regression machine learning model.
Rappaport teaches a system for analyzing the hip (Abstract). This system uses a regression machine learning algorithm to generate a measurement [0219]+[0231].
It would have been obvious to one of ordinary skill in the art before the effective filing date to substitute the method of obtaining the measurement of Fouts with a regression machine learning algorithm, as taught by Rappaport, as the substitution of one known method of obtaining a measurement for another yields predictable results to one of ordinary skill in the art. One of ordinary skill would have been able to carry out such a substitution, and the results of using a regression machine learning algorithm are reasonably predictable.
Claim 31 is rejected under 35 U.S.C. 103 as being unpatentable over Fouts in view of Mosnier and Veilleux as applied to claim 1 above, and further in view of Scanlan et al. (U.S. PGPub 2016/0235381 A1).
Regarding Claim 31, Fouts further teaches prior to generating the resection curve for guiding bone removal based on the at least one measurement (Fig. 20, Step 15) [0300]+[0302] and displaying the graphical user interface comprising the resection curve overlaid on the two-dimensional imaging (Fig. 20, Step 17) [0319]+[0325]:
determining whether the two-dimensional imaging is adequate [0234];
when the two-dimensional imaging is adequate, generating the resection curve for guiding bone removal based on the at least one measurement (Fig. 20, Step 15) [0300]+[0302] and displaying the graphical user interface comprising the resection curve overlaid on the two-dimensional imaging (Fig. 20, Step 17) [0319]+[0325].
While Fouts teaches looking at the two-dimensional imaging to see if it is adequate, Fouts is silent regarding determining whether the anatomy of interest is mis-oriented in the two-dimensional imaging based on the at least one measurement. Fouts fails to explicitly teach, in accordance with a determination that the anatomy of interest is mis-oriented in the two-dimensional imaging, prompting the user to reposition an imager or the anatomy of interest, and, in accordance with a determination that the anatomy of interest is correctly oriented in the two-dimensional imaging, proceeding with generating the resection curve.
Scanlan teaches a system for optimally visualizing a region of interest (Abstract). This system determines whether the anatomy of interest is mis-oriented in the two-dimensional imaging based on the at least one measurement [0058]+[0082]-[0085] (the system uses 2D slices to determine the ideal alpha angle present for the imaging to be considered optimal). If the anatomy is mis-oriented (i.e., the image does not have that alpha angle), the user can then reposition either the imager or anatomy of interest [0098]-[0100] and then obtain the x-ray image using the correct orientation [0105].
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the combined system so the user determines whether the image is adequate based on whether the anatomy is mis-oriented, and then readjusts either the imager or the patient to obtain the optimal orientation, as taught by Scanlan, because it reduces the number of x-rays needed to obtain the optimal view during a procedure, as recognized by Scanlan [0114]-[0115]. Furthermore, one of ordinary skill would recognize that if the anatomy in the image is correctly oriented, they could proceed with the generation of the resection curve using that image, as not obtaining an additional image would reduce the patient’s exposure to x-rays.
Response to Arguments
Applicant's arguments filed 11/21/2025 have been fully considered but they are not persuasive.
Applicant’s arguments with respect to claim 1 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Veilleux was brought in to teach the newly added limitations. Therefore, claims 1-10, 13-15, 26, and 29-31 remain rejected under 35 U.S.C. 103.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Udupa et al. (U.S. PGPub 2017/0091574 A1), which teaches an automated method for anatomy recognition.
Blau et al. (U.S. PGPub 2018/0318012 A1), which teaches a method for positioning an imager based on 2D images.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SEAN D MATTSON whose telephone number is (408) 918-7613. The examiner can normally be reached Monday - Friday 9 AM - 5 PM PST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Pascal Bui-Pho can be reached at (571) 272-2714. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SEAN D MATTSON/ Primary Examiner, Art Unit 3798