Prosecution Insights
Last updated: April 19, 2026
Application No. 18/415,400

APPARATUS AND METHOD FOR AUTOMATED ANALYSIS OF LOWER EXTREMITY IMAGE

Non-Final OA: §103, §112
Filed: Jan 17, 2024
Examiner: PHAM, NHUT HUY
Art Unit: 2674
Tech Center: 2600 — Communications
Assignee: Connecteve Co. Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 79% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 79% (42 granted / 53 resolved; +17.2% vs TC avg, above average)
Interview Lift: +26.8% among resolved cases with interview (a strong lift)
Typical Timeline: 3y 0m average prosecution; 31 applications currently pending
Career History: 84 total applications across all art units

Statute-Specific Performance

§101: 9.4% (-30.6% vs TC avg)
§103: 62.2% (+22.2% vs TC avg)
§102: 11.9% (-28.1% vs TC avg)
§112: 14.5% (-25.5% vs TC avg)
Deltas are relative to the Tech Center average estimate. Based on career data from 53 resolved cases.
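The statute-specific figures reduce to simple percentage-point deltas against a Tech Center baseline; notably, every row shown here implies the same TC average of 40.0%. A minimal sketch of that arithmetic, using the values from this page:

```python
# Sketch: per-statute rate vs. a Tech Center baseline, in percentage points.
# The rates are read off this dashboard; the implied TC average (40.0%) is
# back-computed from each displayed delta (rate - delta = baseline).

def delta_vs_baseline(rate: float, baseline: float) -> float:
    """Signed difference between an examiner's rate and a baseline, in pp."""
    return round(rate - baseline, 1)

# statute -> (examiner rate %, implied Tech Center average %)
stats = {
    "101": (9.4, 40.0),
    "103": (62.2, 40.0),
    "102": (11.9, 40.0),
    "112": (14.5, 40.0),
}

for statute, (rate, tc_avg) in stats.items():
    print(f"§{statute}: {rate}% ({delta_vs_baseline(rate, tc_avg):+.1f} pp vs TC avg)")
```

Running this reproduces the four deltas on the page (-30.6, +22.2, -28.1, -25.5), which is how the shared 40.0% baseline can be verified.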

Office Action

Rejections: §103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

DETAILED ACTION

The United States Patent & Trademark Office appreciates the application submitted by the inventor/assignee. The Office has reviewed the application and makes the following comments below.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 01/17/2024 has been considered and is attached.

Priority

This application claims the benefit of foreign priority under 35 U.S.C. 119(a)-(d) of KR10-2023-0006749, filed in Korea on 01/17/2023.

Claim Status

Claims 1-10 are rejected under 35 U.S.C. § 112(b). Claims 1-3, 5, and 7-8 are rejected over Yen in view of Fitz. Claims 4 and 9 are rejected over Yen in view of Fitz, further in view of Hu. Claims 6 and 10 are objected to.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-10 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. The examiner strongly suggests that appropriate corrections be made to clarify the claim scope.
Regarding Claim 1, the claim recites “the anatomical landmark” on line 10, which renders the claim indefinite (unclear antecedent basis; the Examiner found only “a plurality of anatomical landmarks”).

Regarding Claim 7, the claim recites “the anatomical landmark” on line 7, which renders the claim indefinite (unclear antecedent basis; the Examiner found only “a plurality of anatomical landmarks”).

Claims 2-6 and 8-10 are also rejected due to their dependence on rejected independent claims 1 and 7, respectively.

Claim Objections

Claims 7-10 are objected to because of informalities, and the examiner recommends the following changes. Claim 7, lines 1-2: there is a duplicate “comprising”; this should be “
Claims 8-10 depend directly or indirectly from objected claim 7 and are therefore also objected to. Appropriate correction is required.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 5, and 7-8 are rejected under 35 U.S.C. 103 as being unpatentable over Yen et al. (US-20230186469-A1, filed 2022, hereinafter Yen) in view of Fitz et al. (US-20220047278-A1, published 2022, hereinafter Fitz).
CLAIM 1

Regarding Claim 1, Yen teaches an apparatus for automated analysis of knee joint space in a lower extremity image (Yen, ¶ [0007]: “The method … implemented in a computer system”; Abstract: “a method for improving the diagnostic accuracy of an artificial intelligence (AI) to diagnose osteoarthritis (OA) … from at least one input skeletal image…”), comprising: a processor; and a memory including one or more sequences of instructions (Yen, ¶ [0012]: “a non-transitory computer-readable medium having stored thereon a set of instructions that are executable by a processor of a computer system”) which, when executed by the processor, cause steps to be performed comprising:

Yen does not explicitly disclose generating a pre-lower extremity image by preprocessing an original lower extremity image. Fitz is in the same field of art, a system for joint assessment. Further, Fitz teaches generating a pre-lower extremity image by preprocessing (Fitz, ¶ [0226]: “Preprocessing (filtering) of the slice images can be used to improve the contrast of the bone regions so that they can be extracted accurately using simple thresholding or a more involved image segmentation tool like LiveWire or active contour models.” The Examiner notes that outputting a preprocessed image corresponds to “generating”) an original lower extremity image (Fitz, ¶ [0225]: “a CT scan covering at least the hip, knee and ankle region is acquired.”) from a camera (Fitz, ¶ [0197]: “CT or MRI scanner”).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Yen by incorporating the preprocessing method taught by Fitz, to make a medical imaging system that can preprocess images; one of ordinary skill in the art would be motivated to combine the references since, among its several aspects, the present invention recognizes a need to improve the visibility of bone in images (Fitz, ¶ [0226]: “Preprocessing (filtering) of the slice images can be used to improve the contrast of the bone regions so that they can be extracted accurately using simple thresholding or a more involved image segmentation tool like LiveWire or active contour models.”).

The combination of Yen and Fitz then teaches identifying a plurality of anatomical landmarks in the pre-lower extremity image based on a machine learning model (Yen, ¶ [0051-0052]: “A knee landmark model is used to identify the positions of knee joint. A trained a CNN model (HRnet) is used to predict those positions (landmarks)”); generating the lower extremity image in which a position of the anatomical landmark is identified by processing the pre-lower extremity image (Yen, ¶ [0033]: “AI image analysis/recognition comprises … the ROI check identifies the ROI block of a specific object (e.g. a knee joint or a hip) from the input image. The ROI check may be executed by another AI trained by a collection of images labeled with a specific object and the location of ROI within the images”; ¶ [0054 and 0057]: “The input is the knee joint ROI inferred … The inputs are 4 knee joint ROIs (LT, MT, LF, MF, as shown in FIG. 4 )”. Yen teaches identifying regions of interest within input images, then generating and inputting the identified ROIs into other models); and deriving a width of the knee joint space in the lower extremity image using the position of the anatomical landmark.
(Yen, ¶ [0036]: “for KOA diagnosis, the first model may be an object detection model to find the region of interest (bounding box) of the knee joint from the entire image. The second model may find the edge of the knee joint bone or specific anatomical locations from the region of interest (ROI) according to the results of the first model. The third model may use these points to calculate the joint space width (JSW)”)

Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.

CLAIM 2

Regarding claim 2, the combination of Yen and Fitz teaches the apparatus of claim 1. In addition, the combination teaches generating images in which a specific anatomical area of the lower extremity image is enlarged (Yen, ¶ [0018-0019], see FIG. 4 and FIG. 5 with annotations below; FIG. 5 is an enlarged view of an identified area in FIG. 4) (Fitz, ¶ [0139]: “FIG. 8A shows a magnified view of an area of diseased cartilage”);

[Annotated FIG. 4 and FIG. 5: media_image1.png]

and displaying the images on a display device (Fitz, ¶ [0387]: “Multiple planes may be displayed simultaneously, for example using a split screen display”).

CLAIM 3

Regarding claim 3, the combination of Yen and Fitz teaches the apparatus of claim 2. In addition, the combination teaches that the images comprise at least one of a hip joint image, a knee joint image and an ankle joint image (Fitz, ¶ [0225]: “a CT scan covering at least the hip, knee and ankle region is acquired.”) (Yen, ¶ [0033-0034]: “the ROI check identifies the ROI block of a specific object (e.g. a knee joint or a hip) from the input image…ankle joint region(s)”).

CLAIM 5

Regarding claim 5, the combination of Yen and Fitz teaches the apparatus of claim 1.
In addition, the combination of Yen and Fitz teaches that the position of the anatomical landmark comprises at least one of a medial or lateral femoral condyle, a medial or lateral anterior border of tibia measured from a midline of the medial plateau or a center point of the medial plateau, or a medial or lateral posterior border of tibia measured from the center point of the medial plateau. (The Examiner notes that since a listing with “or” is disjunctive, any one of the elements found in the prior art is sufficient to reject the claim; while citations have been provided for completeness and rapid prosecution, only one element is required.) (Yen, ¶ [0036]: “The fifth model may label the areas of medial tibial, lateral tibial, medial femoral, lateral femoral and other local small areas derived from the points of previous models”; see FIG. 4)

CLAIM 7

Regarding Claim 7, Yen teaches a method for automated analysis of knee joint space in a lower extremity image (Yen, Abstract: “a method for improving the diagnostic accuracy of an artificial intelligence (AI) to diagnose osteoarthritis (OA) … from at least one input skeletal image…”).

Yen does not explicitly disclose generating a pre-lower extremity image by preprocessing an original lower extremity image. Fitz is in the same field of art, a system for joint assessment. Further, Fitz teaches generating a pre-lower extremity image by preprocessing (Fitz, ¶ [0226]: “Preprocessing (filtering) of the slice images can be used to improve the contrast of the bone regions so that they can be extracted accurately using simple thresholding or a more involved image segmentation tool like LiveWire or active contour models.” The Examiner notes that outputting a preprocessed image corresponds to “generating”) an original lower extremity image (Fitz, ¶ [0225]: “a CT scan covering at least the hip, knee and ankle region is acquired.”) from a camera (Fitz, ¶ [0197]: “CT or MRI scanner”).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Yen by incorporating the preprocessing method taught by Fitz, to make a medical imaging system that can preprocess images; one of ordinary skill in the art would be motivated to combine the references since, among its several aspects, the present invention recognizes a need to improve the visibility of bone in images (Fitz, ¶ [0226]: “Preprocessing (filtering) of the slice images can be used to improve the contrast of the bone regions so that they can be extracted accurately using simple thresholding or a more involved image segmentation tool like LiveWire or active contour models.”).

The combination of Yen and Fitz then teaches identifying a plurality of anatomical landmarks in the pre-lower extremity image based on a machine learning model (Yen, ¶ [0051-0052]: “A knee landmark model is used to identify the positions of knee joint. A trained a CNN model (HRnet) is used to predict those positions (landmarks)”); generating the lower extremity image in which a position of the anatomical landmark is identified by processing the pre-lower extremity image (Yen, ¶ [0033]: “AI image analysis/recognition comprises … the ROI check identifies the ROI block of a specific object (e.g. a knee joint or a hip) from the input image. The ROI check may be executed by another AI trained by a collection of images labeled with a specific object and the location of ROI within the images”; ¶ [0054 and 0057]: “The input is the knee joint ROI inferred … The inputs are 4 knee joint ROIs (LT, MT, LF, MF, as shown in FIG. 4 )”. Yen teaches identifying regions of interest within input images, then generating and inputting the identified ROIs into other models); and deriving a width of the knee joint space in the lower extremity image using the position of the anatomical landmark.
(Yen, ¶ [0036]: “for KOA diagnosis, the first model may be an object detection model to find the region of interest (bounding box) of the knee joint from the entire image. The second model may find the edge of the knee joint bone or specific anatomical locations from the region of interest (ROI) according to the results of the first model. The third model may use these points to calculate the joint space width (JSW)”)

Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.

CLAIM 8

Regarding claim 8, the combination of Yen and Fitz teaches the method of claim 7. In addition, the combination teaches generating images in which a specific anatomical area of the lower extremity image is enlarged (Yen, ¶ [0018-0019], see FIG. 4 and FIG. 5 with annotations below; FIG. 5 is an enlarged view of an identified area in FIG. 4) (Fitz, ¶ [0139]: “FIG. 8A shows a magnified view of an area of diseased cartilage”);

[Annotated FIG. 4 and FIG. 5: media_image1.png]

and displaying the images on a display device (Fitz, ¶ [0387]: “Multiple planes may be displayed simultaneously, for example using a split screen display”).

Claims 4 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Yen in view of Fitz, and further in view of Hu et al. (US20220028166A1, hereinafter Hu).

CLAIM 4

Regarding Claim 4, the combination of Yen and Fitz teaches the apparatus of Claim 1. In addition, the combination teaches identifying a plurality of anatomical landmarks in the pre-lower extremity image based on a machine learning model (Yen, ¶ [0051-0052]); and deriving a width of the knee joint space in the lower extremity image using the position of the anatomical landmark (Yen, ¶ [0036]).
The combination of Yen and Fitz does not explicitly disclose generating a marker indicating the width of the knee joint space on the lower extremity image; and displaying the lower extremity image including the marker on a display device.

[Annotated FIG. 8: media_image2.png]

Hu is in the same field of art, a system for processing knee joint images. Further, Hu teaches generating a marker indicating the width of the knee joint space on the lower extremity image (Hu, ¶ [0009-0010]: “the processor is further configured to re-segment the raw image data and recreate a three-dimensional surface model of a portion of the patient's joint … the processor is further configured to place a marker or line on the surface model of the patient's joint. The marker may be placed on a visual representation of the patient's distal femoral epicondyles and a line may be drawn through the condyles.”; see FIG. 8 E-F. Hu teaches generating a line connecting two landmarks); and displaying the lower extremity image including the marker on a display device (Hu, ¶ [0049]: “In FIG. 8F, the landmark points 870 and the axis 880 are displayed in conjunction with the oblique 2D slice.”, see annotated FIG. 8 above).

Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Yen and Fitz by incorporating the method of rendering graphical elements that indicate anatomical landmarks taught by Hu, to make a system that identifies and visualizes landmarks; one of ordinary skill in the art would be motivated to combine the references since, among its several aspects, the present invention recognizes a need to improve the user’s experience (“The data set that is obtained indicates any particular variations or nuances in the patient's anatomy, and processing that data can provide a surgeon with a detailed map of the relevant body portion ahead of time.”).
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.

CLAIM 9

Regarding Claim 9, the combination of Yen and Fitz teaches the method of Claim 7. In addition, the combination teaches identifying a plurality of anatomical landmarks in the pre-lower extremity image based on a machine learning model (Yen, ¶ [0051-0052]); and deriving a width of the knee joint space in the lower extremity image using the position of the anatomical landmark (Yen, ¶ [0036]).

The combination of Yen and Fitz does not explicitly disclose generating a marker indicating the width of the knee joint space on the lower extremity image; and displaying the lower extremity image including the marker on a display device. Hu is in the same field of art, a system for processing knee joint images. Further, Hu teaches generating a marker indicating the width of the knee joint space on the lower extremity image (Hu, ¶ [0009-0010]: “the processor is further configured to re-segment the raw image data and recreate a three-dimensional surface model of a portion of the patient's joint … the processor is further configured to place a marker or line on the surface model of the patient's joint. The marker may be placed on a visual representation of the patient's distal femoral epicondyles and a line may be drawn through the condyles.”; see FIG. 8 E-F. Hu teaches generating a line connecting two landmarks); and displaying the lower extremity image including the marker on a display device (Hu, ¶ [0049]: “In FIG. 8F, the landmark points 870 and the axis 880 are displayed in conjunction with the oblique 2D slice.”, see annotated FIG. 8 below).

[Annotated FIG. 8: media_image2.png]

Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Yen and Fitz by incorporating the method of rendering graphical elements that indicate anatomical landmarks taught by Hu, to make a system that identifies and visualizes landmarks; one of ordinary skill in the art would be motivated to combine the references since, among its several aspects, the present invention recognizes a need to improve the user’s experience (“The data set that is obtained indicates any particular variations or nuances in the patient's anatomy, and processing that data can provide a surgeon with a detailed map of the relevant body portion ahead of time.”).

Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.

Allowable Subject Matter

Claims 6 and 10 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The closest prior art references for Claims 6 and 10 are:

Yen et al. (US-20230186469-A1), which is directed to a method for improving the diagnostic accuracy of an artificial intelligence to diagnose osteoarthritis. The method comprises: receiving, by a grading module implemented in a computer system, a plurality of feature values; and generating, by the grading module, a quantitative Kellgren-Lawrence (KL) grade based on the plurality of feature values; the quantitative KL grade is used to diagnose osteoarthritis.

Lee et al. (US-20250131560-A1, filed 2022), which is directed to a medical image analysis method comprising: acquiring a medical image to be analyzed; acquiring a first feature region related to a first knee from the medical image through the knee detection model; acquiring a second feature region related to a second knee from the medical image through the knee detection model; and acquiring a first knee image to be analyzed, related to the first knee, and a second knee image to be analyzed, related to the second knee, on the basis of the first feature region and the second feature region of the medical image.

While both Yen and Lee teach deep-learning-based systems that analyze knee joint images and calculate joint space width for osteoarthritis diagnosis, neither Yen, nor Lee, nor the combination teaches “the width of the knee joint space is derived by calculating an average of a first distance and a second distance, wherein the first distance is a distance between a medial or lateral femoral condyle and a medial or lateral anterior border of tibia measured from a midline of a medial plateau or a center point of the medial plateau, and the second distance is a distance between the medial or lateral femoral condyle and a medial or lateral posterior border of tibia, measured from the center point of the medial plateau.”

Pertinent Arts

The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure: Erne et al. (Erne, Felix, et al. "Automated artificial intelligence-based assessment of lower limb alignment validated on weight-bearing pre- and postoperative full-leg radiographs." Diagnostics 12.11, published 2022); Chan et al. (Chan, E. F., et al. "Characterization of the mid-coronal plane method for measurement of radiographic change in knee joint space width across different levels of image parallax." Osteoarthritis and Cartilage 29.9, published 2021).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NHUT HUY (JEREMY) PHAM, whose telephone number is (703) 756-5797. The examiner can normally be reached Mo-Fr, 8:30am-6pm ET. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, O'Neal Mistry, can be reached at (313) 446-4912. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/NHUT HUY PHAM/ Examiner, Art Unit 2674
/Ross Varndell/ Primary Examiner, Art Unit 2674
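The limitation the examiner flags as allowable in claims 6 and 10, deriving the joint space width as the average of two landmark-to-landmark distances, can be sketched concretely. The coordinates and helper names below are illustrative assumptions, not taken from the application:

```python
import math

def distance(p, q):
    """Euclidean distance between two 2-D landmark points (pixels)."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def joint_space_width(femoral_condyle, anterior_tibia, posterior_tibia):
    """Average of the condyle-to-anterior-border and condyle-to-posterior-border
    distances, mirroring the two-distance average recited in claims 6 and 10."""
    first = distance(femoral_condyle, anterior_tibia)
    second = distance(femoral_condyle, posterior_tibia)
    return (first + second) / 2

# Hypothetical landmark coordinates on a radiograph (x, y in pixels).
condyle = (120.0, 200.0)
anterior = (120.0, 206.0)    # 6 px directly below the condyle
posterior = (123.0, 204.0)   # 5 px away (3-4-5 right triangle)
print(joint_space_width(condyle, anterior, posterior))  # 5.5
```

The claim language does not prescribe a distance metric or coordinate system; Euclidean pixel distance is used here purely for illustration.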

Prosecution Timeline

Jan 17, 2024
Application Filed
Feb 10, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598397
DIRT DETECTION METHOD AND DEVICE FOR CAMERA COVER
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12598074
FACIAL RECOGNITION METHOD AND APPARATUS, DEVICE, AND MEDIUM
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12597254
TRACKING OPERATING ROOM PHASE FROM CAPTURED VIDEO OF THE OPERATING ROOM
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12592087
IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12579622
METHOD AND APPARATUS FOR PROCESSING IMAGE SIGNAL, ELECTRONIC DEVICE, AND COMPUTER-READABLE STORAGE MEDIUM
Granted Mar 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 79%
With Interview: 99% (+26.8%)
Median Time to Grant: 3y 0m
PTA Risk: Low
Based on 53 resolved cases by this examiner. Grant probability derived from career allow rate.
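The headline figures hang together arithmetically: 42 of 53 resolved cases gives the 79% base grant probability, and the 99% with-interview figure is consistent with adding the +26.8-point lift and capping the display below certainty. A sketch under those assumptions (the cap is a guess; the page does not state its exact formula):

```python
def grant_probability(granted: int, resolved: int) -> float:
    """Career allow rate as a proportion of resolved cases."""
    return granted / resolved

# Values from this page: 42 granted out of 53 resolved cases.
base = grant_probability(42, 53)
print(f"Base grant probability: {base:.0%}")    # 79%

# Assumption: the interview lift (+26.8 points) is additive and the
# displayed probability is capped at 99%.
with_interview = min(base + 0.268, 0.99)
print(f"With interview: {with_interview:.0%}")  # 99%
```

The additive-lift-with-cap reading is only one plausible interpretation; the with-interview rate could equally be measured directly among interviewed cases.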
