Prosecution Insights
Last updated: April 19, 2026
Application No. 18/928,316

AUTOMATED FRAME FEATURE DETECTION IN FRAME MATCHING PROCESS USING MACHINE LEARNING

Status: Non-Final OA (§103)
Filed: Oct 28, 2024
Examiner: FERNANDEZ, KATHERINE L
Art Unit: 3798
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: DePuy Synthes Products, Inc.
OA Round: 1 (Non-Final)

Grant Probability: 57% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 4y 5m
Grant Probability with Interview: 95%

Examiner Intelligence

Career Allow Rate: 57% (442 granted / 770 resolved; -12.6% vs TC avg)
Interview Lift: +37.8% for resolved cases with an interview (a strong lift)
Typical Timeline: 4y 5m average prosecution; 58 applications currently pending
Career History: 828 total applications across all art units
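
The headline cards are internally consistent. A minimal Python sketch, assuming the dashboard simply combines the career allow rate with the observed interview lift (an assumption; the tool's actual model is not disclosed on the page):

```python
# Sketch of how the headline cards appear to be derived (assumed formula,
# not the tool's documented method).
granted, resolved = 442, 770

career_allow_rate = granted / resolved                 # 0.574 -> shown as 57%
interview_lift = 0.378                                 # reported +37.8% lift
with_interview = career_allow_rate + interview_lift    # 0.952 -> shown as 95%
implied_tc_average = career_allow_rate + 0.126         # from "-12.6% vs TC avg"

print(f"career allow rate:  {career_allow_rate:.1%}")  # 57.4%
print(f"with interview:     {with_interview:.1%}")     # 95.2%
print(f"implied TC average: {implied_tc_average:.1%}") # 70.0%
```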

Statute-Specific Performance

§101: 6.9% (-33.1% vs TC avg)
§103: 42.9% (+2.9% vs TC avg)
§102: 17.1% (-22.9% vs TC avg)
§112: 25.6% (-14.4% vs TC avg)

Deltas are measured against an estimated Tech Center average; based on career data from 770 resolved cases.
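
One sanity check worth noting: the four deltas all point at the same baseline. A short sketch, assuming each delta is a simple difference between the examiner's rate and a single estimated Tech Center average (our inference, not a documented formula):

```python
# Subtracting each reported delta from the examiner's per-statute rate
# recovers the same ~40% baseline in every row, which suggests one shared
# Tech Center average estimate.
examiner_rate = {"101": 0.069, "103": 0.429, "102": 0.171, "112": 0.256}
delta_vs_tc   = {"101": -0.331, "103": 0.029, "102": -0.229, "112": -0.144}

for statute, rate in examiner_rate.items():
    tc_avg = rate - delta_vs_tc[statute]   # 0.400 for all four statutes
    print(f"§{statute}: examiner {rate:.1%}, implied TC average {tc_avg:.1%}")
```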

Office Action (§103)

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “neural network trainer configured to be trained…configured to generate…” in claim 1. Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. The “neural network trainer” has been interpreted to correspond to a processor, as set forth in paragraph [0109] of Applicant’s PG-Pub 2025/0152244, along with the algorithm/steps for performing the functions as set forth in the specification, and equivalents thereof.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Objections

Claims 1, 9, 11, 19 and 29 are objected to because of the following informalities: In claim 1, in line 5, “image” should be changed to --- images ---. Applicant is advised that should claim 7 be found allowable, claim 9 will be objected to under 37 CFR 1.75 as being a substantial duplicate thereof. When two claims in an application are duplicates or else are so close in content that they both cover the same thing, despite a slight difference in wording, it is proper after allowing one claim to object to the other as being a substantial duplicate of the allowed claim. See MPEP § 608.01(m). In claim 11, line 2, --- detect --- should be added before “adjustment”. In claim 19, in line 5, “image” should be changed to --- images ---. In claim 29, line 2, --- detect --- should be added before “adjustment”. Appropriate correction is required.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-36 is/are rejected under 35 U.S.C. 103 as being unpatentable over Gutmann et al. (US Pub No. 2021/0312625) in view of Blau (US Pub No. 2022/0028113) and Parasuraman et al. (“Training Convolutional Neural Networks (CNN) for Counterfeit IC Detection by the Use of Simulated X-Ray Images”, 2021).

With regards to claims 1 and 19, Gutmann et al. disclose an automatic feature matching method and system for orthopedic fixators, comprising: a frame generator (“computing devices”) configured to generate a simulation of an orthopedic fixator system (i.e. “fixation apparatus”) (paragraph [0110], referring to the operations/sub-operations being performed by one or more computing devices; paragraph [0107], referring to the “Treatment Simulation tab 1121 causing interface 1100 to display a graphic representation (i.e. simulation) 1150 of the position and orientation of the first and the second anatomical structure segments and the rings of the fixation apparatus”; Figure 11); an artificial X-ray generator (“computing devices”) configured to generate a plurality of artificial X-ray images (i.e. “simulated images”, 1501-A, 1501-B) including the generated simulation of the orthopedic fixator system in the plurality of artificial X-ray image (paragraph [0110], referring to the operations/sub-operations being performed by one or more computing devices; paragraphs [0111]-[0112], referring to the display of images 1501-A and 1501-B using one or more graphical user interfaces of a computing system, wherein the images 1501-A and 1501-B are simulated images (as opposed to actual x-rays) which are images of a fixator (1510) including fixator ring (1511), distal fixator ring (1512), fixator struts (1513) and twelve hinges (1541); Figures 14, 15A-25); detect orthopedic fixator system features based on the plurality of artificial X-ray images (Abstract; paragraph [0126], referring to size characteristics of the hinges of the fixator in the first image being determined by the software by performing an automated image analysis (e.g., using a Hough transformation) on the first image, wherein the software may reasonably assume that the size characteristics of the hinges should be the same or similar in the first and second images; paragraphs [0128]-[0137]; Figure 14).

However, though Gutmann et al. do disclose that orthopedic fixator system features are detected based on the plurality of artificial X-ray images, Gutmann et al. do not specifically disclose that the detection of the orthopedic fixator system features is performed by having the plurality of artificial X-ray images be labelled and that the system further comprises a neural network training data including the plurality of labelled artificial X-ray images and a neural network trainer configured to be trained based on the plurality of labeled artificial X-ray images, wherein the neural network trainer is configured to generate a frame detection neural network that is configured to detect the orthopedic fixator system features in real X-ray images input into the frame detection neural network.

Blau discloses classifying a first object in an X-ray projection image using artificial intelligence, wherein the object may correspond to an anatomical structure (i.e. bone), a nail, a bone plate or a bone screw, or a surgical tool like a sleeve, k-wire or aiming device (Abstract; paragraphs [0012], [0020]). A deep neural net (DNN) may be utilized for a classification of the object in an X-ray projection image (paragraphs [0031]-[0032]). A neural net may be trained on the basis of simulated X-ray images, wherein a first neural net may be trained to evaluate X-ray image data so as to classify an anatomical structure in the 2D projection image, whereas a second neural net may be trained to detect the location of that structure in the 2D projection image (paragraphs [0035]-[0036], note that the neural net corresponds to a neural network, wherein simulated/artificial X-ray images are used to train the neural network and wherein the neural network is configured to detect fixator system features (i.e. nail, surgical tool like an aiming device) in real X-ray images (i.e. 2D projection image) input into the neural network). The use of artificial intelligence simplifies product development, is more cost-effective and allows an operating room workflow that more closely resembles the typical workflow (paragraph [0015]).

Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have the system of Gutmann et al. further comprise a neural network training data including the plurality of labelled artificial X-ray images and a neural network trainer configured to be trained based on the plurality of labeled artificial X-ray images, wherein the neural network trainer is configured to generate a frame detection neural network that is configured to detect the orthopedic fixator system features in real X-ray images input into the frame detection neural network, as taught by Blau, in order to implement the object (i.e. orthopedic fixator system features) detection using artificial intelligence, thereby simplifying product development, saving cost and allowing an operating room workflow that more closely resembles the typical workflow (paragraph [0015]).

However, the above combined references do not specifically disclose that the detection of the orthopedic fixator system features is performed by having the plurality of artificial X-ray images be labelled, wherein the training data includes the plurality of “labelled” artificial X-ray images.

Parasuraman et al. disclose the use of a virtual X-ray simulation tool to generate synthetic (i.e. artificial) radiographic images of a component under test and validate its effectiveness as training data for the convolutional neural network in identifying a counterfeit component (Abstract).
CNN is a method in deep learning that has shown significant achievement in the image recognition task, wherein the CNN is trained with synthetic X-ray images which are known/labeled as authentic or counterfeit (Abstract; pg. 5, Sections 3.4 and 4.1, referring to the training of CNN with a synthetic dataset which is divided into authentic images and counterfeit images, which implicitly requires labeling the images as such; further see pg. 6, Section 5. Conclusions, referring to generating a synthetic annotated/labeled dataset for training a neural network). A CNN trained with synthetic images provided a maximum prediction accuracy of 99.6%, thus providing an increase in prediction accuracy and precision compared to a network trained with actual X-rays (Abstract; pg. 6, Section 5. Conclusions).

Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have the detection of the orthopedic fixator system features of the above combined references be performed by having the plurality of artificial X-ray images be labelled, wherein the training data includes the plurality of “labelled” artificial X-ray images, as taught by Parasuraman, in order to provide an increase in prediction accuracy and precision in classification of images (Abstract; pg. 6, Section 5. Conclusions).

With regards to claims 2 and 20, Gutmann et al. disclose that the frame generator randomly selects orthopedic fixator parameters and configurations based upon clinical data (paragraph [0107], referring to the one or more graphical representations of the position and orientation of the first and second anatomical structure segments and the rings of the fixation apparatus may include day-by-day graphical representations of the position and orientation of the first and second anatomical structure segments and the rings of the fixation apparatus throughout treatment for the anatomical structure deformity, thereby resulting in a random selection of fixator parameters and configurations associated).

With regards to claims 3 and 21, Gutmann et al. disclose that the frame generator further generates anatomy elements connected to the generated orthopedic fixator (paragraph [0107], referring to the one or more graphic representations of the position and orientation of the first and second anatomical structure segments; see Figure 11).

With regards to claims 4 and 22, Gutmann et al. disclose that the artificial X-ray generator generates multiple X-rays of the generated orthopedic frame using different aspects and X-ray parameters (paragraphs [0111]-[0112], referring to the images 1501-A and 1501-B corresponding to an anteroposterior (AP) view image and a lateral (LAT) view image, respectively, and thus associated with different aspects and X-ray parameters).

With regards to claims 5-6 and 23-24, the above combined references disclose that the system further comprises labeled clinical images and datasets of orthopedic fixators, wherein the neural network training data includes the labeled clinical images of orthopedic fixators (see Parasuraman, pg. 5, Section 3.4, referring to the CNN being further trained with available actual X-ray images of labeled authentic and counterfeit components, wherein in the above combined references, said “actual X-ray images” would comprise the actual X-ray images of Gutmann et al., which comprise clinical images and datasets of orthopedic fixators (see Gutmann et al., paragraph [0112], referring to non-simulated images, such as x-rays captured using an imager, being used, which show an actual fixator that is physically attached to an actual anatomical structure segment)).

With regards to claims 7-9 and 25-27, Parasuraman et al. disclose that at least a subset of the labelled clinical images of orthopedic fixators are used by the neural network trainer to validate and test the frame detection neural network (Abstract, pg. 5, Sections 3.4, 4, 4.1, referring to the virtual X-ray simulation tool being used to generate synthetic radiographic images to test and validate its effectiveness as training data for the convolutional neural network; Figure 7).

With regards to claims 10 and 28, Gutmann et al. disclose that the generated frame detection neural network is configured to detect hinges and their positions in the images input into the frame detection neural network (paragraphs [0109]-[0110], [0113]-[0114], referring to detecting hinge locations and their positions in the images; Figures 14A-B).

With regards to claims 11 and 29, Gutmann et al. disclose that the generated frame detection neural network is configured to [detect] adjustment members (i.e. fixator rings) in the images input into the frame detection neural network (paragraph [0039], referring to the fixator rings corresponding to adjustment members; paragraph [0114], referring to the software using the indicated hinge locations to determine locations of the fixator rings in the images; Figure 14).

With regards to claims 12 and 30, Gutmann et al. disclose that the generated frame detection neural network is configured to detect hinges and adjustment members (i.e. fixator rings) in the images input into the frame detection neural network (paragraphs [0109]-[0110], [0113]-[0114], referring to detecting hinge locations and their positions in the images; paragraph [0039], referring to the fixator rings corresponding to adjustment members; paragraph [0114], referring to the software using the indicated hinge locations to determine locations of the fixator rings in the images; Figures 14A-B).

With regards to claims 13 and 31, Blau discloses that the generated frame detection neural network is configured to detect a full frame of the orthopedic fixator in the images input into the frame detection neural network (paragraphs [0035]-[0036], referring to the neural network receiving a simulated image as its input and then returning a refined image, which would correspond to a full frame of the orthopedic fixator in the above combined references).

With regards to claims 14 and 32, Gutmann et al. disclose that the generated frame detection neural network is configured to detect features (i.e. hinges, fixator rings) in two images (i.e. 1501-A, 1501-B) input into the frame detection neural network (paragraphs [0109]-[0110], [0113]-[0114], referring to detecting hinge locations and their positions in the images; paragraph [0039], referring to the fixator rings corresponding to adjustment members; paragraph [0114], referring to the software using the indicated hinge locations to determine locations of the fixator rings in the images; Figures 14A-B).
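
The mappings above repeatedly cite Gutmann's automated image analysis, “e.g., using a Hough transformation,” for locating and sizing hinges. Neither reference publishes code; as a rough illustration of that style of circle detection, here is a minimal OpenCV sketch, with an invented file name and guessed parameters:

```python
# Illustrative circle detection in the spirit of the cited "automated
# image analysis (e.g., using a Hough transformation)". The file name
# and all parameter values are assumptions made for this sketch.
import cv2
import numpy as np

image = cv2.imread("fixator_xray.png", cv2.IMREAD_GRAYSCALE)  # hypothetical
blurred = cv2.medianBlur(image, 5)      # suppress noise before voting

circles = cv2.HoughCircles(
    blurred,
    cv2.HOUGH_GRADIENT,
    dp=1.2,        # inverse ratio of accumulator resolution
    minDist=40,    # minimum spacing between detected centers
    param1=120,    # Canny high threshold used internally
    param2=30,     # accumulator threshold; lower finds more circles
    minRadius=8,   # plausible hinge radii in pixels (guessed)
    maxRadius=40,
)

if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        print(f"hinge candidate at ({x}, {y}), radius {r}px")
```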
With regards to claims 15 and 33, Parasuraman et al. disclose that the system further comprises an image processor configured to apply image processing to one or more of said plurality of labelled artificial X-ray images to generate a plurality of processed labelled artificial X-ray images (pg. 2, left column, 1st paragraph, referring to the synthetic dataset being generated inducing dimensional tolerance by automated variance scaling (i.e. image processing) of the part cross-section, to create diversity in the synthetic dataset); wherein the network training data further includes the plurality of processed labelled artificial X-ray images, and the neural network trainer is configured to be trained based on the plurality of labelled artificial X-ray images and/or the plurality of processed labelled artificial X-ray images (pg. 5, Sections 3.4, 4.1, referring to the CNN being trained using the synthetic dataset divided into a training dataset of identified/labeled authentic and counterfeit images).

With regards to claims 16 and 34, Parasuraman et al. disclose that the image processor is configured to apply image processing including at least one of blurring, aspect ratio, noise, annotations, brightness, contrast, rotation, scaling, translation, color, and cropping (pg. 2, left column, 1st paragraph, referring to the synthetic dataset being generated inducing dimensional tolerance by automated variance scaling (i.e. image processing) of the part cross-section, to create diversity in the synthetic dataset).

With regards to claims 17 and 35, Blau discloses that the neural network trainer generates one or more trained models configured to detect orthopedic fixators in an image (paragraph [0036], referring to the first neural net being trained to evaluate X-ray image data so as to classify an anatomical structure in the 2D projection image, wherein a second neural net may be trained to detect the location of that structure in the 2D projection image, and thus the “first neural net” corresponds to the trained model).

With regards to claims 18 and 36, Blau discloses that the trained model is used to train a new model for detecting orthopedic fixators using a new neural network training dataset (paragraph [0036], referring to the first neural net being trained to evaluate X-ray image data so as to classify an anatomical structure in the 2D projection image, wherein a second neural net may be trained to detect the location of that structure in the 2D projection image, and a third net may be trained to determine the 3D location of that structure with respect to a coordinate system, and thus the “second neural net” corresponds to the “new model”).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KATHERINE L FERNANDEZ whose telephone number is (571) 272-1957. The examiner can normally be reached Monday-Friday 9:00 AM - 5:30 PM (ET). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Pascal Bui-Pho, can be reached at (571) 272-2714. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KATHERINE L FERNANDEZ/
Primary Examiner, Art Unit 3798
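
Taken together, the rejected independent claims describe a recognizable training pipeline: simulate fixator frames, render labelled artificial X-rays from them, optionally augment the images (the blurring, brightness/contrast, rotation, and scaling operations of claims 15-16), train a detector, and run it on real X-rays. A minimal, hypothetical PyTorch sketch of such a pipeline follows; the dataset layout, feature classes, and all hyperparameters are invented for illustration and appear in neither the application nor the cited art (assumes torchvision >= 0.15 for the v2 transforms):

```python
# Hypothetical sketch of the claimed pipeline: labelled artificial X-rays
# of simulated fixator frames train a detector that is then applied to
# real X-rays. Nothing here is taken from the application or cited art.
import torch
from torch.utils.data import DataLoader, Dataset
import torchvision
from torchvision.transforms import v2 as T


class SyntheticFixatorXrays(Dataset):
    """Labelled artificial X-rays: (uint8 image tensor, target dict with
    'boxes' and 'labels' for fixator features such as hinges and rings)."""

    def __init__(self, samples, transforms):
        self.samples = samples        # produced by a frame/X-ray generator
        self.transforms = transforms

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        image, target = self.samples[idx]
        return self.transforms(image), target


# Photometric augmentation in the spirit of claims 15-16 (blur,
# brightness, contrast); geometric ops would also require box updates.
augment = T.Compose([
    T.GaussianBlur(kernel_size=3),
    T.ColorJitter(brightness=0.2, contrast=0.2),
    T.ToDtype(torch.float32, scale=True),   # uint8 [0,255] -> float [0,1]
])

# Background plus two illustrative feature classes: hinge, adjustment member.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(num_classes=3)
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)


def train(dataset, epochs=10):
    loader = DataLoader(dataset, batch_size=4,
                        collate_fn=lambda batch: tuple(zip(*batch)))
    model.train()
    for _ in range(epochs):
        for images, targets in loader:
            loss_dict = model(list(images), list(targets))  # detection losses
            loss = sum(loss_dict.values())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()


# After training, the "frame detection neural network" runs on real X-rays:
# model.eval()
# detections = model([real_xray_tensor])[0]   # boxes, labels, scores
```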

Prosecution Timeline

Oct 28, 2024: Application Filed
Mar 21, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology:

Patent 12599309: METHOD AND DEVICE FOR DETERMINING VOLEMIC STATUS AND VASCULAR TONE (2y 5m to grant; granted Apr 14, 2026)
Patent 12579646: SYSTEM AND METHOD FOR DETERMINING A RISK OF HAVING OR DEVELOPING STEATOHEPATITIS AND/OR A COMPLICATION THEREOF (2y 5m to grant; granted Mar 17, 2026)
Patent 12569151: SYSTEM FOR MONITORING AN OCCUPANT OF A MOTOR VEHICLE (2y 5m to grant; granted Mar 10, 2026)
Patent 12573502: ULTRASOUND UTILITY STATION (2y 5m to grant; granted Mar 10, 2026)
Patent 12564383: ENHANCED ULTRASOUND IMAGING APPARATUS AND ASSOCIATED METHODS OF WORK FLOW (2y 5m to grant; granted Mar 03, 2026)

Study what changed to get past this examiner. Based on the 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 57%
With Interview: 95% (+37.8%)
Median Time to Grant: 4y 5m
PTA Risk: Low

Based on 770 resolved cases by this examiner. Grant probability derived from career allow rate.
