Prosecution Insights
Last updated: April 19, 2026
Application No. 18/682,215

AUTOFOCUS

Status: Non-Final OA (§103)
Filed: Feb 08, 2024
Examiner: PATEL, PINALBEN V
Art Unit: 2673
Tech Center: 2600 — Communications
Assignee: Brainlab AG
OA Round: 1 (Non-Final)

Grant Probability: 89% (Favorable)
Predicted OA Rounds: 1-2
Predicted Time to Grant: 2y 6m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 89% (above average; +26.8% vs TC avg); 484 granted / 545 resolved
Interview Lift: +9.9% (moderate, roughly +10%), comparing resolved cases with vs. without an interview
Typical Timeline: 2y 6m average prosecution; 23 applications currently pending
Career History: 568 total applications across all art units
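As a quick sanity check on the examiner figures above (a minimal sketch; the with- and without-interview sub-counts are not shown on this page, so the interview lift is taken as reported rather than recomputed):

```python
# Sanity-check the reported examiner statistics.
granted, resolved = 484, 545

allow_rate = 100 * granted / resolved
print(f"Career allow rate: {allow_rate:.1f}%")   # ~88.8%, displayed as 89%

# The page reports the allow rate as +26.8 points above the Tech Center
# average, which implies a TC-average allow rate of roughly:
tc_avg = allow_rate - 26.8
print(f"Implied TC-average allow rate: {tc_avg:.1f}%")   # ~62.0%
```

The 484/545 figure rounds to the displayed 89%, and the +26.8-point delta implies a Tech Center baseline of about 62%.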

Statute-Specific Performance

§101:  9.1% (-30.9% vs TC avg)
§103: 59.9% (+19.9% vs TC avg)
§102:  5.9% (-34.1% vs TC avg)
§112: 14.9% (-25.1% vs TC avg)

Tech Center averages are estimates; based on career data from 545 resolved cases.
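Notably, the four deltas above all point back to the same Tech Center baseline. A minimal sketch recovering it, assuming each delta is simply the examiner's rate minus the TC-average rate for that statute:

```python
# Examiner's statute-specific rejection rates and reported deltas
# vs the Tech Center average, as listed above.
rates = {"101": (9.1, -30.9), "103": (59.9, +19.9),
         "102": (5.9, -34.1), "112": (14.9, -25.1)}

for statute, (rate, delta) in rates.items():
    tc_avg = round(rate - delta, 1)   # implied TC-average baseline
    print(f"§{statute}: {rate}% (implied TC avg = {tc_avg}%)")
```

Every statute yields an implied baseline of 40.0%, suggesting the dashboard compares each rate against a single pooled Tech Center estimate.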

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 02/08/2024 and 06/27/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Priority

Foreign priority is not claimed.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

Claims 20-22 and 25-27 are interpreted to invoke 35 U.S.C. 112(f).

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Conversely, absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph; that presumption is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: “a receiving unit”, “a first obtaining unit”, “a generating unit”, “a second obtaining unit”, and “a determining unit” in claims 20 and 21. Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. The original specification, pg. 10, lines 28-33, discloses a CPU (processor(s)) that executes the functions performed by the unit(s).

If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid their being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 9-22, and 24-27 are rejected under 35 U.S.C. 103 as being unpatentable over Breininger et al. (US Pub No. 20190347793 A1, as provided) in view of Blau et al. (US Pub No. 20210248779 A1, as provided).

Regarding Claim 1, Breininger discloses a medical computer-implemented method of assessing an imaging quality measure of medical images, the method comprising:

receiving a first positioning data set configured to position an imaging source and an imaging detector of a medical imaging X-Ray system in relation to an object to be imaged into a first imaging position; (Breininger, [0075], discloses a method with the determination of a reference position of at least one object in a reference image. A second step entails the determination of a current position of the object in a current fluoroscopic image. According to the invention, the current fluoroscopic image corresponds to a current fluoroscopic image acquired at a point in time during the course of an intervention, i.e. a medical intervention.
The current fluoroscopic image is preferably a two-dimensional projection acquired via a C-arm X-ray system, but it can also correspond to a three-dimensional dataset. The fluoroscopic image is typically acquired without the administration of contrast medium, but can also be generated with the administration of contrast medium; the current object image at the first position is obtained by a sensor system)

obtaining a first medical image of the object from the first imaging position; (Breininger, [0075], discloses a method with the determination of a reference position of at least one object in a reference image. A second step entails the determination of a current position of the object in a current fluoroscopic image. According to the invention, the current fluoroscopic image corresponds to a current fluoroscopic image acquired at a point in time during the course of an intervention, i.e. a medical intervention. The current fluoroscopic image is preferably a two-dimensional projection acquired via a C-arm X-ray system, but it can also correspond to a three-dimensional dataset. The fluoroscopic image is typically acquired without the administration of contrast medium, but can also be generated with the administration of contrast medium; the current object image at the first position is obtained by a sensor system)

generating at least a second positioning data set configured to position the imaging source and the imaging detector of the medical imaging X-Ray system in relation to the object to be imaged into a second imaging position, wherein the second positioning data set differs from the first positioning data set in at least one positioning parameter; (Breininger, [0076], discloses the reference image corresponds to a preferably three-dimensional image dataset. This was acquired or generated at a reference time point, preferably before the medical intervention. The reference image is, for example, embodied as a computed-tomography image or magnetic-resonance image.
The reference image can also be embodied as a two-dimensional image. Common to the current fluoroscopic image and the reference image is the fact that they both depict the same body region of interest of the patient and hence directly or indirectly include the same anatomical objects or structures within the body region of interest; a second image of the object or region of the patient (reference image) is generated at the second position, captured with the imaging sensor)

obtaining at least a second medical image of the object from the second imaging position; (Breininger, [0076], discloses the reference image corresponds to a preferably three-dimensional image dataset. This was acquired or generated at a reference time point, preferably before the medical intervention. The reference image is, for example, embodied as a computed-tomography image or magnetic-resonance image. The reference image can also be embodied as a two-dimensional image. Common to the current fluoroscopic image and the reference image is the fact that they both depict the same body region of interest of the patient and hence directly or indirectly include the same anatomical objects or structures within the body region of interest; a second image of the object or region of the patient (reference image) is generated at the second position, captured with the imaging sensor)

determining at least one imaging quality measure in the first medical image and in the at least second medical image and comparing the determined at least one imaging quality measure with an evaluation criterion of the at least one imaging quality measured and deriving an assessment result, wherein: (Breininger, [0078-0080], discloses that, where the object is a blood vessel, the determination of the current position of the object in the current fluoroscopic image includes the identification or segmentation of an instrument located in the blood vessel, an endoprosthesis such as, for example, a stent or stent part, an aortic valve or cardiac valve, the outer contour and/or the volume of the blood vessel. Herein, the invention is based on the assumption that the position or location of medical instruments, insofar as they are within the blood vessel, substantially represents the current position or the current course of the blood vessel; a further step entails the generation of the superposition image by superimposing the current fluoroscopic image and the reference image. To this end, generally at the start of the medical intervention, the reference image is registered to existing fluoroscopic images, i.e. brought into conformity based on various common features (for example the course of vessels or bone structure). Preferably, the conformity is good enough to be valid for other viewing directions as well. Methods that are known per se can be used for the registration; a further step entails the determination of at least one parameter characterizing a measure of discrepancy between the reference position and the current position of an object in the superposition image. The superimposition of the reference image and the current fluoroscopic image can, for example, reveal differences between the current course and the reference course due to patient movement and/or for the preferred case “object=blood vessel” due to the medical instruments introduced into the blood vessel. The current position can be displaced, twisted, stretched and/or the like relative to the reference position.
These discrepancies are visualized by the superposition and automatically quantized in the current step; the quality parameter is determined by comparing the first and second images by superposing them on each other and determining discrepancies within the superposed images)

in case of a positive assessment result of the first medical image or the at least second medical image, the deriving comprises: selecting the respective medical image with the positive assessment results and providing the selected medical image and the corresponding positioning data set for further processing; and (Breininger, [0106-0112], discloses based on the parameter characterizing the measure of discrepancy, derivation of at least one acquisition parameter for a medical imaging system if the parameter for the measure of discrepancy exceeds a predetermined threshold value; acquisition of a modified fluoroscopic image using the acquisition parameter; determination of a current position of the object in the modified fluoroscopic image; generation of a modified superposition image by superimposing the modified fluoroscopic image and the reference image; determination of at least one parameter characterizing a modified measure of discrepancy between the reference position and the current position of the object in the superposition image; displaying the modified measure of discrepancy for a user; the quality parameter is determined by comparing the two images, and whether the result is positive or negative is determined by comparing the superposed image with a threshold: if it exceeds the threshold the assessment is positive, and if not it is negative)

in case of a negative assessment result of the first medical image or the at least second medical image, the deriving comprises: generating at least one further positioning data set configured to position the imaging source and the imaging detector of the medical imaging X-Ray system in relation to the object to be imaged into a further imaging position, and obtaining at least one further medical image of the object from the further imaging position, wherein the at least one further positioning data set differs from the first positioning data set and the second positioning data set in at least one positioning parameter; and deriving an assessment result of the at least one further medical image. (Breininger, [0113-0115], discloses if at least one of the determined parameters characterizing a measure of discrepancy has a value outside a predetermined range of values or above a predetermined threshold value, this indicates that the discrepancies in the location, position or course of the segmented structures in the current fluoroscopic image and reference image are so great that it is not possible to achieve a meaningful superimposition in the sense that the superimposition cannot be corrected in the way described above; at least one embodiment of the invention suggests, in dependence on the parameter characterizing the measure of discrepancy, the derivation of at least one acquisition parameter for the acquisition of a new, modified fluoroscopic image. Insofar, the method according to the invention can suggest at least one acquisition parameter, for example changed angulation, changed collimation, changed X-ray tube voltage or the like, and acquire a modified fluoroscopic image, which is more suitable for correction as described above. The derivation of an acquisition parameter can be user-triggered or take place automatically. The acquisition of the modified fluoroscopic image with the derived acquisition parameter can be user-triggered; at least one embodiment of the invention is then performed using the modified fluoroscopic image and a parameter characterizing a modified measure of discrepancy determined.
If, after the performance of the described steps, the parameter characterizing the measure of discrepancy is still not within the tolerance, the method can be repeated, another acquisition parameter derived, a modified fluoroscopic image acquired, etc.; if the superposed image quality parameter is less than the threshold (negative assessment), further processing parameters are selected and images are processed again to achieve the quality)

Breininger does not explicitly disclose position an imaging source and an imaging detector of a medical imaging X-Ray system in relation to an object to be imaged into a first imaging position;

Blau discloses position an imaging source and an imaging detector of a medical imaging X-Ray system in relation to an object to be imaged into a first imaging position; (Blau, [0044], [0127], [0137], discloses physical dimensions of an object are related to the dimensions of its projection in an X-ray image through the intercept theorem (also known as basic proportionality theorem) because the X-ray beams originate from the X-ray source (the focal point) and are detected by an X-ray detector in the image plane. The precise imaging depth (which is the distance of the object from the image plane) is not generally required in the context of this invention. However, if an object is sufficiently large, the imaging depth may be determined through the intercept theorem, and the larger the object, the more precise this determination will be. Yet even for small objects, an approximate estimation of imaging depth may be possible. Alternatively, the imaging depth may also be determined if the size of the X-ray detector and the distance between image plane and focal point are known; depending on the bone shape there may still be a remaining ambiguity or matching error in the 3D reconstruction based on one image only.
This may be alleviated by acquiring multiple images, potentially from different viewing directions, by rotating and/or translating the C-arm between images. In general, additional images from different imaging directions are more helpful, and the more different the imaging directions are (e.g., AP and ML images), the more helpful additional images may be in terms of a determination of 3D information. However, even adding images from only slightly different viewing angles, which may be more easily acquired during surgery instead of changing to completely different view (AP to ML or vice versa), may be beneficial; a system may be caused by a software program product executed on a processing unit of the system, to receive a first X-ray image, wherein the first X-ray image is a projection image of at least one object, to classify the at least one object and to determine at least one point in the first X-ray image. Then, the system may be caused to receive a second X-ray image, wherein the second X-ray image is a projection image generated with an imaging direction which differs from the imaging direction utilized to generate the first X-ray image. In the second image, the at least one object is again classified and the at least one point is determined. 
Based on the classification of the at least one object in the first and second X-ray images as well as based on the determination of the at least one point in both X-ray images, the two images can be registered and a 3D position of the point relative to the at least one object can be determined; X-ray source and detector positions are changed to obtain patient or object images from different angle points or different positions)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Breininger, having a method of placing the imaging source at first and second positions, capturing images of the object, determining at least one parameter that differs between the imaging positions, and deriving a positive or negative outcome so that the imaging position can be adjusted to keep the parameter at a specific value, with the teachings of Blau, which places the imaging detector and source at specific viewpoints and orientations to capture various positions of the object, in order to accurately obtain images of an object with specific parameter values in applications including medical imaging techniques.

Regarding Claim 2, the combination of Breininger and Blau further discloses wherein the at least one imaging quality measure comprises at least one of: a maximum depicted length of the object; a minimum area of the object; an aspect ratio of the object; a contour of the object; a digital reconstructed radiograph of the object; an inner contour of the object; and/or a parameter of a directional change of an intensity or a color.
(Breininger, [0106-0112], discloses based on the parameter characterizing the measure of discrepancy, derivation of at least one acquisition parameter for a medical imaging system if the parameter for the measure of discrepancy exceeds a predetermined threshold value; acquisition of a modified fluoroscopic image using the acquisition parameter; determination of a current position of the object in the modified fluoroscopic image; generation of a modified superposition image by superimposing the modified fluoroscopic image and the reference image; determination of at least one parameter characterizing a modified measure of discrepancy between the reference position and the current position of the object in the superposition image; displaying the modified measure of discrepancy for a user; the quality parameter is determined by comparing the two images, and whether the result is positive or negative is determined by comparing the superposed image with a threshold: if it exceeds the threshold the assessment is positive, and if not it is negative). Additionally, the rationale and motivation to combine the references Breininger and Blau as applied in the rejection of claim 1 apply to this claim.

Regarding Claim 9, the combination of Breininger and Blau further discloses wherein the first positioning data set, the second positioning data set, and the further positioning data set each comprise a first angle (i.e. orbital) and/or a second angle of the imaging source and/or the imaging detector in relation to the object. (Blau, [0044], [0127], [0137], discloses physical dimensions of an object are related to the dimensions of its projection in an X-ray image through the intercept theorem (also known as basic proportionality theorem) because the X-ray beams originate from the X-ray source (the focal point) and are detected by an X-ray detector in the image plane.
The precise imaging depth (which is the distance of the object from the image plane) is not generally required in the context of this invention. However, if an object is sufficiently large, the imaging depth may be determined through the intercept theorem, and the larger the object, the more precise this determination will be. Yet even for small objects, an approximate estimation of imaging depth may be possible. Alternatively, the imaging depth may also be determined if the size of the X-ray detector and the distance between image plane and focal point are known; depending on the bone shape there may still be a remaining ambiguity or matching error in the 3D reconstruction based on one image only. This may be alleviated by acquiring multiple images, potentially from different viewing directions, by rotating and/or translating the C-arm between images. In general, additional images from different imaging directions are more helpful, and the more different the imaging directions are (e.g., AP and ML images), the more helpful additional images may be in terms of a determination of 3D information. However, even adding images from only slightly different viewing angles, which may be more easily acquired during surgery instead of changing to completely different view (AP to ML or vice versa), may be beneficial; a system may be caused by a software program product executed on a processing unit of the system, to receive a first X-ray image, wherein the first X-ray image is a projection image of at least one object, to classify the at least one object and to determine at least one point in the first X-ray image. Then, the system may be caused to receive a second X-ray image, wherein the second X-ray image is a projection image generated with an imaging direction which differs from the imaging direction utilized to generate the first X-ray image. In the second image, the at least one object is again classified and the at least one point is determined. 
Based on the classification of the at least one object in the first and second X-ray images as well as based on the determination of the at least one point in both X-ray images, the two images can be registered and a 3D position of the point relative to the at least one object can be determined; X-ray source and detector positions are changed to obtain patient or object images from different angle points or different positions). Additionally, the rationale and motivation to combine the references Breininger and Blau as applied in the rejection of claim 1 apply to this claim.

Regarding Claim 10, the combination of Breininger and Blau further discloses wherein the first angle of the imaging source and the first angle of the imaging detector are positioned independently from each other. (Blau, [0044], [0127], [0137], discloses physical dimensions of an object are related to the dimensions of its projection in an X-ray image through the intercept theorem (also known as basic proportionality theorem) because the X-ray beams originate from the X-ray source (the focal point) and are detected by an X-ray detector in the image plane. The precise imaging depth (which is the distance of the object from the image plane) is not generally required in the context of this invention. However, if an object is sufficiently large, the imaging depth may be determined through the intercept theorem, and the larger the object, the more precise this determination will be. Yet even for small objects, an approximate estimation of imaging depth may be possible. Alternatively, the imaging depth may also be determined if the size of the X-ray detector and the distance between image plane and focal point are known; depending on the bone shape there may still be a remaining ambiguity or matching error in the 3D reconstruction based on one image only.
This may be alleviated by acquiring multiple images, potentially from different viewing directions, by rotating and/or translating the C-arm between images. In general, additional images from different imaging directions are more helpful, and the more different the imaging directions are (e.g., AP and ML images), the more helpful additional images may be in terms of a determination of 3D information. However, even adding images from only slightly different viewing angles, which may be more easily acquired during surgery instead of changing to completely different view (AP to ML or vice versa), may be beneficial; a system may be caused by a software program product executed on a processing unit of the system, to receive a first X-ray image, wherein the first X-ray image is a projection image of at least one object, to classify the at least one object and to determine at least one point in the first X-ray image. Then, the system may be caused to receive a second X-ray image, wherein the second X-ray image is a projection image generated with an imaging direction which differs from the imaging direction utilized to generate the first X-ray image. In the second image, the at least one object is again classified and the at least one point is determined. Based on the classification of the at least one object in the first and second X-ray images as well as based on the determination of the at least one point in both X-ray images, the two images can be registered and a 3D position of the point relative to the at least one object can be determined; X-ray source and detector positions are changed to obtain patient or object images from different angle points or different positions). Additionally, the rationale and motivation to combine the references Breininger and Blau as applied in the rejection of claim 1 apply to this claim.
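Stepping back from the claim-by-claim mapping, the assess-and-reacquire loop that the rejection attributes to Breininger ([0106]-[0115]) can be sketched as follows. This is a minimal illustration only: the `acquire` stand-in, the discrepancy score, the threshold, and the fixed angular increment are all hypothetical, not values from either reference.

```python
# Sketch of the assess-and-reacquire loop described in the rejection:
# assess the image; on a negative assessment, derive a new acquisition
# parameter (here, an incremented viewing angle) and re-acquire.
def acquire(angle_deg: float) -> float:
    """Stand-in for image acquisition: returns a discrepancy score that
    improves as the viewing angle approaches an (unknown) optimum."""
    return abs(angle_deg - 30.0)

def find_acceptable_view(start_deg=0.0, step_deg=5.0,
                         threshold=2.5, max_rounds=20):
    angle = start_deg
    for _ in range(max_rounds):
        score = acquire(angle)
        if score <= threshold:      # positive assessment: keep this view
            return angle, score
        angle += step_deg           # negative assessment: derive a new
                                    # acquisition parameter, re-acquire
    return None                     # tolerance never reached

print(find_acceptable_view())       # -> (30.0, 0.0)
```

The constant `step_deg` mirrors claim 11's incremental change; making the step depend on the assessment score would mirror claim 12's variable increment.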
Regarding Claim 11, the combination of Breininger and Blau further discloses wherein generating the at least second positioning data set comprises adding an incremental change to the at least one positioning parameter of the first positioning data set. (Blau, [0044], [0127], [0137], discloses physical dimensions of an object are related to the dimensions of its projection in an X-ray image through the intercept theorem (also known as basic proportionality theorem) because the X-ray beams originate from the X-ray source (the focal point) and are detected by an X-ray detector in the image plane. The precise imaging depth (which is the distance of the object from the image plane) is not generally required in the context of this invention. However, if an object is sufficiently large, the imaging depth may be determined through the intercept theorem, and the larger the object, the more precise this determination will be. Yet even for small objects, an approximate estimation of imaging depth may be possible. Alternatively, the imaging depth may also be determined if the size of the X-ray detector and the distance between image plane and focal point are known; depending on the bone shape there may still be a remaining ambiguity or matching error in the 3D reconstruction based on one image only. This may be alleviated by acquiring multiple images, potentially from different viewing directions, by rotating and/or translating the C-arm between images. In general, additional images from different imaging directions are more helpful, and the more different the imaging directions are (e.g., AP and ML images), the more helpful additional images may be in terms of a determination of 3D information.
However, even adding images from only slightly different viewing angles, which may be more easily acquired during surgery instead of changing to a completely different view (AP to ML or vice versa), may be beneficial; a system may be caused by a software program product executed on a processing unit of the system, to receive a first X-ray image, wherein the first X-ray image is a projection image of at least one object, to classify the at least one object and to determine at least one point in the first X-ray image. Then, the system may be caused to receive a second X-ray image, wherein the second X-ray image is a projection image generated with an imaging direction which differs from the imaging direction utilized to generate the first X-ray image. In the second image, the at least one object is again classified and the at least one point is determined. Based on the classification of the at least one object in the first and second X-ray images as well as based on the determination of the at least one point in both X-ray images, the two images can be registered and a 3D position of the point relative to the at least one object can be determined; X-ray source and detector positions are changed to obtain patient or object image from different angle points or different positions). Additionally, the rationale and motivation to combine the references Breininger and Blau as applied in the rejection of claim 1 apply to this claim. Regarding Claim 12, The combination of Breininger and Blau further discloses wherein the added incremental change is either constant or variable depending on the assessment result, wherein the assessment result comprises a range with a plurality of entries, preferably more than 5 entries, most preferably 10 entries.
(Blau, [0044], [0127], [0137], discloses physical dimensions of an object are related to the dimensions of its projection in an X-ray image through the intercept theorem (also known as basic proportionality theorem) because the X-ray beams originate from the X-ray source (the focal point) and are detected by an X-ray detector in the image plane. The precise imaging depth (which is the distance of the object from the image plane) is not generally required in the context of this invention. However, if an object is sufficiently large, the imaging depth may be determined through the intercept theorem, and the larger the object, the more precise this determination will be. Yet even for small objects, an approximate estimation of imaging depth may be possible. Alternatively, the imaging depth may also be determined if the size of the X-ray detector and the distance between image plane and focal point are known; depending on the bone shape there may still be a remaining ambiguity or matching error in the 3D reconstruction based on one image only. This may be alleviated by acquiring multiple images, potentially from different viewing directions, by rotating and/or translating the C-arm between images. In general, additional images from different imaging directions are more helpful, and the more different the imaging directions are (e.g., AP and ML images), the more helpful additional images may be in terms of a determination of 3D information. However, even adding images from only slightly different viewing angles, which may be more easily acquired during surgery instead of changing to a completely different view (AP to ML or vice versa), may be beneficial; a system may be caused by a software program product executed on a processing unit of the system, to receive a first X-ray image, wherein the first X-ray image is a projection image of at least one object, to classify the at least one object and to determine at least one point in the first X-ray image.
Then, the system may be caused to receive a second X-ray image, wherein the second X-ray image is a projection image generated with an imaging direction which differs from the imaging direction utilized to generate the first X-ray image. In the second image, the at least one object is again classified and the at least one point is determined. Based on the classification of the at least one object in the first and second X-ray images as well as based on the determination of the at least one point in both X-ray images, the two images can be registered and a 3D position of the point relative to the at least one object can be determined; X-ray source and detector positions are changed to obtain patient or object image from different angle points or different positions). Additionally, the rationale and motivation to combine the references Breininger and Blau as applied in the rejection of claim 1 apply to this claim. Regarding Claim 13, The combination of Breininger and Blau further discloses wherein the object is a medical element arranged in a human body, in particular wherein the object is one of the following: a screw, a screw head, an intramedullary nail, an implant, a medical instrument, a medical instrument holder, a robotic medical instrument holder. (Blau, [0117-0118], discloses the drill is a comparatively thin instrument of more or less constant diameter, which in a typical X-ray image is only a few pixels wide. In the X-ray projection, an object tilted in imaging direction is depicted wider on one end and smaller on the other. This, however, may only be detectable if this change in width is sufficiently strong (e.g., at least one pixel wide).
For a required accuracy in angle detection of, e.g., less than 3 degrees with a thin drill of, e.g., less than 4 mm diameter, this may not generally be the case; localizing an instrument such as a drill (with diameter of a few mm), of which only the front part is visible in the X-ray, from a viewing angle (which is the drill's tilt) close to 90 degrees may not generally be possible with sufficient accuracy because the sine function of small angles has a slope close to zero. For instance, at an angle of 70 degrees, the drill's projection is only shortened by 6 percent, which leads to an insignificant change in the drill tip's projected shape. Such a small change may not be sufficient to determine the drill's tilt with an accuracy of approx. 3 degrees. The limit of detectability is a tilt of approximately 65 degrees, where the drill's projection is shortened by 9.4 percent. Depending on the tool, this may or may not be sufficient for the required accuracy. [0119] The smaller the viewing angle (tilt) is, the easier it becomes to distinguish a difference in tilt of, say, 2 degrees. This is shown in the X-ray image in FIG. 11, which depicts the projections and outlines of two drills: The white solid line labeled as 11.D1 corresponds to a drill with tilt of 45 degrees, and the white dashed line labeled as 11.D2 corresponds to a drill with tilt of 43 degrees. Since these outlines differ in some places, they may be distinguished by the system. Smaller viewing angles lead to more clearly distinguishable outlines. This may be observed in the X-ray image in FIG. 12 showing the projections and outlines of two drills: The white solid line labeled as 12.D1 corresponds to a drill with tilt of 25 degrees, and the white dashed line labeled as 12.D2 corresponds to a drill with tilt of 23 degrees. These outlines now differ clearly in some places, and hence may be easily distinguished by the system; instrument arrangement in human body images are obtained). 
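The numbers in the passage quoted above follow from a simple projection model in which the drill's projected length scales with the sine of its tilt, so the fractional shortening is 1 − sin(tilt); near 90 degrees the derivative of the sine vanishes, which is why small tilt differences become undetectable there. A minimal check under that assumed model (the function name is ours, not Blau's):

```python
import math

def shortening(tilt_deg):
    """Fractional shortening of the projected drill length, assuming the
    projected length scales with sin(tilt) as in the quoted passage."""
    return 1 - math.sin(math.radians(tilt_deg))

print(round(shortening(70) * 100, 1))  # 6.0 (percent, matching the quote)
print(round(shortening(65) * 100, 1))  # 9.4 (percent, the stated detectability limit)
```

At 90 degrees the shortening is exactly zero and its rate of change is zero as well, so localizing a thin drill viewed nearly side-on from its projected length alone is ill-conditioned, consistent with the passage.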
Additionally, the rationale and motivation to combine the references Breininger and Blau as applied in the rejection of claim 1 apply to this claim. Regarding Claim 14, The combination of Breininger and Blau further discloses wherein the object is an area of an anatomy of a human body. (Blau, [0003-0005], discloses in orthopedics or orthopedic trauma surgery or spinal surgery, it is a common task to aim for a target object or target structure (as part of a target object) with a relatively thin instrument. Target structures may be anatomical (e.g., a pedicle) or parts of other instruments or implants (e.g., a distal locking hole of a long antegrade intramedullary nail). In general, the goal may be to determine the 3D relative position and 3D relative orientation between instrument and target object. Based on available intraoperative 2D imaging techniques, this may be challenging. It is particularly difficult if the precise geometry of the target object is unknown, and/or if the instrument is known, but not uniquely localizable in 3D space based on the 2D X-ray image; for surgical procedures, preoperative CT scans may be performed, which allow a more precise planning of the procedure.
This is the case, for instance, when operating within a complex 3D structure, or when drilling or placing screws within narrow anatomical structures or in the vicinity of critical structures (e.g., spinal cord, nerves, aorta). Typical examples of such procedures are the placements of sacroiliac or pedicle screws. When the target structure is a tool or an implant, its 3D geometry is typically known: An example is the distal locking procedure, where a 3D model of, or 3D information about the target object (nail), and in particular the target structure distal locking hole (a cylinder) is available; target object is within human body). Additionally, the rationale and motivation to combine the references Breininger and Blau as applied in the rejection of claim 1 apply to this claim. Regarding Claim 15, The combination of Breininger and Blau further discloses wherein obtaining the at least second medical image comprises obtaining a plurality of medical images, in particular 5 to 10 medical images and corresponding positioning data. (Blau, [0044], [0127], [0137], discloses physical dimensions of an object are related to the dimensions of its projection in an X-ray image through the intercept theorem (also known as basic proportionality theorem) because the X-ray beams originate from the X-ray source (the focal point) and are detected by an X-ray detector in the image plane. The precise imaging depth (which is the distance of the object from the image plane) is not generally required in the context of this invention. However, if an object is sufficiently large, the imaging depth may be determined through the intercept theorem, and the larger the object, the more precise this determination will be. Yet even for small objects, an approximate estimation of imaging depth may be possible.
Alternatively, the imaging depth may also be determined if the size of the X-ray detector and the distance between image plane and focal point are known; depending on the bone shape there may still be a remaining ambiguity or matching error in the 3D reconstruction based on one image only. This may be alleviated by acquiring multiple images, potentially from different viewing directions, by rotating and/or translating the C-arm between images. In general, additional images from different imaging directions are more helpful, and the more different the imaging directions are (e.g., AP and ML images), the more helpful additional images may be in terms of a determination of 3D information. However, even adding images from only slightly different viewing angles, which may be more easily acquired during surgery instead of changing to a completely different view (AP to ML or vice versa), may be beneficial; a system may be caused by a software program product executed on a processing unit of the system, to receive a first X-ray image, wherein the first X-ray image is a projection image of at least one object, to classify the at least one object and to determine at least one point in the first X-ray image. Then, the system may be caused to receive a second X-ray image, wherein the second X-ray image is a projection image generated with an imaging direction which differs from the imaging direction utilized to generate the first X-ray image. In the second image, the at least one object is again classified and the at least one point is determined. Based on the classification of the at least one object in the first and second X-ray images as well as based on the determination of the at least one point in both X-ray images, the two images can be registered and a 3D position of the point relative to the at least one object can be determined; X-ray source and detector positions are changed to obtain patient or object image from different angle points or different positions).
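The intercept-theorem relation quoted above can be written down directly: with the focal point at distance SDD from the image plane and an object plane parallel to it at depth d, the projected size is p = s · SDD / (SDD − d), so d = SDD · (1 − s/p). The sketch below is a hypothetical illustration of that relation; the numbers and the function name are assumptions, not values taken from the references.

```python
def imaging_depth(true_size, projected_size, source_to_detector):
    """Depth of the object in front of the image plane via the intercept
    theorem, assuming the object plane is parallel to the image plane:
    projected = true * SDD / (SDD - depth)
    =>  depth = SDD * (1 - true / projected)
    """
    return source_to_detector * (1 - true_size / projected_size)

# Hypothetical C-arm geometry: focal point 1000 mm from the detector;
# a 50 mm object magnified to 62.5 mm lies about 200 mm in front of
# the image plane.
depth = imaging_depth(50, 62.5, 1000)   # ≈ 200.0 mm
```

Consistent with the quoted passage, the larger the object (and hence its projection), the less a pixel-level measurement error perturbs the ratio s/p, so the more precise the depth estimate.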
Additionally, the rationale and motivation to combine the references Breininger and Blau as applied in the rejection of claim 1 apply to this claim. Regarding Claim 16, The combination of Breininger and Blau further discloses wherein determining the at least one imaging quality measure in the at least second medical image comprises: determining the at least one imaging quality measure in the at least second medical image directly after obtaining the at least second medical image and before obtaining a third medical image or the further medical image. (Breininger, [0106-0112], discloses based on the parameter characterizing the measure of discrepancy, derivation of at least one acquisition parameter for a medical imaging system if the parameter for the measure of discrepancy exceeds a predetermined threshold value; acquisition of a modified fluoroscopic image using the acquisition parameter; determination of a current position of the object in the modified fluoroscopic image; generation of a modified superposition image by superimposing the modified fluoroscopic image and the reference image; determination of at least one parameter characterizing a modified measure of discrepancy between the reference position and the current position of the object in the superposition image; displaying the modified measure of discrepancy for a user; quality of parameter is determined by comparing the two images and if the result is positive or negative is determined by comparing the superposed image with a threshold and if it exceeds threshold then positive and if not then it is negative assessment) (Breininger, [0113-0115], discloses if at least one of the determined parameters characterizing a measure of discrepancy has a value outside a predetermined range of values or above a predetermined threshold value, this indicates that the discrepancies in the location, position or course of the segmented structures in the current
fluoroscopic image and reference image are so great that it is not possible to achieve a meaningful superimposition in the sense that the superimposition cannot be corrected in the way described above, at least one embodiment of the invention suggests in dependence on the parameter characterizing the measure of discrepancy the derivation of at least one acquisition parameter for the acquisition of a new, modified fluoroscopic image. Insofar, the method according to the invention can suggest at least one acquisition parameter, for example changed angulation, changed collimation, changed X-ray tube voltage or the like, and acquire a modified fluoroscopic image, which is more suitable for correction as described above. The derivation of an acquisition parameter can be user-triggered or take place automatically. The acquisition of the modified fluoroscopic image with the derived acquisition parameter can be user-triggered; at least one embodiment of the invention is then performed using the modified fluoroscopic image and a parameter characterizing a modified measure of discrepancy determined. If, after the performance of the described steps, the parameter characterizing the measure of discrepancy is still not within the tolerance, the method can be repeated, another acquisition parameter derived and a modified fluoroscopic image acquired etc.; if the superposing image quality parameter is less than threshold (negative assessment), further processing parameters are selected and images are processed again to achieve the quality). Additionally, the rationale and motivation to combine the references Breininger and Blau as applied in the rejection of claim 1 apply to this claim.
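Control-flow-wise, the Breininger passage quoted above describes a thresholded acquisition loop: assess the discrepancy measure, and while it exceeds the tolerance, derive a modified acquisition parameter (changed angulation, collimation, tube voltage, or the like) and acquire a new fluoroscopic image. The sketch below shows only that control flow; every function name and the toy numeric stand-ins are assumptions for illustration, not Breininger's actual implementation.

```python
def reacquire_until_within_tolerance(acquire, discrepancy, derive_parameter,
                                     params, threshold, max_rounds=5):
    """While the discrepancy measure exceeds the threshold, derive modified
    acquisition parameters and acquire a new image, as in the quoted loop."""
    image = acquire(params)
    for _ in range(max_rounds):
        if discrepancy(image) <= threshold:   # positive assessment: done
            return image, params
        params = derive_parameter(params)     # negative assessment: adjust
        image = acquire(params)
    return image, params                      # tolerance not reached in time

# Toy stand-in: the "image" is just a number and each round halves its
# discrepancy, so the loop converges below the threshold of 1.0.
image, params = reacquire_until_within_tolerance(
    acquire=lambda p: p,
    discrepancy=lambda img: img,
    derive_parameter=lambda p: p / 2,
    params=8.0, threshold=1.0)
# → image == 1.0 after three refinements
```

The `max_rounds` guard reflects the passage's "the method can be repeated" phrasing while keeping the loop bounded; in practice the repeat could equally be user-triggered.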
Regarding Claim 17, The combination of Breininger and Blau further discloses wherein determining the at least one imaging quality measure in the respective first medical image and the at least second medical image comprises determining, using an image processing algorithm, the at least one imaging quality measure in the respective first medical image and the at least second medical image. (Breininger, [0106-0112], discloses based on the parameter characterizing the measure of discrepancy, derivation of at least one acquisition parameter for a medical imaging system if the parameter for the measure of discrepancy exceeds a predetermined threshold value; acquisition of a modified fluoroscopic image using the acquisition parameter; determination of a current position of the object in the modified fluoroscopic image; generation of a modified superposition image by superimposing the modified fluoroscopic image and the reference image; determination of at least one parameter characterizing a modified measure of discrepancy between the reference position and the current position of the object in the superposition image; displaying the modified measure of discrepancy for a user; quality of parameter is determined by comparing the two images and if the result is positive or negative is determined by comparing the superposed image with a threshold and if it exceeds threshold then positive and if not then it is negative assessment) (Breininger, [0113-0115], discloses if at least one of the determined parameters characterizing a measure of discrepancy has a value outside a predetermined range of values or above a predetermined threshold value, this indicates that the discrepancies in the location, position or course of the segmented structures in the current fluoroscopic image and reference image are so great that it is not possible to achieve a meaningful superimposition in the sense that the superimposition cannot be corrected in the way described
above, at least one embodiment of the invention suggests in dependence on the parameter characterizing the measure of discrepancy the derivation of at least one acquisition parameter for the acquisition of a new, modified fluoroscopic image. Insofar, the method according to the invention can suggest at least one acquisition parameter, for example changed angulation, changed collimation, changed X-ray tube voltage or the like, and acquire a modified fluoroscopic image, which is more suitable for correction as described above. The derivation of an acquisition parameter can be user-triggered or take place automatically. The acquisition of the modified fluoroscopic image with the derived acquisition parameter can be user-triggered; at least one embodiment of the invention is then performed using the modified fluoroscopic image and a parameter characterizing a modified measure of discrepancy determined. If, after the performance of the described steps, the parameter characterizing the measure of discrepancy is still not within the tolerance, the method can be repeated, another acquisition parameter derived and a modified fluoroscopic image acquired etc.; if the superposing image quality parameter is less than threshold (negative assessment), further processing parameters are selected and images are processed again to achieve the quality). Additionally, the rationale and motivation to combine the references Breininger and Blau as applied in the rejection of claim 1 apply to this claim. Regarding Claim 18, The combination of Breininger and Blau further discloses wherein the evaluation criterion is a predefined threshold.
(Breininger, [0106-0112], discloses based on the parameter characterizing the measure of discrepancy, derivation of at least one acquisition parameter for a medical imaging system if the parameter for the measure of discrepancy exceeds a predetermined threshold value; acquisition of a modified fluoroscopic image using the acquisition parameter; determination of a current position of the object in the modified fluoroscopic image; generation of a modified superposition image by superimposing the modified fluoroscopic image and the reference image; determination of at least one parameter characterizing a modified measure of discrepancy between the reference position and the current position of the object in the superposition image; displaying the modified measure of discrepancy for a user; quality of parameter is determined by comparing the two images and if the result is positive or negative is determined by comparing the superposed image with a threshold and if it exceeds threshold then positive and if not then it is negative assessment) (Breininger, [0113-0115], discloses if at least one of the determined parameters characterizing a measure of discrepancy has a value outside a predetermined range of values or above a predetermined threshold value, this indicates that the discrepancies in the location, position or course of the segmented structures in the current fluoroscopic image and reference image are so great that it is not possible to achieve a meaningful superimposition in the sense that the superimposition cannot be corrected in the way described above, at least one embodiment of the invention suggests in dependence on the parameter characterizing the measure of discrepancy the derivation of at least one acquisition parameter for the acquisition of a new, modified fluoroscopic image. 
Insofar, the method according to the invention can suggest at least one acquisition parameter, for example changed angulation, changed collimation, changed X-ray tube voltage or the like, and acquire a modified fluoroscopic image, which is more suitable for correction as described above. The derivation of an acquisition parameter can be user-triggered or take place automatically. The acquisition of the modified fluoroscopic image with the derived acquisition parameter can be user-triggered; at least one embodiment of the invention is then performed using the modified fluoroscopic image and a parameter characterizing a modified measure of discrepancy determined. If, after the performance of the described steps, the parameter characterizing the measure of discrepancy is still not within the tolerance, the method can be repeated, another acquisition parameter derived and a modified fluoroscopic image acquired etc.; if the superposing image quality parameter is less than threshold (negative assessment), further processing parameters are selected and images are processed again to achieve the quality). Additionally, the rationale and motivation to combine the references Breininger and Blau as applied in the rejection of claim 1 apply to this claim. Regarding Claim 19, The combination of Breininger and Blau further discloses deriving the evaluation criterion from the at least one imaging quality measure in the first medical image and in the at least second medical image.
(Breininger, [0106-0112], discloses based on the parameter characterizing the measure of discrepancy, derivation of at least one acquisition parameter for a medical imaging system if the parameter for the measure of discrepancy exceeds a predetermined threshold value; acquisition of a modified fluoroscopic image using the acquisition parameter; determination of a current position of the object in the modified fluoroscopic image; generation of a modified superposition image by superimposing the modified fluoroscopic image and the reference image; determination of at least one parameter characterizing a modified measure of discrepancy between the reference position and the current position of the object in the superposition image; displaying the modified measure of discrepancy for a user; quality of parameter is determined by comparing the two images and if the result is positive or negative is determined by comparing the superposed image with a threshold and if it exceeds threshold then positive and if not then it is negative assessment) (Breininger, [0113-0115], discloses if at least one of the determined parameters characterizing a measure of discrepancy has a value outside a predetermined range of values or above a predetermined threshold value, this indicates that the discrepancies in the location, position or course of the segmented structures in the current fluoroscopic image and reference image are so great that it is not possible to achieve a meaningful superimposition in the sense that the superimposition cannot be corrected in the way described above, at least one embodiment of the invention suggests in dependence on the parameter characterizing the measure of discrepancy the derivation of at least one acquisition parameter for the acquisition of a new, modified fluoroscopic image. 
Insofar, the method according to the invention can suggest at least one acquisition parameter, for example changed angulation, changed collimation, changed X-ray tube voltage or the like, and acquire a modified fluoroscopic image, which is more suitable for correction as described above. The derivation of an acquisition parameter can be user-triggered or take place automatically. The acquisition of the modified fluoroscopic image with the derived acquisition parameter can be user-triggered; at least one embodiment of the invention is then performed using the modified fluoroscopic image and a parameter characterizing a modified measure of discrepancy determined. If, after the performance of the described steps, the parameter characterizing the measure of discrepancy is still not within the tolerance, the method can be repeated, another acquisition parameter derived and a modified fluoroscopic image acquired etc.; if the superposing image quality parameter is less than threshold (negative assessment), further processing parameters are selected and images are processed again to achieve the quality). Additionally, the rationale and motivation to combine the references Breininger and Blau as applied in the rejection of claim 1 apply to this claim. Claims 20, 21-22, 24-27 recite system, method and device with elements and steps corresponding to the method steps recited in Claims 1, 1, 10, 12, 2, 9 and 10 respectively. Therefore, the recited elements of the system, method and device Claims 20, 21-22, 24-27 are mapped to the proposed combination in the same manner as the corresponding steps of Claims 1, 1, 10, 12, 2, 9 and 10 respectively. Additionally, the rationale and motivation to combine the Breininger and Blau references presented in the rejection of Claim 1 apply to these claims.
Furthermore, the combination of Breininger and Blau further discloses device and unit(s) (Blau, [0002], [0061], discloses the invention relates to the fields of artificial intelligence and computer assisted surgery. In particular, the invention relates to a device and a method for determining 3D representations and relative 3D positions and relative 3D orientations between objects based on an X-ray projection image. The method may be implemented as a computer program executable on a processing unit of the device) (Blau, [0061], discloses a computer program product may preferably be loaded into the random-access memory of a data processor. The data processor or processing unit of a system according to an embodiment may thus be equipped to carry out at least a part of the described process. Further, the invention relates to a computer-readable medium such as a CD-ROM on which the disclosed computer program may be stored. However, the computer program may also be presented over a network like the World Wide Web and can be downloaded into the random-access memory of the data processor from such a network. Furthermore, the computer program may also be executed on a cloud-based processor, with results presented over the network). Furthermore, the combination of Breininger and Blau further discloses a system (Blau, [0002], [0016-0017], [0061], discloses the invention relates to the fields of artificial intelligence and computer assisted surgery. In particular, the invention relates to a device and a method for determining 3D representations and relative 3D positions and relative 3D orientations between objects based on an X-ray projection image. The method may be implemented as a computer program executable on a processing unit of the device; [0061] discloses a computer program product may preferably be loaded into the random-access memory of a data processor.
The data processor or processing unit of a system according to an embodiment may thus be equipped to carry out at least a part of the described process. Further, the invention relates to a computer-readable medium such as a CD-ROM on which the disclosed computer program may be stored. However, the computer program may also be presented over a network like the World Wide Web and can be downloaded into the random-access memory of the data processor from such a network. Furthermore, the computer program may also be executed on a cloud-based processor, with results presented over the network). Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Matsuzaki et al. (US Pub No. 20220175338 A1, provided is a technique capable of calculating a three-dimensional position of a treatment tool only by processing an X-ray image without using an external signal of a body movement monitor or the like and capable of eliminating an influence of body movement. A plurality of combinations of a plurality of X-ray images from a plurality of X-ray images acquired at different angles or times are used to obtain a parameter serving as an index of calculation accuracy of a three-dimensional position of a treatment tool for each combination and to calculate the three-dimensional position of the treatment tool based on a combination of X-ray images serving as a parameter having the highest accuracy, Abstract, Fig. 1) Any inquiry concerning this communication or earlier communications from the examiner should be directed to PINALBEN V PATEL whose telephone number is (571)270-5872. The examiner can normally be reached M-F: 10am - 8pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. 
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chineyere Wills-Burns, can be reached at 571-272-9752. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /Pinalben Patel/Examiner, Art Unit 2673

Prosecution Timeline

Feb 08, 2024
Application Filed
Mar 19, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602824
SUBSTRATE TREATING APPARATUS AND SUBSTRATE TREATING METHOD
2y 5m to grant · Granted Apr 14, 2026
Patent 12596437
Monitoring System and Method Having Gesture Detection
2y 5m to grant · Granted Apr 07, 2026
Patent 12597235
INFORMATION PROCESSING APPARATUS, LEARNING METHOD, RECOGNITION METHOD, AND NON-TRANSITORY COMPUTER READABLE MEDIUM
2y 5m to grant · Granted Apr 07, 2026
Patent 12586215
VEHICLE POSE
2y 5m to grant · Granted Mar 24, 2026
Patent 12586217
VISION SENSOR, OPERATING METHOD OF VISION SENSOR, AND IMAGE PROCESSING DEVICE INCLUDING THE VISION SENSOR
2y 5m to grant · Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
89%
Grant Probability
99%
With Interview (+9.9%)
2y 6m
Median Time to Grant
Low
PTA Risk
Based on 545 resolved cases by this examiner. Grant probability derived from career allow rate.
