DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Election/Restrictions
Applicant’s election of Group II, claims 1, 14-26, 28-29, and 31-36, in the reply filed on February 19, 2026 is acknowledged. Please note: the absence of any statement indicating whether the restriction requirement is traversed, or the failure to provide reasons for traverse, is treated as an election without traverse. See MPEP § 818.01. Accordingly, applicant’s election is treated as an election without traverse.
Additionally, the examiner acknowledges Applicant’s request for examination of claim 37 from nonelected Group I. Claim 37 is examined below.
Claims 2-13, 27, 30, and 38-42 are withdrawn from further consideration pursuant to 37 CFR 1.142(b) as being drawn to a nonelected invention (Group I), there being no allowable generic or linking claim. Election was made without traverse in the reply filed on February 19, 2026.
Claim Objections
Claims are objected to because of the following informalities: "priori" should read "a priori" in each claim in which it appears.
Appropriate correction is required.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1, 28-29, 31-33, and 37 is/are rejected under 35 U.S.C. 103 as being unpatentable over Fouts et al. (US-20230149092-A1) in view of Iwase et al. (US-20210390696-A1).
Regarding claim 1, Fouts teaches:
receiving a source image of the anatomical region of the subject (“The first image data captures anatomy of interest of a patient and at least one obstruction obscuring at least a portion of the anatomy of interest of the patient,” Para [0127]), the source image acquired by an electronic imaging device using a source imaging modality (“The first image data may have been generated by any suitable imaging modality, including, for example, X-ray imaging (including radiographic imaging and fluoroscopic imaging), visible light imaging, magnetic resonance imaging (MRI), computed tomography (CT) imaging, and ultrasound imaging,” Para [0127]);
generating a synthetic image using a trained machine learning model (“the obstructions are identified and replaced using at least one machine learning model that is trained on training images that include the anatomy of interest,” Para [0116]) trained on a plurality of images having known anatomical features and locations based on the anatomical region in the source image (“a data set is generated from the first image data that accounts for the obstruction (or obstructions)… the data set can be a second image in which at least a portion of the obstruction (or obstructions) has been altered based on the anatomy of interest,” Para [0129]),
wherein the synthetic image contains at least one synthetic anatomical feature of the imaged anatomical region that can be determined from an anatomical model (“training images that include the anatomy of interest”) of the anatomical region of the subject (“the obstructions are identified and replaced using at least one machine learning model that is trained on training images that include the anatomy of interest,” Para [0116]) using the trained machine learning model (“In the image 312, the portions of the image 300 corresponding to the obstruction 302 have been replaced with representations 308, 310 of the portions 304, 306 of the femur obscured by the obstruction 302,” Para [0131], also see Fig. 3A and 3B),
[Image: media_image1.png, greyscale]
and wherein the at least one synthetic anatomical feature of the anatomical region of the subject is an anatomical feature that should appear in the source image but is partially or totally obscured and/or distorted in the source image (Figs. 3A and 3B, the portions 304, 306 of the femur are blocked by the obstruction 302 in Fig. 3A, and the obstruction is replaced with representations 308, 310 of the femur in Fig. 3B).
Fouts is not relied upon to teach the following limitations as further claimed. Iwase, however, further teaches:
A computerized method for electronically generating an augmented digital image of an anatomical region of a subject (“a display controlling unit configured to cause a composite image obtained by combining the first image and the second image” and “first image… is a medical image of a predetermined site of a subject,” Para [0010]), the computerized method comprising:
partially or completely combining the source image (“input image”) and the synthetic image (“high quality image”) to produce the augmented digital image (“the outputting unit 405 (display controlling unit) may output a composite image obtained by combining an input image (first image) and a high quality image (second image),” Para [0531]).
Iwase is considered to be analogous to the claimed invention because they are in the same field of creating high quality medical images using machine learning models. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated the teachings of Iwase into Fouts for the benefit of higher quality medical image outputs.
Claim 28 is an apparatus claim corresponding to method claim 1. Therefore, the rejection of claim 1 applies equally to claim 28.
Regarding claim 29, the rejection of claim 28 is incorporated herein. Fouts in view of Iwase teaches the apparatus of claim 28, and Fouts further teaches:
wherein the synthetic image has an appearance of being acquired by an electronic imaging device that is the same or similar to the electronic device capturing the source image (Fig. 3B (synthetic image result) appears to have come from an x-ray, just like Fig. 3A (original image with obstruction)).
Regarding claim 31, the rejection of claim 29 is incorporated herein. Fouts in view of Iwase teaches the apparatus of claim 29, and Fouts further teaches:
wherein rendering the augmented digital image comprises incorporating at least one computer-generated graphical augmentation comprising at least one computer-generated graphical overlay, computer-generated anatomical measurement, computer-generated anatomical representation (“at least a portion of the obstruction may be replaced by a representation of the anatomy obscured by the obstruction, or by a representation of other background or surrounding context within the data set… the altering of the obstruction in the data set can be done using at least one machine learning model,” Para [0129]), or a combination thereof.
Regarding claim 32, the rejection of claim 31 is incorporated herein. Fouts in view of Iwase teaches the apparatus of claim 31, and Fouts further teaches:
wherein the at least one computer-generated anatomical representation is derived from ultrasound, computed tomography, magnetic resonance, x-ray images, or a combination thereof (“The first image data of FIG. 3A is an X-ray image 300 of a portion of a hip joint,” Para [0128], and “The image 312 in FIG. 3B was generated from the image 300 of FIG. 3A,” Para [0131]).
Regarding claim 33, the rejection of claim 31 is incorporated herein. Fouts in view of Iwase teaches the apparatus of claim 31, and Fouts further teaches:
wherein the at least one computer-generated anatomical representation is derived from at least one a priori synthetic anatomical model (“the obstructions are identified and replaced using at least one machine learning model that is trained on training images that include the anatomy of interest. With this training, the at least one machine learning model “knows” what is likely obscured by an obstruction and can generate a realistic representation of the obscured anatomy,” Para [0116]).
Regarding claim 37, the rejection of claim 28 is incorporated herein. Fouts in view of Iwase teaches the apparatus of claim 28, and Fouts further teaches:
wherein the source image is provided in a three-dimensional volume (“The first image data can be… one or more slices of a three-dimensional imaging modality (for example, MRI or CT),” Para [0127]) and the augmented digital image is provided in a three-dimensional volume (“a data set is generated from the first image data that accounts for the obstruction (or obstructions). The data set can include… a volume, such as a DICOM data set, or any other simulation of a physical space, whether two or three dimensional,” Para [0129]).
Claim(s) 14-16 and 35 is/are rejected under 35 U.S.C. 103 as being unpatentable over Fouts et al. (US-20230149092-A1) in view of Iwase et al. (US-20210390696-A1) as applied to claims 1 and 28 above, and further in view of Luciano et al. (US-20240265667-A1).
Regarding claim 14, the rejection of claim 1 is incorporated herein. Fouts in view of Iwase teaches the method of claim 1, but are not relied upon to teach the following limitations. Luciano, however, further teaches:
using a trained machine learning network that classifies regions of the augmented image as corresponding to bone anatomy, muscle anatomy, vascular anatomy, adipose anatomy, or other anatomy (Fig 18B, classifies areas of image as bone and fat, for example), to produce a spatial classification map (“FIG. 18B depicts the output 1812 generated by the segmentation model in response to processing the MRI image 1810,” Para [0127]).
[Image: media_image2.png, greyscale]
Luciano is considered to be analogous to the claimed invention because they are in the same field of medical image segmentation. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated the teachings of Luciano into Fouts and Iwase for the benefit of easier image-based diagnosis of a patient.
Regarding claim 15, the rejection of claim 14 is incorporated herein. Fouts in view of Iwase and Luciano teach the method of claim 14, and Luciano further teaches:
wherein the spatial classification map (Fig. 9B) is applied to and/or combined (to create Fig. 9C) with the source image (Fig. 9A) or augmented digital image to improve visualization of at least one predetermined classification (“FIG. 9A is an example 2D scan of a spine of a patient, according to embodiments. FIG. 9B is an example of a segmentation output of the image of FIG. 9A. FIG. 9C is an example of combined image data and segmentation output of the image of FIG. 9A,” Para [0026]).
[Image: media_image3.png, greyscale]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated the teachings of Luciano into Fouts and Iwase for the benefit of easier image-based diagnosis of a patient.
Regarding claim 16, the rejection of claim 15 is incorporated herein. Fouts in view of Iwase and Luciano teach the method of claim 15, and Luciano further teaches:
wherein the improved visualization is generated by altering pixel intensity, coloration, texture, lighting, or a combination thereof (Fig. 9C, parts of the image are darker in color, such as the vertebral body 926 from Fig. 9B, to make the result of 9C (Fig. 9A plus 9B) easier to interpret).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated the teachings of Luciano into Fouts and Iwase for the benefit of easier image-based diagnosis of a patient.
Claim 35 is a corresponding claim to method claim 14. Therefore, the rejection of claim 14 applies equally to claim 35.
Claim(s) 17-18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Fouts et al. (US-20230149092-A1) in view of Iwase et al. (US-20210390696-A1) and Luciano et al. (US-20240265667-A1) as applied to claim 14 above, and further in view of Riem et al. (US-20230386042-A1).
Regarding claim 17, the rejection of claim 14 is incorporated herein. Fouts in view of Iwase and Luciano teach the method of claim 14, but are not relied upon to teach the following limitations. Riem, however, further teaches:
wherein the spatial classification map (“label maps”) is processed to determine at least one quantitative measure (“muscle volume” or “fat infiltration ratio”) related to the anatomical region of the subject (“v) determine at least one of intra-muscular fat or extra-muscular fat based on the label maps; and vi) quantify muscle volume for the at least one muscle and a fat infiltration ratio,” Para [0009]).
Riem is considered to be analogous to the claimed invention because they are in the same field of modeling and segmenting medical images. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated the teachings of Riem into Fouts, Iwase, and Luciano for the benefit of more accurate image-based diagnoses.
Regarding claim 18, the rejection of claim 17 is incorporated herein. Fouts in view of Iwase, Luciano, and Riem teach the method of claim 17, and Riem further teaches:
wherein the at least one quantitative measure is one or more of a distance measurement, a dimensional measurement (“vi) quantify muscle volume for the at least one muscle,” Para [0009]), or a positional measurement.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated the teachings of Riem into Fouts, Iwase, and Luciano for the benefit of more accurate image-based diagnoses.
Claim(s) 23-26 and 34 is/are rejected under 35 U.S.C. 103 as being unpatentable over Fouts et al. (US-20230149092-A1) in view of Iwase et al. (US-20210390696-A1) as applied to claims 1 and 28 above, and further in view of Zhao et al. (US-20200242767-A1).
Regarding claim 23, the rejection of claim 1 is incorporated herein. Fouts in view of Iwase teach the method of claim 1, but are not relied upon to teach the following limitations. Zhao, however, further teaches:
wherein a pose of the augmented digital image (“fluoroscopic reference frame”) is spatially registered to a pose of at least one a priori anatomical model (“model reference frame”; “The fluoroscopic reference frame and the model reference frame have been registered and the registered images are displayed side-by-side. With the reference frames registered, structures from one image may, optionally, be superimposed or overlaid on the other image,” Para [0078]) to determine at least one quantitative measurement related to the anatomical region of the subject (finding positions of tumors; “with the fluoroscopic view registered to the anatomic model, anatomic features such as target tissue, tumors, or other landmarks may be identified in the fluoroscopic image and used to locate the corresponding feature in the anatomic model,” Para [0082]).
Zhao is considered to be analogous to the claimed invention because they are both in the field of augmenting medical images. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated the teachings of Zhao into Fouts and Iwase for the benefit of more accurate image-based diagnoses.
Regarding claim 24, the rejection of claim 23 is incorporated herein. Fouts in view of Iwase teach the method of claim 23, but are not relied upon to teach the following limitations. Zhao, however, further teaches:
wherein the at least one quantitative measurement is one or more of a distance measurement, a dimensional measurement, a positional measurement (finding positions of tumors; “with the fluoroscopic view registered to the anatomic model, anatomic features such as target tissue, tumors, or other landmarks may be identified in the fluoroscopic image and used to locate the corresponding feature in the anatomic model,” Para [0082]), or a comparison of positional alignment with ideal positional alignment for a medical procedure.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated the teachings of Zhao into Fouts and Iwase for the benefit of more accurate image-based diagnoses.
Regarding claim 25, the rejection of claim 23 is incorporated herein. Fouts in view of Iwase teach the method of claim 23, but are not relied upon to teach the following limitations. Zhao, however, further teaches:
wherein the at least one priori anatomical model is a synthetic anatomical model (“prior image data, including pre-operative… image data, is obtained from imaging technology such as, CT, MRI, thermography, ultrasound, OCT, thermal imaging, impedance imaging, laser imaging, or nanotube X-ray imaging… an anatomic model is created from the prior image data,” Para [0087]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated the teachings of Zhao into Fouts and Iwase for the benefit of more accurate image-based diagnoses.
Regarding claim 26, the rejection of claim 23 is incorporated herein. Fouts in view of Iwase teach the method of claim 23, but are not relied upon to teach the following limitations. Zhao, however, further teaches:
wherein the at least one priori anatomical model is composed of data from an imaging modality such as ultrasound, computed tomography, magnetic resonance, x-ray, or multi-modality fusion (“some embodiments of the method 450 may begin at a process 452, in which prior image data, including pre-operative or intra-operative image data, is obtained from imaging technology such as, CT, MRI, thermography, ultrasound, OCT, thermal imaging, impedance imaging, laser imaging, or nanotube X-ray imaging,” Para [0067]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated the teachings of Zhao into Fouts and Iwase for the benefit of more accurate image-based diagnoses.
Regarding claim 34, the rejection of claim 28 is incorporated herein. Fouts in view of Iwase teach the apparatus of claim 28, but are not relied upon to teach the following limitations. Zhao, however, further teaches:
wherein the apparatus contains at least one registration unit that spatially registers the augmented digital image (“fluoroscopic images”) to at least one a priori anatomical model (“prior-time image”) stored in a computer memory (“At a process 466 and with reference to FIGS. 10 and 11, the registered frames of reference are displayed as the catheter traverses the patient anatomy, allowing the clinician viewing the display image(s) to utilize the benefits of real-time instrument tracking in the fluoroscopic images with the anatomic detail of the prior-time image (e.g., a CT image),” Para [0078]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated the teachings of Zhao into Fouts and Iwase for the benefit of more accurate image-based diagnoses.
Allowable Subject Matter
Claims are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. These claims contain allowable subject matter because the prior art of record fails to anticipate or render obvious their limitations.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Erkamp et al. (US-20240412359-A1) teaches a processor for increasing the quality of a medical image and reducing image artifacts in the image.
Samset et al. (US-20180260950-A1) teaches a method for registering images to a geometrical model of a prior anatomical structure.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to RACHEL A OMETZ whose telephone number is (571)272-2535. The examiner can normally be reached 6:45am-4:00pm ET Monday-Thursday, 6:45am-1:00pm ET every other Friday.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vu Le can be reached at 571-272-7332. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Rachel Anne Ometz/Examiner, Art Unit 2668 3/3/26
/VU LE/Supervisory Patent Examiner, Art Unit 2668