DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/28/26 has been entered. Currently, claims 1-35 are pending.
Response to Arguments
Applicant’s arguments with respect to claim(s) 1, 12, 22, 27, 28, and 34 have been considered but are moot in view of the new ground(s) of rejection.
Claim Rejections - 35 USC § 103
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
Claims 1, 2, 4-13, 15-23, 25-28, and 30-35 are rejected under 35 U.S.C. 103 as being unpatentable over O’Connor et al. (WO 2021/055522), cited in the IDS dated 7/23/24, in view of Shelton IV et al. (US 2021/0196109).
Regarding claims 1 and 12, O’Connor discloses a system for determining an attribute associated with anatomy of interest of a patient, the system comprising one or more processors, memory, and one or more programs stored in the memory for execution by the one or more processors and a method for determining an attribute associated with anatomy of interest of a patient comprising:
receiving first image data capturing the anatomy of interest of the patient and at least one obstruction obscuring at least a portion of the anatomy of interest of the patient (see paras 19 and 30, a first image of an anatomy of interest is captured by an image capture device 103, obstruction of the anatomy of interest is detected);
generating, using at least one machine learning model, second image data in which at least a portion of the obstruction is replaced, wherein the at least one machine learning model is trained to generate the second image data by filling in a region of the first image data that corresponds to the at least a portion of the obstruction with a representation of the at least a portion of the anatomy of interest that was generated by the at least one machine learning model based on the first image data (see paras 19, 29-30, 43, and 46-47, combinational logic/algorithms, such as a machine learning model, neural network, or AI, are used to detect an obstruction that is obstructing the anatomy of interest and removing the obstruction or making the obstruction translucent such that the anatomy of interest that was hidden becomes visible).
O’Connor does not disclose expressly determining at least one attribute associated with resection of at least a portion of the anatomy of interest based on the second image data.
Shelton discloses determining at least one attribute associated with resection of at least a portion of the anatomy of interest based on the second image data (see paras 118, 123, 141, 177, and 331, a critical structure is determined, a tumor 2332 can be identified for removal, or resection, surgical visualization system 100 utilizes images of the anatomy of interest that does not contain obstructions to display image data to a clinician when an obstruction is detected).
Regarding claims 22 and 27, O’Connor discloses a system for determining an attribute associated with anatomy of interest of a patient, the system comprising one or more processors, memory, and one or more programs stored in the memory for execution by the one or more processors and a method for determining an attribute associated with anatomy of interest of a patient comprising:
receiving first image data capturing the anatomy of interest of the patient and at least one obstruction obscuring at least a portion of the anatomy of interest of the patient (see paras 19 and 30, a first image of an anatomy of interest is captured by an image capture device 103, obstruction of the anatomy of interest is detected); and
determining a location of the obstruction relative to the anatomy of interest within the first image data using at least one machine learning model, wherein determining the location of the obstruction relative to the anatomy of interest comprises identifying pixels associated with an intersection between the obstruction and the anatomy of interest (see paras 19, 26, 29-30, and 43, combinational logic/algorithms, such as a machine learning model, neural network, or AI, are used to detect an obstruction that is obstructing the anatomy of interest and removing the obstruction or making the obstruction translucent such that the anatomy of interest that was hidden becomes visible, pixels associated with the intersection between the obstruction and the anatomy of interest are identified).
O’Connor does not disclose expressly determining at least one attribute associated with resection of at least a portion of the anatomy of interest without using the pixels associated with the intersection between the obstruction and the anatomy of interest.
Shelton discloses determining at least one attribute associated with resection of at least a portion of the anatomy of interest without using the pixels associated with the intersection between the obstruction and the anatomy of interest (see paras 118, 123, 141, 177, and 331, a critical structure is determined, a tumor 2332 can be identified for removal, or resection, surgical visualization system 100 utilizes images of the anatomy of interest that does not contain obstructions to display image data to a clinician when an obstruction is detected).
Regarding claims 28 and 34, O’Connor discloses a system for compensating for an obstruction in imaging of anatomy of a patient, the system comprising one or more processors, memory, and one or more programs stored in the memory for execution by the one or more processors and a method for compensating for an obstruction in imaging of anatomy of a patient comprising:
receiving image data capturing anatomy of interest of the patient and at least one obstruction obscuring a portion of the anatomy of interest (see paras 19 and 30, a first image of an anatomy of interest is captured by an image capture device 103, obstruction of the anatomy of interest is detected);
detecting the at least one obstruction in the image data using at least one machine learning model (see para 43, combinational logic/algorithms, such as a machine learning model, neural network, or AI, are used to detect an obstruction that is obstructing the anatomy of interest);
generating a data set from the image data in which at least a portion of the at least one obstruction is altered based on the anatomy of interest, wherein the at least one machine learning model is trained to generate the data set by filling in a region of the image data that corresponds to the at least a portion of the at least one obstruction with a representation of the portion of the anatomy of interest that was generated by the at least one machine learning model based on the image data (see paras 19, 29-30, 43, and 46-47, combinational logic/algorithms, such as a machine learning model, neural network, or AI, are used to detect an obstruction that is obstructing the anatomy of interest and removing the obstruction or making the obstruction translucent such that the anatomy of interest that was hidden becomes visible); and
generating a visual guidance associated with the at least a portion of the anatomy of interest; and displaying the visual guidance (see paras 58 and 64, visual guidance is displayed to the user about the anatomy of interest and the removal of the obstruction from the anatomy of interest).
O’Connor does not disclose expressly determining at least one attribute associated with resection of at least a portion of the anatomy of interest based on the data set and generating a visual guidance associated with the resection.
Shelton discloses determining at least one attribute associated with resection of at least a portion of the anatomy of interest based on the data set (see paras 118, 123, 141, 177, and 331, a critical structure is determined, a tumor 2332 can be identified for removal, or resection, surgical visualization system 100 utilizes images of the anatomy of interest that does not contain obstructions to display image data to a clinician when an obstruction is detected); and
generating a visual guidance associated with the resection of the at least a portion of the anatomy of interest based on the determined at least one attribute; and displaying the visual guidance (see paras 118-119, 123, 126-127, 133, 158, and 161, surgical visualization system 100 displays visual guidance to a clinician to aid in the resection of a tumor).
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine the guidance for resection based on an identified attribute, as described by Shelton, with the system of O’Connor.
The suggestion/motivation for doing so would have been to ensure proper visual depiction of an anatomy of interest, allowing a practitioner to accurately perform a surgical procedure and thereby reducing risk to the patient.
Therefore, it would have been obvious to combine Shelton with O’Connor to obtain the invention as specified in claims 1, 12, 22, 27, 28, and 34.
Regarding claims 2, 13, and 23, O’Connor further discloses generating a visual guidance associated with the anatomy of interest based on the determined at least one attribute and adding the visual guidance to the second image data (see paras 58 and 64, visual guidance is displayed to the user about the anatomy of interest and the removal of the obstruction from the anatomy of interest).
Regarding claims 4 and 15, O’Connor further discloses wherein the second image data is displayed intraoperatively for guiding a surgical procedure (see paras 58 and 64, visual guidance is displayed to a surgeon in real time during a procedure about the anatomy of interest and the removal of the obstruction from the anatomy of interest).
Regarding claims 5, 16, 25, and 30, O’Connor further discloses wherein determining the at least one attribute comprises identifying at least a portion of a perimeter of the anatomy of interest based at least in part on the representation of the at least a portion of the anatomy of interest obscured by the obstruction (see paras 26 and 29-30, an obstruction is identified in the anatomy of interest, pixels associated with the intersection between the obstruction and the anatomy of interest are identified).
Regarding claims 6, 17, 26, and 31, O’Connor further discloses wherein the obstruction obscures the at least a portion of the perimeter in the first image data (see paras 26 and 29-30, an obstruction is identified in the anatomy of interest, pixels associated with the intersection between the obstruction and the anatomy of interest are identified).
Regarding claims 7 and 18, O’Connor further discloses wherein the first image data is an X-ray image (see paras 30 and 52, the first image is an X-ray image).
Regarding claims 8, 19, and 32, O’Connor further discloses wherein generating the second image data comprises using a first machine learning model to identify the obstruction and using a second machine learning model to generate the second image data based on the identification of the obstruction by the first machine learning model (see paras 43 and 64, combinational logic/algorithms, such as a machine learning model, neural network, or AI, are used to detect an obstruction that is obstructing the anatomy of interest and generate an image without any obstruction to the anatomy of interest).
Regarding claims 9, 20, and 33, O’Connor further discloses displaying the second image data with a representation of the at least one obstruction overlaid on the representation of the at least a portion of the anatomy of interest (see paras 19 and 58, an obstruction that is obstructing the anatomy of interest is replaced with an image overlay).
Regarding claims 10 and 21, O’Connor further discloses wherein the at least one obstruction is at least one surgical instrument (see para 55, a surgical instrument that is obstructing the anatomy of interest is detected).
Regarding claim 11, O’Connor further discloses wherein the at least one machine learning model comprises a diffusion-based machine learning model (see para 43, the system can use a diffusion-based machine learning model).
Regarding claim 35, Shelton further discloses wherein the at least one attribute is determined based on the at least a portion of the anatomy of interest that was obscured by the at least one obstruction (see paras 117-119, 124, 126-128, 160, and 163, a first image of an anatomy of interest is captured by a camera of an imaging device 120, obstruction of the anatomy of interest is detected).
Claims 3, 14, 24, and 29 are rejected under 35 U.S.C. 103 as being unpatentable over O’Connor and Shelton as applied to claims 1, 12, 22, and 28 above, and further in view of Quaid III (US 2004/0034282).
O’Connor and Shelton do not disclose expressly wherein the visual guidance provides guidance for bone removal.
Quaid III discloses wherein the visual guidance provides guidance for bone removal (see paras 10, 63, 93, 96, 110, and 120, bone resections/removal is performed based on visual guidance).
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine the guidance for bone removal, as described by Quaid III, with the system of O’Connor and Shelton.
The suggestion/motivation for doing so would have been to ensure proper visual depiction of an anatomy of interest, allowing a practitioner to accurately perform a surgical procedure and thereby reducing risk to the patient.
Therefore, it would have been obvious to combine Quaid III with O’Connor and Shelton to obtain the invention as specified in claims 3, 14, 24, and 29.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARK R MILIA whose telephone number is (571) 272-7408. The examiner can normally be reached Monday-Friday, 8am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Akwasi Sarpong can be reached at 571-270-3438. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MARK R MILIA/Primary Examiner, Art Unit 2681