Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Applicant has canceled claims 2 and 5. Claims 1, 3-4, and 6-21 are pending.
Response to Arguments
Applicant’s arguments, see Remarks, pages 7-8 of 12, filed 12/10/2025, with respect to the rejection of claims 1-3 and 16-20 under 35 U.S.C. 102 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made in view of Tkaczyk (US-20190282194-A1), in further view of JP-2011510685-A.
Applicant’s arguments with respect to the 35 U.S.C. 103 rejections of claims 4, 6-8, 13-15, and 21 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Accordingly, this action is made NON-FINAL.
Claim Objections
Claim 1 is objected to because of the following informalities: step e, which previously recited “determining a location of an ROI…”, has been amended to recite the newly added “aligning the camera…”. As a result, there is a new step, step g, which must be underlined, as it is an additional step added to the claim. Additionally, “an ROI” should be corrected to “a ROI” in what is now step f. Appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3 and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over John Eric Tkaczyk (US-20190282194-A1), hereinafter Eric, in view of JP-2011510685-A, hereinafter Ref. B.
As per claim 19, Eric discloses a radiography imaging system comprising (see Eric ¶12 and FIG. 1, wherein the x-ray imaging system is disclosed):
a. a radiation source (see Eric ¶12-13, wherein an x-ray source is disclosed);
b. a detector alignable with the radiation source (see Eric ¶12-13 and FIG. 1, wherein the detector is aligned with the radiation source);
c. a camera alignable with the detector (see Eric ¶12 and FIG. 1, wherein the optical sensor 108 is disclosed and is aligned with the detector);
d. a controller operably connected to the radiation source, the detector, and the camera to generate image data and camera images, the control processing unit including image processing circuitry and an interconnected database for processing the image data from the detector (see Eric ¶14-17, wherein the processing subsystem, which contains the control unit, is configured to generate a data signal to initiate an acquisition from the optical sensor, i.e., the camera, to generate camera images. See also ¶24, wherein the image processing unit, which is part of the processing subsystem and is connected to the control unit, generates main-shot parameters, such as pixel values, i.e., image data. See further ¶32, wherein the imaging database is disclosed);
e. a display operably connected to the controller for presenting information to a user (see Eric ¶18, wherein the display, which receives control signals from an operator and displays scanning options, is disclosed); and
f. a user interface operably connected to the controller to enable user input to the controller (see Eric ¶18, wherein the x-ray device receives input control signals when operated by an operator. See further ¶28, wherein the control unit is coupled with an input device);
wherein the image processing circuitry is configured to generate one or more preshot images from the image data (see Eric ¶31 and FIG. 2, wherein the pre-shot x-ray image is generated based on an optical image), to determine a location of an ROI within the one or more preshot images of a subject (see Eric ¶34, step 312, wherein position information of the anatomy 162 is generated based on the pre-shot x-ray image 148), and to adjust exposure parameters for the operation of the radiation source to obtain one or more main shots of the subject corresponding to the image data for the ROI from the one or more preshot images (see Eric ¶14 and ¶34, wherein the optical sensor is focused on the region of interest for the optical image and the pre-shot x-ray image. See also Eric ¶15-16, wherein the pre-shot and main-shot parameters are disclosed; the main shot uses more exposure than the pre-shot, as described in Eric ¶11).
However, Eric fails to explicitly disclose, but Ref. B teaches, that the image processing circuitry is configured to align the one or more preshot images with the camera image (see JP pages 4-5/48, wherein the optical scanning system acquires a 3D scan, or image, that is aligned with the CT scan data (e.g., x-ray data) using reference markers. See further the top of page 10, wherein the CT scan is low-dose, i.e., a preshot image. Similarly, Application Specification ¶10 describes a low-dose x-ray, or CT, image as a preshot image).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Eric’s system with Ref. B’s teaching by incorporating the step of aligning the camera image with the preshot images, in order to further aid the doctor or technician in acquiring a better view of the area or object of interest for the main-shot image.
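Examiner’s note: for illustration only, the alignment Ref. B teaches (registering a camera image to a low-dose preshot image via reference markers) can be sketched in Python as marker-based estimation of a geometric transform. This is not Ref. B’s actual implementation; the pre-detected marker coordinates, the use of OpenCV, and the affine transform model are all assumptions made for the sketch.

import numpy as np
import cv2  # OpenCV, used here for transform estimation and warping

def align_camera_to_preshot(camera_img, preshot_img,
                            camera_markers, preshot_markers):
    """Warp the camera image into the preshot (low-dose x-ray) frame.

    camera_markers / preshot_markers: (N, 2) arrays of matching
    reference-marker pixel coordinates, N >= 3 (hypothetical inputs;
    marker detection itself is outside this sketch).
    """
    # Estimate an affine transform from the marker correspondences;
    # RANSAC discards badly localized marker pairs.
    matrix, inliers = cv2.estimateAffinePartial2D(
        camera_markers.astype(np.float32),
        preshot_markers.astype(np.float32),
        method=cv2.RANSAC)
    if matrix is None:
        raise ValueError("marker correspondences are degenerate")
    # Resample the camera image onto the preshot pixel grid so the two
    # images can be overlaid or compared pixel-for-pixel.
    h, w = preshot_img.shape[:2]
    aligned = cv2.warpAffine(camera_img, matrix, (w, h))
    return aligned, matrix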
As per claim 20, Eric, in combination with Ref. B, discloses the radiography imaging system of claim 19, wherein the image processing circuitry is configured to receive user input to determine the location of the ROI within the one or more preshot images (see Eric ¶19-20, wherein the image processing unit uses the optical image and the pre-shot x-ray image to find the region of interest using localization and mapping techniques. See also ¶17 and ¶28, wherein input control is disclosed).
As per claim 1, Eric discloses a method for determining the location of one or more regions of interest (ROIs) within one or more preshot images taken of an anatomy, the method comprising the steps of:
a. providing an imaging system comprising (see Eric ¶12 and FIG. 1, wherein the x-ray imaging system is disclosed):
i. a radiation source (see Eric ¶12-13, wherein an x-ray source is disclosed);
ii. a detector alignable with the radiation source, the detector having a support on or against which a subject to be imaged is adapted to be positioned (see Eric ¶12-13 and FIG. 1, wherein the detector is aligned with the radiation source and placed against the subject at a suitable position);
iii. a camera aligned with the detector (see Eric ¶12 and FIG. 1, wherein the optical sensor 108 is disclosed and is aligned with the detector);
iv. a controller operably connected to the radiation source and detector to generate image data in an imaging procedure performed by the imaging system, and to the camera to generate camera images, the controller including a central processing unit and interconnected database for processing the image data from the detector to create preshot images (see Eric ¶14-17, wherein the processing subsystem, which contains the control unit, is configured to generate a data signal to initiate an acquisition from the optical sensor, i.e., the camera, to generate camera images, and is configured to control the x-ray source to generate pre-shot images from the detector. See also ¶24, wherein the image processing unit, which is part of the processing subsystem and connected to the control unit, generates main-shot parameters, such as pixel values, i.e., image data. See further ¶32, wherein the imaging database is disclosed);
v. a display operably connected to the controller for presenting information to a user (see Eric ¶18, wherein the display, which receives control signals from an operator and displays scanning options, is disclosed); and
vi. a user interface operably connected to the control processing unit to enable user input to the control processing unit (see Eric ¶18, wherein the x-ray device receives input control signals when operated by an operator. See further ¶28, wherein the control unit is coupled with an input device);
b. positioning the subject between the radiation source and the detector (see Eric ¶12-13 and FIG. 1, wherein the subject is between the detector 115 and the radiation source 104);
c. operating the radiation source and detector to generate one or more preshot images (see Eric ¶12-13 and FIG. 1, wherein an operator controls the radiation source and another operator, or the control unit, controls the x-ray detector mechanism, which is configured to make changes to the x-ray parameters. See further ¶15, wherein the pre-shot x-ray image is taken);
d. operating the camera to generate a camera image (see Eric ¶12 and FIG. 1, wherein the x-ray device 102, which contains the optical sensor 108, is controlled by an operator 106);
f. determining a location of an ROI within the one or more preshot images (see Eric ¶14, wherein the optical sensor is focused on the region of interest for the optical image and the pre-shot x-ray image. See also ¶34, step 312, wherein position information of the anatomy 162 is generated based on the pre-shot x-ray image 148); and
g. adjusting exposure parameters for the operation of the radiation source to obtain one or more main shots of the subject corresponding to the image data for the ROI from the one or more preshot images (see Eric ¶15-16, wherein the pre-shot and main-shot parameters are disclosed; the main shot uses more exposure than the pre-shot, as described in Eric ¶11).
However, Eric fails to explicitly disclose, but Ref. B teaches:
e. aligning the camera image with the one or more preshot images (see JP page 5/48, wherein the 3D scan, or image, from the optical imaging device is aligned with the CT data using reference markers. See further the top of page 10, wherein the CT scan is low-dose, i.e., a preshot image).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Eric’s method with Ref. B’s teaching by incorporating the step of aligning the camera image with the preshot images, in order to further aid the doctor or technician in acquiring a better view of the area or object of interest for the main-shot image.
As per claim 2, Eric, in combination with Ref. B, discloses the method of claim 1, further comprising the step of aligning the camera image with the one or more preshot images prior to determining the location of the ROI within the one or more preshot images (see Eric ¶19, wherein if the optical image does not contain the ROI in the pre-shot X-ray image, the optical sensor is adjusted).
As per claim 3, Eric, in combination with Ref. B, discloses the method of claim 1, wherein the step of determining a location of the ROI in the one or more preshot images comprises providing an indication of the location of the ROI within the camera image (see Eric ¶20, wherein a 3D depth point-cloud technique is used on the region of interest in the optical image in order to triangulate the distance of the ROI).
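Examiner’s note: for context on the 3D depth point-cloud technique Eric cites for triangulating the distance of the ROI, a minimal generic sketch follows. The depth map, pinhole-camera intrinsics, and median-distance estimate are illustrative assumptions, not Eric’s disclosed implementation.

import numpy as np

def roi_distance_from_depth(depth_map, roi_box, fx, fy, cx, cy):
    """Estimate the 3D distance to an ROI from a per-pixel depth map.

    depth_map: (H, W) array of depths in meters (0 = invalid sample).
    roi_box:   (x0, y0, x1, y1) ROI bounds in pixel coordinates.
    fx, fy, cx, cy: pinhole-camera intrinsics (assumed known).
    """
    x0, y0, x1, y1 = roi_box
    patch = depth_map[y0:y1, x0:x1]
    vs, us = np.nonzero(patch > 0)          # valid-depth pixels only
    if vs.size == 0:
        raise ValueError("no valid depth samples inside the ROI")
    z = patch[vs, us]
    # Back-project each valid pixel into a 3D point (the "point cloud").
    x = (us + x0 - cx) * z / fx
    y = (vs + y0 - cy) * z / fy
    points = np.stack([x, y, z], axis=1)
    # The median of the point norms is a robust distance to the ROI surface.
    return float(np.median(np.linalg.norm(points, axis=1)))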
As per claim 16, Eric, in combination with Ref. B, discloses the method of claim 1, wherein the step of adjusting exposure parameters for the operation of the radiation source comprises adjusting one or more of kVp, mA, ms, filter, positioning, and field of view for each corresponding main shot (see Eric ¶13 and more specifically ¶21, wherein the exposure parameters, such as the kVp, mA, viewing angle, and positioning, are adjusted).
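Examiner’s note: the exposure adjustment recited in claim 16 can be illustrated generically with a minimal sketch in which the mAs product is rescaled from preshot ROI statistics. The linear detector-response assumption, the target signal level, and the clamping are hypothetical and are not drawn from Eric.

from dataclasses import dataclass

import numpy as np

@dataclass
class ExposureParams:
    kvp: float      # tube voltage (kVp)
    ma: float       # tube current (mA)
    ms: float       # exposure time (ms)

def adjust_for_main_shot(preshot_roi, params, target_signal=0.5,
                         max_scale=20.0):
    """Scale exposure time so the ROI reaches a target signal level.

    preshot_roi: pixel values of the ROI in the low-dose preshot,
                 normalized to [0, 1] (illustrative assumption).
    """
    measured = float(np.median(preshot_roi))
    if measured <= 0:
        raise ValueError("ROI carries no signal in the preshot")
    # Detector signal is roughly proportional to mAs at fixed kVp,
    # so scale exposure time by the shortfall (clamped for safety).
    scale = min(target_signal / measured, max_scale)
    return ExposureParams(kvp=params.kvp, ma=params.ma,
                          ms=params.ms * scale)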
As per claim 17, Eric, in combination with Ref. B, discloses the method of claim 1, further comprising the step of obtaining one or more main shot images of the ROIs determined within the one or more preshot images after adjusting the exposure parameters (see Eric ¶11 and ¶13, wherein the exposure is adjusted to become a main shot, and more specifically Eric ¶16-17, wherein the main-shot x-ray image of the region of interest is generated, the same ROI 160 as in the pre-shot image of ¶15).
As per claim 18, Eric, in combination with Ref. B, discloses the method of claim 17, wherein the step of obtaining the one or more main shot images further comprises pasting together the one or more main shot images (see Eric ¶16, ¶24, ¶34, and FIG. 3, wherein the main shot image comprises a plurality of pre-shot x-ray images at the optimal focal spot positions, i.e., the pre-shot images are pasted together to create the main shot image).
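Examiner’s note: the “pasting together” operation can likewise be sketched generically as compositing sub-images onto a common canvas at known offsets. The averaging of overlapping pixels below is an assumption made for illustration and is not Eric’s disclosed blending.

import numpy as np

def paste_main_shots(shots, offsets, canvas_shape):
    """Paste overlapping main-shot images into one composite.

    shots:   list of (H, W) float arrays (the individual main shots).
    offsets: list of (row, col) top-left positions on the canvas,
             assumed known from the acquisition geometry.
    """
    canvas = np.zeros(canvas_shape, dtype=np.float64)
    counts = np.zeros(canvas_shape, dtype=np.float64)
    for img, (r, c) in zip(shots, offsets):
        h, w = img.shape
        canvas[r:r + h, c:c + w] += img
        counts[r:r + h, c:c + w] += 1.0
    # Average overlaps; avoid dividing by zero where nothing was pasted.
    return np.divide(canvas, counts, out=np.zeros_like(canvas),
                     where=counts > 0)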
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Eric, in combination with Ref. B, in further view of Yun Zou US-9610057-B2, hereinafter Zou.
As per claim 4, Eric, in combination with Ref. B, fails to explicitly disclose, but Zou teaches, the method of claim 3, wherein the step of providing the indication of the location of the ROI within the camera image comprises providing the indication on the camera image with the user interface (see Zou col. 5, lines 49-60, wherein the graphical data is rendered and displayed on the user interface. See also Zou col. 8, lines 1-17, and col. 2, lines 54-55 (FIG. 3). See finally Zou col. 11, lines 36-42, and FIG. 3, wherein the graphical representation is of the ROI within the pre-shot image).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify the method of Eric, in combination with Ref. B, with Zou’s teaching by providing the indication of the ROI with the user interface on Eric’s display, in order to present the user with the precise location of the ROI.
Claims 6-8 are rejected under 35 U.S.C. 103 as being unpatentable over Eric, in combination with Ref. B and Zou, in further view of STEWART YOUNG US-20210401391-A1, hereinafter YOUNG.
As per claim 6, Eric, in combination with Ref. B and Zou, fails to explicitly disclose, but YOUNG teaches, the method of claim 5, wherein the step of presenting the camera image on the display comprises presenting the camera image as an overlay on the one or more preshot images (see YOUNG ¶121 and FIG. 4, wherein the x-ray image 66, i.e., the preshot image, contains the boundary error as shown in image 68. See also ¶118, wherein the boundary error is an overlay for the x-ray image).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify the method of Eric, in combination with Ref. B and Zou, with YOUNG’s teaching by including an overlay on the preshot image, in order to clarify the ROI by displaying the region in the overlay.
As per claim 7, Eric, in combination with Ref. B and Zou, fails to explicitly disclose, but YOUNG teaches, the method of claim 4, wherein the step of providing the indication on the camera image comprises drawing the indication on the camera image (see YOUNG ¶118 and FIG. 4, wherein the processing unit draws the boundary error displayed as a solid marker).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify the method of Eric, in combination with Ref. B and Zou, with YOUNG’s teaching by drawing the indication on the camera image, in order to clarify the ROI on the camera image.
As per claim 8, Eric, in combination with Ref. B, Zou, and YOUNG, discloses the method of claim 7, further comprising the step of providing a corresponding indication on the one or more preshot images after drawing the indication on the camera image (see YOUNG ¶132, wherein the processing unit can place a contour mask on the x-ray image, wherein the contour mask contains the boundary error indication).
Claims 9-12 are rejected under 35 U.S.C. 103 as being unpatentable over Eric, in combination with Ref. B, in further view of YOUNG.
As per claim 9, Eric, in combination with Ref. B, fails to explicitly disclose, but YOUNG teaches, the method of claim 3, wherein the step of providing the indication of the location of the ROI within the camera image comprises providing a system-generated indication on the camera image (see YOUNG ¶118 and FIG. 4, wherein the processing unit generates the boundary error region on the image 68).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify the method of Eric, in combination with Ref. B, with YOUNG’s teaching by including a system-generated boundary error, or indication, on the camera image, in order to obtain the indication more rapidly by using the system.
As per claim 10, Eric, in combination with Ref. B and YOUNG, discloses the method of claim 9, further comprising the step of modifying the system-generated indication on the camera image (see YOUNG ¶124-125, wherein the processing unit can modify the boundary error region using annotated x-ray information. See further ¶128-133 for further configurations the processing unit incorporates to adjust the region of interest).
As per claim 11, Eric, in combination with Ref. B and YOUNG, discloses the method of claim 10, wherein the step of modifying the system-generated indication comprises modifying the system-generated indication with the user interface (see YOUNG ¶93, wherein the user annotates x-ray information of x-ray image data which are suffering a boundary error on a system console, or user interface).
As per claim 12, Eric, in combination with Ref. B, fails to explicitly disclose, but YOUNG teaches, the method of claim 3, wherein the step of providing an indication of the location of the ROI within the camera image further comprises providing an indication of an imaging area boundary within the camera image in conjunction with the indication of the ROI (see YOUNG ¶111 and ¶118 and FIG. 4, wherein the boundary error region is defined by lines around the boundary that coincide with a boundary error).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify the method of Eric, in combination with Ref. B, with YOUNG’s teaching by including the imaging area boundary indication on the camera image, in order to further delimit where the ROI is actually located.
Claims 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Eric, in combination with Ref. B and YOUNG, in further view of PAIK DAVID SEUNGWON WO-2022212771-A2, hereinafter DAVID.
As per claim 14, Eric, in combination with Ref. B and YOUNG, fails to explicitly disclose, but DAVID teaches, the method of claim 12, wherein the step of providing the indication of the location of the ROI as an output from the AI model comprises providing at least one of a thickness map and an ROI map as an output from the AI model (see DAVID ¶108-109, wherein the AI model outputs the anatomic ROI. See also DAVID ¶154, wherein the AI can also output dimensionality, i.e., thickness, of an anatomical region. See further ¶164, wherein the AI has an anatomic mapper).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify the method of Eric, in combination with Ref. B and YOUNG, with DAVID’s teaching by including an AI model in the ROI indication step, in order to determine the thickness map more quickly and accurately using a trained model.
As per claim 15, Eric, in combination with Ref. B and YOUNG, fails to explicitly disclose, but DAVID teaches, the method of claim 12, wherein the step of providing the indication of the location of the ROI as an output from the AI model comprises providing an ROI view position prediction as an output from the AI model (see DAVID ¶154-155, wherein the predicted AI finding comprises an image workspace viewer of the radiological imaging findings).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify the method of Eric, in combination with Ref. B and YOUNG, with DAVID’s teaching by including position prediction in the AI model, in order to find the starting position of the ROI more quickly by using a trained model.
Claims 13 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Eric, in combination with Ref. B, in further view of BONTUS CLAAS WO-2022184736-A1, hereinafter CLAAS.
As per claim 13, Eric, in combination with Ref. B, fails to explicitly disclose, but CLAAS teaches, the method of claim 1, wherein the step of determining a location of the ROI in the one or more preshot images comprises:
a. inputting the one or more preshot images to an image processing AI model (see CLAAS, bottom of page 10/53, wherein the low-dose CT image is input into the machine learning algorithm); and
b. providing the indication of the location of the ROI as an output from the AI model (see CLAAS, top of page 11/53, wherein the high-dose CT image is output with the plurality of desired points of the motion of the ROI on a coordinate system).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify the method of Eric, in combination with Ref. B, with CLAAS’s teaching by including an AI model in the ROI location determination, in order to use a trained model to more quickly find where the ROIs are located.
As per claim 21, Eric, in combination with Ref. B, fails to explicitly disclose, but CLAAS teaches, the radiography imaging system of claim 19, wherein the image processing circuitry includes an AI model trained to automatically determine the location of the ROI within the one or more preshot images (see CLAAS, bottom of page 10 and top of page 11, wherein the low-dose CT images are used by the machine learning model to determine the desired target points of the region of interest on a coordinate system).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify the system of Eric, in combination with Ref. B, with CLAAS’s teaching by including an AI model in the ROI location determination, in order to use a trained model to more quickly find where the ROIs are located in the preshot images.
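Examiner’s note: for context on the kind of learned ROI localization CLAAS describes, the following minimal sketch assumes a hypothetical pretrained PyTorch segmentation model that maps a preshot image to an ROI probability map; the model, threshold, and bounding-box readout are illustrative assumptions and are not drawn from CLAAS.

import numpy as np
import torch

def locate_roi(preshot, model, threshold=0.5):
    """Return the bounding box of the ROI predicted in a preshot image.

    preshot: (H, W) float array, a normalized low-dose image.
    model:   torch.nn.Module mapping (1, 1, H, W) logits to a
             (1, 1, H, W) ROI map (hypothetical; training is outside
             this sketch).
    """
    model.eval()
    with torch.no_grad():
        x = torch.from_numpy(preshot).float()[None, None]
        prob = torch.sigmoid(model(x))[0, 0].numpy()
    mask = prob > threshold
    if not mask.any():
        return None  # no ROI found in this preshot
    rows, cols = np.nonzero(mask)
    # Bounding box (x0, y0, x1, y1) of all above-threshold pixels.
    return int(cols.min()), int(rows.min()), int(cols.max()), int(rows.max())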
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Bradley Obas Felix whose telephone number is (703)756-1314. The examiner can normally be reached M-F 8-5 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vincent Rudolph, can be reached at 571-272-8243. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/BRADLEY O FELIX/Examiner, Art Unit 2671
/VINCENT RUDOLPH/Supervisory Patent Examiner, Art Unit 2671