Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on February 12, 2026, has been entered.
Claim Status
Applicant’s amendment filed on February 12, 2026, is acknowledged. Claims 1-20 are currently pending. Claims 1 and 11 have been amended.
Response to Amendments
Applicant’s remarks and amendments filed February 12, 2026, have been entered. Applicant’s arguments regarding the 35 U.S.C. 112(f) claim interpretation previously set forth in the Final Office Action mailed November 17, 2025, are not persuasive. Therefore, the 35 U.S.C. 112(f) claim interpretation is maintained.
Response to Arguments
Applicant's arguments filed February 12, 2026, have been fully considered but they are not persuasive.
On pages 9-10 of Applicant’s remarks, Applicant alleges that Thibault does not describe processing “the image sensor raw data” simultaneously into two resultant image datasets that differ with regard to image quality. Applicant argues that Thibault instead describes two image datasets that are acquired with different settings and then processed, and that Thibault does not describe one set of data that is processed to simultaneously create two resultant image datasets that differ with regard to image quality.
The examiner respectfully disagrees. The examiner asserts that Thibault teaches processing the image sensor raw data to simultaneously create two resultant image datasets that differ with regard to image quality ([0029] “Optionally the first set of data at the first kVp setting and the second set of data at the second kVp setting are acquired simultaneously using a system with two separate x-ray tubes.” wherein the raw datasets are acquired simultaneously) (see further analysis for claim 1 below). Applicant fails to disclose within the claims the specific method that is used to process the image sensor raw data to simultaneously create two resultant image datasets that differ with regard to image quality. Therefore, under the broadest reasonable interpretation, the Examiner interprets “processing image sensor raw data” to be equivalent to “acquiring image sensor raw data”.
In paragraphs [0053] and [0054] of Applicant’s specification, Applicant discloses that the X-ray detector is configured for recording image sensor raw data, but does not disclose how the X-ray detector simultaneously processes the image sensor raw data. The Examiner respectfully suggests that Applicant amend independent claims 1 and 11 to reflect the novel method used to simultaneously process the image sensor raw data to create two resultant image datasets that differ with regard to image quality.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier.
Such claim limitations are:
“a control unit”, “a processing unit”, and “a unit” in claims 8 and 10, described in paragraphs [0053] and [0054]; claim 9, which depends from claim 8, is similarly interpreted.
“an input unit” in claim 10 described in paragraph [0016].
Regarding the claim limitations above, 112(f) is invoked because “unit” is a non-structural generic placeholder expressed merely in terms of the function it performs. Although claims 8 and 10 are drafted as system claims, the term “unit” is Applicant’s claim term preceding the functional language “configured to control/process/perform/input”. Because Applicant fails to recite sufficiently definite structure for the term “unit”, the claim limitation amounts to a generic placeholder coupled with functional language.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-3 and 7-13 are rejected under 35 U.S.C. 103 as being unpatentable over Saget et al., US 20190122330 A1 (hereinafter “Saget”), in view of Hwang et al., US 20190108618 A1 (hereinafter “Hwang”), and further in view of Thibault et al., US 20130343624 A1 (hereinafter “Thibault”).
Regarding claim 1, Saget teaches a method for recording and processing image sensor data of an examination object that is able to be generated by a medical imaging system ([0010] “a computer-implemented method, performed by a micro-processor configured by a readable program code to perform the method to correct for distortion of an anatomical image captured from an X-ray imaging system” wherein a medical imaging system is an X ray imaging system), the method comprising:
recording image sensor raw data of the examination object using an image detector of the medical imaging system ([0016] “The data capture module can be configured to obtain data from a sensor. The sensor can be in electronic communication with an augmented or mixed reality grid or trackable”; [0085] “the system components include an input of a series of x-ray or fluoroscopic images of a selected surgical site” wherein image sensor raw data is data from a sensor and the examination object is a selected surgical site) ([0010] “a computer-implemented method, performed by a micro-processor configured by a readable program code to perform the method to correct for distortion of an anatomical image captured from an X-ray imaging system” wherein a medical imaging system is an X ray imaging system);
processing the image sensor raw data to simultaneously create two resultant image datasets of the same examination object differing with regard to the image quality, wherein a first resultant image dataset is configured for perception by a human and a second resultant image dataset is configured for further processing by a machine ([0075] “The following system and method generally relate to: a computing platform having a graphical user interface for displaying subject image data and apply data science techniques such as machine and deep learning” wherein human perception is a graphical user interface and further processing by a machine is machine and deep learning);
outputting the first resultant image dataset on a display unit ([0081] “The electronic display device 150 provides a displayed composite image and graphical user interface 151. The graphical user interface 151 is configured to: allow manipulation of a grid template by a user 155, such as a surgeon, physician assistant, surgical scrub nurse, imaging assistant, and support personnel.” wherein a display unit is the electronic display device);
providing the second resultant image dataset for the further processing by the machine ([0091] “Module 5 is made of an image quality scoring algorithm to assess the quality of an acquired medical image for its intended use. The image quality scoring algorithm is an image processing algorithm that is based on machine learning or deep learning from a good and bad medical image training dataset for a specific application.” wherein further processing by a machine is the image quality scoring algorithm based on machine or deep learning); and
using a processing result arising from the further processing ([0091] “Module 5 is made of an image quality scoring algorithm to assess the quality of an acquired medical image for its intended use. The image quality scoring algorithm is an image processing algorithm that is based on machine learning or deep learning from a good and bad medical image training dataset for a specific application.” wherein a processing result is the quality of an acquired medical image; wherein further processing by a machine is the image quality scoring algorithm based on machine or deep learning), wherein the first resultant image dataset has at least one of a lower noise level than the second resultant image dataset, a lower spatial resolution than the second resultant image dataset, or a lower image sharpness than the second resultant image dataset.
Saget does not specifically disclose the following: 1) two resultant image datasets of the same examination object differing with regard to the image quality, 2) a first resultant image dataset, 3) a second resultant image dataset, and 4) wherein the first resultant image dataset has at least one of a lower noise level than the second resultant image dataset, a lower spatial resolution than the second resultant image dataset, or a lower image sharpness than the second resultant image dataset.
However, Hwang teaches the following: 1) two resultant image datasets of the same examination object differing with regard to the image quality, 2) a first resultant image dataset, and 3) a second resultant image dataset ([0013] “wherein application of the first strided convolutional filter to the patch of raw image data generates a first set of weighted data representative of the patch of raw image data, the first set of weighted data having a first resolution; and a second strided convolutional filter having a second array of weights, wherein application of the second strided convolutional filter generates a second set of weighted data representative of the patch of raw image data, the second set of weighted data having a second resolution that is of a lower resolution than the first resolution.” wherein a first resultant image dataset is the first set of weighted data having a first resolution and a second resultant image dataset is the second set of weighted data having a second resolution), and 4) wherein the first resultant image dataset has at least one of a lower noise level than the second resultant image dataset, a lower spatial resolution than the second resultant image dataset, or a lower image sharpness than the second resultant image dataset ([0013], cited above; wherein Hwang teaches a second resultant image dataset that has a lower resolution than the first resultant image dataset, and it would have been obvious to one of ordinary skill in the art to instead have a first resultant image dataset that has a lower resolution than the second resultant image dataset).
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Saget by obtaining two resultant image datasets from the same anatomical object that differ in image quality, as suggested by Hwang. For example, Hwang suggests that, depending on the application of the output image data, the raw image data can generate first and second sets of weighted data with first and second resolutions, respectively, in accordance with the convolutional filter applied. The motivation would be to have image datasets of the same object that can be processed at different qualities or resolutions to undergo various image processing methods. Therefore, it would have been obvious to combine Saget with Hwang to obtain the method specified in claim 1.
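For illustration only, the following minimal sketch shows how two strided convolutional filters applied to the same patch of raw image data yield two weighted datasets of differing resolution, consistent with the mechanism quoted from Hwang [0013]. The sketch is not drawn from Hwang's disclosure or from the application of record; the helper function strided_conv, the filter sizes, the strides, and the random weights are assumptions chosen for the example.

```python
import numpy as np

def strided_conv(image, weights, stride):
    """Apply one 2-D filter with the given stride (valid padding).

    A larger stride yields a lower-resolution weighted output.
    """
    k = weights.shape[0]
    out_h = (image.shape[0] - k) // stride + 1
    out_w = (image.shape[1] - k) // stride + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            window = image[i * stride:i * stride + k,
                           j * stride:j * stride + k]
            out[i, j] = np.sum(window * weights)
    return out

rng = np.random.default_rng(0)
raw = rng.random((64, 64))                # stand-in for a patch of raw image data

first_weights = rng.normal(size=(3, 3))   # a "first array of weights"
second_weights = rng.normal(size=(3, 3))  # a "second array of weights"

# Two weighted datasets derived from the same raw patch:
first = strided_conv(raw, first_weights, stride=1)    # first resolution
second = strided_conv(raw, second_weights, stride=2)  # second, lower resolution

print(first.shape, second.shape)  # (62, 62) (31, 31)
```

As written, the sketch prints (62, 62) and (31, 31), i.e., two weighted representations of the same patch of raw image data at two different resolutions, mirroring the first/second resolution distinction in the quoted passage.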
Saget in view of Hwang does not specifically disclose simultaneously creating two resultant image datasets.
However, Thibault teaches simultaneously creating two resultant image datasets ([0029] “Optionally the first set of data at the first kVp setting and the second set of data at the second kVp setting are acquired simultaneously using a system with two separate x-ray tubes.” wherein the raw datasets are acquired simultaneously).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the simultaneous acquisition taught by Thibault to the creation of the resultant image datasets of Saget in view of Hwang, to improve the efficiency and speed of the method disclosed by Saget in view of Hwang.
Regarding claim 2, Saget in view of Hwang and Thibault teaches the method as claimed in claim 1, wherein an image sensor raw dataset is recorded (Saget - [0016] “the data capture module can be configured to obtain data from a sensor”) and the first resultant image dataset and the second resultant image dataset are generated from a same recorded image sensor raw dataset using two different image processing methods (Hwang - [0013] “wherein application of the first strided convolutional filter to the patch of raw image data generates a first set of weighted data representative of the patch of raw image data, the first set of weighted data having a first resolution; and a second strided convolutional filter having a second array of weights, wherein application of the second strided convolutional filter generates a second set of weighted data representative of the patch of raw image data, the second set of weighted data having a second resolution that is of a lower resolution than the first resolution.” wherein a first resultant image dataset is the first set of weighted data having a first resolution and a second resultant image dataset is the second set of weighted data having a second resolution; wherein the two different image processing methods are the two different convolutional filters).
The motivation for combining Saget, Hwang, and Thibault is the same motivation as used for claim 1 above.
Regarding claim 3, Saget in view of Hwang and Thibault teaches the method as claimed in claim 1, wherein two image sensor raw datasets of the examination object are recorded (Saget - [0016] “The data capture module can be configured to obtain data from a sensor. The sensor can be in electronic communication with an augmented or mixed reality grid or trackable”; [0085] “the system components include an input of a series of x-ray or fluoroscopic images of a selected surgical site” wherein image sensor raw data is data from a sensor and the examination object is a selected surgical site) in a same examination situation with different recording parameters, a first image processing method generates the first resultant image dataset from a first image sensor raw dataset and a second image processing method generates the second resultant image dataset from a second image sensor raw dataset (Hwang - [0013] “wherein application of the first strided convolutional filter to the patch of raw image data generates a first set of weighted data representative of the patch of raw image data, the first set of weighted data having a first resolution; and a second strided convolutional filter having a second array of weights, wherein application of the second strided convolutional filter generates a second set of weighted data representative of the patch of raw image data, the second set of weighted data having a second resolution that is of a lower resolution than the first resolution.” wherein a first resultant image dataset is the first set of weighted data having a first resolution and a second resultant image dataset is the second set of weighted data having a second resolution).
The motivation for combining Saget, Hwang, and Thibault is the same motivation as used for claim 1 above.
Regarding claim 7, Saget in view of Hwang and Thibault teaches the method as claimed in claim 1, wherein the medical imaging system (Saget - [0010] “a computer-implemented method, performed by a micro-processor configured by a readable program code to perform the method to correct for distortion of an anatomical image captured from an X-ray imaging system” wherein a medical imaging system is an X ray imaging system) includes an angiography X-ray system, a fluoroscopy X-ray system, a computed tomography system, a magnetic resonance tomography system or an ultrasonic system (Saget - [0092] “Preoperative images can include multiple imaging modalities such as X-ray, fluoroscopy, ultrasound, computed tomography, terahertz imaging, or magnetic resonance imaging and can include imagery of the nonoperative, or contralateral, side of a patient's anatomy”).
The motivation for combining Saget, Hwang, and Thibault is the same motivation as used for claim 1 above.
Regarding claim 8, Saget in view of Hwang and Thibault teaches a medical imaging system configured to perform the method of claim 1, the medical imaging system comprising:
a control unit configured to control the medical imaging system (Saget - [0078] “The computing platform 100 includes a plurality of software modules 103 to receive and process medical image data,” wherein a control unit is the computing platform) (Saget - [0010] “a computer-implemented method, performed by a micro-processor configured by a readable program code to perform the method to correct for distortion of an anatomical image captured from an X-ray imaging system” wherein a medical imaging system is an X ray imaging system);
an image sensor configured to record the image sensor raw data of the examination object (Saget - [0016] “The data capture module can be configured to obtain data from a sensor. The sensor can be in electronic communication with an augmented or mixed reality grid or trackable”; [0085] “the system components include an input of a series of x-ray or fluoroscopic images of a selected surgical site” wherein image sensor raw data is data from a sensor and the examination object is a selected surgical site);
a processing unit configured to process the image sensor raw data to the first resultant image dataset and the second resultant image dataset (Saget - [0079] “For example, the subject matter described herein can be implemented in software executed by at least one processor 101.”) (Hwang - [0013] “wherein application of the first strided convolutional filter to the patch of raw image data generates a first set of weighted data representative of the patch of raw image data, the first set of weighted data having a first resolution; and a second strided convolutional filter having a second array of weights, wherein application of the second strided convolutional filter generates a second set of weighted data representative of the patch of raw image data, the second set of weighted data having a second resolution that is of a lower resolution than the first resolution.” wherein a first resultant image dataset is the first set of weighted data having a first resolution and a second resultant image dataset is the second set of weighted data having a second resolution);
the display unit configured to output the first resultant image dataset (Saget - [0081] “The electronic display device 150 provides a displayed composite image and graphical user interface 151. The graphical user interface 151 is configured to: allow manipulation of a grid template by a user 155, such as a surgeon, physician assistant, surgical scrub nurse, imaging assistant, and support personnel.” wherein a display unit is the electronic display device) (Hwang - [0013] “wherein application of the first strided convolutional filter to the patch of raw image data generates a first set of weighted data representative of the patch of raw image data, the first set of weighted data having a first resolution” wherein a first resultant image dataset is the first set of weighted data having a first resolution); and
a unit configured to perform the further processing (Saget - [0091] “Module 5 is made of an image quality scoring algorithm to assess the quality of an acquired medical image for its intended use. The image quality scoring algorithm is an image processing algorithm that is based on machine learning or deep learning from a good and bad medical image training dataset for a specific application.” wherein further processing by a machine is the image quality scoring algorithm based on machine or deep learning).
The motivation for combining Saget, Hwang, and Thibault is the same motivation as used for claim 1 above.
Regarding claim 9, Saget in view of Hwang and Thibault teaches the medical imaging system as claimed in claim 8, wherein the control unit is configured to control at least some functions of the medical imaging system automatically (Saget - [0078] “The computing platform 100 includes a plurality of software modules 103 to receive and process medical image data,” wherein a control unit is the computing platform) (Saget - [0080] “the computing platform 100 can be configured to perform one or more aspects associated with automated intraoperative surgical guidance in medical images”).
The motivation for combining Saget, Hwang, and Thibault is the same motivation as used for claim 1 above.
Regarding claim 10, Saget in view of Hwang and Thibault teaches a complete medical system including an imaging system configured to perform the method of claim 1 and a robot system configured to navigate an instrument in a hollow organ of a patient and image-monitored by the imaging system, the complete medical system comprising:
a control unit configured to control the imaging system (Saget - [0078] “The computing platform 100 includes a plurality of software modules 103 to receive and process medical image data,” wherein a control unit is the computing platform) (Saget - [0010] “a computer-implemented method, performed by a micro-processor configured by a readable program code to perform the method to correct for distortion of an anatomical image captured from an X-ray imaging system” wherein a medical imaging system is an X ray imaging system);
an image sensor configured to record the image sensor raw data (Saget - [0016] “The data capture module can be configured to obtain data from a sensor. The sensor can be in electronic communication with an augmented or mixed reality grid or trackable”; [0085] “the system components include an input of a series of x-ray or fluoroscopic images of a selected surgical site” wherein image sensor raw data is data from a sensor and the examination object is a selected surgical site);
a processing unit configured to process the image sensor raw data to the first resultant image dataset and the second resultant image dataset (Saget - [0079] “For example, the subject matter described herein can be implemented in software executed by at least one processor 101.”) (Hwang - [0013] “wherein application of the first strided convolutional filter to the patch of raw image data generates a first set of weighted data representative of the patch of raw image data, the first set of weighted data having a first resolution; and a second strided convolutional filter having a second array of weights, wherein application of the second strided convolutional filter generates a second set of weighted data representative of the patch of raw image data, the second set of weighted data having a second resolution that is of a lower resolution than the first resolution.” wherein a first resultant image dataset is the first set of weighted data having a first resolution and a second resultant image dataset is the second set of weighted data having a second resolution);
a robot control system configured to control a robot-assisted drive system (Saget - [0083] “The computing platform 100 is configured to synchronize with surgical facilitator 160 such as a robot or a haptic feedback device to provide the same predictive guidance as described throughout as an enabler for robotic surgery.” wherein a robot control system is a surgical facilitator);
an input unit configured to receive an input (Saget - [0081] “The dynamic surgical guidance system 1 includes an input of a series of x-ray or fluoroscopic images of a selected surgical site”);
the display unit configured to output the first resultant image dataset (Saget - [0081] “The electronic display device 150 provides a displayed composite image and graphical user interface 151. The graphical user interface 151 is configured to: allow manipulation of a grid template by a user 155, such as a surgeon, physician assistant, surgical scrub nurse, imaging assistant, and support personnel.” wherein a display unit is the electronic display device) (Hwang - [0013] “wherein application of the first strided convolutional filter to the patch of raw image data generates a first set of weighted data representative of the patch of raw image data, the first set of weighted data having a first resolution” wherein a first resultant image dataset is the first set of weighted data having a first resolution); and
a unit configured to perform the further processing using a machine learning algorithm, wherein the further processed data is used for controlling the navigation (Saget - [0091] “Module 5 is made of an image quality scoring algorithm to assess the quality of an acquired medical image for its intended use. The image quality scoring algorithm is an image processing algorithm that is based on machine learning or deep learning from a good and bad medical image training dataset for a specific application.” wherein further processing by a machine is the image quality scoring algorithm based on machine or deep learning) (Saget - [0084] “The computing platform 100 is configured to synchronize with an intelligence guided trackable capable of creating augmented grids or avatars of implants, instruments or anatomy 170 to provide the same predictive guidance as described throughout as an enabler for intelligence guided artificial reality trackable navigation.”).
The motivation for combining Saget, Hwang, and Thibault is the same motivation as used for claim 1 above.
Regarding claim 11, the claim recites limitations similar to those of claim 1, but in the form of a method performed by a complete medical system. Therefore, claim 11 is rejected under similar rationale and reasoning (see the analysis for claim 1 above).
Regarding claim 12, the claim recites limitations similar to those of claim 2, but in the form of a method performed by a complete medical system. Therefore, claim 12 is rejected under similar rationale and reasoning (see the analysis for claim 2 above).
Regarding claim 13, the claim recites limitations similar to those of claim 3, but in the form of a method performed by a complete medical system. Therefore, claim 13 is rejected under similar rationale and reasoning (see the analysis for claim 3 above).
Claims 4-6 and 14-20 are rejected under 35 U.S.C. 103 as being unpatentable over Saget et al., US 20190122330 A1 (hereinafter “Saget”), in view of Hwang et al., US 20190108618 A1 (hereinafter “Hwang”), and Thibault et al., US 20130343624 A1 (hereinafter “Thibault”), and further in view of Cao et al., US 10945688 B2 (hereinafter “Cao”).
Regarding claim 4, Saget in view of Hwang and Thibault teaches the method as claimed in claim 1, wherein the medical imaging system (Saget - [0010] “a computer-implemented method, performed by a micro-processor configured by a readable program code to perform the method to correct for distortion of an anatomical image captured from an X-ray imaging system” wherein a medical imaging system is an X ray imaging system) includes an X-ray device (Saget - [0010] “anatomical image captured from an X-ray imaging system”).
Saget in view of Hwang and Thibault does not specifically disclose the image detector includes an X-ray detector.
However, Cao teaches the image detector includes an X-ray detector ([Col.2, lines 8-9] “Disclosed herein is an X-ray imaging system suitable for detecting x-ray, comprising: a first X-ray detector”).
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Saget in view of Hwang and Thibault to include an X-ray detector, as disclosed by Cao. The motivation would be that X-ray detectors are well known for their imaging properties, and an X-ray detector can be used to measure the properties of the X-ray device disclosed by Saget for the purpose of image analysis. Therefore, it would have been obvious to combine Saget in view of Hwang and Thibault with Cao to obtain the method specified in claim 4.
Regarding claim 5, Saget in view of Hwang and Thibault teaches the method as claimed in claim 3. Cao further teaches wherein the recording parameters are based on at least one of,
an X-ray voltage ([Col.2, lines 47-50] “According to an embodiment, the first X-ray detector comprises: an X-ray absorption layer comprising an electrode; a first voltage comparator configured to compare a voltage of the electrode to a first threshold”),
an X-ray current,
a pulse width of the X-ray pulse of an X-ray source,
a filter setting of an X-ray collimator,
a zoom setting of an image system, or
a recording frequency.
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Saget in view of Hwang and Thibault to include recording parameters based on metrics such as the X-ray voltage disclosed by Cao. The motivation would be to use the recording parameters to measure the intensity and behavior of the X-ray signals. Cao teaches that measuring the X-ray intensity is important because, in the instance that the intensity is too low or too high, there is a chance of missing an incident X-ray photon. Therefore, it would have been obvious to combine Saget in view of Hwang and Thibault with Cao to obtain the method specified in claim 5.
Regarding claim 6, Saget in view of Hwang and Thibault teaches the method as claimed in claim 1, wherein the image quality of at least one of the first resultant image dataset or the second resultant image dataset (Saget - [0091] “Module 5 is made of an image quality scoring algorithm to assess the quality of an acquired medical image for its intended use. The image quality scoring algorithm is an image processing algorithm that is based on machine learning or deep learning from a good and bad medical image training dataset for a specific application.”) (Hwang - [0013] “wherein application of the first strided convolutional filter to the patch of raw image data generates a first set of weighted data representative of the patch of raw image data, the first set of weighted data having a first resolution; and a second strided convolutional filter having a second array of weights, wherein application of the second strided convolutional filter generates a second set of weighted data representative of the patch of raw image data, the second set of weighted data having a second resolution that is of a lower resolution than the first resolution.” wherein a first resultant image dataset is the first set of weighted data having a first resolution and a second resultant image dataset is the second set of weighted data having a second resolution) is determined using at least one metric.
Saget in view of Hwang and Thibault does not specifically disclose that the at least one metric is a signal-to-noise ratio, a contrast dynamic, an image sharpness, a spatial resolution, or an edge sharpness.
However, Cao teaches the image quality of a first or second resultant image dataset is determined by at least one of a signal-to-noise ratio ([Col.8, lines 37-42] “The capacitor may be in the feedback path of an amplifier. The amplifier configured as such is called a capacitive transimpedance amplifier (CTIA). CTIA has high dynamic range by keeping the amplifier from saturating and improves the signal-to-noise ratio by limiting the bandwidth in the signal path.”), a contrast dynamic, an image sharpness, a spatial resolution, or an edge sharpness.
The motivation for combining Saget in view of Hwang and Thibault with Cao is the same motivation as used for claim 5 above.
Regarding claim 14, the claim recites similar limitations to claim 4 and is therefore rejected for similar rationale and reasoning (see the analysis for claim 4 above).
Regarding claim 15, the claim recites similar limitations to claim 4 and is therefore rejected for similar rationale and reasoning (see the analysis for claim 4 above).
Regarding claim 16, the claim recites similar limitations to claim 5 and is therefore rejected for similar rationale and reasoning (see the analysis for claim 5 above).
Regarding claim 17, the claim recites similar limitations to claim 6 and is therefore rejected for similar rationale and reasoning (see the analysis for claim 6 above).
Regarding claim 18, the claim recites similar limitations to claim 6 and is therefore rejected for similar rationale and reasoning (see the analysis for claim 6 above).
Regarding claim 19, the claim recites similar limitations to claim 7 and is therefore rejected for similar rationale and reasoning (see the analysis for claim 7 above).
Regarding claim 20, the claim recites similar limitations to claim 7 and is therefore rejected for similar rationale and reasoning (see the analysis for claim 7 above).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AMANDA PEARSON whose telephone number is (703) 756-5786. The examiner can normally be reached Monday - Friday, 9:00 - 5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Emily Terrell, can be reached on (571) 270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AMANDA H PEARSON/Examiner, Art Unit 2666