DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
Claims 1-15 are pending in this application.
Interpretation under 35 U.S.C. §112(f)
Applicant’s arguments, see page 2, line 2 through line 16, and the amendment to the claims, filed January 13, 2026, with respect to the interpretation of claims 11-15 under 35 U.S.C. §112(f), have been fully considered and are persuasive. The interpretation of claims 11-15 under 35 U.S.C. §112(f) has been withdrawn.
Rejection under 35 U.S.C. §112(a)
Applicant’s arguments, see page 1, line 20 through line 25, and the amendment to the claims, filed January 13, 2026, with respect to the rejection of claims 11-16 under 35 U.S.C. §112(a), have been fully considered and are persuasive. The rejection of claims 11-16 under 35 U.S.C. §112(a) has been withdrawn.
Rejection under 35 U.S.C. §112(a)/(b), 112(f) related
Applicant’s arguments, see page 1, line 20 through line 25, and the amendment to the claims, filed January 13, 2026, with respect to the rejection of claims 11-16 under 35 U.S.C. §112(a) and (b), as premised on the interpretation under §112(f), have been fully considered and are persuasive. The rejection of claims 11-16 under 35 U.S.C. §112(a) and (b) has been withdrawn.
Rejection under 35 U.S.C. §112(b)
Applicant’s arguments, see page 1, line 20 through line 25, and the amendment to the claims, filed January 13, 2026, with respect to the rejection of claims 1-16 under 35 U.S.C. §112(b), have been fully considered and are persuasive. The rejection of claims 1-16 under 35 U.S.C. §112(b) has been withdrawn.
Rejection under 35 U.S.C. §102
Applicant’s arguments, see page 2, line 2 through line 16, and the amendment to the claims, filed January 13, 2026, with respect to the rejection of claims 1-9 and 11-16 under 35 U.S.C. §102(a)(1) as being anticipated by Park (U.S. Patent Application Publication No. US 2019/0156524 A1), have been fully considered and are persuasive. The rejection of claims 1-9 and 11-16 under 35 U.S.C. §102(a)(1) as being anticipated by Park (U.S. Patent Application Publication No. US 2019/0156524 A1) has been withdrawn.
Rejection under 35 U.S.C. §103
Applicant’s arguments, see page 2, line 2 through line 16, and the amendment to the claims, filed January 13, 2026, with respect to the rejection of claim 10 under 35 U.S.C. §103 as being unpatentable over Park (U.S. Patent Application Publication No. US 2019/0156524 A1) in view of Zaharchuk (U.S. Patent Application Publication No. US 2018/0286037 A1), have been fully considered and are persuasive. The rejection of claim 10 under 35 U.S.C. §103 as being unpatentable over Park (U.S. Patent Application Publication No. US 2019/0156524 A1) in view of Zaharchuk (U.S. Patent Application Publication No. US 2018/0286037 A1) has been withdrawn.
Claim rejections - 35 U.S.C. §112(b)
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
Claims 1-15 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention.
Claims 1-5, 7, 9, and 12-15 recite the limitation “CTs.” This term is not defined in the claims and is therefore vague and indefinite.
Claims 3 and 4 are word-for-word identical.
Claims 6, 8, and 10 are rejected as depending from an indefinite base claim.
Objection to New Matter Added to Specification
The amendment filed January 13, 2026, is objected to under 35 U.S.C. 132(a) because it introduces new matter into the disclosure. 35 U.S.C. 132(a) states that no amendment shall introduce new matter into the disclosure of the invention. The added material which is not supported by the original disclosure is as follows:
Claim 12 recites “a non-transitory computer readable medium having instructions stored thereon, wherein execution of the instructions by a processor causes the processor to …”. This newly added material is not supported by the original disclosure.
Claim 15 recites “a processor and a memory having instructions stored thereon, wherein execution of the instructions by the processor causes the processor to …”. This newly added material is not supported by the original disclosure.
Applicant is required to cancel the new matter in the reply to this Office Action.
Claim rejections - 35 U.S.C. §112(a)
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
Claims 5, 12, 14 and 15 are rejected under 35 U.S.C. 112(a) as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, at the time the application was filed, had possession of the claimed invention.
As noted above, the disclosure does not provide adequate support for the newly amended recitations. Claim 12 recites “a non-transitory computer readable medium having instructions stored thereon, wherein execution of the instructions by a processor causes the processor to …”, and Claim 15 recites “a system comprising a processor and a memory having instructions stored thereon, wherein execution of the instructions by the processor causes the processor to …”. In particular, the specification does not mention a system comprising a processor and a memory, nor does it mention a computer readable medium having instructions stored thereon whose execution by a processor causes the processor to carry out the recited functions. The specification does not demonstrate that applicant made an invention that achieves the claimed functions, because the invention is not described with sufficient detail for one of ordinary skill in the art to reasonably conclude that the inventor had possession of the claimed invention at the time the application was filed.
New Grounds of Rejection
Applicant’s arguments with respect to claims 1-15 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 U.S.C. §102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. §102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-4, 6, 7, 9 and 12-15 are rejected under 35 U.S.C. §102(a)(1) as being anticipated by Kosomaa et al. (U.S. Patent Application Publication No. US 2022/0189100 A1) (hereafter referred to as “Kosomaa”).
With regard to claim 1, Kosomaa describes training a convolutional neural network with low noise training data to generate a trained convolutional neural network configured to generate denoised Computed Tomography images of a subject from raw Computed Tomography images of the same subject, the low noise training data comprising multiple noisy Computed Tomography images of the same subject that are visually different (see Figures 1A and 1B, and refer for example to paragraphs [0034] through [0036], which discuss obtaining the computed tomography images; paragraphs [0038] and [0039], which discuss the neural networks; and paragraphs [0061] through [0064], which discuss that the training image data input to the neural network are low noise image data), wherein the multiple noisy Computed Tomography images of the same subject are processed by employing multiple timepoint Computed Tomography images having similarity between repeated scans for a given patient, and wherein the multiple noisy Computed Tomography images are intensity normalized and spatially co-registered at each timepoint among the multiple noisy Computed Tomography images (refer for example to paragraphs [0036], [0073], [0152], [0155], [0157] and [0159], which discuss that the multiple computed tomography images are timepoint computed tomography images of the same subject over repeated scans for a given patient and are intensity normalized and spatially co-registered).
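For illustration only, and not as a characterization of Kosomaa or of the claimed invention, preprocessing of the kind recited in claim 1 (intensity normalization and spatial co-registration of repeated scans of the same patient) can be sketched in Python roughly as follows; the z-score normalization, the translation-only phase-correlation registration, and all function names are assumptions of this sketch rather than features taken from either document.
# Illustrative sketch only; not drawn from Kosomaa or from the application.
# Assumes translation-only misalignment between timepoints; clinical CT
# co-registration is typically a rigid or affine 3-D registration.
import numpy as np
from scipy import ndimage
from skimage.registration import phase_cross_correlation

def normalize_intensity(scan):
    # Put repeated acquisitions of the same patient on a common intensity scale.
    return (scan - scan.mean()) / (scan.std() + 1e-8)

def coregister(reference, moving):
    # Estimate and apply the shift that best aligns `moving` with `reference`.
    shift, _, _ = phase_cross_correlation(reference, moving)
    return ndimage.shift(moving, shift, order=1)

def preprocess_timepoints(scans):
    # `scans`: list of arrays, one per timepoint of a single patient.
    scans = [normalize_intensity(s) for s in scans]
    reference = scans[0]
    return [reference] + [coregister(reference, s) for s in scans[1:]]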
As to claim 2, Kosomaa describes wherein different timepoint Computed Tomography images are free from distortions and unique due to variations in date and time of acquisition, scanner manufacturer or model, imaging protocols, radiation dose, head position, or imaging plane (refer for example to paragraphs [0036] and [0061]).
In regard to claim 3, Kosomaa describes wherein different timepoint Computed Tomography images are retrieved and determined to have normal variations of image noise, imaging planes, and head positions (refer to paragraphs [0036] and [0061]).
With regard to claim 4, Kosomaa describes wherein different timepoint Computed Tomography images are retrieved and determined to have normal variations of image noise, imaging planes, and head positions (refer to paragraphs [0036] and [0061]).
As to claim 6, Kosomaa describes wherein the convolutional neural network has an encoder-decoder architecture (refer for example to paragraph [0143]).
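As a point of reference only, an encoder-decoder convolutional network of the general kind recited in claim 6 can be sketched in PyTorch as below; the layer counts, channel widths, and class name are assumptions of this sketch and are not the architecture of Kosomaa or of the claimed invention.
# Minimal encoder-decoder CNN sketch (illustrative only; not the network of
# Kosomaa or of the claimed invention).
import torch.nn as nn

class DenoisingEncoderDecoder(nn.Module):
    def __init__(self, channels=1, width=32):
        super().__init__()
        # Encoder: two strided convolutions halve the spatial resolution twice.
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, width, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(width, width * 2, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: two transposed convolutions restore the original resolution.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(width * 2, width, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(width, channels, 4, stride=2, padding=1),
        )

    def forward(self, x):
        # Input and output are CT slices of the same spatial size.
        return self.decoder(self.encoder(x))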
With regard to claim 7, Kosomaa describes wherein the trained convolutional neural network is configured to reduce image noise in brain tissue between 2.5 times to 3.5 times, increase contrast of detection between about 2.5 times to 3.5 times, or reduce image noise by about sqrt(n) where n is the number of noisy Computed Tomography images of the respective given person used for the training (refer for example to paragraph [0043]).
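For general context only, and not as a statement about Kosomaa's disclosure, the sqrt(n) figure in claim 7 corresponds to the standard statistical result that averaging n images whose noise realizations are independent with standard deviation sigma reduces the noise standard deviation by a factor of the square root of n:
\operatorname{std}\!\left(\frac{1}{n}\sum_{i=1}^{n}\varepsilon_i\right)=\frac{\sigma}{\sqrt{n}},\qquad \varepsilon_i \text{ i.i.d. with } \operatorname{Var}(\varepsilon_i)=\sigma^2 .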
In regard to claim 9, Kosomaa describes wherein the multiple timepoint Computed Tomography images had a normalized image attenuation distribution normalized by shifting an attenuation mode to a fixed value to eliminate attenuation shifts due to scanner (refer to paragraphs [0036] and [0061]).
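For illustration only, and without attributing this particular procedure to Kosomaa or to the application, a histogram-mode shift of the kind recited in claim 9 can be sketched in Python as follows; the target value of 35 HU, the bin count, and the function name are placeholders assumed for this sketch.
# Illustrative sketch only; not drawn from Kosomaa or from the application.
import numpy as np

def shift_attenuation_mode(hu_image, target_mode_hu=35.0, bins=256):
    # Histogram the attenuation (HU) values and locate the mode.
    counts, edges = np.histogram(hu_image, bins=bins)
    k = int(np.argmax(counts))
    mode_hu = 0.5 * (edges[k] + edges[k + 1])
    # Shift every voxel so the mode lands at a fixed value, removing
    # scanner-dependent attenuation offsets between repeated scans.
    return hu_image + (target_mode_hu - mode_hu)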
In regard to claim 12, Kosomaa describes a non-transitory computer readable medium having instructions stored thereon, wherein execution of the instructions by a processor causes the processor (see Figures 4 and 5, and refer for example to paragraphs [0120], [0134], [0144] and [0145]) to receive a Computed Tomography image of a patient; execute a trained convolutional neural network using the received Computed Tomography image as an input to generate a denoised Computed Tomography image (see Figures 1A and 1B, and refer for example to paragraphs [0034] through [0036], which discuss obtaining the computed tomography images; paragraphs [0038] and [0039], which discuss the neural networks; and paragraphs [0061] through [0064], which discuss that the training image data input to the neural network are low noise image data), wherein the multiple noisy Computed Tomography images of the same patient are processed by employing multiple timepoint Computed Tomography images having similarity between repeated scans for a given patient, and wherein the multiple noisy Computed Tomography images are intensity normalized and spatially co-registered at each timepoint among the multiple noisy Computed Tomography images (refer for example to paragraphs [0036], [0073], [0152], [0155], [0157] and [0159], which discuss that the multiple computed tomography images are timepoint computed tomography images of the same subject over repeated scans for a given patient and are intensity normalized and spatially co-registered).
With regard to claim 13, Kosomaa describes wherein the different timepoint Computed Tomography images are free from distortions and unique due to variations in date and time of acquisition, scanner manufacturer or model, imaging protocols, radiation dose, head position, or imaging plane, or wherein the different timepoint Computed Tomography images are retrieved and determined from an existing database of Computed Tomography scans, and wherein the different timepoint Computed Tomography images are retrieved and determined to have normal variations of image noise, imaging planes, and head positions (refer to paragraphs [0036] and [0061]).
As to claim 14, Kosomaa describes wherein the different timepoint Computed Tomography images are processed to remove skull and extracranial bright area to be used for the training, wherein the convolutional neural network has an encoder-decoder architecture, or wherein the trained convolutional neural network is a rotation-reflection equivariant U-Net with group convolutional neural network (refer for example to paragraph [0143], which describes an encoder-decoder architecture).
In regard to claim 15, Kosomaa describes a processor and a memory having instructions stored thereon, wherein execution of the instructions by the processor causes the processor (see Figures 4 and 5, and refer for example to paragraphs [0120], [0134], [0144] and [0145]) to receive a Computed Tomography image of a patient; execute a trained convolutional neural network using the received Computed Tomography image as an input to generate a denoised Computed Tomography image (see Figures 1A and 1B, and refer for example to paragraphs [0034] through [0036], which discuss obtaining the computed tomography images; paragraphs [0038] and [0039], which discuss the neural networks; and paragraphs [0061] through [0064], which discuss that the training image data input to the neural network are low noise image data), wherein the multiple noisy Computed Tomography images of the same patient are processed by employing multiple timepoint Computed Tomography images having similarity between repeated scans for a given patient, and wherein the multiple noisy Computed Tomography images are intensity normalized and spatially co-registered at each timepoint among the multiple noisy Computed Tomography images (refer for example to paragraphs [0036], [0073], [0152], [0155], [0157] and [0159], which discuss that the multiple computed tomography images are timepoint computed tomography images of the same subject over repeated scans for a given patient and are intensity normalized and spatially co-registered).
Claims 1-4, 6-9 and 12-15 are rejected under 35 U.S.C. §102(a)(1) as being anticipated by Sandfort et al. (U.S. Patent Application Publication No. US 2022/0036517 A1) (hereafter referred to as “Sandfort”).
With regard to claim 1, Sandfort describes training a convolutional neural network with low noise training data to generate a trained convolutional neural network configured to generate denoised Computed Tomography images of a subject from raw Computed Tomography images of the same subject, the low noise training data comprising multiple noisy Computed Tomography images of the same subject that are visually different (see Figures 7 and 8, and refer for example to paragraphs [0020], [0022], [0064] and [0066]), wherein the multiple noisy Computed Tomography images of the same subject are processed by employing multiple timepoint Computed Tomography images having similarity between repeated scans for a given patient, and wherein the multiple noisy Computed Tomography images are intensity normalized and spatially co-registered at each timepoint among the multiple noisy Computed Tomography images (see Figure 7 and refer for example to paragraphs [0020], [0022] and [0025]).
As to claim 2, Sandfort describes wherein different timepoint Computed Tomography images are free from distortions and unique due to variations in date and time of acquisition, scanner manufacturer or model, imaging protocols, radiation dose, head position, or imaging plane (refer for example to paragraphs [0023] and [0025]).
In regard to claim 3, Sandfort describes wherein different timepoint Computed Tomography images are retrieved and determined to have normal variations of image noise, imaging planes, and head positions (refer to paragraphs [0023] and [0025]).
With regard to claim 4, Sandfort describes wherein different timepoint Computed Tomography images are retrieved and determined to have normal variations of image noise, imaging planes, and head positions (refer to paragraphs [0023] and [0025]).
As to claim 6, Sandfort describes wherein the convolutional neural network has an encoder-decoder architecture (as clearly illustrated in Figure 8 and refer for example to paragraphs [0028] and [0029]).
With regard to claim 7, Sandfort describes wherein the trained convolutional neural network is configured to reduce image noise in brain tissue between 2.5 times to 3.5 times, increase contrast of detection between about 2.5 times to 3.5 times, or reduce image noise by about sqrt(n) where n is the number of noisy Computed Tomography images of the respective given person used for the training (refer for example to paragraphs [0020], [0022], [0064] and [0066]).
As to claim 8, Sandfort describes wherein the trained convolutional neural network is a rotation-reflection equivariant U-Net with group convolutional neural network (refer for example to paragraphs [0023], [0027] and [0028]).
In regard to claim 9, Sandfort describes wherein the multiple timepoint Computed Tomography images had a normalized image attenuation distribution normalized by shifting an attenuation mode to a fixed value to eliminate attenuation shifts due to scanner (refer for example to paragraphs [0023] and [0025]).
In regard to claim 12, Sandfort describes a non-transitory computer readable medium having instructions stored thereon, wherein execution of the instructions by a processor causes the processor (refer for example to paragraph [0029], which describes the NVIDIA processor) to receive a Computed Tomography image of a patient; execute a trained convolutional neural network using the received Computed Tomography image as an input to generate a denoised Computed Tomography image (see Figures 7 and 8, and refer for example to paragraphs [0020], [0022], [0064] and [0066]), wherein the multiple noisy Computed Tomography images of the same patient are processed by employing multiple timepoint Computed Tomography images having similarity between repeated scans for a given patient, and wherein the multiple noisy Computed Tomography images are intensity normalized and spatially co-registered at each timepoint among the multiple noisy Computed Tomography images (see Figure 7 and refer for example to paragraphs [0020], [0022] and [0025]).
With regard to claim 13, Sandfort describes wherein the different timepoint Computed Tomography images are free from distortions and unique due to variations in date and time of acquisition, scanner manufacturer or model, imaging protocols, radiation dose, head position, or imaging plane, or wherein the different timepoint Computed Tomography images are retrieved and determined from an existing database of Computed Tomography scans, and wherein the different timepoint Computed Tomography images are retrieved and determined to have normal variations of image noise, imaging planes, and head positions (refer for example to paragraphs [0022], [0023] and [0025]).
As to claim 14, Sandfort describes wherein the different timepoint Computed Tomography images are processed to remove skull and extracranial bright area to be used for the training, wherein the convolutional neural network has an encoder-decoder architecture, or wherein the trained convolutional neural network is a rotation-reflection equivariant U-Net with group convolutional neural network (as illustrated in Figure 8; refer for example to paragraphs [0028] and [0029]).
In regard to claim 15, Sandfort describes a processor and a memory having instructions stored thereon, wherein execution of the instructions by the processor causes the processor (refer for example to paragraph [0029], which describes the NVIDIA processor) to receive a Computed Tomography image of a patient; execute a trained convolutional neural network using the received Computed Tomography image as an input to generate a denoised Computed Tomography image (see Figures 7 and 8, and refer for example to paragraphs [0020], [0022], [0064] and [0066]), wherein the multiple noisy Computed Tomography images of the same patient are processed by employing multiple timepoint Computed Tomography images having similarity between repeated scans for a given patient, and wherein the multiple noisy Computed Tomography images are intensity normalized and spatially co-registered at each timepoint among the multiple noisy Computed Tomography images (see Figure 7 and refer for example to paragraphs [0020], [0022] and [0025]).
Allowable Subject Matter
Claims 5 and 10-11 would be allowable if rewritten to overcome the rejections under 35 U.S.C. 112(a) and 112(b) set forth in this Office action and to include all of the limitations of the base claim and any intervening claims.
Relevant Prior Art
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Mentl, Chen, Chan, Duffy, Wong and Huber all disclose systems similar to applicant’s claimed invention.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jose L. Couso, whose telephone number is (571) 272-7388. The examiner can normally be reached Monday through Friday from 5:30 am to 1:30 pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matthew Bella, can be reached on 571-272-7778. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.
Information regarding the status of an application may be obtained from the Patent Center information webpage on the USPTO website. For more information about the Patent Center, see https://www.uspto.gov/patents/apply/patent-center. Should you have questions about access to the Patent Center, contact the Patent Electronic Business Center (EBC) at 571-272-4100 or via email at ebc@uspto.gov.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
/JOSE L COUSO/Primary Examiner, Art Unit 2667
January 16, 2026