DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-20 are pending.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-12 and 15-19 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Regarding claims 1, 8 and 15, the following analysis is applied to determine the subject matter eligibility of the claims under 35 U.S.C. § 101.
Step 1: The Statutory Categories
Claims 1, 8, and 15 recite a "processor," a "system," and a "method," which fall under the statutory categories of a machine (the processor and the system) and a process (the method).
Step 2A: The Judicial Exceptions
Prong 1: do the claims recite an exception?
Claims 1, 8, and 15 are directed to the abstract idea of a mathematical concept and/or a mental process. Specifically, the claims recite using "one or more neural networks" to perform mathematical data manipulation, namely extracting, denoising, and combining texture and pixel data to generate an upsampled image. These limitations represent mathematical algorithms and the abstract processing of information.
Prong 2: is the exception integrated into a practical application?
The claims do not integrate the abstract idea into a practical application. The claims recite generating an upsampled image using the abstract idea, but do so at a high level of functional generality ("based, at least in part, on: denoising... and combining"). The recitation of generic hardware ("a processor," "one or more circuits," "a system") simply instructs the practitioner to apply the abstract mathematical idea on a generic computer. It does not provide a specific, technical improvement to the functioning of the computer itself or an otherwise eligible, specific technological process.
Step 2B: The Inventive Concept
Do the claims amount to "significantly more" than the exception?
The additional elements in the claim(s), whether considered individually or as an ordered combination, do not amount to significantly more than the abstract idea. The generic processors, circuits, and systems merely provide a conventional technological environment to execute the neural network's mathematical data processing. This amounts to no more than well-understood, routine, and conventional computer functions in the field.
Conclusion: Claims 1, 8, and 15 are directed to an abstract idea and lack an inventive concept. Claims 1, 8, and 15 are therefore rejected as ineligible subject matter under 35 U.S.C. § 101.
Regarding dependent claims 2-7, 9-12 and 16-19: the limitations in these dependent claims have been examined in the same manner as the independent claims above. Claims 2-7, 9-12 and 16-19 are likewise found to be ineligible subject matter under 35 U.S.C. § 101:
Claims 2, 9 and 16: Ineligible. Merely identifies an additional data input (a noisy version); insignificant extra-solution activity.
Claims 3, 10 and 17: Ineligible. Merely specifies the source of the data; mere data gathering.
Claims 4, 11 and 18: Ineligible. Simply defines the data types (high vs. low resolution); acts as a generic field-of-use limitation.
Claims 5, 12 and 19: Ineligible. Adds another generic, functional recitation of a "neural network"; still abstract mathematical data processing.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the enablement requirement. The claims contain subject matter which was not described in the specification in such a way as to enable one skilled in the art to which it pertains, or with which it is most nearly connected, to make and/or use the invention.
Claims 1, 8 and 15 recite the limitation “denoising texture data extracted from the one or more images ...”. No support for this limitation is found in the specification or the drawings. As shown in Fig. 4 and described in paragraph [0099], input image data 302 is separated by separator 410 into two parts: the top part is noise-free texture data, and the bottom part is the noisy pixel data to be denoised. There is no denoising of the texture data because it is already noise-free.
Claims 2-7, 9-14 and 16-20 are rejected under 35 U.S.C. 112(a) for the same reason as given in their respective base claims.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more Claim(s) particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more Claim(s) particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA, the applicant) regards as the invention.
Claims 1, 8 and 15 recite the limitation “the separately denoised pixel data”. There is insufficient antecedent basis for this limitation in the claims.
Claims 2-7, 9-14 and 16-20 are rejected under 35 U.S.C. 112(b) for the same reason as given in their respective base claims.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Villar-Corrales et al. (Deep Learning, 2021) in view of Bako et al. (Kernel-Predicting Convolutional Networks for Denoising, 2017).
Regarding claims 1, 8 and 15, Villar-Corrales teaches
(Claim interpretation: note the 112(a) and 112(b) rejections to the claims; claim 1 is interpreted based on the embodiment as shown in Fig. 4 and described in paragraph [0099] of the specification of the instant application. That is, in this office action, claim 1 is interpreted as “1. A processor comprising: one or more circuits to cause one or more neural networks to generate an upsampled version of one or more images based, at least in part, on: denoising pixel data extracted from the one or more images separately from texture data extracted from the one or more images; and combining the denoised pixel data with the separately extracted texture data.”)(Villar-Corrales, Fig. 1; "Recent advances in deep learning have led to significant improvements in single image super-resolution (SR) research.", [abstract]; "In this work, we employ the Wide Activation Super-Resolution (WDSR) model [13] as a building block to investigate architectures for joint denoising and super-resolution.", p2:c1; "We evaluate our architectures with the Wide Activation Super-Resolution model (WDSR) [13] on noisy versions of the images from the DIV2K [21] dataset.", p2:c1; neural networks are used (specifically the WDSR deep learning model) to generate a super-resolved, i.e., upsampled, version of input images; the WDSR architecture, as described and depicted in Fig. 1, is a neural network that produces high-resolution output images from lower-resolution inputs)
denoising texture data extracted from the one or more images separately from pixel data from the one or more images; and
(Villar-Corrales, Fig. 1 (bottom, in-network design); "In-network (abbreviated in-net) is shown on the bottom of Fig. 1. Here, the denoiser integrates into the residual connection. Hence, the SR model can jointly combine low-level features from the denoised input and high-level features from the noisy input.", p2:c1; "The second architecture, “in-network”, reconstructs the HR image by combining low-level features extracted from the denoised input and high-level features extracted from the noisy input.", p2:c1; "The “in-network” architecture reconstructs the high-resolution image by combining low-level features from the denoiser with high-level features from the noisy input.", p4:c2; as shown in Fig. 1 (bottom, in-network), there are two distinct and separate paths: (a) the lower path, the residual connection, integrates the denoiser, processing pixel-level (low-level) data from the denoised input image; and (b) the upper path, the residual body, processes the noisy (undenoised) input to extract high-level features, which correspond to texture data, without applying denoising; these two paths operate separately and in parallel: pixel data (low-level features) is denoised via the residual connection path, while texture data (high-level features) is extracted from the original noisy input via the main residual body path)
combining the denoised texture data with the separately denoised pixel data.
(Villar-Corrales, Fig. 1 (bottom, in-network design); "the SR model can jointly combine low-level features from the denoised input and high-level features from the noisy input.", p2:c1; "“in-network” combines both tasks at feature level", [abstract]; "The “in-network” architecture reconstructs the high-resolution image by combining low-level features from the denoiser with high-level features from the noisy input.", p4:c2; combining the outputs of the two separate paths described above; the in-network design, as depicted in Fig. 1 (bottom), merges the low-level features (denoised pixel data from the residual connection/lower path) with the high-level features (texture data from the residual body/upper path) to reconstruct the high-resolution output image)
Villar-Corrales does not expressly disclose but Bako teaches:
a processor comprising: one or more circuits to …
(Bako, “an Nvidia Quadro M6000 GPU”, p.97:7; GPUs may be used to implement the in-network architecture of the denoising SR image system of Villar-Corrales for high-speed operation)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the hardware circuit teachings (GPUs) of Bako into the architecture of Villar-Corrales in order to provide a GPU implementation of the neural network that facilitates high-speed, production-quality execution of the dual-path denoising and upsampling process.
In the alternative, regarding claims 1, 8 and 15, Villar-Corrales also teaches
(Villar-Corrales, Fig. 1, in the “the pre-network architectural design (center)”, a low-resolution image (“low-resolution images”, sec. 3, p2:c1) is inputted to a denoiser to generate a denoised image of the input LR image; the denoised LR image is further inputted to an original WDSR architecture (same as the architecture of Fig. 1, top); “The original WDSR architecture is shown on top of Figure 1. It consists of two paths. The main path is on top, consisting of a user-defined number B of residual blocks. Each block consists of two convolutional layers followed by weight normalization [22] and ReLU activation. The lower path is a residual connection. It provides low-level features from the input to the output, which is critically important for SR tasks [14]. Both paths contain a pixel-shuffle layer [23], which performs the upsampling for image super-resolution”, sec. 2, p2:c1; in the pre-network architecture (Fig. 1, center), the top path generates denoised high-level features (=> “denoised pixel data from the one or more images”) extracted from the denoised image because of the denoiser at the front; the lower path generates denoised “low-level features” (=> “denoised version of texture data extracted from the one or more images”); the high-level features from the top path and the low-level features from the lower path are combined to produce an upsampled SR image)
Villar-Corrales does not expressly disclose but Bako teaches:
a processor comprising: one or more circuits to use one or more neural networks…
(Bako, “an Nvidia Quadro M6000 GPU”, p.97:7; GPUs may be used to implement the denoising SR image system of Villar-Corrales for high-speed operation)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Bako into the system or method of Villar-Corrales in order to use GPUs to implement the denoising SR image system of Villar-Corrales for high-speed operation. As discussed below regarding claims 6, 13 and 20, incorporating Bako’s teachings into the system or method of Villar-Corrales can also be aimed at denoising the diffuse and specular components separately using a two-CNN framework. By isolating these components, each CNN can be tailored to the specific noise characteristics of the diffuse (low-frequency, smooth variations) and specular (high-frequency, sharp highlights) regions, leading to more precise and efficient denoising. This separation reduces the complexity of the learning task, enabling each network to specialize and achieve better performance. Additionally, it avoids potential artifacts from cross-component interference, resulting in cleaner outputs and improved preservation of texture and detail in both components.
Regarding claims 2, 9 and 16, the combination of Villar-Corrales and Bako teaches all the limitations of their respective base claims.
The combination further teaches the processor of claim 1, wherein the one or more circuits are to generate the upsampled version of one or more images further based on a noisy version of the one or more images.
(Villar-Corrales, see comments on claim 1; Fig. 1, center, the input image is a noisy LR image)
Regarding claims 3, 10 and 17, the combination of Villar-Corrales and Bako teaches all the limitations of their respective base claims.
The combination further teaches the processor of claim 1, wherein the texture data is extracted from a noisy version of the one or more images.
(Villar-Corrales, Fig. 1, center, “The lower path is a residual connection. It provides low-level features from the input to the output, which is critically important for SR tasks”, p2:c1; the low-level features, such as textures, in the input noisy image are denoised separately in the lower path, ultimately generating a denoised image at the output of the network)
Regarding claims 4, 11 and 18, the combination of Villar-Corrales and Bako teaches all the limitations of their respective base claims.
The combination further teaches the processor of claim 1, wherein the upsampled version of the one or more images is a high-resolution image and the one or more images are one or more low-resolution images.
(Villar-Corrales, see comments on claim 1; Fig. 1, center, the input image is an LR image and the output image is an upsampled SR image)
Regarding claims 5, 12 and 19, the combination of Villar-Corrales and Bako teaches all the limitations of their respective base claims.
The combination further teaches the processor of claim 1, wherein the one or more circuits are to generate the denoised pixel data of the one or more images using a neural network to denoise a noisy version of the one or more images.
(Villar-Corrales, Fig. 1, center, the top path generates denoised high-level features using residual blocks, sec. 2, p2:c1)
Regarding claims 6, 13 and 20, the combination of Villar-Corrales and Bako teaches all the limitations of their respective base claims.
The combination further teaches the processor of claim 1, wherein the one or more circuits are to generate the denoised pixel data of the one or more images by separately denoising a diffuse light version of a noisy one or more images and a specular light version of the noisy one or more images.
(Bako, Fig. 2; “two-network framework for denoising diffuse and specular components of the image separately”, p.97:2-c2; this approach for denoising diffuse light and specular light may be applied to the denoiser of Villar-Corrales (Fig. 1, center) to achieve optimum results for mitigating noise in an image caused by diffuse light and specular light)
Regarding claims 7 and 14, the combination of Villar-Corrales and Bako teaches all the limitations of their respective base claims.
The combination further teaches the processor of claim 1,
wherein the one or more circuits are to generate the denoised pixel data of the one or more images by separately denoising a diffuse light version of a noisy one or more images and a specular light version of the noisy one or more images,
wherein the one or more neural networks are to use different neural networks to denoise the diffuse light version and the specular light version.
(Bako, Villar-Corrales, see comments on claim 6)
Response to Arguments
Applicant's arguments filed on 2/10/2026 with respect to one or more of the pending claims have been fully considered but they are not persuasive.
Regarding claims 1, 8 and 15, Applicant, in the remarks, argues that the combination of the cited references fails to teach the newly amended limitations in the claims.
The Examiner respectfully disagrees. The Office action has been updated to address Applicant’s arguments. See the updated rejections above for details.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JIANXUN YANG whose telephone number is (571)272-9874. The examiner can normally be reached on MON-FRI: 8AM-5PM Pacific Time.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amandeep Saini can be reached on (571)272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JIANXUN YANG/
Primary Examiner, Art Unit 2662
3/27/2026