Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on October 8, 2024, is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 5, 7-8, 21, and 22 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Xia et al. (Chinese Patent CN 110852948 A).
Regarding claim 1, Xia et al. discloses an image super-resolution method, comprising: determining a super-resolution requirement comprising a requirement for converting an image with a first resolution into an image with a second resolution greater than the first resolution; and determining a preset image super-resolution network that meets the super-resolution requirement (Xia et al. paragraphs [0011]-[0015]); inputting a to-be-performed-super-resolution image with the first resolution into the preset image super-resolution network, to obtain a result image with the second resolution output by the preset image super-resolution network; wherein the preset image super-resolution network is configured for extracting one or more image features based on a preset channel attention module (Xia et al. paragraphs [0016]-[0022]); the preset channel attention module is configured for: determining one or more initial weights for one or more channel images according to an input feature map, taking one or more initial weights that meet a preset weight condition as one or more result weights, and outputting a result feature map according to the determined result weights (Xia et al. paragraphs [0047]-[0050], where the calculation of the weight condition is the residual error learning and the mapping relationship function shown, F_g = H_g(F_(g-1))).
[Image: media_image1.png (greyscale)]
[Image: media_image2.png (greyscale)]
[Image: media_image3.png (greyscale)]
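The claimed channel-attention step mapped above can be reduced to a short sketch: initial per-channel weights are tested against a preset weight condition, and only those that meet it survive as result weights before the channel images are re-weighted. The function name, the absolute-value threshold rule, and all numbers below are illustrative assumptions, not taken from the claims or from Xia.

```python
# Illustrative sketch only. The threshold rule standing in for the
# "preset weight condition" is an assumption made for this example.

def channel_attention(channels, initial_weights, threshold=0.5):
    """channels: list of 2-D channel images (lists of lists of floats).
    initial_weights: one weight per channel (in a real network these
    would come from pooling plus small fully connected layers)."""
    # Keep only initial weights that meet the condition; zero the rest.
    result_weights = [w if abs(w) >= threshold else 0.0
                      for w in initial_weights]
    # Result feature map: each channel image scaled by its result weight.
    result = [[[w * px for px in row] for row in ch]
              for ch, w in zip(channels, result_weights)]
    return result_weights, result

feature_map = [[[1.0, 2.0], [3.0, 4.0]],   # channel 0
               [[5.0, 6.0], [7.0, 8.0]]]   # channel 1
weights, out = channel_attention(feature_map, [0.9, 0.2])
# channel 1's initial weight (0.2) fails the condition, so it is zeroed
```

With these made-up numbers, only channel 0 contributes to the result feature map; channel 1 is suppressed entirely.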
Regarding claim 5, Xia et al. discloses the method of claim 1, wherein determining the one or more initial weights for the one or more channel images according to the input feature map comprises at least one of: extracting features from the input feature map, to obtain the initial weights for the channel images; performing a pooling process on the input feature map to obtain a pooling result, and extracting features from the obtained pooling result to obtain the initial weights for the channel images; or extracting features from the input feature map, performing a pooling process on the extracted features to obtain a pooling result, and extracting features from the obtained pooling result to obtain the initial weights for the channel images (Xia et al. paragraphs [0055]-[0057]).
[Image: media_image4.png (greyscale)]
Regarding claim 7, Xia et al. discloses the method of claim 1, wherein determining the preset image super-resolution network that meets the super-resolution requirement comprises: determining an original image super-resolution network, wherein the original image super-resolution network is configured for extracting image features from an input image with the first resolution and outputting a result image with the second resolution according to the extracted image features; and performing a preset process on the original image super-resolution network to obtain the preset image super-resolution network that meets the super-resolution requirement (Xia et al. paragraphs [0011]-[0015], where the super-resolution network includes a feature extraction module for extracting original image features and applies a preset super-resolution network to output a high-resolution image).
[Image: media_image5.png (greyscale)]
Regarding claim 8, Xia et al. discloses the method of claim 7, wherein the original image super-resolution network is specifically configured for extracting the image features based on an original channel attention module; wherein the original channel attention module is configured for determining the initial weights for the channel images according to the input feature map, and outputting an original feature map according to the determined initial weights; and wherein the preset process comprises: converting one or more original channel attention modules in the original image super-resolution network into preset channel attention modules by configuring a preset weight condition for each of the original channel attention modules according to the super-resolution requirement, such that a preset image super-resolution network obtained by the converting meets the super-resolution requirement (Xia et al. paragraphs [0016]-[0023]).
[Image: media_image6.png (greyscale)]
Regarding claim 21, Xia et al. discloses an electronic device, comprising: at least one processor; and a memory connected communicatively to the at least one processor, wherein the memory stores instructions that are executable by the at least one processor, and the instructions, when executed by the at least one processor, enable the at least one processor to: determine a super-resolution requirement comprising a requirement for converting an image with a first resolution into an image with a second resolution greater than the first resolution; and determine a preset image super-resolution network that meets the super-resolution requirement; input a to-be-performed-super-resolution image with the first resolution into the preset image super-resolution network, to obtain a result image with the second resolution output by the preset image super-resolution network; wherein the preset image super-resolution network is configured for extracting one or more image features based on a preset channel attention module; the preset channel attention module is configured for: determining one or more initial weights for one or more channel images according to an input feature map, taking one or more initial weights that meet a preset weight condition as one or more result weights, and outputting a result feature map according to the determined result weights (Xia et al. paragraphs [0025]-[0027]).
[Image: media_image7.png (greyscale)]
Regarding claim 22, Xia et al. discloses a computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements operations of: determining a super-resolution requirement comprising a requirement for converting an image with a first resolution into an image with a second resolution greater than the first resolution; and determining a preset image super-resolution network that meets the super-resolution requirement; inputting a to-be-performed-super-resolution image with the first resolution into the preset image super-resolution network, to obtain a result image with the second resolution output by the preset image super-resolution network; wherein the preset image super-resolution network is configured for extracting one or more image features based on a preset channel attention module; the preset channel attention module is configured for: determining one or more initial weights for one or more channel images according to an input feature map, taking one or more initial weights that meet a preset weight condition as one or more result weights, and outputting a result feature map according to the determined result weights (Xia et al. paragraph [0024]).
[Image: media_image8.png (greyscale)]
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Xia et al. (Chinese Patent CN 110852948 A) in view of Ou et al. (Chinese Patent CN 115620109 A).
Regarding claim 2, Xia et al. discloses the method of claim 1. However, Xia et al. fails to disclose wherein a minimum absolute value of the determined result weights is greater than or equal to a maximum absolute value of the initial weights that do not meet the preset weight condition.
Ou et al. teaches wherein a minimum absolute value of the determined result weights is greater than or equal to a maximum absolute value of the initial weights that do not meet the preset weight condition (Ou et al. paragraph [0080], where the weights of each connection in the neural network are assessed and optimized in the process of generating the super-resolution architecture).
[Image: media_image9.png (greyscale)]
The threshold set for the result weights is directly responsible for the resolution of the final high-resolution image. This is an important aspect of the solution to the problem outlined by the claimed invention. Thus, it would have been obvious to one skilled in the art prior to the effective filing date of the claimed invention to have combined the teachings of Xia et al. and the teachings of Ou et al. so that the weights have specific values set.
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Xia et al. (Chinese Patent CN 110852948 A) in view of Navarrete Michelini et al. (WO 2021/073493 A1).
Regarding claim 3, Xia et al. discloses the method of claim 1. However, Xia et al. fails to disclose wherein an initial weight that meets the preset weight condition comprises at least one of: an absolute value of the initial weight being greater than a preset weight lower limit; a sequence number of the initial weight being less than a preset sequence number where the initial weights are sorted in a sequence of absolute values from large to small; or the sequence number of the initial weight being in a top N% where the initial weights are sorted in the sequence of absolute values from large to small.
Navarrete Michelini et al. teaches wherein an initial weight that meets the preset weight condition comprises at least one of: an absolute value of the initial weight being greater than a preset weight lower limit; a sequence number of the initial weight being less than a preset sequence number where the initial weights are sorted in a sequence of absolute values from large to small; or the sequence number of the initial weight being in a top N% where the initial weights are sorted in the sequence of absolute values from large to small (Navarrete Michelini et al. paragraphs [0018]-[0019], Figures 7A-7B, where the initial feature images of N stages with resolutions sorted from high to low are a direct result of the weights applied).
The ordering of the weights from high to low in the sorting process enhances image quality and prepares the data for further analysis. This is an essential component of the claimed invention. Thus, it would have been obvious to one skilled in the art prior to the effective filing date of the claimed invention to have combined the teachings of Xia et al. and Navarrete Michelini et al. so that this series of techniques and algorithms is included in the method of Xia et al.
[Image: media_image10.png (greyscale)]
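The three alternative weight conditions recited in claim 3 (absolute value above a lower limit, sequence number below a preset value after sorting by absolute value, and sequence number in a top N%) can be expressed as simple predicates over a list of initial weights. The helper names and the sample values below are assumptions for illustration only.

```python
# Illustrative sketch of the three alternative "preset weight conditions"
# of claim 3. All function names and sample numbers are made up.

def meets_lower_limit(weights, lower_limit):
    # Condition 1: |w| greater than a preset weight lower limit.
    return [abs(w) > lower_limit for w in weights]

def rank_desc_by_abs(weights):
    # Sequence number of each weight when sorted by |w| from large to small.
    order = sorted(range(len(weights)), key=lambda i: -abs(weights[i]))
    ranks = [0] * len(weights)
    for seq, i in enumerate(order):
        ranks[i] = seq
    return ranks

def meets_sequence_number(weights, preset_seq):
    # Condition 2: sequence number less than a preset sequence number.
    return [r < preset_seq for r in rank_desc_by_abs(weights)]

def meets_top_n_percent(weights, n_percent):
    # Condition 3: sequence number in the top N%.
    cutoff = len(weights) * n_percent / 100.0
    return [r < cutoff for r in rank_desc_by_abs(weights)]

w = [0.1, -0.8, 0.5, 0.05]
# With lower_limit=0.4, preset_seq=2, or N%=50, the same two weights
# (-0.8 and 0.5) happen to meet the condition in this toy example.
```

Note that the three conditions are genuinely different predicates; the sample values were chosen so they coincide here only to keep the example small.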
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Xia et al. (Chinese Patent CN 110852948 A) in view of Tao et al. (Chinese Patent CN 111461973 A).
Regarding claim 4, Xia et al. discloses the method of claim 1. However, Xia et al. fails to disclose wherein outputting the result feature map according to the determined result weights comprises: obtaining corresponding single-channel feature maps by computing products of the determined result weights with corresponding channel images respectively; and obtaining the result feature map by stacking the obtained single-channel feature maps.
Tao et al. teaches wherein outputting the result feature map according to the determined result weights comprises: obtaining corresponding single-channel feature maps by computing products of the determined result weights with corresponding channel images respectively; and obtaining the result feature map by stacking the obtained single-channel feature maps (Tao et al. paragraphs [0010]-[0015]).
[Image: media_image11.png (greyscale)]
The feature maps play an essential role in the solution of the problem because they provide a structured representation of the image data. Thus, it would have been obvious to one skilled in the art prior to the effective filing date of the claimed invention to have combined the teachings of Xia et al. with the teachings of Tao et al. so that the feature maps are accurately represented.
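The output step addressed by this rejection can be sketched in a few lines: each result weight is multiplied with its corresponding channel image to form a single-channel feature map, and the maps are stacked along the channel dimension. The function name and the numbers below are illustrative assumptions, not taken from Tao.

```python
# Illustrative sketch of claim 4's output step: per-channel products,
# then stacking (here, stacking is simply collecting the scaled
# channels into a list along the channel dimension).

def weighted_stack(channels, result_weights):
    single_channel_maps = [
        [[w * px for px in row] for row in ch]
        for ch, w in zip(channels, result_weights)
    ]
    return single_channel_maps  # the stacked result feature map

stacked = weighted_stack([[[1.0, 2.0]], [[3.0, 4.0]]], [2.0, 0.5])
# → [[[2.0, 4.0]], [[1.5, 2.0]]]
```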
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Xia et al. (Chinese Patent CN 110852948 A) in view of Lu, Enmin, and Xiaoxiao Hu, "Image super-resolution via channel attention and spatial attention," Applied Intelligence 52 (2021): 2260-2268.
Regarding claim 6, Xia et al. discloses the method of claim 5. However, Xia et al. fails to disclose wherein the pooling process comprises at least one of: performing a global maximum pooling process; performing a global average pooling process; or performing the global maximum pooling process and the global average pooling process respectively to obtain two pooling features, and then stacking the obtained two pooling features.
Lu and Hu teach wherein the pooling process comprises at least one of: performing a global maximum pooling process; performing a global average pooling process; or performing the global maximum pooling process and the global average pooling process respectively to obtain two pooling features, and then stacking the obtained two pooling features (Lu and Hu, section 3.2, Attention module, where the spatial attention module is a kind of comprehensive information module that utilizes global spatial information).
The pooling process ensures that all of the essential information is captured in the high-resolution image result. Thus, it would have been obvious to one skilled in the art prior to the effective filing date of the claimed invention to have combined the teachings of Xia et al. with those of Lu and Hu to incorporate the pooling features.
[Image: media_image12.png (greyscale)]
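The pooling alternatives recited in claim 6 can be sketched per channel as follows. This is an illustrative toy over a single 2-D channel, not the networks of the cited references; all names and numbers are assumptions.

```python
# Illustrative sketch of the claim 6 pooling alternatives: global max
# pooling, global average pooling, or both stacked into a two-value
# descriptor per channel.

def global_max_pool(channel):
    # One scalar per channel: the maximum over all spatial positions.
    return max(px for row in channel for px in row)

def global_avg_pool(channel):
    # One scalar per channel: the mean over all spatial positions.
    flat = [px for row in channel for px in row]
    return sum(flat) / len(flat)

def stacked_pool(channel):
    # Both pooling features, stacked.
    return [global_max_pool(channel), global_avg_pool(channel)]

ch = [[1.0, 2.0], [3.0, 6.0]]
# max = 6.0, average = 3.0, stacked = [6.0, 3.0]
```

In a real channel-attention module these pooled descriptors would be fed through further layers to produce the initial weights.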
Claims 9, 10, and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Xia et al. (Chinese Patent CN 110852948 A) in view of Mo et al. (Chinese Patent CN 113298239 B).
Regarding claim 9, Xia et al. discloses the method of claim 8. However, Xia et al. fails to disclose wherein the super-resolution requirement further comprises a computing power limit requirement; wherein determining the original image super-resolution network comprises: determining an original channel module number upper limit according to the computing power limit requirement in the super-resolution requirement, and determining an original image super-resolution network that meets the super-resolution requirement; wherein a number of the original channel attention modules comprised in the original image super-resolution network is less than or equal to the original channel module number upper limit.
Mo et al. teaches wherein the super-resolution requirement further comprises a computing power limit requirement; wherein determining the original image super-resolution network comprises: determining an original channel module number upper limit according to the computing power limit requirement in the super-resolution requirement, and determining an original image super-resolution network that meets the super-resolution requirement; wherein a number of the original channel attention modules comprised in the original image super-resolution network is less than or equal to the original channel module number upper limit (Mo et al. paragraphs [0105]-[0112], Abstract, where the solution is a method that can search the image super-resolution network under different computing power limits, and the size of the network is scalable).
[Image: media_image13.png (greyscale)]
Large-scale high-resolution image processing requires a significant amount of computing resources. By setting limits on the computational power, resources are efficiently allocated and the neural network becomes more efficient. Thus, it would have been obvious to one skilled in the art prior to the effective filing date of the claimed invention to have combined the teachings of Xia et al. with the teachings of Mo et al. so that computational resources are not wasted on extra processing.
Regarding claim 10, Xia et al. discloses the method of claim 8. However, Xia et al. fails to disclose wherein the super-resolution requirement further comprises a computing power limit requirement; wherein converting the one or more original channel attention modules in the original image super-resolution network into the preset channel attention modules by configuring the preset weight condition for each of the original channel attention modules, comprises: determining a number M of to-be-converted channel modules according to the computing power limit requirement in the super-resolution requirement; and converting M original channel attention modules in the original image super-resolution network into M preset channel attention modules by configuring the preset weight condition for each of the M original channel attention modules.
Mo et al. teaches wherein the super-resolution requirement further comprises a computing power limit requirement; wherein converting the one or more original channel attention modules in the original image super-resolution network into the preset channel attention modules by configuring the preset weight condition for each of the original channel attention modules, comprises: determining a number M of to-be-converted channel modules according to the computing power limit requirement in the super-resolution requirement; and converting M original channel attention modules in the original image super-resolution network into M preset channel attention modules by configuring the preset weight condition for each of the M original channel attention modules (Mo et al. paragraphs [0088]-[0092]).
[Image: media_image14.png (greyscale)]
Large-scale high-resolution image processing requires a significant amount of computing resources. By setting limits on the computational power, resources are efficiently allocated and the neural network becomes more efficient. Thus, it would have been obvious to one skilled in the art prior to the effective filing date of the claimed invention to have combined the teachings of Xia et al. with the teachings of Mo et al. so that computational effort is reduced to the essentials.
Regarding claim 12, Xia et al. discloses the method of claim 1. However, Xia et al. fails to disclose wherein the super-resolution requirement further comprises a computing power limit requirement; wherein determining the preset image super-resolution network that meets the super-resolution requirement comprises: determining a preset channel module number upper limit according to the computing power limit requirement in the super-resolution requirement, and determining a preset image super-resolution network that meets the super-resolution requirement; wherein a number of preset channel attention modules comprised in the preset image super-resolution network is less than or equal to the preset channel module number upper limit.
Mo et al. teaches wherein the super-resolution requirement further comprises a computing power limit requirement; wherein determining the preset image super-resolution network that meets the super-resolution requirement comprises: determining a preset channel module number upper limit according to the computing power limit requirement in the super-resolution requirement, and determining a preset image super-resolution network that meets the super-resolution requirement; wherein a number of preset channel attention modules comprised in the preset image super-resolution network is less than or equal to the preset channel module number upper limit (Mo et al. paragraphs [0105]-[0112], Abstract, where the solution is a method that can search the image super-resolution network under different computing power limits, and the size of the network is scalable).
Large-scale high-resolution image processing requires a significant amount of computing resources. By setting limits on the computational power, resources are efficiently allocated and the neural network becomes more efficient. Thus, it would have been obvious to one skilled in the art prior to the effective filing date of the claimed invention to have combined the teachings of Xia et al. with the teachings of Mo et al. so that computational effort is reduced to the essentials.
Claims 11, 13, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Xia et al. (Chinese Patent CN 110852948 A) in view of Bae et al. (Korean Patent KR 20220085280 A).
Regarding claim 11, Xia et al. discloses the method of claim 8. However, Xia et al. fails to disclose wherein the super-resolution requirement further comprises a computing power limit requirement; wherein converting the one or more original channel attention modules in the original image super-resolution network into the preset channel attention modules by configuring the preset weight condition for each of the original channel attention modules, comprises: according to the computing power limit requirement in the super-resolution requirement, determining a number M of to-be-converted original channel attention modules and a channel image retention ratio for each of the to-be-converted original channel attention modules; and converting M original channel attention modules in the original image super-resolution network into M preset channel attention modules by configuring the preset weight condition for each of the M original channel attention modules according to a corresponding channel image retention ratio; wherein an initial weight that meets the preset weight condition comprises: a sequence number of the initial weight being in a top N% where the initial weights are sorted in a sequence of absolute values from large to small, wherein N% is the corresponding channel image retention ratio.
Bae et al. teaches wherein the super-resolution requirement further comprises a computing power limit requirement; wherein converting the one or more original channel attention modules in the original image super-resolution network into the preset channel attention modules by configuring the preset weight condition for each of the original channel attention modules, comprises: according to the computing power limit requirement in the super-resolution requirement, determining a number M of to-be-converted original channel attention modules and a channel image retention ratio for each of the to-be-converted original channel attention modules; and converting M original channel attention modules in the original image super-resolution network into M preset channel attention modules by configuring the preset weight condition for each of the M original channel attention modules according to a corresponding channel image retention ratio; wherein an initial weight that meets the preset weight condition comprises: a sequence number of the initial weight being in a top N% where the initial weights are sorted in a sequence of absolute values from large to small, wherein N% is the corresponding channel image retention ratio (Bae et al. Abstract, paragraphs [0025]-[0027], where pruning operations are done to reduce the computational power needed).
[Image: media_image15.png (greyscale)]
The pruning operations and the branches selected for the final output are important to the efficiency of the neural network considering the computational power needed to operate. Thus, it would have been obvious to one skilled in the art prior to the effective filing date of the claimed invention to have combined the teachings of Xia et al. and the teachings of Bae et al. so that the method of high-resolution image processing includes the elements shown by Bae et al.
Regarding claim 13, Xia et al. discloses the method of claim 1. However, Xia et al. fails to disclose wherein the super-resolution requirement further comprises a computing power limit requirement; wherein determining the preset image super-resolution network that meets the super-resolution requirement comprises: determining a channel image retention ratio for each of preset channel attention modules according to the computing power limit requirement in the super-resolution requirement, and determining a preset image super-resolution network that meets the super-resolution requirement; wherein, for each preset channel attention module in the preset image super-resolution network, an initial weight that meets the preset weight condition for the preset channel attention module comprises: a sequence number of the initial weight being in a top N% where the initial weights are sorted in a sequence from large to small, wherein N% is the corresponding channel image retention ratio.
Bae et al. teaches wherein the super-resolution requirement further comprises a computing power limit requirement; wherein determining the preset image super-resolution network that meets the super-resolution requirement comprises: determining a channel image retention ratio for each of preset channel attention modules according to the computing power limit requirement in the super-resolution requirement, and determining a preset image super-resolution network that meets the super-resolution requirement; wherein, for each preset channel attention module in the preset image super-resolution network, an initial weight that meets the preset weight condition for the preset channel attention module comprises: a sequence number of the initial weight being in a top N% where the initial weights are sorted in a sequence from large to small, wherein N% is the corresponding channel image retention ratio (Bae et al. Abstract, paragraphs [0025]-[0027], where pruning operations are done to reduce the computational power needed).
The channel retention ratio in image processing is crucial as it directly affects the quality and detail of the reconstructed image. Thus, it would have been obvious to one skilled in the art prior to the effective filing date of the claimed invention to have combined the teachings of Xia et al. with the teachings of Bae et al. so that the reconstructed image is clearer.
Regarding claim 14, Xia et al. discloses the method of claim 1, wherein the preset image super-resolution network comprises an original convolution layer; wherein the original convolution layer is configured for extracting convolution features from the input feature map. However, Xia et al. fails to disclose that this is done through at least two branches respectively to obtain at least two branch convolution feature maps, and outputting a sum of the obtained branch convolution feature maps.
Bae et al. teaches extracting convolution features from the input feature map through at least two branches respectively to obtain at least two branch convolution feature maps, and outputting a sum of the obtained branch convolution feature maps (Bae et al. paragraphs [0022]-[0023]).
[Image: media_image16.png (greyscale)]
The feature map branching shown by Bae et al. is crucial to the claimed invention because it better preserves the original image's spatial and spectral information. Thus, it would have been obvious to one skilled in the art prior to the effective filing date of the claimed invention to have combined the teachings of Xia et al. and Bae et al. so that the original image is clearer and better retained after reconstruction.
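The two-branch convolution structure recited in claim 14 can be sketched in one dimension: the same input is convolved in each branch, and the branch feature maps are summed element-wise. The kernels and input below are made-up numbers, not taken from Bae.

```python
# Illustrative 1-D toy of claim 14's branch structure: convolve the same
# input in two branches, then output the sum of the branch feature maps.

def conv1d(signal, kernel):
    # Valid (no-padding) 1-D correlation.
    k = len(kernel)
    return [sum(kernel[j] * signal[i + j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def two_branch_conv(signal, kernel_a, kernel_b):
    branch_a = conv1d(signal, kernel_a)
    branch_b = conv1d(signal, kernel_b)
    return [a + b for a, b in zip(branch_a, branch_b)]

x = [1.0, 2.0, 3.0, 4.0]
out = two_branch_conv(x, [1.0, 0.0], [0.0, 1.0])
# with these shift kernels, out[i] = x[i] + x[i+1]
```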
Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Xia et al. (Chinese Patent CN 110852948 A) in view of Bae et al. (Korean Patent KR 20220085280 A) as applied to claim 14 above, and further in view of Japanese Patent JP 6922005 B2.
Regarding claim 15, the combination of Xia et al. and Bae et al. discloses the method of claim 14. However, Xia et al. and Bae et al. fail to disclose wherein at least one branch in the original convolution layer comprises: a direction convolution layer configured for extracting inter-pixel gradient features in a preset direction.
JP 6922005 B2 teaches wherein at least one branch in the original convolution layer comprises: a direction convolution layer configured for extracting inter-pixel gradient features in a preset direction (JP 6922005 B2 paragraphs [0051]-[0052]).
[Image: media_image17.png (greyscale)]
The two-dimensional feature map that corresponds to the pixels of the original image is a key feature of the solution of the claimed invention, ensuring that the color is accurately represented. Thus, it would have been obvious to one skilled in the art prior to the effective filing date of the claimed invention to have combined the teachings of Xia et al. and JP 6922005 B2.
Claims 16 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Xia et al. (Chinese Patent CN 110852948 A) in view of Bae et al. (Korean Patent KR 20220085280 A) as applied to claim 14 above, and further in view of Li et al. (Chinese Patent CN 112070664 B).
Regarding claim 16, the combination of Xia et al. and Bae et al. discloses the method of claim 14, wherein the preset image super-resolution network comprises a preset convolution layer. However, Xia et al. and Bae et al. fail to disclose wherein the method further comprises: performing a merging process on a trained preset image super-resolution network to obtain the preset convolution layer; wherein the merging process comprises: obtaining a corresponding preset convolution layer by performing a merging operation on the at least two branches of the original convolution layer in the trained preset image super-resolution network; wherein the preset convolution layer is configured for performing a single convolution operation on the input feature map, and outputting a sum of at least two branch result feature maps in a corresponding original convolution layer.
Li et al. teaches wherein the method further comprises: performing a merging process on a trained preset image super-resolution network to obtain the preset convolution layer; wherein the merging process comprises: obtaining a corresponding preset convolution layer by performing a merging operation on the at least two branches of the original convolution layer in the trained preset image super-resolution network; wherein the preset convolution layer is configured for performing a single convolution operation on the input feature map, and outputting a sum of at least two branch result feature maps in a corresponding original convolution layer (Li et al. paragraph [0140], where the merging process is construed as iterative fusion).
[Reproduction of figure from Li et al.: media_image18.png, greyscale]
The merging or fusion of the various layers of the neural network is important to the claimed invention so that the key features of the original image are still present even after reconstruction. Thus, it would have been obvious to one skilled in the art prior to the effective filing date of the claimed invention to have combined the teachings of Xia et al., Bae et al., and Li et al.
Regarding claim 17, the combination of Xia et al. and Bae et al. discloses the method of claim 16. Li et al. further teaches wherein an input of at least one preset channel attention module comprises the output of the preset convolution layer.
[Reproduction of figure from Li et al.: media_image19.png, greyscale]
The merging or fusion of the various layers of the neural network is important to the claimed invention so that the key features of the original image are still present even after reconstruction by the neural network algorithm. Thus, it would have been obvious to one skilled in the art prior to the effective filing date of the claimed invention to have combined the teachings of Xia et al., Bae et al., and Li et al.
Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Xia et al. (Chinese Patent CN 110852948 A) in view of Venkatesh (Chinese Patent CN 114008664 A).
Regarding claim 18, Xia et al. discloses the method of claim 1. However, Xia et al. fails to disclose wherein the preset image super-resolution network is configured for: extracting image features from an input image with the first resolution and obtaining a first intermediate image with the second resolution according to the extracted image features; performing a preset computing operation on the input image with the first resolution, to obtain a second intermediate image with the second resolution; and outputting the result image with the second resolution according to a sum of the first intermediate image and the second intermediate image.
Venkatesh teaches wherein the preset image super-resolution network is configured for: extracting image features from an input image with the first resolution and obtaining a first intermediate image with the second resolution according to the extracted image features; performing a preset computing operation on the input image with the first resolution, to obtain a second intermediate image with the second resolution; and outputting the result image with the second resolution according to a sum of the first intermediate image and the second intermediate image (Venkatesh paragraph [0076]).
[Reproduction of figure from Venkatesh: media_image20.png, greyscale]
An intermediate image is important in high-resolution image processing because it facilitates the manipulation and analysis of the images; the intermediate image serves as a bridge between the original raw image and the final output. Thus, it would have been obvious to one skilled in the art prior to the effective filing date of the claimed invention to have combined the teachings of Xia et al. and Venkatesh to incorporate an intermediate image in the high-resolution image processing method.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JESSICA YIFANG LIN whose telephone number is (571)272-6435. The examiner can normally be reached M-F 7:00am-6:15pm, with an optional day off.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vu Le can be reached at 571-272-7332. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JESSICA YIFANG LIN/Examiner, Art Unit 2668 February 13, 2026
/VU LE/Supervisory Patent Examiner, Art Unit 2668