Prosecution Insights
Last updated: April 19, 2026
Application No. 17/493,661

SYSTEM AND METHOD OF CONVOLUTIONAL NEURAL NETWORK

Non-Final OA: §102, §103, §112
Filed: Oct 04, 2021
Examiner: RUSH, ERIC
Art Unit: 2677
Tech Center: 2600 (Communications)
Assignee: National Tsing Hua University
OA Round: 5 (Non-Final)
Grant Probability: 61% (Moderate)
Expected OA Rounds: 5-6
Estimated Time to Grant: 3y 5m
Grant Probability with Interview: 97%

Examiner Intelligence

Career Allow Rate: 61% (383 granted / 628 resolved; -1.0% vs Tech Center average)
Interview Lift: +36.2% for resolved cases with an interview
Typical Timeline: 3y 5m average prosecution; 32 applications currently pending
Career History: 660 total applications across all art units

Statute-Specific Performance

§101: 10.8% (-29.2% vs TC avg)
§103: 40.0% (+0.0% vs TC avg)
§102: 12.7% (-27.3% vs TC avg)
§112: 27.7% (-12.3% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 628 resolved cases

Office Action

Rejections: §102, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

This action is responsive to the request for continued examination (RCE) received 12 January 2026 and the amendments and remarks received 16 December 2025. Claims 1 - 17 and 21 - 23 are currently pending.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12 January 2026 has been entered.

Claim Objections

Claim 1 is objected to because of the following informalities: Line 21 of claim 1 recites, in part, “calculating the first values of pixels of a first intermediate image” which appears to contain inconsistent claim terminology and/or a minor informality. The Examiner suggests amending the claim to --calculating the first values of pixels of [[a]] the first intermediate image-- in order to maintain consistency with lines 11 - 13 of claim 1 and to improve the clarity and precision of the claim. Appropriate correction is required.

Claim 1 is objected to because of the following informalities: Lines 23 - 24 of claim 1 recite, in part, “calculating the second values of pixels of a second intermediate image” which appears to contain inconsistent claim terminology and/or a minor informality. The Examiner suggests amending the claim to --calculating the second values of pixels of [[a]] the second intermediate image-- in order to maintain consistency with lines 16 - 17 of claim 1 and to improve the clarity and precision of the claim. Appropriate correction is required.
Claim 7 is objected to because of the following informalities: Line 4 of claim 7 recites, in part, “generate parameters that associated with a plurality of scaled images” which appears to contain a grammatical error and/or a minor informality. The Examiner suggests amending the claim to --generate parameters associated with a plurality of scaled images--. Appropriate correction is required.

The objections to claim 15, due to minor informalities, are hereby withdrawn in view of the amendments and remarks received 16 December 2025.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

The rejections to claims 1 - 20 under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, are hereby withdrawn in view of the amendments and remarks received 16 December 2025.

Response to Arguments

Applicant’s arguments with respect to claims 7 - 14 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Applicant's arguments filed 16 December 2025 have been fully considered but they are not persuasive. On pages 14 - 16 of the remarks the Applicant’s Representative argues that Rossi et al. fail to teach “the limitations of ‘V3 = A2((X2 - U2) / √([Q2]^2 + E) ) + B2 …Y3 = A2((Y2 - U2) / √([Q2]^2 + E) ) + B2’”. The Applicant’s Representative argues that Rossi et al. fail to teach the aforementioned disputed claim limitations at least because Rossi et al.
do not explicitly teach “of lower and higher resolution CNNs sharing parameters” and that, for example, in Rossi et al., “lower and higher resolution CNNs may be performed with the same mean value and a different deviation” or “lower and higher resolution CNNs may be performed with the same mean value, the same deviation and, different affine parameters.” Therefore, the Applicant’s Representative argues that Rossi et al. fail to teach the aforementioned disputed claim limitations.

The Examiner respectfully disagrees. The Examiner asserts that Rossi et al. disclose the aforementioned disputed claim limitation(s), see at least section 24a of the Office Action mailed 07 October 2025 and figures 3A - 5, page 1 paragraphs 0007 - 0009, page 2 paragraph 0019, page 3 paragraphs 0021 - 0027, page 4 paragraphs 0032 - 0035 and page 5 paragraph 0039 - page 6 paragraph 0042 of Rossi et al. wherein they disclose that another “normalization technique that has been found to lead to particularly improved results in artistic style transfer is so-called ‘instance’ normalization. Instance normalization is similar to layer normalization, but it also calculates the aforementioned mean and variance statistics separately across each channel in each example. Instance normalization's usefulness with respect to artistic style transfer derives from the insight that the network should be agnostic to the contrast in the target image that is to be stylized. In fact, in some cases, one goal of the artistic style transfer processes is that the contrast of the stylized output image should actually be similar to the contrast of the source image, and thus the contrast information of the target image that is being stylized can (or should) be discarded to obtain the best stylized image results” [0023], that “a two-network solution may be employed, e.g., with each network (and any normalization factor computations) being executed on a suitable processing device.
Such embodiments may thus, for example, be able to maintain the quality benefits of using instance normalization for artistic style transfer operations (or any other image processing operations requiring such computations), while not experiencing the additional latency and memory costs typically associated with transferring information repeatedly between different processors in a given system or using processors that are not powerful enough to perform such operations on higher resolution images and/or in a real-time or near real-time setting” [0027], that while figure 2 “shows a single neural network for the application of the selected artistic style, according to some embodiments, more than one ‘version’ of style transfer neural network may be created for each artistic style. The different versions of the neural networks may, e.g., operate at different resolution levels, have different numbers of layers, different kinds of layers, different network architectures, and/or have different optimizations applied” [0032], that, in figure 3B, “a first processing device, PROC 1, may be used exclusively to operate on a lower resolution version 352 of the first target image. In some embodiments, the lower resolution version 352 of the first target image may be the result of a downscaling and/or sub-sampling of the high resolution first target image 302, e.g., an 8× or 16× downscaling. In the example of FIG. 3B, the first processing device may proceed to evaluate the various convolutional layers of the artistic style transfer network (e.g., as shown in Steps 354, 358, and 362), largely as described above with reference to FIG. 3A, but simply on a lower resolution version of the first target image. 
For the output of each convolutional layer, one or more sets of parameters (e.g., normalization factors and/or scaled or biased versions thereof) may be determined (e.g., as shown in Steps 356 and 360), and packaged as a parameter set to be transferred to the version of the neural network running on the second processing device, PROC 2. For example, the first parameter set 357 (‘PARAM. SET 1’) may be applied to the input of the first convolutional layer 380 of a higher resolution artistic style transfer network executing on the second processing device, PROC 2. Likewise, the second parameter set 361 (‘PARAM. SET 2’) may be applied to the input of the second convolutional layer 382 of the higher resolution artistic style transfer network executing on the second processing device, and so forth for each such layer in the network for which parameters are necessary or desired” [0035], that “one or more layers of the lower resolution network 406, e.g., convolution layer N (412), may generate one or more parameters, such as the aforementioned instance normalization factors” [0041], that “the lower resolution network 406 may be executed on one or more processing devices uniquely suited to determining the aforementioned sets of parameters, while the higher resolution network 414 may be executed on one or more processing devices that are better able to operate and evaluate convolutional layers on higher resolution images (though may not be as well-suited to determine the sets of parameters), thus resulting in a better quality stylized output image. 
According to some embodiments, any parameters (or scaled/biased versions of such parameters) determined by the lower resolution network 406 may be transferred through the connective portion of the network 418 to the higher resolution network 414 in a single transfer operation, so as to minimize the number of transfers of information between processing devices during the stylization of a single image frame” [0041] and the “output of the lower resolution network 406, i.e., after processing by each of convolutional layers 1..N in the network (as well as one or more additional optional low resolution convolutions following layer N, if needed), may also be output as its own low resolution stylized output image (426), if so desired. According to some embodiments utilizing a hybrid network architecture, such as the network 400 shown in FIG. 4, the output of the higher resolution network 414, i.e., after processing by convolutional layer N+2 (422), may result in a high resolution stylized output image (424)” [0042]. The Examiner asserts that, as shown herein above and in the cited portions, Rossi et al. 
disclose that their artistic style transfer process applies instance normalization at a plurality of layers of a convolutional neural network (CNN) to create a stylized output image of an input image, that an artistic style may be stored as a plurality of layers in a style transfer neural network, such as a CNN, that different “versions” of a style transfer neural network may be created for each artistic style, such as versions that operate at different resolution levels, that a lower resolution style transfer CNN may generate instance normalization factors for each convolutional layer and transfer the instance normalization factors to a corresponding convolutional layer of a higher resolution style transfer CNN, and that both of their lower and higher resolution style transfer CNNs may output their own stylized output image of an input image at lower and higher resolutions, respectively, see at least figure 4 elements 424 and 426 and page 6 paragraph 0042 of Rossi et al. The Examiner asserts that in order for the lower resolution style transfer CNN of Rossi et al. to also output its own low resolution stylized output image of the input image, the instance normalization factors it determines also need to be applied to its input image, the lower resolution version of the input image. Furthermore, although Rossi et al. disclose that one or more scaling and/or biasing operations may be applied to the normalization factors determined by the lower resolution style transfer CNN before they are applied at the higher resolution style transfer CNN, the Examiner asserts that such scaling and/or biasing operations are not expressly required by Rossi et al., i.e., the same normalization factors may be applied at both of their lower and higher resolution style transfer CNNs. Therefore, the Examiner asserts that Rossi et al. disclose the aforementioned disputed claim limitations. 
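The normalization arithmetic in dispute can be illustrated with a minimal NumPy sketch. This is an illustration only, not code from the application or from Rossi et al.: the variable names (U2, Q2, E, A2, B2) follow the claim's notation, and the 4x sub-sampling stands in for whatever downscaler an actual implementation would use.

```python
import numpy as np

EPS = 1e-5  # E in the claim: a small positive real number

def instance_norm(x, mean, std, a=1.0, b=0.0, eps=EPS):
    """Normalize values x with externally supplied statistics.

    Mirrors the claimed form A2 * ((x - U2) / sqrt(Q2^2 + E)) + B2,
    where U2/Q2 are the mean/standard deviation and A2/B2 are affine
    parameters (1 and 0 when no scaling or biasing is applied, which is
    the Examiner's point that such operations are not expressly required).
    """
    return a * ((x - mean) / np.sqrt(std**2 + eps)) + b

rng = np.random.default_rng(0)
input_image = rng.uniform(0.0, 1.0, size=(64, 64))
scaled_image = input_image[::4, ::4]  # naive 4x downscale by sub-sampling

# Global parameters extracted once from the low-resolution image ...
u2, q2 = scaled_image.mean(), scaled_image.std()

# ... and applied to BOTH the scaled image (V3) and the input image (Y3),
# i.e., the same normalization factors at both resolutions.
v3 = instance_norm(scaled_image, u2, q2)
y3 = instance_norm(input_image, u2, q2)
```

Note that each normalized value decreases as the mean U2 increases, matching the "decreased when the mean value increases" limitation discussed for claim 1.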
On pages 17 - 19 of the remarks the Applicant’s Representative argues that claim 15 is allowable over Rossi et al. in view of Chung et al. in view of Georgescu et al. at least because Georgescu et al. fail to teach “the limitations of ‘processing the first image block with a kernel to generate the second image block; and processing the first scaled image with the kernel to generate the first intermediate image’ as explicitly recited in claim 15.” The Applicant’s Representative argues that Georgescu et al. merely teach that kernels are used in all convolutional filters and do not teach “processing an image patch and a scaled image with the same kernel.” Therefore, the Applicant’s Representative argues that Georgescu et al. fail to teach the aforementioned disputed claim limitations. The Examiner respectfully disagrees. In response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986). Furthermore, the Examiner asserts that Rossi et al. in view of Chung et al. in view of Georgescu et al. disclose the aforementioned disputed claim limitations. The Examiner asserts that Rossi et al. disclose processing the first image with a kernel to generate the second image and processing the first scaled image with the kernel to generate the first intermediate image, i.e., processing an input image and a scaled image with the same kernel, see at least the abstract, figures 2 - 5, page 1 paragraphs 0007 - 0008, page 2 paragraphs 0010 and 0019, page 3 paragraph 0027, page 4 paragraphs 0031 - 0035, page 5 paragraph 0039 - page 6 paragraph 0042 and page 6 paragraphs 0044 - 0047 of Rossi et al. 
wherein they disclose that the “extracted artistic style may then be stored as a plurality of layers in one or more neural networks” [0007], that “the artistic style may be applied to the target images and/or video sequence of images using a first version of the neural network by a first processing device at a first resolution to generate one or more sets of parameters (e.g., normalization factors), which parameters may then be mapped for use by a second version of the neural network by a second processing device at a second resolution” [0008], that “Convolutional Neural Networks consist of layers of small computational units that process visual information in a hierarchical fashion, e.g., often represented in the form of ‘layers.’ The output of a given layer consists of so-called ‘feature maps,’ i.e., differently-filtered versions of the input image... To obtain a representation of the ‘style’ of an input image, Gatys proposes using a feature space that is built on top of the filter responses in multiple layers of the network and that consists of the correlations between the different filter responses over the spatial extent of the feature maps” [0019], that while “the example of FIG. 2 above shows a single neural network for the application of the selected artistic style, according to some embodiments, more than one ‘version’ of style transfer neural network may be created for each artistic style. The different versions of the neural networks may, e.g., operate at different resolution levels, have different numbers of layers, different kinds of layers, different network architectures, and/or have different optimizations applied” [0032], that “neural network 400 may comprise a hybrid architecture, e.g., including a first part, e.g., lower resolution network (406), that may be executed on a first processing device and that may comprise many convolutional layers (e.g., 408, 410, 412, etc.) 
and a second part, e.g., higher resolution network (414) that may be executed on a second processing device and that may also comprise a number of convolutional layers (e.g., 416, 422)” [0039] and that the “output of the lower resolution network 406, i.e., after processing by each of convolutional layers 1..N in the network (as well as one or more additional optional low resolution convolutions following layer N, if needed), may also be output as its own low resolution stylized output image (426), if so desired. According to some embodiments utilizing a hybrid network architecture, such as the network 400 shown in FIG. 4, the output of the higher resolution network 414, i.e., after processing by convolutional layer N+2 (422), may result in a high resolution stylized output image (424)” [0042]. The Examiner asserts that, as shown herein above and in the cited portions, Rossi et al. disclose that an artistic style may be stored as a plurality of layers in a style transfer neural network, such as a convolutional neural network (CNN), and that different “versions” of the style transfer neural network may be created for the artistic style, such as versions of the style transfer neural network that operate at different resolution levels. Additionally, the Examiner asserts that CNNs utilize kernels, i.e., filters, to generate the outputs of their convolutional layers and thus, for example, the filter utilized by a first convolutional layer of a style transfer neural network of Rossi et al. corresponds to the claimed kernel. The Examiner asserts that one of ordinary skill in the art would understand that first and second versions of a style transfer neural network for application of a selected artistic style of Rossi et al. that operate at different resolution levels would utilize the same kernels to generate stylized output images at the different resolution levels at least because they are being utilized to output images in the same selected artistic style. 
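The Examiner's shared-kernel reasoning can likewise be sketched in NumPy (again purely illustrative; the 3x3 box filter is a placeholder for a learned convolution weight): one kernel applied to both an input image and its downscaled version yields outputs corresponding to the claimed "second image" and "first intermediate image," respectively.

```python
import numpy as np

def conv2d(image, kernel):
    """Naive 'valid' 2-D convolution over a single-channel image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# One 3x3 kernel standing in for a learned filter of the style network.
kernel = np.full((3, 3), 1.0 / 9.0)

rng = np.random.default_rng(1)
input_image = rng.uniform(size=(32, 32))
scaled_image = input_image[::2, ::2]  # 2x downscale by sub-sampling

second_image = conv2d(input_image, kernel)        # high-resolution branch
first_intermediate = conv2d(scaled_image, kernel)  # low-resolution branch, SAME kernel
```

Both branches share the identical kernel array; only the input resolution differs, which is the understanding the Examiner attributes to one of ordinary skill in the art.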
Furthermore, the Examiner asserts that one of ordinary skill in the art would understand that, in order for low and high resolution CNNs to output low and high resolution stylized images, respectively, in a same artistic style, a kernel utilized in a convolutional layer of the low resolution CNN would be the same as a kernel utilized in a corresponding convolutional layer of the high resolution CNN. Thus, the Examiner asserts that Rossi et al. disclose processing the first image with a kernel to generate the second image and processing the first scaled image with the kernel to generate the first intermediate image, i.e., processing an input image and a scaled image with the same kernel.

The Examiner notes that Rossi et al. fail to disclose expressly processing the first image block to generate the second image block, i.e., Rossi et al. fail to expressly disclose processing the input image in blocks. However, analogous art Georgescu et al. disclose “processing the first image block with a kernel to generate the second image block”, i.e., processing the input image in blocks, see at least the abstract, figures 4 and 5, page 1 paragraphs 0020 - 0025, page 2 paragraphs 0031 - 0032, page 2 paragraph 0041 - page 3 paragraph 0046, page 4 paragraphs 0065 - 0069, page 5 paragraphs 0080 - 0082, page 6 paragraph 0089, page 7 paragraphs 0126 - 0129 and 0131 - 0137 of Georgescu et al. wherein they disclose that “[m]ulti-stage convolution is performed on image patches extracted from the histological image followed by multi-stage transpose convolution to recover a layer matched in size to the input image patch. The output image patch thus has a one-to-one pixel-to-pixel correspondence with the input image patch” [abstract], that “image processing will typically be applied to patches which are of a manageable size (e.g. ca. 500x500 pixels) for processing by a CNN.
The WSI will thus be processed on the basis of splitting it into patches, analyzing the patches with the CNN, then reassembling the output (image) patches into a probability map of the same size as the WSI” [0066] and that their “neural network is similar in design to the VGG-16 architecture of Simonyan and Zisserman 2014 [6]. It uses very small 3×3 kernels in all convolutional filters” [0082]. The Examiner asserts that, as shown herein above and in the cited portions, Georgescu et al. disclose, at least, “processing the first image block with a kernel to generate the second image block”. Therefore, the Examiner asserts that Rossi et al. in view of Chung et al. in view of Georgescu et al. disclose the aforementioned disputed claim limitations.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

Claims 1 - 3, 5 and 6 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Rossi et al. U.S. Publication No. 2020/0380639 A1.

- With regards to claim 1, Rossi et al. disclose a method, (Rossi et al., Abstract, Figs. 3A - 5, Pg. 1 ¶ 0007 - Pg. 2 ¶ 0011, Pg. 4 ¶ 0033 - 0035, Pg. 6 ¶ 0045 - 0048, Pg. 7 ¶ 0052) comprising: downscaling an input image to generate a scaled image; (Rossi et al., Figs. 3B - 5, Pg. 2 ¶ 0010, Pg. 4 ¶ 0035, Pg. 5 ¶ 0040, Pg. 6 ¶ 0045 - 0047) performing, to the scaled image, a first convolutional neural networks (CNN) modeling process with first non-local operations, to generate global parameters; (Rossi et al., Figs. 3B - 5, Pg.
1 ¶ 0008 - Pg. 2 ¶ 0010, Pg. 3 ¶ 0023 - 0025, Pg. 4 ¶ 0035 - Pg. 5 ¶ 0038, Pg. 5 ¶ 0040 - 0041, Pg. 6 ¶ 0044 - 0047) and performing, to the input image, a second CNN modeling process with second non-local operations that are performed with the global parameters including a mean value of the scaled image, to generate an output image corresponding to the input image, (Rossi et al., Figs. 3B - 5, Pg. 1 ¶ 0008 - Pg. 2 ¶ 0010, Pg. 3 ¶ 0021 - 0025, Pg. 4 ¶ 0035 - Pg. 5 ¶ 0036, Pg. 5 ¶ 0040 - Pg. 6 ¶ 0042, Pg. 6 ¶ 0045 - 0047) wherein performing the first CNN modeling process comprises generating a first intermediate image from the scaled image by calculating first values of pixels of the first intermediate image from values of pixels of the scaled image and the mean value, (Rossi et al., Figs. 3B - 5, Pg. 1 ¶ 0008, Pg. 2 ¶ 0019, Pg. 3 ¶ 0021 - 0027, Pg. 4 ¶ 0035, Pg. 5 ¶ 0039 - Pg. 6 ¶ 0042) performing the second CNN modeling process comprises generating a second intermediate image between the input image and the output image by calculating second values of pixels of the second intermediate image from values of pixels of the input image and the mean value, (Rossi et al., Figs. 3B - 5, Pg. 1 ¶ 0008, Pg. 2 ¶ 0019, Pg. 3 ¶ 0021 - 0027, Pg. 4 ¶ 0035, Pg. 5 ¶ 0039 - Pg. 6 ¶ 0042) and each of the first values and the second values is decreased when the mean value increases, (Rossi et al., Pg. 1 ¶ 0008, Pg. 2 ¶ 0019, Pg. 3 ¶ 0021 - 0025, Pg. 5 ¶ 0039 - Pg. 6 ¶ 0042 [The Examiner asserts that each of the first values and the second values calculated by Rossi et al. 
is decreased when the mean value increases at least because equation 3 of Rossi et al., utilized to calculate the instance normalized values of pixels, i.e., each of the first values and the second values, substantially corresponds to equation 1 in the instant specification for instance normalization, see at least page 9 paragraph 0043 - page 10 paragraph 0044 and page 12 paragraph 0053 of the instant specification]) wherein performing the first CNN modeling process comprises calculating the first values of pixels of a first intermediate image from the scaled image; (Rossi et al., Figs. 3B - 5, Pg. 1 ¶ 0008, Pg. 2 ¶ 0019, Pg. 3 ¶ 0021 - 0027, Pg. 4 ¶ 0035, Pg. 5 ¶ 0039 - Pg. 6 ¶ 0042) performing the second CNN modeling process comprises calculating the second values of pixels of a second intermediate image from the input image, (Rossi et al., Figs. 3B - 5, Pg. 1 ¶ 0008, Pg. 2 ¶ 0019, Pg. 3 ¶ 0021 - 0027, Pg. 4 ¶ 0035, Pg. 5 ¶ 0039 - Pg. 6 ¶ 0042) the first values are calculated by V3 = A2((X2 - U2) / √ ([Q2]^2 + E) ) + B2, V3 is one of the first values, X2 is one of the values of pixels of the scaled image, (Rossi et al., Figs. 3B - 5, Pg. 1 ¶ 0008, Pg. 2 ¶ 0019, Pg. 3 ¶ 0021 - 0027, Pg. 4 ¶ 0035, Pg. 5 ¶ 0039 - Pg. 6 ¶ 0042) U2 is the mean value, Q2 is a standard deviation included in the global parameters, E is a positive real number, and A2 and B2 are affine parameters, (Rossi et al., Figs. 3B - 5, Pg. 1 ¶ 0008, Pg. 2 ¶ 0019, Pg. 3 ¶ 0021 - 0027, Pg. 4 ¶ 0035, Pg. 5 ¶ 0039 - Pg. 6 ¶ 0042 [The Examiner asserts that the broadest reasonable interpretation of the affine parameters A2 and B2 encompass interpretations wherein A2 and B2 are 1 and 0, respectively.]) and the second values are calculated by Y3 = A2((Y2 - U2) / √ ([Q2]^2 + E) ) + B2, and Y3 is one of the second values and Y2 is one of the values of pixels of the input image. (Rossi et al., Figs. 3B - 5, Pg. 1 ¶ 0008, Pg. 2 ¶ 0019, Pg. 3 ¶ 0021 - 0027, Pg. 4 ¶ 0035, Pg. 5 ¶ 0039 - Pg. 
6 ¶ 0042)

- With regards to claim 2, Rossi et al. disclose the method of claim 1, wherein performing the second CNN modeling process with the second non-local operations comprises: performing first CNN operations and the second non-local operations alternately to generate first intermediate images in order, (Rossi et al., Figs. 3A - 5, Pg. 2 ¶ 0019, Pg. 3 ¶ 0023 - 0025, Pg. 4 ¶ 0033 - Pg. 5 ¶ 0038, Pg. 6 ¶ 0044 - 0047 [“the first parameter set 357 (‘PARAM. SET 1’) may be applied to the input of the first convolutional layer 380 of a higher resolution artistic style transfer network executing on the second processing device, PROC 2. Likewise, the second parameter set 361 (‘PARAM. SET 2’) may be applied to the input of the second convolutional layer 382 of the higher resolution artistic style transfer network executing on the second processing device, and so forth for each such layer in the network for which parameters are necessary or desired”]) wherein each of the second non-local operations is performed with a corresponding one of the global parameters to generate a corresponding one of the first intermediate images. (Rossi et al., Figs. 3A - 5, Pg. 1 ¶ 0008, Pg. 2 ¶ 0010, Pg. 4 ¶ 0033 - Pg. 5 ¶ 0038, Pg. 5 ¶ 0041)

- With regards to claim 3, Rossi et al. disclose the method of claim 2, wherein performing the first CNN modeling process with the first non-local operations comprises: performing second CNN operations and the first non-local operations alternately to generate second intermediate images in order, (Rossi et al., Figs. 3A - 5, Pg. 1 ¶ 0008, Pg. 2 ¶ 0010 and 0019, Pg. 3 ¶ 0023 - 0025, Pg. 4 ¶ 0033 - 0035, Pg. 5 ¶ 0041, Pg. 6 ¶ 0044 - 0047 [“The output of a given layer consists of so-called ‘feature maps,’ i.e., differently-filtered versions of the input image”]) wherein each of the first non-local operations is performed with a corresponding one of the global parameters to generate a corresponding one of the second intermediate images; (Rossi et al., Figs.
3A - 5, Pg. 1 ¶ 0008, Pg. 2 ¶ 0019, Pg. 3 ¶ 0023 - 0025, Pg. 4 ¶ 0033 - 0035, Pg. 5 ¶ 0038 and 0040 - 0041, Pg. 6 ¶ 0045 - 0047) and generating a next one of the global parameters based on the corresponding one of the second intermediate images. (Rossi et al., Figs. 3A - 5, Pg. 1 ¶ 0008, Pg. 2 ¶ 0010 and 0019, Pg. 3 ¶ 0023 - 0025, Pg. 4 ¶ 0033 - 0035, Pg. 5 ¶ 0040 - 0041, Pg. 6 ¶ 0044 - 0047)

- With regards to claim 5, Rossi et al. disclose the method of claim 1, wherein performing the first CNN modeling process with the first non-local operations comprises: extracting first global parameters of the global parameters from the scaled image; (Rossi et al., Figs. 3A - 5, Pg. 1 ¶ 0008, Pg. 2 ¶ 0010, Pg. 3 ¶ 0023 - 0025, Pg. 4 ¶ 0035 - Pg. 5 ¶ 0038, Pg. 5 ¶ 0040 - 0041, Pg. 6 ¶ 0045 - 0046) transforming the scaled image based on the first global parameters to generate a first one of first intermediate images; (Rossi et al., Figs. 3A - 5, Pg. 1 ¶ 0008, Pg. 2 ¶ 0019, Pg. 3 ¶ 0023 - 0025, Pg. 4 ¶ 0033 - 0035, Pg. 5 ¶ 0038 and 0040 - 0041, Pg. 6 ¶ 0045 - 0047) and transforming each one of the first intermediate images based on a corresponding one of the global parameters to generate a next one of the first intermediate images. (Rossi et al., Figs. 3A - 5, Pg. 1 ¶ 0008, Pg. 2 ¶ 0010 and 0019, Pg. 3 ¶ 0023 - 0025, Pg. 4 ¶ 0033 - 0035, Pg. 5 ¶ 0040 - 0041, Pg. 6 ¶ 0044 - 0047)

- With regards to claim 6, Rossi et al. disclose the method of claim 1, wherein the global parameters further include a standard deviation of the scaled image. (Rossi et al., Fig. 5, Pg. 1 ¶ 0008, Pg. 3 ¶ 0021 - 0025 [The Examiner notes that the standard deviation of a set of values is equal to the square root of the variance of the set of values, thus variance and standard deviation are directly related.])

Claim Rejections - 35 USC § 103

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

Claim 4 is rejected under 35 U.S.C.
103 as being unpatentable over Rossi et al. U.S. Publication No. 2020/0380639 A1 as applied to claim 1 above, and further in view of Georgescu et al. U.S. Publication No. 2019/0206056 A1.

- With regards to claim 4, Rossi et al. disclose the method of claim 1, further comprising: wherein performing the first CNN modeling process with the first non-local operations comprises: extracting global features of the input image from the scaled image to generate the global parameters; (Rossi et al., Figs. 3A - 5, Pg. 1 ¶ 0008, Pg. 2 ¶ 0010, Pg. 3 ¶ 0023 - 0025, Pg. 4 ¶ 0035 - Pg. 5 ¶ 0038, Pg. 5 ¶ 0040 - 0041, Pg. 6 ¶ 0045 - 0046) and wherein performing the second CNN modeling process with the second non-local operations comprises: applying the global parameters to the input image to generate first intermediate images having the global features; (Rossi et al., Figs. 3B - 5, Pg. 2 ¶ 0010 and 0019, Pg. 3 ¶ 0023 - 0025, Pg. 4 ¶ 0035, Pg. 6 ¶ 0045 - 0047 [“The output of a given layer consists of so-called ‘feature maps,’ i.e., differently-filtered versions of the input image” and “the first parameter set 357 (‘PARAM. SET 1’) may be applied to the input of the first convolutional layer 380 of a higher resolution artistic style transfer network executing on the second processing device, PROC 2. Likewise, the second parameter set 361 (‘PARAM. SET 2’) may be applied to the input of the second convolutional layer 382 of the higher resolution artistic style transfer network executing on the second processing device, and so forth for each such layer in the network for which parameters are necessary or desired”]) and generating the output image based on the first intermediate images. (Rossi et al., Figs. 3B - 5, Pg. 2 ¶ 0010, Pg. 4 ¶ 0035 - Pg. 5 ¶ 0039, Pg. 5 ¶ 0041 - Pg. 6 ¶ 0043, Pg. 6 ¶ 0046 - 0047)

Rossi et al.
fail to disclose explicitly dividing the input image into a plurality of first image blocks, wherein the output image includes a plurality of second image blocks corresponding to the plurality of first image blocks; applying parameters to one of the plurality of first image blocks; and generating one of the plurality of second image blocks corresponding to the one of the plurality of first image blocks. Pertaining to analogous art, Georgescu et al. disclose dividing the input image into a plurality of first image blocks, (Georgescu et al., Abstract, Fig. 5, Pg. 1 ¶ 0019 - 0020, Pg. 2 ¶ 0029 - 0031 and 0040 - 0041, Pg. 3 ¶ 0043 - 0046, Pg. 4 ¶ 0065 - 0066, Pg. 5 ¶ 0080 - 0081) wherein the output image includes a plurality of second image blocks corresponding to the plurality of first image blocks; (Georgescu et al., Abstract, Figs. 1B & 5, Pg. 1 ¶ 0023 - 0025, Pg. 3 ¶ 0043 - 0046, Pg. 4 ¶ 0066, Pg. 4 ¶ 0072 - Pg. 5 ¶ 0074, Pg. 6 ¶ 0089) applying parameters to one of the plurality of first image blocks to generate first intermediate images; (Georgescu et al. Abstract, Figs. 1A, 1B & 5, Pg. 1 ¶ 0020 - 0025, Pg. 2 ¶ 0029 - 0031, Pg. 2 ¶ 0040 - Pg. 3 ¶ 0046, Pg. 4 ¶ 0066 - 0069, Pg. 4 ¶ 0072 - Pg. 5 ¶ 0074, Pg. 5 ¶ 0083 - 0087, Pg. 6 ¶ 0093) and generating one of the plurality of second image blocks corresponding to the one of the plurality of first image blocks based on the first intermediate images. (Georgescu et al. Abstract, Figs. 1A, 1B & 5, Pg. 1 ¶ 0020 - 0025, Pg. 2 ¶ 0029 - 0031, Pg. 2 ¶ 0040 - Pg. 3 ¶ 0046, Pg. 4 ¶ 0066 - 0069, Pg. 4 ¶ 0072 - Pg. 5 ¶ 0074, Pg. 5 ¶ 0083 - 0087, Pg. 6 ¶ 0093) Rossi et al. and Georgescu et al. are combinable because they are both directed towards processing input images with convolutional neural networks (CNNs) to generate output images. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Rossi et al. 
with the teachings of Georgescu et al. This modification would have been prompted in order to enhance the base device of Rossi et al. with the well-known and applicable technique Georgescu et al. applied to a similar device. Dividing the input image into a plurality of first image blocks that correspond to a plurality of second image blocks of the output image and processing the plurality of first image blocks by the second CNN modeling process, as taught by Georgescu et al., would enhance the base device of Rossi et al. by helping make sure that the processing is amenable for digital processing by a suitable processor, as taught and suggested by Georgescu et al., see at least page 4 paragraphs 0065 - 0066 of Georgescu et al. Furthermore, this modification would enhance the base device of Rossi et al. by helping facilitate processing of high-resolution images since the high-resolution images would be split into image blocks of a more manageable size for processing by the CNN. Moreover, this modification would have been prompted by the teachings and suggestions of Rossi et al. that various fusions of operations may be performed on the network data to help reduce bandwidth usage caused by processing large-sized images, see at least page 6 paragraphs 0044 and 0048 of Rossi et al. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that the input image would be divided into a plurality of first image blocks that correspond to a plurality of second image blocks of the output image and the plurality of first image blocks would be processed by the second CNN modeling process so as to help make processing of high-resolution input images more amenable for digital processing by a suitable processor. Therefore, it would have been obvious to combine Rossi et al. with Georgescu et al. to obtain the invention as specified in claim 4. Claims 7 - 14 are rejected under 35 U.S.C.
103 as being unpatentable over Rossi et al. U.S. Publication No. 2020/0380639 A1 in view of Georgescu et al. U.S. Publication No. 2019/0206056 A1 in view of Dijkman et al. U.S. Publication No. 2017/0011281 A1. - With regards to claim 7, Rossi et al. disclose a system, (Rossi et al., Figs. 2, 4 & 6, Pg. 1 ¶ 0007 - Pg. 2 ¶ 0011, Pg. 3 ¶ 0026 - 0027, Pg. 6 ¶ 0049 - Pg. 7 ¶ 0052) comprising: a first memory configured to receive and store an input image; (Rossi et al., Figs. 3A - 6, Pg. 2 ¶ 0010 - 0011, Pg. 3 ¶ 0027, Pg. 6 ¶ 0044 - 0045, Pg. 6 ¶ 0049 - Pg. 7 ¶ 0052) and a chip being separated from the first memory, (Rossi et al., Fig. 6, Pg. 2 ¶ 0011, Pg. 3 ¶ 0027, Pg. 6 ¶ 0049 - Pg. 7 ¶ 0052 [“Processor 605 may be a system-on-chip such as those found in mobile devices and include one or more central processing units (CPUs)” and “graphics hardware 620 may include one or more programmable graphics processing units (GPUs) and/or one or more specialized SoCs. As mentioned above, in some embodiments, the graphics hardware 620 may comprise a first processing device having a first set of capabilities and second processing device having a second set of capabilities, wherein the first and second processing devices may work together according to a specified protocol to perform a graphics or image processing task, such as artistic style transfer of images or video”]) and configured to generate parameters associated with a plurality of scaled images and with non-local information of the input image, (Rossi et al., Figs. 3B - 5, Pg. 1 ¶ 0008 - Pg. 2 ¶ 0010, Pg. 2 ¶ 0019, Pg. 3 ¶ 0023 - 0025, Pg. 4 ¶ 0035 - Pg. 5 ¶ 0038, Pg. 5 ¶ 0040 - 0041, Pg. 6 ¶ 0044 - 0047) wherein each of the plurality of scaled images has a size smaller than a size of the input image, (Rossi et al., Figs. 3B - 5, Pg. 2 ¶ 0019, Pg. 4 ¶ 0035, Pg. 5 ¶ 0038 - 0041) the chip comprising: a first processing device configured to downscale the input image, (Rossi et al., Figs. 3B - 6, Pg. 1 ¶ 0008, Pg. 
2 ¶ 0010 - 0011, Pg. 3 ¶ 0027, Pg. 4 ¶ 0035, Pg. 5 ¶ 0039 - 0041, Pg. 6 ¶ 0045, Pg. 6 ¶ 0048 - Pg. 7 ¶ 0052 [“Processor 605 may be a system-on-chip such as those found in mobile devices and include one or more central processing units (CPUs)” and “graphics hardware 620 may include one or more programmable graphics processing units (GPUs) and/or one or more specialized SoCs. As mentioned above, in some embodiments, the graphics hardware 620 may comprise a first processing device having a first set of capabilities and second processing device having a second set of capabilities, wherein the first and second processing devices may work together according to a specified protocol to perform a graphics or image processing task, such as artistic style transfer of images or video”]) and configured to store the parameters, (Rossi et al., Figs. 3B - 6, Pg. 1 ¶ 0009 - Pg. 2 ¶ 0011, Pg. 3 ¶ 0026 - 0027, Pg. 4 ¶ 0035, Pg. 5 ¶ 0038 - 0039 and 0041, Pg. 6 ¶ 0044 - Pg. 7 ¶ 0052) wherein the chip is further configured to process, by performing first convolutional neural networks (CNN) operations with first non-local operations, the input image being downscaled, to generate the plurality of scaled images; (Rossi et al., Figs. 3B - 5, Pg. 2 ¶ 0011 and 0019, Pg. 4 ¶ 0031 - 0032 and 0035, Pg. 5 ¶ 0038 - 0041, Pg. 6 ¶ 0044 - 0047) and a second processing device configured to receive the parameters from the first processing device and to receive the input image, (Rossi et al., Figs. 3B - 6, Pg. 1 ¶ 0008 - Pg. 2 ¶ 0011, Pg. 3 ¶ 0027, Pg. 4 ¶ 0035 - Pg. 5 ¶ 0041, Pg. 6 ¶ 0045 - Pg. 7 ¶ 0052 [“Processor 605 may be a system-on-chip such as those found in mobile devices and include one or more central processing units (CPUs)” and “graphics hardware 620 may include one or more programmable graphics processing units (GPUs) and/or one or more specialized SoCs. 
As mentioned above, in some embodiments, the graphics hardware 620 may comprise a first processing device having a first set of capabilities and second processing device having a second set of capabilities, wherein the first and second processing devices may work together according to a specified protocol to perform a graphics or image processing task, such as artistic style transfer of images or video”]) and configured to generate a portion of an output image based on a portion of the input image and the parameters, (Rossi et al., Figs. 3B - 5, Pg. 2 ¶ 0010, Pg. 4 ¶ 0035 - Pg. 5 ¶ 0036, Pg. 5 ¶ 0038, Pg. 6 ¶ 0042 - 0043 and 0046 - 0048) wherein the first processing device is further configured to perform a first instance normalization to the plurality of scaled images, (Rossi et al., Figs. 3B - 5, Pg. 1 ¶ 0008, Pg. 2 ¶ 0019, Pg. 3 ¶ 0021 - 0027, Pg. 4 ¶ 0035, Pg. 5 ¶ 0039 - Pg. 6 ¶ 0042) and the second processing device is further configured to perform a second instance normalization to the portion of the input image, (Rossi et al., Figs. 3B - 5, Pg. 1 ¶ 0008, Pg. 2 ¶ 0019, Pg. 3 ¶ 0021 - 0027, Pg. 4 ¶ 0035, Pg. 5 ¶ 0039 - Pg. 6 ¶ 0042) wherein the first CNN operations comprise calculating first values of pixels of a first intermediate image from a scaled image of the plurality of scaled images, (Rossi et al., Figs. 3B - 5, Pg. 1 ¶ 0008, Pg. 2 ¶ 0019, Pg. 3 ¶ 0021 - 0027, Pg. 4 ¶ 0035, Pg. 5 ¶ 0039 - Pg. 6 ¶ 0042) wherein the second processing device is further configured to perform second CNN operations, the second CNN operations comprise calculating second values of pixels of a second intermediate image from the input image, (Rossi et al., Figs. 3B - 5, Pg. 1 ¶ 0008, Pg. 2 ¶ 0019, Pg. 3 ¶ 0021 - 0027, Pg. 4 ¶ 0035, Pg. 5 ¶ 0039 - Pg. 6 ¶ 0042) the first values are calculated by V3 = A2((X2 - U2) / √ ([Q2]^2 + E) ) + B2, V3 is one of the first values, X2 is one of values of pixels of the scaled image, (Rossi et al., Figs. 3B - 5, Pg. 1 ¶ 0008, Pg. 2 ¶ 0019, Pg. 
3 ¶ 0021 - 0027, Pg. 4 ¶ 0035, Pg. 5 ¶ 0039 - Pg. 6 ¶ 0042) U2 is a mean value, Q2 is a standard deviation included in global parameters, E is a positive real number, and A2 and B2 are affine parameters, (Rossi et al., Figs. 3B - 5, Pg. 1 ¶ 0008, Pg. 2 ¶ 0019, Pg. 3 ¶ 0021 - 0027, Pg. 4 ¶ 0035, Pg. 5 ¶ 0039 - Pg. 6 ¶ 0042 [The Examiner asserts that the broadest reasonable interpretation of the affine parameters A2 and B2 encompasses interpretations wherein A2 and B2 are 1 and 0, respectively.]) and the second values are calculated by Y3 = A2((Y2 - U2) / √ ([Q2]^2 + E) ) + B2, and Y3 is one of the second values and Y2 is one of values of pixels of the input image, (Rossi et al., Figs. 3B - 5, Pg. 1 ¶ 0008, Pg. 2 ¶ 0019, Pg. 3 ¶ 0021 - 0027, Pg. 4 ¶ 0035, Pg. 5 ¶ 0039 - Pg. 6 ¶ 0042) and wherein one of the first processing device and the second processing device comprises: a processing circuit configured to generate the plurality of scaled images and the parameters. (Rossi et al., Figs. 3B - 6, Pg. 1 ¶ 0008 - Pg. 2 ¶ 0011, Pg. 3 ¶ 0023 - 0027, Pg. 4 ¶ 0035 - Pg. 5 ¶ 0041, Pg. 6 ¶ 0045 - Pg. 7 ¶ 0052 [“Processor 605 may be a system-on-chip such as those found in mobile devices and include one or more central processing units (CPUs)” and “graphics hardware 620 may include one or more programmable graphics processing units (GPUs) and/or one or more specialized SoCs. As mentioned above, in some embodiments, the graphics hardware 620 may comprise a first processing device having a first set of capabilities and second processing device having a second set of capabilities, wherein the first and second processing devices may work together according to a specified protocol to perform a graphics or image processing task, such as artistic style transfer of images or video.”]) Rossi et al.
fail to disclose explicitly wherein after the portion of the output image is generated, in response to another portion of the output image being not generated, the second processing device receives an image block from the first memory, and wherein one of the first processing device and the second processing device comprises: an on-chip memory coupled between the first memory and the processing circuit. Pertaining to analogous art, Georgescu et al. disclose a system, (Georgescu et al., Abstract, Figs. 6 - 8, Pg. 2 ¶ 0038 - Pg. 3 ¶ 0047, Pg. 4 ¶ 0065, Pg. 7 ¶ 0142 - Pg. 8 ¶ 0151) comprising: a first memory configured to receive and store an input image; (Georgescu et al., Figs. 5 - 8, Pg. 1 ¶ 0019, Pg. 2 ¶ 0038 - 0040, Pg. 3 ¶ 0047, Pg. 4 ¶ 0065 - 0068, Pg. 7 ¶ 0134 and 0141, Pg. 8 ¶ 0144 - 0148 and 0151) and a chip being separated from the first memory, (Georgescu et al., Figs. 6 - 8, Pg. 2 ¶ 0038 - 0039, Pg. 3 ¶ 0046 - 0048, Pg. 7 ¶ 0142 - Pg. 8 ¶ 0151) the chip comprising: a second processing device (Georgescu et al., Figs. 6 - 8, Pg. 2 ¶ 0038 - 0039, Pg. 3 ¶ 0046 - 0048, Pg. 7 ¶ 0142 - Pg. 8 ¶ 0151) configured to receive the input image, (Georgescu et al., Abstract, Figs. 1A, 1B & 5, Pg. 1 ¶ 0019 - 0022, Pg. 2 ¶ 0038 - 0042, Pg. 4 ¶ 0065 - 0068, Pg. 5 ¶ 0080 - 0081, Pg. 7 ¶ 0134 - 0137) and configured to generate a portion of an output image based on a portion of the input image and parameters, (Georgescu et al., Abstract, Figs. 1A, 1B & 5, Pg. 1 ¶ 0022 - 0025, Pg. 3 ¶ 0043 - 0046, Pg. 4 ¶ 0065 - 0069 and 0072, Pg. 5 ¶ 0080 - 0082, Pg. 5 ¶ 0088 - Pg. 6 ¶ 0089, Pg. 7 ¶ 0131 - 0137) wherein after the portion of the output image is generated, in response to another portion of the output image being not generated, the second processing device receives an image block from the first memory. (Georgescu et al., Abstract, Figs. 1A, 1B & 5 - 8, Pg. 1 ¶ 0020 - 0025, Pg. 2 ¶ 0031, Pg. 3 ¶ 0043 - 0047, Pg. 4 ¶ 0065 - 0066, Pg. 5 ¶ 0080 - 0081, Pg. 
7 ¶ 0134 - 0137 [“image processing will typically be applied to patches which are of a manageable size (e.g. ca. 500x500 pixels) for processing by a CNN. The WSI will thus be processed on the basis of splitting it into patches, analyzing the patches with the CNN, then reassembling the output (image) patches into a probability map of the same size as the WSI.”]) Georgescu et al. fail to disclose explicitly wherein one of the first processing device and the second processing device comprises: an on-chip memory coupled between the first memory and the processing circuit. Pertaining to analogous art, Dijkman et al. disclose a system, (Dijkman et al., Figs. 1, 2, 4 & 5, Pg. 1 ¶ 0010 - 0013, Pg. 2 ¶ 0031 - 0035) comprising: a first memory; (Dijkman et al., Figs. 1, 2, 4 & 5, Pg. 2 ¶ 0034, Pg. 4 ¶ 0053 - Pg. 5 ¶ 0056, Pg. 9 ¶ 0113 - Pg. 10 ¶ 0122) and a chip being separated from the first memory, (Dijkman et al., Figs. 1 & 4, Pg. 2 ¶ 0031 - 0034, Pg. 4 ¶ 0053 - Pg. 5 ¶ 0056, Pg. 9 ¶ 0113 - Pg. 10 ¶ 0122) the chip comprising: a first processing device; (Dijkman et al., Figs. 1 & 4, Pg. 2 ¶ 0031 - 0034, Pg. 4 ¶ 0053 - 0055) and a second processing device, (Dijkman et al., Figs. 1 & 4, Pg. 2 ¶ 0031 - 0034, Pg. 4 ¶ 0053 - 0055) wherein one of the first processing device and the second processing device comprises: a processing circuit; (Dijkman et al., Figs. 1 & 4, Pg. 2 ¶ 0031 - 0034, Pg. 4 ¶ 0053 - 0055) and an on-chip memory coupled between the first memory and the processing circuit. (Dijkman et al., Figs. 1, 2, 4 & 5, Pg. 2 ¶ 0031 - 0034, Pg. 4 ¶ 0053 - Pg. 5 ¶ 0056, Pg. 8 ¶ 0106 - 0107, Pg. 9 ¶ 0113 - Pg. 10 ¶ 0122) Rossi et al. and Georgescu et al. are combinable because they are both directed towards processing input images with convolutional neural networks (CNNs) to generate output images. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Rossi et al. 
with the teachings of Georgescu et al. This modification would have been prompted in order to enhance the base device of Rossi et al. with the well-known and applicable technique Georgescu et al. applied to a comparable device. Generating portions of an output image based on portions of the input image such that after a first portion of the output image is generated and in response to another portion of the output image being not generated an image block from the first memory is received and input for processing, as taught by Georgescu et al., would enhance the base device of Rossi et al. by helping make sure that the processing is amenable for digital processing by a suitable processor, as taught and suggested by Georgescu et al., see at least page 4 paragraphs 0065 - 0066 of Georgescu et al. Furthermore, this modification would enhance the base device of Rossi et al. by helping facilitate processing of high-resolution images since the high-resolution images would be split into image blocks of a more manageable size for processing by the CNN. Moreover, this modification would have been prompted by the teachings and suggestions of Rossi et al. that various fusions of operations may be performed on the network data to help reduce bandwidth usage caused by processing large-sized images, see at least page 6 paragraphs 0044 and 0048 of Rossi et al. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that the output image of the base device of Rossi et al. would be generated by processing portions, image blocks, of the input image individually to generate respective portions of the output image so as to help make processing of high-resolution input images more amenable for digital processing by a suitable processor. In addition, Rossi et al. and Georgescu et al. and Dijkman et al.
are combinable because they are all directed towards processing input images with convolutional neural networks (CNNs) to generate output images. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Rossi et al. and Georgescu et al. with the teachings of Dijkman et al. This modification would have been prompted in order to substitute the system-on-a-chip (SOC) with on-chip memory utilized by Dijkman et al. for the system-on-chip of Rossi et al. The SOC with on-chip memory utilized by Dijkman et al. could be substituted in place of the system-on-chip of Rossi et al. utilizing well-known techniques in the art and would likely yield predictable results, in that, in the combination, the SOC with on-chip memory utilized by Dijkman et al. would be utilized to perform the image processing tasks of the combined base device. Furthermore, this modification would have been prompted by the teachings and suggestions of Rossi et al. that various electronic and processing devices may be utilized to implement their teachings, such as a system-on-chip (SoC) and graphics hardware that includes one or more programmable graphics processing units (GPUs) and/or one or more specialized SoCs, and that the graphics hardware may comprise first and second processing devices having different sets of capabilities, see at least page 2 paragraph 0011 and page 6 paragraph 0049 - page 7 paragraph 0052 of Rossi et al. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that a system-on-a-chip (SOC) with on-chip memory would be utilized to perform the image processing tasks of the combined base device. Therefore, it would have been obvious to combine Rossi et al. with Georgescu et al. and Dijkman et al. to obtain the invention as specified in claim 7. - With regards to claim 8, Rossi et al. in view of Georgescu et al.
in view of Dijkman et al. disclose the system of claim 7, wherein the first processing device comprises: a sampling circuit configured to downscale the input image; (Rossi et al., Figs. 3B - 6, Pg. 2 ¶ 0010 - 0011, Pg. 4 ¶ 0035, Pg. 5 ¶ 0039 - 0041, Pg. 6 ¶ 0045, Pg. 6 ¶ 0047 - Pg. 7 ¶ 0052) a first memory circuit configured to store the plurality of scaled images; (Rossi et al., Figs. 3B - 6, Pg. 1 ¶ 0008 - Pg. 2 ¶ 0011, Pg. 2 ¶ 0019, Pg. 3 ¶ 0027, Pg. 4 ¶ 0035 - Pg. 5 ¶ 0036, Pg. 5 ¶ 0038 - 0040, Pg. 6 ¶ 0048 - Pg. 7 ¶ 0052) the processing circuit configured to generate the plurality of scaled images and the parameters; (Rossi et al., Figs. 3B - 6, Pg. 1 ¶ 0008 - Pg. 2 ¶ 0011, Pg. 2 ¶ 0019, Pg. 3 ¶ 0023 - 0025, Pg. 4 ¶0031, Pg. 4 ¶ 0035 - Pg. 5 ¶ 0041, Pg. 6 ¶ 0044 - 0046, Pg. 6 ¶ 0048 - Pg. 7 ¶ 0052) and a second memory circuit configured to store the parameters and configured to transmit the parameters to the second processing device. (Rossi et al., Figs. 3B - 6, Pg. 1 ¶ 0009 - Pg. 2 ¶ 0011, Pg. 3 ¶ 0026 - 0027, Pg. 4 ¶ 0033 - 0035, Pg. 5 ¶ 0038 - 0039 and 0041, Pg. 6 ¶ 0044 - Pg. 7 ¶ 0052) - With regards to claim 9, Rossi et al. in view of Georgescu et al. in view of Dijkman et al. disclose the system of claim 7, wherein the first processing device comprises: a sampling circuit configured to downscale the input image; (Rossi et al., Figs. 3B - 6, Pg. 2 ¶ 0010 - 0011, Pg. 4 ¶ 0035, Pg. 5 ¶ 0039 - 0041, Pg. 6 ¶ 0045, Pg. 6 ¶ 0047 - Pg. 7 ¶ 0052) and a first memory circuit configured to store the parameters and configured to transmit the parameters to the second processing device; (Rossi et al., Figs. 3B - 6, Pg. 1 ¶ 0009 - Pg. 2 ¶ 0011, Pg. 3 ¶ 0026 - 0027, Pg. 4 ¶ 0033 - 0035, Pg. 5 ¶ 0038 - 0039 and 0041, Pg. 6 ¶ 0044 - Pg. 7 ¶ 0052) and the second processing device comprises: the processing circuit configured to generate the plurality of scaled images and the parameters, (Rossi et al., Figs. 3B - 6, Pg. 1 ¶ 0008 - Pg. 2 ¶ 0011, Pg. 2 ¶ 0019, Pg. 
3 ¶ 0023 - 0025, Pg. 4 ¶0031, Pg. 4 ¶ 0035 - Pg. 5 ¶ 0041, Pg. 6 ¶ 0044 - 0046, Pg. 6 ¶ 0048 - Pg. 7 ¶ 0052) and configured to generate the portion of the output image after the parameters are generated; (Rossi et al., Figs. 3B - 6, Pg. 2 ¶ 0010 - 0011, Pg. 4 ¶ 0035 - Pg. 5 ¶ 0036, Pg. 5 ¶ 0038, Pg. 6 ¶ 0042 - 0043 and 0046 - 0048) and a second memory circuit configured to store the plurality of scaled images, (Rossi et al., Figs. 3B - 6, Pg. 1 ¶ 0008 - Pg. 2 ¶ 0011, Pg. 2 ¶ 0019, Pg. 3 ¶ 0027, Pg. 4 ¶ 0035 - Pg. 5 ¶ 0036, Pg. 5 ¶ 0038 - 0040, Pg. 6 ¶ 0048 - Pg. 7 ¶ 0052) and configured to store the portion of the output image after the parameters are generated. (Rossi et al., Figs. 3B - 6, Pg. 1 ¶ 0008 - Pg. 2 ¶ 0011, Pg. 2 ¶ 0019, Pg. 3 ¶ 0027, Pg. 4 ¶ 0033 - Pg. 5 ¶ 0036, Pg. 5 ¶ 0038 - 0039, Pg. 6 ¶ 0044, Pg. 6 ¶ 0048 - Pg. 7 ¶ 0052) - With regards to claim 10, Rossi et al. in view of Georgescu et al. in view of Dijkman et al. disclose the system of claim 7, further comprising: a second memory being separated from the first memory and the chip, (Rossi et al., Figs.3A - 6, Pg. 1 ¶ 0009 - Pg. 2 ¶ 0011, Pg. 3 ¶ 0027, Pg. 5 ¶ 0038 - Pg. 5 ¶ 0041, Pg. 6 ¶ 0044, Pg. 6 ¶ 0048 - Pg. 7 ¶ 0052) and configured to store the plurality of scaled images and the input image being downscaled; (Rossi et al., Figs. 3B - 6, Pg. 1 ¶ 0008 - Pg. 2 ¶ 0011, Pg. 2 ¶ 0019, Pg. 3 ¶ 0027, Pg. 4 ¶ 0035 - Pg. 5 ¶ 0036, Pg. 5 ¶ 0038 - 0041, Pg. 6 ¶ 0048 - Pg. 7 ¶ 0052) and wherein the first processing device comprises: a sampling circuit configured to downscale the input image and transmit the input image being downscaled to the second memory; (Rossi et al., Figs. 3B - 6, Pg. 2 ¶ 0010 - 0011, Pg. 4 ¶ 0035, Pg. 5 ¶ 0038 - 0041, Pg. 6 ¶ 0045, Pg. 6 ¶ 0048 - Pg. 7 ¶ 0052) a first memory circuit configured to store a part of the plurality of scaled images; (Rossi et al., Figs. 3B - 6, Pg. 1 ¶ 0008 - Pg. 2 ¶ 0011, Pg. 2 ¶ 0019, Pg. 3 ¶ 0027, Pg. 4 ¶ 0035 - Pg. 5 ¶ 0036, Pg. 5 ¶ 0038 - 0040, Pg. 
6 ¶ 0048 - Pg. 7 ¶ 0052) the processing circuit configured to generate the parameters corresponding to the part of the plurality of scaled images; (Rossi et al., Figs. 3B - 6, Pg. 1 ¶ 0008 - Pg. 2 ¶ 0011, Pg. 2 ¶ 0019, Pg. 3 ¶ 0023 - 0025, Pg. 4 ¶0031, Pg. 4 ¶ 0035 - Pg. 5 ¶ 0041, Pg. 6 ¶ 0044 - 0046, Pg. 6 ¶ 0048 - Pg. 7 ¶ 0052) and a second memory circuit configured to store the parameters and configured to transmit the parameters to the second processing device. (Rossi et al., Figs. 3B - 6, Pg. 1 ¶ 0009 - Pg. 2 ¶ 0011, Pg. 3 ¶ 0026 - 0027, Pg. 4 ¶ 0033 - 0035, Pg. 5 ¶ 0038 - 0039 and 0041, Pg. 6 ¶ 0044 - Pg. 7 ¶ 0052) - With regards to claim 11, Rossi et al. in view of Georgescu et al. in view of Dijkman et al. disclose the system of claim 7, wherein the first processing device comprises: a sampling circuit configured to downscale the input image and transmit the input image being downscaled to the first memory; (Rossi et al., Figs. 3B - 6, Pg. 2 ¶ 0010 - 0011, Pg. 4 ¶ 0035, Pg. 5 ¶ 0038 - 0041, Pg. 6 ¶ 0045, Pg. 6 ¶ 0048 - Pg. 7 ¶ 0052) and a first memory circuit configured to store the parameters and configured to transmit the parameters to the second processing device; (Rossi et al., Figs. 3B - 6, Pg. 1 ¶ 0009 - Pg. 2 ¶ 0011, Pg. 3 ¶ 0026 - 0027, Pg. 4 ¶ 0033 - 0035, Pg. 5 ¶ 0038 - 0039 and 0041, Pg. 6 ¶ 0044 - Pg. 7 ¶ 0052) and the second processing device comprises: the processing circuit configured to generate the plurality of scaled images and the parameters, (Rossi et al., Figs. 3B - 6, Pg. 1 ¶ 0008 - Pg. 2 ¶ 0011, Pg. 2 ¶ 0019, Pg. 3 ¶ 0023 - 0025, Pg. 4 ¶0031, Pg. 4 ¶ 0035 - Pg. 5 ¶ 0041, Pg. 6 ¶ 0044 - 0046, Pg. 6 ¶ 0048 - Pg. 7 ¶ 0052) and configured to generate the portion of the output image after the parameters are generated; (Rossi et al., Figs. 3B - 6, Pg. 2 ¶ 0010 - 0011, Pg. 4 ¶ 0035 - Pg. 5 ¶ 0036, Pg. 5 ¶ 0038, Pg. 
6 ¶ 0042 - 0043 and 0046 - 0048) and a second memory circuit configured to store the portion of the input image, (Rossi et al., Figs. 3B - 6, Pg. 1 ¶ 0008 - Pg. 2 ¶ 0011, Pg. 2 ¶ 0019, Pg. 3 ¶ 0027, Pg. 4 ¶ 0033 - Pg. 5 ¶ 0036, Pg. 5 ¶ 0038 - 0039, Pg. 6 ¶ 0044, Pg. 6 ¶ 0048 - Pg. 7 ¶ 0052) and configured to transmit the input image being downscaled from the first memory to the processing circuit. (Rossi et al., Figs. 3B - 6, Pg. 2 ¶ 0010 - 0011, Pg. 4 ¶ 0035, Pg. 5 ¶ 0038 - 0041, Pg. 6 ¶ 0045, Pg. 6 ¶ 0048 - Pg. 7 ¶ 0052) - With regards to claim 12, Rossi et al. in view of Georgescu et al. in view of Dijkman et al. disclose the system of claim 7, wherein the second processing device is further configured to process the portion of the input image by performing the second CNN operations with second non-local operations to generate a plurality of intermediate images, (Rossi et al., Figs. 3A - 5, Pg. 1 ¶ 0008 - Pg. 2 ¶ 0010, Pg. 2 ¶ 0019, Pg. 3 ¶ 0023 - 0025, Pg. 4 ¶ 0033 - Pg. 5 ¶ 0039, Pg. 5 ¶ 0041 - Pg. 6 ¶ 0042, Pg. 6 ¶ 0044 - 0047) wherein the second processing device is further configured to generate one of the plurality of intermediate images based on a former one of the plurality of intermediate images and a corresponding one of the parameters. (Rossi et al., Figs. 3A - 5, Pg. 1 ¶ 0008 - Pg. 2 ¶ 0010, Pg. 2 ¶ 0019, Pg. 4 ¶ 0033 - Pg. 5 ¶ 0039, Pg. 5 ¶ 0041) - With regards to claim 13, Rossi et al. in view of Georgescu et al. in view of Dijkman et al. disclose the system of claim 12, wherein the chip is further configured to perform one of the first CNN operations to generate the former one of the plurality of intermediate images, (Rossi et al., Figs. 3B - 5, Pg. 2 ¶ 0010 - 0011 and 0019, Pg. 4 ¶ 0031 - 0032 and 0035, Pg. 5 ¶ 0038 - 0041, Pg. 6 ¶ 0044 - 0047) and the corresponding one of the parameters. (Rossi et al., Figs. 3B - 5, Pg. 1 ¶ 0008 - Pg. 2 ¶ 0011, Pg. 3 ¶ 0023 - 0025, Pg. 4 ¶ 0035 - Pg. 5 ¶ 0038, Pg. 5 ¶ 0040 - 0041, Pg. 
6 ¶ 0044 - 0047) - With regards to claim 14, Rossi et al. in view of Georgescu et al. in view of Dijkman et al. disclose the system of claim 12, wherein one of the first CNN operations and one of the second CNN operations correspond to a same CNN layer. (Rossi et al., Figs. 3A - 5, Pg. 1 ¶ 0008 - Pg. 2 ¶ 0010, Pg. 4 ¶ 0033 - Pg. 5 ¶ 0038) Claims 15 - 17, 21 and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Rossi et al. U.S. Publication No. 2020/0380639 A1 in view of Chung et al. U.S. Patent No. 10,740,939 in view of Georgescu et al. U.S. Publication No. 2019/0206056 A1. - With regards to claim 15, Rossi et al. disclose a method, (Rossi et al., Abstract, Figs. 3A - 5, Pg. 1 ¶ 0007 - Pg. 2 ¶ 0011, Pg. 4 ¶ 0033 - 0035, Pg. 6 ¶ 0045 - 0048, Pg. 7 ¶ 0052) comprising: downscaling an input image to generate a first scaled image; (Rossi et al., Figs. 3B - 5, Pg. 2 ¶ 0010, Pg. 4 ¶ 0035, Pg. 5 ¶ 0040, Pg. 6 ¶ 0045 - 0047) extracting, from the first scaled image, first parameters associated with global features of the input image; (Rossi et al., Figs. 3A - 5, Pg. 1 ¶ 0008, Pg. 2 ¶ 0010, Pg. 3 ¶ 0023 - 0025, Pg. 4 ¶ 0035 - Pg. 5 ¶ 0038, Pg. 5 ¶ 0040 - 0041, Pg. 6 ¶ 0045 - 0046) performing a first non-local operation with the first parameters to a first image of the input image, to generate a second image; (Rossi et al., Figs. 3B - 5, Pg. 2 ¶ 0010 and 0019, Pg. 3 ¶ 0023 - 0027, Pg. 4 ¶ 0035 - Pg. 5 ¶ 0038, Pg. 6 ¶ 0044 - 0047 [“the first parameter set 357 (‘PARAM. SET 1’) may be applied to the input of the first convolutional layer 380 of a higher resolution artistic style transfer network executing on the second processing device, PROC 2. Likewise, the second parameter set 361 (‘PARAM. 
SET 2’) may be applied to the input of the second convolutional layer 382 of the higher resolution artistic style transfer network executing on the second processing device, and so forth for each such layer in the network for which parameters are necessary or desired”]) performing a first convolutional neural networks (CNN) operation to the second image to generate a third image; (Rossi et al., Figs. 3B - 5, Pg. 1 ¶ 0008 - Pg. 2 ¶ 0010, Pg. 2 ¶ 0019, Pg. 4 ¶ 0035 - Pg. 5 ¶ 0036, Pg. 5 ¶ 0041 - Pg. 6 ¶ 0042, Pg. 6 ¶ 0045 - 0047) and generating a portion of an output image corresponding to the input image based on the third image; (Rossi et al., Figs. 3B - 5, Pg. 1 ¶ 0008 - Pg. 2 ¶ 0010, Pg. 2 ¶ 0019, Pg. 4 ¶ 0035 - Pg. 5 ¶ 0036, Pg. 5 ¶ 0039 - Pg. 6 ¶ 0042, Pg. 6 ¶ 0045 - 0047) wherein performing a second CNN operation comprises calculating first values of pixels of a first intermediate image from the first scaled image; (Rossi et al., Figs. 3B - 5, Pg. 1 ¶ 0008, Pg. 2 ¶ 0019, Pg. 3 ¶ 0021 - 0027, Pg. 4 ¶ 0035, Pg. 5 ¶ 0039 - Pg. 6 ¶ 0042) wherein generating the portion of the output image comprises performing the first CNN operation comprising calculating second values of pixels of a second intermediate image from the input image, (Rossi et al., Figs. 3B - 5, Pg. 1 ¶ 0008, Pg. 2 ¶ 0019, Pg. 3 ¶ 0021 - 0027, Pg. 4 ¶ 0035, Pg. 5 ¶ 0039 - Pg. 6 ¶ 0042) the first values are calculated by V3 = A2((X2 - U2) / √ ([Q2]^2 + E) ) + B2, V3 is one of the first values, X2 is one of values of pixels of the first scaled image, (Rossi et al., Figs. 3B - 5, Pg. 1 ¶ 0008, Pg. 2 ¶ 0019, Pg. 3 ¶ 0021 - 0027, Pg. 4 ¶ 0035, Pg. 5 ¶ 0039 - Pg. 6 ¶ 0042) U2 is a mean value, Q2 is a standard deviation included in the global features, E is a positive real number, and A2 and B2 are affine parameters, (Rossi et al., Figs. 3B - 5, Pg. 1 ¶ 0008, Pg. 2 ¶ 0019, Pg. 3 ¶ 0021 - 0027, Pg. 4 ¶ 0035, Pg. 5 ¶ 0039 - Pg. 
6 ¶ 0042 [The Examiner asserts that the broadest reasonable interpretation of the affine parameters A2 and B2 encompasses interpretations wherein A2 and B2 are 1 and 0, respectively.]) and the second values are calculated by Y3 = A2((Y2 - U2) / √ ([Q2]^2 + E) ) + B2, and Y3 is one of the second values and Y2 is one of values of pixels of the input image; (Rossi et al., Figs. 3B - 5, Pg. 1 ¶ 0008, Pg. 2 ¶ 0019, Pg. 3 ¶ 0021 - 0027, Pg. 4 ¶ 0035, Pg. 5 ¶ 0039 - Pg. 6 ¶ 0042) processing the first image with a kernel to generate the second image; (Rossi et al., Abstract, Figs. 2 - 5, Pg. 1 ¶ 0007 - Pg. 2 ¶ 0010, Pg. 2 ¶ 0019, Pg. 3 ¶ 0027, Pg. 4 ¶ 0031 - 0035, Pg. 5 ¶ 0039 - Pg. 6 ¶ 0042, Pg. 6 ¶ 0044 - 0047 [“The extracted artistic style may then be stored as a plurality of layers in one or more neural networks”, “the artistic style may be applied to the target images and/or video sequence of images using a first version of the neural network by a first processing device at a first resolution to generate one or more sets of parameters (e.g., normalization factors), which parameters may then be mapped for use by a second version of the neural network by a second processing device at a second resolution”, “Convolutional Neural Networks consist of layers of small computational units that process visual information in a hierarchical fashion, e.g., often represented in the form of ‘layers.’ The output of a given layer consists of so-called ‘feature maps,’ i.e., differently-filtered versions of the input image...
To obtain a representation of the ‘style’ of an input image, Gatys proposes using a feature space that is built on top of the filter responses in multiple layers of the network and that consists of the correlations between the different filter responses over the spatial extent of the feature maps” and “neural network 400 may comprise a hybrid architecture, e.g., including a first part, e.g., lower resolution network (406), that may be executed on a first processing device and that may comprise many convolutional layers (e.g., 408, 410, 412, etc.) and a second part, e.g., higher resolution network (414) that may be executed on a second processing device and that may also comprise a number of convolutional layers (e.g., 416, 422).” The Examiner asserts that convolutional neural networks (CNNs) utilize kernels, i.e., filters, to generate the outputs of their convolutional layers and thus the filter utilized by a convolutional layer of the higher resolution CNN of Rossi et al. corresponds to the claimed kernel.]) and processing the first scaled image with the kernel to generate the first intermediate image. (Rossi et al., Abstract, Figs. 2 - 5, Pg. 1 ¶ 0007 - Pg. 2 ¶ 0010, Pg. 2 ¶ 0019, Pg. 3 ¶ 0023 - 0027, Pg. 4 ¶ 0031 - Pg. 5 ¶ 0036, Pg. 5 ¶ 0039 - Pg. 6 ¶ 0042, Pg. 
6 ¶ 0044 - 0047 [“The extracted artistic style may then be stored as a plurality of layers in one or more neural networks”, “the artistic style may be applied to the target images and/or video sequence of images using a first version of the neural network by a first processing device at a first resolution to generate one or more sets of parameters (e.g., normalization factors), which parameters may then be mapped for use by a second version of the neural network by a second processing device at a second resolution”, “Convolutional Neural Networks consist of layers of small computational units that process visual information in a hierarchical fashion, e.g., often represented in the form of ‘layers.’ The output of a given layer consists of so-called ‘feature maps,’ i.e., differently-filtered versions of the input image... To obtain a representation of the ‘style’ of an input image, Gatys proposes using a feature space that is built on top of the filter responses in multiple layers of the network and that consists of the correlations between the different filter responses over the spatial extent of the feature maps”, “While the example of FIG. 2 above shows a single neural network for the application of the selected artistic style, according to some embodiments, more than one ‘version’ of style transfer neural network may be created for each artistic style. The different versions of the neural networks may, e.g., operate at different resolution levels, have different numbers of layers, different kinds of layers, different network architectures, and/or have different optimizations applied”, “neural network 400 may comprise a hybrid architecture, e.g., including a first part, e.g., lower resolution network (406), that may be executed on a first processing device and that may comprise many convolutional layers (e.g., 408, 410, 412, etc.) 
and a second part, e.g., higher resolution network (414) that may be executed on a second processing device and that may also comprise a number of convolutional layers (e.g., 416, 422)” and “The output of the lower resolution network 406, i.e., after processing by each of convolutional layers 1..N in the network (as well as one or more additional optional low resolution convolutions following layer N, if needed), may also be output as its own low resolution stylized output image (426), if so desired.” Rossi et al. disclose that an artistic style may be stored as a plurality of layers in a style transfer neural network, such as a convolutional neural network (CNN), and that different “versions” of the style transfer neural network may be created for the artistic style, such as versions of the style transfer neural network that operate at different resolution levels. The Examiner asserts that one of ordinary skill in the art would understand that first and second versions of a style transfer neural network for application of a selected artistic style of Rossi et al. that operate at different resolution levels would utilize the same kernels to generate stylized output images at the different resolution levels at least because they are being utilized to output images in the same selected artistic style. Furthermore, the Examiner asserts that one of ordinary skill in the art would understand that in order for low and high resolution CNNs to output low and high resolution stylized images, respectively, in a same artistic style that a kernel utilized in a convolutional layer of the low resolution CNN would be the same as a kernel utilized in a corresponding convolutional layer of the high resolution CNN.]) Rossi et al. 
fail to disclose explicitly performing the first convolutional neural networks (CNN) operation to a first image block of a plurality of image blocks in the input image, to generate a second image block; performing the first non-local operation to the second image block to generate a third image block; generating a portion of an output image based on the third image block; after the portion is generated, performing the first CNN operation to a fourth image block beside the first image block, to generate another portion beside the portion, wherein the third image block has a number of pixels being the same as each of a number of pixels of the first image block and a number of pixels of the second image block; and processing the first image block to generate the second image block. Pertaining to analogous art, Chung et al. disclose performing a first convolutional neural networks (CNN) operation to a first image of the input image, to generate a second image; (Chung et al., Figs. 9 & 11A, Col. 14 Line 60 - Col. 15 Line 51) performing a first non-local operation with the first parameters to the second image to generate a third image; (Chung et al., Figs. 9 & 11A, Col. 12 Lines 13 - 37, Col. 14 Line 60 - Col. 15 Line 51) generating a portion of an output image corresponding to the input image based on the third image; (Chung et al., Figs. 9 & 11A, Col. 14 Line 60 - Col. 15 Line 51) and processing the first image with a kernel to generate the second image. (Chung et al., Figs. 9, 11A & 12, Col. 2 Lines 3 - 6, Col. 10 Line 64 - Col. 11 Line 37, Col. 12 Lines 9 - 63, Col. 14 Line 60 - Col. 16 Line 3) Chung et al. 
fail to disclose explicitly performing an operation to a first image block of a plurality of image blocks to generate a second image block; performing an operation to the second image block to generate a third image block; generating a portion of an output image based on the third image block; after the portion is generated, performing the first CNN operation to a fourth image block beside the first image block, to generate another portion beside the portion, wherein the third image block has a number of pixels being the same as each of a number of pixels of the first image block and a number of pixels of the second image block; and processing the first image block to generate the second image block. Pertaining to analogous art, Georgescu et al. disclose performing a first convolutional neural networks (CNN) operation to a first image block of a plurality of image blocks in the input image, to generate a second image block; (Georgescu et al. Abstract, Figs. 1A, 1B & 5, Pg. 1 ¶ 0020 - 0025, Pg. 2 ¶ 0029 - 0031, Pg. 2 ¶ 0040 - Pg. 3 ¶ 0046, Pg. 4 ¶ 0066 - 0069, Pg. 4 ¶ 0072 - Pg. 5 ¶ 0074, Pg. 5 ¶ 0083 - 0087, Pg. 6 ¶ 0093) performing a first non-local operation to the second image block to generate a third image block; (Georgescu et al. Abstract, Figs. 1A, 1B & 5, Pg. 1 ¶ 0020 - 0025, Pg. 2 ¶ 0029 - 0031, Pg. 2 ¶ 0040 - Pg. 3 ¶ 0046, Pg. 4 ¶ 0066 - 0069, Pg. 4 ¶ 0072 - Pg. 5 ¶ 0074, Pg. 5 ¶ 0083 - 0087, Pg. 6 ¶ 0093) generating a portion of an output image corresponding to the input image based on the third image block; (Georgescu et al. Abstract, Figs. 1A, 1B & 5, Pg. 1 ¶ 0020 - 0025, Pg. 2 ¶ 0029 - 0031, Pg. 2 ¶ 0040 - Pg. 3 ¶ 0046, Pg. 4 ¶ 0066 - 0069, Pg. 4 ¶ 0072 - Pg. 5 ¶ 0074, Pg. 5 ¶ 0083 - 0087, Pg. 6 ¶ 0093) after the portion is generated, performing the first CNN operation to a fourth image block beside the first image block, to generate another portion beside the portion, (Georgescu et al., Abstract, Figs. 1A, 1B & 5 - 8, Pg. 1 ¶ 0020 - 0025, Pg. 
2 ¶ 0031, Pg. 3 ¶ 0043 - 0047, Pg. 4 ¶ 0065 - 0066, Pg. 5 ¶ 0080 - 0081, Pg. 7 ¶ 0134 - 0137 [“image processing will typically be applied to patches which are of a manageable size (e.g. ca. 500x500 pixels) for processing by a CNN. The WSI will thus be processed on the basis of splitting it into patches, analyzing the patches with the CNN, then reassembling the output (image) patches into a probability map of the same size as the WSI”]) wherein the third image block has a number of pixels being the same as each of a number of pixels of the first image block and a number of pixels of the second image block, (Georgescu et al., Abstract, Figs. 1A, 1B & 5, Pg. 1 ¶ 0020 - 0025, Pg. 2 ¶ 0031, Pg. 2 ¶ 0041 - Pg. 3 ¶ 0046, Pg. 4 ¶ 0065 - 0072, Pg. 5 ¶ 0077, 0080 - 0083 and 0085 - 0087, Pg. 6 ¶ 0089 and 0093) and processing the first image block with a kernel to generate the second image block. (Georgescu et al. Abstract, Figs. 1A, 1B, 4 & 5, Pg. 1 ¶ 0020 - 0025, Pg. 2 ¶ 0029 - 0032, Pg. 2 ¶ 0040 - Pg. 3 ¶ 0046, Pg. 4 ¶ 0066 - 0069, Pg. 4 ¶ 0072 - Pg. 5 ¶ 0074, Pg. 5 ¶ 0082 - 0087, Pg. 6 ¶ 0093, Pg. 7 ¶ 0126 - 0129 and 0131 - 0137) Rossi et al. and Chung et al. are combinable because they are both directed towards image processing methods and devices that utilize convolutional neural networks (CNNs) to apply an artistic style to an input image. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Rossi et al. with the teachings of Chung et al. This modification would have been prompted in order to enhance the base device of Rossi et al. with the well-known and applicable technique Chung et al. applied to a comparable device. Performing the first CNN operation to the input image prior to performing the first non-local operation to the second image, as taught by Chung et al., would enhance the base device of Rossi et al. 
by improving its ability to obtain the best stylized output image results possible since the stylized output image would undergo the non-local operation processing last before being output so as to help ensure that the contrast information of the stylized output image matches the contrast information of the source image, the low-resolution version of the input image, as closely as possible. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that the first CNN operation would be performed on the input image prior to performing the first non-local operation so as to help ensure that the best stylized output image results possible are obtained by ensuring that the contrast information of the stylized output image matches the contrast information of the source image, the low-resolution version of the input image, as closely as possible. In addition, Rossi et al. in view of Chung et al. and Georgescu et al. are combinable because they are all directed towards processing input images with convolutional neural networks (CNNs) to generate output images. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Rossi et al. in view of Chung et al. with the teachings of Georgescu et al. This modification would have been prompted in order to enhance the combined base device of Rossi et al. in view of Chung et al. with the well-known and applicable technique Georgescu et al. applied to a similar device. 
Individually performing the first operations to abutting image blocks of the input image to generate output image blocks having a same number of pixels, as taught by Georgescu et al., would enhance the combined base device by helping ensure that the processing is amenable to digital processing by a suitable processor, as taught and suggested by Georgescu et al., see at least page 4 paragraphs 0065 - 0066 of Georgescu et al. Furthermore, this modification would enhance the combined base device by helping facilitate processing of high-resolution images since the high-resolution images would be split into image blocks of a more manageable size for processing by the CNN. Moreover, this modification would have been prompted by the teachings and suggestions of Rossi et al. that various fusions of operations may be performed on the network data to help reduce bandwidth usage caused by processing large-sized images, see at least page 6 paragraphs 0044 and 0048 of Rossi et al., and by the teachings and suggestions of Chung et al. that the convolution method can be applied to image patches, see at least column 12 lines 38 - 54 of Chung et al. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that the first operations would be applied to image blocks of the input image individually so as to help make processing of high-resolution input images more amenable to digital processing by a suitable processor. Therefore, it would have been obvious to combine Rossi et al. with Chung et al. and Georgescu et al. to obtain the invention as specified in claim 15.

With regards to claim 16, Rossi et al. in view of Chung et al. in view of Georgescu et al. disclose the method of claim 15, further comprising: storing the first parameters in a memory; (Rossi et al., Figs. 3B - 6, Pg. 1 ¶ 0009 - Pg. 2 ¶ 0011, Pg. 3 ¶ 0026 - 0027, Pg. 4 ¶ 0033 - 0035, Pg. 5 ¶ 0038 - 0039 and 0041, Pg. 6 ¶ 0044 - Pg. 
7 ¶ 0052) and when the third image block is required for the first non-local operation, receiving the first parameters from the memory. (Rossi et al., Figs. 3B - 6, Pg. 1 ¶ 0008 - Pg. 2 ¶ 0011, Pg. 3 ¶ 0027, Pg. 4 ¶ 0035 - Pg. 5 ¶ 0041, Pg. 6 ¶ 0045 - Pg. 7 ¶ 0052)

With regards to claim 17, Rossi et al. in view of Chung et al. in view of Georgescu et al. disclose the method of claim 15, further comprising: performing the second CNN operation to the first scaled image to generate a second scaled image; (Rossi et al., Figs. 3B - 5, Pg. 2 ¶ 0019, Pg. 4 ¶ 0035 - Pg. 5 ¶ 0039, Pg. 5 ¶ 0041 - Pg. 6 ¶ 0042) and performing a second non-local operation with the first parameters to the second scaled image to generate a third scaled image. (Rossi et al., Figs. 3A - 5, Pg. 2 ¶ 0019, Pg. 3 ¶ 0023 - 0025, Pg. 4 ¶ 0033 - Pg. 5 ¶ 0039, Pg. 5 ¶ 0041 - Pg. 6 ¶ 0042)

With regards to claim 21, Rossi et al. in view of Chung et al. in view of Georgescu et al. disclose the method of claim 15, wherein E is equal to a small value. (Rossi et al., Pg. 3 ¶ 0021 and 0025 [“wherein ε is a small value to avoid divide by zero errors”.]) Rossi et al. fail to disclose expressly wherein E is equal to 10⁻⁵; however, it has been held that when the general conditions of a claim are disclosed in the prior art, discovering the optimum “ranges, or measurements” involves only routine skill in the art. See MPEP § 2144.05. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Rossi et al. in view of Chung et al. in view of Georgescu et al. to include utilizing 10⁻⁵ as the value of E. The Examiner asserts that it has been held that when the general conditions of a claim are disclosed in the prior art, discovering the optimum “ranges, or measurements” involves only routine skill in the art. 
The normal desire of scientists or artisans to improve upon what is already generally known provides the motivation to determine where in a disclosed set of percentage ranges is the optimum combination of percentages. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize 10⁻⁵ as the value of E for a variety of reasons, such as due to the simplicity of its use in calculations, due to its small value and/or to avoid divide by zero errors, as suggested by Rossi et al., see at least page 3 paragraph 0025 of Rossi et al., and to improve the efficiency and reliability of the combined base device by simplifying calculations of instance normalized values while simultaneously ensuring that divide by zero computational errors are avoided. Also, see MPEP § 2144.05. This modification could be completed according to well-known techniques in the art and would likely yield predictable results, in that 10⁻⁵ would be utilized as the value of E in the pixel value calculations of the combined base device. Therefore, it would have been obvious to combine Rossi et al. in view of Chung et al. in view of Georgescu et al. with E equal to 10⁻⁵ to obtain the invention as specified in claim 21.

With regards to claim 23, Rossi et al. in view of Chung et al. in view of Georgescu et al. disclose the method of claim 15. Rossi et al. fail to disclose explicitly determining whether image blocks of the entire output image are generated; if the image blocks of the entire output image are generated, outputting the output image; and if some of the image blocks of the entire output image are not generated, processing another image block of the input image. Pertaining to analogous art, Georgescu et al. disclose determining whether image blocks of the entire output image are generated; (Georgescu et al., Abstract, Figs. 1A, 5 & 6, Pg. 1 ¶ 0020 - 0025, Pg. 2 ¶ 0031, Pg. 3 ¶ 0043 - 0046, Pg. 4 ¶ 0065 - 0066, Pg. 5 ¶ 0080 - 0083, Pg. 
7 ¶ 0126 - 0137) if the image blocks of the entire output image are generated, outputting the output image; (Georgescu et al., Abstract, Figs. 1A, 5 & 6, Pg. 1 ¶ 0020 - 0025, Pg. 2 ¶ 0031, Pg. 3 ¶ 0043 - 0046, Pg. 4 ¶ 0065 - 0066, Pg. 5 ¶ 0080 - 0083, Pg. 7 ¶ 0126 - 0137) and if some of the image blocks of the entire output image are not generated, processing another image block of the input image. (Georgescu et al., Abstract, Figs. 1A, 5 & 6, Pg. 1 ¶ 0020 - 0025, Pg. 2 ¶ 0031, Pg. 3 ¶ 0043 - 0046, Pg. 4 ¶ 0065 - 0066, Pg. 5 ¶ 0080 - 0083, Pg. 7 ¶ 0126 - 0137)

Claim 22 is rejected under 35 U.S.C. 103 as being unpatentable over Rossi et al. U.S. Publication No. 2020/0380639 A1 in view of Chung et al. U.S. Patent No. 10,740,939 in view of Georgescu et al. U.S. Publication No. 2019/0206056 A1 as applied to claim 15 above, and further in view of Alexander et al. U.S. Patent No. 5,467,459.

With regards to claim 22, Rossi et al. in view of Chung et al. in view of Georgescu et al. disclose the method of claim 15, further comprising: transmitting the input image between a memory and a chip, (Rossi et al., Figs. 3A - 6, Pg. 1 ¶ 0008 - Pg. 2 ¶ 0011, Pg. 6 ¶ 0044, Pg. 6 ¶ 0049 - Pg. 7 ¶ 0052) and a random-access memory (RAM). (Rossi et al., Fig. 6, Pg. 2 ¶ 0011, Pg. 6 ¶ 0049, Pg. 7 ¶ 0051 - 0052) Rossi et al. fail to disclose explicitly transmitting corresponding to a dynamic random-access memory (DRAM) bandwidth. Pertaining to analogous art, Georgescu et al. disclose transmitting the input image between a memory and a chip, (Georgescu et al., Figs. 6 - 8, Pg. 7 ¶ 0142 - Pg. 8 ¶ 0145, Pg. 8 ¶ 0150 - Pg. 9 ¶ 0156) and a dynamic random-access memory (DRAM). (Georgescu et al., Fig. 6, Pg. 8 ¶ 0144, Pg. 9 ¶ 0152) Georgescu et al. fail to disclose explicitly transmitting corresponding to a dynamic random-access memory (DRAM) bandwidth. Pertaining to analogous art, Alexander et al. 
disclose transmitting the input image between a memory and a chip corresponding to a dynamic random-access memory (DRAM) bandwidth. (Alexander et al., Fig. 1, Col. 7 Line 10 - Col. 8 Line 12) Rossi et al. in view of Chung et al. in view of Georgescu et al. and Alexander et al. are combinable because they are all directed towards image processing systems that facilitate image processing operations, such as image transforms and convolution. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Rossi et al. in view of Chung et al. in view of Georgescu et al. with the teachings of Alexander et al. This modification would have been prompted in order to enhance the combined base device of Rossi et al. in view of Chung et al. in view of Georgescu et al. with the well-known and applicable technique Alexander et al. applied to a similar device. Transmitting image data between a memory and a chip corresponding to a dynamic random-access memory (DRAM) bandwidth, as taught by Alexander et al., would enhance the combined base device by maximizing its data transfer bandwidth relative to its capabilities so as to enable its image processing operations to be carried out quickly and efficiently at very high speeds. Furthermore, this modification would have been prompted by the teachings and suggestions of Rossi et al. that various electronic and processing devices may be utilized to implement their teachings, that large image sizes increase memory bandwidth requirements and that data transfer may have a high cost in terms of latency and/or memory utilization, see at least page 2 paragraph 0011, page 3 paragraph 0027, page 5 paragraph 0044 and page 6 paragraph 0049 - page 7 paragraph 0052 of Rossi et al. 
This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that image data between the memory and chip of the combined base device would be transmitted corresponding to a dynamic random-access memory (DRAM) bandwidth so as to maximize data transfer bandwidth of the combined base device relative to its capabilities and thereby enable its image processing operations to be carried out quickly and efficiently at very high speeds. Therefore, it would have been obvious to combine Rossi et al. in view of Chung et al. in view of Georgescu et al. with Alexander et al. to obtain the invention as specified in claim 22.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ERIC RUSH whose telephone number is (571) 270-3017. The examiner can normally be reached 9am - 5pm Monday - Friday. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Bee, can be reached at (571) 270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. 
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /ERIC RUSH/Primary Examiner, Art Unit 2677
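The rejections above repeatedly map the normalization recited in claim 1, V3 = A2((X2 - U2) / √([Q2]² + E)) + B2, and claim 21's E = 10⁻⁵, to the instance normalization of Rossi et al. As a reader aid, here is a minimal Python sketch of that calculation; the function name and the sample pixel values are hypothetical and purely illustrative, not taken from the application or the cited references.

```python
import math

def claimed_normalization(x, u, q, a=1.0, b=0.0, e=1e-5):
    # V = A * ((X - U) / sqrt(Q^2 + E)) + B, the formula recited in
    # claim 1 (claim symbols A2, B2, U2, Q2, E written as a, b, u, q, e).
    return a * ((x - u) / math.sqrt(q * q + e)) + b

# Hypothetical pixel values, for illustration only.
pixels = [10.0, 20.0, 30.0]
u = sum(pixels) / len(pixels)                                    # U2: mean
q = math.sqrt(sum((p - u) ** 2 for p in pixels) / len(pixels))   # Q2: std dev
normalized = [claimed_normalization(p, u, q) for p in pixels]
# The normalized pixels are zero-mean; E = 1e-5 only guards against a
# divide-by-zero when the standard deviation Q2 is zero, per Rossi ¶ 0025.
```

With A2 = 1 and B2 = 0, the reading the Examiner takes as the broadest reasonable interpretation of the affine parameters, the formula reduces to plain instance normalization of the pixel block.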

Prosecution Timeline

Oct 04, 2021
Application Filed
May 29, 2024
Non-Final Rejection — §102, §103, §112
Sep 03, 2024
Response Filed
Nov 20, 2024
Final Rejection — §102, §103, §112
Jan 08, 2025
Interview Requested
Jan 21, 2025
Response after Non-Final Action
Feb 07, 2025
Request for Continued Examination
Feb 10, 2025
Response after Non-Final Action
Mar 22, 2025
Non-Final Rejection — §102, §103, §112
May 27, 2025
Interview Requested
Jun 16, 2025
Examiner Interview Summary
Jun 16, 2025
Applicant Interview (Telephonic)
Jun 26, 2025
Response Filed
Oct 03, 2025
Final Rejection — §102, §103, §112
Dec 16, 2025
Response after Non-Final Action
Jan 12, 2026
Request for Continued Examination
Jan 26, 2026
Response after Non-Final Action
Feb 21, 2026
Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586229
COMPUTER IMPLEMENTED METHODS AND DEVICES FOR DETERMINING DIMENSIONS AND DISTANCES OF HEAD FEATURES
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12548292
METHOD AND SYSTEM FOR IDENTIFYING REFLECTIONS IN THERMAL IMAGES
Granted Feb 10, 2026 (2y 5m to grant)
Patent 12548395
SYSTEMS, METHODS AND DEVICES FOR MONITORING BETTING ACTIVITIES
Granted Feb 10, 2026 (2y 5m to grant)
Patent 12541856
MASKING OF OBJECTS IN AN IMAGE STREAM
Granted Feb 03, 2026 (2y 5m to grant)
Patent 12518504
METHOD FOR CALIBRATING AN OBJECT RE-IDENTIFICATION SOLUTION IMPLEMENTING AN ARRAY OF A PLURALITY OF CAMERAS
Granted Jan 06, 2026 (2y 5m to grant)
Based on the 5 most recent grants.

Prosecution Projections

5-6
Expected OA Rounds
61%
Grant Probability
97%
With Interview (+36.2%)
3y 5m
Median Time to Grant
High
PTA Risk
Based on 628 resolved cases by this examiner. Grant probability derived from career allow rate.
