Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED OFFICE ACTION
Status of Claims
Claims 1-20 are pending and have been examined.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
1. Claims 1, 18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Takada (USPUB 20230095184) in view of Weisheng Dong et al. (NPL DOC: "Denoising Prior Driven Deep Neural Network for Image Restoration," 12 September 2019, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 41, No. 10, October 2019, pp. 2305-2316).
As per claim 1, Takada teaches A method executed by an electronic device (the electronic device is interpreted as the cloud server taught within Paragraph [0029]: “…The cloud server 200 is in charge of generating training data, estimating image quality degradation, and doing training for restoration. The edge device 100 is in charge of degradation restoration on an image to be processed….”), the method comprising: acquiring a first image that is obtained by adding noise to a second image (adding noise to images for training/processing taught within FIG. 2, 200 AND Paragraph [0054]: “…211 generates student image data by adding at least one or more types of degradation elements to teacher image data taken out of a degradation-free teacher image group. In the present exemplary embodiment, noise is described as an example of the degradation elements. The degradation addition unit 211 therefore generates student image data by adding noise as a degradation element to the teacher image data. In the present exemplary embodiment, the degradation addition unit 211 analyzes the physical properties of the imaging apparatus 10, and generates student image data by adding noise corresponding to a wider range of amounts of degradation than that of possible amounts of degradation occurring in the imaging apparatus…” AND Paragraph [0064]), the second image comprising a target restoration region (Paragraphs [0059]-[0060]: “…The training-specific degradation restoration unit 214 receives the student image data 308 and the degradation estimation result 310 estimated by the training-specific degradation estimation unit 213, and performs restoration processing on the student image data 308. Specifically, the training-specific degradation restoration unit 214 initially inputs the student image data 308 …”);
Takada does not explicitly teach performing at least one first denoising process on the first image using a first artificial intelligence (AI) network to obtain a first denoising result; and restoring the target restoration region based on the first denoising result using a second AI network to obtain a restored image.
However, within analogous art, Weisheng Dong et al. teaches performing at least one first denoising process on the first image using a first artificial intelligence (AI) network to obtain a first denoising result (denoising process of images with an AI/neural network taught within Page 2306, Col. 2: “…whose layers mimic the process flow of the proposed denoising-based IR algorithm. Moreover, an effective DCNN denoiser that can exploit the multi-scale redundancies is proposed and plugged into the deep network. Through end-to-end training, both the DCNN denoisers and other network parameters can be jointly optimized….” AND Page 2306, Col. 2: “…2.1 Denoising-Based IR Methods…” AND Page 2309, Fig. 1); and restoring the target restoration region based on the first denoising result using a second AI network to obtain a restored image (restoration of the image after denoising taught within Page 2309, Fig. 1 AND Page 2307, Col. 2, Section 3, “PROPOSED DENOISING-BASED IMAGE RESTORATION ALGORITHM” AND Page 2307, Col. 1: “…Though excellent IR performances have been obtained, these DCNN methods generally treat the IR problems as denoising problems, i.e., removing the noise or artifacts of the initially recovered images, and ignore the observation models….”).
One of ordinary skill in the art would have been motivated to combine the teaching of Weisheng Dong et al. with the teaching of Takada (information processing apparatus, information processing method, and storage medium) because Weisheng Dong et al. (Denoising Prior Driven Deep Neural Network for Image Restoration) provides a method and system for implementing neural network models for image restoration through a denoising process.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the denoising prior driven deep neural network for image restoration of Weisheng Dong et al. within the information processing apparatus, information processing method, and storage medium of Takada in order to implement a system and method that uses neural network models for image restoration through a denoising process.
As per claim 18, Takada teaches An electronic device (the electronic device is interpreted as the cloud server taught within Paragraph [0029]: “…The cloud server 200 is in charge of generating training data, estimating image quality degradation, and doing training for restoration. The edge device 100 is in charge of degradation restoration on an image to be processed….”), comprising: a memory configured to store instructions (Paragraph [0032]: “…performing various types of processing. The RAM 203 is used as a temporary storage area such as a main memory and a work area of the CPU 201….”); and at least one processor configured to execute the instructions (Paragraph [0129]: “…The computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions. …”) to: acquire a first image that is obtained by adding noise to a second image (adding noise to images for training/processing taught within FIG. 2, 200 AND Paragraph [0054]: “…211 generates student image data by adding at least one or more types of degradation elements to teacher image data taken out of a degradation-free teacher image group. In the present exemplary embodiment, noise is described as an example of the degradation elements. The degradation addition unit 211 therefore generates student image data by adding noise as a degradation element to the teacher image data. In the present exemplary embodiment, the degradation addition unit 211 analyzes the physical properties of the imaging apparatus 10, and generates student image data by adding noise corresponding to a wider range of amounts of degradation than that of possible amounts of degradation occurring in the imaging apparatus…” AND Paragraph [0064]), the second image comprising a target restoration region (Paragraphs [0059]-[0060]: “…The training-specific degradation restoration unit 214 receives the student image data 308 and the degradation estimation result 310 estimated by the training-specific degradation estimation unit 213, and performs restoration processing on the student image data 308. Specifically, the training-specific degradation restoration unit 214 initially inputs the student image data 308 …”);
Takada does not explicitly teach perform at least one first denoising process on the first image using a first artificial intelligence (AI) network to obtain a first denoising result; and
restore the target restoration region based on the first denoising result using a second AI network to obtain a restored image.
However, within analogous art, Weisheng Dong et al. teaches perform at least one first denoising process on the first image using a first artificial intelligence (AI) network to obtain a first denoising result (denoising process of images with an AI/neural network taught within Page 2306, Col. 2: “…whose layers mimic the process flow of the proposed denoising-based IR algorithm. Moreover, an effective DCNN denoiser that can exploit the multi-scale redundancies is proposed and plugged into the deep network. Through end-to-end training, both the DCNN denoisers and other network parameters can be jointly optimized….” AND Page 2306, Col. 2: “…2.1 Denoising-Based IR Methods…” AND Page 2309, Fig. 1); and restore the target restoration region based on the first denoising result using a second AI network to obtain a restored image (restoration of the image after denoising taught within Page 2309, Fig. 1 AND Page 2307, Col. 2, Section 3, “PROPOSED DENOISING-BASED IMAGE RESTORATION ALGORITHM” AND Page 2307, Col. 1: “…Though excellent IR performances have been obtained, these DCNN methods generally treat the IR problems as denoising problems, i.e., removing the noise or artifacts of the initially recovered images, and ignore the observation models….”).
One of ordinary skill in the art would have been motivated to combine the teaching of Weisheng Dong et al. with the teaching of Takada (information processing apparatus, information processing method, and storage medium) because Weisheng Dong et al. (Denoising Prior Driven Deep Neural Network for Image Restoration) provides a method and system for implementing neural network models for image restoration through a denoising process.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the denoising prior driven deep neural network for image restoration of Weisheng Dong et al. within the information processing apparatus, information processing method, and storage medium of Takada in order to implement a system and method that uses neural network models for image restoration through a denoising process.
As per claim 20, Takada teaches A non-transitory computer-readable storage medium storing instructions (Paragraph [0129]: “…computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a ‘non-transitory computer-readable storage medium’) to perform the functions …”) which, when executed by at least one processor (Paragraph [0129]: “…The computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions. …”), cause the at least one processor to: acquire a first image that is obtained by adding noise to a second image (adding noise to images for training/processing taught within FIG. 2, 200 AND Paragraph [0054]: “…211 generates student image data by adding at least one or more types of degradation elements to teacher image data taken out of a degradation-free teacher image group. In the present exemplary embodiment, noise is described as an example of the degradation elements. The degradation addition unit 211 therefore generates student image data by adding noise as a degradation element to the teacher image data. In the present exemplary embodiment, the degradation addition unit 211 analyzes the physical properties of the imaging apparatus 10, and generates student image data by adding noise corresponding to a wider range of amounts of degradation than that of possible amounts of degradation occurring in the imaging apparatus…” AND Paragraph [0064]), the second image comprising a target restoration region (Paragraphs [0059]-[0060]: “…The training-specific degradation restoration unit 214 receives the student image data 308 and the degradation estimation result 310 estimated by the training-specific degradation estimation unit 213, and performs restoration processing on the student image data 308. Specifically, the training-specific degradation restoration unit 214 initially inputs the student image data 308 …”);
Takada does not explicitly teach perform at least one first denoising process on the first image using a first artificial intelligence (AI) network to obtain a first denoising result; and restore the target restoration region based on the first denoising result using a second AI network to obtain a restored image.
However, within analogous art, Weisheng Dong et al. teaches perform at least one first denoising process on the first image using a first artificial intelligence (AI) network to obtain a first denoising result (denoising process of images with an AI/neural network taught within Page 2306, Col. 2: “…whose layers mimic the process flow of the proposed denoising-based IR algorithm. Moreover, an effective DCNN denoiser that can exploit the multi-scale redundancies is proposed and plugged into the deep network. Through end-to-end training, both the DCNN denoisers and other network parameters can be jointly optimized….” AND Page 2306, Col. 2: “…2.1 Denoising-Based IR Methods…” AND Page 2309, Fig. 1); and restore the target restoration region based on the first denoising result using a second AI network to obtain a restored image (restoration of the image after denoising taught within Page 2309, Fig. 1 AND Page 2307, Col. 2, Section 3, “PROPOSED DENOISING-BASED IMAGE RESTORATION ALGORITHM” AND Page 2307, Col. 1: “…Though excellent IR performances have been obtained, these DCNN methods generally treat the IR problems as denoising problems, i.e., removing the noise or artifacts of the initially recovered images, and ignore the observation models….”).
One of ordinary skill in the art would have been motivated to combine the teaching of Weisheng Dong et al. with the teaching of Takada (information processing apparatus, information processing method, and storage medium) because Weisheng Dong et al. (Denoising Prior Driven Deep Neural Network for Image Restoration) provides a method and system for implementing neural network models for image restoration through a denoising process.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the denoising prior driven deep neural network for image restoration of Weisheng Dong et al. within the information processing apparatus, information processing method, and storage medium of Takada in order to implement a system and method that uses neural network models for image restoration through a denoising process.
2. Claims 2-5 are rejected under 35 U.S.C. 103 as being unpatentable over Takada (USPUB 20230095184) in view of Weisheng Dong et al. (NPL DOC: "Denoising Prior Driven Deep Neural Network for Image Restoration," 12 September 2019, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 41, No. 10, October 2019, pp. 2305-2316), and further in view of KANG et al. (USPUB 20210166352).
As per claim 2, the combination of Takada and Weisheng Dong et al. teaches the limitations of claim 1.
The combination of Takada and Weisheng Dong et al. does not explicitly teach further comprising: receiving a first instruction to select a target restoration region in a third image, the third image comprising the second image before the target restoration region is removed; and determining, based on the first instruction, the target restoration region and the second image comprising the target restoration region.
Within analogous art, KANG et al. teaches further comprising: receiving a first instruction to select a target restoration region in a third image (Paragraph [0023]: “…The pixel position information may comprise a third image, in which, a third value changes based on a distance from a reference point in polar coordinates, and a fourth image, in which, a fourth value changes based on an angle with respect to a reference line, and the third image and the fourth image have a same resolution as the target image….”), the third image comprising the second image before the target restoration region is removed (Paragraph [0046]: “…The image restoration model 130 may perform an image processing of the input target image 110 and may output the restoration image 140. The image processing may include, for example, a super resolution (SR), deblurring, denoising, demosaicing, or inpainting. The SR may be an image processing to increase a resolution of an image, the deblurring may be an image processing to remove a blur included in an image,…”); and determining, based on the first instruction, the target restoration region and the second image comprising the target restoration region (Paragraphs [0061]-[0062]: “…The target image 310 and the pixel position information 320 may be concatenated to each other and may be input to the image restoration model 330. The image restoration model 330 may be a neural network that processes an input image using at least one convolution layer…”).
One of ordinary skill in the art would have been motivated to combine the teaching of KANG et al. with the combined teaching of Takada (information processing apparatus, information processing method, and storage medium) and Weisheng Dong et al. (Denoising Prior Driven Deep Neural Network for Image Restoration) because KANG et al. (Method and apparatus for restoring image) provides a method and system for restoring a target image by removing degradation within the image.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the method and apparatus for restoring an image of KANG et al. within the combined teaching of Takada and Weisheng Dong et al. in order to implement a system and method for target image restoration by removing degradation within the image.
As per claim 3, the combination of Takada, Weisheng Dong et al., and KANG et al. teaches the limitations of claim 2.
The combination of Takada and Weisheng Dong et al. does not explicitly teach wherein the performing the at least one first denoising process comprises: determining target restoration content information corresponding to the target restoration region; and performing, based on the target restoration content information, the at least one first denoising process on the first image using the first AI network.
Within analogous art, KANG et al. teaches wherein the performing the at least one first denoising process comprises: determining target restoration content information corresponding to the target restoration region (restoration images taught within Paragraph [0045]: “… image restoration model 130 may output a restoration image 140 based on pixel position information 120 in response to an input of a target image 110. The target image 110 may refer to an image input to the image restoration model 130, and may be an image that includes different levels of degradation based on a pixel position….”); and performing, based on the target restoration content information, the at least one first denoising process on the first image using the first AI network (Paragraph [0046]: “…The image restoration model 130 may perform an image processing of the input target image 110 and may output the restoration image 140. The image processing may include, for example, a super resolution (SR), deblurring, denoising, demosaicing, or inpainting. The SR may be an image processing to increase a resolution of an image, the deblurring may be an image processing to remove a blur included in an image, the denoising may be an image processing to cancel noise included in an image, …”).
One of ordinary skill in the art would have been motivated to combine the teaching of KANG et al. with the combined teaching of Takada (information processing apparatus, information processing method, and storage medium) and Weisheng Dong et al. (Denoising Prior Driven Deep Neural Network for Image Restoration) because KANG et al. (Method and apparatus for restoring image) provides a method and system for restoring a target image by removing degradation within the image.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the method and apparatus for restoring an image of KANG et al. within the combined teaching of Takada and Weisheng Dong et al. in order to implement a system and method for target image restoration by removing degradation within the image.
As per claim 4, the combination of Takada, Weisheng Dong et al., and KANG et al. teaches the limitations of claim 3.
The combination of Takada and KANG et al. does not explicitly teach wherein the determining the target restoration content information comprises: providing class information about restoration contents; receiving a second instruction to select target class information; and determining, based on the second instruction, the target restoration content information.
Within analogous art, Weisheng Dong et al. teaches wherein the determining the target restoration content information comprises: providing class information about restoration contents; receiving a second instruction to select target class information (Page 2306, Col. 1: “…proposed, where mapping functions from the low-resolution (LR) patches to high-resolution (HR) patches are learned. Inspired by the great successes of the deep convolution neural network (DCNN) for image classification…the DCNN models have also been successfully applied to image IR tasks,… for image super-resolution, and…for image denoising. In these methods, a DCNN is used to learn the mapping function from the degraded images to the original images …” AND Page 2307, Col. 1: “…DCNNs for image classification [37], [38], object detection [48], [49], semantical segmentation…DCNNs have also been applied for low level image processing tasks… learning has been proposed for image restoration…”); and determining, based on the second instruction, the target restoration content information (Page 2309, Fig. 1, restoration (b)).
As per claim 5, the combination of Takada, Weisheng Dong et al., and KANG et al. teaches the limitations of claim 3.
The combination of Takada and KANG et al. does not explicitly teach wherein the target restoration content information comprises a content feature map corresponding to a target restoration content.
Within analogous art, Weisheng Dong et al. teaches wherein the target restoration content information comprises a content feature map corresponding to a target restoration content (Page 2307, Col. 1: “…feature maps from preceding layers, densely connected network has also been developed for image SR [35]. Different from the existing shortcut connections for identity mappings, adaptive shortcut connections with learnable parameters have also been proposed in [36] for image restoration tasks….” AND Page 2310, Col. 1: “...As the finally extracted feature maps lose a lot of spatial information, directly reconstructing images from the extracted features cannot recover fine image details. To compensate the lost spatial information, the feature maps of the same spatial resolution generated in the encoding stage are fused with the upsampled feature maps generated in the decoding stage, for obtaining newly upsampled feature maps. As shown in Fig. 1c, each decoding block consists of five convolutional layers….”).
One of ordinary skill in the art would have been motivated to combine the teaching of Weisheng Dong et al. with the combined teaching of Takada (information processing apparatus, information processing method, and storage medium) and KANG et al. (Method and apparatus for restoring image) because Weisheng Dong et al. (Denoising Prior Driven Deep Neural Network for Image Restoration) provides a method and system for implementing neural network models for image restoration through a denoising process.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the denoising prior driven deep neural network for image restoration of Weisheng Dong et al. within the combined teaching of Takada and KANG et al. in order to implement a system and method that uses neural network models for image restoration through a denoising process.
3. Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Takada (USPUB 20230095184) in view of Weisheng Dong et al. (NPL DOC: "Denoising Prior Driven Deep Neural Network for Image Restoration," 12 September 2019, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 41, No. 10, October 2019, pp. 2305-2316), and further in view of Muhammad Shahin Uddin et al. (NPL DOC: "Intelligent estimation of noise and blur variances using ANN for the restoration of ultrasound images," 4 October 2016, Optical Society of America, Vol. 55, No. 31, 1 November 2016, pp. 8905-8913).
As per claim 17, the combination of Takada and Weisheng Dong et al. teaches the limitations of claim 1.
The combination of Takada and Weisheng Dong et al. does not explicitly teach wherein the first AI network comprises a diffusion network.
Within analogous art, Muhammad Shahin Uddin et al. teaches wherein the first AI network comprises a diffusion network (diffusion method within an AI/neural network taught within Page 8905, Col. 2: “…Nonlinear diffusion-based methods, the Perona–Malik filter [11], and the Weickert filter [12] rely on the diffusion flux to iteratively preserve large variations due to edges and reduce small variations due to noise. This relationship for speckle noise no longer exists in the nonlinear diffusion-based methods. The Gaussian smoothing-dependent linear anisotropic diffusion method not only smooths the noise but also blurs important features, such as edges. A nonlinear anisotropic diffusion-based method [13] employed within the framework of discrete wavelet transforms (DWTs) with more favorable de-speckling and edge enhancement properties has been proposed….”).
One of ordinary skill in the art would have been motivated to combine the teaching of Muhammad Shahin Uddin et al. with the combined teaching of Takada (information processing apparatus, information processing method, and storage medium) and Weisheng Dong et al. (Denoising Prior Driven Deep Neural Network for Image Restoration) because Muhammad Shahin Uddin et al. (Intelligent estimation of noise and blur variances using ANN for the restoration of ultrasound images) provides a method and system for estimating noise within images using a neural network model.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the intelligent estimation of noise and blur variances of Muhammad Shahin Uddin et al. within the combined teaching of Takada and Weisheng Dong et al. in order to implement a system and method for estimating noise within images using a neural network model.
It is noted that any citations to specific pages, columns, lines, or figures in the prior art references and any interpretation of the references should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. See MPEP 2123.
Allowable Subject Matter
4. Claims 6-16 and 19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
5. The following is an examiner’s statement of reasons for indicating allowable subject matter:
As to claim 6, the prior art of record does not teach or suggest the following limitation of claim 6: “… first AI network comprises at least one attention module, and wherein the performing the at least one first denoising process comprises: using the at least one attention module, determining an attention between a noise feature map corresponding to the first image and the target restoration content information, the attention representing a degree of denoising the noise feature map according to the target restoration content information, and performing the at least one first denoising process on the first image based on the attention.”
As to claim 7, claim 7 depends from objected-to claim 6; therefore, claim 7 is likewise considered to contain allowable subject matter over the prior art of record.
As to claim 8, the prior art of record does not teach or suggest the following limitation of claim 8: “…performing a first denoising process on the first image iteratively using at least one first AI network; extracting a first feature map corresponding to the first denoising result and a second feature map corresponding to a third image added with a predetermined proportion of noise, the third image [comprising] the second image before the target restoration region is removed; and
determining, based on a similarity between the first feature map and the second feature map, whether to iteratively use the at least one first AI network to perform the first denoising process.”
As to claim 9, this claim depends from objected-to allowable claim 8 and is therefore likewise considered allowable over the prior art of record.
As to claim 10, the prior art of record does not teach or suggest the limitation recited in claim 10: “…the restoring the target restoration region based on the first denoising result comprises: performing a second denoising process on the first denoising result to obtain a second denoising result; determining texture information included in the second image based on a region that is not to be restored; extracting a third feature map corresponding to the target restoration region in the second denoising result; and restoring the target restoration region based on the texture information and the third feature map to obtain the restored image.”
As to claims 11 and 13, these claims depend from objected-to allowable claim 10 and are therefore likewise considered allowable over the prior art of record.
As to claim 12, this claim depends from objected-to allowable claim 11 and is therefore likewise considered allowable over the prior art of record.
As to claim 14, this claim depends from objected-to allowable claim 13 and is therefore likewise considered allowable over the prior art of record.
As to claim 15, the prior art of record does not teach or suggest the limitation recited in claim 15: “…acquiring a fifth image comprising the target restoration region; based on a size of the fifth image being greater than a second threshold, acquiring the second image from the fifth image by performing at least one of: based on an area of the target restoration region being less than a third threshold and a length of the target restoration region being less than a fourth threshold, clipping the fifth image into a clipped image having the size equal to the second threshold using the target restoration region as a center to obtain the second image; or based on the area of the target restoration region being less than the third threshold and the length of the target restoration region being greater than the fourth threshold, or based on the area of the target restoration region being greater than the third threshold, determining an image region in the fifth image in which the size of the image region is equal to the second threshold and an area of the target restoration region in the image region is not greater than a fifth threshold, clipping the image region into the clipped image to obtain the second image, and using the target restoration region in the image region as the target restoration region of the second image.”
As to claim 16, this claim depends from objected-to allowable claim 15 and is therefore likewise considered allowable over the prior art of record.
As to claim 19, the prior art of record does not teach or suggest the limitation recited in claim 19: “…perform a first denoising process on the first image iteratively using at least one first AI network; extract a first feature map corresponding to the first denoising result and a second feature map corresponding to the third image added with a predetermined proportion of noise; and determine, based on a similarity between the first feature map and the second feature map, whether to iteratively use the at least one first AI network to perform the first denoising process.”
Any comments considered necessary by applicant must be submitted no later than the payment of the issue fee and, to avoid processing delays, should preferably accompany the issue fee. Such submissions should be clearly labeled “Comments on Statement of Reasons for Allowance.”
Conclusion
6. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Refer to form PTO-892, Notice of References Cited, for a listing of analogous art.
7. Any inquiry concerning this communication or earlier communications from the examiner should be directed to OMAR S ISMAIL, whose telephone number is (571) 272-9799 and whose fax number is (571) 273-9799. The examiner can normally be reached Monday-Friday, 9:00 am-6:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, David C. Payne, can be reached at (571) 272-3024. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/OMAR S ISMAIL/
Primary Examiner, Art Unit 2635