Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on August 19, 2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim(s) 1, 2, and 6 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Cui (Chinese Patent CN 111310582 A).
Regarding claim 1, Cui discloses a method comprising: receiving and/or generating a training data set comprising a plurality of simulated degraded images capturing the degradation in a turbulent medium with varying turbulence strengths and corresponding to a target image or object, wherein the plurality of simulated degraded images are provided as multiple frame images into inputs of an artificial neural network (Cui paragraphs [0009]-[0016], where “a depth model combining a boundary perception algorithm and generation of an antagonistic network (GAN) is suitable for a semantic segmentation task of an image degraded under the influence of atmospheric turbulence”); evaluating, by a processor, via the training, the performance of the artificial neural network using the multiple frame images to recognize differences between the plurality of degraded images in a turbulent medium and the target image using a perceptual loss function, wherein the perceptual loss function comprises a spatial domain loss component and a frequency domain loss component (Cui paragraphs [0031]-[0040], where the training process of the GAN network uses a synthetic loss function that can be construed as the claimed perceptual loss function and that is likewise a weighted sum of two groups of loss functions); and adjusting, by a processor, a weighting parameter of the artificial neural network based on the loss function to generate a trained neural network, wherein, once trained, the trained neural network is configured to enhance actual images taken in a turbulent medium (Cui paragraphs [0021]-[0030], where “accurate semantic boundaries are extracted through a final loss function while the differences between classes of features are increased”).
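For illustration only (hypothetical names and weights; not drawn from Cui), the following is a minimal sketch of a perceptual loss of the kind recited in claim 1, combining a spatial-domain component with a frequency-domain component computed over the two-dimensional Fourier transform:

    import numpy as np

    def perceptual_loss(pred, target, alpha=0.5):
        # Illustrative perceptual loss; `alpha` is a hypothetical weight
        # balancing the spatial-domain and frequency-domain components.
        # Spatial-domain component: mean absolute pixel error (L1).
        spatial = np.mean(np.abs(pred - target))
        # Frequency-domain component: L1 error between 2-D Fourier magnitudes.
        freq = np.mean(np.abs(np.abs(np.fft.fft2(pred)) - np.abs(np.fft.fft2(target))))
        return alpha * spatial + (1.0 - alpha) * freq

    rng = np.random.default_rng(0)
    target = rng.random((64, 64))                             # stand-in target image
    degraded = target + 0.05 * rng.standard_normal((64, 64))  # stand-in degraded frame
    print(perceptual_loss(degraded, target))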
Regarding claim 2, Cui discloses the method of claim 1, wherein the artificial neural network comprises ResNet layers (Cui paragraph [0061], constructing DeepLabV3+ as the basic network, which uses a ResNet layer architecture).
Regarding claim 6, Cui discloses the method of claim 1, wherein the artificial neural network comprises a GAN network (Cui paragraph [0071]).
Claim(s) 7, 9, 11, 15, 17, and 19 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Bau et al. (United States Patent Application Publication US 2023/0081128 A1).
Regarding claim 7, Bau et al. discloses a system comprising: a processor; and a memory having instructions stored thereon, wherein the instructions, when executed by the processor, cause the processor to (Bau et al. Figure 1): receive one or more images having a distortion (Bau et al. [0046], i.e., “visual artifacts”); generate one or more cleaned images from the received one or more images using a trained neural network having been trained using a perceptual loss function comprising a spatial domain loss component and a frequency domain loss component; and output the generated one or more cleaned images (Bau et al. Figure 4, [0046], [0058], [0060]).
Regarding claim 9, Bau et al. discloses the system of claim 7, wherein the one or more images include terrestrial images through turbulence (Bau et al. Figures 3A-3D, [0054]-[0055]).
Regarding claim 11, Bau et al. teaches the system of claim 7, wherein the trained neural network was generated by: providing a training data set comprising a plurality of degraded images corresponding to a target image to an artificial neural network; evaluating the performance of the artificial neural network to recognize the differences between the plurality of degraded images and the target image using a perceptual loss function, wherein the perceptual loss function comprises a spatial domain loss component and a frequency domain loss component; and adjusting a weighting parameter of the artificial neural network based on the loss function to generate a trained neural network (Bau et al. Figure 4, [0058]-[0060], cited above).
Regarding claim 15, Bau et al. discloses a non-transitory computer-readable medium having instructions stored thereon, wherein the instructions, when executed by a processor, cause the processor to: receive one or more images having a distortion; generate one or more cleaned images from the received one or more images using a trained neural network having been trained using a perceptual loss function comprising a spatial domain loss component and a frequency domain loss component; and output the generated one or more cleaned images (the rejection of claim 7 applies here; see also Bau et al. [0007]).
Regarding claim 17, Bau et al. discloses the non-transitory computer-readable medium of claim 15, wherein the one or more images include terrestrial images through turbulence (Bau et al. Figure 3, [0054]-[0055]).
Regarding claim 19, Bau et al. discloses the non-transitory computer-readable medium of claim 17, wherein the trained neural network was generated by: providing a training data set comprising a plurality of degraded images corresponding to a target image to an artificial neural network; evaluating the performance of the artificial neural network to recognize differences between the plurality of degraded images and the target image using a perceptual loss function, wherein the perceptual loss function comprises a spatial domain loss component and a frequency domain loss component; and adjusting a weighting parameter of the artificial neural network based on the loss function to generate a trained neural network (Bau et al. Figure 4, [0058]-[0060]).
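For illustration only, the following is a toy sketch of the evaluate-and-adjust cycle recited in claims 11 and 19: a one-parameter "network" trained by gradient descent on a spatial-domain loss. All names and values are hypothetical and do not represent Bau et al.'s training procedure.

    import numpy as np

    def loss(w, degraded, target):
        # Spatial-domain loss for a toy one-parameter network: pred = w * degraded.
        return np.mean((w * degraded - target) ** 2)

    rng = np.random.default_rng(1)
    target = rng.random((32, 32))                                    # stand-in target image
    degraded = 0.5 * target + 0.02 * rng.standard_normal((32, 32))   # degraded copy

    w, lr, eps = 1.0, 0.5, 1e-6
    for step in range(200):
        # Evaluate the network's performance via the loss (central-difference gradient)...
        grad = (loss(w + eps, degraded, target) - loss(w - eps, degraded, target)) / (2 * eps)
        # ...then adjust the weighting parameter based on the loss.
        w -= lr * grad
    print(w, loss(w, degraded, target))  # w converges near the least-squares optimum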
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Cui (Chinese Patent CN 111310582 A) in view of Yang (Chinese Patent CN 116309107 A).
Regarding claim 3, Cui teaches the method of claim 1. However, Cui fails to teach the further limitation of globally and locally aligning the plurality of images prior to evaluating the performance of the artificial neural network.
Yang teaches further comprising globally and locally aligning the plurality of images prior to evaluating the performance of the artificial neural network (Yang paragraphs [0032]-[0034] where the original features are first extracted and parsed into categories that separate global information and local information).
The problem of detailed positioning must be solved using local information, while the problem of semantic discrimination must be solved using global information. Thus, it would have been obvious to one skilled in the art prior to the effective filing date of the claimed invention to have combined the teachings of Cui and Yang so that both local and global alignment of the images are utilized effectively in the claimed solution.
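For illustration only (a standard technique, not Yang's global/local parsing), the following minimal sketch shows one common global-alignment step: estimating a whole-frame translation by FFT phase correlation, which would precede any local alignment or training.

    import numpy as np

    def global_shift(ref, frame):
        # Estimate the integer (dy, dx) translation of `frame` relative to `ref`
        # by phase correlation: the normalized cross-power spectrum peaks at the shift.
        cross = np.fft.fft2(frame) * np.conj(np.fft.fft2(ref))
        corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12))
        dy, dx = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
        h, w = ref.shape
        # Wrap shifts larger than half the frame to negative offsets.
        return (dy - h if dy > h // 2 else dy, dx - w if dx > w // 2 else dx)

    rng = np.random.default_rng(2)
    ref = rng.random((64, 64))
    frame = np.roll(ref, shift=(3, -5), axis=(0, 1))  # frame displaced by (3, -5)
    print(global_shift(ref, frame))                   # recovers (3, -5)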
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Cui (Chinese Patent CN 111310582 A) in view of Yao (Luxembourg Patent LU503093 B1) and Ferrante et al. (United States Patent US 7,308,154 B1).
Regarding claim 4, Cui discloses the method of claim 1. However, Cui fails to disclose wherein the perceptual loss function further comprises a spatial correntropy-based loss component and Fourier space-loss. Yao teaches the correntropy-based loss component as part of a loss function to remove noise from a hyperspectral image (Yao, Summary of the Invention).
Ferrante et al. teaches a discrete Fourier transform (DFT) as a method for computing a two-dimensional transform of a digital image (Ferrante et al. col. 8, lines 41-55, page 16). Combining both the correntropy-based loss function and the Fourier space loss is important to the claimed invention because correntropy-based loss functions provide a robust measure of similarity between data points, while the Fourier-based loss function is more computationally efficient. Thus, it would have been obvious to one skilled in the art prior to the effective filing date of the claimed invention to have combined the teachings of Ferrante et al., Yao, and Cui so that the perceptual loss function is more robust and efficient.
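For illustration only (standard definitions with hypothetical names and a hypothetical kernel width sigma; not Yao's or Ferrante et al.'s formulas), the following are minimal sketches of the two loss terms named in claim 4: a Gaussian-kernel correntropy loss and an L1 loss in DFT space.

    import numpy as np

    def correntropy_loss(pred, target, sigma=0.1):
        # Correntropy-induced loss: 1 minus the mean Gaussian kernel of the residuals.
        # Large outliers saturate the kernel, which makes the similarity measure robust.
        diff = pred - target
        return 1.0 - np.mean(np.exp(-(diff ** 2) / (2.0 * sigma ** 2)))

    def fourier_space_loss(pred, target):
        # L1 distance between the two-dimensional DFTs of the images.
        return np.mean(np.abs(np.fft.fft2(pred) - np.fft.fft2(target)))

    rng = np.random.default_rng(3)
    target = rng.random((32, 32))
    noisy = target + 0.2 * rng.standard_normal((32, 32))
    print(correntropy_loss(noisy, target), fourier_space_loss(noisy, target))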
Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Bau et al. (United States Patent Application Publication US 2023/0081128 A1) in view of Ferrante et al. (United States Patent US 7,308,154 B1) and Yao (Luxembourg Patent LU503093 B1).
Regarding claim 20, Bau et al. discloses the non-transitory computer-readable medium of claim 17. However, Bau et al. fails to disclose wherein the perceptual loss function further comprises a spatial correntropy-based loss and Fourier space-loss. Yao teaches the correntropy-based loss component as part of a loss function to remove noise from a hyperspectral image (Yao, Summary of the Invention). Ferrante et al. teaches a discrete Fourier transform (DFT) as a method for computing a two-dimensional transform of a digital image (Ferrante et al. col. 8, lines 41-55, page 16). Combining both the correntropy-based loss function and the Fourier space loss is important to the claimed invention because correntropy-based loss functions provide a robust measure of similarity between data points, while the Fourier-based loss function is more computationally efficient. Thus, it would have been obvious to one skilled in the art prior to the effective filing date of the claimed invention to have combined the teachings of Ferrante et al., Yao, and Bau et al. so that the perceptual loss function is more robust and efficient.
Claim(s) 8 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Bau et al. (United States Patent Application Publication US 2023/0081128 A1) in view of Yang (Chinese Patent CN 116309107 A).
Regarding claim 8, Bau et al. discloses the system of claim 7. However, Bau et al. fails to disclose wherein the one or more images include underwater images through turbulence.
Yang teaches wherein the one or more images include underwater images through turbulence (Yang, [0001]).
It is important to the claimed invention to include a variety of turbulent media from which the images are captured. Thus, it would have been obvious to one skilled in the art prior to the effective filing date of the claimed invention to have included the teachings of Yang with the teachings of Bau et al. so that underwater images are included in the artificial intelligence image processing system.
Regarding claim 16, Bau et al. discloses the non-transitory computer-readable medium of claim 15. However, Bau et al. fails to disclose wherein the one or more images include underwater images through turbulence. Yang teaches wherein the one or more images include underwater images through turbulence (Yang, [0001]).
It is important to the claimed invention to include a variety of turbulent media from which the images are captured. Thus, it would have been obvious to one skilled in the art prior to the effective filing date of the claimed invention to have included the teachings of Yang with the teachings of Bau et al. so that underwater images are included in the artificial intelligence image processing system.
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Cui (Chinese Patent CN 111310582 A) in view of Bau et al. (United States Patent Application Publication US 2023/0081128 A1).
Regarding claim 5, Cui discloses the method of claim 1. However, Cui fails to disclose the method further comprising: generating a set of simulated degraded images generated from one or more source images, wherein the generating provides a motion vector to offset values of reference pixels of the source image to geometrically warp and/or distort the one or more source images.
Bau et al. teaches the method further comprising: generating a set of simulated degraded images generated from one or more source images, wherein the generating provides a motion vector to offset values of reference pixels of the source image to geometrically warp and/or distort the one or more source images (Bau et al. Figure 2, where the confusion factor y, class-wise probability values, and degree of degradation distort the source image; [0049]-[0050]).
Applying a distortion to the real images in the process of training the artificial neural network allows for a more distinct classification of degraded images within the trained network. Thus, it would have been obvious to one skilled in the art prior to the effective filing date of the claimed invention to have included the teachings of Bau et al. in the method of Cui to arrive at the solution of the claimed invention.
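For illustration only (hypothetical parameters; not Bau et al.'s degradation model), the following is a minimal sketch of the geometric warping recited in claim 5: a smooth random motion-vector field offsets the reference-pixel coordinates of a source image, producing simulated turbulence-degraded frames of varying strength.

    import numpy as np
    from scipy.ndimage import gaussian_filter, map_coordinates

    def simulate_turbulence(source, strength=2.0, smoothness=8.0, seed=0):
        # Warp `source` with a smooth random motion-vector field: each output
        # pixel (y, x) is resampled from (y + dy, x + dx), so the vector field
        # offsets the values of reference pixels and distorts the image.
        rng = np.random.default_rng(seed)
        h, w = source.shape
        dy = gaussian_filter(rng.standard_normal((h, w)), smoothness)
        dx = gaussian_filter(rng.standard_normal((h, w)), smoothness)
        dy *= strength / (np.abs(dy).max() + 1e-12)   # scale by turbulence strength
        dx *= strength / (np.abs(dx).max() + 1e-12)
        yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
        # Bilinear resampling at the offset coordinates (order=1).
        return map_coordinates(source, [yy + dy, xx + dx], order=1, mode="reflect")

    source = np.zeros((64, 64)); source[16:48, 16:48] = 1.0    # toy source image
    frames = [simulate_turbulence(source, strength=s, seed=s) for s in range(1, 6)]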
Claim(s) 10, 13, 14, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Bau et al. (United States Patent Application Publication US 2023/0081128 A1) in view of Cui (Chinese Patent CN 111310582 A).
Regarding claim 10, Bau et al. discloses the system of claim 7. However, Bau et al. fails to disclose wherein the one or more images include satellite images through turbulence.
Cui teaches wherein the one or more images include satellite images through turbulence (Cui paragraph [0002]).
It is important to the claimed invention to include a variety of turbulent media from which the images are captured. Thus, it would have been obvious to one skilled in the art prior to the effective filing date of the claimed invention to have included the teachings of Cui with the system of Bau et al. so that satellite images are included in the artificial intelligence image processing system.
Regarding claim 13, Bau et al. teaches the system of claim 10. However, Bau et al. fails to teach wherein the system comprises real-time vehicle control configured to employ the one or more cleaned images in the control of the vehicle. Cui teaches wherein the system comprises real-time vehicle control configured to employ the one or more cleaned images in the control of the vehicle (Cui Figure 2, [0084]).
It is a key feature of the claimed invention to have real-time vehicle control informed by the captured images, which allows targets to be identified more accurately and improves the edge detection of targets. Thus, it would have been obvious to one skilled in the art prior to the effective filing date of the claimed invention to have included the teachings of Cui with the system of Bau et al. so that image capture control is performed in real time based on the target of interest.
Regarding claim 14, the combination of Bau et al. and Cui discloses the system of claim 10, wherein the system comprises a post-processing system configured to post-process the one or more images having the distortion to generate the one or more cleaned images (Bau et al. Figure 2; after the degradation/distortion is applied to the real image, the images are further processed for training and the appropriate loss functions are applied).
It is important to the claimed invention to have the ability to clean the distorted images within the training data set. Thus, it would have been obvious to one skilled in the art prior to the effective filing date of the claimed invention to have combined the teachings of Cui and Bau et al. to arrive at this feature.
Regarding claim 18, Bau et al. discloses the non-transitory computer-readable medium of claim 17. However, Bau et al. fails to disclose wherein the one or more images include satellite images through turbulence. Cui teaches wherein the one or more images include satellite images through turbulence (Cui paragraph [0002]).
It is important to the claimed invention to include a variety of turbulent media from which the images are captured. Thus, it would have been obvious to one skilled in the art prior to the effective filing date of the claimed invention to have included the teachings of Cui with the system of Bau et al. so that satellite images are included in the artificial intelligence image processing system.
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Bau et al. (United States Patent Application Publication US 2023/0081128 A1) in view of Yao (Luxembourg Patent LU503093 B1) and Ferrante et al. (United States Patent US 7,308,154 B1).
Regarding claim 12, Bau et al. discloses the system of claim 10. However, Bau et al. fails to disclose wherein the perceptual loss function comprises spatial correntropy-based loss and Fourier space-loss. Yao teaches the correntropy-based loss component as part of a loss function to remove noise from a hyperspectral image (Yao, Summary of the Invention). Ferrante et al. teaches a discrete Fourier transform (DFT) as a method for computing a two-dimensional transform of a digital image (Ferrante et al. col. 8, lines 41-55, page 16).
Combining both the correntropy-based loss function and the Fourier space loss is important to the claimed invention because correntropy-based loss functions provide a robust measure of similarity between data points, while the Fourier-based loss function is more computationally efficient. Thus, it would have been obvious to one skilled in the art prior to the effective filing date of the claimed invention to have combined the teachings of Ferrante et al., Yao, and Bau et al. so that the perceptual loss function is more robust and efficient.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JESSICA YIFANG LIN whose telephone number is (571) 272-6435. The examiner can normally be reached M-F 7:00am-6:15pm, with an optional day off.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vu Le can be reached at 571-272-7332. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JESSICA YIFANG LIN/Examiner, Art Unit 2668
February 10, 2026
/VU LE/Supervisory Patent Examiner, Art Unit 2668