DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Applicant’s response to the Non-final Office Action dated 04/25/2025, filed with the office on 09/25/2025, has been entered and made of record.
Response to Amendment
In light of Applicant’s amendment of the claims, the 35 U.S.C. 112(b) rejection of record with respect to claim 17 for insufficient antecedent basis has been withdrawn.
Status of Claims
Claims 1-4 and 6-20 are pending. Claims 1, 6-8 and 16-18 are amended. Claim 5 is cancelled.
Response to Arguments
Applicant's arguments filed on September 25, 2025 with respect to the rejection of the claims under 35 U.S.C. 103 have been fully considered, but they are not found persuasive. Specifically, on page 8 of the reply, fourth paragraph, Applicant argues that Owechko does not teach processing both a global region (the entire image) and local regions (partitions of the input image). Examiner respectfully disagrees. Owechko discloses dividing a whole input image into smaller rectangular regions (Owechko, ¶0054: “partition each input image frame into K rectangular regions”), wherein detecting artifacts in the whole image is disclosed in ¶0089: “determining a background optical flow motion corresponding to a set of image frames of the image data… determining the optical flow motion of an entire image”, and detecting artifacts in the smaller regions is disclosed in ¶0088: “determining 1105 a level of environmental artifacts at a plurality of positions of an image frame of image data”.
Applicant continues to argue on page 8, fifth paragraph, that Owechko does not teach generating an enhanced image based on the degradation profile. Examiner respectfully disagrees. Owechko performs image enhancement based on the detected artifacts (Owechko, ¶0090: “the method 1100 may include adjusting 1110 local area processing of the image frame, to generate an enhanced image frame of image data, based on the level of environmental artifacts at each position of the plurality of positions”), wherein artifacts can be detected in K number of regions (Owechko, ¶0090: “adjusting 1110 the local area processing includes adjusting the number of local area regions (e.g., the parameter K of Equation 3)”). Therefore, for K=1, the entire image is used for artifact detection, and based on the detected artifacts, image enhancement processing is performed (Owechko, ¶0090: “generate an enhanced image frame of image data, based on the level of environmental artifacts”). Accordingly, Applicant’s arguments are not found persuasive.
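For illustration only (not part of the record), the K-region reading above can be sketched in Python; the grid geometry is an assumption, since Owechko does not fix a particular partition layout:

```python
import numpy as np

def partition_frame(frame: np.ndarray, k: int) -> list[np.ndarray]:
    """Split a frame into k horizontal rectangular regions; k = 1 yields the entire image."""
    bounds = np.linspace(0, frame.shape[0], k + 1, dtype=int)
    return [frame[bounds[i]:bounds[i + 1]] for i in range(k)]

frame = np.zeros((480, 640), dtype=np.uint8)
assert len(partition_frame(frame, 1)) == 1   # K = 1: the global region (whole image)
assert len(partition_frame(frame, 4)) == 4   # K = 4: four local regions
```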
Consequently, THIS ACTION IS MADE FINAL.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1, 6-9, 11-15 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Owechko et al. (US 2018/0365805 A1) in view of Reddy et al. (US 2020/0242515 A1).
Regarding claim 1, Owechko teaches, A method for generating an enhanced image includes: (Owechko, ¶0005: “A method of enhancing an image”) obtaining an input image from an image sensor; (Owechko, ¶0042: “sensed image obtained from the sensor 120”) generating sensor feature data, wherein the sensor feature data is based on the input image; (Owechko, ¶0042: “a degraded visibility environment 140 may block the sensor 120 from sensing a clear image of the object 125, thus resulting in degraded image data”) for each region of a plurality of regions of the input image, (Owechko, ¶0043: “enhance image sequence data by dividing an image into multiple local area regions”) determining a degradation profile based on the sensor feature data, (Owechko, ¶0087: “The artifact level determination module 1020 may determine a level of environmental artifacts corresponding to each region of a plurality of regions”) and wherein the plurality of regions includes a global region and a first plurality of regions (Owechko, ¶0054: “partition each input image frame into K rectangular regions”; ¶0089: “determining a background optical flow motion corresponding to a set of image frames of the image data… determining the optical flow motion of an entire image”) corresponding to a first local scale; (Owechko, ¶0090: “adjusting 1110 the local area processing includes adjusting the number of local area regions”) and generating an enhanced image based on the degradation profile for each region of the plurality of regions. (Owechko, ¶0090: “generate an enhanced image frame of image data, based on the level of environmental artifacts at each position of the plurality of positions”). Although Owechko teaches a degradation amount for a single degradation (Owechko, ¶0084: “By detecting the swirling motion of environmental artifacts, an amount of the environmental artifacts in image sequence data that obscures the image may be determined”), Owechko does not explicitly teach wherein the degradation profile indicates, for each of a plurality of degradations, a degradation amount.
In an analogous field of endeavor, Reddy teaches, wherein the degradation profile indicates, for each of a plurality of degradations, a degradation amount (Reddy, ¶0014: “degradation bins are generated where each degradation bin represents a particular degradation type and level combination”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Owechko using the teachings of Reddy to introduce multiple types and levels of degradation. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of optimized image degradation removal. Therefore, it would have been obvious to combine the analogous arts Owechko and Reddy to obtain the invention of claim 1.
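For illustration only, a degradation profile of the kind mapped above (a per-region amount for each of several degradation types, in the spirit of Reddy's type/level bins) might be structured as follows; the type names and layout are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class DegradationProfile:
    # one amount per degradation type, e.g. {"noise": 0.7, "blur": 0.2}
    amounts: dict[str, float] = field(default_factory=dict)

# one profile per region, including the global region
profiles = {
    "global": DegradationProfile({"noise": 0.5, "blur": 0.1}),
    "region_0": DegradationProfile({"noise": 0.8, "blur": 0.3}),
}
```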
Regarding claim 6, Owechko in view of Reddy teaches, The method of claim 1, wherein the plurality of regions includes a second plurality of regions (Owechko, ¶0054: “partition each input image frame into K rectangular regions”; for a second K value) corresponding to a second local scale that is different than the first local scale. (Owechko, ¶0090: “adjusting the number of local area regions (e.g., the parameter K of Equation 3”; second K value is different than the first K value).
Regarding claim 7, Owechko in view of Reddy teaches, The method of claim 1, wherein each region of the first plurality of regions corresponds to a local block of the input image. (Owechko, ¶0087: “determine a level of environmental artifacts corresponding to each region of a plurality of regions”; also see fig. 7).
Regarding claim 8, Owechko in view of Reddy teaches, The method of claim 1, wherein multiscale degradation profile data is generated based on determining the degradation profile for at least one region (Owechko, ¶0087: “The artifact level determination module 1020 may determine a level of environmental artifacts corresponding to each region of a plurality of regions”) of the first plurality of regions (Owechko, ¶0090: “adjusting the number of local area regions (e.g., the parameter K of Equation 3”) and the degradation profile for the global region. (Owechko, ¶0054: “partition each input image frame into K rectangular regions”; where K is 1).
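For illustration only, “multiscale degradation profile data” as mapped above could be assembled from the global profile (K = 1) together with per-region profiles at a finer scale (K > 1); this structure is an assumption:

```python
# Profiles keyed by scale (the parameter K), then by region: K = 1 holds the
# global degradation profile; K = 4 holds profiles for the first local scale.
multiscale_profiles = {
    1: {"region_0": {"noise": 0.5, "blur": 0.1}},
    4: {f"region_{i}": {"noise": 0.2 * i, "blur": 0.1} for i in range(4)},
}
```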
Regarding claim 9, Owechko in view of Reddy teaches, The method of claim 8, wherein the multiscale degradation profile data is used to determine one or more controlling parameters (Owechko, ¶0073: “local image statistics may be used to construct new transform functions for each pixel in the active regions”) that are used as input into one or more image enhancement techniques (Owechko, ¶0073: “computed transform functions may be used to update a lookup table for the transform functions”) so as to control the one or more image enhancement techniques when applied to the input image to generate the enhanced image. (Owechko, ¶0079: “the transformation function to modify an intensity for each pixel in the image to produce an enhanced image”).
Regarding claim 11, Owechko in view of Reddy teaches, The method of claim 1, wherein, for the degradation profile for each region of the plurality of regions, the plurality of degradations include a predetermined plurality of degradations, (Reddy, ¶0013: “remove only two types of degradations such as noise and blur, each degradation type can have multiple levels and/or variants”) and wherein an output degradation value is determined (Reddy, ¶0013: “the noise can be of different levels (e.g., 10 dB noise, 20 dB noise etc.), and for each level of noise, there can be multiple levels of blur”) for each of the predetermined plurality of degradations. (Reddy, ¶0014: “degradation bins are generated where each degradation bin represents a particular degradation type and level combination”). The proposed combination as well as the motivation for combining Owechko and Reddy references presented in the rejection of claim 1, apply to claim 11 and are incorporated herein by reference. Thus, the method recited in claim 11 is met by Owechko and Reddy.
Regarding claim 12, Owechko in view of Reddy teaches, The method of claim 11, wherein the output degradation value for each of the predetermined plurality of degradations is used for determining a controlling parameter for use by an image enhancement technique (Owechko, ¶0082: “The parameters may include one or more thresholds and/or variables (e.g., sampling rates, etc.) used to control the operation of the local area processing module”) for controlling application of the image enhancement technique when generating the enhanced image. (Owechko, ¶0082: “The local area processing module 910 then enhances the image sequence data 300 based on the one or more parameters”).
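For illustration only, a sketch of deriving controlling parameters for an enhancement technique from per-degradation output values, loosely analogous to Owechko's thresholds and variables (¶0082); the specific mapping is hypothetical:

```python
def controlling_parameters(amounts: dict[str, float]) -> dict[str, float]:
    # map per-degradation output values to parameters that control an
    # enhancement technique (hypothetical scaling factors)
    return {
        "denoise_strength": 10.0 * amounts.get("noise", 0.0),
        "sharpen_amount": 2.0 * amounts.get("blur", 0.0),
    }

params = controlling_parameters({"noise": 0.8, "blur": 0.3})
# -> {"denoise_strength": 8.0, "sharpen_amount": 0.6}
```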
Regarding claim 13, Owechko in view of Reddy teaches, The method of claim 1, wherein the degradation profiles are determined (Reddy, ¶0003: “determining, using a validation process for the machine-learning model, a loss value for each of the plurality of combinations of degradations”) as a result of a machine learning technique that takes, as input, the sensor feature data (Reddy, ¶0022: “The machine learning module 130 extracts feature values from the input data”) and generates, as output, the degradation amount. (Reddy, ¶0014: “the machine learning model can produce acceptable (e.g., as defined by a user) enhanced images across all degradation type and level”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Owechko in view of Reddy using the additional teachings of Reddy to introduce a machine learning model compatible with multiple types and levels of degradation. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of optimized image degradation removal. Therefore, it would have been obvious to combine the analogous arts Owechko and Reddy to obtain the invention of claim 13.
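For illustration only, a sketch of a learned mapping from sensor feature data to per-degradation amounts, in the manner attributed to Reddy (¶0003, ¶0022); the model choice and training data are assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.random((100, 8))   # 100 sensor feature vectors, 8 features each
y = rng.random((100, 2))   # targets: amounts for two degradation types (noise, blur)

# multi-output regressor: feature data in, one degradation amount per type out
model = RandomForestRegressor(n_estimators=10, random_state=0).fit(X, y)
amounts = model.predict(rng.random((1, 8)))[0]   # e.g. array([0.47, 0.52])
```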
Regarding claim 14, Owechko in view of Reddy teaches, The method of claim 1, wherein the sensor feature data is generated by generating a reduced feature vector based on dimension reduction of a feature vector having feature values extracted from the input image. (Reddy, ¶0022: “the machine learning module 130 applies dimensionality reduction (e.g., via linear discriminant analysis (LDA), principle component analysis (PCA), or the like) to reduce the amount of data in the feature vectors for the input data to a smaller, more representative set of data”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Owechko in view of Reddy using the additional teachings of Reddy to introduce reducing the dimensionality of a feature vector of an input. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of reducing noise in the feature data. Therefore, it would have been obvious to combine the analogous arts Owechko and Reddy to obtain the invention of claim 14.
Regarding claim 15, Owechko in view of Reddy teaches, The method of claim 14, wherein the reduced feature vector is generated using a machine learning (ML) technique that takes, as input, the input image and/or the feature vector and generates, as output, the reduced feature vector. (Reddy, ¶0022: “the machine learning module 130 applies dimensionality reduction (e.g., via linear discriminant analysis (LDA), principle component analysis (PCA), or the like) to reduce the amount of data in the feature vectors for the input data to a smaller, more representative set of data”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Owechko in view of Reddy using the additional teachings of Reddy to introduce a dimension reducing machine learning technique. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of reducing noise and improving classification of the feature data. Therefore, it would have been obvious to combine the analogous arts Owechko and Reddy to obtain the invention of claim 15.
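For illustration only, dimension reduction of a feature vector via PCA, the technique Reddy names in ¶0022; the dimensions are illustrative:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
feature_vectors = rng.random((50, 128))          # 50 samples of 128-dim features

pca = PCA(n_components=16).fit(feature_vectors)  # learn a 16-dim subspace
reduced = pca.transform(feature_vectors)         # reduced feature vectors, shape (50, 16)
```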
Regarding claim 17, it recites a system with elements corresponding to the steps of the method recited in claim 1. Therefore, the recited elements of system claim 17 are mapped to the proposed combination in the same manner as the corresponding steps in method claim 1. Additionally, the rationale and motivation to combine Owechko and Reddy presented in the rejection of claim 1 apply to this claim. Further, Owechko teaches, An image enhancement system, (Owechko, ¶0041: “an image processing system 130 to process the image data and/or to enhance the quality of the image data”) comprising: at least one processor; and memory (Owechko, ¶0044: “processing system 130 includes a processor 200, memory 205”) storing computer instructions; wherein the image enhancement system is configured so that the at least one processor executes the computer instructions, which, when executed by the at least one processor, (Owechko, ¶0044: “memory 205 may store code and the processor 200 may be used to execute the code”) cause the image enhancement system to: obtain an input image (Owechko, ¶0042: “sensed image obtained from the sensor 120”).
Claims 2 and 3 are rejected under 35 U.S.C. 103 as being unpatentable over Owechko et al. (US 2018/0365805 A1), in view of Reddy et al. (US 2020/0242515 A1) and in further view of Yuan et al. (US 9,424,461 B1).
Regarding claim 2, Owechko in view of Reddy teaches, The method of claim 1. However, the combination of Owechko and Reddy does not explicitly teach wherein generating the sensor feature data includes generating a combined feature vector based on the input image and environment sensor data and then generating a reduced feature vector based on dimension reduction of the combined feature vector, and wherein the environment sensor data is obtained from an environment sensor that is separate from the image sensor.
In an analogous field of endeavor, Yuan teaches wherein generating the sensor feature data includes generating a combined feature vector based on the input image and environment sensor data (Yuan, col 6, lines 29-33: “Multiple types of visual features can be extracted from the multi-modal single/multi-view images for an object and a set of visual features can be selected from multiple types of visual features to generate a combined visual feature vector”) and then generating a reduced feature vector based on dimension reduction of the combined feature vector, (Yuan, col 3, lines 29-31: “Accordingly, when reducing the combined visual feature vector, a compact combined visual feature vector can be generated”) and wherein the environment sensor data is obtained from an environment sensor that is separate from the image sensor. (Yuan, col 20, lines 55-57: “cameras that are able to capture images of the surrounding environment and that are able to image a user, people, or objects in the vicinity”; Applicant’s specification ¶0024: “environment sensors include a camera, another image sensor”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Owechko in view of Reddy using the teachings of Yuan to introduce feature vectors. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of combining and reducing image data using feature vectors. Therefore, it would have been obvious to combine the analogous arts Owechko, Reddy and Yuan to obtain the invention of claim 2.
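For illustration only, a sketch of forming a combined feature vector from image-derived features and environment-sensor readings, as mapped to Yuan above; the feature extractor and sensor values are hypothetical:

```python
import numpy as np

def image_features(image: np.ndarray) -> np.ndarray:
    # toy image feature extractor: a 32-bin intensity histogram
    return np.histogram(image, bins=32, range=(0, 255))[0].astype(float)

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(480, 640))
env = np.array([21.5, 0.93])   # hypothetical environment readings (temperature, humidity)

combined = np.concatenate([image_features(image), env])   # combined feature vector
# Over a batch of such vectors, PCA (see the claim 3 mapping below) would
# produce the reduced feature vector.
```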
Regarding claim 3, Owechko in view of Reddy and in further view of Yuan teaches, The method of claim 2, wherein the dimension reduction of the combined feature vector is performed using principal component analysis (PCA). (Reddy, ¶0022: “the machine learning module 130 applies dimensionality reduction (e.g., via linear discriminant analysis (LDA), principle component analysis (PCA), or the like) to reduce the amount of data in the feature vectors for the input data to a smaller, more representative set of data”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Owechko in view of Reddy and in further view of Yuan using the additional teachings of Reddy to introduce dimensionality reduction using principal component analysis. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of reducing noise in the feature data. Therefore, it would have been obvious to combine the analogous arts Owechko, Reddy and Yuan to obtain the invention of claim 3.
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Owechko et al. (US 2018/0365805 A1), in view of Reddy et al. (US 2020/0242515 A1), in further view of Yuan et al. (US 9,424,461 B1) and still in further view of Mukherjee et al. (US 2022/0319646 A1).
Regarding claim 4, Owechko in view of Reddy and in further view of Yuan teaches, The method of claim 2. However, the combination of Owechko, Reddy and Yuan does not explicitly teach wherein the combined feature vector is generated using a machine learning (ML) and feature description techniques.
In an analogous field of endeavor, Mukherjee teaches wherein the combined feature vector is generated using a machine learning (ML) (Mukherjee, ¶0033: “the machine learning engine 102 generates a combined feature vector”) and feature description techniques. (Mukherjee, ¶0063: “the combined vector has n dimensions, i.e., one dimension for each of a plurality of features”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Owechko in view of Reddy, in further view of Yuan using the teachings of Mukherjee to introduce machine learning. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of automatically combining the features encoded in the first vector and the second vector into a combined vector. Therefore, it would have been obvious to combine the analogous arts Owechko, Reddy, Yuan and Mukherjee to obtain the invention of claim 4.
Claims 10 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Owechko et al. (US 2018/0365805 A1), in view of Reddy et al. (US 2020/0242515 A1), and in further view of Yu et al. (US 2021/0248718 A1).
Regarding claim 10, Owechko in view of Reddy teaches, The method of claim 9. However, the combination of Owechko and Reddy does not explicitly teach wherein generating the enhanced image includes: applying at least one of the one or more image enhancement techniques to the global region to generate an enhanced global image; applying at least one other image enhancement technique of the one or more image enhancement techniques to at least one of the first plurality of regions so as to generate an enhanced local image layer; and generating the enhanced image based on the enhanced global image layer and the enhanced local image layer.
In an analogous field of endeavor, Yu teaches, wherein generating the enhanced image includes: applying at least one of the one or more image enhancement techniques to the global region to generate an enhanced global image; (Yu, ¶0050: “the global enhancement feature map refers to a feature map that may represent image features over the entire image”) applying at least one other image enhancement technique of the one or more image enhancement techniques to at least one of the first plurality of regions so as to generate an enhanced local image layer; (Yu, ¶0043: “There may be multiple local feature maps, for example, multiple local feature maps corresponding to the output of each layer may be obtained by the residual dense block at each layer”) and generating the enhanced image based on the enhanced global image layer and the enhanced local image layer. (Yu, ¶0043: “the residual fusion is performed on the multiple local feature maps and multiple global enhancement feature maps in a parallel manner to obtain the raindrop result”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Owechko in view of Reddy using the teachings of Yu to introduce combining a global region with local regions. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of producing an improved enhanced image with enhancement performed at both the global and local levels. Therefore, it would have been obvious to combine the analogous arts Owechko, Reddy and Yu to obtain the invention of claim 10.
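For illustration only (not Yu's network), a sketch of enhancing the global region with one technique, enhancing local regions with another, and blending the resulting layers; the particular operations and blend weights are assumptions:

```python
import numpy as np

def enhance_global(img: np.ndarray) -> np.ndarray:
    # toy global technique: full-image contrast stretch
    lo, hi = float(img.min()), float(img.max())
    return (img - lo) / max(hi - lo, 1e-6)

def enhance_local(img: np.ndarray, k: int = 4) -> np.ndarray:
    # toy local technique: per-region mean removal over k horizontal regions
    out = img.astype(float).copy()
    bounds = np.linspace(0, img.shape[0], k + 1, dtype=int)
    for i in range(k):
        region = out[bounds[i]:bounds[i + 1]]
        region -= region.mean()
    return out

img = np.random.default_rng(0).random((480, 640))
enhanced = 0.5 * enhance_global(img) + 0.5 * enhance_local(img)   # fused layers
```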
Regarding claim 16, Owechko in view of Reddy teaches, The method of claim 1. However, the combination of Owechko and Reddy does not explicitly teach wherein a first enhanced local image layer is generated for the first plurality of regions of the plurality of regions and a second enhanced local image layer is generated for a second plurality of regions of the plurality of regions, wherein the first plurality of regions is sized according to a first scale and the second plurality of regions is sized according to a second scale that is different than the first scale, and wherein the first enhanced local image layer and the second enhanced local image layer are combined to form the enhanced image.
In an analogous field of endeavor, Yu teaches, wherein a first enhanced local image layer is generated for a first plurality of regions of the plurality of regions (Yu, ¶0051: “There may be multiple global enhancement feature maps, for example, multiple global enhancement feature maps corresponding to the output of each layer may be obtained by the region sensitive block of each layer”) and a second enhanced local image layer is generated for a second plurality of regions of the plurality of regions, (Yu, ¶0080: “the first granularity processing at first stage of “local-global” may be performed by using the local features extracted by the local convolution kernel in combination with the global features extracted by the region sensitive block”) wherein the first plurality of regions is sized according to a first scale and the second plurality of regions is sized according to a second scale that is different than the first scale, (Yu, ¶0027: “methods for removing raindrops… The method uses multi-scale features based on pairwise single image data with/without rain”) and wherein the first enhanced local image layer and the second enhanced local image layer are combined to form the enhanced image. (Yu, ¶0078: “Finally, the preliminary rain removal result of the first stage and the detail enhancement result of this stage are fused to obtain final rain removal result”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Owechko in view of Reddy using the teachings of Yu to introduce combining two image layers. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of processing images to remove degradations at two different scales and combining the processed images to obtain an enhanced image. Therefore, it would have been obvious to combine the analogous arts Owechko, Reddy and Yu to obtain the invention of claim 16.
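For illustration only, a sketch of local enhancement layers generated at two different scales and combined into the enhanced image; the grid sizes and fusion weights are assumptions:

```python
import numpy as np

def local_layer(img: np.ndarray, k: int) -> np.ndarray:
    # toy per-region operation over a k-region grid (one scale)
    out = img.astype(float).copy()
    bounds = np.linspace(0, img.shape[0], k + 1, dtype=int)
    for i in range(k):
        region = out[bounds[i]:bounds[i + 1]]
        region -= region.mean()
    return out

img = np.random.default_rng(1).random((480, 640))
coarse = local_layer(img, k=2)        # first enhanced local image layer (first scale)
fine = local_layer(img, k=8)          # second enhanced local image layer (different scale)
enhanced = 0.5 * coarse + 0.5 * fine  # layers combined to form the enhanced image
```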
Claims 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Owechko et al. (US 2018/0365805 A1), in view of Yuan et al. (US 9,424,461 B1) and in further view of Reddy et al. (US 2020/0242515 A1).
Regarding claim 18, Owechko teaches, An onboard image enhancement system (Owechko, ¶0005: “A method of enhancing an image”) for a vehicle, comprising: an onboard vehicle computer for a vehicle having at least one processor and memory storing computer instructions; (Owechko, ¶0012: “The mobile vehicle also includes a processor operatively coupled to the sensor and configured to receive the image data. The mobile vehicle includes a memory that stores code executable by the processor”) an image sensor configured to be mounted onto the vehicle (Owechko, ¶0088: “infrared imaging equipment of the vehicle”) and to face an area exterior to the vehicle and configured to capture one or more input images; (Owechko, ¶0088: “the image data includes multiple images in a video of an environment outside of a vehicle”; ¶0088: “a video of an environment outside of a vehicle that are acquired in real time using infrared imaging equipment of the vehicle”) wherein the onboard vehicle computer is configured so that when the at least one processor executes the computer instructions, (Owechko, ¶0012: “The mobile vehicle includes a memory that stores code executable by the processor”) the onboard vehicle computer processes the one or more input images to generate one or more enhanced images using an image enhancement process (Owechko, ¶0043: “image processing system 130 may produce enhanced image sequence data from the degraded image sequence data”) that includes: obtaining the input image from the image sensor; (Owechko, ¶0044: “processor 200 may be operatively coupled to the sensor 120 and configured to receive image data from the sensor 120”) for each region of a plurality of regions of the input image, determining a degradation profile, (Owechko, ¶0087: “The artifact level determination module 1020 may determine a level of environmental artifacts corresponding to each region of a plurality of regions”) wherein the plurality of regions includes a global region and a first plurality of regions (Owechko, ¶0054: “partition each input image frame into K rectangular regions”; ¶0089: “determining a background optical flow motion corresponding to a set of image frames of the image data… determining the optical flow motion of an entire image”) corresponding to a first local scale; (Owechko, ¶0090: “adjusting 1110 the local area processing includes adjusting the number of local area regions”) generating an enhanced image based on the degradation profile for each region of the plurality of regions; (Owechko, ¶0090: “generate an enhanced image frame of image data, based on the level of environmental artifacts at each position of the plurality of positions”) and providing the enhanced image for display (Owechko, ¶0044: “The display device 215 may be operatively coupled to the processor 200 and used to display data, such as image data and/or enhanced image data”) at an electronic display of the vehicle. (Owechko, ¶0014: “The mobile vehicle includes a display device”). However, Owechko does not explicitly teach obtaining the environment sensor data from the environment sensor; generating sensor feature fusion data, wherein the sensor feature fusion data is based on the input image and the environment sensor data; and wherein the degradation profile indicates, for each of a plurality of degradations, a degradation amount.
In an analogous field of endeavor, Yuan teaches, obtaining the environment sensor data from the environment sensor; (Yuan, col 20, lines 55-57: “cameras that are able to capture images of the surrounding environment and that are able to image a user, people, or objects in the vicinity”; Applicant’s specification ¶0024: “environment sensors include a camera, another image sensor”) generating sensor feature fusion data, wherein the sensor feature fusion data is based on the input image and the environment sensor data; (Yuan, col 6, lines 29-33: “Multiple types of visual features can be extracted from the multi-modal single/multi-view images for an object and a set of visual features can be selected from multiple types of visual features to generate a combined visual feature vector”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Owechko using the teachings of Yuan to introduce feature fusion. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of combining features obtained from sensor data from two sensors. Therefore, it would have been obvious to combine the analogous arts Owechko and Yuan to obtain the above-described limitations of claim 18. However, the combination of Owechko and Yuan does not explicitly teach, wherein the degradation profile indicates, for each of a plurality of degradations, a degradation amount.
In an analogous field of endeavor, Reddy teaches, wherein the degradation profile indicates, for each of a plurality of degradations, a degradation amount (Reddy, ¶0014: “degradation bins are generated where each degradation bin represents a particular degradation type and level combination”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Owechko in view of Yuan using the teachings of Reddy to introduce multiple types and levels of degradation. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of optimized image degradation removal. Therefore, it would have been obvious to combine the analogous arts Owechko, Yuan and Reddy to obtain the invention of claim 18.
Regarding claim 19, Owechko in view of Yuan and in further view of Reddy teaches, The onboard image enhancement system of claim 18, wherein the one or more input images include a plurality of images part of an input video stream, (Owechko, ¶0088: “the image data includes multiple images in a video of an environment outside of a vehicle”) wherein the one or more enhanced images are a plurality of enhanced images. (Owechko, ¶0047: “Enhanced image data 335 is output from the image processing module 220”).
Regarding claim 20, Owechko in view of Yuan and in further view of Reddy teaches, The onboard image enhancement system of claim 19, wherein the onboard image enhancement system is configured to cause the plurality of enhanced images to be displayed at the electronic display (Owechko, ¶0044: “The display device 215 may be operatively coupled to the processor 200 and used to display… enhanced image data”) as an enhanced video stream. (Owechko, ¶0040: “image data may refer to one or more images, image video data, video data, a sequence of images, and/or image sequence data”).
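For illustration only, an end-to-end sketch of the onboard flow mapped in claims 18-20 (obtain frames and environment data, fuse features, estimate degradation, enhance, and emit an enhanced stream for display); all operations are hypothetical stand-ins:

```python
import numpy as np

def enhance_frame(frame: np.ndarray, env: np.ndarray) -> np.ndarray:
    image_feats = np.array([frame.mean(), frame.std()])  # toy image features
    fused = np.concatenate([image_feats, env])           # sensor feature fusion data
    degradation = float(fused[1] / 255.0)                # stand-in degradation amount
    return np.clip(frame + 40.0 * degradation, 0, 255)   # toy enhancement step

rng = np.random.default_rng(0)
stream = [rng.integers(0, 256, size=(48, 64)).astype(float) for _ in range(3)]
enhanced_stream = [enhance_frame(f, np.array([20.0, 0.9])) for f in stream]  # display-ready
```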
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MEHRAZUL ISLAM whose telephone number is (571)270-0489. The examiner can normally be reached Monday-Friday: 8am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Saini Amandeep can be reached on (571) 272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MEHRAZUL ISLAM/Examiner, Art Unit 2662
/AMANDEEP SAINI/Supervisory Patent Examiner, Art Unit 2662