DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 13, and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Chun et al. (US Pub. No. 2022/0164596 A1).
Regarding claim 1, Chun discloses an apparatus, comprising: one or more memories; and one or more processors coupled to the one or more memories, the one or more processors being configured to cause the apparatus to: (See Chun ¶79, “Further, the above-described algorithms are stored in the memory 240 as a computer-readable recording medium that may be controlled by the processor 230. At least part of the algorithm may be implemented in software, firmware, hardware, or a combination of at least two or more thereof and may include a module, program.”)
obtain a first image frame and a second image frame; (See Chun ¶50, “The image input unit 221 receives an operation environment image and a camouflage pattern image. The operation environment image may be still image or video data.”)
generate a modified first image frame and a modified second image frame, wherein to generate the modified first image frame and the modified second image frame comprises to convert a first color space of the first image frame and a second color space of the second image frame to a third color space; (See Chun ¶58, “For example, the camouflage pattern evaluation module 220 converts the operation environment image and the camouflage pattern image represented as RGB color space data into the following XYZ color space data using Equation 1 below, and the camouflage pattern evaluation module 220 converts the converted XYZ color space data into Lab color space data using Equation 2 below.”)
extract a first plurality of features from the modified first image frame; extract a second plurality of features from the modified second image frame; (See Chun ¶74, “The deep learning-based object recognition algorithm may input the operation environment image and the camouflage pattern image to the VGG-19 network and extracts the respective feature maps of the operation environment image and camouflage pattern image from the second layer, i.e., conv2, of the VGG-19 network.”)
and determine at least one first matching cost based on the first plurality of features and the second plurality of features. (See Chun ¶74, “Thereafter, the deep learning-based object recognition algorithm compares the extracted feature maps using the Frobenius norm and measures the structural similarity between the operation environment image and the camouflage pattern image as shown in Equation 15.”)
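For context only, and not as a characterization of Chun’s exact disclosure (Chun’s Equations 1, 2, and 15 are not reproduced in this action): the cited RGB-to-XYZ-to-Lab conversion follows the standard CIE pipeline. A minimal numpy sketch using the conventional sRGB/D65 matrix, which is an assumed standard value set rather than Chun’s disclosed coefficients, is:

```python
import numpy as np

# Conventional linear-sRGB -> XYZ matrix and D65 white point. These are
# assumed standard values; Chun's Equations 1 and 2 may differ in detail.
RGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                       [0.2126, 0.7152, 0.0722],
                       [0.0193, 0.1192, 0.9505]])
WHITE_D65 = np.array([0.9505, 1.0000, 1.0890])

def _f(t):
    # CIE Lab nonlinearity, with the usual linear segment for small t.
    delta = 6.0 / 29.0
    return np.where(t > delta ** 3, np.cbrt(t), t / (3 * delta ** 2) + 4.0 / 29.0)

def rgb_to_lab(rgb):
    """rgb: H x W x 3 linear-RGB array in [0, 1] -> H x W x 3 Lab array."""
    xyz = rgb @ RGB_TO_XYZ.T
    fx, fy, fz = (_f(xyz[..., i] / WHITE_D65[i]) for i in range(3))
    L = 116.0 * fy - 16.0   # lightness channel
    a = 500.0 * (fx - fy)   # green-red chroma channel (a*)
    b = 200.0 * (fy - fz)   # blue-yellow chroma channel (b*)
    return np.stack([L, a, b], axis=-1)
```

Likewise, a hedged sketch of the cited comparison mechanism, extracting feature maps from an early VGG-19 layer and measuring their Frobenius-norm distance, is below; the torchvision layer index standing in for Chun’s “conv2” is an assumption:

```python
import torch
import torchvision.models as models

# Pretrained VGG-19. "conv2" is read here as the second convolution of
# torchvision's vgg19().features (index 2); that indexing is an
# assumption, since Chun's layer naming is not reproduced in this action.
extractor = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features[:3].eval()

def matching_cost(frame_a: torch.Tensor, frame_b: torch.Tensor) -> float:
    """frame_a, frame_b: 1 x 3 x H x W tensors of the same size.
    Returns the Frobenius norm of the feature-map difference
    (a lower cost indicates greater structural similarity)."""
    with torch.no_grad():
        feats_a = extractor(frame_a)
        feats_b = extractor(frame_b)
    return torch.linalg.norm(feats_a - feats_b).item()
```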
Regarding claim 13, Chun discloses a method, comprising: obtaining a first image frame and a second image frame; (See Chun ¶50, “The image input unit 221 receives an operation environment image and a camouflage pattern image. The operation environment image may be still image or video data.”)
generating a modified first image frame and a modified second image frame, wherein to generate the modified first image frame and the modified second image frame comprises to convert a first color space of the first image frame and a second color space of the second image frame to a third color space; (See Chun ¶58, “For example, the camouflage pattern evaluation module 220 converts the operation environment image and the camouflage pattern image represented as RGB color space data into the following XYZ color space data using Equation 1 below, and the camouflage pattern evaluation module 220 converts the converted XYZ color space data into Lab color space data using Equation 2 below.”)
extracting a first plurality of features from the modified first image frame; extracting a second plurality of features from the modified second image frame; (See Chun ¶74, “The deep learning-based object recognition algorithm may input the operation environment image and the camouflage pattern image to the VGG-19 network and extracts the respective feature maps of the operation environment image and camouflage pattern image from the second layer, i.e., conv2, of the VGG-19 network.”)
and determining at least one first matching cost based on the first plurality of features and the second plurality of features. (See Chun ¶74, “Thereafter, the deep learning-based object recognition algorithm compares the extracted feature maps using the Frobenius norm and measures the structural similarity between the operation environment image and the camouflage pattern image as shown in Equation 15.”)
Regarding claim 20, Chun discloses one or more non-transitory computer-readable media comprising executable instructions that, when executed by one or more processors of an apparatus, cause the apparatus to perform operations comprising: (See Chun ¶79, “Further, the above-described algorithms are stored in the memory 240 as a computer-readable recording medium that may be controlled by the processor 230. At least part of the algorithm may be implemented in software, firmware, hardware, or a combination of at least two or more thereof and may include a module, program.”)
obtaining a first image frame and a second image frame; (See Chun ¶50, “The image input unit 221 receives an operation environment image and a camouflage pattern image. The operation environment image may be still image or video data.”)
generating a modified first image frame and a modified second image frame, wherein to generate the modified first image frame and the modified second image frame comprises to convert a first color space of the first image frame and a second color space of the second image frame to a third color space; (See Chun ¶58, “For example, the camouflage pattern evaluation module 220 converts the operation environment image and the camouflage pattern image represented as RGB color space data into the following XYZ color space data using Equation 1 below, and the camouflage pattern evaluation module 220 converts the converted XYZ color space data into Lab color space data using Equation 2 below.”)
extracting a first plurality of features from the modified first image frame; extracting a second plurality of features from the modified second image frame; (See Chun ¶74, “The deep learning-based object recognition algorithm may input the operation environment image and the camouflage pattern image to the VGG-19 network and extracts the respective feature maps of the operation environment image and camouflage pattern image from the second layer, i.e., conv2, of the VGG-19 network.”)
and determining at least one first matching cost based on the first plurality of features and the second plurality of features. (See Chun ¶74, “Thereafter, the deep learning-based object recognition algorithm compares the extracted feature maps using the Frobenius norm and measures the structural similarity between the operation environment image and the camouflage pattern image as shown in Equation 15.”)
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 2-3 and 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Chun et al. (US Pub. No. 2022/0164596 A1) in view of Wyatt et al. (US Pub. No. 2015/0178932 A1).
Regarding claim 2, Chun discloses the apparatus of claim 1, but fails to disclose wherein to generate the modified first image frame and the modified second image frame comprises to generate the modified first image frame and the modified second image frame based on one or more characteristics associated with at least one of the first image frame and the second image frame satisfying one or more criteria.
However, Wyatt discloses wherein to generate the modified first image frame and the modified second image frame comprises to generate the modified first image frame and the modified second image frame based on one or more characteristics associated with at least one of the first image frame and the second image frame satisfying one or more criteria. (See Wyatt ¶47, “At 606, the method 600 includes applying scoring criteria to the reference image to generate an image conversion score for the reference image. For example, the scoring criteria may be based at least partially on a saturation level of the reference image, edge definition, or another characteristic of the reference image. In some embodiments, the scoring criteria are based on a histogram of one or more characteristics of the reference image.” Further see Wyatt ¶48, “At 608, the method 600 includes comparing the image conversion score to one or more threshold values. Such comparisons yield either an affirmative output or a negative output.” Further see Wyatt ¶50, “At 612, the method 600 includes converting the reference image of a first image format.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply Wyatt’s histogram-based or saturation-level thresholding to Chun’s color space conversion of an image, so that the conversion is performed only when the image characteristics satisfy the criteria. This can be done using known engineering techniques, with a reasonable expectation of success. The motivation for doing so, as disclosed by Wyatt ¶27, is that it is thereby “determined whether a reference image is a suitable candidate for conversion.”
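For illustration only, a minimal sketch of the combination as applied above: score a frame on a simple characteristic, compare the score to a threshold, and convert the color space only on an affirmative output. The function names and the threshold value are hypothetical, the saturation proxy stands in for Wyatt’s actual scoring criteria, and rgb_to_lab refers to the conversion sketch given earlier in this action.

```python
import numpy as np

def conversion_score(rgb: np.ndarray) -> float:
    # Crude saturation proxy: per-pixel channel spread, averaged over the
    # frame. Wyatt's histogram-based criteria would be substituted here.
    return float((rgb.max(axis=-1) - rgb.min(axis=-1)).mean())

def maybe_convert(rgb: np.ndarray, threshold: float = 0.1) -> np.ndarray:
    # Affirmative output: convert the frame. Negative output: pass it
    # through unmodified. The threshold value is hypothetical.
    if conversion_score(rgb) >= threshold:
        return rgb_to_lab(rgb)  # from the earlier color-conversion sketch
    return rgb
```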
Regarding claim 3, Chun and Wyatt disclose the apparatus of claim 2, wherein: the one or more characteristics comprise one or more of: a brightness, a signal-to-noise ratio (SNR), or a color histogram, and the one or more criteria comprise one or more of: a brightness threshold; an SNR threshold; or a color histogram threshold. (See the rejection of claim 2, whereby Wyatt discloses that a histogram threshold or saturation threshold can be used to determine whether a color conversion should occur.)
Regarding claim 14, Chun and Wyatt disclose the method of claim 13, wherein generating the modified first image frame and the modified second image frame comprises generating the modified first image frame and the modified second image frame based on one or more characteristics associated with at least one of the first image frame and the second image frame satisfying one or more criteria. (See the rejection of claim 2, as it is equally applicable to claim 14.)
Regarding claim 15, Chun and Wyatt disclose the method of claim 14, wherein: the one or more characteristics comprise one or more of: a brightness, a signal-to-noise ratio (SNR), or a color histogram, and the one or more criteria comprise one or more of: a brightness threshold; an SNR threshold; or a color histogram threshold. (See the rejection of claim 3, as it is equally applicable to claim 15.)
Claims 6 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Chun et al. (US Pub. No. 2022/0164596 A1) in view of Lin et al. (US Pub. No. 2021/0374908 A1).
Regarding claim 6, Chun discloses the apparatus of claim 1, but fails to disclose wherein a feature size of the first plurality of features and the second plurality of features is based on at least one of a first frame resolution associated with the first image frame or a second frame resolution associated with the second image frame.
However, Lin discloses wherein a feature size of the first plurality of features and the second plurality of features is based on at least one of a first frame resolution associated with the first image frame or a second frame resolution associated with the second image frame. (See Lin ¶17, “In some embodiments, the super-resolution deep learning network model obtained from public data uses images with a 90-degree field of view and 2K resolution as its training images, and the size of its feature filter is 3*3 pixels. … However, a feature filter with the size of 3*3 pixels is used to capture a second image with a 360-degree field of view and 4K resolution (the content of the second image is the same as the first image, only the resolution is different), only a part of the facial features can be captured, … the processing method of the present invention adjusts the size of the feature filter to 5*5 pixels, so that the feature filter can, for example, capture the complete facial features.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply Lin’s teaching of extracting smaller features from a lower-resolution image and larger features from a higher-resolution image to Chun’s extraction of image features. This can be done using known engineering techniques, with a reasonable expectation of success. The motivation for doing so is that extracting smaller features from a lower-resolution image allows the comparison to focus on smaller local structures, whereas larger features are required to capture the broad context of a high-resolution scene.
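A hedged sketch of the combined teaching: choose the feature (filter) size from the frame resolution, using Lin’s 3*3 and 5*5 example sizes. The exact breakpoint below is an assumption for illustration, not a value disclosed by Lin.

```python
def feature_size_for(frame_width: int) -> int:
    # Larger filters for higher-resolution frames, per Lin's example of
    # moving from 3x3 (2K) to 5x5 (4K). The breakpoint is assumed.
    if frame_width >= 3840:   # roughly 4K and above
        return 5
    return 3                  # roughly 2K and below
```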
Regarding claim 18, Chun and Lin disclose the method of claim 13, wherein a feature size of the first plurality of features and the second plurality of features is based on at least one of a first frame resolution associated with the first image frame or a second frame resolution associated with the second image frame. (See the rejection of claim 6, as it is equally applicable to claim 18.)
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Chun et al. (US Pub. No. 2022/0164596 A1) in view of Hirohata (US Pub. No. 2009/0067724 A1).
Regarding claim 8, Chun discloses the apparatus of claim 6, but fails to disclose wherein the first frame resolution and the second frame resolution comprise a same frame resolution.
However, Hirohata discloses wherein the first frame resolution and the second frame resolution comprise a same frame resolution. (See Hirohata ¶163, “As such, whether the image of the input image data is identical with the reference image or not is determined by comparing features of the image with that of the reference image. In this case, when resolutions of the input image data and reference image are set to the same resolution, accuracy of the determination can be improved.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply Hirohata’s matching of two images at the same resolution to Chun’s matching of two images. This can be done using known engineering techniques, with a reasonable expectation of success. The motivation for doing so is that feature matching algorithms perform better when the scale difference between the two images is minimal.
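As a sketch of the combined teaching, both frames may be resampled to a common resolution before feature extraction. PIL is used purely for illustration; any resampler would serve, and the choice to downscale to the smaller frame is an assumption.

```python
from PIL import Image

def to_common_resolution(a: Image.Image, b: Image.Image):
    # Resize each dimension to the smaller of the two frames so that
    # features are extracted and compared at the same scale.
    target = (min(a.width, b.width), min(a.height, b.height))
    return a.resize(target, Image.BILINEAR), b.resize(target, Image.BILINEAR)
```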
Claims 9 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Chun et al. (US Pub. No. 2022/0164596 A1) in view of Dos Santos Mendonca (US Pub. No. 2016/0307334 A1).
Regarding claim 9, Chun discloses the apparatus of claim 1, but fails to disclose wherein to determine the at least one first matching cost based on the first plurality of features and the second plurality of features comprises to: for each first feature of one or more first features of the first plurality of features, determine an individual matching cost between the first feature and each second feature of one or more second features of the second plurality of features.
However, Dos Santos Mendonca discloses wherein to determine the at least one first matching cost based on the first plurality of features and the second plurality of features comprises to: for each first feature of one or more first features of the first plurality of features, determine an individual matching cost between the first feature and each second feature of one or more second features of the second plurality of features. (See Dos Santos Mendonca ¶32, “Process block 302 includes deriving a cost function for matching each feature in the first set of features with each feature of the second set of features. In one example, the derived cost function is the cost of determining every plausible match between the two sets of features.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply Dos Santos Mendonca’s matching of each feature of one image with every feature of another to Chun’s feature matching of images. This can be done using known engineering techniques, with a reasonable expectation of success. The motivation for doing so is to find corresponding key points in two different images, or to determine their relationship.
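A minimal sketch of the all-pairs cost derivation as combined above: one individual cost per (first feature, second feature) pair. Euclidean distance between descriptor vectors stands in for whatever cost function is actually used, and the array shapes are assumptions for illustration.

```python
import numpy as np

def pairwise_costs(feats_a: np.ndarray, feats_b: np.ndarray) -> np.ndarray:
    """feats_a: (M, D) descriptors, feats_b: (N, D) descriptors.
    Returns an (M, N) matrix of individual matching costs."""
    diff = feats_a[:, None, :] - feats_b[None, :, :]
    return np.linalg.norm(diff, axis=-1)
```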
Regarding claim 19, Chun and Dos Santos Mendonca disclose the method of claim 18, wherein the feature size of the first plurality of features and the second plurality of features comprises: a first feature size when at least one of the first frame resolution or the second frame resolution does not satisfy a first display resolution; a second feature size when the first frame resolution and the second frame resolution satisfy a second display resolution; and a third feature size when the first frame resolution and the second frame resolution satisfy the first display resolution and at least one of the first frame resolution or the second frame resolution do not satisfy the second display resolution. (See the rejection of claim 9, as it is equally applicable to claim 19.)
Allowable Subject Matter
Claims 4-5, 7, 10-12, and 16-17 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Regarding claim 4, the apparatus of claim 1, wherein the one or more processors are configured to cause the apparatus to: obtain a third image frame and a fourth image frame; extract a first luma representation of the third image frame and a second luma representation of the fourth image frame based on one or more characteristics associated with at least one of the third image frame or the fourth image frame not satisfying one or more criteria; extract a third plurality of features from the first luma representation; extract a fourth plurality of features from the second luma representation; and determine at least one second matching cost based on the third plurality of features and the fourth plurality of features. (The disclosed prior art of record fails to disclose the limitations of this claim.)
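For reference, a “luma representation” conventionally denotes a weighted combination of the RGB channels. The BT.601 weighting below is an assumed convention for illustration only; the claim does not fix the weights.

```python
import numpy as np

def luma(rgb: np.ndarray) -> np.ndarray:
    # BT.601 luma weights: an assumed convention, since the claim does
    # not specify which weighting yields the luma representation.
    return rgb @ np.array([0.299, 0.587, 0.114])
```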
Regarding claim 7, the apparatus of claim 6, wherein the feature size of the first plurality of features and the second plurality of features comprises: a first feature size when at least one of the first frame resolution or the second frame resolution does not satisfy a first display resolution; a second feature size when the first frame resolution and the second frame resolution satisfy a second display resolution; and a third feature size when the first frame resolution and the second frame resolution satisfy the first display resolution and at least one of the first frame resolution or the second frame resolution do not satisfy the second display resolution. (The disclosed prior art of record fails to disclose the limitations of this claim.)
Regarding claim 10, the apparatus of claim 9, wherein to determine the individual matching cost between the first feature and each second feature comprises to, for each second feature: determine a Hamming distance between a first lightness channel associated with the first feature and a second lightness channel associated with the second feature; determine a first Euclidean distance between a first chroma channel a* associated with the first feature and a second chroma channel a* associated with the second feature; determine a second Euclidean distance between a first chroma channel b* associated with the first feature and a second chroma channel b* associated with the second feature; and compute the individual matching cost based on the Hamming distance, the first Euclidean distance, and the second Euclidean distance. (The disclosed prior art of record fails to disclose the limitations of this claim.)
Regarding claim 12, the apparatus of claim 9, wherein to determine the individual matching cost between the first feature and each second feature comprises to, for each second feature: determine a Hamming distance between a first lightness channel associated with the first feature and a second lightness channel associated with the second feature; determine a Euclidean distance between a first square root value associated with the first feature and a second square root value associated with the second feature, wherein: the first square root value is based on a first chroma channel a* associated with the first feature and a first chroma channel b* associated with the first feature; and the second square root value is based on a second chroma channel a* associated with the second feature and a first chroma channel b* associated with the first feature; and compute the individual matching cost based on the Hamming distance and the Euclidean distance. (The disclosed prior art of record fails to disclose the limitations of this claim.)
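For clarity, a non-authoritative sketch of the arithmetic recited in claims 10 and 12 follows. How the lightness channels are binarized for the Hamming distance, how each feature stores its L/a*/b* data, and the unit weighting of the terms are all assumptions for illustration; the sketch also reads the second square root value of claim 12 as using the second feature’s b* channel.

```python
import numpy as np

def hamming(l1: np.ndarray, l2: np.ndarray) -> float:
    # Binarize each lightness channel against its own median, then count
    # positions where the bits differ. The binarization rule is assumed.
    return float(np.count_nonzero((l1 > np.median(l1)) != (l2 > np.median(l2))))

def cost_claim_10(f1: dict, f2: dict) -> float:
    d_l = hamming(f1["L"], f2["L"])                 # lightness term
    d_a = float(np.linalg.norm(f1["a"] - f2["a"]))  # Euclidean on a*
    d_b = float(np.linalg.norm(f1["b"] - f2["b"]))  # Euclidean on b*
    return d_l + d_a + d_b                          # assumed unit weights

def cost_claim_12(f1: dict, f2: dict) -> float:
    d_l = hamming(f1["L"], f2["L"])
    c1 = np.sqrt(f1["a"] ** 2 + f1["b"] ** 2)       # square root value (chroma)
    c2 = np.sqrt(f2["a"] ** 2 + f2["b"] ** 2)
    return d_l + float(np.linalg.norm(c1 - c2))
```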
Regarding claims 5 and 11, these claims are objected to since they depend from objected-to claims.
Regarding claims 16-17, these claims are objected to since they are similar to objected-to claims 4-5.
Conclusion
Listed below is the prior art made of record and not relied upon that is considered pertinent to applicant’s disclosure.
Xiao et al. (US Pub. No. 2020/0084320 A1): In some examples, print quality diagnosis may include aligning corresponding characters between a scanned image of a printed physical medium aligned to a master image associated with generation of the printed physical medium to generate a common mask. Print quality diagnosis may further include determining, for each character of the scanned image, an average value associated with pixels within the common mask, and determining, for each corresponding character of the master image, the average value associated with pixels within the common mask. Further, print quality diagnosis may include determining, for each character of the common mask, a metric between the average values associated with the corresponding characters in the scanned and master images.
Aizaki (US Pub. No. 2022/0036570 A1): An image processing apparatus includes circuitry that estimates a first geometric transformation parameter for aligning first output image data with the original image data, the first output image data being acquired by reading a first output result, and a second geometric transformation parameter for aligning second output image data with the original image data, the second output image data being acquired by reading a second output result; associates, based on the first and the second geometric transformation parameters, combinations of color components of the first and the second output image data, corresponding to pixels of the original image data, to generate pixel value association data; determines, based on the pixel value association data, a mapping for estimating color drift between the first output image data and the second output image data from the original image data; and subjects the original image data to color conversion based on the mapping.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID PERLMAN, whose telephone number is (571) 270-1417.
The examiner can normally be reached Monday - Friday, 10:00 am - 6:30 pm.
Examiner interviews are available via telephone and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chineyere Wills-Burns, can be reached at (571) 272-9752. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at (866) 217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call (800) 786-9199 (IN USA OR CANADA) or (571) 272-1000.
/DAVID PERLMAN/Primary Examiner, Art Unit 2673