Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 7-13, and 16-19 are rejected under 35 U.S.C. 103 as being unpatentable over Katzman et al. (U.S. Patent No. 11,900,538), referred to herein as Katzman, in view of Maeda et al. (U.S. Patent Application Publication No. 2021/0334938), referred to herein as Maeda.
Regarding claim 1, Katzman teaches a display apparatus, comprising: a memory configured to store at least one instruction; and one or more processors configured to execute the at least one instruction to cause the display apparatus to (fig 1; column 6, lines 46-51):
obtain weight value information for a plurality of clusters by inputting an input image in a prediction neural network model (figs 3 and 4; column 5, lines 15-25 and 40-55; a prediction neural network obtains weight values for a plurality of clusters);
obtain an adaptive neural network model by respectively applying the weight value information to a plurality of neural network models corresponding to the plurality of clusters (figs 3 and 4; column 5, lines 40-55; column 6, lines 10-17; column 7, lines 49-55; weights are applied to corresponding neural network models to obtain an adaptive neural network); and
obtain an output image with improved picture quality by inputting the input image in the adaptive neural network model (column 4, lines 1-3; column 8, line 60 through column 9, line 2; column 12, lines 27-29; output image data with improved quality is obtained via the adaptive neural network),
wherein the prediction neural network model is a model trained to output a plurality of probability values for the plurality of clusters based on loss information for a plurality of output images obtained by inputting a plurality of learning images into the plurality of neural network models (fig 3; column 6, lines 10-17; column 7, lines 49-64; column 11, lines 1-11 and 37-56; column 12, lines 18-27; the prediction neural network is trained to output probability values for the clusters based on loss information obtained by comparing current and input images provided to the neural networks).
Katzman does not explicitly teach that the clusters are classified according to picture quality.
However, in a similar field of endeavor, Maeda teaches a display apparatus configured to utilize neural networks to classify clusters of an image to obtain an output image with improved picture quality (figs 3 and 4; paragraph 34, lines 6-14; paragraph 39, lines 1-8), wherein the clusters are classified according to picture quality (figs 5A-5G; paragraph 48; paragraph 57; paragraph 58, lines 1-30).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the picture-quality-based cluster classification of Maeda with the clustering of Katzman because doing so facilitates more efficient and more accurate training, thereby producing higher quality output image data (see, for example, Maeda, paragraph 68, the last 19 lines).
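For illustration only, the following Python sketch shows one plausible reading of the claim 1 pattern discussed above: a prediction model produces per-cluster weights for an input image, and an adaptive model is formed by blending per-cluster model parameters with those weights before producing the output image. It is not drawn from Katzman or Maeda, and every name and structure in it (predict_cluster_weights, cluster_models, the 3x3 filter stand-ins) is a hypothetical assumption made for the sketch.

import numpy as np

rng = np.random.default_rng(0)

def predict_cluster_weights(image, num_clusters=3):
    # Stand-in for the prediction network: map an input image to a softmax
    # distribution with one probability (weight) per cluster.
    logits = np.array([image.mean() * (k + 1) for k in range(num_clusters)])
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

# Hypothetical per-cluster "models": simple 3x3 filters with their own parameters.
cluster_models = [rng.standard_normal((3, 3)) for _ in range(3)]

def adaptive_model(image, weights, models):
    # Blend the per-cluster parameters by the predicted weights, then apply the
    # blended (adaptive) filter to the input image as a valid 3x3 correlation.
    blended = sum(w * m for w, m in zip(weights, models))
    h, w = image.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(image[i:i + 3, j:j + 3] * blended)
    return out

input_image = rng.random((8, 8))
weights = predict_cluster_weights(input_image)
output_image = adaptive_model(input_image, weights, cluster_models)
print("cluster weights:", np.round(weights, 3), "| output shape:", output_image.shape)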
Regarding claim 2, Katzman in view of Maeda teaches the display apparatus of claim 1, wherein the one or more processors are configured to execute the at least one instruction to cause the display apparatus to: obtain the plurality of output images by inputting the plurality of learning images into the plurality of neural network models; obtain the loss information by comparing a plurality of picture quality improved images corresponding to the plurality of learning images with the plurality of output images; and input the loss information into the prediction neural network model (Katzman, fig 3; column 6, lines 10-17; column 7, lines 49-64; column 11, lines 1-11 and 37-56; column 12, lines 18-27).
Regarding claim 3, Katzman in view of Maeda teaches the display apparatus of claim 2, wherein the one or more processors are configured to execute the at least one instruction to cause the display apparatus to: obtain first loss information corresponding to a first learning image by inputting the first learning image from among the plurality of learning images into the plurality of neural network models; classify the first learning image into a first cluster from among the plurality of clusters based on a first loss value of less than a threshold value from among a first plurality of loss values in the first loss information (Katzman, figs 3 and 4; column 5, lines 45-55; column 7, lines 49-64; column 11, lines 1-11 and 37-56; column 12, lines 18-27; the first iteration);
obtain second loss information corresponding to a second learning image by inputting the second learning image from among the plurality of learning images in the plurality of neural network models; and classify the second learning image into a second cluster from among the plurality of clusters based on a second loss value of less than the threshold value from among a second plurality of loss values in the second loss information (Katzman, figs 3 and 4; column 5, lines 45-55; column 7, lines 49-64; column 11, lines 1-11 and 37-56; column 12, lines 18-27; the second iteration).
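As an illustration of the loss-threshold classification recited in claims 2 and 3, the short Python sketch below assigns a learning image to a cluster whose model yields a loss below a threshold. The scalar-gain "models", the mean-squared-error loss, and the threshold value are hypothetical placeholders chosen for the sketch, not the structure of either cited reference.

import numpy as np

rng = np.random.default_rng(1)

def mse(a, b):
    return float(np.mean((a - b) ** 2))

# Hypothetical per-cluster models: simple per-pixel gain factors.
cluster_gains = [0.8, 1.0, 1.2]

def classify_by_loss(learning_image, improved_image, gains, threshold):
    # One loss per cluster model; assign the image to a cluster whose loss falls
    # below the threshold (falling back to the minimum-loss cluster otherwise).
    losses = [mse(g * learning_image, improved_image) for g in gains]
    below = [k for k, loss in enumerate(losses) if loss < threshold]
    return (below[0] if below else int(np.argmin(losses))), losses

learning_image = rng.random((8, 8))
improved_image = 1.15 * learning_image   # stand-in "picture quality improved" target
cluster, losses = classify_by_loss(learning_image, improved_image, cluster_gains, 0.01)
print("assigned cluster:", cluster, "| losses:", np.round(losses, 4))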
Regarding claim 4, Katzman in view of Maeda teaches the display apparatus of claim 3, wherein the one or more processors are configured to execute the at least one instruction to cause the display apparatus to: train a first neural network model corresponding to the first cluster based on a first plurality of learning images classified into the first cluster and a first picture quality improved image corresponding to the first plurality of learning images; and train a second neural network model corresponding to the second cluster based on a second plurality of learning images classified into the second cluster and a second picture quality improved image corresponding to the second plurality of learning images (Katzman, fig 3, the functional training loop in engine 104; column 5, lines 45-55; column 6, lines 10-17; column 7, lines 38-45 and 49-64; column 12, lines 23-26; the first and second iterations).
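Continuing the illustration for the per-cluster training pattern of claim 4, the sketch below fits each cluster's (hypothetical) model only on the learning-image and picture-quality-improved-image pairs assigned to that cluster. The scalar-gain models and the least-squares fit are stand-ins assumed for the sketch, not taken from Katzman or Maeda.

import numpy as np

rng = np.random.default_rng(2)

def fit_gain(images, targets):
    # Least-squares fit of a scalar gain g minimizing ||g*x - y||^2 over one cluster.
    x = np.concatenate([im.ravel() for im in images])
    y = np.concatenate([t.ravel() for t in targets])
    return float(np.dot(x, y) / np.dot(x, x))

# Hypothetical clusters: each maps to (learning image, improved image) pairs, with a
# different underlying enhancement per cluster so the trained parameters differ.
clusters = {
    0: [(x, 1.0 * x) for x in [rng.random((4, 4)) for _ in range(3)]],
    1: [(x, 1.2 * x) for x in [rng.random((4, 4)) for _ in range(3)]],
}

trained = {k: fit_gain([x for x, _ in pairs], [y for _, y in pairs])
           for k, pairs in clusters.items()}
print("per-cluster trained gains:", {k: round(g, 3) for k, g in trained.items()})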
Regarding claim 7, Katzman in view of Maeda teaches the display apparatus of claim 1, wherein a first number of the plurality of neural network models corresponds to a second number of the plurality of clusters (Katzman, fig 4; column 5, lines 40-55; column 7, lines 38-55).
Regarding claim 8, Katzman in view of Maeda teaches the display apparatus of claim 1, wherein the weight value information comprises the plurality of probability values, and wherein the one or more processors are configured to execute the at least one instruction to cause the display apparatus to obtain the adaptive neural network model by applying different weight values to the plurality of neural network models based on the plurality of probability values (Katzman, column 5, lines 40-55; column 6, lines 10-17; column 7, lines 48-64).
Regarding claim 9, Katzman in view of Maeda teaches the display apparatus of claim 1, wherein the memory is configured to store a plurality of picture quality improved images corresponding to the plurality of learning images, and wherein the plurality of picture quality improved images are super resolution images (Maeda, paragraph 49; the motivation to combine is similar to that discussed above in the 103 rejection of claim 1).
Regarding claim 10, the limitations of this claim substantially correspond to the limitations of claim 1; thus it is rejected on similar grounds.
Regarding claims 11-13 and 16-18, the limitations of these claims substantially correspond to the limitations of claims 2-4 and 7-9, respectively; thus they are rejected on similar grounds as their corresponding claims.
Regarding claim 19, the limitations of this claim substantially correspond to the limitations of claim 1; thus it is rejected on similar grounds.
Allowable Subject Matter
Claims 5, 6, 14, and 15 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter:
Regarding claim 5, the prior art teaches the display apparatus of claim 4, as shown above, and further teaches iteratively computing loss values until a threshold is reached in order to produce quality improved images, among other claim features.
In the context of claims 1, 2, 3, 4, and 5 as a whole, however, the prior art does not appear to teach the display apparatus of claim 4, wherein the one or more processors are configured to execute the at least one instruction to cause the display apparatus to: obtain a third loss value corresponding to the first learning image by inputting the first learning image into the trained first neural network model; obtain a fourth loss value corresponding to the first learning image by inputting the first learning image into the trained second neural network model; re-classify the first learning image into a third cluster from among the plurality of clusters based on a fifth loss value of less than the threshold value from among the third loss value and the fourth loss value; obtain a sixth loss value corresponding to the second learning image by inputting the second learning image into the trained first neural network model; obtain a seventh loss value corresponding to the second learning image by inputting the second learning image into the trained second neural network model; re-classify the second learning image into a fourth cluster from among the plurality of clusters based on an eighth loss value of less than the threshold value from among the sixth loss value and the seventh loss value; re-train the first neural network model based on a third plurality of learning images re-classified into the first cluster and a third picture quality improved image corresponding to the third plurality of learning images; and re-train the second neural network model based on a fourth plurality of learning images re-classified into the second cluster and a fourth picture quality improved image corresponding to the fourth plurality of learning images re-classified into the second cluster. Accordingly, claim 5 comprises allowable subject matter.
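Purely as an illustration of the alternating pattern recited in claim 5 (re-classify each learning image according to the lowest post-training loss, then re-train each cluster's model on the images re-classified into it), the Python sketch below runs a few such iterations with hypothetical scalar-gain models. It is an assumption-laden sketch for readability only, not a characterization of the applicant's claimed implementation or of the cited art.

import numpy as np

rng = np.random.default_rng(3)

def mse(a, b):
    return float(np.mean((a - b) ** 2))

def fit_gain(pairs):
    # Re-train one cluster model (a scalar gain) on the image pairs assigned to it.
    x = np.concatenate([p[0].ravel() for p in pairs])
    y = np.concatenate([p[1].ravel() for p in pairs])
    return float(np.dot(x, y) / np.dot(x, x))

# Hypothetical training data: learning images with two underlying enhancement levels.
pairs = [(x, 1.0 * x) for x in [rng.random((4, 4)) for _ in range(4)]] + \
        [(x, 1.3 * x) for x in [rng.random((4, 4)) for _ in range(4)]]

gains = [0.95, 1.2]                      # initial per-cluster models
for _ in range(3):                       # alternate re-classification and re-training
    assignments = [[] for _ in gains]
    for x, y in pairs:
        losses = [mse(g * x, y) for g in gains]
        assignments[int(np.argmin(losses))].append((x, y))   # re-classify by lowest loss
    gains = [fit_gain(a) if a else g for a, g in zip(assignments, gains)]  # re-train

print("re-trained per-cluster gains:", [round(g, 3) for g in gains])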
Regarding claim 6, this claim comprises allowable subject matter inasmuch as it depends from claim 5, which comprises allowable subject matter.
Regarding claims 14 and 15, the limitations of these claims substantially correspond to the limitations of claims 5 and 6, respectively, and therefore comprise allowable subject matter for similar reasons as those discussed above.
Conclusion
The following prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Li (U.S. Patent Application Publication No. 2019/0130574); Image processing method and image processing device.
Zimmer (U.S. Patent Application Publication No. 2020/0250794); Method, device, and computer program for improving the reconstruction of dense super-resolution images from diffraction-limited images acquired by single molecule localization microscopy.
Bai (U.S. Patent Application Publication No. 2021/0065337); Method and image processing device for image super resolution, image enhancement, and convolutional neural network model training.
Moon (U.S. Patent Application Publication No. 2021/0104018); Method and apparatus for enhancing resolution of image.
Long (U.S. Patent Application Publication No. 2021/0174604); Systems and methods for constructing a three-dimensional model from two-dimensional images.
Mason (U.S. Patent No. 12,217,402); Deep learning based image enhancement for additive manufacturing.
Kupryjanow (U.S. Patent Application Publication No. 2022/0124433); Method and system of neural network dynamic noise suppression for audio processing.
Finlay (U.S. Patent Application Publication No. 2024/0354553); Method and data processing system for lossy image or video encoding, transmission and decoding.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID T WELCH whose telephone number is (571)270-5364. The examiner can normally be reached on Monday-Thursday, 8:30-5:30 EST, and alternate Fridays, 9:00-2:30 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Xiao Wu, can be reached on 571-272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
DAVID T. WELCH
Primary Examiner
Art Unit 2613
/DAVID T WELCH/Primary Examiner, Art Unit 2613