DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 05/16/2024 was filed after the mailing date of the application. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-5, 8-11, and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Abramov (U.S. Patent App. Pub. No. 2020/0380763).
As per claim 1, as shown in Fig. 1, Abramov teaches an information processing device comprising:
a control unit that sets number of samples of a light ray used in a case where a rendered image is generated by utilization of ray tracing (¶ [26], setting additional samples using ray tracing),
generates, by using the number of samples, the rendered image used as training data of machine learning (¶ [26], generating high quality rendered image 155 using the additional samples. The rendered image 155 may be fed into the machine learning model(s) 110 for a subsequent pass through the loop. See also ¶ [49] referring to Fig. 2), and
adjusts the number of samples in accordance with accuracy of a machine learning model (further addressed below) of a case where the rendered image is learned as the training data and the number of samples used in generation of the rendered image (Fig. 2, ¶ [50], “For example, the adaptive renderer 150 of FIG. 1 may identify the number of additional samples to render for each pixel from a sampling map, render the additional samples per pixel, and average them into the rendered image 155 from the previous iteration to generate an updated version of the rendered image (e.g., the rendered image 155 of FIG. 1)”. The adjusted number of samples is described in ¶ [51-52] and depicted in Figs. 3 and 4, where the number of samples is reduced by half).
Abramov does not explicitly teach adjusts the number of samples in accordance with accuracy of a machine learning model. However, according to the Applicant's specification at ¶ [23-27], the accuracy of the machine learning model is based on the reduction of noise in the image utilizing the number of samples. This is also taught by Abramov, as depicted in Fig. 4, ¶ [52], “The image of FIG. 4 was rendered with 256 samples per pixel on average, half the samples used for the image of FIG. 3, and yet the magnified region of FIG. 4 illustrates substantially reduced noise around the lamp and wall clock”. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method taught by Abramov such that the noise reduction of the denoised image serves as the accuracy of the machine learning model so as to provide the same functionality, the advantage of which is to produce a higher quality output using fewer ray-traced samples, thereby resulting in faster convergence time, saving computational resources, and/or significantly accelerating the rendering process (see ¶ [52]).
As per claim 2, Abramov also teaches wherein the control unit adjusts the number of samples by using a prediction model of the number of samples (¶ [21]).
As per claim 3, as addressed, Abramov also teaches wherein the control unit generates the prediction model of the number of samples according to the rendered image and an evaluation result of the accuracy (¶ [23], i.e., “Generally, the loop may be iterated, predicting successive higher quality denoised images, until one or more completion criteria are satisfied”).
As per claim 4, Abramov further teaches wherein the control unit adjusts the number of samples by using the machine learning model that is already learned (see Fig. 1, ¶ [35], because the image has already been fed forward through the machine learning model(s) 110 to determine whether to stop the loop and use the denoised image 115 as the output image 160).
As per claim 5, as addressed above, Abramov does teach wherein the control unit generates the machine learning model by using the rendered image (see further Fig. 6).
As per claim 8, as addressed in claim 1, Abramov substantially teaches wherein the control unit adjusts the number of samples in such a manner that the number of samples becomes a smaller value in the number of samples with the accuracy being equal to or higher than a target (i.e., using half the number of samples to obtain a higher quality image).
As per claim 9, as also addressed, Abramov does further teach wherein the control unit sets the rendered image generated with the adjusted number of samples as output data (¶ [35], i.e., based on the completion criteria, the denoised image 115 is determined as the output image 160. See also ¶ [39] for completion criteria).
As per claim 10, Abramov also teaches wherein the control unit adjusts the number of samples for each pixel of the rendered image (¶ [49], “In some embodiments, the distribution of additional samples may be represented in a sampling map storing an integer value for each pixel, indicating the number of additional samples to render for each pixel”).
As per claim 11, Abramov also teaches wherein the control unit adjusts the number of samples for all pixels of the rendered image (¶ [22], “For example, given a particular sampling budget (e.g., 10 samples/pixel on average), the sampling budget may be distributed across all pixels to reduce uncertainty”).
Claim 13, which is similar in scope to claim 1 as addressed above, is thus rejected under the same rationale.
Claims 6 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Abramov (U.S. Patent App. Pub. No. 2020/0380763) in view of Ha et al. (U.S. Patent App. Pub. No. 2021/0074052, “Ha”).
As per claim 6, Abramov does not explicitly teach wherein the control unit increases the number of samples in a case where the accuracy is lower than a target. Abramov, however, does teach adjusting the number of samples as addressed in claim 1.
In a very similar method of rendering an image utilizing ray tracing samples of image data (see Figs. 2-4, ¶ [19-21]), Ha teaches the above features, i.e., wherein the control unit increases the number of samples in a case where the accuracy is lower than a target (¶ [80], “Generally, when a number of sample points extracted from a 3D scene increases, a number of errors occurring in a restoration of the second rendering result image 460 decreases. When the number of sample points decreases, the number of errors occurring in the restoration of the second rendering result image 460 increases”, implying increasing the number of samples when the number of errors is high, or the accuracy is lower than a target).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the method as taught by Ha into the method as taught by Abramov as addressed above, the advantage of which is to minimize the difference between the first rendered image and the second rendered image during the training process of adjusting the number of samples (¶ [71]).
As per claim 7, as addressed in claim 6, the combined teachings of Abramov and Ha also include wherein the control unit decreases the number of samples in a case where the accuracy is equal to or higher than a target (Ha, ¶ [80]). Thus, claim 7 would have been obvious over the combined references for the reason above.
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Abramov (U.S. Patent App. Pub. No. 2020/0380763) in view of Mitchell et al. (U.S. Patent App. Pub. No. 2017/0365089, “Mitchell”).
As per claim 12, Abramov does not expressly teach wherein the number of samples is number of samples in tracing using a Monte Carlo method.
However, in a very similar method of adaptive rendering (see ¶ [15]), Mitchell teaches the number of samples is number of samples in tracing using a Monte Carlo method (see ¶ [14-15]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the method as taught by Mitchell with the method as taught by Abramov as addressed above, the advantage of which is to improve noise filtering.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Hau H. Nguyen whose telephone number is: 571-272-7787. The examiner can normally be reached on MON-FRI from 8:30-5:30.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tammy Goddard, can be reached on (571) 272-7773.
The fax number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
/HAU H NGUYEN/Primary Examiner, Art Unit 2611