DETAILED ACTION
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f), because the claim limitation uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation is: “processing unit to render an image …” in claims 24 and 35. Because this claim limitation is being interpreted under 35 U.S.C. 112(f), it is being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this limitation interpreted under 35 U.S.C. 112(f), applicant may: (1) amend the claim limitation to avoid it being interpreted under 35 U.S.C. 112(f) (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation recites sufficient structure to perform the claimed function so as to avoid it being interpreted under 35 U.S.C. 112(f).
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
Claims 24 and 35 are rejected under 35 U.S.C. 112(a) because each claim purports to invoke 35 U.S.C. 112(f) but fails to recite a combination of elements as required by that statutory provision, and thus cannot rely on the specification to provide the structure, material, or acts to support the claimed function.
As such, the claim recites a function that has no limits and covers every conceivable means for achieving the stated function, while the specification discloses at most only those means known to the inventor. Accordingly, the disclosure is not commensurate with the scope of the claim.
In particular, claims 24 and 35 are considered single-means claims. See MPEP 2181(V).
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 24-40 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Vogels (US 2020/0184313).
Referring to claims 24, 35 and 41, Vogels discloses a processor (fig. 25, system 2500) comprising processing circuitry (fig. 14, adaptive sampling system 1400; paras.0195-0197) to:
render an image using ray-traced samples (paras.0053-0054, rendering with path tracing of a ray through pixels on an image; fig. 16, image samples 1602) distributed based on a representation of uncertainty (figs. 26A/26B, predicted value with high/low uncertainty; para. 0057, random sampled light path; fig. 16, Monte Carlo path tracing 1602; para.0294) of a predicted distribution of a pixel value (paras.0093-0106, reconstruct denoised values using a prediction convolutional network; fig. 16, respective error value for each pixel 1606) of a corresponding denoised image (fig. 16, denoised image 1604/1606).
As to claims 25, 36 and 42, Vogels discloses the apparatus of claim 24, wherein the processing circuitry is to use a machine learning model (paras.0063-0065, machine learning) to predict a mean of the predicted distribution of the pixel value of the corresponding denoised image (para.0057, mean of samples from pixel’s distribution), and an uncertainty value representing the uncertainty of the predicted distribution of the pixel value (para.0058, variance of MC estimate).
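For illustration only (not drawn from the cited disclosure; the function name and sample layout are hypothetical), the per-pixel mean and the variance of the Monte Carlo estimate referenced above (Vogels paras.0057-0058) can be sketched as:

```python
from statistics import fmean, variance

def pixel_mean_and_uncertainty(samples):
    """Estimate a pixel value and the uncertainty of that estimate
    from its Monte Carlo path-tracing samples (a list of floats)."""
    mean = fmean(samples)  # MC estimate of the pixel value
    # The variance of the MC estimate shrinks as 1/n with the sample
    # count, so high-variance pixels indicate where more samples help.
    var_of_estimate = variance(samples) / len(samples)
    return mean, var_of_estimate

# Four hypothetical radiance samples for one pixel
m, u = pixel_mean_and_uncertainty([0.2, 0.4, 0.3, 0.5])
```

The returned variance is of the estimator, not of the raw samples, which is why it is divided by the sample count.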
As to claims 26 and 37, Vogels discloses the apparatus of claim 24, wherein the processing circuitry is to use a machine learning model (para. 0063) to store:
an expected value of the predicted distribution of the pixel value in the corresponding denoised image (para.0057, Monte Carlo approximation value), or
an uncertainty value (para.0294, uncertainty) representing the uncertainty of the predicted distribution in an uncertainty map for the corresponding denoised image (para.0058, Monte Carlo estimated variance).
As to claim 27, Vogels discloses the apparatus of claim 24, wherein the processing circuitry is to use a Bayesian neural network (paras.0176-0179, denoiser neural network and Bayesian decision) to jointly predict the corresponding denoised image and uncertainty map that quantifies the representation of uncertainty of the corresponding denoised image (fig. 14, denoised image 1422 through predictor 1430 to sampling map 1440 and renderer 1410).
As to claim 28, Vogels discloses the apparatus of claim 24, wherein the processing circuitry is to use a machine learning model to predict an uncertainty map comprising a channel (fig. 3, output 350/360 RGB channels; para.0085) that represents the uncertainty as: variance (para.0083, variance; paras.0084-0089) of the corresponding denoised image.
As to claims 29 and 38, Vogels discloses the apparatus of claim 24, wherein the processing circuitry is to generate a sample distribution (para. 0239, sample distributions) that distributes the ray-traced samples (paras.0055-0056, ray-traced distributions) based on the representation of uncertainty and a number of previously taken samples (para.0232, previous frame), and allocate a sampling budget (para.0198, e.g., starting with 16 samples per pixel) based on the sample distribution (para.0195, adaptive sampling of noise sample distribution).
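As a sketch only (the proportional-allocation rule and all names are assumptions for illustration, not Vogels' exact method), allocating a sampling budget from a per-pixel uncertainty map might look like:

```python
def allocate_sampling_budget(uncertainty_map, budget):
    """Distribute a fixed budget of ray-traced samples across pixels
    in proportion to each pixel's uncertainty value."""
    total = sum(uncertainty_map.values())
    if total == 0:
        # No uncertainty anywhere: spread the budget evenly.
        per_pixel = budget // len(uncertainty_map)
        return {p: per_pixel for p in uncertainty_map}
    return {p: int(budget * u / total) for p, u in uncertainty_map.items()}

# Three hypothetical pixels; the most uncertain pixel gets most samples.
plan = allocate_sampling_budget({(0, 0): 1, (0, 1): 3, (1, 0): 6}, budget=100)
```

Proportional allocation concentrates new samples where the denoised prediction is least reliable.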
As to claim 30, Vogels discloses the apparatus of claim 24, wherein the processing circuitry is to render the image based on combining an input image (fig. 14, input to denoised image 1422) used to generate the corresponding denoised image with the ray-traced samples (fig. 14, input to noisy image 1412) using a tracked number of rendered samples per pixel (para.0198, such as start with 16 samples per pixel).
As to claim 31, Vogels discloses the processor of claim 24, wherein the processing circuitry is to feed the image for a subsequent pass (para.0198, such as start with 16 samples per pixel, then doubled in next iteration) through a machine learning model (para.0083, machine learning) used to generate the corresponding denoised image.
As to claim 32, Vogels discloses the processor of claim 24, wherein the processing circuitry is to operate an adaptive rendering loop (para.0198, e.g., starting with 16 samples per pixel, then doubled in the next iteration) to render images with successively higher ray-traced sample counts in successive iterations until a completion criterion is satisfied (para.0198, e.g., tripled or quadrupled).
As to claim 33, Vogels discloses the processor of claim 24, wherein the processing circuitry is to operate an adaptive rendering loop to render images with successively higher ray-traced sample counts in successive iterations until a completion criterion is satisfied (para.0198, e.g., starting with 16 samples per pixel, then doubled in the next iteration until quadrupled), wherein the completion criterion comprises: an expiration of a sampling budget (para.0198, tripled or quadrupled).
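A minimal sketch of such an adaptive rendering loop, assuming the doubling schedule of para.0198 and using hypothetical stand-ins for the render and uncertainty functions:

```python
def adaptive_render(render, max_uncertainty, budget, start_spp=16, tol=1e-3):
    """Render with successively higher sample counts per pixel,
    doubling each iteration, until the uncertainty falls below `tol`
    or the sampling budget expires."""
    spp, spent, image = start_spp, 0, None
    while spent + spp <= budget:          # completion criterion: budget
        image = render(spp)               # render pass at current sample count
        spent += spp
        if max_uncertainty(image) < tol:  # completion criterion: quality
            break
        spp *= 2                          # e.g. 16 -> 32 -> 64 samples/pixel
    return image, spent

# Hypothetical stand-ins: "uncertainty" shrinks as 1/spp.
image, spent = adaptive_render(render=lambda spp: 1.0 / spp,
                               max_uncertainty=lambda img: img,
                               budget=100)
```

With a budget of 100, the loop runs at 16 and 32 samples per pixel and then stops, since a third pass at 64 would exceed the budget.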
As to claims 34, 40 and 43, Vogels discloses the processor of claim 24, wherein the processor is comprised in: a computer graphics system (fig. 25, system 2500 with graphics processor 2510).
As to claim 39, Vogels discloses the apparatus of claim 35, wherein the hardware processors are to track a number of rendered samples per pixel (para. 0083, input to Monte Carlo path tracing rendering) and generate the image based on averaging the ray-traced samples (para. 0085, averaged per-pixel data) into a corresponding input image (para.0086, Monte Carlo denoising output image) associated with the denoised image using the number of rendered samples per pixel.
Claims 24 and 35 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Pohl (US 2020/0211157).
Referring to claims 24 and 35, Pohl discloses a processor (fig. 1, system 100; paras. 0146-0148, adaptive ray tracing system with machine learning) comprising a processing unit (fig. 16, GPU 1604; para.0049, machine learning logic) to: [functional language not given patentable weight].
See MPEP 2114(II): “apparatus claims cover what a device is, not what a device does.” The only structural component of the claims is “a processor comprising a processing unit.” The claimed invention appears to be a software program executing on a general-purpose computer system, which may be better suited to computer-readable-medium (CRM) or method claims.
Claims 24, 35 and 41 are rejected under 35 U.S.C. 102 as being anticipated by AAPA (Applicant Admitted Prior Art, Specification paras. 0002-0003).
Referring to claims 24, 35 and 41, AAPA discloses a processor (fig. 25, system 2500) comprising processing circuitry (para.0003, conventional approaches) to: render an image using ray-traced samples (para.0003, ray-traced samples) distributed based on a representation of uncertainty of a predicted distribution of a pixel value (para.0003, prior technique of adaptively distributing samples across different pixels) of a corresponding denoised image (para.0003, producing high-quality images).
Response to Arguments
Applicant’s arguments have been fully considered, but they are not deemed to be persuasive.
Applicant argues that Vogels does not disclose “uncertainty of a predicted distribution of pixel values of a corresponding denoised image” (pp.9-10).
Vogels discloses rendering ray-traced input image samples using Monte Carlo path tracing within a neural network to generate a denoised image (paras.0202-0204). For each pixel, an adaptive sampling map is predicted to represent respective values between the input image and the generated denoised image (paras.0205, 0239). For this prediction, Vogels further discloses using a Direct Prediction Convolutional Network (DPCN) and a Kernel Prediction Convolutional Network (KPCN) with an asymmetric loss (paras.0179, 0289) to predict the error distribution of each pixel between the input image and the denoised image. Vogels discloses that the output is based on an uncertainty (para.0294).
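For context only (this simplified form is an assumption for illustration; Vogels' exact formulation is at paras.0179 and 0289), an asymmetric loss penalizes errors on one side of the target more heavily than the other, which biases the network's predicted error distribution:

```python
def asymmetric_loss(pred, target, slope=2.0):
    """Simplified asymmetric loss: overestimates are penalized `slope`
    times more heavily than underestimates of the same magnitude."""
    err = pred - target
    return slope * err if err > 0 else -err

# Overestimating by 0.5 costs 1.0; underestimating by 0.5 costs only 0.5.
loss_over = asymmetric_loss(1.5, 1.0)
loss_under = asymmetric_loss(0.5, 1.0)
```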
Applicant argues that Vogels does not disclose “using a Bayesian neural network …” (p.12).
Vogels discloses the system as a neural network system (fig. 1), and the system uses neural networks at different stages, including rendering, sampling-map generation, and denoising. Vogels discloses a Bayesian decision for generating the denoised image and the error distribution of the sampling map. A neural network executing a Bayesian algorithm is a Bayesian neural network.
Applicant argues that the functional limitations of claims 24 and 35 should be considered (p. 13).
Claims 24 and 35 are apparatus claims, and the only hardware structure recited is “a processor.” For example, it is unclear whether infringement of the claims occurs when a processor is merely capable of performing the recited functions or only when a processor is actually performing them.
Applicant argues that claims 24 and 35 should not be rejected under 35 U.S.C. 112 as single-means claims (p. 13).
Claims 24 and 35, as currently presented, can still be interpreted as invoking 35 U.S.C. 112(f).
Conclusion
This action is made final. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire three months from the mailing date of this action. In the event a first reply is filed within two months of the mailing date of this final action and the advisory action is not mailed until after the end of the three-month shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than six months from the date of this final action.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to examiner Cheng-Yuan Tseng, whose telephone number is (571) 272-9772 and whose fax number is (571) 273-9772. The examiner can normally be reached Monday through Friday from 09:00 to 17:30 Eastern Time.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alicia Harrington, can be reached at (571) 272-2330. The fax number for the organization where this application or proceeding is assigned is (571) 273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR; status information for unpublished applications is available through Private PAIR only. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at (866) 217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call (800) 786-9199 (in USA or Canada) or (571) 272-1000.
/CHENG YUAN TSENG/Primary Examiner, Art Unit 2615