Prosecution Insights
Last updated: April 19, 2026
Application No. 18/152,320

SUBSURFACE SCATTERING FOR REAL-TIME RENDERING APPLICATIONS

Status: Final Rejection (§103)
Filed: Jan 10, 2023
Examiner: LE, MICHAEL
Art Unit: 2614
Tech Center: 2600 — Communications
Assignee: Nvidia Corporation
OA Round: 4 (Final)

Grant Probability: 66% (Favorable)
Predicted OA Rounds: 5-6
Predicted Time to Grant: 3y 3m
Grant Probability with Interview: 88%

Examiner Intelligence

Career Allow Rate: 66% (568 granted / 864 resolved; +3.7% vs TC avg, above average)
Interview Lift: +22.1% among resolved cases with an interview (strong)
Typical Timeline: 3y 3m avg prosecution; 61 applications currently pending
Career History: 925 total applications across all art units

Statute-Specific Performance

§101: 12.4% (-27.6% vs TC avg)
§103: 52.7% (+12.7% vs TC avg)
§102: 13.4% (-26.6% vs TC avg)
§112: 15.9% (-24.1% vs TC avg)
TC average values are estimates • Based on career data from 864 resolved cases
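The headline figures above can be sanity-checked directly from the counts this report quotes. The sketch below recomputes the career allowance rate and the per-statute deltas; the 40% Tech Center baseline is the value implied by the deltas displayed here (rate minus delta is 0.40 for every statute shown), not official USPTO data.

```python
# Sanity-check the figures quoted in this report from its own raw counts.
granted, resolved = 568, 864
career_allow_rate = granted / resolved
print(f"Career allow rate: {career_allow_rate:.1%}")  # 65.7%, displayed as 66%

# Per-statute rates as displayed; the TC-average baseline of 0.40 is the
# estimate implied by this report's own deltas (rate - delta = 0.40).
statute_rate = {"§101": 0.124, "§103": 0.527, "§102": 0.134, "§112": 0.159}
TC_AVG = 0.40
delta_vs_tc = {s: round(rate - TC_AVG, 3) for s, rate in statute_rate.items()}
```

Running this reproduces the displayed deltas (e.g., +12.7% for §103 and -27.6% for §101).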

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

2. Applicant is advised that should claim 10 be found allowable, claim 10 will be objected to under 37 CFR 1.75 as being a substantial duplicate thereof. When two claims in an application are duplicates, or else are so close in content that they both cover the same thing despite a slight difference in wording, it is proper after allowing one claim to object to the other as being a substantial duplicate of the allowed claim. See MPEP § 608.01(m).

Response to Arguments

3. Applicant's arguments filed on 12/08/2025 with respect to the §103 rejection have been fully considered, but they are not persuasive.

4. On page 12 of Applicant's Remarks, the Applicant argues that the combination of references does not teach or suggest, at least, "resampling the one or more samples of energy using a target function that models one or more portions of subsurface light transport to generate one or more resampled samples that represent an amount of energy transported to the interaction from one or more internal interactions within the object," as recited in amended claim 1. The Examiner respectfully disagrees with that argument. The prior Office Action asserted, with respect to claim 8 in the context of claims 1 and 7 as a whole, that the combination of prior art references does not teach that the rendering of the image is based at least on combining the one or more resampled samples with one or more second samples corresponding to a second amount of energy externally transported to the interaction from the environment outside of the object. The amendment incorporates, in part, subject matter that is taught or suggested by the combination of the current references.
In particular, Ouyang discloses resampling the one or more samples using a target function to generate one or more resampled samples to the interaction from one or more internal interactions within the object (See Ouyang; Figure 2(b); Figure 2(d); page 21, right column, section 4.2, Resampling and Shading; page 22, left column, 1st paragraph, at least discloses). Wright discloses one or more portions of light transport to generate one or more resampled samples that represent an amount of energy transported to the interaction from one or more internal interactions within the object (See Wright; Fig. 2; ¶0018-0019; ¶0022; ¶0026; ¶0029-0030; ¶0032; ¶0037-0038; ¶0040; ¶0061). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ouyang to incorporate the teachings of Wright, and to apply the irradiance value into generating samples on the lights in the scene, as taught by Ouyang, for resampling the one or more samples of energy using a target function that models one or more portions of subsurface light transport to generate one or more resampled samples that represent an amount of energy transported to the interaction from one or more internal interactions within the object. Doing so would provide improved approaches for determining lighting contributions of interactions of light transport paths that may be used for determining or rendering global illumination or other applications of ray-tracing techniques. Thus, the Examiner respectfully submits that claim 1 is disclosed by the prior art (see the Office Action below).

5. On pages 12-14 of Applicant's Remarks, the Applicant argues that the dependent claims are not taught by the prior art insomuch as they depend from claims that are not taught by the prior art. The Examiner respectfully disagrees with these arguments, for the reasons discussed below.

Claim Rejections - 35 USC § 103

6. The following is a quotation of 35 U.S.C.
103(a), which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

7. Claims 1-2, 5, 7, 9-11, 14-16 and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over “ReSTIR GI: Path Resampling for Real-Time Path Tracing” by Ouyang et al. (“Ouyang”) in view of Wright et al. (“Wright”) [US-2021/0049807-A1], further in view of Zhao et al. (“Zhao”) [US-2023/0027890-A1].

Regarding claim 1, Ouyang discloses a method (Ouyang- Abstract, at least discloses We introduce an effective path sampling algorithm for indirect lighting that is suitable to highly parallel GPU architectures.
Building on the screen-space spatio-temporal resampling principles of ReSTIR, our approach resamples multi-bounce indirect lighting paths obtained by path tracing; page 18, left column, 2nd paragraph, at least discloses Resampling these points both in space and time allows us to generate weighted samples from a distribution approximating the indirect illumination in the scene, leading to substantial error reduction) comprising: detecting an interaction of a ray with an object in an environment (Ouyang- Figure 2(a) shows generating random samples on the lights in the scene; page 20, left column, 2nd paragraph, at least discloses the visible points are the positions on surfaces in the scene that are visible from the camera at each pixel. At each visible point, a direction is randomly sampled and a ray is traced to find the closest surface intersection; these intersections are called sample points); generating one or more samples that correspond to one or more scatterings of light within the object (Ouyang- Figure 2(a) shows generating random samples on the lights in the scene. Reflected radiance is computed at these intersections with path tracing; Figure 4 shows “Area with scattered radiance” and “Valid sample”. Initial sampling: we trace a ray from each visible point (red dots) with a random direction and record the closest intersection in a screen-space initial sample buffer. The position, normal and radiance of the intersection, the random numbers used in next event estimation, as well as the position and normal of the pixel, are recorded; Figure 5 shows At each sample point x2, we estimate the radiance scattered to the corresponding visible point using path tracing; page 20, left column, 1st and 2nd paragraphs, at least disclose points on surfaces with the radiance they scatter back along an incident ray [scatterings of light within the object] […] the visible points are the positions on surfaces in the scene that are visible from the camera at each pixel.
At each visible point, a direction is randomly sampled and a ray is traced to find the closest surface intersection; these intersections are called sample points; page 20, right column, section 4.1. Sample Generation, 1st paragraph, at least discloses The first phase of our algorithm generates a new sample point for each visible point […] For each pixel q with corresponding visible point xv, we sample a direction ωi using the source PDF pq(ωi) and trace a ray to obtain the sample point xs. The source PDF may be a uniform distribution, a cosine-weighted distribution, or a distribution based on the BSDF at the visible point […] At each sample point, we need to compute the outgoing radiance Lo(xs, ωo), where ωo is the normalized direction to the visible point); resampling the one or more samples using a target function to generate one or more resampled samples to the interaction from one or more internal interactions within the object (Ouyang- Figure 2(b) shows “Reused sample” After resampling, the original samples with no contribution are discarded; the useful samples are shared spatially and temporally and are used with probability based on their contribution; Figure 2(d) shows Spatial and temporal resampling is applied in a similar manner; page 21, right column, section 4.2. Resampling and Shading, at least discloses After the fresh initial sample is taken, spatial and temporal resampling is applied. The target function p̂ = Li(xv, ωi) f(ωo, ωi) cos θi = Lo(xs, −ωi) f(ωo, ωi) cos θi includes the effect of the BSDF and cosine factor at the visible point, though we have also found that the simple target function […] works well. While it is a suboptimal target function for a single pixel, we have found that it is helpful for spatial resampling in that it preserves samples that may be effective at pixels other than the one that initially generated it. After initial samples are generated, temporal resampling is applied.
In this stage, for each pixel, we read the sample from the initial sample buffer, and use it to randomly update the temporal reservoir, computing the RIS weight following Equation 5 with the source PDF as the PDF for the sampled direction pq(ωi) and p̂ as defined in Equation 10. The pseudo-code for temporal resampling is shown in Algorithm 3 (Temporal Resampling); Figure 6 shows If a visible point xq1 generates a sample point xq2 that is reused at another visible point xr1, then the Jacobian determinant in Equation 11 accounts for the fact that xr1 would have itself generated the sample point xq2 with a different probability; page 22, left column, 1st paragraph, at least discloses Samples are taken from the temporal reservoirs at nearby pixels, and resampled into a separate spatial reservoir. (See Algorithm 4 for pseudo-code.)) and rendering an image corresponding to the environment based at least on the one or more resampled samples (Ouyang- Fig. 1 shows Images are rendered at 1080p resolution with an NVIDIA 3090 RTX GPU without denoising (Middle) shows ReSTIR GI using spatial and temporal resampling and one sample per pixel in 8.9 ms; Figure 7 shows Effect of the Jacobian determinant, Equation 11, in spatial resampling. The wall receives sunlight and indirectly illuminates the floor; Fig. 8 shows Comparison between direct light, 1-bounce and 2-bounce GI rendered with our algorithm. Sample reuse uses 4.6 ms in both cases; page 24, left column, section 5. Implementation, at least discloses We have implemented our algorithm in Unreal Engine 4 and Falcor […] In the Unreal Engine 4 implementation, the initial G-buffer is generated using rasterization before a full-screen pass generates the new samples for each visible point. The Falcor implementation is similar, though it uses ray tracing to generate the G-buffer. Both temporal and spatial resampling are handled in a subsequent full-screen pass.
Temporal resampling uses reprojected pixels according to their motion vectors from the previous frame. For efficiency, our implementation neglects the directional variation in scattered radiance at the sample point. Instead, the scattered radiance in the direction to the original visible point is used for all directions, corresponding to Lambertian scattering). Ouyang does not clearly disclose generating one or more samples of energy that correspond to one or more simulated scatterings of light transported through one or more subsurface interactions of one or more rays; a target function that models one or more portions of subsurface light transport to generate one or more resampled samples that represent an amount of energy transported to the interaction from one or more internal interactions within the object. However, Wright discloses generating one or more samples of energy that correspond to one or more simulated scatterings of light transported through one or more surface interactions of one or more rays (Wright- ¶0018, at least discloses determining lighting contributions of interactions of light transport paths that may be used for computing or rendering global illumination or other ray-tracing applications [light transported through one or more subsurface interactions]; ¶0022, at least discloses when sampling a location, outgoing irradiance [energy] from an outgoing irradiance cache may be used to determine shading associated with the location when a hit distance of a ray used to generate the sample [generating one or more samples of energy] exceeds a threshold value; ¶0026, at least discloses to render frames of a virtual environment, the ray caster 104 may be configured to trace rays in a virtual environment to define one or more portions of ray-traced light transport paths (e.g., between a viewpoint camera and one or more light sources) within the virtual environment over one or more frames. 
The lighting determiner 106 may be configured to determine—based at least in part on the traced rays—data representative of lighting contributions (also referred to as lighting contribution data) of interactions of the ray-traced light transport paths in the virtual environment (e.g., with surfaces), such as irradiance (e.g., diffuse irradiance) [correspond to one or more simulated scatterings of light transported]; ¶0029, at least discloses The irradiance may correspond to one or more ray-traced irradiance samples [samples of energy], for example, as described herein with respect to FIG. 2. The irradiance (e.g., diffuse irradiance) may comprise incoming irradiance associated with a location(s) and/or outgoing irradiance associated with the location(s). In embodiments where an irradiance cache comprises outgoing irradiance, the outgoing irradiance associated with (e.g., outgoing or incoming from/to or near the locations) the location(s) may be computed from incoming irradiance associated with the location(s) (e.g., from the irradiance cache and/or ray-traced samples of irradiance); ¶0035, at least discloses a sample(s) captured in each irradiance cache may comprise spatial samples and/or temporal samples relative to a current frame or state. A spatial irradiance sample [samples of energy] may refer to irradiance that corresponds to a time and/or state of the virtual environment being sampled to determine irradiance for that time and/or state. 
A temporal irradiance sample may refer to irradiance that corresponds to a different (e.g., previous) time and/or state of the virtual environment from the time and/or state for which irradiance is being sampled and/or determined; ¶0037-0038, at least disclose irradiance could be sampled from one or more locations (e.g., random locations) on the face 214 to determine irradiance for the locations and/or other locations on or near the face 214 […] the ray caster 104 may define a Normal Distribution Function (NDF) range for the vertex 118F (or other location being sampled) based at least in part on the normal of the surface at the location; ¶0040, at least discloses the ray caster 104 may determine an interaction of the ray 230 with a surface, such as a surface corresponding to the face 214, as shown. The ray caster 104 may also determine the location 216 that corresponds to the interaction [surface interactions]; ¶0070, at least discloses Light meters may be based on measuring the light incoming from certain directions and may directly benefit from the irradiance caches, as the cached irradiance at a point on a surface along a measured direction may be used to calculate final exposure for the virtual environment [simulated scatterings of light transported through one or more surface interactions]); one or more portions of surface light transport to generate one or more resampled samples that represent an amount of energy transported to the interaction from one or more internal interactions within the object (Wright- Fig. 
2 shows irradiance of a location 216 (e.g., a point or area) may be computed from irradiance sampled at one or more of the vertices 210 [interaction from within the object]; ¶0018-0019, at least disclose determining lighting contributions of interactions of light transport paths that may be used for computing or rendering global illumination or other ray-tracing applications [light transported] […] irradiance for a location on a face of an object may be computed from irradiance [energy] from irradiance caches at one or more vertices of the face, reducing the number of irradiance caches needed and/or locations that need to be sampled to update the irradiance caches; ¶0022, at least discloses Using outgoing irradiance may be less accurate than incoming irradiance, but more computationally efficient; ¶0026, at least discloses to render frames of a virtual environment, the ray caster 104 may be configured to trace rays in a virtual environment to define one or more portions of ray-traced light transport paths (e.g., between a viewpoint camera and one or more light sources) within the virtual environment over one or more frames. 
The lighting determiner 106 may be configured to determine—based at least in part on the traced rays—data representative of lighting contributions (also referred to as lighting contribution data) of interactions of the ray-traced light transport paths in the virtual environment (e.g., with surfaces), such as irradiance (e.g., diffuse irradiance) [surface light transport based at least on computing energy] […] The lighting determiner 106 may further aggregate the data representative of the lighting contributions (e.g., incident radiance or irradiance values [amount of the energy]) to update one or more irradiance caches […] The lighting determiner 106 may determine irradiance for and/or update one or more of the irradiance caches (e.g., periodically) using the update ranker 108 and the update selector 110; ¶0029-0030, at least disclose where an irradiance cache comprises outgoing irradiance, the outgoing irradiance associated with (e.g., outgoing or incoming from/to or near the locations) the location(s) may be computed from incoming irradiance associated with the location(s) (e.g., from the irradiance cache and/or ray-traced samples of irradiance) […] the irradiance may comprise an irradiance value, such as a color value, and a normal that defines a plane from which the irradiance is sampled (e.g., a sampled hemisphere); ¶0032, at least discloses where irradiance of multiple locations (e.g., each of vertices 210) are used to derive irradiance for the location 216, the lighting determiner 106 may interpolate irradiance values between the locations to compute the irradiance at the location 216. 
For example, barycentric interpolation may be applied to irradiance values of the vertices 210 to compute an irradiance value(s) for any given point and/or area bounded by the vertices 210 (e.g., on the face 214); ¶0037-0038, at least disclose irradiance could be sampled from one or more locations (e.g., random locations) on the face 214 to determine irradiance [determine an amount of the energy] for the locations and/or other locations on or near the face 214 […] the ray caster 104 may define a Normal Distribution Function (NDF) range for the vertex 118F (or other location being sampled) based at least in part on the normal of the surface at the location; ¶0040, at least discloses the ray caster 104 may determine an interaction of the ray 230 with a surface, such as a surface corresponding to the face 214, as shown. The ray caster 104 may also determine the location 216 that corresponds to the interaction [interaction from within the object]; ¶0061, at least discloses The update ranker 108 may be implemented using one or more algorithms and/or Machine Learning Models (MLMs). An MLM may take a variety of forms; for example, and without limitation, the MLM(s) may include any type of machine learning model, such as a machine learning model(s) using linear regression, logistic regression, decision trees, support vector machines (SVM), Naïve Bayes, k-nearest neighbor (Knn), K means clustering, random forest, dimensionality reduction algorithms, gradient boosting algorithms, neural networks (e.g., auto-encoders, convolutional, recurrent, perceptrons, long/short term memory/LSTM, Hopfield, Boltzmann [physics-based model], deep belief, deconvolutional, generative adversarial, liquid state machine, etc.), and/or other types of machine learning models).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ouyang to incorporate the teachings of Wright, and to apply the irradiance value into generating samples on the lights in the scene, as taught by Ouyang, for generating one or more samples of energy that correspond to one or more simulated scatterings of light transported through one or more surface interactions of one or more rays; resampling the one or more samples of energy using a target function that models one or more portions of subsurface light transport to generate one or more resampled samples that represent an amount of energy transported to the interaction from one or more internal interactions within the object. Doing so would provide improved approaches for determining lighting contributions of interactions of light transport paths that may be used for determining or rendering global illumination or other applications of ray-tracing techniques. The prior art does not clearly disclose, but Zhao discloses, one or more subsurface interactions of one or more rays (Zhao- ¶0103, at least discloses indirect light interactions, which include subsurface scattering and subsurface bouncing for human skin, are modelled using a trained neural network that outputs a volumetric light map.
While the present example will be described with respect to human face rendering and discusses light interaction with human skin, it will be appreciated that similar methods can be implemented for modeling light transport for non-human organisms, objects, etc.; ¶0107-0110, at least disclose Unlike a multi-bounce application, the human skin is a multilayered structure comprising a thin oily layer, epidermis and dermis producing specular reflection, surface diffuse reflection and subsurface scattering […] the volumetric light map is used to model unpredictable light transport underneath the skin including the subsurface bouncing and subsurface scattering in between material particles by employing local spherical harmonics to define indirect light Lindirect for building indirect light transport Lid, which is provided […] Referring now to FIG. 8, it shows the effectiveness of the volumetric light map under all-white illumination. In particular, input albedo is shown under column 810, subsurface scattering built from the volumetric light map is shown under 812, and final rendering is shown under column 814. As evidenced, subsurface scattering presents a similar color pattern to that from albedo but with less appearance of hairs including beard and eyebrows; ¶0112, at least discloses when rendering human skin, the generated volumetric light map models subsurface scattering and subsurface bouncing. For example, under a given lighting condition, a thinner skin appears brighter and with a more reddish hue compared to thicker skin. Further, indirect light transport, such as scattering of light at a sub-surface level (e.g., underneath the skin), contributes to shadow softening; ¶0138, at least discloses In order to model light transport underneath the skin including the subsurface bouncing and subsurface scattering in between material particles (which leads to the visual difference that a thinner skin looks brighter and more reddish under the same lighting condition.
Also, indirect light also contributes to shadow softening)); a target function that models one or more portions of subsurface light transport (Zhao- Fig. 3 and ¶0100, at least disclose at 314, the method 300 includes determining one or more of a direct specular component, a direct diffuse component, and an indirect diffuse component for each sample in the volumetric radiance field to model total light transport; ¶0102-0103, at least disclose where Lo (x→ωo) is outgoing radiance leaving geometric location x in direction ωo, Li (x←ωi) stands for incident radiance that arrives at x, θ is the angle between incident light direction ωi and surface normal direction at x. Further, f (x, ωo, ωi) is a bidirectional scattering distribution function (BSDF) that describes the appearance of a surface area centered at a point x when viewed from a direction ωo, illuminated by incident light from direction ωi, and fs is specular reflection, fr is diffuse reflection, and fss is subsurface scattering […] the inventors herein provide a method for modelling direct and indirect light interactions with materials. In particular, indirect light interactions, which include subsurface scattering and subsurface bouncing for human skin, are modelled using a trained neural network that outputs a volumetric light map.
While the present example will be described with respect to human face rendering and discusses light interaction with human skin, it will be appreciated that similar methods can be implemented for modeling light transport for non-human organisms, objects, etc.; ¶0109-0112, at least disclose the volumetric light map is used to model unpredictable light transport underneath the skin including the subsurface bouncing and subsurface scattering in between material particles by employing local spherical harmonics to define indirect light Lindirect for building indirect light transport Lid, which is provided by: […]; ¶0138, at least discloses In order to model light transport underneath the skin including the subsurface bouncing and subsurface scattering in between material particles (which leads to the visual difference that a thinner skin looks brighter and more reddish under the same lighting condition; also, indirect light contributes to shadow softening) […] the volumetric light map is employed using local spherical harmonics to define indirect light Lindirect for building indirect light transport Lss. The indirect light transport equation is shown at equation (3) above). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ouyang/Wright to incorporate the teachings of Zhao, and to apply modelling of light transport including subsurface scattering into Ouyang/Wright’s teachings for generating one or more samples of energy that correspond to one or more simulated scatterings of light transported through one or more subsurface interactions of one or more rays within the object; resampling the one or more samples of energy using a target function that models one or more portions of subsurface light transport to generate one or more resampled samples that represent an amount of energy transported to the interaction from one or more internal interactions within the object.
Doing so would perform physically-based rendering with high fidelity and improved computational speed to render photo-realistic images. Regarding claim 2, Ouyang in view of Wright and Zhao discloses the method of claim 1, and further discloses wherein the target function includes one or more of: one or more first variables representing a distance between a first location corresponding to the interaction and a second location corresponding to a second interaction of one or more second rays with the object (Wright- ¶0048, at least discloses the lighting determiner 106 may make this determination based at least on a distance between the location and the interaction; ¶0074, at least discloses determining a second location of the locations that corresponds to an interaction of the ray in the virtual environment. For example, the ray caster 104 may determine the location 216 that corresponds to an interaction of the ray 230 in the virtual environment 116), the second interaction included in the one or more internal interactions (Wright- Fig. 2 and ¶0081, at least disclose the ray caster 104 may determine an interaction of the ray 232 with the location 234. The location 234 may be associated with an outgoing irradiance cache that stores outgoing irradiance corresponding to the second location (e.g., in the face 220 of the object 124)); one or more second variables representing irradiance corresponding to the second location (Wright- ¶0075, at least discloses rendering a frame using at least irradiance from one or more irradiance caches associated with the first location and irradiance from one or more irradiance caches associated with the second location); or one or more third variables representing one or more material properties associated with the object (Ouyang- page 17, left column, section 1.
Introduction, at least discloses The flexibility and generality offered by path tracing is highly desirable for real-time rendering, offering the promise of a single unified algorithm that renders photorealistic imagery of scenes with complex lighting, materials, and geometry; Wright- ¶0029, at least discloses the outgoing irradiance may be computed based at least on the incoming irradiance from the irradiance cache, material properties (e.g., diffuse albedo), and any lighting that may not be captured by the irradiance cache, such as sun lighting (e.g., determined using shadow rays) and any self-illumination which may include a self-emissive component to the material and/or location(s)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ouyang/Zhao to incorporate the teachings of Wright, and to apply the material properties into Ouyang/Zhao’s teachings to provide one or more third variables representing one or more material properties associated with the object. The same motivation that was utilized in the rejection of claim 1 applies equally to this claim. Regarding claim 5, Ouyang in view of Wright and Zhao discloses the method of claim 1, and further discloses wherein the target function computes the energy at the one or more internal interactions (see Claim 1 rejection for detailed analysis) using a source distribution function that defines a direction of one or more rays scattered from a location corresponding to the interaction with the object (Ouyang- page 19, right column, 1st paragraph, at least discloses First, M candidate samples y = y1, …, yM are sampled from a source distribution p(y). Then a target PDF p̂ is used to resample one sample z from y with probability; page 20, right column, section 4.1.
Sample Generation, at least discloses For each pixel q with corresponding visible point x_v, we sample a direction ω_i using the source PDF p_q(ω_i) and trace a ray to obtain the sample point x_s. The source PDF may be a uniform distribution, a cosine-weighted distribution, or a distribution based on the BSDF at the visible point; page 21, right column, section 4.2. Resampling and Shading, at least discloses After the fresh initial sample is taken, spatial and temporal resampling is applied. The target function f̂ = L_i(x_v, ω_i) f(ω_o, ω_i) cos θ_i = L_o(x_s, −ω_i) f(ω_o, ω_i) cos θ_i includes the effect of the BSDF and cosine factor at the visible point, though we have also found that the simple target function […] works well. While it is a suboptimal target function for a single pixel, we have found that it is helpful for spatial resampling in that it preserves samples that may be effective at pixels other than the one that initially generated it). Regarding claim 7, Ouyang in view of Wright and Zhao, discloses the method of claim 1, and further discloses wherein the resampling includes: updating one or more reservoirs of samples using the one or more samples to generate one or more updated reservoirs of samples based at least on the target function (Ouyang- Figure 4 shows Temporal reuse: we use the sample from the initial sample buffer to update temporal reservoir buffer by randomly choosing between the one created in current frame and the existing one in the buffer. Temporal reprojection is applied to find the corresponding temporal reservoir from the last frame. Spatial reuse: we use randomly-chosen temporal reservoirs from neighborhood pixels to update spatial reservoir. To suppress bias, we choose neighborhood pixels with similar geometric features by comparing their depth and normal with the current pixel's; page 21, right column, section 4.2.
Resampling and Shading, at least discloses After the fresh initial sample is taken, spatial and temporal resampling is applied. The target function f̂ = L_i(x_v, ω_i) f(ω_o, ω_i) cos θ_i = L_o(x_s, −ω_i) f(ω_o, ω_i) cos θ_i (Equation 9) includes the effect of the BSDF and cosine factor at the visible point, though we have also found that the simple target function (Equation 10) works well. While it is a suboptimal target function for a single pixel, we have found that it is helpful for spatial resampling in that it preserves samples that may be effective at pixels other than the one that initially generated it. After initial samples are generated, temporal resampling is applied. In this stage, for each pixel, we read the sample from the initial sample buffer, and use it to randomly update temporal reservoir, computing the RIS weight following Equation 5 with the source PDF as the PDF for the sampled direction p_q(ω_i) and p̂ as defined in Equation 10. The pseudo-code for temporal resampling is shown in Algorithm 3.); and selecting the one or more resampled samples from the one or more updated reservoirs of samples (Ouyang- page 22, left column, 1st paragraph, at least discloses After temporal reuse, spatial reuse is applied. Samples are taken from the temporal reservoirs at nearby pixels, and resampled into a separate spatial reservoir. (See Algorithm 4 for pseudo-code.) With spatial reuse, it is necessary to account for differences in the source PDF between pixels that are due to the fact that our sampling scheme is based on the visible point's position and surface normal).
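For readers tracing the technical argument, the reservoir-based resampling that the quoted Ouyang passages describe (streaming resampled importance sampling, with each candidate weighted by target function over source PDF) can be sketched as follows. This is an illustrative reconstruction only, not code from Ouyang or from the application; the names `Reservoir` and `ris_resample` are invented for the sketch.

```python
import random

class Reservoir:
    """Streaming weighted reservoir, as used in ReSTIR-style resampling.
    Illustrative sketch; not taken from any cited reference."""
    def __init__(self):
        self.sample = None  # currently selected candidate
        self.w_sum = 0.0    # running sum of RIS weights
        self.M = 0          # number of candidates seen so far

    def update(self, candidate, weight, rng=random):
        # Keep the new candidate with probability weight / w_sum, so each
        # candidate survives in proportion to its RIS weight.
        self.w_sum += weight
        self.M += 1
        if self.w_sum > 0.0 and rng.random() < weight / self.w_sum:
            self.sample = candidate

def ris_resample(candidates, target, source_pdf):
    """Resample one candidate with probability proportional to
    target(x) / source_pdf(x); return it with its unbiased contribution
    weight W = (1 / target(z)) * (w_sum / M)."""
    r = Reservoir()
    for x in candidates:
        p = source_pdf(x)
        r.update(x, target(x) / p if p > 0.0 else 0.0)
    z = r.sample
    if z is None or target(z) <= 0.0:
        return z, 0.0
    return z, (r.w_sum / r.M) / target(z)
```

Temporal and spatial reuse, as described in the quoted Figure 4 caption, then amount to feeding a prior frame's reservoir (or a neighbor pixel's reservoir) back through `update` rather than only fresh candidates.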
Regarding claim 9, Ouyang in view of Wright and Zhao, discloses the method of claim 1, and further discloses wherein the resampling includes: updating one or more sets of samples using the one or more resampled samples and the target function to generate one or more first updated sets of samples (Ouyang- Figure 2(b) shows “Reused sample” After resampling, the original samples with no contribution are discarded; the useful samples are shared spatially and temporally and are used with probability based on their contribution; Figure 2(d) shows Spatial and temporal resampling is applied in a similar manner; page 21, right column, section 4.2. Resampling and Shading, at least discloses After the fresh initial sample is taken, spatial and temporal resampling is applied. The target function f̂ = L_i(x_v, ω_i) f(ω_o, ω_i) cos θ_i = L_o(x_s, −ω_i) f(ω_o, ω_i) cos θ_i includes the effect of the BSDF and cosine factor at the visible point, though we have also found that the simple target function […] works well. While it is a suboptimal target function for a single pixel, we have found that it is helpful for spatial resampling in that it preserves samples that may be effective at pixels other than the one that initially generated it. After initial samples are generated, temporal resampling is applied. In this stage, for each pixel, we read the sample from the initial sample buffer, and use it to randomly update temporal reservoir, computing the RIS weight following Equation 5 with the source PDF as the PDF for the sampled direction p_q(ω_i) and p̂ as defined in Equation 10.
The pseudo-code for temporal resampling is shown in Algorithm 3 (Temporal Resampling); Figure 6 shows If a visible point x_q1 generates a sample point x_q2 that is reused at another visible point x_r1, then the Jacobian determinant in Equation 11 accounts for the fact that x_r1 would have itself generated the sample point x_q2 with a different probability; page 22, left column, 1st paragraph, at least discloses Samples are taken from the temporal reservoirs at nearby pixels, and resampled into a separate spatial reservoir. (See Algorithm 4 for pseudo-code.)); selecting one or more initial resampled samples from the one or more first updated sets of samples (Ouyang- page 22, left column, 1st paragraph, at least discloses After temporal reuse, spatial reuse is applied. Samples are taken from the temporal reservoirs at nearby pixels, and resampled into a separate spatial reservoir. (See Algorithm 4 for pseudo-code.) With spatial reuse, it is necessary to account for differences in the source PDF between pixels that are due to the fact that our sampling scheme is based on the visible point's position and surface normal); updating the one or more first updated sets of samples using the one or more initial resampled samples and the target function to generate one or more second updated sets of samples (Ouyang- Figure 4 shows Temporal reuse: we use the sample from the initial sample buffer to update temporal reservoir buffer by randomly choosing between the one created in current frame and the existing one in the buffer. Temporal reprojection is applied to find the corresponding temporal reservoir from the last frame. Spatial reuse: we use randomly-chosen temporal reservoirs from neighborhood pixels to update spatial reservoir. To suppress bias, we choose neighborhood pixels with similar geometric features by comparing their depth and normal with the current pixel's; page 21, right column, section 4.2.
Resampling and Shading, at least discloses After the fresh initial sample is taken, spatial and temporal resampling is applied. The target function f̂ = L_i(x_v, ω_i) f(ω_o, ω_i) cos θ_i = L_o(x_s, −ω_i) f(ω_o, ω_i) cos θ_i (Equation 9) includes the effect of the BSDF and cosine factor at the visible point, though we have also found that the simple target function (Equation 10) works well. While it is a suboptimal target function for a single pixel, we have found that it is helpful for spatial resampling in that it preserves samples that may be effective at pixels other than the one that initially generated it. After initial samples are generated, temporal resampling is applied. In this stage, for each pixel, we read the sample from the initial sample buffer, and use it to randomly update temporal reservoir, computing the RIS weight following Equation 5 with the source PDF as the PDF for the sampled direction p_q(ω_i) and p̂ as defined in Equation 10. The pseudo-code for temporal resampling is shown in Algorithm 3.); and selecting the one or more resampled samples from the one or more second updated sets of samples (Ouyang- page 22, left column, 1st paragraph, at least discloses After temporal reuse, spatial reuse is applied. Samples are taken from the temporal reservoirs at nearby pixels, and resampled into a separate spatial reservoir. (See Algorithm 4 for pseudo-code.) With spatial reuse, it is necessary to account for differences in the source PDF between pixels that are due to the fact that our sampling scheme is based on the visible point's position and surface normal). Regarding claim 10, Ouyang discloses a system (Ouyang- Figure 1 shows Images are rendered at 1080p resolution with an NVIDIA 3090 RTX GPU without denoising; Figure 8 shows Rendered at 1920 x 1050 resolution on an NVIDIA RTX 3080 GPU, for ReSTIR GI, initial sampling takes 3.2 ms for one bounce and 4.2 ms for two; page 26, left column, section 6.
Results, at least discloses All measurements were taken using an NVIDIA RTX 3090 GPU) comprising: one or more processing units to perform operations (Ouyang- Figure 1 shows Images are rendered at 1080p resolution with an NVIDIA 3090 RTX GPU without denoising; Figure 8 shows Rendered at 1920 x 1050 resolution on an NVIDIA RTX 3080 GPU, for ReSTIR GI, initial sampling takes 3.2 ms for one bounce and 4.2 ms for two; page 26, left column, section 6. Results, at least discloses All measurements were taken using an NVIDIA RTX 3090 GPU) including: determining one or more samples that correspond to one or more scatterings of light within an object (Ouyang- Figure 2(a) shows generating random samples on the lights in the scene. Reflected radiance is computed at these intersections with path tracing; Figure 4 shows “Area with scattered radiance” and “Valid sample”. Initial sampling: we trace a ray from each visible point (red dots) with a random direction and record the closest intersection in a screen-space initial sample buffer. The position, normal and radiance of the intersection, the random numbers used in next event estimation, as well as the position and normal of the pixel, are recorded; Figure 5 shows At each sample point x2, we estimate the radiance scattered to the corresponding visible point using path tracing; page 20, left column, 1st and 2nd paragraphs, at least disclose points on surfaces with the radiance they scatter back along an incident ray [scatterings of light within an object] […] the visible points are the positions on surfaces in the scene that are visible from the camera at each pixel. At each visible point, a direction is randomly sampled and a ray is traced to find the closest surface intersection; these intersections are called sample points; page 20, left column, section 4.1. 
Sample Generation, 1st paragraph, at least discloses the visible points are the positions on surfaces in the scene that are visible from the camera at each pixel; page 20, right column, 2nd paragraph, at least discloses The first phase of our algorithm generates a new sample point for each visible point […] For each pixel q with corresponding visible point x_v, we sample a direction ω_i using the source PDF p_q(ω_i) and trace a ray to obtain the sample point x_s. The source PDF may be a uniform distribution, a cosine-weighted distribution, or a distribution based on the BSDF at the visible point […] At each sample point, we need to compute the outgoing radiance L_o(x_s, ω_o), where ω_o is the normalized direction to the visible point); determining one or more sets of samples associated with a location (Ouyang- Figure 2(a) shows Initial Sampling or “Initial Samples”. Generating random samples on the lights in the scene; Figure 4 shows “Area with scattered radiance” and “Valid sample”. Initial sampling: we trace a ray from each visible point (red dots) with a random direction and record the closest intersection in a screen-space initial sample buffer. The position, normal and radiance of the intersection, the random numbers used in next event estimation, as well as the position and normal of the pixel, are recorded; page 18, left column, 2nd paragraph, at least discloses our algorithm places initial samples in the space of the local sphere of directions around shading points [location]); filtering the one or more sets of samples using a target function to select, from the one or more sets of samples, a subset of one or more samples within the object (Ouyang- Figure 2(a) shows Initial Sampling or “Initial Samples”; page 26, left column, section 6.
Results, at least discloses Figure 12 shows the effect of denoising with regular path tracing and ReSTIR GI using the default spatio-temporal denoiser in Unreal Engine 4.25, which performs temporal accumulation followed by spatial filtering and post filtering); and rendering an image based at least on the one or more samples (Ouyang- Fig. 1 shows Images are rendered at 1080p resolution with an NVIDIA 3090 RTX GPU without denoising (Middle) shows ReSTIR GI using spatial and temporal resampling and one sample per pixel in 8.9 ms; Figure 7 shows Effect of the Jacobian determinant, Equation 11, in spatial resampling. The wall receives sunlight and indirectly illuminates the floor; Fig. 8 shows Comparison between direct light, 1-bounce and 2-bounce GI rendered with our algorithm. Sample reuse uses 4.6 ms in both cases; page 24, left column, section 5. Implementation, at least discloses We have implemented our algorithm in Unreal Engine 4 and Falcor […] In the Unreal Engine 4 implementation, the initial G-buffer is generated using rasterization before a full-screen pass generates the new samples for each visible point. The Falcor implementation is similar, though it uses ray tracing to generate the G-buffer. Both temporal and spatial resampling are handled in a subsequent full-screen pass. Temporal resampling uses reprojected pixels according to their motion vectors from the previous frame. For efficiency, our implementation neglects the directional variation in scattered radiance at the sample point).
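As background for the source PDFs named in the quoted Sample Generation passage (uniform, cosine-weighted, or BSDF-based), a cosine-weighted hemisphere sampler in the local frame of the surface normal can be sketched as below. This is a generic textbook construction, not code from Ouyang; the function names are invented.

```python
import math

def sample_cosine_hemisphere(u1, u2):
    """Map two uniform random numbers in [0, 1) to a unit direction in the
    hemisphere around the local normal (+z), with pdf(w) = cos(theta) / pi."""
    r = math.sqrt(u1)
    phi = 2.0 * math.pi * u2
    return (r * math.cos(phi),
            r * math.sin(phi),
            math.sqrt(max(0.0, 1.0 - u1)))

def cosine_pdf(direction):
    """Density of the sampler above: cos(theta) / pi, zero below the surface."""
    return max(0.0, direction[2]) / math.pi
```

In the resampling machinery discussed for claims 5 and 7, a density like `cosine_pdf` would play the role of the source PDF p_q(ω_i) that divides the target function when forming RIS weights.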
Ouyang does not clearly disclose determining one or more samples of energy that correspond to one or more scatterings of light transported through one or more subsurface interactions of one or more rays; one or more sets of samples of energy associated with a location corresponding to the object; filtering one or more sets of samples of energy using a target function that models one or more portions of subsurface light transport to select, from the one or more sets of samples of energy, a subset of one or more samples of energy that represents an amount of energy transported to the location from one or more internal interactions within the object; the subset of one or more samples. However, Wright discloses determining one or more samples of energy that correspond to one or more scatterings of light transported through one or more surface interactions of one or more rays (Wright- ¶0018, at least discloses determining lighting contributions of interactions of light transport paths that may be used for computing or rendering global illumination or other ray-tracing applications [light transported through one or more subsurface interactions]; ¶0022, at least discloses when sampling a location, outgoing irradiance [energy] from an outgoing irradiance cache may be used to determine shading associated with the location when a hit distance of a ray used to generate the sample exceeds a threshold value; ¶0026, at least discloses to render frames of a virtual environment, the ray caster 104 may be configured to trace rays in a virtual environment to define one or more portions of ray-traced light transport paths (e.g., between a viewpoint camera and one or more light sources) within the virtual environment over one or more frames.
The lighting determiner 106 may be configured to determine—based at least in part on the traced rays—data representative of lighting contributions (also referred to as lighting contribution data) of interactions of the ray-traced light transport paths in the virtual environment (e.g., with surfaces), such as irradiance (e.g., diffuse irradiance) [correspond to one or more scatterings of light transported]; ¶0029, at least discloses The irradiance may correspond to one or more ray-traced irradiance samples [samples of energy], for example, as described herein with respect to FIG. 2. The irradiance (e.g., diffuse irradiance) may comprise incoming irradiance associated with a location(s) and/or outgoing irradiance associated with the location(s). In embodiments where an irradiance cache comprises outgoing irradiance, the outgoing irradiance associated with (e.g., outgoing or incoming from/to or near the locations) the location(s) may be computed from incoming irradiance associated with the location(s) (e.g., from the irradiance cache and/or ray-traced samples of irradiance); ¶0035, at least discloses a sample(s) captured in each irradiance cache may comprise spatial samples and/or temporal samples relative to a current frame or state. A spatial irradiance sample [samples of energy] may refer to irradiance that corresponds to a time and/or state of the virtual environment being sampled to determine irradiance for that time and/or state.
A temporal irradiance sample may refer to irradiance that corresponds to a different (e.g., previous) time and/or state of the virtual environment from the time and/or state for which irradiance is being sampled and/or determined; ¶0037-0038, at least disclose irradiance could be sampled from one or more locations (e.g., random locations) on the face 214 to determine irradiance for the locations and/or other locations on or near the face 214 […] the ray caster 104 may define a Normal Distribution Function (NDF) range for the vertex 118F (or other location being sampled) based at least in part on the normal of the surface at the location; ¶0040, at least discloses the ray caster 104 may determine an interaction of the ray 230 with a surface, such as a surface corresponding to the face 214, as shown. The ray caster 104 may also determine the location 216 that corresponds to the interaction [surface interactions]; ¶0070, at least discloses Light meters may be based on measuring the light incoming from certain directions and may directly benefit from the irradiance caches, as the cached irradiance at a point on a surface along a measured direction may be used to calculate final exposure for the virtual environment [scatterings of light transported through one or more subsurface interactions]); one or more sets of samples of energy associated with a location corresponding to the object (Wright- ¶0019, at least discloses an irradiance cache may correspond to a location in the virtual environment and may be used to aggregate irradiance samples [samples of energy], thereby increasing the effective sample count used to compute lighting conditions); select, from the one or more sets of samples of energy, a subset of one or more samples of energy that represents an amount of energy transported to the location from one or more internal interactions within the object (Wright- Fig. 
2 shows irradiance of a location 216 (e.g., a point or area) may be computed from irradiance sampled at one or more of the vertices 210 [internal interactions within the object]; ¶0018-0019, at least disclose determining lighting contributions of interactions of light transport paths that may be used for computing or rendering global illumination or other ray-tracing applications [light transported] […] irradiance for a location on a face of an object may be computed from irradiance [computing energy] from irradiance caches at one or more vertices of the face, reducing the number of irradiance caches needed and/or locations that need to be sampled to update the irradiance caches; ¶0022, at least discloses Using outgoing irradiance may be less accurate than incoming irradiance, but more computationally efficient; ¶0026, at least discloses to render frames of a virtual environment, the ray caster 104 may be configured to trace rays in a virtual environment to define one or more portions of ray-traced light transport paths (e.g., between a viewpoint camera and one or more light sources) within the virtual environment over one or more frames. 
The lighting determiner 106 may be configured to determine—based at least in part on the traced rays—data representative of lighting contributions (also referred to as lighting contribution data) of interactions of the ray-traced light transport paths in the virtual environment (e.g., with surfaces), such as irradiance (e.g., diffuse irradiance) [surface light transport based at least on computing energy] […] The lighting determiner 106 may further aggregate the data representative of the lighting contributions (e.g., incident radiance or irradiance values [amount of the energy]) to update one or more irradiance caches […] The lighting determiner 106 may determine irradiance for and/or update one or more of the irradiance caches (e.g., periodically) using the update ranker 108 and the update selector 110; ¶0029-0030, at least discloses The irradiance may correspond to one or more ray-traced irradiance samples [samples of energy], for example, as described herein with respect to FIG. 2. The irradiance (e.g., diffuse irradiance) may comprise incoming irradiance associated with a location(s) and/or outgoing irradiance associated with the location(s). In embodiments where an irradiance cache comprises outgoing irradiance, the outgoing irradiance associated with (e.g., outgoing or incoming from/to or near the locations) the location(s) may be computed from incoming irradiance associated with the location(s) (e.g., from the irradiance cache and/or ray-traced samples of irradiance) […] the irradiance may comprise an irradiance value, such as a color value, and a normal that defines a plane from which the irradiance is sampled (e.g., a sampled hemisphere); ¶0032, at least discloses where irradiance of multiple locations (e.g., each of vertices 210) are used to derive irradiance for the location 216, the lighting determiner 106 may interpolate irradiance values between the locations to compute the irradiance at the location 216. 
For example, barycentric interpolation may be applied to irradiance values of the vertices 210 to compute an irradiance value(s) at any given point and/or area bounded by the vertices 210 (e.g., on the face 214); ¶0037-0038, at least disclose irradiance could be sampled from one or more locations (e.g., random locations) on the face 214 to determine irradiance [determine an amount of the energy] for the locations and/or other locations on or near the face 214 […] the ray caster 104 may define a Normal Distribution Function (NDF) range for the vertex 118F (or other location being sampled) based at least in part on the normal of the surface at the location; ¶0040, at least discloses the ray caster 104 may determine an interaction of the ray 230 with a surface, such as a surface corresponding to the face 214, as shown. The ray caster 104 may also determine the location 216 that corresponds to the interaction [interaction from within the object]; ¶0061, at least discloses The update ranker 108 may be implemented using one or more algorithms and/or Machine Learning Models (MLMs). A MLM may take a variety of forms for example, and without limitation, the MLM(s) may include any type of machine learning model, such as a machine learning model(s) using linear regression, logistic regression, decision trees, support vector machines (SVM), Naïve Bayes, k-nearest neighbor (KNN), K-means clustering, random forest, dimensionality reduction algorithms, gradient boosting algorithms, neural networks (e.g., auto-encoders, convolutional, recurrent, perceptrons, long/short term memory/LSTM, Hopfield, Boltzmann [physics-based model], deep belief, deconvolutional, generative adversarial, liquid state machine, etc.), and/or other types of machine learning models; ¶0067-0068, at least disclose the image denoiser 112 may employ temporal accumulation for filtering pixels and may comprise one or more temporal and/or spatiotemporal filters.
By leveraging temporal information of pixels, the image denoiser 112 may increase the effective sample count used to determine lighting for the pixels […] on disocclusions, where there are no or very few samples accumulated for an irradiance cache, the irradiance cache may be used to provide a stable and noise free estimate of the pixel value […] when a disocclusion for a pixel is detected, irradiance from an irradiance cache may be used for denoising the pixel and/or surrounding pixels (e.g., within a filter radius of a pixel being filtered)); the subset of one or more samples (Wright- ¶0020, at least discloses an irradiance sample may comprise radiance of a single ray cast from a location and may be one of many samples used to calculate irradiance. An irradiance cache may store incoming irradiance or outgoing irradiance and may be updated by casting one or more rays from one or more locations to sample irradiance for the location(s) […] the number of rays that are cast to update irradiance caches may be reduced by selecting a subset of locations and/or irradiance caches in a virtual environment based at least on one or more associated characteristics […] geometry of the virtual environment may be divided into geometry groups, where each geometry group may comprise a different subset of irradiance caches).
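The barycentric interpolation that the quoted Wright ¶0032 describes (deriving irradiance at a point on a face from per-vertex irradiance caches) amounts to a weighted average of the vertex values. A minimal sketch, with invented names and RGB triples assumed as the cached values:

```python
def barycentric_irradiance(bary, vertex_irradiance):
    """Interpolate cached per-vertex irradiance (RGB tuples) to a point on a
    triangular face using barycentric weights that sum to one."""
    assert abs(sum(bary) - 1.0) < 1e-6, "barycentric weights must sum to 1"
    return tuple(
        sum(w * rgb[c] for w, rgb in zip(bary, vertex_irradiance))
        for c in range(3)
    )
```

This is the sense in which, per the quotation, fewer irradiance caches are needed: only the vertices are sampled and updated, and interior points are reconstructed by interpolation.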
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ouyang to incorporate the teachings of Wright, and apply the irradiance values to the generation of samples on the lights in the scene, as taught by Ouyang, for determining one or more samples of energy that correspond to one or more scatterings of light transported through one or more surface interactions of one or more rays within an object; determining one or more sets of samples of energy associated with a location corresponding to the object; and selecting, from the one or more sets of samples of energy, a subset of one or more samples of energy that represents an amount of energy transported to the location from one or more internal interactions within the object. Doing so would provide improved approaches for determining lighting contributions of interactions of light transport paths that may be used for determining or rendering global illumination or other applications of ray-tracing techniques. The same motivation that was utilized in the rejection of claim 1 applies equally to this claim. The prior art does not clearly disclose, but Zhao discloses one or more subsurface interactions of one or more rays (Zhao- ¶0103, at least discloses indirect light interactions, which include subsurface scattering and subsurface bouncing for human skin, is modelled using a trained neural network that outputs a volumetric light map.
While the present example will be described with respect to human face rendering and discusses light interaction with human skin, it will be appreciated that similar methods can be implemented for modeling light transport for non-human organisms, object, etc.; ¶0107-0110, at least disclose Unlike a multi-bounce application, the human skin is a multilayered structure comprising thin oily layer, epidermis and dermis producing specular reflection, surface diffuse reflection and subsurface scattering […] the volumetric light map is used to model unpredictable light transport underneath the skin including the subsurface bouncing and subsurface scattering in between material particles by employing local spherical harmonics to define indirect light L_indirect for building indirect light transport L_id, which is provided […] Referring now to FIG. 8, it shows the effectiveness of volumetric light map under all white illumination. In particular, input albedo is shown under column 810, subsurface scattering build from volumetric light map is shown under 812, and final rendering is shown under column 814. As evidenced, subsurface scattering presents the similar color pattern of that from albedo but with less appearance of hairs including beard and eyebrows; ¶0112, at least discloses when rendering human skin, the generated volumetric light map models subsurface scattering and subsurface bouncing. For example, under a given lighting condition, a thinner skin appears brighter and with a more reddish hue compared to thicker skin. Further, indirect light transport, such as scattering of light at a sub-surface level (e.g., underneath the skin), contributes to shadow softening; ¶0138, at least discloses In order to model light transport underneath the skin including the subsurface bouncing and subsurface scattering in between material particles, (which leads to the visual difference that a thinner skin looks brighter and more reddish under the same lighting condition.
Also, indirect light also contributes to shadow softening)); using a target function that models one or more portions of subsurface light transport (Zhao- Fig. 3 and ¶0100, at least disclose at 314, the method 300 includes determining one or more of a direct specular component, a direct diffuse component, and an indirect diffuse component for each sample in the volumetric radiance field to model total light transport; ¶0102-0103, at least disclose where L_o(x→ω_o) is outgoing radiance leaving geometric location x in direction ω_o, L_i(x←ω_i) stands for incident radiance that arrives at x, θ is the angle between incident light direction ω_i and surface normal direction at x. Further, f(x, ω_o, ω_i) is a bidirectional scattering distribution function (BSDF) that describes the appearance of a surface area centered at a point x when viewed from a direction ω_o, illuminated by incident light from direction ω_i, and f_s is specular reflection, f_r is diffuse reflection, and f_ss is subsurface scattering […] the inventors herein provide a method for modelling direct and indirect light interactions with materials. In particular, indirect light interactions, which include subsurface scattering and subsurface bouncing for human skin, is modelled using a trained neural network that outputs a volumetric light map.
While the present example will be described with respect to human face rendering and discusses light interaction with human skin, it will be appreciated that similar methods can be implemented for modeling light transport for non-human organisms, object, etc.; ¶0109-0112, at least disclose the volumetric light map is used to model unpredictable light transport underneath the skin including the subsurface bouncing and subsurface scattering in between material particles by employing local spherical harmonics to define indirect light L_indirect for building indirect light transport L_id, which is provided by: […]; ¶0138, at least discloses In order to model light transport underneath the skin including the subsurface bouncing and subsurface scattering in between material particles, (which leads to the visual difference that a thinner skin looks brighter and more reddish under the same lighting condition. Also, indirect light also contributes to shadow softening) […] the volumetric light map is employed using local spherical harmonics to define indirect light L_indirect for building indirect light transport L_ss. The indirect light transport equation is shown at equation (3) above); It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ouyang/Wright to incorporate the teachings of Zhao, and apply the subsurface modeling to Ouyang/Wright's teachings for determining one or more samples of energy that correspond to one or more scatterings of light transported through one or more subsurface interactions of one or more rays within an object; using a target function that models one or more portions of subsurface light transport to select, from the one or more sets of samples of energy, a subset of one or more samples of energy that represents an amount of energy transported to the location from one or more internal interactions within the object.
The same motivation that was utilized in the rejection of claim 1 applies equally to this claim. The system of claim 11 is similar in scope to the functions performed by the method of claim 2 and therefore claim 11 is rejected under the same rationale. Regarding claim 14, Ouyang in view of Wright and Zhao, discloses the system of claim 10, and further discloses wherein the system is comprised in at least one of: a control system for an autonomous or semi-autonomous machine; a perception system for an autonomous or semi-autonomous machine; a system for performing simulation operations; a system for performing digital twin operations; a system for performing light transport simulation (Ouyang- Figure 1 shows Images are rendered at 1080p resolution with an NVIDIA 3090 RTX GPU without denoising; Figure 8 shows Rendered at 1920 x 1050 resolution on an NVIDIA RTX 3080 GPU, for ReSTIR GI, initial sampling takes 3.2 ms for one bounce and 4.2 ms for two; page 26, left column, section 7. Conclusion and Future Work, at least discloses All measurements were taken using an NVIDIA RTX 3090 GPU; page 28, left column, section 7.
Conclusion and Future Work, at least discloses The first and most important to address is the ability to more effectively deal with non-Lambertian scattering events along light transport paths); a system for performing collaborative content creation for 3D assets; a system for performing deep learning operations (Ouyang- Figure 1 shows Images are rendered at 1080p resolution with an NVIDIA 3090 RTX GPU without denoising; page 18, right column, 4th paragraph, at least discloses Deep learning has also been applied to path guiding); a system implemented using an edge device; a system implemented using a robot; a system for performing conversational AI operations; a system for generating synthetic data; a system incorporating one or more virtual machines (VMs); a system implemented at least partially in a data center; or a system implemented at least partially using cloud computing resources. Regarding claim 15, Ouyang discloses one or more processors (Ouyang- Figure 1 shows Images are rendered at 1080p resolution with an NVIDIA 3090 RTX GPU without denoising; Figure 8 shows Rendered at 1920 x 1050 resolution on an NVIDIA RTX 3080 GPU, for ReSTIR GI, initial sampling takes 3.2 ms for one bounce and 4.2 ms for two; page 26, left column, section 6. Results, at least discloses All measurements were taken using an NVIDIA RTX 3090 GPU) comprising: one or more circuits to render an image based at least on: generating one or more samples that correspond to one or more surface scatterings of light within an object (Ouyang- Figure 2(a) shows generating random samples on the lights in the scene. Reflected radiance is computed at these intersections with path tracing; Figure 4 shows “Area with scattered radiance” and “Valid sample”. Initial sampling: we trace a ray from each visible point (red dots) with a random direction and record the closest intersection in a screen-space initial sample buffer. 
The position, normal and radiance of the intersection, the random numbers used in next event estimation, as well as the position and normal of the pixel, are recorded; Figure 5 shows At each sample point x2, we estimate the radiance scattered to the corresponding visible point using path tracing; page 20, left column, 1st and 2nd paragraphs, at least disclose points on surfaces with the radiance they scatter back along an incident ray [scatterings of light within the object] […] the visible points are the positions on surfaces in the scene that are visible from the camera at each pixel. At each visible point, a direction is randomly sampled and a ray is traced to find the closest surface intersection; these intersections are called sample points; page 20, right column, section 4.1. Sample Generation, 1st paragraph, at least discloses The first phase of our algorithm generates a new sample point for each visible point […] For each pixel q with corresponding visible point xv, we sample a direction ωi using the source PDF pq(ωi) and trace a ray to obtain the sample point xs. The source PDF may be a uniform distribution, a cosine-weighted distribution, or a distribution based on the BSDF at the visible point […] At each sample point, we need to compute the outgoing radiance Lo(xs, ωo), where ωo is the normalized direction to the visible point), and resampling the one or more samples using a target function to generate one or more resampled samples that represent an amount of energy transported to a location corresponding to the object from one or more internal interactions within the object (Ouyang- Fig. 1 shows Images are rendered at 1080p resolution with an NVIDIA 3090 RTX GPU without denoising (Middle) shows ReSTIR GI using spatial and temporal resampling and one sample per pixel in 8.9 ms; Figure 2(a) shows generating random samples on the lights in the scene.
Reflected radiance is computed at these intersections with path tracing; Figure 2(b) shows “Reused sample” After resampling, the original samples with no contribution are discarded; the useful samples are shared spatially and temporally and are used with probability based on their contribution; Figure 2(d) shows Spatial and temporal resampling is applied in a similar manner; Figure 4 shows “Area with scattered radiance” and “Valid sample”. Initial sampling: we trace a ray from each visible point (red dots) with a random direction and record the closest intersection in a screen-space initial sample buffer. The position, normal and radiance of the intersection, the random numbers used in next event estimation, as well as the position and normal of the pixel, are recorded; Figure 5 shows At each sample point x2, we estimate the radiance scattered to the corresponding visible point using path tracing; page 20, left column, 1st and 2nd paragraphs, at least disclose points on surfaces with the radiance they scatter back along an incident ray [one or more scatterings of light] […] the visible points are the positions on surfaces in the scene that are visible from the camera at each pixel. At each visible point, a direction is randomly sampled and a ray is traced to find the closest surface intersection; these intersections are called sample points; page 20, right column, section 4.1. Sample Generation, 1st paragraph, at least discloses The first phase of our algorithm generates a new sample point for each visible point […] For each pixel q with corresponding visible point xv , we sample a direction w; using the source PDF pq (wi) and trace a ray to obtain the sample point xs. 
The source PDF may be a uniform distribution, a cosine-weighted distribution, or a distribution based on the BSDF at the visible point […] At each sample point, we need to compute the outgoing radiance Lo(xs, ωo), where ωo is the normalized direction to the visible point; page 21, right column, section 4.2. Resampling and Shading, at least discloses After the fresh initial sample is taken, spatial and temporal resampling is applied. The target function p̂q = Li(xv, ωi) f(ωo, ωi)(cos θi) = Lo(xs, −ωi) f(ωo, ωi)(cos θi) includes the effect of the BSDF and cosine factor at the visible point, though we have also found that the simple target function […] works well. While it is a suboptimal target function for a single pixel, we have found that it is helpful for spatial resampling in that it preserves samples that may be effective at pixels other than the one that initially generated it. After initial samples are generated, temporal resampling is applied. In this stage, for each pixel, we read the sample from the initial sample buffer, and use it to randomly update temporal reservoir, computing the RIS weight following Equation 5 with the source PDF as the PDF for the sampled direction pq(ωi) and p̂ as defined in Equation 10. The pseudo-code for temporal resampling is shown in Algorithm 3 Temporal Resampling; Figure 6 shows If a visible point xq1 generates a sample point xq2 that is reused at another visible point xr1, then the Jacobian determinant in Equation 11 accounts for the fact that xr1 would have itself generated the sample point xq2 with a different probability; page 22, left column, 1st paragraph, at least discloses Samples are taken from the temporal reservoirs at nearby pixels, and resampled into a separate spatial reservoir. (See Algorithm 4 for pseudo-code)).
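For readers unfamiliar with ReSTIR-style resampling, the reservoir update the examiner quotes (stream a candidate in, weight it by the target function over the source PDF, keep it with probability proportional to that weight) can be sketched as follows. This is an editor's illustrative sketch, not code from Ouyang or the application; the `Reservoir` class and `target_fn` names are invented, and the target function here is the simple radiance-only variant the quoted passage describes as suboptimal but reuse-friendly.

```python
import random

class Reservoir:
    """Single-sample reservoir for resampled importance sampling (RIS)."""
    def __init__(self):
        self.sample = None   # the currently kept sample
        self.w_sum = 0.0     # running sum of RIS weights
        self.M = 0           # number of candidates seen so far

    def update(self, sample, weight, rng=random):
        """Stream one candidate in; keep it with probability weight / w_sum."""
        self.w_sum += weight
        self.M += 1
        if self.w_sum > 0 and rng.random() < weight / self.w_sum:
            self.sample = sample

def target_fn(sample):
    # Simple target function: scalar outgoing radiance only (the quote's
    # "suboptimal for a single pixel, but helpful for spatial reuse" variant).
    return sample["radiance"]

def ris_weight(sample, source_pdf):
    # RIS weight: target function divided by the source PDF p_q(omega_i).
    return target_fn(sample) / source_pdf if source_pdf > 0 else 0.0

# Usage: stream three initial samples into a temporal reservoir.
rng = random.Random(7)
reservoir = Reservoir()
for s, pdf in [({"radiance": 0.9}, 0.5), ({"radiance": 0.1}, 0.5),
               ({"radiance": 2.0}, 0.25)]:
    reservoir.update(s, ris_weight(s, pdf), rng)
```

Spatial reuse then repeats the same `update` loop over reservoirs from neighboring pixels rather than fresh samples, which is why a shared, reuse-friendly target function matters.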
Ouyang does not clearly disclose generating one or more samples of energy that correspond to one or more subsurface scatterings of light transported through one or more subsurface interactions of one or more rays, and using a target function that models one or more portions of subsurface light transport to generate one or more resampled samples that represent an amount of energy transported to a location corresponding to the object from one or more internal interactions within the object. However, Wright discloses one or more samples of energy that correspond to one or more subsurface scatterings of light transported through one or more subsurface interactions of one or more rays (Wright- ¶0018, at least discloses determining lighting contributions of interactions of light transport paths that may be used for computing or rendering global illumination or other ray-tracing applications [light transported through one or more subsurface interactions]; ¶0022, at least discloses when sampling a location, outgoing irradiance [energy] from an outgoing irradiance cache may be used to determine shading associated with the location when a hit distance of a ray used to generate the sample [generating one or more samples of energy] exceeds a threshold value; ¶0026, at least discloses to render frames of a virtual environment, the ray caster 104 may be configured to trace rays in a virtual environment to define one or more portions of ray-traced light transport paths (e.g., between a viewpoint camera and one or more light sources) within the virtual environment over one or more frames. 
The lighting determiner 106 may be configured to determine—based at least in part on the traced rays—data representative of lighting contributions (also referred to as lighting contribution data) of interactions of the ray-traced light transport paths in the virtual environment (e.g., with surfaces), such as irradiance (e.g., diffuse irradiance) [correspond to one or more simulated scatterings of light transported]; ¶0029, at least discloses The irradiance may correspond to one or more ray-traced irradiance samples [samples of energy], for example, as described herein with respect to FIG. 2. The irradiance (e.g., diffuse irradiance) may comprise incoming irradiance associated with a location(s) and/or outgoing irradiance associated with the location(s). In embodiments where an irradiance cache comprises outgoing irradiance, the outgoing irradiance associated with (e.g., outgoing or incoming from/to or near the locations) the location(s) may be computed from incoming irradiance associated with the location(s) (e.g., from the irradiance cache and/or ray-traced samples of irradiance); ¶0035, at least discloses a sample(s) captured in each irradiance cache may comprise spatial samples and/or temporal samples relative to a current frame or state. A spatial irradiance sample [samples of energy] may refer to irradiance that corresponds to a time and/or state of the virtual environment being sampled to determine irradiance for that time and/or state. 
A temporal irradiance sample may refer to irradiance that corresponds to a different (e.g., previous) time and/or state of the virtual environment from the time and/or state for which irradiance is being sampled and/or determined; ¶0037-0038, at least disclose irradiance could be sampled from one or more locations (e.g., random locations) on the face 214 to determine irradiance for the locations and/or other locations on or near the face 214 […] the ray caster 104 may define a Normal Distribution Function (NDF) range for the vertex 118F (or other location being sampled) based at least in part on the normal of the surface at the location; ¶0040, at least discloses the ray caster 104 may determine an interaction of the ray 230 with a surface, such as a surface corresponding to the face 214, as shown. The ray caster 104 may also determine the location 216 that corresponds to the interaction [surface interactions]; ¶0070, at least discloses Light meters may be based on measuring the light incoming from certain directions and may directly benefit from the irradiance caches, as the cached irradiance at a point on a surface along a measured direction may be used to calculate final exposure for the virtual environment [simulated scatterings of light transported through one or more surface interactions]); one or more portions of subsurface light transport to generate one or more resampled samples that represent an amount of energy transported to a location corresponding to the object from one or more internal interactions within the object (Wright- Fig. 
2 shows irradiance of a location 216 (e.g., a point or area) may be computed from irradiance sampled at one or more of the vertices 210 [interaction from within the object]; ¶0018-0019, at least disclose determining lighting contributions of interactions of light transport paths that may be used for computing or rendering global illumination or other ray-tracing applications [light transported] […] irradiance for a location on a face of an object may be computed from irradiance [computing energy] from irradiance caches at one or more vertices of the face, reducing the number of irradiance caches needed and/or locations that need to be sampled to update the irradiance caches; ¶0022, at least discloses Using outgoing irradiance may be less accurate than incoming irradiance, but more computationally efficient; ¶0026, at least discloses to render frames of a virtual environment, the ray caster 104 may be configured to trace rays in a virtual environment to define one or more portions of ray-traced light transport paths (e.g., between a viewpoint camera and one or more light sources) within the virtual environment over one or more frames. 
The lighting determiner 106 may be configured to determine—based at least in part on the traced rays—data representative of lighting contributions (also referred to as lighting contribution data) of interactions of the ray-traced light transport paths in the virtual environment (e.g., with surfaces), such as irradiance (e.g., diffuse irradiance) [surface light transport based at least on computing energy] […] The lighting determiner 106 may further aggregate the data representative of the lighting contributions (e.g., incident radiance or irradiance values [amount of the energy]) to update one or more irradiance caches […] The lighting determiner 106 may determine irradiance for and/or update one or more of the irradiance caches (e.g., periodically) using the update ranker 108 and the update selector 110; ¶0029-0030, at least disclose where an irradiance cache comprises outgoing irradiance, the outgoing irradiance associated with (e.g., outgoing or incoming from/to or near the locations) the location(s) may be computed from incoming irradiance associated with the location(s) (e.g., from the irradiance cache and/or ray-traced samples of irradiance) […] the irradiance may comprise an irradiance value, such as a color value, and a normal that defines a plane from which the irradiance is sampled (e.g., a sampled hemisphere); ¶0032, at least discloses where irradiance of multiple locations (e.g., each of vertices 210) are used to derive irradiance for the location 216, the lighting determiner 106 may interpolate irradiance values between the locations to compute the irradiance at the location 216. 
For example, barycentric interpolation may be applied to irradiance values of the vertices 210 to compute an irradiance value(s) any given point and/or area bounded by the vertices 210 (e.g., on the face 214); ¶0037-0038, at least disclose irradiance could be sampled from one or more locations (e.g., random locations) on the face 214 to determine irradiance [determine an amount of the energy] for the locations and/or other locations on or near the face 214 […] the ray caster 104 may define a Normal Distribution Function (NDF) range for the vertex 118F (or other location being sampled) based at least in part on the normal of the surface at the location; ¶0040, at least discloses the ray caster 104 may determine an interaction of the ray 230 with a surface, such as a surface corresponding to the face 214, as shown. The ray caster 104 may also determine the location 216 that corresponds to the interaction [interaction from within the object]; ¶0061, at least discloses The update ranker 108 may be implemented using one or more algorithms and/or Machine Learning Models (MLMs). A MLM may take a variety of forms for example, and without limitation, the MLM(s) may include any type of machine learning model, such as a machine learning model(s) using linear regression, logistic regression, decision trees, support vector machines (SVM), Naïve Bayes, k-nearest neighbor (Knn), K means clustering, random forest, dimensionality reduction algorithms, gradient boosting algorithms, neural networks (e.g., auto-encoders, convolutional, recurrent, perceptrons, long/short term memory/LSTM, Hopfield, Boltzmann [physics-based model], deep belief, deconvolutional, generative adversarial, liquid state machine, etc.), and/or other types of machine learning models). 
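Wright's quoted ¶0032 describes deriving irradiance at a face location by interpolating cached per-vertex irradiance; the barycentric interpolation it names is a standard operation and can be sketched in a few lines. This is an illustrative sketch with invented names (`barycentric_irradiance`), not code from the Wright reference.

```python
def barycentric_irradiance(bary, vertex_irradiance):
    """Interpolate cached per-vertex irradiance (RGB tuples) at a point
    on a triangle face, given the point's barycentric coordinates."""
    u, v, w = bary
    assert abs(u + v + w - 1.0) < 1e-6, "barycentric coords must sum to 1"
    e0, e1, e2 = vertex_irradiance
    # Weighted blend of the three vertex caches, channel by channel.
    return tuple(u * a + v * b + w * c for a, b, c in zip(e0, e1, e2))

# Usage: irradiance caches at the three vertices of a face (here pure
# red/green/blue for visibility), queried at the face centroid.
caches = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
centroid = (1 / 3, 1 / 3, 1 / 3)
irr = barycentric_irradiance(centroid, caches)
```

The point of the scheme in the quote is that only the vertex caches need updating; any point on the face gets its irradiance for the cost of this blend.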
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ouyang to incorporate the teachings of Wright, and apply the irradiance value into generating samples on the lights in the scene, as taught by Ouyang, for resampling the one or more samples of energy using a target function that models one or more portions of subsurface light transport to generate one or more resampled samples that represent an amount of energy transported to a location corresponding to the object from one or more internal interactions within the object. The same motivation that was utilized in the rejection of claim 1 applies equally to this claim. The prior art does not clearly disclose, but Zhao discloses one or more subsurface scatterings of light transported through one or more subsurface interactions of one or more rays within an object (Zhao- ¶0103, at least discloses indirect light interactions, which include subsurface scattering and subsurface bouncing for human skin, is modelled using a trained neural network that outputs a volumetric light map. While the present example will be described with respect to human face rendering and discusses light interaction with human skin, it will be appreciated that similar methods can be implemented for modeling light transport for non-human organisms, object, etc.; ¶0107-0110, at least disclose Unlike a multi-bounce application, the human skin is a multilayered structure comprising thin oily layer, epidermis and dermis producing specular reflection, surface diffuse reflection and subsurface scattering […] the volumetric light map is used to model unpredictable light transport underneath the skin including the subsurface bouncing and subsurface scattering in between material particles by employing local spherical harmonics to define indirect light Lindirect for building indirect light transport Lid, which is provided […] Referring now to FIG.
8, it shows the effectiveness of volumetric light map under all white illumination. In particular, input albedo is shown under column 810, subsurface scattering built from volumetric light map is shown under 812, and final rendering is shown under column 814. As evidenced, subsurface scattering presents the similar color pattern of that from albedo but with less appearance of hairs including beard and eyebrows; ¶0112, at least discloses when rendering human skin, the generated volumetric light map models subsurface scattering and subsurface bouncing. For example, under a given lighting condition, a thinner skin appears brighter and with a more reddish hue compared to thicker skin. Further, indirect light transport, such as scattering of light at a sub-surface level (e.g., underneath the skin), contributes to shadow softening; ¶0138, at least discloses In order to model light transport underneath the skin including the subsurface bouncing and subsurface scattering in between material particles, (which leads to the visual difference that a thinner skin looks brighter and more reddish under the same lighting condition. Also, indirect light also contributes to shadow softening)); using a target function that models one or more portions of subsurface light transport (Zhao- Fig. 3 and ¶0100, at least disclose at 314, the method 300 includes determining one or more of a direct specular component, a direct diffuse component, and an indirect diffuse component for each sample in the volumetric radiance field to model total light transport; ¶0102-0103, at least discloses where Lo (x→ωo) is outgoing radiance leaving geometric location x in direction ωo, Li (x←ωi) stands for incident radiance that arrives at x, θ is the angle between incident light direction ωi and surface normal direction at x.
Further, f (x, ωo, ωi) is a bidirectional scattering distribution function (BSDF) that describes the appearance of a surface area centered at a point x when viewed from a direction ωo, illuminated by incident light from direction ωi, and fs is specular reflection, fr is diffuse reflection, and fis is subsurface scattering […] the inventors herein provide a method for modelling direct and indirect light interactions with materials. In particular, indirect light interactions, which include subsurface scattering and subsurface bouncing for human skin, is modelled using a trained neural network that outputs a volumetric light map. While the present example will be described with respect to human face rendering and discusses light interaction with human skin, it will be appreciated that similar methods can be implemented for modeling light transport for non-human organisms, object, etc; ¶0109-0112, at least disclose the volumetric light map is used to model unpredictable light transport underneath the skin including the subsurface bouncing and subsurface scattering in between material particles by employing local spherical harmonics to define indirect light Lindirect for building indirect light transport Lid; ¶0138, at least discloses In order to model light transport underneath the skin including the subsurface bouncing and subsurface scattering in between material particles, (which leads to the visual difference that a thinner skin looks brighter and more reddish under the same lighting condition. Also, indirect light also contributes to shadow softening) […] the volumetric light map is employed using local spherical harmonics to define indirect light Lindirect for building indirect light transport Lss. 
The indirect light transport equation is shown at equation (3) above); It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ouyang/Wright to incorporate the teachings of Zhao, and apply the subsurface scattering into Ouyang/Wright’s teachings for resampling the one or more samples of energy using a target function that models one or more portions of subsurface light transport to generate one or more resampled samples that represent an amount of energy transported to a location corresponding to the object from one or more internal interactions within the object. The same motivation that was utilized in the rejection of claim 1 applies equally to this claim. The one or more processors of claim 16 are similar in scope to the functions performed by the method of claim 2 and therefore claim 16 is rejected under the same rationale. Regarding claim 19, Ouyang in view of Wright and Zhao, discloses the one or more processors of claim 15, and further discloses wherein the resampling includes: updating one or more reservoirs of samples using the one or more samples and the target function to generate one or more updated reservoirs of samples (Ouyang- Figure 4 shows Temporal reuse: we use the sample from the initial sample buffer to update temporal reservoir buffer by randomly choosing between the one created in current frame and the existing one in the buffer. Temporal reprojection is applied to find the corresponding temporal reservoir from the last frame. Spatial reuse: we use randomly-chosen temporal reservoirs from neighborhood pixels to update spatial reservoir. To suppress bias, we choose neighborhood pixels with similar geometric features by comparing their depth and normal with the current pixel's; page 21, right column, section 4.2. Resampling and Shading, at least discloses After the fresh initial sample is taken, spatial and temporal resampling is applied.
The target function p̂q = Li(xv, ωi) f(ωo, ωi)(cos θi) = Lo(xs, −ωi) f(ωo, ωi)(cos θi) (9) includes the effect of the BSDF and cosine factor at the visible point, though we have also found that the simple target function (10) works well. While it is a suboptimal target function for a single pixel, we have found that it is helpful for spatial resampling in that it preserves samples that may be effective at pixels other than the one that initially generated it. After initial samples are generated, temporal resampling is applied. In this stage, for each pixel, we read the sample from the initial sample buffer, and use it to randomly update temporal reservoir, computing the RIS weight following Equation 5 with the source PDF as the PDF for the sampled direction pq(ωi) and p̂ as defined in Equation 10. The pseudo-code for temporal resampling is shown in Algorithm 3); and selecting the one or more resampled samples from the one or more updated reservoirs of samples (Ouyang- page 22, left column, 1st paragraph, at least discloses After temporal reuse, spatial reuse is applied. Samples are taken from the temporal reservoirs at nearby pixels, and resampled into a separate spatial reservoir. (See Algorithm 4 for pseudo-code.) With spatial reuse, it is necessary to account for differences in the source PDF between pixels that are due to the fact that our sampling scheme is based on the visible point's position and surface normal). The one or more processors of claim 20 are similar in scope to the functions performed by the system of claim 14 and therefore claim 20 is rejected under the same rationale. 8. Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Ouyang in view of Wright, further in view of Zhao, still further in view of Karlík et al. (“Karlík”) [US-2021/0142555-A1], still further in view of Wang et al.
(“Wang”) [US-2009/0219287-A1] Regarding claim 3, Ouyang in view of Wright and Zhao, discloses the method of claim 1, and discloses wherein the target function includes a physics-based model of the one or more portions of subsurface light transport, the physics-based model including an analytical function (Wright- Fig. 2 shows irradiance of a location 216 (e.g., a point or area) may be computed from irradiance sampled at one or more of the vertices 210 [interaction from within the object]; ¶0018-0019, at least disclose determining lighting contributions of interactions of light transport paths that may be used for computing or rendering global illumination or other ray-tracing applications [light transported] […] irradiance for a location on a face of an object may be computed from irradiance [computing energy] from irradiance caches at one or more vertices of the face, reducing the number of irradiance caches needed and/or locations that need to be sampled to update the irradiance caches; ¶0022, at least discloses Using outgoing irradiance may be less accurate than incoming irradiance, but more computationally efficient; ¶0026, at least discloses to render frames of a virtual environment, the ray caster 104 may be configured to trace rays in a virtual environment to define one or more portions of ray-traced light transport paths (e.g., between a viewpoint camera and one or more light sources) within the virtual environment over one or more frames. 
The lighting determiner 106 may be configured to determine—based at least in part on the traced rays—data representative of lighting contributions (also referred to as lighting contribution data) of interactions of the ray-traced light transport paths in the virtual environment (e.g., with surfaces), such as irradiance (e.g., diffuse irradiance) [surface light transport based at least on computing energy] […] The lighting determiner 106 may further aggregate the data representative of the lighting contributions (e.g., incident radiance or irradiance values [amount of the energy]) to update one or more irradiance caches […] The lighting determiner 106 may determine irradiance for and/or update one or more of the irradiance caches (e.g., periodically) using the update ranker 108 and the update selector 110; ¶0029-0030, at least disclose where an irradiance cache comprises outgoing irradiance, the outgoing irradiance associated with (e.g., outgoing or incoming from/to or near the locations) the location(s) may be computed from incoming irradiance associated with the location(s) (e.g., from the irradiance cache and/or ray-traced samples of irradiance) […] the irradiance may comprise an irradiance value, such as a color value, and a normal that defines a plane from which the irradiance is sampled (e.g., a sampled hemisphere); ¶0032, at least discloses where irradiance of multiple locations (e.g., each of vertices 210) are used to derive irradiance for the location 216, the lighting determiner 106 may interpolate irradiance values between the locations to compute the irradiance at the location 216. 
For example, barycentric interpolation may be applied to irradiance values of the vertices 210 to compute an irradiance value(s) any given point and/or area bounded by the vertices 210 (e.g., on the face 214); ¶0038, at least discloses the ray caster 104 may define a Normal Distribution Function (NDF) range for the vertex 118F (or other location being sampled) based at least in part on the normal of the surface at the location. The ray caster 104 may use the NDF and an incoming ray (such as a primary ray, secondary ray, or other incident ray and in some examples a roughness value of the surface that is associated with the location) to define a Bidirectional Reflectance Distribution Function (BRDF); ¶0061, at least discloses The update ranker 108 may be implemented using one or more algorithms and/or Machine Learning Models (MLMs). A MLM may take a variety of forms for example, and without limitation, the MLM(s) may include any type of machine learning model, such as a machine learning model(s) using linear regression, logistic regression, decision trees, support vector machines (SVM), Naïve Bayes, k-nearest neighbor (Knn), K means clustering, random forest, dimensionality reduction algorithms, gradient boosting algorithms, neural networks (e.g., auto-encoders, convolutional, recurrent, perceptrons, long/short term memory/LSTM, Hopfield, Boltzmann [physics-based model], deep belief, deconvolutional, generative adversarial, liquid state machine, etc.), and/or other types of machine learning models). The prior art does not clearly disclose, but Karlík discloses function representing a boundary term of a lighting equation for the interaction (Karlík- ¶0020, at least discloses To sample rays, the rendering module 103 employs one or more sampling techniques. 
Two common sampling techniques used to evaluate lighting at a point on a surface include sampling the light source and sampling a bidirectional reflectance distribution function (BRDF); ¶0023, at least discloses When the rendering module 103 employs multiple sampling techniques, multiple importance sampling provides a simple yet robust means for combining the sampling techniques with provable variance bounds; ¶0042, at least discloses Sampling from the HDR map, e.g., map 204, is usually implemented using a tabulated probability density function (pdf) p1(ωi)), the lighting equation including a scattering term, and the boundary term (Karlík- ¶0021, at least discloses The BRDF describes how light is reflected/scattered off a surface as a function of the direction of a ray incident on the surface; ¶0023, at least discloses When the rendering module 103 employs multiple sampling techniques, multiple importance sampling provides a simple yet robust means for combining the sampling techniques with provable variance bounds). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ouyang/Wright/Zhao to incorporate the teachings of Karlík, and apply the boundary term into Ouyang/Wright/Zhao’s teachings so that the target function corresponds to a boundary term of a lighting equation for the interaction. Doing so would provide sampling techniques in image rendering that reduce sample result variance.
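The multiple importance sampling that Karlík's quoted ¶0023 relies on is conventionally implemented with the balance heuristic, which weights each strategy's sample by its PDF relative to the combined PDFs. The sketch below is a standard textbook formulation, not code from the Karlík reference; the function names are invented for illustration.

```python
def balance_heuristic(pdf_a, pdf_b):
    """MIS balance-heuristic weight for a sample drawn from strategy A
    when strategy B could also have produced the same direction."""
    denom = pdf_a + pdf_b
    return pdf_a / denom if denom > 0 else 0.0

def mis_estimate(f_light, pdf_light_l, pdf_brdf_l,
                 f_brdf, pdf_brdf_b, pdf_light_b):
    """One-sample-per-strategy MIS estimator combining a light-source
    sample and a BRDF sample: f_* are integrand values, pdf_*_l are the
    two PDFs of the light sample, pdf_*_b the two PDFs of the BRDF sample."""
    est = 0.0
    if pdf_light_l > 0:
        est += balance_heuristic(pdf_light_l, pdf_brdf_l) * f_light / pdf_light_l
    if pdf_brdf_b > 0:
        est += balance_heuristic(pdf_brdf_b, pdf_light_b) * f_brdf / pdf_brdf_b
    return est
```

The weights of the two strategies sum to one for any direction both can produce, which is what gives the combination its provable variance bounds.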
The prior art does not clearly disclose, but Wang discloses, the lighting equation including an absorption term (Wang- ¶0020, at least discloses for a given distribution of spatially-variant absorption and diffusion coefficients, the corresponding diffusion process that generates the material appearance can be expressed as a partial differential equation, defined over the volumetric elements, with a boundary condition given by a lighting environment; ¶0041, at least discloses in acquiring the material properties from measured appearance, computation of the absorption coefficients μ(x) and diffusion coefficients κ(x) occurs based on measured outgoing radiances {Lo,m(x, ωo) | x ∈ A, m = 0, 1, . . . , M} from the object surface due to multiple scattering under M different illumination conditions {Li,m(x, ωi) | x ∈ A, m = 0, 1, . . . , M} on the object surface). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ouyang/Wright/Zhao/Karlík to incorporate the teachings of Wang, applying the absorption term to Ouyang/Wright/Zhao/Karlík's teachings, so that the lighting equation includes an absorption term, a scattering term, and the boundary term. Doing so would provide for modeling and/or rendering of heterogeneous translucent material. 9. Claims 13 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Ouyang in view of Wright, further in view of Zhao, still further in view of Piatt et al. 
(“Piatt”) [US-11063556-B1]. Regarding claim 13, Ouyang in view of Wright and Zhao discloses the system of claim 10, and does not clearly disclose, but Piatt discloses, wherein the one or more sets of samples of energy are determined using one or more cached irradiance values from a backside lighting cache, the backside lighting cache storing the one or more cached irradiance values based at least on the one or more cached irradiance values corresponding to a backside of the object relative to a camera in an environment (Piatt- col. 1, lines 27-36, at least discloses the system can measure a backside irradiance of the array and set a backside irradiance parameter in accordance with the measured backside irradiance. The backside irradiance may represent the amount of solar power the back sides of the bifacial solar modules receive after light reflects off the surface on which the bifacial solar modules are disposed. The system can determine and set a shed transparency parameter using the measured backside irradiance and a geometric model of the array of bifacial solar modules). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ouyang/Wright/Zhao to incorporate the teachings of Piatt, applying the backside irradiance to Ouyang/Wright/Zhao's teachings, so that the one or more sets of samples of energy are determined using one or more cached irradiance values from a backside lighting cache, the backside lighting cache storing the one or more cached irradiance values based at least on the one or more cached irradiance values corresponding to a backside of the object relative to a camera in an environment. Doing so would minimize a loss function of the expected bifacial gain and the actual bifacial gain, to further improve the performance of the bifacial gain model. 
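Abstracting away Piatt's photovoltaic context, the claimed backside lighting cache amounts to storing irradiance only for surface points that face away from the camera. A minimal sketch of that idea follows; the class and method names are hypothetical, not from the application or any cited reference.

```python
# Minimal sketch of a backside lighting cache: irradiance is cached only
# for surface points whose normal faces away from the camera. All names
# here are hypothetical illustrations, not from any cited reference.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

class BacksideLightingCache:
    def __init__(self):
        self._cache = {}  # surface point (tuple) -> cached irradiance

    def is_backside(self, normal, view_dir):
        # Backside relative to the camera: the surface normal points
        # along the view direction, i.e., away from the viewer.
        return dot(normal, view_dir) > 0.0

    def store(self, point, normal, view_dir, irradiance):
        if self.is_backside(normal, view_dir):
            self._cache[point] = irradiance
            return True
        return False  # frontside points are not cached

    def lookup(self, point):
        return self._cache.get(point)

cache = BacksideLightingCache()
view = (0.0, 0.0, 1.0)  # camera looks along +z
cache.store((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), view, 0.8)   # backside: cached
cache.store((0.0, 1.0, 0.0), (0.0, 0.0, -1.0), view, 0.5)  # frontside: skipped
```

Samples of energy for subsurface transport would then be drawn from the cached backside values rather than recomputed per interaction.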
The one or more processors of claim 18 are similar in scope to the functions performed by the method of claim 13, and therefore claim 18 is rejected under the same rationale. 10. Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Ouyang in view of Wright, further in view of Zhao, still further in view of Wang et al. (“Wang”) [US-2009/0219287-A1]. Regarding claim 6, Ouyang in view of Wright and Zhao discloses the method of claim 1, and does not clearly disclose, but Wang discloses, wherein the target function corresponds to a single scattering transmission of lighting for the interaction (Wang- ¶0038-0039, at least discloses where n is the surface normal at xi and S(xi, ωi, xo, ωo) is the BSSRDF. The outgoing radiance can be divided into single- and multiple-scattering components: Lo(xo, ωo) = Ls(xo, ωo) + Lm(xo, ωo) […] The single-scattering component Ls(xo, ωo) accounts for light that interacts exactly once with the medium before exiting the volume, and may be evaluated by integrating the incident radiance along the refracted outgoing ray. An exemplary technique focuses on multiple scattering and uses a highly simplified single scattering term that assumes scattering to be isotropic and occurring only at surface points xo); the method further including: determining, using a diffusion profile, energy that corresponds to a multiple scattering transmission of lighting for the interaction (Wang- ¶0038-0039, at least discloses the multiple-scattering component Lm(xo, ωo) consists of light that interacts multiple times within the object volume. For highly scattering, non-emissive materials, multiple scattering may be approximated by a diffusion process described by the equation ∇·(κ(x)∇φ(x)) − μ(x)φ(x) = 0, x ∈ V, with a boundary condition defined on the object surface A, where φ(x) is the radiant fluence (also known as the scalar irradiance), κ(x) = 1/[3(μ(x) + σs′(x))] is the diffusion coefficient, μ(x) is the absorption coefficient, and σs′(x) = σs(1 − g) is the reduced scattering coefficient, with g being the mean cosine of the scattering angle. The exemplary technique can define C = (1 + Fdr)/(1 − Fdr), where Fdr is the diffuse Fresnel reflectance. The diffuse incoming light at a surface point x is given by q(x) = ∫Ω Li(x, ωi)(n·ωi)Ft(η(x), ωi) dωi. With the diffusion approximation, the multiple scattering component of the outgoing radiance is calculated as […] where Φ(xo) is computed from the preceding diffusion and boundary equations); combining the energy that corresponds to the multiple scattering transmission with the one or more resampled samples (Wang- ¶0038-0039, as quoted above); and wherein the rendering of the image is further based at least on an output of the combining (Wang- ¶0038-0039, as quoted above). 
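The decomposition quoted from Wang ¶0038-0039 — outgoing radiance split as Lo = Ls + Lm, with the multiple-scattering term driven by a radially decaying diffusion profile — can be sketched numerically. The exponential profile below is a deliberate simplification standing in for Wang's PDE/boundary-condition solution, and all names and sample values are hypothetical.

```python
import math

# Sketch of Lo = Ls + Lm: multiple scattering is approximated by weighting
# nearby irradiance samples with a radially symmetric diffusion profile.
# The exponential falloff is a simplification, not Wang's PDE solution.

def diffusion_profile(r, sigma_tr=8.0):
    # sigma_tr plays the role of an effective transport coefficient
    # (hypothetical value chosen only for illustration).
    return math.exp(-sigma_tr * r)

def multiple_scattering(x_out, samples):
    # samples: list of (position, irradiance) pairs on the surface,
    # using a 1-D surface parameterization for simplicity.
    total, norm = 0.0, 0.0
    for x_in, irradiance in samples:
        w = diffusion_profile(abs(x_out - x_in))
        total += w * irradiance
        norm += w
    return total / norm if norm > 0.0 else 0.0

def outgoing_radiance(single_scatter, x_out, samples):
    # Lo = Ls + Lm, per the decomposition quoted from Wang ¶0038-0039.
    return single_scatter + multiple_scattering(x_out, samples)
```

Distant samples contribute almost nothing through the falloff, which is why a diffusion profile concentrates the multiple-scattering estimate around the exit point.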
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ouyang/Wright/Zhao to incorporate the teachings of Wang, applying the single- and multiple-scattering components to Ouyang/Wright/Zhao's teachings for determining, using a diffusion profile, energy that corresponds to a multiple scattering transmission of lighting for the interaction, and combining the energy that corresponds to the multiple scattering transmission with the one or more resampled samples, wherein the rendering of the image is further based at least on an output of the combining. Doing so would provide for modeling and/or rendering of heterogeneous translucent material. 11. Claims 12 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Ouyang in view of Wright, further in view of Zhao, still further in view of Habel et al. (“Habel”) [US-2014/0204087-A1]. Regarding claim 12, Ouyang in view of Wright and Zhao discloses the system of claim 10, and does not clearly disclose, but Habel discloses, wherein the filtering (see the claim 10 rejection for detailed analysis) is based at least on converting the one or more sets of samples of energy from a first distribution corresponding to a bidirectional scattering distribution function to a second distribution corresponding to the target function (Habel- ¶0022-0023, at least disclose BSDF Bidirectional Scattering Distribution Function […] BSSRDF Bidirectional Surface Scattering Reflectance Distribution Function; ¶0072, at least discloses the present invention computes a numerical approximation by calculating a weighted sum of values of the integrand in (19), denoted here as f(x⃗, ti), using importance sampling, where N is the number of evaluation points used, ti are distances along the beam, and pdf(ti | x⃗) is a probability density function (pdf) used in the importance sampling). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ouyang/Wright/Zhao to incorporate the teachings of Habel, applying the Bidirectional Surface Scattering Reflectance Distribution Function to Ouyang/Wright/Zhao's teachings, so that the filtering is based at least on converting the one or more sets of samples of energy from a first distribution corresponding to a bidirectional scattering distribution function to a second distribution corresponding to the target function. Doing so would provide new techniques for simulating the effects of light on a surface of a translucent object. Regarding claim 17, Ouyang in view of Wright and Zhao discloses the one or more processors of claim 15, and does not clearly disclose, but Habel discloses, wherein the resampling is based at least on converting one or more sets of samples from a first distribution corresponding to a source probability distribution function to a second distribution corresponding to the target function (Habel- ¶0022-0023, at least disclose BSDF Bidirectional Scattering Distribution Function […] BSSRDF Bidirectional Surface Scattering Reflectance Distribution Function; ¶0072, at least discloses the present invention computes a numerical approximation by calculating a weighted sum of values of the integrand in (19), denoted here as f(x⃗, ti), using importance sampling, where N is the number of evaluation points used, ti are distances along the beam, and pdf(ti | x⃗) is a probability density function (pdf) used in the importance sampling). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ouyang/Wright/Zhao to incorporate the teachings of Habel, applying the Bidirectional Surface Scattering Reflectance Distribution Function to Ouyang/Wright/Zhao's teachings, so that the resampling is based at least on converting one or more sets of samples from a first distribution corresponding to a source probability distribution function to a second distribution corresponding to the target function. Doing so would provide new techniques for simulating the effects of light on a surface of a translucent object. Allowable Subject Matter 12. Claims 4 and 8 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. 13. The following is a statement of reasons for the indication of allowable subject matter: Regarding claim 4, the combination of prior art teaches the method of claim 1. However, in the context of claim 1 as a whole, the combination of prior art does not teach rendering of the image based at least on combining the one or more resampled samples with data corresponding to energy externally transported to the interaction from the environment outside of the object. Therefore, claim 4 in the context of claim 1 as a whole comprises allowable subject matter. Regarding claim 8, the combination of prior art teaches the method of claim 1. However, in the context of claims 1 and 7 as a whole, the combination of prior art does not teach the rendering of the image based at least on combining the one or more resampled samples with one or more second samples corresponding to a second amount of energy externally transported to the interaction from the environment outside of the object. Therefore, claim 8 in the context of claims 1 and 7 as a whole comprises allowable subject matter. 
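For context on the "resampling … using a target function" language at issue in claim 1, and the source-to-target pdf conversion recited in claim 17, the standard resampled importance sampling (RIS) pattern looks like the following minimal sketch. The uniform source pdf and the quadratic target function are hypothetical stand-ins; in the claims, the target models portions of subsurface light transport.

```python
import random

# Resampled importance sampling (RIS): draw candidates from an easy
# source pdf, then select one with probability proportional to
# target(x) / source_pdf(x). The selected samples are distributed
# approximately according to the (unnormalized) target function.
def resample(candidates, source_pdf, target, rng=random):
    weights = [target(x) / source_pdf(x) for x in candidates]
    w_sum = sum(weights)
    if w_sum == 0.0:
        return None, 0.0
    # Select one candidate proportionally to its resampling weight.
    u = rng.random() * w_sum
    acc = 0.0
    for x, w in zip(candidates, weights):
        acc += w
        if u <= acc:
            # Unbiased contribution weight for the chosen sample.
            return x, w_sum / (len(candidates) * target(x))
    x = candidates[-1]
    return x, w_sum / (len(candidates) * target(x))

# Hypothetical example: source is uniform on [0, 1]; the quadratic
# target stands in for a model of subsurface light transport.
target = lambda x: x * x
source_pdf = lambda x: 1.0
cands = [random.random() for _ in range(32)]
x, w = resample(cands, source_pdf, target)
```

The returned contribution weight lets the renderer use the resampled value in an unbiased estimator even though the target function was never normalized.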
Conclusion 14. THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. 15. Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL LE whose telephone number is (571)272-5330. The examiner can normally be reached 9am-5pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kent Chang can be reached at (571) 272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. 
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /MICHAEL LE/Primary Examiner, Art Unit 2614

Prosecution Timeline

Jan 10, 2023
Application Filed
Nov 03, 2024
Non-Final Rejection — §103
Feb 06, 2025
Response Filed
May 04, 2025
Final Rejection — §103
Aug 08, 2025
Request for Continued Examination
Aug 11, 2025
Response after Non-Final Action
Sep 05, 2025
Non-Final Rejection — §103
Dec 08, 2025
Response Filed
Mar 03, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579211
AUTOMATED SHIFTING OF WEB PAGES BETWEEN DIFFERENT USER DEVICES
2y 5m to grant Granted Mar 17, 2026
Patent 12579738
INFORMATION PRESENTING METHOD, SYSTEM THEREOF, ELECTRONIC DEVICE, AND COMPUTER-READABLE STORAGE MEDIUM
2y 5m to grant Granted Mar 17, 2026
Patent 12579072
GRAPHICS PROCESSOR REGISTER FILE INCLUDING A LOW ENERGY PORTION AND A HIGH CAPACITY PORTION
2y 5m to grant Granted Mar 17, 2026
Patent 12573094
COMPRESSION AND DECOMPRESSION OF SUB-PRIMITIVE PRESENCE INDICATIONS FOR USE IN A RENDERING SYSTEM
2y 5m to grant Granted Mar 10, 2026
Patent 12558788
SYSTEM AND METHOD FOR REAL-TIME ANIMATION INTERACTIVE EDITING
2y 5m to grant Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
66%
Grant Probability
88%
With Interview (+22.1%)
3y 3m
Median Time to Grant
High
PTA Risk
Based on 864 resolved cases by this examiner. Grant probability derived from career allow rate.
