Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Status
Claims 1-20 are pending in the instant application, No. 18/070326.
Priority / Filing Date
Applicant has not claimed any domestic or foreign priority. The effective filing date of this application is therefore November 28, 2022.
Information Disclosure Statement
As required by M.P.E.P. 609(C), Applicant's submissions of the Information Disclosure Statements dated February 28, 2023, February 13, 2024, and November 19, 2025 are acknowledged by the Examiner, and the cited references have been considered in the examination of the claims now pending. As required by M.P.E.P. 609(C)(2), a copy of each PTOL-1449, initialed and dated by the Examiner, is attached to the instant Office action.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
4. Claims 1-8 and 10-20 are rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al. (Pub. No.: US 2009/0006044 A1), hereinafter Zhou, in view of Gibbs et al. (Pub. No.: US 2009/0040220 A1), hereinafter Gibbs.
Regarding Claim 1, Zhou discloses a method for real-time volumetric rendering of dynamic particles (Zhou: abstract), comprising: converting particle data representing each of the dynamic particles into a density volume representing a density distribution of the respective dynamic particle distributed in a three-dimensional (3D) space (Zhou: [0025]-[0028], [0116]-[0118]: The process 100 starts at block 102 where a RBF model density field is received or provided. The RBF model density field is decomposed into a weighted sum of a set of radial basis functions (RBFs) each having a RBF center ... scattering medium defined in a volume space; the process computes an effective exitant radiance at point x in the volume space based on the source radiance obtained by block 106. The point x can be any point in the 3-D volume space); a light distribution within the density volume representing a light value for each grid point within the density volume using ray marching from a light source (Zhou: [0009], [0028], [0034], [0102]-[0105]: The runtime algorithm, which includes both light transfer simulation and ray marching, can be implemented on a GPU to allow for real-time rendering and real-time manipulation of viewpoint and lighting; the process computes an effective exitant radiance at point x in the volume space based on the source radiance obtained by block 106. The point x can be any point in the 3-D volume space. The effective exitant radiance is thus computed as a function of x. Because the function may typically have no analytical form, the effective exitant radiance is calculated at discrete points x, each corresponding to a discrete voxel; computes effective exitant radiance at point x in the volume space based on the model source radiance by taking into account contributions by the residual density field; the ray marching method as described herein may be used for such integration after computation of source radiances), grid points within the density volume corresponding to fixed reference positions; rendering the dynamic particles in real-time by computing pixel color values determined using (i) ray marching toward a viewpoint position, (ii) the density volume, and (iii) the light distribution (Zhou: [0034], [0103], [0154], [0169]-[0171], [0177]: For each pixel in the quad, denote its corresponding point in the volume as ui. In the pixel shader, the density wlBl(ui) is evaluated and saved in the alpha channel, and {tilde over (J)}D,l(ui)=wlBl(ui)(y(wo)·{tilde over (J)}(cl)) is computed and placed in the RGB channels. With the residual R(ui) from the hash table, the process calculates and stores {tilde over (J)}D,l(ui)R(ui) in the RGB channels of a second color buffer; vertex shading is then computed as the triple product of visibility, BRDF and lighting vectors. After the scene geometry is rendered, the source radiance is computed at RBF centers due to single scattering, taking the occlusions due to scene geometry into account using SHEXP. Finally, the multiple scattering simulation and compensated ray marching are performed to generate the results; for each voxel traversed, the source radiance of the voxel is first added into the current radiance. The transmittance X due to the voxel is computed and multiplied with the sum of the source radiance, weighted by the extinction cross section t and the medium density D at the voxel; the effective exitant radiance is calculated at discrete points x, each corresponding to a discrete voxel. The effective exitant radiance is calculated based on several inputs, including the reduced incident radiance, the source radiance at each voxel, and the density field; the ray marching process is to compute the medium radiance Lm(x, wo) by gathering radiance contributions towards the viewpoint); and outputting a representation of the dynamic particles based on the rendering (Zhou: [0029]: Any technique, such as programmable shaders, suitable for rendering 3-D voxel properties into a 2-D pixel display may be used).
Zhou does not teach:
precomputing a light distribution within the density volume representing a light value for each grid point within the density volume using ray marching from a light source, grid points within the density volume corresponding to fixed reference positions.
However, Gibbs teaches precomputing a light distribution within the density volume representing a light value for each grid point within the density volume using ray marching from a light source, grid points within the density volume corresponding to fixed reference positions (Gibbs: [0007], [0036]-[0039]: the voxel grid is ray marched with a particle generated for each sample. The particles are then rendered by splatting. For each pixel, a single ray is cast from the camera location through the center of that pixel. A "light volume" is used for reusing illumination calculations. Specifically, illumination is computed for the center point of each voxel. Illumination at an arbitrary point in the volume is then approximated through interpolation; decoupling the light volume from the voxel grid. Specifically, one creates a distinct voxel grid (hereinafter referred to as the "light grid") whose resolution is specified by the animator. Upon creation, the light grid is aligned and oriented with the volume data bounding box. (A bounding box is a representation of the extent of the volume.) In other words, the light grid precisely fits the volume data whatever the volume data representation may be. The light grid is initialized in order to contain no illumination information).
It would have been obvious to one of ordinary skill in the art to combine the animation method taught by Zhou with the precomputed light distribution taught by Gibbs, since Gibbs's light volume allows illumination calculations to be reused rather than recomputed, improving the rendering efficiency of the graphics processing unit.
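By way of illustration only (the Examiner's own sketch, not code from Zhou or Gibbs; the single-scattering model, function names, and constants are assumptions), a light value can be precomputed at each fixed grid point by marching from that point toward the light source:

import numpy as np

def precompute_light_grid(density, light_dir, sigma_t=1.2, step=1.0):
    # Precompute a per-voxel light value by ray marching from each fixed
    # grid point toward a directional light through the density volume.
    nx, ny, nz = density.shape
    light = np.zeros_like(density)
    d = np.asarray(light_dir, dtype=float)
    d /= np.linalg.norm(d)  # direction the light travels
    for ix in range(nx):
        for iy in range(ny):
            for iz in range(nz):
                p = np.array([ix, iy, iz], dtype=float)
                tau = 0.0  # accumulated optical depth
                while (0 <= p[0] < nx) and (0 <= p[1] < ny) and (0 <= p[2] < nz):
                    tau += density[tuple(p.astype(int))] * sigma_t * step
                    p -= d * step  # step back toward the light source
                light[ix, iy, iz] = np.exp(-tau)  # Beer-Lambert transmittance
    return light

# Example: a 16^3 grid holding a Gaussian density blob, lit from above.
g = np.linspace(-1.0, 1.0, 16)
X, Y, Z = np.meshgrid(g, g, g, indexing="ij")
light_grid = precompute_light_grid(np.exp(-8.0 * (X**2 + Y**2 + Z**2)),
                                   light_dir=(0.0, -1.0, 0.0))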
Regarding Claims 14 and 18, the claims recite the same substantive limitations as claim 1 and are rejected using the same teachings.
Regarding Claim 2, the combination of Zhou and Gibbs discloses the method of claim 1, further comprising: generating the particle data representing simulated particles composed of a simulated material by a physically-based simulation of natural phenomena, wherein the generated particle data includes movement of the simulated particles in the 3D space (Zhou: [0027], [0046], [0053]: Interpolation is a much faster process than directly computing the values of source radiance at various voxels in the volume space, especially when the direct computation requires a physics-based simulation; realistic rendering; rendering smoke in video games in which the smoke is a part of a video or animation having a sequence of renderings of images each rendered at a short time interval (e.g., 1/20 sec to result in 20 frames per second)).
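For context only (an illustrative sketch by the Examiner, not the simulation of either reference; the forces and constants are assumptions), particle data with per-frame movement can be generated by a simple physically based integration step:

import numpy as np

def step_particles(pos, vel, dt=1.0 / 60.0, gravity=(0.0, -9.81, 0.0), drag=0.1):
    # Semi-implicit Euler: update velocities from forces, then advect positions.
    vel = vel + dt * (np.asarray(gravity) - drag * vel)
    pos = pos + dt * vel
    return pos, vel

# 1,000 simulated particles falling in a unit box, advanced one frame.
rng = np.random.default_rng(0)
pos, vel = step_particles(rng.random((1000, 3)), np.zeros((1000, 3)))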
Regarding Claims 15 and 20, the claims recite the same substantive limitations as claim 2 and are rejected using the same teachings.
Regarding Claim 3, neither Zhou nor Gibbs teaches that the simulated material is snow. However, Zhou teaches a real-time algorithm for simulation and rendering of inhomogeneous scattering media (Zhou: Abstract: real-time rendering of inhomogeneous scattering media animations (such as smoke animations)).
It would have been obvious to one of ordinary skill in the art to utilize the method taught by the combination of Zhou and Gibbs to simulate snow, since doing so would enable realistic animation of snow.
Regarding Claim 16, the claim recites the same substantive limitations as claim 3 and is rejected using the same teachings.
Regarding Claim 4, neither Zhou nor Gibbs teaches that the simulated material is ash. However, Zhou teaches a real-time algorithm for simulation and rendering of inhomogeneous scattering media (Zhou: Abstract: real-time rendering of inhomogeneous scattering media animations (such as smoke animations)).
It would have been obvious to one of ordinary skill in the art to utilize the method taught by the combination of Zhou and Gibbs to simulate ash, since doing so would enable realistic animation of ash.
Regarding Claim 5, Gibbs teaches the simulated material is dust (Gibbs: [0041]: effects include fire, dust, and smoke).
It would have been obvious to one of ordinary skill in the art to utilize the method taught by the combination of Zhou and Gibbs to simulate dust, since doing so would enable realistic animation of dust.
Regarding Claim 6, Zhou teaches the simulated material is translucent (Zhou: [0025]: scattering medium, such as smoke).
Regarding Claim 7, Zhou teaches determining, based on the particle data, a density value contribution for each grid point within a particle radius, and summing the density value contributions to obtain the density distribution of the dynamic particles distributed in the 3D space (Zhou: [0095]-[0100], [0128]-[0131]: sum of RBF model density field (RBF approximation); accumulated optical depth is shown to be determined by the angle subtended by half the RBF and its absolute radius r).
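An illustrative sketch (the Examiner's own; the smooth falloff kernel is an assumption, not Zhou's RBF basis) of accumulating each particle's density contribution at the grid points within its radius and summing them:

import numpy as np

def splat_density(positions, radius, grid_shape):
    # Sum per-particle contributions at every grid point inside each
    # particle's radius; the quadratic falloff kernel is illustrative.
    density = np.zeros(grid_shape)
    r2 = radius * radius
    for p in positions:
        lo = np.maximum(np.floor(p - radius).astype(int), 0)
        hi = np.minimum(np.ceil(p + radius).astype(int) + 1, grid_shape)
        for ix in range(lo[0], hi[0]):
            for iy in range(lo[1], hi[1]):
                for iz in range(lo[2], hi[2]):
                    d2 = (ix - p[0])**2 + (iy - p[1])**2 + (iz - p[2])**2
                    if d2 < r2:
                        density[ix, iy, iz] += (1.0 - d2 / r2) ** 2
    return density

pts = np.random.default_rng(1).uniform(4.0, 12.0, size=(200, 3))
rho = splat_density(pts, radius=2.0, grid_shape=(16, 16, 16))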
Regarding Claim 8, Zhou teaches the density volume and the light distribution are represented in a 3D box that aligns with a Cartesian coordinate axis (Zhou: [0028]-[0029], [0034], [0086]-[0087], [0131]: effective exitant radiance at point x in the volume space based on the source radiance obtained by block 106. The point x can be any point in the 3-D volume space; The point x can be any point in the 3-D volume space. The effective exitant radiance is thus computed as a function of x.; a rotation along the elevation angle towards directions (i.e., the angle between the positive z-axis and direction s); rotating T(Bl,h) to the axis determined by cl and ch, followed by a multiplication with the radius rh).
Regarding Claim 17, the claim recites the same substantive limitations as claim 8 and is rejected using the same teachings.
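For illustration (an assumption-laden sketch by the Examiner, not taken from Zhou), representing the volumes in an axis-aligned Cartesian box amounts to mapping world positions into voxel indices of that box:

import numpy as np

def world_to_grid(p, box_min, box_max, res):
    # Normalize the point into the axis-aligned bounding box, then scale
    # to voxel indices and clamp to the grid.
    u = (np.asarray(p, dtype=float) - box_min) / (np.asarray(box_max) - box_min)
    return np.clip((u * np.asarray(res)).astype(int), 0, np.asarray(res) - 1)

print(world_to_grid((2.5, 5.0, 9.9), np.zeros(3), np.full(3, 10.0), (32, 32, 32)))
# -> [ 8 16 31]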
Regarding Claim 10, Zhou teaches the precomputing the light distribution comprises: generating the light distribution with a same resolution as a resolution of the density distribution of the dynamic particles, wherein the light value at each voxel of a respective grid point within the density volume is determined in parallel using the ray marching (Zhou: [0116]-[0120]: the RBF approximation of the density field, the residual density field R(x)=D(x)-{tilde over (D)}(x) may be compressed for faster GPU processing. While the residual field is of the same resolution as the density field, it normally consists of small values; Entries in R(x) below a given threshold (e.g., 0.005-0.01) may be made zero, and the resulting sparse residual field is compressed using a hashing technique such as the perfect spatial hashing, which is lossless and ideally suited for parallel evaluation on graphics hardware).
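Where the light direction is axis-aligned, the per-voxel marches collapse into a single cumulative pass over a light grid at the same resolution as the density grid, which is naturally evaluated in parallel; a vectorized sketch (the Examiner's illustration, not the references' GPU code):

import numpy as np

def light_grid_same_resolution(density, sigma_t=1.2, step=1.0):
    # Light enters at the y=0 face and travels +y: the optical depth at each
    # voxel is the running sum of density along y, computed for all voxels
    # at once in one vectorized (i.e., parallel) pass.
    tau = np.cumsum(density * sigma_t * step, axis=1)
    return np.exp(-tau)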
Regarding claim 11, Gibbs teaches the precomputing the light distribution comprises: generating the light distribution with a resolution lower than a resolution of the density distribution of the dynamic particles and interpolating the generated light distribution (Gibbs: [0039], [0040]: creates a distinct voxel grid (hereinafter referred to as the "light grid") whose resolution is specified by the animator. (The neighborhood is defined by the surrounding points used for the interpolation. For a tri-linear interpolation it is the 8 points defining the voxel edges.) This is useful for quickly determining if light voxels need to be illuminated. The neighboring block is the same width as the grid interpolation filter, which is the mechanism of the interpolation used to define the volume attributes at any point in space).
The motivation to combine Gibbs with Zhou is the same as set forth for claim 1.
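Gibbs's paragraph [0039] invokes tri-linear interpolation over the 8 surrounding voxel corners; a minimal sketch of that interpolation (an illustrative implementation by the Examiner, not Gibbs's code) applied to a lower-resolution light grid:

import numpy as np

def trilerp(grid, p):
    # Blend the 8 grid values surrounding continuous point p (in grid units).
    i0 = np.floor(p).astype(int)
    i1 = np.minimum(i0 + 1, np.array(grid.shape) - 1)
    f = p - i0
    c00 = grid[i0[0], i0[1], i0[2]] * (1 - f[0]) + grid[i1[0], i0[1], i0[2]] * f[0]
    c01 = grid[i0[0], i0[1], i1[2]] * (1 - f[0]) + grid[i1[0], i0[1], i1[2]] * f[0]
    c10 = grid[i0[0], i1[1], i0[2]] * (1 - f[0]) + grid[i1[0], i1[1], i0[2]] * f[0]
    c11 = grid[i0[0], i1[1], i1[2]] * (1 - f[0]) + grid[i1[0], i1[1], i1[2]] * f[0]
    c0 = c00 * (1 - f[1]) + c10 * f[1]
    c1 = c01 * (1 - f[1]) + c11 * f[1]
    return c0 * (1 - f[2]) + c1 * f[2]

# A coarse 8^3 light grid queried at an arbitrary point between voxels.
coarse = np.random.default_rng(2).random((8, 8, 8))
print(trilerp(coarse, np.array([3.4, 2.7, 5.1])))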
Regarding claim 12, Zhou teaches generating a plurality of volume slices of the density volume, determining a light distribution result for an initial volume slice from the plurality of volume slices, and, for each additional volume slice after the initial volume slice, determining a light distribution result of the respective additional volume slice by performing the ray marching based on a light distribution result from a previous volume slice (Zhou: [0008], [0018], [0056], [0134]-[0138], [0157]-[0160]: Source radiances from single and optionally multiple scattering are directly computed at only the RBF centers and then approximated at other points in the volume using an RBF-based interpolation. Using the computed source radiances, a ray marching technique using slice-based integration of radiance along each viewing ray is performed to render the final image. Once the source radiance(s) are obtained, the effective exitant radiance (blocks 108, 208, 308, 408 and 508) may be obtained using a variety of suitable techniques. In one embodiment, a ray marching technique is used to accomplish this. The ray marching technique decomposes the volume space into N (≥2) slices of a user-controllable thickness).
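A minimal sketch (the Examiner's illustration; the pure-transmittance model is an assumption, simpler than Zhou's slice-based radiance integration) of deriving each slice's light result from the previous slice's result:

import numpy as np

def light_by_slices(density, sigma_t=1.2, step=1.0):
    # The initial slice receives full light; each later slice's result is
    # the previous slice's result attenuated through the slab between them.
    light = np.empty_like(density)
    carry = np.ones(density.shape[1:])
    for k in range(density.shape[0]):
        light[k] = carry
        carry = carry * np.exp(-density[k] * sigma_t * step)
    return light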
Regarding claim 13, Zhou teaches the pixel color values are computed by: generating a ray for each pixel from the viewpoint position (Zhou: [0102]-[0105]: the ray marching process is to compute the medium radiance Lm(x, wo) by gathering radiance contributions towards the viewpoint. The medium radiance Lm(x, wo) is further used to compute the final exitant radiance Lout(x, wo));
sampling the generated rays at sections that intersect with the density volume (Zhou: [0154], [0167]-[0170]: to calculate the radiance of a view ray 1202, all voxels that the view ray 1202 intersects are traversed from back (the side opposite to the viewpoint 1204) to front (the side close to the viewpoint 1204). The initial radiance is set to that of the background; the process iterates over all the RBFs. For each RBF Bl, its intersection with the slice is first calculated. This plane-to-sphere intersection can be efficiently computed given the center and radius of the sphere);
retrieving a light value from the light distribution at each sample (Zhou: [0155], [0164]-[0167]: the exitant radiance Ek is computed at block 1108 using Ek(cl)=(1-α(cl))Ik(cl)+{tilde over (J)}msk(cl) and stored in the exitant radiance buffer 1110. The exitant radiance Ek is used to estimate the subsequent incident radiance Ik+1 using the previously described process H 1112 in alternation); and
computing a scattered radiance value at each sample based on the retrieved light values and computing the pixel color values based on the scattered radiance values (Zhou: [0162]-[0164], [0169]-[0170]: an overview of the GPU pipeline for multiple scattering is depicted in FIG. 11. The incident radiance buffer 1102 is initialized with the reduced incident radiance I1 that has been calculated for single scattering. Then, for each iteration of the simulation, the scattered source radiance {tilde over (J)}msk is computed at block 11; the density wlBl(ui) is evaluated and saved in the alpha channel, and {tilde over (J)}D,l(ui)=wlBl(ui)(y(wo)·{tilde over (J)}(cl)) is computed and placed in the RGB channels; for single scattering computation, one exemplary implementation rasterizes a small 2D quad in which each RBF is represented by a pixel; the RBF data (cl, rl, wl) may be precomputed (e.g., either by a CPU on the same board with the GPU, or an external CPU) and is packed into a texture and passed to the pixel shader).
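Read together, the four steps of claim 13 amount to one ray march per pixel; a compact sketch (the Examiner's own, with an assumed isotropic single-scattering model and illustrative coefficients, not Zhou's GPU implementation):

import numpy as np

def shade_pixel(ray_o, ray_d, density, light_grid, sigma_s=0.9, sigma_t=1.2,
                step=0.5, n_steps=64):
    # March the pixel's ray through the volume; at each in-volume sample,
    # retrieve the precomputed light value, accumulate scattered radiance,
    # and attenuate the transmittance toward the viewpoint.
    color, transmittance = 0.0, 1.0
    p = np.asarray(ray_o, dtype=float)
    d = np.asarray(ray_d, dtype=float)
    d /= np.linalg.norm(d)
    shape = np.array(density.shape)
    for _ in range(n_steps):
        i = np.floor(p).astype(int)
        if np.all(i >= 0) and np.all(i < shape):
            rho = density[tuple(i)]
            li = light_grid[tuple(i)]            # light value at this sample
            color += transmittance * sigma_s * rho * li * step
            transmittance *= np.exp(-sigma_t * rho * step)
        p += d * step
    return color

# Uniform fog lit everywhere: one pixel's color from a ray entering at z=0.
d_vol = np.full((32, 32, 32), 0.05)
print(shade_pixel((16.0, 16.0, -5.0), (0.0, 0.0, 1.0), d_vol, np.ones_like(d_vol)))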
5. Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al. (Pub. No.: US 2009/0006044 A1), hereinafter Zhou, in view of Gibbs et al. (Pub. No.: US 2009/0040220 A1), hereinafter Gibbs, and further in view of Sebastien Hillaire, "Physically Based and Unified Volumetric Rendering in Frostbite," SIGGRAPH 2015, pp. 1-56, hereinafter Hillaire.
Regarding Claim 9, neither Zhou nor Gibbs teaches the density volume and the light distribution are represented using a froxel volume aligned to the viewpoint position.
However, Hillaire teaches a density volume and a light distribution that are represented using a froxel volume aligned to the viewpoint position (Hillaire: page 27: integrate froxel {scattering, extinction} along the view ray, solving for each froxel at its position).
It would have been obvious to one of ordinary skill in the art to modify the combination of Zhou and Gibbs with the teachings of Hillaire in order to efficiently represent the density and light data.
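A minimal sketch of mapping a view-space point to a camera-aligned froxel (frustum voxel) index, in the spirit of Hillaire's clip-space volume (the resolution, exponential depth slicing, and all names are the Examiner's assumptions, not code from the presentation):

import numpy as np

def froxel_index(p_view, fov_y, aspect, near, far, res=(160, 90, 64)):
    # x/y follow the screen; z slices the view-frustum depth, so the volume
    # stays aligned to the viewpoint position rather than to world axes.
    x, y, z = p_view                     # camera looks down -z in view space
    depth = -z
    half_h = depth * np.tan(fov_y / 2.0)
    half_w = half_h * aspect
    u = (x / half_w + 1.0) * 0.5         # horizontal screen coordinate in [0,1]
    v = (y / half_h + 1.0) * 0.5         # vertical screen coordinate in [0,1]
    w = np.log(depth / near) / np.log(far / near)  # exponential depth slice
    idx = np.array([u, v, w]) * np.array(res)
    return np.clip(idx.astype(int), 0, np.array(res) - 1)

print(froxel_index((0.5, -0.2, -5.0), fov_y=np.radians(60.0), aspect=16 / 9,
                   near=0.1, far=100.0))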
Conclusion
6. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Caulfield et al. (Pub. No.: US 20190180499 A1) teaches a ray that is cast into a volume described by a volumetric data structure, which describes the volume at a plurality of levels of detail. The ray is determined to pass through a particular subset of the voxels at the first level of detail and at least a particular one of the particular subset of voxels is determined to be occupied by geometry.
Lee et al. (Pub. No. US 20180286130 A1) teaches a method and apparatus to generate an object model from image data captured from a physical object. The object model includes data representing location and geometry of the physical object. The instructions generate an augmentation model that includes data representing a graphical image and location information thereof with respect to the physical object in response to a user interaction associated with the physical object.
McKenzie et al. (Patent No.: US 9342920 B1) presents a GPU-based cloud computing platform used to facilitate data computations on behalf of requesting users. In this embodiment, a user of a thin client has an associated dataset that requires computation. That dataset is adapted to be delivered to a computing platform, such as the GPU-based cloud, for computation, such as to facilitate a 3D volume rendering.
Cohen et al. (Pub. No.: US 20200211280 A1) defines image processing carried out by accepting an array of voxels that include data representing a physical property of a 3-dimensional object, segmenting the array of voxels into a plurality of regional subarrays of voxels that respectively satisfy predetermined criteria.
7. Examiner's Remarks: The Examiner has cited particular paragraphs and pages in the references applied to the claims above for the convenience of the applicant. Although the specified citations are representative of the teachings of the art and are applied to specific limitations within the individual claims, other passages and figures may apply as well. In preparing responses, Applicant is respectfully requested to fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner. When amending the claimed invention, Applicant is respectfully requested to indicate the portion(s) of the specification which dictate(s) the structure relied on for proper interpretation and also to verify and ascertain the metes and bounds of the claimed invention.
Correspondence Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to IFTEKHAR A KHAN whose telephone number is (571)272-5699. The examiner can normally be reached on M-F from 9:00AM-6:00PM (CST). If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Emerson Puente can be reached on (571)272-3652. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from Patent Center and the Private Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from Patent Center or Private PAIR. Status information for unpublished applications is available through Patent Center and Private PAIR to authorized users only. Should you have questions about access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.
/IFTEKHAR A KHAN/Primary Examiner, Art Unit 2187