DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
This Office Action is in response to Applicant’s amendment/response filed on 05/29/2025, which has been entered and made of record.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 8-11, 16, and 20-22 are rejected under 35 U.S.C. 103 as being unpatentable over McGuire et al. (Hardware-accelerated global illumination by image space photon mapping, HPG '09: Proceedings of the Conference on High Performance Graphics 2009, pp. 77-90, hereinafter “McGuire”).
Regarding claim 1, McGuire discloses A method, applied to a terminal device, the method comprising: (page 77, col. left, Abstract, “An implementation on a consumer GPU and 8-core CPU renders high quality global illumination at up to 26 Hz at HD (1920×1080) resolution, for complex scenes containing moving objects and lights”). Note that: the rendering device with a consumer GPU and 8-core CPU is a terminal device.
performing illumination calculation on the to-be-rendered virtual scene based on the texture information of the light probe, to obtain a first irradiance map, wherein a resolution of the first irradiance map is less than a resolution required by a rendering result of the to-be-rendered virtual scene; and (page 77, col. left, Abstract, “Image Space Photon Mapping (ISPM) rasterizes a light-space bounce map of emitted photons surviving initial-bounce Russian roulette sampling on a GPU … ISPM instead scatters indirect illumination by rasterizing an array of photon volumes. Each volume bounds a filter kernel based on the a priori probability density of each photon path. These two steps exploit the fact that initial path segments from point lights and final ones into a pinhole camera each have a common center of projection. An optional step uses joint bilateral upsampling of irradiance to reduce the fill requirements of rasterizing photon volumes. ISPM preserves the accurate and physically-based nature of photon mapping, supports arbitrary BSDFs, and captures both high- and low-frequency illumination effects such as caustics and diffuse color interreflection”; page 80, col. right, para. 3, “4. Render indirect illumination by scattering photon volumes”; page 83, col. left, para 2, “When fill-limited by rendering photon volumes, we can optionally subsample radiance from the photons and then use geometry-aware filtering to upsample the resulting screen space radiance estimate to the final image resolution”). Note that: (1) the illumination on the scene is calculated by ISPM rasterizing a light-space bounce map of emitted photons and rasterizing an array of photon volumes (light probes) with corresponding texture information; (2) the calculated irradiance distribution, as a first irradiance map, corresponds to the subsampled screen-space radiance estimate computed when rendering of photon volumes is fill-limited; and (3) because it is subsampled, the resolution of the first irradiance map is lower than that of the final image, which is reached by upsampling as specified above.
performing upsampling on the first irradiance map, to obtain a second irradiance map, wherein a resolution of the second irradiance map and the resolution required by the rendering result of the to-be-rendered virtual scene are the same, and the second irradiance map is used to perform indirect light rendering of the to-be-rendered virtual scene. (page 83, col. left, para 2, “When fill-limited by rendering photon volumes, we can optionally subsample radiance from the photons and then use geometry-aware filtering to upsample the resulting screen space radiance estimate to the final image resolution”; page 80, col. right, para. 3, “4. Render indirect illumination by scattering photon volumes”). Note that: (1) the first irradiance map, with its subsampled, lower resolution, can be upsampled to a higher resolution for rendering the final image (a code sketch of this subsample-then-upsample step follows the rationale for claim 1 below); (2) the upsampled irradiance map can be regarded as a second irradiance map that has the same resolution as the final rendered image of the to-be-rendered virtual scene; and (3) since the photon volumes (light probes) are related to rendering indirect illumination by scattering photon volumes, the second irradiance map is used to perform indirect light rendering of the to-be-rendered virtual scene.
obtaining (page 78, col. right, Figure 2: “We highlight a few ISPM light transport paths (lines), photons (discs), and photon volumes (wireframe) in a rendered scene”; page 80, col. left, para. 5, “Our photon volumes most closely resemble the photon splats of Herzog et al [2007] … new term “photon volumes” to emphasize the 3D nature, and because ISPM photon volumes extend Herzog et al.’s 3D splats by conforming more tightly to surfaces and avoiding their expensive cone estimation step”). Note that: (1) a to-be-rendered scene is shown, and the photon volumes placed in the scene can be regarded as light probes, although the term “light probe” with this meaning had not yet been widely adopted in the field when McGuire was published; and (2) photon volumes (light probes) can be used to obtain the characteristic information of the surfaces, since they conform to surfaces by resembling the photon splats.
However, McGuire does not expressly disclose obtaining a light probe. McGuire discloses that photon volumes, as light probes, conform tightly to surfaces and reflect the surfaces’ characteristics very closely (page 80, col. right, para. 2, “Photon volumes conform tightly to surfaces and are only rendered within a few pixels of a visible surface”). By exploiting the characteristics (e.g., surface normals) of the photon volumes, it would be obvious to one having ordinary skill in the art that the texture information of the photon volumes (light probes) can be obtained or determined in the virtual scene.
Before the effective filing date of the claimed invention, it would have been obvious to apply the teaching of McGuire. The motivation would have been “Photon volumes conform tightly to surfaces and are only rendered within a few pixels of a visible surface” (McGuire, page 80, col. right, para. 2). Doing so would have allowed obtaining the texture information of light probes. Therefore, it would have been obvious to use McGuire’s teachings.
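For illustration only, the subsample-then-upsample step mapped to claim 1 above can be sketched in code. The following is a minimal sketch of geometry-aware (joint bilateral) upsampling in the spirit of McGuire’s optional step, not his implementation; the function name, the Gaussian weighting, and the parameters sigma_d and sigma_n are assumptions made for the example.

```python
import numpy as np

def joint_bilateral_upsample(lo_irr, lo_depth, lo_normal, hi_depth, hi_normal,
                             sigma_d=0.1, sigma_n=0.5):
    """Lift a low-resolution irradiance map (the 'first irradiance map') to the
    full G-buffer resolution (the 'second irradiance map'), weighting each of
    the four nearest coarse samples by bilinear distance and by depth/normal
    similarity so irradiance does not bleed across geometric edges."""
    H, W = hi_depth.shape
    h, w = lo_depth.shape
    out = np.zeros((H, W, 3))
    for y in range(H):
        for x in range(W):
            fy, fx = y * h / H, x * w / W          # fine pixel in coarse coords
            y0, x0 = int(fy), int(fx)
            acc, wsum = np.zeros(3), 0.0
            for dy in (0, 1):
                for dx in (0, 1):
                    cy, cx = min(y0 + dy, h - 1), min(x0 + dx, w - 1)
                    # Bilinear weight from the sub-pixel distance.
                    w_b = (1 - abs(fy - (y0 + dy))) * (1 - abs(fx - (x0 + dx)))
                    # Geometry-aware weights penalizing depth/normal mismatch.
                    w_d = np.exp(-((hi_depth[y, x] - lo_depth[cy, cx]) ** 2)
                                 / sigma_d ** 2)
                    w_n = np.exp(-(1 - hi_normal[y, x] @ lo_normal[cy, cx])
                                 / sigma_n)
                    wgt = w_b * w_d * w_n
                    acc += wgt * lo_irr[cy, cx]
                    wsum += wgt
            out[y, x] = acc / max(wsum, 1e-6)
    return out
```

Under these assumptions, irradiance computed at a fraction of the target resolution is lifted to the final image resolution without bleeding across depth or normal discontinuities, which is the effect the cited passage at page 83 describes.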
Regarding claim 2, McGuire discloses The method according to claim 1, wherein an aspect ratio of the first irradiance map is consistent with an aspect ratio of the second irradiance map, and a quantity of pixel points of the first irradiance map is less than a quantity of pixel points of the second irradiance map. (page 83, col. left, para 2, “When fill-limited by rendering photon volumes, we can optionally subsample radiance from the photons and then use geometry-aware filtering to upsample the resulting screen space radiance estimate to the final image resolution”). Note that: (1) for an upsampling operation without any cropping applied to the first irradiance map, it is obvious to one having ordinary skill in the art that an aspect ratio of the first irradiance map is consistent with an aspect ratio of the second irradiance map; and (2) since the resolution of the second irradiance map is higher than that of the first irradiance map for the same content scope, a quantity or number of pixel points of the first irradiance map is less than a quantity or number of pixel points of the second irradiance map.
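As a worked numerical example (the resolutions are illustrative, not taken from McGuire): subsampling a 1920×1080 target by a factor of two per axis yields a 960×540 first irradiance map. Both maps keep the same 16:9 aspect ratio, while the first map contains 960 × 540 = 518,400 pixel points versus 1920 × 1080 = 2,073,600 in the second, i.e., one quarter of the quantity.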
Regarding claim 3, McGuire discloses The method according to claim 1, further comprising:
obtaining direct light light source information in the to-be-rendered virtual scene; and (page 80, col. right, para. 3, “1. For each emitter: (a) Render shadow map”). Note that: a shadow map contains direct light light source information in the to-be-rendered virtual scene.
performing illumination calculation on the to-be-rendered virtual scene based on the direct light light source information in a shadow map manner, to obtain a third irradiance map, wherein the third irradiance map is used to perform direct light rendering of the to-be-rendered virtual scene. (page 80, col. right, para. 3, “3. Compute direct illumination using shadow maps and deferred shading of the eye G-buffer … 5. Render translucent surfaces back-to-front with direct, mirror, and refracted illumination only”, and “2. Render G-buffer from the eye’s view”). Note that: (1) the direct illumination is computed with shadow maps; (2) it is obvious to one having ordinary skill in the art that a third irradiance map can be computed from the corresponding direct illumination; and (3) the third irradiance map can be used, together with the G-buffer rendered from the eye’s view, to render surfaces (including translucent surfaces) corresponding to direct light.
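For illustration, the conventional shadow-map test underlying McGuire’s steps 1 and 3 can be sketched as follows. This is a generic textbook formulation under assumed names (direct_irradiance, light_view_proj, the bias constant, the point-light falloff), not McGuire’s code.

```python
import numpy as np

def direct_irradiance(world_pos, normal, light_pos, light_color,
                      shadow_map, light_view_proj, bias=1e-3):
    """Shadow-mapped direct illumination for one surface point: the point is
    lit only if its depth in light space does not exceed the depth stored in
    the shadow map rendered from the emitter."""
    # Project the shading point into the light's clip space.
    p = light_view_proj @ np.append(world_pos, 1.0)
    p = p[:3] / p[3]                               # NDC in [-1, 1]
    u = int((p[0] * 0.5 + 0.5) * (shadow_map.shape[1] - 1))
    v = int((p[1] * 0.5 + 0.5) * (shadow_map.shape[0] - 1))
    depth = p[2] * 0.5 + 0.5                       # light-space depth in [0, 1]
    # Occluded: something nearer to the light was rasterized into this texel.
    if depth - bias > shadow_map[v, u]:
        return np.zeros(3)
    # Unoccluded: Lambertian term with inverse-square point-light falloff.
    L = light_pos - world_pos
    r2 = float(L @ L)
    n_dot_l = max(float(normal @ L) / np.sqrt(r2), 0.0)
    return light_color * n_dot_l / r2
```

Evaluating this per pixel of the eye G-buffer yields the direct-light values that the rationale above maps to the third irradiance map.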
Regarding claim 8, McGuire discloses The method according to claim 1, wherein performing illumination calculation on the to-be-rendered virtual scene based on the texture information of the light probe comprises:
obtaining texture information of an adjacent light probe of each pixel in the first irradiance map; and (page 78, col. right, Figure 2: “We highlight a few ISPM light transport paths (lines), photons (discs), and photon volumes (wireframe) in a rendered scene”, wall location “1”; page 80, col. left, para. 5, “Our photon volumes most closely resemble the photon splats of Herzog et al [2007] … new term “photon volumes” to emphasize the 3D nature, and because ISPM photon volumes extend Herzog et al.’s 3D splats by conforming more tightly to surfaces and avoiding their expensive cone estimation step”). Note that: (1) a to-be-rendered scene is shown, and the photon volumes placed in the scene can be regarded as light probes, although the term “light probe” with this meaning had not yet been widely adopted in the field when McGuire was published; (2) photon volumes (light probes) can be used to obtain the characteristic information of the surfaces since they conform to surfaces by resembling the photon splats; (3) as specified in the rationale for claim 1 above, texture information of a light probe can be obtained; and (4) for each pixel in the first irradiance map (e.g., corresponding to wall location “1” in Figure 2), the process of obtaining texture information of a light probe is repeated for all adjacent photon volumes (light probes) as shown in Figure 2.
performing weighted-based interpolation on the texture information of the adjacent light probe. Note that: (1) after the texture information for the adjacent light probes has been obtained, a conventional interpolation (e.g., tri-linear interpolation) can be performed on the texture information of the adjacent light probes; and (2) a conventional interpolation (e.g., tri-linear interpolation) is in fact a weight-based interpolation, and it is obvious to one having ordinary skill in the art that the weights of the adjacent light probes can be related to their respective distances to the location corresponding to the pixel point in the first irradiance map.
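A minimal sketch of the tri-linear, weight-based interpolation referred to in the rationale, assuming the light probes are stored on a regular 3D grid; the function and parameter names are hypothetical. The per-axis fractional weights realize the distance-based weighting discussed above: probes nearer to the shading location receive larger weights.

```python
import numpy as np

def trilinear_probe_lookup(p, grid, origin, spacing):
    """Weighted interpolation over the eight probes of the grid cell containing
    point p.  grid[i, j, k] holds one probe's texture information; each probe's
    weight is the product of per-axis fractional distances."""
    f = (np.asarray(p) - origin) / spacing          # continuous grid coords
    i0 = np.clip(np.floor(f).astype(int), 0, np.array(grid.shape[:3]) - 2)
    t = f - i0                                      # fractional cell position
    acc = 0.0
    for di in (0, 1):
        for dj in (0, 1):
            for dk in (0, 1):
                w = ((t[0] if di else 1 - t[0]) *
                     (t[1] if dj else 1 - t[1]) *
                     (t[2] if dk else 1 - t[2]))    # distance-based weight
                acc = acc + w * grid[i0[0] + di, i0[1] + dj, i0[2] + dk]
    return acc
```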
Claim 9, reciting “An apparatus, wherein the apparatus is used in a terminal device, and the apparatus comprises: at least one processor; and at least one memory storing instructions that are executable by the at least one processor, the instructions comprising instructions for:”, corresponds to the method of claim 1. Therefore, claim 9 is rejected under the same rationale as claim 1.
In addition, McGuire discloses An apparatus, wherein the apparatus is used in a terminal device, and the apparatus comprises: at least one processor; and at least one memory storing instructions that are executable by the at least one processor, the instructions comprising instructions for: (page 83, col. left, para. 5, “All results use a NVIDIA GeForce GTX 280 GPU for all rasterization steps, using OpenGL and the GLSL shading language. We use 3.2 GHz dual-processor quad-core Intel Extreme CPU with 2 GB memory running Microsoft Windows Vista 32-bit, and use all 8 cores for CPU tracing”; page 78, col. right, para. 1, “our experimental implementation is surprisingly simple: about 600 C++ and GLSL statements inserted into a deferred-shading game engine”). Note that: the computer with the CPU, NVIDIA GPU, and 2 GB memory is a terminal device in which the CPU/GPU and memory store the instructions coded in C++ and GLSL statements.
Claims 10-11 correspond to the methods of claims 2-3, respectively. Therefore, claims 10-11 are rejected under the same rationale as claims 2-3, respectively.
Claim 16 corresponds to the method of claim 8. Therefore, claim 16 is rejected under the same rationale as claim 8.
Claim 20, reciting “A non-transitory computer readable storage medium storing code that is executable by at least one processor, the code including instructions for:”, corresponds to the method of claim 1. Therefore, claim 20 is rejected under the same rationale as claim 1.
In addition, McGuire discloses A non-transitory computer readable storage medium storing code that is executable by at least one processor, the code including instructions for: (page 83, col. left, para. 5, “All results use a NVIDIA GeForce GTX 280 GPU for all rasterization steps, using OpenGL and the GLSL shading language. We use 3.2 GHz dual-processor quad-core Intel Extreme CPU with 2 GB memory running Microsoft Windows Vista 32-bit, and use all 8 cores for CPU tracing”; page 78, col. right, para. 1, “our experimental implementation is surprisingly simple: about 600 C++ and GLSL statements inserted into a deferred-shading game engine”). Note that: the computer’s 2 GB memory and/or the GPU’s memory is typically RAM, a non-transitory computer readable storage medium, which stores the instructions coded in C++ and GLSL statements that are executable by the CPU and GPU.
Claims 21-22 correspond to the methods of claims 2-3, respectively. Therefore, claims 21-22 are rejected under the same rationale as claims 2-3, respectively.
Claims 4-7, 12-15, and 23 are rejected under 35 U.S.C. 103 as being unpatentable over McGuire in view of Deng et al. (Detail Preserving Coarse-to-Fine Matching for Stereo Matching and Optical Flow, IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 30, 2021, pp. 5835-5847, hereinafter “Deng”).
Regarding claim 4, McGuire discloses The method according to claim 1, wherein performing upsampling on the first irradiance map, to obtain the second irradiance map comprises: … the first irradiance map … the second irradiance map
However, McGuire fails to disclose the following limitation, which, in the same art of computer processing, Deng discloses:
performing upsampling on the first irradiance map according to a neighborhood search algorithm, to obtain the second irradiance map. (Deng, page 5836, col. right, para. 1, “We accordingly propose a novel differentiable Neighbor-Search Upsampling (NSU) module for upsampling. When computing the disparity/flow values for the upsampled map, the NSU estimates the matching scores for the disparity/flow candidates within a neighborhood and selects the best-matched one, resulting in a detail-preserving upsampled disparity/flow map. The key of the proposed NSU module is how to estimate the matching scores and select the best-matched one with differentiable operations”). Note that: (1) the specified Neighbor-Search Upsampling (NSU) module upsamples a disparity/flow map into a detail-preserving upsampled disparity/flow map by estimating the matching scores for the disparity/flow candidates within a neighborhood and selecting the best-matched one; and (2) the first irradiance map can be substituted for the disparity/flow map, and the second irradiance map for the detail-preserving upsampled disparity/flow map.
McGuire and Deng are in the same field of endeavor, namely computer processing. Before the effective filing date of the claimed invention, it would have been obvious to apply the neighbor-search upsampling method, as taught by Deng, to McGuire. The motivation would have been “We observe that these wrong disparity/flow values can be avoided if we select the best-matched value among their neighborhood, which inspires us to propose a novel differentiable Neighbor-Search Upsampling (NSU) module” (Deng, Abstract). Doing so would improve the quality of the upsampled pixels and avoid wrong values that induce aliasing artifacts. Therefore, it would have been obvious to combine McGuire and Deng.
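For illustration, the winner-take-all core of neighbor-search upsampling can be sketched as follows. This is a simplified, non-differentiable stand-in for Deng’s NSU module: Deng estimates matching scores with a CNN, whereas this sketch scores each coarse candidate by similarity of a full-resolution guide feature (e.g., depth); all names are assumptions.

```python
import numpy as np

def neighbor_search_upsample(coarse, guide_hi, guide_lo, radius=1):
    """For each fine pixel, examine the coarse samples in a (2r+1)^2
    neighborhood around its footprint, score each candidate by how well the
    low-res guide matches the full-res guide at that pixel, and keep the
    best-matched candidate (winner-take-all) instead of blending."""
    H, W = guide_hi.shape
    h, w = guide_lo.shape
    out = np.zeros((H, W) + coarse.shape[2:])
    for y in range(H):
        for x in range(W):
            cy, cx = y * h // H, x * w // W        # coarse footprint
            best, best_score = None, -np.inf
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny = min(max(cy + dy, 0), h - 1)
                    nx = min(max(cx + dx, 0), w - 1)
                    # Higher score = better match to the fine-pixel guide.
                    score = -abs(guide_hi[y, x] - guide_lo[ny, nx])
                    if score > best_score:
                        best, best_score = coarse[ny, nx], score
            out[y, x] = best
    return out
```

Selecting a single best-matched coarse value, rather than averaging, is what preserves detail at edges in Deng’s formulation; applied with the first irradiance map as `coarse`, the output plays the role of the second irradiance map.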
Regarding claim 5, McGuire in view of Deng discloses The method according to claim 4, wherein the first irradiance map comprises a plurality of first pixel points, the second irradiance map comprises a second pixel point, a location of the second pixel point corresponds to a location of a target pixel point in the first irradiance map in location, the target pixel point is adjacent to the plurality of first pixel points, and performing upsampling on the first irradiance map according to the neighborhood search algorithm, to obtain the second irradiance map comprises: Note that: the limitations here are a general description of a general upsampling method for upsampling the first irradiance map to the second irradiance map: (1) the upsampling is to obtain a value at a second pixel point in the upsampled high-resolution second irradiance map; (2) mathematically, a target pixel point corresponding to the second pixel point is mapped into the first irradiance map; (3) there are a plurality of pixel points adjacent to the target pixel point in the first irradiance map; and (4) the neighborhood search algorithm is used to determine a value at the target pixel point as the value of the second pixel point in the upsampled second irradiance map.
in response to information indicating that there is a first pixel point of which similarity to the target pixel point is greater than a threshold in the plurality of first pixel points, obtaining the second pixel point based on the first pixel point of which similarity is greater than the threshold. (Deng, page 5836, col. right, para 1, “We regard feature vectors in the finer level feature maps as robust descriptors and use convolutional neural networks (CNN) to compute matching scores between each reference pixel in the reference feature and its candidates in the target feature, where the candidates are indicated by disparity/flow values in the neighborhood centered at the reference pixel. With the estimated matching scores, we are able to select the best-matched disparity/flow value for a reference pixel. A straightforward solution, referred to as winner-take-all (WTA), selects the candidate with the highest matching score”). Note that: (1) the similarity or matching score is calculated for each pixel (of the plurality of first pixel points) in the neighborhood of the target pixel point in the first irradiance map; and (2) it is obvious to one having ordinary skill in the art to obtain the pixel point with maximum similarity, establish a similarity threshold as a scaled value of the average similarity (e.g., 1.25 * the average similarity) of all pixel points in the neighborhood, and select the value of the pixel point whose maximum similarity is greater than the threshold as the value of the target pixel point for the second pixel point (see the combined sketch following claim 6 below).
The motivation to combine McGuire and Deng given in claim 4 is incorporated here.
Regarding claim 6, McGuire in view of Deng discloses The method according to claim 5, wherein the method further comprising:
performing average value calculation on the plurality of first pixel points in response to information indicating that there is no first pixel point of which similarity to the target pixel point is greater than the threshold in the plurality of first pixel points, to obtain the second pixel point. (Deng, page 5836, col. right, para 1, “We regard feature vectors in the finer level feature maps as robust descriptors and use convolutional neural networks (CNN) to compute matching scores between each reference pixel in the reference feature and its candidates in the target feature, where the candidates are indicated by disparity/flow values in the neighborhood centered at the reference pixel. With the estimated matching scores, we are able to select the best-matched disparity/flow value for a reference pixel. A straightforward solution, referred to as winner-take-all (WTA), selects the candidate with the highest matching score”). Note that: (1) the similarity or matching score is calculated for each pixel (of the plurality of first pixel points) in the neighborhood of the target pixel point in the first irradiance map; (2) for a neighborhood search method, it is obvious to one having ordinary skill in the art to calculate an average value over the pixel points of the neighborhood for neighborhood operations; and (3) it is obvious to one having ordinary skill in the art to obtain the pixel point with maximum similarity and establish a similarity threshold as a scaled value of the average similarity (e.g., 1.25 * the average similarity) of all pixel points in the neighborhood. However, when the maximum similarity is not greater than the similarity threshold, one can select the average value of all pixels of the plurality of pixel points as the value of the target pixel point for the second pixel point, as a conventional fallback mechanism.
The motivation to combine McGuire and Deng given in claim 4 is incorporated here.
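The selection logic discussed for claims 5 and 6 (pick the neighbor whose similarity exceeds a threshold; otherwise fall back to the neighborhood average) can be sketched as follows. The 1.25 scaling of the average similarity mirrors the illustrative threshold in the rationales above and is an assumption, not a value taken from Deng.

```python
import numpy as np

def select_or_average(candidates, similarities, avg_scale=1.25):
    """candidates: values of the plurality of first pixel points adjacent to
    the target pixel point; similarities: their similarity/matching scores
    with respect to the target pixel point."""
    sims = np.asarray(similarities, dtype=float)
    threshold = avg_scale * sims.mean()            # illustrative threshold
    best = int(np.argmax(sims))
    if sims[best] > threshold:
        # Claim 5 branch: a sufficiently similar first pixel point exists.
        return candidates[best]
    # Claim 6 branch: no neighbor clears the threshold; average them all.
    return np.mean(candidates, axis=0)
```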
Regarding claim 7, McGuire in view of Deng discloses The method according to claim 4, wherein the neighborhood search algorithm is used to perform anti-aliasing on the first irradiance map. (Deng, page 5836, col. right, para. 1, “We accordingly propose a novel differentiable Neighbor-Search Upsampling (NSU) module for upsampling. When computing the disparity/flow values for the upsampled map, the NSU estimates the matching scores for the disparity/flow candidates within a neighborhood and selects the best-matched one, resulting in a detail-preserving upsampled disparity/flow map. The key of the proposed NSU module is how to estimate the matching scores and select the best-matched one with differentiable operations”; page 5845, col. right, para. 3, “The third one applies a Gaussian filter to smooth the image before downsampling to avoid aliasing artifacts.”). Note that: (1) the neighborhood search algorithm by Deng determines the irradiance value in the upsampled irradiance map (the second irradiance map) by using the value of the pixel point in the first irradiance map with the best matching score; (2) the best matching score is selected among the matching scores calculated for the pixel points adjacent to the target pixel location in the first irradiance map and corresponding to the second pixel point; and (3) it is obvious to one having ordinary skill in the art that this filtering operation, based on the similarity values or matching scores in the neighborhood of a pixel point of the low-resolution first irradiance map, is similar in functional effect to a Gaussian filter that avoids aliasing artifacts. In other words, anti-aliasing is performed on the first irradiance map.
The motivation to combine McGuire and Deng given in claim 4 is incorporated here.
Claims 12-15 correspond to the methods of claims 4-7, respectively. Therefore, claims 12-15 are rejected under the same rationale as claims 4-7, respectively.
Claim 23 corresponds to the method of claim 4. Therefore, claim 23 is rejected under the same rationale as claim 4.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
McGuire et al. (Real-time global illumination using precomputed light field probes, I3D '17: Proceedings of the 21st ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, February 2017) teaches a new data structure, and algorithms that employ it, to compute real-time global illumination from static environments. Light field probes encode a scene’s full light field and internal visibility. They extend current radiance and irradiance probe structures with per-texel visibility information similar to a G-buffer and variance shadow map.
Majercik et al. (Dynamic Diffuse Global Illumination with Ray-Traced Irradiance Fields, Journal of Computer Graphics Techniques, Vol. 8, No. 2, 2019) teaches how to compute global illumination efficiently in scenes with dynamic objects and lighting.
Hu et al. (Signed Distance Fields Dynamic Diffuse Global Illumination, arXiv.org, arXiv:2007.14394v1 [cs.GR] 28 Jul 2020) teaches a novel approach to computing dynamic diffuse GI with a signed distance field approximation of the scene, discretizing the space domain of the irradiance function.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BIAO CHEN whose telephone number is (703)756-1199. The examiner can normally be reached M-F 8am-5pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kee M Tung can be reached at (571)272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Biao Chen/
Patent Examiner, Art Unit 2611
/KEE M TUNG/Supervisory Patent Examiner, Art Unit 2611