Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Election/Restrictions
Applicant’s election without traverse of Group I in the reply filed on 11/10/25 is acknowledged.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim 22 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim does not fall within at least one of the four categories of patent eligible subject matter because the scope of the claimed computer-readable medium encompasses transitory signal media; the disclosure, e.g. paragraph 167, indicates that the computer-readable storage medium includes signal media.
Applicant is advised that this rejection can be overcome by amending the claim to recite that the computer-readable medium is non-transitory.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 10 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claim 10 recites, “after the drawing” of the particle/particle model, “adjusting the position of each particle model so that the boundary of the particle model whose position is adjusted is parallel to a boundary of the scene image”. Applicant’s disclosure does not explain or suggest how an already drawn particle/particle model can be adjusted in space. As one of ordinary skill in the art would understand, once the GPU has drawn a particle/particle model, its position cannot be adjusted, because the GPU will already have evaluated the position and bounds of the particle/particle model; any adjustment to the position of a particle/particle model must therefore be performed prior to drawing. This leaves the claim indefinite: one of ordinary skill in the art would understand how the particle model position can be adjusted prior to drawing, but not how the particle model position could be adjusted after drawing, leaving the scope of the claim undefined.
For purposes of applying prior art, the claim will be interpreted as reciting “before the drawing”.
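For illustration only, the following minimal sketch models the drawing order discussed above; the functions are hypothetical stand-ins for a GPU pipeline, not code from the application or the cited art. Once draw() has rasterized the quad, later changes to its vertex positions cannot affect the rendered output, which is why any adjustment must precede drawing.

```python
# Hypothetical sketch (not from the application or the cited art) of the
# point above: the rasterizer consumes vertex positions at draw time, so
# position adjustments only matter if they happen before drawing.

def adjust_position(quad):
    # Stand-in for aligning the quad's edges with the scene image boundary.
    return [(round(x), round(y)) for x, y in quad]

def draw(quad, framebuffer):
    # Stand-in for rasterization: the quad's positions are evaluated here.
    framebuffer.update(quad)

framebuffer = set()
quad = [(0.2, 0.2), (3.2, 0.2), (3.2, 3.2), (0.2, 3.2)]
quad = adjust_position(quad)   # adjustment must precede the draw call
draw(quad, framebuffer)
quad = [(9, 9)] * 4            # "adjusting" after drawing changes nothing
print(framebuffer)             # framebuffer reflects only the pre-draw state
```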
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 10, 12, and 22 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by “Hierarchical Image-Space Radiosity for Interactive Global Illumination” by Greg Nichols, et al. (hereinafter Nichols).
The limitations “a method for generating a lighting image, comprising: establishing a plurality of Graphics Processing Unit (GPU) particles in a virtual space; … [rendering particles to obtain a virtual lighting range image]; and fusing the virtual lighting range image with a scene image corresponding to the illuminated object to obtain a lighting image in the virtual space” are taught by Nichols (Nichols, e.g. abstract, sections 1-5, describes a system for rendering images of virtual objects with global illumination based on virtual point lights (VPLs). The process is performed by rendering a direct illumination image, e.g. section 3.2.2, figure 2A, performing multiresolution splatting of the VPLs into an illumination buffer, e.g. figure 2F, and combining the direct illumination image with the illumination buffer to generate an output image. The VPLs correspond to the GPU particles/particle models representing a lighting area, the direct illumination image corresponds to the claimed scene image, the illumination buffer corresponds to the claimed virtual lighting range image, and the combined output image corresponds to the claimed lighting image in the virtual space obtained by fusing the virtual lighting range image with the scene image.)
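For illustration of the mapping above, the following hedged sketch models the combination step, i.e. adding an indirect-illumination buffer to a direct-illumination image to obtain the fused output; the array names, sizes, and values are hypothetical, and this is not Nichols’ actual implementation.

```python
import numpy as np

# Hedged sketch of the combination step as mapped above, not Nichols' code:
# the fused lighting image is the direct-illumination (scene) image plus the
# indirect-illumination buffer.

h, w = 4, 4
direct = np.full((h, w, 3), 0.3)    # stands in for the scene image (fig. 2A)
indirect = np.full((h, w, 3), 0.2)  # stands in for the illumination buffer (fig. 2F)

lighting_image = np.clip(direct + indirect, 0.0, 1.0)  # fused output image
```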
The limitations “acquiring a position of each GPU particle in the virtual space, and drawing, at the position of each GPU particle, a particle model for representing a lighting area; determining a positional relationship between each particle model and an illuminated object in the virtual space; selecting a plurality of target particle models satisfying a lighting requirement from the plurality of particle models based on the positional relationship” are taught by Nichols (Nichols, e.g. section 2, paragraphs 6-7, section 3.2.2, paragraphs 2-4, sections 3.3, 3.4, 3.5, teaches that, as is known in the art, reflective shadow maps (RSMs) are rasterized from the light view to generate VPLs, i.e. as claimed, establishing, acquiring the position of, and drawing, at the position, a particle model representing a lighting area. Further, e.g. sections 3.3-3.5.1, figure 7, a plurality of VPLs are selected according to their positions in image space relative to image-space discontinuities determined in the direct illumination image, i.e. as claimed, selecting a plurality of target particle models satisfying a lighting requirement from the plurality of particle models based on a positional relationship between the particle models and the illuminated object.)
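The RSM-to-VPL step characterized above can be sketched as follows; the fixed-stride selection is a simplification for illustration (Nichols selects VPLs adaptively), and all names and values are hypothetical.

```python
import numpy as np

# Hypothetical sketch of VPL generation from a reflective shadow map (RSM):
# each RSM texel stores a world-space position and reflected flux, and a
# subset of texels is promoted to VPLs (the "particle models").

size, stride = 8, 4
rsm_pos = np.random.rand(size, size, 3)    # per-texel world-space positions
rsm_flux = np.random.rand(size, size, 3)   # per-texel reflected flux (color)

vpls = [(rsm_pos[y, x], rsm_flux[y, x])    # (position, color) per VPL
        for y in range(0, size, stride)
        for x in range(0, size, stride)]
```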
The limitations “determining a lighting range corresponding to each target particle model; rendering each target particle model according to the lighting range corresponding to each target particle model to obtain a virtual lighting range image” are taught by Nichols (Nichols, e.g. sections 3.1, 3.2, 3.2.1, 3.2.2, uses a stencil approach to splat the lighting contribution of each selected VPL into the multiresolution illumination buffer, where, e.g. section 3.2.1, paragraph 5, the stencil is used to cull invalid contributions to image patches/fragments/pixels from each respective VPL. That is, as claimed, each target particle model is rendered according to a lighting range determined for the target particle model based on its positional relationship to the illuminated object in the scene image, in order to obtain the virtual lighting range image.)
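The following hedged sketch models splatting one VPL’s contribution only within its lighting range, analogous to the per-VPL culling described above; the radius test and falloff are illustrative, not Nichols’ exact stencil formulation.

```python
import numpy as np

# Hedged sketch: accumulate a VPL's contribution into an illumination buffer
# only for pixels inside the VPL's lighting range (radius and falloff are
# illustrative assumptions).

h, w = 16, 16
buffer = np.zeros((h, w, 3))
xs, ys = np.meshgrid(np.arange(w), np.arange(h))
pixel_pos = np.dstack((xs, ys)).astype(float)           # (h, w, 2)

vpl_xy = np.array([8.0, 8.0])
vpl_color = np.array([1.0, 0.9, 0.7])
vpl_radius = 5.0

dist = np.linalg.norm(pixel_pos - vpl_xy, axis=2)
mask = dist < vpl_radius                                # cull out-of-range pixels
falloff = np.clip(1.0 - dist / vpl_radius, 0.0, 1.0)
buffer[mask] += (falloff[..., None] * vpl_color)[mask]  # accumulate the splat
```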
Regarding claim 10, the limitation “wherein the particle models comprise two-dimensional squares, and the method further comprises, before the drawing, at the position of each GPU particle, a particle model for representing a lighting area: adjusting the position of each particle model so that a boundary of the particle model whose position is adjusted is parallel to a boundary of the scene image corresponding to the object” is taught by Nichols (Nichols, e.g. section 3.2.1, paragraph 5, teaches that the VPLs are drawn using a single full-screen quad for every multiresolution splat, i.e. the full-screen quad is a 2D rectangle which is aligned to the screen and sized to match it, and the rectangle would be a square if the resolution were set to be equal in width and height. That is, as claimed, Nichols’ VPL splats, corresponding to the particle models, are two-dimensional squares which are positioned so that their boundaries are parallel to the boundary of the scene image.)
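A screen-aligned square splat of the kind characterized above can be sketched as follows; the helper and its values are hypothetical.

```python
# Hypothetical sketch of a screen-aligned square splat: the corners differ
# along one axis at a time, so every edge is parallel to a boundary of the
# scene image.

def screen_aligned_square(cx, cy, half):
    return [(cx - half, cy - half), (cx + half, cy - half),
            (cx + half, cy + half), (cx - half, cy + half)]

corners = screen_aligned_square(100.0, 80.0, 16.0)
# Horizontal edges share a y value; vertical edges share an x value.
assert corners[0][1] == corners[1][1] and corners[1][0] == corners[2][0]
```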
Regarding claims 12 and 22, the limitations are similar to those treated in the above rejection(s) and are met by the references as discussed in claim 1 above, with Nichols, e.g. section 4, indicating implementation using OpenGL and GLSL on a PC with consumer hardware, i.e. the claimed electronic device comprising a memory and a processor executing a program stored in the memory.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 2-4 and 14-16 are rejected under 35 U.S.C. 103 as being unpatentable over “Hierarchical Image-Space Radiosity for Interactive Global Illumination” by Greg Nichols, et al. (hereinafter Nichols) as applied to claims 1 and 12 above, and further in view of U.S. Patent Application Publication 2013/0328871 A1 (hereinafter Harada).
Regarding claim 2, the limitations “wherein the determining a positional relationship between each particle model and an illuminated object in the virtual space comprises: determining a first distance from each particle model to the camera in the virtual space; acquiring a depth image of the illuminated object in the virtual space by using the camera; sampling the depth image based on an area range of each particle model to obtain a plurality of sampling images; determining, according to the depth information of each sampling image, a second distance from the illuminated object displayed in each sampling image to the camera; comparing the first distance with the second distance, and determining the positional relationship between each particle model and the illuminated object displayed in a corresponding sampling image; wherein the selecting a plurality of target particle models satisfying a lighting requirement from a plurality of particle models based on the positional relationship comprises: determining particle models for which the first distance is smaller than or equal to the second distance as the plurality of target particle models satisfying the lighting requirement” are not explicitly taught by Nichols (Nichols, e.g. section 3.2.1, teaches calculating a depth image using the virtual camera, and, e.g. section 5, paragraph 2, suggests that future directions for improving the system could include accounting for indirect light visibility, but does not teach the claimed comparing of the first distance from the VPL to the camera with the second distance sampled from the depth image in an area corresponding to the VPL to determine the positional relationship. Said comparison/relationship nonetheless corresponds to Nichols’ suggested accounting for indirect light visibility, i.e. determining whether the relationship between the VPL position and the depth map indicates the VPL is visible or not visible for a surface.) However, this limitation is taught by Harada (Harada, e.g. abstract, paragraphs 21-78, describes a forward rendering pipeline with a light culling stage. Harada, e.g. paragraphs 24-27, describes the forward rendering pipeline, and, e.g. paragraphs 57-64, teaches that the pipeline can be extended to support one-bounce indirect illumination using virtual point lights, wherein the virtual point lights are evaluated for visibility at the light culling stage. Harada, e.g. paragraphs 30-50, describes the light culling stage, which operates by determining, for each tile in screen space, the minimum and maximum depths in the depth buffer, retaining the list of lights which overlap the tile frustum defined by the minimum and maximum depths, and culling those which do not overlap the tile frustum. Harada’s one-bounce indirect illumination extension includes an extended light culling stage, e.g. paragraphs 60-63, which evaluates the virtual point lights with respect to the tile frustums. More specifically, Harada, e.g. paragraphs 61-62, indicates that the depth extent of a tile frustum is split into cells; for each pixel 715, a depth mask marks the cells overlapped by the pixel based on its depth value; a light depth mask is similarly generated by calculating the extent of the light geometry in the depth direction and flagging the overlapped cells; and the overlap for each pixel is determined by comparing the light depth mask to the tile depth mask. Harada, e.g. paragraph 63, indicates that when the light and surface occupy the same cell, the light and tile depth masks have the same flag at that cell, such that a logical AND operation between the masks indicates the overlap.

Harada’s light culling for VPLs corresponds to the claimed steps of determining and comparing the first and second distances to determine the positional relationship between each particle model and the illuminated object. That is, Harada’s depth buffer corresponds to the claimed depth image of the illuminated object in the virtual space acquired using a virtual camera, which is sampled based on an area range of each particle model to obtain a plurality of sampling images: Harada’s light depth masks are compared to tile depth masks determined by sampling the depth buffer at the corresponding screen-space location of the light/VPL. The tile depth masks are used to determine the second distance(s), i.e. each cell in a depth mask corresponds to a different range of distances from the virtual camera, such that the flagged cells correspond to the second distance(s) determined from the tile depth masks obtained by sampling the depth image. Similarly, Harada’s light depth masks contain flagged cells indicating the distance(s) of the extent of the light/VPL from the virtual camera, corresponding to the claimed first distance(s) from the particle model to the camera in the virtual space. Finally, Harada, paragraph 63, indicates that a logical AND between the masks is used to determine whether the light/VPL should be retained for the frustum: when both the light/VPL and tile depth masks have the same cell flagged as occupied, the light/VPL affects the object surface at that cell and should be retained, whereas if there is no overlap the light/VPL can be culled. This corresponds to the claimed selecting of target particle models satisfying a lighting requirement based on the positional relationship, i.e. determining that particle models for which the first distance is smaller than or equal to the second distance satisfy the lighting requirement: when the distances are equal/overlapping, as represented by the cell logical AND result, the lighting requirement is satisfied and the light/VPL is selected/retained; in contrast, if the flagged light depth mask cell(s) have a greater distance to the camera than the flagged tile depth mask cell(s) for all the pixels of the tile, the light/VPL is not selected/retained for that tile. As noted above, this corresponds to Nichols’ suggestion to account for indirect light visibility, as the lack of overlap between a VPL and the screen-space tile depth extent indicates that the VPL is not visible from the surface represented in the screen-space tile.)
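The depth-mask overlap test described above can be sketched as follows; the sketch is modeled on Harada’s description, but the cell count, integer encoding, and helper name are illustrative assumptions.

```python
# Hedged sketch of the depth-mask overlap test: the tile's depth extent is
# split into cells, a tile (surface) mask and a light mask each flag the
# occupied cells, and a bitwise AND reveals overlap.

CELLS = 32

def depth_mask(z_min, z_max, near, far, cells=CELLS):
    """Flag the cells of [near, far] overlapped by the extent [z_min, z_max]."""
    scale = cells / (far - near)
    lo = max(0, int((z_min - near) * scale))
    hi = min(cells - 1, int((z_max - near) * scale))
    mask = 0
    for c in range(lo, hi + 1):
        mask |= 1 << c
    return mask

tile_mask = depth_mask(2.0, 2.5, near=1.0, far=9.0)   # surface depths in tile
light_mask = depth_mask(2.3, 3.0, near=1.0, far=9.0)  # VPL geometry extent

# Nonzero AND => the VPL overlaps visible surface depths; retain it, else cull.
keep_vpl = (tile_mask & light_mask) != 0
```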
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Nichols’ global illumination system to include Harada’s VPL light culling technique in order to account for indirect light visibility, as suggested by Nichols. In Nichols’ modified system, Harada’s light culling stage would be performed after generating the VPLs, as in Harada’s figure 6, thereby determining the subset of VPLs which are visible from the surfaces represented in each screen-space tile; analogous to Nichols’ stencil culling in section 3.2.1, paragraphs 3-4, Harada’s light culling stage would reduce the number of fragments generated for the illumination buffer that do not contribute to the final result.
Regarding claim 3, the limitations “wherein the determining a first distance from each particle model to a camera in the virtual space comprises: determining interface coordinates of a target reference point in each particle model according to a transformation relationship between a coordinate system of each particle model and a coordinate system of a display interface; and calculating the first distance from each particle model to the camera in the virtual space based on the interface coordinates of the target reference point in each particle model” are taught by Nichols in view of Harada (Harada, e.g. paragraphs 25, 27, 37, 38, 61, indicates that the light depth masks are calculated using screen-space depths, i.e. the screen-space coordinate system corresponds to the claimed coordinate system of a display interface, and the depth values/light depth mask cell flags are also determined in the screen-space coordinate system, where the extent of the light geometry is determined in the screen-space depth direction. That is, the claimed target reference point of the particle model is transformed into the screen-space coordinate system to determine the interface coordinates of the target reference point, i.e. the extent of the light geometry is determined in screen space based on the point position of the virtual point light, and the first distance(s) in the light depth mask(s) are determined using the screen-space depth(s) of the extent of the light geometry, i.e. the claimed calculating based on the interface coordinates of the target reference point.)
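The coordinate transformation characterized above can be sketched as follows; the matrix and values are placeholders, not taken from Nichols or Harada.

```python
import numpy as np

# Hypothetical sketch: transform a particle model's reference point into the
# display-interface (screen-space) coordinate system and read off its depth,
# serving as the claimed "first distance".

def to_screen(point_world, view_proj, width, height):
    p = view_proj @ np.append(point_world, 1.0)
    ndc = p[:3] / p[3]                     # perspective divide
    sx = (ndc[0] * 0.5 + 0.5) * width      # interface x coordinate
    sy = (ndc[1] * 0.5 + 0.5) * height     # interface y coordinate
    return sx, sy, ndc[2]                  # ndc[2]: screen-space depth

view_proj = np.eye(4)                      # placeholder transform
sx, sy, first_distance = to_screen(np.array([0.1, 0.2, 0.5]),
                                   view_proj, 1920, 1080)
```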
Regarding claim 4, the limitation “wherein the selecting a plurality of target particle models satisfying a lighting requirement from the plurality of particle models based on the relationship further comprises: deleting pixels of a particle model for which the first distance is larger than the second distance” is taught by Nichols in view of Harada (As discussed in the rejection of claim 2 above, in Nichols’ modified system, Harada’s light culling stage would be performed after generating the VPLs, as in Harada’s figure 6, thereby determining the subset of VPLs which are visible from the surfaces represented in each screen-space tile; analogous to Nichols’ stencil culling in section 3.2.1, paragraphs 3-4, Harada’s light culling stage would reduce the number of fragments generated for the illumination buffer that do not contribute to the final result. Further, as noted in the claim 2 rejection, in Harada’s VPL light culling technique, when the distances are equal/overlapping, as represented by the cell logical AND result, the lighting requirement is satisfied and the light/VPL is selected/retained; in contrast, if the flagged light depth mask cell(s) have a greater distance to the camera than the flagged tile depth mask cell(s) for all the pixels of the tile, the light/VPL is not selected/retained for that tile. That is, as claimed, the fragments, i.e. pixels, for the tile from the corresponding light/VPL would be deleted if there is no overlap.)
Regarding claim 14, the limitations are similar to those treated in the above rejection(s) and are met by the references as discussed in claim 2 above.
Regarding claim 15, the limitations are similar to those treated in the above rejection(s) and are met by the references as discussed in claim 3 above.
Regarding claim 16, the limitations are similar to those treated in the above rejection(s) and are met by the references as discussed in claim 4 above.
Claims 9 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over “Hierarchical Image-Space Radiosity for Interactive Global Illumination” by Greg Nichols, et al. (hereinafter Nichols) as applied to claims 1 and 12 above, and further in view of “A Reflectance Model for Computer Graphics” by Robert L. Cook, et al. (hereinafter Cook).
Regarding claim 9, the limitations “wherein the fusing the virtual lighting range image with a scene image corresponding to the illuminated object to obtain a lighting image in the virtual space comprises: acquiring a target light source color and a target scene color; performing interpolation processing on the target light source color and the target scene color by using … the virtual lighting range image to obtain an interpolation result; and superimposing the interpolation processing result with a color value of the scene image corresponding to the illuminated object to obtain the lighting image in the virtual space” are taught by Nichols (As noted in the claim 1 rejection above, Nichols teaches that the process is performed by rendering a direct illumination image, e.g. section 3.2.2, figure 2A, performing multiresolution splatting of the VPLs into an illumination buffer, e.g. figure 2F, and combining the direct illumination image with the illumination buffer to generate an output image. Nichols, e.g. section 3.2.2, paragraphs 4-5, equations 2-5, teaches that the diffuse color ρ_i of an eye-space patch i is interpolated with the light source color I_j of VPL L_j, weighted by the contribution factor F_i->j, i.e. the indirect illumination value is the claimed interpolation processing result calculated from the target light source color I_j and the target scene color ρ_i using the interpolation/weighting factor F_i->j. Finally, Nichols, e.g. section 3.2.2, paragraph 6, combines the indirect illumination result with the direct illumination result to produce the final result, i.e. the claimed superimposing of the interpolation processing result with a color value of the scene image corresponding to the illuminated object to obtain the lighting image.)
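The interpolation and superimposition mapped above can be sketched as follows; the sketch follows the shape of Nichols’ equations 2-5 as characterized here, with hypothetical values, and is not Nichols’ code.

```python
import numpy as np

# Hedged sketch: the indirect value is the VPL color I_j modulated by the
# patch color rho_i and the contribution factor F_ij, then superimposed on
# the direct (scene) color.

rho_i = np.array([0.8, 0.6, 0.4])   # patch diffuse color (target scene color)
I_j = np.array([1.0, 0.9, 0.7])     # VPL color (target light source color)
F_ij = 0.25                         # contribution/interpolation factor

indirect = F_ij * I_j * rho_i       # the claimed interpolation result
direct = np.array([0.3, 0.3, 0.3])  # scene image color value
lighting = direct + indirect        # the claimed superimposed lighting value
```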
The limitation “performing interpolation processing on the target light source color and the target scene color by using a target channel value of the virtual lighting range image to obtain an interpolation result” is implicitly taught by Nichols (As noted above, Nichols, section 3.2.2, equations 2-5, describes calculating the indirect illumination value corresponding to the claimed interpolation processing result calculated from the target light source color I_j and the target scene color ρ_i using the interpolation/weighting factor F_i->j. While not explicitly stated by Nichols, one of ordinary skill in the art would have found it implicit, if not inherent, that Nichols’ calculations in equations 2 and 5 are performed separately for each color channel of the images, i.e. the red, green, and blue channels; that is, one of ordinary skill in the art would understand that evaluating illumination from light sources having a color spectrum requires separately evaluating the lighting contribution from each spectral component/channel of the light source. In the interest of compact prosecution, Cook is cited to show that one of ordinary skill in the art would recognize that light source simulations/calculations are often described with respect to a single-channel calculation/equation, but represent performing the simulation/calculations/equations for all the components of the light source color, conventionally red, green, and blue.) In any event, this limitation is taught by Cook (Cook, e.g. abstract, pages 7-18, describes a physically based reflectance model for computer graphics, wherein the effect of light sources on objects in a scene depends on the spectral composition of the light source and the wavelength-selective reflection of the object surface, e.g. page 8, 3rd paragraph, and intensities are wavelength dependent, e.g. page 11, 3rd paragraph. Further, Cook, e.g. the paragraph spanning pages 16 and 18, describes evaluating the red component of an exemplary illumination calculation for a copper material, indicating that the green and blue components are calculated similarly, i.e. as noted above, one of ordinary skill in the art would understand that evaluating illumination from light sources having a color spectrum requires separately evaluating the lighting contribution from each spectral component/channel of the light source.)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement Nichols’ global illumination system by evaluating the indirect illumination using red, green, and blue color component channels because, as taught by Cook, one of ordinary skill in the art would understand that evaluating illumination from light sources having a color spectrum requires separately evaluating the lighting contribution from each spectral component/channel of the light source. That is, Nichols’ calculations in equations 2 and 5 would be performed separately for each color channel, i.e. the red, green, and blue channels, corresponding to the claimed interpolation processing performed using a target channel value of the virtual lighting range image.
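The per-channel rationale above can be sketched as follows; the function and values are hypothetical, illustrating that the same single-channel lighting equation is simply evaluated once per color component, as Cook does for the red channel of a copper material.

```python
# Hypothetical sketch: evaluate one single-channel lighting equation
# separately for the red, green, and blue components.

def shade_channel(rho, intensity, weight):
    return weight * intensity * rho

rho = (0.8, 0.6, 0.4)        # surface color, one value per channel
intensity = (1.0, 0.9, 0.7)  # light source color, one value per channel
weight = 0.25                # single geometric/contribution weight

result = tuple(shade_channel(r, i, weight) for r, i in zip(rho, intensity))
```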
Regarding claim 21, the limitations are similar to those treated in the above rejection(s) and are met by the references as discussed in claim 9 above.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROBERT BADER whose telephone number is (571) 270-3335. The examiner can normally be reached Monday through Friday, 11:00 a.m. to 7:00 p.m.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tammy Goddard, can be reached at 571-272-7773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ROBERT BADER/Primary Examiner, Art Unit 2611