DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Status
Applicant’s amendments filed on 02/05/2026 have been received and considered. Claims 1-24 are pending. Claims 1-2, 4-5, 9-11, 13, and 15-24 have been amended. No claims have been added or cancelled.
Drawings
The objections to the drawing in Figure 2 are withdrawn in view of the Applicant’s amendments to the specification.
Specification
The objections to the abstract, paragraph [0027], and paragraph [0077] due to minor informalities are withdrawn in view of the Applicant’s amendments to the specification.
Claim Objections
The objections to claims 1, 13, and 15 due to minor informalities are withdrawn in view of the Applicant’s amendments to claims 1, 13, and 15.
Response to Arguments
The applicant argued that Laferriere in view of Ghosh and Zhan does not teach determining pixel values by calculating a weighted sum using “the distribution of contribution ratios of the plurality of light sources obtained with respect to the directions corresponding to the intra-scene light sources,” as recited by amended claim 1 (Remarks, pg. 11-14).
More specifically, the applicant argues (Remarks, pg. 14, lines 3-8):
[Image: media_image1.png, excerpt of the Applicant’s Remarks, pg. 14, lines 3-8]
The examiner respectfully disagrees with the Applicant’s analysis.
Regarding the feature of calculating a weighted sum using “the distribution of contribution ratios of the plurality of light sources”, the mapping is achieved by the combination of Laferriere, Ghosh, and Zhan, and more specifically in view of Zhan.
The examiner explains that Zhan teaches the limitation in question: determining pixel values by calculating a weighted sum using “the distribution of contribution ratios of the plurality of light sources obtained with respect to the directions corresponding to the intra-scene light sources”:
calculating a weighted sum ([pg. 3290, col. 2, full par. 1, line 11] “M is the Gaussian Map”, where it is used “to synthesize the final illumination map” [pg. 3290, Figure 3, line 4]. Note: Gaussian map M is mapped to the claimed weighted sum, similar to L(P, ω) in Equation 2 as disclosed in the specification.)
[Image: media_image2.png]
Equation 3 (Zhan, pg. 3290)
of luminance values ([pg. 3290, col. 2, full par. 1, lines 12-14] “vi denotes the RGB value of a anchor points which is the product of light distribution on this anchor point and light intensity (namely vi = Pi * I)”)
using, as a weight, the distribution of contribution ratios (The section of Equation 3, shown below, is mapped to the claimed “contribution ratio,” because it is a weight that determines the contribution of the corresponding vi value. Further, “The value of all anchor points will be normalized by the intensity I to ensure their summation equals one, so that the N anchor points form a standard discrete distribution on a unit sphere as denoted by light distribution P.” [pg. 3289, col. 1, full par. 2, line 21 – col. 2, lines 1-3])
[Image: media_image3.png]
Section of Equation 3 (Zhan, pg. 3290)
of the plurality of light sources (“As illumination maps are spherical images, we define N anchor points on a unit sphere to model discrete light distributions.” [pg. 3288, col. 1, full par. 1, lines 6-8]. Note: The N anchor points are mapped to the plurality of light sources. The discrete light distribution is illustrated by the section of Figure 2 below, where “we first derive the Light sources region via thresholding and then assign light source pixels to N anchor points as illustrated in Gaussian Map” [pg. 3289, Figure 2, lines 2-4].)
[Image: media_image4.png]
Figure 2 (Zhan, pg. 3289)
obtained with respect to the directions corresponding to the intra-scene light sources (“di is the direction of the anchor point” [pg. 3290, col. 2, full par. 1, lines 14-15], where anchor points represent the discrete light sources.).
Therefore, the combination of Laferriere and Ghosh in view of Zhan teaches calculating a weighted sum using “the distribution of contribution ratios of the plurality of light sources”.
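For illustration only, and not as an additional mapping of record, the weighted-sum operation discussed above can be sketched as follows; all identifiers are hypothetical:

```python
# Illustrative sketch only; all names are hypothetical. A pixel value
# is computed as a weighted sum of luminance values, where the weights
# form a normalized discrete distribution of contribution ratios over
# the light sources, in the manner mapped to Zhan's anchor points.

def weighted_luminance(contribution_ratios, luminance_values):
    """Return sum_i P_i * v_i, where the normalized P_i sum to one."""
    total = sum(contribution_ratios)
    # Normalization mirrors Zhan's requirement that the anchor-point
    # values sum to one, forming a standard discrete distribution.
    ratios = [p / total for p in contribution_ratios]
    return sum(p * v for p, v in zip(ratios, luminance_values))

# Three hypothetical light sources and their luminance values:
pixel_value = weighted_luminance([2.0, 1.0, 1.0], [0.9, 0.5, 0.3])  # 0.65
```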
Even if the Applicant’s characterization of Zhan’s teaching, namely that “Zhan’s ‘weights’ are therefore estimates of an environment lighting distribution derived [from] image data,” is correct, there is no apparent contradiction with the claimed feature of “the distribution of contribution ratios of the plurality of light sources”: both concern an environment lighting distribution.
Zhan’s Figure 2 shows that the environment lighting distribution is a distribution over discrete light sources.
[Image: media_image4.png, Figure 2 (Zhan, pg. 3289), reproduced again]
Further, the Examiner’s mapping of the weights and the distribution is consistent with the Applicant’s disclosure, as shown in Fig. 4:
[Image: media_image5.png, Fig. 4 of the Applicant’s disclosure]
The Examiner’s mapped weights are similar to the weights in Fig. 4, as both are “light source contribution ratio distributions” and are of “the plurality of light sources”.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 4, 5, 20, 22, and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Laferriere (US 6226005) in view of Ghosh et al. (US 11410378) and Zhan et al. (“EMLight: Lighting Estimation via Spherical Distribution Approximation”).
Laferriere teaches an image processing apparatus for generating a display image of a space including an object, the image processing apparatus comprising: circuitry configured to: (“A system and method of determining, and subsequently using in a rendering engine, an illumination map”, where “producing an illumination map for at least one object in a scene to be rendered” [col. 1, line 15-19])
store a distribution of contribution ratios of a plurality of light sources representing sizes of effects thereof on a color of a figure of the object ([col. 4, lines 9-10] “means to store said determined contributions in an illumination map”, where [col. 10, lines 61-65] “‘illumination map’… represents the contribution of the scene's light sources to the points of interest in the scene and combining the colors of the objects at those points of interest” and “the resulting illumination value can represent the final color” [col. 13, lines 4-5])
in association with model data of the object ([col. 3, lines 6-9] “producing an illumination map for at least one object in a scene to be rendered, …the object being represented as a mesh of polygons”)
…determine a pixel value of the figure ([col. 3, lines 18-25] “(iii) for each determined area of intersection, determining the product of illumination information… [and] (iv) summing each product determined in step (iii) for each respective pixel to obtain an illumination value”)
by calculating a weighted sum of luminance values of light reflected from the object after emanating from intra-scene light sources established in the space ([col. 3, lines 28-32] “the illumination information in step (iii) is determined by determining the sum of each light value for each light defined for said scene at said determined location of intersection”).
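For illustration only (all identifiers hypothetical), the pre-computation mapped to Laferriere above, in which per-point light contributions are summed into an illumination map and later combined with the object's color, can be sketched as:

```python
# Illustrative sketch only; all names are hypothetical.

illumination_map = {}  # point of interest -> stored illumination value

def precompute_illumination(point_id, light_values):
    # Sum "each light value for each light defined for said scene"
    # at the point of interest and store the result in the map.
    illumination_map[point_id] = sum(light_values)

def shade(point_id, object_color):
    # Combine the stored illumination with the object's color at
    # render time, avoiding re-evaluation of the scene's lights.
    return illumination_map[point_id] * object_color

precompute_illumination("p0", [0.2, 0.3, 0.1])
final_color = shade("p0", 0.5)  # (0.2 + 0.3 + 0.1) * 0.5 = 0.3
```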
Laferriere fails to teach light sources being in different directions relative to the object. However, this is taught by Ghosh et al., hereinafter Ghosh. Ghosh teaches the light sources being in different directions relative to the object ([col. 4, lines 5-7] “a plurality of light sources arranged in a hemisphere or sphere around the object, or surrounding the object”).
Laferriere also fails to teach the calculation process described earlier when rendering the figure of the object. This is because Laferriere teaches that this is done “to avoid having to calculate the contributions of lights in the scene during rendering, thus reducing the rendering time” [Abstract]. However, this is known in the art, as taught by Ghosh. Ghosh teaches that performing calculations for luminance values can be done when rendering the figure of the object in the display image ([col. 8, lines 20-23] “an image processing system 12, which may be implemented in software on a processor-based computer system (not shown), can be used to generate computed images”).
Laferriere further fails to teach the same apparatus to output data of the display image including the object image. Ghosh also teaches output data of the display image including the object image ([col. 8, lines 40-41] “display the object 2 on a display 21”).
Laferriere and Ghosh are analogous to the claimed invention, as they are in the same field of image processing of a 3D scene, given a plurality of lights, to render and display an object in that scene. Ghosh teaches a system that “enables high-quality renderings of acquired objects under new lighting conditions”, with an arrangement of the light sources “to provide uniform illumination on the object” [col. 4, line 7]. Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Ghosh with the apparatus of Laferriere for high-quality rendering with uniform illumination.
Laferriere in view of Ghosh fails to teach the calculation being made using as a weight the contribution ratios obtained with respect to the directions corresponding to the intra-scene light sources. However, this is taught by Zhan et al. (hereinafter Zhan).
Zhan also teaches an image processing apparatus ([pg. 3288, col. 1, paragraph 1, line 2] “an illumination estimation framework”),
…determine a pixel value of the figure ([pg. 3289, Fig. 2 description, lines 3-4] “assign light source pixels to N anchor points as illustrated in Gaussian Map”) by calculating a weighted sum of luminance values (see Equation 3 below, where “M is the gaussian map… [and] vi denotes the RGB value of a anchor point” [pg. 3290, col. 2, paragraph 2, lines 11-13])
the calculation being made using, as a weight, the distribution of contribution ratios of the plurality of light sources ([pg. 3288, col. 1, paragraph 2, line 7] “N anchor points… model discrete light distributions” where “The value of all anchor points… summation equals one” [pg. 3289, col. 1, paragraph 2, lines 25 - col. 2, paragraph 1, line 1]) obtained with respect to the directions corresponding to the intra-scene light sources ([pg. 3290] “di is the direction of an anchor point”).
[Image: media_image6.png]
Equation 3 (Zhan)
Similarly to Laferriere and Ghosh, Zhan is analogous to the claimed invention, as it is in the same field of image processing of a 3D scene, given a plurality of lights, to render and display the scene. Zhan teaches “an accurate illumination estimation framework that is capable of locating light sources and recover illumination with realistic frequency simultaneously” [pg. 3288], in which the calculation is “largely attributed to the accurate generation of illumination” [pg. 3292]. Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Zhan with the combination of Laferriere in view of Ghosh to accurately render the illumination given by the light sources of a scene.
Regarding claim 4, the combination of Laferriere, Ghosh, and Zhan teaches the image processing apparatus according to claim 1, wherein the circuitry is configured to: store the contribution ratios for each of three primary colors, and calculate the weighted sum for each of the three primary colors (Laferriere; [col. 4, lines 9-10] “means to store said determined contributions in an illumination map”, where “‘illumination map’… represents the contribution of the scene's light sources to the points of interest in the scene and combining the colors of the objects at those points of interest” [col. 10, lines 62-65], and “in the present embodiment of the invention, each color is expressed in normalized RGB color space (R, G and B values each between 0.0 and 1.0)” [col. 8, line 67 - col. 9, lines 1-3])
Regarding claim 5, the combination of Laferriere, Ghosh, and Zhan further teaches the image processing apparatus according to claim 1, wherein the circuitry is configured to:
store the distribution of the contribution ratios collectively representing the sizes of the effects on the color (Laferriere; [col. 4, lines 9-10] “means to store said determined contributions in an illumination map” where “‘Illumination map’ which represents the contribution of the scene's light sources to the points of interest in the scene and combining the colors of the objects at those points of interest with the illumination map values” [col. 10, lines 62-66] and “the resulting illumination value can represent the final color” [col. 13, lines 4-5])
of a plurality of object figures having a constant positional relation therebetween, (Laferriere; “rendering engines… take a scene definition… such scene definitions can include geometric definitions for various 3D objects and their locations within the scene” [col. 1, lines 42-47])
and calculate the pixel values of the plurality of object figures by using the distribution of the contribution ratios collectively representing the sizes of the effects (Laferriere; [Abstract] “In another embodiment, the present invention is used to determine the illumination values for one or more objects represented by a polygon mesh”, where it “stor[es] the illumination value for each of the pixels” [col. 3, line 24]).
Regarding claim 20, claim 20 recites substantially similar limitations to claim 1, but in a method form. The rationale of claim 1 is applied to reject claim 20. In addition, the combination of Laferriere, Ghosh, and Zhan teaches reading a distribution of contribution ratios of a plurality of light sources representing sizes of effects thereof on a color of a figure of the object in association with model data of the object from a memory (Laferriere; [col. 10, line 67 – col. 11, lines 1-3] “the present invention also provides for the storage of the illumination map values to allow pre-rendering of the contribution of the scene's light sources” and “the resulting illumination value can represent the final color” [col. 13, lines 4-5]).
Regarding claim 22, claim 22 recites substantially similar limitations to claim 20, but in a non-transitory computer readable medium form. The rationale of claim 20 is applied to reject claim 22. The combination of Laferriere, Ghosh, and Zhan teaches a non-transitory computer readable medium storing a computer program for causing a computer to perform a method (Ghosh; [col. 4, lines 41-44] “According to a fourth aspect of the present invention is provided a computer program product comprising a computer readable medium (which may be non-transitory) storing the computer program of the third aspect.”, where the third aspect is “a computer program comprising instructions for performing the method of the first or second aspect.” [col. 4, lines 39-40]).
Regarding claim 24, claim 24 recites substantially similar limitations to claim 1, but as a non-transitory computer readable medium storing a data structure. The rationale of claim 1 is applied to reject claim 24.
In addition, the combination of Laferriere, Ghosh, and Zhan teaches a non-transitory computer readable medium storing a data structure (Ghosh; [col. 4, lines 41-44] “According to a fourth aspect of the present invention is provided a computer program product comprising a computer readable medium (which may be non-transitory) storing the computer program of the third aspect.”)
of an object model used by an image processing apparatus in generating a display image including an object (Laferriere; [col. 3, lines 5-9] “there is provided a method of producing an illumination map for at least one object in a scene to be rendered, the object to be texture mapped and the object being represented as a mesh of polygons”)
the data structure associating with each other: (Laferriere; [col. 12, lines 46-47] “Storage of the final color values can be in… the definitions of the polygon meshes”)
data used by the image processing apparatus to represent a shape of the object arranged in a space of a display target (Laferriere; [col. 10, line 67 – col. 11, lines 1-3] “the present invention also provides for the storage of the illumination map values to allow pre-rendering of the contribution of the scene's light sources”).
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Laferriere (US 6226005) in view of Ghosh (US 11410378) and Zhan (“EMLight: Lighting Estimation via Spherical Distribution Approximation”), and further in view of Cabeleira (“Combining Rasterization and Ray Tracing Techniques to Approximate Global Illumination in Real-Time”) and Peterson et al. (US 2009/0096789).
Laferriere in view of Ghosh and Zhan teach the image processing apparatus according to claim 1, wherein the circuitry is configured to store a plurality of distributions of the contribution ratios, but fail to teach …in association with a combination of an incident position and an incident direction of a ray on a surface of the object.
However, this is taught by Cabeleira, who teaches store ([pg. 73, section 6.2.6, paragraph 2, line 1] “The result of this process is stored in a buffer”) …in association with a combination of an incident position and an incident direction of a ray on a surface ([pg. 73, section 6.2.6, paragraph 2, line 1] “that represents the reflection and refraction rays that were generated for each pixel and where each ray is represented by its origin and direction”).
Cabeleira is analogous to the claimed invention as they are in the same field of ray-tracing to render illumination in a 3D scene. Cabeleira teaches that this buffer allows “to optimize the ray tracing process” [pg. 74, section 6.2.6, paragraph 1, line 3]. Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Cabeleira with the combination of Laferriere, Ghosh, and Zhan for an optimal ray-tracing process.
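Purely for illustration (the identifiers are hypothetical and not Cabeleira's actual code), storing a distribution in association with a combination of an incident position and an incident direction, in the manner of a ray casting buffer keyed by origin and direction, can be sketched as:

```python
# Illustrative sketch only; hypothetical names.

ray_buffer = {}  # (incident position, incident direction) -> distribution

def store_distribution(position, direction, contribution_ratios):
    # Key the stored distribution by the (position, direction) pair,
    # as each ray in Cabeleira's buffer "is represented by its origin
    # and direction".
    ray_buffer[(position, direction)] = contribution_ratios

store_distribution((0.0, 1.0, 0.0), (0.0, -1.0, 0.0), {"A": 0.7, "B": 0.3})
retrieved = ray_buffer[((0.0, 1.0, 0.0), (0.0, -1.0, 0.0))]
```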
Laferriere in view of Ghosh and Zhan, and further in view of Cabeleira, fail to teach …the ray emanating from a virtual camera for observing the object, and select the distribution of the contribution ratios to be used to determine the pixel value, the ray coming from a point of view with respect to the display image.
However, this is known in the art as taught by Peterson et al., hereinafter Peterson. Peterson teaches …the ray emanating from a virtual camera for observing the object ([0008] “A ray tracing algorithm mainly involves casting one or more rays from the camera through each pixel of the image into the scene”),
and select the distribution of the contribution ratios to be used to determine the pixel value ([0008] “After a ray terminates, the contribution of the light source is traced back through the tree to determine its effect on the pixel of the scene”, where the effect is then transformed “into final pixel color values” [0038])
the ray coming from a point of view with respect to the display image ([0005] “A camera position from which the scene is viewed is defined. An image plane of a selected resolution… between the camera and the scene”, where “this image can thereafter be displayed on a monitor” [0006]).
Peterson is analogous to the claimed invention, as both are in the same field of rendering light information in a 3D scene. Peterson teaches that “Although the physical world operates by light energy being traced from a source to the camera, only a small portion of the light generated by a source arrives at the camera. Therefore, it has been recognized that rays, for most circumstances, should be traced from the camera back to determine intersections with light sources” [0007]. Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Peterson with the combination of Laferriere, Ghosh, Zhan, and Cabeleira for realistic lighting that mimics the physical world.
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Laferriere (US 6226005) in view of Ghosh (US 11410378) and Zhan (“EMLight: Lighting Estimation via Spherical Distribution Approximation”), and further in view of Seibert (US 2019/0114825).
Laferriere in view of Ghosh and Zhan teach the image processing apparatus according to claim 1, but fail to teach that the circuitry is configured to associate a distribution of brightness of the space obtained from an environmental map representing a background of the space with a luminance value of the intra-scene light source.
However, this is known in the art as taught by Seibert. Seibert teaches to associate a distribution of brightness of the space obtained from an environmental map representing a background of the space with a luminance value of the intra-scene light source ([0003] “These images can be used as an environment map, which serves as a wrap-around background image”, where “background images are used to surround and light a scene” [0004]).
Seibert is analogous to the claimed invention, as both are in the field of computer graphics, more specifically rendering a scene using ray tracing using illumination information from a light source in the scene. Seibert teaches that “In computer graphics… these images can be used as an environment map, which serves as a wrap-around background image and a light source of a scene… These images are referred to as ‘background images’” [0003]. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to incorporate the teachings of Seibert into the combination of Laferriere, Ghosh, and Zhan as it is known in the art of computer graphics to use background images as environmental maps for a scene.
Claims 6 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Laferriere (US 6226005) in view of Ghosh (US 11410378) and Zhan (“EMLight: Lighting Estimation via Spherical Distribution Approximation”), and further in view of Tokuyoshi (US 2016/0125643).
Regarding claim 6, Laferriere, Ghosh, and Zhan teach the image processing apparatus according to claim 1, but fail to teach circuitry configured to, given another object, calculate the weighted sum by adjusting by calculation the luminance value of the intra-scene light source in the direction of the other object according to a distance thereto.
This is known in the art as taught by Tokuyoshi, which teaches given another object, calculate the weighted sum by adjusting by calculation the luminance value of the intra-scene light source ([0005] “indirect light sources that further irradiate other objects due to a direct light source being reflected by an object arranged in the scene is employed… for each object rendered… summing up influences of all light sources that may irradiate the object.”)
in the direction of the other object ([0031] “Definition of a light source is performed by light source coordinates/direction… light sources include not only direct light sources…. but also indirect light sources”)
according to a distance thereto ([0026] “objects arranged in a game screen rendered by the rendering unit 104 in units of pixels. In the present embodiment, the luminance computation unit 105 divides a game screen into a plurality of regions, and after extracting a light source whose contribution should be considered for each region, performs luminance computation processing based on a distance from a light source for an object (shading point) corresponding to respective pixels.”).
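Purely for illustration, adjusting a source's luminance value by the distance to another object before summation can be sketched as below; inverse-square falloff is used here as a common convention and is not asserted to be Tokuyoshi's particular computation (all names hypothetical):

```python
# Illustrative sketch only; inverse-square falloff is an assumption.

def attenuated_luminance(luminance, distance):
    # Adjust the light source's luminance value according to the
    # distance to the irradiated object.
    return luminance / (distance * distance)

def summed_luminance(sources):
    # sources: (luminance, distance) pairs for every light source
    # whose contribution should be considered for the object.
    return sum(attenuated_luminance(l, d) for l, d in sources)

total = summed_luminance([(4.0, 2.0), (9.0, 3.0)])  # 1.0 + 1.0 = 2.0
```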
Tokuyoshi is analogous to the claimed invention, as both relate to computation of luminance values based on light sources and objects in a scene. Tokuyoshi further teaches that this method is done “to improve a realism regarding a luminance of graphics that are rendered” [0005]. Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the invention to incorporate the teaching of Tokuyoshi into the combination of Laferriere, Ghosh, and Zhan to improve realism in rendering luminance.
Regarding claim 9, Laferriere, Ghosh, and Zhan teach the image processing apparatus according to claim 1, but fail to teach circuitry configured to individually render the plurality of object figures arranged in the space by use of the distribution of the contribution ratios associated with each of the objects.
This is known in the art as taught by Tokuyoshi. Tokuyoshi teaches individually render the plurality of object figures arranged in the space by use of the distribution of the contribution ratios associated with each of the objects (“a method that considers influences of a plurality of light sources… defined in a scene that is to be rendered, and of indirect light sources that further irradiate other objects due to a direct light source being reflected by an object arranged in the scene is employed… for each object rendered in respective” [0005]).
Tokuyoshi teaches that “for each object rendered… it is possible obtain a more correct luminance computation result by summing up influences of all light sources that may irradiate the object.” [0005]. Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the invention to incorporate the teaching of Tokuyoshi into Laferriere, Ghosh, and Zhan to have a more accurate luminance calculation of summing the contributions of the light sources in the scene.
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Laferriere (US 6226005) in view of Ghosh (US 11410378) and Zhan (“EMLight: Lighting Estimation via Spherical Distribution Approximation”), and further in view of Koylazov et al. (US 2018/0374260).
Laferriere, Ghosh, and Zhan teach the image processing apparatus according to claim 1, but fail to teach circuitry configured to render a figure of an object with which no distribution of the contribution ratios is associated, the rendering being made through ray tracing by use of the model data.
This is taught in the art by Koylazov et al., hereinafter Koylazov. Koylazov teaches render a figure of an object with which no distribution of the contribution ratios is associated ([0016] “The image rendering system 100 obtains as input a model of the scene… to generate a rendered image 122 of the scene… the model 121 may include… information about objects in the scene”)
the rendering being made through ray tracing by use of the model data ([0018] “The rendering module 103 may employ an image rendering technique called ray tracing”, where “The image rendering system 11 includes... a rendering module 103” [0017]).
Koylazov is analogous to the claimed invention, as both relate to computation of luminance values based on light sources and objects in a scene. Koylazov further teaches an embodiment where “images of scenes can be rendered with a high degree of visual realism while reducing the computational cost of rendering images of scenes” [0006]. Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the invention to incorporate the teachings of Koylazov into the combination as taught by Laferriere, Ghosh, and Zhan to render high-quality scenes while reducing processing power.
Claims 11, 12, 14, 21, and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Koylazov (US 2018/0374260) in view of Cabeleira (“Combining Rasterization and Ray Tracing Techniques to Approximate Global Illumination in Real-Time”).
Regarding claim 11, Koylazov teaches an object data generation apparatus for generating data related to an object and used to generate a display image, the object data generation apparatus comprising: ([0016] “The image rendering system 100 obtains as input a model of the scene 121 and analyzes the model 121 to generate a rendered image 122 of the scene”)
circuitry configured to: arrange a plurality of light sources in different directions relative to the object in a virtual space ([0015] “where the scene is a three-dimensional environment that includes multiple points and is affected by multiple light sources”)
and to repeat a predetermined number of times ([0021] “The sampling method may depend on a variety factors, such as one or more of the sampling rate”)
a sampling process that traces a ray from a virtual camera observing the object and finds a light source at which the ray has arrived, ([0030] “The image rendering system casts a ray 211 from the camera 201… then casts rays… to one or more selected light sources of the light sources in the scene”)
thereby obtaining a distribution of contribution ratios of the light sources representing sizes of effects thereof on a color of a figure of the object; ([0003] “determining a contribution value of the light source in the pair to a color of the point in the pair… based on the maximum… measure of an estimated importance of the light source”)
Koylazov does not teach …and store the distribution of the contribution ratios in association with a combination of an incident position and an incident direction of the ray on a surface of the object. However, this is taught by Cabeleira.
Cabeleira teaches …and store the distribution of the contribution ratios in association with a combination of an incident position and an incident direction of the ray on a surface of the object. ([pg. 73, section 6.2.6, paragraph 2, lines 1-3] “The result of this process is stored in a buffer called the ray casting buffer that represents the reflection and refraction rays that were generated for each pixel and where each ray is represented by its origin and direction.”).
Similar to claim 2, Cabeleira is analogous to the claimed invention as they are in the same field of ray-tracing to render illumination in a 3D scene. Cabeleira teaches that this buffer allows “to optimize the ray tracing process” [pg. 74, section 6.2.6, paragraph 1, line 3]. Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Cabeleira into the apparatus of Koylazov for an optimal ray-tracing process.
Regarding claim 12, Koylazov in view of Cabeleira teaches the object data generation apparatus according to claim 11, wherein the circuitry is configured to make the contribution ratio of a light source higher, the greater the number of times the ray has arrived thereat during the sampling process. (Koylazov; [0022] “The sampling method employed by the rendering module 103 samples rays in a way that, for each cell 111 in the scene, more rays will be cast from the points in the cell 111 to the light sources having a higher significance value 112”, where “measure of significance of a light source to rendering points in a cell 111 is referred to as a ‘significance value’” [0022]).
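Purely for illustration (hypothetical names, not Koylazov's actual implementation), making a light source's contribution ratio higher the greater the number of times traced rays arrive at it can be sketched as:

```python
# Illustrative sketch only; hypothetical names.
import random

def sample_contribution_ratios(trace_ray, num_samples):
    counts = {}
    for _ in range(num_samples):
        light = trace_ray()  # the light source this traced ray reached
        counts[light] = counts.get(light, 0) + 1
    # Normalize hit counts into a distribution summing to one, so a
    # source hit more often receives a higher contribution ratio.
    return {light: c / num_samples for light, c in counts.items()}

# Toy tracer in which light "A" is reached about twice as often as "B".
random.seed(0)
ratios = sample_contribution_ratios(
    lambda: random.choices(["A", "B"], weights=[2, 1])[0], 10_000)
```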
Regarding claim 14, Koylazov in view of Cabeleira teaches the object data generation apparatus according to claim 11, wherein the circuitry is configured to, in tracing the ray, simulate scattering of the ray inside the object (Koylazov; [0002] “Ray tracing is a technique… simulating a variety of optical effects, such as… scattering”).
Regarding claim 21, claim 21 recites substantially similar limitations of claim 11 but in a method form; therefore, the rationale of claim 11 is applied to reject claim 21. Cabeleira further teaches storing… in a memory of the object data generation apparatus ([pg. 73, Section 6.2.6, paragraph 2, line 1] “The result of this process is stored in a buffer called the ray casting buffer that represents the reflection and refraction rays that were generated for each pixel and where each ray is represented by its origin and direction.”).
Regarding claim 23, claim 23 recites substantially similar limitations of claim 21 but in the form of a non-transitory computer readable medium; therefore, the rationale of claim 21 is applied to reject claim 23. The combination of Koylazov and Cabeleira also teaches a non-transitory computer readable medium storing a computer program for causing a computer to perform a method ([0057] “Embodiments of the subject matter described in this specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions encoded on a tangible non transitory program carrier for execution”).
Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Koylazov (US 2018/0374260) in view of Cabeleira (“Combining Rasterization and Ray Tracing Techniques to Approximate Global Illumination in Real-Time”), and further in view of Williams et al. (GB 2586838).
Koylazov in view of Cabeleira teaches the object data generation apparatus according to claim 11, wherein the circuitry is configured to… store a plurality of the incident directions in association with the contribution ratios in the voxels that include the object surface (Koylazov; [Abstract] “determining a maximum contribution value of the contribution values for the light source to the color of the points that are in the cell”, where [0004] “dividing the scene into a plurality of 3-dimensional voxels and assigning… [to] plurality of cells.”).
Koylazov in view of Cabeleira fails to teach …given voxels obtained by dividing a cuboid containing the object. However, this is taught by Williams et al., hereinafter Williams. Williams teaches …given voxels obtained by dividing a cuboid containing the object ([pg. 5, lines 10-14] “cuboid that encloses the capture area… is shown as being divided into voxels 303 of equal size.”).
Williams is analogous to the claimed invention, as they both relate to rendering a 3D scene from the view of a virtual camera. Williams also discloses a system of 3D reconstruction of a scene with free viewpoints modified to provide high-quality images without increasing the amount of processing power ([pg. 2, lines 5-6] “A number of challenges exist when seeking to provide high-quality image…”, where “addressing these issues by simply increasing the amount of processing that is applied can also be problematic… It is therefore considered that alternative modifications to the free viewpoint content generating may be advantageous.” [pg. 2, lines 13-16]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Williams with the combination of Koylazov and Cabeleira to provide an alternative solution for displaying high-quality images without increasing processing power.
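For illustration only, dividing a cuboid enclosing the object into equal-size voxels, as quoted from Williams, amounts to mapping a 3D point to integer voxel coordinates; the function below is a hypothetical sketch under that reading, not the reference's implementation:

```python
def voxel_index(point, box_min, box_max, resolution):
    """Map a 3D point inside an axis-aligned cuboid to integer voxel
    coordinates, assuming the cuboid is divided into `resolution`
    equal-size voxels along each axis."""
    idx = []
    for p, lo, hi in zip(point, box_min, box_max):
        t = (p - lo) / (hi - lo)  # normalized position in [0, 1]
        # clamp so points on the far face fall in the last voxel
        idx.append(min(int(t * resolution), resolution - 1))
    return tuple(idx)
```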
Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Koylazov (US 2018/0374260) in view of Cabeleira (“Combining Rasterization and Ray Tracing Techniques to Approximate Global Illumination in Real-Time”), and further in view of Debevec et al. (“Acquiring the Reflectance Field of a Human Face”).
Koylazov in view of Cabeleira teaches the object data generation apparatus according to claim 11, wherein the circuitry is configured to, but fails to teach, given a plurality of images presenting states of the object observed by the virtual camera in different directions, store the distributions of the corresponding contribution ratios in each of pixel regions representing the object figure.
This is known in the art as taught by Debevec et al., hereinafter Debevec. Debevec teaches given a plurality of images presenting states of the object observed by the virtual camera in different directions ([pg. 1, section 1, paragraph 4, lines 5-6] “subject’s appearance is recorded from different angles by stationary video cameras.”),
store the distributions of the corresponding contribution ratios in each of pixel regions representing the object figure ([pg. 1, section 1, paragraph 5, lines 1-3] “From this illumination data, we can immediately render the subject’s face from the original viewpoints under any incident field of illumination”, which is done by “construct[ing] a reflectance function image for each observed image pixel from its values over the space of illumination directions” [pg. 1, Abstract, lines 5-7]).
Debevec is analogous to the claimed invention, as they both relate to rendering a 3D object given a plurality of lights surrounding it. Debevec teaches a method to render faces regardless of lighting and capture the complexities of the human face (“render faces under arbitrary changes in lighting and viewing direction based on recorded imagery” [pg. 1] to address the problem of “the lack of a method for capturing the spatially varying reflectance characteristics of the human face”). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Debevec with the combination of Koylazov and Cabeleira to improve rendering to account for complexities of reflectance of objects, such as human faces.
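For illustration only, rendering a pixel “under any incident field of illumination” from a per-pixel reflectance function, as quoted from Debevec, reduces to a weighted sum over illumination directions; the dictionary layout and names below are hypothetical, not the reference's data structures:

```python
def relight_pixel(reflectance, illumination):
    """Render one pixel under a new lighting environment as a weighted
    sum of its per-direction reflectance values: each stored reflectance
    sample is scaled by the intensity of the new illumination arriving
    from that direction, then the contributions are summed."""
    return sum(reflectance[d] * illumination.get(d, 0.0) for d in reflectance)
```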
Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Koylazov (US 2018/0374260) in view of Cabeleira (“Combining Rasterization and Ray Tracing Techniques to Approximate Global Illumination in Real-Time”), and further in view of Mitev et al. (US 8982126).
Similar to claim 11, Koylazov in view of Cabeleira teaches the object data generation apparatus according to claim 11, wherein the circuitry is configured to …store a plurality of the incident directions in association with the distributions of the contribution ratios, but fails to teach in each of the pixel regions of an image obtained through UV unwrapping of the object.
However, this is known in the art as taught by Mitev et al., hereinafter Mitev. Mitev teaches in each of the pixel regions of an image (“extracting the pixels by translating coordinates of the mapping position to a location in each contributing palettized image” [col. 4, lines 20-22])
obtained through UV unwrapping of the object (“the mapping position that is to be shaded, i.e., a position in a UV mapping of the three-dimensional model of the image” [col. 6, lines 1-3]).
Mitev is analogous to the claimed invention, as they both relate to rendering based on lighting around an object. Mitev further teaches shading a CG representation of material “without obtaining multiple high-resolution images of a large sample of the physical materials… using less data and in a shorter amount of time” [col. 3, lines 11-17]. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Mitev with the apparatus of Koylazov and Cabeleira to render material with less data in a shorter amount of time.
Allowable Subject Matter
Claims 7, 8, 13, 15, and 19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:
Regarding claim 7, the closest prior art of Tokuyoshi (US 2016/0125643) teaches a “case where positions of the light source and the object change dynamically” [0057]. However, Tokuyoshi, alone or in combination with the rest of the prior art of record, fails to teach this same case where multiple objects change dynamically; therefore, Tokuyoshi fails to teach the limitation as a whole, “upon approaching of the other object, adjust the luminance value of the intra-scene light source in the direction of the other object in such a manner that the color of the intra-scene light source approaches a color on a model of the other object.”
Regarding claim 8, the closest prior art of Tokuyoshi (US 2016/0125643) teaches “cases where a shading occurs for objects to be arranged in a scene by a ray from a light source being occluded by another object” [0057], but fails to teach the occlusion due to the other object approaching. Therefore, Tokuyoshi, alone or in combination with the rest of the prior art of record, fails to teach “upon approaching of the other object, adjust the luminance value of the intra-scene light source in the direction of the other object in such a manner that the brightness of the intra-scene light source is reduced.”
Regarding claim 13, the prior art of record, taken singly or in combination, does not teach or suggest the relationship “to make the contribution ratio of the light source at which the ray has arrived smaller, the larger the reduction in luminance caused by reflection of the ray on the object surface.”
Regarding claim 15, the prior art of record, taken singly or in combination, does not teach or suggest the relationship “to make the contribution ratio of the light source at which the ray has arrived smaller, the larger attenuation of light caused by a collision between the ray and material particles inside the object.”
Regarding claim 19, the prior art of record, taken singly or in combination, does not teach or suggest the relationship “given small regions obtained by dividing the pixel regions, associate the small regions with the incident directions and store the distributions of the light source contribution ratios in each of the small regions.”
Therefore, claims 7, 8, 13, 15, and 19 are considered allowable.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALICIA HA whose telephone number is (571)272-3601. The examiner can normally be reached Mon-Thurs 9:00 AM - 6:00 PM, and Fri 9:00 AM - 1:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kee Tung can be reached at (571) 272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ALICIA HA/Examiner, Art Unit 2611
/KEE M TUNG/Supervisory Patent Examiner, Art Unit 2611