DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
1. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
2. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
3. Claims 1, 2, 3, 11, 12, 16, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Sunkavalli et al., US 10665011 B1, in view of Gilpin et al., US 20230252764, and further in view of Huang et al., CN 113807398 A.
4. As per claim 1, Sunkavalli discloses: An image rendering method, comprising:
determining an image to be rendered, wherein a virtual object is newly added to the image to be rendered; (Sunkavalli, Column 2, lines 41-43: “In some embodiments, for example, the disclosed systems identify a request to render a virtual object at a designated position within a digital scene.”)
determining local lighting information and global lighting information corresponding to the image to be rendered; (Sunkavalli, Column 32, lines 50-59: “Further, when using the neural network for “Local,” the lighting estimation system 110 improves the accuracy in terms of MAE loss. Such an improved MAE loss suggests that local lighting can differ significantly from global lighting and that a local patch and a local anterior set of network layers better captures local lighting than a global anterior set of network layers by itself.”) and
projecting and rendering the virtual object in the image to be rendered by combining the local lighting information with the global lighting information. (Sunkavalli, Column 5, lines 25-29: “The lighting estimation system may further render a modified digital scene comprising a virtual object at the designated depth position according to the location-specific-depth parameters in response to a render request.”, and Column 32, lines 53-65: “Such an improved MAE loss suggests that local lighting can differ significantly from global lighting and that a local patch and a local anterior set of network layers better captures local lighting than a global anterior set of network layers by itself. The lighting estimation system 110 measured even better accuracy in terms of MAE loss by using the neural network for “Local+Global.” Such an improved MAE loss suggests that a combination of a global anterior set of network layers, a masking feature map, and a local anterior set of network layers in a local-lighting-estimation-neural network improves accuracy of capturing local lighting conditions at designated positions.”)
5. Sunkavalli doesn’t expressly disclose: wherein projecting and rendering the virtual object in the image to be rendered by combining the local lighting information with the global lighting information comprises:
projecting the virtual object by using the global lighting information as a whole background; and
incorporating the projection of the virtual object based on the local lighting information in the whole background to reflect the change of projection.
6. Gilpin discloses: projecting the virtual object by using the global lighting information as a whole background. (Gilpin, [0042], “The image generation system 110 is configured to render the camera lens view to obtain a photorealistic view of the physical environment. For example, the image generation system 110 may use global illumination to generate the photorealistic image of the physical environment. The photorealistic image may include an image of the background and all the objects contained within the real physical environment.”)
7. Gilpin is analogous art with respect to Sunkavalli because they are from the same field of endeavor, namely image processing. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to include the process of projecting the virtual object by using the global lighting information as a whole background, as taught by Gilpin, in the method of Sunkavalli. The motivation for doing so would be to simulate a physical environment containing an object, the simulated physical environment corresponding to a real physical environment in which the object is disposed. Therefore, it would have been obvious to combine Sunkavalli with Gilpin.
8. Sunkavalli in view of Gilpin doesn’t expressly disclose:
incorporating the projection of the virtual object based on the local lighting information in the whole background to reflect the change of projection.
9. Huang discloses: incorporating the projection of the virtual object based on the local lighting information in the whole background to reflect the change of projection. (Huang, “S305, normalized by gradient histogram; in order to adapt the change of local illumination and the change of foreground-background contrast, the change range of the gradient intensity is large, so it needs to perform normalization processing.”)
10. Huang is analogous art with respect to Sunkavalli in view of Gilpin because they are from the same field of endeavor, namely image processing. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to include the process of incorporating the projection of the virtual object based on the local lighting information in the whole background to reflect the change of projection, as taught by Huang, in the method of Sunkavalli in view of Gilpin. The motivation for doing so would be to adapt to changes in local illumination and in foreground-background contrast. Therefore, it would have been obvious to combine Sunkavalli in view of Gilpin with Huang.
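For illustration only, and not as part of the grounds of rejection, the following minimal Python sketch shows one way a renderer could use global lighting as a whole-background term and local lighting for the virtual object's projection, in the manner the combined references describe. The array shapes, the linear lighting model, and all names are illustrative assumptions, not any reference's disclosed implementation.

# Hypothetical sketch -- assumed shapes and lighting model, for illustration only.
import numpy as np

def composite_with_lighting(background_rgb, object_rgb, object_mask,
                            global_light, local_light):
    """background_rgb, object_rgb: (H, W, 3) floats in [0, 1].
    object_mask: (H, W) bools marking pixels covered by the virtual object.
    global_light: scalar intensity lighting the whole background.
    local_light: (H, W) per-pixel intensity where the object projects."""
    out = background_rgb * global_light                # global lighting as the whole background
    lit_object = object_rgb * local_light[..., None]   # local lighting modulates the object
    out[object_mask] = lit_object[object_mask]         # incorporate the object's projection
    return np.clip(out, 0.0, 1.0)

# Toy usage with random data.
H, W = 4, 4
background = np.random.rand(H, W, 3)
obj = np.random.rand(H, W, 3)
mask = np.zeros((H, W), dtype=bool)
mask[1:3, 1:3] = True
rendered = composite_with_lighting(background, obj, mask,
                                   global_light=0.8,
                                   local_light=np.full((H, W), 0.6))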
11. As per claim 2, Sunkavalli in view of Gilpin and further in view of Huang discloses: The method of claim 1, wherein the determining local lighting information and global lighting information corresponding to the image to be rendered comprises:
obtaining local lighting information corresponding to the image to be rendered through local light estimation of the image to be rendered; (Sunkavalli, Column 2, lines 14-19: “For example, the disclosed systems can render a virtual object in a digital scene by using a local-lighting-estimation-neural network to analyze both global and local features of the digital scene and generate location-specific-lighting parameters for a designated position within the digital scene.”)
identifying a target scene area from the image to be rendered and determining local lighting information of pixels in the target scene area; (Sunkavalli, Column 3, lines 18-23: “FIG. 4B illustrates a lighting estimation system training a local-lighting-estimation-neural network to generate localized-lighting-spherical-harmonic coefficients for designated positions within digital scenes using additional output parameters and a discriminator-neural network in accordance with one or more embodiments.”, and Column 26, lines 63-66: “As part of the rendering, the lighting estimation system 110 can select and renders pixel for the virtual object 493 that reflect lighting, shading, or appropriate color hues indicated by the localized-lighting-spherical-harmonic coefficients 490.”) and
performing weighted averaging on the local lighting information of pixels in the target scene area, to obtain the global lighting information corresponding to the image to be rendered. (Sunkavalli, Column 1, lines 40-44: “For example, conventional augmented-reality systems often use simple heuristics to create lighting conditions, such as by relying on mean-brightness values for pixels of (or around) an object to create lighting conditions in an ambient-light model.”, and Column 9, lines 41-47: “For instance, in some embodiments, location-specific-lighting parameters define, specify, or otherwise indicate lighting or shading of pixels corresponding to a designated position of a digital scene. Such location-specific-lighting parameters may define the shade or hue of pixels for a virtual object at a designated position.”)
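For illustration only, a minimal sketch of the kind of weighted averaging recited in claim 2: per-pixel local lighting values inside a target scene area are reduced to a single global value. The scalar lighting representation and the uniform default weights are illustrative assumptions, not any reference's disclosed implementation.

# Hypothetical sketch -- scalar per-pixel lighting and uniform weights are assumptions.
import numpy as np

def global_from_local(local_lighting, area_mask, weights=None):
    """local_lighting: (H, W) per-pixel local lighting estimates.
    area_mask: (H, W) bools for the target scene area (e.g., a ground plane).
    weights: optional (H, W) non-negative weights; uniform if omitted.
    Assumes the target area is non-empty."""
    if weights is None:
        weights = np.ones_like(local_lighting)
    w = weights * area_mask                              # zero out pixels outside the area
    return float((local_lighting * w).sum() / w.sum())   # weighted average over the area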
12. As per claim 3, Sunkavalli in view of Gilpin and further in view of Huang discloses: The method of claim 2, wherein the target scene area is a ground area, a wall area or a top ceiling area segmented according to different scenes in the image to be rendered. (Sunkavalli, Column 29, lines 62-67: “As shown in rendered form in FIG. 5B, the location-specific-lighting parameters indicate realistic lighting conditions for the virtual object 508a with lighting and shading consistent with the real objects 512a and 512b. The shading for the virtual object 508a and for the real object 512b consistently reflect light from the real object 512a (e.g., a lamp and artificial light source) and from a light source outside the perspective shown in the modified digital scene 516.”)
13. Claims 11 and 16 are similar in scope to claim 1 and are thus rejected under the same rationale.
14. Claims 12 and 17 are similar in scope to claim 2 and are thus rejected under the same rationale.
15. Claims 4, 5, 13, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Sunkavalli et al., US 10665011 B1, in view of Gilpin et al., US 20230252764, in view of Huang et al., CN 113807398 A, in view of GE et al., CN 114596589 A, and further in view of Paladidni et al., US 20160343161 A1.
16. As per claim 4, Sunkavalli in view of Gilpin and further in view of Huang discloses: The method of claim 2, wherein performing weighted averaging on the local lighting information of pixels in the target scene area, to obtain the global lighting information corresponding to the image to be rendered, comprises:
determining local lighting directions for the pixels in the target scene area from the local lighting information of the pixels in the target scene area; (Sunkavalli, Column 13, lines 12-15: “In equation (2), ƒ represents the light intensity for each direction shown by visual portions of a cube map, where a solid angle corresponding to a pixel position weights the light intensity.”)
17. Sunkavalli in view of Gilpin and further in view of Huang doesn’t expressly disclose:
performing weighted averaging on the local lighting directions for pixels in the target scene area to obtain a global average lighting direction corresponding to the image to be rendered; and
determining the global lighting information corresponding to the image to be rendered according to the global average lighting direction corresponding to the image to be rendered, wherein the global lighting information is used for indicating generation of a global parallel light along the global average lighting direction.
18. GE discloses:
performing weighted averaging on the local lighting directions for pixels in the target scene area to obtain a global average lighting direction corresponding to the image to be rendered. (Ge, “Flocal = F + r (4); in the formula, the superscript ‘local’ of Flocal is used for indicating that it is the output of the local light weight; it will be used as the input of step (2.2); (2.2) layer 2 of the global light weight, performing the following sub-process: (2.2.1) using the embedded module to compress the channel number of the input characteristic graph, obtaining the characteristic graph E of dimension d, wherein d is less than 1024.”)
19. GE is analogous art with respect to Sunkavalli in view of Gilpin and Huang because they are from the same field of endeavor, namely image processing. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to include the process of performing weighted averaging on the local lighting directions for pixels in the target scene area to obtain a global average lighting direction corresponding to the image to be rendered, as taught by GE, in the method of Sunkavalli in view of Gilpin and Huang. The motivation for doing so would be to improve the characteristic expression capability of the model. Therefore, it would have been obvious to combine Sunkavalli in view of Gilpin and Huang with GE.
20. Sunkavalli in view of Gilpin, Huang, and GE doesn’t expressly disclose:
determining the global lighting information corresponding to the image to be rendered according to the global average lighting direction corresponding to the image to be rendered, wherein the global lighting information is used for indicating generation of a global parallel light along the global average lighting direction.
21. Paladidni discloses: determining the global lighting information corresponding to the image to be rendered according to the global average lighting direction corresponding to the image to be rendered, wherein the global lighting information is used for indicating generation of a global parallel light along the global average lighting direction. (Paladidni, [0029]: “a) tracing 104 a plurality of light rays into a scene containing volumetric data, the light rays configured for simulating global illumination; (b) randomizing 106 the scattering location and direction of the plurality of light rays through the volume, wherein a common sequence of random numbers is used in order for the scattering direction of each of the plurality of randomized scattered light rays to be substantially parallel; (c) computing 108 at least one trilinearly interpolated and shaded sample along each of the plurality of randomized scattered light rays based on stored volumetric data, wherein at least a portion of the stored volumetric data used in at least a portion of the computing is configured for coherent access; and (d) rendering 110 the volume with global illumination based on a plurality of iterations of the tracing, the randomizing, and the computing.”)
22. Paladidni is analogous art with respect to Sunkavalli in view of Gilpin, Huang, and GE because they are from the same field of endeavor, namely image processing. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to include the process of determining the global lighting information corresponding to the image to be rendered according to the global average lighting direction corresponding to the image to be rendered, wherein the global lighting information is used for indicating generation of a global parallel light along the global average lighting direction, as taught by Paladidni, in the method of Sunkavalli in view of Gilpin, Huang, and GE. The motivation for doing so would be to obtain significant performance improvements and produce less noisy images. Therefore, it would have been obvious to combine Sunkavalli in view of Gilpin, Huang, and GE with Paladidni.
23. As per claim 5, Sunkavalli in view of Gilpin, Huang, GE, and Paladidni discloses: The method of claim 4, wherein the performing weighted averaging on the local lighting directions for pixels in the target scene area to obtain a global average lighting direction corresponding to the image to be rendered comprises: determining a pixel identification probability in the target scene area, wherein the pixel identification probability is a probability that a pixel is identified as a pixel belonging to the target scene area; and performing weighted averaging on the local lighting directions for pixels in the target scene area according to the pixel identification probability in the target scene area, to obtain the global average lighting direction corresponding to the image to be rendered. (Sunkavalli, Column 8, lines 58-67, and Column 9, lines 1-3: “As further used herein, the term “local position indicator” refers to a digital identifier for a location within a digital scene. For example, in certain implementations, a local position indicator includes a digital coordinate, pixel, or other marker indicating a designated position within a digital scene from a request to render a virtual object. To illustrate, a local position indicator may be a coordinate representing a designated position or a pixel (or coordinate for a pixel) corresponding to the designated position. Among other embodiments, the lighting estimation system 110 may generate (and input) a local position indicator into the local-lighting-estimation-neural network 112.”, and Column 9, lines 15-22: “In addition to identifying a local position indicator and extracting a local patch, the lighting estimation system 110 uses the local-lighting-estimation-neural network 112 to analyze the digital scene 100, the local position indicator 102, and the local patch 104. The term “local-lighting-estimation-neural network” refers to an artificial neural network that generates lighting parameters indicating lighting conditions for a position within a digital scene.”)
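For illustration only, a minimal sketch of the direction-averaging step recited in claims 4 and 5: per-pixel local lighting directions are averaged with pixel-identification probabilities as weights and renormalized, yielding one global average direction along which a parallel light could be generated. The unit-vector representation and the fallback direction are illustrative assumptions, not any reference's disclosed implementation.

# Hypothetical sketch -- unit-vector directions and probability weights are assumptions.
import numpy as np

def global_light_direction(directions, area_probs):
    """directions: (H, W, 3) per-pixel unit lighting-direction vectors.
    area_probs: (H, W) probability that each pixel belongs to the target area."""
    weighted = (directions * area_probs[..., None]).reshape(-1, 3).sum(axis=0)
    norm = np.linalg.norm(weighted)
    if norm == 0.0:                            # degenerate case: assumed fallback direction
        return np.array([0.0, 0.0, 1.0])
    return weighted / norm                     # one global direction for a parallel light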
24. Claims 13 and 18 are similar in scope to claim 4 and are thus rejected under the same rationale.
25. Claims 6, 14, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Sunkavalli et al., US 10665011 B1, in view of Gilpin et al., US 20230252764, in view of Huang et al., CN 113807398 A, in view of GE et al., CN 114596589 A, in view of Paladidni et al., US 20160343161 A1, and further in view of Chae et al., KR 100879536.
26. As per claim 6, Sunkavalli in view of Gilpin, Huang, GE, and Paladidni discloses: The method of claim 1, wherein the projecting and rendering the virtual object in the image to be rendered by combining the local lighting information with the global lighting information comprises: projecting and rendering the virtual object in the image to be rendered by using the joint lighting information. (Sunkavalli, Column 5, lines 25-29: “The lighting estimation system may further render a modified digital scene comprising a virtual object at the designated depth position according to the location-specific-depth parameters in response to a render request.”, and Column 32, lines 53-65: “Such an improved MAE loss suggests that local lighting can differ significantly from global lighting and that a local patch and a local anterior set of network layers better captures local lighting than a global anterior set of network layers by itself. The lighting estimation system 110 measured even better accuracy in terms of MAE loss by using the neural network for “Local+Global.” Such an improved MAE loss suggests that a combination of a global anterior set of network layers, a masking feature map, and a local anterior set of network layers in a local-lighting-estimation-neural network improves accuracy of capturing local lighting conditions at designated positions.”)
27. Sunkavalli in view of Gilpin, Huang, GE, and Paladidni doesn’t expressly disclose:
combining the local lighting information with the global lighting information by using Gamma correction to obtain joint lighting information to be adopted by the image to be rendered;
28. Chae discloses: combining the local lighting information with the global lighting information by using Gamma correction to obtain joint lighting information to be adopted by the image to be rendered; (Chae, “In operation 1172, a process of checking whether global / local lighting components and reflectance components are completed is performed. In operation 1174, gamma correction is performed on the estimated global / local lighting components and reflectance components. The reason for the gamma correction is to increase the contrast for the reflectance component while reducing the overall dynamic range for the estimated global / local illumination component and the reflectance component. Reference numeral 1176 denotes a process of processing histogram modeling, and 1180 denotes a process of generating an output image.”)
29. Chae is analogous art with respect to Sunkavalli in view of Gilpin, Huang, GE, and Paladidni because they are from the same field of endeavor, namely image processing. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to include the process of combining the local lighting information with the global lighting information by using Gamma correction to obtain joint lighting information to be adopted by the image to be rendered, as taught by Chae, in the method of Sunkavalli in view of Gilpin, Huang, GE, and Paladidni. The motivation for doing so would be to increase the contrast for the reflectance component while reducing the overall dynamic range for the estimated global/local illumination component. Therefore, it would have been obvious to combine Sunkavalli in view of Gilpin, Huang, GE, and Paladidni with Chae.
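For illustration only, a minimal sketch of combining local and global lighting with gamma correction to obtain joint lighting, in the spirit of the Chae passage quoted above. The blend weight and the gamma value are illustrative assumptions, not Chae's disclosed parameters.

# Hypothetical sketch -- the blend weight and gamma value are assumptions.
import numpy as np

def joint_lighting(global_map, local_map, alpha=0.5, gamma=2.2):
    """global_map, local_map: (H, W) lighting intensities in [0, 1].
    alpha: blend weight between the local and global terms."""
    blended = alpha * local_map + (1.0 - alpha) * global_map
    return np.clip(blended, 0.0, 1.0) ** (1.0 / gamma)   # gamma correction compresses dynamic range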
30. Claims 14 and 19 are similar in scope to claim 6 and are thus rejected under the same rationale.
31. Claims 7 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Sunkavalli et al., US 10665011 B1, in view of Gilpin et al., US 20230252764, in view of Huang et al., CN 113807398 A, in view of GE et al., CN 114596589 A, in view of Paladidni et al., US 20160343161 A1, in view of Chae et al., KR 100879536, and further in view of Hou, US 11574437 B2.
32. As per claim 7, Sunkavalli in view of Gilpin, Huang, GE, Paladidni, and Chae discloses: The method of claim 6. (See rejection of claim 6 above.)
33. Sunkavalli in view of Gilpin, Huang, GE, Paladidni, and Chae doesn’t expressly disclose: wherein the projecting and rendering the virtual object in the image to be rendered by using the joint lighting information comprises:
determining pixel depth information and pixel roughness of the virtual object in the image to be rendered;
performing surface reconstruction on the virtual object according to the pixel depth information and the pixel roughness of the virtual object, to obtain a three-dimensional texture mesh corresponding to the virtual object; and
projecting and rendering the virtual object in the image to be rendered using the three-dimensional texture mesh corresponding to the virtual object and the joint lighting information.
34. Hou discloses:
determining pixel depth information (Hou, Column 10, lines 4-7: “In steps 206 to 209, the terminal obtains the model coordinates of the plurality of pixels according to the current viewing angle and the depth information of the plurality of pixels.”) and pixel roughness of the virtual object in the image to be rendered; (Hou, Column 9, lines 47-50: “In the foregoing process, the terminal obtains the model coordinates of the plurality of pixels according to the world coordinates of the plurality of pixels. Model coordinates are used for describing texture information of a pixel relative to a model base point of a virtual object.”)
performing surface reconstruction on the virtual object according to the pixel depth information and the pixel roughness of the virtual object, to obtain a three-dimensional texture mesh corresponding to the virtual object; (Hou, Column 20, lines 47-50: “obtaining model coordinates of the plurality of pixels according to a current viewing angle associated with the virtual scene and the depth information of the plurality of pixels, the model coordinates being used for describing texture information of the pixels relative to a model vertex of each virtual object, the model vertex having associated vertex data comprising coordinates of a view and a ray direction for a pixel on a surface of the rendering structure.”, and Column 9, lines 30-32: “In some embodiments, the model coordinate system in the rendering engine is a coordinate system in which the virtual object is located in a three-dimensional model.”) and
projecting and rendering the virtual object in the image to be rendered using the three-dimensional texture mesh corresponding to the virtual object and the joint lighting information. (Hou, Column 5, lines 39-48: “In the foregoing process, the terminal directly left multiplies the view matrix of the illumination direction by the world coordinates of at least one virtual object in the virtual scene, transforms the at least one virtual object from the current viewing angle to a viewing angle of the illumination direction, and obtains a real-time image of the at least one virtual object from the viewing angle of the illumination direction as the at least one shadow map. Each shadow map corresponds to a virtual object and is used for providing texture (UV) information of a shadow of the virtual object.”)
35. Hou is analogous art with respect to Sunkavalli in view of Gilpin, Huang, GE, Paladidni, and Chae because they are from the same field of endeavor, namely image processing. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to include determining pixel depth information and pixel roughness of the virtual object in the image to be rendered; performing surface reconstruction on the virtual object according to the pixel depth information and the pixel roughness of the virtual object, to obtain a three-dimensional texture mesh corresponding to the virtual object; and projecting and rendering the virtual object in the image to be rendered using the three-dimensional texture mesh corresponding to the virtual object and the joint lighting information, as taught by Hou, in the method of Sunkavalli in view of Gilpin, Huang, GE, Paladidni, and Chae. The motivation for doing so would be to avoid rendering a poor shadow. Therefore, it would have been obvious to combine Sunkavalli in view of Gilpin, Huang, GE, Paladidni, and Chae with Hou.
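For illustration only, a minimal sketch of the claim 7 pipeline as mapped above: a per-pixel depth map is back-projected into a vertex grid with triangle faces, and per-pixel roughness is carried along as a per-vertex material attribute for later rendering. The camera intrinsics and the grid triangulation scheme are illustrative assumptions, not any reference's disclosed implementation.

# Hypothetical sketch -- intrinsics and grid triangulation are assumptions.
import numpy as np

def depth_to_mesh(depth, roughness, fx=500.0, fy=500.0):
    """depth, roughness: (H, W) per-pixel arrays for the virtual object.
    Returns (vertices, faces, vertex_roughness) for a textured triangle mesh."""
    H, W = depth.shape
    cx, cy = W / 2.0, H / 2.0
    v, u = np.mgrid[0:H, 0:W]                  # pixel row/column grids
    x = (u - cx) * depth / fx                  # back-project pixels to camera space
    y = (v - cy) * depth / fy
    vertices = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    faces = []
    for r in range(H - 1):                     # two triangles per pixel quad
        for c in range(W - 1):
            i = r * W + c
            faces.append((i, i + 1, i + W))
            faces.append((i + 1, i + W + 1, i + W))
    return vertices, np.asarray(faces), roughness.reshape(-1)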
36. As per claim 8, Sunkavalli in view of Gilpin, Huang, GE, Paladidni, Chae, and Hou discloses: The method of claim 7, wherein the projecting and rendering the virtual object in the image to be rendered comprises:
applying lighting corresponding to the joint lighting information to the virtual object in the image to be rendered, and projecting a shadow of the virtual object on the image to be rendered. (Hou, Column 12, lines 1-8, “In the method provided in the embodiments of this application, according to an illumination direction in a virtual scene, at least one rendering structure in the virtual scene is obtained, and by using the at least one rendering structure as a model of shadow rendering, according to a current viewing angle and depth information of a plurality of pixels, model coordinates of the plurality of pixels are obtained, so that the model coordinates correspond one-to-one with a texture (UV) mapping space of a shadow map. At least one shadow map is sampled according to the model coordinates of the plurality of pixels to obtain a plurality of sampling points corresponding to the plurality of pixels. The plurality of sampling points are rendered in the virtual scene to obtain at least one shadow. Therefore, the effect of shadow rendering is improved, real-time rendering of shadows may also be implemented based on the function of a rendering engine, and the processing efficiency of a terminal CPU is improved.”)
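For illustration only, a minimal sketch of projecting a hard shadow of the virtual object onto the image, as claim 8 recites: the object's mask is shifted along the image-space light direction and the covered background pixels are darkened. The shift length and darkening factor are illustrative assumptions, not Hou's shadow-map method.

# Hypothetical sketch -- the shift length and darkening factor are assumptions.
import numpy as np

def cast_shadow(image, object_mask, light_dir_xy, length=5, darken=0.5):
    """image: (H, W, 3) floats; object_mask: (H, W) bools;
    light_dir_xy: image-space (x, y) direction the shadow falls along."""
    H, W = object_mask.shape
    dx = int(round(light_dir_xy[0] * length))
    dy = int(round(light_dir_xy[1] * length))
    shadow = np.zeros_like(object_mask)
    ys, xs = np.nonzero(object_mask)           # pixels covered by the object
    shadow[np.clip(ys + dy, 0, H - 1), np.clip(xs + dx, 0, W - 1)] = True
    shadow &= ~object_mask                     # never darken the object itself
    out = image.copy()
    out[shadow] *= darken                      # darken shadowed background pixels
    return out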
37. Claims 9, 15, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Sunkavalli et al., US 10665011 B1, in view of Gilpin et al., US 20230252764, in view of Huang et al., CN 113807398 A, and further in view of Hou, US 11574437 B2.
38. As per claim 9, Sunkavalli in view of Gilpin and further in view of Huang discloses: The method of claim 1. (See rejection of claim 1 above.)
39. Sunkavalli in view of Gilpin and further in view of Huang doesn’t expressly disclose: wherein, for projecting and rendering the virtual object in the image to be rendered, the method further comprises:
determining pixel depth information of the virtual object in the image to be rendered; and
determining and adjusting a size matched with the virtual object according to the pixel depth information of the virtual object through a preset virtual object scaling relationship, wherein the preset virtual object scaling relationship is used for recording a correlation relationship between a size of the virtual object in the image and a pixel depth of the virtual object.
40. Hou discloses: determining pixel depth information of the virtual object in the image to be rendered (Hou, Column 10, lines 4-7: “In steps 206 to 209, the terminal obtains the model coordinates of the plurality of pixels according to the current viewing angle and the depth information of the plurality of pixels.”, and Column 2, lines 21-28); and
determining and adjusting a size matched with the virtual object according to the pixel depth information of the virtual object through a preset virtual object scaling relationship, wherein the preset virtual object scaling relationship is used for recording a correlation relationship between a size of the virtual object in the image and a pixel depth of the virtual object. (Hou, Column 6, lines 26-34: “In the foregoing process, the initial size and the initial position matching the at least one virtual object refers to the following case: For each virtual object, an area of a bottom surface of a rendering structure corresponding to the virtual object is greater than or equal to an area of a bottom surface of a model of the virtual object, and an initial position of the rendering structure is at a position that can coincide with the bottom surface of the model of the virtual object in both horizontal and vertical directions.”, and Column 7, lines 66-67, and Column 8, lines 1-9)
41. Hou is analogous art with respect to Sunkavalli in view of Gilpin and Huang because they are from the same field of endeavor, namely image processing. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to include determining pixel depth information of the virtual object in the image to be rendered, and determining and adjusting a size matched with the virtual object according to the pixel depth information of the virtual object through a preset virtual object scaling relationship, wherein the preset virtual object scaling relationship is used for recording a correlation relationship between a size of the virtual object in the image and a pixel depth of the virtual object, as taught by Hou, in the method of Sunkavalli in view of Gilpin and Huang. The motivation for doing so would be to avoid rendering a poor shadow. Therefore, it would have been obvious to combine Sunkavalli in view of Gilpin and Huang with Hou.
42. Claims 15 and 20 are similar in scope to claim 9 and are thus rejected under the same rationale.
43. Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Sunkavalli et al., US 10665011 B1, in view of Gilpin et al., US 20230252764, in view of Huang et al., CN 113807398 A, in view of Hou, US 11574437 B2, and further in view of Hagland, US 20210158597 A1.
44. As per claim 10, Sunkavalli in view of Gilpin, Huang, and Hou discloses: The method of claim 9. (See rejection of claim 9 above.)
45. Sunkavalli in view of Gilpin, Huang, and Hou doesn’t expressly disclose: wherein when the pixel depth of the virtual object is less than a preset depth, the size of the virtual object recorded in the preset virtual object scaling relationship is negatively correlated with the pixel depth of the virtual object; and when the pixel depth of the virtual object is greater than or equal to the preset depth, the virtual object recorded in the preset virtual object scaling relationship maintains a preset size.
46. Hagland discloses: when the pixel depth of the virtual object is less than a preset depth, the size of the virtual object recorded in the preset virtual object scaling relationship is negatively correlated with the pixel depth of the virtual object; and when the pixel depth of the virtual object is greater than or equal to the preset depth, the virtual object recorded in the preset virtual object scaling relationship maintains a preset size. (Hagland, [0043]: “As yet another example, the shader parameter adjuster A reduces an amount of detail of the virtual object in the image frames 102A-102C having the virtual object, or increases a distance, in the depth dimension, of the virtual object within the image frames, or decreases a number of ray iterations that are used to generate the color and intensity of pixels representing the virtual object in the image frames 102A-102C, or a combination thereof. As still another example, the shader parameter adjuster A reduces an amount of detail of the virtual object in the image frames 102A-102C having the virtual object to the preset level of detail, or increases a distance, in the depth dimension, of the virtual object within the image frames 102A-102C to the preset distance, or decreases a number of ray iterations that are used to generate the color and intensity of pixels representing the virtual object in the image frames to the preset number of ray iterations, or a combination thereof.”)
47. Hagland is analogous art with respect to Sunkavalli in view of Gilpin, Huang, and Hou because they are from the same field of endeavor, namely image processing. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to include: when the pixel depth of the virtual object is less than a preset depth, the size of the virtual object recorded in the preset virtual object scaling relationship is negatively correlated with the pixel depth of the virtual object; and when the pixel depth of the virtual object is greater than or equal to the preset depth, the virtual object recorded in the preset virtual object scaling relationship maintains a preset size, as taught by Hagland, in the method of Sunkavalli in view of Gilpin, Huang, and Hou. The motivation for doing so would be that adjusting the one or more shader parameters changes the level of complexity of the image frame being processed by the GPU. Therefore, it would have been obvious to combine Sunkavalli in view of Gilpin, Huang, and Hou with Hagland.
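For illustration only, a minimal sketch of the claim 10 scaling relationship: below a preset depth the rendered size falls as depth grows (a negative correlation), and at or beyond the preset depth the size holds a preset constant. The 1/depth falloff is an illustrative assumption; the default values are chosen so the two branches meet continuously at the preset depth (k / preset_depth == preset_size).

# Hypothetical sketch -- the 1/depth falloff and default values are assumptions.
def object_scale(pixel_depth, preset_depth=10.0, preset_size=1.0, k=10.0):
    """Display size for a virtual object at a given pixel depth."""
    if pixel_depth < preset_depth:
        return k / pixel_depth                 # shrinks as depth grows (negative correlation)
    return preset_size                         # constant preset size at or beyond the threshold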
Response to Arguments
48. Applicant’s arguments with respect to claims 1-20 have been considered but are moot because Applicant submitted newly amended claims. Accordingly, new grounds of rejection are set forth above. The new grounds of rejection were necessitated by Applicant’s amendments to the claims.
Conclusion
49. Applicant’s amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ABDERRAHIM MEROUAN whose telephone number is (571)270-5254. The examiner can normally be reached on Monday to Friday 7:30 AM to 5:00 PM.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ABDERRAHIM MEROUAN/Supervisory Patent Examiner, Art Unit 2683