DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment / Arguments
Applicant added the following to the independent claims (the added material, shown in bold print in the amended claims, is reproduced below):
reconstruction information, wherein the reconstruction information specifies a manner in which to shade pixels of the output image utilizing the plurality of shade space textures, and
wherein the reconstruction information facilitates performing reconstruction without performing geometry processing
The “without performing geometry processing” feature is, respectfully, described in only one paragraph of Applicant’s specification, paragraph 50 of the published application. Paragraph 50 also gives some examples of reconstruction information, which are taught by Garvey, as discussed below. All of the below are direct quotes from paragraph 50:
The reconstruction information 812 includes the shade space texture coordinates, the mesh ID, the shade space level of detail, the anisotropy, and the gradients for each pixel, or any other information required by the operation of the reconstruction filter, but not available at the later processing stages. … it is not strictly necessary to output these values as part of the visibility information 814.
However, these values are useful for the reconstruction operation 806 to execute without performing geometry processing.
As noted above, paragraph 50 lists some examples of reconstruction information. These examples are taught by Garvey, as mapped in the prior office action. See, e.g., paras. 97-99; see also Figs. 11, 14, 17, 19 and/or 20. Garvey teaches the reconstruction information as currently claimed in multiple ways, as mapped herein and in prior office actions.
Accordingly, the 103 rejections are respectfully maintained. Please see the remainder of this office action for details.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Garvey (U.S. Patent App. Pub. No. 2025/0095266 A1).
Regarding claim 1:
Garvey teaches: a method for rendering to generate an output image (para. 42, a method using the content generation system of Fig. 1, in combination with para. 37, which states that “content” is a term used by Garvey to refer to graphical content and/or images), the method comprising:
for a scene including a plurality of objects, performing a visibility operation to generate shade space visibility information (para. 5, identifying which shading elements are visible in a set of shading elements, in combination with para. 58, visibility passes can be done for primitives in a scene) indicating visible portions of shade space textures mapped to each object (Claim interpretation: the examiner is interpreting “shade space textures” to be “”canvases”” to which shading operations are applied” (quoting [0035] of Applicant’s published application, until Applicant amends the claim to require a different interpretation. This is taught by Garvey in at least three alternate ways: (1) Fig. 7: shading atlas, “ a 2D data structure that includes shading information of visible surfaces corresponding to rendered scenes. The shading atlas 708 may also be referred to as a texture atlas” (quoting para. 80). A second way this is taught: (2) “ a real-time dicing oracle may output a view-dependent mip region map for each visible meshlet in a scene. The view-dependent mip region map may be used for texture space shading.” (para. 90). A third way this is taught (3):para. 38: texture space shading) and reconstruction information, wherein the reconstruction information specifies a manner in which to shade pixels of the output image utilizing the plurality of shade space textures (example: paras 97, 99, screen space derivatives, after visibility. See also Fig. 11: 1104, 1106) (another example: para. 39, coordinates) (another example: paras. 85-86, mip maps) (another example: para. 40) (another example: para. 97: rate at which “v” increases or decreases moving horizontally) (another example: para. 99: shading rate for every visible shadel), and
wherein the reconstruction information facilitates performing reconstruction without performing geometry processing (this is taught by the reconstruction information examples mapped above; the shading rate example teaches this, as maintaining a shading rate separate from the rasterization or other geometry processing rates decouples shading from those operations; the screen space derivatives example also teaches this; and the features in paras. 96-99, mapped above, satisfy this “without performing geometry processing” claim feature; see also Figs. 11, 14, 17, 19 and/or 20);
performing a shade space shading operation based on the shade space visibility information to shade visible portions of the shade space textures to generate shaded shade space textures (Fig. 5 and related description, which refers to texture space shading (TSS); TSS corresponds to this performing step, see e.g. para. 71) (alternatively: see Garvey claim 11, associating visible shading elements with a set of texels, the associating being the “shade space shading operation” and the shaded texels being the generated “shaded shade space textures”) (as another alternative: shading based on the reconstruction information mappings above); and
performing a reconstruction operation with reuse of the reconstruction information for multiple successive frames to apply the shaded shade space textures to an output image, based on the reconstruction information to generate the output image (e.g. Fig. 5: 518, appearance re-sampling is an exemplary “reconstruction operation”; another example: paras. 69-70, converting to pixels in screen space and performing pixel shading; another example: para. 90, color lookup; another example: para. 97, computing screen space derivatives, or rasterization or vertex shading; another example: Fig. 11)
(Claim interpretation: para. 39 of Applicant’s specification as filed describes the “reconstruction operation” to include “applying the shade space textures to the geometry of the scene to result in a final image”. Examples of reconstruction operations, per Applicant’s para. 39, include ones that are taught by Garvey and mapped above.)
In terms of “reuse…for multiple successive frames”, nothing in Garvey says that the reconstruction information can only be used once or for one frame alone. This teaches reuse of the reconstruction information for “multiple successive frames”, for example.
It would have been obvious for one of ordinary skill in the art to have modified the applied reference(s), in view of the same, to have obtained the above, and the results of the modification would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP § 2143(A).
The prior art included each element recited in claim 1, although not necessarily in a single embodiment, with the only difference between the claimed invention and the prior art being the lack of actual combination of certain elements in a single prior art embodiment, as described above.
One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention. Additional motivation can be found in the prior art, and includes taking advantage of the teachings of Garvey to improve rendering efficiency and/or optimize memory usage (Garvey, para. 38).
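For illustration only, the decoupled pipeline described in the claim 1 mapping above can be sketched as follows. This sketch is the examiner's own hypothetical (all names, resolutions, and shading values are invented for illustration; it is not code from Garvey or from Applicant's disclosure): a visibility pass produces per-pixel reconstruction information, only visible shade space texels are shaded, and the output image is reconstructed by sampling those texels without geometry processing, with the reconstruction information reused across successive frames.

```python
# Hypothetical sketch of the three claimed steps; not code from the references.

TEX_W, TEX_H = 4, 4   # shade space texture resolution
OUT_W, OUT_H = 2, 2   # output image resolution

def visibility_pass():
    """Step 1: record, per output pixel, which shade space texel it samples.
    The per-pixel (u, v) table is the 'reconstruction information'."""
    recon_info = {(px, py): (px * 2, py * 2)
                  for px in range(OUT_W) for py in range(OUT_H)}
    visible = set(recon_info.values())
    return visible, recon_info

def shade_space_shading(visible, t):
    """Step 2: shade only the visible texels (a trivial time-varying shade)."""
    tex = [[None] * TEX_W for _ in range(TEX_H)]
    for (u, v) in visible:
        tex[v][u] = (u + v + t) * 0.1
    return tex

def reconstruct(tex, recon_info):
    """Step 3: sample shaded texels at the stored coordinates; no geometry
    processing occurs in this step."""
    return {p: tex[v][u] for p, (u, v) in recon_info.items()}

# Reuse the same reconstruction information for three successive frames.
visible, recon_info = visibility_pass()
frames = [reconstruct(shade_space_shading(visible, t), recon_info)
          for t in range(3)]
```

Note that in this sketch the visibility pass runs once while shading and reconstruction run every frame, corresponding to the "reuse…for multiple successive frames" feature discussed above.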
Regarding claim 2:
Garvey teaches: the method of claim 1, wherein performing the reconstruction operation includes sampling the shaded shade space textures based on the reconstruction information to generate the output image (para. 74: re-sampling (sampling) performed on the shaded texels (shaded shade space textures) to obtain screen space pixels (output image pixels), based on reconstruction information (texel information; see para. 71)).
It would have been obvious for one of ordinary skill in the art, as of the effective filing date of Applicant’s claims, to have further modified the applied reference(s), in view of the same, to have obtained the above, motivated to make use of known image processing/rendering techniques to generate output.
Regarding claim 3:
Garvey teaches: the method of claim 1, wherein the reconstruction information includes information for a pixel of the output image that uniquely identifies a shaded shade space texture of the shaded shade space textures (see para. 99, shadels (i.e., shading elements) and mip regions; see also paras. 116-117) (alternatively: Fig. 19 and para. 118: triangle ID and material ID) (second alternative: para. 124) (third alternative: para. 85, mip region map).
It would have been obvious for one of ordinary skill in the art, as of the effective filing date of Applicant’s claims, to have further modified the applied reference(s), in view of the same, to have obtained the above, motivated to have increased control over portions of processed and/or rendered image data.
Regarding claim 4:
Garvey teaches: the method of claim 3, wherein the reconstruction information includes information for the pixel that identifies a location within the shade space texture (see above mapping to claim 3, the mip region, triangle ID and/or material ID all identify locations).
It would have been obvious for one of ordinary skill in the art, as of the effective filing date of Applicant’s claims, to have further modified the applied reference(s), in view of the same, to have obtained the above, motivated to have increased control over portions of processed and/or rendered image data.
Regarding claim 5:
Garvey teaches: the method of claim 3, wherein the reconstruction information includes one or more of a MIP level, an anisotropy level, an anisotropy direction, and texture gradients (see para. 85, which teaches a MIP region map indicating mip levels for texture regions).
It would have been obvious for one of ordinary skill in the art, as of the effective filing date of Applicant’s claims, to have further modified the applied reference(s), in view of the same, to have obtained the above, motivated to have increased control over portions of processed and/or rendered image data, and to correct for resolution differences.
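For illustration of the relationship between texture gradients and a MIP level (two of the quantities claim 5 lists), the following is the conventional isotropic mip selection used in texture mapping generally. This is the examiner's hypothetical sketch of a well-known formula, not code from Garvey or from Applicant's disclosure.

```python
import math

def mip_level(dudx, dvdx, dudy, dvdy):
    """Conventional isotropic mip selection: the footprint scale rho is the
    larger of the two screen space gradient magnitudes, and the mip level is
    log2(rho), clamped at the base level (level 0)."""
    rho = max(math.hypot(dudx, dvdx), math.hypot(dudy, dvdy))
    return max(0.0, math.log2(rho)) if rho > 0 else 0.0
```

For example, a texel footprint of 4x4 texels per pixel corresponds to mip level 2, while a footprint of one texel or less corresponds to the base level.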
Regarding claim 6:
Garvey teaches: the method of claim 1, wherein generating the reconstruction information includes interpolating shade space texture coordinates for object vertices to generate texture coordinates for a pixel of an output image (see paras. 97-98) (alternatively: paras. 102-103) (second alternative: paras. 123-124).
It would have been obvious for one of ordinary skill in the art, as of the effective filing date of Applicant’s claims, to have further modified the applied reference(s), in view of the same, to have obtained the above, motivated to take advantage of known rendering techniques to obtain final results.
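For illustration of the interpolation recited in claim 6, per-vertex texture coordinates are conventionally interpolated to a covered pixel using barycentric weights. The sketch below is the examiner's hypothetical (the helper name and values are invented), not code from Garvey or from Applicant's disclosure.

```python
def interpolate_uv(bary, uv0, uv1, uv2):
    """Barycentric interpolation of per-vertex shade space texture coordinates
    to one covered pixel; the weights (w0, w1, w2) sum to 1."""
    w0, w1, w2 = bary
    return (w0 * uv0[0] + w1 * uv1[0] + w2 * uv2[0],
            w0 * uv0[1] + w1 * uv1[1] + w2 * uv2[1])
```

For example, a pixel halfway weighted toward the first vertex of a triangle with vertex coordinates (0, 0), (1, 0), and (0, 1) receives the interpolated coordinate (0.25, 0.25).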
Regarding claim 7:
Garvey teaches: the method of claim 1, further comprising generating shade space control information and reconstruction control information (see para. 78, the shading rate (shade space control information) can be decoupled from the rasterization rate (reconstruction control information)).
It would have been obvious for one of ordinary skill in the art, as of the effective filing date of Applicant’s claims, to have further modified the applied reference(s), in view of the same, to have obtained the above, motivated to enable split rendering and its variations/benefits with parallel processing.
Regarding claim 8:
Garvey teaches: the method of claim 7, wherein the shade space shading operation is based on the shade space control information and the reconstruction operation is based on the reconstruction control information (see the mapping to claim 7; the decoupled rates respectively govern the shading and reconstruction operations).
It would have been obvious for one of ordinary skill in the art, as of the effective filing date of Applicant’s claims, to have further modified the applied reference(s), in view of the same, to have obtained the above, motivated to enable split rendering and its variations/benefits with parallel processing.
Regarding claim 9:
Garvey teaches: the method of claim 8, wherein the shade space control information and reconstruction control information include information that varies shading rates (para. 78; see also Fig. 20: variable shading rates are known).
It would have been obvious for one of ordinary skill in the art, as of the effective filing date of Applicant’s claims, to have further modified the applied reference(s), in view of the same, to have obtained the above, motivated to enable split rendering and its variations/benefits with parallel processing.
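For illustration of a shading rate that varies per region and is decoupled from the output resolution, as in the claim 9 mapping above, consider the following toy count of shaded texels. This is the examiner's hypothetical sketch (the helper and its inputs are invented), not code from Garvey or from Applicant's disclosure.

```python
def shaded_texel_count(regions):
    """Count shaded texels when each (width, height, rate) region carries its
    own shading rate: rate 1 shades every texel, rate 2 shades every second
    texel per axis, and so on -- independent of the output pixel count."""
    return sum((w // r) * (h // r) for (w, h, r) in regions)
```

For example, two 4x4 regions shaded at rates 1 and 2 require 16 + 4 = 20 shaded texels, rather than 32, without changing the rasterization rate.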
Regarding claim 10: see also claim 1.
Garvey teaches: a system (Fig. 1: 100, a content generation system) comprising: a memory (Fig. 1: 121, 123, 124, memories) configured to store an output image (para. 46, storing graphical content, in combination with para. 37, the term “content” can be an image); and a processor configured to generate the output image by performing operations including the following.
The operations correspond to the method of claim 1; the same rationale for rejection applies.
Regarding claim 11: see claim 2.
These claims are similar; the same rationale for rejection applies.
Regarding claim 12: see claim 3.
These claims are similar; the same rationale for rejection applies.
Regarding claim 13: see claim 4.
These claims are similar; the same rationale for rejection applies.
Regarding claim 14: see claim 5.
These claims are similar; the same rationale for rejection applies.
Regarding claim 15: see claim 6.
These claims are similar; the same rationale for rejection applies.
Regarding claim 16: see claim 7.
These claims are similar; the same rationale for rejection applies.
Regarding claim 17: see claim 8.
These claims are similar; the same rationale for rejection applies.
Regarding claim 18: see claim 9.
These claims are similar; the same rationale for rejection applies.
Regarding claim 19: see also claim 1.
Garvey teaches: a non-transitory computer-readable medium storing instructions that, when executed by a processor, cause the processor to perform operations (Garvey claim 28).
The operations correspond to the method of claim 1; the same rationale for rejection applies.
Regarding claim 20: see claim 2.
These claims are similar; the same rationale for rejection applies.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Sarah Lhymn whose telephone number is (571)270-0632. The examiner can normally be reached M-F, 9:00 AM to 6:00 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Xiao Wu can be reached at 571-272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
Sarah Lhymn
Primary Examiner
Art Unit 2613
/Sarah Lhymn/Primary Examiner, Art Unit 2613