Detailed Action
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
Applicant’s amendments and arguments filed on 1/29/2026 have been considered. Claims 1-16 are pending in the application. Applicant’s amendments to the specification and claims have overcome each and every objection previously set forth in the Non-Final Office Action mailed on 10/31/2025.
Response to Arguments
Applicant’s arguments with respect to claim(s) 1, 2, 9 and 10 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Please see the 35 U.S.C. § 103 rejections below.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1 and 9 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kubisch (“Mesh Shading for Vulkan”), Unterguggenberger (“Conservative Meshlet Bounds for Robust Culling of Skinned Meshes”), Laine et al. (US 11734890 B2), and Øygard et al. (US 20240169639 A1), hereinafter Kubisch, Unterguggenberger, Laine, and Øygard, respectively.
Regarding claim 1, Kubisch teaches an application programming interface (“With the release of the VK_EXT_mesh_shader extension Vulkan gets an alternative geometry rasterization pipeline. This extension brings cross-vendor mesh shading to Vulkan, with a focus on improving functional compatibility with DirectX 12” – Introduction, Paragraph 1. [NOTE: Kubisch teaches the differences and compatible features of the Vulkan graphics API and the DirectX 12 graphics API. See Table 1 and Portability, Paragraph 2]), comprising: a mesh shader, configured to process 3-dimensional objects and output vertices and primitives (Table 1 shows that the mesh shader outputs vertices and primitives, such as triangles, after processing the 3D objects from workgroups output by the task shader – Portability, Table 1);
a rasterizer, linked to the mesh shader, and a fragment shader, linked to the rasterizer (“The new mesh shading pipeline with the task and mesh shading stages provides an alternative to the traditional vertex, tessellation or geometry shader stages that feed into rasterization” – Introduction, Paragraph 3, Fig. 1. [NOTE: Fig. 1 best shows the general pipeline, which connects a mesh shader to a rasterizer to a pixel shader (also known in the art as a fragment shader)]).
Kubisch does not teach a mesh shader configured to output a plurality of bounding volumes of the 3-dimensional objects; a rasterizer configured to convert 3-dimensional geometric primitives into 2-dimensional grid pixels of fragments; and a fragment shader, linked to the rasterizer, and configured to produce a final color, a depth, and other attributes of an individual fragment by calculating lighting, textures and materials effects. However, Unterguggenberger teaches a plurality of bounding volumes of the 3-dimensional objects (“Subsequently, the bounds of a meshlet can be easily computed by combining all its associated vertices’ bounds into a common bounding box.” – Section 3, Meshlet Bounds Computation, Paragraph 1. [NOTE: Unterguggenberger shows a bounding box computed for meshlets, which are considered 3D objects. Unterguggenberger does not teach that the bounding volume of the 3D object is an output of the mesh shader. Since a meshlet consists of vertices and triangles, and Kubisch teaches using the mesh shader to generate vertices and triangles (Unterguggenberger, Section 1, Introduction, Paragraph 2: “…64 vertices and 126 triangles…Each one of these small geometry packets is referred to as a meshlet”), one of ordinary skill could configure the mesh shader to output the vertices and triangles and combine them to form multiple bounding volumes for better accuracy compared to only one bounding volume. After the combination, the mesh shader taught by Kubisch can be modified to output the bounding volumes of the 3D object as disclosed by Unterguggenberger. The vertices and primitives output by the mesh shader would then be grouped into bounding volumes to simplify processing by reducing geometric complexity.]) It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to modify Kubisch by incorporating the teachings of Unterguggenberger to have the mesh shader output bounding volumes of the 3D objects.
One would be motivated to make this combination to enable early culling techniques using the bounding boxes, which reduces unnecessary work in the rasterizer.
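As an illustrative sketch only (this code appears in no cited reference; the function name and vertex representation are hypothetical), the per-meshlet bounds computation Unterguggenberger describes, combining all of a meshlet's associated vertices' bounds into a common bounding box, can be expressed as:

```python
def meshlet_aabb(vertices):
    """Combine the positions of a meshlet's vertices into one
    axis-aligned bounding box, returned as (min_corner, max_corner)."""
    xs, ys, zs = zip(*vertices)  # unpack (x, y, z) triples into per-axis sequences
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))
```

A mesh shader modified as proposed above would emit one such box per meshlet alongside its vertices and primitives, yielding several tight volumes rather than a single loose one.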
Kubisch in view of Unterguggenberger still does not teach a rasterizer configured to convert 3-dimensional geometric primitives into 2-dimensional grid pixels of fragments; and a fragment shader, linked to the rasterizer, and configured to produce a final color, a depth, and other attributes of an individual fragment by calculating lighting, textures and materials effects. However, Laine teaches a rasterizer configured to convert 3-dimensional geometric primitives into 2-dimensional grid pixels of fragments (“The rasterization stage 660 converts the 3D geometric primitives into 2D fragments (e.g. capable of being utilized for display, etc.). The rasterization stage 660 may be configured to utilize the vertices of the geometric primitives to setup a set of plane equations from which various attributes can be interpolated” – Col. 34, Lines 39-44. [NOTE: Laine also teaches the rasterizer outputting a 2D grid associated with an image being rendered (“In the forward pass through the rendering pipeline 205, the rasterizer 220 outputs a 2D sample grid associated with the image being rendered” – Column 10, Lines 32-35; Column 11, Lines 22-25, given the rasterizer’s output (per-pixel triangle IDs and barycentrics)). Therefore, this 2D grid of the image being rendered is the grid of individual pixels based on the 3D geometric primitives.]); and a fragment shader configured to produce a final color, and other attributes of an individual fragment by calculating lighting, textures and materials effects (“The fragment shading stage 670 processes fragment data by performing a set of operations (e.g., a fragment shader or a program) on each of the fragments. The fragment shading stage 670 may generate pixel data (e.g., color values) for the fragment such as by performing lighting operations or sampling texture maps using interpolated texture coordinates for the fragment” – Col. 34, Lines 54-60; see also Column 1, Lines 39-41, a texture map that represents the lighting and material properties of the 3D models, and Column 4, Line 60, compute shaded pixel. [NOTE: Laine discloses that lighting operations and sampling texture maps are some of the calculations performed to generate pixel data such as final color]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to modify Kubisch by incorporating the teachings of Laine to configure the rasterizer to convert 3D primitives to 2D fragments and to use the fragment shader to calculate lighting and texture to produce a final color and other attributes, such as shaded material values, for each fragment. One would be motivated to add this rasterizer in order to take the bounding volumes of the 3D objects from the mesh shader and process them so that pixels are produced. The fragment shader can then perform calculations based on attributes such as lighting, texture, and material effects to produce a final color for the pixels given by the rasterizer. These calculations produce consistent, high-quality pixels for display.
Kubisch in view of Unterguggenberger and Laine still does not teach a fragment shader configured to produce a depth. However, Øygard teaches a fragment shader configured to produce a depth (“The first and second fragment shader routines thus process the fragments to generate the desired shaded fragment output data, e.g. in the form of shaded depth and/or colour values, etc., necessary to respectively determine the visibility information and/or to produce the final rendered output data” – Par. 137, Lines 1-6). It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to modify Kubisch by incorporating the teachings of Øygard to configure the fragment shader to produce a depth for each fragment. Doing so allows the system to perform proper depth testing in order to discard occluded fragments and render only those that are necessary. Depth information also helps to generate a highly realistic 3D image or video.
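As a hedged illustration of the combined rationale (this is not code from Laine or Øygard; the Lambertian lighting model and all names are assumptions chosen for the example), a fragment shader that derives a final color from lighting and material terms while carrying a per-fragment depth might be sketched as:

```python
def shade_fragment(normal, light_dir, albedo, depth):
    """Toy fragment shading: scale the material albedo by a Lambertian
    diffuse term and return it together with the fragment's depth."""
    ndotl = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    color = tuple(ndotl * channel for channel in albedo)
    return color, depth
```

The returned depth is what a subsequent depth test would compare against the depth buffer before the color is written.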
Regarding claim 9, the claim describes a method for an application programming interface comprising the same components as described in claim 1. Therefore, method claim 9 corresponds to the API disclosed in claim 1 and is rejected for the same reasons of obviousness as used above.
Claim(s) 2 and 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kubisch, Unterguggenberger, Laine, Øygard and Hakura et al (US 20230360305 A1), hereinafter Hakura.
Regarding claim 2, Kubisch in view of Unterguggenberger, Laine, and Øygard teach the application programming interface of claim 1. Kubisch in view of Unterguggenberger, Laine, and Øygard does not teach the rasterizer comprising a tiler, linked to the mesh shader, and configured to cut the vertices, the primitives, and the plurality of bounding volumes into a plurality of tiles; and the rasterizer converting the plurality of tiles into the 2-dimensional grid pixels of fragments. However, Hakura teaches the rasterizer comprising a tiler, linked to the mesh shader (“In one embodiment, each raster pipeline 110A-C includes a tiler (e.g., tiler 115A), a raster engine (e.g., raster engine 130A), and a pixel shader (e.g., pixel shader 140A). The respective raster pipeline 110A-C may send the primitives in 2D image space to a tiler 115A-C.” – Par. 27, Lines 1-5. [NOTE: Hakura discloses a tiler in the rasterizer but does not disclose that the rasterizer is linked to the mesh shader. After the combination, the rasterizer described by Hakura can be added into the graphics API as described by Kubisch]), and configured to cut the vertices, the primitives, and the plurality of bounding volumes into a plurality of tiles (“The rasterizer 370 reads the meshlets 360, scans the graphics primitives, and transmits fragments and coverage data to the pixel shading unit 380. Additionally, the rasterizer 385 may be configured to perform z culling and other z-based optimizations” – Par. 67. [NOTE: Hakura teaches cutting vertices and primitives but does not teach the cutting of bounding volumes into tiles.
After the combination, using the rasterizer comprising a tiler to cut vertices and primitives as taught by Hakura and the bounding volumes of 3D objects as taught by Unterguggenberger can be applied to the graphics API taught by Kubisch to cut the vertices, primitives, and bounding volumes of the 3D objects into tiles.]); and the rasterizer converts the plurality of tiles into the 2-dimensional grid pixels of fragments (“If tiled rendering was performed by a tiler 115A-C, then frame buffer 150 may be filled with fragments generated from graphics primitives, tile by tile, where one or more fragments and/or pixels associated with a first tile are sent to frame buffer 150 before fragments and/or pixels associated with a next tile.” – Par. 40, Lines 3-8. [NOTE: Hakura does not explicitly say that the tiles are converted into 2D grid pixels of fragments. After the combination, the rasterizer configured to convert 3D primitives to 2D fragments as taught by Laine can use the tiler component inside the rasterizer as taught by Hakura. This component could then be added to the graphics API taught by Kubisch so that the rasterizer converts the plurality of tiles into the 2-dimensional grid pixels of fragments.]) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kubisch to incorporate the teaching of Hakura to use a rasterizer comprising a tiler, linked to a mesh shader, to cut the vertices, the primitives, and the plurality of bounding volumes into tiles and to convert the tiles into 2D grid pixels of fragments. Modern GPUs use tilers to test the vertices, primitives, and bounding boxes for overlap with each tile and then process them tile by tile. This is an optimization strategy that allows a scene to be broken up into manageable tiles so that rendering is more efficient for devices with limited bandwidth, such as smartphones and game consoles.
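A minimal sketch of the binning step such a tiler performs (illustrative only, not drawn from Hakura; the tile-grid parameters and names are hypothetical):

```python
def bin_into_tiles(aabb_2d, tile_size, grid_w, grid_h):
    """Return the (tx, ty) indices of every screen tile that a 2D
    bounding box overlaps, clamped to a grid_w x grid_h tile grid."""
    (x0, y0), (x1, y1) = aabb_2d
    tx0 = max(0, int(x0 // tile_size))
    ty0 = max(0, int(y0 // tile_size))
    tx1 = min(grid_w - 1, int(x1 // tile_size))
    ty1 = min(grid_h - 1, int(y1 // tile_size))
    return [(tx, ty) for ty in range(ty0, ty1 + 1) for tx in range(tx0, tx1 + 1)]
```

Each vertex, primitive, or bounding volume would be recorded in the per-tile lists returned here before per-tile rasterization begins.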
Regarding claim 10, the claim describes a method for an application programming interface wherein the rasterizer comprises a tiler as described in claim 2. Therefore, method claim 10 corresponds to the API disclosed in claim 2 and is rejected for the same reasons of obviousness as used above.
Claim(s) 3 and 11 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kubisch, Unterguggenberger, Laine, Øygard, Hakura, and Uludag (US 20220084226 A1), hereinafter Uludag.
Regarding claim 3, Kubisch in view of Unterguggenberger, Laine, Øygard, and Hakura teach the application programming interface of claim 2. Kubisch does not teach that the plurality of bounding volumes are culled in the tiler. However, Uludag teaches that the plurality of bounding volumes are culled in the tiler (“Also, in some implementations, for animated hair, these AABBs are updated every frame, but static hair does not update every frame. This also means we could perform the coarse culling against the smaller AABBs for groups of hair rather than individual segments to accelerate the process. In some implementations, one could even have multiple levels/hierarchical culling tests, where we do AABB as well as individual segments of what survived culling.” – Par. 91, Lines 32-37. [NOTE: Uludag describes coarse culling of smaller axis-aligned bounding boxes. One of ordinary skill could configure the rasterizer comprising a tiler as taught by Hakura in the graphics API as taught by Kubisch to perform the culling]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kubisch in view of Unterguggenberger, Laine, Øygard, and Hakura to incorporate the teaching of Uludag to perform the culling of bounding volumes in the tiler. By culling bounding volumes that encompass primitives and vertices, the system can skip an entire tile, which significantly reduces processing and memory bandwidth. Culling in the tiler also avoids redundant computations on graphics resources.
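The coarse cull described by Uludag can be sketched as a simple per-tile overlap rejection (illustrative only; this is not Uludag's implementation, and all names are hypothetical):

```python
def cull_boxes_for_tile(boxes, tile_rect):
    """Keep only the bounding boxes that overlap a tile's rectangle;
    boxes rejected here skip all further per-tile rasterization work."""
    (tx0, ty0), (tx1, ty1) = tile_rect
    survivors = []
    for (x0, y0), (x1, y1) in boxes:
        if x1 < tx0 or x0 > tx1 or y1 < ty0 or y0 > ty1:
            continue  # the box lies entirely outside this tile: culled
        survivors.append(((x0, y0), (x1, y1)))
    return survivors
```

When the survivor list for a tile is empty, the tile's primitives need not be rasterized at all, which is the bandwidth saving argued above.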
Regarding claim 11, the claim describes a method for an application programming interface comprising culling some of the plurality of bounding volumes in the tiler as described in claim 3. Therefore, method claim 11 corresponds to the API disclosed in claim 3 and is rejected for the same reasons of obviousness as used above.
Claim(s) 4, 8, 12, and 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kubisch, Unterguggenberger, Laine, Øygard, and Uludag.
Regarding claim 4, Kubisch in view of Unterguggenberger, Laine, and Øygard teach the application programming interface of claim 1. Kubisch does not teach that the plurality of bounding volumes are a plurality of axis-aligned bounding boxes (AABBs). However, Uludag further teaches that the plurality of bounding volumes are a plurality of axis-aligned bounding boxes (AABBs) (“In some implementations, an intermediate acceleration structure is used that groups the hair segments into “local” chunks, which themselves are bounded by an AABB (axis aligned bounding box). Then we would only loop over groups of hair, rather than individual hair segments, that are spatially chunked and then perform overlap tests of their associated AABB against the screen-space tiles then would accelerate this process.” – Par. 91, Lines 15-22). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kubisch to incorporate the teachings of Uludag to use axis-aligned bounding boxes because this method is known in the art to be computationally efficient and simple to implement, since AABBs are easy to define. The speed that AABBs provide allows for rapid calculations of an object’s size and position.
Regarding claim 12, the claim describes a method for an application programming interface wherein the plurality of bounding volumes are a plurality of axis-aligned bounding volumes as described in claim 4. Therefore, method claim 12 corresponds to the API disclosed in claim 4 and is rejected for the same reasons of obviousness as used above.
Regarding claim 8, Kubisch in view of Unterguggenberger, Laine, and Øygard teach the application programming interface of claim 1. Kubisch does not teach that some of the plurality of bounding volumes are not rendered in the rasterizer. However, Uludag further teaches that some of the plurality of bounding volumes are not rendered in the rasterizer (“The method shown in FIG. 20 can be repeated for each tile with potentially visible hair (i.e., hair that was not culled using conservative depth culling). The result of processing the method of FIG. 20 for each tile with potentially visible hair is a rasterized hair overlay that can be applied on top of a rendered image (without hair) to render the overall image. Each pixel of the rasterized hair overlay will include a color value and an opacity value for the hair at that pixel.” – Par. 152, Lines 1-9. [NOTE: Uludag refers to hair that was “not culled,” which implies that other hair, a collection of unseen primitives together with the bounding boxes that encompass them, was culled out. Since the bounding volumes encompass the primitives, only the surviving bounding volumes are rendered in the rasterizer; that is, some of the bounding volumes are not rendered]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kubisch to incorporate the teachings of Uludag to have some of the plurality of bounding volumes not rendered in the rasterizer. It is common in the field that during the culling process, useless or unseen primitives are culled in order to save processing time.
Uludag observes that culling the primitives early, so that only the visible hair strands are rendered, yields a performance improvement (“In some implementations, this could occur if individual “wisps” of hair are extending form the character, such that there are not enough hair strands covering a given pixel to reach the opacity threshold. As such, each cluster will be evaluated, and the pixel will raster with an opacity that is below the opacity threshold. As described above, early terminating a cluster since the opacity of each pixel with potentially visible hair has reached an opacity threshold gives a performance improvement.” – Par. 151, Lines 1-9).
Regarding claim 16, the claim describes a method for an application programming interface wherein some of the plurality of bounding volumes are not rendered in the rasterizer as described in claim 8. Therefore, method claim 16 corresponds to the API disclosed in claim 8 and is rejected for the same reasons of obviousness as used above.
Claim(s) 5 and 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kubisch, Unterguggenberger, Laine, Øygard, Uludag, and Vries (“Collision detection”), hereinafter Vries.
Regarding claim 5, Kubisch in view of Unterguggenberger, Laine, Øygard, and Uludag teach the application programming interface of claim 4. Kubisch does not teach that each of the axis-aligned bounding boxes is defined by left-top-near coordinates and right-bottom-far coordinates. However, Vries teaches that each of the axis-aligned bounding boxes is defined by left-top-near coordinates and right-bottom-far coordinates (“Axis aligned bounding boxes can be defined in several ways. One of them is to define an AABB by a top-left and a bottom-right position. The GameObject class that we defined already contains a top-left position (its Position vector), and we can easily calculate its bottom-right position by adding its size to the top-left position vector (Position + Size). Effectively, each GameObject contains an AABB that we can use for collisions.” – Par. 5, Lines 1-5). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kubisch to incorporate the teachings of Vries to define the AABBs by left-top-near coordinates and right-bottom-far coordinates because of the simplicity and computational efficiency of this representation. Defining an AABB with only two coordinates is memory-efficient, since only two points need to be stored to represent the entire bounding box. Taking the two opposite corner coordinates to define the rectangular bounds is also easy to implement and allows straightforward computations on this representation.
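Vries's two-corner construction extends naturally from 2D to the claimed left-top-near and right-bottom-far form; a small sketch (illustrative only, with hypothetical names, mirroring the Position / Position + Size construction quoted above):

```python
def aabb_from_position_size(position, size):
    """Build a two-corner AABB from one corner and an extent,
    as in Vries's Position / Position + Size construction."""
    near_corner = tuple(position)
    far_corner = tuple(p + s for p, s in zip(position, size))
    return near_corner, far_corner

def aabb_contains(aabb, point):
    """A point is inside iff it lies between the two corners on every axis."""
    lo, hi = aabb
    return all(l <= c <= h for l, c, h in zip(lo, point, hi))
```

Only the two opposite corners are stored, yet containment and overlap tests reduce to per-axis comparisons, which is the efficiency argued in the rationale.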
Regarding claim 13, the claim describes a method for an application programming interface wherein each of the AABBs is defined by left-top-near coordinates and right-bottom-far coordinates as described in claim 5. Therefore, method claim 13 corresponds to the API disclosed in claim 5 and is rejected for the same reasons of obviousness as used above.
Claim(s) 6-7 and 14-15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kubisch, Unterguggenberger, Laine, Øygard, and Schmalstieg et al. (US 20220058872 A1), hereinafter Schmalstieg.
Regarding claim 6, Kubisch in view of Unterguggenberger, Laine, and Øygard teach the application programming interface of claim 1. Kubisch does not teach a plurality of primitives within a bounding volume. However, Schmalstieg further teaches a plurality of primitives within a bounding volume (“Additionally, aspects of the present disclosure may aggregate primitives/triangles until a maximum number is reached or any edge of a bounding box (BB) exceeds a threshold. For example, the primitives/triangles of image mesh 210 may be aggregated until a maximum number is reached or any edge of a bounding box 220 exceeds a threshold. Also, FIG. 2 shows that aspects of the present disclosure may transform a bounding box into a unit cube including contained vertices.” – Par. 49, Lines 9-17, Fig. 2).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kubisch to incorporate the teachings of Schmalstieg to have the primitives within the bounding volumes. This strategy is commonly known in the field to assist in early primitive culling in order to save memory bandwidth and hardware resources. Primitives that are useless or not in the field of view can be skipped during the rendering process.
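Schmalstieg's aggregation rule, grouping primitives until a maximum count is reached or any bounding-box edge exceeds a threshold, can be sketched as follows (illustrative only; this is not Schmalstieg's code, and all names and the greedy grouping order are assumptions):

```python
def aggregate_triangles(triangles, max_count, max_edge):
    """Greedily group triangles; close a group once it holds max_count
    primitives or growing its bounding box would push any edge past
    max_edge. Returns a list of (triangles, (min_corner, max_corner))."""
    groups, current, box = [], [], None
    for tri in triangles:
        xs, ys, zs = zip(*tri)  # tri is three (x, y, z) vertices
        tmin = (min(xs), min(ys), min(zs))
        tmax = (max(xs), max(ys), max(zs))
        if box is None:
            grown = (tmin, tmax)
        else:
            grown = (tuple(map(min, box[0], tmin)), tuple(map(max, box[1], tmax)))
        edges = [hi - lo for lo, hi in zip(grown[0], grown[1])]
        if current and (len(current) >= max_count or max(edges) > max_edge):
            groups.append((current, box))  # close the full or oversized group
            current, box = [tri], (tmin, tmax)
        else:
            current.append(tri)
            box = grown
    if current:
        groups.append((current, box))
    return groups
```

Each resulting group carries its own bounding volume, which is what the early-culling rationale above relies on.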
Regarding claim 14, the claim describes a method for an application programming interface wherein a plurality of primitives are within a bounding volume as described in claim 6. Therefore, method claim 14 corresponds to the API disclosed in claim 6 and is rejected for the same reasons of obviousness as used above.
Regarding claim 7, Kubisch in view of Unterguggenberger, Laine, and Øygard teach the application programming interface of claim 1. Kubisch does not teach a plurality of vertices within a bounding volume. However, Schmalstieg further teaches a plurality of vertices within a bounding volume (“Also, FIG. 2 shows that aspects of the present disclosure may transform a bounding box into a unit cube including contained vertices.” – Par. 49, Lines 9-17, Fig. 2. [NOTE: Fig. 2 highlights a bounding box that encompasses a mesh (a collection of primitive basic shapes created from a collection of vertices)]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kubisch to incorporate the teachings of Schmalstieg to have the vertices within the bounding volumes. By logic similar to the rejection of claim 6, it is known in the field to place primitives (an ordered collection of vertices) into bounding volumes so that useless or unseen vertices are culled early and skipped during rendering. This strategy saves the system from performing repetitive checks and reduces processing time.
Regarding claim 15, the claim describes a method for an application programming interface wherein a plurality of vertices are within a bounding volume as described in claim 7. Therefore, method claim 15 corresponds to the API disclosed in claim 7 and is rejected for the same reasons of obviousness as used above.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID V. NGUYEN whose telephone number is 571-272-6111. The examiner can normally be reached M-F 9:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, King Y Poon can be reached at 571-270-0728. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DAVID VAN NGUYEN/Examiner, Art Unit 2617 /KING Y POON/Supervisory Patent Examiner, Art Unit 2617