Prosecution Insights
Last updated: April 19, 2026
Application No. 18/481,909

CACHE MEMORY ARCHITECTURE AUGMENTATION FOR 3-DIMENSIONAL (3D) DATA

Status: Non-Final OA (§103)
Filed: Oct 05, 2023
Examiner: RICKS, DONNA J
Art Unit: 2618
Tech Center: 2600 — Communications
Assignee: Qualcomm Incorporated
OA Round: 1 (Non-Final)
Grant Probability: 77% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 9m
With Interview: 86%

Examiner Intelligence

Career Allow Rate: 77% (above average; +15.1% vs TC avg), 387 granted / 502 resolved
Interview Lift: +8.8% (moderate) among resolved cases with interview
Avg Prosecution: 2y 9m; 30 applications currently pending
Career History: 532 total applications across all art units

Statute-Specific Performance

§101: 11.1% (-28.9% vs TC avg)
§103: 58.3% (+18.3% vs TC avg)
§102: 13.7% (-26.3% vs TC avg)
§112: 8.5% (-31.5% vs TC avg)

TC avg = Tech Center average estimate. Based on career data from 502 resolved cases.
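The per-statute deltas are internally consistent: subtracting each "vs TC avg" delta from the examiner's rate recovers the same Tech Center baseline for all four statutes. A quick illustrative check in Python, using only the figures from the table above:

```python
# Examiner's per-statute rates and their stated deltas vs the Tech Center
# average, copied from the table above: statute -> (rate %, delta %).
stats = {
    "§101": (11.1, -28.9),
    "§103": (58.3, +18.3),
    "§102": (13.7, -26.3),
    "§112": (8.5, -31.5),
}

# Back the Tech Center baseline out of each row: examiner rate minus delta.
# round(..., 1) absorbs floating-point noise in the subtraction.
baselines = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
print(baselines)  # every statute recovers the same 40.0% TC baseline
```

All four rows back out to a 40.0% baseline, which is what a single Tech Center average line behind the chart would imply.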

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Election/Restrictions

Claims 1-10 have been elected. Applicant's election with traverse of the Restriction of claims 1-30 in the reply filed on 10/29/2025 is acknowledged. The traversal is on the ground(s) that “it would not be unduly burdensome on the office to search and examine Groups I and II together.” Applicant argues: “Applicant respectfully submits that Groups I and II are neither independent nor distinct for the following reasons: FIG. 4B of the present application illustrates an example information processing system with an integrate block 450, a create block 460 and main memory 430. The information processing system delivers an updated three dimensional (3D) image to the main memory after depth and color processing on an input 3D image. Each element of the input 3D image is indexed as a volume element, or voxel (analogous to a pixel in a 2D image). The input 3D image is accessed either from a cache memory in the integrate block 450 or from the main memory 430 using input block voxel indices. The input block voxel indices are mapped to 2D pixels which correspond to cache memory address locations when the cache memory is used. The present disclosure is directed toward improving memory access for the input 3D image by introducing a reordering block 462 in the create block 460. The reordering block 462 creates a reordered list 463 by reordering the input block voxel indices such that the neighboring input block voxel indices are mapped into neighboring 2D pixels. This reordering greatly improves the probability of successful cache memory access which is much faster than main memory access. Thus, the inventive reordering block 462 substantially reduces memory access latency for 3D image processing. 
Claims 1-10 are apparatus claims which recite a create block to generate a reordered list based on a plurality of input block voxel indices and an integrate block which uses the reordered list to deliver integrate depth data as part of an updated 3D image. Claims 11-30 include method claims, apparatus (means-plus) claims and non-transitory computer-readable medium claims which recite a reordering of input block voxel indices into output block voxel indices to provide an augmented cache memory access for an input 3D image to deliver an updated 3D image. The augmented cache memory access refers to the addition of the reordered list 463 for improved cache memory access. The claims in both group I and group II embody the central idea of reordering input block voxel indices to facilitate a delivery of an updated 3D image, and hence, they are neither independent nor distinct from each other. Applicant respectfully submits that the two Groups (Groups I and II) are not independent and not distinct from each other as alleged by the Office Action.” This is not found persuasive because for example, claim 11 recites accessing the plurality of output block voxel indices to provide an augmented cache memory access. The augmented cache memory access is an additional requirement not recited in claim 1, that makes it distinct from claim 1 and also changes the scope. Also, for example, claim 23 recites means for accepting a plurality of input block voxels from a cache memory and means for separating the plurality of input block voxel indices to generate a separated set of input block voxels indices. The input voxel indices in the cache memory and the separating of the input block voxel indices are also additional limitations not recited in claim 1 which also makes it distinct from claim 1 and also changes the scope. Applicant argues: “Applicant respectfully submits that any search and examination of Groups I and II together can be made without serious burden on the Office. 
This may be illustrated, for example, with the following: When a search is conducted regarding the elements of claims 1-10, (Group I) in particular, an integrate block and a create block, an inventive feature is a reordering block which is a sub-element of the create block. The reordering block 462 creates a reordered list 463 by reordering the input block voxel indices to greatly improve the probability of successful (and faster) cache memory access. The reordering feature is not only a feature in claims 1-10 (Group I), but is also a central feature in claims 11-30 (Group II) as well. Thus, any search applicable for claims 1-10 regarding reordering would be applicable for claims 11-30 as well. Therefore, no serious burden is imposed on the Office with a common search for claims 1-30 of the present application. Based at least on the reasoning provided above, Applicant requests that the restriction requirement as to Groups I and II be reconsidered and withdrawn.” This is not found persuasive because while reordering is an element of all three independent claims, as discussed above, independent claims 11 and 23 include other elements that change the scope and that require separate and additional searching that significantly increases the burden. The requirement is still deemed proper and is therefore made FINAL.

CLAIM INTERPRETATION

The following is a quotation of 35 U.S.C. 112(f): (f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof. The following is a quotation of pre-AIA 35 U.S.C. 
112, sixth paragraph: An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof. The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function. Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 
112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function. Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: create block, integrate block, select block, reordering block, depth pass module, color pass module in claim 1, 2, 4, 6, 7 and 9. Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. 
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The Specification discloses: In [0039], “... the information processing system 300 includes a hardware (HW) engine 310, a software (SW) module 320 and a main memory 330... the hardware engine includes a select block 340 and an integrate block 350... the software module 320 includes a create block 360.” In [0040], “... the integrate block 350 includes a depth pass module 351 and a color pass module 355.” The corresponding structure for the select block and the integrate block, which includes the depth pass module and the color pass module, is the hardware engine. And, the corresponding structure for the create block, which includes the reordering block, is the information processing system.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. Claim(s) 1, 2, 3, 4, 5 and 6 is/are rejected under 35 U.S.C. 103 as being unpatentable over Thyagharajan et al. U.S. Pub. No. 2020/0327396 in view of Nguyen U.S. Patent No. 10,891,779. Re: claim 1, Thyagharajan teaches 1. An apparatus comprising: a create block configured to receive a plurality of input block voxel indices and configured to generate a reordered list based on the plurality of input block voxel indices; (“Fig. 3 depicts a block diagram 300 illustrating reordering of input data elements. In this case the input date is for sparse voxels 302. The data for the sparse voxels 302 is subject to reordering 304 to produce a chunk 305... ”; Thyagharajan, [0033], Fig. 3) Fig. 3 illustrates reordering 304 (create block) that receives plural sparse voxels and reorders the sparse voxels to produce a chunk (receive a plurality of input block voxel indices). Reordering 304 is coupled to processing logic 308. 
(“The reordering may work with an input list of voxels Vin may be expressed as Vin=[vi], where vi is dimensional location/index of the ith voxel in Vin. An occupancy map M may be defined. The occupancy map M maps a tuple of indices of each occupied voxel to the index of the voxel in then list Vin and is undefined everywhere else.”; Thyagharajan, [0034]) The reordering works with an input list of voxels Vin, expressed as vi, which is a dimension/location index of the ith voxel in Vin. An occupancy map is defined which maps a tuple of indices of each occupied voxel to the index of the voxel in the list Vin. The input list of voxels is then reordered (generate a reordered list based on the plurality of input block voxel indices). Thyagharajan is silent regarding an integrate block coupled to the create block, the integrate block configured to use the reordered list to deliver integrate depth data for generating a plurality of output block voxel indices, however, Nguyen teaches and an integrate block coupled to the create block, the integrate block configured to use the reordered list to deliver integrate depth data for generating a plurality of output block voxel indices. (“From experiments, we can well predict the number of reconstructed surface voxels for each use case. Given such a number, we can pre-allocate the array of reconstructed voxel blocks 201. Each voxel block consists of 8x8x8 voxels. This indexing mechanism is extremely efficient for creating, accessing and modifying the voxels; this indexing mechanism enables a fast preparation step before integrating the newly observed depth map into the reconstructed scene.”; Nguyen, col. 4, lines 58-65) The indexing mechanism creates, accesses and modifies voxels and enables a fast preparation step before integrating the depth map into the reconstructed scene. Fig. 1 illustrates a preparation block (create block) coupled to the integration block (integrate block). 
(“The preparation step will result in a list of existing voxels and non-existing voxels which are close to the observed depth map. This prepared list makes it efficient for later integration of the depth map to the reconstructed scene.”; Nguyen, col. 5, lines 29-31) The preparation step results in, for example, a list of existing voxels, which is used for integration of the depth map to the reconstructed scene. Nguyen is combined with Thyagharajan such that the indexed voxels of Nguyen are the reordered indexed voxels of Thyagharajan and the reordered indexed voxels of Thyagharajan are used as the prepared list for integration of the depth map to the reconstructed scene of Nguyen and the preparation block of Nguyen includes the reordering of Thyagharajan. Therefore, it would have been obvious to one of ordinary skill in the art at the time of the effective filing date, to modify the method of Thyagharajan by adding the feature of an integrate block coupled to the create block, the integrate block configured to use the reordered list to deliver integrate depth data for generating a plurality of output block voxel indices, in order to quickly and efficiently create, access and modify voxels before integrating the newly observed depth map into the reconstructed scene, as taught by Nguyen (col. 4, lines 62-65). Re: claim 2, Thyagharajan and Nguyen teach 2. The apparatus of claim 1, further comprising a select block coupled to the create block, the select block configured to send the plurality of input block voxel indices to the create block. (“Fig. 3 depicts a block diagram 300 illustrating reordering of input data elements. In this case the input data is for sparse voxels 302. The data for the sparse voxels 302 is subject to reordering 304 to produce a chunk 306...”; Thyagharajan, [0033], Fig. 3) Fig. 
3 illustrates sparse voxels block 302 (select block) coupled to reordering block 304 (create block), where the sparse voxels block sends the input voxels to the reordering block (send the plurality of input block voxel indices to the create block). Re: claim 3, Thyagharajan and Nguyen teach 3. The apparatus of claim 2, further comprising a memory coupled to the create block, the memory configured for storing the plurality of input block voxel indices. (“The data for the sparse voxels 302 is subject to reordering 304 to produce a chunk 306... The chunk 306 is then passed to the processing logic 308 and stored in a memory 310 used by the processing logic for processing... The reordering may work with an input list of voxels Vin may be expressed as Vin=[vi], where vi is dimensional location/index of the ith voxel in Vin. An occupancy map M may be defined. The occupancy map M maps a tuple of indices of each occupied voxel to the index of the voxel in then list Vin and is undefined everywhere else”; Thyagharajan, [0033], [0034]) Fig. 3 illustrates that the sparse voxels block 302 sends sparse voxels to the reordering block to produce a chunk, which is stored in the memory of the processing logic (a memory coupled to the create block, the memory configured for storing the plurality of input block voxel indices). Re: claim 4, Thyagharajan and Nguyen teach 4. The apparatus of claim 3, wherein the create block includes a reordering block, the reordering block configured to generate the reordered list. (“The reordering may work with an input list of voxels Vin may be expressed as Vin=[vi], where vi is dimensional location/index of the ith voxel in Vin. An occupancy map M may be defined. The occupancy map M maps a tuple of indices of each occupied voxel to the index of the voxel in then list Vin and is undefined everywhere else.”; Thyagharajan, [0034], Fig. 
3) The reordering works with an input list of voxels Vin, expressed as vi, which is a dimension/location index of the ith voxel in Vin. The input list of voxels is reordered (generate a reordered list based on the plurality of input block voxel indices). Fig. 3 illustrates a reordering block 304 (create block includes a reordering block) that reorders an input list of voxels (the reordering block configured to generate the reordered list). Re: claim 5, Thyagharajan and Nguyen teach 5. The apparatus of claim 3, wherein the memory is configured to store one or more of the following: a depth image, one or more 3D voxels, a depth and voxel set, one or more voxels, a meta data buffer, an updated voxel, a color image, or an updated voxel with color. (“Fig. 3 depicts a block diagram 300 illustrating reordering of input data elements. In this case the input data is for sparse voxels 302. The data for the sparse voxels 302 is subject to reordering 304 to produce a chunk 306...The chunk 306 is then passed to the processing logic 308 and stored in a memory 310 used by the processing logic for processing.”; Thyagharajan, [0033], Fig. 3) Fig. 3 illustrates a memory 310 that stores chunk 306, which includes reordered voxels. The memory stores chunk 306, which is considered to include, one or more 3D voxels, one or more voxels or an updated voxel. Re: claim 6, Thyagharajan and Nguyen teach 6. The apparatus of claim 3, wherein the integrate block includes a depth pass module, the depth pass module configured to receive a depth image and one or more 3D voxels, (“... and an integration step, in which the collected and cached voxels of the preparation step are updated with a newly captured depth map frame (403);”; Nguyen, col. 3, lines 1-4) The integration step (integrate block) receives the collected and cached voxels (one or more 3D voxels) and a newly captured depth map (depth image). Fig. 
1 illustrates an integration block 106, which is considered to include a depth pass module, which receives the collected and cached voxels (one or more 3D voxels) and a newly captured depth map (depth image). and the depth pass module further configured to generate a depth image data based on the depth image and the one or more 3D voxels. (“This indexing mechanism is extremely efficient for creating, accessing and modifying the voxels; this indexing mechanism enables a fast preparation step before integrating the newly observed depth map into the reconstructed scene.”; Nguyen, col. 4, lines 62-65) (“The preparation step will result in a list of existing voxels and non-existing voxels which are close to the observed depth map. This prepared list makes it efficient for later integration of the depth map to the reconstructed scene.”; Nguyen, col. 5, lines 9-12) The preparation step results in, for example, a list of existing voxels (one or more 3D voxels), which is used for integration of the depth map (depth image) into the reconstructed scene (generate a depth image data based on the depth image and the one or more 3D voxels). Therefore, it would have been obvious to one of ordinary skill in the art at the time of the effective filing date, to modify the method of Thyagharajan by adding the feature of the integrate block includes a depth pass module, the depth pass module configured to receive a depth image and one or more 3D voxels, and the depth pass module further configured to generate a depth image data based on the depth image and the one or more 3D voxels, in order to quickly and efficiently create, access and modify voxels before integrating the newly observed depth map into the reconstructed scene, as taught by Nguyen (col. 4, lines 62-65). Claim(s) 7 and 8 is/are rejected under 35 U.S.C. 103 as being unpatentable over Thyagharajan in view of Nguyen as applied to claim 6 above, and further in view of Hasegawa et al. U.S. Pub. No. 
2021/0344815 and Chen et al. U.S. Pub. No. 2020/0175644. Re: claim 7, Thyagharajan and Nguyen are silent regarding the depth pass module is further configured to deliver the depth image data to a meta data buffer, however Hasegawa and Chen teach 7. The apparatus of claim 6, wherein the depth pass module is further configured to deliver the depth image data to a meta data buffer. (“On receiving the depth map and the tracking data, the sensor information integration unit 11 integrates the depth map and the tracking data to transmit the integrated data to the reception unit 12, storing the data into the buffer 13 (step S11).”; Hasegawa, [0059], Figs. 2 and 7) Fig. 2 illustrates an integration unit 11, which is considered to include the depth pass module, that integrates the depth map (depth image data) with tracking data and stores the integrated data in the buffer 13 via the reception unit 12. Thyagharajan, Nguyen and Hasegawa are silent regarding the buffer 13 being a metadata buffer, however, Chen teaches (“... the RAM such as a DRAM 130 may comprise a color buffer 132 and a metadata buffer 134, where the color buffer 132 and the metadata buffer 134 may be implemented with different buffer regions in the RAM such as DRAM 130.”; Chen, [0025], Fig. 1) Fig. 1 illustrates that the metadata buffer is included in the DRAM 130. (“For example, the set of metadata of the aforementioned at least one subsequent frame may comprise one or a combination of depth information regarding deferred shading... ”; Chen, [0029]) The metadata includes depth information regarding deferred shading (depth image data). Hasegawa and Chen are combined with Thyagharajan and Nguyen such that the buffer of Hasegawa is the metadata buffer of Chen, which is included in the method of Thyagharajan. 
Therefore, it would have been obvious to one of ordinary skill in the art at the time of the effective filing date, to modify the method of Thyagharajan by adding the feature of the depth pass module is further configured to deliver the depth image data to a meta data buffer, in order to generate location information for each label without waiting until all pieces of location information are available thereby suppressing delay over the entire system as taught by Hasegawa ([0076]) and in order to enhance overall display performance of an electronic device, as taught by Chen ([0003]). Re: claim 8, Thyagharajan, Nguyen, Hasegawa and Chen teach 8. The apparatus of claim 7, wherein the meta data buffer is a component of the memory. (“... the RAM such as a DRAM 130 may comprise a color buffer 132 and a metadata buffer 134, where the color buffer 132 and the metadata buffer 134 may be implemented with different buffer regions in the RAM such as DRAM 130.”; Chen, [0025], Fig. 1) Fig. 1 illustrates that the metadata buffer is included in the DRAM 130. Therefore, it would have been obvious to one of ordinary skill in the art at the time of the effective filing date, to modify the method of Thyagharajan by adding the feature of the meta data buffer is a component of the memory, in order to enhance overall display performance of an electronic device, as taught by Chen ([0003]). Claim(s) 9 and 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Thyagharajan, Nguyen, Hasegawa and Chen as applied to claim 7 above, and further in view of Gruber U.S. Patent No. 10,796,478. Re: claim 9, Thyagharajan, Nguyen, Hasegawa and Chen are silent regarding integrate block includes a color pass module, the color pass module configured to receive the depth image data from the meta data buffer, and further configured to generate an updated voxels with color based on the depth image data, however, Gruber teaches 9. 
The apparatus of claim 7, wherein integrate block includes a color pass module, the color pass module configured to receive the depth image data from the meta data buffer, and further configured to generate an updated voxels with color based on the depth image data. (“The contents of the depth buffer or depth surface 308 may only be inputted into the GPU 302 during the color render pass, such that any non-visible pixels are culled and processing time and/or resources are not wasted on non-visible pixels.”; Gruber, col. 17, lines 41-46, Fig. 3B) The GPU (integrate block includes a color pass module), receives the contents of the depth buffer (receive the depth image data from the meta data buffer), culls non-visible pixels and colors/shades visible pixels. (“The first pass of the depth pre-pass may be able to identify the non-visible portion of triangles 404 and 406, such that the non-visible portions may be skipped and are not rendered, which saves processing resources. The first pass may identify the visible portions of triangles 404 and 406, which may in turn save processing resources during the second pass or the color render pass, because the non-visible portions of triangles 404 and 406 do not need to be colored.”; Gruber, col. 18, lines 8-16, Fig. 4) The first pass or the depth pass identifies visible portions and the second pass or the color render pass colors the visible portions (generate updated voxels with color based on depth image data). 
Therefore, it would have been obvious to one of ordinary skill in the art at the time of the effective filing date, to modify the method of Thyagharajan by adding the feature of integrate block includes a color pass module, the color pass module configured to receive the depth image data from the meta data buffer, and further configured to generate an updated voxels with color based on the depth image data, in order to save color or shading resources on non-visible pixels by generating a complete depth buffer, as taught by Gruber (col. 16, line 66-col. 17, line 2). Re: claim 10, Thyagharajan, Nguyen, Hasegawa, Chen and Gruber teach 10. The apparatus of claim 9, wherein the color pass module includes a color cache memory, the color cache memory configured to receive a color image for the generation of the updated voxels with color. (“Fig. 3B provides an example hardware architecture 320 of the second pass of the depth pre-pass... The architecture 320 may include the GPU 320, a color command buffer 314, the vertex buffer 306, the depth surface 308, the color surface 310, and a texture surface 312.”; Gruber, col. 17, lines 33-39, Fig. 3B) Fig. 3B illustrates the architecture of the second pass (color pass) of the depth pre-pass. The GPU is considered to include the color pass module and the color surface 310 is considered to be the color buffer (color cache) (the color pass module includes a color cache memory). (“In a visibility pass, the GPU 502 may be configured to generate visibility information associated with the color. The output of the color visibility pass may include generating a final visibility stream that may include the results of any late occluders. The low res depth surface 508 may be read as input by the GPU 502 during the color visibility pass when the visibility streams are being generated. The final visibility streams are stored in the visibility streams 514.”; Gruber, col. 19, lines 17-25, Fig. 5B) Fig. 
5B illustrates a GPU 502 that generates visibility information associated with color. The GPU receives the low res depth surface during the color visibility pass and generates visibility streams, which are stored in the visibility streams buffer 514 (color cache memory configured to receive a color image for the generation of the updated voxels with color). Therefore, it would have been obvious to one of ordinary skill in the art at the time of the effective filing date, to modify the method of Thyagharajan by adding the feature of the color pass module includes a color cache memory, the color cache memory configured to receive a color image for the generation of the updated voxels with color, in order to save color or shading resources on non-visible pixels by generating a complete depth buffer, as taught by Gruber (col. 16, line 66-col. 17, line 2).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DONNA J RICKS whose telephone number is (571)270-7532. The examiner can normally be reached on M-F 7:30am-5pm EST (alternate Fridays off). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Devona Faulk can be reached on 571-272-7515. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. 
For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /Donna J. Ricks/Examiner, Art Unit 2618 /DEVONA E FAULK/Supervisory Patent Examiner, Art Unit 2618
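The technical idea at the heart of the claims and the cited art — reordering voxel indices so that spatially neighboring voxels become adjacent in the access sequence, improving cache-hit probability — can be sketched briefly. This is an illustrative reconstruction only, not the claimed implementation or Thyagharajan's: the Morton (Z-order) sort key and the function names are assumptions made for the example.

```python
def morton_key(x, y, z, bits=10):
    """Interleave coordinate bits (Morton/Z-order) so that voxels that are
    close together in 3D space get numerically close keys."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (3 * i)
        key |= ((y >> i) & 1) << (3 * i + 1)
        key |= ((z >> i) & 1) << (3 * i + 2)
    return key

def reorder_voxel_indices(voxel_indices):
    """Return the input (x, y, z) indices reordered so spatial neighbors are
    adjacent, which keeps successive accesses within the same cached block."""
    return sorted(voxel_indices, key=lambda v: morton_key(*v))

# A scattered access pattern: two spatial clusters interleaved in the input.
indices = [(0, 0, 0), (7, 7, 7), (0, 0, 1), (7, 7, 6), (0, 1, 0), (6, 7, 7)]
reordered = reorder_voxel_indices(indices)
print(reordered)
# After reordering, the near-origin voxels come first and the (7, 7, *)
# cluster is grouped at the end, so each cluster is visited contiguously.
```

Consecutive entries in the reordered list map to nearby addresses, which is the mechanism by which a reordered list raises the probability of a cache hit over a main-memory access.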

Prosecution Timeline

Oct 05, 2023: Application Filed
Feb 05, 2026: Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology:

- Patent 12602751: SAMPLE DISTRIBUTION-INFORMED DENOISING & RENDERING (granted Apr 14, 2026; 2y 5m to grant)
- Patent 12592021: GRAPHICS PROCESSING (granted Mar 31, 2026; 2y 5m to grant)
- Patent 12579726: HIERARCHICAL TILING MECHANISM (granted Mar 17, 2026; 2y 5m to grant)
- Patent 12573133: Reprojection method of generating reprojected image data, XR projection system, and machine-learning circuit (granted Mar 10, 2026; 2y 5m to grant)
- Patent 12555281: MANAGING MULTIPLE DATASETS FOR DATA BOUND OBJECTS (granted Feb 17, 2026; 2y 5m to grant)

Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 77%
With Interview: 86% (+8.8%)
Median Time to Grant: 2y 9m
PTA Risk: Low

Based on 502 resolved cases by this examiner. Grant probability derived from career allow rate.
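The headline projections follow directly from the career data cited on this page. A quick illustrative check in Python, assuming "grant probability" is simply the career allow rate and the interview figure adds the stated lift in percentage points:

```python
# Career record cited above for this examiner.
granted, resolved = 387, 502
interview_lift = 8.8  # percentage-point lift among resolved cases with interview

allow_rate_pct = granted / resolved * 100          # ~77.09%
grant_probability = round(allow_rate_pct)          # the page's 77%
with_interview = round(allow_rate_pct + interview_lift)  # the page's 86%

print(grant_probability, with_interview)  # 77 86
```

Both displayed figures (77% and 86%) reproduce from the cited 387/502 record plus the +8.8% lift, consistent with the note that grant probability is derived from the career allow rate.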
