Prosecution Insights
Last updated: April 19, 2026
Application No. 18/615,723

HIGH-QUALITY SKINNED OBJECT ANIMATIONS FOR DISPLACED MICRO MESHES

Status: Final Rejection (§103)
Filed: Mar 25, 2024
Examiner: PUNTIER, CHRIS ALEJANDRO
Art Unit: 2616
Tech Center: 2600 — Communications
Assignee: Advanced Micro Devices, Inc.
OA Round: 2 (Final)

Grant Probability: 94% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 6m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 94% (29 granted / 31 resolved; above average, +31.5% vs TC avg)
Interview Lift: +10.0% (moderate), across resolved cases with interview
Typical Timeline: 2y 6m average prosecution
Career History: 43 total applications across all art units; 12 currently pending

Statute-Specific Performance

§101: 6.6% (-33.4% vs TC avg)
§103: 70.9% (+30.9% vs TC avg)
§102: 15.4% (-24.6% vs TC avg)
§112: 6.6% (-33.4% vs TC avg)
Tech Center averages are estimates. Based on career data from 31 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments, filed 12/18/2025, with respect to the rejection(s) of claim(s) 1, 10, and 19 under 35 U.S.C. 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Kavan (Kavan, Ladislav. "Part I: Direct skinning methods and deformation primitives." ACM SIGGRAPH. Vol. 2014. 2014.).

Allowable Subject Matter

Claims 21 and 22 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, 8, 9, 10, 11, 17, 18, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Burgess (US-20230081791-A1) in view of Eisenmann (US-20170032055-A1) and Kavan.
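For orientation, the displacement mechanism described in the Burgess passages cited below (subdivide the base triangle on a barycentric grid, barycentrically interpolate the per-vertex direction vectors, then offset each microvertex along its interpolated direction by a scalar displacement) can be sketched as follows. This is an illustrative sketch, not code from any cited reference; the function name, argument layout, and row-major grid ordering are all assumptions.

```python
import numpy as np

def displace_micromesh(base_tri, base_dirs, displacements, level):
    """Illustrative displaced micro-mesh evaluation (names assumed).

    base_tri:      (3, 3) base-triangle vertex positions
    base_dirs:     (3, 3) per-vertex direction vectors (the "anchors")
    displacements: one scalar per microvertex, in row-major
                   barycentric-grid order for subdivision level `level`
    Returns the (n+1)(n+2)/2 displaced microvertex positions, n = 2**level.
    """
    n = 2 ** level                      # grid resolution; 4**level microtriangles
    verts = []
    k = 0
    for i in range(n + 1):              # walk the barycentric grid
        for j in range(n + 1 - i):
            u, v = i / n, j / n
            w = 1.0 - u - v
            p = w * base_tri[0] + u * base_tri[1] + v * base_tri[2]
            d = w * base_dirs[0] + u * base_dirs[1] + v * base_dirs[2]
            verts.append(p + displacements[k] * d)  # offset along interpolated direction
            k += 1
    return np.array(verts)
```

With zero displacements the microvertices lie on the base triangle's plane; nonzero scalars push them along the interpolated directions, which is the claimed "applying displacement values to the interpolated directional values."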
Regarding claim 1, Burgess discloses A method comprising: based on a displaced micro-mesh that includes a plurality of coarse base triangles that define a set of coarse vertices each of which has an associated directional value, the displaced micro-mesh further comprising a set of displacement values (para. [0243]-[0244]: "FIG. 16A shows an example base triangle with a micro-mesh pattern on its planar surface. In actual implementations, there is no pattern defined on this base triangle's surface since—as will become clear from the discussion below—the base triangle typically is not itself visualized. Rather, the base triangle serves as platform for defining/supporting the displaced micro-mesh of microtriangles. Hitting the page down key, FIG. 16B shows each of the three vertices v0, v1, v2 of the base triangle has an associated direction vector that indicates a displacement direction in 3D space;" para. [0027]: "FIG. 8A shows the Stanford bunny represented using micro-meshes, drawn with a set of base triangles outlined in white with their implicit microtriangles within. Each base triangle vertex has a displacement direction shown as arrows for one of the base triangles;" para. [0239]: "In particular, in one embodiment, the DM contains a scalar displacement per μ-mesh vertex which is used to offset or displace the μ-triangles of the μ-mesh in 3D space. In one embodiment, μ-mesh microvertex direction vectors in 3D space are obtained by linearly interpolating from base triangle information and other values previously calculated from previous recursive subdividing steps, and then each μ-vertex of interest is displaced along the direction vector using the scalar displacement looked up in the DM."). These passages expressly teach base triangles (coarse triangles) with coarse vertices that each have an associated direction vector (a "directional value") and a micro-mesh representation that includes displacement values (scalar displacements per micro-vertex).

Burgess further discloses generating a fine triangular mesh defined by a plurality of fine vertices (para. [0037]-[0038]: "FIG. 12A shows a displaced micro-mesh as a structured representation of geometry on a 2^n×2^n barycentric grid, where "n" is the subdivision level of the grid where the displaced micro-mesh of level 1 has a total of 4^1 microtriangles. FIG. 12B shows a displaced micro-mesh of level 3 with a total of 64 microtriangles having 45 microvertices."). This passage describes creating microtriangles and microvertices via the displaced micro-mesh process.

The generating comprises subdividing each of the coarse base triangles into a plurality of fine triangles that define the fine vertices (para. [0312]: "A subdivision step of this triangle creates four new triangles named w, m, u, and v (FIG. 27). The corner triangles are named for their base vertex. The triangle in the middle is the "middle" triangle. Each subsequent level of subdivision further divides each of those triangles into four more. This subdivision process can be performed recursively to yield more and more, finer and finer sub triangles. Therefore, this process generates 4^n microtriangles after n subdivision steps."). This passage teaches subdividing base triangles to create fine triangles corresponding to microvertices.

The generating further comprises interpolating directional values for the fine vertices based on directional values associated with the coarse vertices (para. [0254]: "FIGS. 17A-17C, 18 show different views of an array of direction vectors for a number of microvertices. In example embodiments, rather than specifying the microvertex direction vectors explicitly, they can all be derived by linearly interpolating between direction vectors of their respective base triangles—which can be called anchors for the micro-mesh (see direction vectors with the large cone-shaped arrowheads on top). Such linear interpolation is performed in two dimensions so that microvertices—which are all "in between" two or three base triangle vertices by various distances—have their direction vectors calculated through linear interpolation (see discussion below)."). This is explicit disclosure of direction vectors being interpolated from the base triangle vertices' direction vectors.

The generating also comprises applying displacement values from the set of displacement values to the interpolated directional values (para. [0239]: "In particular, in one embodiment, the DM contains a scalar displacement per μ-mesh vertex which is used to offset or displace the μ-triangles of the μ-mesh in 3D space. In one embodiment, μ-mesh microvertex direction vectors in 3D space are obtained by linearly interpolating from base triangle information and other values previously calculated from previous recursive subdividing steps, and then each μ-vertex of interest is displaced along the direction vector using the scalar displacement looked up in the DM.").

However, Burgess does not disclose applying skinning to the fine vertices based on a sum of weighted values at each coarse vertex. The combination of Eisenmann and Kavan does disclose applying skinning to the fine vertices based on a sum of weighted values at each coarse vertex (Eisenmann para. [0066]: "As explained supra, skinning is a standard technique in computer graphics for skeleton based animation that induces little computational overhead and has been used extensively in video games, movies, simulations, and virtual-reality systems. This process associates a skeleton, defined by a set of segments, each storing a rotation/translation pair {(Rj, Tj)} to a mesh. The skeleton can be manipulated, which modifies in turn the deformation pairs. These transformations can then be transferred to the mesh vertices. Specifically, the location of a vertex i under linear blend skinning (LBS) is determined by the expression given in Equation 1 supra."). Eisenmann explicitly teaches applying skinning (linear blend skinning) to mesh vertices. Kavan further teaches at page 1, Section 2, para. 2: "Linear blend skinning computes deformed vertex positions v_i according to the following formula: …". Formula 1 taught by Kavan is a summation, therefore explicitly computing the deformed position as a sum of weighted contributions. This summation can be used alongside the skinning technique taught by Eisenmann.

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Eisenmann and Kavan into the teachings of Burgess in order to have more scalable performance and allow for better joint deformation when working with meshes.

Regarding claim 2, the combination of Burgess, Eisenmann and Kavan discloses all the elements of claim 1 as discussed above. Burgess also discloses wherein the displaced micro-mesh corresponds to animated graphics content (para. [0245]: "As explained below, interesting animation effects can be created by changing the direction vectors and/or the base triangle vertex positions over time such as between frames while keeping other parameters (e.g., displacement amounts) static. For example, one approach is to define two primitives that are identical except for the base triangle vertex position(s) and/or direction vector(s) to be changed, and then interpolate over time between the micro-meshes they respectively define. Such changes can be used to dynamically distort the shape of the micro-mesh, for example by moving, contracting, stretching or otherwise deforming it, from one time instant to another." This reference explicitly ties the micro-mesh to animated graphics content.
The behavior is presented as a flip-chart animation (shown by FIGS. 16A-16F) and it teaches frame-to-frame changes of base triangle vertex positions and direction vectors to create animation effects.)

Regarding claim 8, the combination of Burgess, Eisenmann and Kavan discloses all the elements of claim 1 as discussed above. Eisenmann also discloses wherein the skinning is performed in accordance with a skeleton having a plurality of bones each of which is associated with a transformational matrix (para. [0066]: "This process associates a skeleton, defined by a set of segments, each storing a rotation/translation pair {(Rj, Tj)} to a mesh. The skeleton can be manipulated, which modifies in turn the deformation pairs. These transformations can then be transferred to the mesh vertices. Specifically, the location of a vertex i under linear blend skinning (LBS) is determined by the expression given in Equation 1 supra."), and applying the skinning comprises applying transformational matrices associated with the bones to the fine vertices (para. [0074]: "The quantity being summed represents the rigid transformation applied to vertex i by bone j. Each bone (indexed by j) transforms the input vertex using a rigid transformation (i.e. a rotation and a translation). The final transformation of the vertex is obtained by averaging the transformations by all bones, using the weights w, which define the respective influence of the bones over the vertex."). Eisenmann explicitly discloses skinning with a skeleton composed of bones associated with matrices, and applying bone transformations via the linear blend skinning sum, aligning fully with the claim element.

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Eisenmann into the teachings of Burgess in order to have better joint control of the skeleton.

Regarding claim 9, the combination of Burgess, Eisenmann and Kavan discloses all the elements of claim 1 as discussed above. Eisenmann also discloses wherein a plurality of transformational matrices influence at least one of the fine vertices (para. [0074]: "The quantity being summed represents the rigid transformation applied to vertex i by bone j. Each bone (indexed by j) transforms the input vertex using a rigid transformation (i.e. a rotation and a translation). The final transformation of the vertex is obtained by averaging the transformations by all bones, using the weights w, which define the respective influence of the bones over the vertex."). Eisenmann here teaches the linear blend skinning formulation, explicitly blending multiple bone transformations for a single vertex. This means that a plurality of transformational matrices influence a single vertex, aligning with the claim element. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Eisenmann into the teachings of Burgess in order to have better joint control of the skeleton.

Regarding claim 10, claim 10 recites similar claim limitations as claim 1. However, claim 10 recites a memory that stores a displaced micro-mesh; and a processor coupled to the memory. Burgess also discloses a memory that stores a displaced micro-mesh and a processor coupled to the memory (para. [0085]: "FIGS. 48A-48G are together a flip chart animation that shows on a high level how the builder stores DMM primitive information into memory and how the TTU hardware reads and uses this stored DMM primitive information to create images." Burgess here discloses storing the displaced micro-mesh as data in memory, with processing hardware reading from that memory.). The rest of claim 10 is rejected under the same rationale as claim 1.

Claim 11, which is similar in scope to claim 2, is rejected under the same rationale. Claim 17, which is similar in scope to claim 8, is thus rejected under the same rationale.

Regarding claim 18, the combination of Burgess, Eisenmann and Kavan discloses all the elements of claim 17 as discussed above. Eisenmann also discloses wherein a plurality of transformational matrices influence at least one of the fine vertices (para. [0074]: "The quantity being summed represents the rigid transformation applied to vertex i by bone j. Each bone (indexed by j) transforms the input vertex using a rigid transformation (i.e. a rotation and a translation). The final transformation of the vertex is obtained by averaging the transformations by all bones, using the weights w, which define the respective influence of the bones over the vertex."). Eisenmann here teaches the linear blend skinning formulation, explicitly blending multiple bone transformations for a single vertex, aligning with the claim element. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Eisenmann into the teachings of Burgess in order to have better joint control of the skeleton.

Regarding claim 19, claim 19 recites similar claim limitations to claim 1. However, claim 19 recites A non-transitory computer-readable medium storing instructions (para. [0437]: "The Builder writes/stores the AS it creates into non-transitory memory, and the AS then (or eventually) is stored in a main system RAM for the Tracing Hardware to access, read and use."). The rest of claim 19 is rejected under the same rationale as claim 1. Claim 20, which is similar in scope to claim 2, is thus rejected under the same rationale.

Claims 4 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Burgess in view of Eisenmann and Kavan as applied to claim 1 above, and further in view of Tatarchuk (US-20100091018-A1).
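The linear blend skinning sum the examiner maps to claims 1, 8, 9, and 18 (Eisenmann's weighted per-bone rigid transforms; Kavan's summation formula) can be sketched as follows. This is an illustrative sketch, not code from either reference; the function name and the use of homogeneous 4x4 bone matrices are assumptions.

```python
import numpy as np

def linear_blend_skin(vertices, weights, bone_mats):
    """Linear blend skinning sketch: v_i' = sum_j w_ij * (R_j v_i + T_j).

    vertices:  (V, 3) rest-pose fine-vertex positions
    weights:   (V, B) per-vertex bone weights, each row summing to 1
    bone_mats: (B, 4, 4) homogeneous bone transforms, i.e. the per-bone
               rotation/translation pairs {(R_j, T_j)} in Eisenmann's notation
    """
    V = vertices.shape[0]
    homo = np.hstack([vertices, np.ones((V, 1))])           # (V, 4) homogeneous
    # Blend the bone matrices per vertex; applying the blended matrix is
    # equivalent to summing the weighted per-bone transformed positions.
    blended = np.einsum('vb,bij->vij', weights, bone_mats)  # (V, 4, 4)
    out = np.einsum('vij,vj->vi', blended, homo)
    return out[:, :3]
```

A vertex weighted entirely to one bone simply follows that bone's rigid transform; intermediate weights blend the transforms, which is the "sum of weighted values" limitation at issue.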
Regarding claim 4, the combination of Burgess, Eisenmann and Kavan discloses all the elements of claim 1 as discussed above. However, the combination does not disclose wherein the displaced micro-mesh is created as a pre-processing step, and the generating of the fine triangular mesh and the applying of the skinning to the fine vertices are implemented at runtime by a graphics processing pipeline.

Tatarchuk does disclose wherein the displaced micro-mesh is created as a pre-processing step (para. [0031]: "The method carried out by graphics processing circuitry may include two processing passes, where, in a first processing pass, the method includes generating animated coarse mesh vertex information based on instanced coarse mesh data and compressing said vertex information; and in a second processing pass, the method includes tessellating instanced coarse mesh data based on the animated coarse mesh vertex information to produce instances of a three dimensional object for display." The first processing pass discussed here can be considered a "pre-processing" step.), and the generating of the fine triangular mesh and the applying of the skinning to the fine vertices are implemented at runtime by a graphics processing pipeline (para. [0131]: "FIG. 11 illustrates an overview of the tessellation process. The process starts by rendering a coarse, low resolution mesh (also referred to as the "control cage" or "the super-primitive mesh"). The tessellator unit of the GPU generates new vertices, thus amplifying the input mesh. The vertex shader is used to evaluate surface positions and add displacement, obtaining the final tessellated and displaced high resolution mesh seen on the right." This passage teaches the runtime generation of a fine triangular mesh in the GPU pipeline. para. [0125]: "FIG. 21 illustrates a GPU implementation for rendering crowds of characters in accordance with the embodiments wherein per-vertex animation computations are limited to a coarse mesh. A two pass processing method is utilized wherein the instanced coarse mesh information 2101 is input (via an input assembler 2103) to a vertex shader 2105 where the animation of the coarse mesh is performed. The vertex data may also be compressed using various compression methods as described herein below. A compressed output stream 2109 is then applied in a second processing pass to add details to the animated coarse mesh by tessellation, etc." This passage teaches the runtime skinning in the graphics pipeline.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Tatarchuk into the combination of teachings of Burgess and Eisenmann in order to improve performance and reduce latency.

Regarding claim 13, the combination of Burgess, Eisenmann and Kavan discloses all the elements of claim 1 as discussed above. However, the combination does not disclose wherein the processor is configured to create the displaced micro-mesh and store the displaced micro-mesh in the memory in an asset creation operation, and the processor is configured to generate the fine triangular mesh and apply skinning to the fine vertices at runtime using a graphics processing pipeline.

Tatarchuk does disclose wherein the processor is configured to create the displaced micro-mesh and store the displaced micro-mesh in the memory in an asset creation operation (para. [0031]: "The method carried out by graphics processing circuitry may include two processing passes, where, in a first processing pass, the method includes generating animated coarse mesh vertex information based on instanced coarse mesh data and compressing said vertex information; and in a second processing pass, the method includes tessellating instanced coarse mesh data based on the animated coarse mesh vertex information to produce instances of a three dimensional object for display." The first processing pass discussed here can be considered an "asset creation" operation.), and the processor is configured to generate the fine triangular mesh and apply skinning to the fine vertices at runtime using a graphics processing pipeline (para. [0131]: "FIG. 11 illustrates an overview of the tessellation process. The process starts by rendering a coarse, low resolution mesh (also referred to as the "control cage" or "the super-primitive mesh"). The tessellator unit of the GPU generates new vertices, thus amplifying the input mesh. The vertex shader is used to evaluate surface positions and add displacement, obtaining the final tessellated and displaced high resolution mesh seen on the right." This passage teaches the runtime generation of a fine triangular mesh in the GPU pipeline. para. [0125]: "FIG. 21 illustrates a GPU implementation for rendering crowds of characters in accordance with the embodiments wherein per-vertex animation computations are limited to a coarse mesh. A two pass processing method is utilized wherein the instanced coarse mesh information 2101 is input (via an input assembler 2103) to a vertex shader 2105 where the animation of the coarse mesh is performed. The vertex data may also be compressed using various compression methods as described herein below. A compressed output stream 2109 is then applied in a second processing pass to add details to the animated coarse mesh by tessellation, etc." This passage teaches the runtime skinning in the graphics pipeline.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Tatarchuk into the combination of teachings of Burgess and Eisenmann in order to improve performance and reduce latency.

Claims 5, 6, 14, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Burgess in view of Eisenmann and Kavan as applied to claim 1 above, and further in view of Vlachos (US-20030016217-A1).

Regarding claim 5, the combination of Burgess, Eisenmann and Kavan discloses all the elements of claim 1 as discussed above. However, the combination does not fully disclose further comprising performing ray tracing based on a convex hull defined by Bezier control points derived from the coarse vertices and a current sum of weighted skinning matrices at each coarse vertex. The combination of Burgess, Eisenmann and Vlachos does disclose further comprising performing ray tracing based on a convex hull (Burgess para. [0303]: "As briefly explained above, this is useful for intersection testing since a ray that does not intersect the convex hull cannot intersect any of the microtriangles within the convex hull. In one example implementation, the hardware uses axis aligned bounding boxes (AABB) that bounds the convex hull to perform ray-bounding volume intersection testing other than when reaching a leaf node containing the DMM primitive. In addition, culling is implicitly performed as part of the ray-geometry intersection testing by subdividing the convex hull into a hierarchy of prismoidal volumetric subdivisions corresponding to sub triangles.
As will be recalled, the base triangle provides a platform for constructing the minimum and maximum triangles that form the convex hull." Burgess here discloses ray tracing using a convex hull around the micro-mesh, aligning with the claim element.) defined by Bezier control points derived from the coarse vertices (Vlachos para. [0021]: "A cubic Bezier control mesh is calculated using the vertex parameters provided for the non-planar video graphics primitive. Two techniques for calculating locations of control points included in the cubic Bezier triangular control mesh relating to the edges of the non-planar video graphics primitive are described in additional detail below. A location of a central control point is determined based on a weighted average of the locations of the other control points and the locations of the original vertices of the high-order primitive. The resulting cubic Bezier triangular control mesh can then be evaluated using any method for evaluating Bezier surfaces at the vertices of planar video graphics primitives that result from tessellation, where the number of planar video graphics primitives produced can be controlled based on a selected tessellation level. The resulting planar video graphics primitives are then provided to a conventional 3D pipeline for processing to produce pixel data for blending in a frame buffer." Vlachos discloses building a cubic Bezier triangular control mesh for each triangle by deriving control points from the triangle's coarse vertices and normals. This provides the claimed Bezier control points derived from the coarse vertices taught by Burgess. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Vlachos into the combination of teachings of Burgess and Eisenmann in order to use Bezier control points, allowing for easier subdivision.) and a current sum of weighted skinning matrices at each coarse vertex (Eisenmann para. [0074]: "The quantity being summed represents the rigid transformation applied to vertex i by bone j. Each bone (indexed by j) transforms the input vertex using a rigid transformation (i.e. a rotation and a translation). The final transformation of the vertex is obtained by averaging the transformations by all bones, using the weights w, which define the respective influence of the bones over the vertex." Eisenmann discloses the per-vertex skinning transform as the weighted sum of bone matrices. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Eisenmann into the teachings of Burgess in order to have more scalable performance and allow for better joint deformation when working with meshes.)

Regarding claim 6, the combination of Burgess, Eisenmann, Kavan and Vlachos discloses all the elements of claim 5 as discussed above. Burgess also discloses wherein the ray tracing further comprises identifying a ray intersection by recursively decimating the convex hull into subdivisions (para. [0303]: "In addition, culling is implicitly performed as part of the ray-geometry intersection testing by subdividing the convex hull into a hierarchy of prismoidal volumetric subdivisions corresponding to sub triangles. As will be recalled, the base triangle provides a platform for constructing the minimum and maximum triangles that form the convex hull." Explicit disclosure of subdividing into smaller prismoid volumes.) and testing the subdivisions (para. [0417]: "In one embodiment, the culling is done by using stack pushes and pops to descend down into a volumetric hierarchy of prismoids between the minimum and maximum triangles the DMM primitive defines and testing the ray against each successively smaller subdivision (each of which eventually contains a set of microtriangles) to cull away as many sets of microtriangles as possible that don't need to be generated and tested (FIG. 47C blocks 4032-4038)." Explicit disclosure of testing the subdivisions.)

Claim 14, which is similar in scope to claim 5, is thus rejected under the same rationale. Claim 15, which is similar in scope to claim 6, is thus rejected under the same rationale.

Claims 7 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Burgess as modified by Eisenmann, Kavan and Vlachos as applied to claim 6 above, and further in view of Loop (US-5602979-A).

Regarding claim 7, the combination of Burgess, Eisenmann, Kavan and Vlachos discloses all the elements of claim 6 as discussed above. However, the combination does not disclose wherein the Bezier control points correspond to a degree 2 Bezier triangle. Loop does disclose wherein the Bezier control points correspond to a degree 2 Bezier triangle (col. 14, lines 12-31: "FIG. 9b illustrates control nets of four example quadratic Bezier triangles. The following formulas may be used to compute the points labeled on the quad-net in FIG. 9b. ##EQU19## This construction equates to the construction of the Bezier control net for a quadratic box spline surface N1111. This implies that the quadratic box splines are generalized by this spline surface method."). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Loop into the combination of teachings of Burgess, Eisenmann, and Vlachos in order to reduce control-point count while preserving surface quality.
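The degree 2 (quadratic) Bezier triangle at issue in claims 7 and 16 has six control points, and by the convex-hull property the patch lies inside the hull of those points, which is what makes a control-point hull usable for the ray culling discussed for claims 5 and 6. A sketch of the standard Bernstein-basis evaluation follows; it is illustrative only and not taken from Loop or Vlachos, and the dict-keyed control-point layout is an assumption.

```python
import numpy as np

def eval_quadratic_bezier_triangle(ctrl, u, v):
    """Evaluate a degree-2 Bezier triangle at barycentric (u, v, w), w = 1-u-v.

    ctrl maps each degree-2 multi-index to its control point:
      (2,0,0), (0,2,0), (0,0,2)  -> corner points
      (1,1,0), (1,0,1), (0,1,1)  -> edge-midpoint control points
    B(u,v,w) = sum over |ijk|=2 of (2!/(i!j!k!)) u^i v^j w^k * b_ijk.
    """
    w = 1.0 - u - v
    return (ctrl[(2, 0, 0)] * u * u
            + ctrl[(0, 2, 0)] * v * v
            + ctrl[(0, 0, 2)] * w * w
            + 2 * u * v * ctrl[(1, 1, 0)]     # mixed Bernstein terms
            + 2 * u * w * ctrl[(1, 0, 1)]     # carry binomial factor 2
            + 2 * v * w * ctrl[(0, 1, 1)])
```

At a corner (e.g. u = 1) the patch reproduces the corner control point exactly, and when the edge control points are midpoints of the corners the patch degenerates to the flat triangle, which is a quick sanity check for an implementation.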
Claim 16, which is similar in scope to claim 7, is thus rejected under the same rationale.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRIS ALEJANDRO PUNTIER, whose telephone number is (703) 756-1893. The examiner can normally be reached M-F 7:30-5:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Daniel Hajnik, can be reached at 571-272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CHRIS ALEJANDRO PUNTIER/
Examiner, Art Unit 2616

/DANIEL F HAJNIK/
Supervisory Patent Examiner, Art Unit 2616

Prosecution Timeline

Mar 25, 2024
Application Filed
Sep 29, 2025
Non-Final Rejection — §103
Dec 18, 2025
Response Filed
Mar 03, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586298: CONTROLLED ILLUMINATION FOR IMPROVED 3D MODEL RECONSTRUCTION
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12586291: Fast Large-Scale Radiance Field Reconstruction
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12573103: ENVIRONMENT MAP UPSCALING FOR DIGITAL IMAGE GENERATION
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12548226: SYSTEMS AND METHODS FOR A THREE-DIMENSIONAL DIGITAL PET REPRESENTATION PLATFORM
Granted Feb 10, 2026 (2y 5m to grant)
Patent 12536679: APPLICATION MATCHING METHOD AND APPLICATION MATCHING DEVICE
Granted Jan 27, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 94%
With Interview: 99% (+10.0%)
Median Time to Grant: 2y 6m
PTA Risk: Moderate
Based on 31 resolved cases by this examiner. Grant probability derived from career allow rate.
