Prosecution Insights
Last updated: April 19, 2026
Application No. 18/619,798

RENDERING A SIMPLIFIED VERSION OF A DYNAMIC OBJECT USING SPRITES RECORDED AS TEXTURE DATA

Non-Final OA: §103, §112
Filed
Mar 28, 2024
Examiner
FLORA, NURUN N
Art Unit
2619
Tech Center
2600 — Communications
Assignee
Vrchat Inc.
OA Round
1 (Non-Final)
86%
Grant Probability
Favorable
1-2
OA Rounds
2y 1m
To Grant
87%
With Interview

Examiner Intelligence

Grants 86% — above average
86%
Career Allow Rate
331 granted / 387 resolved
+23.5% vs TC avg
+1.3%
Interview Lift
Minimal (~1%) lift in grant rate with vs. without an interview
resolved cases with interview
Fast prosecutor
2y 1m
Avg Prosecution
24 currently pending
Career history
411
Total Applications
across all art units
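
The headline figures are simple ratios of the counts in these tiles; a minimal sketch of the arithmetic, assuming the dashboard rounds to whole percentages (the variable names are ours, not the tool's):

```python
granted, resolved = 331, 387                  # career counts from the tile above
allow_rate = granted / resolved               # 0.8553... -> displayed as 86%

interview_lift = 0.013                        # the reported +1.3% interview lift
with_interview = allow_rate + interview_lift  # 0.8683... -> displayed as 87%

print(f"base {allow_rate:.0%}, with interview {with_interview:.0%}")
# base 86%, with interview 87%
```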

Statute-Specific Performance

§101
5.5%
-34.5% vs TC avg
§103
46.5%
+6.5% vs TC avg
§102
27.1%
-12.9% vs TC avg
§112
9.6%
-30.4% vs TC avg
Black line = Tech Center average estimate • Based on career data from 387 resolved cases
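
The "vs TC avg" deltas are measured against the black-line baseline; subtracting each delta from the examiner's rate recovers that baseline, and it works out to 40% for all four statutes. A quick check, with hypothetical variable names:

```python
# (examiner rate, delta vs Tech Center average) per statute, from the tiles above
stats = {"§101": (0.055, -0.345), "§103": (0.465, +0.065),
         "§102": (0.271, -0.129), "§112": (0.096, -0.304)}

for statute, (rate, delta) in stats.items():
    print(statute, f"implied TC average = {rate - delta:.1%}")
# every statute prints 40.0%, consistent with a single baseline line in the chart
```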

Office Action

§103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claim 4 is objected to because of the following informalities: the ending of claim 4 should be recited as "…wherein the material property block data includes a sprite count, which is a number of sprites stored in the texture data, and a data chunk start." Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA, the applicant) regards as the invention.

Base claims 1, 9, and 16 recite the limitation "the avatar" within the claim scope. There is insufficient antecedent basis for this limitation in the claims. Dependent claims 2-8, 10-15, and 17-20 carry the same defect as their respective base claims and are thus rejected for the same reasons stated above. Claims 1-20 will be evaluated on their merits as best understood by the Examiner; to be more specific, the Examiner assumes that "avatar" refers to the "dynamic object," which is defined with proper antecedent basis.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3, 6, 9, 11, 14, 16, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Chen et al. (US 20230274492 A1, hereinafter Chen) in view of Minor (US 20240282035 A1).

Regarding claim 1, Chen discloses a method of rendering a simplified version of a dynamic object using sprites recorded as texture data, wherein the sprites are created from an original version of the dynamic object (fig. 5, ¶0047-0050, claim 1 and dependents), the method comprising:

overlaying a quad mesh over at least one isolated segment of the dynamic object (The masker network 368 can be used to generate a segmentation mask for the input point cloud 300.
Such a process can be thought of as cutting the 3D shape into multiple pieces, represented in FIG. 3B as front and back masks 370, such that each of these pieces can be represented by a single texture image. The input to the masker contains the shape code 360 from the encoding module 304, in addition to the point coordinates and point normals 366. The normals can be important for segmenting the shapes, since thin parts such as fingers in a human body mesh can be difficult to segment with only point coordinates, as the points on the fingers may be clustered closely in space, ¶0038. Alternatively, representing textures with images and linking them to a 3D mesh via a mapping approach such as UV, for example, may provide superior results, ¶0027. FIG. 4B, on the other hand, illustrates a set of texture images 450 for a set of objects A, B, C, D (in this example front texture images) that can be deformed to generate textured meshes 452 that correspond to the geometries of four other people W, X, Y, Z, ¶0043. In addition to generating a 2D texture image for a first object, a geometric representation of a second object can be obtained 510, where the second object can have a target shape that is to be used in synthesizing a new object. In some embodiments, such as where the obtained representation is a 3D model of an object, a point cloud or mesh representation can be generated that represents the shape of the second object but is free of any other visual features or aspects, such as color or texture, ¶0045; step 510, fig. 5);

selecting a sprite in the texture data for the at least one isolated segment of the object that matches the location and rotation for a perspective from which the isolated segment is to be rendered (The 2D texture image(s) can be deformed 512 to correspond to the coordinates of the 3D geometric representation. This can effectively project or wrap the texture images onto the shape of the 3D geometric representation, such that the visual features of the texture image(s) are placed at appropriate locations on the shape of the geometric representation, such as where the visual features of a first person can be deformed to correspond to the facial or body shape of a second person, ¶0045; step 512, fig. 5), wherein the texture data stores information about a plurality of sprites, respective sprites in the plurality of sprites correspond to an isolated segment in a pose and an angle from which the sprite was captured (In this example, a 3D representation of a first object is obtained 502, where that first object has one or more target visual aspects that are to appear on a synthesized object. One or more feature encodings can be generated 504 corresponding to visual features of the 3D representation, where this might include a front encoding and a back encoding, among other such options, where each encoding may correspond to a feature vector or point in a latent feature space. The coordinates of the 3D representation can be mapped 506 to a 2D texture space. These coordinate mappings and feature encoding(s) can be used to generate 508 a 2D texture image that corresponds to the first object and includes data values for at least some of the extracted features, ¶0044; steps 502-508, fig. 5);

rendering a simplified version of the dynamic object from data in the texture data associated with the selected sprite (A 3D representation of a new object can then be generated 514 that has the target visual aspects of the first object and the target shape of the second object. This 3D representation can then be used for various applications or operations, such as for 3D animation in a virtual reality experience or robot simulation environment, or for 2D renderings of the 3D object in a video game or movie, among other such options, step 514, fig. 5).

Chen is not found to expressly disclose an avatar or a quad mesh. However, in the background section, Chen discloses that the 3D model could potentially be a 3D human avatar (¶0002). Minor, on the other hand, discloses that a mesh may reasonably be expressed as a triangular, quad, or polygonal mesh (¶0058). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention (AIA) to implement the 3D representations of the first and second objects of Chen, in the method of fig. 5, as 3D avatars, with the meshing defined as quad meshes as disclosed by Minor, because combining prior art elements according to known methods to yield predictable results is obvious.
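
For orientation, the claim 1 limitations describe an impostor-style pipeline: sprites of an isolated segment are pre-captured and indexed by the camera pose from which each was taken, and at draw time the renderer picks the stored sprite closest to the current viewing angle. A minimal Python sketch of that selection step, assuming a nearest-angle lookup; the Sprite layout and all names are hypothetical, not taken from the application or the cited references:

```python
import math
from dataclasses import dataclass

@dataclass
class Sprite:
    """One pre-captured view of an isolated segment (hypothetical layout)."""
    yaw: float            # camera yaw at capture time, in degrees
    pitch: float          # camera pitch at capture time, in degrees
    atlas_uv: tuple       # where the sprite's pixels live in the texture atlas

def select_sprite(sprites, cam_yaw, cam_pitch):
    """Pick the sprite whose capture angle best matches the current view."""
    def angular_distance(s):
        dy = (s.yaw - cam_yaw + 180.0) % 360.0 - 180.0   # wrap to [-180, 180)
        dp = s.pitch - cam_pitch
        return math.hypot(dy, dp)
    return min(sprites, key=angular_distance)

# Three captures of one segment; the camera currently sits at yaw 95, pitch 5.
sprites = [Sprite(0.0, 0.0, (0.00, 0.0)),
           Sprite(90.0, 0.0, (0.25, 0.0)),
           Sprite(180.0, 0.0, (0.50, 0.0))]
best = select_sprite(sprites, cam_yaw=95.0, cam_pitch=5.0)
print(best.atlas_uv)   # (0.25, 0.0): the 90-degree sprite gets drawn on the quad
```

A production renderer would more likely quantize capture poses so selection is a constant-time index rather than a scan, but the nearest-match idea is the same.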
Regarding claim 3, Chen in view of Minor discloses the method of claim 1, wherein the texture data recorded from the sprites includes a relative camera rotation in UV coordinates, a relative camera position in the UV coordinates, and bounding box information (¶0027-0029, ¶0034-0037, ¶0040-0042).

Regarding claim 6, Chen in view of Minor discloses the method of claim 1, wherein the rendering a simplified version of the dynamic object from the texture data further comprises: mapping the selected sprite to the quad mesh, and deforming the quad mesh based on the texture data for the selected sprite (A trained network can generate a basis shared by all shape textures, and can predict input-specific coefficients to construct the output texture for each shape as a linear combination of the basis images, then deform the texture to match the pose of the input, abstract. FIG. 4B, on the other hand, illustrates a set of texture images 450 for a set of objects A, B, C, D (in this example front texture images) that can be deformed to generate textured meshes 452 that correspond to the geometries of four other people W, X, Y, Z, ¶0043).
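
Claim 6 then maps the selected sprite onto the quad mesh and deforms the quad using the sprite's stored texture data (claim 3 lists the stored fields: relative camera rotation and position in UV coordinates, plus bounding box information). A toy sketch of that mapping step, with all field names hypothetical:

```python
def map_sprite_to_quad(bounds_center, bounds_size, atlas_uv, sprite_extent):
    """Deform a screen-facing quad to the sprite's recorded bounding box and
    assign per-corner UVs into the sprite's region of the texture atlas.

    bounds_center, bounds_size: bounding box info stored with the sprite
    atlas_uv: lower-left corner of the sprite's atlas region (UV coordinates)
    sprite_extent: the sprite's width/height inside the atlas (UV coordinates)
    """
    cx, cy = bounds_center
    w, h = bounds_size
    u, v = atlas_uv
    du, dv = sprite_extent
    corners = [(cx - w / 2, cy - h / 2), (cx + w / 2, cy - h / 2),
               (cx + w / 2, cy + h / 2), (cx - w / 2, cy + h / 2)]
    uvs = [(u, v), (u + du, v), (u + du, v + dv), (u, v + dv)]
    return list(zip(corners, uvs))

# A sprite whose recorded bounds are 0.6 x 1.8 units centered at (0, 1):
for corner, uv in map_sprite_to_quad((0.0, 1.0), (0.6, 1.8), (0.25, 0.0), (0.25, 0.5)):
    print(corner, "->", uv)
```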
Regarding claim 9, Chen discloses a computing system (fig. 11, ¶0097, ¶0240) comprising: a processor (1102, fig. 11, ¶0097, ¶0240); and a memory (1120, fig. 11) storing instructions (1121, fig. 11) that, when executed by the processor, configure the computing system to (¶0102, fig. 11, ¶0240): overlay a quad mesh over at least one isolated segment of a dynamic object; select a sprite in the texture data for the at least one isolated segment of the avatar that matches the location and rotation for a perspective from which the isolated segment is to be rendered, wherein the texture data stores information about a plurality of sprites, respective sprites in the plurality of sprites correspond to an isolated segment in a pose and an angle from which the sprite was captured; and render a simplified version of the dynamic object from data in the texture data associated with the selected sprite (see the substantively similar claim 1 rejection above).

Regarding system claims 11 and 14, although the wording is different, the material is considered substantively equivalent to that of method claims 3 and 6 as described above.

Regarding claim 16, Chen discloses a non-transitory computer-readable storage medium (1120, fig. 11), the computer-readable storage medium including instructions that, when executed by at least one processor, cause the at least one processor to (fig. 11, ¶0097-0102, ¶0240): overlay a quad mesh over at least one isolated segment of a dynamic object; select a sprite in the texture data for the at least one isolated segment of the avatar that matches the location and rotation for a perspective from which the isolated segment is to be rendered, wherein the texture data stores information about a plurality of sprites, respective sprites in the plurality of sprites correspond to an isolated segment in a pose and an angle from which the sprite was captured; and render a simplified version of the dynamic object from data in the texture data associated with the selected sprite (see the substantively similar claims 1 and 9 rejections above).

Regarding CRM claim 19, although the wording is different, the material is considered substantively equivalent to that of method claim 6 as described above.

Claims 2 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Chen in view of Minor and further in view of Baba (US 20240013502).

Regarding claim 2, Chen in view of Minor discloses the method of claim 1, except wherein the at least one isolated segment of the dynamic object is captured from a plurality of orientations and rotations of the virtual camera. However, Baba discloses capturing images of an avatar from surrounding positions and orientations using a virtual camera (¶0186). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention (AIA) to modify the invention of Chen such that, during the 2D texture image creation phase, scanning is done on an avatar using a virtual camera in a virtual reality scene (see Chen ¶0045, where the use case of 3D modeling is applicable in a virtual reality experience), by rotating and orienting the virtual camera around the avatar to take texture images at various coordinates, to obtain the at least one isolated segment of the dynamic object captured from a plurality of orientations and rotations of the virtual camera, because combining prior art elements according to known methods to yield predictable results is obvious.

Regarding system claim 10, although the wording is different, the material is considered substantively equivalent to that of method claim 2 as described above.

Claims 7, 15, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Chen in view of Minor and further in view of Boissé et al. (US 20170200301 A1; hereinafter Boissé).

Regarding claim 7, Chen in view of Minor discloses the method of claim 1, except wherein the rendering a simplified version of the dynamic object from the texture data further comprises: shading the quad mesh using a fragment shader using the texture data for the selected sprite to yield the rendered imposter. However, Boissé discloses that a mesh is shaded according to texture data using a fragment shader (¶0106). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention (AIA) to implement the shading operation of the quad mesh using a fragment shader and according to the texture data of the selected sprite, to obtain wherein the rendering a simplified version of the dynamic object from the texture data further comprises: shading the quad mesh using a fragment shader using the texture data for the selected sprite to yield the rendered imposter, because combining prior art elements according to known methods to yield predictable results is obvious.

Regarding system claim 15, although the wording is different, the material is considered substantively equivalent to that of method claim 7 as described above. Regarding CRM claim 20, although the wording is different, the material is considered substantively equivalent to that of method claim 7 as described above.
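
Claim 7's shading step is, conceptually, a per-fragment atlas lookup. In practice it would be written in a shader language such as GLSL; the CPU-side Python below only illustrates the per-fragment logic, and the alpha-discard rule is our assumption rather than something recited in the claim or in Boissé:

```python
def fragment_shade(frag_uv, atlas, atlas_size):
    """Per-fragment work: sample the sprite's texel from the atlas and
    discard fully transparent fragments so the quad reads as a cut-out.

    frag_uv: interpolated UV for this fragment, already remapped into the
             selected sprite's atlas region (as in the claim 6 sketch)
    atlas:   2D list of RGBA tuples standing in for the texture atlas
    """
    w, h = atlas_size
    x = min(int(frag_uv[0] * w), w - 1)    # nearest-neighbour sample
    y = min(int(frag_uv[1] * h), h - 1)
    r, g, b, a = atlas[y][x]
    if a == 0:
        return None                        # the 'discard' path in shader terms
    return (r, g, b, a)

# 2x2 toy atlas: one opaque texel, three transparent ones.
atlas = [[(255, 0, 0, 255), (0, 0, 0, 0)],
         [(0, 0, 0, 0),     (0, 0, 0, 0)]]
print(fragment_shade((0.1, 0.1), atlas, (2, 2)))   # opaque red texel
print(fragment_shade((0.9, 0.9), atlas, (2, 2)))   # None -> fragment discarded
```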
Allowable Subject Matter

Claims 4-5, 8, 12-13, and 17-18 would be allowable if rewritten to overcome the rejection(s) under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, set forth in this Office action and to include all of the limitations of the base claim and any intervening claims.

The following is a statement of reasons for the indication of allowable subject matter: the prior art of record, taken alone or in combination, fails to reasonably disclose or suggest the following.

Regarding claim 4: wherein the rendering a simplified version of the dynamic object from the texture data further comprises: sending material property block data and the texture data for the dynamic object from a central processing unit (CPU) into a graphics processing unit (GPU), wherein the material property block data includes a sprite count, which is a number of sprites stored in the texture data, a data chunk start.

Regarding claim 8: capturing a plurality of sprites from which to render the simplified version of the dynamic object by: downloading the dynamic object; duplicating the dynamic object to yield a duplicated dynamic object; reducing the duplicated dynamic object to parent segments; isolating at least one parent segment of the parent segments to yield the isolated segment; overlaying the quad mesh on top of the isolated segment; locating the isolated segment at a scene origin; creating a bounding box that envelopes the isolated segment; scaling a field of view of a virtual camera to approximately match the bounding box; capturing a sprite of the isolated segment by the virtual camera, wherein the sprite is specific to a location and angle of rotation of the virtual camera with respect to the isolated segment; rotating and/or relocating the virtual camera about the isolated segment and recalculating the bounding box and capturing a second sprite of the isolated segment that is specific to a second location and/or a second angle of rotation of the virtual camera relative to the isolated segment; repeating the scaling, capturing, and rotating until sufficient sprites are captured to represent the isolated segment from likely poses and angles; from the captured sprites, storing a relative camera rotation in UV coordinates, a relative camera position in the UV coordinates, and bounding box information including a bounds size, a bounds center in the UV coordinates, a depth atlas position, and a color atlas position as texture data for the avatar, wherein data for the sprites is stored as the texture data for the dynamic object.

System claims 12-13 and 17-18 are allowable for the same or similar reasons stated above for method claims 4-5, since they are substantively similar.
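
Claim 8, indicated allowable, recites the capture side of the pipeline: isolate a segment at the scene origin, fit a bounding box, scale the virtual camera's field of view to match it, then sweep the camera over locations and rotations, storing each sprite with its relative camera pose in UV coordinates and its bounds data. A compressed illustrative sketch; the stub segment, angle schedule, and UV encoding below are all assumptions, not the application's actual scheme:

```python
import math
from dataclasses import dataclass

@dataclass
class Box:
    center: tuple   # (x, y, z); the isolated segment sits at the scene origin
    size: tuple     # (width, height, depth)

class StubSegment:
    """Stand-in for an isolated parent segment of the dynamic object."""
    def bounding_box(self):
        return Box(center=(0.0, 1.0, 0.0), size=(0.6, 1.8, 0.4))
    def render_from(self, yaw, pitch, fov):
        return f"<sprite yaw={yaw:.0f} pitch={pitch} fov={fov:.1f}>"

def fov_for(box, distance=3.0):
    """Scale the camera field of view to roughly frame the bounding box."""
    return 2 * math.degrees(math.atan(max(box.size) / (2 * distance)))

def capture_sprites(segment, yaw_steps=8, pitch_angles=(-30, 0, 30)):
    """Sweep a virtual camera around the segment, one sprite per (yaw, pitch),
    storing the relative camera pose in [0, 1] UV-style coordinates."""
    sprites = []
    for pitch in pitch_angles:
        for i in range(yaw_steps):
            yaw = i * 360.0 / yaw_steps
            box = segment.bounding_box()          # recalculated for each view
            image = segment.render_from(yaw, pitch, fov_for(box))
            sprites.append({
                "image": image,
                "camera_rotation_uv": (yaw / 360.0, (pitch + 90) / 180.0),
                "bounds_center": box.center,
                "bounds_size": box.size,
            })
    return sprites   # later packed into color/depth atlases as texture data

print(len(capture_sprites(StubSegment())))   # 24 sprites for this schedule
```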
Conclusion

The prior art made of record and not relied upon, but considered pertinent to applicant's disclosure, is: Lombardi et al. (US 20240303951 A1), Xu et al. (US 20240096041 A1), and Hopkins et al. (US 20230377268 A1), which disclose different methods of re-rendering a scanned digital object based on pose and/or orientation.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NURUN FLORA, whose telephone number is (571) 272-5742. The examiner can normally be reached M-F, 9:30 am - 5:00 pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jason Chan, can be reached at (571) 272-3022. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/NURUN FLORA/
Primary Examiner, Art Unit 2619

Prosecution Timeline

Mar 28, 2024
Application Filed
Jan 24, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592025
IMAGE RENDERING BASED ON LIGHT BAKING
2y 5m to grant • Granted Mar 31, 2026
Patent 12586250
COMPRESSION AND DECOMPRESSION OF SUB-PRIMITIVE PRESENCE INDICATIONS FOR USE IN A RENDERING SYSTEM
2y 5m to grant • Granted Mar 24, 2026
Patent 12586254
High-quality Rendering on Resource-constrained Devices based on View Optimized RGBD Mesh
2y 5m to grant • Granted Mar 24, 2026
Patent 12579751
TECHNIQUES FOR PARALLEL EDGE DECIMATION OF A MESH
2y 5m to grant • Granted Mar 17, 2026
Patent 12561896
INSERTING THREE-DIMENSIONAL OBJECTS INTO DIGITAL IMAGES WITH CONSISTENT LIGHTING VIA GLOBAL AND LOCAL LIGHTING INFORMATION
2y 5m to grant • Granted Feb 24, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
86%
Grant Probability
87%
With Interview (+1.3%)
2y 1m
Median Time to Grant
Low
PTA Risk
Based on 387 resolved cases by this examiner. Grant probability derived from career allow rate.
