Prosecution Insights
Last updated: April 19, 2026
Application No. 18/818,626

VOLUMETRIC VIDEO PROCESSING SYSTEM AND METHOD

Non-Final OA: §102, §103
Filed: Aug 29, 2024
Examiner: FLORA, NURUN N
Art Unit: 2619
Tech Center: 2600 — Communications
Assignee: Viverse Limited
OA Round: 1 (Non-Final)
Grant Probability: 86% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 1m
Grant Probability With Interview: 87%

Examiner Intelligence

Career Allow Rate: 86% (above average; 331 granted / 387 resolved; +23.5% vs TC avg)
Interview Lift: +1.3% (minimal; based on resolved cases with interview)
Avg Prosecution: 2y 1m (fast prosecutor; 24 applications currently pending)
Total Applications: 411 across all art units (career history)

Statute-Specific Performance

§101: 5.5% (-34.5% vs TC avg)
§103: 46.5% (+6.5% vs TC avg)
§102: 27.1% (-12.9% vs TC avg)
§112: 9.6% (-30.4% vs TC avg)
Tech Center averages are estimates. Based on career data from 387 resolved cases.

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-3, 5, 7-10, 12, 14-17, and 20 are rejected under 35 U.S.C. 102(a)(1) and/or 102(a)(2) as being anticipated by Wallner et al. (US 20180005449 A1, hereinafter Wallner).

Regarding claim 1, Wallner discloses a volumetric video processing system (¶0002-0006, system 1000, fig. 2, ¶0035-0044), comprising: a storage circuit (1004, 1022, and/or 1024, fig. 1), storing a program code (¶0042-0044); and a processor (1002, 1018), coupled to the storage circuit and accessing the program code to execute (¶0042-0044, fig. 2): obtaining a texture of a frame of a volumetric video (see e.g. ¶0034, disclosing that frames of a digital video are subject to texture-mapping onto a spherical mesh); generating a timecode based on a frame number of the frame ("In an alternative embodiment, whether or not all frames of the digital video have had frame identifiers inserted therein, the media player can operate to parse frame identifiers only from a few frames expected (by estimation using the codec-produced elapsed time timecode) to be preceding the event-triggering frame and the event-triggering frame itself," ¶0099; "Frame accurate timecodes would be essential in synchronizing the live compositing of such independent videos, which may have their own separate timecodes, to create complex sequences of asynchronous action triggered by the user in order to maintain the illusion of totally seamless interactivity for the user," ¶0108; also see ¶0035); embedding the timecode into the texture to generate an embedded texture ("In embodiments described above, the non-image data inserted into a frame is a frame-accurate timecode that may have a counterpart event stored in a metadata file with parameters for causing the media player to trigger a particular event upon the display of the frame," ¶0105); obtaining a 3D model of the frame (see e.g. ¶¶0034, 0055-0056 and 0088, disclosing the use of a spherical mesh and encoding the video data; also steps 700-800, fig. 6); and storing the embedded texture and the 3D model together as the volumetric video ("The frame identifier insertion module may alternatively be activated during a publishing routine that may subsequently automatically encode the digital video, with inserted frame identifier, into one or more formats such as MP4, AVI, MOV, or WEBM," ¶0054).
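The encode path the examiner maps to claim 1 (frame number → binary timecode image → embedded texture) can be sketched as follows. This is an illustrative sketch only, not Wallner's or the applicant's actual implementation: the 16-bit timecode width, 8x8-pixel bit blocks, grayscale pixels, and top-left placement are all assumed values.

```python
# Sketch of the claim-1 encode path: render a frame number as a row of
# black/white bit blocks and write it over the top-left corner of a
# grayscale texture. BITS and BLOCK are assumed parameters.

BITS = 16    # timecode width in bits (assumed)
BLOCK = 8    # side length of each square bit block, in pixels (assumed)

def make_timecode_strip(frame_number: int) -> list[list[int]]:
    """Encode frame_number as a BLOCK-tall strip of BITS square blocks,
    one block per bit (white = 255 for 1, black = 0 for 0), MSB first."""
    bits = [(frame_number >> (BITS - 1 - i)) & 1 for i in range(BITS)]
    row = []
    for bit in bits:
        row.extend([255 if bit else 0] * BLOCK)
    return [row[:] for _ in range(BLOCK)]  # repeat the row BLOCK times

def embed_timecode(texture: list[list[int]], frame_number: int) -> list[list[int]]:
    """Return a copy of a grayscale texture (rows of pixel values) with the
    timecode strip overwriting its top-left corner."""
    strip = make_timecode_strip(frame_number)
    out = [r[:] for r in texture]
    for y, strip_row in enumerate(strip):
        out[y][:len(strip_row)] = strip_row
    return out
```

For frame number 5 (binary ...0101), the strip's last block is white, the block before it black, the one before that white, and everything earlier black.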
Regarding claim 2, Wallner discloses the volumetric video processing system according to claim 1, wherein the processor further executes: converting the frame number from a text to an image to generate the timecode ("…each frame identifier that is to be inserted into a frame by the frame identifier insertion module represents a number in a sequence that is represented in binary. Each digit in the binary code is represented by a respective block of uniformly-coloured pixels inserted into a respective one of the determined regions," ¶0057; frame identifier, digit, and number are reasonably understood as text. "In embodiments described above, the non-image data inserted into a frame is a frame-accurate timecode that may have a counterpart event stored in a metadata file with parameters for causing the media player to trigger a particular event upon the display of the frame," ¶0105).

Regarding claim 3, Wallner discloses the volumetric video processing system according to claim 1, wherein the timecode is a binary image (¶0057, ¶0068, fig. 5).

Regarding claim 5, Wallner discloses the volumetric video processing system according to claim 1, wherein the processor further executes: disposing the timecode at a top or a bottom of the texture to generate the embedded texture (see e.g. figs. 3 and 5, ¶0057, ¶0065, ¶0068 and ¶0104).

Regarding claim 7, Wallner discloses the volumetric video processing system according to claim 1, wherein the processor further executes: disposing the timecode at a left side or a right side of the texture to generate the embedded texture (fig. 5 shows a rather left-aligned code embedding, for a 10-bit-long timecode).

Regarding method claims 8-10, 12, and 14, although the wording is different, the material is considered substantively equivalent to system claims 1-3, 5, and 7, respectively, as described above.

Regarding claim 15, Wallner discloses a volumetric video processing method, comprising: obtaining a frame of a volumetric video (see e.g. ¶0034, disclosing that frames of a digital video are subject to texture-mapping onto a spherical mesh; "The resolution is to be used by the frame identifier insertion module along with the parameters representing the predetermined spherical mesh to determine how many pixels in the digital video can be modified in order to insert an appropriate frame identifier into regions of the digital video that would become substantially invisible upon being texture-mapped to the predetermined sphere mesh," ¶0056); obtaining a texture video from the frame of the volumetric video (ibid., ¶0034 and ¶0056); obtaining a texture and a timecode based on an embedded texture of the texture video ("Furthermore, in this embodiment, each frame in the digital video has a respective frame identifier inserted into it, so that all frames received by a decoder can, once decoded, be processed in order to extract the frame identifier data instead of relying on the decoder's timecode," ¶0035; "In an alternative embodiment, whether or not all frames of the digital video have had frame identifiers inserted therein, the media player can operate to parse frame identifiers only from a few frames expected (by estimation using the codec-produced elapsed time timecode) to be preceding the event-triggering frame and the event-triggering frame itself," ¶0099); obtaining a frame number of the frame based on the timecode (ibid., ¶0035 and ¶0099); obtaining a 3D model of the frame from the volumetric video based on the frame number ("As such, at least one frame region the contents of which would be rendered substantially invisible, were frames of the digital video to be subjected to the predetermined texture-mapping onto the spherical mesh, is determined thereby to determine where in the frames non-image data may be inserted such that the non-image data is also rendered substantially invisible upon mapping," ¶0061; figs. 3-5); and applying the texture on the 3D model to generate a rendered 3D model (ibid., ¶0065, figs. 3-5).
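The claim-15 playback path reverses the embedding: parse the timecode strip back into a frame number, then use that number to fetch the frame's 3D model. A minimal decode sketch, assuming the same 16-bit timecode and 8x8-pixel bit-block layout as above and a mid-grey threshold (all assumed values, taken neither from Wallner nor from the application):

```python
# Sketch of the claim-15 decode step: sample the centre of each bit block
# in the top-left strip of an embedded grayscale texture and rebuild the
# frame number. BITS and BLOCK are assumed parameters.

BITS = 16    # timecode width in bits (assumed)
BLOCK = 8    # side length of each square bit block, in pixels (assumed)

def parse_timecode(embedded: list[list[int]]) -> int:
    """Recover the frame number from an embedded texture (rows of grayscale
    pixel values) by reading the black/white blocks, MSB first."""
    n = 0
    for i in range(BITS):
        # Sample the centre pixel of block i; threshold at mid-grey so that
        # mild compression noise around 0/255 still decodes correctly.
        sample = embedded[BLOCK // 2][i * BLOCK + BLOCK // 2]
        n = (n << 1) | (1 if sample > 127 else 0)
    return n
```

A player would then call something like `load_model(parse_timecode(frame))` (hypothetical helper) and apply the timecode-stripped texture to the returned 3D model.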
Regarding claim 16, Wallner discloses the volumetric video processing method according to claim 15, further comprising: converting the timecode from an image to a text to obtain the frame number ("Furthermore, in this embodiment, each frame in the digital video has a respective frame identifier inserted into it, so that all frames received by a decoder can, once decoded, be processed in order to extract the frame identifier data instead of relying on the decoder's timecode," ¶0035).

Regarding claim 17, Wallner discloses the volumetric video processing method according to claim 15, wherein the timecode is a binary image (¶0057, ¶0068, fig. 5).

Regarding claim 20, Wallner discloses the volumetric video processing method according to claim 15, further comprising: loading a 3D model of a nearby frame based on the frame number ("…one or more events associated with a frame identifier that is/are to be triggered upon display of the decoded frame from which the corresponding frame identifier has been parsed," ¶0080; "It will be appreciated that the frame identifiers in frames intended for flat video are placed by the decoder in the same position in processor-accessible memory as frame identifiers are placed in the equirectangular frames intended for 360 video. In this way, the media player can, subsequent to decoding, look to the same place in each frame for the frame identifiers," ¶0086).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 4, 6, 11, 13, 18 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Wallner.

Regarding claim 4, Wallner discloses the volumetric video processing system according to claim 1, wherein the timecode is a row of black polygon and white polygon (see e.g. figs. 3 and 5, ¶0057, ¶0065, ¶0068 and ¶0104). The black and white pixels in fig. 5 are shown embedded potentially on rectangular polygons, rather than squares (¶0068). However, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention (AIA) to design the shapes of the constituent bits in the timecode to be embedded in a square-shaped polygon, because the shape is merely a design choice that does not raise any issue of criticality.

Regarding claim 6, Wallner discloses the volumetric video processing system according to claim 1, wherein the timecode is a column of black squares and white polygons. The black and white pixels in fig. 5 are shown embedded potentially on rectangular polygons, rather than squares (¶0068). However, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention (AIA) to design the shapes of the constituent bits in the timecode to be embedded in a square-shaped polygon, because the shape is merely a design choice that does not raise any issue of criticality.
Regarding method claims 11 and 13, although the wording is different, the material is considered substantively equivalent to system claims 4 and 6, respectively, as described above.

Regarding claim 18, Wallner discloses the volumetric video processing method according to claim 15, wherein the timecode is a row of black polygon and white polygon. The black and white pixels in fig. 5 are shown embedded potentially on rectangular polygons, rather than squares (¶0068). However, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention (AIA) to design the shapes of the constituent bits in the timecode to be embedded in a square-shaped polygon, because the shape is merely a design choice that does not raise any issue of criticality.

Regarding claim 19, Wallner discloses the volumetric video processing method according to claim 15, wherein the timecode is a column of black squares and white polygons. The black and white pixels in fig. 5 are shown embedded potentially on rectangular polygons, rather than squares (¶0068). However, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention (AIA) to design the shapes of the constituent bits in the timecode to be embedded in a square-shaped polygon, because the shape is merely a design choice that does not raise any issue of criticality.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NURUN FLORA, whose telephone number is (571) 272-5742. The examiner can normally be reached M-F, 9:30 am - 5:00 pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jason Chan, can be reached at (571) 272-3022. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/NURUN FLORA/
Primary Examiner, Art Unit 2619

Prosecution Timeline

Aug 29, 2024
Application Filed
Feb 17, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592025: IMAGE RENDERING BASED ON LIGHT BAKING
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12586250: COMPRESSION AND DECOMPRESSION OF SUB-PRIMITIVE PRESENCE INDICATIONS FOR USE IN A RENDERING SYSTEM
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12586254: High-quality Rendering on Resource-constrained Devices based on View Optimized RGBD Mesh
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12579751: TECHNIQUES FOR PARALLEL EDGE DECIMATION OF A MESH
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12561896: INSERTING THREE-DIMENSIONAL OBJECTS INTO DIGITAL IMAGES WITH CONSISTENT LIGHTING VIA GLOBAL AND LOCAL LIGHTING INFORMATION
Granted Feb 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 86%
With Interview: 87% (+1.3%)
Median Time to Grant: 2y 1m
PTA Risk: Low
Based on 387 resolved cases by this examiner. Grant probability derived from career allow rate.
