Prosecution Insights
Last updated: April 19, 2026
Application No. 18/270,963

MMT SIGNALING FOR STREAMING OF VISUAL VOLUMETRIC VIDEO-BASED AND GEOMETRY-BASED POINT CLOUD MEDIA

Final Rejection §103
Filed: Jul 05, 2023
Examiner: KIM, WILLIAM JW
Art Unit: 2409
Tech Center: 2400 — Computer Networks
Assignee: InterDigital Patent Holdings, Inc.
OA Round: 4 (Final)
Grant Probability: 79% (Favorable)
OA Rounds: 5-6
To Grant: 2y 2m
With Interview: 94%

Examiner Intelligence

Career Allow Rate: 79% (above average; 352 granted / 448 resolved; +20.6% vs TC avg)
Interview Lift: +15.1% (strong; resolved cases with interview)
Avg Prosecution: 2y 2m (fast prosecutor; 16 currently pending)
Total Applications: 464 (career history, across all art units)

Statute-Specific Performance

§101: 8.8% (-31.2% vs TC avg)
§103: 50.7% (+10.7% vs TC avg)
§102: 10.5% (-29.5% vs TC avg)
§112: 17.3% (-22.7% vs TC avg)
Tech Center averages are estimates. Based on career data from 448 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Claims 19 and 29 are amended. Claims 19-21, 23-31, and 33-38 are presently pending. Applicant's arguments, see Remarks, filed 08 January 2026, with respect to the rejection(s) of claim(s) 19 and 29 under 35 USC 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Yip et al. (US 2021/0099754 A1).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 19-21, 23, 28-31, 33, and 38 are rejected under 35 U.S.C. 103 as being unpatentable over Oyman et al. (US 2021/0006614 A1) (of record, hereinafter Oyman), in view of Hinds et al. (US 2019/0028691 A1) (of record, hereinafter Hinds), further in view of Kitazato (US 2018/0192084 A1) (of record, hereinafter Kitazato), and further in view of Yip et al. (US 2021/0099754 A1) (hereinafter Yip).

Regarding Claim 19, Oyman discloses a method implemented in a receiving device for streaming Motion Picture Experts Group (MPEG) media content, [Fig. 9; ABST; 0021-22: adaptive mechanisms for distributing point cloud content based on MPEG technologies] the method comprising: receiving, from a sending device, [Figs. 2, 7-9] an asset group message including asset descriptor data describing one or more asset groups that are available to be streamed, [Figs. 7-9; 0042-45: V-PCC wherein viewport indications for a video may be provided by means of a Media Presentation Description (MPD); 0067-70: server 704 may provide MPD to a client 706] wherein the asset descriptor data includes a field indicating respective data types associated with each asset of the one or more asset groups; [0035-37: V-PCC bitstream information includes various encoded type information such as occupancy, geometry, attribute, patch, etc.] sending, to the sending device, an asset selection message including a request for at least a subset of assets of the identified one or more asset groups that are available to be streamed, wherein the asset selection message includes a respective unique asset identifier associated with each of the requested subset of assets; [Figs. 7-9; 0043-46, 0071: client 706 obtains viewpoint information from a user device and parses MPD to determine the specific AdaptationSet and Representation covering the viewport information and issues a request for the associated segments accordingly; 0045: MPD fragments are requested by fragment URLs; 0050-66, 0219-221: viewports may be identified by relative positionings as well as point cloud object identifiers (i.e., some respective identifier exists for each corresponding viewport V-PCC object)] receiving, from the sending device, in response to the asset selection message, a packet flow comprising one or more packets; [Figs. 7-9; 0045, 0072: server 704 may provide segments requested by client 706 over network 710; 0117-118, 0175: where systems may provide content over packet-based networks encoded in any transport protocol] and processing the one or more packets to recover at least a portion of the requested subset of assets of the one or more asset groups. [Figs. 7-9; 0037-41, 0045, 0072: server 704 may provide segments requested by client 706 over network 710 to be subsequently received and decapsulated by the client; 0117-118, 0175: where systems may provide content over packet-based networks encoded in any transport protocol]

Oyman fails to explicitly disclose a method implemented in a receiving device for streaming Motion Picture Experts Group (MPEG) Media Transport Protocol (MMTP) media content; receiving, from the sending device, in response to the asset selection message, an MMTP packet flow comprising one or more MMTP packets; and processing the one or more MMTP packets to recover at least a portion of the requested subset of assets of the one or more asset groups.
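The claim 19 exchange mapped above (an asset group message carrying a group identifier and per-asset data types, an asset selection message carrying unique asset identifiers, then a responsive packet flow) can be sketched as plain data structures. This is only an illustrative sketch: every class and field name below is an assumption for readability, not the actual MMT signaling syntax from the claims or the cited references.

```python
from dataclasses import dataclass
from typing import List, Set

# Illustrative models of the claimed message exchange; names are
# assumptions, not the real MMT message syntax.

@dataclass
class AssetDescriptor:
    asset_id: str        # unique asset identifier
    data_type: str       # e.g. "geometry", "attribute", "occupancy"

@dataclass
class AssetGroupMessage:
    asset_group_id: int              # field identifying the asset group
    assets: List[AssetDescriptor]    # per-asset data-type field

@dataclass
class AssetSelectionMessage:
    requested_asset_ids: List[str]   # one unique identifier per requested asset

def select_assets(group: AssetGroupMessage,
                  wanted_types: Set[str]) -> AssetSelectionMessage:
    """Receiver picks a subset of the advertised assets by data type."""
    ids = [a.asset_id for a in group.assets if a.data_type in wanted_types]
    return AssetSelectionMessage(requested_asset_ids=ids)

group = AssetGroupMessage(
    asset_group_id=1,
    assets=[AssetDescriptor("a0", "geometry"),
            AssetDescriptor("a1", "attribute"),
            AssetDescriptor("a2", "occupancy")],
)
sel = select_assets(group, {"geometry", "occupancy"})
print(sel.requested_asset_ids)  # ['a0', 'a2']
```

The point of the sketch is the shape of the signaling: the group message advertises what exists (group id plus typed assets), and the selection message names only the wanted subset by unique identifier.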
(Emphasis on the particular elements of the limitations not explicitly disclosed by Oyman – namely the specific use of the MPEG Media Transport Protocol.)

Hinds, in analogous art, teaches a method implemented in a receiving device for streaming Motion Picture Experts Group (MPEG) Media Transport Protocol (MMTP) media content; receiving, from the sending device, in response to the asset selection message, an MMTP packet flow comprising one or more MMTP packets; and processing the one or more MMTP packets to recover at least a portion of the requested subset of assets of the one or more asset groups. [0076-77, 0094-95: wherein provision of point cloud information for 360-degree video that allows for changes in point-of-view (e.g., the viewport changes for the 360-degree video of Oyman above) may be implemented in any existing MPEG technologies including DASH, MMT, etc.]

It would have been obvious to one of ordinary skill in the art prior to the filing date of the invention to modify the method of Oyman with the teachings of Hinds to specify streaming via the MPEG Media Transport protocol (i.e., MMT/MMTP), as it is a known existing MPEG-based technology that may support streaming of 360-degree video. [Hinds – 0095-96]

Oyman and Hinds fail to explicitly disclose each of the one or more MMTP packets that include respective packet identifiers; identifying the one or more MMTP packets based on the respective packet identifiers to retrieve one or more Media Processing Units (MPUs), each of the one or more MPUs comprising one of the unique asset identifiers associated with one of the requested subset of assets; and processing, based on the unique asset identifiers, the one or more MPUs to recover at least a portion of the requested subset of assets of the one or more asset groups.
Kitazato, in analogous art, teaches each of the one or more MMTP packets that include respective packet identifiers; [0204, 0246, 0252, 0259, 0289: MMTP packets are associated with unique packet identifiers] identifying the one or more MMTP packets based on the respective packet identifiers to retrieve one or more Media Processing Units (MPUs), each of the one or more MPUs comprising one of the unique asset identifiers associated with one of the requested subset of assets; [Fig. 4; 0094-95: where a receiver in an MMTP system may receive an MMTP stream and demultiplex the received signal; 0119-121, 0204, 0246, 0252, 0259: MMTP packets are identified according to packet identifiers, and each MMTP packet is encoded as MPUs comprising data according to asset identifiers] and processing, based on the unique asset identifiers, the one or more MPUs to recover at least a portion of the requested subset of assets of the one or more asset groups. [0119-121, 0138: assets are processed according to asset identifiers in respective MPUs]

It would have been obvious to one of ordinary skill in the art prior to the filing date of the invention to modify the method of Oyman and Hinds with the teachings of Kitazato to specify processing the MMTP packets according to respective identifiers, MPUs, and asset identifiers, as such elements are known to manage the production of content and configuration of broadcast transport data within the MMT format. [Kitazato – 0002-4]

Oyman, Hinds, and Kitazato fail to explicitly disclose wherein the asset descriptor data includes at least a first field identifying the one or more asset groups, and a second field indicating respective data types associated with assets of the one or more asset groups.
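The Kitazato-style receiver chain (identify MMTP packets by packet identifier, reassemble MPUs, then process each MPU by its asset identifier) can be sketched as a minimal demultiplexer. The packet layout and every name here are illustrative assumptions, not the actual MMTP wire format.

```python
from collections import defaultdict
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class MmtpPacket:
    packet_id: int     # identifies the sub-flow this packet belongs to
    payload: bytes     # fragment of an MPU

@dataclass
class Mpu:
    asset_id: str      # unique asset identifier carried with the MPU
    data: bytes

def demultiplex(packets: List[MmtpPacket],
                packet_to_asset: Dict[int, str]) -> List[Mpu]:
    """Group packets by packet_id in arrival order, then emit one MPU per
    sub-flow, tagged with the asset identifier mapped to that packet_id."""
    flows: Dict[int, bytearray] = defaultdict(bytearray)
    for p in packets:
        flows[p.packet_id] += p.payload
    return [Mpu(asset_id=packet_to_asset[pid], data=bytes(buf))
            for pid, buf in sorted(flows.items())]

packets = [MmtpPacket(10, b"geo-"), MmtpPacket(11, b"att-"),
           MmtpPacket(10, b"part2")]
mpus = demultiplex(packets, {10: "asset-geometry", 11: "asset-attribute"})
print([(m.asset_id, m.data) for m in mpus])
# [('asset-geometry', b'geo-part2'), ('asset-attribute', b'att-')]
```

In this sketch the packet identifier does the routing and the asset identifier does the recovery, which mirrors the two-level identification the rejection attributes to Kitazato.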
(Emphasis on the particular elements of the limitations not explicitly disclosed by Oyman, Hinds, and Kitazato – namely the two fields in the asset descriptor data as claimed.)

Yip, in analogous art, teaches wherein the asset descriptor data includes at least a first field identifying the one or more asset groups, and a second field indicating respective data types associated with assets of the one or more asset groups. [0215-228, 0236: with respect to Table 5, VPCCAssetGroupMessage is signaled by an MMT transmitting entity (such as that of Oyman, Hinds, and Kitazato above), where such a group message includes an asset_group_id field, as well as a data_type field for signaling VPCC data type assets from the provided asset list]

It would have been obvious to one of ordinary skill in the art prior to the filing date of the invention to modify the method of Oyman, Hinds, and Kitazato with the teachings of Yip to specify first and second fields within asset descriptor data indicating asset group identifier and asset data type, respectively, as it is understood that such a VPCC Asset Group message is a mandatory signaling message required within the MMT protocol when transmitting VPCC-encoded content. [Yip – 0217]

Regarding Claim 20, Oyman, Hinds, Kitazato, and Yip disclose all of the limitations of Claim 19, which are analyzed as previously discussed. Furthermore, Oyman, Hinds, and Kitazato disclose sending, to the sending device, a viewport change message including an indication of a current viewport of the receiving device; [Oyman – Figs. 7-9; 0043-46, 0071: client 706 obtains viewpoint information from a user device and parses MPD to determine the specific AdaptationSet and Representation covering the viewport information and issues a request for the associated segments accordingly; Hinds – 0077, 0095: viewport changes may be determined by various inputs, including relative movement of user HMD, or manual input/rotation] and receiving another asset group message that includes updated asset descriptor data describing one or more asset groups that are available to be streamed based on the current viewport of the receiving device. [Oyman – Figs. 7-9; 0042-45: V-PCC wherein viewport indications for a video may be provided by means of a Media Presentation Description (MPD); 0047-52: viewport-dependent streaming may provide FOV tiles in high quality, and other tiles in lower quality; 0067-70: server 704 may provide MPD to a client 706; 0074: wherein for live presentations, changes in viewports may be signaled via regular MPD updates; Hinds – 0068: foveated rendering isolates a center tile and areas around the exact center are provided in slightly lower resolution]

Regarding Claim 21, Oyman, Hinds, Kitazato, and Yip disclose all of the limitations of Claim 19, which are analyzed as previously discussed. Furthermore, Oyman, Hinds, and Kitazato disclose wherein the requested at least the subset of assets of the one or more asset groups that are available to be streamed are selected by the receiving device based on a current viewport of the receiving device. [Oyman – Figs. 7-9; 0043-46, 0071: client 706 obtains viewpoint information from a user device and parses MPD to determine the specific AdaptationSet and Representation covering the viewport information and issues a request for the associated segments accordingly; Hinds – 0077, 0095: viewport changes may be determined by various inputs, including relative movement of user HMD, or manual input/rotation]

Regarding Claim 23, Oyman, Hinds, Kitazato, and Yip disclose all of the limitations of Claim 19, which are analyzed as previously discussed. Furthermore, Hinds discloses wherein the sent asset selection message includes information identifying an application intended to consume the requested subset of assets. [0088-91: client requests for content may include client profile information which may include description/requirements of the client application itself (i.e., some information identifying the application)]

Regarding Claim 28, Oyman, Hinds, Kitazato, and Yip disclose all of the limitations of Claim 19, which are analyzed as previously discussed. Furthermore, Oyman, Hinds, and Kitazato disclose wherein the asset group message includes information indicating one or more of: a dependency of an asset upon another asset for decoding; an indication of the another asset upon which the asset is dependent; whether the asset has an alternate version; and an identification of the alternate version of the asset. [Oyman – 0042: MPD indicates different bitrates/frame rates/resolutions/codec types, etc. to react to changes of device state; 0213-214: manifest file includes hierarchical levels of different viewports; Hinds – 0013: different quality versions of content may be created and provided to account for network conditions; 0076: hierarchical manifest]

Regarding Claim 29, Claim 29 recites a device that performs the method of Claim 19. As such, Claim 29 is analyzed and rejected similarly as Claim 19, mutatis mutandis. (See also, Oyman [Figs. 2, 11, 13-14] and accompanying descriptions.)

Regarding Claim 30, Oyman, Hinds, Kitazato, and Yip disclose all of the limitations of Claim 29, which are analyzed as previously discussed. Furthermore, Claim 30 recites nearly identical limitations as Claim 20 and is rejected similarly as that claim.

Regarding Claim 31, Oyman, Hinds, Kitazato, and Yip disclose all of the limitations of Claim 29, which are analyzed as previously discussed. Furthermore, Claim 31 recites nearly identical limitations as Claim 21 and is rejected similarly as that claim.

Regarding Claim 33, Oyman, Hinds, Kitazato, and Yip disclose all of the limitations of Claim 29, which are analyzed as previously discussed. Furthermore, Claim 33 recites nearly identical limitations as Claim 23 and is rejected similarly as that claim.

Regarding Claim 38, Oyman, Hinds, Kitazato, and Yip disclose all of the limitations of Claim 29, which are analyzed as previously discussed. Furthermore, Claim 38 recites nearly identical limitations as Claim 28 and is rejected similarly as that claim.

Claim(s) 24-27 and 34-37 is/are rejected under 35 U.S.C. 103 as being unpatentable over Oyman, Hinds, Kitazato, and Yip as applied to claims 19 and 29, respectively above, and further in view of Oh (US 2021/0005006 A1) (of record, hereinafter Oh).

Regarding Claim 24, Oyman, Hinds, Kitazato, and Yip disclose all of the limitations of Claim 19, which are analyzed as previously discussed. Furthermore, Oyman discloses wherein the asset descriptor data describing the one or more asset groups that are available to be streamed describes video-based point cloud coding (V-PCC) data. [0024, 0032: volumetric video may be represented as point clouds, such as V-PCC architecture] Oyman, Hinds, Kitazato, and Yip fail to explicitly disclose volumetric video-based coding (V3C) data. Oh, in analogous art, teaches volumetric video-based coding (V3C) data.
[0079, 0083: V-PCC may be the same as Visual Volumetric Video-based Coding (V3C)] It would have been obvious to one of ordinary skill in the art prior to the filing date of the invention to modify the method of Oyman, Hinds, Kitazato, and Yip with the teachings of Oh to specify V3C data, as it is readily understood that V-PCC and V3C are terms used complementarily for Video-based Point Cloud Compression. [Oh – 0083]

Regarding Claim 25, Oyman, Hinds, Kitazato, Yip, and Oh disclose all of the limitations of Claim 24, which are analyzed as previously discussed. Furthermore, Oyman and Oh disclose wherein the respective data types associated with each asset of the one or more asset groups are one of atlas component data, occupancy component data, geometry component data, attribute component data, dynamic volumetric timed-metadata information, or viewport timed-metadata information. [Oyman – 0035-37: V-PCC bitstream information includes various encoded type information such as occupancy, geometry, attribute, patch, etc.; Oh – 0008-11, 0016-17, 0021: PCC data may carry geometry data and attribute data and occupancy data]

Regarding Claim 26, Oyman, Hinds, Kitazato, and Yip disclose all of the limitations of Claim 19, which are analyzed as previously discussed. Furthermore, Oyman discloses wherein the asset descriptor data describing the one or more asset groups that are available to be streamed describes video-based point cloud coding (V-PCC) data. [0024, 0032: volumetric video may be represented as point clouds, such as V-PCC architecture] Oyman, Hinds, Kitazato, and Yip fail to explicitly disclose geometry-based point cloud compression (G-PCC) data. Oh, in analogous art, teaches geometry-based point cloud compression (G-PCC) data. [0079-82, 0365: where point cloud video encoders may support V-PCC or geometry-based point cloud compression (G-PCC)]

It would have been obvious to one of ordinary skill in the art prior to the filing date of the invention to modify the method of Oyman, Hinds, Kitazato, and Yip with the teachings of Oh to specify G-PCC data, as it is readily understood that G-PCC is an alternative encoding scheme to V-PCC/V3C for encoding and signaling point cloud data (such as that of Oyman and Hinds). [Oh – 0082]

Regarding Claim 27, Oyman, Hinds, Kitazato, Yip, and Oh disclose all of the limitations of Claim 26, which are analyzed as previously discussed. Furthermore, Oyman and Oh disclose wherein the respective data types associated with each asset of the one or more asset groups are one of geometry data; attribute data; attribute parameter set data; sequence parameter set data; geometry parameter set data; tile inventory data; frame boundary marker data; default data; or three-dimensional spatial region timed metadata information. [Oyman – 0035-37: V-PCC bitstream information includes various encoded type information such as occupancy, geometry, attribute, patch, etc.; Oh – 0008-11, 0016-17, 0021: PCC data may carry geometry data and attribute data and occupancy data]

Regarding Claim 34, Oyman, Hinds, Kitazato, and Yip disclose all of the limitations of Claim 29, which are analyzed as previously discussed. Furthermore, Claim 34 recites nearly identical limitations as Claim 24 and is rejected similarly as that claim.

Regarding Claim 35, Oyman, Hinds, Kitazato, Yip, and Oh disclose all of the limitations of Claim 34, which are analyzed as previously discussed. Furthermore, Claim 35 recites nearly identical limitations as Claim 25 and is rejected similarly as that claim.

Regarding Claim 36, Oyman, Hinds, Kitazato, and Yip disclose all of the limitations of Claim 29, which are analyzed as previously discussed.
Furthermore, Claim 36 recites nearly identical limitations as Claim 26 and is rejected similarly as that claim.

Regarding Claim 37, Oyman, Hinds, Kitazato, Yip, and Oh disclose all of the limitations of Claim 36, which are analyzed as previously discussed. Furthermore, Claim 37 recites nearly identical limitations as Claim 27 and is rejected similarly as that claim.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Choi et al. (KR 20130085627 A)
Suzuki et al. (JP 2016103813 A1)
Suzuki et al. (JP 2016111466 A)
The above art also discloses various MMT packet signaling with fields that identify both asset group identifiers and asset data types.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to WILLIAM J KIM whose telephone number is (571)272-2767. The examiner can normally be reached 9:30am - 5:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Hadi Armouche, can be reached at (571) 270-3618. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/WILLIAM J KIM/
Primary Examiner, Art Unit 2409

Prosecution Timeline

Jul 05, 2023
Application Filed
Jul 05, 2023
Response after Non-Final Action
Jan 30, 2025
Non-Final Rejection — §103
May 05, 2025
Response Filed
Jun 03, 2025
Final Rejection — §103
Sep 05, 2025
Request for Continued Examination
Oct 02, 2025
Response after Non-Final Action
Oct 06, 2025
Non-Final Rejection — §103
Jan 08, 2026
Response Filed
Feb 09, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598351: METHODS, SYSTEMS, AND APPARATUSES FOR SCALABLE CONTENT DATA UPDATING
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12594887: TECHNIQUES FOR DISPLAYING CONTENT WITH A LIVE VIDEO FEED
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12587701: METHODS AND SYSTEMS FOR SYNCHRONIZING PLAYBACK OF MEDIA CONTENT ITEMS
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12574587: METHODS AND SYSTEMS FOR GROUP WATCHING
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12563251: CONTENT DELIVERY
Granted Feb 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
79%
Grant Probability
94%
With Interview (+15.1%)
2y 2m
Median Time to Grant
High
PTA Risk
Based on 448 resolved cases by this examiner. Grant probability derived from career allow rate.
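The projected figures above follow from simple arithmetic on the examiner's career data, assuming (as the note suggests) that the grant probability is the career allow rate and that the interview lift is added directly to it:

```python
# Career data from the dashboard: 352 granted of 448 resolved cases,
# with an observed +15.1% interview lift. The additive model below is
# an assumption about how the dashboard combines these numbers.
granted, resolved = 352, 448
interview_lift = 0.151

base = granted / resolved
print(round(base * 100))                    # 79  (grant probability)
print(round((base + interview_lift) * 100)) # 94  (with interview)
```

Both rounded values match the displayed 79% and 94%, which is consistent with the additive assumption.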
