Prosecution Insights
Last updated: April 19, 2026
Application No. 18/989,612

DATA PROCESSING METHOD AND RELATED DEVICE FOR POINT CLOUD MEDIA

Non-Final OA (§102, §103)
Filed: Dec 20, 2024
Examiner: WONG, ALLEN C
Art Unit: 2488
Tech Center: 2400 (Computer Networks)
Assignee: Tencent Technology (Shenzhen) Company Limited
OA Round: 1 (Non-Final)

Grant probability: 83% (Favorable)
Expected OA rounds: 1-2
Expected time to grant: 2y 11m
Grant probability with interview: 95%

Examiner Intelligence

Career allow rate: 83%, above average (669 granted / 805 resolved; +25.1% vs TC avg)
Interview lift: +11.8% (moderate, roughly +12%), measured on resolved cases with interview
Typical timeline: 2y 11m average prosecution; 27 applications currently pending
Career history: 832 total applications across all art units
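The career figures above can be sanity-checked in a few lines. This is just the dashboard's own arithmetic replayed; the implied Tech Center average assumes the +25.1% delta is in absolute percentage points, which the page does not state explicitly.

```python
# Sanity-check of the dashboard arithmetic (figures taken from this page).
granted = 669
resolved = 805

allow_rate = granted / resolved            # career allowance rate
print(f"allow rate: {allow_rate:.1%}")     # ~83.1%, shown on the page as 83%

# The page reports +25.1% vs the TC average; if that delta is in
# absolute percentage points, the implied TC 2400 average is:
tc_avg = allow_rate - 0.251
print(f"implied TC average: {tc_avg:.1%}")
```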

Statute-Specific Performance

§101: 12.4% (-27.6% vs TC avg)
§102: 16.5% (-23.5% vs TC avg)
§103: 41.6% (+1.6% vs TC avg)
§112: 9.8% (-30.2% vs TC avg)

Deltas are relative to the Tech Center average estimate. Based on career data from 805 resolved cases.

Office Action

Grounds of rejection: §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 12/20/24 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement has been considered by the examiner.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 7, 11, 14 and 18-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Takahashi (US 2023/0291918).
Regarding claim 1, Takahashi discloses a data processing method for point cloud media (paragraph [273], fig.21, Takahashi discloses processing point cloud data, in that element 400 decodes a G-PCC file as encoded by the embodiment of fig.19), the method being executed by a media processing device (paragraph [273], fig.21, Takahashi discloses processing point cloud data, in that element 400 decodes a G-PCC file as encoded by the embodiment of fig.19) and comprising: obtaining a media file of point cloud media (paragraph [276], Takahashi discloses element 411 obtains a media content file that comprises the point cloud data), the media file comprising a point cloud bitstream and cross-attribute dependency indication information of the point cloud media (paragraph [277], Takahashi discloses element 421 processes the obtained media file, as obtained by element 411, by extracting the bitstream of the encoded point cloud data, and wherein paragraph [91], fig.3, Takahashi discloses the presence of independent slice units and dependent slice units, and the dependency relationships of the slice units relative to other slice units, wherein paragraph [93], Takahashi discloses that attribute data of dependent slice #2 is dependent on attribute data of independent slice #1, and paragraph [94], Takahashi discloses that attribute data of dependent slice #6 is dependent on attribute data of dependent slice #2, and attribute data of dependent slice #6 is also indirectly dependent on attribute data of independent slice #1, and paragraph [95], Takahashi discloses that attribute data of dependent slice #3 is dependent on attribute data of independent slice #1, and paragraph [96], Takahashi discloses that attribute data of dependent slice #7 is dependent on attribute data of dependent slice #3, and attribute data of dependent slice #7 is also indirectly dependent on attribute data of independent slice #1, and paragraph [108], Takahashi discloses GPCCDecoderConfigurationRecord includes 
data from attribute parameter set (APS) and a tile inventory depending on a sample entry type as illustrated as fig.3 for clearly associating the dependency information of the attribute data onto the GPCCDecoderConfigurationRecord in the metadata area of the media content file for providing the proper information to the decoder for establishing the cross-attribute dependency indication information based on encoding/decoding relationship between attribute data in the point cloud bitstream), and the cross-attribute dependency indication information being for indicating an encoding and decoding dependency relationship between attribute data in the point cloud bitstream (paragraph [91], fig.3, Takahashi discloses the presence of independent slice units and dependent slice units, and the dependency relationships of the slice units relative to other slice units, wherein paragraph [93], Takahashi discloses that attribute data of dependent slice #2 is dependent on attribute data of independent slice #1, and paragraph [94], Takahashi discloses that attribute data of dependent slice #6 is dependent on attribute data of dependent slice #2, and attribute data of dependent slice #6 is also indirectly dependent on attribute data of independent slice #1, and paragraph [95], Takahashi discloses that attribute data of dependent slice #3 is dependent on attribute data of independent slice #1, and paragraph [96], Takahashi discloses that attribute data of dependent slice #7 is dependent on attribute data of dependent slice #3, and attribute data of dependent slice #7 is also indirectly dependent on attribute data of independent slice #1, and paragraph [108], Takahashi discloses GPCCDecoderConfigurationRecord includes data from attribute parameter set (APS) and a tile inventory depending on a sample entry type as illustrated as fig.3 for clearly associating the dependency information of the attribute data onto the GPCCDecoderConfigurationRecord in the metadata area of the media 
content file for providing the proper information to the decoder for establishing the cross-attribute dependency indication information based on encoding/decoding relationship between attribute data in the point cloud bitstream); and decoding the point cloud bitstream based on the cross-attribute dependency indication information to present the point cloud media (paragraph [278], Takahashi discloses element 422 decodes the encoded bitstream as outputted by element 421 to generate the attribute data to the presentation information generation unit 423 for constructing the point cloud to be presented, wherein the output of element 423 is sent to element 413, and that paragraph [279], Takahashi discloses element 413 processes the point cloud data to be supplied to a display device for viewing the point cloud data; paragraph [91], fig.3, Takahashi discloses the presence of independent slice units and dependent slice units, and the dependency relationships of the slice units relative to other slice units, wherein paragraph [93], Takahashi discloses that attribute data of dependent slice #2 is dependent on attribute data of independent slice #1, and paragraph [94], Takahashi discloses that attribute data of dependent slice #6 is dependent on attribute data of dependent slice #2, and attribute data of dependent slice #6 is also indirectly dependent on attribute data of independent slice #1, and paragraph [95], Takahashi discloses that attribute data of dependent slice #3 is dependent on attribute data of independent slice #1, and paragraph [96], Takahashi discloses that attribute data of dependent slice #7 is dependent on attribute data of dependent slice #3, and attribute data of dependent slice #7 is also indirectly dependent on attribute data of independent slice #1, and paragraph [108], Takahashi discloses GPCCDecoderConfigurationRecord includes data from attribute parameter set (APS) and a tile inventory depending on a sample entry type as illustrated as fig.3 for 
clearly associating the dependency information of the attribute data onto the GPCCDecoderConfigurationRecord in the metadata area of the media content file for providing the proper information to the decoder for establishing the cross-attribute dependency indication information based on encoding/decoding relationship between attribute data in the point cloud bitstream).

Regarding claim 7, Takahashi discloses wherein the media file comprises one or more attribute component tracks (paragraph [68], Takahashi discloses that L-HEVC file format can have a file with a single or plural individual tracks of ISOBMFF, and paragraph [122], Takahashi discloses that in the media file, there can be one or more tracks for storing attribute components), and the attribute data having the encoding and decoding dependency relationship in the point cloud bitstream is in different attribute component tracks (paragraph [228], fig.17, Takahashi discloses a group tracks are illustrated with track dependency relationship for compression and decompression in that the slices stored can be inputted into different tracks, wherein paragraph [226], Takahashi discloses that independent slice stored in Track 1 is utilized for decoding the dependent slice of Track 2, and also independent slice stored in Track 1 is utilized for decoding the dependent slice of Track 3; paragraph [91], fig.3, Takahashi discloses the presence of independent slice units and dependent slice units, and the dependency relationships of the slice units relative to other slice units, wherein paragraph [93], Takahashi discloses that attribute data of dependent slice #2 is dependent on attribute data of independent slice #1, and paragraph [94], Takahashi discloses that attribute data of dependent slice #6 is dependent on attribute data of dependent slice #2, and attribute data of dependent slice #6 is also indirectly dependent on attribute data of independent slice #1, and paragraph [95], Takahashi discloses that attribute data
of dependent slice #3 is dependent on attribute data of independent slice #1, and paragraph [96], Takahashi discloses that attribute data of dependent slice #7 is dependent on attribute data of dependent slice #3, and attribute data of dependent slice #7 is also indirectly dependent on attribute data of independent slice #1, and paragraph [108], Takahashi discloses GPCCDecoderConfigurationRecord includes data from attribute parameter set (APS) and a tile inventory depending on a sample entry type as illustrated as fig.3 for clearly associating the dependency information of the attribute data onto the GPCCDecoderConfigurationRecord in the metadata area of the media content file for providing the proper information to the decoder for establishing the cross-attribute dependency indication information based on encoding/decoding relationship between attribute data in the point cloud bitstream); and an association relationship between the different attribute component tracks is represented by a track group (paragraph [228], fig.17, Takahashi discloses a group tracks are illustrated with track dependency relationship for compression and decompression in that the slices stored can be inputted into different tracks, wherein paragraph [226], Takahashi discloses that independent slice stored in Track 1 is utilized for decoding the dependent slice of Track 2, and also independent slice stored in Track 1 is utilized for decoding the dependent slice of Track 3, thus establishing an association relationship with Tracks 1-3 within the track group; paragraph [91], fig.3, Takahashi discloses the presence of independent slice units and dependent slice units, and the dependency relationships of the slice units relative to other slice units, wherein paragraph [93], Takahashi discloses that attribute data of dependent slice #2 is dependent on attribute data of independent slice #1, and paragraph [94], Takahashi discloses that attribute data of dependent slice #6 is dependent on 
attribute data of dependent slice #2, and attribute data of dependent slice #6 is also indirectly dependent on attribute data of independent slice #1, and paragraph [95], Takahashi discloses that attribute data of dependent slice #3 is dependent on attribute data of independent slice #1, and paragraph [96], Takahashi discloses that attribute data of dependent slice #7 is dependent on attribute data of dependent slice #3, and attribute data of dependent slice #7 is also indirectly dependent on attribute data of independent slice #1, and paragraph [108], Takahashi discloses GPCCDecoderConfigurationRecord includes data from attribute parameter set (APS) and a tile inventory depending on a sample entry type as illustrated as fig.3 for clearly associating the dependency information of the attribute data onto the GPCCDecoderConfigurationRecord in the metadata area of the media content file for providing the proper information to the decoder for establishing the cross-attribute dependency indication information based on encoding/decoding relationship between attribute data in the point cloud bitstream). 
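The slice dependencies the examiner repeatedly cites from Takahashi's fig. 3 (attribute data of slices #2 and #3 depends on independent slice #1; #6 depends on #2; #7 depends on #3) form a small directed graph, and the decode order the claims describe amounts to resolving that graph, dependencies first. A minimal sketch, with the graph hard-coded from the cited paragraphs and the function name purely illustrative:

```python
# Slice dependency graph per Takahashi fig. 3 as cited above:
# slice -> the slice its attribute data depends on (None = independent).
DEPENDS_ON = {1: None, 2: 1, 3: 1, 6: 2, 7: 3}

def decode_order(slice_id, deps=DEPENDS_ON, _seen=None):
    """Return the slices to decode, depended-upon slices first."""
    if _seen is None:
        _seen = []
    parent = deps[slice_id]
    if parent is not None and parent not in _seen:
        decode_order(parent, deps, _seen)   # resolve the dependency chain
    if slice_id not in _seen:
        _seen.append(slice_id)
    return _seen

print(decode_order(6))  # [1, 2, 6]: independent slice first, then #2, then #6
print(decode_order(7))  # [1, 3, 7]: #7's indirect dependency on #1 is resolved
```

This is also why slice #6 is "indirectly dependent" on slice #1 in the cited paragraphs: resolving #6 forces #2, which in turn forces #1.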
Regarding claim 11, Takahashi discloses wherein the media file comprises a track (paragraph [68], Takahashi discloses that L-HEVC file format can have a file with a single or plural individual tracks of ISOBMFF, and paragraph [122], Takahashi discloses that in the media file, there can be one or more tracks), the track contains one or more samples (paragraph [199], Takahashi discloses that plural samples can be stored in the tracks; paragraph [107], fig.4, Takahashi discloses the file structure of the media content file in that the GPCC track with a MediaDataBox that comprises multiple samples), and each sample corresponds to one frame in the point cloud media (paragraph [90], Takahashi discloses that each sample corresponds to one frame of a moving image in a point cloud); one sample is divided into one or more slices (paragraph [91], fig.3, Takahashi discloses a sample can comprise multiple slices with slice #1 as the independent slice and slices #2, 3, 6 and 7 are dependent slices), and each of the one or more slices is represented by one subsample (paragraph [249], Takahashi discloses that file generation unit 315 can set a subsample for each individual slice, and specify slice information onto the subsample information box); and the cross-attribute dependency indication information (paragraph [91], fig.3, Takahashi discloses the presence of independent slice units and dependent slice units, and the dependency relationships of the slice units relative to other slice units, wherein paragraph [93], Takahashi discloses that attribute data of dependent slice #2 is dependent on attribute data of independent slice #1, and paragraph [94], Takahashi discloses that attribute data of dependent slice #6 is dependent on attribute data of dependent slice #2, and attribute data of dependent slice #6 is also indirectly dependent on attribute data of independent slice #1, and paragraph [95], Takahashi discloses that attribute data of dependent slice #3 is dependent on 
attribute data of independent slice #1, and paragraph [96], Takahashi discloses that attribute data of dependent slice #7 is dependent on attribute data of dependent slice #3, and attribute data of dependent slice #7 is also indirectly dependent on attribute data of independent slice #1, and paragraph [108], Takahashi discloses GPCCDecoderConfigurationRecord includes data from attribute parameter set (APS) and a tile inventory depending on a sample entry type as illustrated as fig.3 for clearly associating the dependency information of the attribute data onto the GPCCDecoderConfigurationRecord in the metadata area of the media content file for providing the proper information to the decoder for establishing the cross-attribute dependency indication information based on encoding/decoding relationship between attribute data in the point cloud bitstream) is set in a subsample information data box (paragraph [249], Takahashi discloses that file generation unit 315 can set a subsample for each individual slice, and specify slice information onto the subsample information box). 
Regarding claim 14, Takahashi discloses wherein the point cloud media is transmitted through streaming (paragraph [309], Takahashi discloses that the point cloud media is transmitted with MPEG -DASH, in that DASH is Dynamic Adaptive Streaming over HTTP (hypertext protocol), thus streaming video data over HTTP), and the obtaining the media file of point cloud media comprises: obtaining transmission signaling of the point cloud media (paragraph [276], Takahashi discloses element 411 obtains a media content file that comprises the point cloud data), the transmission signaling containing the cross-attribute dependency indication information (paragraph [277], Takahashi discloses element 421 processes the obtained media file, as obtained by element 411, by extracting the bitstream of the encoded point cloud data, and wherein paragraph [91], fig.3, Takahashi discloses the presence of independent slice units and dependent slice units, and the dependency relationships of the slice units relative to other slice units, wherein paragraph [93], Takahashi discloses that attribute data of dependent slice #2 is dependent on attribute data of independent slice #1, and paragraph [94], Takahashi discloses that attribute data of dependent slice #6 is dependent on attribute data of dependent slice #2, and attribute data of dependent slice #6 is also indirectly dependent on attribute data of independent slice #1, and paragraph [95], Takahashi discloses that attribute data of dependent slice #3 is dependent on attribute data of independent slice #1, and paragraph [96], Takahashi discloses that attribute data of dependent slice #7 is dependent on attribute data of dependent slice #3, and attribute data of dependent slice #7 is also indirectly dependent on attribute data of independent slice #1, and paragraph [108], Takahashi discloses GPCCDecoderConfigurationRecord includes data from attribute parameter set (APS) and a tile inventory depending on a sample entry type as illustrated as 
fig.3 for clearly associating the dependency information of the attribute data onto the GPCCDecoderConfigurationRecord in the metadata area of the media content file for providing the proper information to the decoder for establishing the cross-attribute dependency indication information based on encoding/decoding relationship between attribute data in the point cloud bitstream); and obtaining the media file of the point cloud media based on the transmission signaling (paragraph [277], Takahashi discloses element 421 processes the obtained media file, as obtained by element 411, by extracting the bitstream of the encoded point cloud data, and wherein paragraph [91], fig.3, Takahashi discloses the presence of independent slice units and dependent slice units, and the dependency relationships of the slice units relative to other slice units, wherein paragraph [93], Takahashi discloses that attribute data of dependent slice #2 is dependent on attribute data of independent slice #1, and paragraph [94], Takahashi discloses that attribute data of dependent slice #6 is dependent on attribute data of dependent slice #2, and attribute data of dependent slice #6 is also indirectly dependent on attribute data of independent slice #1, and paragraph [95], Takahashi discloses that attribute data of dependent slice #3 is dependent on attribute data of independent slice #1, and paragraph [96], Takahashi discloses that attribute data of dependent slice #7 is dependent on attribute data of dependent slice #3, and attribute data of dependent slice #7 is also indirectly dependent on attribute data of independent slice #1, and paragraph [108], Takahashi discloses GPCCDecoderConfigurationRecord includes data from attribute parameter set (APS) and a tile inventory depending on a sample entry type as illustrated as fig.3 for clearly associating the dependency information of the attribute data onto the GPCCDecoderConfigurationRecord in the metadata area of the media content file for 
providing the proper information to the decoder for establishing the cross-attribute dependency indication information based on encoding/decoding relationship between attribute data in the point cloud bitstream).

Regarding claim 18, Takahashi discloses wherein the decoding the point cloud bitstream based on the cross-attribute dependency indication information comprises: determining the attribute data on which current attribute data depends based on the encoding and decoding dependency relationship indicated by the cross-attribute dependency indication information (paragraph [91], fig.3, Takahashi discloses the presence of independent slice units and dependent slice units, and the dependency relationships of the slice units relative to other slice units, wherein paragraph [93], Takahashi discloses that attribute data of dependent slice #2 is dependent on attribute data of independent slice #1, and paragraph [94], Takahashi discloses that attribute data of dependent slice #6 is dependent on attribute data of dependent slice #2, and attribute data of dependent slice #6 is also indirectly dependent on attribute data of independent slice #1, and paragraph [95], Takahashi discloses that attribute data of dependent slice #3 is dependent on attribute data of independent slice #1, and paragraph [96], Takahashi discloses that attribute data of dependent slice #7 is dependent on attribute data of dependent slice #3, and attribute data of dependent slice #7 is also indirectly dependent on attribute data of independent slice #1, and paragraph [108], Takahashi discloses GPCCDecoderConfigurationRecord includes data from attribute parameter set (APS) and a tile inventory depending on a sample entry type as illustrated as fig.3 for clearly associating the dependency information of the attribute data onto the GPCCDecoderConfigurationRecord in the metadata area of the media content file for providing the proper information to the decoder for establishing the cross-attribute
dependency indication information based on encoding/decoding relationship between attribute data in the point cloud bitstream); decoding the attribute data on which the current attribute data depends (paragraph [278], Takahashi discloses element 422 decodes the encoded bitstream as outputted by element 421 to generate the attribute data to the presentation information generation unit 423 for constructing the point cloud to be presented, wherein the output of element 423 is sent to element 413, and that paragraph [279], Takahashi discloses element 413 processes the point cloud data to be supplied to a display device for viewing the point cloud data; paragraph [91], fig.3, Takahashi discloses the presence of independent slice units and dependent slice units, and the dependency relationships of the slice units relative to other slice units, wherein paragraph [93], Takahashi discloses that attribute data of dependent slice #2 is dependent on attribute data of independent slice #1, and paragraph [94], Takahashi discloses that attribute data of dependent slice #6 is dependent on attribute data of dependent slice #2, and attribute data of dependent slice #6 is also indirectly dependent on attribute data of independent slice #1, and paragraph [95], Takahashi discloses that attribute data of dependent slice #3 is dependent on attribute data of independent slice #1, and paragraph [96], Takahashi discloses that attribute data of dependent slice #7 is dependent on attribute data of dependent slice #3, and attribute data of dependent slice #7 is also indirectly dependent on attribute data of independent slice #1, and paragraph [108], Takahashi discloses GPCCDecoderConfigurationRecord includes data from attribute parameter set (APS) and a tile inventory depending on a sample entry type as illustrated as fig.3 for clearly associating the dependency information of the attribute data onto the GPCCDecoderConfigurationRecord in the metadata area of the media content file for 
providing the proper information to the decoder for establishing the cross-attribute dependency indication information based on encoding/decoding relationship between attribute data in the point cloud bitstream); and decoding the current attribute data after decoding the attribute data on which the current attribute data depends (paragraph [278], Takahashi discloses element 422 decodes the encoded bitstream as outputted by element 421 to generate the attribute data to the presentation information generation unit 423 for constructing the point cloud to be presented, wherein the output of element 423 is sent to element 413, and that paragraph [279], Takahashi discloses element 413 processes the point cloud data to be supplied to a display device for viewing the point cloud data; paragraph [91], fig.3, Takahashi discloses the presence of independent slice units and dependent slice units, and the dependency relationships of the slice units relative to other slice units, wherein paragraph [93], Takahashi discloses that attribute data of dependent slice #2 is dependent on attribute data of independent slice #1, and paragraph [94], Takahashi discloses that attribute data of dependent slice #6 is dependent on attribute data of dependent slice #2, and attribute data of dependent slice #6 is also indirectly dependent on attribute data of independent slice #1, and paragraph [95], Takahashi discloses that attribute data of dependent slice #3 is dependent on attribute data of independent slice #1, and paragraph [96], Takahashi discloses that attribute data of dependent slice #7 is dependent on attribute data of dependent slice #3, and attribute data of dependent slice #7 is also indirectly dependent on attribute data of independent slice #1, and paragraph [108], Takahashi discloses GPCCDecoderConfigurationRecord includes data from attribute parameter set (APS) and a tile inventory depending on a sample entry type as illustrated as fig.3 for clearly associating the dependency 
information of the attribute data onto the GPCCDecoderConfigurationRecord in the metadata area of the media content file for providing the proper information to the decoder for establishing the cross-attribute dependency indication information based on encoding/decoding relationship between attribute data in the point cloud bitstream).

Regarding claim 19, Takahashi discloses a data processing method for point cloud media (paragraph [237], fig.19, Takahashi discloses file generation device 300 that comprises an encoding unit for processing point cloud media to encode geometry and attribute data of point cloud data, and element 313 generates an encoded bitstream that includes metadata along with encoded geometry and attribute data of the point cloud data, and element 315 produces a media content file), the method being executed by a content production device (paragraph [237], fig.19, Takahashi discloses file generation device 300 that comprises an encoding unit for processing point cloud media to encode geometry and attribute data of point cloud data, and element 313 generates an encoded bitstream that includes metadata along with encoded geometry and attribute data of the point cloud data, and element 315 produces a media content file), and comprising: obtaining point cloud media (paragraph [238], fig.19, Takahashi discloses that element 311 receives the point cloud data that comprises geometry and attribute data of the point cloud), and encoding the point cloud media to obtain a point cloud bitstream (paragraph [239], fig.19, Takahashi discloses element 312 for encoding the point cloud data that includes the compression of geometry data bitstream with geometry encoding unit 321 and compression of attribute data bitstream with attribute encoding unit 322 to form an encoded point cloud bitstream with metadata generation unit 323, and paragraph [242], Takahashi discloses bitstream generation unit 313 multiplexes the geometry bitstream, attribute bitstream and metadata
to generate G-PCC bitstream); generating cross-attribute dependency indication information based on an encoding and decoding dependency relationship between attribute data in the point cloud bitstream (paragraph [91], fig.3, Takahashi discloses the presence of independent slice units and dependent slice units, and the dependency relationships of the slice units relative to other slice units, wherein paragraph [93], Takahashi discloses that attribute data of dependent slice #2 is dependent on attribute data of independent slice #1, and paragraph [94], Takahashi discloses that attribute data of dependent slice #6 is dependent on attribute data of dependent slice #2, and attribute data of dependent slice #6 is also indirectly dependent on attribute data of independent slice #1, and paragraph [95], Takahashi discloses that attribute data of dependent slice #3 is dependent on attribute data of independent slice #1, and paragraph [96], Takahashi discloses that attribute data of dependent slice #7 is dependent on attribute data of dependent slice #3, and attribute data of dependent slice #7 is also indirectly dependent on attribute data of independent slice #1, and paragraph [108], Takahashi discloses GPCCDecoderConfigurationRecord includes data from attribute parameter set (APS) and a tile inventory depending on a sample entry type as illustrated as fig.3 for clearly associating the dependency information of the attribute data onto the GPCCDecoderConfigurationRecord in the metadata area of the media content file for providing the proper information to the decoder for establishing the cross-attribute dependency indication information based on encoding/decoding relationship between attribute data in the point cloud bitstream); and encapsulating the cross-attribute dependency indication information and the point cloud bitstream to obtain a media file of the point cloud media (paragraph [122], Takahashi discloses that data for compressing point cloud data is encapsulated in 
an encapsulating structure (ie. content file or media file) wherein attribute data is encapsulated during point cloud compression, and paragraph [66], Takahashi discloses the media file is in ISOBMFF (International Organization for Standardization Base Media File Format) media file format, and paragraph [67], Takahashi discloses the media file is in L-HEVC (high efficiency video coding) file format; paragraph [91], fig.3, Takahashi discloses the presence of independent slice units and dependent slice units, and the dependency relationships of the slice units relative to other slice units, wherein paragraph [93], Takahashi discloses that attribute data of dependent slice #2 is dependent on attribute data of independent slice #1, and paragraph [94], Takahashi discloses that attribute data of dependent slice #6 is dependent on attribute data of dependent slice #2, and attribute data of dependent slice #6 is also indirectly dependent on attribute data of independent slice #1, and paragraph [95], Takahashi discloses that attribute data of dependent slice #3 is dependent on attribute data of independent slice #1, and paragraph [96], Takahashi discloses that attribute data of dependent slice #7 is dependent on attribute data of dependent slice #3, and attribute data of dependent slice #7 is also indirectly dependent on attribute data of independent slice #1, and paragraph [108], Takahashi discloses GPCCDecoderConfigurationRecord includes data from attribute parameter set (APS) and a tile inventory depending on a sample entry type as illustrated as fig.3 for clearly associating the dependency information of the attribute data onto the GPCCDecoderConfigurationRecord in the metadata area of the media content file for providing the proper information to the decoder for establishing the cross-attribute dependency indication information based on encoding/decoding relationship between attribute data in the point cloud bitstream). 
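Claim 19's production side (encode the point cloud, derive the cross-attribute dependency indication information, then encapsulate both into one media file) can be sketched as a three-stage pipeline. Everything below is a toy stand-in, not Takahashi's device 300 or the applicant's actual file format:

```python
def encode_point_cloud(frames):
    """Stand-in encoder: pretend each frame becomes one bitstream chunk."""
    return [f"chunk-{i}" for i, _ in enumerate(frames)]

def derive_dependencies(frames):
    """Stand-in for deriving cross-attribute dependency indication info."""
    # Toy rule: every frame after the first depends on its predecessor.
    return {i: i - 1 for i in range(1, len(frames))}

def encapsulate(bitstream, deps):
    """Bundle the bitstream and dependency metadata into one 'media file'."""
    return {"bitstream": bitstream, "cross_attr_deps": deps}

frames = ["f0", "f1", "f2"]
media_file = encapsulate(encode_point_cloud(frames), derive_dependencies(frames))
print(media_file["cross_attr_deps"])  # {1: 0, 2: 1}
```

The point of the sketch is ordering: the dependency indication is generated at encode time and travels inside the same file as the bitstream, which is what lets the consumption side (claims 1 and 18) decode depended-upon attribute data first.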
Regarding claim 20, Takahashi discloses a data processing apparatus for point cloud media (paragraph [273], fig.21, Takahashi discloses processing point cloud data, in that element 400 decodes a G-PCC file as encoded by the embodiment of fig.19), comprising: a memory operable to store computer-readable instructions (paragraph [402], Takahashi discloses storage unit 913 (i.e., RAM 903) for storing a computer program to be executed by a computer via a processor or CPU 901; paragraph [405], Takahashi discloses ROM 902 for storing the computer program that comprises instructions); and a processor circuitry operable to read the computer-readable instructions (paragraph [402], Takahashi discloses storage unit 913 (i.e., RAM 903) for storing a computer program to be executed by a computer via a processor or CPU 901; paragraph [405], Takahashi discloses ROM 902 for storing the computer program that comprises instructions), the processor circuitry when executing the computer-readable instructions (paragraph [402], Takahashi discloses storage unit 913 (i.e., 
RAM 903) for storing a computer program to be executed by a computer via a processor or CPU 901; paragraph [405], Takahashi discloses ROM 902 for storing the computer program that comprises instructions) is configured to: obtain a media file of point cloud media (paragraph [276], Takahashi discloses element 411 obtains a media content file that comprises the point cloud data), the media file comprising a point cloud bitstream and cross-attribute dependency indication information of the point cloud media (paragraph [277], Takahashi discloses element 421 processes the obtained media file, as obtained by element 411, by extracting the bitstream of the encoded point cloud data, and wherein paragraph [91], fig.3, Takahashi discloses the presence of independent slice units and dependent slice units, and the dependency relationships of the slice units relative to other slice units, wherein paragraph [93], Takahashi discloses that attribute data of dependent slice #2 is dependent on attribute data of independent slice #1, and paragraph [94], Takahashi discloses that attribute data of dependent slice #6 is dependent on attribute data of dependent slice #2, and attribute data of dependent slice #6 is also indirectly dependent on attribute data of independent slice #1, and paragraph [95], Takahashi discloses that attribute data of dependent slice #3 is dependent on attribute data of independent slice #1, and paragraph [96], Takahashi discloses that attribute data of dependent slice #7 is dependent on attribute data of dependent slice #3, and attribute data of dependent slice #7 is also indirectly dependent on attribute data of independent slice #1, and paragraph [108], Takahashi discloses GPCCDecoderConfigurationRecord includes data from attribute parameter set (APS) and a tile inventory depending on a sample entry type as illustrated in fig.3 for clearly associating the dependency information of the attribute data onto the GPCCDecoderConfigurationRecord in the metadata area 
of the media content file for providing the proper information to the decoder for establishing the cross-attribute dependency indication information based on the encoding/decoding relationship between attribute data in the point cloud bitstream), and the cross-attribute dependency indication information being for indicating an encoding and decoding dependency relationship between attribute data in the point cloud bitstream (paragraph [91], fig.3, Takahashi discloses the presence of independent slice units and dependent slice units, and the dependency relationships of the slice units relative to other slice units, wherein paragraph [93], Takahashi discloses that attribute data of dependent slice #2 is dependent on attribute data of independent slice #1, and paragraph [94], Takahashi discloses that attribute data of dependent slice #6 is dependent on attribute data of dependent slice #2, and attribute data of dependent slice #6 is also indirectly dependent on attribute data of independent slice #1, and paragraph [95], Takahashi discloses that attribute data of dependent slice #3 is dependent on attribute data of independent slice #1, and paragraph [96], Takahashi discloses that attribute data of dependent slice #7 is dependent on attribute data of dependent slice #3, and attribute data of dependent slice #7 is also indirectly dependent on attribute data of independent slice #1, and paragraph [108], Takahashi discloses GPCCDecoderConfigurationRecord includes data from attribute parameter set (APS) and a tile inventory depending on a sample entry type as illustrated in fig.3 for clearly associating the dependency information of the attribute data onto the GPCCDecoderConfigurationRecord in the metadata area of the media content file for providing the proper information to the decoder for establishing the cross-attribute dependency indication information based on the encoding/decoding relationship between attribute data in the point cloud bitstream); and decode the point cloud bitstream based on the cross-attribute dependency indication information 
to present the point cloud media (paragraph [278], Takahashi discloses element 422 decodes the encoded bitstream as outputted by element 421 to generate the attribute data for the presentation information generation unit 423 for constructing the point cloud to be presented, wherein the output of element 423 is sent to element 413, and paragraph [279], Takahashi discloses element 413 processes the point cloud data to be supplied to a display device for viewing the point cloud data; paragraph [91], fig.3, Takahashi discloses the presence of independent slice units and dependent slice units, and the dependency relationships of the slice units relative to other slice units, wherein paragraph [93], Takahashi discloses that attribute data of dependent slice #2 is dependent on attribute data of independent slice #1, and paragraph [94], Takahashi discloses that attribute data of dependent slice #6 is dependent on attribute data of dependent slice #2, and attribute data of dependent slice #6 is also indirectly dependent on attribute data of independent slice #1, and paragraph [95], Takahashi discloses that attribute data of dependent slice #3 is dependent on attribute data of independent slice #1, and paragraph [96], Takahashi discloses that attribute data of dependent slice #7 is dependent on attribute data of dependent slice #3, and attribute data of dependent slice #7 is also indirectly dependent on attribute data of independent slice #1, and paragraph [108], Takahashi discloses GPCCDecoderConfigurationRecord includes data from attribute parameter set (APS) and a tile inventory depending on a sample entry type as illustrated in fig.3 for clearly associating the dependency information of the attribute data onto the GPCCDecoderConfigurationRecord in the metadata area of the media content file for providing the proper information to the decoder for establishing the cross-attribute dependency indication information based on the encoding/decoding relationship between 
attribute data in the point cloud bitstream). Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Takahashi (US 2023/0291918) in view of Huang (US 2022/0353539). 
Regarding claim 8, Takahashi discloses wherein a media file contains a track data box (paragraph [107], fig.4, Takahashi discloses a file structure with a “TrackBox” or a track data box), and the track data box is for indicating the attribute component track to which the attribute data having the encoding and decoding dependency relationship in the point cloud bitstream belongs (paragraph [91], fig.3, Takahashi discloses the presence of independent slice units and dependent slice units, and the dependency relationships of the slice units relative to other slice units, wherein paragraph [93], Takahashi discloses that attribute data of dependent slice #2 is dependent on attribute data of independent slice #1, and paragraph [94], Takahashi discloses that attribute data of dependent slice #6 is dependent on attribute data of dependent slice #2, and attribute data of dependent slice #6 is also indirectly dependent on attribute data of independent slice #1, and paragraph [95], Takahashi discloses that attribute data of dependent slice #3 is dependent on attribute data of independent slice #1, and paragraph [96], Takahashi discloses that attribute data of dependent slice #7 is dependent on attribute data of dependent slice #3, and attribute data of dependent slice #7 is also indirectly dependent on attribute data of independent slice #1, and paragraph [108], Takahashi discloses GPCCDecoderConfigurationRecord includes data from attribute parameter set (APS) and a tile inventory depending on a sample entry type as illustrated in fig.3 for clearly associating the dependency information of the attribute data onto the GPCCDecoderConfigurationRecord in the metadata area of the media content file for providing the proper information to the decoder for establishing the cross-attribute dependency indication information based on the encoding/decoding relationship between attribute data in the point cloud bitstream); and the cross-attribute dependency indication information (paragraph [91], fig.3, Takahashi discloses the presence of independent slice units 
and dependent slice units, and the dependency relationships of the slice units relative to other slice units, wherein paragraph [93], Takahashi discloses that attribute data of dependent slice #2 is dependent on attribute data of independent slice #1, and paragraph [94], Takahashi discloses that attribute data of dependent slice #6 is dependent on attribute data of dependent slice #2, and attribute data of dependent slice #6 is also indirectly dependent on attribute data of independent slice #1, and paragraph [95], Takahashi discloses that attribute data of dependent slice #3 is dependent on attribute data of independent slice #1, and paragraph [96], Takahashi discloses that attribute data of dependent slice #7 is dependent on attribute data of dependent slice #3, and attribute data of dependent slice #7 is also indirectly dependent on attribute data of independent slice #1, and paragraph [108], Takahashi discloses GPCCDecoderConfigurationRecord includes data from attribute parameter set (APS) and a tile inventory depending on a sample entry type as illustrated in fig.3 for clearly associating the dependency information of the attribute data onto the GPCCDecoderConfigurationRecord in the metadata area of the media content file for providing the proper information to the decoder for establishing the cross-attribute dependency indication information based on the encoding/decoding relationship between attribute data in the point cloud bitstream) is represented as a cross-attribute dependency information data box (paragraph [107], fig.4, Takahashi discloses a file structure with a “TrackBox” or a track data box, wherein GPCCDecoderConfigurationRecord comprises the cross-attribute dependency information, and Takahashi’s TrackBox comprises SampleTableBox which comprises the “GPCCDecoderConfigurationRecord”, thus the cross-attribute dependency information data box is within the track data box), and the cross-attribute dependency information data box is set in the track data box (paragraph [107], fig.4, Takahashi discloses a file structure with a 
“TrackBox” or a track data box, wherein GPCCDecoderConfigurationRecord comprises the cross-attribute dependency information, and Takahashi’s TrackBox comprises SampleTableBox which comprises the “GPCCDecoderConfigurationRecord”, thus the cross-attribute dependency information data box is within the track data box). Takahashi does not disclose wherein the media file contains a track group type data box, and the track group type data box is for indicating the attribute component track to which the attribute data having the encoding and decoding dependency relationship in the point cloud bitstream belongs; and the cross-attribute dependency indication information is represented as a cross-attribute dependency information data box, and the cross-attribute dependency information data box is set in the track group type data box. However, Huang teaches the concept of a track group type data box (paragraph [54], Huang discloses TrackGroupTypeBox for a media file structure). Since Takahashi teaches “…wherein a media file contains a track data box, and the track data box is for indicating the attribute component track to which the attribute data having the encoding and decoding dependency relationship in the point cloud bitstream belongs; and the cross-attribute dependency indication information is represented as a cross-attribute dependency information data box”, and Huang discloses the implementation of a “track group type data box”, therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Takahashi and Huang together as a whole for ascertaining the limitation of “…wherein the media file contains a track group type data box, and the track group type data box is for indicating the attribute component track to which the attribute data having the encoding and decoding dependency relationship in the point cloud bitstream belongs; and the cross-attribute dependency indication 
information is represented as a cross-attribute dependency information data box, and the cross-attribute dependency information data box is set in the track group type data box” in order to efficiently capture, compress, reconstruct and render point cloud data (Huang’s paragraph [16]). Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Takahashi (US 2023/0291918) in view of Oh (US 2021/0029187). Regarding claim 13, Takahashi discloses wherein the media file comprises a track (paragraph [68], Takahashi discloses that the L-HEVC file format can have a file with a single or plural individual tracks of ISOBMFF, and paragraph [122], Takahashi discloses that in the media file, there can be one or more tracks), the track contains one or more samples (paragraph [199], Takahashi discloses that plural samples can be stored in the tracks; paragraph [107], fig.4, Takahashi discloses the file structure of the media content file in that the GPCC track has a MediaDataBox that comprises multiple samples), and each of the one or more samples corresponds to one frame in the point cloud media (paragraph [90], Takahashi discloses that each sample corresponds to one frame of a moving image in a point cloud). Takahashi does not disclose the media file further comprises a component information data box, and the component information data box contains a type of a component in the track; in response to the type of the component being attribute data, the component information data box further comprises an attribute identifier field, the attribute identifier field is for indicating an identifier of current attribute data, and the current attribute data refers to attribute data being decoded. 
However, Oh teaches the media file further comprises a component information data box (paragraph [548], Oh discloses the concept of a media data box that comprises information about attributes of point cloud data from G-PCC sample, and paragraph [549], Oh discloses a moov box, wherein a moov box is a data box that can comprise component information from the metadata that is necessary for decoding and playback of media data, as well as information from the tracks and samples of the media file), and the component information data box contains a type of a component in the track (paragraph [371], fig.25, Oh discloses attribute types comprises color, reflectance and frame index, and paragraph [17], Oh discloses that the bitstream can be stored in either a single track or in plural tracks that includes attribute data components, and paragraph [19], Oh discloses that the type of data in the payload can be included within the parameter set or attribute bitstream, and also paragraph [265], Oh discloses metadata track information can be encapsulated for comprising a type of component in the tracks); in response to the type of the component being attribute data (paragraph [368], Oh discloses known_attribute_label [i] field is an attribute identifier field, wherein if the known_attribute_label [i] is set equal to zero to specify the i-th attribute is color and if the known_attribute_label [i] is set equal to one, then the i-th attribute is reflectance), the attribute identifier field is for indicating an identifier of current attribute data (paragraph [368], Oh discloses known_attribute_label [i] field is an attribute identifier field, wherein if the known_attribute_label [i] is set equal to zero to specify the i-th attribute is color and if the known_attribute_label [i] is set equal to one, then the i-th attribute is reflectance, thus, the known_attribute_label [i] field indicates the current attribute data), the component information data box further comprises an attribute 
identifier field (paragraph [368], Oh discloses known_attribute_label [i] field is an attribute identifier field, wherein if the known_attribute_label [i] is set equal to zero to specify the i-th attribute is color and if the known_attribute_label [i] is set equal to one, then the i-th attribute is reflectance), the attribute identifier field is for indicating an identifier of current attribute data (paragraph [368], Oh discloses known_attribute_label [i] field is an attribute identifier field, wherein if the known_attribute_label [i] is set equal to zero to specify the i-th attribute is color and if the known_attribute_label [i] is set equal to one, then the i-th attribute is reflectance, thus, the known_attribute_label [i] field indicates the current attribute data), and the current attribute data refers to attribute data being decoded (paragraph [368], Oh discloses known_attribute_label [i] field is an attribute identifier field, wherein if the known_attribute_label [i] is set equal to zero to specify the i-th attribute is color and if the known_attribute_label [i] is set equal to one, then the i-th attribute is reflectance, thus, the known_attribute_label [i] field indicates the current attribute data, and paragraph [114], Oh discloses implementing a point cloud video decoder 10006 for performing the decoding of point cloud data based on the received encoded bitstream that includes attribute data and metadata). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Takahashi and Oh together as a whole for improving parallel processing and scalability of point cloud data transmission (Oh’s paragraph [30]). Allowable Subject Matter Claims 2-6, 9-10, 12 and 15-17 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. 
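The known_attribute_label semantics cited above from Oh's paragraph [368] (a value of 0 signalling that the i-th attribute is color, a value of 1 signalling reflectance) reduce to a simple identifier-to-type lookup. A minimal sketch; the table and function names are illustrative, and the "unspecified" fallback is an assumption for this sketch rather than anything Oh specifies.

```python
# Mapping of known_attribute_label values to attribute types,
# per Oh's cited paragraph [368] (0 = color, 1 = reflectance).
KNOWN_ATTRIBUTE_LABELS = {0: "color", 1: "reflectance"}

def attribute_type(known_attribute_label):
    """Resolve the attribute type signalled for the i-th attribute.

    The "unspecified" fallback for unrecognized values is an assumption
    of this sketch, not part of the cited disclosure.
    """
    return KNOWN_ATTRIBUTE_LABELS.get(known_attribute_label, "unspecified")

print(attribute_type(0))  # color
print(attribute_type(1))  # reflectance
```

This is the sense in which the examiner reads the field as an attribute identifier: the field value alone tells a decoder which attribute component (color or reflectance) is currently being decoded.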
Citation of Other Pertinent Art The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. 1. “Method, Apparatus and Medium for Point Cloud Coding” – Xu et al. (US 2024/0314359). 2. “Model-Based Prediction for Geometry Point Cloud Compression” – Van der Auwera et al. (US 2022/0215596). Contact Information Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALLEN C WONG whose telephone number is (571)272-7341. The examiner can normally be reached on Flex Monday-Thursday 9:30am-7:30pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sath V Perungavoor can be reached on 571-272-7455. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /ALLEN C WONG/Primary Examiner, Art Unit 2488

Prosecution Timeline

Dec 20, 2024
Application Filed
Jan 09, 2026
Non-Final Rejection — §102, §103
Mar 19, 2026
Examiner Interview Summary
Mar 19, 2026
Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604009
IMAGE ENCODING/DECODING METHOD AND APPARATUS
2y 5m to grant Granted Apr 14, 2026
Patent 12598321
ENCODER, DECODER, ENCODING METHOD, AND DECODING METHOD
2y 5m to grant Granted Apr 07, 2026
Patent 12587671
VIDEO ENCODING APPARATUS AND A VIDEO DECODING APPARATUS
2y 5m to grant Granted Mar 24, 2026
Patent 12581134
FEATURE ENCODING/DECODING METHOD AND DEVICE, AND RECORDING MEDIUM STORING BITSTREAM
2y 5m to grant Granted Mar 17, 2026
Patent 12581091
METHODS AND APPARATUS OF ENCODING/DECODING VIDEO PICTURE DATA
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
83%
Grant Probability
95%
With Interview (+11.8%)
2y 11m
Median Time to Grant
Low
PTA Risk
Based on 805 resolved cases by this examiner. Grant probability derived from career allow rate.
