Prosecution Insights
Last updated: April 19, 2026
Application No. 18/528,022

3D Map Compression Method and Apparatus, and 3D Map Decompression Method and Apparatus

Final Rejection (§102, §103)
Filed: Dec 04, 2023
Examiner: TRAN, JENNY NGAN
Art Unit: 2615
Tech Center: 2600 — Communications
Assignee: Huawei Technologies Co., Ltd.
OA Round: 2 (Final)
Grant Probability: 20% (At Risk)
OA Rounds: 3-4
To Grant: 2y 6m
With Interview: 70%

Examiner Intelligence

Career Allow Rate: 20% (1 granted / 5 resolved; -42.0% vs TC avg)
Interview Lift: +50.0% among resolved cases with interview
Avg Prosecution: 2y 6m typical timeline; 31 currently pending
Total Applications: 36 across all art units

Statute-Specific Performance

§101: 8.9% (-31.1% vs TC avg)
§103: 49.0% (+9.0% vs TC avg)
§102: 21.8% (-18.2% vs TC avg)
§112: 18.3% (-21.7% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 5 resolved cases

Office Action

§102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of the Claims

Claims 1-20 are currently pending in the present application, with claims 1, 11, 19, and 20 being independent.

Response to Amendments / Arguments

Applicant’s arguments, see Pg. 9, filed 12/04/2025, with respect to claims 5 and 15 have been fully considered and are persuasive. The 35 U.S.C. § 112(b) rejection of claims 5 and 15 has been withdrawn.

Applicant's arguments filed 12/04/2025 have been fully considered but they are not persuasive.

Applicant argues: Huang (WO 2021093153) fails to disclose “performing compaction processing on a to-be-encoded descriptor to obtain a first compact representation of the to-be-encoded descriptor, wherein the to-be-encoded descriptor corresponds to a three-dimensional (3D) map”.

Examiner replies that Huang expressly discloses “region_id specify the identity of a 3D spatial region of a point cloud data. Reference_x, reference_y, and reference_z specify the x, y, and z coordinate values, respectively, of the spatial region corresponding to the 3D spatial part of point cloud data in the Cartesian coordinates…delta_x, delta_y, delta_z specify the dimensions of the 3D spatial region…width, height, and depth of the 3D spatial region…” (Huang Pg. 7; Semantics Example). Additionally, Huang further discloses “the spatialRegion element of the VPCCSpatialRegion descriptor provides information of a spatial region of the point cloud, including x, y, z offset of the spatial region and the width, height, and depth of the spatial region in 3D space…” (Huang Pg. 17).
These cited disclosures constitute a descriptor representing 3D spatial map information in compact attribute form, meeting the claimed “performing compaction processing on a to-be-encoded descriptor to obtain a first compact representation of the to-be-encoded descriptor, wherein the to-be-encoded descriptor corresponds to a three-dimensional (3D) map”.

Applicant argues: Huang (WO 2021093153) fails to disclose “a reference descriptor corresponds to an encoded 3D map point on the 3D map”.

Examiner replies that Huang expressly discloses “Multiple versions of the same point cloud could be signaled using separate PreSelections. PreSelections that represent alternative versions of the same 3D spatial regions of the point cloud may contain a VPCC descriptor with the same @pcId value and the same regionIds value, wherein the value assigned to the @pcId attribute identifies the point cloud content, and the value of @regionIds attribute identifies one or more 3D spatial regions of the point cloud” (Huang Pg. 18). These disclosures constitute encoded alternative descriptors corresponding to the same 3D spatial region, meeting the claimed “reference descriptor corresponds to an encoded 3D map point on the 3D map”.

Applicant argues: Huang (WO 2021093153) fails to disclose “obtaining a third compact representation of the to-be-encoded descriptor based on the first compact representation and the second compact representation”.

Examiner replies that Huang expressly discloses “V-PCC preselection is signaled in the MPD using a PreSelection element as defined in MPEG-DASH…with an identifier (ID) list for the @preselectionComponents attribute including the ID of the main AdaptionSet for the point cloud followed by the IDs of the AdaptionSets corresponding to the point cloud components” (Huang Pg. 12, Section VI.(c).) and “To identify the static 3D spatial regions of the point cloud and their associated V-PCC component track groups, a VPCCSpatialRegion descriptor shall be used…” (Pg. 12, Section VI.(d).). These cited disclosures show an MPD representation formed using both the spatial region descriptor information (first compact representation) and the preselection information (second compact representation) (Huang Pg. 1, Summary, Par. 2; media presentation description (MPD) file that includes the one or more spatial region descriptors and the one or more preselection elements), meeting the claimed “obtaining a third compact representation of the to-be-encoded descriptor based on the first compact representation and the second compact representation”.

Applicant argues: Huang (WO 2021093153) fails to disclose “encapsulating the third compact representation to obtain a bitstream of the 3D map”.

Examiner replies that Huang expressly discloses “Each V-PCC component (and/or layer) is separately encoded as a sub-stream of the V-PCC bitstream. V-PCC component sub-streams of geometry, occupancy map and attributes, are encoded using video encoders…However, these sub-streams need to be collectively decoded along with the patch data of atlas sub-stream in order to reconstruct and render the point cloud data” (Huang Pg. 4, Section II). Additionally, Huang further discloses “Each V-PCC component may be represented in the MPEG-DASH manifest or MPEG-DASH Media Presentation Description (MPD) file as a separate AdaptionSet…Media segments for the Representation of the main AdaptionSet may contain one or more track fragments of the V-PCC track. Media segments for the Representations of component AdaptionSets may contain one or more track fragments of the corresponding component track at the file format level” (Huang Pg. 11-12, Section VI). These disclosures expressly teach encapsulation of the representation into a V-PCC bitstream for transmission and decoding, meeting the claimed “encapsulating the third compact representation to obtain a bitstream of the 3D map”.
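The four-step claim mapping the examiner applies above (compaction into bounding-box attributes, a PreSelection-style reference, an MPD-like container combining both, and encapsulation into a byte stream) can be sketched in miniature. Everything below is a hypothetical illustration of that mapping, not code from Huang or from the application:

```python
from dataclasses import dataclass

@dataclass
class SpatialRegionDescriptor:
    """First compact representation: VPCCSpatialRegion-style attributes
    (offsets plus width/height/depth), per the examiner's reading of Huang."""
    region_id: int
    x: float
    y: float
    z: float
    width: float
    height: float
    depth: float

def compact(raw_points, region_id):
    """'Compaction processing': reduce raw 3D map points to bounding-box attributes."""
    xs, ys, zs = zip(*raw_points)
    return SpatialRegionDescriptor(
        region_id,
        min(xs), min(ys), min(zs),
        max(xs) - min(xs), max(ys) - min(ys), max(zs) - min(zs),
    )

def preselection(pc_id, region_ids):
    """Second compact representation: a PreSelection-style reference (pcId + regionIds)."""
    return {"pcId": pc_id, "regionIds": region_ids}

def build_mpd(region_desc, presel):
    """Third compact representation: an MPD-like container holding both."""
    return {"spatialRegionDescriptors": [region_desc], "preSelections": [presel]}

def encapsulate(mpd):
    """Encapsulation: serialize the container into a byte stream for transmission."""
    return repr(mpd).encode("utf-8")

points = [(0, 0, 0), (2, 4, 6), (1, 1, 1)]
desc = compact(points, region_id=1)
bitstream = encapsulate(build_mpd(desc, preselection(pc_id="pc0", region_ids=[1])))
```

The sketch only shows how the examiner's mapping chains the four limitations together; whether Huang's MPD signaling actually satisfies each limitation is, of course, the disputed question.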
Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-9, 11-17, and 19-20 are rejected under 35 U.S.C. 102(a)(1) and 102(a)(2) as being anticipated by Huang (WO 2021093153).

Regarding claim 1, Huang discloses a method comprising: performing compaction processing (Detailed Description; Video-based point cloud compressions (V-PCC) represents a volumetric encoding of point cloud visual information and enables efficient capturing, compression, reconstruction, and rendering of point cloud data utilizing MPEG video codecs, such as AVC, HEVC…) on a to-be-encoded descriptor (Pg. 12-13, Section VI.(d).; VPCCSpatialRegion descriptor) to obtain a first compact representation of the to-be-encoded descriptor (Pg. 1, Summary, Par. 2; A method of point cloud data processing comprises determining one or more spatial region descriptors…Pg.
17; VPCCSpatialRegion descriptor provides information of a spatial region of the point cloud including x, y, z offsets…bounding box information of the point cloud), wherein the to-be-encoded descriptor corresponds to a three-dimensional (3D) map point on a 3D map (Pg. 1, Summary, Par. 2; spatial region descriptors that describe one or more three-dimensional (3D) spatial regions of a point cloud data. Examiner's note: The "first compact representation" maps to the VPCCSpatialRegion descriptor that compresses raw 3D map points into structured attributes such as x, y, z offset, width, height, depth, etc.), Examiner’s note: Additionally, as recited above in examiner’s response to arguments, the cited disclosures in Huang Pg. 7 and Pg. 17 constitute a descriptor representing 3D spatial map information in compact attribute form, meeting the claimed “performing compaction processing on a to-be-encoded descriptor to obtain a first compact representation of the to-be-encoded descriptor, wherein the to-be-encoded descriptor corresponds to a three-dimensional (3D) map”; obtaining a second compact representation of a reference descriptor (Pg. 12, Section VI.(c).; V-PCC Pre-Selections) corresponding to the to-be-encoded descriptor (Fig. 1-2 Preselection [VPCC Descriptor] arrow pointing to VPCCSpatialRegion descriptors), wherein the reference descriptor corresponds to an encoded 3D map point on the 3D map (Pg. 1, Summary, Par. 2; one or more preselection elements that describe point cloud components associated with the point cloud data. Examiner's Note: The "second compact representation" corresponds to alternative VPCC descriptors (via PreSelections), which are encoded 3D versions of the same spatial region), Examiner’s note: Additionally, as recited above in examiner’s response to arguments, the cited disclosures in Huang Pg.
18 constitute encoded alternative descriptors corresponding to the same 3D spatial region, meeting the claimed “reference descriptor corresponds to an encoded 3D map point on the 3D map”; obtaining a third compact representation of the to-be-encoded descriptor based on the first compact representation and the second compact representation (Pg. 1, Summary, Par. 2; transmitting a media presentation description (MPD) file that includes the one or more spatial region descriptors and the one or more preselection elements), Examiner’s note: Additionally, as recited above in examiner’s response to arguments, the cited disclosures in Huang Pg. 12 show an MPD representation formed using both the spatial region descriptor information (first compact representation) and the preselection information (second compact representation) meeting the claimed “obtaining a third compact representation of the to-be-encoded descriptor based on the first compact representation and the second compact representation”; and encapsulating the third compact representation to obtain a bitstream of the 3D map (Pg. 2, Section II, Encapsulation and signaling in MPEG-DASH. Pg. 11, Section VI, Embodiment 2: Encapsulation and signaling in MPEG-DASH. Pg. 1, Summary, Par. 2; transmitting a media presentation description (MPD) file. Examiner's note: The encapsulation corresponds to the MPEG-DASH MPD encapsulating VPCCSpatialRegion descriptors and PreSelection elements to generate a bitstream for transmission). Examiner’s note: Additionally, as recited above in examiner’s response to arguments, the cited disclosures in Huang Pg. 4 and Pg. 11-12 expressly teach encapsulation of the representation into a V-PCC bitstream for transmission and decoding, meeting the claimed “encapsulating the third compact representation to obtain a bitstream of the 3D map”.

Regarding claim 2, Huang discloses the method of claim 1, and further discloses receiving, from an electronic device (Fig. 5 and Pg.
31; The transmitter 515 transmits or sends information or data to another device…(e.g., mobile device)), 3D map request information corresponding to the bitstream (Pg. 11-12, Section VI(c); A V-PCC preselection is signaled in the MPD using a PreSelection element…with an identifier (ID) list for the @preselectionComponents attribute including the ID of the main AdaptionSet for the point cloud…), and sending, to the electronic device in response to the 3D map request information, the bitstream (Pg. 4, Section II, Encapsulation and signaling in MPEG-DASH; adaptive streaming based content distribution technologies such as MPEG DASH…point cloud data specified as V-PCC media content…attribute of a particular type that is associated with a V-PCC point cloud representation…Each V-PCC component (and/or layer) is separately encoded as a sub-stream of the V-PCC bitstream…), or sending the bitstream (MPEG-DASH) to a server (Fig. 5 and Pg. 31; For example, a video encoder transmitter in a server can send encoded video to a video decoder).

Regarding claim 3, Huang discloses the method of claim 1, and further discloses wherein a first data distribution (Pg. 6-7; reference_x, reference_y, reference_z specify the x, y, and z coordinate values, respectively of the spatial region corresponding to the 3D spatial part of point cloud data…delta_x, delta_y and delta_z specify the dimensions of the 3D spatial region…3D bounding box parameters of a point cloud data) of the first compact representation (Pg. 6-7; BoundingBoxStruct()) is different from a second data distribution (Pg. 8; 2D video encoded occupancy map track, a 2D video encoded geometry track, and zero or more 2D video encoded attribute tracks) of the third compact representation (Pg. 7-8, Section V(b); V-PCC component tracks (occupancy map track, geometry track and attribute tracks) corresponding to the same spatial region point cloud data may be grouped together using the track grouping tool…VPCCTrackGroupBox.
Examiner's Note: first compact representation BoundingBoxStruct represents coordinate/dimension data distribution, while third compact representation VPCC tracks encode compressed geometry/occupancy/attribute with a different data distribution).

Regarding claim 4, Huang discloses the method of claim 3, and further discloses wherein the first compact representation comprises a first compact representation substring of the to-be-encoded descriptor (Pg. 7; BoundingBoxStruct…coordinates…bounding box parameters (reference_x, reference_y, reference_z, delta_x, delta_y, delta_z)), wherein the third compact representation comprises a second compact representation substring of the to-be-encoded descriptor (Pg. 8; VPCCTrackGroupBox… track_group_id…track_group_type…The track_group_id within TrackGroupTypeBox with track_group_type equal to 'pctg' could be used as the identifier of the spatial region of point cloud data), and wherein the first compact representation substring is different from the second compact representation substring (Pg. 7-8; bounding box parameters vs encoded subtracks).

Regarding claim 5, Huang discloses the method of claim 4, and further discloses wherein a length of the second compact representation substring is partially or all less than or equal to the corresponding first compact representation substring (Pg. 18; PreSelections that represent alternative versions of the same 3D spatial regions of the point cloud may contain a VPCC descriptor with the same @pcId value and the same @regionIds value, wherein the value assigned to the @pcId attribute identifies the point cloud content, and the value of @regionIds attribute identifies one or more 3D spatial regions of the point cloud).
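Claims 3-5 turn on the first and third compact representations having different "data distributions" and substrings of different lengths; under the examiner's mapping this is the contrast between bounding-box parameter fields and a track-group identifier. A hypothetical byte-level illustration (field names loosely follow Huang's BoundingBoxStruct and VPCCTrackGroupBox, but the packing format is assumed, not taken from Huang):

```python
import struct

# First compact representation substring: six bounding-box parameters
# (reference_x/y/z, delta_x/y/z) packed as 32-bit unsigned integers.
bbox_substring = struct.pack("<6I", 10, 20, 30, 4, 5, 6)

# Second compact representation substring: a track_group_id plus the
# 4-character track_group_type 'pctg' identifying the spatial region.
track_group_substring = struct.pack("<I4s", 1, b"pctg")

# Different layouts and lengths -> different "data distributions" (claim 3);
# the track-group substring is also shorter than the bounding-box substring,
# consistent with the claim-5 length comparison.
assert len(bbox_substring) == 24
assert len(track_group_substring) == 8
```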
Regarding claim 6, Huang discloses the method of claim 4, and further discloses wherein obtaining the third compact representation comprises obtaining the third compact representation by looking up a table based on the first compact representation and the second compact representation (Pg. 12, Section VI(c); A V-PCC preselection is signaled in the MPD using a PreSelection element defined in MPEG-DASH…with an identifier (ID) list for the @preselectionComponents attribute including the ID of the main AdaptionSet for the point cloud followed by the IDs of the AdaptionSets corresponding to the point cloud components. Examiner's interpretation: the PreSelection element is a lookup table mapping the SpatialRegion descriptor and preselection descriptor).

Regarding claim 7, Huang discloses the method of claim 6, and further discloses wherein the table comprises a third compact representation substring corresponding to an encoded compact representation substring and a first reference compact representation substring of the reference descriptor (Pg. 12, Section VI(c); A V-PCC preselection is signaled in the MPD using a PreSelection element defined in MPEG-DASH…with an identifier (ID) list for the @preselectionComponents attribute including the ID of the main AdaptionSet for the point cloud followed by the IDs of the AdaptionSets corresponding to the point cloud components. Pg. 18; Multiple versions of the same point cloud could be signaled using separate PreSelections. PreSelections that represent alternative versions of the same 3D spatial regions of the point cloud may contain a VPCC descriptor…), wherein the encoded compact representation substring comprises the first compact representation substring of the to-be-encoded descriptor (Pg.
1, Summary; media presentation description (MPD) file that includes the one or more spatial region descriptors and the one or more preselection elements), and wherein obtaining the third compact representation further comprises: obtaining, from the table based on the first compact representation substring (spatial region descriptor) and the first reference compact representation substring (ID of the main AdaptionSet) of the reference descriptor (@preselectionComponents) corresponding to the to-be-encoded descriptor (corresponding to point cloud components…VPCCSpatialRegion descriptor), the third compact representation substring (Pg. 12, Section VI(c). Examiner's note: using the spatial region descriptor present in the MPD (first compact representation substring) and the main AdaptionSet ID (first reference compact representation substring) from the PreSelection table, you obtain the third compact representation substring (component AdaptionSet ID) returned by that PreSelection entry) corresponding to the first compact representation substring and first reference compact representation substring (Pg. 17-18; The usage of VPCCSpatialRegion descriptor, VPCC descriptor and related preselection mechanisms using PreSelections element to support the partial access and delivery of point cloud content in MPEG-DASH is described…), and obtaining the third compact representation (MPD file) based on the third compact representation substring.

Regarding claim 8, Huang discloses the method of claim 1, and further discloses wherein obtaining the third compact representation (MPD) comprises: obtaining the third compact representation from a table based on the first compact representation (Pg. 12, Section VI(c); V-PCC preselection is signaled in the MPD… Examiner's note: the PreSelection table obtains the third compact representation by using the second compact representation (preselection), contextualized by the first compact representation (spatial region descriptor)).
Regarding claim 9, Huang discloses the method of claim 8, and further discloses wherein the second compact representation of the reference descriptor comprises a first reference compact representation substring (Pg. 11-12; …including the ID of the main AdaptionSet for the point cloud…Pg. 26; VPCCSpatialRegion descriptor in each main AdaptionSet also has the @trackGroupIDs attribute to identify V-PCC component track groups that correspond to the 3D spatial region of the point cloud.), wherein the table comprises the first reference compact representation substring (Pg. 26; 3D spatial region “1” is mapped to V-PCC component track groups with the track_group_id of “1 2”… track_group_id of “2 3”) and a fourth compact representation substring corresponding to a first compact representation substring of the to-be-encoded descriptor (Pg. 26; ContentComponent elements “1 2 3”… The V-PCC component track groups with the track_group_id attribute value “1”, “2” and “3” will be mapped to ContentComponent elements with the id attribute value “1 4 7”, “2 5 8” and “3 6 9” respectively.), and wherein obtaining the third compact representation further comprises obtaining, from the table based on the first compact representation substring of the to-be-encoded descriptor, the fourth compact representation substring and obtaining the third compact representation (Pg. 26; PreSelection elements in the MPD) based on the fourth compact representation substring (Pg. 26; …The ID 1 in the preselection corresponds to an identifier of the main adaption set for the point cloud data and the IDs 1, 2, 4, 5, 7, and 8 correspond to the identifiers of the content components…Pg. 27; An example MPD file that signals the preselection of V-PCC spatial region using ContentComponent in this embodiment…Examiner's note: The PreSelection defines the group of AdaptionSets used for that spatial region).
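The "lookup table" reading of the PreSelection mechanism in claims 6-9 can be sketched with the example values the Office Action quotes from Huang Pg. 26 (spatial region "1" mapped to track groups "1 2", which in turn map to ContentComponent ids). This is a hypothetical rendering of that quoted example, not actual MPD parsing code:

```python
# Hypothetical lookup tables reflecting the example quoted from Huang Pg. 26:
# spatial region -> track_group_ids, and track_group_id -> ContentComponent ids.
track_groups_for_region = {1: [1, 2], 2: [2, 3]}
components_for_track_group = {
    1: [1, 4, 7],
    2: [2, 5, 8],
    3: [3, 6, 9],
}

def content_component_ids(region_id):
    """Look up the ContentComponent ids grouped under a spatial region."""
    ids = []
    for tg in track_groups_for_region[region_id]:
        ids.extend(components_for_track_group[tg])
    return sorted(ids)

def preselection(region_id, main_adaptation_set_id=1):
    """Claim-6 style lookup: from the first compact representation substring
    (spatial region id) and the reference substring (main AdaptationSet id),
    obtain the PreSelection's component id list (third compact representation)."""
    return [main_adaptation_set_id] + content_component_ids(region_id)
```

For region "1" this reproduces the quoted example: the preselection lists the main adaptation set ID 1 followed by content components 1, 2, 4, 5, 7, and 8.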
Regarding claim 11, Huang discloses a method comprising: decapsulating a bitstream of a three-dimensional (3D) map (Pg. 3; A V-PCC bitstream, containing…The payload of occupancy, geometry, and attribute V-PCC units correspond to video data units…that could be decoded by the video decoder specified in the corresponding occupancy, geometry, and attribute parameter set V-PCC unit…Pg. 4; sub-streams need to be collectively decoded along with the patch data of atlas-sub-stream in order to reconstruct and render the point cloud data…Section III; support the partial access and delivery of the point cloud object) to obtain a third compact representation (Pg. 18; ContentComponents corresponding to the point cloud components is used to signal grouping V-PCC components belonging to the 3D spatial region in a point cloud) of a to-be-decoded descriptor (Pg. 17; VPCCSpatialRegion descriptor. Pg. 9-10, Section V(c); VPCC volumetric sample entry should contain a VPCCConfigurationBox which includes a VPCCDecoderConfigurationRecord), wherein the to-be-decoded descriptor corresponds to a 3D map point on the 3D map (Pg. 1, Summary, Par. 2; spatial region descriptors that describe one or more three-dimensional (3D) spatial regions of a point cloud data), Examiner’s note: Additionally, as recited above in examiner’s response to arguments, the cited disclosures in Huang Pg. 7 and Pg. 17 constitute a descriptor representing 3D spatial map information in compact attribute form. obtaining a second compact representation of a reference descriptor (Pg. 12, Section VI.(c).; V-PCC Pre-Selections) corresponding to the to-be-decoded descriptor (Fig. 1-2 Preselection [VPCC Descriptor] arrow pointing to VPCCSpatialRegion descriptors), wherein the reference descriptor corresponds to a decoded 3D map point on the 3D map (Pg. 1, Summary, Par. 
2; one or more preselection elements that describe point cloud components associated with the point cloud data), Examiner’s note: Additionally, as recited above in examiner’s response to arguments, the cited disclosures in Huang Pg. 18 constitute encoded alternative descriptors corresponding to the same 3D spatial region. obtaining a first compact representation of the to-be-decoded descriptor (Pg. 17; VPCCSpatialRegion descriptor) based on the third compact representation and the second compact representation (Pg. 12; initialize the V-PCC decoder, including V-PCC sequence parameter sets as well as other parameter sets for component sub-streams), and obtaining reconstructed data of the 3D map point based on the first compact representation (Detailed Description; Video-based point cloud compressions (V-PCC) represents a volumetric encoding of point cloud visual information and enables efficient capturing, compression, reconstruction, and rendering of point cloud data utilizing MPEG video codecs, such as AVC, HEVC…). Examiner’s note: Additionally, as recited above in examiner’s response to arguments, the cited disclosures in Huang Pg. 12 show an MPD representation formed using both the spatial region descriptor information (first compact representation) and the preselection information (second compact representation) meeting the claimed limitations, and Huang Pg. 4 and Pg. 11-12 expressly teach encapsulation of the representation into a V-PCC bitstream for transmission and decoding.

Regarding claim 12, Huang discloses the method of claim 11, and further discloses sending 3D map request information, and receiving the bitstream corresponding to the 3D map request information (Fig. 5 and Pg. 31; The transmitter 515 transmits or sends information or data to another device. For example, a video encoder transmitter in a server can send encoded video to a video decoder in another device (e.g., mobile device).
The receiver 520 receives information or data transmitted or sent by another device. For example, a mobile device video decoder can receive encoded video data from another device (e.g., server)).

Regarding claim 13, Huang discloses the method of claim 11, and further discloses wherein a first data distribution of the first compact representation (Pg. 17; VPCCSpatialRegion descriptor provides information of a spatial region of the point cloud, including the x, y, z offset of the spatial region and the width, height, and depth of the spatial region in 3D space, and optionally the 3D bounding box information of the point cloud) is different (Pg. 13-17; Table 1 vs Table 3) from a second data distribution of the second compact representation (Pg. 18; Multiple versions of the same point cloud could be signaled using separate PreSelections…may contain a VPCC descriptor…@pcId attribute identifies the point cloud content, and the value of @regionIds attributes identifies one or more 3D spatial regions of the point cloud).

Regarding claim 14, Huang discloses the method of claim 13, and further discloses wherein the first compact representation comprises a first compact representation substring of the to-be-decoded descriptor (Pg. 7; reference_x, reference_y, reference_z specify x, y, and z coordinate values of spatial region corresponding to the 3D spatial part of point cloud data…delta_x, delta_y, delta_z specify dimensions for the 3D spatial region…width, height, depth…bounding_box_x…3D bounding box of point cloud data), wherein the third compact representation comprises a second compact representation substring of the to-be-decoded descriptor (Pg. 9; V-PCC tracks should use VPCCSampleEntry which extends VolumetricVisualSampleEntry with a sample entry type of 'vpc1' or 'vpcg'.
A VPCC volumetric sample entry should contain a VPCCConfigurationBox which includes a VPCCDecoderConfigurationRecord, as described herein…), and wherein the first compact representation substring is different from the second compact representation substring (Pg. 6-7, Section V(a); Spatial region information structure…SpatialRegionInfoStruct() and BoundingBoxInfoStruct() provide region information of a point cloud data, including the x, y, z, coordinate offset and the width, height, and depth of the 3D spatial region, and its source bounding box…Pg. 12; PreSelection elements as defined in MPEG-DASH…with an identifier (ID) for the @preselectionComponents attribute including the ID of the main AdaptionSet for the point cloud.).

Regarding claim 15, Huang discloses the method of claim 14, and further discloses wherein a length of the second compact representation substring is greater than or equal to a length of the corresponding first compact representation substring (Pg. 18; PreSelections that represent alternative versions of the same 3D spatial regions of the point cloud may contain a VPCC descriptor with the same @pcId value and the same @regionIds value, wherein the value assigned to the @pcId attribute identifies the point cloud content, and the value of @regionIds attribute identifies one or more 3D spatial regions of the point cloud).

Regarding claim 16, Huang discloses the method of claim 11, and further discloses wherein obtaining the first compact representation comprises: obtaining the first compact representation from a table (Pg. 12, Section VI(c); A V-PCC preselection is signaled in the MPD using a PreSelection element defined in MPEG-DASH…with an identifier (ID) list for the @preselectionComponents attribute including the ID of the main AdaptionSet for the point cloud followed by the IDs of the AdaptionSets corresponding to the point cloud components.
Examiner's interpretation at the MPD level: the PreSelection element is a lookup table mapping the SpatialRegion descriptor and preselection descriptor) based on the third compact representation and the second compact representation (Pg. 9-10, Section V(c); VPCCSampleEntry which extends VolumetricVisualSampleEntry…VPCC volumetric sample entry should contain a VPCCConfigurationBox which includes a VPCCDecoderConfigurationRecord. Examiner’s note: VPCCSampleEntry parsed from the bitstream (third compact representation) and PreSelection IDs that map to the AdaptionSets (second compact representation) allow the system to reconstruct the VPCCSpatialRegion (first compact representation)).

Regarding claim 17, Huang discloses the method of claim 11, and further discloses wherein obtaining the first compact representation comprises: obtaining the first compact representation from the table based on the third compact representation (Pg. 9-10, Section V(c); VPCCSampleEntry which extends VolumetricVisualSampleEntry…VPCC volumetric sample entry should contain a VPCCConfigurationBox which includes a VPCCDecoderConfigurationRecord).

Regarding claim 19, Huang discloses an apparatus, comprising: a memory configured to store instructions, and one or more processors coupled to the memory and configured to execute the instructions to cause the apparatus to (Pg. 31 and Fig. 5; processor 510 and a memory 505 having instructions stored…instructions upon execution by the processor 510 configure hardware platform 500 to perform the operations described in Fig. 1-4B): perform compaction processing (Detailed Description; Video-based point cloud compressions (V-PCC) represents a volumetric encoding of point cloud visual information and enables efficient capturing, compression, reconstruction, and rendering of point cloud data utilizing MPEG video codecs, such as AVC, HEVC…) on a to-be-encoded descriptor (Pg.
12-13, Section VI.(d).; VPCCSpatialRegion descriptor) to obtain a first compact representation of the to-be-encoded descriptor (Pg. 1, Summary, Par. 2; A method of point cloud data processing comprises determining one or more spatial region descriptors…Pg. 17; VPCCSpatialRegion descriptor provides information of a spatial region of the point cloud including x, y, z offsets…bounding box information of the point cloud), wherein the to-be-encoded descriptor corresponds to a three-dimensional (3D) map point on a 3D map (Pg. 1, Summary, Par. 2; spatial region descriptors that describe one or more three-dimensional (3D) spatial regions of a point cloud data), Examiner’s note: Additionally, as recited above in examiner’s response to arguments, the cited disclosures in Huang Pg. 7 and Pg. 17 constitute a descriptor representing 3D spatial map information in compact attribute form, meeting the claimed “performing compaction processing on a to-be-encoded descriptor to obtain a first compact representation of the to-be-encoded descriptor, wherein the to-be-encoded descriptor corresponds to a three-dimensional (3D) map”; obtain a compact representation of a reference descriptor (Pg. 12, Section VI(c); V-PCC Pre-Selections) corresponding to the to-be-encoded descriptor (Fig. 1-2 Preselection [VPCC Descriptor] arrow pointing to VPCCSpatialRegion descriptors), wherein the reference descriptor corresponds to an encoded 3D map point on the 3D map (Pg. 1, Summary, Par. 2; one or more preselection elements that describe point cloud components associated with the point cloud data), Examiner’s note: Additionally, as recited above in examiner’s response to arguments, the cited disclosures in Huang Pg.
18 constitute encoded alternative descriptors corresponding to the same 3D spatial region, meeting the claimed "reference descriptor corresponds to an encoded 3D map point on the 3D map";

obtain a second compact representation of the to-be-encoded descriptor based on the first compact representation and the compact representation of the reference descriptor (Pg. 1, Summary, Par. 2; transmitting a media presentation description (MPD) file that includes the one or more spatial region descriptors and the one or more preselection elements). Examiner's note: additionally, as recited above in the examiner's response to arguments, the cited disclosures in Huang Pg. 12 show an MPD representation formed using both the spatial region descriptor information (first compact representation) and the preselection information (second compact representation), meeting the claimed "obtaining a second compact representation of the to-be-encoded descriptor based on the first compact representation and the compact representation of the reference descriptor";

and encapsulate the second compact representation to obtain a bitstream of the 3D map (Pg. 2, Section II, Encapsulation and signaling in MPEG-DASH; Pg. 11, Section VI, Embodiment 2: Encapsulation and signaling in MPEG-DASH; Pg. 1, Summary, Par. 2; transmitting a media presentation description (MPD) file). Examiner's note: additionally, as recited above in the examiner's response to arguments, the cited disclosures in Huang Pg. 4 and Pg. 11-12 expressly teach encapsulation of the representation into a V-PCC bitstream for transmission and decoding, meeting the claimed "encapsulating the second compact representation to obtain a bitstream of the 3D map".

Regarding claim 20, Huang discloses an apparatus, comprising: a memory configured to store instructions, and one or more processors coupled to the memory and configured to execute the instructions to cause the apparatus to (Fig.
5; processor 510 and a memory 505 having instructions stored…instructions upon execution by the processor 510 configure hardware platform 500 to perform the operations described in Figs. 1-4B): decapsulate a bitstream of a three-dimensional (3D) map (Pg. 3; a V-PCC bitstream, containing…the payload of occupancy, geometry, and attribute V-PCC units correspond to video data units…that could be decoded by the video decoder specified in the corresponding occupancy, geometry, and attribute parameter set V-PCC unit…Pg. 4; sub-streams need to be collectively decoded along with the patch data of the atlas sub-stream in order to reconstruct and render the point cloud data) to obtain a second compact representation (Pg. 18; ContentComponents corresponding to the point cloud components are used to signal grouping of V-PCC components belonging to the 3D spatial region in a point cloud) of a to-be-decoded descriptor (Pg. 17; VPCCSpatialRegion descriptor. Pg. 9-10, Section V(c); VPCC volumetric sample entry should contain a VPCCConfigurationBox which includes a VPCCDecoderConfigurationRecord), wherein the to-be-decoded descriptor corresponds to a 3D map point on the 3D map (Pg. 1, Summary, Par. 2; spatial region descriptors that describe one or more three-dimensional (3D) spatial regions of a point cloud data). Examiner's note: additionally, as recited above in the examiner's response to arguments, the cited disclosures in Huang Pg. 7 and Pg. 17 constitute a descriptor representing 3D spatial map information in compact attribute form;

obtain a compact representation of a reference descriptor (Pg. 12, Section VI(c); V-PCC Pre-Selections) corresponding to the to-be-decoded descriptor (Figs. 1-2; Preselection [VPCC Descriptor] arrow pointing to VPCCSpatialRegion descriptors), wherein the reference descriptor corresponds to a decoded 3D map point on the 3D map (Pg. 1, Summary, Par.
2; one or more preselection elements that describe point cloud components associated with the point cloud data). Examiner's note: additionally, as recited above in the examiner's response to arguments, the cited disclosures in Huang Pg. 18 constitute encoded alternative descriptors corresponding to the same 3D spatial region;

obtain a first compact representation of the to-be-decoded descriptor (Pg. 17; VPCCSpatialRegion descriptor) based on the second compact representation and the compact representation of the reference descriptor (Pg. 12; initialize the V-PCC decoder, including V-PCC sequence parameter sets as well as other parameter sets for component sub-streams), and obtain reconstructed data of the 3D map point based on the first compact representation (Detailed Description; video-based point cloud compression (V-PCC) represents a volumetric encoding of point cloud visual information and enables efficient capturing, compression, reconstruction, and rendering of point cloud data utilizing MPEG video codecs, such as AVC, HEVC…). Examiner's note: additionally, as recited above in the examiner's response to arguments, the cited disclosures in Huang Pg. 12 show an MPD representation formed using both the spatial region descriptor information (first compact representation) and the preselection information (second compact representation), and Huang Pg. 4 and Pg. 11-12 expressly teach encapsulation of the representation into a V-PCC bitstream for transmission and decoding.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
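The encode/decode flow the examiner maps to claims 19 and 20 above (compact a descriptor, derive a second representation from it and a reference, encapsulate; then decapsulate and reconstruct) can be sketched in miniature. This is an illustrative sketch only, not code from Huang or the application: every name (`compact`, `encode`, `Bitstream`, the quantization scale) is hypothetical, and the byte-wise difference is just one possible way to obtain one representation "based on" another.

```python
from dataclasses import dataclass


@dataclass
class Bitstream:
    """Hypothetical container standing in for an encapsulated payload."""
    residual: bytes  # second compact representation of the descriptor


def compact(descriptor: list[float], scale: int = 255) -> bytes:
    """Compaction processing: quantize each component (assumed in [0, 1])
    to one byte, yielding a first compact representation."""
    return bytes(min(255, max(0, round(v * scale))) for v in descriptor)


def encode(descriptor: list[float], reference: list[float]) -> Bitstream:
    """Claim-19-style flow: compact the to-be-encoded descriptor, then derive
    the second compact representation from it and the reference's compact form."""
    first = compact(descriptor)
    ref = compact(reference)
    # Byte-wise difference (mod 256) against the reference descriptor.
    residual = bytes((a - b) % 256 for a, b in zip(first, ref))
    return Bitstream(residual)  # stands in for encapsulation into a bitstream


def decode(bs: Bitstream, reference: list[float]) -> bytes:
    """Claim-20-style flow: decapsulate, obtain the reference's compact
    representation, and reconstruct the descriptor's first compact form."""
    ref = compact(reference)
    return bytes((r + b) % 256 for r, b in zip(bs.residual, ref))


desc = [0.10, 0.52, 0.90]
ref = [0.12, 0.50, 0.88]
assert decode(encode(desc, ref), ref) == compact(desc)  # lossless round trip
```

The point of the reference-relative residual is that descriptors of nearby 3D map points are similar, so the residual is small and cheap to entropy-code; the decoder only needs the residual plus the already-decoded reference.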
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 10 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Huang (WO 2021093153), in view of Duan et al., "Overview of the MPEG-CDVS standard," IEEE Transactions on Image Processing 25, no. 1 (2015): 179-194, hereinafter referred to as "Duan".

Regarding claim 10, Huang discloses obtaining the third compact representation based on the second compact representation and the first compact representation (Pg. 1, Summary, Par. 2; transmitting a media presentation description (MPD) file that includes the one or more spatial region descriptors and the one or more preselection elements). Huang does not disclose setting an exclusive OR value or a difference. In the same art of compact descriptor data and MPEG, Duan discloses setting an exclusive OR value or a difference (Pg. 181, Sections D-E; local feature descriptor…location data are represented as a spatial histogram consisting of a binary map and a set of histogram counts; the histogram map and counts are encoded using a binary context-based arithmetic coding scheme). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the V-PCC system of Huang with the XOR/difference operations on the compact representations as taught by Duan.
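Claim 10's "setting an exclusive OR value or a difference" between compact representations is the standard residual-coding idea. A minimal sketch of the XOR variant (the byte values are hypothetical, invented for illustration; neither reference is quoted here):

```python
def xor_residual(a: bytes, b: bytes) -> bytes:
    """Bitwise XOR of two equal-length compact representations,
    yielding a residual (the claimed 'exclusive OR value')."""
    assert len(a) == len(b)
    return bytes(x ^ y for x, y in zip(a, b))


# Compact representations of similar descriptors differ in only a few
# bits, so the XOR residual is mostly zero bytes and entropy-codes cheaply.
second = bytes([0b10110010, 0b01100001, 0b11110000])
first = bytes([0b10110010, 0b01100011, 0b11110000])
residual = xor_residual(second, first)
assert residual == bytes([0, 0b00000010, 0])

# XOR is its own inverse, so the same operation recovers the input,
# which is why a decoder can undo the encoder's step exactly.
assert xor_residual(residual, first) == second
```

The difference variant works the same way with subtraction and addition (mod 256) in place of XOR; either way, the third compact representation carries only the disagreement between the other two.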
Both references address compact descriptor data structures for multimedia coding and transmission, and a POSITA would recognize that applying a known operation from CDVS to V-PCC descriptors would yield predictable results: improved compression efficiency, optimized transmission, and reduced redundancy (Duan Pg. 179; one could reduce transmission data by at least an order of magnitude by extracting compact visual features efficiently on the mobile device and sending descriptors at low bitrates…a significant reduction in latency. Pg. 191, Conclusion; high image retrieval performance with extremely compact feature data).

Regarding claim 18, Huang discloses the first compact representation, the second compact representation, and the third compact representation (Detailed Description; video-based point cloud compression (V-PCC) represents a volumetric encoding of point cloud visual information and enables efficient capturing, compression, reconstruction, and rendering of point cloud data utilizing MPEG video codecs, such as AVC, HEVC…Pg. 12-13, Section VI(d); VPCCSpatialRegion descriptor). Huang does not disclose setting an exclusive OR value or a sum. In the same art of compact descriptor data and MPEG, Duan discloses setting an exclusive OR value or a sum (Pg. 181, Sections D-E; local feature descriptor…location data are represented as a spatial histogram consisting of a binary map and a set of histogram counts; the histogram map and counts are encoded using a binary context-based arithmetic coding scheme). Huang and Duan are combined for the reasons set forth above with respect to claim 10.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JENNY NGAN TRAN, whose telephone number is (571) 272-6888. The examiner can normally be reached Mon-Thurs, 8am-5pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Alicia Harrington, can be reached at (571) 272-2330. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JENNY N TRAN/
Examiner, Art Unit 2615

/ALICIA M HARRINGTON/
Supervisory Patent Examiner, Art Unit 2615

Prosecution Timeline

Dec 04, 2023
Application Filed
Jan 03, 2024
Response after Non-Final Action
Sep 05, 2025
Non-Final Rejection — §102, §103
Dec 04, 2025
Response Filed
Feb 12, 2026
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12499589
SYSTEMS AND METHODS FOR IMAGE GENERATION VIA DIFFUSION
2y 5m to grant
Granted Dec 16, 2025
Study what changed to get past this examiner. Based on the 1 most recent grant.


Prosecution Projections

3-4
Expected OA Rounds
20%
Grant Probability
70%
With Interview (+50.0%)
2y 6m
Median Time to Grant
Moderate
PTA Risk
Based on 5 resolved cases by this examiner. Grant probability derived from career allow rate.
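The projection above pairs a 20% baseline grant probability (the examiner's career allow rate) with a +50-point interview lift to reach the 70% "With Interview" figure. A one-line sketch of that arithmetic, assuming the lift is additive in percentage points (the tool does not state its model, so additivity is an assumption):

```python
def projected_grant_probability(base_pct: int, interview_lift_pct: int) -> int:
    """Combine a baseline grant rate with an interview lift, both in whole
    percentage points, clamped to [0, 100]. Additive lift is an assumption,
    not the tool's documented formula."""
    return min(100, max(0, base_pct + interview_lift_pct))


# 20% career allow rate + 50-point interview lift -> the 70% shown above.
assert projected_grant_probability(20, 50) == 70
```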
