Prosecution Insights
Last updated: April 19, 2026
Application No. 18/578,964

ATLAS INFORMATION CARRIAGE IN CODED VOLUMETRIC CONTENT

Status: Non-Final Office Action (§103)
Filed: Jan 12, 2024
Examiner: SILVA-AVINA, EMMANUEL
Art Unit: 2673
Tech Center: 2600 (Communications)
Assignee: Guangdong OPPO Mobile Telecommunications Corp., Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 82% (Favorable)
Predicted OA Rounds: 1-2
Predicted Time to Grant: 3y 1m
Grant Probability with Interview: 86%
Examiner Intelligence

Career Allow Rate: 82% (above average; 54 granted / 66 resolved; +19.8% vs Tech Center average)
Interview Lift: +4.7% (minimal, roughly +5%; resolved cases with vs. without interview)
Avg Prosecution: 3y 1m (typical timeline)
Currently Pending: 17 applications
Total Applications: 83 (across all art units)

Statute-Specific Performance

§101: 13.0% (-27.0% vs TC avg)
§102: 16.6% (-23.4% vs TC avg)
§103: 55.4% (+15.4% vs TC avg)
§112: 13.5% (-26.5% vs TC avg)

Tech Center averages are estimates; based on career data from 66 resolved cases.
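For readers checking the headline figures, the dashboard numbers above reduce to simple arithmetic. A minimal sketch (the Tech Center baseline is inferred from the stated lift, not reported directly):

```python
# Check of the headline examiner statistics shown above.
granted, resolved = 54, 66
allow_rate = 100 * granted / resolved      # career allow rate
lift_vs_tc = 19.8                          # reported lift vs Tech Center average
tc_avg = allow_rate - lift_vs_tc           # implied TC baseline (an inference)

print(f"allow rate: {allow_rate:.1f}% (displayed as {round(allow_rate)}%)")
print(f"implied TC average: {tc_avg:.1f}%")
```

The 54/66 figure rounds to the displayed 82%, and the +19.8% lift implies a Tech Center average of roughly 62%.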

Office Action (§103)

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This communication is in response to Application No. 18/578,964, filed 01/12/2024. Claims 1-20 are pending.

Information Disclosure Statement

The information disclosure statement(s) (IDS) submitted on 01/12/2024 has been entered and considered. Initialed copies of the PTO-1449 by the examiner are attached.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-6, 9-14, and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Oh (US 20210217200 A1, hereafter referred to as “Oh”) in view of Salahieh et al. (US 20220262041 A1, hereafter referred to as “Salahieh”).
Regarding claim 1, Oh discloses a method comprising: obtaining immersive media data comprising encoded data for three-dimensional volumetric media content (“The present disclosure provides a method of providing point cloud content to provide a user with various services such as virtual reality (VR), augmented reality (AR), mixed reality (MR), and autonomous driving” Oh, [0064]; “The point cloud video encoder may output a bitstream containing the encoded point cloud video data. The bitstream may not only include encoded point cloud video data, but also include signaling information related to encoding of the point cloud video data” Oh, [0067]); extracting, via a media pipeline, component data from the immersive media data, the component data comprising an atlas component, attribute component, geometry component, and occupancy component (“The reception device according to the embodiments may restore attribute video data, geometry video data, and occupancy video data, which are actual video data having the same presentation time, based on an atlas (tile, patch)” Oh, [0134]; “Point cloud data according to the embodiments, for example, V-PCC components may include an atlas, an occupancy map, geometry, and attributes” Oh, [0138]); decoding, via an atlas component decoder, the atlas component, wherein a decoded atlas component is output as decoded atlas bitstream (“the reception method/device according to the embodiments may access a 3D bounding_box based on the 3D region track grouping, and may access an atlas_tile based on the 2D region track grouping. In addition, in order to make a partial access to a 3D bounding_box, geometry, attributes, and occupancy data related to the 3D bounding_box should be decoded. 
In order to decode the V3C components, the relevant information is eventually used in the atlas bitstream” Oh, [0915]); assigning, via the pre-processing logic, header information to each block of the one or more blocks, the header information indicating respective tile information for each block (“VPS: V-PCC parameter set; AD: Atlas data; OVD: Occupancy video data; GVD: Geometry video data; AVD: Attribute video data; ACL: Atlas Coding Layer; AAPS: Atlas adaptation parameter set; ASPS: Atlas sequence parameter set, which may be a syntax structure containing syntax elements according to embodiments that apply to zero or more entire coded atlas sequences (CASs) as determined by the content of a syntax element found in the ASPS referred to by a syntax element found in each tile group header” Oh, [0500]; “Embodiments include atlas tile group information associated with some data of a V-PCC object included in each spatial region at a file system level. Further, embodiments include an extended signaling scheme for label and/or patch information included in each atlas tile group” Oh, [0508]); providing, via the atlas pre-processing logic, the block-order decoded atlas bitstream to an input buffer of a presentation engine, wherein the input buffer is configured to provide the block-order atlas bitstream to the presentation engine (“the input bitstream may include bitstreams for the geometry image, texture image (attribute(s) image), and occupancy map image described above. The reconstructed image (or the output image or the decoded image) may represent a reconstructed image for the geometry image, texture image (attribute(s) image), and occupancy map image described above.” Oh, [0304]; “In addition, the memory 170 may include a decoded picture buffer (DPB) or may be configured by a digital storage medium” Oh, [0305]). 
Oh discloses all of the subject matter as described above except for specifically teaching assembling an atlas frame in block-order based on the decoded atlas bitstream, wherein assembling the atlas frame in block-order further comprises: arranging, via the atlas pre-processing logic, one or more sub-bitstreams of the decoded atlas bitstream into one or more blocks, respectively; and generating, via the atlas pre-processing logic, a block-order decoded atlas bitstream, wherein generating the block-order decoded atlas bitstream includes: ordering, via the atlas pre-processing logic, the one or more blocks of the decoded atlas bitstream in a scan order following a space-filling curve. However, Salahieh in the same field of endeavor teaches assembling an atlas frame in block-order based on the decoded atlas bitstream, wherein assembling the atlas frame in block-order further comprises: arranging, via the atlas pre-processing logic, one or more sub-bitstreams of the decoded atlas bitstream into one or more blocks, respectively (“the V-PCC encoder 100 fills in empty spaces between patches to improve the efficiency of video coding. For example, the V-PCC encoder 100 processes each block of pixels arranged in raster order and assigns the index of the patch as block metadata information” Salahieh, [0031]); generating, via the atlas pre-processing logic, a block-order decoded atlas bitstream, wherein generating the block-order decoded atlas bitstream includes: ordering, via the atlas pre-processing logic, the one or more blocks of the decoded atlas bitstream in a scan order following a space-filling curve (“the V-PCC encoder 100 packs the patches 108 and an example occupancy map 112 (described below) into a tiled canvas (e.g., an atlas). As used herein, an occupancy map indicates regions in the atlas that are occupied by patches. 
For example, the occupancy map includes a value of 1 indicating the corresponding pixel in the atlas is occupied and a value of 0 indicating the corresponding pixel in the atlas is not occupied. In some examples, the V-PCC encoder 100 generates the occupancy map. In the illustrated example, the occupancy map 112 indicates parts of the atlas that are occupied. The V-PCC encoder 100 further generates patch information metadata to indicate how patches are mapped between the projection planes and the atlas... the V-PCC encoder 100 fills in empty spaces between patches to improve the efficiency of video coding. For example, the V-PCC encoder 100 processes each block of pixels arranged in raster order and assigns the index of the patch as block metadata information” Salahieh, [0030]-[0031]). Therefore, it would have been obvious to one of ordinary skill in the art to combine Oh and Salahieh before the effective filing date of the claimed invention. The motivation for this combination of references would have been to improve the efficiency of video coding by assembling an atlas frame in block-order based on a space-filling curve (Salahieh, [0031]). This motivation for the combination of Oh and Salahieh is supported by KSR exemplary rationale (G): Some teaching, suggestion, or motivation in the prior art that would have led one of ordinary skill to modify the prior art reference or to combine prior art reference teachings to arrive at the claimed invention. MPEP 2141 (III). Regarding claim 2, Oh and Salahieh disclose the method of claim 1, wherein the method further comprises: obtaining, via atlas pre-processing logic, the decoded atlas bitstream from the atlas component decoder; wherein the decoded atlas information is provided by the atlas component decoder in a patch-order (“asps_patch_precedence_order_flag equal to 1 indicates that patch precedence for the current atlas is the same as the decoding order. 
asps_patch_precedence_order_flag equal to 0 indicates that patch precedence for the current atlas is the reverse of the decoding order” Oh, [0650]). Regarding claim 3, Oh and Salahieh disclose the method of claim 1, wherein the space-filling curve is a Z-order curve such that the scan order is a raster scan order, wherein the one or more blocks of the block-order decoded atlas bitstream are ordered in raster scan order of an atlas tile of the atlas frame (“the V-PCC encoder 100 fills in empty spaces between patches to improve the efficiency of video coding. For example, the V-PCC encoder 100 processes each block of pixels arranged in raster order and assigns the index of the patch as block metadata information” Salahieh, [0031]). Therefore, combining Oh and Salahieh would meet the claim limitations for the same reasons as previously discussed in claim 1. Regarding claim 4, Oh and Salahieh disclose the method of claim 1, wherein the header information comprises a patch identifier and block identifier of each block, the patch identifier identifying a patch to which a respective block belongs, and the block identifier identifying a position of the respective block within the patch (“Mapping information about each block and patch, i.e., a candidate index (when patches are disposed in order based on the 2D spatial position and size information about the patches, multiple patches may be mapped to one block in an overlapping manner. In this case, the mapped patches constitute a candidate list, and the candidate index indicates the position in sequential order of a patch whose data is present in the block), and a local patch index (which is an index indicating one of the patches present in the frame). Table X shows a pseudo code representing the process of matching between blocks and patches based on the candidate list and the local patch indexes.” Oh, [0249]). 
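As an editorial illustration of the block ordering at issue in claims 1 and 3 (raster scan as the recited special case of a space-filling traversal), the two scan orders can be sketched as follows. The code is hypothetical and appears in neither Oh nor Salahieh:

```python
def raster_index(x: int, y: int, width_in_blocks: int) -> int:
    """Raster scan order: left-to-right, top-to-bottom across the tile."""
    return y * width_in_blocks + x

def morton_index(x: int, y: int, bits: int = 16) -> int:
    """Z-order (Morton) curve: interleave the bits of the block coordinates."""
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i)       # x bits land in even positions
        z |= ((y >> i) & 1) << (2 * i + 1)   # y bits land in odd positions
    return z

# Order the blocks of a 4x4 atlas tile both ways.
blocks = [(x, y) for y in range(4) for x in range(4)]
raster = sorted(blocks, key=lambda b: raster_index(b[0], b[1], 4))
zorder = sorted(blocks, key=lambda b: morton_index(b[0], b[1]))
# The Z-order traversal visits (0,0), (1,0), (0,1), (1,1) before moving on
# to the next 2x2 quadrant; raster order walks each row in full.
```

The difference matters for the claim mapping: Salahieh's quoted passage describes raster order per block, while the claim recites a space-filling curve with raster scan as one instance.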
Regarding claim 5, Oh and Salahieh disclose the method of claim 1, wherein the respective tile information comprises one or more of a tile index, tile identifier, tile origin, and tile size for a respective block (“2d_region_id may indicate the identifier of a 2D region. According to embodiments, it may match a video tile identifier, a tile_group identifier, or an atlas_tile identifier or tile_group identifier in an atlas frame” Oh, [1029]). Regarding claim 6, Oh and Salahieh disclose the method of claim 1, wherein the header information comprises a tile identifier and patch identifier (“The VPCC unit header may include the following information based on the VUH unit type” Oh, [0527]; “The 3d region-related fields and the 2d region-related fields of the 3D region mapping information of FIG. 36 may correspond to the tile information (tile id, 2D region) and patch object idx of patch information contained in the bitstream according to the embodiments, respectively” Oh, [0749]). Regarding claim 9, Oh and Salahieh disclose an apparatus, comprising: a non-transitory computer readable medium in communication with the processor, the non-transitory computer readable medium having encoded thereon a set of instructions executable by the processor to (Oh, [1185]): obtain immersive media data comprising encoded data for three-dimensional volumetric media content (“The present disclosure provides a method of providing point cloud content to provide a user with various services such as virtual reality (VR), augmented reality (AR), mixed reality (MR), and autonomous driving” Oh, [0064]; “The point cloud video encoder may output a bitstream containing the encoded point cloud video data. 
The bitstream may not only include encoded point cloud video data, but also include signaling information related to encoding of the point cloud video data” Oh, [0067]); extract, via a media pipeline, component data from the immersive media data, the component data comprising an atlas component, attribute component, geometry component, and occupancy component (“The reception device according to the embodiments may restore attribute video data, geometry video data, and occupancy video data, which are actual video data having the same presentation time, based on an atlas (tile, patch)” Oh, [0134]; “Point cloud data according to the embodiments, for example, V-PCC components may include an atlas, an occupancy map, geometry, and attributes” Oh, [0138]); decode, via an atlas component decoder, the atlas component, wherein a decoded atlas component is output as decoded atlas bitstream (“the reception method/device according to the embodiments may access a 3D bounding_box based on the 3D region track grouping, and may access an atlas_tile based on the 2D region track grouping. In addition, in order to make a partial access to a 3D bounding_box, geometry, attributes, and occupancy data related to the 3D bounding_box should be decoded. In order to decode the V3C components, the relevant information is eventually used in the atlas bitstream” Oh, [0915]); assemble an atlas frame in block-order based on the decoded atlas bitstream, wherein assembling the atlas frame in block-order further comprises: arranging, via the atlas pre-processing logic, one or more sub-bitstreams of the decoded atlas bitstream into one or more blocks, respectively (“the V-PCC encoder 100 fills in empty spaces between patches to improve the efficiency of video coding. 
For example, the V-PCC encoder 100 processes each block of pixels arranged in raster order and assigns the index of the patch as block metadata information” Salahieh, [0031]); assigning, via the pre-processing logic, header information to each block of the one or more blocks, the header information indicating respective tile information for each block (“VPS: V-PCC parameter set; AD: Atlas data; OVD: Occupancy video data; GVD: Geometry video data; AVD: Attribute video data; ACL: Atlas Coding Layer; AAPS: Atlas adaptation parameter set; ASPS: Atlas sequence parameter set, which may be a syntax structure containing syntax elements according to embodiments that apply to zero or more entire coded atlas sequences (CASs) as determined by the content of a syntax element found in the ASPS referred to by a syntax element found in each tile group header” Oh, [0500]; “Embodiments include atlas tile group information associated with some data of a V-PCC object included in each spatial region at a file system level. Further, embodiments include an extended signaling scheme for label and/or patch information included in each atlas tile group” Oh, [0508]); generating, via the atlas pre-processing logic, a block-order decoded atlas bitstream, wherein generating the block-order decoded atlas bitstream includes: ordering, via the atlas pre-processing logic, the one or more blocks of the decoded atlas bitstream in a scan order following a space-filling curve (“the V-PCC encoder 100 packs the patches 108 and an example occupancy map 112 (described below) into a tiled canvas (e.g., an atlas). As used herein, an occupancy map indicates regions in the atlas that are occupied by patches. For example, the occupancy map includes a value of 1 indicating the corresponding pixel in the atlas is occupied and a value of 0 indicating the corresponding pixel in the atlas is not occupied. In some examples, the V-PCC encoder 100 generates the occupancy map. 
In the illustrated example, the occupancy map 112 indicates parts of the atlas that are occupied. The V-PCC encoder 100 further generates patch information metadata to indicate how patches are mapped between the projection planes and the atlas... the V-PCC encoder 100 fills in empty spaces between patches to improve the efficiency of video coding. For example, the V-PCC encoder 100 processes each block of pixels arranged in raster order and assigns the index of the patch as block metadata information” Salahieh, [0030]-[0031]); and provide, via the atlas pre-processing logic, the block-order decoded atlas bitstream to an input buffer of a presentation engine, wherein the input buffer is configured to provide the block-order atlas bitstream to the presentation engine (“the input bitstream may include bitstreams for the geometry image, texture image (attribute(s) image), and occupancy map image described above. The reconstructed image (or the output image or the decoded image) may represent a reconstructed image for the geometry image, texture image (attribute(s) image), and occupancy map image described above.” Oh, [0304]; “In addition, the memory 170 may include a decoded picture buffer (DPB) or may be configured by a digital storage medium” Oh, [0305]). Therefore, combining Oh and Salahieh would meet the claim limitations for the same reasons as previously discussed in claim 1. Regarding claim 10, Oh and Salahieh disclose the apparatus of claim 9, wherein the set of instructions is further executable by the processor to: obtain, via atlas pre-processing logic, the decoded atlas bitstream from the atlas component decoder; wherein the decoded atlas information is provided by the atlas component decoder in a patch-order (“asps_patch_precedence_order_flag equal to 1 indicates that patch precedence for the current atlas is the same as the decoding order. 
asps_patch_precedence_order_flag equal to 0 indicates that patch precedence for the current atlas is the reverse of the decoding order” Oh, [0650]). Regarding claim 11, Oh and Salahieh disclose the apparatus of claim 9, wherein the space-filling curve is a Z-order curve such that the scan order is a raster scan order, wherein the one or more blocks of the block-order decoded atlas bitstream are ordered in raster scan order of an atlas tile of the atlas frame (“the V-PCC encoder 100 fills in empty spaces between patches to improve the efficiency of video coding. For example, the V-PCC encoder 100 processes each block of pixels arranged in raster order and assigns the index of the patch as block metadata information” Salahieh, [0031]). Therefore, combining Oh and Salahieh would meet the claim limitations for the same reasons as previously discussed in claim 1. Regarding claim 12, Oh and Salahieh disclose the apparatus of claim 9, wherein the header information comprises a patch identifier and block identifier of each block, the patch identifier identifying a patch to which a respective block belongs, and the block identifier identifying a position of the respective block within the patch (“Mapping information about each block and patch, i.e., a candidate index (when patches are disposed in order based on the 2D spatial position and size information about the patches, multiple patches may be mapped to one block in an overlapping manner. In this case, the mapped patches constitute a candidate list, and the candidate index indicates the position in sequential order of a patch whose data is present in the block), and a local patch index (which is an index indicating one of the patches present in the frame). Table X shows a pseudo code representing the process of matching between blocks and patches based on the candidate list and the local patch indexes.” Oh, [0249]). 
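The patch-order-to-block-order conversion recited in claims 2/10, with the per-block header fields of claims 4/12 (patch identifier plus block identifier), can be illustrated with a hypothetical sketch. The structure and names below are editorial assumptions; none of them appear in Oh or Salahieh:

```python
from dataclasses import dataclass

@dataclass
class AtlasBlock:
    patch_id: int    # patch to which the block belongs (claims 4 and 12)
    block_id: int    # position of the block within its patch
    x: int           # block column within the atlas tile
    y: int           # block row within the atlas tile
    payload: bytes

def to_block_order(patch_order_blocks: list[AtlasBlock],
                   width_in_blocks: int) -> list[AtlasBlock]:
    """Reorder blocks delivered in patch-order into raster block-order,
    keeping each block's header fields (patch_id, block_id) attached."""
    return sorted(patch_order_blocks, key=lambda b: b.y * width_in_blocks + b.x)
```

The key design point the claims turn on is that the header travels with each block, so the presentation engine can recover the patch mapping even after the stream is re-sorted.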
Regarding claim 13, Oh and Salahieh disclose the apparatus of claim 9, wherein the respective tile information comprises one or more of a tile index, tile identifier, tile origin, and tile size for a respective block (“2d_region_id may indicate the identifier of a 2D region. According to embodiments, it may match a video tile identifier, a tile_group identifier, or an atlas_tile identifier or tile_group identifier in an atlas frame” Oh, [1029]). Regarding claim 14, Oh and Salahieh disclose the apparatus of claim 9, wherein the header information comprises a tile identifier and patch identifier (“The VPCC unit header may include the following information based on the VUH unit type” Oh, [0527]; “The 3d region-related fields and the 2d region-related fields of the 3D region mapping information of FIG. 36 may correspond to the tile information (tile id, 2D region) and patch object idx of patch information contained in the bitstream according to the embodiments, respectively” Oh, [0749]). Regarding claim 17, Oh and Salahieh disclose a system for provisioning decoded atlas information, the system comprising: a demultiplexer configured to demultiplex immersive media data (“The demultiplexer 16000 demultiplexes the compressed bitstream to output a compressed texture image, a compressed geometry image, a compressed occupancy map, and compressed auxiliary patch information” Oh, [0289]), wherein immersive media data comprises encoded data for three-dimensional volumetric media content (“The present disclosure provides a method of providing point cloud content to provide a user with various services such as virtual reality (VR), augmented reality (AR), mixed reality (MR), and autonomous driving” Oh, [0064]; “The point cloud video encoder may output a bitstream containing the encoded point cloud video data. 
The bitstream may not only include encoded point cloud video data, but also include signaling information related to encoding of the point cloud video data” Oh, [0067]), wherein demultiplexing the immersive media data includes extracting an atlas component (“The reception device according to the embodiments may restore attribute video data, geometry video data, and occupancy video data, which are actual video data having the same presentation time, based on an atlas (tile, patch)” Oh, [0134]; “Point cloud data according to the embodiments, for example, V-PCC components may include an atlas, an occupancy map, geometry, and attributes” Oh, [0138]); an atlas component decoder coupled to the demultiplexer, the atlas component decoder configured to decode the atlas component, wherein a decoded atlas component is output as decoded atlas bitstream (“the reception method/device according to the embodiments may access a 3D bounding_box based on the 3D region track grouping, and may access an atlas_tile based on the 2D region track grouping. In addition, in order to make a partial access to a 3D bounding_box, geometry, attributes, and occupancy data related to the 3D bounding_box should be decoded. In order to decode the V3C components, the relevant information is eventually used in the atlas bitstream” Oh, [0915]); an atlas pre-processing subsystem coupled to the atlas component decoder, the atlas pre-processing subsystem comprising: a processor; and a non-transitory computer readable medium in communication with the processor, the non-transitory computer readable medium having encoded thereon a set of instructions executable by the processor to (Oh, [0081], [1185]): arrange one or more sub-bitstreams of the decoded atlas bitstream into one or more blocks (“the V-PCC encoder 100 fills in empty spaces between patches to improve the efficiency of video coding. 
For example, the V-PCC encoder 100 processes each block of pixels arranged in raster order and assigns the index of the patch as block metadata information” Salahieh, [0031]); assign header information to each block of the one or more blocks, the header information indicating respective tile information for each block (“VPS: V-PCC parameter set; AD: Atlas data; OVD: Occupancy video data; GVD: Geometry video data; AVD: Attribute video data; ACL: Atlas Coding Layer; AAPS: Atlas adaptation parameter set; ASPS: Atlas sequence parameter set, which may be a syntax structure containing syntax elements according to embodiments that apply to zero or more entire coded atlas sequences (CASs) as determined by the content of a syntax element found in the ASPS referred to by a syntax element found in each tile group header” Oh, [0500]; “Embodiments include atlas tile group information associated with some data of a V-PCC object included in each spatial region at a file system level. Further, embodiments include an extended signaling scheme for label and/or patch information included in each atlas tile group” Oh, [0508]); and generate a block-order decoded atlas bitstream, wherein generating the block- order decoded atlas bitstream includes: ordering the one or more blocks of the decoded atlas bitstream in a scan order following a space-filling curve (“the V-PCC encoder 100 packs the patches 108 and an example occupancy map 112 (described below) into a tiled canvas (e.g., an atlas). As used herein, an occupancy map indicates regions in the atlas that are occupied by patches. For example, the occupancy map includes a value of 1 indicating the corresponding pixel in the atlas is occupied and a value of 0 indicating the corresponding pixel in the atlas is not occupied. In some examples, the V-PCC encoder 100 generates the occupancy map. In the illustrated example, the occupancy map 112 indicates parts of the atlas that are occupied. 
The V-PCC encoder 100 further generates patch information metadata to indicate how patches are mapped between the projection planes and the atlas... the V-PCC encoder 100 fills in empty spaces between patches to improve the efficiency of video coding. For example, the V-PCC encoder 100 processes each block of pixels arranged in raster order and assigns the index of the patch as block metadata information” Salahieh, [0030]-[0031]). Therefore, combining Oh and Salahieh would meet the claim limitations for the same reasons as previously discussed in claim 1. Regarding claim 18, Oh and Salahieh disclose the system of claim 17, wherein the header information comprises a patch identifier and block identifier of each block, the patch identifier identifying a patch to which a respective block belongs, and the block identifier identifying a position of the respective block within the patch (“Mapping information about each block and patch, i.e., a candidate index (when patches are disposed in order based on the 2D spatial position and size information about the patches, multiple patches may be mapped to one block in an overlapping manner. In this case, the mapped patches constitute a candidate list, and the candidate index indicates the position in sequential order of a patch whose data is present in the block), and a local patch index (which is an index indicating one of the patches present in the frame). Table X shows a pseudo code representing the process of matching between blocks and patches based on the candidate list and the local patch indexes.” Oh, [0249]). Regarding claim 19, Oh and Salahieh disclose the system of claim 17, wherein the respective tile information comprises one or more of a tile index, tile identifier, tile origin, and tile size for a respective block (“2d_region_id may indicate the identifier of a 2D region. 
According to embodiments, it may match a video tile identifier, a tile_group identifier, or an atlas_tile identifier or tile_group identifier in an atlas frame” Oh, [1029]). Regarding claim 20, Oh and Salahieh disclose the system of claim 17, wherein the header information comprises a tile identifier and patch identifier (“The VPCC unit header may include the following information based on the VUH unit type” Oh, [0527]; “The 3d region-related fields and the 2d region-related fields of the 3D region mapping information of FIG. 36 may correspond to the tile information (tile id, 2D region) and patch object idx of patch information contained in the bitstream according to the embodiments, respectively” Oh, [0749]). Claim(s) 7-8 and 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over Oh in view of Salahieh and in further view of Hannuksela et al. (WO 2020141260 A1, hereafter referred to as “Hannuksela”). Regarding claim 7, the combination of Oh and Salahieh as a whole does not expressly disclose wherein the header information comprises one or more of an atlas frame delimiter, atlas tile delimiter, and patch delimiter. However, Hannuksela in the same field of endeavor teaches wherein the header information comprises one or more of an atlas frame delimiter, atlas tile delimiter, and patch delimiter (“A sub-picture sequence identifier included in a header included in a VCL NAL unit, such as a tile group header or a slice header and associated with the respective image segment (e.g. tile group or slice)... A sub-picture sequence identifier included in a sub-picture delimiter, a picture header, or alike syntax structure, which is implicitly referenced by coded video data. A sub-picture delimiter may for example be a specific NAL unit that starts a new sub-picture. Implicit referencing may for example mean that the previous syntax structure (e.g. sub-picture delimiter or picture header) in decoding or bitstream order may be referenced” Hannuksela [0296]). 
Therefore, it would have been obvious to one of ordinary skill in the art to combine Oh, Salahieh and Hannuksela before the effective filing date of the claimed invention. The motivation for this combination of references would have been to indicate a start and endpoint of the coded video data such as Video Coding Layer (VCL) Network Abstraction Layer (NAL) units and the picture elements included in a header (Hannuksela, [0296]). This motivation for the combination of Oh, Salahieh and Hannuksela is supported by KSR exemplary rationale (G) Some teaching, suggestion, or motivation in the prior art that would have led one of ordinary skill to modify the prior art reference or to combine prior art reference teachings to arrive at the claimed invention. MPEP 2141 (III). Regarding claim 8, Oh, Salahieh and Hannuksela disclose the method of claim 1, wherein the decoded atlas bitstream comprises an atlas frame size indicator followed by an atlas frame payload (“the phrase along the bitstream may be used when the bitstream is contained in a container file, such as a file conforming to the ISO Base Media File Format, and certain file metadata is stored in the file in a manner that associates the metadata to the bitstream, such as boxes in the sample entry for a track containing the bitstream, a sample group for the track containing the bitstream, or a timed metadata track associated with the track containing the bitstream” Hannuksela, [0154]; “A basic building block in the ISO base media file format is called a box. 
Each box has a header and a payload” Hannuksela, [0165]), wherein the atlas frame payload comprises one or more tile size indicators, each tile size indicator followed by a respective tile payload, and each respective tile payload comprises one or more patch size indicators, each patch size indicator followed by a respective patch payload, wherein the patch payload comprises a plurality of blocks in a raster scan order (“the box header indicates the type of the box and the size of the box in terms of bytes. A box may enclose other boxes, and the ISO file format specifies which box types are allowed within a box of a certain type. Furthermore, the presence of some boxes may be mandatory in each file, while the presence of other boxes may be optional. Additionally, for some box types, it may be allowable to have more than one box present in a file. Thus, the ISO base media file format may be considered to specify a hierarchical structure of boxes... According to the ISO family of file formats, a file includes media data and metadata that are encapsulated into boxes. Each box is identified by a four character code (4CC) and starts with a header which informs about the type and size of the box” Hannuksela, [0165]-[0166]). Therefore, combining Oh, Salahieh, and Hannuksela would meet the claim limitations for the same reasons as previously discussed in claim 7.

Regarding claim 15, Oh, Salahieh, and Hannuksela disclose the apparatus of claim 9, wherein the header information comprises one or more of an atlas frame delimiter, atlas tile delimiter, and patch delimiter (“A sub-picture sequence identifier included in a header included in a VCL NAL unit, such as a tile group header or a slice header and associated with the respective image segment (e.g. tile group or slice)... A sub-picture sequence identifier included in a sub-picture delimiter, a picture header, or alike syntax structure, which is implicitly referenced by coded video data.
A sub-picture delimiter may for example be a specific NAL unit that starts a new sub-picture. Implicit referencing may for example mean that the previous syntax structure (e.g. sub-picture delimiter or picture header) in decoding or bitstream order may be referenced” Hannuksela, [0296]). Therefore, combining Oh, Salahieh, and Hannuksela would meet the claim limitations for the same reasons as previously discussed in claim 7.

Regarding claim 16, Oh, Salahieh, and Hannuksela disclose the apparatus of claim 9, wherein the decoded atlas bitstream comprises an atlas frame size indicator followed by an atlas frame payload (“the phrase along the bitstream may be used when the bitstream is contained in a container file, such as a file conforming to the ISO Base Media File Format, and certain file metadata is stored in the file in a manner that associates the metadata to the bitstream, such as boxes in the sample entry for a track containing the bitstream, a sample group for the track containing the bitstream, or a timed metadata track associated with the track containing the bitstream” Hannuksela, [0154]; “A basic building block in the ISO base media file format is called a box. Each box has a header and a payload” Hannuksela, [0165]), wherein the atlas frame payload comprises one or more tile size indicators, each tile size indicator followed by a respective tile payload, and each respective tile payload comprises one or more patch size indicators, each patch size indicator followed by a respective patch payload, wherein the patch payload comprises a plurality of blocks in a raster scan order (“the box header indicates the type of the box and the size of the box in terms of bytes. A box may enclose other boxes, and the ISO file format specifies which box types are allowed within a box of a certain type. Furthermore, the presence of some boxes may be mandatory in each file, while the presence of other boxes may be optional.
Additionally, for some box types, it may be allowable to have more than one box present in a file. Thus, the ISO base media file format may be considered to specify a hierarchical structure of boxes... According to the ISO family of file formats, a file includes media data and metadata that are encapsulated into boxes. Each box is identified by a four character code (4CC) and starts with a header which informs about the type and size of the box” Hannuksela, [0165]-[0166]). Therefore, combining Oh, Salahieh, and Hannuksela would meet the claim limitations for the same reasons as previously discussed in claim 7.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

US 20220060529 A1: Point cloud data transmission device, point cloud data transmission method, point cloud data reception device, and point cloud data reception method.
WO 2021176133 A1: discloses a method and apparatus for volumetric video compression, including packing two or more components of the volumetric video content into separate atlases.

Inquiries

Any inquiry concerning this communication or earlier communications from the examiner should be directed to EMMANUEL SILVA-AVINA, whose telephone number is (571) 270-0729. The examiner can normally be reached Monday - Friday, 11 AM - 8 PM EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chineyere Wills-Burns, can be reached at (571) 272-9752. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center.
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/EMMANUEL SILVA-AVINA/
Examiner, Art Unit 2673

/CHINEYERE WILLS-BURNS/
Supervisory Patent Examiner, Art Unit 2673
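Claims 8 and 16 recite a hierarchically size-prefixed atlas bitstream: a frame size indicator followed by a frame payload, which carries tile size indicators each followed by a tile payload, which in turn carries patch size indicators each followed by a patch payload of blocks in raster-scan order. As a reading aid only, that nesting can be sketched with a hypothetical fixed-width length prefix; the function names, the 4-byte big-endian prefix, and the layout are illustrative assumptions, not the application's or the V3C standard's actual descriptor-coded syntax.

```python
import struct

def read_units(buf: bytes):
    """Split a byte string into size-prefixed units.

    Each unit is assumed (for illustration only) to be a 4-byte
    big-endian length followed by that many payload bytes.
    """
    units, pos = [], 0
    while pos < len(buf):
        (size,) = struct.unpack_from(">I", buf, pos)
        pos += 4
        units.append(buf[pos:pos + size])
        pos += size
    return units

def parse_atlas_frame(frame: bytes):
    """Illustrate the claimed nesting: frame -> tiles -> patches."""
    (frame_size,) = struct.unpack_from(">I", frame, 0)
    frame_payload = frame[4:4 + frame_size]
    tiles = []
    for tile_payload in read_units(frame_payload):  # tile size + tile payload
        # each patch payload would carry its blocks in raster-scan order
        tiles.append(read_units(tile_payload))      # patch size + patch payload
    return tiles

def unit(payload: bytes) -> bytes:
    """Prepend the hypothetical 4-byte length prefix to a payload."""
    return struct.pack(">I", len(payload)) + payload

# One frame holding one tile that holds two patches.
tile = unit(b"AB") + unit(b"CDE")
frame = unit(unit(tile))
assert parse_atlas_frame(frame) == [[b"AB", b"CDE"]]
```

The point of the sketch is only the containment order the claims recite: each level's size indicator tells the parser how far its payload extends before the next sibling begins.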
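The Hannuksela passages the rejection relies on describe the ISO Base Media File Format's box structure: each box begins with a header giving its size in bytes and its four-character code (4CC) type, and boxes may nest. A minimal sketch of walking such boxes follows; it deliberately ignores the 64-bit `largesize` (size == 1) and to-end-of-file (size == 0) cases that real ISO BMFF files may use, and the helper name is an assumption for illustration.

```python
import struct

def iter_boxes(data, offset=0, end=None):
    """Walk sibling ISO BMFF boxes: 4-byte big-endian size, then 4CC type.

    Simplified sketch: size == 1 (64-bit largesize) and size == 0
    (box extends to end of file) are not handled here.
    """
    end = len(data) if end is None else end
    while offset + 8 <= end:
        size, fourcc = struct.unpack_from(">I4s", data, offset)
        # the declared size covers the 8-byte header plus the payload
        yield fourcc.decode("ascii"), data[offset + 8:offset + size]
        offset += size

# A toy file: a 16-byte 'ftyp' box with an 8-byte payload, then an
# empty 8-byte 'mdat' box (header only).
toy = struct.pack(">I4s", 16, b"ftyp") + b"isom" + b"\x00\x00\x00\x01"
toy += struct.pack(">I4s", 8, b"mdat")
assert [t for t, _ in iter_boxes(toy)] == ["ftyp", "mdat"]
```

Because a box payload may itself contain boxes, the same iterator can be re-applied to a container box's payload, which is the hierarchical structure the examiner maps onto the claimed frame/tile/patch nesting.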

Prosecution Timeline

Jan 12, 2024
Application Filed
Feb 02, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597141
SYSTEM FOR OPTICAL DETERMINATION OF COEFFICIENT OF FRICTION FOR A SURFACE
2y 5m to grant Granted Apr 07, 2026
Patent 12591996
Visual Localization Method and Apparatus
2y 5m to grant Granted Mar 31, 2026
Patent 12586251
PATCH ZIPPERING FOR MESH COMPRESSION
2y 5m to grant Granted Mar 24, 2026
Patent 12586258
NON-ADVERSARIAL IMAGE GENERATION USING TRANSFER LEARNING
2y 5m to grant Granted Mar 24, 2026
Patent 12579679
System and Method for Identifying Feature in an Image of a Subject
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
82%
Grant Probability
86%
With Interview (+4.7%)
3y 1m
Median Time to Grant
Low
PTA Risk
Based on 66 resolved cases by this examiner. Grant probability derived from career allow rate.
