DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
Claims 1, 2, 4-12, and 14-20 are pending.
Claims 1, 4, 11, and 14 are amended.
Claims 3 and 13 are cancelled.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 2, 4-6, 9-12, 14-16, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Hannuksela et al. (CA 3143885) in view of Lee et al. (US 20210398323 A1) and Oh et al. (US 20210409767 A1).
Regarding claim 1:
Hannuksela teaches:
An apparatus comprising (Hannuksela [0007] Now in order to at least alleviate the above problems, an enhanced encoding method is introduced herein. In some embodiments there is provided a method, apparatus and computer program product for video coding and decoding.):
a communication interface including a buffer (Hannuksela [0085] A first bitstream may be followed by a second bitstream in the same logical channel, such as in the same file or in the same connection of a communication protocol. [0135] A Decoded Picture Buffer (DPB) may be used in the encoder and/or in the decoder. [0550] Thus, for example, embodiments of the invention may be implemented in a video codec which may implement video coding over fixed or wired communication paths.);
and a processor operably coupled to the communication interface, and configured to (Hannuksela [0021] An apparatus according to a second aspect comprises at least one processor and at least one memory including computer program code, the memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following):
receive, via the communication interface, a scene description for visual volumetric video-based coding (V3C) content, wherein the scene description indicates a media stream for a V3C atlas and media streams for V3C components (Hannuksela [0388] Moreover, in volumetric video coding several types of video, such as texture, geometry, occupancy, and different types attributes may be coded. …One or more 2D, 360-degree or volumetric video clips that overlay 360- degree background, which may be coded using sub-pictures for viewport-dependent delivery. [0262] Two-dimensional form of patches may be packed into one or more atlases. Texture atlases are known in the art, comprising an image consisting of sub-images, the image being treated as a single unit by graphics hardware and which can be compressed and transmitted as a single image for subsequent identification and decompression. Geometry atlases may be constructed similarly to texture atlases. Texture and geometry atlases may be treated as separate pictures (and as separate picture sequences in case of volumetric video), or texture and geometry atlases may be packed onto the same frame, e.g. similarly to how frame packing is conventionally performed. [0148] A sender, a gateway, or alike may select the transmitted layers and/or sub-layers of a scalable video bitstream, or likewise a receiver, a client, a player, or alike may request transmission of selected layers and/or sub-layers of a scalable video bitstream. [0247] A three-dimensional volumetric representation of a scene may be determined as a plurality of voxels on the basis of input streams of at least one multicamera device. Thus, at least one but preferably a plurality (i.e. 2, 3, 4, 5 or more) of multicamera devices may be used to capture 3D video representation of a scene. 
[0532] The above described embodiments provide a mechanism and an architecture to use core video (de)coding process and bitstream format in a versatile manner for many video-based purposes, including video-based point cloud coding, patch-based volumetric video coding, and 360-degree video coding with multiple projection surfaces. [0012] The entity extracts independently decodable picture region sequences from the bitstreams and makes them accessible individually in a media presentation description.)
receive, via the communication interface, a plurality of media streams of the V3C content (Hannuksela [0307] A decoder receives coded video data (e.g. a bitstream).);
and render the plurality of media streams based on the scene description for the V3C content (Hannuksela [0257] One patch culling module may be configured to determine which patches are transmitted to a user device, for example the rendering module of the headset.).
Hannuksela fails to teach:
wherein the V3C atlas is distinguishable from the V3C components;
and wherein the scene description includes a constraint indicating that the V3C components cannot be selected for processing until the V3C atlas is also selected for processing.
Lee teaches:
wherein the V3C atlas is distinguishable from the V3C components (Lee [0958] The V3C tile item may be an item for encapsulating atlas tile data when the V3C atlas data includes multiple atlas tiles.);
Oh teaches:
and wherein the scene description includes a constraint indicating that the V3C components cannot be selected for processing until the V3C atlas is also selected for processing (Oh [1167] According to embodiments, the alternate entity group may include a track or item containing an atlas bitstream. When one of the track or item is selected, the point cloud data may be decoded and reconstructed by extracting atlas data from the atlas bitstream track or item and extracting V3C component data from a V3C component track or item associated with the atlas track or item. In addition, depending on the condition of the player/decoder, network conditions, and the like, the entity may be changed to another entity, that is, another atlas track or item within the same alternate entity group. In this case, the data may be switched to a different version of the point cloud data by extracting the atlas data from the atlas track or item and extracting V3C component data from a V3C component track or item associated with the atlas track or item.);
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Hannuksela with Lee and Oh. Having a V3C atlas and selecting the atlas in order to select the components, as in Lee and Oh, would benefit the Hannuksela teachings by providing ways to store the V3C data. Additionally, this is the application of a known technique, using a V3C atlas, to yield predictable results.
Regarding claim 2:
Hannuksela, Lee, and Oh teach:
The apparatus of Claim 1,
wherein the scene description distinguishes the plurality of media streams based on the V3C atlas and the V3C components (Hannuksela [0140] In some coding formats and codecs, a distinction is made between so-called short-term and long-term reference pictures.).
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Hannuksela with Lee and Oh. Having a V3C atlas and selecting the atlas in order to select the components, as in Lee and Oh, would benefit the Hannuksela teachings by providing ways to store the V3C data. Additionally, this is the application of a known technique, using a V3C atlas, to yield predictable results.
Regarding claim 4:
Hannuksela, Lee, and Oh teach:
The apparatus of Claim 1,
wherein the scene description is a Moving Picture Experts Group (MPEG) media extension which lists items that are separately processed from other media items (Hannuksela [0218] MPEG Omnidirectional Media Format (ISO/IEC 23090-2) is a virtual reality (VR) system standard. [Figure 1]).
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Hannuksela with Lee and Oh. Having a V3C atlas and selecting the atlas in order to select the components, as in Lee and Oh, would benefit the Hannuksela teachings by providing ways to store the V3C data. Additionally, this is the application of a known technique, using a V3C atlas, to yield predictable results.
Regarding claim 5:
Hannuksela, Lee, and Oh teach:
The apparatus of Claim 4,
wherein the MPEG media extension includes: a media stream configured to indicate the V3C atlas; and at least one component stream configured to indicate the V3C components (Hannuksela [0265] In some cases, several versions of the one or more atlases are encoded.).
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Hannuksela with Lee and Oh. Having a V3C atlas and selecting the atlas in order to select the components, as in Lee and Oh, would benefit the Hannuksela teachings by providing ways to store the V3C data. Additionally, this is the application of a known technique, using a V3C atlas, to yield predictable results.
Regarding claim 6:
Hannuksela, Lee, and Oh teach:
The apparatus of Claim 5,
wherein the at least one component stream lists the V3C components as an array (Hannuksela [0067] A component may be defined as an array or single sample from one of the three sample arrays (luma and two chroma) or the array or a single sample of the array that compose a picture in monochrome format.).
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Hannuksela with Lee and Oh. Having a V3C atlas and selecting the atlas in order to select the components, as in Lee and Oh, would benefit the Hannuksela teachings by providing ways to store the V3C data. Additionally, this is the application of a known technique, using a V3C atlas, to yield predictable results.
Regarding claim 9:
Hannuksela, Lee, and Oh teach:
The apparatus of Claim 1,
wherein the V3C atlas and the V3C components are synchronously received (Hannuksela [0305] In some operating systems and/or device architectures, the player might not be able to pass metadata to the rendering process in picture-synchronized manner but rather only the video decoder might be capable of doing that. This might apply to any video (both non-encrypted and encrypted) or only for encrypted video.).
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Hannuksela with Lee and Oh. Having a V3C atlas and selecting the atlas in order to select the components, as in Lee and Oh, would benefit the Hannuksela teachings by providing ways to store the V3C data. Additionally, this is the application of a known technique, using a V3C atlas, to yield predictable results.
Regarding claim 10:
Hannuksela, Lee, and Oh teach:
The apparatus of Claim 1,
wherein the media streams are multiplexed and received through a single buffer (Hannuksela [0159] However, a constituent bitstream may also be used for other purposes; for example, a texture video bitstream and a depth video bitstream that are multiplexed into the same bitstream (e.g. as separate independent layers) may be regarded as constituent bitstreams.).
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Hannuksela with Lee and Oh. Having a V3C atlas and selecting the atlas in order to select the components, as in Lee and Oh, would benefit the Hannuksela teachings by providing ways to store the V3C data. Additionally, this is the application of a known technique, using a V3C atlas, to yield predictable results.
Regarding claim 11:
Hannuksela teaches:
A method for support of visual volumetric video-based coding (V3C) in immersive scene descriptions, the method comprising:
receiving a scene description for visual volumetric video-based coding (V3C) content, wherein the scene description indicates a media stream for a V3C atlas and media streams for V3C components (Hannuksela [0388] Moreover, in volumetric video coding several types of video, such as texture, geometry, occupancy, and different types attributes may be coded. …One or more 2D, 360-degree or volumetric video clips that overlay 360- degree background, which may be coded using sub-pictures for viewport-dependent delivery. [0262] Two-dimensional form of patches may be packed into one or more atlases. Texture atlases are known in the art, comprising an image consisting of sub-images, the image being treated as a single unit by graphics hardware and which can be compressed and transmitted as a single image for subsequent identification and decompression. Geometry atlases may be constructed similarly to texture atlases. Texture and geometry atlases may be treated as separate pictures (and as separate picture sequences in case of volumetric video), or texture and geometry atlases may be packed onto the same frame, e.g. similarly to how frame packing is conventionally performed. [0148] A sender, a gateway, or alike may select the transmitted layers and/or sub-layers of a scalable video bitstream, or likewise a receiver, a client, a player, or alike may request transmission of selected layers and/or sub-layers of a scalable video bitstream. [0247] A three-dimensional volumetric representation of a scene may be determined as a plurality of voxels on the basis of input streams of at least one multicamera device. Thus, at least one but preferably a plurality (i.e. 2, 3, 4, 5 or more) of multicamera devices may be used to capture 3D video representation of a scene. 
[0532] The above described embodiments provide a mechanism and an architecture to use core video (de)coding process and bitstream format in a versatile manner for many video-based purposes, including video-based point cloud coding, patch-based volumetric video coding, and 360-degree video coding with multiple projection surfaces.)
receiving a plurality of media streams of the V3C content (Hannuksela [0307] A decoder receives coded video data (e.g. a bitstream).);
and rendering the plurality of media streams based on the scene description for the V3C content (Hannuksela [0257] One patch culling module may be configured to determine which patches are transmitted to a user device, for example the rendering module of the headset.).
Hannuksela fails to teach:
wherein the V3C atlas is distinguishable from the V3C components;
and wherein the scene description includes a constraint indicating that the V3C components cannot be selected for processing until the V3C atlas is also selected for processing.
Lee teaches:
wherein the V3C atlas is distinguishable from the V3C components (Lee [0958] The V3C tile item may be an item for encapsulating atlas tile data when the V3C atlas data includes multiple atlas tiles.);
Oh teaches:
and wherein the scene description includes a constraint indicating that the V3C components cannot be selected for processing until the V3C atlas is also selected for processing (Oh [1167] According to embodiments, the alternate entity group may include a track or item containing an atlas bitstream. When one of the track or item is selected, the point cloud data may be decoded and reconstructed by extracting atlas data from the atlas bitstream track or item and extracting V3C component data from a V3C component track or item associated with the atlas track or item. In addition, depending on the condition of the player/decoder, network conditions, and the like, the entity may be changed to another entity, that is, another atlas track or item within the same alternate entity group. In this case, the data may be switched to a different version of the point cloud data by extracting the atlas data from the atlas track or item and extracting V3C component data from a V3C component track or item associated with the atlas track or item.);
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Hannuksela with Lee and Oh. Having a V3C atlas and selecting the atlas in order to select the components, as in Lee and Oh, would benefit the Hannuksela teachings by providing ways to store the V3C data. Additionally, this is the application of a known technique, using a V3C atlas, to yield predictable results.
Regarding claim 12:
Hannuksela, Lee, and Oh teach:
The method of Claim 11,
wherein the scene description distinguishes the plurality of media streams based on the V3C atlas and the V3C components (Hannuksela [0140] In some coding formats and codecs, a distinction is made between so-called short-term and long-term reference pictures.).
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Hannuksela with Lee and Oh. Having a V3C atlas and selecting the atlas in order to select the components, as in Lee and Oh, would benefit the Hannuksela teachings by providing ways to store the V3C data. Additionally, this is the application of a known technique, using a V3C atlas, to yield predictable results.
Regarding claim 14:
Hannuksela, Lee, and Oh teach:
The method of Claim 11,
wherein the scene description is a Moving Picture Experts Group (MPEG) media extension which lists items that are separately processed from other media items (Hannuksela [0218] MPEG Omnidirectional Media Format (ISO/IEC 23090-2) is a virtual reality (VR) system standard. [Figure 1]).
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Hannuksela with Lee and Oh. Having a V3C atlas and selecting the atlas in order to select the components, as in Lee and Oh, would benefit the Hannuksela teachings by providing ways to store the V3C data. Additionally, this is the application of a known technique, using a V3C atlas, to yield predictable results.
Regarding claim 15:
Hannuksela, Lee, and Oh teach:
The method of Claim 14,
wherein the MPEG media extension includes: a media stream configured to indicate the V3C atlas; and at least one component stream configured to indicate the V3C components (Hannuksela [0265] In some cases, several versions of the one or more atlases are encoded.).
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Hannuksela with Lee and Oh. Having a V3C atlas and selecting the atlas in order to select the components, as in Lee and Oh, would benefit the Hannuksela teachings by providing ways to store the V3C data. Additionally, this is the application of a known technique, using a V3C atlas, to yield predictable results.
Regarding claim 16:
Hannuksela, Lee, and Oh teach:
The method of Claim 15,
wherein the at least one component stream lists the V3C components as an array (Hannuksela [0067] A component may be defined as an array or single sample from one of the three sample arrays (luma and two chroma) or the array or a single sample of the array that compose a picture in monochrome format.).
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Hannuksela with Lee and Oh. Having a V3C atlas and selecting the atlas in order to select the components, as in Lee and Oh, would benefit the Hannuksela teachings by providing ways to store the V3C data. Additionally, this is the application of a known technique, using a V3C atlas, to yield predictable results.
Regarding claim 19:
Hannuksela, Lee, and Oh teach:
The method of Claim 11,
wherein the V3C atlas and the V3C components are synchronously received (Hannuksela [0305] In some operating systems and/or device architectures, the player might not be able to pass metadata to the rendering process in picture-synchronized manner but rather only the video decoder might be capable of doing that. This might apply to any video (both non-encrypted and encrypted) or only for encrypted video.).
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Hannuksela with Lee and Oh. Having a V3C atlas and selecting the atlas in order to select the components, as in Lee and Oh, would benefit the Hannuksela teachings by providing ways to store the V3C data. Additionally, this is the application of a known technique, using a V3C atlas, to yield predictable results.
Regarding claim 20:
Hannuksela, Lee, and Oh teach:
The method of Claim 11,
wherein the media streams are multiplexed and received through a single buffer (Hannuksela [0159] However, a constituent bitstream may also be used for other purposes; for example, a texture video bitstream and a depth video bitstream that are multiplexed into the same bitstream (e.g. as separate independent layers) may be regarded as constituent bitstreams.).
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Hannuksela with Lee and Oh. Having a V3C atlas and selecting the atlas in order to select the components, as in Lee and Oh, would benefit the Hannuksela teachings by providing ways to store the V3C data. Additionally, this is the application of a known technique, using a V3C atlas, to yield predictable results.
Claims 7 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Hannuksela et al. (CA 3143885) in view of Lee et al. (US 20210398323 A1), Oh et al. (US 20210409767 A1), and Rao et al. (US 20180262779 A1).
Regarding claim 7:
Hannuksela, Oh and Lee teach:
The apparatus of claim 5,
Hannuksela, Oh and Lee fail to teach:
wherein the at least one component stream is an MPEG media compound extension which indicates alternatives for the V3C components (Rao [0018] In conventional MPEG4 video encoding, the encoded content is not able to be played back until the entire file is completely encoded and the index data is finalized. Finalization typically involves writing an index or the like that identifies locations of the various video frames and other portions of the encoded data so that the media player can locate appropriate data for playback, trick play, etc.).
Rao teaches:
wherein the at least one component stream is an MPEG media compound extension which indicates alternatives for the V3C components (Rao [0018] In conventional MPEG4 video encoding, the encoded content is not able to be played back until the entire file is completely encoded and the index data is finalized. Finalization typically involves writing an index or the like that identifies locations of the various video frames and other portions of the encoded data so that the media player can locate appropriate data for playback, trick play, etc.).
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Hannuksela, Oh, and Lee with Rao. Having an MPEG media compound (in this case, MPEG-4), as in Rao, would benefit the Hannuksela, Oh, and Lee teachings by allowing an MPEG media compound to be used. Additionally, this is the application of a known technique, using an MPEG media compound, to yield predictable results.
Regarding claim 17:
Hannuksela, Oh and Lee teach:
The method of claim 15,
Hannuksela, Oh and Lee fail to teach:
wherein the at least one component stream is an MPEG media compound extension which indicates alternatives for the V3C components (Rao [0018] In conventional MPEG4 video encoding, the encoded content is not able to be played back until the entire file is completely encoded and the index data is finalized. Finalization typically involves writing an index or the like that identifies locations of the various video frames and other portions of the encoded data so that the media player can locate appropriate data for playback, trick play, etc.).
Rao teaches:
wherein the at least one component stream is an MPEG media compound extension which indicates alternatives for the V3C components (Rao [0018] In conventional MPEG4 video encoding, the encoded content is not able to be played back until the entire file is completely encoded and the index data is finalized. Finalization typically involves writing an index or the like that identifies locations of the various video frames and other portions of the encoded data so that the media player can locate appropriate data for playback, trick play, etc.).
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Hannuksela, Oh, and Lee with Rao. Having an MPEG media compound (in this case, MPEG-4), as in Rao, would benefit the Hannuksela, Oh, and Lee teachings by allowing an MPEG media compound to be used. Additionally, this is the application of a known technique, using an MPEG media compound, to yield predictable results.
Claims 8 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Hannuksela et al. (CA 3143885) in view of Lee et al. (US 20210398323 A1), Oh et al. (US 20210409767 A1), Rao et al. (US 20180262779 A1), and Kirk et al. (US 20200279385 A1).
Regarding claim 8:
Hannuksela, Oh, Lee, and Rao teach:
The apparatus of Claim 7,
and an alternatives item indicating the alternatives for the V3C components (Rao [0018] In conventional MPEG4 video encoding, the encoded content is not able to be played back until the entire file is completely encoded and the index data is finalized. Finalization typically involves writing an index or the like that identifies locations of the various video frames and other portions of the encoded data so that the media player can locate appropriate data for playback, trick play, etc.).
Hannuksela, Oh, Lee, and Rao fail to teach:
wherein the MPEG media compound extension includes: a reference media item including an index of the V3C content for the V3C components (Kirk [0061] In some embodiments, the volumetric video analytics server 114 determines the visibility by: (i) generating an index map that assigns a unique color to each valid pixel associated with each frame of the 3D content in the visibility texture atlas, (ii) rendering an image, e.g., the image of a product, such as a shoe, a bag, etc., associated with the 3D content, with the index map including the unique color to each valid pixel based on the viewer telemetry data and an index texture map to obtain an index rendered image);
Kirk teaches:
wherein the MPEG media compound extension includes: a reference media item including an index of the V3C content for the V3C components (Kirk [0061] In some embodiments, the volumetric video analytics server 114 determines the visibility by: (i) generating an index map that assigns a unique color to each valid pixel associated with each frame of the 3D content in the visibility texture atlas, (ii) rendering an image, e.g., the image of a product, such as a shoe, a bag, etc., associated with the 3D content, with the index map including the unique color to each valid pixel based on the viewer telemetry data and an index texture map to obtain an index rendered image);
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Hannuksela, Oh, Lee, and Rao with Kirk. Having an index of the content, as in Kirk, would benefit the Hannuksela, Oh, Lee, and Rao teachings by allowing different portions of the content to be accessed. Additionally, this is the application of a known technique, having an index, to yield predictable results.
Regarding claim 18:
Hannuksela, Oh, Lee, and Rao teach:
The method of Claim 17,
and an alternatives item indicating the alternatives for the V3C components (Rao [0018] In conventional MPEG4 video encoding, the encoded content is not able to be played back until the entire file is completely encoded and the index data is finalized. Finalization typically involves writing an index or the like that identifies locations of the various video frames and other portions of the encoded data so that the media player can locate appropriate data for playback, trick play, etc.).
Hannuksela, Oh, Lee, and Rao fail to teach:
MPEG media compound extension includes: a reference media item including an index of the V3C content for the V3C components.
Kirk teaches:
MPEG media compound extension includes: a reference media item including an index of the V3C content for the V3C components (Kirk [0061] In some embodiments, the volumetric video analytics server 114 determines the visibility by: (i) generating an index map that assigns a unique color to each valid pixel associated with each frame of the 3D content in the visibility texture atlas, (ii) rendering an image, e.g., the image of a product, such as a shoe, a bag, etc., associated with the 3D content, with the index map including the unique color to each valid pixel based on the viewer telemetry data and an index texture map to obtain an index rendered image);
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Hannuksela, Oh, Lee, and Rao with Kirk. Having an index of the content, as in Kirk, would benefit the Hannuksela, Oh, Lee, and Rao teachings by allowing different portions of the content to be accessed. Additionally, this is the application of a known technique, having an index, to yield predictable results.
Response to Arguments
Applicant's arguments filed 11/18/2025 have been fully considered but they are not persuasive.
Claims 1, 4, 11, and 14 have been amended. The applicant alleges that Hannuksela and Lee fail to teach the following limitation: “and wherein the scene description includes a constraint indicating that the V3C components cannot be selected for processing until the V3C atlas is also selected for processing.”
While not explicitly stated, one of ordinary skill in the art would clearly understand that when selecting a component of an atlas, one would have to select the entire atlas, as also taught by Oh [1167]. Therefore, to advance prosecution, Oh has been added to the above 35 U.S.C. 103 rejection.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DENIS VASILIY MINKO whose telephone number is (571)270-5226. The examiner can normally be reached Monday-Thursday 8:30-6:00 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Said Broome, can be reached at 571-272-2931. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DENIS VASILIY MINKO/Examiner, Art Unit 2612
/Said Broome/Supervisory Patent Examiner, Art Unit 2612