Prosecution Insights
Last updated: April 19, 2026
Application No. 18/855,296

PROVIDING SEGMENTATION INFORMATION FOR IMMERSIVE VIDEO

Status: Non-Final OA (§102)
Filed: Oct 08, 2024
Examiner: ALCON, FERNANDO
Art Unit: 2425
Tech Center: 2400 (Computer Networks)
Assignee: InterDigital CE Patent Holdings SAS
OA Round: 1 (Non-Final)
Grant Probability: 73% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 5m
With Interview: 82%

Examiner Intelligence

Career Allow Rate: 73% (above average; 529 granted / 725 resolved; +15.0% vs TC avg)
Interview Lift: +8.9% (moderate), measured across resolved cases with interview
Typical Timeline: 2y 5m average prosecution; 20 applications currently pending
Career History: 745 total applications across all art units
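
The headline figures above can be reproduced from the raw counts. Below is a minimal sketch (Python) that does so; it assumes the career allow rate is simply granted divided by resolved, and that resolved cases equal total applications minus those currently pending. These formulas are assumptions about how the dashboard derives its numbers, not documented definitions.

```python
# Sanity check of the Examiner Intelligence figures from the raw counts shown
# above. Assumed formulas (not documented by the dashboard):
#   allow rate = granted / resolved
#   resolved   = total applications - currently pending

granted = 529
resolved = 725
total_applications = 745
currently_pending = 20

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")   # -> 73.0%, matching the 73% shown

# The career totals are internally consistent: 745 - 20 = 725 resolved.
assert total_applications - currently_pending == resolved
```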

Statute-Specific Performance

§101: 5.5% (-34.5% vs TC avg)
§103: 58.0% (+18.0% vs TC avg)
§102: 15.0% (-25.0% vs TC avg)
§112: 9.6% (-30.4% vs TC avg)
"vs TC avg" compares against a Tech Center average estimate • Based on career data from 725 resolved cases

Office Action

§102
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 1, 6, 19, and 23 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Ilola et al. (WO 2021/191500).

Regarding claim 1, Ilola discloses a method comprising: obtaining a plurality of source views of a scene (See [0052] scene model may comprise a number of source volumes; see also [0088]); for at least one of the source views, obtaining segmentation information associating each of a plurality of regions of the source view with a respective entity (See [0084] 3D scene is segmented into a number of regions; See [0097] "In MIV, an atlas is used to store a number of patches representing a limited predetermined viewing volume (sub-viewing volume), from where the viewport of the scene may be rendered". Under a broadest reasonable interpretation of the claim language, an entity could be read as the viewport of a given scene); encoding the plurality of source views as an immersive video (See [0053] MIV MPEG) comprising a plurality of patches, the patches being segmented according to the segmentation information (See [0084-0085] segmented regions are projected into 2D patches; See [0100-0101] all patches required to render the point cloud from any spatial position and orientation are packed into the same atlas); and encoding information indicating which of the source views are associated with the segmentation information used to segment the patches (See [0100-0101] the metadata used to render the immersive video indicates the atlas patches for rendering the scene).

Regarding claim 6, Ilola discloses a method comprising: obtaining an encoded immersive video comprising a plurality of patches, the video representing a plurality of input views of a scene (See [0052] scene model may comprise a number of source volumes; see also [0088]); obtaining information indicating which of the input views are associated with segmentation information used to segment the patches (See [0084-0085] segmented regions are projected into 2D patches; See [0100-0101] all patches required to render the point cloud from any spatial position and orientation are packed into the same atlas); and rendering the immersive video according to the information indicating which of the input views are associated with segmentation information used to segment the patches (See [0100-0101] the metadata used to render the immersive video indicates the atlas patches for rendering the scene).
Regarding claim 19, Ilola further discloses an apparatus comprising one or more processors configured to perform: obtaining a plurality of source views of a scene (See [0052] scene model may comprise a number of source volumes; see also [0088]); for at least one of the source views, obtaining segmentation information associating each of a plurality of regions of the source view with a respective entity (See [0084] 3D scene is segmented into a number of regions; See [0097] "In MIV, an atlas is used to store a number of patches representing a limited predetermined viewing volume (sub-viewing volume), from where the viewport of the scene may be rendered". Under a broadest reasonable interpretation of the claim language, an entity could be read as the viewport of a given scene); encoding the plurality of source views as an immersive video (See [0053] MIV MPEG) comprising a plurality of patches, the patches being segmented according to the segmentation information (See [0084-0085] segmented regions are projected into 2D patches; See [0100-0101] all patches required to render the point cloud from any spatial position and orientation are packed into the same atlas); and encoding information indicating which of the source views are associated with the segmentation information used to segment the patches (See [0100-0101] the metadata used to render the immersive video indicates the atlas patches for rendering the scene).

Regarding claim 23, Ilola discloses an apparatus comprising one or more processors configured to perform: obtaining an encoded immersive video comprising a plurality of patches, the video representing a plurality of input views of a scene (See [0052] scene model may comprise a number of source volumes; see also [0088]); obtaining information indicating which of the input views are associated with segmentation information used to segment the patches (See [0084-0085] segmented regions are projected into 2D patches; See [0100-0101] all patches required to render the point cloud from any spatial position and orientation are packed into the same atlas); and rendering the immersive video according to the information indicating which of the input views are associated with segmentation information used to segment the patches (See [0100-0101] the metadata used to render the immersive video indicates the atlas patches for rendering the scene).

Claim(s) 1, 3-6, 8-12, and 19-28 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Dore et al. (WO 2021/122881 A1). The referenced paragraphs are cited with respect to the same disclosure in U.S. child application Dore et al. (US 2023/0042874).
Regarding claim 1, Dore discloses a method comprising: obtaining a plurality of source views of a scene (Fig 2, Fig 6 and [0045-0057] sequence of 3D scenes; [0010] generating a set of first patches from a Multiview plus depth content for rendering of a 3D scene); for at least one of the source views, obtaining segmentation information associating each of a plurality of regions of the source view with a respective entity (See [0095] entity id attaches group of patches to an index); encoding the plurality of source views as an immersive video comprising a plurality of patches, the patches being segmented according to the segmentation information (See [0095-0104] entity id allowing attaching a group of patches to an index for high level semantic processing such as object filtering or compositing); and encoding information indicating which of the source views are associated with the segmentation information used to segment the patches (See [0095-0104] metadata describing atlases indicating patch height, patch width, patch pos, etc.).

Regarding claim 3, Dore further discloses the method of claim 1, wherein the information indicating which of the source views are associated with the segmentation information used to segment the patches comprises, for each source view, a flag indicating whether that input view is associated with segmentation information used to segment the patches (See [0101] auxiliary patch indicates that a patch p is not for the viewport rendering).

Regarding claim 4, Dore discloses the method of claim 1, wherein the segmentation information associated with a source view comprises an entity map associated with the source view (See [0089] [0095] patch data comprises a reference to projection data (e.g. an index in a table of projection data or a pointer)).

Regarding claim 5, Dore discloses the method of claim 1, wherein the information indicating which of the source views are associated with the segmentation information used to segment the patches comprises information indicating whether the segmentation information is based on a depth image or on a texture image (See [0088] patches may include identical layout, one for texture and one for depth information; [0048-0049] [0085]).

Regarding claim 6, Dore discloses a method comprising: obtaining an encoded immersive video comprising a plurality of patches, the video representing a plurality of input views of a scene (Fig 2, Fig 6 and [0045-0057] sequence of 3D scenes; [0010] generating a set of first patches from a Multiview plus depth content for rendering of a 3D scene); obtaining information indicating which of the input views are associated with segmentation information used to segment the patches (See [0095-0104] entity id allowing attaching a group of patches to an index for high level semantic processing such as object filtering or compositing); and rendering the immersive video according to the information indicating which of the input views are associated with segmentation information used to segment the patches (See [0095-0104] metadata describing atlases indicating patch height, patch width, patch pos, etc.).
Regarding claim 8, Dore further discloses the method of claim 6, wherein the information indicating which of the source views are associated with the segmentation information used to segment the patches comprises, for each source view, a flag indicating whether that input view is associated with segmentation information used to segment the patches (See [0101] auxiliary patch indicates that a patch p is not for the viewport rendering).

Regarding claim 9, Dore further discloses the method of claim 6, wherein the information indicating which of the source views are associated with the segmentation information used to segment the patches comprises information indicating whether the segmentation information is based on a depth image or on a texture image (See [0088] patches may include identical layout, one for texture and one for depth information; [0048-0049] [0085]).

Regarding claim 10, Dore further discloses the method of claim 6, wherein rendering the immersive video according to the information indicating which of the input views are associated with segmentation information used to segment the patches comprises: identifying at least one selected entity to be rendered (See [0095-0105] determining whether entity is for viewport rendering); and in response to a determination that the segmentation information is based on a depth image, performing warping of depth pixels of the immersive video only for depth patches that are associated with the at least one selected entity (See Fig 5 and [0087-0090] depth image; See Fig 9 [0062-0063] [0105] [0090] using the depth information to render the 3D scene; [0091-0093] [0085-0087] projecting the patches on a 3D scene using spherical mapping).

Regarding claim 11, Dore further discloses the method of claim 6 or the apparatus of claim 7, wherein rendering the immersive video according to the information indicating which of the input views are associated with segmentation information used to segment the patches comprises: identifying at least one selected entity to be rendered; and in response to a determination that the segmentation information is based on a texture image: performing warping of depth pixels of the immersive video for depth patches including at least depth patches that are associated with the at least one selected entity; and performing blending of color values based only on color pixels that are associated with the at least one selected entity (See [0091] the patch consists of pairs of texture and depth. See [0094] texture layout. See [0041] color component expressed as RGB or YUV. [0048] Frames colors. See [0088] texture i.e., color information. [0091-0093] [0085-0087] projecting the patches on a 3D scene using spherical mapping).

Regarding claim 12, Dore further discloses the method of claim 6, wherein rendering the immersive video according to the information indicating which of the input views are associated with segmentation information used to segment the patches comprises: in response to a determination that the segmentation information is based on a depth image, making a visibility determination based at least in part on the segmentation information (See [0095-0103] the auxiliary patch flag is used to determine whether a patch comprises information for the rendering and/or for another module).
Regarding claim 19, Dore further discloses an apparatus comprising one or more processors configured to perform: obtaining a plurality of source views of a scene (Fig 2, Fig 6 and [0045-0057] sequence of 3D scenes; [0010] generating a set of first patches from a Multiview plus depth content for rendering of a 3D scene); for at least one of the source views, obtaining segmentation information associating each of a plurality of regions of the source view with a respective entity (See [0095] entity id attaches group of patches to an index); encoding the plurality of source views as an immersive video comprising a plurality of patches, the patches being segmented according to the segmentation information (See [0095-0104] entity id allowing attaching a group of patches to an index for high level semantic processing such as object filtering or compositing); and encoding information indicating which of the source views are associated with the segmentation information used to segment the patches (See [0095-0104] metadata describing atlases indicating patch height, patch width, patch pos, etc.).

Regarding claim 20, Dore further discloses the apparatus of claim 19, wherein the information indicating which of the source views are associated with the segmentation information used to segment the patches comprises, for each source view, a flag indicating whether that input view is associated with segmentation information used to segment the patches (See [0101] auxiliary patch indicates that a patch p is not for the viewport rendering).

Regarding claim 21, Dore further discloses the apparatus of claim 19, wherein the segmentation information associated with a source view comprises an entity map associated with the source view (See [0089] [0095] patch data comprises a reference to projection data (e.g. an index in a table of projection data or a pointer)).

Regarding claim 22, Dore further discloses the apparatus of claim 19, wherein the information indicating which of the source views are associated with the segmentation information used to segment the patches comprises information indicating whether the segmentation information is based on a depth image or on a texture image (See [0088] patches may include identical layout, one for texture and one for depth information; [0048-0049] [0085]).

Regarding claim 23, Dore discloses an apparatus comprising one or more processors configured to perform: obtaining an encoded immersive video comprising a plurality of patches, the video representing a plurality of input views of a scene (Fig 2, Fig 6 and [0045-0057] sequence of 3D scenes; [0010] generating a set of first patches from a Multiview plus depth content for rendering of a 3D scene); obtaining information indicating which of the input views are associated with segmentation information used to segment the patches (See [0095-0104] entity id allowing attaching a group of patches to an index for high level semantic processing such as object filtering or compositing); and rendering the immersive video according to the information indicating which of the input views are associated with segmentation information used to segment the patches (See [0095-0104] metadata describing atlases indicating patch height, patch width, patch pos, etc.).
Regarding claim 24, Dore discloses the apparatus of claim 23, wherein the information indicating which of the source views are associated with the segmentation information used to segment the patches comprises, for each source view, a flag indicating whether that input view is associated with segmentation information used to segment the patches (See [0101] auxiliary patch indicates that a patch p is not for the viewport rendering).

Regarding claim 25, Dore discloses the apparatus of claim 23, wherein the information indicating which of the source views are associated with the segmentation information used to segment the patches comprises information indicating whether the segmentation information is based on a depth image or on a texture image (See [0088] patches may include identical layout, one for texture and one for depth information; [0048-0049] [0085]).

Regarding claim 26, Dore discloses the apparatus of claim 23, wherein rendering the immersive video according to the information indicating which of the input views are associated with segmentation information used to segment the patches comprises: identifying at least one selected entity to be rendered; and in response to a determination that the segmentation information is based on a depth image, performing warping of depth pixels of the immersive video only for depth patches that are associated with the at least one selected entity (See [0091] the patch consists of pairs of texture and depth. See [0094] texture layout. See [0041] color component expressed as RGB or YUV. [0048] Frames colors. See [0088] texture i.e., color information. [0091-0093] [0085-0087] projecting the patches on a 3D scene using spherical mapping).

Regarding claim 27, Dore further discloses the apparatus of claim 23, wherein rendering the immersive video according to the information indicating which of the input views are associated with segmentation information used to segment the patches comprises: identifying at least one selected entity to be rendered (See [0095-0105] determining whether entity is for viewport rendering); and in response to a determination that the segmentation information is based on a texture image: performing warping of depth pixels of the immersive video for depth patches including at least depth patches that are associated with the at least one selected entity; and performing blending of color values based only on color pixels that are associated with the at least one selected entity (See Fig 5 and [0087-0090] depth image; See Fig 9 [0062-0063] [0105] [0090] using the depth information to render the 3D scene; [0091-0093] [0085-0087] projecting the patches on a 3D scene using spherical mapping).

Regarding claim 28, Dore further discloses the apparatus of claim 23, wherein rendering the immersive video according to the information indicating which of the input views are associated with segmentation information used to segment the patches comprises: in response to a determination that the segmentation information is based on a depth image, making a visibility determination based at least in part on the segmentation information (See [0095-0103] the auxiliary patch flag is used to determine whether a patch comprises information for the rendering and/or for another module).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to FERNANDO ALCON whose telephone number is (571)270-5668. The examiner can normally be reached Monday-Friday, 9:00am-7:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Brian Pendleton, can be reached at (571)272-7527. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

FERNANDO ALCON
Examiner, Art Unit 2425

/FERNANDO ALCON/
Primary Examiner, Art Unit 2425

Prosecution Timeline

Oct 08, 2024: Application Filed
Jan 13, 2026: Non-Final Rejection, §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597166: INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND RECORDING MEDIUM (granted Apr 07, 2026; 2y 5m to grant)
Patent 12588580: RESIDUE MONITORING AND RESIDUE-BASED CONTROL (granted Mar 31, 2026; 2y 5m to grant)
Patent 12581154: METHOD, DEVICE, STORAGE MEDIUM AND PROGRAM PRODUCT FOR VIDEO INFORMATION DISPLAY (granted Mar 17, 2026; 2y 5m to grant)
Patent 12574601: PROGRAM RECEIVING DISPLAY DEVICE AND PROGRAM RECEIVING DISPLAY CONTROL METHOD (granted Mar 10, 2026; 2y 5m to grant)
Patent 12574594: SYSTEMS AND METHODS FOR CONTROLLING MEDIA CONTENT BASED ON USER PRESENCE (granted Mar 10, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 73%
With Interview: 82% (+8.9%)
Median Time to Grant: 2y 5m
PTA Risk: Low
Based on 725 resolved cases by this examiner. Grant probability derived from career allow rate.
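
The projection figures track the career statistics directly. The sketch below reproduces them under two assumptions: the grant probability is taken as the career allow rate (529/725), and the +8.9% interview lift is applied as additive percentage points rather than as a relative increase, which is what matches the 82% shown.

```python
# Hypothetical reconstruction of the Prosecution Projections figures.
# Assumptions (not documented by the dashboard): grant probability equals the
# career allow rate, and the interview lift is added in percentage points.

career_allow_rate = 529 / 725      # ~0.73
interview_lift_pp = 0.089          # +8.9 percentage points

grant_probability = career_allow_rate
with_interview = grant_probability + interview_lift_pp

print(f"Grant probability: {grant_probability:.0%}")  # -> 73%
print(f"With interview:    {with_interview:.0%}")     # -> 82%
```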
