Prosecution Insights
Last updated: April 19, 2026
Application No. 18/719,969

METHODS AND DEVICES FOR PROGRESSIVE ENCODING AND DECODING OF MULTIPLANE IMAGES

Non-Final OA (§103, §112)
Filed: Jun 14, 2024
Examiner: JIANG, ZAIHAN
Art Unit: 2488
Tech Center: 2400 — Computer Networks
Assignee: InterDigital CE Patent Holdings, SAS
OA Round: 1 (Non-Final)

Grant Probability: 83% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 6m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 83% (above average; +25.1% vs TC avg) • 520 granted / 626 resolved
Interview Lift: +25.1% among resolved cases with interview (strong)
Typical Timeline: 2y 6m avg prosecution • 32 currently pending
Career History: 658 total applications across all art units
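As a rough sanity check, the headline figures above follow directly from the career counts on this page. The sketch below shows one way the tool could derive them; the counts are from this dashboard, while the formulas and the Tech Center baseline (back-solved from 83% minus 25.1%, about 58%) are assumptions, not the tool's actual methodology.

```python
# Counts shown on this page for examiner JIANG, ZAIHAN (Art Unit 2488).
granted = 520          # career grants
resolved = 626         # career resolved cases

# Hypothetical TC 2400 baseline, inferred from 83% - 25.1% (not published here).
tc_avg_allow = 0.580

allow_rate = granted / resolved          # -> "83% Career Allow Rate"
vs_tc = allow_rate - tc_avg_allow        # -> "+25.1% vs TC avg"

print(f"Career allow rate: {allow_rate:.1%}")   # Career allow rate: 83.1%
print(f"Delta vs TC average: {vs_tc:+.1%}")     # Delta vs TC average: +25.1%
```

The dashboard rounds 83.1% down to 83%; the delta matches the displayed +25.1% under this assumed baseline.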

Statute-Specific Performance

§101: 5.1% (-34.9% vs TC avg)
§103: 49.5% (+9.5% vs TC avg)
§102: 13.2% (-26.8% vs TC avg)
§112: 21.0% (-19.0% vs TC avg)
Tech Center averages are estimates • Based on career data from 626 resolved cases

Office Action

§103 §112
DETAILED ACTION

This Office Action is in response to the Response to Election filed 09/10/2025. Application 18/719,969 was filed on 06/14/2024. Claims 1-6 and 7-12 are elected.

Election/Restrictions

2. Applicant's election with traverse, in the reply filed 09/10/2025 to the restriction requirement mailed 07/15/2025, is acknowledged. The traversal is on the ground(s) that "The applicant believes that all of the claims are directed to one invention and that the restriction requirement is therefore improper. The present disclosure relates to methods and apparatus for encoding and decoding a three-dimensional scene. Claims 13 and 18 as amended describe a method and a device for encoding the 3D scene. A Multi Plane Image (MPI, that consists in layers with a transparency component located at a given depth) representative of the 3D scene is obtained. Patch pictures are extracted from the layers and packed in tiles. The tiles may, for example, be packed in an atlas image. Metadata are generated to allow a decoder to retrieve the MPI from the tiles."

This is not found persuasive because, as stated in the restriction requirement of 07/15/2025, pages 1-2:

• Group I, Claims 1-6 and 7-12, drawn to a method/device for obtaining metadata and tiles comprising images representative of a part of a three-dimensional (3D) scene, wherein the metadata comprise information respectively associating each tile with one depth value; generating a viewport image for a point of view in the 3D scene by decoding tiles and blending tiles in the viewport image in a monotonic order of the tile numbers; classified in H04N 19/119.

• Group II, Claims 13-17 and 18-22, drawn to a method/device for obtaining a multiplane image representative of a part of a three-dimensional (3D) scene, wherein pixels of a layer have a transparency component; for each layer of the multiplane image, splitting the layer in patch pictures based on the transparency component; encoding the tiles and the metadata in a data stream; classified in H04N 19/46.

Invention I (Claims 1-6, 7-12) and Invention II (Claims 13-17, 18-22) are independent and distinct and are classified in different classes, even with the 09/10/2025 amendment. For example, Invention I has the limitation of "generating a viewport image for a point of view in the 3D scene by decoding tiles as layers according to the metadata and blending the layers in the viewport image in a monotonic order of the tile numbers," which Invention II does not have; and Invention II has the limitation of "obtaining a multiplane image representative of a part of a three dimensional (3D) scene, wherein a layer has a depth, and wherein pixels of a layer have a transparency component; for each layer of the multiplane image, splitting the layer in patch pictures based on the transparency component; packing the patch pictures in a tile," which Invention I does not have.

There would be a serious search and/or examination burden if restriction were not required because one or more of the following reasons apply:
i. The inventions have acquired a separate status in the art in view of their different classification.
ii. The inventions have acquired a separate status in the art due to their recognized divergent subject matter.
iii. The inventions require a different field of search (for example, searching different classes/subclasses or electronic resources, or employing different search queries).
iv. The prior art applicable to one invention would not likely be applicable to another invention.
v. The inventions are likely to raise different non-prior-art issues under 35 U.S.C. 101 and/or 35 U.S.C. 112, first paragraph.

The requirement is still deemed proper and is therefore made FINAL.

Priority

3. Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application No. 18719969, filed on 06/14/2024.

Priority No. 21306825.7 • Filing Date 2021-12-17 • Country EP

Notice of Pre-AIA or AIA Status

4. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

5. The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

6. Claim 1 and its dependent claims 2-6 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA, the applicant) regards as the invention. Claim 1 recites "the tile numbers" in "generating a viewport image for a point of view in the 3D scene by decoding tiles and blending tiles in the viewport image in a monotonic order of the tile numbers." It is not clear whether "the tile numbers" are the tile numbers of the tiles in the viewport or other tile numbers: the claim earlier recites "wherein each tile has a tile number determined as a monotonic function of the depth value of the tile," where the tiles, recited as "packing patch pictures representative of a part of a three dimensional (3D) scene," are for multiple images (pictures), not only for the viewport image. The scope of claim 1 and its dependent claims 2-6 is therefore unclear.

7. Claim 7 and its dependent claims 8-12 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for the same reason: claim 7 recites "the tile numbers" in "generating a viewport image for a point of view in the 3D scene by decoding and blending tiles in the viewport image in a monotonic order of the tile numbers," and it is not clear whether these are the tile numbers of the tiles in the viewport or other tile numbers, for the reasons given above for claim 1. The scope of claim 7 and its dependent claims 8-12 is therefore unclear.

8. Claim 3 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite. Claim 3 recites "atlases" and "a given atlas" in "wherein generating the viewport image is performed by decoding and blending atlases in a monotonic order of the atlas numbers and, for tiles packed in a given atlas, in a monotonic order of the tile numbers." It is not clear whether "atlases" and "a given atlas" refer to the previously recited "atlas images" or to something else. Further, since the claim previously recites that "the tiles are grouped in at least two atlas images," if the atlases are first decoded and blended in a monotonic order of the atlas numbers, it is unclear how the tiles packed in a given atlas can then be decoded and blended in a monotonic order of the tile numbers, given that the tiles are already grouped in atlas images and those atlas images have already been decoded and blended. The scope of the claim is therefore unclear.

9. Claim 9 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for reasons similar to those given for claim 3.

Claim Rejections - 35 USC § 103

10. The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

11. Claims 1-2, 4-5, 7-8, and 10-11 are rejected under 35 U.S.C. 103 as being unpatentable over ILOLA et al. (US 2021/0383590) in view of IEC2021 (Information technology - Coded representation of immersive media - Part 12: MPEG Immersive video) and further in view of Jonathan et al. (Tiling Layered Depth Images).

Regarding claim 1, ILOLA teaches a method (Fig. 16) comprising: obtaining metadata and tiles (Fig. 16, steps 1602/1604; the patch information includes tile information, as shown in Fig. 2C) comprising packed patch pictures (Fig. 9 shows the packed patch pictures; Fig. 5 shows patch metadata at 516) representative of a part of a three-dimensional (3D) scene (Figs. 3A/3B), wherein the metadata comprise information respectively associating each tile with one depth value (Fig. 7, patch_texture_depth_offset; as explained, a patch includes tile information) and information associating the patch pictures with a location in a layer (paragraph 0140: each patch is a (perspective) projection toward a virtual camera location, with a set of such virtual camera locations residing in or near the intended viewing region of the scene; paragraph 0004: the patch metadata comprise at least a depth offset of the texture layer with respect to a geometry surface; since each patch is a projection toward a virtual camera location, the patch metadata is the information associating the patch pictures with a location in a layer, i.e., the depth offset).

ILOLA does not explicitly disclose generating a viewport image for a point of view in the 3D scene by decoding tiles as layers according to the metadata and blending the layers in the viewport image in a monotonic order of the tile numbers. IEC2021 discloses generating a viewport image for a point of view in the 3D scene by decoding tiles as layers according to the metadata (pages 58-59: reconstruct geometry of the subset of views using an estimator; reproject the subset of views to a viewport; fetch texture from multiple views; the texture comprises tiles and there are multiple layers in the texture) and blending the layers in the viewport image in a monotonic order of the tile numbers (page 59: project to a viewport and blend the different texture layers from the closest to the farthest, taking into account the associated transparency values; this is a monotonic order of the tile numbers since the texture comprises tiles and each tile number is associated with the depth of its texture layer). It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to incorporate this viewport generation as a modification to the method, for the benefit of blending each tile's contribution (page 59).

ILOLA also does not explicitly disclose that each tile has a tile number determined as a monotonic function of the depth value of the tile. Jonathan discloses this feature (page 2, Fig. 3: a tile has a number that is a monotonic function of the depth value of the tile, since tiling is applied to each layer and each layer has a depth; the lower row has different numbers than the row above). It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to incorporate this feature as a modification to the method, for the benefit of effective modeling and real-time rendering of solid terrains (page 1).

Regarding claim 7, ILOLA teaches a device (Fig. 15) comprising a memory (Fig. 15, 1504) associated with a processor (Fig. 15, 1502) configured to perform the steps mapped above for claim 1. The same combination of ILOLA, IEC2021, and Jonathan, with the same motivations, applies as a modification to the device.

Regarding claim 2, the combination of ILOLA, IEC2021, and Jonathan teaches the limitations recited in claim 1 as discussed above. In addition, ILOLA further discloses that the tiles are grouped in an atlas image (Fig. 12, 1210, ATLAS encoder; Fig. 2C, atlas_tile_group_data_unit()).

Regarding claim 4, the combination teaches the limitations recited in claim 1 as discussed above. In addition, Jonathan further discloses that tile numbers are consecutive numbers (Fig. 3: tiling is applied to each layer and the tiles are arranged consecutively, such that the tile numbers are consecutive). The motivation to combine is the same as in claim 1's rejection.

Regarding claim 5, the combination teaches the limitations recited in claim 1 as discussed above. In addition, Jonathan further discloses that the metadata comprise information indicating that tile numbers are determined as a monotonic function of the depth of the corresponding layer (Fig. 3: tiles are applied to each layer and arranged consecutively, and each layer has a depth value, such that the tile numbers are determined as a monotonic function of the depth of the corresponding layer; the metadata reflect this information). The motivation to combine is the same as in claim 1's rejection.

Regarding claims 8, 10, and 11, the combination teaches the limitations recited in claim 7 as discussed above, and the additional limitations are taught as set forth for claims 2, 4, and 5, respectively. The motivations to combine are the same as in claim 7's rejection.

12. Claims 6 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over ILOLA et al. (US 2021/0383590) in view of IEC2021 (Information technology - Coded representation of immersive media - Part 12: MPEG Immersive video), further in view of Jonathan et al. (Tiling Layered Depth Images), and further in view of Pang et al. (US 2017/0244948).

Regarding claim 6, the combination of ILOLA, IEC2021, and Jonathan teaches the limitations recited in claim 1 as discussed above. In addition, Jonathan further discloses obtaining a sequence of sets of tiles (Fig. 3); the motivation to combine is the same as in claim 1's rejection. ILOLA does not explicitly disclose generating a predicted tile from a tile from a previous set of tiles in the sequence, or generating a predicted tile from a tile of a same set of tiles having a lower tile number than the given tile number. Pang discloses these features (Fig. 38: a predicted tile is generated in the first and second layers (3810, 3820) from a tile of a same set of tiles having a lower tile number than the given tile number (3830), since the third layer has the largest depth value and hence a larger tile number than the first layer 3810 and the second layer 3820). It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to incorporate these features as a modification to the method, for the benefit of predicting a tile using tiles from other layers (paragraph 0330).

Regarding claim 12, the same combination and motivations apply as a modification to the device of claim 7.

13. Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See form 892.

14. Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZAIHAN JIANG, whose telephone number is (571) 272-1399. The examiner can normally be reached on a flexible schedule. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Sath Perungavoor, can be reached at (571) 272-7455. The fax number for the organization where this application or proceeding is assigned is 571-270-0655.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR; status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. For questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). For assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/ZAIHAN JIANG/
Primary Examiner, Art Unit 2488
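To make the disputed claim language concrete, the sketch below illustrates the kind of decoder behavior the §112 rejection is parsing: each tile carries a tile number that is a monotonic function of its layer depth, and the viewport is composed by blending decoded tiles in monotonic tile-number order. This is purely illustrative; the class, field names, the single-channel blending model, and the assumption that larger tile numbers mean farther layers are all hypothetical, not taken from the application or the cited references.

```python
from dataclasses import dataclass

@dataclass
class Tile:
    tile_number: int  # assumed monotonic in depth: larger number = farther layer
    color: float      # single-channel "texture" stand-in for simplicity
    alpha: float      # transparency component of the tile's layer

def render_viewport(tiles: list[Tile]) -> float:
    """Blend tiles over a black background in monotonic tile-number order."""
    out = 0.0
    # Visit tiles far-to-near (descending tile number under the assumption
    # above), compositing each with the standard "over" operator.
    for t in sorted(tiles, key=lambda t: t.tile_number, reverse=True):
        out = t.alpha * t.color + (1.0 - t.alpha) * out
    return out

tiles = [
    Tile(tile_number=3, color=0.2, alpha=1.0),   # farthest, opaque layer
    Tile(tile_number=2, color=0.8, alpha=0.5),
    Tile(tile_number=1, color=1.0, alpha=0.25),  # nearest, mostly transparent
]
print(round(render_viewport(tiles), 3))  # -> 0.625
```

Note that the result depends on the traversal order, which is why the claims tie the blending order to the tile numbers; the examiner's indefiniteness point is about which set of tiles those numbers index, not about the blending itself.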

Prosecution Timeline

Jun 14, 2024
Application Filed
Nov 14, 2025
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12587655
IMPROVING STREAMING VIDEO QUALITY IN LOSSY NETWORK CONDITIONS
2y 5m to grant • Granted Mar 24, 2026
Patent 12581105
SUPPLEMENTAL ENHANCEMENT INFORMATION MESSAGE CONSTRAINTS
2y 5m to grant • Granted Mar 17, 2026
Patent 12581117
THREE-DIMENSIONAL DATA ENCODING METHOD, THREE-DIMENSIONAL DATA DECODING METHOD, THREE-DIMENSIONAL DATA ENCODING DEVICE, AND THREE-DIMENSIONAL DATA DECODING DEVICE
2y 5m to grant • Granted Mar 17, 2026
Patent 12581055
VERIFICATION METHOD FOR A PANORAMIC LENS FOCUSING WORKSTATION
2y 5m to grant • Granted Mar 17, 2026
Patent 12574547
VIDEO DIVERSIFICATION DEVICE, VIDEO SERVICE SYSTEM HAVING THE SAME, AND OPERATING METHOD THEREOF
2y 5m to grant • Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 83%
With Interview: 99% (+25.1%)
Median Time to Grant: 2y 6m
PTA Risk: Low
Based on 626 resolved cases by this examiner. Grant probability derived from career allow rate.
