Prosecution Insights
Last updated: April 19, 2026
Application No. 18/858,122

A METHOD AND APPARATUS FOR ENCODING/DECODING A 3D SCENE

Non-Final OA: §101, §102, §103
Filed: Oct 18, 2024
Examiner: RAHAMAN, SHAHAN UR
Art Unit: 2426
Tech Center: 2400 — Computer Networks
Assignee: InterDigital CE Patent Holdings, SAS
OA Round: 1 (Non-Final)
Grant Probability: 76% (Favorable)
OA Rounds: 1-2
To Grant: 2y 11m
With Interview: 88%

Examiner Intelligence

Career Allow Rate: 76% (479 granted / 633 resolved); above average, +17.7% vs Tech Center average
Interview Lift: +12.6% (moderate), across resolved cases with interview
Typical Timeline: 2y 11m average prosecution; 51 applications currently pending
Career History: 684 total applications across all art units
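The headline numbers above are simple ratios. A minimal sketch of how they appear to be derived, assuming the interview lift is additive in percentage points (the page does not state this, and the variable names are mine):

```python
# Career allow rate: granted / resolved cases from the examiner's history.
granted, resolved = 479, 633
allow_rate = granted / resolved              # ~0.757, displayed as 76%

# Interview-adjusted grant probability, assuming the +12.6% lift is
# simply added in percentage points (an assumption, not documented).
interview_lift_pts = 12.6
with_interview_pct = round(allow_rate * 100) + interview_lift_pts
print(f"{allow_rate:.1%}, {with_interview_pct:.1f}%")   # 75.7%, 88.6%
```

The 88.6% result rounds down to the 88% shown in the "With Interview" figure, which is consistent with the additive reading, though a conditional rate computed directly from the interview subset would also fit the displayed numbers.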

Statute-Specific Performance

§101: 4.7% (-35.3% vs TC avg)
§103: 50.0% (+10.0% vs TC avg)
§102: 14.7% (-25.3% vs TC avg)
§112: 15.1% (-24.9% vs TC avg)
Baselines are Tech Center average estimates • Based on career data from 633 resolved cases
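The "vs TC avg" deltas are stated relative to a Tech Center average estimate. Subtracting each delta from its rate recovers that baseline, and the published pairs all back out to the same value, suggesting a single flat estimate is used (my inference from the arithmetic, not a documented fact):

```python
# Each delta is (examiner rate - Tech Center average), in percentage points.
rates  = {"§101": 4.7,  "§103": 50.0, "§102": 14.7,  "§112": 15.1}
deltas = {"§101": -35.3, "§103": 10.0, "§102": -25.3, "§112": -24.9}

# Recover the implied baseline per statute: rate - delta.
baselines = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(baselines)   # every statute backs out to the same 40.0 baseline
```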

Office Action

§101, §102, §103
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

The following prior art is considered pertinent to applicant's disclosure:

J. Kim, J. Im, S. Rhyu and K. Kim, "3D Motion Estimation and Compensation Method for Video-Based Point Cloud Compression," IEEE Access, vol. 8, pp. 83538-83547, April 30, 2020 (hereinafter Kim).
LU et al., "PointINet: Point Cloud Frame Interpolation Network," arXiv.org, Cornell University Library, 18 December 2020 (hereinafter LU).
OH et al., "Object-based compression requirement proposal for MIV future work (MIV v2)," 136th MPEG Meeting, 2021-10-11 to 2021-10-15, online, no. m58028, 13 October 2021, XP030298700, retrieved from https://dms.mpeg.expert/doc_end_user/documents/136_OnLine/wg11/m58028-v3-m58028_RequirementproposalforMIVr3.zip (file m58028_Requirement proposal for MIV r3.docx, retrieved on 2021-10-13) (hereinafter OH).
US 20040190615 A1 (hereinafter Abe).
US 20210217203 A1 (Fig. 2C; 3D motion compensation with patch and occupancy map).

Claim Rejections - 35 USC § 101

35 U.S.C. § 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 25-26 and 28-29 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to non-statutory subject matter. The claims recite "a computer readable storage medium".
Considering the open-ended definition of the medium in the specification (published specification, para. 128), applying the broadest reasonable interpretation in light of the specification, and taking into account the meaning of the words in their ordinary usage as understood by one of ordinary skill in the art (MPEP § 2111), the claims as a whole cover both transitory and non-transitory media. A transitory medium does not fall into any of the four statutory categories of invention (process, machine, manufacture, or composition of matter).

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention; (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 6, 14, 16 and 22 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Kim.

Regarding claim 1: Kim teaches a method comprising reconstructing at least one sequence of volumetric frames from a data stream (Figs. 5-7 and Section III.A, Architecture, left column, last paragraph), including: decoding from the data stream a patch-atlas based representation of at least one volumetric frame of the sequence (Fig. 5, "compressed auxiliary patch information" and "compressed occupancy map"; see Section II); reconstructing the at least one volumetric frame using the patch-atlas based representation (Fig. 5, "reconstructed point cloud", the second-to-last step; note that a point cloud frame is a volumetric frame, Figs. 6-7); obtaining 3D motion information representative of a displacement in a 3D space of points of the at least one volumetric frame (Fig. 5, "compressed vector video" represents the motion, as can be seen from the "3D Motion Search" of Fig. 4 and Section III; displacement between volumetric frames is shown in Fig. 7), by decoding metadata associated with at least one patch of the patch-atlas based representation of the at least one volumetric frame (motion/vector information is associated with a patch; see page 83544, left column); and displacing points of the at least one volumetric frame to a composition time frame using the 3D motion information, the composition time frame being different from a time of the at least one volumetric frame encoded in the data stream (Fig. 5, the Motion Compensation step displaces the "reconstructed point cloud" with the motion information to a frame at a different time; in Fig. 6, point cloud frame f(t), P is displaced compared to I; see also Fig. 7).

Regarding claim 16: Kim teaches a method comprising encoding at least one sequence of volumetric frames representative of a three-dimensional (3D) scene, including: obtaining, for at least one volumetric frame of the sequence, a patch-atlas based representation; obtaining 3D motion information representative of a displacement in a 3D space of de-projected samples of the patch-atlas based representation between two volumetric frames of the sequence; and encoding in a data stream the patch-atlas based representation and the 3D motion information, the 3D motion information being encoded as metadata associated with at least one patch of the patch-atlas based representation of the at least one volumetric frame (see the analysis of claim 1 and Kim Fig. 4; the motion is a displacement in 3D space of samples that are not projected or de-projected; see Section I, Introduction, 3rd paragraph).

Regarding claim 6: The method of claim 1, wherein displacing points of the at least one volumetric frame comprises motion-compensating the decoded volumetric frame using the 3D motion information (Kim, Fig. 5: the Motion Compensation step displaces the "reconstructed point cloud" with the motion information to a frame at a different time; in Fig. 6, point cloud frame f(t), P is displaced compared to I; see also Fig. 7).

Regarding claim 14: The method of claim 1, wherein the metadata comprises parameters of a motion model determined for the at least one patch based on a 3D motion determined for de-projected samples of the at least one patch (see Kim, Section III, 3D Motion Estimation and Compensation).

Regarding claim 22: The method of claim 16, wherein the metadata comprises parameters of a motion model determined for the at least one patch based on a 3D motion determined for de-projected samples of the at least one patch (see Kim, Section III, 3D Motion Estimation and Compensation).

Claims 25-26 are rejected under 35 U.S.C. 102(a)(1) and (a)(2) as being anticipated by Abe.
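Read as an algorithm, the claim 1 decoding steps mapped above (decode a patch-atlas representation, decode per-patch 3D motion metadata, displace the reconstructed points to a composition time) can be sketched as follows. The data types and the linear motion model are illustrative assumptions of mine, drawn from neither the application nor Kim:

```python
from dataclasses import dataclass

@dataclass
class Patch:
    points: list   # de-projected 3D samples of the patch: [(x, y, z), ...]
    motion: tuple  # per-patch 3D motion metadata: (dx, dy, dz) per unit time

def decode_to_composition_time(patches, t_frame, t_comp):
    """Reconstruct the volumetric frame from its patch-atlas representation,
    then displace its points to the composition time using the 3D motion
    metadata carried per patch (linear motion assumed for illustration)."""
    dt = t_comp - t_frame          # composition time differs from coded time
    points = []
    for p in patches:
        dx, dy, dz = p.motion
        points += [(x + dx * dt, y + dy * dt, z + dz * dt)
                   for x, y, z in p.points]
    return points
```

For example, a patch whose points move one unit along x per frame interval, composed half an interval after its coded time, yields points shifted by 0.5 along x.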
Regarding claims 25-26: These are product-by-process claims. MPEP § 2113 recites that product-by-process claims are not limited to the manipulations of the recited steps, only the structure implied by the steps. Thus, the scope of the claims is a computer-readable storage medium with a bitstream. To be given patentable weight, the printed matter and associated product must be in a functional relationship; a functional relationship can be found where the printed matter performs some function with respect to the product with which it is associated (MPEP § 2111.05(I)(A)). When a claimed computer-readable medium merely serves as a support for information or data, no functional relationship exists (MPEP § 2111.05(III)). Here, the storage medium merely provides support for the storage of the claimed bitstream, and there is no functional relationship between the stored bitstream and the storage medium. Therefore, the claim scope is just a storage medium storing data. If the specification supports it, the claims can be amended to recite a non-transitory computer-readable recording medium storing a computer-executable program that, when executed by a processor, performs the steps…… Therefore, the claim scope is taught by Abe, paragraph 160.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103, are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering the patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 2, 15, 17, 23, 28-30 and 33-34 are rejected under 35 U.S.C. 103 as being unpatentable over Kim in view of Abe.

Regarding claim 2: This claim recites an apparatus implementing the method of claim 1, the apparatus having one or more processors. While Kim does not explicitly show such an apparatus implemented by one or more processors, such an implementation is well known in the art, as described by Abe (Figs. 16-19 and para. 161). Therefore, in light of the above discussion, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of the prior art, because such a combination would provide a predictable result with no change to their respective functionalities.

Regarding claim 15: Kim in view of Abe additionally teaches the method of claim 1, wherein at least one syntax element indicating a presence of 3D motion information is encoded in the data stream (while Kim does not explicitly teach this, Kim shows it as an additional way within the existing system, see Section III and Figs. 4-5, so this is an intuitive modification; Abe explicitly teaches communicating options through syntax in para. 94).

Regarding claim 17: Kim in view of Abe additionally teaches an apparatus comprising one or more processors (Abe, para. 161, Figs. 16-19) configured to encode at least one sequence of volumetric frames, wherein the one or more processors are further configured to: obtain, for at least one volumetric frame of the sequence, a patch-atlas based representation; obtain three-dimensional (3D) motion information representative of a displacement in a 3D space of de-projected samples of the patch-atlas based representation between two volumetric frames of the sequence; and encode in a data stream the patch-atlas based representation and the 3D motion information, the 3D motion information being encoded as metadata associated with at least one patch of the patch-atlas based representation of the at least one volumetric frame (see the analysis of claim 16 above).

Regarding claims 23, 28-29 and 33-34: See the analyses of claims 14-15 and para. 156 of Abe.

Regarding claim 30: Abe additionally teaches a device comprising: an apparatus according to claim 2; and at least one of (i) an antenna configured to receive a signal, the signal including data representative of at least one sequence of volumetric frames, (ii) a band limiter configured to limit the received signal to a band of frequencies that includes the data representative of the at least one sequence of volumetric frames, or (iii) a display configured to display the at least one sequence of volumetric frames (Abe, Figs. 16-19).

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Kim in view of LU.

Regarding claim 3:
Kim does not explicitly show that displacing points of the at least one volumetric frame is part of a resampling of the at least one sequence of volumetric frames at a frame rate different from the frame rate used at encoding. However, in the same or a related field of endeavor, LU teaches this (LU, Abstract, lines 2-7; last paragraph of page 2 and first paragraph of page 3; also, "Point cloud warping" on page 3 shows the use of motion information). Therefore, in light of the above discussion, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of the prior art, because such a combination would provide a predictable result with no change to their respective functionalities.

Claims 5, 7 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Kim in view of OH.

Regarding claim 5: Kim does not explicitly show that the sequence of volumetric frames is representative of a three-dimensional (3D) scene comprising at least two objects encoded in separate sub-streams of the data stream. However, in the same or a related field of endeavor, OH teaches this (OH, Section 3, 4th paragraph). Therefore, in light of the above discussion, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of the prior art, because such a combination would provide a predictable result with no change to their respective functionalities.

Regarding claim 7: The method of claim 5, wherein each one of the at least two objects of the 3D scene is encoded as a sequence of volumetric frames (Kim, Fig. 6).

Regarding claim 9: The method of claim 5, wherein reconstructing the at least one sequence of volumetric frames further includes, for each object of the at least two objects, determining a time frame of the corresponding sub-stream that is closest to the composition time frame, the object being decoded and reconstructed at the determined time frame (OH, Section 3, Fig. 4).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Shahan Rahaman, whose telephone number is (571) 270-1438. The examiner can normally be reached 7am - 3:30pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Nasser Goodarzi, can be reached at (571) 272-4195. The fax number for the organization to which this application or proceeding is assigned is (571) 273-8300.

Information regarding the status of an application may be obtained from Patent Center. Status information for published applications may be obtained from Patent Center; status information for unpublished applications is available through Patent Center for authorized users only. Should you have questions about access to Patent Center, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.

/SHAHAN UR RAHAMAN/
Primary Examiner, Art Unit 2426
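The frame-rate resampling at issue for claim 3 (point displacement as part of resampling the sequence to a frame rate different from the coded one) can be sketched in the same spirit. This is a generic motion-warping illustration of mine, not LU's PointINet method, and the per-point motion vectors are assumed given:

```python
def warp(points, motions, alpha):
    """Displace each point along its 3D motion vector by a fraction
    alpha (0 <= alpha < 1) of the inter-frame interval."""
    return [(x + dx * alpha, y + dy * alpha, z + dz * alpha)
            for (x, y, z), (dx, dy, dz) in zip(points, motions)]

def upsample(frames, motions, factor):
    """Resample a decoded point cloud sequence to `factor` times the
    coded frame rate by warping each frame toward the next at evenly
    spaced intermediate times."""
    out = []
    for pts, mots in zip(frames, motions):
        for k in range(factor):
            out.append(warp(pts, mots, k / factor))
    return out
```

Doubling the frame rate of a one-frame sequence whose single point moves 2 units along x per interval produces the original frame plus an interpolated frame with the point at x = 1.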

Prosecution Timeline

Oct 18, 2024
Application Filed
Jan 05, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12599294: IMAGE-RECORDING DEVICE FOR IMPROVED LOW LIGHT INTENSITY IMAGING AND ASSOCIATED IMAGE-RECORDING METHOD
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12602765: DEFECT INSPECTION SYSTEM AND DEFECT INSPECTION METHOD
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12598328: VIDEO SIGNAL PROCESSING METHOD AND DEVICE
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12593035: IMAGE ENCODING/DECODING METHOD AND DEVICE
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12586224: THREE-DIMENSIONAL SCANNING SYSTEM AND METHOD FOR OPERATING SAME
Granted Mar 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 76%
With Interview: 88% (+12.6%)
Median Time to Grant: 2y 11m
PTA Risk: Low
Based on 633 resolved cases by this examiner. Grant probability is derived from the career allow rate.
