Prosecution Insights
Last updated: April 19, 2026
Application No. 18/494,078

POINT CLOUD DECODING METHOD, POINT CLOUD ENCODING METHOD, AND POINT CLOUD DECODING DEVICE

Non-Final OA (§103, §112)
Filed: Oct 25, 2023
Examiner: HYTREK, ASHLEY LYNN
Art Unit: 2665
Tech Center: 2600 — Communications
Assignee: Guangdong OPPO Mobile Telecommunications Corp., Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 89% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 89% (74 granted / 83 resolved; +27.2% vs TC avg; above average)
Interview Lift: +11.8% (moderate, ~+12%) across resolved cases with interview
Avg Prosecution: 3y 0m typical timeline; 12 applications currently pending
Total Applications: 95 across all art units (career history)

Statute-Specific Performance

§101: 13.8% (-26.2% vs TC avg)
§103: 51.0% (+11.0% vs TC avg)
§102: 16.3% (-23.7% vs TC avg)
§112: 16.0% (-24.0% vs TC avg)
Deltas are vs the Tech Center average estimate. Based on career data from 83 resolved cases.

Office Action

Rejections: §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Applicant's claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 10/25/2023 has been made of record and considered by the examiner.

Claim Objections

Claim 17 is objected to because of the following informalities: the claim appears to contain a typo; "parameter" should be plural. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-19 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claims 3 and 13 recite the limitation "determining a nearest neighboring point," as well as "wherein the nearest neighboring point of one representative point denotes one or more points in the point cloud." It is unclear to the Examiner whether the "nearest neighboring point" is a single closest point, a set of nearest points, or all points within a distance threshold. Clarification is respectfully requested. Appropriate correction is required.
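The indefiniteness question raised for claims 3 and 13 turns on which of three readings "nearest neighboring point" takes. As a hedged illustration (the function names and data layout below are hypothetical, not from the application), the three readings diverge as follows:

```python
import math

# Three candidate readings of "nearest neighboring point" of a
# representative point; all names here are illustrative only.

def single_nearest(points, rep):
    """Reading 1: exactly one closest point."""
    return min(points, key=lambda p: math.dist(p, rep))

def k_nearest(points, rep, k):
    """Reading 2: a fixed-size set of the k nearest points."""
    return sorted(points, key=lambda p: math.dist(p, rep))[:k]

def within_radius(points, rep, r):
    """Reading 3: every point within a distance threshold r."""
    return [p for p in points if math.dist(p, rep) <= r]
```

Because the claim language "denotes one or more points" is compatible with all three, each reading yields a different patch construction, which is exactly the ambiguity the rejection targets.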
Claims 1 and 11 recite the limitation "performing quality enhancement on attribute data of the converted 2D pictures," while claims 7 and 16 recite "performing quality enhancement on the point cloud." The Examiner is unsure whether quality enhancement is performed on 2D image attribute data, on the point cloud as a whole, or both. Alignment of the claims is respectfully requested. Appropriate correction is required.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors.
In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-5, 7, 10-14, 16-17, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Tourapis (US 2022/0005228 A1) in view of Wang (US 2020/0294270 A1).

Consider claims 1, 11, and 20: Tourapis discloses a point cloud decoding device, comprising: at least one processor (FIG. 16 Processor #1610a-n); and a memory coupled to the at least one processor and storing at least one computer executable instruction thereon which, when executed by the at least one processor (FIG. 16 Memory #1620, Program Instructions #1622), causes the at least one processor to perform a point cloud decoding/encoding method (FIGs. 5A-5D), comprising:

- decoding a point cloud bitstream to output a point cloud, the point cloud comprising attribute data and geometry data (¶347, 349; FIG. 5B #550 Decoder, #240 point cloud generation; FIG. 5D #516, 518; ¶114-115; "a captured point cloud, such as captured point cloud 110, may include spatial and attribute information for the points included in the point cloud.");
- extracting a plurality of three-dimensional (3D) patches from the point cloud (FIG. 5B #238; FIG. 5D #522; ¶120, intra-frame decoder, patches; ¶132; "A segmentation process may decompose a point cloud into a minimum number of patches (e.g., a contiguous subset of the surface described by the point cloud)");
- converting the extracted plurality of three-dimensional patches into two-dimensional (2D) pictures (FIG. 8C; ¶121-122; "Geometry/Texture/Attribute generation modules, such as modules 210, 212, and 214, generate 2D patch images associated with the geometry/texture/attributes"; ¶155; "The image generation process described above consists of projecting the points belonging to each patch onto its associated projection plane to generate a patch image."; ¶358);
- performing quality enhancement on attribute data of the converted two-dimensional pictures, and updating the attribute data of the point cloud according to the attribute data of the two-dimensional pictures after quality enhancement (FIGs. 5B, 6A; FIG. 10 #1055, 1056; ¶346, patches, reconstructed geometry image, correcting for changes in patch shape; ¶131; "The obtained images are then used in order to reconstruct a point cloud, which may be smoothed as described previously to generate a reconstructed point cloud 282"; ¶290, 456, 707; "determining, for the at least one patch, updated spatial information for the set of points of the at least one patch based, at least in part, on the vector motion information; and generating an updated decompressed version of the compressed point cloud based, at least in part, on the updated spatial information."); and
- encoding the point cloud (Tourapis ¶295, 346; FIG. 5A Encoder #500, Compressed point cloud information #204).

Tourapis fails to explicitly disclose encoding the point cloud with the updated attribute data.

In related art, Wang discloses extracting a plurality of three-dimensional (3D) patches from the point cloud (Wang FIG. 4A; ¶24-25; "the patch generation module 12 may obtain a point cloud PC including a plurality of points, and obtain a patch … it is further assumed that the patch P1 is a 3D patch and the patch P1 include multiple points."); converting the extracted plurality of three-dimensional patches into two-dimensional (2D) pictures (Wang FIGs. 1B, 4A; ¶26; "in step S103, the patch generation module 12 generates a 2D patch corresponding to the patch P1 according to the 3D patch P1. Here, the 2D patch of the patch P1 includes a geometry image P_G and a texture images P_T."); performing quality enhancement on attribute data of the converted two-dimensional pictures, and updating the attribute data of the point cloud according to the attribute data of the two-dimensional pictures after quality enhancement (Wang FIG. 4A Patch expanding module #86; ¶51-58; "It should be noted that, the neighboring point added to the patch P1 usually belongs to another patch of the same point cloud PC. … By adding the point from a different patch to one patch, the problem of the crack occurred due to distortion at the intersections of the patches when the decoder is reconstructing may be solved."); encoding the point cloud with the updated attribute data (Wang FIG. 1A Patch expanding module #10; ¶59; "the encoder 1000 and the decoder 4000 both include the patch expanding module"); and outputting a point cloud bitstream (Wang FIG. 1A Output bitstream; ¶35-39).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate the patch extension method of Wang into the point cloud compression/decompression methods of Tourapis to predictably yield enhanced point cloud/attribute quality. As stated by Wang, "when the decoder obtains the patches from the compressed data and reconstructs (or restores) the data of the point cloud before compression based on the patches, a crack may occur due to distortion at intersections of the patches of the point cloud. This crack will reduce data quality of the point cloud (i.e., a point cloud image) decoded by the decoder" (Wang ¶4). As further stated by Tourapis, "a smoothing filter may smooth incongruences at edges of patches, wherein data included in patch images for the patches has been used by the point cloud generation module to recreate a point cloud from the patch images for the patches. In some embodiments, a smoothing filter may be applied to the pixels located on the patch boundaries to alleviate the distortions that may be caused by the compression/decompression process" (Tourapis ¶127).

Consider claim 2: Tourapis, as modified by Wang, discloses the claimed invention wherein: the attribute data contains a luma component (Tourapis ¶112, 294-298); and performing quality enhancement on the attribute data of the converted two-dimensional pictures, and updating the attribute data of the point cloud according to the attribute data of the two-dimensional pictures after quality enhancement, comprises (Wang ¶64, FIG. 4A; Tourapis FIG. 4C; ¶113, 294-304): performing quality enhancement on luma components of the converted two-dimensional pictures, and updating the luma component contained in the attribute data of the point cloud according to the luma components of the two-dimensional pictures after quality enhancement (Tourapis ¶113, 294-304; "wherein the one or more parameters are adjusted to improve quality of the final decompressed point cloud colors and to reduce the size of the compressed point cloud"; Wang FIG. 4A, ¶51-58).
Consider claims 3 and 12: Tourapis, as modified by Wang, discloses the claimed invention wherein extracting the plurality of three-dimensional patches from the point cloud comprises (Tourapis ¶132-154, segmentation process): determining a plurality of representative points in the point cloud (Tourapis ¶132-140); determining a nearest neighbouring point for each of the plurality of representative points, wherein the nearest neighbouring point of one representative point denotes one or more points in the point cloud nearest to the representative point (Tourapis ¶132-140, 144-149); and constructing the plurality of three-dimensional patches based on the plurality of representative points and the nearest neighbouring points of the plurality of representative points (Tourapis ¶132-154, segmentation process; Wang FIG. 2, ¶28).

Consider claims 4 and 13: Tourapis, as modified by Wang, discloses the claimed invention wherein converting the extracted plurality of three-dimensional patches into the two-dimensional pictures comprises converting each extracted three-dimensional patch in the following way: taking the representative point in the three-dimensional patch as a start point, scanning on a two-dimensional plane according to a predetermined scan mode, and mapping other points in the three-dimensional patch to a scan path according to an increasing order of Euclidean distances to the representative point, to obtain one or more two-dimensional pictures, wherein a point in the three-dimensional patch nearer to the representative point is nearer to the representative point on the scan path, and attribute data of all points after mapping are unchanged (Tourapis FIGs. 12B, 13A-K, ¶197-207, 238, 271-285, 550-552; Wang ¶26, FIG. 2).

Consider claims 5 and 14: Tourapis, as modified by Wang, discloses the claimed invention wherein the predetermined scan mode comprises at least one of: square-spiral-shape scan, raster scan, or Z-shape scan (Tourapis FIGs. 12B, 13A-K, ¶550-552).
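The mapping step recited in claims 4 and 13 (order patch points by Euclidean distance to the representative point, then lay them out along a 2D scan path) can be sketched as follows. This is a minimal reading of the claim language, not the applicant's implementation; the raster scan is one of the three recited modes, and the dict-based picture is an illustrative stand-in:

```python
import math

def raster_path(width, n):
    """First n (row, col) positions of a simple raster scan."""
    return [(i // width, i % width) for i in range(n)]

def map_patch_to_2d(patch, rep, width):
    """Order patch points by distance to the representative point;
    nearer points land earlier on the scan path, and each point's
    attribute data is carried over unchanged."""
    ordered = sorted(patch, key=lambda pt: math.dist(pt["xyz"], rep))
    positions = raster_path(width, len(ordered))
    return {pos: pt["attr"] for pos, pt in zip(positions, ordered)}
```

A square-spiral or Z-shape mode would only change `raster_path`; the distance-ordering step, which is what the claims emphasize, stays the same.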
Consider claims 7 and 16: Tourapis, as modified by Wang, discloses the claimed invention wherein: the method further comprises decoding the point cloud bitstream to output at least one quality enhancement parameter of the point cloud (Wang FIG. 4A, ¶39, 43, 49-59); and performing quality enhancement on the point cloud comprises performing quality enhancement on the point cloud according to the at least one quality enhancement parameter output after decoding (Wang FIG. 4A, ¶43, 49-59) [claim 16: determining a first quality enhancement parameter of the point cloud, and performing quality enhancement on the point cloud according to the determined first quality enhancement parameter (Wang FIG. 4A, ¶43, 49-59)].

The at least one quality enhancement parameter comprises at least one of: the number of the three-dimensional patches extracted from the point cloud (Wang ¶43, 49-59); the number of points in each two-dimensional picture (Wang ¶49-59); arrangement of the points in each two-dimensional picture (Wang ¶40, 49-59); at least one scan mode used when converting the plurality of three-dimensional patches into the two-dimensional pictures (Tourapis ¶550-552; Wang ¶49-59, FIG. 2); a parameter of a quality enhancement network, wherein the quality enhancement network is used for performing quality enhancement on the attribute data of the two-dimensional pictures; or a data feature parameter of the point cloud, wherein the data feature parameter is used for determining the quality enhancement network used in performing quality enhancement on the attribute data of the two-dimensional pictures, and the data feature parameter of the point cloud comprises at least one of: a type of the point cloud or a bit rate of an attribute bitstream of the point cloud.
Consider claim 17: Tourapis, as modified by Wang, discloses the claimed invention wherein at least one of the first quality enhancement parameter is obtained from a point cloud data source device of the point cloud (Tourapis FIG. 1, ¶102; Wang FIGs. 1A, 4A, ¶47).

Consider claims 10 and 19: Tourapis, as modified by Wang, discloses the claimed invention wherein determining the plurality of representative points in the point cloud comprises: selecting the plurality of representative points from the point cloud with a farthest point sampling algorithm (Wang ¶162).

Claims 6 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Tourapis, in view of Wang, as applied to claims 1-5, 7, 10-14, 16-17, and 19-20 above, and further in view of MPEG WG 7, hereinafter referred to as "MPEG" ('G-PCC codec description v12').

Consider claims 6 and 15: Tourapis, as modified by Wang, discloses the claimed invention wherein updating the attribute data of the point cloud according to the attribute data of the two-dimensional pictures after quality enhancement comprises: for each point in the point cloud, determining at least one corresponding point of the point in the two-dimensional pictures after quality enhancement (Tourapis ¶155-156, 176-178; Wang ¶51-59); setting attribute data of the point in the point cloud to be equal to attribute data of the at least one corresponding point, when the number of the at least one corresponding point is 1 (Tourapis ¶155-158, 557); and setting the attribute data of the point in the point cloud to be equal to a weighted average value of the attribute data of the at least one corresponding point, when the number of the at least one corresponding point is greater than 1 (Tourapis ¶155-163, 491, 557).

Tourapis, as modified by Wang, fails to specifically disclose: setting attribute data of the point in the point cloud to be equal to attribute data of the at least one corresponding point, when the number of the at least one corresponding point is 1; setting the attribute data of the point in the point cloud to be equal to a weighted average value of the attribute data of the at least one corresponding point, when the number of the at least one corresponding point is greater than 1; and skipping updating the attribute data of the point in the point cloud, when the number of the at least one corresponding point is 0.

In related art, MPEG discloses: setting attribute data of the point in the point cloud to be equal to attribute data of the at least one corresponding point, when the number of the at least one corresponding point is 1 (MPEG 3.7 Attributes transfer (recoloring), distance-weighted color transfer); setting the attribute data of the point in the point cloud to be equal to a weighted average value of the attribute data of the at least one corresponding point, when the number of the at least one corresponding point is greater than 1 (MPEG 3.7); and skipping updating the attribute data of the point in the point cloud, when the number of the at least one corresponding point is 0 (MPEG 3.7).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to combine the 0/1/>1 handling of MPEG into the decoding/encoding method of Tourapis, as modified by Wang, to predictably yield updating the attribute data based on the number of corresponding points after quality enhancement (Tourapis ¶155-163, 491, 557; MPEG 3.7).

Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Tourapis, in view of Wang, as applied to claims 1-5, 7, 10-14, 16-17, and 19-20 above, and further in view of Budagavi (US 2019/0318509 A1).

Consider claim 18: Tourapis, as modified by Wang, fails to specifically disclose obtaining a second quality enhancement parameter; and encoding the second quality enhancement parameter and signalling the second quality enhancement parameter into the point cloud bitstream, wherein the second quality enhancement parameter is used when a decoding end performs quality enhancement on the point cloud output after decoding the point cloud bitstream. In related art, Budagavi discloses obtaining a second quality enhancement parameter; and encoding the second quality enhancement parameter and signalling the second quality enhancement parameter into the point cloud bitstream, wherein the second quality enhancement parameter is used when a decoding end performs quality enhancement on the point cloud output after decoding the point cloud bitstream (Budagavi ¶79-80). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate the second quality enhancement parameter of Budagavi into the encoding/decoding method of Tourapis, as modified by Wang, to enable color smoothing enhancement at the decoder (Budagavi ¶79; Wang ¶59).

Allowable Subject Matter

Claims 8 and 9 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Relevant Prior Art

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. US 2021/0211703 A1 discloses geometry information signaling for occluded points in an occupancy map video. US 2020/0021856 A1 discloses systems for hierarchical point cloud compression.
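The 0/1/>1 corresponding-point handling that the rejection of claims 6 and 15 attributes to MPEG's recoloring step reduces to a small branch. The sketch below uses uniform weights as a placeholder; G-PCC's actual attribute transfer is distance-weighted, so treat the weighting as an assumption:

```python
def update_attribute(current, corresponding, weights=None):
    """Update one point's attribute from its corresponding 2D points:
    0 matches -> keep the current value (skip updating);
    1 match   -> copy the single corresponding attribute;
    >1 matches -> weighted average (uniform weights by default)."""
    n = len(corresponding)
    if n == 0:
        return current
    if n == 1:
        return corresponding[0]
    w = weights if weights is not None else [1.0 / n] * n
    return sum(wi * ai for wi, ai in zip(w, corresponding))
```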
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ASHLEY HYTREK, whose telephone number is (703) 756-4562. The examiner can normally be reached M-F 9:00-5:00.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Steve Koziol, can be reached at (408) 918-7630. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ASHLEY HYTREK/
Examiner, Art Unit 2665

/Stephen R Koziol/
Supervisory Patent Examiner, Art Unit 2665

Prosecution Timeline

Oct 25, 2023
Application Filed
Jan 09, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597122: DEFECT DETECTION DEVICE AND METHOD THEREOF (2y 5m to grant; granted Apr 07, 2026)
Patent 12555239: Microscopy System and Method for Image Segmentation (2y 5m to grant; granted Feb 17, 2026)
Patent 12555357: SYSTEMS AND METHODS FOR CATEGORIZING IMAGE PIXELS (2y 5m to grant; granted Feb 17, 2026)
Patent 12548291: VIDEO SIGNAL PROCESSING APPARATUS, VIDEO SIGNAL PROCESSING METHOD, AND IMAGING APPARATUS (2y 5m to grant; granted Feb 10, 2026)
Patent 12548157: SYSTEMS AND METHODS FOR INLINE QUALITY CONTROL OF SLIDE DIGITIZATION (2y 5m to grant; granted Feb 10, 2026)
Based on this examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 89%
With Interview: 99% (+11.8%)
Median Time to Grant: 3y 0m
PTA Risk: Low
Based on 83 resolved cases by this examiner. Grant probability derived from career allow rate.
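The 99% with-interview figure is consistent with applying the +11.8% lift multiplicatively to the 89% base rate, though the page does not state its formula; the sketch below makes that assumption explicit:

```python
def with_interview(base, lift):
    """Combine a base grant probability with a relative interview
    lift, capped at 1.0. Multiplicative combination is an assumption;
    0.89 * 1.118 ~= 0.995, consistent with the displayed 99%."""
    return min(base * (1.0 + lift), 1.0)
```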
