Prosecution Insights
Last updated: April 19, 2026
Application No. 18/022,900

POINT CLOUD DATA TRANSMISSION DEVICE, POINT CLOUD DATA TRANSMISSION METHOD, POINT CLOUD DATA RECEPTION DEVICE, AND POINT CLOUD DATA RECEPTION METHOD

Final Rejection: §103, §DP
Filed: Feb 23, 2023
Examiner: HUNTSINGER, PETER K
Art Unit: 2682
Tech Center: 2600 — Communications
Assignee: LG Electronics Inc.
OA Round: 2 (Final)
Grant Probability: 28% (At Risk)
Expected OA Rounds: 3-4
Median Time to Grant: 4y 11m
Grant Probability with Interview: 45%

Examiner Intelligence

Career Allow Rate: 28% (90 granted / 322 resolved; -34.0% vs TC avg)
Interview Lift: +16.7% (a strong lift), across resolved cases with interview
Typical Timeline: 4y 11m average prosecution; 59 applications currently pending
Career History: 381 total applications across all art units
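The panel's headline figures are simple arithmetic on the examiner's docket. A quick sketch, assuming plain rounding and an additive interview lift (the dashboard does not state its exact methodology, so treat this as illustrative only):

```python
# Illustrative only: reproduces the panel's 28% / 45% figures under the
# assumption of simple rounding and an additive interview lift.
granted = 90
resolved = 322
interview_lift = 16.7  # percentage points, from the panel

career_allow_rate = 100 * granted / resolved
with_interview = career_allow_rate + interview_lift

print(f"Career allow rate: {career_allow_rate:.1f}%")  # ~28.0%
print(f"With interview:    {with_interview:.1f}%")     # ~44.7%, shown as 45%
```

Under these assumptions the numbers round to the panel's 28% and 45%.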

Statute-Specific Performance

§101: 9.3% allow rate (-30.7% vs TC avg)
§103: 50.3% allow rate (+10.3% vs TC avg)
§102: 19.4% allow rate (-20.6% vs TC avg)
§112: 19.0% allow rate (-21.0% vs TC avg)

Deltas are relative to a Tech Center average estimate (the chart's black line). Based on career data from 322 resolved cases.
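Backing the Tech Center average out of each statute line (assuming each "vs TC avg" delta is simply the examiner's rate minus the TC average, which the dashboard does not confirm) shows all four statutes share the same baseline:

```python
# Implied TC average per statute, assuming delta = examiner rate - TC avg.
# Rates and deltas are taken from the statute panel above.
statute_stats = {
    "§101": (9.3, -30.7),
    "§103": (50.3, +10.3),
    "§102": (19.4, -20.6),
    "§112": (19.0, -21.0),
}

for statute, (rate, delta) in statute_stats.items():
    tc_avg = rate - delta
    print(f"{statute}: examiner {rate}%, implied TC average {tc_avg:.1f}%")
```

Every line implies the same ~40.0% TC average, which suggests the dashboard benchmarks against a single Tech Center-wide estimate rather than per-statute baselines.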

Office Action

§103, §DP
DETAILED ACTION

Claims 7 and 12-15 have been cancelled. Claims 1-6, 8-11 and 16-20 are currently pending. The rejections of claims 1, 9, 17 and 19 under double patenting are withdrawn due to Applicant's amendment.

Response to Arguments

Applicant's arguments filed 12/5/25 have been fully considered but they are not persuasive. The Applicant argues on pages 9-10 of the response in essence that: "However, Oyman fails to disclose the bitstream includes information for representing a number of the attributes for the atlas, information for dimension of partitions for the regions of the packed video frame for the atlas, and information for camera parameters." as recited in claim 1.

Oyman discloses that the sequence parameter set (SPS) unit type describes the entire V-PCC bitstream and its subcomponents (paragraph 40). The reconstruction process requires the occupancy, geometry, and attribute video sequences to be resampled at the nominal 2D resolution specified in the SPS (paragraph 44). The resolution specified in the SPS is information for dimension of partitions for the regions of the packed video frame for the atlas. Lee discloses that metadata such as internal/external setup values of the camera may be generated during the capturing process (paragraph 261). Setup values of the camera are information for camera parameters.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

Claims 1-6, 9-11 and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Oyman et al., US Publication 2020/0382764 (hereafter "Oyman"), and Lee et al., US Publication 2020/0153885 (hereafter "Lee").

Referring to claims 1 and 17, Oyman discloses a method of encoding point cloud data, the method comprising: encoding atlas data of an atlas for point cloud data (paragraph 37, V-PCC exploits a patch-based approach to segment the point cloud into a set of clusters (also referred to as patches), e.g., by patch generation block 102 and patch packing block 104 [a patch is atlas data]); and encoding a packed video frame for the atlas (paragraph 38, All patch information that is required to reconstruct the 3D point cloud from the 2D geometry, attribute, and occupancy videos also needs to be compressed. Such information is encoded in the V-PCC patch sequence substream (e.g., at block 106)), and wherein the packed video frame includes regions for attributes for the point cloud data (paragraph 19, In some embodiments, the UE may receive a quality ranking and/or a priority ranking associated with respective regions of an adaptation set (e.g., for an associated viewport)), and wherein the encoded atlas data and the encoded packed video frame are included in a bitstream (paragraph 39, The V-PCC bitstream is then formed by concatenating the various encoded information (e.g., occupancy map, geometry, attribute, and patch sequence substreams) into a single stream (e.g., at multiplexer 108)), wherein the bitstream includes information for representing a number of the attributes for the atlas and information for dimension of partitions for the regions of the packed video frame for the atlas (paragraph 40, 44, The sequence parameter set (SPS) unit type describes the entire V-PCC bitstream and its subcomponents.
The reconstruction process requires the occupancy, geometry, and attribute video sequences to be resampled at the nominal 2D resolution specified in the SPS).

Oyman does not disclose expressly wherein the bitstream includes information for camera parameters. Lee discloses wherein the bitstream includes information for camera parameters (paragraph 261, metadata such as internal/external setup values of the camera may be generated during the capturing process). Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to include camera parameters in a bitstream. The motivation for doing so would have been to better understand the creation of the image data to improve subsequent processing on the bitstream. Therefore, it would have been obvious to combine Lee with Oyman to obtain the invention as specified in claims 1 and 17.

Referring to claims 2 and 18, Oyman discloses wherein the point cloud data comprises geometry data and data for the attributes (paragraph 28, A point cloud comprises a set of unordered data points in a 3D space, each of which is specified by its spatial (x, y, z) position possibly along with other associated attributes, e.g., RGB color, surface normal, and reflectance. This is essentially the 3D equivalent of well-known pixels for representing 2D videos. These data points collectively describe the 3D geometry and texture of the scene or object), the method further comprising: selecting a specific number of viewpoints from the viewpoints based on distances between an object and the viewpoints (paragraph 84, it may be possible to indicate recommended viewports via specific contextual information (e.g., the position of the ball, position of a star player, etc.) along with (or instead of) the coordinate-based description of the content coverage). Oyman does not disclose expressly wherein the point cloud data is obtained from cameras for viewpoints.
Lee discloses wherein the point cloud data comprises geometry data and at least two attributes obtained from cameras for viewpoints (paragraph 85, The capture process may refer to a process of capturing images or videos for a plurality of views through one or more cameras). At the time of the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to obtain the point cloud data from cameras for viewpoints. The motivation for doing so would have been to provide an efficient manner in which a user can create the point cloud data. Therefore, it would have been obvious to combine Lee with Oyman to obtain the invention as specified in claims 2 and 18.

Referring to claim 3, Oyman discloses generating representative attribute information from the selected specific number of viewpoints (paragraph 59, the ROI/viewport information may include one or more of the parameters below. FIG. 5 depicts these parameters in accordance with various embodiments); and selecting the specific number of viewpoints based on the representative attribute information (paragraph 84, it may be possible to indicate recommended viewports via specific contextual information (e.g., the position of the ball, position of a star player, etc.) along with (or instead of) the coordinate-based description of the content coverage).
Referring to claim 4, Oyman discloses based on an order of the distances (paragraph 179, the UE may receive a quality ranking and/or a priority ranking associated with respective regions of an adaptation set (e.g., for an associated viewport)) (paragraph 84, it may be possible to indicate recommended viewports via specific contextual information (e.g., the position of the ball, position of a star player, etc.)), generating texture data from attribute information related to the selected specific number of viewpoints (paragraph 54, This may potentially also involve live cloud-based production media workloads on the volumetric content, which may for instance include live point cloud or texture-and-mesh generation for volumetric video).

Referring to claim 5, Oyman discloses generating texture data for the attributes from the specific number of viewpoints based on a difference between the representative attribute information and attribute information related to the specific number of viewpoints (paragraph 54, high quality viewport-specific video data (e.g., tiles) corresponding to portions of the point cloud content for different fields of view (FoVs) at various quality levels may be cached at the edge and delivered to the client device with very low latency based on the user's FOV information).

Referring to claim 6, Oyman discloses generating the packed video frame by merging the texture data including attribute information related to the selected specific number of viewpoints (paragraph 37, A mapping between the point cloud and a regular 2D grid is then obtained by packing the projected patches in the patch-packing process).
Referring to claims 9 and 19, Oyman discloses a method of decoding point cloud data, the method comprising: decoding atlas data for an atlas in a bitstream for point cloud data (paragraph 37, V-PCC exploits a patch-based approach to segment the point cloud into a set of clusters (also referred to as patches), e.g., by patch generation block 102 and patch packing block 104 [a patch is atlas data]) (paragraph 42, The bitstream decoding process takes as input the V-PCC compressed bitstream and outputs the decoded occupancy, geometry, and attribute 2D video frames, together with the patch information associated with every frame); and decoding a packed video frame for the atlas in the bitstream (paragraph 38, All patch information that is required to reconstruct the 3D point cloud from the 2D geometry, attribute, and occupancy videos also needs to be compressed. Such information is encoded in the V-PCC patch sequence substream (e.g., at block 106)), and wherein the packed video frame includes regions for attributes for the point cloud data (paragraph 19, In some embodiments, the UE may receive a quality ranking and/or a priority ranking associated with respective regions of an adaptation set (e.g., for an associated viewport)), and wherein the bitstream includes information for representing a number of the attributes for the atlas, information for dimension of partitions for the regions of the packed video frame for the atlas, and information for camera parameters. Wherein the bitstream includes information for representing a number of the attributes for the atlas and information for dimension of partitions for the regions of the packed video frame for the atlas (paragraph 40, 44, The sequence parameter set (SPS) unit type describes the entire V-PCC bitstream and its subcomponents.
The reconstruction process requires the occupancy, geometry, and attribute video sequences to be resampled at the nominal 2D resolution specified in the SPS).

Oyman does not disclose expressly wherein the bitstream includes information for camera parameters. Lee discloses wherein the bitstream includes information for camera parameters (paragraph 261, metadata such as internal/external setup values of the camera may be generated during the capturing process). Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to include camera parameters in a bitstream. The motivation for doing so would have been to better understand the creation of the image data to improve subsequent processing on the bitstream. Therefore, it would have been obvious to combine Lee with Oyman to obtain the invention as specified in claims 9 and 19.

Referring to claims 10 and 20, Oyman discloses wherein the point cloud data comprises geometry data and data for the attributes (paragraph 28, A point cloud comprises a set of unordered data points in a 3D space, each of which is specified by its spatial (x, y, z) position possibly along with other associated attributes, e.g., RGB color, surface normal, and reflectance. This is essentially the 3D equivalent of well-known pixels for representing 2D videos. These data points collectively describe the 3D geometry and texture of the scene or object), wherein the attributes are related to a specific number of viewpoints selected from the viewpoints based on distances between an object and the viewpoints (paragraph 84, it may be possible to indicate recommended viewports via specific contextual information (e.g., the position of the ball, position of a star player, etc.) along with (or instead of) the coordinate-based description of the content coverage). Oyman does not disclose expressly wherein the point cloud data is obtained from cameras for viewpoints.
Lee discloses wherein the point cloud data comprises geometry data and at least two attributes obtained from cameras for viewpoints (paragraph 85, The capture process may refer to a process of capturing images or videos for a plurality of views through one or more cameras). At the time of the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to obtain the point cloud data from cameras for viewpoints. The motivation for doing so would have been to provide an efficient manner in which a user can create the point cloud data. Therefore, it would have been obvious to combine Lee with Oyman to obtain the invention as specified in claims 10 and 20.

Referring to claim 11, Oyman discloses wherein attributes are related to the specific number of viewpoints (paragraph 59, the ROI/viewport information may include one or more of the parameters below. FIG. 5 depicts these parameters in accordance with various embodiments) based on representative attribute information generated from the selected specific number of viewpoints (paragraph 84, it may be possible to indicate recommended viewports via specific contextual information (e.g., the position of the ball, position of a star player, etc.) along with (or instead of) the coordinate-based description of the content coverage).

Claims 8 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Oyman et al., US Publication 2020/0382764, and Lee et al., US Publication 2020/0153885 as applied to claims 1 and 9 above, and further in view of Mammou, US Publication 2019/0087979 (hereafter "Mammou").

Referring to claims 8 and 16, Oyman discloses wherein the point cloud data comprises geometry data and data for the attributes obtained from objects (paragraph 28, A point cloud is a set of points {v}, each point v having a spatial position (x, y, z) comprising the geometry and a vector of attributes such as colors (Y, U, V), normals, curvature or others).
Oyman does not disclose expressly wherein the bitstream contains valid camera viewpoint-related parameter information generated based on an occluded area between a first object and a second object among the objects. Mammou discloses wherein the bitstream contains valid camera viewpoint-related parameter information generated based on an occluded area between a first object and a second object among the objects (paragraph 443-444, In some embodiments, active and non-active portions of an image frame may be indicated by a "mask." For example, a mask may indicate a portion of an image that is a padding portion or may indicate non-active points of a point cloud, such as points that are hidden from view in one or more viewing angles. In some embodiments, a "mask" may be encoded along with patch images or projections). At the time of the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to include valid camera viewpoint-related parameter information into a bitstream. The motivation for doing so would have been to improve coding efficiency by adjusting rate control and rate allocation depending on whether an object is hidden in one particular point of view. Therefore, it would have been obvious to combine Mammou with Oyman to obtain the invention as specified in claims 8 and 16.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PETER K HUNTSINGER whose telephone number is (571)272-7435. The examiner can normally be reached Monday - Friday, 8:30 - 5:00.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Benny Q Tieu, can be reached at 571-272-7490. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/PETER K HUNTSINGER/
Primary Examiner, Art Unit 2682
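The rejection turns on what the bitstream signals: a number of attributes for the atlas, partition dimensions for the packed-frame regions, and camera parameters. As a minimal sketch of that kind of parameter-set signaling (the field names, byte layout, and helper functions below are my own invention for illustration, not actual V-PCC/V3C syntax and not the application's claim language):

```python
import struct

# Hypothetical parameter-set payload carrying the three disputed fields.
# Layout (big-endian): 1 byte attribute count, 2+2 bytes partition
# width/height, 1 byte camera-parameter count, then float32 values.
def pack_params(num_attributes, partition_w, partition_h, cam_params):
    """Serialize attribute count, partition dimensions, and camera
    parameters (e.g., focal length, principal point) into bytes."""
    header = struct.pack(">BHH", num_attributes, partition_w, partition_h)
    cams = struct.pack(f">{len(cam_params)}f", *cam_params)
    return header + struct.pack(">B", len(cam_params)) + cams

def unpack_params(buf):
    """Parse the payload produced by pack_params."""
    num_attributes, pw, ph = struct.unpack_from(">BHH", buf, 0)
    (n_cam,) = struct.unpack_from(">B", buf, 5)
    cam = struct.unpack_from(f">{n_cam}f", buf, 6)
    return num_attributes, pw, ph, list(cam)

bits = pack_params(3, 1280, 1280, [1000.0, 640.0, 640.0])
print(unpack_params(bits))  # (3, 1280, 1280, [1000.0, 640.0, 640.0])
```

A real codec would express these fields in the SPS and atlas parameter sets defined by the V-PCC specification rather than an ad hoc struct layout; the sketch only shows that each disputed limitation corresponds to a concrete, parseable field.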

Prosecution Timeline

Feb 23, 2023: Application Filed
Jul 03, 2025: Non-Final Rejection — §103, §DP
Oct 02, 2025: Response Filed
Oct 02, 2025: Response after Non-Final Action
Dec 05, 2025: Response Filed
Feb 03, 2026: Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12540884: Determining Fracture Roughness from a Core (granted Feb 03, 2026; 2y 5m to grant)
Patent 12412381: METHODS AND SYSTEMS FOR CONTROLLING OPERATION OF WIRELINE CABLE SPOOLING EQUIPMENT (granted Sep 09, 2025; 2y 5m to grant)
Patent 12387360: APPARATUS AND METHOD FOR ESTIMATING UNCERTAINTY OF IMAGE COORDINATE (granted Aug 12, 2025; 2y 5m to grant)
Patent 12388943: PRINTING SYSTEM USING FLUORESENT AND NON-FLUORESENT INK, PRINTING APPARATUS, IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND CONTROL METHOD THEREOF (granted Aug 12, 2025; 2y 5m to grant)
Patent 12374081: DIGITAL IMAGE PROCESSING TECHNIQUES USING BOUNDING BOX PRECISION MODELS (granted Jul 29, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 28% (45% with interview, a +16.7% lift)
Median Time to Grant: 4y 11m
PTA Risk: Moderate
Based on 322 resolved cases by this examiner. Grant probability derived from career allow rate.
