Prosecution Insights
Last updated: April 19, 2026
Application No. 18/398,895

POINT CLOUD DATA TRANSMISSION DEVICE, POINT CLOUD DATA TRANSMISSION METHOD, POINT CLOUD DATA RECEPTION DEVICE, AND POINT CLOUD DATA RECEPTION METHOD

Status: Non-Final OA (§102)
Filed: Dec 28, 2023
Examiner: CHIO, TAT CHI
Art Unit: 2486
Tech Center: 2400 — Computer Networks
Assignee: LG Electronics Inc.
OA Round: 3 (Non-Final)
Grant Probability: 73% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 2m
With Interview: 90%

Examiner Intelligence

Career Allow Rate: 73% — above average (610 granted / 836 resolved; +15.0% vs TC avg)
Interview Lift: strong, +16.6% on resolved cases with interview
Typical Timeline: 3y 2m average prosecution; 49 currently pending
Career History: 885 total applications across all art units

Statute-Specific Performance

§101: 8.7% (-31.3% vs TC avg)
§103: 52.4% (+12.4% vs TC avg)
§102: 19.9% (-20.1% vs TC avg)
§112: 7.2% (-32.8% vs TC avg)
Deltas are versus the Tech Center average estimate • Based on career data from 836 resolved cases

Office Action

§102
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/4/2025 has been entered.

Response to Arguments

Applicant's arguments filed 12/4/2025 have been fully considered but they are not persuasive.

Applicant argues that Iguchi does not explicitly teach encoding geometry data of point cloud data based on a plurality of geometry layers in a segmented geometry slice. In response, the examiner respectfully disagrees. Iguchi teaches that the three-dimensional data encoding device stores a geometry information slice and an attribute information slice in a sample (sample) in a one-to-one relationship. Here, a slice includes information (layer data) on all layers. [0509]. The three-dimensional data encoding device starts a format transform of encoded data (S8811). The three-dimensional data encoding device then stores one slice including a plurality of layers in one sample (S8812). The three-dimensional data encoding device also stores layer information in metadata (S8813). The three-dimensional data encoding device forms a frame (access unit: AU) (S8814). [0513].

Applicant argues that Iguchi does not explicitly teach wherein the bitstream includes information for representing an identifier for the plurality of geometry layers, information for representing an identifier for the plurality of attribute layers, and information for a number of the plurality of geometry layers.
In response, the examiner respectfully disagrees. Iguchi teaches that FIG. 82 shows a case where one item of depth data is used as one item of slice data, in which a slice header is assigned for each item of depth data. The slice header includes depthId that identifies the layer of the depth data, layerId that indicates the layer to which the depth belongs, and length that indicates the length of the depth data. The slice header may further include groupId that indicates that data belongs to the same frame. That is, groupId indicates a frame (time) to which the data belongs. When these items of information are included in the slice header, the overall encoded data need not have hierarchical structure metadata. The three-dimensional data encoding device may store a parameter common to all the depths in the header of the slice that transmits the first depth, or may store the parameter in a common header and arrange the parameter ahead of the data of depth #0. Note that the three-dimensional data encoding device may store depthId and groupId in the slice header, and store the number of depths and layerId and length for each depth in the hierarchical structure metadata or the common header. The data of depth #0 can be decoded by itself, and data of the depths other than depth #0 cannot be decoded by itself and depends on other data. The three-dimensional data decoding device determines that data of the depths other than depth #0 cannot be decoded by itself, and decodes depth data to be decoded along with depth data that has the same groupId as the depth data to be decoded and has depthId smaller than depthId of the depth data to be decoded. FIG. 83 shows a case where one item of layer data is used as one item of slice data, in which a slice header is assigned for each item of layer data. The slice header includes layerId, and the depth count (num_depth) indicating the number of depths included in the layer, and the length (length) of the depth data. 
The slice header may further include groupId that indicates that layer data belongs to the same frame. Note that the slice header may include layerId and groupId, and the number of layers, the number of depths included in each layer, and the length information (length) on each depth may be included in the hierarchical structure metadata. By using this structure, the above-described data can be more easily divided into items of data on a per layer basis, so that the processing amount involved with the division can be reduced. In addition, divisional data can be transmitted, so that the amount of transmission can be reduced. In addition, the geometry information and the attribute information can be divided on a per layer basis in the same manner. [0530] – [0534].

Applicant argues that Iguchi does not explicitly teach wherein the plurality of geometry layers are related to levels of an octree. In response, the examiner respectfully disagrees. Iguchi teaches an octree representation and a scan order for geometry information. Geometry information (geometry data) is transformed into an octree structure (octree transform) and then encoded. The octree structure includes nodes and leaves. Each node has eight nodes or leaves, and each leaf has voxel (VXL) information. FIG. 10 is a diagram showing an example structure of geometry information including a plurality of voxels. FIG. 11 is a diagram showing an example in which the geometry information shown in FIG. 10 is transformed into an octree structure. Here, of leaves shown in FIG. 11, leaves 1, 2, and 3 represent voxels VXL1, VXL2, and VXL3 shown in FIG. 10, respectively, and each represent VXL containing a point cloud (referred to as a valid VXL, hereinafter). [0222]. A plurality of levels (referred to also as hierarchical levels) are defined as shown in FIG. 46.
Level 2 is a point cloud represented by point cloud data resulting from octree division from depth=0 to a last depth (depth=6), level 1 is a point cloud represented by point cloud data resulting from octree division from depth=0 to depth=5, and level 0 is a point cloud represented by point cloud data resulting from octree division from depth=0 to depth=4. [0466].

Applicant argues that Iguchi does not explicitly teach wherein the segmented geometry slice is mapped to the plurality of geometry layers. In response, the examiner respectfully disagrees. Iguchi teaches in Fig. 76 that Geometry Information Slice 1 is mapped to Layer 0 and Layer 1.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claim(s) 1-15 is/are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Iguchi et al. (US 2022/0094982 A1).

Consider claim 5, Iguchi teaches an apparatus for encoding point cloud data, the apparatus comprising: a memory ([0149] and [0151]); and at least one processor connected to the memory ([0149] and [0151]), the at least one processor configured to: encode geometry data of point cloud data based on a plurality of geometry layers in a segmented geometry slice (the three-dimensional data encoding device stores a geometry information slice and an attribute information slice in a sample (sample) in a one-to-one relationship. Here, a slice includes information (layer data) on all layers. [0509].
The three-dimensional data encoding device starts a format transform of encoded data (S8811). The three-dimensional data encoding device then stores one slice including a plurality of layers in one sample (S8812). The three-dimensional data encoding device also stores layer information in metadata (S8813). The three-dimensional data encoding device forms a frame (access unit: AU) (S8814). [0513]); and encode attribute data of the point cloud data based on a plurality of attribute layers in a segmented attribute slice (the three-dimensional data encoding device stores a geometry information slice and an attribute information slice in a sample (sample) in a one-to-one relationship. Here, a slice includes information (layer data) on all layers. [0509]. The three-dimensional data encoding device starts a format transform of encoded data (S8811). The three-dimensional data encoding device then stores one slice including a plurality of layers in one sample (S8812). The three-dimensional data encoding device also stores layer information in metadata (S8813). The three-dimensional data encoding device forms a frame (access unit: AU) (S8814). [0513]), the bitstream includes information for representing an identifier for the plurality of geometry layers, information for representing an identifier for the plurality of attribute layers, and information for a number of the plurality of geometry layers (that FIG. 82 shows a case where one item of depth data is used as one item of slice data, in which a slice header is assigned for each item of depth data. The slice header includes depthId that identifies the layer of the depth data, layerId that indicates the layer to which the depth belongs, and length that indicates the length of the depth data. The slice header may further include groupId that indicates that data belongs to the same frame. That is, groupId indicates a frame (time) to which the data belongs. 
When these items of information are included in the slice header, the overall encoded data need not have hierarchical structure metadata. The three-dimensional data encoding device may store a parameter common to all the depths in the header of the slice that transmits the first depth, or may store the parameter in a common header and arrange the parameter ahead of the data of depth #0. Note that the three-dimensional data encoding device may store depthId and groupId in the slice header, and store the number of depths and layerId and length for each depth in the hierarchical structure metadata or the common header. The data of depth #0 can be decoded by itself, and data of the depths other than depth #0 cannot be decoded by itself and depends on other data. The three-dimensional data decoding device determines that data of the depths other than depth #0 cannot be decoded by itself, and decodes depth data to be decoded along with depth data that has the same groupId as the depth data to be decoded and has depthId smaller than depthId of the depth data to be decoded. FIG. 83 shows a case where one item of layer data is used as one item of slice data, in which a slice header is assigned for each item of layer data. The slice header includes layerId, and the depth count (num_depth) indicating the number of depths included in the layer, and the length (length) of the depth data. The slice header may further include groupId that indicates that layer data belongs to the same frame. Note that the slice header may include layerId and groupId, and the number of layers, the number of depths included in each layer, and the length information (length) on each depth may be included in the hierarchical structure metadata. By using this structure, the above-described data can be more easily divided into items of data on a per layer basis, so that the processing amount involved with the division can be reduced. 
In addition, divisional data can be transmitted, so that the amount of transmission can be reduced. In addition, the geometry information and the attribute information can be divided on a per layer basis in the same manner. [0530] – [0535], Fig. 82, Fig. 83), wherein the plurality of geometry layers are related to levels of an octree (an octree representation and a scan order for geometry information will be described. Geometry information (geometry data) is transformed into an octree structure (octree transform) and then encoded. The octree structure includes nodes and leaves. Each node has eight nodes or leaves, and each leaf has voxel (VXL) information. FIG. 10 is a diagram showing an example structure of geometry information including a plurality of voxels. FIG. 11 is a diagram showing an example in which the geometry information shown in FIG. 10 is transformed into an octree structure. Here, of leaves shown in FIG. 11, leaves 1, 2, and 3 represent voxels VXL1, VXL2, and VXL3 shown in FIG. 10, respectively, and each represent VXL containing a point cloud (referred to as a valid VXL, hereinafter). [0222]. A plurality of levels (referred to also as hierarchical levels) are defined as shown in FIG. 46. Level 2 is a point cloud represented by point cloud data resulting from octree division from depth=0 to a last depth (depth=6), level 1 is a point cloud represented by point cloud data resulting from octree division from depth=0 to depth=5, and level 0 is a point cloud represented by point cloud data resulting from octree division from depth=0 to depth=4. [0466]), wherein the segmented geometry slice is mapped to the plurality of geometry layers (Geometry Information Slice 1 is mapped to Layer 0 and Layer 1. Fig. 76 ), wherein the segmented attribute slice is mapped to the plurality of attribute layers (Attribute Information Slice 1 is mapped to Layer 0 and Layer 1. Fig. 76). 
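As an illustration only (not part of the prosecution record), the slice-header fields the rejection draws from Iguchi's Figs. 82 and 83 (depthId, layerId, length, groupId, num_depth) can be sketched as hypothetical Python structures; the field names track the quoted reference, but the layout is an assumption for readability:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical layouts mirroring the header fields cited from Iguchi
# Figs. 82-83; names track the reference (depthId, layerId, num_depth).

@dataclass
class DepthSliceHeader:
    """Fig. 82 style: one item of depth data used as one item of slice data."""
    depth_id: int                    # identifies the layer of the depth data
    layer_id: int                    # layer to which the depth belongs
    length: int                      # length of the depth data
    group_id: Optional[int] = None   # frame (time) the data belongs to

@dataclass
class LayerSliceHeader:
    """Fig. 83 style: one item of layer data used as one item of slice data."""
    layer_id: int
    num_depth: int                   # number of depths included in the layer
    length: int                      # length of the depth data
    group_id: Optional[int] = None
```

As the quoted passages note, these fields may instead be carried in hierarchical structure metadata or a common header; the split shown here is only one of the options Iguchi describes.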
Consider claim 1, claim 1 recites the method implemented by the apparatus recited in claim 5. Thus, it is rejected for the same reasons.

Consider claim 13, Iguchi teaches an apparatus for decoding point cloud data, the apparatus comprising: a memory ([0149] and [0151]); and at least one processor connected to the memory ([0149] and [0151]), the at least one processor configured to: decode geometry data of point cloud data based on a plurality of geometry layers in a segmented geometry slice in a bitstream ([0214] – [0215], [0509] – [0513]); and decode attribute data of the point cloud data based on a plurality of attribute layers in a segmented attribute slice ([0233], [0509] – [0513]), wherein the bitstream includes information for representing an identifier for the plurality of geometry layers, information for representing an identifier for the plurality of attribute layers, and information for a number of the plurality of geometry layers (that FIG. 82 shows a case where one item of depth data is used as one item of slice data, in which a slice header is assigned for each item of depth data. The slice header includes depthId that identifies the layer of the depth data, layerId that indicates the layer to which the depth belongs, and length that indicates the length of the depth data. The slice header may further include groupId that indicates that data belongs to the same frame. That is, groupId indicates a frame (time) to which the data belongs. When these items of information are included in the slice header, the overall encoded data need not have hierarchical structure metadata. The three-dimensional data encoding device may store a parameter common to all the depths in the header of the slice that transmits the first depth, or may store the parameter in a common header and arrange the parameter ahead of the data of depth #0.
Note that the three-dimensional data encoding device may store depthId and groupId in the slice header, and store the number of depths and layerId and length for each depth in the hierarchical structure metadata or the common header. The data of depth #0 can be decoded by itself, and data of the depths other than depth #0 cannot be decoded by itself and depends on other data. The three-dimensional data decoding device determines that data of the depths other than depth #0 cannot be decoded by itself, and decodes depth data to be decoded along with depth data that has the same groupId as the depth data to be decoded and has depthId smaller than depthId of the depth data to be decoded. FIG. 83 shows a case where one item of layer data is used as one item of slice data, in which a slice header is assigned for each item of layer data. The slice header includes layerId, and the depth count (num_depth) indicating the number of depths included in the layer, and the length (length) of the depth data. The slice header may further include groupId that indicates that layer data belongs to the same frame. Note that the slice header may include layerId and groupId, and the number of layers, the number of depths included in each layer, and the length information (length) on each depth may be included in the hierarchical structure metadata. By using this structure, the above-described data can be more easily divided into items of data on a per layer basis, so that the processing amount involved with the division can be reduced. In addition, divisional data can be transmitted, so that the amount of transmission can be reduced. In addition, the geometry information and the attribute information can be divided on a per layer basis in the same manner. [0530] – [0535], Fig. 82, Fig. 83), wherein the plurality of geometry layers are related to levels of an octree (an octree representation and a scan order for geometry information will be described. 
Geometry information (geometry data) is transformed into an octree structure (octree transform) and then encoded. The octree structure includes nodes and leaves. Each node has eight nodes or leaves, and each leaf has voxel (VXL) information. FIG. 10 is a diagram showing an example structure of geometry information including a plurality of voxels. FIG. 11 is a diagram showing an example in which the geometry information shown in FIG. 10 is transformed into an octree structure. Here, of leaves shown in FIG. 11, leaves 1, 2, and 3 represent voxels VXL1, VXL2, and VXL3 shown in FIG. 10, respectively, and each represent VXL containing a point cloud (referred to as a valid VXL, hereinafter). [0222]. A plurality of levels (referred to also as hierarchical levels) are defined as shown in FIG. 46. Level 2 is a point cloud represented by point cloud data resulting from octree division from depth=0 to a last depth (depth=6), level 1 is a point cloud represented by point cloud data resulting from octree division from depth=0 to depth=5, and level 0 is a point cloud represented by point cloud data resulting from octree division from depth=0 to depth=4. [0466]), wherein the segmented geometry slice is mapped to the plurality of geometry layers (Geometry Information Slice 1 is mapped to Layer 0 and Layer 1. Fig. 76), wherein the segmented attribute slice is mapped to the plurality of attribute layers (Attribute Information Slice 1 is mapped to Layer 0 and Layer 1. Fig. 76).

Consider claim 9, claim 9 recites the method implemented by the apparatus recited in claim 13. Thus, it is rejected for the same reasons.

Consider claim 6, Iguchi teaches the geometry data in the bitstream is segmented based on the levels of the octree ([0463] – [0466], [0529] – [0535]; Fig. 46, Fig. 82 – Fig.
83) and the attribute data bitstream is segmented based on a level of detail (LOD) ([0230] – [0237], [0753]), and the bitstream includes signaling information for the segmented geometry slice and the segmented attribute slice ([0351] – [0360], [0519] – [0526], [0529] – [0535]).

Consider claim 7, Iguchi teaches wherein the attribute data are included in the LOD ([0230] – [0237], [0753]).

Consider claim 8, Iguchi teaches wherein the bitstream includes a data unit for the encoded geometry data ([0463] – [0466], [0519] – [0526], [0529] – [0535]; Fig. 46, Fig. 76, Fig. 82 – Fig. 83), the data unit includes header information ([0463] – [0466], [0529] – [0535]; Fig. 46, Fig. 76, Fig. 82 – Fig. 83).

Consider claim 2, claim 2 recites the method implemented by the apparatus recited in claim 6. Thus, it is rejected for the same reasons.

Consider claim 3, Iguchi teaches the geometry data in the bitstream is segmented based on the levels of the octree ([0463] – [0466], [0529] – [0535]; Fig. 46, Fig. 82 – Fig. 83) and the attribute data bitstream is segmented based on a level of detail (LOD) ([0230] – [0237], [0753]), wherein the attribute data are included in the LOD ([0230] – [0237], [0753]).

Consider claim 4, claim 4 recites the method implemented by the apparatus recited in claim 8. Thus, it is rejected for the same reasons.

Consider claim 14, Iguchi teaches the geometry data in the bitstream is segmented based on the levels of the octree ([0463] – [0466], [0529] – [0535]; Fig. 46, Fig. 82 – Fig. 83) and the attribute data bitstream is segmented based on a level of detail (LOD) ([0230] – [0237], [0753]), and the bitstream includes signaling information for the segmented geometry slice and the segmented attribute slice ([0351] – [0360], [0519] – [0526], [0529] – [0535]).
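The level definitions quoted earlier from Fig. 46 (level 0 covers octree depths 0-4, level 1 depths 0-5, level 2 depths 0-6) amount to a simple depth-range lookup. A minimal sketch, with hypothetical helper names not drawn from the reference:

```python
# Level-to-depth mapping per the Fig. 46 passage ([0466]); the helper name
# and dict are illustrative, not from the reference itself.

LAST_DEPTH_FOR_LEVEL = {0: 4, 1: 5, 2: 6}   # level 2 reaches the last depth

def depths_for_level(level: int) -> range:
    """Octree depths needed to reconstruct the point cloud at `level`."""
    return range(0, LAST_DEPTH_FOR_LEVEL[level] + 1)
```

Under this mapping a decoder targeting level 0 reads only depths 0 through 4, yielding a coarser point cloud, while level 2 consumes every depth for the full-resolution cloud.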
Consider claim 15, Iguchi teaches wherein the attribute data are included in the LOD ([0230] – [0237], [0753]), wherein each data unit in the bitstream for the geometry data is matched to each level of the octree ([0463] – [0466], [0529] – [0535]; Fig. 46, Fig. 82 – Fig. 83), wherein each data unit in the bitstream for the attribute data is matched to each LOD ([0463] – [0466], [0529] – [0535]; Fig. 46, Fig. 82 – Fig. 83), and wherein the data unit includes header information ([0463] – [0466], [0529] – [0535]; Fig. 46, Fig. 82 – Fig. 83).

Consider claim 10, Iguchi teaches the bitstream includes signaling information for the segmented geometry slice and the segmented attribute slice ([0351] – [0360], [0519] – [0526], [0529] – [0535]).

Consider claim 11, Iguchi teaches wherein the attribute data are included in a level of detail (LOD) ([0230] – [0237], [0753]), wherein the bitstream includes a geometry bitstream including the geometry data ([0206] – [0207] and [0490]) and an attribute bitstream including the attribute data ([0230] and [0490]), wherein the geometry bitstream is segmented based on the levels of the octree ([0206] – [0207] and [0490]), and wherein the attribute bitstream is segmented based on the LOD ([0230] and [0490]).

Consider claim 12, Iguchi teaches wherein each data unit in the bitstream for the geometry data is matched to each level of the octree ([0463] – [0466], [0529] – [0535]; Fig. 46, Fig. 82 – Fig. 83), wherein each data unit in the bitstream for the attribute data is matched to each LOD ([0463] – [0466], [0529] – [0535]; Fig. 46, Fig. 82 – Fig. 83), and wherein the data unit includes header information ([0463] – [0466], [0529] – [0535]; Fig. 46, Fig. 82 – Fig. 83).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TAT CHI CHIO whose telephone number is (571)272-9563. The examiner can normally be reached Monday-Thursday 10am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, JAMIE J ATALA can be reached at 571-272-7384. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TAT C CHIO/
Primary Examiner, Art Unit 2486
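Stepping outside the record: the depth-dependency rule the rejection quotes (depth #0 decodes by itself; any other depth is decoded together with depth data that shares its groupId and has a smaller depthId) can be sketched as follows. The record layout and function name are hypothetical, not from Iguchi:

```python
# Illustrative decoder-side dependency resolution per the rule quoted from
# Iguchi [0530]-[0534]; each record is a dict with depth_id and group_id.

def decoding_set(records, target):
    """Return the records needed to decode `target`, in ascending depth order."""
    if target["depth_id"] == 0:
        return [target]                       # depth #0 decodes by itself
    deps = [r for r in records
            if r["group_id"] == target["group_id"]
            and r["depth_id"] < target["depth_id"]]
    return sorted(deps, key=lambda r: r["depth_id"]) + [target]
```

For a target at depth 2 in group 1, this gathers the group-1 records at depths 0 and 1 before the target itself, matching the examiner's reading that non-zero depths depend on lower depths of the same frame.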

Prosecution Timeline

Dec 28, 2023
Application Filed
Feb 20, 2025
Non-Final Rejection — §102
May 27, 2025
Response Filed
Sep 02, 2025
Final Rejection — §102
Dec 04, 2025
Request for Continued Examination
Dec 14, 2025
Response after Non-Final Action
Mar 04, 2026
Non-Final Rejection — §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12587653
Spatial Layer Rate Allocation
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12549764
THREE-DIMENSIONAL DATA ENCODING METHOD, THREE-DIMENSIONAL DATA DECODING METHOD, THREE-DIMENSIONAL DATA ENCODING DEVICE, AND THREE-DIMENSIONAL DATA DECODING DEVICE
Granted Feb 10, 2026 (2y 5m to grant)
Patent 12549845
CAMERA SETTING ADJUSTMENT BASED ON EVENT MAPPING
Granted Feb 10, 2026 (2y 5m to grant)
Patent 12546657
METHODS AND SYSTEMS FOR REMOTE MONITORING OF ELECTRICAL EQUIPMENT
Granted Feb 10, 2026 (2y 5m to grant)
Patent 12549710
MULTIPLE HYPOTHESIS PREDICTION WITH TEMPLATE MATCHING IN VIDEO CODING
Granted Feb 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 73%
With Interview: 90% (+16.6%)
Median Time to Grant: 3y 2m
PTA Risk: High
Based on 836 resolved cases by this examiner. Grant probability derived from career allow rate.
