Prosecution Insights
Last updated: April 19, 2026
Application No. 18/290,622

POINT CLOUD DATA TRANSMISSION DEVICE, POINT CLOUD DATA TRANSMISSION METHOD, POINT CLOUD DATA RECEPTION DEVICE, AND POINT CLOUD DATA RECEPTION METHOD

Status: Non-Final OA (§103)
Filed: Jan 19, 2024
Examiner: FEREJA, SAMUEL D
Art Unit: 2487
Tech Center: 2400 — Computer Networks
Assignee: LG Electronics Inc.
OA Round: 3 (Non-Final)
Grant Probability: 75% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 8m
With Interview: 86%

Examiner Intelligence

Career Allow Rate: 75% (above average; 458 granted / 614 resolved; +16.6% vs TC avg)
Interview Lift: +11.8% for resolved cases with interview (moderate, roughly +12%)
Typical Timeline: 2y 8m avg prosecution; 66 applications currently pending
Career History: 680 total applications across all art units

Statute-Specific Performance

§101: 3.6% (-36.4% vs TC avg)
§103: 64.1% (+24.1% vs TC avg)
§102: 13.8% (-26.2% vs TC avg)
§112: 7.9% (-32.1% vs TC avg)
Tech Center averages are estimates. Based on career data from 614 resolved cases.
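The per-statute deltas above are internally consistent: subtracting each delta from its rate implies the same Tech Center baseline, about 40%, for every statute, which suggests the deltas were computed against a single TC-wide figure. A quick arithmetic check of the listed numbers:

```python
# Examiner's statute-specific rates and their deltas vs the TC average,
# exactly as listed above (in percent).
stats = {
    "101": (3.6, -36.4),
    "103": (64.1, +24.1),
    "102": (13.8, -26.2),
    "112": (7.9, -32.1),
}

# Back out the implied Tech Center average for each statute: rate - delta.
implied = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
print(implied)  # every statute implies the same 40.0% baseline
```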

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of the Claims

Claims 13-18 are currently pending in the application. Claims 13-14 are amended. Claims 1-12 are cancelled.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/07/2026 has been entered.

Response to Arguments / Amendments

Applicant's arguments have been fully considered but are rendered moot in view of the new ground of rejection necessitated by amendments initiated by the applicant.

Claim Objections

Claims 13 and 14 are objected to because of the following informality: the recitation "the objected is tracked based on the loss value" in both claims appears to be a typographical error. Appropriate correction is required.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 13-15 and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang et al. (US 20220301229, hereinafter Zhang) in view of Lee et al. (US 20210350245, hereinafter Lee).

Regarding Claim 13

Zhang discloses a method of receiving point cloud data ([0030]), the method comprising:

receiving a bitstream containing point cloud data ([0030]: a point cloud decoding method including parsing a bitstream to obtain a syntax element, where the syntax element includes an index of a normal axis of a to-be-decoded patch in a to-be-decoded point cloud and information used to indicate description information of a bounding box size of the to-be-decoded point cloud); and

decoding the point cloud data ([0030]: the point cloud decoder parses the bitstream and reconstructs geometry information of the to-be-decoded point cloud based on tangent axes and bitangent axes of one or more patches in the to-be-decoded point cloud, where the one or more patches include the to-be-decoded patch; geometry information of a point cloud refers to coordinates of points in three-dimensional space),

wherein points of the point cloud data in a bounding box are represented based on a length of an axis for the bounding box and a length of a point for time ([0013], FIG. 6A, FIG. 8A: a point cloud with description information of the bounding box size that reflects a posture of the point cloud in three-dimensional space, such as information about whether the point cloud is vertical or horizontal in the three-dimensional space; [0012]: two pieces of information (patch granularity and frame-level description information of the bounding box size of the to-be-encoded point cloud) are encoded into the bitstream at different times),

wherein an object for the points is detected by a plane of a bounding box including the point cloud data and a value for positions of the points and a size of the bounding box ([0018]: the point cloud includes the size relationship among the side lengths of the bounding box of the to-be-encoded point cloud, when the normal axis of the to-be-encoded patch is different from the coordinate axis on which the longest side of the bounding box of the to-be-encoded point cloud is located).

Zhang does not explicitly disclose wherein the decoding of the point cloud data includes: transforming the point cloud data based on source coordinates, generating a loss value based on a difference between the transformed point cloud data and the point cloud data, and wherein "the objected" is tracked based on the loss value.

Lee teaches transforming the point cloud data based on source coordinates ([0049], FIG. 3: performing a 3D-to-2D point cloud transformation) and generating a loss value based on a difference between the transformed point cloud data and the point cloud data ([0051]: the mean square error (MSE) between an input image and a reconstructed image may be used as a loss function; [0052], FIG. 4: step S430 trains a 3D autoencoder, with the chamfer distance between an input point and a reconstructed point used as the loss function; [0053]: the chamfer distance is suitable as a loss function for a 3D point cloud and is calculated by Eq. 3).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the transforming of the point cloud data based on source coordinates as taught by Lee ([0051]) into the point cloud data system of Zhang so as to provide an autoencoder capable of reconstructing a 3D point cloud, reducing the amount of storage space used, and reducing computational complexity compared to a structure employing a conventional fully connected layer (Lee, [0012]).

Regarding Claim 14

Apparatus claim 14 recites the device corresponding to the method of claim 13, and the rejections above are incorporated herein for the same reasons.

Regarding Claim 15

Zhang in view of Lee discloses the device of claim 14. Zhang further discloses wherein the object is tracked by applying the value to the plane ([0013]: projected pictures of patches are closely arranged in an occupancy map of the point cloud, that is, there is a relatively small quantity of empty pixels in the occupancy map of the point cloud).
Regarding Claim 17

Zhang in view of Lee discloses the device of claim 14. Zhang further discloses wherein the value is derived based on an error between geometry data of the point cloud data and transformed geometry data ([0030]: determining a tangent axis and a bitangent axis of the to-be-decoded patch based on the index of the normal axis of the to-be-decoded patch and the information used to indicate the description information of the bounding box size of the to-be-decoded point cloud that are obtained through parsing, and reconstructing geometry information of the to-be-decoded point cloud based on tangent axes and bitangent axes of one or more patches in the to-be-decoded point cloud, where the one or more patches include the to-be-decoded patch; the method may be performed by a point cloud decoder, and geometry information of a point cloud refers to coordinates of a point in the point cloud (for example, each point in the point cloud) in three-dimensional space).

Regarding Claim 18

Zhang in view of Lee discloses the device of claim 14. Zhang further discloses wherein the value is derived based on the positions, predicted positions, an orientation angle, and a width, a height, and a length including the points ([0030]: geometry information of a point cloud refers to coordinates of a point in the point cloud (for example, each point in the point cloud) in three-dimensional space; [0032]: the description information of the bounding box size of the to-be-decoded point cloud may include the size relationship among the side lengths of the bounding box of the to-be-decoded point cloud, or the coordinate axis on which the longest side of the bounding box of the to-be-decoded point cloud is located).

Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Zhang et al. (US 20220301229, hereinafter Zhang) in view of Lee et al. (US 20210350245, hereinafter Lee) and Zakharchenko et al. (US 20210217202, hereinafter Zakharchenko).

Regarding Claim 16

Zhang in view of Lee discloses the device of claim 14, but does not explicitly disclose wherein the bounding box is obtained based on a field of view. Zakharchenko teaches wherein the bounding box is obtained based on a field of view ([0144], FIG. 15: viewing the 3-D patch bounding box 1504 from the side opposite to the projection plane 1508 (e.g., side 1505), the V-PCC encoder would determine that when the attribute points included in the patch 3-D bounding box 1504 were projected onto the projection plane 1508, that projection would exhibit a maximum area over projecting onto other sides of the point cloud 3-D bounding box 1502).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the obtaining of the bounding box based on a field of view as taught by Zakharchenko ([0144]) into the point cloud data system of Zhang and Lee, since frames of patches of motion refinement data can be used during encoding to reduce the overall transmission bandwidth and allow the message to be properly decoded and received, whereas additional signaling increases the overall signaling overhead of the coding process and reduces the efficiency of the compression (Zakharchenko, [0083]).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Samuel D Fereja, whose telephone number is (469) 295-9243. The examiner can normally be reached 8AM-5PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, DAVID CZEKAJ, can be reached at (571) 272-7327.

The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SAMUEL D FEREJA/
Primary Examiner, Art Unit 2487
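The chamfer distance that Lee relies on as the loss function is not reproduced in the action (its Eq. 3 is only cited). A common formulation, sketched here as an assumption, averages each point's squared distance to its nearest neighbor in the other cloud, in both directions:

```python
import numpy as np

def chamfer_distance(p: np.ndarray, q: np.ndarray) -> float:
    """Symmetric chamfer distance between point clouds p (N, 3) and q (M, 3).

    Averages nearest-neighbor squared distances in both directions; some
    formulations sum instead of averaging (Lee's Eq. 3 is not quoted here).
    """
    # Pairwise squared distances, shape (N, M)
    d2 = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)
    # Nearest neighbor in each direction, then combine both terms
    return float(d2.min(axis=1).mean() + d2.min(axis=0).mean())

cloud = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
shifted = cloud + np.array([0.0, 0.5, 0.0])
print(chamfer_distance(cloud, cloud))    # identical clouds -> 0.0
print(chamfer_distance(cloud, shifted))  # 0.25 + 0.25 = 0.5
```

This is the sense in which "a loss value based on a difference between the transformed point cloud data and the point cloud data" is generated: a reconstruction that matches the input drives the loss toward zero.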

Prosecution Timeline

Jan 19, 2024
Application Filed
May 14, 2025
Non-Final Rejection — §103
Aug 18, 2025
Response Filed
Oct 05, 2025
Final Rejection — §103
Jan 07, 2026
Request for Continued Examination
Jan 25, 2026
Response after Non-Final Action
Feb 28, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597264
Method for Calibrating an Assistance System of a Civil Motor Vehicle
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12598318
METHOD AND SYSTEM-ON-CHIP FOR PERFORMING MEMORY ACCESS CONTROL WITH LIMITED SEARCH RANGE SIZE DURING VIDEO ENCODING
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12593018
SYSTEM AND METHOD FOR CONTROLLING PERCEPTUAL THREE-DIMENSIONAL ELEMENTS FOR DISPLAY
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12593036
METHOD AND APPARATUS FOR PROCESSING VIDEO SIGNAL
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12591123
METHOD FOR DETERMINING SLOPE OF SLIDE IN SLIDE SCANNING DEVICE, METHOD FOR CONTROLLING SLIDE SCANNING DEVICE AND SLIDE SCANNING DEVICE USING THE SAME
Granted Mar 31, 2026 (2y 5m to grant)
Study what changed in these applications to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 75% (86% with interview, +11.8%)
Median Time to Grant: 2y 8m
PTA Risk: High
Based on 614 resolved cases by this examiner. Grant probability derived from career allow rate.
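The headline figures can be reproduced from the raw career counts. A sketch, under the assumption that the dashboard adds the interview lift in percentage points to the unrounded allow rate before rounding:

```python
# Raw career counts and interview lift as reported above.
granted, resolved = 458, 614
interview_lift = 11.8  # percentage points (assumed additive)

allow_rate = 100 * granted / resolved                 # about 74.6%
grant_probability = round(allow_rate)                 # displayed as 75%
with_interview = round(allow_rate + interview_lift)   # displayed as 86%

print(grant_probability, with_interview)  # 75 86
```

Note that rounding the allow rate first (75% + 11.8% = 86.8%, rounding to 87%) would not match the displayed 86%, which is why the lift is assumed to apply to the unrounded rate.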
