Prosecution Insights
Last updated: April 19, 2026
Application No. 18/689,055

POINT CLOUD COMPLETION DEVICE, POINT CLOUD COMPLETION METHOD, AND POINT CLOUD COMPLETION PROGRAM

Non-Final OA: §103
Filed: Mar 04, 2024
Examiner: TUNG, KEE M
Art Unit: 2611
Tech Center: 2600 (Communications)
Assignee: Nippon Telegraph and Telephone Corporation
OA Round: 1 (Non-Final)
Grant Probability: 8% (At Risk)
Expected OA Rounds: 1-2
Time to Grant: 3y 0m
Grant Probability with Interview: 18%

Examiner Intelligence

Grants only 8% of cases.
Career Allow Rate: 8% (15 granted / 189 resolved; -54.1% vs TC avg)
Interview Lift: +10.6% (moderate lift across resolved cases with interview)
Typical Timeline: 3y 0m avg prosecution; 12 currently pending
Career History: 201 total applications across all art units
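The headline figures above reduce to simple arithmetic; a minimal sketch, assuming the dashboard derives the "with interview" figure by adding the lift to the baseline (its exact rounding convention is an assumption):

```python
# Arithmetic behind the examiner stats above. The dashboard displays whole
# percentages; how 18.5% becomes the displayed 18% is an assumption here.
granted, resolved = 15, 189
allow_rate = granted / resolved          # career allow rate, shown as 8%
with_interview = allow_rate + 0.106      # +10.6 percentage-point interview lift

print(f"{allow_rate:.1%}")       # 7.9%
print(f"{with_interview:.1%}")   # 18.5%
```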

Statute-Specific Performance

§101: 9.3% (-30.7% vs TC avg)
§103: 56.3% (+16.3% vs TC avg)
§102: 17.8% (-22.2% vs TC avg)
§112: 11.2% (-28.8% vs TC avg)

Black line = Tech Center average estimate. Based on career data from 189 resolved cases.
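The "vs TC avg" deltas above are internally consistent: each statute backs out the same Tech Center baseline. A quick check:

```python
# Recover the Tech Center average implied by each statute's delta above:
# examiner_rate - delta = TC average. Every statute yields the same 40.0%.
examiner = {"101": 9.3, "103": 56.3, "102": 17.8, "112": 11.2}
delta    = {"101": -30.7, "103": 16.3, "102": -22.2, "112": -28.8}

tc_avg = {s: round(examiner[s] - delta[s], 1) for s in examiner}
print(tc_avg)   # {'101': 40.0, '103': 40.0, '102': 40.0, '112': 40.0}
```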

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Status of Claims

Claims 1-15 are currently pending in this application.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on March 4, 2024 is hereby acknowledged. All references have been considered by the examiner. Initialed copies of the PTO-1449 are included in this correspondence.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary.
Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 3-4, 6-7, 10-11 and 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Yuan et al. ("PCN: Point Completion Network," 2018 International Conference on 3D Vision, September 5, 2018, pp. 728-737; IDS) in view of Liu et al. ("PCCN: Point Cloud Colorization Network," 2019 IEEE International Conference on Image Processing (ICIP), September 22, 2019, pp. 3716-3720; IDS) and further in view of Wang et al. (US 2019/0122375).

Regarding claim 1, Yuan teaches a point cloud complementing device comprising a processor configured to execute operations (e.g., In this work, we present a novel learning-based method to complete these partial data using an encoder-decoder network that directly maps partial shapes to complete shapes, both represented as 3D point clouds. Yuan: sec. 1 para. 1 L.5-9.
It is obvious that the learning-based method using an encoder-decoder network consists of a computer to perform the steps of the method to complete the partial point cloud data) comprising: receiving, as an input, a colored three-dimensional point cloud of each point including a missing region and a number of points to be complemented (e.g., For example, the cars in the LiDAR scan shown in Figure 1 are hardly recognizable due to sparsity of the data points and missing regions caused by limited sensor resolution and occlusion. Yuan: sec. 1 para. 1 L.2-5 and Figure 1. See 1_1 below);

extracting a feature vector of the colored three-dimensional point cloud using a feature extractor learned in advance (e.g., In this section, we describe the architecture of our proposed model, the Point Completion Network (PCN). As shown in Figure 2, PCN is an encoder-decoder network. The encoder takes the input point cloud and outputs a k-dimensional feature vector. Yuan: sec. 4 para. 1 L.1-5. The encoder is in charge of summarizing the geometric information in the input point cloud as a feature vector v ∈ R^k where k = 1024. Yuan: sec. 4.1 para. 1 L.1-3. Specifically, the encoder consists of two stacked PointNet (PN) layers. The first layer consumes m input points represented as an m x 3 matrix P, where each row is the 3D coordinate of a point p_i = (x, y, z). Yuan: sec. 4.1 para. 1 L.1-4. See 1_1 below);

and generating, by a point cloud complementing model learned in advance in consideration of an error between color information and brightness information, based on the feature vector and the number of points to be complemented as inputs, a point cloud by complementing the input three-dimensional point cloud up to the number of points to be complemented (e.g., The decoder is responsible for generating the output point cloud from the feature vector v. Yuan: sec. 4.2 para. 1 L.1-2.
Our key observation is that the fully-connected decoder is good at predicting a sparse set of points which represents the global geometry of a shape. Meanwhile, the folding-based decoder is good at approximating a smooth surface which represents the local geometry of a shape. Thus, we divide the generation of the output point cloud into two stages. In the first stage, a coarse output Ycoarse of s points is generated by passing v through a fully-connected network with 3s output units and reshaping the output into an s x 3 matrix. In the second stage, for each point qi in Ycoarse, a patch of t = u^2 points is generated in the local coordinates centered at qi via the folding operation [59], and transformed into the global coordinates by adding qi to the output. Combining all s patches gives the detailed output Ydetail consisting of n = st points. This multistage process allows our network to generate a dense point cloud with fewer parameters than a fully-connected decoder (see Table 1) and more flexibility than a folding-based decoder. Yuan: sec. 4.2 para. 2) by performing correction on points at which the brightness does not change among adjacent points of a predicted point cloud such that the points have close brightness information, assuming that the points are on the same plane (see 1_2 below).

While Yuan does not explicitly teach, Liu teaches: (1_1) colored three-dimensional point cloud (e.g., The collection of 3D data is far more difficult than the capture of image data. Color information of objects may be lost during acquisition and format conversion. In many cases, however, color information can provide sufficient clues in scene or object detection. As pointed out by [12], the color prediction is inherently multimodal – many objects can take on several plausible colorizations. Therefore, our work does not necessarily restore the original color information of the object but produces a reasonable color that conforms to the real world. Liu: sec. 1 para. 3 L.1-10.
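The quoted PCN architecture (PointNet-style encoder producing a k = 1024 feature vector; two-stage decoder producing s coarse points and n = s·t detailed points) reduces to straightforward shape bookkeeping. A minimal sketch with random placeholder weights, illustrative sizes, and a plain grid standing in for the folding MLP (none of this is the trained network):

```python
import numpy as np

# Shape-level sketch of the PCN pipeline as quoted above. All weights are
# random placeholders; m, s, u are illustrative sizes, not PCN's settings.
m, k, s, u = 100, 1024, 4, 2
t = u * u                                    # points per folded patch

P = np.random.randn(m, 3)                    # m input points, one (x, y, z) per row
W_enc = np.random.randn(3, k) * 0.01
v = np.maximum(P @ W_enc, 0.0).max(axis=0)   # per-point MLP + max-pool -> v in R^k

# Stage 1: fully-connected layer with 3s outputs, reshaped to an s x 3 matrix.
W_dec = np.random.randn(3 * s, k) * 0.01
Y_coarse = (W_dec @ v).reshape(s, 3)

# Stage 2: a t-point local patch (here a u x u grid, a stand-in for the
# folding operation) is placed around each coarse point qi and shifted
# into global coordinates by adding qi; all s patches combine to n = s*t.
g = np.linspace(-0.05, 0.05, u)
grid = np.stack(np.meshgrid(g, g), -1).reshape(t, 2)
local = np.concatenate([grid, np.zeros((t, 1))], axis=1)
Y_detail = (Y_coarse[:, None, :] + local[None, :, :]).reshape(s * t, 3)

print(v.shape, Y_coarse.shape, Y_detail.shape)   # (1024,) (4, 3) (16, 3)
```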
One way to solve the problem of point cloud colorization is to map the point cloud's coordinates to the color information. Liu: sec. 2.1 para. 1 L.1-2. Inspired by image-to-image transformation [3], we input the coordinates and color of the point cloud (x, y, z, r, g, b) into the discriminator at the same time. Liu: sec. 2.2 para. 1 L.1-3).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Liu into the teaching of Yuan because color images are more realistic.

While the combined teaching of Yuan and Liu does not explicitly teach, Wang teaches: (1_2) generating, by a point cloud complementing model learned in advance in consideration of an error between color information and brightness information, based on the feature vector and the number of points to be complemented as inputs, a point cloud by complementing the input three-dimensional point cloud up to the number of points to be complemented by performing correction on points at which the brightness does not change among adjacent points of a predicted point cloud such that the points have close brightness information, assuming that the points are on the same plane (e.g., In implementations, a feature matching condition is a difference between a feature value of a feature point in the first patch to be processed and a feature value of a feature point in the second patch to be processed being the smallest, or the difference being within a preset feature threshold range, according to a same specified feature. Wang: [0063]. As can be seen from the above embodiments, the specified feature can be a structural feature or a texture feature. Wang: [0064].
As an example, if the specified feature is brightness, the feature point A1 in the first patch to be processed is considered to be texturally similar to the feature point B1 in the second patch to be processed when a difference between a brightness value of the feature point A1 in the first patch to be processed and a brightness value of the feature point B1 in the second patch to be processed is the smallest, or the difference is within a preset brightness threshold range. Wang: [0066]. In implementations, the brightness and the grayscale of the sampling point may be combined to determine whether the feature point A1 in the first patch to be processed and the feature point B1 in the second patch to be processed are similar in texture. Wang: [0069]. In implementations, the specified feature includes at least a geometric feature or a color feature. Wang: [0146].

As pointed out by [12], the color prediction is inherently multimodal – many objects can take on several plausible colorizations. Therefore, our work does not necessarily restore the original color information of the object but produces a reasonable color that conforms to the real world. Liu: sec. 1 para. 3 L.5-10. Therefore, when combining the input point cloud with the complementing points, the difference in brightness has to be the smallest or within a preset threshold range, and it is not necessary to restore the original color information but rather a reasonable color that conforms to the real world).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Wang into the combined teaching of Yuan and Liu so that the brightness between the input and the complementing point clouds is adjusted such that the difference is the smallest or within a preset threshold range.
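Wang's feature-matching condition quoted above (smallest brightness difference, or a difference within a preset threshold) can be sketched as follows; the function name, data layout, and threshold value are illustrative assumptions, not Wang's implementation:

```python
# Illustrative sketch of Wang's brightness-matching condition: a point in
# one patch matches the candidate in the other patch whose brightness
# difference is smallest, optionally rejected if outside a preset threshold.
# (Names and the threshold value are assumptions, not Wang's actual code.)

def match_by_brightness(b1, candidates, threshold=None):
    """Return the index of the candidate with minimal |brightness - b1|,
    or None if a threshold is given and no candidate falls within it."""
    diffs = [abs(b1 - b) for b in candidates]
    i = min(range(len(candidates)), key=lambda j: diffs[j])
    if threshold is not None and diffs[i] > threshold:
        return None
    return i

print(match_by_brightness(0.50, [0.10, 0.48, 0.90]))            # 1
print(match_by_brightness(0.50, [0.10, 0.90], threshold=0.05))  # None
```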
Regarding claim 6, the combined teaching of Yuan, Liu and Wang teaches the point cloud complementing device according to claim 1, wherein the colored three-dimensional point cloud represents a three-dimensional color map of an area (e.g., One way to solve the problem of point cloud colorization is to map the point cloud's coordinates to the color information. Liu: sec. 2.1 para. 1 L.1-2. After the global feature is connected to the local feature, the deconvolutional layers use the same filter size 1 x 1. Finally, the result is the color information (r, g, b) of the point cloud. Liu: sec. 2.1 para. 4 L.7-9. Inspired by image-to-image transformation [3], we input the coordinates and color of the point cloud (x, y, z, r, g, b) into the discriminator at the same time. Liu: sec. 2.2 para. 1 L.1-3).

Regarding claim 7, the combined teaching of Yuan, Liu and Wang teaches the point cloud complementing device according to claim 1, wherein the point cloud complementing model includes an encoder-decoder network (e.g., In this work, we present a novel learning-based method to complete these partial data using an encoder-decoder network that directly maps partial shapes to complete shapes, both represented as 3D point clouds. Yuan: sec. 1 para. 1 L.5-9).

Regarding claim 3, the claim is a method claim of device claim 1. The claim is similar in scope to claim 1 and is rejected under a similar rationale. Wang teaches that "Embodiments of the present disclosure provide a method, an apparatus, a system, and a storage media for data processing, which can improve the accuracy and stability of registration of surface features of an object." (Wang: [0006]).

Regarding claims 10-11, the claims are method claims of device claims 6-7 respectively. The claims are similar in scope to claims 6-7 and are rejected under a similar rationale.
Regarding claim 4, the claim is a computer-readable non-transitory recording medium claim of device claim 1. The claim is similar in scope to claim 1 and is rejected under a similar rationale. Wang teaches that "Embodiments of the present disclosure provide a method, an apparatus, a system, and a storage media for data processing, which can improve the accuracy and stability of registration of surface features of an object." (Wang: [0006]).

Regarding claims 14-15, the claims are computer-readable non-transitory recording medium claims of device claims 6-7 respectively. The claims are similar in scope to claims 6-7 and are rejected under a similar rationale.

Allowable Subject Matter

Claims 2, 5, 8-9 and 12-13 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all the limitations of the base claim and any intervening claims.

The following is a statement of reasons for the indication of allowable subject matter in claim 2: The prior art of record, either individually or in combination, fails to teach the following claimed limitation: the point cloud complementing model is learned to determine the error between color information and brightness information by using: a first error function for obtaining a first error in a spatial distance between a predicted value of a colored three-dimensional point cloud and correct data; a second error function for obtaining a second error in a color spatial distance of color information for a nearest point; and a third error function for assuming that points having close brightness among neighboring points belong to the same plane and making a normal close to a neighboring point when brightness is close in a loss function, as recited in claim 2.
The following is a statement of reasons for the indication of allowable subject matter in claim 5: The prior art of record, either individually or in combination, fails to teach the following claimed limitation: the processor further configured to execute operations comprising: learning the point cloud complementing model, based on assuming points at which the brightness does not change among adjacent points of the predicted point cloud as being on a plane, correcting the points toward a normal direction to prevent the points from becoming outliers, as recited in claim 5.

Claims 8-9 are similar in scope to claims 2 and 5 respectively and are objected to under a similar rationale. Claims 12-13 are similar in scope to claims 2 and 5 respectively and are objected to under a similar rationale.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SING-WAI WU, whose telephone number is (571) 270-5850. The examiner can normally be reached 9:00am - 5:30pm (Central Time).

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kee Tung, can be reached at 571-272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/SING-WAI WU/
Primary Examiner, Art Unit 2611

Prosecution Timeline

Mar 04, 2024
Application Filed
Oct 17, 2025
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597174
METHOD AND APPARATUS FOR DELIVERING 5G AR/MR COGNITIVE EXPERIENCE TO 5G DEVICES
2y 5m to grant; granted Apr 07, 2026

Patent 12591304
SYSTEMS AND METHODS FOR CONTEXTUALIZED INTERACTIONS WITH AN ENVIRONMENT
2y 5m to grant; granted Mar 31, 2026

Patent 12586311
APPARATUS AND METHOD FOR RECONSTRUCTING 3D HUMAN OBJECT BASED ON MONOCULAR IMAGE WITH DEPTH IMAGE-BASED IMPLICIT FUNCTION LEARNING
2y 5m to grant; granted Mar 24, 2026

Patent 12537877
MANAGING CONTENT PLACEMENT IN EXTENDED REALITY ENVIRONMENTS
2y 5m to grant; granted Jan 27, 2026

Patent 12530797
PERSONALIZED SCENE IMAGE PROCESSING METHOD, APPARATUS AND STORAGE MEDIUM
2y 5m to grant; granted Jan 20, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 8%
With Interview (+10.6%): 18%
Median Time to Grant: 3y 0m
PTA Risk: Low

Based on 189 resolved cases by this examiner. Grant probability derived from career allow rate.
