Prosecution Insights
Last updated: April 19, 2026
Application No. 18/094,867

HIGH-PRECISION POINT CLOUD COMPLETION METHOD BASED ON DEEP LEARNING AND DEVICE THEREOF

Final Rejection §103

Filed: Jan 09, 2023
Examiner: CHOI, TIMOTHY WING HO
Art Unit: 2671
Tech Center: 2600 (Communications)
Assignee: Nanjing University Of Posts And Telecommunications
OA Round: 2 (Final)

Grant Probability: 60% (Moderate)
OA Rounds: 3-4
To Grant: 3y 2m
With Interview: 95%

Examiner Intelligence

Career Allow Rate: 60% (grants 60% of resolved cases; 199 granted / 331 resolved; -1.9% vs TC avg)
Interview Lift: +35.1% (strong lift for resolved cases with an interview vs without)
Typical Timeline: 3y 2m average prosecution; 21 applications currently pending
Career History: 352 total applications across all art units

Statute-Specific Performance

§101: 10.6% (-29.4% vs TC avg)
§103: 56.5% (+16.5% vs TC avg)
§102: 8.1% (-31.9% vs TC avg)
§112: 15.9% (-24.1% vs TC avg)

Tech Center averages are estimates. Based on career data from 331 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

Applicant’s response, filed 3 September 2025, to the last office action has been entered and made of record. The cancellation of claims 2 and 6-7 is acknowledged and made of record. The amendments to the specification and claims are acknowledged, are supported by the original disclosure, and add no new matter. The amendments to the specification have not overcome the objection to the specification of the previous Office action: the amendment to paragraph [0037] from “P*P represents the distance” to “* represents the distance” does not overcome the specification objection, as the newly recited “*” is likewise not present or reflected in the preceding formula described in [0036] for calculating the chamfer distance. The amendments to the claims, specifically addressing the objections to the claims of the previous Office action, have partially overcome the respective objections: the amendment to claim 8 from “P*P represents the distance” to “* represents the distance” does not overcome the claim objection, as the newly recited “*” is also not present or reflected in the preceding formula for calculating the chamfer distance. In response to the amendments to the claims, specifically addressing the rejections of claims 1, 2, and 6-8 under 35 U.S.C. § 112(b) / (pre-AIA) second paragraph, of the previous Office action, the amended language has overcome the respective rejections, and the rejections have been withdrawn. In response to the amendments to the claims, specifically addressing the claim 10 rejection under 35 U.S.C.
§ 101 for being directed to non-statutory subject matter category, of the previous Office action, the amended language has overcome the respective rejection, and the rejection has been withdrawn. Amendments to independent claim 1 have necessitated an updated ground of rejection over the applied prior art. Please see below for the updated interpretations and rejections.

Response to Arguments

Applicant’s arguments filed 3 September 2025 have been fully considered but they are not persuasive. Examiner notes that the claims are treated with their broadest reasonable interpretations consistent with the specification. See MPEP 2111. Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). Furthermore, the test for obviousness is what the combined teachings of the references would have suggested to those of ordinary skill in the art. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981). One cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986). In response to Applicant’s arguments on pp. 4-6 of Applicant’s Remarks, that the multi-scale generator module as taught by Tan is not equal to the claimed multi-resolution encoder module of amended claim 1, the Examiner respectfully disagrees and notes that the combined teachings of Cai, Tan, and Xu, notably Tan and Xu, are relied upon to teach the corresponding claim limitations. Cai is relied upon to teach a method and device for receiving an incomplete point cloud used in a point cloud completion model to generate a high-quality output point cloud (see Cai [0026]-[0027], [0035], and [0049]-[0051]).
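[Editor's note: the specification objection above turns on the chamfer distance formula. For orientation only, here is a minimal numpy sketch of the standard symmetric chamfer distance between two point sets; the function name and toy data are this editor's assumptions, not language from the application or references.]

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric chamfer distance between point sets p (N,3) and q (M,3):
    sum of mean nearest-neighbor squared distances in both directions."""
    d2 = ((p[:, None, :] - q[None, :, :]) ** 2).sum(-1)  # (N, M) pairwise squared distances
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
b = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
print(chamfer_distance(a, b))  # identical sets -> 0.0
```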
Tan is relied upon to teach a known point cloud completion model based on a multi-scale generator module, point pyramid decoder, and projection discriminator, which preprocesses a given input point cloud into different scales and extracts point features through a series of shared MLPs, to be aggregated to construct a feature vector that is processed by the point pyramid decoder to generate point clouds with different resolutions, which are applied to the projection discriminator to determine a discriminator result and adversarial loss for optimizing the multi-scale generator (see Tan Fig. 2, Fig. 3, Fig. 4, sect. III. NETWORK ARCHITECTURE, A. Multi-Scale Generator Module, B. Projection Discriminator, and C. Loss Function, and sect. IV. B. Implementation Detail and Evaluation Metrics). Xu is relied upon to teach a known PAConv technique of generating dynamic kernels for convolutional operation for deep representation learning on 3D point clouds, where the dynamic kernels are generated from the combination of weight matrices with corresponding coefficients that associate relative positions with different weight matrices, the coefficients are determined by a non-linear function implemented using MLPs, and iterative farthest point sampling is used to downsample point clouds for the encoder (see Xu sect. 1. Introduction, Fig. 2, sect. 3.1. Overview, sect. 3.2. Dynamic Kernel Assembling, and sect. 4. Backbone Network Architectures).
The combined teachings of Cai, Tan, and Xu suggest to one of ordinary skill in the art a method for performing point cloud completion using Tan’s PGAN model upon received incomplete point cloud data to generate a completed point cloud with high perceptual quality, which includes a multi-scale generator module (MSGM), point pyramid decoder, and projection discriminator. Point clouds of three different scales are input through a series of multi-layer perceptrons (MLPs) of the MSGM, implemented with dynamic kernels generated from PAConv, to extract point features, which are aggregated to acquire a global feature vector. Local point features are input to the feature enhancement module to extract local geometric information, which employs an MLP implemented with dynamic kernels generated from PAConv to obtain an enhanced feature vector, weighted according to the dynamic kernels; a final feature vector is aggregated by concatenating the global feature vector and the enhanced feature vector. The final feature vector is used to obtain three feature layers, and the corresponding decoders of the pyramid decoder generate three point clouds with different resolutions. The predicted point cloud and the ground truth projections are fed into an MLP of a discriminator model, implemented with dynamic kernels generated from PAConv and accounting for the specific position relations of a point with its local neighboring points, to predict the category as a real or fake result for optimizing the MSGM generator model.
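[Editor's note: as a rough illustration of the multi-scale feature extraction described above (shared MLPs applied to point clouds at three scales, aggregated by a symmetric max-pooling into a global feature vector), here is a minimal numpy sketch. The layer sizes, point counts, and function names are illustrative assumptions and do not reproduce Tan's or Xu's actual architecture.]

```python
import numpy as np

rng = np.random.default_rng(0)

def shared_mlp(points, w1, w2):
    """Apply the same two-layer MLP to every point: (N, 3) -> (N, 256)."""
    h = np.maximum(points @ w1, 0.0)  # ReLU
    return np.maximum(h @ w2, 0.0)

# Shared weights: one MLP reused across all points at all scales.
w1 = rng.standard_normal((3, 64))
w2 = rng.standard_normal((64, 256))

# Three scales of a (hypothetical) incomplete input cloud.
scales = [rng.standard_normal((n, 3)) for n in (2048, 1024, 512)]

# Per-scale point features, aggregated by max-pooling (a symmetric function),
# then concatenated into one multi-scale global feature vector.
per_scale = [shared_mlp(p, w1, w2).max(axis=0) for p in scales]
global_feature = np.concatenate(per_scale)  # shape (3 * 256,) = (768,)
print(global_feature.shape)
```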
Therefore, the suggested point cloud completion method of the combined teachings of Cai, Tan, and Xu suggests performing feature extraction using MLPs implemented with dynamic kernels generated from PAConv as taught by Xu for improved spatial variation modeling, where the dynamic kernels are generated from the combination of weight matrices with corresponding coefficients, the coefficients being predicted by ScoreNet, which associates the relative positions between a center point and its neighboring points with different weight matrices, providing for the broadest reasonable interpretation of the claimed multi-resolution encoder module of amended claim 1. It further suggests performing feature enhancement upon local point features to capture local spatial information, employing an MLP and a symmetrical function to obtain an enhanced feature vector, where the MLPs of the feature enhancement are implemented with dynamic kernels generated from PAConv as taught by Xu for improved spatial variation modeling, providing for the broadest reasonable interpretation of the claimed feature fusion module with spatial attention mechanism of amended claim 1, and allowing the point pyramid decoder of Tan to generate point clouds with improved local structure representation. Thus, the suggested multi-scale generator module with MLPs implemented with dynamic kernels generated from PAConv, as taught by the combined teachings of Cai, Tan, and Xu, provides for the broadest reasonable interpretation of the claimed multi-resolution encoder module of amended claim 1. In response to Applicant’s arguments on pp. 6-9 of Applicant’s Remarks, that the combined teachings of Cai, Tan, and Xu do not teach the attention discriminator module configured to use a generative adversarial network to produce the results of consistency of global and local features, the Examiner respectfully disagrees.
Examiner notes that the combined teachings of Cai, Tan, and Xu are relied upon to provide for the broadest reasonable interpretation of the claimed attention discriminator module. In particular, Tan is relied upon to further teach the use of a projection discriminator, where generated point clouds with different resolutions are applied to the projection discriminator to determine a discriminator result and adversarial loss for optimizing the multi-scale generator, where the predicted point cloud projection and ground truth projection are fed to the projection discriminator to predict a real or fake category through MLP and fully connected layers, and where the generated point clouds are suggested to be of size 512 (see Tan Fig. 2, Fig. 3, Fig. 4, sect. III. NETWORK ARCHITECTURE, A. Multi-Scale Generator Module, B. Projection Discriminator, and C. Loss Function, and sect. IV. B. Implementation Detail and Evaluation Metrics). Thus, the teachings of Cai and Tan provide for the broadest reasonable interpretation of the claimed attention discriminator configured to use a generative adversarial network to produce the results of consistency of global features through mutual game learning between a generation model and a discrimination model, and comprising a global attention discriminator configured to view the whole point cloud completion result to evaluate its overall consistency, and of sending whole generated and real point clouds to the attention discriminator to obtain a feature vector with 512 dimensions through an auto-encoder, and reducing the dimension through the continuous full connection layer and outputting a final fake or real binary result.
Similarly as discussed above, Xu is further relied upon to teach a known PAConv technique of generating dynamic kernels for convolutional operation for deep representation learning on 3D point clouds, where the dynamic kernels are generated from the combination of weight matrices with corresponding coefficients that associate the relative positions between a center point and neighboring points with different weight matrices, and the coefficients are determined by a non-linear function implemented using MLPs (see Xu sect. 1. Introduction, Fig. 2, sect. 3.1. Overview, and sect. 3.2. Dynamic Kernel Assembling). When applied to the teachings of Cai and Tan, notably Tan, this provides further suggested teachings of implementing dynamic kernels generated from PAConv with the MLPs used to predict the category in the discriminator model, allowing the discriminator to account for the specific position relations of a point with its local neighboring points, thus considering local feature information in performing the discriminator classification. Thus, the combined teachings of Cai, Tan, and Xu further provide suggested teachings for the broadest reasonable interpretation of the claimed attention discriminator further configured to use a generative adversarial network to produce the results of consistency of global and local features through mutual game learning between a generation model and a discrimination model, and comprising a local attention discriminator which views a small area centered on the completed area to ensure the local consistency of a generated point cloud.
Hence, the combined teachings of Cai, Tan, and Xu provide for the broadest reasonable interpretations of the amended claim 1 features of the attention discriminator configured to use a generative adversarial network to produce the results of consistency of global and local features through mutual game learning between a generation model and a discrimination model, comprising a global attention discriminator configured to view the whole point cloud completion result to evaluate its overall consistency, and a local attention discriminator which views a small area centered on the completed area to ensure the local consistency of a generated point cloud; and of sending whole generated and real point clouds to the attention discriminator to obtain a feature vector with 512 dimensions through an auto-encoder, reducing the dimension through the continuous full connection layer, and outputting a final fake or real binary result.

Specification

The disclosure is objected to because of the following informalities: Specification paragraph [0037] recites “* represents the distance”, where “*” is not present in the preceding formula for calculating the chamfer distance. Appropriate correction is required.

Claim Objections

Claims 1 and 8 are objected to because of the following informalities: Amended claim 1 recites “the global discriminator” and “the local discriminator module” in the amended limitation beginning with “wherein the attention discriminator module comprises…”; typographical errors are assumed to exist, and “the global attention discriminator” and “the local attention discriminator” are assumed to be intended for proper antecedent support. Amended claim 8 recites “* represents the distance”, where “*” is not present in the preceding formula for calculating the chamfer distance. Appropriate correction is required.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C.
102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

Claims 1, 3-5, and 8-10 are rejected under 35 U.S.C. 103 as being unpatentable over Cai et al. (US 2022/0335685, effectively filed 15 April 2021), herein Cai, in view of Tan et al. (“Projected Generative Adversarial Network for Point Cloud Completion”, published 6 September 2022), herein Tan, Xu et al. (“PAConv: Position Adaptive Convolution with Dynamic Kernel Assembling on Point Clouds”, published June 2021), herein Xu, and Huang et al. (“PF-Net: Point Fractal Network for 3D Point Cloud Completion”), herein Huang.

Regarding claim 1, Cai discloses a high-precision point cloud completion method based on deep learning, comprising: acquiring point cloud data to be processed (see Cai [0026], where first point cloud data is acquired, which may be an incomplete point cloud representing a portion of the shape of an object); inputting the point cloud data into a trained point cloud completion model (see Cai [0027] and [0035], where the first point cloud and a determined probability distribution are used by a point cloud completion network to complete the shape of an object and generate a high-quality output point cloud); and determining high-precision point cloud completion results according to the output of the point cloud completion model (see Cai [0027] and [0049]-[0051], where a high-quality output point cloud is generated based on the primary completed point cloud).
Cai does not explicitly disclose preprocessing the point cloud data to obtain preprocessed point cloud data, inputting the preprocessed point cloud data into a trained point cloud completion model, wherein the point cloud completion model comprises a multi-resolution encoder module, a pyramid decoder module and an attention discriminator module; the multi-resolution encoder module is configured to perform feature extraction and fusion on the input point cloud data to obtain feature vectors; the pyramid decoder module is configured to process the feature vectors to obtain point cloud completion results of three scales; the attention discriminator module is configured to use a generative adversarial network to produce the results of consistency of global features through mutual game learning between a generation model and a discrimination model; wherein the multi-resolution encoder module comprises a feature extraction module and a feature fusion module, a spatial attention mechanism is added to the feature fusion module to realize feature focusing in spatial dimension; the feature extraction module of the multi-layer perceptron is used to extract the features of three missing point clouds of different scales to generate multidimensional feature vectors V1, V2, V3; the output multidimensional feature vectors V1, V2, V3 are input into the feature fusion module consisted of the spatial attention mechanism, the spatial attention mechanism learns 1024-dimensional abstract features that synthesize local features and global information; thereafter, three 1024-dimensional abstract features are spliced by a splicing array, and finally, the potential feature mapping is integrated into the final feature vector V with 1024 dimensions using the MLP; wherein the attention discriminator module comprises a global attention discriminator, the global discriminator is configured to view the whole point cloud completion result to evaluate its overall consistency; wherein the
processing process of the attention discriminator module comprises: sending the whole or local generated point cloud and the real point cloud to the attention discriminator, obtaining the feature vector with 512 dimensions through an auto-encoder therein, and then reducing the dimension through the continuous full connection layer, and outputting the final fake or real binary result. Tan discloses, in a related and pertinent reference, a projected generative adversarial network (PGAN) for point cloud completion (see Tan Abstract), which comprises a multi-scale generator module (MSGM), including feature extraction and feature enhancement components, a point pyramid decoder, and a projection discriminator (see Tan Fig. 2 and sect. III. NETWORK ARCHITECTURE). Given an incomplete input point cloud, the point cloud is downsampled to acquire two additional diverse-scale point clouds (see Tan Fig. 2, and sect. III. NETWORK ARCHITECTURE, A. Multi-Scale Generator Module); the three point clouds of different scales are input through a series of shared multi-layer perceptrons (MLPs) of the feature extraction component to extract point features, which are aggregated to acquire a global feature vector (see Tan Fig. 2, and sect. III. NETWORK ARCHITECTURE, A. Multi-Scale Generator Module); and the feature enhancement component allows for acquiring features which capture local spatial information of a point cloud, where local point features are input to the feature enhancement module to extract local geometric information and used to obtain an enhanced feature vector, obtained using the MLP, to aggregate a final feature vector by concatenating with the global feature vector, and the enhanced feature vectors and global feature vectors are of size 1024 (see Tan Fig. 2, Fig. 3, Fig. 4, and sect. III. NETWORK ARCHITECTURE, A.
Multi-Scale Generator Module), and final global feature vectors are input into a series of three fully connected layers of a point pyramid decoder, where each feature layer uses the corresponding decoders to generate three point clouds with different resolutions (see Tan Fig. 2, Fig. 4, and sect. III. NETWORK ARCHITECTURE, A. Multi-Scale Generator Module), and the predicted point clouds and the ground truth are projected to a camera reference and fed into an MLP to predict the category as a discriminator model, used to optimize the MSGM generator model to learn the completion with high perceptual quality, where completion loss based on chamfer distance and adversarial loss are adopted (see Tan Fig. 2, and sect. III. NETWORK ARCHITECTURE, B. Projection Discriminator, and C. Loss Function), where the predicted point cloud projection and ground truth projection are fed to the projection discriminator to predict a real or fake category through MLP and fully connected layers (see Tan Fig. 2, and sect. III. NETWORK ARCHITECTURE, B. Projection Discriminator, and C. Loss Function), and where the generated point clouds are suggested to be of size 512 (see Tan sect. IV. B. Implementation Detail and Evaluation Metrics). At the time of filing, one of ordinary skill in the art would have found it obvious to substitute the point cloud completion model of Cai with the PGAN model as taught by Tan, such that the acquired incomplete point cloud is used to perform point cloud completion using Tan’s PGAN model to generate a completed point cloud with high perceptual quality. This modification is rationalized as a simple substitution of one known element for another to obtain predictable results. In this instance, Cai discloses a method and device for receiving an incomplete point cloud used in a point cloud completion model to generate a high-quality output point cloud.
Tan teaches a known substitute point cloud completion model based on a multi-scale generator module, point pyramid decoder, and projection discriminator, which preprocesses a given input point cloud into different scales and extracts point features to be aggregated and construct a feature vector that is processed by the point pyramid decoder to generate point clouds with different resolutions, which are applied to the projection discriminator to determine a discriminator result and adversarial loss for optimizing the multi-scale generator. One of ordinary skill in the art could have simply substituted the use of Cai’s point cloud completion model with Tan’s PGAN point cloud completion model, and the results of the substitution would predictably lead to performing point cloud completion using Tan’s PGAN model upon the received incomplete point cloud data to generate a completed point cloud with high perceptual quality. While Tan teaches that three point clouds of different scales are input through a series of shared multi-layer perceptrons (MLPs) to extract point features and are aggregated to acquire corresponding global feature vectors, and that local point features are input to the feature enhancement module to extract local geometric information and used to obtain an enhanced feature vector to aggregate a final feature vector by concatenating with the global feature vector (see Tan Fig. 2, Fig. 3, and sect. III. NETWORK ARCHITECTURE, A. Multi-Scale Generator Module), and that the predicted point cloud and the ground truth projections are fed into an MLP of the projection discriminator to predict the category (see Tan Fig. 2, and sect. III. NETWORK ARCHITECTURE, B. Projection Discriminator, and C.
Loss Function); Cai and Tan do not explicitly disclose that the attention discriminator module is configured to use the generative adversarial network to produce the results of consistency of global and local features; that a dynamic convolution layer PAConv is embedded in a multi-layer perceptron with shared weights in the feature extraction module, a weight coefficient is learned according to the positional relationship between each point and its neighboring points, and the convolution kernel is adaptively constructed in combination with the weight matrix, so as to improve the capability of extracting local detail features; that three missing point clouds of different scales generated by sampling the farthest point are input into the multi-resolution encoder module; that the feature extraction module of the multi-layer perceptron embedded with the dynamic kernel convolution PAConv is used to extract the features of three missing point clouds of different scales to generate multidimensional feature vectors V1, V2, V3; that the spatial attention mechanism outputs weighted features of each position; that the attention discriminator module comprises a global attention discriminator and a local attention discriminator; and that the local discriminator module views a small area centered on the completed area to ensure the local consistency of the generated point cloud. Xu teaches, in a related and pertinent reference, a method of Position Adaptive Convolution (PAConv), a generic convolution operation for 3D point cloud processing (see Xu Abstract), where PAConv is a plug-and-play convolutional operation for deep representation learning on 3D point clouds, allows for the bypass of the huge memory and computational burden via a dynamic kernel assembling strategy with ScoreNet, and gains flexibility to model spatial variations, where for simple MLP-based point networks, the MLPs are replaced with PAConv (see Xu sect. 1.
Introduction), where the dynamic kernels generated from PAConv, to be implemented in MLPs, are derived by combining weight matrices with corresponding coefficients predicted from ScoreNet, where the coefficients are predicted based on the specific position relationships between a center point and neighboring points and associate relative neighboring point positions with different weights, where ScoreNet uses a non-linear function implemented using MLPs to determine the score vector representing the corresponding coefficients (see Xu Fig. 2, sect. 3.1. Overview, and sect. 3.2. Dynamic Kernel Assembling), and where iterative farthest point sampling is used to downsample point clouds for the encoder (see Xu sect. 4. Backbone Network Architectures). At the time of filing, one of ordinary skill in the art would have found it obvious to apply the teachings of Xu to the teachings of Cai and Tan, such that the MLPs of the MSGM and projection discriminator are implemented with dynamic kernels generated from PAConv, thus suggesting that the MLPs in the feature extraction and feature enhancement components of the MSGM are implemented with dynamic kernels generated from PAConv to extract the point features, that the obtained enhanced feature vectors are weighted according to the dynamic kernels, that an iterative farthest point sampling is used when downsampling the point clouds during the encoding process of the MSGM, and that the MLPs used to predict the category in the discriminator model are implemented with dynamic kernels generated from PAConv, accounting for the specific position relations of a point with its local neighboring points, thus considering local feature information in performing the discriminator classification. This modification is rationalized as an application of a known technique to a known method ready for improvement to yield predictable results.
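[Editor's note: the iterative farthest point sampling that Xu is cited for can be sketched in a few lines of numpy. The choice of seed point and the helper name are this editor's assumptions.]

```python
import numpy as np

def farthest_point_sampling(points, k):
    """Iteratively pick the point farthest from all points chosen so far."""
    chosen = [0]  # seed with an arbitrary point (index 0 here)
    dist = ((points - points[0]) ** 2).sum(-1)
    for _ in range(k - 1):
        idx = int(dist.argmax())  # farthest remaining point
        chosen.append(idx)
        # keep, for each point, the distance to its nearest chosen point
        dist = np.minimum(dist, ((points - points[idx]) ** 2).sum(-1))
    return points[chosen]

rng = np.random.default_rng(1)
cloud = rng.standard_normal((2048, 3))
print(farthest_point_sampling(cloud, 512).shape)  # (512, 3)
```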
In this instance, Cai and Tan disclose a base method of performing point cloud completion using Tan’s PGAN model upon the received incomplete point cloud data to generate a completed point cloud with high perceptual quality, where point clouds of different scales are input through a series of multi-layer perceptrons (MLPs) to extract point features, which are aggregated to acquire a global feature vector; local features are input to the feature enhancement module to extract local geometric information and used to obtain an enhanced feature vector to aggregate a final feature vector by concatenating with the global feature vector; and the predicted point cloud and the ground truth projections are fed into an MLP to predict the category as a discriminator model. Xu teaches a known PAConv technique of generating dynamic kernels for convolutional operation for deep representation learning on 3D point clouds, where the dynamic kernels are generated from the combination of weight matrices with corresponding coefficients that associate relative positions with different weight matrices, the coefficients are determined by a non-linear function implemented using MLPs, and iterative farthest point sampling is used to downsample point clouds for the encoder.
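[Editor's note: the dynamic kernel assembling attributed to Xu above (a bank of weight matrices combined via position-dependent coefficients from a ScoreNet-style MLP) can be sketched as follows. All sizes and the toy two-layer ScoreNet are this editor's assumptions, not Xu's exact network; K = 16 follows the value recited in claim 4.]

```python
import numpy as np

rng = np.random.default_rng(2)
K, c_in, c_out = 16, 3, 64  # K = 16 weight matrices, per claim 4

weight_bank = rng.standard_normal((K, c_in, c_out))
s1 = rng.standard_normal((3, 32))  # toy ScoreNet parameters (assumed)
s2 = rng.standard_normal((32, K))

def scorenet(rel_pos):
    """Map a relative position (3,) to K softmax-normalized coefficients in (0,1)."""
    h = np.maximum(rel_pos @ s1, 0.0)
    logits = h @ s2
    e = np.exp(logits - logits.max())
    return e / e.sum()

def paconv_kernel(p_i, p_j):
    """Assemble the position-adaptive kernel: sum_k E_k^ij * W_k -> (c_in, c_out)."""
    coeffs = scorenet(p_j - p_i)
    return np.tensordot(coeffs, weight_bank, axes=1)

kernel = paconv_kernel(np.zeros(3), np.array([0.1, -0.2, 0.3]))
print(kernel.shape)  # (3, 64)
```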
One of ordinary skill in the art would have recognized that applying Xu’s technique would allow the method of Cai and Tan to implement the MLPs of the MSGM and projection discriminator with dynamic kernels generated from PAConv, including the MLPs used to predict the category in the discriminator model. As the dynamic kernels account for the specific position relations of a point with its local neighboring points, this suggests that the MLPs in the feature extraction and feature enhancement components of the MSGM, implemented with dynamic kernels generated from PAConv, extract the point features; that the obtained enhanced feature vectors are weighted according to the dynamic kernels; that an iterative farthest point sampling is used when downsampling the point clouds during the encoding process of the MSGM; and that the MLPs used to predict the category in the discriminator model, when implemented with dynamic kernels generated from PAConv, account for the specific position relations of a point with its local neighboring points, such that local feature information is considered in performing the discriminator classification, providing for the broadest reasonable interpretation of the claimed local discriminator, and predictably leading to an improved MSGM and discriminator for optimizing the MSGM generator model. While Tan teaches that the feature vector with 512 dimensions is reduced through full connection layers to output a real or fake classification result (see Tan Fig. 2, and sect. III. NETWORK ARCHITECTURE, B. Projection Discriminator, and C. Loss Function); Cai, Tan, and Xu do not explicitly disclose reducing the dimension [512-256-128-16-1]. Huang teaches, in a related and pertinent reference, a Point Fractal Network (PF-Net) for precise and high-fidelity point cloud completion (see Huang Abstract), where a discriminator obtains a predicted value by passing a latent vector through fully connected layers [256, 128, 16, 1] (see Huang sect. 3.4.
Loss Function). At the time of filing, one of ordinary skill in the art would have found it obvious to use the teachings of Huang to improve the teachings of Cai, Tan, and Xu, such that the projection discriminator passes the 512-dimension feature vector for the predicted point cloud and ground truth projections through the fully connected layers of size [256, 128, 16, 1], thus reducing the dimension from 512-256-128-16-1. This modification is rationalized as a use of a known technique to improve a similar method in the same way. In this instance, Cai, Tan, and Xu disclose a base method of performing point cloud completion using Tan’s PGAN model upon the received incomplete point cloud data to generate a completed point cloud with high perceptual quality, where the predicted point cloud and the ground truth projections are fed into an MLP of a projection discriminator model to predict a real or fake category, and the feature vector with 512 dimensions is reduced through full connection layers to output a real or fake classification result. Huang teaches a known technique for a comparable point cloud completion network to use a discriminator to obtain a predicted value by passing a latent vector through fully connected layers [256, 128, 16, 1]. One of ordinary skill in the art could have applied Huang’s technique in the same way by using fully connected layers of size [256, 128, 16, 1] in the projection discriminator of Cai, Tan, and Xu, which would predictably allow the method of Cai, Tan, and Xu to pass the 512-dimension feature vector for the predicted point cloud and ground truth projections through the fully connected layers of size [256, 128, 16, 1], thus reducing the dimension of the feature vector from 512-256-128-16-1 and obtaining a real or fake prediction.
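[Editor's note: the dimension reduction 512-256-128-16-1 discussed above amounts to a stack of fully connected layers ending in a single real/fake score. A minimal numpy sketch follows; the weights are random placeholders and the activation choices (ReLU between layers, sigmoid on the output) are this editor's assumptions, since neither Tan nor Huang is quoted on them here.]

```python
import numpy as np

rng = np.random.default_rng(3)
dims = [512, 256, 128, 16, 1]  # the recited reduction 512-256-128-16-1
layers = [rng.standard_normal((a, b)) * 0.05 for a, b in zip(dims, dims[1:])]

def discriminator_head(feature):
    """Reduce a 512-d feature to a single real/fake score via FC layers."""
    h = feature
    for w in layers[:-1]:
        h = np.maximum(h @ w, 0.0)  # ReLU between hidden layers (assumed)
    logit = h @ layers[-1]
    return 1.0 / (1.0 + np.exp(-logit))  # sigmoid -> probability of "real"

score = discriminator_head(rng.standard_normal(512))
print(score.shape)  # (1,)
```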
Regarding claim 3, Cai, Tan, Xu, and Huang disclose the high-precision point cloud completion method based on deep learning according to claim 1, wherein the method of constructing the dynamic kernel convolution PAConv comprises: initializing a weight library W = {W_k | k = 1, 2, …, K} consisting of K weight matrices with the size of C_in × C_out, wherein C_in represents the input dimension of the network in the current layer and C_out represents the output dimension of the network in the current layer (see Xu Fig. 2, and sect. 3.2. Dynamic Kernel Assembling, where a Weight Bank is defined with M weight matrices of size Cin × Cout, corresponding to the number of input and output channels); calculating the relative position relationship between each point p_i in the input point cloud and the neighboring points p_j, and learning the weight coefficients E_ij = {E_k^ij | k = 1, 2, …, K} at different positions, which are expressed as E_ij = Softmax(θ(p_i, p_j)), where θ is a nonlinear function implemented by a convolution with a kernel size of 1×1; using the Softmax function for normalization to ensure that the output score is in the range (0, 1), in which a higher score means that the corresponding position has more important local information (see Xu Fig. 2, sect. 3.2. Dynamic Kernel Assembling, and Eq. (2), where ScoreNet associates relative positions with different weight matrices and predicts position-adaptive coefficients for each weight matrix given the specific position relation between a center point and neighboring points, which are softmax-normalized to ensure output scores are in the range (0, 1)); and forming the kernel of PAConv by combining the weight matrix W_k and the weight coefficient E_ij learned from the point position, K(p_i, p_j) = Σ_{k=1}^{K} E_k^ij · W_k, thereby completing the adaptive construction of the convolution kernel by the dynamic kernel convolution PAConv, so as to capture the local area information of the input features and output features with local correlation (see Xu Fig. 2, sect. 3.2. Dynamic Kernel Assembling, and Eq. (3), where the kernel is derived by combining the weight matrices in the Weight Bank with the corresponding coefficients predicted by ScoreNet).

Regarding claim 4, Cai, Tan, Xu, and Huang disclose the high-precision point cloud completion method based on deep learning according to claim 3, wherein the value of K is 16 (see Xu Fig. 2, sect. 3.2. Dynamic Kernel Assembling, and sect. 6.2. The Number of Weight Matrices, where the number of weight matrices M is set to 8 or 16, and the best and most stable performance is achieved when M is 16).

Regarding claim 5, Cai, Tan, Xu, and Huang disclose the high-precision point cloud completion method based on deep learning according to claim 1, wherein processing the feature vectors to obtain point cloud completion results of three scales comprises: obtaining three sub-feature vectors U1, U2, U3 with different resolutions by passing the feature vector V through the fully connected layer, wherein each sub-feature vector is responsible for completing the point clouds with different resolutions (see Tan Fig. 2, Fig. 4, and sect. III. NETWORK ARCHITECTURE, A.
Multi-Scale Generator Module, where the three feature layer vectors, of size 1024, 512, and 256, are obtained by inputting the vector into the fully connected layers of the point pyramid decoder, and each feature layer uses the corresponding decoder to generate three point clouds with different resolutions); using U3 to predict a primary point cloud P3, using U2 to predict the relative coordinates of a secondary point cloud P2 from the central point P3, and using the recombination and fully connected operations to generate the secondary point cloud P2 according to P3 (see Tan Fig. 2, Fig. 4, and sect. III. NETWORK ARCHITECTURE, A. Multi-Scale Generator Module, where the first point cloud, M1×3 of Fig. 4, is predicted from the feature layer vector of size 256, and the second point cloud, M2×3 of Fig. 4, is predicted from the feature layer vector of size 512 combined with an expanded first point cloud); and using U1 and P2 to predict the relative coordinates of the final point cloud P1 from the center point P2 to supplement the final point cloud P1.
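The claim 3 kernel-assembly steps mapped to Xu above can be illustrated with a minimal plain-Python sketch. The toy linear score function standing in for ScoreNet (theta_w), the small channel sizes, and the Gaussian initialization are assumptions for illustration, not Xu's actual PAConv implementation; only the structure (weight bank, softmax-normalized position coefficients, weighted sum of matrices) follows the claim:

```python
import math
import random

def softmax(xs):
    """Normalize scores into (0, 1); a higher score marks more important local info."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def assemble_paconv_kernel(p_i, p_j, weight_bank, theta_w):
    """K(p_i, p_j) = sum_k E_k^ij * W_k, with E_ij = Softmax(theta(p_i, p_j))."""
    rel = [a - b for a, b in zip(p_j, p_i)]  # relative position p_j - p_i
    # toy stand-in for the 1x1-conv ScoreNet: one linear score per weight matrix
    scores = [sum(w * r for w, r in zip(row, rel)) for row in theta_w]
    e = softmax(scores)                      # position-adaptive coefficients E_k^ij
    c_out = len(weight_bank[0])
    c_in = len(weight_bank[0][0])
    # adaptively assembled kernel: weighted sum of the K bank matrices
    kernel = [[sum(e[k] * weight_bank[k][o][i] for k in range(len(weight_bank)))
               for i in range(c_in)] for o in range(c_out)]
    return kernel, e

rng = random.Random(1)
K, c_in, c_out = 16, 4, 8                    # K = 16, as recited in claim 4
bank = [[[rng.gauss(0.0, 1.0) for _ in range(c_in)] for _ in range(c_out)]
        for _ in range(K)]                   # weight library of K matrices, C_in x C_out
theta_w = [[rng.gauss(0.0, 1.0) for _ in range(3)] for _ in range(K)]
kernel, coeffs = assemble_paconv_kernel((0.0, 0.0, 0.0), (0.3, -0.1, 0.5), bank, theta_w)
```

The softmax guarantees every coefficient lies in (0, 1) and that they sum to one, so each neighboring position contributes a convex combination of the bank matrices, which is the adaptivity the claim attributes to PAConv.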

Prosecution Timeline

Jan 09, 2023
Application Filed
Aug 09, 2025
Non-Final Rejection — §103
Sep 03, 2025
Response Filed
Dec 05, 2025
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12497051
APPARATUSES, SYSTEMS, AND METHODS FOR DETERMINING VEHICLE OPERATOR DISTRACTIONS AT PARTICULAR GEOGRAPHIC LOCATIONS
2y 5m to grant Granted Dec 16, 2025
Patent 12488569
UNPAIRED IMAGE-TO-IMAGE TRANSLATION USING A GENERATIVE ADVERSARIAL NETWORK (GAN)
2y 5m to grant Granted Dec 02, 2025
Patent 12475992
SYSTEM AND METHOD FOR NAVIGATING A TOMOSYNTHESIS STACK INCLUDING AUTOMATIC FOCUSING
2y 5m to grant Granted Nov 18, 2025
Patent 12469300
SYSTEMS, DEVICES, AND METHODS FOR VEHICLE CAMERA CALIBRATION
2y 5m to grant Granted Nov 11, 2025
Patent 12469190
X-RAY TOMOGRAPHIC RECONSTRUCTION METHOD AND ASSOCIATED DEVICE
2y 5m to grant Granted Nov 11, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
60%
Grant Probability
95%
With Interview (+35.1%)
3y 2m
Median Time to Grant
Moderate
PTA Risk
Based on 331 resolved cases by this examiner. Grant probability derived from career allow rate.
