DETAILED ACTION
Election/Restrictions
Applicant’s election without traverse of Group I, Claims 1-10 and 16-25, in the reply filed on 12/17/2025 is acknowledged.
Claims 11-15 and 26-30 are withdrawn from further consideration pursuant to 37 CFR 1.142(b) as being drawn to a nonelected invention, there being no allowable generic or linking claim. Election was made without traverse in the reply filed on 12/17/2025.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-5, 8-10, 16-20, and 23-25 are rejected under 35 U.S.C. 103 as being unpatentable over Qi et al. (NPL: "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation") in view of Dovrat et al. (NPL: "Learning to Sample").
Regarding claim 1, Qi discloses a processor-implemented method, comprising:
generating a score for each respective point in a multidimensional point cloud using a scoring neural network (pg. 1 Introduction: Our PointNet is a unified architecture that directly takes point clouds as input and outputs either class labels for the entire input or per point segment/part labels for each point of the input; In the basic setting each point is represented by just its three coordinates (x,y,z); select interesting or informative points of the point cloud and encode the reason for their selection; pg. 2 Problem Statement: We design a deep learning framework that directly consumes unordered point sets as inputs; Our proposed deep network outputs k scores for all the k candidate classes.);
taking one or more actions based on the selected top points (pg. 2 Introduction: perform 3D shape classification, shape part segmentation and scene semantic parsing tasks; see further pg. 5 Applications: 3D Object Classification, 3D Object Part Segmentation, and Semantic Segmentation in Scenes).
Qi fails to specifically teach ranking points in the multidimensional point cloud based on the generated score for each respective point, and selecting top points from the ranked multidimensional point cloud. However, Dovrat teaches ranking points in the multidimensional point cloud based on the generated score for each respective point in the multidimensional point cloud (pg. 2 Introduction: ProgressiveNet orders points by importance to the task), and selecting top points from the ranked multidimensional point cloud (pg. 2 Introduction: A Progressive sampling method that orders points according to their relevance for the task; Improved performance for point cloud classification, retrieval and reconstruction with sampled point clouds (e.g. based on ordered points)).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have implemented the teaching of ranking points in the multidimensional point cloud based on the generated score for each respective point, and of selecting top points from the ranked multidimensional point cloud, from Dovrat into the method as disclosed by Qi. The motivation for doing so is to improve point cloud processing by reducing power consumption, computational cost, and communication load.
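As a non-limiting illustration of the score-rank-select sequence recited in claim 1 (a minimal NumPy sketch; the linear scoring function and all names are hypothetical stand-ins, not taken from the cited references):

```python
import numpy as np

def score_points(points, weights):
    # Stand-in for a scoring neural network: a shared linear layer
    # with ReLU, summed into one scalar score per point.
    return np.maximum(points @ weights, 0.0).sum(axis=1)

def select_top_points(points, scores, k):
    # Rank points by descending score and keep the top k.
    order = np.argsort(-scores)
    return points[order[:k]]

rng = np.random.default_rng(0)
cloud = rng.normal(size=(1024, 3))   # N x 3 point cloud (x, y, z)
weights = rng.normal(size=(3, 16))   # hypothetical learned weights
scores = score_points(cloud, weights)
top = select_top_points(cloud, scores, k=64)
print(top.shape)                     # (64, 3)
```

Any downstream action (classification, segmentation) would then operate on the retained `top` subset rather than the full cloud.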
Regarding claim 2, the combination of Qi and Dovrat discloses the method of claim 1, wherein generating the score for each point in the multidimensional point cloud comprises:
mapping the multidimensional point cloud into a feature map representing the multidimensional point cloud using a feature extracting neural network (Qi pg. 4 4.2 PointNet Architecture: The output from the above section forms a vector [f1; : : : ; fK], which is a global signature of the input set; After computing the global point cloud feature vector, we feed it back to per point features by concatenating the global feature with each of the point features); and
generating the score for each respective point in the multidimensional point cloud based on the feature map representing the multidimensional point cloud (Qi pg. 4 4.2 PointNet Architecture: Then we extract new per point features based on the combined point features - this time the per point feature is aware of both the local and global information; see Fig. 2 e.g. output scores).
Regarding claim 3, the combination of Qi and Dovrat discloses the method of claim 2, wherein the feature extracting neural network is configured to map the multidimensional point cloud into the feature map based on a self-supervised loss function trained to map points in a multidimensional space to points in a multidimensional feature space (Dovrat pg. 2 Introduction: We solve this by training the network to generate a set of points that satisfy two objectives: a sampling loss and the task’s loss. The sampling loss drives the generated points close to the input point cloud. The task loss ensures that the points are optimal for the task). The motivation to combine the references is discussed above in the rejection of claim 1.
Regarding claim 4, the combination of Qi and Dovrat discloses the method of claim 2, wherein the feature map comprises a map with dimensions of a number of points in the multidimensional point cloud by a number of feature dimensions into which the multidimensional point cloud is mapped (Qi pg. 10 C. Network Architecture and Training Details (Sec 5.1): The first transformation network is a mini-PointNet that takes raw point cloud as input and regresses to a 3 x 3 matrix. It’s composed of a shared MLP(64; 128; 1024) network (with layer output sizes 64, 128, 1024) on each point, a max pooling across points and two fully connected layers with output sizes 512, 256. The output matrix is initialized as an identity matrix).
Regarding claim 5, the combination of Qi and Dovrat discloses the method of claim 2, wherein the score for each respective point in the multidimensional point cloud is generated based on a global feature representing the multidimensional point cloud and a sum of scores for the respective point in each feature dimension in the feature map (Qi pg. 10 PointNet Segmentation Network: Local point features (the output after the second transformation network) and global feature (output of the max pooling) are concatenated for each point; pg. 15 Point Function Visualization: Our classification Point-Net computes K (we take K = 1024 in this visualization) dimension point features for each point and aggregates all the per-point local features via a max pooling layer into a single K-dim vector, which forms the global shape descriptor.).
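As a non-limiting illustration of the feature-map and global-feature operations addressed in claims 2, 4, and 5 above (a PointNet-style NumPy sketch; the shared linear layer and dimension choices are hypothetical, not taken from Qi):

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 1024, 64
cloud = rng.normal(size=(N, 3))

# N x K feature map: a shared (hypothetical) linear layer with ReLU per point.
feature_map = np.maximum(cloud @ rng.normal(size=(3, K)), 0.0)

# K-dim global feature: max pooling across all points (PointNet-style).
global_feature = feature_map.max(axis=0)

# Concatenate the global feature onto each point's local features (N x 2K),
# so each point sees both local and global information.
combined = np.concatenate(
    [feature_map, np.broadcast_to(global_feature, (N, K))], axis=1)

# Per-point score: the sum over the feature dimensions of the feature map.
point_scores = feature_map.sum(axis=1)
print(combined.shape, point_scores.shape)   # (1024, 128) (1024,)
```

The max pooling is the symmetric function that makes the pipeline invariant to the unordered nature of the point set.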
Regarding claim 8, the combination of Qi and Dovrat discloses the method of claim 1, wherein the one or more actions comprise classifying an input represented by the multidimensional point cloud as representative of one of a plurality of types of objects (Qi pg. 5, 5.1 Applications: 3D Object Classification Our network learns global point cloud feature that can be used for object classification).
Regarding claim 9, the combination of Qi and Dovrat discloses the method of claim 1, wherein the one or more actions comprise semantically segmenting an input image into a plurality of segments, each segment of the plurality of segments corresponding to a type of object in the input image (Qi pg. 5-6, 5.1 Applications: 3D Object Part Segmentation Part segmentation is a challenging fine-grained 3D recognition task. Given a 3D scan or a mesh model, the task is to assign part category label (e.g. chair leg, cup handle) to each point or face).
Regarding claim 10, the combination of Qi and Dovrat discloses the method of claim 1, wherein the multidimensional point cloud comprises a set of points having a plurality of spatial dimensions (Qi pg. 1 Introduction: each point is represented by just its three coordinates (x; y; z)).
Regarding claims 16-20 and 23-25 (drawn to a system):
The proposed combination of Qi and Dovrat, explained in the rejection of method claims 1-5 and 8-10, renders obvious the steps of the system of claims 16-20 and 23-25 because these steps occur in the operation of the proposed combination as discussed above. Thus, arguments similar to those presented above for claims 1-5 and 8-10 are equally applicable to claims 16-20 and 23-25.
Claims 6 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Qi and Dovrat as applied to claims 1 and 16 above, and further in view of Cuturi et al. (NPL: "Differentiable Ranks and Sorting using Optimal Transport").
Regarding claim 6, the combination of Qi and Dovrat discloses the method of claim 1, but fails to teach wherein ranking the points in the multidimensional point cloud comprises ranking the points in the multidimensional point cloud based on an optimal transport problem between an unordered ranking of points in the multidimensional point cloud to an ordered ranking of points in the multidimensional point cloud. However, Cuturi teaches this limitation (pg. 1 Abstract: we propose extended rank and sort operators by considering optimal transport (OT) problems (the natural relaxation for assignments) where the auxiliary measure can be any weighted measure supported on m increasing values, where m ≤ n; pg. 2: We show first that the sorting permutation σ for x can be recovered by solving an optimal assignment (OA) problem, from an input measure supported on all values in x to a second auxiliary target measure supported on any increasing family y = (y1 < ··· < yn); pg. 4 The Sinkhorn Ranking and Sorting Operators: We propose instead to rely on a differentiable variant of the OT problem that uses entropic regularization).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have implemented the teaching of ranking the points in the multidimensional point cloud based on an optimal transport problem between an unordered ranking of points and an ordered ranking of points from Cuturi into the method as disclosed by the combination of Qi and Dovrat. The motivation for doing so is to make the ranking operation differentiable, enabling it to be trained end to end.
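As context for the Sinkhorn-based relaxation cited above, the following sketch computes soft (differentiable) ranks by solving an entropically regularized optimal transport problem between the unordered input values and an increasing target grid. It is an illustrative reimplementation of the general idea only, not code from Cuturi; the epsilon, iteration count, and target grid are arbitrary choices:

```python
import numpy as np

def sinkhorn_soft_ranks(x, eps=0.01, iters=500):
    # Entropic OT between the values x (uniform weights) and an
    # increasing target grid y; small eps approaches hard ranks.
    n = len(x)
    y = np.linspace(0.0, 1.0, n)
    C = (x[:, None] - y[None, :]) ** 2        # squared-distance cost
    Kmat = np.exp(-C / eps)                   # Gibbs kernel
    u = np.ones(n)
    for _ in range(iters):                    # Sinkhorn scaling iterations
        v = (1.0 / n) / (Kmat.T @ u)
        u = (1.0 / n) / (Kmat @ v)
    P = u[:, None] * Kmat * v[None, :]        # transport plan (~ permutation / n)
    return n * (P @ np.arange(n))             # expected target index = soft rank

x = np.array([0.3, 0.9, 0.1, 0.5])
print(np.round(sinkhorn_soft_ranks(x)))      # [1. 3. 0. 2.]
```

Unlike `argsort`, the soft ranks vary smoothly with `x`, which is what allows a ranking step to sit inside a network trained by gradient descent.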
Regarding claim 21 (drawn to a system):
The proposed combination of Qi, Dovrat, and Cuturi, explained in the rejection of method claim 6, renders obvious the steps of the system of claim 21 because these steps occur in the operation of the proposed combination as discussed above. Thus, arguments similar to those presented above for claim 6 are equally applicable to claim 21.
Claims 7 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Qi and Dovrat as applied to claims 1 and 16 above, and further in view of Ngo et al. (US 2022/0375210).
Regarding claim 7, the combination of Qi and Dovrat discloses the method of claim 1, but fails to teach wherein selecting the top points from the ranked multidimensional point cloud comprises selecting top k points based on noise contrastive estimation over a plurality of subsets of multidimensional point clouds. However, Ngo teaches this limitation (¶52: In order to facilitate the learning of a mapping of inputs onto representations with this characteristic, ‘noise contrastive estimation’ (NCE) and a so-called InfoNCE loss are used in contrastive methods).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have implemented the teaching of selecting top k points based on noise contrastive estimation over a plurality of subsets of multidimensional point clouds from Ngo into the method as disclosed by the combination of Qi and Dovrat. The motivation for doing so is to improve the learning of a mapping of inputs onto representations.
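As context for the noise contrastive estimation (InfoNCE) loss referenced from Ngo, the following sketch computes a standard InfoNCE loss for one anchor, one positive, and several noise (negative) samples. The cosine-similarity scoring and temperature value are conventional choices assumed here, not taken from Ngo:

```python
import numpy as np

def info_nce_loss(anchor, positive, negatives, tau=0.1):
    # InfoNCE: cross-entropy of the positive pair against noise
    # (negative) pairs, using cosine similarity at temperature tau.
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    logits = np.array([cos(anchor, positive)]
                      + [cos(anchor, n) for n in negatives]) / tau
    logits -= logits.max()                    # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])                  # positive sits at index 0

rng = np.random.default_rng(2)
a = rng.normal(size=8)
# Low loss when the positive is near the anchor and negatives are noise.
loss_close = info_nce_loss(a, a + 0.01 * rng.normal(size=8),
                           [rng.normal(size=8) for _ in range(5)])
# High loss when a negative matches the anchor better than the positive.
loss_far = info_nce_loss(a, rng.normal(size=8),
                         [a.copy()] + [rng.normal(size=8) for _ in range(4)])
print(loss_close < loss_far)                  # True
```

In a contrastive selection scheme, such a loss would be evaluated over multiple sampled subsets, favoring subsets whose representations stay close to the full cloud's.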
Regarding claim 22 (drawn to a system):
The proposed combination of Qi, Dovrat, and Ngo, explained in the rejection of method claim 7, renders obvious the steps of the system of claim 22 because these steps occur in the operation of the proposed combination as discussed above. Thus, arguments similar to those presented above for claim 7 are equally applicable to claim 22.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KEVIN KY whose telephone number is (571) 272-7648. The examiner can normally be reached Monday-Friday, 9 AM-5 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vincent Rudolph can be reached at 571-272-8243. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KEVIN KY/ Primary Examiner, Art Unit 2671