Prosecution Insights
Last updated: April 19, 2026
Application No. 18/744,541

LEARNABLE DEFORMATION FOR POINT CLOUD SELF-SUPERVISED LEARNING

Non-Final OA (§103)

Filed: Jun 14, 2024
Examiner: HE, WEIMING
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: Qualcomm Technologies, Inc.
OA Round: 1 (Non-Final)

Grant Probability: 46% (Moderate)
Expected OA Rounds: 1-2
To Grant: 3y 4m
With Interview: 60%

Examiner Intelligence

Career Allow Rate: 46% (190 granted / 410 resolved; -15.7% vs TC avg)
Interview Lift: +13.8% across resolved cases with interview (moderate lift)
Typical Timeline: 3y 4m average prosecution; 40 applications currently pending
Career History: 450 total applications across all art units

Statute-Specific Performance

§101: 7.4% (-32.6% vs TC avg)
§103: 59.2% (+19.2% vs TC avg)
§102: 12.4% (-27.6% vs TC avg)
§112: 15.0% (-25.0% vs TC avg)

Tech Center averages are estimates. Based on career data from 410 resolved cases.
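A quick consistency check on the statute-specific figures: assuming each "vs TC avg" delta is simply the examiner's rate minus the Tech Center average (an assumption about the dashboard's formula, not a documented one), the implied TC average can be recovered for each statute.

```python
# Recover the implied Tech Center average from each statute-specific rate
# and its "vs TC avg" delta. Assumed relationship: delta = rate - TC average.
rates = {
    "§101": (7.4, -32.6),
    "§103": (59.2, 19.2),
    "§102": (12.4, -27.6),
    "§112": (15.0, -25.0),
}

for statute, (rate, delta) in rates.items():
    tc_avg = round(rate - delta, 1)  # implied Tech Center average, in percent
    print(f"{statute}: examiner {rate}% vs implied TC avg {tc_avg}%")
```

Under that assumption every statute implies the same 40.0% TC baseline, which suggests the four deltas were computed against a single Tech Center figure rather than per-statute averages.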

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 6/14/24 and 4/22/25 are being considered by the examiner.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Camous et al. (US 2023/0074860 A1) in view of Thomas et al. (KPConv: Flexible and Deformable Convolution for Point Clouds, cited by Camous).
As to Claim 1, Camous teaches An apparatus, comprising: one or more memories; and one or more processors coupled to the one or more memories and configured to:

obtain, with a backbone artificial neural network, an original feature map of point cloud data (Camous, [0079, 0105-0106] and Fig. 5B [figure omitted]);

deform the point cloud data, with a deformation artificial neural network, into a plurality of deformed point cloud objects based on the original feature map of point cloud data (Camous discloses a pooling function in [0070, 0081]; "One or both of the selected LiDAR point clouds may be transformed to align the respective LiDAR point clouds to account for different locations and/or orientations of the LiDAR sensor that captured the LiDAR point cloud" in [0092]; see also the misalignment transformation in [0119, 0122]. Thomas further discloses "we propose a deformable version of our convolution [7], which consists of learning local shifts applied to the kernel points (see Figure 3)… In contrast, KPConv combines features locally according to the 3D geometry, thus capturing the deformations of the surfaces" at p. 2; "We define the offsets ∆k(x) as the output of a rigid KPConv mapping Din input features to 3K values, as shown in Figure 3… With this loss, the network generates shifts that fit the local geometry of the input point cloud" at p. 4);

combine the plurality of deformed point cloud objects into a mixed point cloud (Camous discloses "For example, the at least one functional network 510E, 510F, 510G may receive one feature map in the case that the at least two LiDAR point clouds are merged… The at least one functional network may not include the concatenation network 510E in the case that the at least two LiDAR point clouds are processed as a merged point cloud" in [0107]);

extract, with the backbone artificial neural network, a mixed feature map from the mixed point cloud; extract a plurality of deformed feature maps from the plurality of deformed point cloud objects (Camous discloses "The classifier network may extract features from the pair of LiDAR point clouds, and compute a probability score of the pair of LiDAR point clouds being aligned or misaligned" in [0022]; "The feature backbone 510C may be a feature extraction network" in [0106]; see also [0096]); and

compute, with a contrastive module, a loss for the backbone artificial neural network and for the deformation artificial neural network based on the mixed feature map and the plurality of deformed feature maps (Camous discloses "The output datasets may include a binary classification (aligned or misaligned), a probability score, and/or a confidence score, and the like." in [0098]; "For instance, the classifier network 504 may be trained on training data including labeled sets of misaligned and aligned pairs of LiDAR point clouds with appropriate loss functions providing feedback to adjust the classifier network 504." in [0117]; see also Fig. 7B. Thomas further discloses "KP-CNN training. We use a Momentum gradient Descent optimizer to minimize a cross-entropy loss… In the case of deformable kernels, the regularization loss is added to the output loss" at p. 11; [figure omitted] at p. 4.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the invention of Camous with the teaching of Thomas so as to generate shifts to fit the local geometry of the input point cloud with a loss (Thomas, p. 4).

As to Claim 2, Camous in view of Thomas teaches The apparatus of claim 1, in which the deformation artificial neural network comprises a multilayer perceptron (Camous discloses "In some embodiments, perception system 402, planning system 404, localization system 406, and/or control system 408 implement at least one machine learning model (e.g., at least one multilayer perceptron (MLP), at least one convolutional neural network (CNN), at least one recurrent neural network (RNN), at least one autoencoder, at least one transformer, and/or the like)" in [0066]. See also Thomas's Pointwise MLP networks at p. 2.)

As to Claim 3, Camous in view of Thomas teaches The apparatus of claim 1, in which the one or more processors is further configured to combine the plurality of deformed point cloud objects by performing linear interpolation between the plurality of deformed point cloud objects (Thomas discloses "bilinear interpolation" at p. 4.)

As to Claim 4, Camous in view of Thomas teaches The apparatus of claim 1, in which the one or more processors is further configured to deform the point cloud data by mapping each point cloud feature of the original feature map of point cloud data to a control point perturbation to obtain a set of control point perturbations in a lattice and to deform the lattice (Thomas discloses mapping input points to a set of kernel points in Fig. 1; "Furthermore, we propose a deformable version of our convolution [7], which consists of learning local shifts applied to the kernel points (see Figure 3)" at p. 2; see also kernel points at p. 13.)
As to Claim 5, Camous in view of Thomas teaches The apparatus of claim 1, in which the one or more processors is further configured to optimize the backbone artificial neural network and the deformation artificial neural network based on the loss (Thomas discloses "To tackle this behaviour, we propose a "fitting" regularization loss which penalizes the distance between a kernel point and its closest neighbor among the input neighbors… With this loss, the network generates shifts that fit the local geometry of the input point cloud. We show this effect in the supplementary material" at p. 4; see also Fig. 11.)

As to Claim 6, Camous in view of Thomas teaches The apparatus of claim 5, in which the one or more processors is further configured to compute the loss based on a regularization that penalizes similarity between the plurality of deformed point cloud objects (Thomas discloses "To tackle this behaviour, we propose a "fitting" regularization loss which penalizes the distance between a kernel point and its closest neighbor among the input neighbors. In addition, we also add a "repulsive" regularization loss between all pair off kernel points when their influence area overlap, so that they do not collapse together… With this loss, the network generates shifts that fit the local geometry of the input point cloud. We show this effect in the supplementary material" at p. 4; see also Fig. 11.)

As to Claim 7, Camous in view of Thomas teaches The apparatus of claim 1, in which the deformation artificial neural network comprises two instances of an artificial neural network (Camous, [0104] and Fig. 5B.)

Claim 8 recites similar limitations as claim 1 but in a method form. Therefore, the same rationale used for claim 1 is applied.

Claim 9 is rejected based upon similar rationale as Claim 2.
Claim 10 is rejected based upon similar rationale as Claim 3.
Claim 11 is rejected based upon similar rationale as Claim 4.
Claim 12 is rejected based upon similar rationale as Claim 5.
Claim 13 is rejected based upon similar rationale as Claim 6.
Claim 14 is rejected based upon similar rationale as Claim 7.

Claim 15 recites similar limitations as claim 1 but in a computer readable medium form. Therefore, the same rationale used for claim 1 is applied.

Claim 16 is rejected based upon similar rationale as Claim 2.
Claim 17 is rejected based upon similar rationale as Claim 3.
Claim 18 is rejected based upon similar rationale as Claim 4.
Claim 19 is rejected based upon similar rationale as Claim 5.
Claim 20 is rejected based upon similar rationale as Claim 6.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to WEIMING HE whose telephone number is (571) 270-1221. The examiner can normally be reached on Monday-Friday, 8:30am-5:00pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Tammy Goddard, can be reached on 571-272-7773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/WEIMING HE/
Primary Examiner, Art Unit 2611
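To make the claim-1 architecture that the rejection maps against Camous and Thomas easier to follow, here is a minimal sketch of the claimed pipeline: backbone feature extraction, feature-conditioned deformation into multiple point cloud objects, interpolation-based mixing (per claim 3), and a loss over the mixed and per-view feature maps. This is an illustrative reconstruction from the claim language only; the networks are stubbed with random projections, the function names are invented for the sketch, and the loss shown is a toy MSE term, not the application's actual contrastive objective.

```python
import numpy as np

rng = np.random.default_rng(0)
W_BACKBONE = rng.standard_normal((3, 8))  # shared weights for the backbone stub

def backbone(points):
    """Stub for the backbone network: per-point features via a fixed projection."""
    return points @ W_BACKBONE                       # (N, 8) feature map

def deform(points, features, n_views=2):
    """Stub deformation network: per-point offsets conditioned on features."""
    views = []
    for _ in range(n_views):
        W = rng.standard_normal((features.shape[1], 3)) * 0.01
        views.append(points + features @ W)          # a deformed point cloud object
    return views

def mix(views, lam=0.5):
    """Claim 3 style combination: linear interpolation between deformed objects."""
    return lam * views[0] + (1 - lam) * views[1]

def toy_loss(mixed_fmap, view_fmaps):
    """Toy stand-in for the contrastive module: MSE between mixed and view features."""
    return sum(np.mean((mixed_fmap - f) ** 2) for f in view_fmaps)

points = rng.standard_normal((128, 3))               # original point cloud data
fmap = backbone(points)                              # original feature map
views = deform(points, fmap)                         # plurality of deformed objects
mixed = mix(views)                                   # mixed point cloud
loss = toy_loss(backbone(mixed), [backbone(v) for v in views])
print(f"loss = {loss:.6f}")
```

The sketch makes the claim's dataflow concrete: both the mixed cloud and each deformed object pass through the same backbone, and one loss term sees both, which is what lets a single objective train the backbone and the deformation network jointly.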

Prosecution Timeline

Jun 14, 2024: Application Filed
Jan 14, 2026: Non-Final Rejection (§103)
Mar 10, 2026: Examiner Interview Summary
Mar 10, 2026: Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12567135: MULTIMEDIA PLAYBACK MONITORING SYSTEM AND METHOD, AND ELECTRONIC APPARATUS
Granted Mar 03, 2026 (2y 5m to grant)

Patent 12561876: System and method for an audio-visual avatar creation
Granted Feb 24, 2026 (2y 5m to grant)

Patent 12514672: System, Method And Software Program For Aiding In Positioning Of Objects In A Surgical Environment
Granted Jan 06, 2026 (2y 5m to grant)

Patent 12494003: AUTOMATIC LAYER FLATTENING WITH REAL-TIME VISUAL DEPICTION
Granted Dec 09, 2025 (2y 5m to grant)

Patent 12468949: SYSTEMS AND METHODS FOR FEW-SHOT TRANSFER LEARNING
Granted Nov 11, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 46%
With Interview: 60% (+13.8%)
Median Time to Grant: 3y 4m
PTA Risk: Low
Based on 410 resolved cases by this examiner. Grant probability derived from career allow rate.
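Since the note says the grant probability is derived from the career allow rate, the projection arithmetic can be reproduced directly from the figures above. This is a sketch of one plausible derivation, assuming the dashboard simply adds the observed interview lift to the base allow rate; the actual product formula is not disclosed.

```python
# Reproduce the headline projections from the examiner's career data.
# Assumption: "With Interview" = career allow rate + observed interview lift.
granted = 190            # career grants (from the examiner stats above)
resolved = 410           # resolved cases (grants + abandonments)
interview_lift = 0.138   # +13.8% lift observed with an interview

allow_rate = granted / resolved
base_probability = round(allow_rate * 100, 1)                    # base grant probability
with_interview = round((allow_rate + interview_lift) * 100, 1)   # interview-adjusted

print(f"Grant probability: {base_probability}%")
print(f"With interview:    {with_interview}%")
```

This recovers roughly 46.3% base and 60.1% with interview, matching the rounded 46% and 60% shown in the projections.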
