Prosecution Insights
Last updated: April 19, 2026
Application No. 17/899,451

SYSTEM AND METHOD FOR PANOPTIC SEGMENTATION OF POINT CLOUDS

Non-Final OA §103
Filed: Aug 30, 2022
Examiner: ZHANG, FAN
Art Unit: 2682
Tech Center: 2600 — Communications
Assignee: Huawei Technologies Co., Ltd.
OA Round: 3 (Non-Final)
Grant Probability: 54% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 3y 1m
With Interview: 71%

Examiner Intelligence

Career Allow Rate: 54% (322 granted / 592 resolved; -7.6% vs TC avg)
Interview Lift: +16.5% (strong), across resolved cases with an interview
Avg Prosecution: 3y 1m typical; 43 applications currently pending
Total Applications: 635 career total, across all art units
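The headline numbers in this card follow from simple arithmetic on the examiner's resolved-case counts. A minimal sketch of that arithmetic (the function names are illustrative, not part of any real API):

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate: share of resolved cases that ended in a grant."""
    return granted / resolved

def with_interview(base_rate: float, lift: float) -> float:
    """Apply the observed interview lift (in probability points), capped at 1.0."""
    return min(base_rate + lift, 1.0)

base = allow_rate(322, 592)            # ~0.544, displayed as 54%
boosted = with_interview(base, 0.165)  # +16.5 points -> ~0.709, displayed as 71%
print(f"{base:.1%}, {boosted:.1%}")    # 54.4%, 70.9%
```

In other words, the 71% "With Interview" figure is just the career allow rate plus the 16.5-point lift, a simple additive adjustment rather than a separate model.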

Statute-Specific Performance

§101: 10.9% (-29.1% vs TC avg)
§103: 65.6% (+25.6% vs TC avg)
§102: 12.1% (-27.9% vs TC avg)
§112: 2.2% (-37.8% vs TC avg)
Tech Center average is an estimate • Based on career data from 592 resolved cases
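The per-statute figures can be read against the implied Tech Center baseline: subtracting each displayed delta from the examiner's rate recovers the TC-average estimate. A minimal sketch (the dict layout is illustrative, not a real data schema):

```python
# Per-statute rates shown above (percent), with the displayed delta
# vs. the Tech Center average. TC average = examiner rate - delta.
examiner = {"§101": 10.9, "§103": 65.6, "§102": 12.1, "§112": 2.2}
delta_vs_tc = {"§101": -29.1, "§103": 25.6, "§102": -27.9, "§112": -37.8}

for statute, rate in examiner.items():
    tc_avg = rate - delta_vs_tc[statute]
    print(f"{statute}: examiner {rate:.1f}% vs TC avg {tc_avg:.1f}%")
```

Every delta here implies the same TC-average estimate of about 40.0%, i.e. the four deltas are measured against a single TC-wide reference value.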

Office Action

§103
DETAILED ACTION

Notice of AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Request for Continued Examination

2. A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 02/27/2026 has been entered.

Response to Arguments

3. Applicant's remarks received on 02/27/2026 with respect to the amended independent claims have been acknowledged and are moot in view of a new ground of rejection necessitated by the corresponding amendment. Claims 1-20 currently remain rejected.

Claim Rejections - 35 USC § 103

4. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

5. Claims 1-3, 6-10, 13, 14, 15, 17, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Hong et al. (LiDAR-based Panoptic Segmentation via Dynamic Shifting Network) in view of Li et al. (LiDAR Panoptic Segmentation via Sparse Multi-directional Attention Clustering, 8/31/2021).
Regarding claim 1 (currently amended), Hong et al. teaches: A computer-implemented method for clustering-based panoptic segmentation of point clouds, comprising: extracting features of a point cloud that includes a plurality of points; identifying clusters of the plurality of points corresponding to objects from the features of the point cloud frame [page 2: p03]; and selectively shifting a subset of the plurality of points using the features and the clusters of the plurality of points via a neural network [page 5 (dynamically calculate shifting position based on distance between a ground truth center and dynamically calculated shift targets)].

Hong et al. describes dynamically shifting a subset of points using features, based on the distance between an object's center and the subset of points. Hong et al. does not disclose the treatment of neighboring object clusters when objects are close. In the same field of endeavor, Li et al. teaches: generating, by a neural network, a shifted set of points by selectively shifting a subset of the plurality of points using the features and the clusters of the plurality of points, wherein the neural network is trained to shift foreground points such that foreground points of the same instance object are close to each other and away from one or more other instance objects by [page 4: p03 (points belonging to the same object are aggregated); page 5: fig. 4 (points repelled from nearby instance)]: selecting a subset of points of a first object that are closer to points of a second object than a distance between centroids of the first and second objects, shifting the selected subset of points away from the second object [page 4: p04 (B. Centroid-aware Repel Loss)], and aggregating a remaining subset of points toward a centroid of the first object [page 3: p02 (shifting offsets for foreground points toward object centers); page 6: p06].
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the two teachings, incorporating Li et al.'s centroid-aware repelling into Hong's shifting structure, to improve separation between nearby objects and improve clustering of LiDAR points for better accuracy.

Regarding claim 2 (original), the rationale applied to the rejection of claim 1 has been incorporated herein. Hong et al. teaches: The computer-implemented method of claim 1, further comprising: mapping the plurality of points in the clusters into voxels [page 3: cylinder convolution]; for each voxel in which points of the clusters are located, determining a center of mass of at least regions extending from the voxel in each direction along at least two axes [page 12: implementation details]; and wherein the selectively shifting includes processing the center of mass and features of each region to identify a center of mass for the voxel [page 3: instance branch; page 4: 3.2].

Regarding claim 3 (original), the rationale applied to the rejection of claim 1 has been incorporated herein. Hong et al. further teaches: The computer-implemented method of claim 2, wherein a neighborhood region of voxels within a range of the voxel is also used to determine the center of mass for each voxel [page 3: instance branch].

Regarding claim 6 (original), the rationale applied to the rejection of claim 2 has been incorporated herein. Hong et al. further teaches: The computer-implemented method of claim 2, wherein the regions extend from the voxel in each direction along three axes [page 12: implementation details].

Regarding claim 7 (original), the rationale applied to the rejection of claim 6 has been incorporated herein.
Hong et al. further teaches: The computer-implemented method of claim 6, wherein a neighborhood region of voxels within a range of the voxel is also used to determine the center of mass for each voxel [page 3: instance branch].

Claims 8 (currently amended), 9, 10, 13, and 14 (original) have been analyzed and rejected with regard to claims 1-3, 6, and 7, respectively. Claims 15 (currently amended), 16, 17, and 20 (original) have been analyzed and rejected with regard to claims 1-3 and 6, respectively.

6. Claims 4, 5, 11, 12, 18, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Hong et al. (LiDAR-based Panoptic Segmentation via Dynamic Shifting Network) and Li et al. (LiDAR Panoptic Segmentation via Sparse Multi-directional Attention Clustering, 8/31/2021), and further in view of Jiang et al. (Dual-Set Point Grouping for 3D Instance Segmentation).

Regarding claim 4 (original), the rationale applied to the rejection of claim 1 has been incorporated herein. Encoding a point cloud to extract features would have been an obvious step for image segmentation under the teachings of both Hong et al. and Xu et al. Nevertheless, in the same field of endeavor, Jiang et al. teaches: The computer-implemented method of claim 2, wherein the extracting features includes encoding the point cloud, and wherein the identifying includes decoding the encoded point cloud and, for every point in the clusters of the plurality of points, predicting an offset to shift the point to a centroid of the object [abstract; page 4870: offset prediction branch; page 4871: p03 (U-net includes a decoder)]. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine all of these teachings to encode/decode the point cloud for identifying and managing the data set for segmentation.

Regarding claim 5 (original), the rationale applied to the rejection of claim 4 has been incorporated herein.
Xu et al. further teaches: The computer-implemented method of claim 4, wherein, for each voxel in which points of the clusters are located, the neural network generates a weight for each region that is used to scale the center of mass of the region [page 5: 3.3.4; page 6: 3.3.5].

Claims 11 and 12 (original) have been rejected with regard to claims 4 and 5, respectively. Claims 18 and 19 (original) are rejected with regard to claims 4 and 5, respectively.

Contact

7. Any inquiry concerning this communication or earlier communications from the examiner should be directed to FAN ZHANG, whose telephone number is (571) 270-3751. The examiner can normally be reached Mon-Fri, 9:00-5:00.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Benny Tieu, can be reached at 571-272-7490. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Fan Zhang/
Patent Examiner, Art Unit 2682
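The claim-1 scheme at issue in the rejection (aggregate an instance's points toward its own centroid while repelling points that sit close to a neighboring instance) can be sketched as follows. This is an illustrative reconstruction from the claim language quoted in the Office action, not code from Hong, Li, or the application itself; the function name and step size are hypothetical.

```python
import numpy as np

def shift_points(pts_a: np.ndarray, pts_b: np.ndarray, step: float = 0.5) -> np.ndarray:
    """Shift points of a first object (pts_a, shape (N, D)) per the claimed
    scheme: points of A lying closer to B's points than the inter-centroid
    distance are pushed away from B; the rest are pulled toward A's centroid."""
    c_a, c_b = pts_a.mean(axis=0), pts_b.mean(axis=0)
    centroid_dist = np.linalg.norm(c_a - c_b)

    # Minimum distance from each point of A to any point of B.
    d_to_b = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1).min(axis=1)
    near_b = d_to_b < centroid_dist  # the "selected subset" of claim 1

    shifted = pts_a.astype(float).copy()
    # Repel the selected subset away from the second object's centroid.
    away = shifted[near_b] - c_b
    away /= np.linalg.norm(away, axis=1, keepdims=True)
    shifted[near_b] += step * away
    # Aggregate the remaining subset toward the first object's centroid.
    shifted[~near_b] += step * (c_a - shifted[~near_b])
    return shifted
```

With two nearby clusters, the boundary points of the first object move apart from the second while its interior points contract toward the instance centroid, which is the separation behavior the examiner attributes to the Hong/Li combination.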

Prosecution Timeline

Aug 30, 2022 — Application Filed
May 02, 2025 — Non-Final Rejection (§103)
Aug 27, 2025 — Response Filed
Nov 27, 2025 — Final Rejection (§103)
Jan 30, 2026 — Response after Non-Final Action
Feb 27, 2026 — Request for Continued Examination
Mar 02, 2026 — Response after Non-Final Action
Mar 07, 2026 — Non-Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12582477
COMPUTER-IMPLEMENTED METHOD FOR DETERMINATION OF A BONE CEMENT VOLUME OF A BONE CEMENT FOR A PERCUTANEOUS VERTEBROPLASTY
2y 5m to grant • Granted Mar 24, 2026
Patent 12586277
QUASI-NEWTON MRI DEEP LEARNING RECONSTRUCTION
2y 5m to grant • Granted Mar 24, 2026
Patent 12579612
SYSTEM AND METHOD FOR CONVOLUTION OF AN IMAGE
2y 5m to grant • Granted Mar 17, 2026
Patent 12555364
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM
2y 5m to grant • Granted Feb 17, 2026
Patent 12548677
COMPUTER IMPLEMENTED METHOD FOR QUANTIFYING AND PREDICTING THE PROGRESSION OF INTERSTITIAL LUNG DISEASE
2y 5m to grant • Granted Feb 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 54%
With Interview: 71% (+16.5%)
Median Time to Grant: 3y 1m
PTA Risk: High
Based on 592 resolved cases by this examiner. Grant probability derived from career allow rate.
