Prosecution Insights
Last updated: April 19, 2026
Application No. 18/518,513

INDIVIDUAL-TREE SEGMENTATION METHOD OF UAV LIDAR POINT CLOUD BASED ON CANOPY MORPHOLOGY

Non-Final OA (§102/§103)
Filed: Nov 23, 2023
Examiner: ELLIOTT, JORDAN MCKENZIE
Art Unit: 2666
Tech Center: 2600 — Communications
Assignee: Yangtze Delta Region Institute (Huzhou) University of Electronic Science and Technology of China
OA Round: 1 (Non-Final)
Grant Probability: 45% (Moderate)
OA Rounds: 1-2
To Grant: 2y 10m
With Interview: 31%

Examiner Intelligence

Career Allow Rate: 45% of resolved cases (9 granted / 20 resolved; -17.0% vs TC avg)
Interview Lift: -13.7% (minimal, roughly -14% lift for resolved cases with interview vs. without)
Avg Prosecution (typical timeline): 2y 10m
Currently Pending: 40
Total Applications (career history, across all art units): 60

Statute-Specific Performance

§101: 8.9% (-31.1% vs TC avg)
§103: 53.3% (+13.3% vs TC avg)
§102: 27.1% (-12.9% vs TC avg)
§112: 10.7% (-29.3% vs TC avg)

TC averages are estimates • Based on career data from 20 resolved cases

Office Action

§102 §103
DETAILED ACTION

Claims 1-7 are pending in this application and have been examined under the priority date of 11/09/2023 in accordance with the applicant's claim for foreign priority.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d). All certified copies have been received.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless - (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 1-3 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Ma (Ma, K.; Xiong, Y.; Jiang, F.; Chen, S.; Sun, H. A Novel Vegetation Point Cloud Density Tree-Segmentation Model for Overlapping Crowns Using UAV LiDAR. Remote Sens. 2021, 13, 1442).
Regarding claim 1, Ma discloses: An individual-tree segmentation method of unmanned aerial vehicle (UAV) Light Detection and Ranging (LiDAR) point cloud based on canopy morphology, comprising:

acquiring UAV LiDAR point cloud data (Ma, Figure 1, step 1, collection of field data and UAV-ALS point cloud data), and preprocessing the UAV LiDAR point cloud data to obtain a canopy height model (CHM) and a point cloud density model (PDM) (Ma, Page 3, paragraph 3, and Figure 1: the data is processed into a canopy height model (CHM) and a vegetation point cloud density model (VPCDM), where the VPCDM is analogous to the claimed PDM);

[Image excerpts: Ma, Page 3, column 3, and Ma, Figure 1]

extracting canopies and obtaining a set of tree tops according to the CHM and the PDM (Ma, page 6, section 2.2, Vegetation Point Cloud Density Model and Local Maximum Algorithm, paragraph 2: tree canopies and tree tops can be extracted using the CHM and the VPCDM (analogous to the PDM));

determining each correct segmentation canopy and each wrong segmentation canopy of the canopies according to the number of local density maximum points in each of the canopies (Ma, Page 9, section 3.1, Results, single tree detection: the segmentations were assessed for correct detection segmentations, missing detection segmentations, and over detection segmentations (correct segmentations vs. wrong segmentations), determined based on the maximum points detected in the point cloud data);

[Image excerpts: Ma, Results section, and Ma, Figure 4]

and performing fine segmentation on each wrong segmentation canopy according to canopy morphology to update the set of tree tops and thereby obtain an updated set of tree tops (Ma, page 6, section 2.3, Improved Watershed Segmentation Algorithm: the study used an improved watershed segmentation algorithm which examines the overlapped, initially segmented data and re-segments it, where the watershed method is a fine segmentation method, therefore the data is re-segmented using a fine segmentation method; further, on Page 7, paragraph 1, the re-segmentation results are used to update the canopy segmentations (updated tree tops) with higher accuracy);

[Image excerpts: Ma, page 6, section 2.3, and Ma, page 7, section 2.3 continued; emphasis added]

taking each tree top of the updated set of tree tops as a seed point, and performing region growing on each tree top to thereby obtain a final individual-tree segmentation result (Ma, Page 7, paragraph 2: the canopy heights of the crowns of the trees that were previously detected and segmented are segmented again using the watershed method based on the geometry of the tree tops, where the watershed method inherently uses region growing and the tree top geometry would be the seed points).

Regarding claim 2, Ma discloses: The individual-tree segmentation method of UAV LiDAR point cloud based on canopy morphology as claimed in claim 1, wherein the preprocessing the UAV LiDAR point cloud data to obtain a CHM and a PDM comprises (Ma, Page 3, paragraph 3, and Figure 1: the data is processed into a CHM and a VPCDM, where the VPCDM is analogous to the claimed PDM):

performing ground filtering on the UAV LiDAR point cloud data to extract ground points and non-ground points from the UAV LiDAR point cloud data (Ma, page 6, section 2.2, Vegetation Point Cloud Density Model and Local Maximum Algorithm, paragraph 1: the vegetation point cloud density model (VPCDM) is filtered to determine whether the points are ground points or vegetation points (non-ground points));

[Image excerpt: Ma, Page 6, emphasis added]

performing an interpolating process on the ground points to obtain a digital elevation model (DEM) (Ma, page 6, section 2.2, paragraph 1: a DEM is generated from interpolated ground points);

normalizing the non-ground points using the DEM to obtain normalized non-ground points (Ma, page 6, section 2.2: all vegetation (non-ground) points in the point cloud are normalized, and the DEM points were used to generate a normalized model); performing an interpolating process on the normalized non-ground points to obtain the CHM (Ma, Page 2, paragraph 4: the canopy height model is generated from point cloud data and a digital surface model, where polynomial fitting (interpolation) of canopy (non-ground) points is used to generate the CHM); and, for each grid unit of the CHM, recording the number of projection points falling in the grid unit as a new value of the grid unit, to thereby generate the PDM (Ma, page 6, section 2.2: the vegetation point cloud (analogous to the CHM) is projected onto a normalized grid, where the points falling in each neighborhood unit (points falling within the grid) are used as the VPCDM (analogous to the PDM per the interpretation made by the examiner above in claim 1)).
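For readers following the mapping, the grid-counting step discussed above (recording the number of projected points falling in each grid unit to build the PDM/VPCDM raster) can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the applicant's or Ma's actual implementation; the function name, the 0.2 m default cell size, and the extent handling are assumptions.

```python
import numpy as np

def point_density_model(points_xy, cell_size=0.2, extent=None):
    """Count projected points per grid cell (a sketch of a PDM/VPCDM raster).

    points_xy : (N, 2) array of x/y coordinates of normalized non-ground points.
    Returns a 2-D integer array whose value in each grid unit is the number
    of points whose projection falls inside that unit.
    """
    points_xy = np.asarray(points_xy, dtype=float)
    if extent is None:
        xmin, ymin = points_xy.min(axis=0)
        xmax, ymax = points_xy.max(axis=0)
    else:
        xmin, ymin, xmax, ymax = extent
    nx = int(np.ceil((xmax - xmin) / cell_size)) or 1
    ny = int(np.ceil((ymax - ymin) / cell_size)) or 1
    # map each point to a grid column/row, clamping boundary points inward
    cols = np.clip(((points_xy[:, 0] - xmin) / cell_size).astype(int), 0, nx - 1)
    rows = np.clip(((points_xy[:, 1] - ymin) / cell_size).astype(int), 0, ny - 1)
    pdm = np.zeros((ny, nx), dtype=int)
    np.add.at(pdm, (rows, cols), 1)  # accumulate point counts per cell
    return pdm
```

Each raster value is then the per-cell point count, which is what the claim language ("recording the number of projection points falling in the grid unit as a new value of the grid unit") describes.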
[Image excerpt: Ma, page 6, emphasis added]

Regarding claim 3, Ma discloses: The individual-tree segmentation method of UAV LiDAR point cloud based on canopy morphology as claimed in claim 1, wherein the extracting canopies and obtaining a set of tree tops according to the CHM and the PDM comprises (Ma, page 6, section 2.2, Vegetation Point Cloud Density Model and Local Maximum Algorithm, paragraph 2: tree canopies and tree tops can be extracted using the CHM and the VPCDM (analogous to the PDM)):

smoothing each of the CHM and the PDM to thereby obtain a smoothed CHM and a smoothed PDM (Ma, page 6, section 2.2, paragraph 2: the VPCDM (PDM, as interpreted by the examiner above in claim 1) points are filtered and normalized to remove interference from terrain fluctuations, which is analogous to smoothing of the VPCDM (PDM); further, page 2, paragraph 4 notes that the CHM undergoes polynomial fitting, which is a smoothing method, and therefore the CHM generated would be a smoothed model);

[Image excerpt: Ma, page 2]

determining local maximum points of the smoothed CHM and local maximum points of the smoothed PDM (Ma, page 2, paragraph 4: the CHM undergoes local maximum clustering (after polynomial fitting/smoothing) to identify local maximum points (tree tops) to use as seed points; Ma, page 6, section 2.2, paragraph 2: the local maximum algorithm is used to detect the local maxima of the set of points); taking the local maximum points of the CHM as tree tops (Ma, page 2, paragraph 4, as above); and taking the local maximum points of the PDM as local maximum density points of point cloud (Ma, page 6, section 2.2, paragraph 2: the local maximum algorithm detects the local maxima of the set of points, where the local maxima are local maximum density points of the set);

taking each of the tree tops as a seed point to perform region growing on the CHM (Ma, page 2, paragraph 4: the seed points are used in segmentation, which uses region growing), and assigning each adjacent grid point of the seed point satisfying a set condition to a canopy to which the seed point belongs (Ma, page 7, paragraph 2: maximum heights are taken as the seed points, and adjacent points belonging to the same tree as the seed point are taken as canopy points); and taking the canopy as an independent area, wherein the canopy has only one maximum height as a tree top of the canopy (Ma, page 7, paragraph 2: the maximum points are taken as the tree top, as shown in Figure 3).

[Image excerpts: Ma, page 7, and Ma, Figure 3]

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

2. Claims 4 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Ma (Ma, K.; Xiong, Y.; Jiang, F.; Chen, S.; Sun, H. A Novel Vegetation Point Cloud Density Tree-Segmentation Model for Overlapping Crowns Using UAV LiDAR. Remote Sens. 2021, 13, 1442) and in further view of Morsdorf (Felix Morsdorf, Erich Meier, Benjamin Kötz, Klaus I. Itten, Matthias Dobbertin, Britta Allgöwer, LIDAR-based geometric reconstruction of boreal type forest stands at single tree level for forest and wildland fire management, Remote Sensing of Environment, Volume 92, Issue 3, 2004, Pages 353-36,).
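The local-maximum tree-top detection mapped to claim 3 above (finding cells of a smoothed CHM or PDM raster that exceed all eight of their neighbors) can be sketched as follows. This is a pure-NumPy illustration under assumed names, not code from any of the cited references.

```python
import numpy as np

def local_maxima(raster, min_value=0.0):
    """Return (row, col) indices of cells strictly greater than their
    8 neighbours: a sketch of local-maximum tree-top detection on a
    smoothed CHM or PDM raster."""
    padded = np.pad(np.asarray(raster, dtype=float), 1,
                    mode="constant", constant_values=-np.inf)
    center = padded[1:-1, 1:-1]
    is_max = center > min_value          # ignore background cells
    # compare the center against each of the 8 neighbour shifts
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue
            neighbour = padded[1 + dr:padded.shape[0] - 1 + dr,
                               1 + dc:padded.shape[1] - 1 + dc]
            is_max &= center > neighbour
    return list(zip(*np.nonzero(is_max)))
```

The detected indices would serve as the tree tops (on the CHM) or the local maximum density points (on the PDM) that the claim 3 mapping refers to.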
Regarding claim 4, Ma does not disclose: The individual-tree segmentation method of UAV LiDAR point cloud based on canopy morphology as claimed in claim 3, wherein the setting condition for assigning the adjacent grid point to the canopy to which the seed point belongs comprises: 1) a height of the adjacent grid point is higher than 60% of a height of the seed point; 2) a height of the adjacent grid point is higher than 60% of an average height of the canopy; and 3) a maximum canopy width of the canopy does not exceed 10 meters.

However, in the same field of endeavor of tree canopy segmentation, Morsdorf teaches: The individual-tree segmentation method of UAV LiDAR point cloud based on canopy morphology as claimed in claim 3, wherein the setting condition for assigning the adjacent grid point to the canopy to which the seed point belongs comprises:

1) a height of the adjacent grid point is higher than 60% of a height of the seed point (Morsdorf, page 357, column 1, paragraph 2: the grid points of the adjacent clusters are included in the canopy if they have a height smaller than the seed point; column 2: the points are clustered with a cutoff point 1 meter above the ground, such that all the points clustered as part of the canopy meet a height requirement; page 358, column 1, section 4: the crown base height used in clustering is points within the 95th percentile of the tree height (seed point height), which satisfies the requirement for canopy points to be greater than 60% of the seed point height);

2) a height of the adjacent grid point is higher than 60% of an average height of the canopy (Morsdorf, page 357, column 2: the clustering uses the tree crown height (canopy height) and assigns points within 3 meters of the canopy height as part of the canopy cluster, where per page 358, column 1, section 4, the crown/canopy height used here is the top 5% of the tree itself, meaning the point must fall within 3 meters of the top 5% of the canopy, which satisfies the need for it to be 60% or higher than the average canopy height); and

3) a maximum canopy width of the canopy does not exceed 10 meters (Morsdorf, page 358, column 1, paragraph 2: crown diameters (canopy widths) range from 1.5-3 meters, indicating the maximum is not greater than 10 meters).

The combination of Ma and Morsdorf would have been obvious to one of ordinary skill in the art prior to the effective filing date of the presently claimed invention. Ma teaches a segmentation algorithm to segment and measure the tree canopies of individual trees from LiDAR point cloud data. Morsdorf teaches a segmentation model for LiDAR point cloud data that clusters the points to generate segmentations of the trees and measurements of their canopies. The addition of the clustering method of Morsdorf to the segmentation method of Ma would be advantageous because these parameters allow for reconstruction and monitoring of the vegetation with higher accuracy than models without point clustering parameters.
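Claim 4's three admission conditions translate directly into a predicate. The sketch below is illustrative only (the function name and signature are assumptions); the 60% thresholds and the 10 m cap come from the claim language itself.

```python
def satisfies_growth_condition(neighbor_height, seed_height,
                               canopy_mean_height, canopy_width_m,
                               max_width_m=10.0):
    """Sketch of claim 4's three region-growing admission conditions:
    1) neighbor higher than 60% of the seed (tree top) height,
    2) neighbor higher than 60% of the canopy's average height,
    3) canopy width capped (10 m per the claim)."""
    return (neighbor_height > 0.6 * seed_height
            and neighbor_height > 0.6 * canopy_mean_height
            and canopy_width_m <= max_width_m)
```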
(Morsdorf, page 354, columns 1 and 2, and page 360, section 4.4, columns 1 and 2)

Regarding claim 7, the combination of Ma and Morsdorf teaches: The individual-tree segmentation method of UAV LiDAR point cloud based on canopy morphology as claimed in claim 4, wherein the taking each tree top of the updated set of tree tops as a seed point, and performing region growing on each tree top to thereby obtain a final individual-tree segmentation result (Ma, Page 7, paragraph 2: the canopy heights of the crowns of the trees that were previously detected and segmented are segmented again using the watershed method based on the geometry of the tree tops, where the watershed method inherently uses region growing and the tree top geometry would be the seed points), comprises:

taking each tree top of the updated set of tree tops as a seed pixel (Ma, Page 7, paragraph 2, as above); comparing the seed pixel with each pixel in a surrounding neighborhood of the seed pixel (Morsdorf, page 357, column 1, paragraph 2: the algorithm clusters points based on the characteristics of the neighbors of the seed pixel); and, in a situation of the pixel satisfying the set condition, merging the seed pixel and the pixel to obtain a merged pixel (Morsdorf, Page 357, column 2: the pixel clustering is performed based on criteria determined from the sum of values of the neighbor pixels and on cutoff distance thresholding; when the condition is satisfied, the pixel is merged into the cluster), and taking the merged pixel as a new seed pixel to continue to grow outward until there is no pixel satisfying the set condition (Morsdorf, Page 357, column 2: as above, and cluster centroids are recalculated during clustering to update the seed points based on the clustering to improve accuracy).

The combination of Ma and Morsdorf would have been obvious to one of ordinary skill in the art prior to the effective filing date of the presently claimed invention. Ma teaches a segmentation algorithm to segment and measure the tree canopies of individual trees from LiDAR point cloud data. Morsdorf teaches a segmentation model for LiDAR point cloud data that clusters the points to generate segmentations of the trees and measurements of their canopies. The addition of the clustering method of Morsdorf to the segmentation method of Ma would be advantageous because these parameters allow for reconstruction and monitoring of the vegetation with higher accuracy than models without point clustering parameters. (Morsdorf, page 354, columns 1 and 2, and page 360, section 4.4, columns 1 and 2)

3. Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Ma (Ma, K.; Xiong, Y.; Jiang, F.; Chen, S.; Sun, H. A Novel Vegetation Point Cloud Density Tree-Segmentation Model for Overlapping Crowns Using UAV LiDAR. Remote Sens. 2021, 13, 1442) and in further view of Lisiewicz (Maciej Lisiewicz, Agnieszka Kamińska, Krzysztof Stereńczak, Recognition of specified errors of Individual Tree Detection methods based on Canopy Height Model, Remote Sensing Applications: Society and Environment, Volume 25, 2022, 100690, ISSN 2352-9385).
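The grow-merge-repeat loop recited in claim 7 reads like classic seeded region growing. A hedged sketch follows, using a 4-neighborhood and one of the claim 4 height conditions as the "set condition"; all names are illustrative, and this is not the applicant's or the references' implementation.

```python
import numpy as np
from collections import deque

def region_grow(chm, seeds, height_fraction=0.6):
    """Breadth-first region growing from seed pixels on a CHM raster, a
    sketch of the claim 7 loop: compare each neighbour of the current
    pixel, merge it when it satisfies the set condition, and keep growing
    outward until no pixel qualifies. The 60%-of-seed-height condition is
    borrowed from claim 4. Labels start at 1; 0 means unassigned."""
    chm = np.asarray(chm, dtype=float)
    labels = np.zeros(chm.shape, dtype=int)
    for label, (sr, sc) in enumerate(seeds, start=1):
        labels[sr, sc] = label
        seed_h = chm[sr, sc]
        queue = deque([(sr, sc)])
        while queue:
            r, c = queue.popleft()
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < chm.shape[0] and 0 <= nc < chm.shape[1]
                        and labels[nr, nc] == 0
                        and chm[nr, nc] > height_fraction * seed_h):
                    labels[nr, nc] = label   # merge pixel into this crown
                    queue.append((nr, nc))   # merged pixel grows outward
    return labels
```

Each tree top of the updated set would be passed in as a seed, and the returned label raster is the per-crown segmentation.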
Regarding claim 5, Ma discloses: The individual-tree segmentation method of UAV LiDAR point cloud based on canopy morphology as claimed in claim 1, wherein the wrong segmentation canopy comprises an over-segmentation canopy and an under-segmentation canopy (Ma, Page 8, section 2.4, Accuracy Evaluation Method: the numbers of correct segmentations, missing segmentations (under-segmentations), and over-segmentations are computed, indicating there is at least one of each of an under- and an over-segmentation); and wherein the determining each correct segmentation canopy and each wrong segmentation canopy of the canopies according to the number of local density maximum points in each of the canopies (Ma, Page 9, section 3.1, Results, single tree detection: the segmentations were assessed for correct detection segmentations, missing detection segmentations, and over detection segmentations (correct vs. wrong segmentations), determined based on the maximum points detected in the point cloud data), comprises: in a situation that there is only one local maximum density point in a canopy, determining the canopy as the correct segmentation canopy (Ma, page 9, section 3.1: the segmentation algorithm's detection results are shown for a correct detection in Figure 4, where there is one maximum point detected (yellow) and one measured (red), so the segmentation detection is correct).

Ma does not teach: in a situation that there is no local maximum density point in the canopy, determining the canopy as the over-segmentation canopy; in a situation that there are two or more local maximum density points in the canopy, determining the canopy as the under-segmentation canopy; and recording a tree top and a local maximum density point of each wrong segmentation canopy.
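The claim 5 rule is a simple count-based classification of each canopy by its number of local maximum density points. As a sketch (the function name and labels are illustrative, taken from the claim's terminology):

```python
def classify_canopy(num_density_maxima):
    """Sketch of the claim 5 rule: exactly one local maximum density point
    means a correct segmentation canopy, none means over-segmentation,
    and two or more means under-segmentation."""
    if num_density_maxima == 1:
        return "correct"
    if num_density_maxima == 0:
        return "over-segmentation"
    return "under-segmentation"
```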
In the same field of endeavor, Lisiewicz teaches: in a situation that there is no local maximum density point in the canopy, determining the canopy as the over-segmentation canopy (Lisiewicz, section 2.5.1: over-segmentation is classified as a situation where one tree is split into multiple segments, the segments not containing the local maximum point); in a situation that there are two or more local maximum density points in the canopy, determining the canopy as the under-segmentation canopy (Lisiewicz, section 2.5.1: under-segmentation situations are when multiple trees are grouped and segmented together as one, where the trees will have multiple maxima points in one detected area); and recording a tree top and a local maximum density point of each wrong segmentation canopy (Lisiewicz, section 2.5.1: the wrong-segmentation features, which included the local maximum height point, were recorded, as shown in Table 1).

[Image excerpts: Lisiewicz, Section 2.5.1 and Table 1]

The combination of Ma and Lisiewicz would have been obvious to one of ordinary skill in the art prior to the effective filing date of the presently claimed invention. Ma teaches a method of segmentation of tree canopies, and while Ma teaches determination of under-segmentation and over-segmentation, it does not teach performing this using the maximum height points. Lisiewicz teaches this deficiency; the addition of the wrong-segmentation classification of Lisiewicz would have been beneficial in that it provides a framework for identifying the type of wrong segmentation, improving the detection and segmentation of trees by allowing for adjusted thresholding and segmentation algorithms based on the error type. (Lisiewicz, abstract and section 2.5)

4. Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Ma (Ma, K.; Xiong, Y.; Jiang, F.; Chen, S.; Sun, H. A Novel Vegetation Point Cloud Density Tree-Segmentation Model for Overlapping Crowns Using UAV LiDAR. Remote Sens. 2021, 13, 1442) in view of Lisiewicz (Maciej Lisiewicz, Agnieszka Kamińska, Krzysztof Stereńczak, Recognition of specified errors of Individual Tree Detection methods based on Canopy Height Model, Remote Sensing Applications: Society and Environment, Volume 25, 2022, 100690, ISSN 2352-9385), and in further view of Yang (J. Yang, Z. Kang, S. Cheng, Z. Yang and P. H. Akwensi, "An Individual Tree Segmentation Method Based on Watershed Algorithm and Three-Dimensional Spatial Distribution Analysis From Airborne LiDAR Point Clouds," in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 13, pp. 1055-1067, 2020) and Sterenczak (Krzysztof Stereńczak, Bartłomiej Kraszewski, Miłosz Mielcarek, Żaneta Piasecka, Maciej Lisiewicz, Marco Heurich, Mapping individual trees with airborne laser scanning data in an European lowland forest using a self-calibration algorithm, International Journal of Applied Earth Observation and Geoinformation, Volume 93, 2020).
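Claim 6 (treated next) recites a covariance-based elongation test: project the crown points onto the xoy plane, form cov = (1/(n-1)) (P - p̄)(P - p̄)^T, decompose it, and compare the major/minor ellipse-axis eigenvalues k1/k2 against 3. The sketch below illustrates that computation numerically; the function names are assumptions, not from any cited reference.

```python
import numpy as np

def crown_elongation_ratio(points_xy):
    """Sketch of claim 6's elongation test: project crown points onto the
    xoy plane, form cov = (1/(n-1)) (P - pbar)(P - pbar)^T, decompose it,
    and return k1/k2 (major over minor ellipse-axis eigenvalue)."""
    P = np.asarray(points_xy, dtype=float)          # (n, 2) projected points
    centered = P - P.mean(axis=0)                   # P - pbar in each dimension
    cov = centered.T @ centered / (len(P) - 1)      # 2x2 covariance matrix
    # for a symmetric PSD matrix, singular values equal the eigenvalues,
    # and numpy returns them in descending order
    k1, k2 = np.linalg.svd(cov, compute_uv=False)
    return k1 / k2

def is_over_segmented(points_xy, ratio_threshold=3.0):
    """Per the claim, a ratio above 3 flags an elongated (over-segmented) crown."""
    return crown_elongation_ratio(points_xy) > ratio_threshold
```

A single whole crown projects to a roughly round ellipse (ratio near 1), while a sliver split off a neighboring crown is strongly elongated, which is the intuition behind the k1/k2 > 3 cutoff.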
Regarding claim 6, the combination of Ma and Lisiewicz teaches: The individual-tree segmentation method of UAV LiDAR point cloud based on canopy morphology as claimed in claim 5, wherein the performing fine segmentation on each wrong segmentation canopy according to canopy morphology to update the set of tree tops and thereby obtain an updated set of tree tops (Ma, page 6, section 2.3, Improved Watershed Segmentation Algorithm: the study used an improved watershed segmentation algorithm which examines the overlapped, initially segmented data and re-segments it, where the watershed method is a fine segmentation method; further, on Page 7, paragraph 1, the re-segmentation results are used to update the canopy segmentations (updated tree tops) with higher accuracy), taking each tree top of the updated set of tree tops as a seed point, and performing region growing on each tree top to thereby obtain a final individual-tree segmentation result (Ma, Page 7, paragraph 2: the canopy heights of the crowns of the trees that were previously detected and segmented are segmented again using the watershed method based on the geometry of the tree tops, where the watershed method inherently uses region growing and the tree top geometry would be the seed points), comprises:

[projecting all points of each over-segmentation canopy onto an xoy plane to obtain a point set P = {p1, p2, p3, ..., pn}, and calculating a covariance matrix cov according to the following formula: cov = (1/(n-1)) (P - p̄)(P - p̄)^T, where p̄ represents an average value in dimensions of the point set P, n represents the number of all the points of the over-segmentation canopy, and T represents a matrix transposition operation; performing singular value decomposition on the covariance matrix cov to thereby obtain two eigenvalues k1 and k2 and two eigenvectors a1 and a2, wherein k1 corresponds to a major axis of an ellipse and k2 corresponds to a minor axis of the ellipse;] and in a situation of k1/k2 > 3, determining the over-segmentation canopy as an over-segmentation canopy, assigning the over-segmentation canopy to a nearest correct segmentation canopy to the over-segmentation canopy, and removing a tree top corresponding to the over-segmentation canopy from the set of tree tops, otherwise, determining the over-segmentation canopy as a correct segmentation canopy; [taking a tree top of each under-segmentation canopy as a reference point to obtain a vertical plane containing the tree top and a local maximum density point and perpendicular to the xoy plane,] and projecting a point cloud within a neighborhood of 0.2 meters of the vertical plane onto the vertical plane to obtain a vertical sectional view (Ma, page 6, section 2.2, paragraph 1: to prevent segmentation errors, the point cloud is projected onto a grid of size 0.2 m x 0.2 m);

[Image excerpt: Ma, Page 6]

extracting canopy surface points in the vertical sectional view, and performing polynomial fitting on the canopy surface points to obtain a canopy surface morphology fitting function (Ma, page 6, section 2.2: the points undergo interpolation after projection to obtain a model with the height, density, and other morphological features of the trees); determining that the local maximum density point is a top of an under-segmentation tree in a situation that there is a minimum value between the tree top and the local maximum density point of the canopy surface morphology fitting function (Ma, page 7, paragraphs 1 and 2: the watershed algorithm takes each poorly segmented tree top and determines whether there is a boundary between two merged tree-top contours (under-segmentation) by determining whether there is a minimum point or area between two maxima in which the height decreases before increasing, which indicates that there are two trees, not one, in the contour);

[Image excerpts: Ma, page 7, and Ma, Figure 3b]

and adding the local maximum density point to the set of tree tops for updating the set of tree tops (Ma, page 7: according to the geometric relationship extracted between the two local maxima points, the two points can be updated as separate tree tops); and determining that the local maximum density point is not a top in a situation that there is no minimum value between the tree top and the local maximum density point of the canopy surface morphology fitting function (Ma, page 7, paragraph 1: in the event that no irregularity in the local maximum points is determined, the region is determined as being only one tree, not two, meaning that in a situation where there is not a minimum between two maxima points, the second point will not be indicated as a tree top); after all wrong segmentation canopies are segmented, obtaining the updated set of tree tops (Ma, Figure 1, and page 7, paragraph 2: after the local minimum and maximum detection is run on the initial, poorly segmented and detected tree tops, the data is updated with the tree tops); and taking each tree top of the updated set of tree tops as a seed point for performing region growing to thereby obtain the final individual-tree segmentation result (Ma, Figure 1, and page 7, paragraph 2: after updating, segmentation is performed using the watershed method, which is a region-growing method based on the seed points).
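The under-segmentation check mapped above (fit a polynomial to the canopy-surface points of the vertical section, then test for a minimum of the fitted function between the tree top and the local maximum density point) can be sketched as follows. The polynomial degree, the sampling density, and all names are assumptions for illustration, not the claimed method's actual parameters.

```python
import numpy as np

def has_valley_between(profile_x, profile_z, top_x, density_max_x, degree=4):
    """Sketch of the claim 6 under-segmentation check: fit a polynomial to
    the vertical-section canopy surface, then look for a local minimum of
    the fitted curve strictly between the tree top and the local maximum
    density point. A dip there suggests two crowns, not one."""
    coeffs = np.polyfit(profile_x, profile_z, degree)   # canopy morphology fit
    lo, hi = sorted((top_x, density_max_x))
    xs = np.linspace(lo, hi, 200)                       # sample the fitted curve
    zs = np.polyval(coeffs, xs)
    interior = zs[1:-1]
    # a minimum strictly inside the interval (below both endpoints) is a valley
    return bool(interior.min() < min(zs[0], zs[-1]))
```

If the function returns True, the local maximum density point would be added to the tree-top set as a separate top; otherwise the canopy is left as a single tree.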
The combination of Ma and Liciewicz does not teach; projecting all points of each over-segmentation canopy onto an xoy plane to obtain a point set P = {pi,p2, P3,...,pn}, and calculating a covariance matrix cov according to the following formula: c o v = 1 n - 1 ( P - p - ) ( P - p - ) T where p represents an average value in dimensions of the point set P, n represents the number of the all points of the over-segmentation canopy, and T represents a matrix transposition operation; performing singular value decomposition on the covariance matrix cov to thereby obtain two eigenvalues k1 and k2 and two eigenvectors ai and a2, wherein k1 corresponds to a major axis of an ellipse and k2 corresponds to a minor axis of the ellipse; and in a situation of k1/ k2 >3, determining the over-segmentation canopy as an over-segmentation canopy, assigning the over-segmentation canopy to a nearest correct segmentation canopy to the over-segmentation canopy, and removing a tree top corresponding to the over-segmentation canopy from the set of tree tops, otherwise, determining the over-segmentation canopy as a correct segmentation canopy; taking a tree top of each under-segmentation canopy as a reference point to obtain a vertical plane containing the tree top and a local maximum density point and perpendicular to the xoy plane However, in the same field of endeavor Yang teaches; projecting all points of each over-segmentation canopy onto an xoy plane to obtain a point set P = {pi,p2, P3,...,pn} (Yang, page 1060, column 2, the 3D points are projected onto the xoy plan to obtain a set of points, where a covariance matrix is computed), and calculating a covariance matrix cov according to the following formula: c o v = 1 n - 1 ( P - p - ) ( P - p - ) T where p represents an average value in dimensions of the point set P, n represents the number of the all points of the over-segmentation canopy, and T represents a matrix transposition operation (Yang, page 1060, column 2, equation 1, the 
segmentation and modeling method uses a 2D covariance matrix where p̄i,2d is the average of points in the set pi,2d, making them functionally equivalent to P and p̄, where T denotes the matrix transposition per the covariance formula, and n represents the number of all points in the set, where the 3D coordinate system is elliptic as per page 1060 of Yang, column 1, paragraphs 1 and 2); [Image: media_image19.png (Yang, page 1060, equation 1)] [Image: media_image20.png (Yang, page 1061)] performing singular value decomposition on the covariance matrix cov to thereby obtain two eigenvalues k1 and k2 and two eigenvectors a1 and a2, wherein k1 corresponds to a major axis of an ellipse and k2 corresponds to a minor axis of the ellipse (Yang, page 1061, column 1, paragraph 1: two eigenvalues are determined corresponding to the x and y axes of the coordinate system, and there is an eigenvector corresponding to each of the two eigenvalues, where the 3D coordinate system is elliptic as per page 1060 of Yang, column 1, paragraphs 1 and 2); [Image: media_image21.png (Yang, page 1060)] taking a tree top of each under-segmentation canopy as a reference point to obtain a vertical plane containing the tree top and a local maximum density point and perpendicular to the xoy plane (Yang, page 1060, column 2, paragraph 2: the data is projected to generate a local coordinate plane; page 1061, column 1: the plane is perpendicular to the xyz plane and the origin is the tree apex (local maximum density point); note the xoy plane is just a standard x-y plane which intersects at an origin, therefore projection onto any x-y plane with a set origin which is perpendicular to the initial plane would be analogous to this limitation); [Image: media_image22.png (Yang, page 1061)] The combination of Ma, Lisiewicz, and Yang would have been obvious to one of ordinary skill
in the art prior to the effective filing date of the presently claimed invention. The combination of Ma and Lisiewicz teaches a method of tree canopy segmentation; however, they do not teach the use of a covariance matrix to determine eigenvectors. Yang teaches this deficiency, and the addition of this feature of Yang would reasonably improve the systems of Ma and Lisiewicz because it allows for the establishment of a local coordinate system to assess changes in the data from multiple directions, which is advantageous for assessing tree segmentation profiles for accuracy (Yang, page 1059, column 2, and page 1060, columns 1 and 2). The combination of Ma, Lisiewicz, and Yang does not teach: and in a situation of k1/k2 > 3, determining the over-segmentation canopy as an over-segmentation canopy, assigning the over-segmentation canopy to a nearest correct segmentation canopy to the over-segmentation canopy, and removing a tree top corresponding to the over-segmentation canopy from the set of tree tops, otherwise, determining the over-segmentation canopy as a correct segmentation canopy. However, in the same field of endeavor of tree segmentation, Sterenczak teaches: and in a situation of k1/k2 > 3, determining the over-segmentation canopy as an over-segmentation canopy, assigning the over-segmentation canopy to a nearest correct segmentation canopy to the over-segmentation canopy (Sterenczak, page 4, column 2, section 2.3.2, Detection of Individual Trees: the system performs a second reduction of false-negative detection results (over-segmentation results, described on page 5, columns 1 and 2 of Sterenczak, shown below), where, if the ratio between the ellipse axes for the segment (analogous to k1/k2 > 3, because k1 and k2 are the axes of the ellipse) was greater than an upper limit of 3, the segment was deemed a false negative (over-segmentation) and the segment was then joined to the segments nearby to correct this), and removing a tree top corresponding to the over-segmentation canopy
from the set of tree tops, otherwise, determining the over-segmentation canopy as a correct segmentation canopy (Sterenczak, page 4, column 2, section 2.3.2, Detection of Individual Trees: the system performs a second reduction of false-negative detection results (over-segmentation results, described on page 5, columns 1 and 2 of Sterenczak, shown below), where, if the ratio between the ellipse axes for the segment (analogous to k1/k2 > 3, because k1 and k2 are the axes of the ellipse) was greater than an upper limit of 3, the segment was deemed a false negative (over-segmentation) and the segment was then joined to the segments nearby to correct this; this is analogous to removing the over-segmentation from the set, because the over-segmented portion is joined and is therefore no longer listed as its own segment); [Image: media_image23.png (Sterenczak, page 4, column 2)] [Image: media_image24.png (Sterenczak, page 5, column 2)] The combination of Ma, Lisiewicz, Yang, and Sterenczak would have been obvious to one of ordinary skill in the art prior to the effective filing date of the presently claimed invention. The system of Ma, Lisiewicz, and Yang teaches a segmentation method for tree canopies which detects and corrects under- and over-segmentation; however, none of them teach the use of an ellipse axis ratio to determine if a tree is over-segmented. Sterenczak teaches this deficiency, and the addition of this method of Sterenczak would be advantageous because it allows trees which are over-segmented, or a case where a single tree is segmented into multiple segments, to be corrected more easily using a filtering threshold to distinguish the smaller segments likely belonging to a single tree and allow them to be merged/corrected (Sterenczak, page 4, column 2). Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
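The covariance and axis-ratio test that the rejection maps against Yang and Sterenczak above can be sketched in a few lines. This is a hypothetical Python illustration under my own naming (is_over_segmented, ratio_limit), not the applicant's or the references' code; the eigenvalues of the 2x2 covariance matrix are obtained via SVD, which for a symmetric positive semi-definite matrix coincides with its eigendecomposition.

```python
# Sketch: project canopy points onto the x-y plane, form the 2x2
# covariance matrix cov = (P - p̄)(P - p̄)^T / (n - 1), take its
# eigenvalues k1 >= k2, and flag the segment as over-segmented
# when the ellipse axis ratio k1/k2 exceeds the claimed limit of 3.
import numpy as np

def is_over_segmented(points_xyz, ratio_limit=3.0):
    """points_xyz: (n, 3) canopy points. Returns (k1/k2, flagged)."""
    xy = np.asarray(points_xyz, dtype=float)[:, :2]   # drop z: project onto xoy
    centered = xy - xy.mean(axis=0)                   # P - p̄
    cov = centered.T @ centered / (len(xy) - 1)       # 2x2 covariance
    # Singular values of a symmetric PSD matrix are its eigenvalues,
    # returned in descending order: s[0] = major axis, s[1] = minor axis.
    _, s, _ = np.linalg.svd(cov)
    k1, k2 = s[0], s[1]
    ratio = k1 / k2
    return ratio, ratio > ratio_limit
```

A roughly circular crown yields a ratio near 1 and is kept; a strongly elongated segment (axis ratio above 3) is flagged for merging into the nearest correctly segmented crown, mirroring Sterenczak's false-negative reduction.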
For a listing of analogous art as determined by the examiner, please see the attached PTO-892 Notice of References Cited form.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JORDAN M ELLIOTT, whose telephone number is (703) 756-5463. The examiner can normally be reached M-F, 8AM-5PM ET.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Emily Terrell, can be reached at (571) 270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/J.M.E./
Examiner, Art Unit 2666

/EMILY C TERRELL/
Supervisory Patent Examiner, Art Unit 2666

Prosecution Timeline

Nov 23, 2023
Application Filed
Jan 22, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12573117
METHOD AND DEVICE FOR DEEP LEARNING-BASED PATCHWISE RECONSTRUCTION FROM CLINICAL CT SCAN DATA
2y 5m to grant Granted Mar 10, 2026
Patent 12475998
SYSTEMS AND METHODS OF ADAPTIVELY GENERATING FACIAL DEVICE SELECTIONS BASED ON VISUALLY DETERMINED ANATOMICAL DIMENSION DATA
2y 5m to grant Granted Nov 18, 2025
Patent 12450918
AUTOMATIC LANE MARKING EXTRACTION AND CLASSIFICATION FROM LIDAR SCANS
2y 5m to grant Granted Oct 21, 2025
Patent 12437415
METHODS AND SYSTEMS FOR NON-DESTRUCTIVE EVALUATION OF STATOR INSULATION CONDITION
2y 5m to grant Granted Oct 07, 2025
Patent 12406358
METHODS AND SYSTEMS FOR AUTOMATED SATURATION BAND PLACEMENT
2y 5m to grant Granted Sep 02, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
45%
Grant Probability
31%
With Interview (-13.7%)
2y 10m
Median Time to Grant
Low
PTA Risk
Based on 20 resolved cases by this examiner. Grant probability derived from career allow rate.
