Prosecution Insights
Last updated: April 19, 2026
Application No. 17/984,876

METHOD FOR ANALYZING SHAPE OF OBJECT BY USE OF LIDAR THROUGH ADDITIONAL ANALYSIS OF WHOLE LAYER DATA AND DEVICE FOR TRACKING OBJECT ACCORDING TO THE SAME

Non-Final OA: §101, §102, §103
Filed: Nov 10, 2022
Examiner: ALEXANDER, EMMA LYNNE
Art Unit: 2857
Tech Center: 2800 — Semiconductors & Electrical Systems
Assignee: Kia Corporation
OA Round: 1 (Non-Final)
Grant Probability: 58% (Moderate)
OA Rounds: 1-2
To Grant: 3y 4m
With Interview: 68%

Examiner Intelligence

Career Allow Rate: 58% (11 granted / 19 resolved; -10.1% vs TC avg)
Interview Lift: +10.4% among resolved cases with interview (moderate lift)
Avg Prosecution: 3y 4m typical timeline; 41 applications currently pending
Total Applications: 60 across all art units (career history)
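
The figures above are simple ratios over the examiner's resolved cases. Below is a minimal sketch of that arithmetic, assuming, as the projections footnote later in this report states, that the grant probability is just the career allow rate, and that the interview lift is additive; the additive treatment and the implied Tech Center average are inferences from the displayed numbers, not a statement of how the dashboard actually computes them.

    # Sketch of the examiner-stat arithmetic implied by the figures above.
    # Assumption: grant probability == career allow rate, and the interview
    # lift is simply added on top. Names here are illustrative.
    granted, resolved = 11, 19
    allow_rate = granted / resolved                # 0.579 -> shown as 58%

    interview_lift = 0.104                         # "+10.4% Interview Lift"
    with_interview = allow_rate + interview_lift   # 0.683 -> shown as 68%

    tc_avg_allow = allow_rate + 0.101              # "-10.1% vs TC avg" implies a TC average near 68%

    print(f"Career allow rate: {allow_rate:.1%}")
    print(f"Grant probability with interview: {with_interview:.1%}")
    print(f"Implied Tech Center average allow rate: {tc_avg_allow:.1%}")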

Statute-Specific Performance

§101: 23.1% (-16.9% vs TC avg)
§102: 12.6% (-27.4% vs TC avg)
§103: 50.5% (+10.5% vs TC avg)
§112: 12.6% (-27.4% vs TC avg)
Deltas are relative to an estimated Tech Center average. Based on career data from 19 resolved cases.
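
Every delta above is consistent with a flat Tech Center estimate of roughly 40% per statute. A small back-of-the-envelope check follows; the 40% figure is inferred from the deltas, not stated anywhere in the data.

    # Check: the examiner's statute-specific rates minus an assumed ~40% TC estimate
    # reproduce the "vs TC avg" deltas shown above. The 40% value is inferred.
    examiner_rates = {"101": 0.231, "102": 0.126, "103": 0.505, "112": 0.126}
    tc_estimate = 0.40

    for statute, rate in sorted(examiner_rates.items()):
        delta = rate - tc_estimate
        print(f"§{statute}: {rate:.1%} ({delta:+.1%} vs TC avg)")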

Office Action

§101, §102, §103
DETAILED ACTION Notice of Pre-AIA or AIA Status The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA . Claim Rejections - 35 USC § 101 35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title. Claims 1-20 are rejected under 35 U.S.C. 101. The claimed invention is directed to the abstract concept of performing mental steps without significantly more. The claim(s) recite(s) the following abstract concepts in BOLD of Claim 1. A method for analyzing a shape of an object by use of a LiDAR sensor, the method comprising: obtaining first to Mth (M is an integer of 2 or greater) layers of LiDAR points spaced apart in a vertical direction with respect to an object by the LiDAR sensor; obtaining LiDAR points of a whole layer by projecting whole LiDAR points for the object obtained by the LiDAR sensor onto the whole layer or projecting LiDAR points of the first to Mth layers onto the whole layer; and determining shape flags for the first to Mth layers and the whole layer, respectively, by use of at least a part of corresponding LiDAR points of each layer according to a plurality of predetermined shape types, and determining a shape flag of the object. 14. A device for tracking an object by use of a LiDAR sensor, the device comprising: the LiDAR sensor configured to obtain a point cloud including LiDAR points for a target object; a clustering unit configured to group the LiDAR points of the point cloud; and a shape analysis unit configured to analyze a shape of the target object from the grouped LiDAR points of the point cloud, wherein the shape analysis unit comprises: a layer shape determination unit configured to determine shape flags for first to Mth layers and a whole layer, respectively, by use of at least a part of corresponding LiDAR points of each layer according to a plurality of predetermined shape types, the first to Mth (M is an integer of 2 or greater) layers spaced apart in a vertical direction with respect to the target object and LiDAR points of the whole layer obtained by projecting whole LiDAR points for the targe object or LiDAR points of the first to Mth layers onto the whole layer in the vertical direction; and a target shape determination unit configured to determine a shape flag of the object by use of the shape flags of the first to Mth layers and the whole layer. 18. 
A vehicle comprising: a device for tracking an object by use of a LiDAR sensor, the device comprising: the LiDAR sensor configured to obtain a point cloud including LiDAR points for a target object; a clustering unit configured to group the LiDAR points of the point cloud; and a shape analysis unit configured to analyze a shape of the target object from the grouped LiDAR points of the point cloud, wherein the shape analysis unit comprises: a layer shape determination unit configured to determine shape flags for first to Mth layers and a whole layer, respectively, by use of at least a part of corresponding LiDAR points of each layer according to a plurality of predetermined shape types, the first to Mth (M is an integer of 2 or greater) layers spaced apart in a vertical direction with respect to the target object and LiDAR points of the whole layer obtained by projecting whole LiDAR points for the targe object or LiDAR points of the first to Mth layers onto the whole layer in the vertical direction; and a target shape determination unit configured to determine a shape flag of the object by use of the shape flags of the first to Mth layers and the whole layer. Under step 1 of the eligibility analysis, we determine whether the claims are to a statutory category by considering whether the claimed subject matter falls within the four statutory categories of patentable subject matter identified by 35 U.S.C. 101: process, machine, manufacture, or composition of matter. The above claims are considered to be in a statutory category. Under Step 2A, Prong One, we consider whether the claim recites a judicial exception (abstract idea). In the above claim, the highlighted portion constitutes an abstract idea because, under a broadest reasonable interpretation, it recites limitation the fall into/recite abstract idea exceptions. Specifically, under the 2019 Revised Patent Subject Matter Eligibility Guidance, it falls into the grouping of subject matter that, when recited as such in a claim limitation, covers performing mathematics or mental steps. Next, under Step 2A, Prong Two, we consider whether the claim that recites a judicial exception is integrated into a practical application. In this step, we evaluate whether the claim recites additional elements that integrate the exception into a practical application of that exception. This judicial exception is not integrated into a practical application because there is no improvement to another technology or technical field; improvements to the functioning of the computer itself; a particular machine; effecting a transformation or reduction of a particular article to a different state or thing. Examiner notes that since the claimed methods and system are not tied to a particular machine or apparatus, they do not represent an improvement to another technology or technical field. Similarly, there are no other meaningful limitations linking the use to a particular technological environment. Finally, there is nothing in the claims that indicates an improvement to the functioning of the computer itself or transform a particular article to a new state. Finally, under Step 2B, we consider whether the additional elements are sufficient to amount to significantly more than the abstract idea. 
The claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception because a LiDAR sensor, a device, a clustering unit, a shape analysis unit, a target shape determination unit, are generic computer elements and not considered significantly more than the abstract idea. As recited in the MPEP, 2106.05(b), merely adding a generic computer, generic computer components, or a programmed computer to perform generic computer functions does not automatically overcome an eligibility rejection. Alice Corp. Pty. Ltd. v. CLS Bank Int'l, 134 S. Ct. 2347, 2359-60, 110 USPQ2d 1976, 1984 (2014). See also OIP Techs. v. Amazon.com, 788 F.3d 1359, 1364, 115 USPQ2d 1090, 1093-94. The additional element of obtain a point cloud including LiDAR points for a target object; is considered necessary data gathering and is not sufficient to integrate the abstract idea into a practical application. As recited in MPEP section 2106.05(g), necessary data gathering (i.e., receiving data) is considered extra solution activity in light of Mayo, 566 U.S. at 79, 101 USPQ2d at 1968; OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1092-93 (Fed. Cir. 2015). Claims 2-13, 15-17, and 19-20 further limit the abstract ideas without integrating the abstract concept into a practical application or including additional limitations that can be considered significantly more than the abstract idea. Claim Rejections - 35 USC § 102 The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. Claim(s) 1-3, 13-16, and 18-20 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Kaithakapuzha et al. (WO 2021/207106 A1) hereinafter Kaithakapuzha. Regarding Claim 1, Kaithakapuzha teaches obtaining first to Mth (M is an integer of 2 or greater) layers of LiDAR points spaced apart in a vertical direction with respect to an object by the LiDAR sensor ([0085] “As illustrated in FIG. 8, various ones of the bounding boxes 424 from a particular corresponding FOV may be combined together into a single view. 
The neighbor relations for a given bounding box 424 may be defined by connecting a straight-line 810 from a center of the bounding box 424 with every other box center (i.e., layering the bounding boxes on top of one another.” Where bounding boxes are the 2D layers [0006] “generate respective 2D bounding boxes for the target object based on the 2D image data.”; and [0005] “a Light Detection and Ranging (lidar) system includes a control circuit configured to receive three-dimensional (3D) point data and two-dimensional (2D) image data representing a field of view including a target object and an object volume prediction circuit configured to detem1ine a predicted volume occupied by the target object within the 3D point data based on the 3D point data and the 2D image data.”); obtaining LiDAR points of a whole layer by projecting whole LiDAR points for the object obtained by the LiDAR sensor onto the whole layer or projecting LiDAR points of the first to Mth layers onto the whole layer ([0088]” The point cloud clustering module/circuit 9 IO may process large amounts of 3D points and extract clusters (e.g., groupings of detecting points) related to the objects in the scene. The point cloud clustering module/circuit 910 can serve as the first step of multiple applications of perceiving the scene based on point cloud data 320, such as object classification, detection, localization, and/or volume estimation (i.e., whole layer is processed).”, projecting whole LiDAR points for the object obtained by the LiDAR sensor onto the whole layer; [0090] “Referring back to FIG. 3, the output of the 3D clustering 910, along with the output of the 2D bounding box finalization 510 and the 2D object neighbor relation determination 710 may be passed to a 2D-3D integration module/circuit 1010. FIG. 10 is a schematic illustration of a 2D- 3D integration module/circuit 1010 in accordance with some embodiments of the present disclosure.” [0091] “The first function may include co-locating objects 1020 (e.g., in the point cloud) based on 2D bounding boxes 620 and/or 3D cluster centroids 932. The second function is creating 1022 projected 3D bounding boxes 1030 using information from 2D bounding boxes, 3D clusters, and/or camera calibration parameters. The shape of the projected 3D bounding boxes 1030 can be a frustum, cylinder, etc. The projected 3D bounding boxes 1030 may identify a 3D area projected to enclose an object detected within the point cloud 320.”, projecting LiDAR points of the first to Mth layers onto the whole layer); and determining shape flags for the first to Mth layers and the whole layer, respectively, by use of at least a part of corresponding LiDAR points of each layer according to a plurality of predetermined shape types, and determining a shape flag of the object ([0091] “The 2D-3D integration module circuit according to some embodiments of the present disclosure may include at least two functions. The first function may include co-locating objects 1020 (e.g., in the point cloud) based on 2D bounding boxes 620 and/or 3D cluster centroids 932. The second function is creating 1022 projected 3D bounding boxes 1030 using information from 2D bounding boxes, 3D clusters, and/or camera calibration parameters. The shape of the projected 3D bounding boxes 1030 can be a frustum, cylinder, etc. 
The projected 3D bounding boxes 1030 may identify a 3D area projected to enclose an object detected within the point cloud 320.” And [0094] “Meshing may include the generation of the 3D representation made of a series of interconnected shapes (e.g., a "mesh") that outline a surface of the 3D object. The mesh can polygonal or triangular, though the present disclosure is not limited thereto. Furthermore, the template matching for each object class may be used to predict correct bounding box predictions. Voxel templates may be created for each class depending on their dimensions and shape features.”; [0095] “After the volume estimation first phase, the volume shape may be compared to a set of predefined shape templates”). Regarding Claim 14, Kaithakapuzha teaches the LiDAR sensor configured to obtain a point cloud including LiDAR points for a target object ([0005] “a Light Detection and Ranging (lidar) system includes a control circuit configured to receive three-dimensional (3D) point data and two-dimensional (2D) image data representing a field of view including a target object and an object volume prediction circuit configured to detem1ine a predicted volume occupied by the target object within the 3D point data based on the 3D point data and the 2D image data.”; [0057]” Light emission output from one or more of the emitters 11 Se impinges on and is reflected by one or more targets 150, and the reflected light is detected as an optical signal (also referred to herein as a return signal, echo signal, or echo) by one or more of the detectors 110d (e.g., via receiver optics 112), converted into an electrical signal representation (referred to herein as a detection signal), and processed (e.g., based on time of flight) to define a 3-D point cloud representation 170 of the field of view 190.”); a clustering unit configured to group the LiDAR points of the point cloud ([0088] “The point cloud clustering module/circuit 9 IO may process large amounts of 3D points and extract clusters (e.g., groupings of detecting points) related to the objects in the scene.”); and a shape analysis unit configured to analyze a shape of the target object from the grouped LiDAR points of the point cloud ([0091] “The 2D-3D integration module circuit (i.e., shape analysis unit) according to some embodiments of the present disclosure may include at least two functions. The first function may include co-locating objects 1020 (e.g., in the point cloud) based on 2D bounding boxes 620 and/or 3D cluster centroids 932. The second function is creating 1022 projected 3D bounding boxes 1030 using information from 2D bounding boxes, 3D clusters, and/or camera calibration parameters. The shape of the projected 3D bounding boxes 1030 can be a frustum, cylinder, etc.”), wherein the shape analysis unit comprises: a layer shape determination unit configured to determine shape flags for first to Mth layers and a whole layer, respectively, by use of at least a part of corresponding LiDAR points of each layer according to a plurality of predetermined shape types ([0091] “The 2D-3D integration module circuit (i.e., shape analysis unit) according to some embodiments of the present disclosure may include at least two functions. The first function may include co-locating objects 1020 (e.g., in the point cloud) based on 2D bounding boxes 620 and/or 3D cluster centroids 932. The second function is creating 1022 projected 3D bounding boxes 1030 using information from 2D bounding boxes, 3D clusters, and/or camera calibration parameters. 
The shape of the projected 3D bounding boxes 1030 can be a frustum, cylinder, etc.” And [0094] “Meshing may include the generation of the 3D representation made of a series of interconnected shapes (e.g., a "mesh") that outline a surface of the 3D object. The mesh can polygonal or triangular, though the present disclosure is not limited thereto. Furthermore, the template matching for each object class may be used to predict correct bounding box predictions. Voxel templates may be created for each class depending on their dimensions and shape features.”; [0095] “After the volume estimation first phase, the volume shape may be compared to a set of predefined shape templates”), the first to Mth (M is an integer of 2 or greater) layers spaced apart in a vertical direction with respect to the target object and LiDAR points of the whole layer obtained by projecting whole LiDAR points for the targe object or LiDAR points of the first to Mth layers onto the whole layer in the vertical direction (([0085] “As illustrated in FIG. 8, various ones of the bounding boxes 424 from a particular corresponding FOV may be combined together into a single view. The neighbor relations for a given bounding box 424 may be defined by connecting a straight-line 810 from a center of the bounding box 424 with every other box center (i.e., layering the bounding boxes on top of one another.” Where bounding boxes are the 2D layers [0006] “generate respective 2D bounding boxes for the target object based on the 2D image data.”; and [0005] “a Light Detection and Ranging (lidar) system includes a control circuit configured to receive three-dimensional (3D) point data and two-dimensional (2D) image data representing a field of view including a target object and an object volume prediction circuit configured to detem1ine a predicted volume occupied by the target object within the 3D point data based on the 3D point data and the 2D image data.” Where the points are within each boundary boxes making the boundary stack in the vertical direction according to the points on the plan, i.e., points are in ab plane, boundary boxes stack in c plane at 90 degree angle or vertical to plane orientation.); and a target shape determination unit configured to determine a shape flag of the object by use of the shape flags of the first to Mth layers and the whole layer ([0016] “a computer program product for operating an electronic device comprising a non-transitory computer readable storage medium having computer readable program code embodied in the medium that when executed by a processor causes the processor to perform the operations comprising: receiving three dimensional (3D) point data and two-dimensional (2D) image data representing a field of view including a target object; and determining a predicted volume occupied by the target object within the 3D point data based on the 3D point data and the 2D image data.”). 
Regarding Claim 18, Kaithakapuzha teaches the LiDAR sensor configured to obtain a point cloud including LiDAR points for a target object ([0005] “a Light Detection and Ranging (lidar) system includes a control circuit configured to receive three-dimensional (3D) point data and two-dimensional (2D) image data representing a field of view including a target object and an object volume prediction circuit configured to detem1ine a predicted volume occupied by the target object within the 3D point data based on the 3D point data and the 2D image data.”; [0057]” Light emission output from one or more of the emitters 11 Se impinges on and is reflected by one or more targets 150, and the reflected light is detected as an optical signal (also referred to herein as a return signal, echo signal, or echo) by one or more of the detectors 110d (e.g., via receiver optics 112), converted into an electrical signal representation (referred to herein as a detection signal), and processed (e.g., based on time of flight) to define a 3-D point cloud representation 170 of the field of view 190.”); a clustering unit configured to group the LiDAR points of the point cloud([0088] “The point cloud clustering module/circuit 9 IO may process large amounts of 3D points and extract clusters (e.g., groupings of detecting points) related to the objects in the scene.”); and a shape analysis unit configured to analyze a shape of the target object from the grouped LiDAR points of the point cloud ([0091] “The 2D-3D integration module circuit (i.e., shape analysis unit) according to some embodiments of the present disclosure may include at least two functions. The first function may include co-locating objects 1020 (e.g., in the point cloud) based on 2D bounding boxes 620 and/or 3D cluster centroids 932. The second function is creating 1022 projected 3D bounding boxes 1030 using information from 2D bounding boxes, 3D clusters, and/or camera calibration parameters. The shape of the projected 3D bounding boxes 1030 can be a frustum, cylinder, etc.”), wherein the shape analysis unit comprises: a layer shape determination unit configured to determine shape flags for first to Mth layers and a whole layer, respectively, by use of at least a part of corresponding LiDAR points of each layer according to a plurality of predetermined shape types ([0091] “The 2D-3D integration module circuit (i.e., shape analysis unit) according to some embodiments of the present disclosure may include at least two functions. The first function may include co-locating objects 1020 (e.g., in the point cloud) based on 2D bounding boxes 620 and/or 3D cluster centroids 932. The second function is creating 1022 projected 3D bounding boxes 1030 using information from 2D bounding boxes, 3D clusters, and/or camera calibration parameters. The shape of the projected 3D bounding boxes 1030 can be a frustum, cylinder, etc.” And [0094] “Meshing may include the generation of the 3D representation made of a series of interconnected shapes (e.g., a "mesh") that outline a surface of the 3D object. The mesh can polygonal or triangular, though the present disclosure is not limited thereto. Furthermore, the template matching for each object class may be used to predict correct bounding box predictions. 
Voxel templates may be created for each class depending on their dimensions and shape features.”; [0095] “After the volume estimation first phase, the volume shape may be compared to a set of predefined shape templates”), the first to Mth (M is an integer of 2 or greater) layers spaced apart in a vertical direction with respect to the target object and LiDAR points of the whole layer obtained by projecting whole LiDAR points for the targe object or LiDAR points of the first to Mth layers onto the whole layer in the vertical direction ([0085] “As illustrated in FIG. 8, various ones of the bounding boxes 424 from a particular corresponding FOV may be combined together into a single view. The neighbor relations for a given bounding box 424 may be defined by connecting a straight-line 810 from a center of the bounding box 424 with every other box center (i.e., layering the bounding boxes on top of one another.” Where bounding boxes are the 2D layers [0006] “generate respective 2D bounding boxes for the target object based on the 2D image data.”; and [0005] “a Light Detection and Ranging (lidar) system includes a control circuit configured to receive three-dimensional (3D) point data and two-dimensional (2D) image data representing a field of view including a target object and an object volume prediction circuit configured to detem1ine a predicted volume occupied by the target object within the 3D point data based on the 3D point data and the 2D image data.” Where the points are within each boundary boxes making the boundary stack in the vertical direction according to the points on the plan, i.e., points are in ab plane, boundary boxes stack in c plane at 90 degree angle or vertical to plane orientation.); and a target shape determination unit configured to determine a shape flag of the object by use of the shape flags of the first to Mth layers and the whole layer ([0016] “a computer program product for operating an electronic device comprising a non-transitory computer readable storage medium having computer readable program code embodied in the medium that when executed by a processor causes the processor to perform the operations comprising: receiving three dimensional (3D) point data and two-dimensional (2D) image data representing a field of view including a target object; and determining a predicted volume occupied by the target object within the 3D point data based on the 3D point data and the 2D image data.”). Regarding Claims 2, 15, and 19, Kaithakapuzha teaches the limitations of claims 1, 14, and 18, respectively. Kaithakapuzha further teaches (a) determining the shape flags for the respective first to Mth layers ([0073] “The bounding boxes 424 output from the multi-model based inference module/circuit 410 may include virtual boxes and/or boundaries that enclose portions of the 20 data 310 that are tentatively identified as including one or more objects of interest. The bounding boxes 424 output from the multi-model based inference module/circuit 410 may include virtual boxes and/or boundaries that enclose portions of the 20 data 310 that are tentatively identified as including one or more objects of interest. The class labels 426 may include estimations of the type of the object(s) within the bounding box 424 (e.g., person, automobile, tree, etc.).” Where an object is identified based on its shape, i.e., human vs. vehicle are different shapes); (b) determining the shape flag of the object firstly by use of the shape flags of the first to Mth layers ([0080] “FIG. 
6B illustrates a scenario in which not all of the predictions are overlapping equal to an image over union (IoU) threshold, but there is an object present which is detected correctly by at least one neural network 415. An image over union for overlapping bounding boxes 424 may include all of the area enclosed within each bounding box 424 in addition to any overlapping areas. Here a maximum overlapping area between the various models and the model bias score 420 may be used to filter out the bounding box predictions 424 from the various neural networks 415.”; [0081] “Referring to FlG. 6B, for every bounding box prediction 424, the overlapping area with every other bounding box predictions 424 may be calculated (as shown marked with an 'X' in the prediction from Network 2). If the overlapping area is below a predetermined threshold, it may be discarded. The bounding box prediction 424 with the maximum sum of overlapping area (e.g., with respect to the other bounding boxes 424) may be selected as the prediction for the final bounding box 620. In some embodiments, the model bias score 420 of the neural network model 415 used to generate the bounding box prediction 424 may also be used in selecting (e.g., as a weighting factor) the final prediction 620.”; where [0073] “The bounding boxes 424 output from the multi-model based inference module/circuit 410 may include virtual boxes and/or boundaries that enclose portions of the 20 data 310 that are tentatively identified as including one or more objects of interest. The class labels 426 may include estimations of the type of the object(s) within the bounding box 424 (e.g., person, automobile, tree, etc.). The confidence score 422 may be a number (e.g., generated by the neural network architecture 415) indicating a probability/confidence in the generated bounding box 424. For example, a higher confidence score 422 may indicate a higher likelihood that the object bounding box 424 and/or classification 426 is correct.”); and (c) determining the shape flag for the whole layer and accordingly changing or maintaining the firstly determined shape flag of the object ([0091] “The 2D-3D integration module/circuit 1010 may take the outputs from 2D prediction and 3D clustering with object neighbor relations and guided information from point clouds, and may create 2D-3D co-located bounding boxes 1030, class labels 1034, guided information 1032, and/or cluster labels 1036 for objects in the scene. The 2D-3D integration module circuit according to some embodiments of the present disclosure may include at least two functions. The first function may include co-locating objects 1020 (e.g., in the point cloud) based on 2D bounding boxes 620 and/or 3D cluster centroids 932. The second function is creating 1022 projected 3D bounding boxes 1030 using information from 2D bounding boxes, 3D clusters, and/or camera calibration parameters. The shape of the projected 3D bounding boxes 1030 can be a frustum, cylinder, etc. The projected 3D bounding boxes 1030 may identify a 3D area projected to enclose an object detected within the point cloud 320.”; [0095] “After the volume estimation first phase, the volume shape may be compared to a set of predefined shape templates to estimate the confidence of the resulting point cloud cluster being an accurate representation on the object in the scene. The final step is to calculate the object volume 1120 based on the refined boundary of the object in the scene.”). 
Regarding Claims 3, 16, and 20, Kaithakapuzha teaches the limitations of claims 2, 15, and 19, respectively. Kaithakapuzha further teaches wherein the step (c) is performed according to a predetermined reliability condition for the firstly determined shape flag of the object ([0095] “After the volume estimation first phase, the volume shape may be compared to a set of predefined shape templates to estimate the confidence of the resulting point cloud cluster being an accurate representation on the object in the scene. The final step is to calculate the object volume 1120 based on the refined boundary of the object in the scene.”; [0099] “The object level volume prediction module/circuit 1310 may finalize volume estimation ( e.g., of detected objects within the point cloud) by refining the direct volume estimation ( e.g., from FIG. 11) with occlusion awareness features (e.g., from FIG. 12). In the object level volume prediction module/circuit 13 I 0, misplaced points in occluded objects may be removed and a volume may be recalculated for those objects or otherwise calculated by excluding data corresponding to occluded objects or portions thereof Outputs of the object level volume prediction module/circuit 1310 may include final class labels 1320 for the detected objects, final volumes 1322 for the detected objects, and/or final confidence scores 1324 for the detected objects. In some embodiments, the excluded data may be used to estimate the volume of another object.”). Regarding Claim 13, Kaithakapuzha teaches the limitations of claim 2. Kaithakapuzha further teaches calculating each of confidence scores for each of the shape flags of the first to Mth layers by use of the at least part of the corresponding LiDAR points ([0073] “The bounding boxes 424 output from the multi-model based inference module/circuit 410 may include virtual boxes and/or boundaries that enclose portions of the 20 data 310 that are tentatively identified as including one or more objects of interest. The class labels 426 may include estimations of the type of the object(s) within the bounding box 424 (e.g., person, automobile, tree, etc.). The confidence score 422 may be a number (e.g., generated by the neural network architecture 415) indicating a probability/confidence in the generated bounding box 424. For example, a higher confidence score 422 may indicate a higher likelihood that the object bounding box 424 and/or classification 426 is correct.”); and determining the shape flag of the object by use of the shape flags and the confidence scores for the first to Mth layers ([0006] “In some embodiments, the object volume prediction circuit is further configured to analyze the 2D image data utilizing a plurality of neural network models, wherein the plurality of neural net\vork models are configured to generate respective 2D bounding boxes for the target object based on the 2D image data.” And [0007] “In some embodiments, the plurality of neural network models are further configured to generate respective object classifications (i.e., shape flags) for the target object based on the 2D image data.” [0008] “In some embodiments, the object volume prediction circuit is further configured to generate a final bounding box based on the respective 2D bounding boxes of the plurality of neural network models.”). Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claim(s) 4, 7, 8, 11, 12, and 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kaithakapuzha in view of Wittman et al. (Improving Lidar Data Evaluation for Object Detection and Tracking Using a Priori Knowledge and Sensorfusion, 2014, Proceedings of the 11th International Conference on Informatics in Control, Automation and Robotics, 794-801) hereinafter Wittman. Regarding Claim 4 and 17, Kaithakapuzha teaches the limitations of claims 3 and 16, respectively. Kaithakapuzha further teaches wherein the predetermined reliability condition includes one or more of: condition I representing whether a L-shape flag is temporarily determined as the shape flag of the object according to a priority order rule due to a L-shape flag being assigned to at least one among the first to Mth layers and I-shape flag layers are more than L-shape flag layers or a maximum score among L-shape flag scores is below a predetermined first score and a maximum score of I-shape flag scores is equal to or over a predetermined second score; condition II representing whether when the firstly determined shape flag of the object is a L-shape, a heading flag score for the object is equal to or below a predetermined score; condition III representing whether, with respect to the firstly determined shape flag of the object, a difference of distances from a host vehicle to a shape box and a cluster box for the object in its heading layer is equal to or over a predetermined value; and condition IV representing whether only one of the first to Mth layers is determined as the shape flag and thus the firstly determined shape flag of the object becomes a shape ([0073] “The bounding boxes 424 output from the multi-model based inference module/circuit 410 may include virtual boxes and/or boundaries that enclose portions of the 20 data 310 that are tentatively identified as including one or more objects of interest. The bounding boxes 424 output from the multi-model based inference module/circuit 410 may include virtual boxes and/or boundaries that enclose portions of the 20 data 310 that are tentatively identified as including one or more objects of interest. The class labels 426 may include estimations of the type of the object(s) within the bounding box 424 (e.g., person, automobile, tree, etc.).” Where an object is identified based on its shape, i.e., human vs. vehicle are different shapes; ([0080] “FIG. 6B illustrates a scenario in which not all of the predictions are overlapping equal to an image over union (IoU) threshold, but there is an object present which is detected correctly by at least one neural network 415. An image over union for overlapping bounding boxes 424 may include all of the area enclosed within each bounding box 424 in addition to any overlapping areas. 
Here a maximum overlapping area between the various models and the model bias score 420 may be used to filter out the bounding box predictions 424 from the various neural networks 415.”; [0081] “Referring to FlG. 6B, for every bounding box prediction 424, the overlapping area with every other bounding box predictions 424 may be calculated (as shown marked with an 'X' in the prediction from Network 2). If the overlapping area is below a predetermined threshold, it may be discarded. The bounding box prediction 424 with the maximum sum of overlapping area (e.g., with respect to the other bounding boxes 424) may be selected as the prediction for the final bounding box 620. In some embodiments, the model bias score 420 of the neural network model 415 used to generate the bounding box prediction 424 may also be used in selecting (e.g., as a weighting factor) the final prediction 620.”, condition IV representing whether only one of the first to Mth layers is determined as the shape flag and thus the firstly determined shape flag of the object becomes a shape). Kaithakapuzha does not teach L-shape. Wittman teaches L-shape (pg 794 col. 2 paragraph 1 “Cars appear in the lidar data in the form of the characteristic object I-, U- and L-shapes, which can be fitted into the measurement values to find possible objects.”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to combine L-shape taught in Wittman to the shape analyzer discussed in Kaithakapuzha for the purpose of detecting cars in the lidar data. This is advantageous because an increasing amount of advanced driver assistance systems (ADAS) utilize environmental data, e.g. collision warning or lane detection systems, (e.g., Wittman, pg. 794, col.1 paragraph 1). Regarding Claim 7, Kaithakapuzha and Wittman teach the limitations of Claim 4. Kaithakapuzha further teaches, wherein the step (c) comprises changing the shape of the object according to the shape flag of the whole layer when any one of the conditions II to IV is true ([0073] “The bounding boxes 424 output from the multi-model based inference module/circuit 410 may include virtual boxes and/or boundaries that enclose portions of the 20 data 310 that are tentatively identified as including one or more objects of interest. The bounding boxes 424 output from the multi-model based inference module/circuit 410 may include virtual boxes and/or boundaries that enclose portions of the 20 data 310 that are tentatively identified as including one or more objects of interest. The class labels 426 may include estimations of the type of the object(s) within the bounding box 424 (e.g., person, automobile, tree, etc.).” Where an object is identified based on its shape, i.e., human vs. vehicle are different shapes; ([0080] “FIG. 6B illustrates a scenario in which not all of the predictions are overlapping equal to an image over union (IoU) threshold, but there is an object present which is detected correctly by at least one neural network 415. An image over union for overlapping bounding boxes 424 may include all of the area enclosed within each bounding box 424 in addition to any overlapping areas. Here a maximum overlapping area between the various models and the model bias score 420 may be used to filter out the bounding box predictions 424 from the various neural networks 415.”; [0081] “Referring to FlG. 
6B, for every bounding box prediction 424, the overlapping area with every other bounding box predictions 424 may be calculated (as shown marked with an 'X' in the prediction from Network 2). If the overlapping area is below a predetermined threshold, it may be discarded. The bounding box prediction 424 with the maximum sum of overlapping area (e.g., with respect to the other bounding boxes 424) may be selected as the prediction for the final bounding box 620. In some embodiments, the model bias score 420 of the neural network model 415 used to generate the bounding box prediction 424 may also be used in selecting (e.g., as a weighting factor) the final prediction 620.”, condition IV is true in this example). Kaithakapuzha does not teach and the L-shape or an sL-shape flag is determined as the shape flag of the whole layer. Wittman teaches and the L-shape or an sL-shape flag is determined as the shape flag of the whole layer (pg. 797, col. 1, paragraph 2 “To take into account the limited visibility of the car object contours, three shapes are differentiated for the description of the expected measurement values, the I-, the IS- and the L-shape as illustrated in figure 4(a)-(c) with real measured lidar values.” Where Figures 4(c) is an L-shape). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to combine L-shape taught in Wittman to the shape analyzer discussed in Kaithakapuzha for the purpose of detecting cars in the lidar data. This is advantageous because an increasing amount of advanced driver assistance systems (ADAS) utilize environmental data, e.g. collision warning or lane detection systems, (e.g., Wittman, pg. 794, col.1 paragraph 1). Regarding Claim 8, Kaithakapuzha and Wittman teach the limitations of claim 7. Kaithakapuzha further teaches, wherein heading information on the object is determined according to the shape flag of the whole layer ([0073] “The class labels 426 (i.e., header information) may include estimations of the type of the object(s) within the bounding box 424 (e.g., person, automobile, tree, etc.). The confidence score 422 may be a number (e.g., generated by the neural network architecture 415) indicating a probability/confidence in the generated bounding box 424 (i.e., a layer). For example, a higher confidence score 422 may indicate a higher likelihood that the object bounding box 424 and/or classification 426 is correct.”). Regarding Claim 11, Kaithakapuzha and Wittman teach the limitations of Claim 4. Kaithakapuzha further teaches wherein the firstly determined shape flag of the object is maintained when any one of the conditions II to IV is true ([0073] “The bounding boxes 424 output from the multi-model based inference module/circuit 410 may include virtual boxes and/or boundaries that enclose portions of the 20 data 310 that are tentatively identified as including one or more objects of interest. The bounding boxes 424 output from the multi-model based inference module/circuit 410 may include virtual boxes and/or boundaries that enclose portions of the 20 data 310 that are tentatively identified as including one or more objects of interest. The class labels 426 may include estimations of the type of the object(s) within the bounding box 424 (e.g., person, automobile, tree, etc.).” Where an object is identified based on its shape, i.e., human vs. vehicle are different shapes; ([0080] “FIG. 
6B illustrates a scenario in which not all of the predictions are overlapping equal to an image over union (IoU) threshold, but there is an object present which is detected correctly by at least one neural network 415. An image over union for overlapping bounding boxes 424 may include all of the area enclosed within each bounding box 424 in addition to any overlapping areas. Here a maximum overlapping area between the various models and the model bias score 420 may be used to filter out the bounding box predictions 424 from the various neural networks 415.”; [0081] “Referring to FlG. 6B, for every bounding box prediction 424, the overlapping area with every other bounding box predictions 424 may be calculated (as shown marked with an 'X' in the prediction from Network 2). If the overlapping area is below a predetermined threshold, it may be discarded. The bounding box prediction 424 with the maximum sum of overlapping area (e.g., with respect to the other bounding boxes 424) may be selected as the prediction for the final bounding box 620. In some embodiments, the model bias score 420 of the neural network model 415 used to generate the bounding box prediction 424 may also be used in selecting (e.g., as a weighting factor) the final prediction 620.”, condition IV is true in this example). Kaithakapuzha does not teach and the shape flag of the whole layer is neither the L-shape nor an sL-shape flag. Wittman teaches and the shape flag of the whole layer is neither the L-shape nor an sL-shape flag (pg. 797, col. 1, paragraph 2 “To take into account the limited visibility of the car object contours, three shapes are differentiated for the description of the expected measurement values, the I-, the IS- and the L-shape as illustrated in figure 4(a)-(c) with real measured lidar values.” Where Figures 4(a,b, d) are not L or sL-shape). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to combine L-shape taught in Wittman to the shape analyzer discussed in Kaithakapuzha for the purpose of detecting cars in the lidar data. This is advantageous because an increasing amount of advanced driver assistance systems (ADAS) utilize environmental data, e.g. collision warning or lane detection systems, (e.g., Wittman, pg. 794, col.1 paragraph 1). Regarding Claim 12, Kaithakapuzha and Wittman teach the limitations of Claim 11. Kaithakapuzha further teaches wherein a heading flag determined for heading information on the object is deleted ([0073] “The class labels 426 (i.e., header information) may include estimations of the type of the object(s) within the bounding box 424 (e.g., person, automobile, tree, etc.). The confidence score 422 may be a number (e.g., generated by the neural network architecture 415) indicating a probability/confidence in the generated bounding box 424. For example, a higher confidence score 422 may indicate a higher likelihood that the object bounding box 424 and/or classification 426 is correct.”, [0076] “Referring to FJG. 6A, the multi-factor cross-validation module/circuit 510 may first determine, from among the results from the plurality of neural network models 415, the bounding box predictions 424 which overlap for the same class predictions. For each interested object label, a heatmap may be created. As used herein, a heatmap is a mask where each pixel in the heatmap will indicate the weighted prediction score for a given class label 426. 
If an object's heatmap average scores fall below a predetermined threshold, it will be considered as false positive and will be discarded (i.e., class label is deleted).”). Examiner’s Note Claims 5, 6, 9, and 10 are allowable pending overcoming the U.S.C. 101 and U.S.C. 102 rejection of independent claim 1. The most pertinent art is Kaithakapuzha et al. (WO 2021/207106 A1) hereinafter Kaithakapuzha. Regarding in Claim 5 Kaithakapuzha does not teach that condition I (from claim 4, condition I representing whether a L-shape flag is temporarily determined as the shape flag of the object according to a priority order rule due to a L-shape flag being assigned to at least one among the first to Mth layers and I-shape flag layers are more than L-shape flag layers or a maximum score among L-shape flag scores is below a predetermined first score and a maximum score of I-shape flag scores is equal to or over a predetermined second score) and therefore cannot teach that condition I is true. There are not motivations absent the applicant’s own disclose, to modify the reference of Cheng in the manner required by the claims. The prior arts do not anticipate nor render obvious the aforementioned limitations, therefore one of ordinary skill in the art would not have arrived at the claimed invention. For these reasons, the claimed invention distinguishes itself from the prior arts and is in condition for allowance pending overcoming the U.S.C. 101 and U.S.C. 102 rejection of independent claim 1. Dependent claim 6 is in allowance as well pending overcoming the U.S.C. 101 and U.S.C. 102 rejection of independent claim 1. Regarding in Claim 9 Kaithakapuzha does not teach that condition III (from claim 4, condition III representing whether, with respect to the firstly determined shape flag of the object, a difference of distances from a host vehicle to a shape box and a cluster box for the object in its heading layer is equal to or over a predetermined value) and therefore cannot teach that condition III is true. There are not motivations absent the applicant’s own disclose, to modify the reference of Cheng in the manner required by the claims. The prior arts do not anticipate nor render obvious the aforementioned limitations, therefore one of ordinary skill in the art would not have arrived at the claimed invention. For these reasons, the claimed invention distinguishes itself from the prior arts and is in condition for allowance pending overcoming the U.S.C. 101 and U.S.C. 102 rejection of independent claim 1. Dependent claim 11 is in allowance as well pending overcoming the U.S.C. 101 and U.S.C. 102 rejection of independent claim 1. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to Emma L. Alexander whose telephone number is (571)270-0323. The examiner can normally be reached Monday- Friday 8am-5pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Catherine T. Rastovski can be reached at (571) 270-0349. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. 
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/EMMA ALEXANDER/
Patent Examiner, Art Unit 2863

/Catherine T. Rastovski/
Supervisory Primary Examiner, Art Unit 2857
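
For context on the technology at issue, below is a minimal sketch of the pipeline recited in claim 1: split the LiDAR points into M vertically spaced layers, project all points onto a single whole layer, assign a shape flag to each layer and to the whole layer, then determine the object's shape flag. The shape-classification heuristic and every name here are illustrative assumptions, not the applicant's disclosed implementation and not the cited art.

    # Illustrative sketch of the claim-1 pipeline (all implementation details assumed).
    from collections import Counter
    import numpy as np

    def split_into_layers(points: np.ndarray, m: int) -> list:
        """Bin Nx3 points (x, y, z) into M layers stacked in the vertical (z) direction."""
        edges = np.linspace(points[:, 2].min(), points[:, 2].max(), m + 1)
        idx = np.clip(np.digitize(points[:, 2], edges) - 1, 0, m - 1)
        return [points[idx == k] for k in range(m)]

    def shape_flag(points_xy: np.ndarray) -> str:
        """Toy shape classifier: 'I' if the 2D footprint is roughly a line, else 'L'."""
        if len(points_xy) < 3:
            return "unknown"
        centered = points_xy - points_xy.mean(axis=0)
        s = np.linalg.svd(centered, compute_uv=False)   # singular values, largest first
        return "I" if s[1] < 0.15 * s[0] else "L"

    def object_shape_flag(points: np.ndarray, m: int = 4) -> str:
        """Per-layer flags first, then use the whole-layer flag to confirm or override."""
        layers = split_into_layers(points, m)
        layer_flags = [shape_flag(layer[:, :2]) for layer in layers if len(layer)]
        whole_flag = shape_flag(points[:, :2])          # whole layer: all points projected to xy
        votes = Counter(f for f in layer_flags if f != "unknown")
        if not votes:
            return whole_flag
        first_flag, count = votes.most_common(1)[0]
        # Keep the layer-based flag when the vote is decisive; otherwise defer to the whole layer.
        return first_flag if count > len(layer_flags) / 2 else whole_flag

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        # Two perpendicular faces of a box-like object: an L-shaped footprint, seen over 4 layers.
        side_a = np.column_stack([rng.uniform(0, 2, 300), np.zeros(300), rng.uniform(0, 1.5, 300)])
        side_b = np.column_stack([np.zeros(300), rng.uniform(0, 1, 300), rng.uniform(0, 1.5, 300)])
        cloud = np.vstack([side_a, side_b]) + rng.normal(0, 0.02, (600, 3))
        print(object_shape_flag(cloud, m=4))            # expected: "L"

The decisive-vote check in object_shape_flag loosely mirrors the change-or-maintain step discussed for claim 2; the actual claims also involve confidence scores, heading flags, and specific reliability conditions that this toy omits.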

Prosecution Timeline

Nov 10, 2022: Application Filed
Feb 25, 2026: Non-Final Rejection under §101, §102, and §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604429: MEASUREMENT DEVICE UNIT (granted Apr 14, 2026; 2y 5m to grant)
Patent 12591007: DETERMINING A CORRELATION BETWEEN POWER DISTURBANCES AND DATA ERRORS IN A TEST SYSTEM (granted Mar 31, 2026; 2y 5m to grant)
Patent 12517170: SEMICONDUCTOR DEVICE INSPECTION METHOD AND SEMICONDUCTOR DEVICE INSPECTION DEVICE (granted Jan 06, 2026; 2y 5m to grant)
Patent 12411047: BOLOMETER UNIT CELL PIXEL INTEGRITY CHECK SYSTEMS AND METHODS (granted Sep 09, 2025; 2y 5m to grant)
Patent 12406192: SERVICE LOCATION ANOMALIES (granted Sep 02, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 58%
With Interview (+10.4%): 68%
Median Time to Grant: 3y 4m
PTA Risk: Low
Based on 19 resolved cases by this examiner. Grant probability derived from career allow rate.
