Prosecution Insights
Last updated: April 19, 2026
Application No. 17/483,548

SYSTEMS AND METHODS OF REAL-TIME DETECTION OF AND GEOMETRY GENERATION FOR PHYSICAL GROUND PLANES

Current status: Non-Final OA — §103

Filed: Sep 23, 2021
Examiner: TRUONG, KARL DUC
Art Unit: 2614
Tech Center: 2600 — Communications
Assignee: Aeye Inc.
OA Round: 3 (Non-Final)

Grant Probability: 52% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 2y 7m
With Interview: 83%

Examiner Intelligence

Career Allow Rate: 52% (grants 15 of 29 resolved cases; -10.3% vs TC avg)
Interview Lift: +31.0% for resolved cases with interview
Avg Prosecution: 2y 7m typical timeline; 45 applications currently pending
Total Applications: 74 across all art units
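The headline figures above follow directly from the raw counts. A quick arithmetic check (the 83% with-interview rate is taken from the dashboard, not recomputed here):

```python
granted, resolved = 15, 29                    # examiner's resolved career cases
allow_rate = granted / resolved               # career allow rate
with_interview = 0.83                         # dashboard's with-interview grant rate
lift = with_interview - round(allow_rate, 2)  # lift over the rounded 52% base rate

print(f"allow rate {allow_rate:.1%}, interview lift {lift:+.1%}")
# -> allow rate 51.7%, interview lift +31.0%
```

The 51.7% exact rate is reported as 52% on the card, and the +31.0% lift is the gap between that rounded base rate and the 83% with-interview rate.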

Statute-Specific Performance

§101: 3.2% (-36.8% vs TC avg)
§103: 85.3% (+45.3% vs TC avg)
§102: 9.5% (-30.5% vs TC avg)
§112: 2.1% (-37.9% vs TC avg)
TC averages are estimates • Based on career data from 29 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on January 12, 2026 has been entered.

Response to Amendment

This action is in response to the amendment filed on January 12, 2026. Claims 1-20 remain rejected.

Response to Arguments

Applicant's arguments filed on January 12, 2026 with respect to the rejection of Claims 1, 10, and 19 under 35 U.S.C. § 103, regarding that the prior art does not teach the limitations "slicing, in accordance with at least one coordinate space threshold, one or more threshold points from the point cloud to generate a first sliced point cloud excluding the one or more threshold points"; "slicing, in accordance with at least one residual threshold, one or more residual points from the first sliced point cloud to generate a second sliced point cloud excluding the one or more residual points"; and "generating a ground plane aligned with one or more points of the second sliced point cloud in a coordinate space", have been fully considered but are moot in view of the new grounds of rejection: those limitations are now taught by the combination of Nister and Liu. Regarding the arguments to Claims 2-9, 11-18, and 20, those claims directly or indirectly depend on independent Claims 1, 10, and 19 respectively, and Applicant does not argue anything beyond the independent claims.
The limitations of those claims, in combination, were previously established as explained above.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Nister et al. (US 20210063198 A1, previously cited), hereinafter referenced as Nister, in view of Liu et al. (US 20210090263 A1), hereinafter referenced as Liu.
Regarding Claim 1, Nister discloses a method of real-time detection of and geometry generation for physical ground planes (Nister, [0085]: teaches a map creation process 106 <read on method> for receiving map-streams 210 from one or more vehicles 1500, where a stream of 3D camera images is received for feature detections, such as objects <read on geometry detection>, lane lines, and road boundary locations; [0088]: teaches using LIDAR data for generating LIDAR point cloud layers <read on physical ground planes>, such as a ground plane; [0131]: teaches real-time sensor data 102 generated by a vehicle 1500), the method comprising:

generating a point cloud based on one or more detected points (Nister, [0056]: teaches sensor data 102 corresponding to LIDAR data, where raw LIDAR data is accumulated and then converted into other types of representations, such as a 3D point cloud <read on generating point cloud>, a 2D LIDAR range image, etc.; Note: LIDAR is commonly used to generate 3D point clouds, which requires emitting laser pulses and reflecting them off objects, where the LIDAR sensor interprets the returns as detected points for point cloud data), the one or more detected points being reflected from one or more projected points of focused light projected onto an environment (Nister, [0056]: teaches sensor data 102 corresponding to LIDAR data, where raw LIDAR data is accumulated and then converted into other types of representations, such as a 3D point cloud, a 2D LIDAR range image, etc.; Note: LIDAR is known to reflect laser pulses off surfaces, which is commonly used to measure distances and map environments; additionally, the laser of a LIDAR is also a type of focused and projected light);

slicing, [[in accordance with at least one coordinate space threshold]], one or more [[threshold]] points from the point cloud to generate a first sliced point cloud [[excluding the one or more threshold points]] (Nister, [0088]: teaches generating the LIDAR data in LIDAR point cloud layers, hereinafter called slices <read on generating first sliced point cloud>, with respect to the origin of the vehicle, such as a ground slice, a giraffe plane slice, a ground plane slice, etc. <read on slicing points from point cloud>);

slicing, [[in accordance with at least one residual threshold]], one or more [[residual]] points [[from the first sliced point cloud]] to generate a second sliced point cloud [[excluding the one or more residual points]] (Nister, [0088]: teaches generating the LIDAR data in LIDAR point cloud layers, hereinafter called slices <read on generating second sliced point cloud>, with respect to the origin of the vehicle, such as a ground slice, a giraffe plane slice, a ground plane slice, etc.); and

generating a ground plane [[aligned with one or more points of the second sliced point cloud in a coordinate space]] (Nister, [0116]: teaches the LIDAR fusion process generating and saving point clouds in giraffe plane and ground plane slices <read on ground plane>, where "the 3D points in the corresponding planes or slices may later be used to generate different kinds of LiDAR map images for LiDAR localization").

However, Nister does not expressly disclose slicing, in accordance with at least one coordinate space threshold, one or more threshold points from the point cloud to generate a first sliced point cloud excluding the one or more threshold points; slicing, in accordance with at least one residual threshold, one or more residual points from the first sliced point cloud to generate a second sliced point cloud excluding the one or more residual points; and generating a ground plane aligned with one or more points of the second sliced point cloud in a coordinate space.
Liu discloses slicing, in accordance with at least one coordinate space threshold, one or more threshold points from the point cloud to generate a first sliced point cloud excluding the one or more threshold points (Liu, [0081]: teaches dividing a first three-dimensional space, where the estimated ground point cloud is located, into a plurality of second three-dimensional spaces as shown in FIG. 4; [0083]: teaches "fitting a plurality of first planes based on the estimated ground point cloud points <read on slicing threshold points from point cloud> within the plurality of second three-dimensional spaces <read on sliced in accordance with coordinate space threshold>"; [0087]: teaches "selecting, for each first plane, estimated ground point cloud points whose distances from the first plane are smaller than a first distance threshold from the second three-dimensional space where the first plane is located, as candidate ground point cloud points <read on generated first sliced point cloud>"; Note: the estimated ground point cloud points that are not selected as candidate ground point cloud points are being interpreted as threshold points that are excluded from the generated first sliced point cloud);

slicing, in accordance with at least one residual threshold, one or more residual points from the first sliced point cloud to generate a second sliced point cloud excluding the one or more residual points (Liu, [0089]: teaches "fitting a second plane by using the candidate ground point cloud points" and determining whether the second plane is stable; [0093]: teaches the system determining "whether the sum of the distances from the estimated ground point cloud points within the second three-dimensional space to the second plane is smaller than a second distance threshold <read on residual threshold>," where if it fails that condition, then the second plane is deemed to be unstable, thereby performing step 409, which replaces the first plane with the second plane and repeats steps 405-407; FIG. 4 teaches that if the second plane replaces the first plane, then new estimated ground point cloud points whose distances meet the prior criteria <read on slicing in accordance with residual threshold> are selected as new candidate ground point cloud points <read on generated second sliced point cloud>, where a new second plane is fitted <read on slicing residual points from first sliced point cloud> by using the new candidate ground point cloud points; Note: the residual points are being interpreted as candidate ground point cloud points from the first plane; additionally, points that are not selected are being interpreted as excluded residual points); and

generating a ground plane aligned with one or more points of the second sliced point cloud in a coordinate space (Liu, [0098]: teaches "generating a ground <read on ground plane> based on a plurality of ground sub planes"; [0099]: teaches "segmenting the point cloud into a first sub point cloud and a second sub point cloud based on the segmentation plane"; [0100]: teaches "determining point cloud points whose distances from the ground are smaller than a first distance threshold in the first sub point cloud as ground point cloud points, and determining point cloud points whose distances from the ground are smaller than a second distance threshold in the second sub point cloud <read on points of second sliced point cloud in coordinate space> as the ground point cloud points," where the cloud points exist in a three-dimensional space <read on coordinate space>; Note: Paragraph [0091] of the specification states that "the system generates the second ground plane aligned to one or more points satisfying the residual thresholds," where "the system generates the second ground plane based on the subset of the points detected from the environment and associated with a partial sweep").

Liu is analogous art with respect to Nister because they are from the same field of endeavor, namely handling 3D point cloud generation that includes layers. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement a system that determines candidate ground point cloud points from divided planes of a 3D point cloud, as taught by Liu, into the teaching of Nister. Doing so would allow the system to repeat point cloud fittings until a criterion is met, thereby resulting in more accurate 3D point cloud plane data for separate categories. Therefore, it would have been obvious to combine Liu with Nister.

Regarding Claim 10, it recites limitations that are similar in scope to Claim 1, but in a system. As shown in the rejection, the combination of Nister and Liu discloses the limitations of Claim 1. Additionally, Nister discloses a system (Nister, [0045]: teaches an end-to-end system for map-stream generation) comprising:… Thus, Claim 10 is met by Nister according to the mapping presented in the rejection of Claim 1, given the method corresponds to a system.

Regarding Claim 19, it recites limitations that are similar in scope to Claim 1, but in a non-transitory computer readable medium. As shown in the rejection, the combination of Nister and Liu discloses the limitations of Claim 1.
Additionally, Nister discloses a non-transitory computer readable medium including one or more instructions stored thereon and executable by a processor to (Nister, [0268]: teaches a computer-storage media including "both volatile and nonvolatile media and/or removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, and/or other data types"; [0270]: teaches a computing device including a processor to "execute at least some of the computer-readable instructions to control one or more components of the computing device 1600"):… calculate a geometric characteristic of a second ground plane (Nister, [0140]: teaches "a LiDAR layer of the fused HD map represented by the map data 108 may include a sliced LiDAR point cloud (e.g., corresponding to a ground plane slice, a giraffe plane slice, another defined slice <read on second ground plane>, such as a one meter thick slice extending from two meters to three meters from the ground plane <read on calculated geometric characteristic>, etc.)"). Thus, Claim 19 is met by Nister according to the mapping presented in the rejection of Claim 1, given the method corresponds to a non-transitory computer readable medium.

Regarding Claims 2 and 11, the combination of Nister and Liu discloses the method and the system of Claims 1 and 10 respectively.
Additionally, Nister further discloses generating the at least one residual threshold based on a geometry of the first sliced point cloud (Nister, [0115]: teaches filtering out noisy measurements (e.g., measurements corresponding to obstacles, such as ground reflectance), where "with respect to ground reflectance and elevation map image layers, the maps image generators may search for peak density <read on residual threshold> in each point's height distribution" and "only these points <read on geometry of first sliced point cloud> may be considered as ground detections, and the noisy measurements may be filtered out (e.g., measurements corresponding to obstacles)").

Regarding Claims 3 and 12, the combination of Nister and Liu discloses the method and the system of Claims 1 and 10 respectively. Additionally, Nister further discloses generating an intermediate ground plane aligned in the coordinate space with one or more points of the first sliced point cloud (Nister, [0051]: teaches "the LIDAR data may be sliced into an above ground slice <read on intermediate ground plane>," where the system determines if the above ground slice is useful or not relative to the ground plane slice <read on aligned with points of first sliced point cloud>; [0051]: further teaches an example, where "the above ground slice may not be as valuable as the giraffe plane slice or the ground plane slice because the detections far from the ground plane may not be as usable, accurate, and/or sufficiently precise for localization" and thus "the above ground slice (e.g., from 5 meters to 300 meters) may be filtered out").

Regarding Claims 4 and 13, the combination of Nister and Liu discloses the method and the system of Claims 1 and 10 respectively.
Additionally, Nister further discloses wherein the point cloud corresponds to a portion of a field of view of the environment (Nister, [0088]: teaches the system receiving data in a ray format, which is then converted to a LIDAR point cloud layer 526 via LIDAR conversion 512, where "a virtual camera with a top-down field of view <read on portion of a field of view> may be used to project the LIDAR point cloud into frames of the virtual camera to generate LIDAR maps images for the LIDAR maps image layer 528"; [0168]: teaches "cameras with a field of view that include portions of the environment in front of the vehicle 1500 (e.g., front-facing cameras) may be used for surround view, to help identify forward facing paths and obstacles, as well aid in, with the help of one or more controllers 1536 and/or control SoCs, providing information critical to generating an occupancy grid and/or determining the preferred vehicle paths").

Regarding Claims 5 and 14, the combination of Nister and Liu discloses the method and the system of Claims 4 and 13 respectively. Additionally, Nister further discloses capturing a frame comprising the point cloud (Nister, [0088]: teaches LIDAR point cloud data being converted to an image of the LIDAR point cloud from one or more different perspectives, where "a virtual camera with a top-down field of view may be used to project the LIDAR point cloud into frames <read on capturing a frame> of the virtual camera to generate LIDAR maps images for the LIDAR maps image layer 528"), the frame corresponding to the field of view (Nister, [0088]: teaches the LIDAR data corresponding to a particular slice, which contains a particular field of view).

Regarding Claims 6 and 15, the combination of Nister and Liu discloses the method and the system of Claims 4 and 13 respectively.
Additionally, Nister further discloses generating the ground plane based on a partial frame corresponding to the portion of the field of view (Nister, [0088]: teaches "the LIDAR data may be generated in slices <read on partial frame> (e.g., an above ground slice (e.g., from 5 meters to 300 meters with respect to the origin of the vehicle 1500), a giraffe plane slice (e.g., from 2.5 meters to 5 meters with respect to the origin of the vehicle 1500), a ground plane slice (e.g., from −2.5 meters to 0.5 meters with respect to the origin of the vehicle 1500), etc.)").

Regarding Claims 7, 16, and 20, the combination of Nister and Liu discloses the method, the system, and the non-transitory computer readable medium of Claims 1, 10, and 19 respectively. Additionally, Nister further discloses detecting the one or more projected points reflected from the environment (Nister, [0134]: teaches 2D projections of 3D landmarks being "projected into the image space and each point from the 2D projections may be compared against the portion of the distance function representation that the projected point lands on to determine the associated cost"), wherein each of the detected points is associated with one or more corresponding spatial identifiers in the coordinate space (Nister, [0134]: teaches using landmark locations <read on spatial identifiers> of lane dividers, road boundaries, signs, poles, etc.).

Regarding Claims 8 and 17, the combination of Nister and Liu discloses the method and the system of Claims 7 and 16 respectively. Additionally, Nister further discloses each of the one or more projected points is associated with at least one of the one or more corresponding spatial identifiers (Nister, [0097]: teaches "the 3D world space locations of landmarks may be projected into a virtual field of view of a virtual camera to generate images corresponding to 2D image space locations of the landmarks within the virtual image").
Regarding Claims 9 and 18, the combination of Nister and Liu discloses the method and the system of Claims 1 and 10 respectively. Additionally, Nister further discloses calculating a geometric characteristic of a second ground plane (Nister, [0137]: teaches "a LIDAR layer of the fused HD map <read on geometric characteristic> represented by the map data 108 may include a LiDAR intensity (or reflectivity) image (e.g., a top down projection of the intensity values from the fused LiDAR data)"; [0117]: further teaches an example, where "painted surfaces, such as lane markers, may have higher reflectivity, and this reflection intensity may be captured and used to compare the map data 108 to the current LiDAR sensor data"); and transmitting a vehicle operation instruction based on the geometric characteristic (Nister, [0126]: teaches "transmitting data representative of the fused map <read on geometric characteristic> to one or more vehicles for use in executing one or more operations <read on transmitting vehicle operation instruction>"; [0130]: teaches "at a beginning of a drive, a current road segment 610 of the vehicle 1500 may be determined," where "the current road segment 610 may be known from a last drive—e.g., when the vehicle 1500 was shut off, the last known road segment the vehicle 1500 was localized to may be stored").

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Kim et al. (US 20190324471 A1) discloses improved processing of sensor data, such as LIDAR, for distinguishing between free space and objects/hazards; Nagori et al. (US 20190078880 A1) discloses ground plane estimation of a 3D point cloud based on modifications to a random sample consensus (RANSAC) algorithm; Stein et al. (US 20150371096 A1) discloses detecting suspected hazard points on an image by using a ground plane constraint; and Yasutomi (US 20200311963 A1) discloses calculating a trajectory based on acquired point cloud data and a vertical plane orthogonal to the trajectory.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KARL TRUONG, whose telephone number is (703) 756-5915. The examiner can normally be reached 10:30 AM - 7:30 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kent Chang, can be reached at (571) 272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/K.D.T./Examiner, Art Unit 2614
/KENT W CHANG/Supervisory Patent Examiner, Art Unit 2614
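The two-step slicing recited in Claim 1 (a coordinate-space cut, then a residual cut against a fitted plane) can be sketched as a minimal NumPy illustration. This is a hypothetical reconstruction of the claim language only, not code from Nister or Liu: the default height band borrows Nister's ground plane slice of −2.5 m to 0.5 m, and the 0.1 m residual threshold is an assumed value.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane z = ax + by + c through an (N, 3) point array."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs  # (a, b, c)

def slice_ground_plane(cloud, z_min=-2.5, z_max=0.5, residual_threshold=0.1):
    # Step 1: coordinate-space slice -- drop points outside the height band
    # (the "threshold points"), leaving a first sliced point cloud.
    in_band = (cloud[:, 2] >= z_min) & (cloud[:, 2] <= z_max)
    first_slice = cloud[in_band]

    # Step 2: residual slice -- fit a provisional plane, then drop points
    # whose vertical residual exceeds the residual threshold (the
    # "residual points"), leaving a second sliced point cloud.
    a, b, c = fit_plane(first_slice)
    residuals = np.abs(
        first_slice[:, 2] - (a * first_slice[:, 0] + b * first_slice[:, 1] + c)
    )
    second_slice = first_slice[residuals <= residual_threshold]

    # Step 3: refit so the ground plane aligns with the surviving points.
    return fit_plane(second_slice), second_slice
```

A production implementation would typically use a robust estimator such as RANSAC (as in the Nagori reference cited above) rather than a single least-squares pass, and would iterate the fit-and-cut loop as Liu's FIG. 4 does.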

Prosecution Timeline

Sep 23, 2021: Application Filed
Mar 19, 2025: Non-Final Rejection — §103
Sep 02, 2025: Response Filed
Oct 01, 2025: Final Rejection — §103
Jan 12, 2026: Request for Continued Examination
Jan 26, 2026: Response after Non-Final Action
Mar 09, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12573149
DATA PROCESSING METHOD AND APPARATUS, DEVICE, COMPUTER-READABLE STORAGE MEDIUM, AND COMPUTER PROGRAM PRODUCT
2y 5m to grant • Granted Mar 10, 2026
Patent 12561875
ANIMATION FRAME DISPLAY METHOD AND APPARATUS, DEVICE, AND STORAGE MEDIUM
2y 5m to grant • Granted Feb 24, 2026
Patent 12494013
AUTODECODING LATENT 3D DIFFUSION MODELS
2y 5m to grant • Granted Dec 09, 2025
Patent 12456258
SYSTEMS AND METHODS FOR GENERATING A SHADOW MESH
2y 5m to grant • Granted Oct 28, 2025
Patent 12444020
FLEXIBLE IMAGE ASPECT RATIO USING MACHINE LEARNING
2y 5m to grant • Granted Oct 14, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 52%
With Interview: 83% (+31.0%)
Median Time to Grant: 2y 7m
PTA Risk: High

Based on 29 resolved cases by this examiner. Grant probability derived from career allow rate.
