Prosecution Insights
Last updated: April 19, 2026
Application No. 18/909,705

USING DEEP LEARNING TO IDENTIFY ROAD GEOMETRY FROM POINT CLOUDS

Non-Final OA (§102, §103)
Filed: Oct 08, 2024
Examiner: WILLIS, BRANDON Z.
Art Unit: 3665
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Mobileye Vision Technologies Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 69% (Favorable)
OA Rounds: 1-2
To Grant: 2y 8m
With Interview: 99%

Examiner Intelligence

Grants 69% — above average
Career Allow Rate: 69% (140 granted / 203 resolved; +17.0% vs TC avg)

Strong +38% interview lift
Interview Lift: +38.3% for resolved cases with an interview vs. without

Typical timeline
Avg Prosecution: 2y 8m (23 currently pending)

Career history
Total Applications: 226, across all art units

Statute-Specific Performance

§101: 11.3% (-28.7% vs TC avg)
§103: 48.3% (+8.3% vs TC avg)
§102: 27.3% (-12.7% vs TC avg)
§112: 9.1% (-30.9% vs TC avg)
Tech Center averages are estimates • Based on career data from 203 resolved cases

Office Action

§102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 6/20/2025 and 9/23/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Drawings

The drawings are objected to because a clean copy of Figure 6 should be provided without the annotation which modified element 606 to 605. Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

In addition to Replacement Sheets containing the corrected drawing figure(s), applicant is required to submit a marked-up copy of each Replacement Sheet including annotations indicating the changes made to the previous version. The marked-up copy must be clearly labeled as “Annotated Sheets” and must be presented in the amendment or remarks section that explains the change(s) to the drawings. See 37 CFR 1.121(d)(1). Failure to timely submit the proposed drawing and marked-up copy will result in the abandonment of the application.

Specification

The abstract of the disclosure is objected to because in lines 2-3, “objects is detected” should read “objects are detected”, and in line 6, “other network” should read “neural network”. A corrected abstract of the disclosure is required and must be presented on a separate sheet, apart from any other text. See MPEP § 608.01(b).

Claim Objections

Claims 1, 12, 14-16, and 19 are objected to because of the following informalities:
In claim 1, line 12, “get features” should read “extract features”.
In claim 12, line 3, “at” should be removed.
In claim 14, line 2, “presenting the convolutional values” should read “presenting convolutional values”.
In claim 15, lines 1-2, “based on the convolutional values” should read “based on convolutional values”.
In claim 16, lines 1-2, “detects what the object is, what shape the object has, and what the object’s current location and trajectory is” should read “detect the type of object, the shape of the object, and the current location and trajectory of the object”.
In claim 19, lines 14-15, “finally into (d) a final neural network segment trained to identify the road geometry” should read “inputting the features determined in (d) for the respective voxel into a second neural network segment trained to identify the road geometry”.
Appropriate correction is required.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 2, 7, 9, 11, 15-17, 19 and 20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Zhang et al. (U.S. Publication No. 2020/0233429; hereinafter Zhang).

Regarding claim 1, Zhang teaches a computer-implemented method for detecting road geometry from point clouds (Zhang: Par. 41; i.e., Perception module 302 can also detect objects based on other sensors data provided by other sensors such as a radar and/or LIDAR; Par. 41; i.e., The objects can include … road way boundaries), comprising:
(a) determining, using at least one sensor on a vehicle, a point cloud representing surroundings of the vehicle (Zhang: Par. 27; i.e., LIDAR unit 215 may sense objects in the environment in which the autonomous vehicle is located using lasers; Par. 65; i.e., a point cloud generated by the LiDAR unit 215 in the sensor system 115);
(b) partitioning the surroundings of the vehicle into a plurality of voxels, each voxel representing a volume in the surroundings of the vehicle (Zhang: Par. 54; i.e., to extract or learn the point cloud features 405, the feature learning network 404 can partition a space within the angle of view of the ADV into multiple equally spaced voxels (i.e., cells));
for each of the plurality of voxels:
(c) determining, based on the point cloud, whether a three-dimensional data exists for a respective voxel (Zhang: Par. 67; i.e., voxel D 609 does not contain any LiDAR point, whereas voxel A 604 contains 4 LiDAR points, voxel B 605 contains 5 LiDAR points, and voxel C 608 contains 3 LiDAR points);
(d) when the three-dimensional data is determined in (c) to exist, inputting data representing points from the point cloud positioned within the respective voxel into a first neural network segment to get features for the respective voxel, the first neural network segment being a Fully Connected (FC) feature encoding neural network (Zhang: Par. 20; i.e., the point cloud features can be extracted using a fully connected network (FCN). The FCN can … encode each non-empty voxel with point-wise features, and combine the point-wise features with a locally aggregated feature; Par. 63; i.e., the point cloud features 522 can be concatenated and provided to a convolution neural network 527 for feature aggregation); and
(e) inputting the features determined in (d) for the respective voxel into a second neural network segment trained to identify road geometry and detected objects (Zhang: Par. 64; i.e., the region proposal network 529 is a trained neutral network model that proposes multiple identifiable objects within a particular image. The region proposal network 529 can generate the multiple proposals from a region where an object lies by sliding over a feature map previously generated by the convolution neural network 527… the regression map 533 can be used in conjunction to detect and perceive an object in surrounding environments of the vehicle 511 within each view angle; Par. 41; i.e., the objects can include … road way boundaries).

Regarding claim 2, Zhang teaches the method according to claim 1. Zhang further teaches wherein the determining the point cloud (a) comprises detecting the point cloud using lidar data (Zhang: Par. 53; i.e., a point cloud generated by the LiDAR unit 215 in the sensor system 115).

Regarding claim 7, Zhang teaches the method according to claim 1. Zhang further teaches wherein each point in the point cloud comprises a location in three-dimensional space, a timestamp when the location was detected, and a reflectivity detected at the location (Zhang: Par. 54; i.e., Each LiDAR point can have a number of attributes, including coordinates and a received reflectance; Par. 62; i.e., As the vehicle starts to extract the map features at a timestamp for each predetermined time interval, the vehicle also extracts point cloud features 522 from the view angle at the same timestamp).

Regarding claim 9, Zhang teaches the method according to claim 1. Zhang further teaches wherein the road geometry comprises road edges and lane dividers (Zhang: Par. 41; i.e., The objects can include … road way boundaries).

Regarding claim 11, Zhang teaches the method according to claim 1. Zhang further teaches controlling the vehicle based on the road geometry (Zhang: Par. 41; i.e., the objects can include … road way boundaries; Par. 48; i.e., the navigation system may determine a series of speeds and directional headings to affect movement of the autonomous vehicle along a path that substantially avoids perceived obstacles).

Regarding claim 15, Zhang teaches the method according to claim 1. Zhang further teaches wherein the second neural network detects, based on the convolutional values determined in (d), an object in the point cloud (Zhang: Par. 75; i.e., the concatenated feature list is provided as an input to a number of neural networks to detect one or more objects in the surrounding environment).

Regarding claim 16, Zhang teaches the method according to claim 1. Zhang further teaches wherein the second neural network detects what the object is, what shape the object has, and what the object’s current location and trajectory is (Zhang: Par. 64; i.e., the region proposal network 529 is a trained neutral network model that proposes multiple identifiable objects within a particular image; Par. 40; i.e., The perception can include … a relative position of another vehicle; Par. 42; i.e., prediction module 303 will predict whether the vehicle will likely move straight forward or make a turn).

Regarding claim 17, Zhang teaches the method according to claim 1. Zhang further teaches wherein the first and second neural networks are trained together (Zhang: Par. 35; i.e., a set of one or more machine-learning models such as neural networks can be trained for object detection based on map features and point cloud features).

Regarding claim 19, Zhang teaches a non-transitory computer readable medium including instructions for determining road geometry from point clouds that causes a computing system to perform operations comprising (Zhang: Par. 86; i.e., Storage device 1508 may include computer-accessible storage medium 1509 (also known as a machine-readable storage medium or a computer-readable medium) on which is stored one or more sets of instructions or software (e.g., module, unit, and/or logic 1528) embodying any one or more of the methodologies or functions described herein):
(a) determining, using at least one sensor on a vehicle, a point cloud representing surroundings of the vehicle (Zhang: Par. 27; i.e., LIDAR unit 215 may sense objects in the environment in which the autonomous vehicle is located using lasers; Par. 65; i.e., a point cloud generated by the LiDAR unit 215 in the sensor system 115);
(b) partitioning the surroundings of the vehicle into a plurality of voxels, each voxel representing a volume in the surroundings of the vehicle (Zhang: Par. 54; i.e., to extract or learn the point cloud features 405, the feature learning network 404 can partition a space within the angle of view of the ADV into multiple equally spaced voxels (i.e., cells));
for each of the plurality of voxels:
(c) determining, based on the point cloud, whether a three-dimensional data exists for a respective voxel (Zhang: Par. 67; i.e., voxel D 609 does not contain any LiDAR point, whereas voxel A 604 contains 4 LiDAR points, voxel B 605 contains 5 LiDAR points, and voxel C 608 contains 3 LiDAR points);
(d) when the three-dimensional data is determined in (c) to exist, inputting data representing points from the point cloud positioned within the respective voxel into a first neural network segment to extract features for the respective voxel, then into the second neural network segment, being a sparse convolutional neural network (Zhang: Par. 20; i.e., the point cloud features can be extracted using a fully connected network (FCN). The FCN can … encode each non-empty voxel with point-wise features, and combine the point-wise features with a locally aggregated feature; Par. 63; i.e., the point cloud features 522 can be concatenated and provided to a convolution neural network 527 for feature aggregation); and
(e) finally into (d) a final neural network segment trained to identify the road geometry (Zhang: Par. 64; i.e., the region proposal network 529 is a trained neutral network model that proposes multiple identifiable objects within a particular image. The region proposal network 529 can generate the multiple proposals from a region where an object lies by sliding over a feature map previously generated by the convolution neural network 527… the regression map 533 can be used in conjunction to detect and perceive an object in surrounding environments of the vehicle 511 within each view angle; Par. 41; i.e., the objects can include … road way boundaries).

Regarding claim 20, Zhang teaches a processing device for determining road geometry from point clouds (Zhang: Par. 41; i.e., Perception module 302 can also detect objects based on other sensors data provided by other sensors such as a radar and/or LIDAR; Par. 41; i.e., The objects can include … road way boundaries), the processing device configured to perform operations comprising:
(a) determining, using at least one sensor on a vehicle, a point cloud representing surroundings of the vehicle (Zhang: Par. 27; i.e., LIDAR unit 215 may sense objects in the environment in which the autonomous vehicle is located using lasers; Par. 65; i.e., a point cloud generated by the LiDAR unit 215 in the sensor system 115);
(b) partitioning the surroundings of the vehicle into a plurality of voxels, each voxel representing a volume in the surroundings of the vehicle (Zhang: Par. 54; i.e., to extract or learn the point cloud features 405, the feature learning network 404 can partition a space within the angle of view of the ADV into multiple equally spaced voxels (i.e., cells));
for each of the plurality of voxels:
(c) determining, based on the point cloud, whether a three-dimensional data exists for a respective voxel (Zhang: Par. 67; i.e., voxel D 609 does not contain any LiDAR point, whereas voxel A 604 contains 4 LiDAR points, voxel B 605 contains 5 LiDAR points, and voxel C 608 contains 3 LiDAR points);
(d) when the three-dimensional data is determined in (c) to exist, inputting data representing points from the point cloud positioned within the respective voxel into a first neural network segment to extract features for the respective voxel, the first neural network segment being for feature encoding (Zhang: Par. 20; i.e., the point cloud features can be extracted using a fully connected network (FCN). The FCN can … encode each non-empty voxel with point-wise features, and combine the point-wise features with a locally aggregated feature; Par. 63; i.e., the point cloud features 522 can be concatenated and provided to a convolution neural network 527 for feature aggregation); and
(e) then inputting the features determined in (d) for the plurality of voxels into a second neural network segment trained to identify the road geometry (Zhang: Par. 64; i.e., the region proposal network 529 is a trained neutral network model that proposes multiple identifiable objects within a particular image. The region proposal network 529 can generate the multiple proposals from a region where an object lies by sliding over a feature map previously generated by the convolution neural network 527… the regression map 533 can be used in conjunction to detect and perceive an object in surrounding environments of the vehicle 511 within each view angle; Par. 41; i.e., the objects can include … road way boundaries).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors.
In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 3 and 5 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang and further in view of Mahata et al. (U.S. Publication No. 2023/0186647; hereinafter Mahata).

Regarding claim 3, Zhang teaches the method according to claim 1. Zhang further teaches wherein the determining the point cloud (a) comprises: receiving a plurality of sensor sweeps, each sensor sweep including a plurality of points detected in the surroundings of the vehicle at a different time (Zhang: Par. 56; i.e., the fan-shaped space can be created by a view angle and a vertical scanning angle at the particular timestamp corresponding to a driving cycle of the ADV; Par. 59; i.e., the vehicle may have a point cloud sweeping range 513. As the vehicle 511 moves from position A 515 to position B 517 along the trajectory, the vehicle 511 may have different surrounding environments, including different point cloud features); and aggregating the plurality of sensor sweeps to determine the point cloud (Zhang: Par. 17; i.e., the point cloud features can be extracted from a perception area of the ADV within a particular angle view at each driving cycle… The layered map features can be extracted based on a position of the ADV at each timestamp corresponding to a driving cycle or another time interval). Zhang does not explicitly teach adjusting points from each of the plurality of sensor sweeps to correct for ego-motion of the vehicle. However, in the same field of endeavor, Mahata teaches adjusting points from each of the plurality of sensor sweeps to correct for ego-motion of the vehicle (Mahata: Par. 114; i.e., for every given camera snapshot location we rotate the point cloud around the z-axis… The rotation generally aligns road in y-axis direction). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Zhang to have further incorporated adjusting points from each of the plurality of sensor sweeps to correct for ego-motion of the vehicle, as taught by Mahata. Doing so would allow the point cloud to have the same azimuth angle as the vehicle (Mahata: Par. 114; i.e., so that azimuth angle of the point cloud is aligned with car azimuth angle).

Regarding claim 5, Zhang in view of Mahata teaches the method according to claim 3. Zhang further teaches wherein the plurality of sensor sweeps are collected from a plurality of lidar sensors on the vehicle (Zhang: Par. 17; i.e., the fan-shaped space may be created by the view angle of the ADV and a vertical scanning angle of one or more LiDAR units in the ADV).

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Zhang in view of Mahata and further in view of Lever et al. (U.S. Publication No. 2021/0215805; hereinafter Lever).

Regarding claim 4, Zhang in view of Mahata teaches the method according to claim 3 but does not explicitly teach wherein the point cloud represents objects in motion relative to earth as blurred. However, in the same field of endeavor, Lever teaches wherein the point cloud represents objects in motion relative to earth as blurred (Lever: Par. 48; i.e., if a first sensor sees an object at the beginning of its rotation (spin) and second sensor sees it at the end of the rotation and the two point clouds are fused together, one will get a smeared effect or a ghosting effect). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Zhang to have further incorporated wherein the point cloud represents objects in motion relative to earth as blurred, as taught by Lever. Doing so would allow the system to prevent the blurred objects by separating the LIDAR point clouds (Lever: Par. 49; i.e., the system is capable of providing a coherent spatial and temporal fused point cloud output by … splitting the LiDAR point clouds in time to prevent points in a region captured at overly different times to be associated with the same timestamp).

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Zhang in view of Mahata and further in view of Hollen (U.S. Publication No. 2024/0302502; hereinafter Hollen).

Regarding claim 6, Zhang in view of Mahata teaches the method according to claim 5, but does not explicitly teach wherein the plurality of lidar sensors comprises a long range lidar positioned high on the vehicle and a plurality of near field lidar sensors positioned around the vehicle to capture blind spots from the long range lidar. However, in the same field of endeavor, Hollen teaches wherein the plurality of lidar sensors comprises a long range lidar positioned high on the vehicle and a plurality of near field lidar sensors positioned around the vehicle to capture blind spots from the long range lidar (Hollen: Par. 118; i.e., an automobile (e.g., a passenger car) outfitted with lidar for autonomous driving might be outfitted with multiple separate lidar sensors including a forward-facing long range lidar sensor, a rear-facing short-range lidar sensor and one or more short-range lidar sensors along each side of the car). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Zhang to have further incorporated wherein the plurality of lidar sensors comprises a long range lidar positioned high on the vehicle and a plurality of near field lidar sensors positioned around the vehicle to capture blind spots from the long range lidar, as taught by Hollen. Doing so would allow the system to obtain sensor data 360 degrees around the vehicle (Hollen: Par. 119; i.e., The number of lidar sensors, the placement of the lidar sensors, and the fields of view of each individual lidar sensors can be chosen to obtain a majority of, if not the entirety of, a 360-degree Field of view of the environment surrounding the vehicle some portions of which can be optimized for different ranges).

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Zhang and further in view of Chen et al. (U.S. Publication No. 2021/0141092; hereinafter Chen).

Regarding claim 8, Zhang teaches the method according to claim 7, but does not teach wherein each point further comprises a Doppler value detected at the location. However, in the same field of endeavor, Chen teaches wherein each point further comprises a Doppler value detected at the location (Chen: Par. 126; i.e., sensor processor 340 or other processors such as for example processors 604, 614 and 618 then determines the Doppler information for the LiDAR point cloud data). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Zhang to have further incorporated wherein each point further comprises a Doppler value detected at the location, as taught by Chen. Doing so would allow the system to classify each point as a static or a dynamic point (Chen: Par. 127; i.e., After the Doppler information has been determined, the processor 340, then classifies each return point in the Doppler cloud point as either a static point or a dynamic point).

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Zhang and further in view of Ferrer et al. (U.S. Publication No. 2021/0039715; hereinafter Ferrer).

Regarding claim 10, Zhang teaches the method according to claim 1, but does not explicitly teach comparing the road geometry to a known map of the surroundings of the vehicle to localize the vehicle. However, in the same field of endeavor, Ferrer teaches comparing the road geometry to a known map of the surroundings of the vehicle to localize the vehicle (Ferrer: Par. 16; i.e., lane boundaries and/or curb locations may be compared with map data … to more accurately determine the relative location of the vehicle). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Zhang to have further incorporated comparing the road geometry to a known map of the surroundings of the vehicle to localize the vehicle, as taught by Ferrer. Doing so would allow the system to determine the next location for the vehicle to perform a turn (Ferrer: Par. 36; i.e., the location of the next opportunity to turn right may be identified according to the map data and the vehicle's current position).

Claims 12-14 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang and further in view of Vig (U.S. Publication No. 2020/0250439; hereinafter Vig).

Regarding claim 12, Zhang teaches the method according to claim 1, but does not explicitly teach wherein the second neural network outputs, for respective voxels in the plurality of voxels, whether a lane or road edge is within the respective voxel, and at what angle the lane or road edge is passing through the voxel at. However, in the same field of endeavor, Vig teaches wherein the second neural network outputs, for respective voxels in the plurality of voxels, whether a lane or road edge is within the respective voxel, and at what angle the lane or road edge is passing through the voxel at (Vig: Par. 93; i.e., map generation system 102 defines, classifies, and/or labels each area (e.g., pixel, cell, etc.) of an attribute of a roadway including … a curb … a lane of a roadway of a road (e.g., … a direction of travel in a lane of a roadway, etc.); each cell is labeled with a curb that passes through a respective cell and the direction of the lane passing through the respective cell). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Zhang to have further incorporated wherein the second neural network outputs, for respective voxels in the plurality of voxels, whether a lane or road edge is within the respective voxel, and at what angle the lane or road edge is passing through the voxel at, as taught by Vig. Doing so would reduce mapping time and enhance overall safety of the vehicle (Vig: Par. 44; i.e., time-intensive mapping of roadways and intersections may be reduced, navigation range and safety of AV travel may be enhanced).

Regarding claim 13, Zhang in view of Vig teaches the method according to claim 12. Vig further teaches interpolating, based on an output of the second neural network outputs, a spline representing the road geometry (Vig: Par. 46; i.e., generating a road edge boundary (e.g., a spline, a polyline, etc.) of the roadway in the map based on the plurality of prediction scores).

Regarding claim 14, Zhang in view of Vig teaches the method according to claim 12. Zhang further teaches (f) assembling a two-dimensional grid presenting the convolutional values determined in (d) (Zhang: Par. 74; i.e., the point-wise concatenated features from all the non-empty voxel in the view angle at a particular timestamp are further concatenated with the map features extracted from a high definition map to create a concatenated feature list. Each feature can be presented by a binary grid), wherein the inputting (e) comprises inputting the two-dimensional grid (Zhang: Par. 75; i.e., the concatenated feature list is provided as an input to a number of neural networks to detect one or more objects in the surrounding environment).

Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Zhang and further in view of Zhang, D. et al. (U.S. Publication No. 2024/0185523; hereinafter Zhang, D.).

Regarding claim 18, Zhang teaches the method according to claim 17, but does not explicitly teach wherein the first and second neural networks are trained using examples labeled by a more computationally demanding neural network. However, in the same field of endeavor, Zhang, D. teaches wherein the first and second neural networks are trained using examples labeled by a more computationally demanding neural network (Zhang, D.: Par. 84; i.e., complete geometry 244, class labels associated with complete geometry 244, colors associated with complete geometry 244, and/or other output generated by machine learning model 208 may be used to train one or more additional machine learning models to perform n-dimensional object detection). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Zhang to have further incorporated wherein the first and second neural networks are trained using examples labeled by a more computationally demanding neural network, as taught by Zhang, D. Doing so would allow for improved reliability and safety (Zhang, D.: Par. 194; i.e., the presence of a neural network(s) in the supervisory MCU may improve reliability, safety and performance).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Additional prior art deemed pertinent in the art of using neural networks to process point clouds to identify road geometry around a vehicle includes Silver et al. (U.S. Patent No. 9285230), Costea et al. (U.S. Publication No. 2022/0111868), Lu et al. (U.S. Publication No. 2021/0354718), Tran (U.S. Publication No. 2021/0108926), and Douillard et al. (U.S. Publication No. 2018/0364717).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRANDON Z WILLIS whose telephone number is (571)272-5427. The examiner can normally be reached Weekdays 8:00-5:30. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Erin D. Bishop can be reached at (571) 270-3713. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BRANDON Z WILLIS/
Examiner, Art Unit 3665
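For context on the technology at issue, the independent claims recite a per-voxel pipeline: voxelize the LiDAR point cloud, skip empty voxels, encode the points in each occupied voxel with a fully connected (FC) feature-encoding segment, and pass the per-voxel features to a second segment trained to identify road geometry. A minimal Python sketch of that flow follows; the layer sizes, class labels, and module names are illustrative assumptions, not the architecture disclosed in the application or in Zhang.

# Illustrative sketch only: sizes, names, and the road-geometry head are assumptions,
# not the claimed architecture of application 18/909,705.
import torch
import torch.nn as nn

VOXEL_SIZE = 0.5  # meters per voxel edge (assumed)

def voxelize(points: torch.Tensor) -> dict:
    """Group (N, 4) points [x, y, z, reflectivity] by integer voxel index.
    Returns {voxel_index: (M, 4) points} for non-empty voxels only."""
    indices = torch.floor(points[:, :3] / VOXEL_SIZE).long()
    voxels = {}
    for idx, pt in zip(indices.tolist(), points):
        voxels.setdefault(tuple(idx), []).append(pt)
    return {k: torch.stack(v) for k, v in voxels.items()}

class FCEncoder(nn.Module):
    """First segment: fully connected per-point encoder, max-pooled per voxel."""
    def __init__(self, in_dim=4, feat_dim=64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(),
                                 nn.Linear(32, feat_dim), nn.ReLU())
    def forward(self, voxel_points):                     # (M, 4) points in one voxel
        return self.mlp(voxel_points).max(dim=0).values  # (feat_dim,)

class RoadGeometryHead(nn.Module):
    """Second segment: consumes per-voxel features and predicts a road-geometry
    label per voxel (e.g., road edge / lane divider / none)."""
    def __init__(self, feat_dim=64, n_classes=3):
        super().__init__()
        self.head = nn.Sequential(nn.Linear(feat_dim, 32), nn.ReLU(),
                                  nn.Linear(32, n_classes))
    def forward(self, voxel_features):     # (V, feat_dim)
        return self.head(voxel_features)   # (V, n_classes) logits

# Toy end-to-end pass over a random point cloud.
points = torch.rand(1000, 4) * torch.tensor([40.0, 40.0, 3.0, 1.0])
encoder, head = FCEncoder(), RoadGeometryHead()
voxels = voxelize(points)                                        # steps (b)/(c)
features = torch.stack([encoder(p) for p in voxels.values()])    # step (d)
logits = head(features)                                          # step (e)
print(len(voxels), "non-empty voxels ->", logits.shape)

In this sketch a per-voxel max-pool stands in for the "locally aggregated feature" the examiner quotes from Zhang; the claimed method's actual aggregation is not specified in the excerpt above.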

Prosecution Timeline

Oct 08, 2024
Application Filed
Jan 08, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602931
IDENTIFICATION OF UNKNOWN TRAFFIC OBJECTS
Granted Apr 14, 2026 • 2y 5m to grant

Patent 12589767
SYSTEMS AND METHODS FOR GENERATING A DRIVING TRAJECTORY
Granted Mar 31, 2026 • 2y 5m to grant

Patent 12545299
DYNAMICALLY WEIGHTING TRAINING DATA USING KINEMATIC COMPARISON
Granted Feb 10, 2026 • 2y 5m to grant

Patent 12534072
TRANSPORT DANGEROUS SITUATION CONSENSUS
Granted Jan 27, 2026 • 2y 5m to grant

Patent 12528483
METHOD, ELECTRONIC DEVICE AND MEDIUM FOR TARGET STATE ESTIMATION
Granted Jan 20, 2026 • 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 69%
With Interview: 99% (+38.3%)
Median Time to Grant: 2y 8m
PTA Risk: Low
Based on 203 resolved cases by this examiner. Grant probability derived from career allow rate.
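The headline figures follow directly from the examiner counts shown above if the grant probability is taken as the career allow rate and the interview lift is applied additively in percentage points with a cap; the tool's actual model is not disclosed, so treat this as a plausible reconstruction rather than the product's formula.

# Rough reconstruction of the displayed figures; the additive, capped treatment of
# the interview lift is an assumption, not the tool's documented method.
granted, resolved = 140, 203
base_rate = granted / resolved                            # 0.6897 -> shown as 69%
interview_lift = 0.383                                    # +38.3 points from interview statistics
with_interview = min(base_rate + interview_lift, 0.99)    # capped -> shown as 99%
print(f"base: {base_rate:.0%}  with interview: {with_interview:.0%}")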
