Prosecution Insights
Last updated: April 19, 2026
Application No. 18/746,878

METHODS AND SYSTEMS FOR USING TRAINED GENERATIVE ADVERSARIAL NETWORKS TO IMPUTE 3D DATA FOR VEHICLES AND TRANSPORTATION

Status: Non-Final OA (§103, §DP)
Filed: Jun 18, 2024
Examiner: CHEN, FRANK S
Art Unit: 2611
Tech Center: 2600 (Communications)
Assignee: State Farm Mutual Automobile Insurance Company
OA Round: 1 (Non-Final)
Grant Probability: 82% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 2m
With Interview: 91%

Examiner Intelligence

Career Allow Rate: 82%, above average (539 granted / 657 resolved; +20.0% vs TC avg)
Interview Lift: +8.8% (moderate), comparing resolved cases with and without an interview
Avg Prosecution: 2y 2m (fast prosecutor), with 24 applications currently pending
Total Applications: 681 across all art units (career history)
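The headline figures above are simple arithmetic on the career counts. A minimal sketch of how they appear to be derived (hypothetical variable names; it assumes, as the panel implies, that the interview lift is added to the career allow rate in percentage points and that the Tech Center baseline is the allow rate minus the displayed delta):

```python
# Reproducing the dashboard's headline examiner statistics from the raw
# counts shown above. All names are illustrative, not from the dashboard.

GRANTED = 539          # granted applications (career)
RESOLVED = 657         # resolved applications (career)
INTERVIEW_LIFT = 8.8   # percentage-point lift seen in cases with an interview
TC_AVG_ALLOW = 62.0    # Tech Center average implied by the "+20.0% vs TC avg" badge

allow_rate = 100 * GRANTED / RESOLVED          # 82.0%
with_interview = allow_rate + INTERVIEW_LIFT   # 90.8%, displayed as 91%

print(f"Career allow rate: {allow_rate:.1f}%")                  # 82.0%
print(f"vs TC average:     {allow_rate - TC_AVG_ALLOW:+.1f}%")  # +20.0%
print(f"With interview:    {round(with_interview)}%")           # 91%
```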

Statute-Specific Performance

§101: 10.1% (-29.9% vs TC avg)
§103: 55.9% (+15.9% vs TC avg)
§102: 4.8% (-35.2% vs TC avg)
§112: 11.1% (-28.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 657 resolved cases.
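Each delta above is the examiner's rate minus the Tech Center estimate, so the estimate can be backed out of the displayed numbers. A quick check (a minimal sketch with assumed dictionary names):

```python
# Recovering the implied Tech Center average from each statute's rate and
# its displayed delta. Rates are taken from the panel above.
examiner_rates = {"101": 10.1, "103": 55.9, "102": 4.8, "112": 11.1}
deltas_vs_tc = {"101": -29.9, "103": +15.9, "102": -35.2, "112": -28.9}

for statute, rate in examiner_rates.items():
    tc_avg = rate - deltas_vs_tc[statute]  # delta = examiner rate - TC average
    print(f"§{statute}: examiner {rate:.1f}% vs TC estimate {tc_avg:.1f}%")

# Every statute backs out to a 40.0% TC estimate, which suggests the panel
# uses a single flat Tech Center baseline rather than per-statute averages.
```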

Office Action

Rejections: §103, §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Double Patenting

2. The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

3. Claims 1-2, 4-10, 12-16, and 18-20 of the present application are rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent No. 12,059,179 B2 (patent '179) in view of Lin Yang (US Patent Application Publication No. 2018/0190016 A1), Williams et al. (US Patent Application Publication No. 2020/0387739 A1), and Emrah Akin Sisbot (US Patent Application Publication No. 2018/0330547 A1). While the claims are not identical, they are similar.

4. The following table shows the correspondence between claims of the present application and claims of patent '179:

Claim of present application → Basis in patent '179
1 → claim 1 in view of Williams
2 → claim 1 in view of Williams
4 → claim 1 in view of Williams and Sisbot
5 → claim 1 in view of Williams and Yang
6 → claim 1 in view of Williams and Yang
7 → claim 1 in view of Williams and Yang
8 → claim 1 in view of Williams and Sisbot
9 → claim 1 in view of Williams
10 → claim 1 in view of Williams
12 → claim 1 in view of Williams and Sisbot
13 → claim 1 in view of Williams and Yang
14 → claim 1 in view of Williams and Yang
15 → claim 1 in view of Williams
16 → claim 1 in view of Williams
18 → claim 1 in view of Williams and Sisbot
19 → claim 1 in view of Williams and Yang
20 → claim 1 in view of Williams and Yang

5. The following shows the correspondence between the limitations of claim 1 of the present application and the limitations of claim 1 of patent '179.

Claim 1 of the present application: "1. A computer-implemented method for using a trained machine learning model to improve vehicle orientation and navigation, comprising: receiving, at one or more processors, a navigation data set comprising point cloud data relating to a terrain of an area, wherein the point cloud data includes one or more gaps; processing, by the one or more processors, the point cloud data using the trained machine learning model to probabilistically fill the one or more gaps within the point cloud data to generate a processed navigation data set; and generating, by the one or more processors, a high resolution map of the terrain of the area based upon the processed navigation data set."

Claim 1 of patent '179: "1. A computer-implemented method for using a trained generative adversarial network to improve vehicle orientation and navigation, comprising: receiving a navigation data set including data corresponding to two or more different data types, wherein each data type includes respective point cloud data relating to terrain within an area; generating a combined data set by processing the respective point cloud data of the different data types using the trained generative adversarial network (Williams at paragraph [0146]) to fill one or more gaps within the respective point cloud data relating to terrain within an area; and generating a high resolution map based on the combined data set, wherein the high resolution map includes spatial data."

Claim Rejections - 35 USC § 103

6. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

7. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

8. Claims 1-2, 5-7, 9-10, 13-16, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Lin Yang (US Patent Application Publication No. 2018/0190016 A1) in view of Williams et al. (US Patent Application Publication No. 2020/0387739 A1).

9. Regarding Claim 1, Yang discloses A computer-implemented method (paragraph [0028] reciting “The figures depict various embodiments of the present invention for purposes of illustration only. One skilled in the art will readily recognize from the following discussion that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles of the invention described herein.”; paragraph [0027] reciting “FIG. 15 illustrates an embodiment of a computing machine that can read instructions from a machine-readable medium and execute the instructions in a processor or controller.”) to improve vehicle orientation and navigation, (Abstract reciting “… The enhanced data can be used for a variety of applications related to autonomous vehicle navigation and HD map generation, such as detecting lane markings on the road in front of the vehicle or determining a change in the vehicle's position and orientation.”) comprising: receiving, at one or more processors, (paragraph [0059] reciting “… In an embodiment, the online HD map system 110 may be a distributed system comprising a plurality of processors.”) a navigation data set (paragraph [0075] reciting “The process begins when the vehicle computing system 120 obtains 910 a point cloud from a LiDAR scan. As noted above, the vehicle sensors 105 include a light detection and ranging (LiDAR) sensor that surveys the surroundings of the vehicle 150 by measuring distance to a target. The LiDAR sensor measures the distance to targets surrounding the vehicle by illuminating targets with laser light pulses and measuring the reflected pulses. In some embodiments, the LiDAR sensor includes a laser and a rotating mirror, and the LiDAR sensor performs a scan by operating the rotating mirror to cause the laser pulses to be emitted from different pitch and yaw angles.” Targets correspond to a navigation data set.) comprising point cloud data relating to a terrain of an area, (paragraph [0009] reciting “One of the processes enriches point cloud data on the ground in front of the vehicle. A computing system obtains a point cloud that represents scan points collected by the LiDAR sensor while performing a scan. …” The ground is part of the targets and corresponds to a terrain of an area in front of the vehicle’s LiDAR scanner.) and generating, by the one or more processors, a high resolution map of the terrain of the area (paragraph [0008] reciting “Embodiments relate to processes that enhance the relatively sparse and low-resolution data collected by a LiDAR sensor by increasing the density of points in certain portions of the scan. The enhanced data can then be used for a variety of applications related to autonomous vehicle navigation and HD map generation.”)

While Yang does not explicitly disclose the following limitations, Williams discloses for using a trained machine learning model (paragraph [0100] reciting “Accordingly, in an embodiment of the present invention a generative adversarial network (GAN) may be trained to learn about the three-dimensional distribution of points.”) wherein the point cloud data includes one or more gaps; (paragraph [0119] reciting “The input data may also comprise additional random points, whose purpose is to be remapped to fill one or more gaps in the surface distribution. These random points can be added in a manner similar to the other forms of noise in the training data, e.g. simulating perturbations of non-existent points in the input point cloud.”) processing, by the one or more processors, the point cloud data using the trained machine learning model to probabilistically fill the one or more gaps within the point cloud data to generate a processed navigation data set; (paragraph [0146] reciting “a point cloud input to the generative network of the GAN further comprising additional random points, provided for remapping by the generative network of the GAN to fill one or more gaps in a surface distribution of the point cloud;” Using the GAN to fill one or more gaps of the surface point cloud corresponds to probabilistically filling the one or more gaps of the point cloud because the GAN performs intelligent gap filling, not random or simple gap filling.) based upon the processed navigation data set. (paragraph [0146] reciting “a point cloud input to the generative network of the GAN further comprising additional random points, provided for remapping by the generative network of the GAN to fill one or more gaps in a surface distribution of the point cloud;” When the GAN is applied to the ground point cloud in Yang, the result is a ground point cloud with filled holes/gaps, which is used for autonomous vehicle navigation.)

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Yang with Williams so that any holes in the 3D point cloud generated in Yang can be filled using the generative adversarial network (GAN) disclosed in Williams. While Yang does not explicitly disclose holes, Yang does disclose undesirable sparse point clouds that are generated from LiDAR scans, and generally speaking, holes and gaps can occur. Therefore, in sparse areas that may be deemed to be hole/gap areas of the point cloud, Yang modified by Williams can beneficially fill in the holes/gaps using the GAN in order to generate a more complete and solid 3D point cloud for the HD map, which further helps the autonomous vehicle navigate using such an HD map.

10. Regarding Claim 2, Williams further discloses The computer-implemented method of claim 1, wherein the trained machine learning model is a generative adversarial network. (paragraph [0146] reciting “a point cloud input to the generative network of the GAN further comprising additional random points, provided for remapping by the generative network of the GAN to fill one or more gaps in a surface distribution of the point cloud;”)

11. Regarding Claim 5, Yang further discloses The computer-implemented method of claim 1, wherein the point cloud data comprises a plurality of data types, wherein each data type includes a respective subset of the point cloud data. (paragraph [0075] reciting “The process begins when the vehicle computing system 120 obtains 910 a point cloud from a LiDAR scan. As noted above, the vehicle sensors 105 include a light detection and ranging (LiDAR) sensor that surveys the surroundings of the vehicle 150 by measuring distance to a target. The LiDAR sensor measures the distance to targets surrounding the vehicle by illuminating targets with laser light pulses and measuring the reflected pulses. In some embodiments, the LiDAR sensor includes a laser and a rotating mirror, and the LiDAR sensor performs a scan by operating the rotating mirror to cause the laser pulses to be emitted from different pitch and yaw angles.” Each LiDAR-scanned target corresponds to a data type, and each target comprises a subset of the point cloud.)

12. Regarding Claim 6, Yang further discloses The computer-implemented method of claim 5, wherein processing the point cloud data to generate the processed navigation data set comprises combining the respective subsets of the point cloud data into a combined point cloud of the processed navigation data set. (paragraph [0078] reciting “As referred to herein, the point cloud is a set of points in three-dimensional space that represent the positions of scan points in the scanner data. The 3D points in the point cloud may be generated by converting the range values and associated pitch and yaw angles collected by the LiDAR sensor into a three-dimensional coordinate system, such as Cartesian coordinates, cylindrical coordinates, or spherical coordinates.” Each of the scanned targets is converted into a 3D point cloud, and these are combined together to generate the 3D point cloud of the entire scanned area.)

13. Regarding Claim 7, Yang further discloses The computer-implemented method of claim 1, further comprising: generating, by the one or more processors, a navigation decision for controlling an autonomous vehicle based upon the high resolution map. (paragraph [0097] reciting “… Additionally or alternatively, the identified markings can also be used by a vehicle 150 in real-time for navigation and steering purposes, such as keeping the vehicle in the same lane, switching to an adjacent lane, stopping before a crosswalk, or entering a parking space.”)

14. Regarding Claim 9, Yang discloses A computing system (paragraph [0117] reciting “… Specifically, FIG. 15 shows a diagrammatic representation of a machine in the example form of a computer system 1500 within which instructions 1524 (e.g., software) for causing the machine to perform any one or more of the methodologies discussed herein may be executed. …”) to improve vehicle orientation and navigation, (Abstract reciting “… The enhanced data can be used for a variety of applications related to autonomous vehicle navigation and HD map generation, such as detecting lane markings on the road in front of the vehicle or determining a change in the vehicle's position and orientation.”) comprising: one or more processors, and one or more memories having stored thereon computer-executable instructions that, when executed, cause the computing system to: (paragraph [0120] reciting “The storage unit 1516 includes a machine-readable medium 1522 on which is stored instructions 1524 (e.g., software) embodying any one or more of the methodologies or functions described herein. The instructions 1524 (e.g., software) may also reside, completely or at least partially, within the main memory 1504 or within the processor 1502 (e.g., within a processor's cache memory) during execution thereof by the computer system 1500, the main memory 1504 and the processor 1502 also constituting machine-readable media. The instructions 1524 (e.g., software) may be transmitted or received over a network 1526 via the network interface device 1520.”) receive a navigation data set (paragraph [0075] reciting “The process begins when the vehicle computing system 120 obtains 910 a point cloud from a LiDAR scan. As noted above, the vehicle sensors 105 include a light detection and ranging (LiDAR) sensor that surveys the surroundings of the vehicle 150 by measuring distance to a target. The LiDAR sensor measures the distance to targets surrounding the vehicle by illuminating targets with laser light pulses and measuring the reflected pulses. In some embodiments, the LiDAR sensor includes a laser and a rotating mirror, and the LiDAR sensor performs a scan by operating the rotating mirror to cause the laser pulses to be emitted from different pitch and yaw angles.” Targets correspond to a navigation data set.) comprising point cloud data relating to a terrain of an area, (paragraph [0009] reciting “One of the processes enriches point cloud data on the ground in front of the vehicle. A computing system obtains a point cloud that represents scan points collected by the LiDAR sensor while performing a scan. …” The ground is part of the targets and corresponds to a terrain of an area in front of the vehicle’s LiDAR scanner.) and generate a high resolution map of the terrain of the area (paragraph [0008] reciting “Embodiments relate to processes that enhance the relatively sparse and low-resolution data collected by a LiDAR sensor by increasing the density of points in certain portions of the scan. The enhanced data can then be used for a variety of applications related to autonomous vehicle navigation and HD map generation.”)

While Yang does not explicitly disclose the following limitations, Williams discloses for using a trained machine learning model (paragraph [0100] reciting “Accordingly, in an embodiment of the present invention a generative adversarial network (GAN) may be trained to learn about the three-dimensional distribution of points.”) wherein the point cloud data includes one or more gaps; (paragraph [0119] reciting “The input data may also comprise additional random points, whose purpose is to be remapped to fill one or more gaps in the surface distribution. These random points can be added in a manner similar to the other forms of noise in the training data, e.g. simulating perturbations of non-existent points in the input point cloud.”) process the point cloud data using the trained machine learning model to probabilistically fill the one or more gaps within the point cloud data to generate a processed navigation data set; (paragraph [0146] reciting “a point cloud input to the generative network of the GAN further comprising additional random points, provided for remapping by the generative network of the GAN to fill one or more gaps in a surface distribution of the point cloud;” Using the GAN to fill one or more gaps of the surface point cloud corresponds to probabilistically filling the one or more gaps of the point cloud because the GAN performs intelligent gap filling, not random or simple gap filling.) based upon the processed navigation data set. (paragraph [0146] reciting “a point cloud input to the generative network of the GAN further comprising additional random points, provided for remapping by the generative network of the GAN to fill one or more gaps in a surface distribution of the point cloud;” When the GAN is applied to the ground point cloud in Yang, the result is a ground point cloud with filled holes/gaps, which is used for autonomous vehicle navigation.)

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Yang with Williams so that any holes in the 3D point cloud generated in Yang can be filled using the generative adversarial network (GAN) disclosed in Williams. While Yang does not explicitly disclose holes, Yang does disclose undesirable sparse point clouds that are generated from LiDAR scans, and generally speaking, holes and gaps can occur. Therefore, in sparse areas that may be deemed to be hole/gap areas of the point cloud, Yang modified by Williams can beneficially fill in the holes/gaps using the GAN in order to generate a more complete and solid 3D point cloud for the HD map, which further helps the autonomous vehicle navigate using such an HD map.

15. Regarding Claim 10, Williams further discloses The computing system of claim 9, wherein the trained machine learning model is a generative adversarial network. (paragraph [0146] reciting “a point cloud input to the generative network of the GAN further comprising additional random points, provided for remapping by the generative network of the GAN to fill one or more gaps in a surface distribution of the point cloud;”)

16. Regarding Claim 13, Yang further discloses The computing system of claim 9, wherein: the point cloud data comprises a plurality of data types, wherein each data type includes a respective subset of the point cloud data; (paragraph [0075] reciting “The process begins when the vehicle computing system 120 obtains 910 a point cloud from a LiDAR scan. As noted above, the vehicle sensors 105 include a light detection and ranging (LiDAR) sensor that surveys the surroundings of the vehicle 150 by measuring distance to a target. The LiDAR sensor measures the distance to targets surrounding the vehicle by illuminating targets with laser light pulses and measuring the reflected pulses. In some embodiments, the LiDAR sensor includes a laser and a rotating mirror, and the LiDAR sensor performs a scan by operating the rotating mirror to cause the laser pulses to be emitted from different pitch and yaw angles.” Each LiDAR-scanned target corresponds to a data type, and each target comprises a subset of the point cloud.) and the computer-executable instructions that cause the computing system to process the point cloud data to generate the processed navigation data set cause the computing system to combine the respective subsets of the point cloud data into a combined point cloud of the processed navigation data set. (paragraph [0078] reciting “As referred to herein, the point cloud is a set of points in three-dimensional space that represent the positions of scan points in the scanner data. The 3D points in the point cloud may be generated by converting the range values and associated pitch and yaw angles collected by the LiDAR sensor into a three-dimensional coordinate system, such as Cartesian coordinates, cylindrical coordinates, or spherical coordinates.” Each of the scanned targets is converted into a 3D point cloud, and these are combined together to generate the 3D point cloud of the entire scanned area.)

17. Regarding Claim 14, Yang further discloses The computing system of claim 9, wherein the computer-executable instructions further cause the computing system to generate a navigation decision for controlling an autonomous vehicle based upon the high resolution map. (paragraph [0097] reciting “… Additionally or alternatively, the identified markings can also be used by a vehicle 150 in real-time for navigation and steering purposes, such as keeping the vehicle in the same lane, switching to an adjacent lane, stopping before a crosswalk, or entering a parking space.”)

18. Regarding Claim 15, Yang discloses A non-transitory computer-readable medium having stored thereon computer-executable instructions (paragraph [0120] reciting “The storage unit 1516 includes a machine-readable medium 1522 on which is stored instructions 1524 (e.g., software) embodying any one or more of the methodologies or functions described herein. The instructions 1524 (e.g., software) may also reside, completely or at least partially, within the main memory 1504 or within the processor 1502 (e.g., within a processor's cache memory) during execution thereof by the computer system 1500, the main memory 1504 and the processor 1502 also constituting machine-readable media. The instructions 1524 (e.g., software) may be transmitted or received over a network 1526 via the network interface device 1520.”) to improve vehicle orientation and navigation (Abstract reciting “… The enhanced data can be used for a variety of applications related to autonomous vehicle navigation and HD map generation, such as detecting lane markings on the road in front of the vehicle or determining a change in the vehicle's position and orientation.”) that, when executed by one or more processors of a computing system, cause the computing system to: receive a navigation data set (paragraph [0075] reciting “The process begins when the vehicle computing system 120 obtains 910 a point cloud from a LiDAR scan. As noted above, the vehicle sensors 105 include a light detection and ranging (LiDAR) sensor that surveys the surroundings of the vehicle 150 by measuring distance to a target. The LiDAR sensor measures the distance to targets surrounding the vehicle by illuminating targets with laser light pulses and measuring the reflected pulses. In some embodiments, the LiDAR sensor includes a laser and a rotating mirror, and the LiDAR sensor performs a scan by operating the rotating mirror to cause the laser pulses to be emitted from different pitch and yaw angles.” Targets correspond to a navigation data set.) comprising point cloud data relating to a terrain of an area, (paragraph [0009] reciting “One of the processes enriches point cloud data on the ground in front of the vehicle. A computing system obtains a point cloud that represents scan points collected by the LiDAR sensor while performing a scan. …” The ground is part of the targets and corresponds to a terrain of an area in front of the vehicle’s LiDAR scanner.) and generate a high resolution map of the terrain of the area (paragraph [0008] reciting “Embodiments relate to processes that enhance the relatively sparse and low-resolution data collected by a LiDAR sensor by increasing the density of points in certain portions of the scan. The enhanced data can then be used for a variety of applications related to autonomous vehicle navigation and HD map generation.”)

While Yang does not explicitly disclose the following limitations, Williams discloses for using a trained machine learning model (paragraph [0100] reciting “Accordingly, in an embodiment of the present invention a generative adversarial network (GAN) may be trained to learn about the three-dimensional distribution of points.”) wherein the point cloud data includes one or more gaps; (paragraph [0119] reciting “The input data may also comprise additional random points, whose purpose is to be remapped to fill one or more gaps in the surface distribution. These random points can be added in a manner similar to the other forms of noise in the training data, e.g. simulating perturbations of non-existent points in the input point cloud.”) process the point cloud data using the trained machine learning model to probabilistically fill the one or more gaps within the point cloud data to generate a processed navigation data set; (paragraph [0146] reciting “a point cloud input to the generative network of the GAN further comprising additional random points, provided for remapping by the generative network of the GAN to fill one or more gaps in a surface distribution of the point cloud;” Using the GAN to fill one or more gaps of the surface point cloud corresponds to probabilistically filling the one or more gaps of the point cloud because the GAN performs intelligent gap filling, not random or simple gap filling.) based upon the processed navigation data set. (paragraph [0146] reciting “a point cloud input to the generative network of the GAN further comprising additional random points, provided for remapping by the generative network of the GAN to fill one or more gaps in a surface distribution of the point cloud;” When the GAN is applied to the ground point cloud in Yang, the result is a ground point cloud with filled holes/gaps, which is used for autonomous vehicle navigation.)

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Yang with Williams so that any holes in the 3D point cloud generated in Yang can be filled using the generative adversarial network (GAN) disclosed in Williams. While Yang does not explicitly disclose holes, Yang does disclose undesirable sparse point clouds that are generated from LiDAR scans, and generally speaking, holes and gaps can occur. Therefore, in sparse areas that may be deemed to be hole/gap areas of the point cloud, Yang modified by Williams can beneficially fill in the holes/gaps using the GAN in order to generate a more complete and solid 3D point cloud for the HD map, which further helps the autonomous vehicle navigate using such an HD map.

19. Regarding Claim 16, Williams further discloses The non-transitory computer-readable medium of claim 15, wherein the trained machine learning model is a generative adversarial network. (paragraph [0146] reciting “a point cloud input to the generative network of the GAN further comprising additional random points, provided for remapping by the generative network of the GAN to fill one or more gaps in a surface distribution of the point cloud;”)
20. Regarding Claim 19, Yang further discloses The non-transitory computer-readable medium of claim 15, wherein: the point cloud data comprises a plurality of data types, wherein each data type includes a respective subset of the point cloud data; (paragraph [0075] reciting “The process begins when the vehicle computing system 120 obtains 910 a point cloud from a LiDAR scan. As noted above, the vehicle sensors 105 include a light detection and ranging (LiDAR) sensor that surveys the surroundings of the vehicle 150 by measuring distance to a target. The LiDAR sensor measures the distance to targets surrounding the vehicle by illuminating targets with laser light pulses and measuring the reflected pulses. In some embodiments, the LiDAR sensor includes a laser and a rotating mirror, and the LiDAR sensor performs a scan by operating the rotating mirror to cause the laser pulses to be emitted from different pitch and yaw angles.” Each LiDAR-scanned target corresponds to a data type, and each target comprises a subset of the point cloud.) and the computer-executable instructions that cause the computing system to process the point cloud data to generate the processed navigation data set cause the computing system to combine the respective subsets of the point cloud data into a combined point cloud of the processed navigation data set. (paragraph [0078] reciting “As referred to herein, the point cloud is a set of points in three-dimensional space that represent the positions of scan points in the scanner data. The 3D points in the point cloud may be generated by converting the range values and associated pitch and yaw angles collected by the LiDAR sensor into a three-dimensional coordinate system, such as Cartesian coordinates, cylindrical coordinates, or spherical coordinates.” Each of the scanned targets is converted into a 3D point cloud, and these are combined together to generate the 3D point cloud of the entire scanned area.)

21. Regarding Claim 20, Yang further discloses The non-transitory computer-readable medium of claim 15, wherein the computer-executable instructions further cause the computing system to generate a navigation decision for controlling an autonomous vehicle based upon the high resolution map. (paragraph [0097] reciting “… Additionally or alternatively, the identified markings can also be used by a vehicle 150 in real-time for navigation and steering purposes, such as keeping the vehicle in the same lane, switching to an adjacent lane, stopping before a crosswalk, or entering a parking space.”)

22. Claims 4, 8, 12 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Yang in view of Williams, and further in view of Emrah Akin Sisbot (US Patent Application Publication No. 2018/0330547 A1).

23. Regarding Claim 4, while the combination of Yang and Williams does not explicitly disclose it, Sisbot discloses The computer-implemented method of claim 1, wherein the point cloud data relating to the terrain of the area comprises a point cloud of surface elevation within the area. (paragraph [0023] reciting “… The computer program product where the plurality of elevation values is determined based at least in part on pointcloud data describing the road surface. The computer program product where the pointcloud data describes the elevations of the plurality of points on the road surface relative to one another. The computer program product where at least some of the pointcloud data is wirelessly received from an external device. …”)

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Yang and Williams with Sisbot so that the point cloud data includes road surface elevation data. This is clearly an obvious modification since the point cloud obtained in Yang concerns the ground in front of the vehicle, and being able to determine elevation data in the point cloud allows the generated HD map to be more accurate for autonomous navigation.

24. Regarding Claim 8, while the combination of Yang and Williams does not explicitly disclose it, Sisbot discloses The computer-implemented method of claim 1, further comprising: processing, by the one or more processors, the high resolution map to determine one or more slopes of one or more portions of the terrain of the area. (paragraph [0040] reciting “A vehicle 103 including a 3D HUD is driving on a road 117. On either side of the road is an off-road area 119. The terrain of the road 117 is variable. For example, the road curves and has variations in elevation at different points. The terrain for the off-road area 119 is also variable. For example, the terrain of the off-road areas 119 has a slope as the road 117 appears to have been built on a hillside. The 3D HUD displays graphics that are intended to assist the driver to navigate the road. …”)

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Yang and Williams with Sisbot so that the HD map in Yang also discloses slopes. Yang's HD map is generated from data gathered from multiple vehicle systems, so using the teachings of Sisbot, the map can beneficially aid the autonomous vehicle in knowing where the slopes are in the map even if the vehicle cannot immediately detect such slopes from its current position. This is beneficial because a vehicle aware of slopes can navigate more safely when going into a slope in order to avoid objects it cannot yet detect.

25. Regarding Claim 12, while the combination of Yang and Williams does not explicitly disclose it, Sisbot discloses The computing system of claim 9, wherein the point cloud data relating to the terrain of the area comprises a point cloud of surface elevation within the area. (paragraph [0023] reciting “… The computer program product where the plurality of elevation values is determined based at least in part on pointcloud data describing the road surface. The computer program product where the pointcloud data describes the elevations of the plurality of points on the road surface relative to one another. The computer program product where at least some of the pointcloud data is wirelessly received from an external device. …”)

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Yang and Williams with Sisbot so that the point cloud data includes road surface elevation data. This is clearly an obvious modification since the point cloud obtained in Yang concerns the ground in front of the vehicle, and being able to determine elevation data in the point cloud allows the generated HD map to be more accurate for autonomous navigation.

26. Regarding Claim 18, while the combination of Yang and Williams does not explicitly disclose it, Sisbot discloses The non-transitory computer-readable medium of claim 15, wherein the point cloud data relating to the terrain of the area comprises a point cloud of surface elevation within the area. (paragraph [0023] reciting “… The computer program product where the plurality of elevation values is determined based at least in part on pointcloud data describing the road surface. The computer program product where the pointcloud data describes the elevations of the plurality of points on the road surface relative to one another. The computer program product where at least some of the pointcloud data is wirelessly received from an external device. …”)

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Yang and Williams with Sisbot so that the point cloud data includes road surface elevation data. This is clearly an obvious modification since the point cloud obtained in Yang concerns the ground in front of the vehicle, and being able to determine elevation data in the point cloud allows the generated HD map to be more accurate for autonomous navigation.

Allowable Subject Matter

27. Claims 3, 11, and 17 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

28. The following is a statement of reasons for the indication of allowable subject matter: Claim 3 recites the limitation "obtaining, by the one or more processors, a training set comprising a plurality of training point clouds; fabricating, by the one or more processors, a plurality of training data sets by extracting portions of each of the plurality of training point clouds to simulate holes in the respective training point clouds, wherein the portions extracted comprise a variety of shapes and sizes for each of the plurality of training point clouds; and training, by the one or more processors, the generative adversarial network to probabilistically predict the portions extracted from the plurality of training point clouds based upon the plurality of training data sets," which is neither disclosed nor suggested by the cited references, either singly or in combination.

29. Claim 11 recites the limitation "obtain a training set comprising a plurality of training point clouds; fabricate a plurality of training data sets by extracting portions of each of the plurality of training point clouds to simulate holes in the respective training point clouds, wherein the portions extracted comprise a variety of shapes and sizes for each of the plurality of training point clouds; and train the generative adversarial network to probabilistically predict the portions extracted from the plurality of training point clouds based upon the plurality of training data sets," which is neither disclosed nor suggested by the cited references, either singly or in combination.

30. Claim 17 recites the limitation "obtain a training set comprising a plurality of training point clouds; fabricate a plurality of training data sets by extracting portions of each of the plurality of training point clouds to simulate holes in the respective training point clouds, wherein the portions extracted comprise a variety of shapes and sizes for each of the plurality of training point clouds; and train the generative adversarial network to probabilistically predict the portions extracted from the plurality of training point clouds based upon the plurality of training data sets," which is neither disclosed nor suggested by the cited references, either singly or in combination.
CONTACT

Any inquiry concerning this communication or earlier communications from the examiner should be directed to FRANK S CHEN whose telephone number is (571) 270-7993. The examiner can normally be reached Mon - Fri 8-11:30 and 1:30-6.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kee Tung, can be reached at 571-272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/FRANK S CHEN/
Primary Examiner, Art Unit 2611
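For readers less familiar with the technique at the center of the §103 mapping: Williams's cited paragraphs [0119] and [0146] describe appending random points to a sparse cloud and letting a trained generator remap them into gaps, and the allowable claims 3, 11, and 17 train that generator on point clouds with deliberately extracted portions. The sketch below is purely illustrative (hypothetical names; the adversarial discriminator and training loop are omitted) and is not code from the application or the cited references.

```python
# Hypothetical sketch of GAN-based point cloud gap filling as characterized
# in the rejection. Illustrative only; not the applicant's or Williams's code.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Per-point MLP that remaps (cloud + random seed points) toward the surface."""
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # predicted xyz offset per point
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        return points + self.net(points)  # residual: move each point onto the surface

def fabricate_training_pair(cloud: torch.Tensor, hole_frac: float = 0.2):
    """Simulate a hole by extracting a contiguous region around a random point
    (the pattern of claims 3/11/17: cut out portions so the GAN can learn to
    predict what was removed)."""
    center = cloud[torch.randint(len(cloud), (1,))]
    dist = (cloud - center).norm(dim=1)
    keep = dist > torch.quantile(dist, hole_frac)  # drop the nearest hole_frac of points
    return cloud[keep], cloud                       # (input with hole, ground truth)

def fill_gaps(generator: Generator, cloud: torch.Tensor, n_random: int = 256):
    """Claim-1-style inference: append random seed points over the cloud's extent
    and let the trained generator remap them into the gaps."""
    lo, hi = cloud.min(dim=0).values, cloud.max(dim=0).values
    seeds = lo + torch.rand(n_random, 3) * (hi - lo)
    return torch.cat([cloud, generator(seeds)], dim=0)

if __name__ == "__main__":
    terrain = torch.rand(2048, 3)              # stand-in for LiDAR ground points
    sparse, target = fabricate_training_pair(terrain)
    gen = Generator()                          # adversarial training loop omitted
    dense = fill_gaps(gen, sparse)
    print(sparse.shape, "->", dense.shape)     # roughly 1638 -> 1894 points
```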
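Separately, the Yang passage quoted at paragraph [0078] (converting LiDAR range, pitch, and yaw values into Cartesian coordinates) is a standard spherical-to-Cartesian conversion. A minimal sketch, with an assumed axis convention since the quoted text does not fix one:

```python
import math

def lidar_to_cartesian(r: float, pitch: float, yaw: float) -> tuple:
    """Convert one LiDAR return (range r, pitch and yaw in radians) to xyz.
    Assumed axis convention: x forward, y left, z up."""
    x = r * math.cos(pitch) * math.cos(yaw)
    y = r * math.cos(pitch) * math.sin(yaw)
    z = r * math.sin(pitch)
    return (x, y, z)

# One return at 10 m, 5 degrees below horizontal, 30 degrees to the left:
print(lidar_to_cartesian(10.0, math.radians(-5.0), math.radians(30.0)))
# -> approximately (8.627, 4.981, -0.872)
```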

Prosecution Timeline

Jun 18, 2024: Application Filed
Jan 06, 2026: Non-Final Rejection (§103, §DP)
Apr 06, 2026: Response Filed

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597111: SYSTEMS AND METHODS FOR DULL GRADING (granted Apr 07, 2026; 2y 5m to grant)
Patent 12596007: DISPLAY CONTROL APPARATUS, DISPLAY SYSTEM, DISPLAY METHOD, AND COMPUTER READABLE MEDIUM (granted Apr 07, 2026; 2y 5m to grant)
Patent 12592029: SYSTEMS AND METHODS FOR MEDIA CONTENT GENERATION (granted Mar 31, 2026; 2y 5m to grant)
Patent 12586308: GENERATING OBJECT REPRESENTATIONS USING NEURAL NETWORKS FOR AUTONOMOUS SYSTEMS AND APPLICATIONS (granted Mar 24, 2026; 2y 5m to grant)
Patent 12586293: SCENE RECONSTRUCTION FROM MONOCULAR VIDEO (granted Mar 24, 2026; 2y 5m to grant)

Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 82%
With Interview: 91% (+8.8%)
Median Time to Grant: 2y 2m
PTA Risk: Low
Based on 657 resolved cases by this examiner. Grant probability derived from career allow rate.
