Prosecution Insights
Last updated: April 19, 2026
Application No. 18/506,192

3-DIMENSIONAL (3D) MAP GENERATION SYSTEM AND METHOD FOR CREATING 3D MAP OF SURROUNDINGS OF A VEHICLE

Status: Non-Final OA (§103)
Filed: Nov 10, 2023
Examiner: COFINO, JONATHAN M
Art Unit: 2614
Tech Center: 2600 — Communications
Assignee: Continental Autonomous Mobility Germany GmbH
OA Round: 3 (Non-Final)

Grant Probability: 62% (Moderate)
OA Rounds: 3-4
To Grant: 2y 4m
With Interview: 94%

Examiner Intelligence

Career Allow Rate: 62% (130 granted / 210 resolved; at TC average)
Interview Lift: +32.2% (strong), comparing resolved cases with vs. without an interview
Avg Prosecution: 2y 4m typical timeline; 13 applications currently pending
Career History: 223 total applications across all art units
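A quick sanity check on how these figures appear to relate. The additive relationship between the career allow rate, the interview lift, and the with-interview figure is an assumption about the dashboard's arithmetic, not a documented formula:

```python
# Sketch of the arithmetic the cards above appear to reflect
# (assumed relationships, not the dashboard's documented model).
granted, resolved = 130, 210
career_allow_rate = granted / resolved                 # 0.619 -> shown as "62%"

interview_lift = 0.322                                 # "+32.2% Interview Lift"
with_interview = career_allow_rate + interview_lift    # 0.941 -> shown as "94% With Interview"

print(f"Career allow rate: {career_allow_rate:.1%}")
print(f"With interview:    {with_interview:.1%}")
```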

Statute-Specific Performance

§101: 6.4% (-33.6% vs TC avg)
§103: 64.7% (+24.7% vs TC avg)
§102: 10.2% (-29.8% vs TC avg)
§112: 12.3% (-27.7% vs TC avg)
Deltas are measured against the Tech Center average estimate • Based on career data from 210 resolved cases
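Under the simplest reading of these figures (delta = examiner figure minus Tech Center average), the implied TC baseline can be recovered by subtraction. The sketch below assumes that reading; it is not the vendor's documented methodology:

```python
# Recovering the implied Tech Center baseline from the figures above,
# assuming delta = examiner figure - TC average (an assumed reading).
examiner = {"§101": 0.064, "§103": 0.647, "§102": 0.102, "§112": 0.123}
delta    = {"§101": -0.336, "§103": 0.247, "§102": -0.298, "§112": -0.277}

tc_average = {statute: examiner[statute] - delta[statute] for statute in examiner}
print(tc_average)   # each value works out to roughly 0.40 (40%)
```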

Office Action

§103
DETAILED ACTION Notice of Pre-AIA or AIA Status The present application, filed on/after Mar. 16, 2013, is being examined under the first inventor to file provisions of the AIA . In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. Continued Examination Under 37 CFR 1.114 A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 29 January 2026 has been entered. Response to Arguments Applicant’s arguments, see pp. 4-5, filed 29 January 2026, with respect to the rejection of claim 1 under 35 U.S.C. § 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Nonn et al. (U.S. PG-PUB 2021/0225020). Please see the Office action for the rationale regarding the rejection of the newly-amended independent claim. Claim Rejections - 35 USC § 103 The following is a quotation of 35 USC 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claim 1 is rejected under 35 U.S.C. 103 as being unpatentable over Hicks (U.S. PG-PUB 2019/0387216, ‘HICKS’) in view of Aluru et al. (U.S. PG-PUB 2023/0370701, 'ALURU') and Nonn et al. (U.S. PG-PUB 2021/0225020, 'NONN'). Regarding claim 1, HICKS discloses a three-dimensional (3D) map generation system comprising: a processor (HICKS; FIG. 1, ‘image processor 114’; ¶ 0028); … camera(s) configured to capture image data of surroundings of the 3D map generation system (HICKS; FIG. 1; ¶ 0028; “The visible light camera system 115 has a camera 108 generates digital images of the scene within its field of regard 110 as determined by an optical system … The images are sent to an image processor 114. … Alternatively, the camera may capture an image in response to a command from an external controller such as the modeling processor 116. The images may be 2D color representations of the scene. … multiple cameras are used to determine depth or range from the cameras to the scene. Multiple cameras may also be used to provide information at … different fields of view.”); and … sensor(s) (HICKS; FIG. 1; ¶ 0025; “The lidar system 113 has a lidar 104 coupled to a lidar controller 112 that drives and controls the lidar and receives return data generated by the lidar. 
There may be an optical system within the lidar system 113 for directing laser pulses to the scene and laser reflections into a detector of the lidar 104. … The lidar controller generates a point cloud in which each point represents a [3-D] position in the scene in the field of regard. … The point cloud is sent to a 3D model processor 16 to be used in generating a 3D model 118 of the scene.”) configured to detect (HICKS; FIGS. 4-5; ¶ 0044-45; “FIG. 4 is an alternative side view diagram of a vehicle with a sensor system traveling along a roadway to show regions occluded from the sensor. FIG. 5 is a top view diagram of the same vehicle and sensor configuration. A first vehicle 70 has a forward-looking sensor or sensor suite 72 such as a lidar and visible light camera. The sensor suite has a field of regard 74 as indicated by ray trace lines. A second vehicle 76 is in the roadway in front of the first vehicle [‘physical obstruction to the … camera(s)’] with a particular width and height. The second vehicle 76 has a width and height that can be perceived by the sensor suite but the depth cannot be perceived as only the rear of the vehicle is visible.”); and … a LIDAR sensor [and/or] a radar sensor (HICKS; FIG. 10; ¶ 0079; “… the sensors may include … lidar device(s), camera(s), radar device(s)”), wherein the processor is configured to: control the … sensor(s) to detect (HICKS; ¶ 0054; “For the occluded portions of the grid, the modeling processor … may also correlate 230 lidar returns to classified objects. Objects that are moving into and out of occluded spaces in the grid may be identified 232 as they move in the 3D model [‘physical obstruction to the … camera(s)’] using the classifications and correlations. The occluded portions of the grid may then be updated 234 based on estimations of the behavior and size of the objects.”); ([ALURU teaches this limitation.]); generate a first 3D data representation of the surroundings based on the image data (HICKS; ¶ 0049; “FIG. 6 is a process flow diagram of using a camera … to augment a lidar system … The system starts at 202 and lidar capture is activated 204 in a lidar system. The lidar captures data from which a point cloud may be generated. At the same time camera capture 206 is performed and object classification 208 is performed on the camera capture at the camera system, for example in the image processor that is configured with an object classification system.”); generate a second 3D data representation of the surroundings based on sensor data captured by … the LIDAR sensor and/or the radar sensor (HICKS; FIGS. 2, 6; ¶ 50-51; “The classified objects 208 and the lidar points are brought together in … a modeling processor that … correlates 210 points of classified objects to points of the lidar point cloud … The correlated lidar points, such as point cloud points, are then modeled 212 as objects [according to] the classification. The scene model 118, 150 may then be updated 214 to include the objects from the camera capture. … an object is first detected 206 and classified 208 by a visible light camera system. The classified object is further refined 210 by the lidar system 204 to establish the precise size of the object. The object is used to update the 3D grid 214 and is modeled 212 frame-to-frame as a single object.”); and ([NONN teaches this limitation.]) that includes first regions generated from the image data detected under (HICKS; FIGS. 
6-7; ¶ 0053-55; “At about the same time, the visible light camera system at 226 also captures frames that represent the same scene [‘image data’] … These frames may be in the form of a 2D bitmap of color and pixel position … and are provided to an image processor … to classify objects in the camera frames at 228. The camera data and the lidar data may then be combined [‘generate a combined 3D data representation’] … by a modeling processor … to update 236 the non-occluded portions of the grid [‘first regions’]. … For the occluded portions of the grid [‘second regions’], the modeling processor … may also correlate 230 lidar returns to classified objects. Objects that are moving into and out of occluded spaces in the grid [‘physical obstruction’] may be identified 232 as they move in the 3D model using the classifications and correlations. The occluded portions of the grid may then be updated 234 based on estimations of the behavior and size of the objects. … the object classifications in the 3D model are used to account for objects and spaces that may be temporarily occluded by other classified moving objects. … if part of a vehicle is observed [‘first regions’] while the remainder is occluded by an intervening object, the system can reasonably extrapolate that the unseen portions of the vehicle [‘second regions’] are present behind the occlusion. If the approximate size of the vehicle is known based on the classification and the observed size in the point cloud, then the size of the unseen portions may also be estimated [‘generate a combined 3D data representation’].”) and second regions generated from the sensor data generated under (HICKS; FIG. 7; ¶ 0052; “… using a camera to augment a lidar scene modeling system. … At 224 the activated lidar system captures return data. … the lidar will perform a sequence of e.g. horizontal scans of emitted laser pulses from the lidar and then capture any reflected return pulses on an imaging sensor … The lidar then generates a frame of 3D return data [‘generated from the sensor data’] based on … scans. This [is] in the form of a lidar point cloud of 3D positions of returned reflections …”). HICKS does not explicitly disclose that upon detection of the adverse weather condition, the low light condition, the low visibility condition, or the physical obstruction to the … camera(s), alternatively operate the … camera(s) and the at least one of a LIDAR sensor and a radar sensor, which ALURU discloses (ALURU; ¶ 0035; “Smart integration of visible camera technology … and LiDAR, whenever performance of the visible camera technology has been degraded, may be used to detect, recognize and predict road hazards to proactively warn customers and improvise vehicle maneuvering. Combining and alternating the usage or fusion of visible/invisible camera technology and LiDAR for efficient usage of on-vehicle energy resources and increased confidence in detection, recognition and prediction may result in an improved understanding of vehicle surroundings for precise, smooth maneuvering and vehicle control. The … system may … detect, recognize, and differentiate black-ice oil/gasoline/diesel spills, broken glass/metal/vehicle parts [‘physical obstruction to the … camera(s)’], and organic objects such as tree leaves and tree branches. In response, the exemplary ADAS may then react safely to the detected conditions and to share this information with other vehicles and/or infrastructure. 
… the … system may alternate usage between camera imaging technologies and LiDAR to reduce power consumption. … the ADAS algorithm may switch usage from visible light cameras to best known alternatives … in response to determining adverse weather and/or road conditions.”). Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify the 3D map generation system of HICKS to include the disclosure that upon detection of the adverse weather condition, the low light condition, the low visibility condition, or the physical obstruction the … camera(s), alternatively operating the … camera(s) and the at least one of a LIDAR sensor and a radar sensor of ALURU. The motivation for this modification is to efficiently use on-vehicle energy resources and increase confidence in detection, recognition and prediction of roadway obstacles which results in an improved understanding of vehicle surroundings (3D map generation) for precise, smooth maneuvering and vehicle control (ALURU; ¶ 0035).

HICKS-ALURU do not explicitly disclose replacing regions of unsatisfactory quality with corresponding regions of the second 3D data representation if … region(s) of unsatisfactory quality exist in the first 3D data representation, to generate a combined 3D data representation, which NONN discloses (NONN; FIGS. 4-6; ¶ 0040; “At block 554, the method 550 includes identifying regions of missing data in the point cloud. … the method 550 can include identifying the missing regions 442 [‘regions of unsatisfactory quality’] of the point cloud 440 where depth data is missing or incomplete. … identifying the missing data can include filtering the point cloud data and searching for holes that are greater than a predetermined threshold (e.g., a user-specified threshold) using … an inverse Eulerian approach. … the missing regions 442 can be identified by searching for regions of the images from the cameras 112 where no valid 3D correspondence exists [‘if … region(s) of unsatisfactory quality exist in the first 3D data representation’] (e.g., by examining the binary mask for each image).” ¶ 0045; “At block 557, the method 550 includes merging/fusing the depth data for the missing or invalid regions with the original depth data (e.g., captured at block 551) to generate a merged point cloud [‘replacing regions of unsatisfactory quality with corresponding regions of the second 3D data representation … to generate a combined 3D data representation’]. FIG. 6 … is a schematic view of a merged point cloud 640 in which image-based depth data 644 has been filled into the missing regions 442 of the point cloud 440 shown in FIG. 4 …”). Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify the 3D map generation system of HICKS-ALURU to include the replacing regions of unsatisfactory quality with corresponding regions of the second 3D data representation if … region(s) of unsatisfactory quality exist in the first 3D data representation, to generate a combined 3D data representation of NONN. The motivation for this modification is to implement a merged point cloud that can provide a more accurate and robust depth map of a scene that facilitates better reconstruction and synthesis of an output image of the scene rendered from any desired virtual perspective (NONN; ¶ 0045). 
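For illustration only (not code from the application or from HICKS, ALURU, or NONN), the limitation at the center of the new ground of rejection, replacing regions of unsatisfactory quality in the camera-derived first 3D data representation with the corresponding regions of the LiDAR/radar-derived second representation, can be sketched roughly as follows. The grid layout, the quality metric, the threshold, and all names and values are assumptions:

```python
import numpy as np

# Illustrative only: two 3D data representations rasterized onto the same
# H x W depth grid (values in meters), with a per-cell quality score for the
# camera-derived representation (e.g. low in glare, darkness, or behind an
# obstruction). The quality metric and threshold are assumed, not cited.
H, W = 4, 6
rng = np.random.default_rng(0)
camera_depth = rng.uniform(5.0, 50.0, size=(H, W))   # first 3D representation (from image data)
lidar_depth  = rng.uniform(5.0, 50.0, size=(H, W))   # second 3D representation (from LiDAR/radar)
camera_quality = rng.uniform(0.0, 1.0, size=(H, W))
QUALITY_THRESHOLD = 0.3

# "Regions of unsatisfactory quality" in the first representation...
unsatisfactory = camera_quality < QUALITY_THRESHOLD

# ...are replaced by the corresponding regions of the second representation,
# yielding the combined 3D data representation recited in claim 1.
combined_depth = np.where(unsatisfactory, lidar_depth, camera_depth)

print(f"{unsatisfactory.sum()} of {H * W} cells filled from the LiDAR/radar representation")
```

The sketch follows the direction recited in the claim (patching camera-derived regions from the LiDAR/radar representation); the NONN passage quoted above describes the complementary operation of filling missing point-cloud regions with image-based depth.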
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over HICKS in view of ALURU and NONN as applied to claim 1 above, and further in view of Levinson et al. (U.S. PG-PUB 2017/0248963, 'LEVINSON'). Regarding claim 2, HICKS-ALURU-NONN disclose the 3D map generation system of claim 1; however, HICKS-ALURU-NONN do not explicitly disclose that the … sensor(s) detect … the physical obstruction to the … camera(s) by: monitoring the surroundings for detecting … event(s) in the surroundings, which LEVINSON discloses (LEVINSON; FIG. 25; ¶ 0111; “Meta spin data 2522 … performs object segmentation and ground segmentation at segmentation processor 2523, whereby both meta spin data 2522 and segmentation-related data from segmentation processor 2523 are applied to a scanned differencing processor 2513. Scanned differencing processor 2513 … predicts motion and/or relative velocity of segmented image portions, which can be used to identify dynamic objects at 2517. … data from scanned differencing processor 2513 may be used to approximate locations of objects to form mapping of such objects (as well as optionally identifying a level of motion).”); or comparing the … event(s) with … predefined event(s) for detecting the adverse weather condition, , which LEVINSON also discloses (LEVINSON; FIG. 12; ¶ 0091; “At 1202, … data representing a subset of objects that are received at a planner in an [AV] [‘monitoring the surroundings’], the subset of objects including … object(s) associated with data representing a degree of certainty for a classification type. … perception engine data may include metadata associated with objects, whereby the metadata specifies a degree of certainty associated with a specific classification type. … a dynamic object may be classified as a “young pedestrian” [i.e., ‘physical obstruction’] with an 85% confidence level of being correct. At 1204, localizer data may be received (e.g., at a planner). The localizer data may include map data that is generated locally within the [AV]. The local map data may specify a degree of certainty (including a degree of uncertainty) that an event at a geographic region may occur. An event may be a condition or situation affecting operation, or potentially affecting operation, of an [AV] [‘comparing the … event(s) with … predefined event(s)’]. The events may be internal (e.g., failed or impaired sensor) to an [AV], or external (e.g., roadway obstruction). … A path coextensive with the geographic region of interest may be determined at 1206. … consider that the event is the positioning of the sun in the sky at a time of day in which the intensity of sunlight impairs the vision of drivers [‘low visibility condition’] during rush hour traffic. … it is expected or predicted that traffic may slow down responsive to the bright sunlight [‘adverse weather condition’]. … At 1208, a local position is determined at a planner based on local pose data. At 1210, a state of operation of an [AV] [is] determined (e.g., probabilistically), … based on a degree of certainty for a classification type and a degree of certainty of the event, which … may be based on any number of factors, such as speed, position, and other state information. … Consider an example in which a young pedestrian [i.e., ‘physical obstruction’] is detected by the [AV] during the event in which other drivers' vision likely will be impaired by the sun [‘low visibility condition’], thereby causing an unsafe situation for the young pedestrian. 
… a relatively unsafe situation can be detected as a probabilistic event that may be likely to occur [‘low visibility condition’] …”). Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify the 3D map generation system of claim 1 of HICKS-ALURU-NONN to include the monitoring the surroundings for detecting … event(s) in the surroundings and the comparing the … event(s) with … predefined event(s) for detecting the adverse weather condition, the low visibility condition, or the physical obstruction to the … camera(s) corresponding to the … event(s) of LEVINSON. The motivation for this modification is to detect the presence of obstacles in the roadway, such as (young) pedestrians, particularly in low visibility scenarios, such as bright sunlight, to enhance the operational safety of an autonomous vehicle. Recognizing dangerous events may allow a human driver to take over or indicate that the autonomous vehicle should choose an alternate route. Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over HICKS in view of ALURU and NONN as applied to claim 1 above, and further in view of Hunt (U.S. Patent 10,841,483; 'HUNT'). Regarding claim 3, HICKS-ALURU-NONN disclose the 3D map generation system of claim 1; however, HICKS-ALURU-NONN do not explicitly disclose that the … camera(s) and the … sensor(s) have at least substantially the same field of view, which HUNT discloses (HUNT; Col. 1, Lines 20-30; “… autonomous vehicles are equipped with … sensors to detect the presence of external objects. The sensors may include … camera(s) that are capable of capturing [2-D] images of the surrounding environment. … the sensors may include a LiDAR sensor that is capable of capturing a [3-D] point cloud image. … for systems to correctly interpret data from the camera and the LiDAR sensor, the camera [and] the LiDAR sensor must be calibrated such that they are capturing the same or similar field of view.”). Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify the 3D map generation system of HICKS-ALURU-NONN to include the disclosure that the … camera(s) and the … sensor(s) have at least substantially the same field of view of HUNT. The motivation for this modification is to simultaneously align a photoreceptive sensor with a laser sensor to synthesize data that may or may not be visible depending on lighting conditions. Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over HICKS in view of ALURU and NONN as applied to claim 1 above, and further in view of Yan et al. (U.S. PG-PUB 2019/0011566, 'YAN'). Regarding claim 4, HICKS-ALURU-NONN disclose the 3D map generation system of claim 1; however, HICKS-ALURU-NONN do not explicitly disclose that the processor uses a precalculated translation matrix to generate the 3D combined 3D data representation, which YAN discloses (YAN; ¶ 0076; “… the coordinates of the laser point data in the world coordinate system [are] determined according to the current pose information. 
… since [3-D] coordinates in the laser point cloud data collected by the lidar are coordinates of a target scanned by laser points emitted by the lidar relative to a vehicle body coordinate system, and the current pose information is based on the coordinates of the world coordinate system [WCS], the electronic device [obtains] a translation matrix according to the location information in the current pose information and obtains a rotation matrix according to the posture information in the current pose information. … the [3D] coordinates in … laser point data [are] converted according to the rotation matrix and translation matrix to obtain the coordinates of the laser point data in the [WCS].”). Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify the 3D map generation system of claim 1 of HICKS-ALURU-NONN to include the disclosure that the processor uses a precalculated translation matrix to generate the 3D combined 3D data representation of YAN. The motivation for this modification is to reconcile the differences between a relative coordinate system centered around a particular vehicular mapping system and an absolute world coordinate system that is independent of any particular vehicle. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to JONATHAN M COFINO whose telephone number is (303) 297-4268. The examiner can normally be reached Monday-Friday 10A-4P MT. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kent Chang can be reached at 571-272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /JONATHAN M COFINO/Examiner, Art Unit 2614 /KENT W CHANG/Supervisory Patent Examiner, Art Unit 2614
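For the claim 4 discussion, the "precalculated translation matrix" language maps onto the familiar rigid-body transform that YAN describes for converting LiDAR points from the vehicle body coordinate system into the world coordinate system. A minimal sketch with hypothetical pose values follows; the function name and numbers are assumptions, not taken from YAN or the application:

```python
import numpy as np

def lidar_to_world(points_vehicle: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Apply a rotation matrix R (from the pose's attitude) and a translation
    vector t (from the pose's position) to N x 3 LiDAR points expressed in the
    vehicle body frame, returning world-frame coordinates."""
    return points_vehicle @ R.T + t

# Hypothetical pose: 90-degree yaw, vehicle 10 m east and 2 m north of the origin.
yaw = np.pi / 2
R = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
              [np.sin(yaw),  np.cos(yaw), 0.0],
              [0.0,          0.0,         1.0]])
t = np.array([10.0, 2.0, 0.0])

points = np.array([[5.0, 0.0, 1.0]])     # a return 5 m ahead of the sensor, 1 m up
print(lidar_to_world(points, R, t))      # -> approximately [[10., 7., 1.]]
```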

Prosecution Timeline

Nov 10, 2023
Application Filed
Jun 06, 2025
Non-Final Rejection — §103
Sep 12, 2025
Response Filed
Oct 23, 2025
Final Rejection — §103
Jan 29, 2026
Request for Continued Examination
Feb 01, 2026
Response after Non-Final Action
Feb 25, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597201
INTERACTIVE METHOD AND SYSTEM FOR DISPLAYING MEASUREMENTS OF OBJECTS AND SURFACES USING CO-REGISTERED IMAGES AND 3D POINTS
2y 5m to grant • Granted Apr 07, 2026
Patent 12597202
GEOLOGICALLY MEANINGFUL SUBSURFACE MODEL GENERATION BASED ON A TEXT DESCRIPTION
2y 5m to grant • Granted Apr 07, 2026
Patent 12536207
METHOD AND APPARATUS FOR RETRIEVING THREE-DIMENSIONAL (3D) MAP
2y 5m to grant • Granted Jan 27, 2026
Patent 12511829
MAP GENERATION APPARATUS, MAP GENERATION METHOD, AND NON-TRANSITORY COMPUTER-READABLE MEDIUM STORING PROGRAM
2y 5m to grant • Granted Dec 30, 2025
Patent 12505605
SOLVING LOW EFFICIENCY OF MOVING ADJUSTMENT CAUSED BY CONTROLLING MOVEMENT OF IMAGE USING MODEL PARAMETERS
2y 5m to grant • Granted Dec 23, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 62%
With Interview: 94% (+32.2%)
Median Time to Grant: 2y 4m
PTA Risk: High
Based on 210 resolved cases by this examiner. Grant probability derived from career allow rate.
