Prosecution Insights
Last updated: April 19, 2026
Application No. 18/344,333

SYSTEMS AND METHODS FOR ROAD SEGMENT MAPPING

Status: Final Rejection under 35 U.S.C. §103 (OA Round 2, Final)
Filed: Jun 29, 2023
Examiner: JAGOLINZER, SCOTT ROSS
Art Unit: 3665
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Mobileye Vision Technologies Ltd.

Grant Probability: 41% (moderate)
Projected OA Rounds: 3-4
Projected Time to Grant: 3y 6m
Grant Probability with Interview: 60%

Examiner Intelligence

Career Allow Rate: 41% (45 granted / 110 resolved), -11.1% vs Tech Center average
Interview Lift: strong; resolved cases with an interview show a +19.2% higher allowance rate than those without
Typical Timeline: 3y 6m average prosecution; 43 applications currently pending
Career History: 153 total applications across all art units

Statute-Specific Performance

Statute    Allowance Rate    vs TC Average
§101       13.3%             -26.7%
§103       57.7%             +17.7%
§102       11.6%             -28.4%
§112       15.9%             -24.1%

Tech Center averages are estimates; figures are based on career data from 110 resolved cases.
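The figures above reduce to simple ratios over the examiner's 110 resolved cases. A minimal sketch of the arithmetic follows; the counts come from this page, while the Tech Center baseline and the with/without-interview split are assumed values included only to show how the deltas are formed.

# Illustrative arithmetic behind the dashboard figures above. The counts
# (45 granted / 110 resolved, +19.2% lift, 60% with interview) come from
# this page; the Tech Center baseline is an assumed input, not a
# published value.

granted, resolved = 45, 110
career_allow_rate = granted / resolved                 # 0.409 -> shown as 41%

tc_avg_allow_rate = 0.52                               # assumed TC baseline
delta_vs_tc = career_allow_rate - tc_avg_allow_rate    # -> "-11.1% vs TC avg"

rate_with_interview = 0.60                             # "60% With Interview"
interview_lift = 0.192                                 # "+19.2% Interview Lift"
rate_without_interview = rate_with_interview - interview_lift

print(f"career allow rate:  {career_allow_rate:.1%}")
print(f"vs TC average:      {delta_vs_tc:+.1%}")
print(f"without interview:  {rate_without_interview:.1%}")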

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

This action is in reply to the amendment filed on 11/28/2025. Claims 1-8, 11-20, 23-24, 28-29, and 34-56 are currently pending and have been examined. Claims 1, 11-13, 40, 44, 47, and 51 are currently amended. Claims 54-56 are added. Claims 1-8, 11-20, 23-24, 28-29, and 34-56 are currently rejected. This action is made FINAL.

Response to Arguments

Applicant's arguments filed 11/28/2025 have been fully considered, but they are not persuasive with regard to the rejections. The amendments to claim 1 are not specific enough to overcome the rejection over Liang. Applicant argues that Liang does not teach "aggregating the plurality of top view images based on points correlated across the plurality of top view images, the points corresponding to a portion of an object". Liang, however, clearly performs aggregation of a plurality of images from multiple cameras mounted on the exterior of the vehicle. Liang also teaches identifying an object in multiple views (810 in fig. 8 and 1510 in fig. 15) and using that object to determine how to adjust the FOV of the cameras in order to create the "unified view". The claims only require that the combination (aggregation) is performed "based" on the points, not any specific method used in the combining process. Altering a FOV to alter the stitching points of the multiple views based on a common object, as taught by Liang, therefore meets the claims as currently presented in claim 1. Applicant would need to amend the claims to recite the specifics of the aggregation process to differentiate over Liang.

Applicant additionally argues that Zou does not teach annotating the images. Zou, however, teaches detecting, uploading, and sharing the detected road features, which the examiner interprets as "annotating" or tagging. Fig. 3 also explicitly shows annotating the image of the lane boundaries.

For the reasons above, the arguments are not persuasive and the rejections are maintained as shown below.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-8, 11-15, 17-20, 23-24, 29, 34-36, 38, 40-44, 45-50, and 51-54 are rejected under 35 U.S.C. 103 as being unpatentable over Zou et al. (US 2017/0300763), herein Zou (from IDS), in view of Liang et al. (US 2020/0314333), herein Liang (from IDS).
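The pivotal teaching in this combination is Liang's object-driven seam selection, quoted in the Response to Arguments above and again in the claim 1 mapping below: the camera that supplies an overlap region's pixels is chosen from the range of an object both cameras can see (the 150 cm rule of Liang [0134]), so the stitching seam moves with the scene. The following minimal Python sketch illustrates only that selection logic; the names and types are invented for the example, and this is not Liang's actual implementation.

# Sketch of the dynamic seam selection attributed to Liang above: the
# camera supplying an overlap region's pixels is chosen from the range
# of an object seen by both cameras (the 150 cm rule of [0134]).
# The camera labels and the OverlapRegion type are hypothetical.

from dataclasses import dataclass

NEAR_FAR_THRESHOLD_CM = 150.0  # threshold recited in Liang [0134]

@dataclass
class OverlapRegion:
    object_range_cm: float     # estimated distance of the shared object

def select_source_camera(region: OverlapRegion) -> str:
    """Pick the camera whose field of view keeps the shared object clear."""
    if region.object_range_cm <= NEAR_FAR_THRESHOLD_CM:
        return "camera_802"    # near-field view
    return "camera_804"        # far-field view

# Re-evaluated per frame, so the seam "dynamically varies for each
# overlap region based on the real-world objects present".
print(select_source_camera(OverlapRegion(object_range_cm=120.0)))  # camera_802
print(select_source_camera(OverlapRegion(object_range_cm=300.0)))  # camera_804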
Regarding claim 1: Zou teaches: A system for automatically mapping a road segment (techniques for road feature detection using a vehicle camera system [abstract]), the system comprising: at least one processor (processing system 700 has one or more central processing units (processors) 21a, 21b, 21c, etc. (collectively or generically referred to as processor(s) 21 and/or as processing device(s)) [0052]) programmed to: receive, from at least one camera mounted on a vehicle (The cameras 130 capture images external to the vehicle 100 [0024]), a plurality of images acquired as the vehicle traversed the road segment (Each of the cameras 130 has a field-of-view (FOV) 131a, 131b, 131c, 131d (collectively referred to herein as “FOV 131”). The FOV is the area observable by a camera. For example, the camera 130a has an FOV 131a, the camera 131b has an FOV 131b, the camera 130c has an FOV 131c, and the camera 131d has an FOV 131d. The captured images can be the entire FOV for the camera or can be a portion of the FOV of the camera. [0024]); convert each of the plurality of images to a corresponding top view image to provide a plurality of top view images (the captured images from the cameras 130 can be combined to form a top view or “bird's eye” view that provides a surround view around the vehicle 100 [0026]); aggregate the plurality of top view images to provide an aggregated top view image of the road segment (generates a top view of the road based at least in part on the image [0027]); analyze the aggregated top view image to identify at least one road feature associated with the road segment (detects lane boundaries of a lane of the road based at least in part on the top view of the road, and detects a road feature within the lane boundaries of the lane of the road using machine learning and/or computer vision techniques [0027]); automatically annotate the at least one road feature relative to the aggregated top view image (fig. 3, bounding boxes showing detected lane markers; Once the lane boundaries of the lane are detected, the road feature detection engine 216 uses the lane boundaries to detect road features within the lane boundaries of the lane of the road using machine learning and/or computer vision techniques. The road feature detection engine 216 searches within the top view, as defined by the lane boundaries, to detect road features. The road feature detection engine 216 can determine a type of road feature (e.g., a straight arrow, a left-turn arrow, etc.) as well as a location of the road feature (e.g., arrow ahead, bicycle lane to the left, etc.) [0034]; The road features can be predefined in a database of road features (e.g., road feature database 218). Examples of road features include a speed limit indicator, a bicycle lane indicator, a railroad indicator, a school zone indicator, and a direction indicator (e.g., left-turn arrow, straight arrow, right-turn arrow, straight and left-turn arrow, straight and right-turn arrow, etc.), and the like. The road feature database 218 can be updated when road features are detected, and the road feature database 218 can be accessible by other vehicles, such as from a cloud computing environment over a network or from the vehicle 100 directly (e.g., using direct short-range communications (DSCR)). This enables crowd-sourcing of road features. 
[0035]); and output to at least one memory the aggregated top view image including the annotated at least one road feature (The road feature database 218 can be updated when road features are detected, and the road feature database 218 can be accessible by other vehicles, such as from a cloud computing environment over a network or from the vehicle 100 directly [0035]; Graphics processing unit 37 is a specialized electronic circuit designed to manipulate and alter memory to accelerate the creation of images in a frame buffer intended for output to a display [0055]). Zou does not explicitly teach, however Liang teaches: aggregate the plurality of top view images (fig. 8 image 806 and 808; fig. 15, images from cameras 802 and 804) based on points correlated across the plurality of top view images (The selection of which one of cameras 802 or 804 will provide object image 1510 is variously embodied. In one embodiment, if object 810 (providing object image 1510) is 150 cm or closer to vehicle 100, then camera 802 is configured to provide the image via the second angle field of view. Alternatively, if object 810 is 150 cm or farther, camera 804 is configured to provide the image via third viewing angle. The location of the boarder delineating the portion of unified image 1500 is provided by camera 802 and which is provided by camera 804 is more fully described with respect to FIGS. 9 and 10 [0134]), the points corresponding to a portion of an object (fig. 8, object 810; fig. 15, object 1510), to provide an aggregated top view image of the road segment (FIG. 15 is unified view 1500 of a vehicle's surroundings in accordance with at least some embodiments of the present disclosure. In one embodiment, unified view 1500 may comprise a plurality of cameras, each producing a portion of unified view 1500. To avoid unnecessarily complicating the figures and description, on the images provided by cameras 802 and 804 are considered. However, it should be appreciated that any two cameras, of a two or more cameras, wherein an object is within, or potentially within, the field of view of each may be utilized without departing from the scope of the embodiments provided. [0130]); It would have been obvious to one of ordinary skill in the art at the time of the effective filing date of the claimed invention to have modified Zou to include the teachings as taught by Liang with a reasonable expectation of success. Zou and Liang both teach processing images from cameras mounted on vehicles and creating a top down view from them. Liang teaches the benefit of “a novel method where the seam line is not fixed, but instead dynamically varies for each overlap region based on the real-world objects present in that region. This novel method significantly improves the visibility and clarity of objects in the overlap regions in the reconstructed 360 views [Liang, 0116]”. Regarding claim 2: Zou in view of Liang teaches all the limitations of claim 1, upon which this claim is dependent. Zou further teaches: wherein the at least one camera has an optical axis projecting away from the vehicle (see at least fig. 1, cameras 130a - 130d showing cameras pointing away from vehicle.). Regarding claim 3: Zou in view of Liang teaches all the limitations of claim 1, upon which this claim is dependent. 
Zou further teaches: wherein each of the plurality of top view images is generated based on a simulated viewpoint that is elevated relative to an actual elevation of the at least one camera (the top view generation engine 212 uses the image to generate a top view of the road as if the point of view of the camera was directly above the road looking down at the road. An example of a top view (e.g., top-down view 304) is depicted in FIG. 3. [0031]).

Regarding claim 4: Zou in view of Liang teaches all the limitations of claim 3, upon which this claim is dependent. Zou further teaches: wherein the simulated viewpoint is elevated by at least ten meters relative to the actual elevation of the camera (the top view generation engine 212 uses the image to generate a top view of the road as if the point of view of the camera was directly above the road looking down at the road. An example of a top view (e.g., top-down view 304) is depicted in FIG. 3. [0031]; examiner notes that the exact simulated height of the bird's eye view would come down to routine optimization and be an obvious design choice.). Liang more explicitly teaches: wherein the simulated viewpoint is elevated by at least ten meters relative to the actual elevation of the camera (a synthetic top-down view, may require flatten the images, such as to give the appearance of a higher viewing perspective [0128]; examiner notes that the exact simulated height of the bird's eye view would come down to routine optimization and be an obvious design choice.).

Regarding claim 5: Zou in view of Liang teaches all the limitations of claim 3, upon which this claim is dependent. Zou further teaches: wherein the simulated viewpoint is elevated by between ten meters and twenty meters relative to the actual elevation of the camera (the top view generation engine 212 uses the image to generate a top view of the road as if the point of view of the camera was directly above the road looking down at the road. An example of a top view (e.g., top-down view 304) is depicted in FIG. 3. [0031]; examiner notes that the exact simulated height of the bird's eye view would come down to routine optimization and be an obvious design choice.). Liang more explicitly teaches: wherein the simulated viewpoint is elevated by at least ten meters relative to the actual elevation of the camera (a synthetic top-down view, may require flatten the images, such as to give the appearance of a higher viewing perspective [0128]; examiner notes that the exact simulated height of the bird's eye view would come down to routine optimization and be an obvious design choice.).

Regarding claim 6: Zou in view of Liang teaches all the limitations of claim 3, upon which this claim is dependent. Zou further teaches: wherein an optical axis associated with the simulated viewpoint is normal to a road surface associated with the road segment (the top view generation engine 212 uses the image to generate a top view of the road as if the point of view of the camera was directly above the road looking down at the road. An example of a top view (e.g., top-down view 304) is depicted in FIG. 3. [0031]).

Regarding claim 7: Zou in view of Liang teaches all the limitations of claim 1, upon which this claim is dependent.
Zou further teaches: wherein each of the plurality of top view images is generated by warping an image captured by the at least one camera from a viewpoint of the at least one camera (the top view generation engine 212 uses fisheye camera imaging techniques to generate the top view from an image captured with a fisheye camera (i.e., a camera having a fisheye lens). When using a fisheye camera, the top view generation engine 212 can be calibrated to compensate for radial distortion caused by the fisheye lens. [0032]) to a simulated camera viewpoint elevated relative to the at least one camera and directed along a line normal to a surface of the road segment (the top view generation engine 212 uses the image to generate a top view of the road as if the point of view of the camera was directly above the road looking down at the road. An example of a top view (e.g., top-down view 304) is depicted in FIG. 3. [0031]).

Regarding claim 8: Zou in view of Liang teaches all the limitations of claim 1, upon which this claim is dependent. Zou further teaches: wherein the at least one camera includes at least one of a forward-facing camera relative to the vehicle (fig. 1, camera 130a), a side-facing camera relative to the vehicle (fig. 1, cameras 130b and 130c), or a rearward-facing camera relative to the vehicle (fig. 1, camera 130d).

Regarding claim 11: Zou in view of Liang teaches all the limitations of claim 1, upon which this claim is dependent. Zou further teaches: wherein aggregation of the plurality of top view images (generates a top view of the road based at least in part on the image [0027]) includes: Liang further teaches: determining a relative alignment for the plurality of top view images based on the correlated feature points (the images produced by cameras 802 and 804 are combined to avoid errors that may otherwise be produced when combining images from cameras having different vantages [0118]) and based on tracked ego motion of the vehicle (Other sensors, such as inertial measurement units, gyroscopes, wheel encoders, sonar sensors, motion sensors to perform odometry calculations with respect to nearby moving exterior objects, and exterior facing cameras (e.g., to perform computer vision processing) can provide further contextual information for generation of a more accurate three-dimensional map. [0078]).

Regarding claim 12: Zou in view of Liang teaches all the limitations of claim 11, upon which this claim is dependent. Zou further teaches: wherein aggregation of the plurality of top view images includes determining positions of each of the plurality of points relative to the road segment (The road feature detection engine 216 can determine a type of road feature (e.g., a straight arrow, a left-turn arrow, etc.) as well as a location of the road feature (e.g., arrow ahead, bicycle lane to the left, etc.). [0034]).

Regarding claim 13: Zou in view of Liang teaches all the limitations of claim 12, upon which this claim is dependent.
Liang further teaches: wherein the positions of each of the plurality of points (the like and inanimate objects and attributes thereof such as other vehicles (e.g., current vehicle state or activity (parked or in motion or level of automation currently employed), occupant or operator identity, vehicle type (truck, car, etc.), vehicle spatial location, etc.), curbs (topography and spatial location), potholes (size and spatial location), lane division markers (type or color and spatial locations), signage (type or color and spatial locations such as speed limit signs, yield signs, stop signs, and other restrictive or warning signs), traffic signals (e.g., red, yellow, blue, green, etc.), buildings (spatial locations), walls (height and spatial locations), barricades (height and spatial location), and the like [0074]) are determined using structure from motion calculations (Other sensors, such as inertial measurement units, gyroscopes, wheel encoders, sonar sensors, motion sensors to perform odometry calculations with respect to nearby moving exterior objects, and exterior facing cameras (e.g., to perform computer vision processing) can provide further contextual information for generation of a more accurate three-dimensional map. [0078]). Regarding claim 14: Zou in view of Liang teaches all the limitations of claim 1, upon which this claim is dependent. Zou further teaches: wherein aggregation of the plurality of top view images includes an image segmentation process in which objects represented in the plurality of top view images are identified (a surround camera system of a vehicle to detect, track, and classify close range road features reliability and in real-time [0019]) and classified (a feature extraction to extract road features from the top view using a neural network, and performing a classification of the road feature using the neural network [0005]). Regarding claim 15: Zou in view of Liang teaches all the limitations of claim 14, upon which this claim is dependent. Zou further teaches: wherein aggregation of the plurality of top view images includes omitting from the aggregated top view image pixels from one or more of the plurality of top view images determined, via the image segmentation process (The top view generation engine 212 generates a top view of the road based at least in part on the image. That is, the top view generation engine 212 uses the image to generate a top view of the road as if the point of view of the camera was directly above the road looking down at the road. An example of a top view (e.g., top-down view 304) is depicted in FIG. 3. [0031]), to be representative of at least a portion of a vehicle (see at least fig. 3 showing portion of vehicle in 302 which is not present in transformed view 304.). Regarding claim 17: Zou in view of Liang teaches all the limitations of claim 1, upon which this claim is dependent. Zou further teaches: wherein a first top view image and a second top view image among the plurality of top view images at least partially overlap in an overlap region (see at least fig. 1 showing images 130a-d showing the corners overlapping between the 4 views.) 
and wherein aggregation of the plurality of top view images includes incorporating into the aggregated top view image at least some of the pixels from the first top view image that reside in the overlap region and at least some of the pixels from the second top view image that reside in the overlap region (the captured images from the cameras 130 can be combined to form a top view or “bird's eye” view that provides a surround view around the vehicle 100 [0026]). Regarding claim 18: Zou in view of Liang teaches all the limitations of claim 1, upon which this claim is dependent. Zou further teaches: wherein a first top view image (fig. 1, 131a), a second top view image (fig. 1, 131b), and a third top view image (fig. 1, 131c) among the plurality of top view images at least partially overlap in an overlap region and wherein aggregation of the plurality of top view images includes incorporating into the aggregated top view image at least some of the pixels from the first top view image that reside in the overlap region, at least some of the pixels from the second top view image that reside in the overlap region, and at least some of the pixels from the third top view image that reside in the overlap region (According to aspects of the present disclosure, although four cameras 130a-130d are shown, different numbers of cameras (e.g., 2 cameras, 3 cameras, 5 cameras, 8 cameras, 9 cameras, etc.) can be implemented [0025]; examiner notes that the birds eye view creation of Zou would inherently have an overlap region of 3 images if more cameras were implemented on the vehicles such as in the corners which would overlap the adjacent two images.). Regarding claim 19: Zou in view of Liang teaches all the limitations of claim 1, upon which this claim is dependent. Zou further teaches: wherein the automatic annotation of the at least one road feature is performed by a trained neural network (detecting the road feature within the lane boundaries further includes performing a feature extraction to extract road features from the top view using a neural network, and performing a classification of the road feature using the neural network [0009]) Regarding claim 20: Zou in view of Liang teaches all the limitations of claim 1, upon which this claim is dependent. Zou further teaches: wherein the at least one road feature includes at least one of a road surface, a lane marking, or a road edge (performing a classification of the road feature using the neural network. In some examples, the lane boundaries are defined by a lane marker, a road shoulder, or a curb [0005]). Regarding claim 23: Zou in view of Liang teaches all the limitations of claim 1, upon which this claim is dependent. Zou further teaches: wherein the at least one road feature includes a drivable path (detecting, by the processing device, lane boundaries of a lane of the road based at least in part on the top view of the road [0004]). Regarding claim 24: Zou in view of Liang teaches all the limitations of claim 23, upon which this claim is dependent. Zou further teaches: wherein the drivable path is associated with at least one of a merge lane (examiner is interpreting this limitation in the alternative.), an exit lane (examiner is interpreting this limitation in the alternative.), an intersection (determine a type of road feature (e.g., a straight arrow, a left-turn arrow, etc.) [0034]), or a crossing road (a railroad indicator [0005]). 
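Claims 3-7 above describe warping each camera image to a simulated elevated viewpoint, and claims 17-18 describe incorporating pixels from two or three overlapping top views into the aggregated image. A minimal OpenCV sketch of that warp-and-stitch pattern follows; the ground-point correspondences, canvas geometry, and simple averaging of overlap pixels are all invented for illustration and do not reproduce Zou's or the applicant's implementation.

# Minimal warp-and-stitch sketch for the top-view claims above: each
# camera image is warped to a simulated overhead viewpoint with a
# ground-plane homography (claims 3-7), and pixels from every image
# contributing to an overlap region are blended (claims 17-18).
# All geometry below is made up for illustration.
import cv2
import numpy as np

def top_view(image, src_pts, dst_pts, canvas_size):
    """Warp one camera image onto the shared top-view canvas."""
    H = cv2.getPerspectiveTransform(
        np.float32(src_pts),   # four ground points in the camera image
        np.float32(dst_pts))   # the same points in top-view coordinates
    return cv2.warpPerspective(image, H, canvas_size)

def stitch(top_views):
    """Average all contributing pixels, including overlap regions."""
    stack = np.stack([tv.astype(np.float32) for tv in top_views])
    counts = np.stack([(tv.sum(axis=2) > 0) for tv in top_views]).sum(axis=0)
    weight = np.maximum(counts, 1)[..., None]
    return (stack.sum(axis=0) / weight).astype(np.uint8)

if __name__ == "__main__":
    canvas = (400, 400)                              # (width, height)
    front = np.full((240, 320, 3), 120, np.uint8)    # stand-in camera frames
    rear = np.full((240, 320, 3), 200, np.uint8)
    tv1 = top_view(front, [(40, 230), (280, 230), (300, 120), (20, 120)],
                   [(100, 390), (300, 390), (300, 150), (100, 150)], canvas)
    tv2 = top_view(rear, [(40, 230), (280, 230), (300, 120), (20, 120)],
                   [(100, 10), (300, 10), (300, 250), (100, 250)], canvas)
    mosaic = stitch([tv1, tv2])                      # overlap is averaged
    print(mosaic.shape)                              # (400, 400, 3)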
Regarding claim 29: Zou in view of Liang teaches all the limitations of claim 1, upon which this claim is dependent. Zou further teaches: wherein the at least one road feature includes at least one of a traffic light (traffic direction control indicators [0019]), a pole (examiner is interpreting this limitation in the alternative.), a traffic sign (the road feature is one of a speed limit indicator, a bicycle lane indicator, a railroad indicator, a school zone indicator, and a direction indicator [0005]), a tree (examiner is interpreting this limitation in the alternative.), or a building (examiner is interpreting this limitation in the alternative.). Regarding claim 34: Zou in view of Liang teaches all the limitations of claim 1, upon which this claim is dependent. Zou further teaches: wherein the at least one processor is further programmed to convert the aggregated top view image to a series of frame view images each including a representation of at least a portion of the at least one road feature (the disclosure provide for road feature detection using machine learning to address computational inefficiency and accuracy issues in existing road feature detection. More particularly, the embodiments described herein detect road features by generating a top view of a road based on an image from a camera associated with a vehicle on the road, detect lane boundaries of a lane of the road based on the top view of the road, and detect (e.g., using deep learning) a road feature within the lane boundaries of the lane of the road. These aspects of the disclosure constitute technical features that yield the technical effect of reducing overall computational load, power consumption, hardware costs, and time [0021]), and wherein annotations of the least one road feature represented in the aggregated top view image are translated to each of the series of frame view images (the present techniques use a surround camera system of a vehicle to detect, track, and classify close range road features reliability and in real-time. Road features include lane marks, traffic direction control indicators, curbs, shoulders, and the like that are located on or about the road surface. To detect road features, the present techniques implement a deep learning network to enable multiple road feature detection and classification in parallel as one step and in real-time. In some examples, the road features can be fused with other in-vehicle sensors/data (e.g., long range sensors, other cameras, LIDAR sensors, maps, etc.) to improve detection and classification accuracy and robustness. In additional examples, the road features can be used for self-mapping and crowdsourcing to generate and/or update a road feature database [0019]). Regarding claim 35: Zou in view of Liang teaches all the limitations of claim 1, upon which this claim is dependent. Zou further teaches: wherein the at least one processor is further programmed to generate at least one navigational map based on the aggregated top view image stored to the at least one memory (the road features can be used for self-mapping and crowdsourcing to generate and/or update a road feature database [0019]). Regarding claim 36: Zou in view of Liang teaches all the limitations of claim 1, upon which this claim is dependent. 
Zou further teaches: wherein the at least one processor is further programmed to overlay the aggregated top view image with a drivable path (Feature extraction takes as an input the boundary image 312, which represents the result of the image processing depicted in FIG. 3 as performed by the top view generation engine 212 and the lane boundaries detection engine 214 [0040]) generated based on trajectories collected from a plurality of vehicles during earlier traversals of the road segment (crowdsourcing to generate and/or update a road feature database [0019]). Regarding claim 38: Zou in view of Liang teaches all the limitations of claim 1, upon which this claim is dependent. Zou further teaches: wherein the plurality of images are acquired by cameras included on a plurality of different vehicles as each of the plurality of different vehicles traversed the road segment (crowdsourcing to generate and/or update a road feature database [0019]). Regarding claim 40: Zou teaches: A non-transitory computer-readable medium storing instructions executable by at least one processor to perform a method for automatically mapping a road segment (The system also includes a memory including computer readable instructions and a processing device for executing the computer readable instructions for performing a method [0006]), the method comprising: receiving, from at least one camera mounted on a vehicle (The cameras 130 capture images external to the vehicle 100 [0024]), a plurality of images acquired as the vehicle traversed the road segment (Each of the cameras 130 has a field-of-view (FOV) 131a, 131b, 131c, 131d (collectively referred to herein as “FOV 131”). The FOV is the area observable by a camera. For example, the camera 130a has an FOV 131a, the camera 131b has an FOV 131b, the camera 130c has an FOV 131c, and the camera 131d has an FOV 131d. The captured images can be the entire FOV for the camera or can be a portion of the FOV of the camera. [0024]); converting each of the plurality of images to a corresponding top view image to provide a plurality of top view images (the captured images from the cameras 130 can be combined to form a top view or “bird's eye” view that provides a surround view around the vehicle 100 [0026]); aggregating the plurality of top view images to provide an aggregated top view image of the road segment (generates a top view of the road based at least in part on the image [0027]); analyzing the aggregated top view image to identify at least one road feature associated with the road segment (detects lane boundaries of a lane of the road based at least in part on the top view of the road, and detects a road feature within the lane boundaries of the lane of the road using machine learning and/or computer vision techniques [0027]); automatically annotating the at least one road feature relative to the aggregated top view image (Once the lane boundaries of the lane are detected, the road feature detection engine 216 uses the lane boundaries to detect road features within the lane boundaries of the lane of the road using machine learning and/or computer vision techniques. The road feature detection engine 216 searches within the top view, as defined by the lane boundaries, to detect road features. The road feature detection engine 216 can determine a type of road feature (e.g., a straight arrow, a left-turn arrow, etc.) as well as a location of the road feature (e.g., arrow ahead, bicycle lane to the left, etc.) 
[0034]; The road features can be predefined in a database of road features (e.g., road feature database 218). Examples of road features include a speed limit indicator, a bicycle lane indicator, a railroad indicator, a school zone indicator, and a direction indicator (e.g., left-turn arrow, straight arrow, right-turn arrow, straight and left-turn arrow, straight and right-turn arrow, etc.), and the like. The road feature database 218 can be updated when road features are detected, and the road feature database 218 can be accessible by other vehicles, such as from a cloud computing environment over a network or from the vehicle 100 directly (e.g., using direct short-range communications (DSCR)). This enables crowd-sourcing of road features. [0035]); and outputting to at least one memory the aggregated top view image including the annotated at least one road feature (The road feature database 218 can be updated when road features are detected, and the road feature database 218 can be accessible by other vehicles, such as from a cloud computing environment over a network or from the vehicle 100 directly [0035]; Graphics processing unit 37 is a specialized electronic circuit designed to manipulate and alter memory to accelerate the creation of images in a frame buffer intended for output to a display [0055]). Zou does not explicitly teach, however Liang teaches: aggregate the plurality of top view images (fig. 8 image 806 and 808; fig. 15, images from cameras 802 and 804) based on points correlated across the plurality of top view images (The selection of which one of cameras 802 or 804 will provide object image 1510 is variously embodied. In one embodiment, if object 810 (providing object image 1510) is 150 cm or closer to vehicle 100, then camera 802 is configured to provide the image via the second angle field of view. Alternatively, if object 810 is 150 cm or farther, camera 804 is configured to provide the image via third viewing angle. The location of the boarder delineating the portion of unified image 1500 is provided by camera 802 and which is provided by camera 804 is more fully described with respect to FIGS. 9 and 10 [0134]), the points corresponding to a portion of an object (fig. 8, object 810; fig. 15, object 1510), to provide an aggregated top view image of the road segment (FIG. 15 is unified view 1500 of a vehicle's surroundings in accordance with at least some embodiments of the present disclosure. In one embodiment, unified view 1500 may comprise a plurality of cameras, each producing a portion of unified view 1500. To avoid unnecessarily complicating the figures and description, on the images provided by cameras 802 and 804 are considered. However, it should be appreciated that any two cameras, of a two or more cameras, wherein an object is within, or potentially within, the field of view of each may be utilized without departing from the scope of the embodiments provided. [0130]); It would have been obvious to one of ordinary skill in the art at the time of the effective filing date of the claimed invention to have modified Zou to include the teachings as taught by Liang with a reasonable expectation of success. Zou and Liang both teach processing images from cameras mounted on vehicles and creating a top down view from them. Liang teaches the benefit of “a novel method where the seam line is not fixed, but instead dynamically varies for each overlap region based on the real-world objects present in that region. 
This novel method significantly improves the visibility and clarity of objects in the overlap regions in the reconstructed 360 views [Liang, 0116]”. Regarding claim 41: Zou in view of Liang teaches all the limitations of claim 40, upon which this claim is dependent. Zou further teaches: wherein the at least one camera has an optical axis projecting away from the vehicle (see at least fig. 1, cameras 130a - 130d showing cameras pointing away from vehicle.). Regarding claim 42: Zou in view of Liang teaches all the limitations of claim 40, upon which this claim is dependent. Zou further teaches: wherein each of the plurality of top view images is generated based on a simulated viewpoint that is elevated relative to an actual elevation of the at least one camera (the top view generation engine 212 uses the image to generate a top view of the road as if the point of view of the camera was directly above the road looking down at the road. An example of a top view (e.g., top-down view 304) is depicted in FIG. 3. [0031]). Regarding claim 43: Zou in view of Liang teaches all the limitations of claim 40, upon which this claim is dependent. Zou further teaches: wherein each of the plurality of top view images is generated by warping an image captured by the at least one camera from a viewpoint of the at least one camera (the top view generation engine 212 uses fisheye camera imaging techniques to generate the top view from an image captured with a fisheye camera (i.e., a camera having a fisheye lens). When using a fisheye camera, the top view generation engine 212 can be calibrated to compensate for radial distortion caused by the fisheye lens. [0032]) to a simulated camera viewpoint elevated relative to the at least one camera and directed along a line normal to a surface of the road segment (the top view generation engine 212 uses the image to generate a top view of the road as if the point of view of the camera was directly above the road looking down at the road. An example of a top view (e.g., top-down view 304) is depicted in FIG. 3. [0031]). Regarding claim 44: Zou in view of Liang teaches all the limitations of claim 40, upon which this claim is dependent. Zou further teaches: wherein aggregation of the plurality of top view images (generates a top view of the road based at least in part on the image [0027]) includes: Liang further teaches: determining a relative alignment for the plurality of top view images based on the correlated feature points (the images produced by cameras 802 and 804 are combined to avoid errors that may otherwise be produced when combining images from cameras having different vantages [0118]) and based on tracked ego motion of the vehicle (Other sensors, such as inertial measurement units, gyroscopes, wheel encoders, sonar sensors, motion sensors to perform odometry calculations with respect to nearby moving exterior objects, and exterior facing cameras (e.g., to perform computer vision processing) can provide further contextual information for generation of a more accurate three-dimensional map. [0078]). Regarding claim 45: Zou in view of Liang teaches all the limitations of claim 40, upon which this claim is dependent. 
Zou further teaches: wherein aggregation of the plurality of top view images includes an image segmentation process in which objects represented in the plurality of top view images are identified (a surround camera system of a vehicle to detect, track, and classify close range road features reliability and in real-time [0019]) and classified (a feature extraction to extract road features from the top view using a neural network, and performing a classification of the road feature using the neural network [0005]). Regarding claim 46: Zou in view of Liang teaches all the limitations of claim 40, upon which this claim is dependent. Zou further teaches: wherein the automatic annotation of the at least one road feature is performed by a trained neural network (detecting the road feature within the lane boundaries further includes performing a feature extraction to extract road features from the top view using a neural network, and performing a classification of the road feature using the neural network [0009]). Regarding claim 47: Zou teaches: A method for automatically mapping a road segment (The system also includes a memory including computer readable instructions and a processing device for executing the computer readable instructions for performing a method [0006]), the method comprising: receiving, from at least one camera mounted on a vehicle (The cameras 130 capture images external to the vehicle 100 [0024]), a plurality of images acquired as the vehicle traversed the road segment (Each of the cameras 130 has a field-of-view (FOV) 131a, 131b, 131c, 131d (collectively referred to herein as “FOV 131”). The FOV is the area observable by a camera. For example, the camera 130a has an FOV 131a, the camera 131b has an FOV 131b, the camera 130c has an FOV 131c, and the camera 131d has an FOV 131d. The captured images can be the entire FOV for the camera or can be a portion of the FOV of the camera. [0024]); converting each of the plurality of images to a corresponding top view image to provide a plurality of top view images (the captured images from the cameras 130 can be combined to form a top view or “bird's eye” view that provides a surround view around the vehicle 100 [0026]); aggregating the plurality of top view images to provide an aggregated top view image of the road segment (generates a top view of the road based at least in part on the image [0027]); analyzing the aggregated top view image to identify at least one road feature associated with the road segment (detects lane boundaries of a lane of the road based at least in part on the top view of the road, and detects a road feature within the lane boundaries of the lane of the road using machine learning and/or computer vision techniques [0027]); automatically annotating the at least one road feature relative to the aggregated top view image (Once the lane boundaries of the lane are detected, the road feature detection engine 216 uses the lane boundaries to detect road features within the lane boundaries of the lane of the road using machine learning and/or computer vision techniques. The road feature detection engine 216 searches within the top view, as defined by the lane boundaries, to detect road features. The road feature detection engine 216 can determine a type of road feature (e.g., a straight arrow, a left-turn arrow, etc.) as well as a location of the road feature (e.g., arrow ahead, bicycle lane to the left, etc.) [0034]; The road features can be predefined in a database of road features (e.g., road feature database 218). 
Examples of road features include a speed limit indicator, a bicycle lane indicator, a railroad indicator, a school zone indicator, and a direction indicator (e.g., left-turn arrow, straight arrow, right-turn arrow, straight and left-turn arrow, straight and right-turn arrow, etc.), and the like. The road feature database 218 can be updated when road features are detected, and the road feature database 218 can be accessible by other vehicles, such as from a cloud computing environment over a network or from the vehicle 100 directly (e.g., using direct short-range communications (DSCR)). This enables crowd-sourcing of road features. [0035]); and outputting to at least one memory the aggregated top view image including the annotated at least one road feature (The road feature database 218 can be updated when road features are detected, and the road feature database 218 can be accessible by other vehicles, such as from a cloud computing environment over a network or from the vehicle 100 directly [0035]; Graphics processing unit 37 is a specialized electronic circuit designed to manipulate and alter memory to accelerate the creation of images in a frame buffer intended for output to a display [0055]). Zou does not explicitly teach, however Liang teaches: aggregate the plurality of top view images (fig. 8 image 806 and 808; fig. 15, images from cameras 802 and 804) based on points correlated across the plurality of top view images (The selection of which one of cameras 802 or 804 will provide object image 1510 is variously embodied. In one embodiment, if object 810 (providing object image 1510) is 150 cm or closer to vehicle 100, then camera 802 is configured to provide the image via the second angle field of view. Alternatively, if object 810 is 150 cm or farther, camera 804 is configured to provide the image via third viewing angle. The location of the boarder delineating the portion of unified image 1500 is provided by camera 802 and which is provided by camera 804 is more fully described with respect to FIGS. 9 and 10 [0134]), the points corresponding to a portion of an object (fig. 8, object 810; fig. 15, object 1510), to provide an aggregated top view image of the road segment (FIG. 15 is unified view 1500 of a vehicle's surroundings in accordance with at least some embodiments of the present disclosure. In one embodiment, unified view 1500 may comprise a plurality of cameras, each producing a portion of unified view 1500. To avoid unnecessarily complicating the figures and description, on the images provided by cameras 802 and 804 are considered. However, it should be appreciated that any two cameras, of a two or more cameras, wherein an object is within, or potentially within, the field of view of each may be utilized without departing from the scope of the embodiments provided. [0130]); It would have been obvious to one of ordinary skill in the art at the time of the effective filing date of the claimed invention to have modified Zou to include the teachings as taught by Liang with a reasonable expectation of success. Zou and Liang both teach processing images from cameras mounted on vehicles and creating a top down view from them. Liang teaches the benefit of “a novel method where the seam line is not fixed, but instead dynamically varies for each overlap region based on the real-world objects present in that region. This novel method significantly improves the visibility and clarity of objects in the overlap regions in the reconstructed 360 views [Liang, 0116]”. 
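Claims 11, 44, and 51 recite determining a relative alignment for the top view images from correlated feature points together with tracked ego motion of the vehicle. The ego-motion half of that alignment can be sketched as dead-reckoned planar odometry that assigns each top view a pose in a common mosaic frame. The odometry values below are invented, and the feature-point refinement the claims pair with ego motion is omitted; this is an illustration, not the claimed method.

# Sketch of ego-motion-based alignment for claims 11/44/51: integrate
# planar odometry to get each frame's pose, then place every top-view
# image into one mosaic frame. Odometry values are illustrative.
import math
import numpy as np

def pose_to_transform(x, y, heading):
    """2D rigid transform from the vehicle frame at capture time to the
    shared mosaic frame."""
    c, s = math.cos(heading), math.sin(heading)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1.0]])

# (distance_m, heading_change_rad) per frame, e.g. from the wheel
# encoders and gyroscopes Liang [0078] lists.
odometry = [(1.0, 0.00), (1.0, 0.05), (1.0, 0.05), (1.0, 0.00)]

x = y = heading = 0.0
transforms = []
for dist, dtheta in odometry:
    heading += dtheta
    x += dist * math.cos(heading)
    y += dist * math.sin(heading)
    transforms.append(pose_to_transform(x, y, heading))

# transforms[i] positions top-view image i in the mosaic; in a full
# pipeline, point correspondences between overlapping top views would
# correct the drift this dead reckoning accumulates.
print(np.round(transforms[-1], 3))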
Regarding claim 48: Zou in view of Liang teaches all the limitations of claim 47, upon which this claim is dependent. Zou further teaches: wherein the at least one camera has an optical axis projecting away from the vehicle (see at least fig. 1, cameras 130a - 130d showing cameras pointing away from vehicle.). Regarding claim 49: Zou in view of Liang teaches all the limitations of claim 47, upon which this claim is dependent. Zou further teaches: wherein each of the plurality of top view images is generated based on a simulated viewpoint that is elevated relative to an actual elevation of the at least one camera (the top view generation engine 212 uses the image to generate a top view of the road as if the point of view of the camera was directly above the road looking down at the road. An example of a top view (e.g., top-down view 304) is depicted in FIG. 3. [0031]). Regarding claim 50: Zou in view of Liang teaches all the limitations of claim 47, upon which this claim is dependent. Zou further teaches: wherein each of the plurality of top view images is generated by warping an image captured by the at least one camera from a viewpoint of the at least one camera (the top view generation engine 212 uses fisheye camera imaging techniques to generate the top view from an image captured with a fisheye camera (i.e., a camera having a fisheye lens). When using a fisheye camera, the top view generation engine 212 can be calibrated to compensate for radial distortion caused by the fisheye lens. [0032]) to a simulated camera viewpoint elevated relative to the at least one camera and directed along a line normal to a surface of the road segment (the top view generation engine 212 uses the image to generate a top view of the road as if the point of view of the camera was directly above the road looking down at the road. An example of a top view (e.g., top-down view 304) is depicted in FIG. 3. [0031]). Regarding claim 51: Zou in view of Liang teaches all the limitations of claim 47, upon which this claim is dependent. Zou further teaches: wherein aggregation of the plurality of top view images (generates a top view of the road based at least in part on the image [0027]) includes: Liang further teaches: determining a relative alignment for the plurality of top view images based on the correlated feature points (the images produced by cameras 802 and 804 are combined to avoid errors that may otherwise be produced when combining images from cameras having different vantages [0118]) and based on tracked ego motion of the vehicle (Other sensors, such as inertial measurement units, gyroscopes, wheel encoders, sonar sensors, motion sensors to perform odometry calculations with respect to nearby moving exterior objects, and exterior facing cameras (e.g., to perform computer vision processing) can provide further contextual information for generation of a more accurate three-dimensional map. [0078]). It would have been obvious to one of ordinary skill in the art at the time of the effective filing date of the claimed invention to have modified Zou to include the teachings as taught by Liang with a reasonable expectation of success. Zou and Liang both teach processing images from cameras mounted on vehicles and creating a top down view from them. Liang teaches the benefit of “a novel method where the seam line is not fixed, but instead dynamically varies for each overlap region based on the real-world objects present in that region. 
This novel method significantly improves the visibility and clarity of objects in the overlap regions in the reconstructed 360 views [Liang, 0116]”.

Regarding claim 52: Zou in view of Liang teaches all the limitations of claim 47, upon which this claim is dependent. Zou further teaches: wherein aggregation of the plurality of top view images includes an image segmentation process in which objects represented in the plurality of top view images are identified (a surround camera system of a vehicle to detect, track, and classify close range road features reliability and in real-time [0019]) and classified (a feature extraction to extract road features from the top view using a neural network, and performing a classification of the road feature using the neural network [0005]).

Regarding claim 53: Zou in view of Liang teaches all the limitations of claim 47, upon which this claim is dependent. Zou further teaches: wherein the automatic annotation of the at least one road feature is performed by a trained neural network (detecting the road feature within the lane boundaries further includes performing a feature extraction to extract road features from the top view using a neural network, and performing a classification of the road feature using the neural network [0009]).

Regarding claim 54: Zou in view of Liang teaches all the limitations of claim 1, upon which this claim is dependent. Zou further teaches: wherein the points are located along a modeling structure of the road segment (fig. 3, showing modeling structure from lane markers of road surface.).

Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Zou et al. (US 2017/0300763), herein Zou (from IDS), in view of Liang et al. (US 2020/0314333), herein Liang (from IDS), in further view of Stojanovic et al. (US 2019/0050648), herein Stojanovic.

Regarding claim 16: Zou in view of Liang teaches all the limitations of claim 1, upon which this claim is dependent. Zou in view of Liang does not explicitly teach, however Stojanovic teaches: wherein aggregation of the plurality of top view images includes omitting from the aggregated top view image pixels from one or more of the plurality of top view images determined to be representative of at least a portion of a moving object (at block 1012, dynamic objects are vetoed from the terrestrial-view semantic images. The vetoed dynamic objects are removed from the terrestrial-view semantic images. That is, dynamic objects, such as, but not limited to, vehicles, pedestrians, and the like may be vetoed and/or removed from the terrestrial-view semantic images. The removal of dynamic objects from the drive time semantic images is performed because such dynamic objects will not be included in the received semantic map. As such, the performance of the image registration discussed herein is improved by vetoing dynamic objects. [0098]). It would have been obvious to one of ordinary skill in the art at the time of the effective filing date of the claimed invention to have modified Zou in view of Liang to include the teachings as taught by Stojanovic with a reasonable expectation of success. Zou and Stojanovic both teach processing images from cameras mounted on vehicles and creating a top down view from them. Stojanovic teaches the benefit of “The removal of dynamic objects from the drive time semantic images is performed because such dynamic objects will not be included in the received semantic map.
As such, the performance of the image registration discussed herein is improved by vetoing dynamic objects [Stojanovic, 0098]”.

Claim 28 is rejected under 35 U.S.C. 103 as being unpatentable over Zou et al. (US 2017/0300763), herein Zou (from IDS), in view of Liang et al. (US 2020/0314333), herein Liang (from IDS), in further view of Kang et al. (US 2019/0095722), herein Kang.

Regarding claim 28: Zou in view of Liang teaches all the limitations of claim 1, upon which this claim is dependent. Zou in view of Liang does not explicitly teach, however Kang teaches: wherein the at least one road feature includes a virtual lane marking connecting two or more discontinuous lane markings (the driving lane identifying apparatus generates virtual lines, for example, virtual lines 1510 of FIG. 15, by fitting, in a segmentation image, the boundary lines that demarcate a left boundary and a right boundary of the lane. The driving lane identifying apparatus may generate the virtual lines, for example, the virtual lines 1510, by fitting the boundary lines in the segmentation image using a spline or a polyline based on a local gradient or a local threshold [0092]). It would have been obvious to one of ordinary skill in the art at the time of the effective filing date of the claimed invention to have modified Zou in view of Liang to include the teachings as taught by Kang with a reasonable expectation of success. Zou and Kang both teach processing images from cameras mounted on vehicles. Kang teaches the benefit of “a method of identifying a driving lane, including extracting, from an input image, a left lane boundary line and a right lane boundary line of a driving lane of a vehicle, generating a segmentation image by segmenting the input image into objects included in the input image based on a semantic unit, generating a multi-virtual lane by fitting, in the segmentation image, the left lane boundary line and the right lane boundary line on a left side and a right side at equidistance intervals, determining a number of lanes of the multi-virtual lane based on whether the multi-virtual lane corresponds to a road component in the segmentation image, and identifying the driving lane by determining a relative location of the driving lane on a road on which the vehicle may be traveling based on the determined number of the lanes of the multi-virtual lane [Kang, 0022]”.

Claim 37 is rejected under 35 U.S.C. 103 as being unpatentable over Zou et al. (US 2017/0300763), herein Zou (from IDS), in view of Liang et al. (US 2020/0314333), herein Liang (from IDS), in further view of Dorum et al. (US 2010/0082248), herein Dorum.

Regarding claim 37: Zou in view of Liang teaches all the limitations of claim 36, upon which this claim is dependent. Zou in view of Liang does not explicitly teach, however Dorum teaches: wherein the drivable path is represented as a 3D spline (In blocks 1502 and 1504, the 3D B-spline routine 1500 associates the altitude B-spline data to the road link map database such that previous knowledge of the road attributes may be added to the splines. This allows for intelligent processing of the altitude B-splines, such as knowing if the elevation data is at an intersection or on a ramp (where crossing elevations are allowed to differ) [0133]).
It would have been obvious to one of ordinary skill in the art at the time of the effective filing date of the claimed invention to have modified Zou in view of Liang to include the teachings as taught by Dorum with a reasonable expectation of success. Zou and Dorum both teach processing images from cameras mounted on vehicles and creating a top down view from them. Dorum teaches the benefit of “link chains, optimized 2D B-splines, and height data are used to create 3D splines. The height data is preferably obtained from GPS/IMU traces collected as vehicles travel on roads represented by the road segments. The height data is corrected at crossing nodes to account for GPS height inaccuracies. The link chains are fitted to create an altitude B-spline using the corrected height data. The 2D B-spline and the altitude B-spline are merged to obtain a 3D B-spline. Like the 2D B-spline, knots not needed to preserve the position, curvature, slope and/or heading are removed from the 3D B-spline to minimize storage requirements [Dorum, 0010]”.

Claim 39 is rejected under 35 U.S.C. 103 as being unpatentable over Zou et al. (US 2017/0300763), herein Zou (from IDS), in view of Liang et al. (US 2020/0314333), herein Liang (from IDS), in further view of Zhang et al. (US 2013/0293717), herein Zhang (from IDS).

Regarding claim 39: Zou in view of Liang teaches all the limitations of claim 38, upon which this claim is dependent. Zou in view of Liang does not explicitly teach, however Zhang teaches: wherein the plurality of images are aligned based on collected ego motion associated with each of the different vehicles (the vehicle motion compensation process looks at the image points 72 and 74 in consecutive image frames where twice or more of the number of the image points 72 and 74 in the two or more frames are available for lane geometry analysis to align the image points 72 and 74 from one image frame to the next image frame based on the motion of the vehicle 10 [0030]). It would have been obvious to one of ordinary skill in the art at the time of the effective filing date of the claimed invention to have modified Zou in view of Liang to include the teachings as taught by Zhang with a reasonable expectation of success. Zou and Zhang both teach processing images from cameras mounted on vehicles and creating a top down view from them. Zhang teaches the benefit of “Vehicle motion compensation can be used to enhance the identification of the lanes lines 50 and 52 in the image 32 at box 82 [Zhang, 0030]”.

Claims 55-56 are rejected under 35 U.S.C. 103 as being unpatentable over Zou et al. (US 2017/0300763), herein Zou (from IDS), in view of Liang et al. (US 2020/0314333), herein Liang (from IDS), in further view of Porter et al. (US 2020/0098130), herein Porter.

Regarding claim 55: Zou in view of Liang teaches all the limitations of claim 1, upon which this claim is dependent. Zou in view of Liang does not explicitly teach, however Porter teaches: wherein aggregating the plurality of top view images includes determining orientation (“an orientation”) and relative spacing (“a spatial position”) of the plurality of top view images based on the correlated points (In step 42, the system performs an image orientation phase. The image orientation step determines a spatial position and an orientation of each camera relative to each other. For example, the system selects matching key points in each image pair by using a feature detector algorithm, such as, for example, KAZE.
Claim 39 is rejected under 35 U.S.C. 103 as being unpatentable over Zou et al. (US 2017/0300763), herein Zou (from IDS), in view of Liang et al. (US 2020/0314333), herein Liang (from IDS), in further view of Zhang et al. (US 2013/0293717), herein Zhang (from IDS).

Regarding claim 39: Zou in view of Liang teaches all the limitations of claim 38, upon which this claim depends. Zou in view of Liang does not explicitly teach, however Zhang teaches: wherein the plurality of images are aligned based on collected ego motion associated with each of the different vehicles (the vehicle motion compensation process looks at the image points 72 and 74 in consecutive image frames, where two or more of the image points 72 and 74 are available across the frames for lane geometry analysis, to align the image points 72 and 74 from one image frame to the next based on the motion of the vehicle 10 [0030]).

It would have been obvious to one of ordinary skill in the art at the time of the effective filing date of the claimed invention to have modified Zou in view of Liang to include the teachings of Zhang with a reasonable expectation of success. Zou and Zhang both teach processing images from cameras mounted on vehicles and creating a top-down view from them. Zhang teaches the benefit of "Vehicle motion compensation can be used to enhance the identification of the lane lines 50 and 52 in the image 32 at box 82 [Zhang, 0030]".
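The ego-motion alignment Zhang cites reduces to transforming points detected in one frame into the next frame's coordinates using the vehicle's measured motion. Here is a minimal 2D sketch under a planar rigid-motion assumption; the motion values and point coordinates are illustrative, not Zhang's data.

```python
import numpy as np

def compensate_ego_motion(points_prev, dx, dy, dyaw):
    """Map ground-plane points from the previous frame into the
    current vehicle frame.

    dx, dy: vehicle translation (m) between frames; dyaw: heading
    change (rad). A planar rigid motion is assumed for illustration.
    """
    c, s = np.cos(dyaw), np.sin(dyaw)
    rot_inv = np.array([[c, s], [-s, c]])     # rotation by -dyaw
    # Undo the translation, then rotate into the new heading.
    return (points_prev - np.array([dx, dy])) @ rot_inv.T

# Lane-boundary points detected one frame earlier (hypothetical):
prev_pts = np.array([[5.0, 1.8], [10.0, 1.9], [15.0, 2.0]])
aligned = compensate_ego_motion(prev_pts, dx=1.2, dy=0.0, dyaw=0.01)
# 'aligned' can now be overlaid on the current frame's detections.
```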
Claims 55 and 56 are rejected under 35 U.S.C. 103 as being unpatentable over Zou et al. (US 2017/0300763), herein Zou (from IDS), in view of Liang et al. (US 2020/0314333), herein Liang (from IDS), in further view of Porter et al. (US 2020/0098130), herein Porter.

Regarding claim 55: Zou in view of Liang teaches all the limitations of claim 1, upon which this claim depends. Zou in view of Liang does not explicitly teach, however Porter teaches: wherein aggregating the plurality of top view images includes determining orientation ("an orientation") and relative spacing ("a spatial position") of the plurality of top view images based on the correlated points (In step 42, the system performs an image orientation phase. The image orientation step determines a spatial position and an orientation of each camera relative to each other. For example, the system selects matching key points in each image pair by using a feature detector algorithm, such as, for example, KAZE. Those skilled in the art would understand that other methods for selecting matching key points or other feature detector algorithms can be used. FIG. 6 is an illustration showing an example of how key points are matched between image pairs. [0035]).

It would have been obvious to one of ordinary skill in the art at the time of the effective filing date of the claimed invention to have modified Zou in view of Liang to include the teachings of Porter with a reasonable expectation of success. All the references are in the same field of endeavor of aggregating images. Porter also teaches the benefit of "ground surface condition detection and extraction from digital images. The digital images can include, but are not limited to, aerial imagery, satellite imagery, ground-based imagery, imagery taken from unmanned aerial vehicles (UAVs), mobile device imagery, etc. The disclosed system can perform a high resolution scan and generate an orthomosaic and a digital surface model from the scans. The system can then perform damage detection and a geometric extraction. Finally, the system can generate a damage report. [Porter, 0005]".

Regarding claim 56: Zou in view of Liang teaches all the limitations of claim 1, upon which this claim depends. Zou in view of Liang does not explicitly teach, however Porter teaches: wherein the points represent a 3D position associated with a surface of the object (Generating the digital surface model determines a point's 3D location when it is seen by a multiplicity of images [0038]). It would have been obvious to one of ordinary skill in the art at the time of the effective filing date of the claimed invention to have modified Zou in view of Liang to include the teachings of Porter with a reasonable expectation of success. All the references are in the same field of endeavor of aggregating images, and Porter provides the same motivation quoted above with respect to claim 55 [Porter, 0005].
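Porter names KAZE as an example feature detector for the key-point matching that drives its orientation phase. The following sketch shows what that matching step can look like with OpenCV's KAZE implementation; the file names and the match cutoff are placeholders, and Porter's actual processing is not reproduced here.

```python
import cv2

# Two overlapping top-view images (placeholder file names).
img_a = cv2.imread("view_a.png", cv2.IMREAD_GRAYSCALE)
img_b = cv2.imread("view_b.png", cv2.IMREAD_GRAYSCALE)

# Detect KAZE key points and compute their descriptors.
kaze = cv2.KAZE_create()
kp_a, des_a = kaze.detectAndCompute(img_a, None)
kp_b, des_b = kaze.detectAndCompute(img_b, None)

# Brute-force match the float descriptors and keep the strongest
# correspondences; an orientation step would use such matched points
# to recover each camera's relative spatial position and orientation.
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
good_matches = matches[:50]
```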
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Ilic (US 11,315,217) discloses that a smartphone may be freely moved in three dimensions as it captures a stream of images of an object. Multiple image frames may be captured in different orientations and distances from the object and combined into a composite image representing an image of the object. The image frames may be formed into the composite image based on representing features of each image frame as a set of points in a three-dimensional point cloud. Inconsistencies between the image frames may be adjusted when projecting respective points in the point cloud into the composite image. Quality of the image frames may be improved by processing the image frames to correct errors. Further, operating conditions may be selected, automatically or based on instructions provided to a user, to reduce motion blur. Techniques, including relocalization, allow user-selected regions of the composite image to be changed.

Pellikka (US 10,628,698) discloses a method comprising receiving at least three images, wherein the images form a number of partially overlapping image pairs, the number of partially overlapping image pairs being at least the number of received images; forming one or more candidate transformations between the images of the image pairs; constructing a multigraph comprising nodes representing nodal transformations from a composite image to the images, and edges between the nodes, the edges representing the candidate transformations; solving edge weights and the nodal transformations using an optimization problem, wherein the edge weights indicate the plausibility of the candidate transformations; and applying the solved nodal transformations in forming the composite image of the at least three images.

Benkelman (US 6,694,064) discloses a computer-implemented method and system for use in alignment of multiple digital images to form a mosaic image, involving selecting multiple search site points (SSPs) in an overlapping area of a pair of the digital images and searching for an interesting point (IP) near each of the SSPs. The system calculates a numeric interest measure (IM) at each of multiple IP-candidate sites near the SSPs. The IM is indicative of the presence of image features at the IP-candidate site and provides a basis for comparing the IP-candidate sites at each of the SSPs and selecting the IP-candidate site having the most significant IM. In a preferred embodiment, IP-candidate sites having an IM that does not exceed a predetermined minimum threshold are discarded. The method also involves locating a tie point (TP) on an overlapping one of the digital images correlating to the IP; the TP together with the IP comprise a tie point pair (TPP) that can be used to calculate and apply geometric transformations to align the images and thereby form a seamless mosaic. The system and method may also involve radiometric balancing of the images to reduce tonal mismatch.

Chandra (US 2018/0068416) discloses an apparatus for generating precision maps of an area. The apparatus receives sensor data, where the sensor data includes sensor readings each indicating a level of a parameter in one of a plurality of first portions of an area, and video data representing an aerial view of the area. The sensor data may be received from sensors that are each deployed in one of the first portions of the area; the video data may be received from an aerial vehicle. An orthomosaic may be generated from the video data, and the orthomosaic and the sensor data used to generate a prediction model. The prediction model may then be used to extrapolate the sensor data to determine a level of the parameter in each of a plurality of second portions of the area. A precision map of the area may be generated using the extrapolated sensor readings.

Selviah (WO 2018/138516) discloses 3D surveying, mapping, and imaging, and in particular the rotational alignment of 3D datasets. Embodiments include an apparatus, method, and program for rotational, and optionally also translational, alignment of 3D datasets: the 3D datasets are stored as point clouds, transformed into vector sets, and the vector sets are represented on a unit (Gaussian) sphere and compared for best alignment. The best alignment found is then used to bring the two 3D datasets into rotational and translational alignment with one another.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Scott R. Jagolinzer, whose telephone number is (571) 272-4180. The examiner can normally be reached M-Th, 8 AM - 4 PM Eastern. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Christian Chace, can be reached at (571) 272-4190. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

Scott R. Jagolinzer
Examiner, Art Unit 3665

/S.R.J./Examiner, Art Unit 3665
/CHRISTIAN CHACE/Supervisory Patent Examiner, Art Unit 3665

Prosecution Timeline

Jun 29, 2023
Application Filed
Jun 14, 2025
Non-Final Rejection — §103
Nov 10, 2025
Applicant Interview (Telephonic)
Nov 10, 2025
Examiner Interview Summary
Nov 18, 2025
Response Filed
Feb 19, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12492103
REMOTE OPERATION TERMINAL AND MOBILE CRANE COMPRISING REMOTE OPERATION TERMINAL
2y 5m to grant · Granted Dec 09, 2025
Patent 12441318
VEHICLE CONTROL DEVICE, VEHICLE CONTROL METHOD, AND STORAGE MEDIUM
2y 5m to grant · Granted Oct 14, 2025
Patent 12344390
Method of Adjusting Directional Movement Ability in a Multi-Rotor Aircraft
2y 5m to grant · Granted Jul 01, 2025
Patent 12304504
VEHICLE CONTROL SYSTEM
2y 5m to grant · Granted May 20, 2025
Patent 12216018
SYSTEM AND METHOD FOR MOVING MATERIAL
2y 5m to grant · Granted Feb 04, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

3-4
Expected OA Rounds
41%
Grant Probability
60%
With Interview (+19.2%)
3y 6m
Median Time to Grant
Moderate
PTA Risk
Based on 110 resolved cases by this examiner. Grant probability derived from career allow rate.
