Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This Office Action is in response to the amendment filed on 09/30/2025 in application 18/723,418.
In the instant Amendment, claims 1 – 3, 6, 9 – 12, and 14 – 17 have been amended. Claims 5, 13, and 18 have been cancelled.
Claims 1 – 4, 6 – 12, 14 – 17, and 19 have been examined and are pending in this application. This action is made Final.
Information Disclosure Statement
The information disclosure statement (IDS) was submitted on 06/22/2024. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Response to Arguments
Applicant’s arguments with respect to claims 1 – 4, 6 – 12, 14 – 17 and 19 have been considered but are moot because the arguments do not apply to the same combination of references being used in the current rejection. Applicant’s arguments are directed solely to the claimed invention as amended on 09/30/2025, which has been rejected under a new ground of rejection necessitated by the amendment. See the rejection below for full detail.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1 – 4, 6 – 12, 14 – 17 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Bojarski et al. (US 2020/0324795 A1) in view of Wachtel et al. (US 2022/0383545 A1).
Regarding claim 1, Bojarski discloses: “a method of generating a ground truth for one or more other road participants [see abstract: generated by one or more sensors of autonomous machines may be localized to high definition (HD) map data to augment and/or generate ground truth data—e.g., automatically], comprising:
receiving first image information about surroundings of a vehicle from at least two first-type cameras that are each mounted on the vehicle at different positions of the vehicle, the first image information including images from each of the at least two first-type cameras [see para: 0070; Cameras with a field of view that include portions of the environment to the side of the vehicle 500 (e.g., side-view cameras) may be used for surround view, providing information used to create and update the occupancy grid, as well as to generate side impact collision warnings. For example, surround camera(s) 574 (e.g., four surround cameras 574 as illustrated in FIG. 5B) may be positioned on the vehicle 500];
extracting second image information about the one or more other road participants from the first image information [see para: 0067; Front-facing cameras may be used to perform many of the same ADAS functions as LIDAR, including emergency braking, pedestrian detection, and collision avoidance];
generating the ground truth for the one or more other road participants based at least on the second image information [see para: 0029; As a further example, where the DNN 116 is trained to generate outputs corresponding to intersection (e.g., bounding shape vertices corresponding to a bounding shape encompassing an intersection), the correlator 110 may determine each of the features of the intersection (e.g., traffic lights, traffic signs, labels or markings on the driving surface, etc.) that correspond to an intersection such that the ground truth generator 112 may generate bounding shapes that encompass each of the features (e.g., similar to visualization 240 of FIG. 2B)]; and
validating using the ground truth [see para: 0021; The process 100 may include generating and/or receiving sensor data 102 from one or more sensors of data collection vehicles 500 (which may be similar to the vehicle 500, or may include non-autonomous or semi-autonomous vehicles). The sensor data 102 may be used within the process 100 for localization, correlation, and ground truth generation, as well as for input data for a deep neural network (DNN) 116. The sensor data 102 may include, without limitation, sensor data 102 from any type of sensors, such as but not limited to those described herein with respect to the vehicle 500 and/or other vehicles].
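For illustration only, and not as part of Bojarski or of the claimed invention, the following Python sketch shows one way that generated ground-truth bounding boxes could be used to validate detections of other road participants; the box format and the names iou and validate_detections are hypothetical assumptions, not drawn from the record.

# Illustrative sketch only; not the method of Bojarski or the claims.
# Assumes axis-aligned boxes (x1, y1, x2, y2); all names are hypothetical.
from typing import List, Tuple

Box = Tuple[float, float, float, float]

def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def validate_detections(detections: List[Box], ground_truth: List[Box],
                        threshold: float = 0.5) -> List[bool]:
    """Mark each detection as validated if it overlaps some ground-truth box."""
    return [any(iou(d, g) >= threshold for g in ground_truth) for d in detections]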
Bojarski does not explicitly disclose: “fusing the images from each of the at least two first-type cameras to form fused first image information”.
However, Wachtel, from the same or similar field of endeavor teaches: “fusing the images from each of the at least two first-type cameras to form fused first image information [see para: 0140; Each image capture device 122, 124, and 126 may be positioned at any suitable position and orientation relative to vehicle 200. The relative positioning of the image capture devices 122, 124, and 126 may be selected to aid in fusing together the information acquired from the image capture devices. For example, in some embodiments, a FOV (such as FOV 204) associated with image capture device 124 may overlap partially or fully with a FOV (such as FOV 202) associated with image capture device 122 and a FOV (such as FOV 206) associated with image capture device 126];
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system disclosed by Bojarski, in which autonomous vehicles safely navigate through the environment, to add the teachings of Wachtel as above, in order to provide a means for improving the vision system of the autonomous vehicle, in which multiple images are captured from different cameras and combined or fused by an image processing system based on the position of the vehicle [Wachtel, see para: 0140].
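For illustration only, and not as a representation of Wachtel’s method, the following Python sketch shows one way images from two vehicle-mounted cameras with overlapping fields of view might be fused into a single view; the homographies H_left and H_right (assumed known from calibration), the output size, and the three-channel image format are hypothetical assumptions.

# Illustrative sketch only; not the fusion method of Wachtel.
# Assumes three-channel images and that each camera's homography to a common
# ground-plane view is known from calibration; names are hypothetical.
import cv2
import numpy as np

def fuse_two_views(img_left: np.ndarray, img_right: np.ndarray,
                   H_left: np.ndarray, H_right: np.ndarray,
                   out_size=(800, 600)) -> np.ndarray:
    """Warp both camera images into one common plane and average the overlap."""
    warp_l = cv2.warpPerspective(img_left, H_left, out_size).astype(np.float32)
    warp_r = cv2.warpPerspective(img_right, H_right, out_size).astype(np.float32)
    # Masks of valid (non-black) pixels contributed by each camera.
    mask_l = (warp_l.sum(axis=2, keepdims=True) > 0).astype(np.float32)
    mask_r = (warp_r.sum(axis=2, keepdims=True) > 0).astype(np.float32)
    weight = np.clip(mask_l + mask_r, 1.0, None)   # avoid division by zero
    fused = (warp_l * mask_l + warp_r * mask_r) / weight
    return fused.astype(np.uint8)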
Regarding claim 2, Bojarski and Wachtel disclose all the limitations of claim 1 and are analyzed as previously discussed with respect to that claim.
Furthermore, Bojarski discloses: “wherein the ground truth is further used to validate a further sensed value about the one or more other road participants collected by other types of sensors mounted on the vehicle [see para: 0021; The process 100 may include generating and/or receiving sensor data 102 from one or more sensors of data collection vehicles 500 (which may be similar to the vehicle 500, or may include non-autonomous or semi-autonomous vehicles). The sensor data 102 may be used within the process 100 for localization, correlation, and ground truth generation, as well as for input data for a deep neural network (DNN) 116. The sensor data 102 may include, without limitation, sensor data 102 from any type of sensors, such as but not limited to those described herein with respect to the vehicle 500 and/or other vehicles. And see para: 0003; In such instances, conventional approaches leverage on-board sensors of the vehicles—such as vision sensors (e.g., cameras, LIDAR, RADAR, etc.)—to detect objects, road features (e.g., lane markings, road edges, etc.), free-space boundaries, wait condition information, intersection structure and pose, and/or the like].
Regarding claim 3, Bojarski and Wachtel disclose all the limitations of claim 1 and are analyzed as previously discussed with respect to that claim.
Furthermore, Bojarski discloses: “further comprising: performing a coordinate system conversion on the second image information to form converted second image information [see para: 0026; In some non-limiting embodiments, the sensor data 102 and/or the information from the HD map 104 may be applied to a coordinate transformer to transform the sensor data 102 and/or the HD map 104 to a coordinate system of the vehicle 500 and/or to transform the map data from the HD map 104 to a coordinate space of the sensor data 102 (e.g., to transform the 3D world-space map data to 2D image- or sensor-space)], and
generating the ground truth based at least on the converted second image information [see para: 0027; In some embodiments, the coordinate transformer 108 may shift the perspective of the map data with respect to a location and/or orientation of the data collection vehicle 500 and/or a sensor thereof. As such, the portion of the HD map 104 that may be used by the ground truth generator 112 to generate ground truth data may be shifted relative to the vehicle 500 (e.g., with the data collection vehicle 500 at the center, at (x, y) coordinates of (0, 0), where y is a longitudinal dimension extending from front to rear of the vehicle and x is a lateral dimension perpendicular to y and extending from left to right of the vehicle) and/or a sensor thereof (e.g., to a field of view or sensory field of the sensor that generated the instance of the sensor data 102 corresponding to the HD map 104)].
Regarding claim 4, Bojarski and Wachtel disclose all the limitations of claim 1 and are analyzed as previously discussed with respect to that claim.
Furthermore, Bojarski discloses: “wherein, in the coordinate system conversion, the second image information is converted from a two-dimensional image coordinate system to a three-dimensional vehicle coordinate system [see para: 0028; In addition to or alternatively from the coordinate transformer 108 shifting or transforming the coordinate system of the HD map 104 to that of the vehicle 500 and/or a sensor thereof, the coordinate transformer 108 may, in some embodiments, shift or transform the map data to a coordinate system or dimension of the sensor data 102. For example, where the DNN 116 is trained to compute outputs 118 in 2D image-space, the map data may be transformed or shifted from 2D or 3D world-space coordinates to 2D image- or sensor-space coordinates].
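For illustration only, and not as a representation of the transform described in Bojarski at paragraphs 0026 – 0028, the following Python sketch shows a conventional back-projection of a 2D image point into a 3D vehicle coordinate system; the availability of a per-pixel depth, the intrinsic matrix K, and the camera-to-vehicle extrinsics (R, t) are hypothetical assumptions.

# Illustrative sketch only; not the coordinate transformer of Bojarski.
# Assumes the pixel depth, camera intrinsics K, and camera-to-vehicle
# extrinsics (R, t) are known; all variable names are hypothetical.
import numpy as np

def pixel_to_vehicle(u: float, v: float, depth: float,
                     K: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Back-project a 2D image point to a 3D point in the vehicle frame."""
    # Ray in camera coordinates, scaled by the known depth.
    p_cam = depth * (np.linalg.inv(K) @ np.array([u, v, 1.0]))
    # Rigid transform from the camera frame to the vehicle frame.
    return R @ p_cam + t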
Claim 5, Cancelled.
Regarding claim 6, Bojarski and Wachtel disclose all the limitations of claim 1 and are analyzed as previously discussed with respect to that claim.
Bojarski discloses: “wherein:
the positioning information is generated using data from a global positioning system of the vehicle [see para: 0060; The controller(s) 536 may provide the signals for controlling one or more components and/or systems of the vehicle 500 in response to sensor data received from one or more sensors (e.g., sensor inputs). The sensor data may be received from, for example and without limitation, global navigation satellite systems sensor(s) 558 (e.g., Global Positioning System sensor(s))].
Bojarski does not explicitly disclose: “the images from each of the at least two first-type cameras includes a first image extracted at a first time and a second image extracted at a second time that is different from the first time,
positioning information of the vehicle is determined at the first time and at the second time,
fusing the images includes fusing the first image with the second image based on the positioning information of the vehicle”.
However, Wachtel, from the same or similar field of endeavor teaches: “the images from each of the at least two first-type cameras includes a first image extracted at a first time and a second image extracted at a second time that is different from the first time [see para: 0119; Image capture devices 122, 124, and 126 may each include any type of device suitable for capturing at least one image from an environment. Moreover, any number of image capture devices may be used to acquire images for input to the image processor. Some embodiments may include only a single image capture device, while other embodiments may include two, three, or even four or more image capture devices. Image capture devices 122, 124, and 126 will be further described with reference to FIGS. 2B-2E, below. And see para: 0197; Processing unit 110 may additionally compare the curvature of the snail trail (associated with the leading vehicle) with the expected curvature of the road segment in which the leading vehicle is traveling. The expected curvature may be extracted from map data (e.g., data from map database 160), from road polynomials, from other vehicles' snail trails, from prior knowledge about the road, and the like. And see para: 0282; Data (e.g., reconstructed trajectories) collected by multiple vehicles in multiple drives along a road segment at different times may be used to construct the road model (e.g., including the target trajectories, etc.) included in sparse data map 800. Data collected by multiple vehicles in multiple drives along a road segment at different times may also be averaged to increase an accuracy of the model. In some embodiments, data regarding the road geometry and/or landmarks may be received from multiple vehicles that travel through the common road segment at different times],
positioning information of the vehicle is determined at the first time and at the second time [see para: 0283; In some embodiments, a location is identified in each frame or image that is a few meters ahead of the current position of the vehicle. This location is where the vehicle is expected to travel to in a predetermined time period],
fusing the images includes fusing the first image with the second image based on the positioning information of the vehicle [see para: 0140; Each image capture device 122, 124, and 126 may be positioned at any suitable position and orientation relative to vehicle 200. The relative positioning of the image capture devices 122, 124, and 126 may be selected to aid in fusing together the information acquired from the image capture devices. For example, in some embodiments, a FOV (such as FOV 204) associated with image capture device 124 may overlap partially or fully with a FOV (such as FOV 202) associated with image capture device 122 and a FOV (such as FOV 206) associated with image capture device 126], and
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system disclosed by Bojarski, in which autonomous vehicles safely navigate through the environment, to add the teachings of Wachtel as above, in order to provide a means for improving the vision system of the autonomous vehicle, in which multiple images are captured from different cameras at different angles and at different times and are combined based on the global position and output to the driver or user [Wachtel, see para: 0119; 0197; 0282; 0283; 0140].
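For illustration only, and not as a representation of Wachtel’s method, the following Python sketch shows one way two top-down images captured at different times might be aligned using the vehicle displacement reported by a global positioning system and then fused; the top-down image format, the pixels-per-meter resolution, and the name fuse_over_time are hypothetical assumptions.

# Illustrative sketch only; not the fusion method of Wachtel.
# Assumes both images are top-down views at a known resolution (pixels per
# meter) and that GPS gives the vehicle displacement between the two capture
# times; function and parameter names are hypothetical.
import cv2
import numpy as np

def fuse_over_time(img_t1: np.ndarray, img_t2: np.ndarray,
                   dx_m: float, dy_m: float, px_per_m: float) -> np.ndarray:
    """Shift the earlier top-down image by the vehicle motion, then average."""
    h, w = img_t2.shape[:2]
    # Translation (in pixels) that aligns the t1 view with the t2 view.
    M = np.float32([[1, 0, -dx_m * px_per_m],
                    [0, 1, -dy_m * px_per_m]])
    aligned_t1 = cv2.warpAffine(img_t1, M, (w, h))
    # Simple fusion: average the two aligned views.
    fused = (aligned_t1.astype(np.float32) + img_t2.astype(np.float32)) / 2.0
    return fused.astype(np.uint8)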
Regarding claim 7, Bojarski and Wachtel disclose all the limitations of claim 1 and are analyzed as previously discussed with respect to that claim.
Furthermore, Bojarski discloses: “wherein the ground truth is generated based on the second image information and third image information about the one or more other road participants from other types of sensors [see para: 0029; As a further example, where the DNN 116 is trained to generate outputs corresponding to intersection (e.g., bounding shape vertices corresponding to a bounding shape encompassing an intersection), the correlator 110 may determine each of the features of the intersection (e.g., traffic lights, traffic signs, labels or markings on the driving surface, etc.) that correspond to an intersection such that the ground truth generator 112 may generate bounding shapes that encompass each of the features (e.g., similar to visualization 240 of FIG. 2B)].
Regarding claim 8, Bojarski and Wachtel disclose all the limitations of claim 1 and are analyzed as previously discussed with respect to that claim.
Furthermore, Bojarski discloses: “further comprising: generating a ground truth for a relative position between the one or more other road participants and driving boundaries [see para: 0029; As a further example, where the DNN 116 is trained to generate outputs corresponding to intersection (e.g., bounding shape vertices corresponding to a bounding shape encompassing an intersection), the correlator 110 may determine each of the features of the intersection (e.g., traffic lights, traffic signs, labels or markings on the driving surface, etc.) that correspond to an intersection such that the ground truth generator 112 may generate bounding shapes that encompass each of the features (e.g., similar to visualization 240 of FIG. 2B). And see para: 0024; The HD map 104 may represent lanes, road boundaries, road shape, elevation, slope, and/or contour, heading information, wait conditions, static object locations, and/or other information].
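For illustration only, and not as a representation of Bojarski’s ground-truth generator, the following Python sketch shows one way a relative position between a road participant and a driving boundary could be computed, assuming the boundary is represented as a polyline of (x, y) points in the vehicle frame; all names are hypothetical.

# Illustrative sketch only; not the method of Bojarski or the claims.
# Assumes the driving boundary is a polyline of (x, y) points in the vehicle
# frame; names are hypothetical.
import numpy as np

def distance_to_boundary(point: np.ndarray, boundary: np.ndarray) -> float:
    """Shortest distance from a road participant's position to a boundary polyline."""
    best = float("inf")
    for a, b in zip(boundary[:-1], boundary[1:]):
        ab = b - a
        # Projection of the point onto segment ab, clamped to the segment.
        s = np.clip(np.dot(point - a, ab) / np.dot(ab, ab), 0.0, 1.0)
        best = min(best, float(np.linalg.norm(point - (a + s * ab))))
    return best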
Regarding claim 9, claim 9 is rejected under the same art and evidentiary limitations as determined for the method of claim 1.
Regarding claim 10, claim 10 is rejected under the same art and evidentiary limitations as determined for the method of claim 2.
Regarding claim 11, claim 11 is rejected under the same art and evidentiary limitations as determined for the method of claim 3.
Regarding claim 12, claim 12 is rejected under the same art and evidentiary limitations as determined for the method of claim 4.
Claim 13, Cancelled.
Regarding claim 14, claim 14 is rejected under the same art and evidentiary limitations as determined for the method of claim 6.
Regarding claim 15, claim 15 is rejected under the same art and evidentiary limitations as determined for the method of claim 7.
Regarding claim 16, claim 16 is rejected under the same art and evidentiary limitations as determined for the method of claim 8.
Regarding claims 17 and 19, claims 17 and 19 are rejected under the same art and evidentiary limitations as determined for the method of claim 1.
Claim 18, Cancelled.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Ferencz et al. (WO 2022/147274 A1).
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Masum Billah whose telephone number is (571) 270-0701. The examiner can normally be reached Monday – Friday, 9 AM – 5 PM ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jamie J. Atala can be reached at (571) 272-7384. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MASUM BILLAH/Primary Patent Examiner, Art Unit 2486