DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Response to Arguments
Applicant’s amendment filed on December 17, 2025, is acknowledged. Claims 1-20 are currently pending; Claims 1, 3, 4, 8, 11, 12, 14, 15, and 19 are amended.
Applicant's arguments with respect to independent claims 1 and 12 have been considered but are moot in view of the new ground(s) of rejection. Amended claims 1 and 12 have a different scope than the originally presented claims 1 and 12, respectively.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Kocamaz et al. (US 2023/0099494), hereinafter referred to as Kocamaz, in view of Halfaoui et al. (US 2022/0180646), hereinafter referred to as Halfaoui, and further in view of Alalao (US 2022/0410710).
As per Claim 1, Kocamaz teaches a method for managing location information in automated vehicles, the method comprising:
obtaining, by a processor of the autonomous vehicle, image data from a camera on the autonomous vehicle, the image data includes a digital representation of imagery in a field-of-view of the camera including an operational environment with one or more objects and a roadway having one or more driving lanes; (Kocamaz, Figure 8A, Paragraph [0027], “The process 100 may include generating and/or receiving sensor data 102 from one or more sensors. The sensor data 102 may be received, as a non-limiting example, from one or more sensors of a vehicle (e.g., vehicle 800 of FIGS. 8A-8D and described herein). The sensor data 102 may include, without limitation, sensor data 102 from any of the sensors of the vehicle including, for example and with reference to FIGS. 8A-8D” and Paragraph [0008], Figure 2A)
for each driving lane of the one or more lanes, applying, by the processor, to the image data a lane label associated with the particular lane; determining, by the processor, the driving lane of the one or more driving lanes containing the object; and (Kocamaz, Paragraph [0034], “While FIG. 2A depicts the objects as vehicles, it is not intended to be limiting, as any detected object, such as structures, animals, pedestrians, etc. are contemplated herein. The image 200A may comprise one or more lanes corresponding to the driving surfaces represented in the image 200A. For example, lanes 204, 206, 208, and 210 are depicted in image 200A and may correspond to a location relative to reference point, such as an ego-vehicle associated with the sensor data 102. From the perspective of an ego-vehicle, the lanes 204, 206, 208, and 210 may correspond to an ego-lane, a left of ego-lane, a right of ego-lane, and other lane. For instance, if an ego-vehicle is positioned in lane 204, lane 204 may be associated with the ego-lane, lane 206 may correspond to a left of ego-lane, lane 208 may correspond to a right of ego-lane, and lane 210 may correspond to an “other” lane label” and Kocamaz, Paragraph [0066], “FIG. 5A illustrates a visualization 520A of the image 510A with corresponding bounding shapes 504A, 504B, 504C, and 504D as determined using the machine learning model(s) 104 and/or the object detector 404. The visualization 530A corresponds to a combined output segmentation mask (corresponding to output masks 106) indicating pixel confidences for different object class/lane identifier combinations. In this example, the visualization 530A includes four classifications of object class/lane identifier for the pixels of the corresponding input image: pixel group 506A corresponds to an assignment of vehicles to the left of ego-lane, pixel group 506B corresponds to an assignment of vehicles to the ego-lane, pixel group 506C corresponds to an assignment of vehicles to the right of ego-lane, and pixel group 506D corresponds to an assignment of vehicles to other lanes. As such, during post-processing, the pixels for each output mask 106 (represented as a combined visual in the visualization 530A) within a given bounding shape may be analyzed to determine the object class/lane identifier combination that is most heavily represented within the bounding shape. For example, for bounding shape 504C, the most pixels within the bounding shape 504C may correspond to the pixel group 506C, and thus the object corresponding to the bounding shape 504C may be assigned a vehicle class label positioned in the right of ego-lane.”)
updating, by the processor, the image data by applying an object label indicating the lane for the driving lane having the object. (Kocamaz, Paragraph [0067], “For example, for bounding shape 514A, the most pixels within the bounding shape 514A may correspond to the pixel group 516A, and thus the object corresponding to the bounding shape 514A may be assigned a vehicle class label positioned in the ego-lane” and Paragraph [0066], “For example, for bounding shape 504C, the most pixels within the bounding shape 504C may correspond to the pixel group 506C, and thus the object corresponding to the bounding shape 504C may be assigned a vehicle class label positioned in the right of ego-lane.”)
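For illustration only (this sketch is not part of the record and is not taken from any cited reference), the majority-vote assignment Kocamaz describes in Paragraphs [0066]-[0067] can be pictured in Python as follows; the label codes, array shapes, and function names are assumed:

```python
import numpy as np

# Assumed lane-identifier codes for the combined segmentation mask
# (hypothetical values; Kocamaz's actual encoding is not specified).
LANE_LABELS = {0: "background", 1: "left-of-ego", 2: "ego", 3: "right-of-ego", 4: "other"}

def assign_lane(mask: np.ndarray, box: tuple) -> str:
    """Assign the lane identifier most represented inside a bounding shape.

    mask -- HxW array of per-pixel lane-identifier codes
    box  -- (x1, y1, x2, y2) bounding shape in pixel coordinates
    """
    x1, y1, x2, y2 = box
    region = mask[y1:y2, x1:x2]
    codes, counts = np.unique(region[region != 0], return_counts=True)
    if codes.size == 0:
        return "unassigned"  # no labeled pixels fell inside the box
    return LANE_LABELS[int(codes[np.argmax(counts)])]

# Example: a vehicle whose mask pixels mostly carry the "right-of-ego" code
mask = np.zeros((8, 8), dtype=int)
mask[2:6, 4:8] = 3
print(assign_lane(mask, (4, 2, 8, 6)))  # -> "right-of-ego"
```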
Kocamaz does not explicitly teach a lane index value.
Halfaoui teaches a lane index value (Halfaoui, Figure 2, Paragraph [0074], “In processing block 103 of FIG. 1, the processing circuitry of the lane detection system 100 is configured to obtain or capture the visual representation of the driving scene. To this end, in an embodiment, the lane detection system 100 can comprise or be connected to a camera implemented in the vehicle and configured to capture the image of the multi-lane road, i.e. the driving scene. Together with the input image obtained in processing block 103 of FIG. 1, ground-truth labels of the correct lane ID, i.e. the current lane number, and lane count are obtained by the processing circuitry of the lane detection system 100 in processing block 101 of FIG. 1. By way of example, in the processing block 101 shown in FIG. 1 the correct left ID is obtained by starting to count from the leftmost lane of the multi-lane road, which using this “left convention” is lane number 1. As will be appreciated, however, processing block 101 likewise could obtain the correct right ID, which is based on the rightmost lane of the multi-lane road”)
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the teachings of Halfaoui into Kocamaz because an alternate scheme that indicates lanes with numbers instead of phrases provides a quick means of identifying lanes in a roadway for further processing.
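Purely to illustrate the numeric indexing point (this sketch is not part of the record), the “left convention” of Halfaoui's Paragraph [0074] and its relation to Kocamaz's relative lane labels might look as follows in Python; the mapping logic and all names are assumed, not drawn from either reference:

```python
def lane_index_left_convention(lane_position: int) -> int:
    """Left convention per Halfaoui: count from the leftmost lane, which is lane 1.
    lane_position is a 0-based position from the left edge of the roadway."""
    return lane_position + 1

def relative_lane_label(lane_index: int, ego_lane_index: int) -> str:
    """Map a numeric lane index to a relative label of the kind Kocamaz uses."""
    offset = lane_index - ego_lane_index
    if offset == 0:
        return "ego-lane"
    if offset == -1:
        return "left of ego-lane"
    if offset == 1:
        return "right of ego-lane"
    return "other"

# Four-lane road, ego vehicle in lane 2 (left convention):
for idx in range(1, 5):
    print(idx, relative_lane_label(idx, ego_lane_index=2))
# 1 left of ego-lane / 2 ego-lane / 3 right of ego-lane / 4 other
```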
Kocamaz in view of Halfaoui does not explicitly teach applying a label overlaid onto the digital representation of imagery indicating the lane index value for the driving lane.
Alalao teaches applying a label overlaid onto the digital representation of imagery for the driving lane (Alalao, Figure 19, object labeled on a digital representation)
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the teachings of Alalao into Kocamaz in view of Halfaoui because graphically displaying information about the object allows the user to see the data and its indicators. Although Alalao displays only the type of object, Alalao is relied upon to teach the ability to display information stored about an object on a display. One of ordinary skill in the art before the effective filing date would have been able to modify Alalao to display other or additional information of Kocamaz in view of Halfaoui in order to present more relevant data to the user.
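As a hedged illustration (not part of the record) of overlaying a textual label onto a digital representation of imagery in the manner of Alalao's Figure 19, one could use OpenCV; the label content (an object class plus a lane index value, per the combination), the coordinates, and the file name are all assumed:

```python
import cv2
import numpy as np

# Hypothetical frame and detection; real inputs would come from the camera
# and from the object/lane assignment steps discussed above.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
box = (300, 200, 420, 300)           # (x1, y1, x2, y2) for a detected vehicle
label = "vehicle | lane 3"           # object class plus lane index value

# Draw the bounding shape and overlay the label just above it.
x1, y1, x2, y2 = box
cv2.rectangle(frame, (x1, y1), (x2, y2), color=(0, 255, 0), thickness=2)
cv2.putText(frame, label, (x1, y1 - 8), cv2.FONT_HERSHEY_SIMPLEX,
            fontScale=0.5, color=(0, 255, 0), thickness=1)
cv2.imwrite("labeled_frame.png", frame)
```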
Therefore, it would have been obvious to one of ordinary skill in the art to combine the three references to obtain the invention of Claim 1.
As per Claim 2, Kocamaz in view of Halfaoui and Alalao teaches the method according to claim 1, further comprising executing, by the processor, one or more driving operations based upon the object label and each lane label. (Kocamaz, Figure 7, B712, Paragraph [0080], [0086], “block B712, includes performing, using the one or more processing units, one or more operation for an autonomous machine based at least in part on the assignment of the first combination of object class and lane identifier to the object. For example, based on the object to lane assignments 408, the control component(s) 410 may perform or more operations associated with an autonomous machine”)
The rationale applied to the rejection of claim 1 has been incorporated herein.
As per Claim 3, Kocamaz in view of Halfaoui and Alalao teaches the method according to claim 1, wherein the lane index value of the lane label represents the lane of a number of lanes from a leftmost or rightmost lane to the lane in which the autonomous vehicle was positioned. (Halfaoui, Figure 2, Paragraph [0074] and Kocamaz, Paragraph [0027])
The rationale applied to the rejection of claim 1 has been incorporated herein.
As per Claim 4, Kocamaz in view of Halfaoui and Alalao teaches the method according to claim 1, wherein each lane index value is a relative value relative to the driving lane having the autonomous vehicle. (Kocamaz, Paragraph [0034], “While FIG. 2A depicts the objects as vehicles, it is not intended to be limiting, as any detected object, such as structures, animals, pedestrians, etc. are contemplated herein. The image 200A may comprise one or more lanes corresponding to the driving surfaces represented in the image 200A. For example, lanes 204, 206, 208, and 210 are depicted in image 200A and may correspond to a location relative to reference point, such as an ego-vehicle associated with the sensor data 102. From the perspective of an ego-vehicle, the lanes 204, 206, 208, and 210 may correspond to an ego-lane, a left of ego-lane, a right of ego-lane, and other lane. For instance, if an ego-vehicle is positioned in lane 204, lane 204 may be associated with the ego-lane, lane 206 may correspond to a left of ego-lane, lane 208 may correspond to a right of ego-lane, and lane 210 may correspond to an “other” lane label” and Halfaoui, Figure 2, Paragraph [0074])
The rationale applied to the rejection of claim 1 has been incorporated herein.
As per Claim 5, Kocamaz in view of Halfaoui and Alalao teaches the method according to claim 1, wherein each lane index value is an absolute value relative to the roadway. (Kocamaz, Paragraph [0034], “While FIG. 2A depicts the objects as vehicles, it is not intended to be limiting, as any detected object, such as structures, animals, pedestrians, etc. are contemplated herein. The image 200A may comprise one or more lanes corresponding to the driving surfaces represented in the image 200A. For example, lanes 204, 206, 208, and 210 are depicted in image 200A and may correspond to a location relative to reference point, such as an ego-vehicle associated with the sensor data 102. From the perspective of an ego-vehicle, the lanes 204, 206, 208, and 210 may correspond to an ego-lane, a left of ego-lane, a right of ego-lane, and other lane. For instance, if an ego-vehicle is positioned in lane 204, lane 204 may be associated with the ego-lane, lane 206 may correspond to a left of ego-lane, lane 208 may correspond to a right of ego-lane, and lane 210 may correspond to an “other” lane label” and Halfaoui, Figure 2, Paragraph [0074])
The rationale applied to the rejection of claim 1 has been incorporated herein.
As per Claim 6, Kocamaz in view of Halfaoui and Alalao teaches the method according to claim 1, further comprising identifying, by the processor, an object in the operational environment by applying an image recognition engine on the image data. (Kocamaz, Paragraph [0126], “For example, according to one embodiment of the technology, the PVA is used to perform computer stereo vision. A semi-global matching-based algorithm may be used in some examples, although this is not intended to be limiting. Many applications for Level 3-5 autonomous driving require motion estimation/stereo matching on-the-fly (e.g., structure from motion, pedestrian recognition, lane detection, etc.). The PVA may perform computer stereo vision function on inputs from two monocular cameras”)
The rationale applied to the rejection of claim 1 has been incorporated herein.
As per Claim 7, Kocamaz in view of Halfaoui and Alalao teaches the method according to claim 6, wherein identifying the object includes predicting, by the processor, an object class for the object by applying an object recognition engine on a single frame of the image data. (Kocamaz, Paragraph [0126], “For example, according to one embodiment of the technology, the PVA is used to perform computer stereo vision. A semi-global matching-based algorithm may be used in some examples, although this is not intended to be limiting. Many applications for Level 3-5 autonomous driving require motion estimation/stereo matching on-the-fly (e.g., structure from motion, pedestrian recognition, lane detection, etc.). The PVA may perform computer stereo vision function on inputs from two monocular cameras”)
The rationale applied to the rejection of claim 6 has been incorporated herein.
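To make concrete what predicting an object class from a single frame of image data can look like in practice (this sketch is not part of the record and is not the references' model), a hedged sketch using torchvision's off-the-shelf pretrained Faster R-CNN detector standing in for an object recognition engine:

```python
import torch
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn, FasterRCNN_ResNet50_FPN_Weights)

# Off-the-shelf detector standing in for the object recognition engine.
weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()

frame = torch.rand(3, 480, 640)      # placeholder for a single camera frame
with torch.no_grad():
    (pred,) = model([frame])         # single-frame inference

# Each detection carries a predicted class label, score, and bounding box.
classes = weights.meta["categories"]
for box, label, score in zip(pred["boxes"], pred["labels"], pred["scores"]):
    if score > 0.5:
        print(classes[label], round(score.item(), 2), box.tolist())
```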
As per Claim 8, Kocamaz in view of Halfaoui and Alalao teaches the method according to claim 1, further comprising: determining, by the processor, an object position of the object in the image data relative to the autonomous vehicle, the object position including a predicted distance and a predicted angle relative to the autonomous vehicle; and generating, by the processor, on the image data a bounding box for the object. (Kocamaz, Paragraph [0096], “One or more stereo cameras 868 may also be included in a front-facing configuration. The stereo camera(s) 868 may include an integrated control unit comprising a scalable processing unit, which may provide a programmable logic (FPGA) and a multi-core micro-processor with an integrated CAN or Ethernet interface on a single chip. Such a unit may be used to generate a 3-D map of the vehicle's environment, including a distance estimate for all the points in the image. An alternative stereo camera(s) 868 may include a compact stereo vision sensor(s) that may include two camera lenses (one each on the left and right) and an image processing chip that may measure the distance from the vehicle to the target object and use the generated information (e.g., metadata) to activate the autonomous emergency braking and lane departure warning functions. Other types of stereo camera(s) 868 may be used in addition to, or alternatively from, those described herein”)
The rationale applied to the rejection of claim 1 has been incorporated herein.
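For background only (not part of the record), the standard stereo geometry behind distance and bearing estimates of the kind Kocamaz's stereo cameras 868 provide can be sketched as follows; the calibration and measurement values are hypothetical:

```python
import math

def stereo_distance_m(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Standard stereo depth: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

def bearing_deg(u_px: float, cx_px: float, focal_px: float) -> float:
    """Horizontal angle of a pixel column relative to the camera's optical axis."""
    return math.degrees(math.atan2(u_px - cx_px, focal_px))

# Hypothetical calibration and measurement:
f, B = 700.0, 0.12                                   # focal length (px), baseline (m)
print(stereo_distance_m(f, B, disparity_px=4.0))     # predicted distance: 21.0 m
print(bearing_deg(u_px=500.0, cx_px=320.0, focal_px=f))  # predicted angle: ~14.4 deg
```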
As per Claim 9, Kocamaz in view of Halfaoui and Alalao teaches the method according to claim 1, wherein the image data includes a single snapshot of imagery. (Kocamaz, Paragraph [0096], “One or more stereo cameras 868 may also be included in a front-facing configuration. The stereo camera(s) 868 may include an integrated control unit comprising a scalable processing unit, which may provide a programmable logic (FPGA) and a multi-core micro-processor with an integrated CAN or Ethernet interface on a single chip. Such a unit may be used to generate a 3-D map of the vehicle's environment”)
The rationale applied to the rejection of claim 1 has been incorporated herein.
As per Claim 10, Kocamaz in view of Halfaoui and Alalao teaches the method according to claim 1, wherein the processor determines the lane index value for the lane label associated with each lane based upon ground truth localization data. (Halfaoui, Paragraph [0074], “To this end, in an embodiment, the lane detection system 100 can comprise or be connected to a camera implemented in the vehicle and configured to capture the image of the multi-lane road, i.e. the driving scene. Together with the input image obtained in processing block 103 of FIG. 1, ground-truth labels of the correct lane ID, i.e. the current lane number, and lane count are obtained by the processing circuitry of the lane detection system 100 in processing block 101 of FIG. 1”)
The rationale applied to the rejection of claim 1 has been incorporated herein.
As per Claim 11, Kocamaz in view of Halfaoui and Alalao teaches the method according to claim 1, wherein the processor obtains the image data from a plurality of cameras of the autonomous vehicle. (Kocamaz, Paragraph [0096], “One or more stereo cameras 868 may also be included in a front-facing configuration. The stereo camera(s) 868 may include an integrated control unit comprising a scalable processing unit, which may provide a programmable logic (FPGA) and a multi-core micro-processor with an integrated CAN or Ethernet interface on a single chip. Such a unit may be used to generate a 3-D map of the vehicle's environment”)
The rationale applied to the rejection of claim 1 has been incorporated herein.
As per Claim 12, Claim 12 claims a system for managing location information in automated vehicles utilizing the method as claimed in Claim 1. Therefore, the rejection and rationale are analogous to those made in Claim 1.
As per Claim 13, Claim 13 claims the same limitation as Claim 2 and is dependent on a similarly rejected independent claim. Therefore, the rejection and rationale are analogous to those made in Claim 2.
As per Claim 14, Claim 14 claims the same limitation as Claim 3 and is dependent on a similarly rejected independent claim. Therefore, the rejection and rationale are analogous to those made in Claim 3.
As per Claim 15, Claim 15 claims the same limitation as Claim 4 and is dependent on a similarly rejected independent claim. Therefore, the rejection and rationale are analogous to those made in Claim 4.
As per Claim 16, Claim 16 claims the same limitation as Claim 5 and is dependent on a similarly rejected independent claim. Therefore, the rejection and rationale are analogous to those made in Claim 5.
As per Claim 17, Claim 17 claims the same limitation as Claim 6 and is dependent on a similarly rejected independent claim. Therefore, the rejection and rationale are analogous to those made in Claim 6.
As per Claim 18, Claim 18 claims the same limitation as Claim 7 and is dependent on a similarly rejected independent claim. Therefore, the rejection and rationale are analogous to those made in Claim 7.
As per Claim 19, Claim 19 claims the same limitation as Claim 8 and is dependent on a similarly rejected independent claim. Therefore, the rejection and rationale are analogous to those made in Claim 8.
As per Claim 20, Claim 20 claims the same limitation as Claim 10 and is dependent on a similarly rejected independent claim. Therefore, the rejection and rationale are analogous to those made in Claim 10.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MING HON whose telephone number is (571)270-5245. The examiner can normally be reached M-F 9am - 5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Emily Terrell can be reached on 571-270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MING Y HON/Primary Examiner, Art Unit 2666