DETAILED ACTION
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
Claims 8-9 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
With respect to claim 8, the broadest reasonable interpretation (BRI) of the metes and bounds of what is and is not required by the limitation:
“the function of the driver assistance system of the ego vehicle is performed based on the selected target object in different ways, taking into account:
(i) a driving situation and/or a driving environment and/or a traffic situation; and/or
(ii) an ascertained object class of the target object; and/or
(iii) a defined requirement”
is unclear and indefinite. For example, it is unclear what is meant by the driver assistance function being performed in “different ways”: does this require a single driver assistance (DA) function, such as lane keep assist, to be performed differently depending on, e.g., whether the target object is in a first class rather than a second class? Does it mean that a different function will be used depending on the object class? Or does it have some other meaning? In addition, the limitation “defined requirement” is unclear, since each of the factors in (i) and (ii) is also a defined requirement, such that it is unclear what is meant by “or a defined requirement”.
With respect to claim 9, the BRI of the metes and bounds of what is and is not required by the limitation “a performance of the function of the driver assistance system of the ego vehicle is defined based on the selected target object in a native measurement space of the image sensor” is unclear and indefinite. It is generally unclear what it means for a performance of a function to be “defined.” Does this mean the performance is selected? The time at which the function is executed? The manner in which it is executed?
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-11 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by U.S. Patent Application Publication No. 2021/0042535 to Abbot et al. (Abbot).
With respect to claims 1 and 10-11, Abbot discloses a method for selecting a target object for performing a function of a driver assistance system of an ego vehicle taking into account the target object, the method comprising the following steps:
reading in image data of surroundings of the ego vehicle generated using an image sensor; and
(sensor data 102, FIG. 1 and corresponding description)
selecting a foreign object, detected using the image data, as a target object, taking into account at least:
(¶ 27 object detection input may include a bounding shape, such as a box, that corresponds to an object, e.g., bounding shape 204, FIG. 2A; ¶ 41 “FIG. 2A, a bounding shape 204 may be generated for a vehicle 202—using object detection 104—to identify a portion of an image that corresponds to the vehicle 202”)
a first relevance of the foreign object in relation to a lane of the ego vehicle, and
(FIG. 7A-8 and corresponding description; ¶¶ 30, 32, 46 “pixels within the object fence(s) 110 may be determined to correspond to the object for lane assignment. The object fence(s) 110 may be generated prior to lane assignment 120—using any method, such as those described herein—to improve accuracy and reliability of lane assignment predictions by more closely defining the shape or footprint of the object on a driving surface. For example, once the object fences 110 are determined, the object fences 110 may be used—in combination with the lane masks 116—to make an overlap determination 118 for lane assignment 120.”; ¶¶ 57-61, 73-74, 82-83, 89-97; claims 1-7, 10-11, 18)
(FIG. 1, overlap determination 118, boundary scoring 134, lane assignment 120)
a second relevance of the foreign object in relation to a predicted trajectory of the ego vehicle.
(¶¶ 29, 47, 60-62)
(¶¶ 61-62 “For example, because lane assignments may be more accurate when an object is closer to a sensor(s) of vehicle 900, as the object is moving further away, the prior lane assignment 120 may be leveraged to provide more accurate predictions of the object location and lane assignment at further distances—e.g., where the lanes appear to merge in image space. In such examples, a prior prediction(s) may be weighted with respect to current predictions of lane assignment 120 (e.g., 90% for current prediction/10% for prior prediction(s), 70% for current prediction/20% for immediately preceding prediction/10% for two predictions prior, etc.). Temporal smoothing may thus leverage prior predictions to improve the accuracy of the lane assignments 120 within the process 100”; i.e., first relevance is more important and second relevance is less so, and the relative first and second relevance weightings are used to determine predictions of lane assignment)
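For illustration of the weighted temporal smoothing quoted above from Abbot ¶¶ 61-62 (e.g., 90%/10% or 70%/20%/10% weightings of current and prior predictions), the following is a minimal sketch that blends a current lane-assignment prediction with prior predictions. The function and variable names are hypothetical, and the sketch is not asserted to be Abbot's actual implementation.

# Illustrative sketch only (hypothetical names): weighted temporal smoothing of
# lane-assignment scores, as described qualitatively in Abbot ¶¶ 61-62.
# A lane-assignment prediction is represented as a dict mapping lane id to a
# confidence score in [0, 1].

def smooth_lane_assignment(current, priors, weights=(0.7, 0.2, 0.1)):
    """Blend the current prediction with prior predictions.

    current -- dict of {lane_id: score} for the current frame
    priors  -- list of dicts for preceding frames, most recent first
    weights -- weight for the current prediction followed by weights for each
               prior prediction (e.g., 90%/10% or 70%/20%/10%)
    """
    predictions = [current] + list(priors)
    used = predictions[:len(weights)]
    total = sum(weights[:len(used)])

    smoothed = {}
    for weight, prediction in zip(weights, used):
        for lane_id, score in prediction.items():
            smoothed[lane_id] = smoothed.get(lane_id, 0.0) + (weight / total) * score
    return smoothed


# Example: a distant object whose current prediction is noisy; the prior frames
# pull the smoothed estimate back toward the ego lane.
current = {"ego_lane": 0.55, "left_lane": 0.45}
priors = [{"ego_lane": 0.80, "left_lane": 0.20}, {"ego_lane": 0.85, "left_lane": 0.15}]
print(smooth_lane_assignment(current, priors))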
With respect to claim 2, Abbot discloses that the first relevance of the foreign object is ascertained taking into account an overlap of the foreign object and the lane of the ego vehicle.
(FIG. 7A-8 and corresponding description; ¶¶ 30, 32, 46, as quoted above with respect to claim 1 (overlap determination 118 made using the object fences 110 in combination with the lane masks 116 for lane assignment 120); ¶¶ 57-61, 73-74, 82-83, 89-97; claims 1-7, 10-11, 18)
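For illustration of the overlap determination 118 described in the passages cited above (object-fence pixels compared against lane masks 116 for lane assignment 120), the following minimal sketch computes, for each lane, the fraction of object-fence pixels falling inside that lane's mask. The names and the 50% threshold are hypothetical; this is not asserted to be Abbot's actual implementation.

import numpy as np

# Illustrative sketch only (hypothetical names): pixel-overlap determination between
# an object "fence" mask and per-lane masks, in the spirit of the overlap
# determination 118 / lane assignment 120 cited above.

def lane_overlap_ratios(object_fence, lane_masks):
    """Return the fraction of object-fence pixels lying inside each lane mask.

    object_fence -- boolean array (H, W), True where the object footprint lies
    lane_masks   -- dict of {lane_id: boolean array (H, W)}
    """
    fence_pixels = object_fence.sum()
    if fence_pixels == 0:
        return {lane_id: 0.0 for lane_id in lane_masks}
    return {
        lane_id: float(np.logical_and(object_fence, mask).sum()) / float(fence_pixels)
        for lane_id, mask in lane_masks.items()
    }


def assign_lane(object_fence, lane_masks, min_overlap=0.5):
    """Assign the lane with the largest overlap ratio, if it exceeds a threshold."""
    ratios = lane_overlap_ratios(object_fence, lane_masks)
    lane_id, ratio = max(ratios.items(), key=lambda item: item[1])
    return lane_id if ratio >= min_overlap else None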
With respect to claim 3, Abbot discloses that image data in a native measurement space of the image sensor are used to ascertain the second relevance.
(FIG. 1, sensor data 102, object detection 104, free space detection 106, fence generation 108, object fence 110; FIG. 5, 502-506; FIG. 3, 302-304; ¶ 47 once the object fence 110 is determined, the sensor data 102-representative of speed, velocity, acceleration, yaw rate, etc.-may be used to determine a future path or trajectory of the objects in the environment to determine one or more future locations (e.g., 0.5 seconds in the future, 1 second in the future, etc.).)
With respect to claim 4, Abbot discloses wherein, to ascertain the second relevance:
a bounding box is assigned to the foreign object in a native measurement space of the image sensor, and/or
the predicted trajectory is defined in a native measurement space of the image sensor.
(¶ 41 bounding shaped output using object detection . . . bounding shape 204 may be generated for a vehicle 202-using object detection 104-to identify a portion of an image that corresponds to the vehicle 202)
(See the citations set forth above with respect to claim 3: FIG. 1, sensor data 102, object detection 104, free space detection 106, fence generation 108, object fence 110; FIG. 5, 502-506; FIG. 3, 302-304; ¶ 47 (future path or trajectory of objects determined from sensor data 102 representative of speed, velocity, acceleration, yaw rate, etc.))
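For illustration of using ego-motion sensor data (speed, yaw rate, etc.) to determine a future location 0.5 s or 1 s ahead, as in the ¶ 47 passage cited above, the following minimal sketch applies a simple constant-velocity, constant-turn-rate motion model. The model choice and names are hypothetical; this is not asserted to be Abbot's actual implementation.

import math

# Illustrative sketch only (hypothetical names): predicting a future position from
# speed and yaw rate, in the spirit of Abbot ¶ 47 ("0.5 seconds in the future,
# 1 second in the future, etc."), using a constant-turn-rate, constant-velocity model.

def predict_position(x, y, heading, speed, yaw_rate, dt):
    """Return (x, y, heading) after dt seconds."""
    if abs(yaw_rate) < 1e-6:
        # Effectively straight-line motion.
        return (x + speed * dt * math.cos(heading),
                y + speed * dt * math.sin(heading),
                heading)
    # Constant turn rate: integrate the circular arc analytically.
    new_heading = heading + yaw_rate * dt
    radius = speed / yaw_rate
    return (x + radius * (math.sin(new_heading) - math.sin(heading)),
            y - radius * (math.cos(new_heading) - math.cos(heading)),
            new_heading)


# Example: 20 m/s with a gentle left turn (0.1 rad/s), predicted 0.5 s and 1 s ahead.
for dt in (0.5, 1.0):
    print(dt, predict_position(0.0, 0.0, 0.0, 20.0, 0.1, dt))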
With respect to claim 5, Abbot discloses wherein the second relevance of the foreign object is ascertained, taking into account a geometric variable between a bounding box assigned to the foreign object and the predicted trajectory of the ego vehicle.
(¶ 41 bounding shaped output using object detection . . . bounding shape 204 may be generated for a vehicle 202—using object detection 104—to identify a portion of an image that corresponds to the vehicle 202 . . . Once the future location(s) are known, the object fence 110 may be generated (e.g., in image space) using the future location and the object fence 110 information . . . scaling factor may be used as the future location of the object changes with respect to the location of the vehicle 900 (e.g., as an object moves further away, the object fence 110 may be decreased from a current size, as an object moves closer as a result of slowing down, for example, the object fence 110 may be increased in size, and so on). This information may then be used to inform the vehicle 900 of lanes or other portions of the environment where the objects may be located at a future time to aid in trajectory or path planning, obstacle avoidance, and/or other operations of the vehicle)
With respect to claim 6, Abbot discloses that the second relevance of the foreign object is ascertained, taking into account a horizontal distance between the bounding box assigned to the foreign object and the predicted trajectory of the ego vehicle.
(¶ 41, as quoted above with respect to claim 5; ¶¶ 32, 59-60, 62, 89, 113, 144, 175; FIG. 7A-7B; 806-812, FIG. 8)
With respect to claim 7, Abbot discloses that the second relevance of the foreign object is ascertained, taking into account a temporal change in a horizontal distance between a bounding box assigned to the foreign object and the predicted trajectory of the ego vehicle.
(¶ 41, as quoted above with respect to claim 5; ¶¶ 32, 59-60, 62, 89, 113, 144, 175; FIG. 7A-7B; 806-812, FIG. 8)
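For illustration of a horizontal distance between a bounding box and the ego vehicle's predicted trajectory, and of its temporal change, as addressed by claims 5-7 and the citations above, the following minimal sketch measures that distance in image columns at the box's lower edge and tracks its change across frames. The names and the representation of the trajectory as a column-per-row function are hypothetical; this is not asserted to be Abbot's actual implementation.

# Illustrative sketch only (hypothetical names): horizontal image-space distance between
# an object's bounding box and the ego vehicle's predicted trajectory, plus its rate of
# change over consecutive frames.

def horizontal_distance(bbox, trajectory_x_at_row):
    """Horizontal pixel distance between a bounding box and the predicted trajectory.

    bbox                -- (left, top, right, bottom) in pixel coordinates
    trajectory_x_at_row -- function mapping an image row to the trajectory's column
    """
    left, top, right, bottom = bbox
    traj_x = trajectory_x_at_row(bottom)  # evaluate at the box's ground-contact row
    if left <= traj_x <= right:
        return 0.0                        # trajectory passes through the box
    return left - traj_x if traj_x < left else traj_x - right


def distance_trend(distances, dt):
    """Rate of change of the horizontal distance across consecutive frames."""
    return [(later - earlier) / dt for earlier, later in zip(distances, distances[1:])]


# Example: a box drifting toward a straight trajectory fixed at image column 640;
# the decreasing distance indicates the object closing in on the ego path.
frames = [(700, 300, 780, 380), (690, 300, 770, 380), (670, 300, 750, 380)]
dists = [horizontal_distance(box, lambda row: 640.0) for box in frames]
print(dists, distance_trend(dists, dt=0.1))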
With respect to claim 8, Abbot discloses wherein the function of the driver assistance system of the ego vehicle is performed based on the selected target object in different ways, taking into account:
(i) a driving situation and/or a driving environment and/or a traffic situation; and/or
(ii) an ascertained object class of the target object; and/or
(iii) a defined requirement.
(¶¶ 29, 32, 41, 47, 59-60, 62, 89, 109 (advanced driver assistance), 111, 113, 144, 175; 812, FIG. 8)
With respect to claim 9, Abbot discloses that a performance of the function of the driver assistance system of the ego vehicle is defined based on the selected target object in a native measurement space of the image sensor.
(See the citations set forth above with respect to claim 3: FIG. 1, sensor data 102, object detection 104, free space detection 106, fence generation 108, object fence 110; FIG. 5, 502-506; FIG. 3, 302-304; ¶ 47)
(¶ 41, as quoted above with respect to claim 5; ¶¶ 29, 32, 47, 59-60, 62, 89, 109 (advanced driver assistance), 111, 113, 144, 175; 812, FIG. 8)
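For illustration of defining a driver-assistance response directly in the image sensor's native measurement space, as recited in claim 9, the following minimal sketch estimates a time-to-contact from the frame-to-frame growth of a target's bounding-box width, in pixels, without converting to metric distance. This is offered only as an editorial illustration of operating in image space; the approach and names are hypothetical and are not asserted to be Abbot's implementation.

# Illustrative sketch only (hypothetical names): a hypothetical warning function defined
# entirely in image space, using time-to-contact estimated from bounding-box scale change.

def time_to_contact(width_prev, width_curr, dt):
    """Approximate time-to-contact in seconds from bounding-box width growth.

    For an approaching object the apparent width grows roughly as 1/range, so
    tau = width / (d width / d t), evaluated entirely in pixels.
    """
    growth_rate = (width_curr - width_prev) / dt
    if growth_rate <= 0:
        return float("inf")  # not approaching
    return width_curr / growth_rate


def warn_if_closing(width_prev, width_curr, dt, tau_threshold=2.0):
    """Trigger a hypothetical warning when time-to-contact drops below a threshold."""
    return time_to_contact(width_prev, width_curr, dt) < tau_threshold


# Example: the box widens from 80 to 88 pixels over 0.1 s, so tau = 1.1 s < 2.0 s.
print(warn_if_closing(80.0, 88.0, 0.1))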
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KENNETH J MALKOWSKI whose telephone number is (313)446-4854. The examiner can normally be reached 8:00 AM - 5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Faris Almatrahi, can be reached at 313-446-4821. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KENNETH J MALKOWSKI/Primary Examiner, Art Unit 3667