DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. § 119(a)-(d). The certified copy has been filed.
Drawings
The drawings that were filed on 10/30/2024 have been considered by the examiner.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 1, 4-11, and 14-20 are rejected under 35 U.S.C. 103 as being unpatentable over Yang et al. (US 20230065727 A1), hereinafter referred to as Yang, in view of Xia et al. (US 20230072637 A1), hereinafter referred to as Xia.
Regarding Claim 1, Yang teaches an apparatus for controlling autonomous driving of a vehicle, the apparatus comprising (A controller that processes sensor information for map matching and feature extraction to control the host vehicle; [0021]):
a first sensor configured to obtain first sensor data (A first sensor as a camera to obtain data; [0044] [0045]);
a second sensor configured to obtain second sensor data (A second sensor as a radar to obtain data; [0044] [0046]);
a third sensor configured to obtain third sensor data (A third sensor as a lidar to obtain data; [0044] [0047]); and
a processor configured to: generate a probability distribution map by dividing an area into a plurality of cells, wherein the area comprises a designated angle in a designated direction from the vehicle (A probability grid map divided into cells and the sensors acquiring data within a field of view range; [0072-0074] FIG. 4); and
control the autonomous driving of the vehicle by determining, based on fusing the first probability distribution, the second probability distribution, and the third sensor data, at least one of a static obstacle or a dynamic obstacle (Fusing the updated sensor data map, generated from the camera and radar data, with lidar data to extract feature points of the road and determine moving attribute information; [0016] [0023] [0096] FIG. 8).
Yang does not explicitly teach obtain, based on the probability distribution map, a first probability distribution for the first sensor data and a second probability distribution for the second sensor data.
However, Xia discloses an autonomous vehicle that detects a drivable area using camera and radar sensors to obtain a first probability distribution and a second probability distribution, which together indicate, in a probability grid map, the probability that the vehicle can drive through the area ([0005] [0015] [0018]). This teaching is equivalent to the claimed limitation because the probability distribution value is calculated from the combination of the probability distributions obtained using the camera and radar.
Yang and Xia are considered to be analogous art to the claimed invention because they are in the same field of autonomous vehicle navigation. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Yang to incorporate the teachings of calculating first and second probability distributions prior to fusing the data, as taught by Xia, based on the motivation to improve the accuracy of the probability map fusion by ensuring that the sensors are properly weighted prior to being combined into a single data map. This provides the benefit of improving the reliability of the probability grid map by applying the proper weighting to specific sensors.
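For illustration only (this sketch is not part of the claims or the cited references; all names, dimensions, and values are hypothetical), the kind of sector-shaped probability grid map discussed above — an area divided into cells spanning a designated angle in a designated direction from the vehicle — can be sketched as:

```python
def make_probability_grid(n_angle_cells=24, n_range_cells=25):
    """Divide a forward-facing sector into cells, each holding an
    occupancy probability initialised to 0.5 (unknown)."""
    return [[0.5] * n_range_cells for _ in range(n_angle_cells)]

def cell_index(grid, bearing_deg, range_m, fov_deg=120.0, max_range_m=50.0):
    """Map a detection (bearing, range), measured from the vehicle,
    to its grid cell, or return None if it falls outside the sector."""
    half = fov_deg / 2.0
    if not (-half <= bearing_deg < half) or not (0.0 <= range_m < max_range_m):
        return None
    i = int((bearing_deg + half) / fov_deg * len(grid))
    j = int(range_m / max_range_m * len(grid[0]))
    return i, j
```

A detection dead ahead at 10 m, for example, lands in the middle angular row of the grid, while a detection outside the 120° field of view is discarded.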
Regarding Claim 4, Yang and Xia remain as applied above in claim 1. Yang does not explicitly teach obtain, based on applying a weight to a probability value, a reliability value of each of the plurality of cells, wherein the probability value indicates at least one of the first sensor data being present in the probability distribution map or the second sensor data being present in the probability distribution map.
However, Xia discloses a method for creating a probability grid map by fusing sensor distributions using weights. Xia teaches obtaining a reliability value based on applying a weight to a probability value from the probability distributions from the camera and radar ([0077-0080]). This teaching is equivalent to the claimed limitations because the fusion weight function is used with the probability data from the camera (the first sensor) and the radar (the second sensor) in the grid map to calculate a probability value of the fused target grid. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Yang to incorporate the teachings of the weighted probability formula as taught by Xia based on the motivation to improve the accuracy and reliability of the fused grid map by adjusting the influence of the camera and the radar in different environmental conditions. This provides the benefit of ensuring the reliability value reflects the best-suited sensor data available to the system.
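As an illustrative sketch only (the weights and function name are hypothetical and do not reproduce Xia's actual formula in [0077-0080]), a weighted per-cell fusion of two sensor probabilities into a single reliability value might look like:

```python
def fuse_cell(p_camera, p_radar, w_camera=0.6, w_radar=0.4):
    """Hypothetical weighted fusion of two per-sensor occupancy
    probabilities for one grid cell. A heavier weight would be given
    to whichever sensor is trusted more in the current environmental
    conditions (e.g., shifting weight toward radar in fog)."""
    if abs(w_camera + w_radar - 1.0) > 1e-9:
        raise ValueError("weights should sum to 1")
    return w_camera * p_camera + w_radar * p_radar
```

With the default weights, a cell where the camera reports 0.9 and the radar reports 0.5 receives a fused value of 0.74, reflecting the camera's larger assumed influence.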
Regarding Claim 5, Yang and Xia remain as applied above in claim 4. Yang further teaches
identify threshold cells among the plurality of cells, wherein each of the threshold cells has a first reliability value exceeding a third threshold value, and wherein the first reliability value indicates a level of confidence to classify objects within areas of each of the threshold cells (The system calculates a confidence level and applies a predetermined threshold value to filter the data and updating the map with the data that exceeds the threshold; [0099-0105] [0113-0114]); and
classify at least one of points of the third sensor or a cluster of points as a road boundary with a second reliability value exceeding a threshold value, wherein the points of the third sensor are determined from the threshold cells, and wherein the cluster of points comprise the points of the third sensor (Extracting feature points of the road, classifying road boundaries, using data from lidar points; [0016] [0099-0105] [0113-0114]).
Regarding Claim 6, Yang and Xia remain as applied above in claim 1. Yang further teaches obtain the first probability distribution by distributing the first sensor data to the probability distribution map in a radial shape (Sensor data A2 and A3 are obtained and distributed in radial shapes that radiate from the vehicle’s center, and the processor updates the map cells within the radial field-of-view range; [0044] [0074] FIG. 1).
Regarding Claim 7, Yang and Xia remain as applied above in claim 1. Yang further teaches obtain the second probability distribution by distributing the second sensor data to the probability distribution map in an arc shape (Sensor data A1 is obtained and distributed in an arc shape that radiates from the vehicle’s center, and the processor updates the map cells within the field-of-view range; [0044] [0074] FIG. 1).
Regarding Claim 8, Yang and Xia remain as applied above in claim 1. Yang further teaches generate, based on a Cartesian coordinate system, the probability distribution map (The probability grid map consisting of rows and columns of a Cartesian coordinate system; [0072] [0033] FIG. 4).
Regarding Claim 9, Yang and Xia remain as applied above in claim 1. Yang further teaches determine at least one of the static obstacle or the dynamic obstacle in real time by discretizing a probability distribution in which at least one of the first sensor data or the second sensor data is present (Determining static and dynamic objects through moving attribute information (a moving flag) using sensor data received for the probability grid map, which divides continuous space into discrete grid cells with first (camera) and second (radar) sensor data; [0096-0097] [0072]).
Regarding Claim 10, Yang and Xia remain as applied above in claim 1. Yang further teaches generate the probability distribution map for identifying an external object within a designated distance from the vehicle (The map is generated using sensor data that measures distance to neighboring objects (vehicle); [0046-0047]).
Regarding Claim 11, Yang teaches a method performed by an apparatus for controlling autonomous driving of a vehicle, the method comprising (A controller that processes sensor information for map matching and feature extraction to control the host vehicle; [0021]):
generating a probability distribution map by dividing an area into a plurality of cells, wherein the area comprises a designated angle in a designated direction from the vehicle (A probability grid map divided into cells and the sensors acquiring data within a field of view range; [0072-0074] FIG. 4); and
controlling the autonomous driving of the vehicle by determining, based on fusing the first probability distribution, the second probability distribution, and third sensor data obtained by a third sensor, at least one of a static obstacle or a dynamic obstacle (Fusing the updated sensor data map, generated from the camera and radar data, with lidar data to extract feature points of the road and determine moving attribute information; [0016] [0023] [0096] FIG. 8).
Yang does not explicitly teach obtaining, based on the probability distribution map, a first probability distribution for first sensor data obtained by a first sensor and a second probability distribution for second sensor data obtained by a second sensor.
However, Xia discloses an autonomous vehicle that detects a drivable area using camera and radar sensors to obtain a first probability distribution and a second probability distribution, which together indicate, in a probability grid map, the probability that the vehicle can drive through the area ([0005] [0015] [0018]). This teaching is equivalent to the claimed limitation because the probability distribution value is calculated from the combination of the probability distributions obtained using the camera and radar. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Yang to incorporate the teachings of calculating first and second probability distributions prior to fusing the data, as taught by Xia, based on the motivation to improve the accuracy of the probability map fusion by ensuring that the sensors are properly weighted prior to being combined into a single data map. This provides the benefit of improving the reliability of the probability grid map by applying the proper weighting to specific sensors.
Regarding Claim 14, Yang and Xia remain as applied above in claim 11. Yang does not explicitly teach
obtaining, based on applying a weight to a probability value, a reliability value of each of the plurality of cells, wherein the probability value indicates at least one of the first sensor data being present in the probability distribution map or the second sensor data being present in the probability distribution map.
However, Xia discloses a method for creating a probability grid map by fusing sensor distributions using weights. Xia teaches obtaining a reliability value based on applying a weight to a probability value from the probability distributions from the camera and radar ([0077-0080]). This teaching is equivalent to the claimed limitations because the fusion weight function is used with the probability data from the camera (the first sensor) and the radar (the second sensor) in the grid map to calculate a probability value of the fused target grid. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Yang to incorporate the teachings of the weighted probability formula as taught by Xia based on the motivation to improve the accuracy and reliability of the fused grid map by adjusting the influence of the camera and the radar in different environmental conditions. This provides the benefit of ensuring the reliability value reflects the best-suited sensor data available to the system.
Regarding Claim 15, Yang and Xia remain as applied above in claim 14. Yang further teaches
identifying threshold cells among the plurality of cells, wherein each of the threshold cells has a first reliability value exceeding a third threshold value and wherein the first reliability value indicates a level of confidence to classify objects within areas of each of the threshold cells (The system calculates a confidence level and applies a predetermined threshold value to filter the data and updating the map with the data that exceeds the threshold; [0099-0105] [0113-0114]); and
classifying at least one of points of the third sensor or a cluster of points as a road boundary with a second reliability value exceeding a threshold value, wherein the points of the third sensor are determined from the threshold cells, and wherein the cluster of points comprise the points of the third sensor (Extracting feature points of the road, classifying road boundaries, using data from lidar points; [0016] [0099-0105] [0113-0114]).
Regarding Claim 16, Yang and Xia remain as applied above in claim 11. Yang further teaches obtaining the first probability distribution by distributing the first sensor data to the probability distribution map in a radial shape (Sensor data A2 and A3 are obtained and distributed in radial shapes that radiate from the vehicle’s center, and the processor updates the map cells within the radial field-of-view range; [0044] [0074] FIG. 1).
Regarding Claim 17, Yang and Xia remain as applied above in claim 11. Yang further teaches obtaining the second probability distribution by distributing the second sensor data to the probability distribution map in an arc shape (Sensor data A1 is obtained and distributed in an arc shape that radiates from the vehicle’s center, and the processor updates the map cells within the field-of-view range; [0044] [0074] FIG. 1).
Regarding Claim 18, Yang and Xia remain as applied above in claim 11. Yang further teaches generating, based on at least one of a polar coordinate system or a Cartesian coordinate system, the probability distribution map (The probability grid map consisting of rows and columns of a Cartesian coordinate system; [0072] [0033] FIG. 4).
Regarding Claim 19, Yang and Xia remain as applied above in claim 11. Yang further teaches determining at least one of the static obstacle or the dynamic obstacle in real time by discretizing a probability distribution in which at least one of the first sensor data or the second sensor data is present (Determining static and dynamic objects through moving attribute information (a moving flag) using sensor data received for the probability grid map, which divides continuous space into discrete grid cells with first (camera) and second (radar) sensor data; [0096-0097] [0072]).
Regarding Claim 20, Yang and Xia remain as applied above in claim 11. Yang further teaches generating the probability distribution map for identifying an external object within a designated distance from the vehicle (The map is generated using sensor data that measures distance to neighboring objects (vehicle); [0046-0047]).
Claim(s) 2-3 and 12-13 are rejected under 35 U.S.C. 103 as being unpatentable over Yang in view of Xia, as applied above, and further in view of Jo et al. (US 20250308206 A1), hereinafter referred to as Jo.
Regarding Claim 2, Yang and Xia remain as applied above in claim 1. Yang and Xia do not explicitly teach obtain a first candidate virtual box based on an update age of the third sensor data being greater than or equal to a first threshold value and at least one of a length of a virtual box obtained from the third sensor data or a width of the virtual box being greater than or equal to a second threshold value; and
control the autonomous driving of the vehicle by determining the static obstacle based on fusing first candidate sensor data, the first probability distribution, and the second probability distribution, wherein the first candidate sensor data corresponds to the first candidate virtual box.
However, Jo discloses an apparatus for controlling an autonomous vehicle using virtual boxes obtained by a lidar sensor. Jo teaches obtaining a virtual box and tracking its validity by maintaining a fusion age that accumulates the number of times the box has been matched across frames ([0055-0056] [0104] [0145]). This teaching is equivalent to the claimed limitation of obtain a first candidate virtual box based on an update age of the third sensor data being greater than or equal to a first threshold value because the fusion age is a counter that represents how long an object has been tracked and is compared to a specific number, which constitutes applying a threshold to the virtual box. Jo further teaches obtaining the first virtual box from a cluster of points and evaluating its reliability by comparing the measured length and/or width against specific size values ([0020] [0149]). This teaching is equivalent to the claimed limitation at least one of a length of a virtual box obtained from the third sensor data or a width of the virtual box being greater than or equal to a second threshold value because the measured dimensions of the virtual box, which are used to validate the object classification, are compared to a specific size threshold. Furthermore, Jo teaches controlling an autonomous vehicle by determining the presence of external objects based on the validated virtual box and generating signals for controlling the vehicle ([0021] [0092]). This teaching is equivalent to the claimed limitation control the autonomous driving of the vehicle by determining the static obstacle based on fusing first candidate sensor data, the first probability distribution, and the second probability distribution, wherein the first candidate sensor data corresponds to the first candidate virtual box because identifying and classifying an object using the fused size and age data allows the processor to confirm the obstacle for the autonomous system.
Yang, Xia, and Jo are considered to be analogous art to the claimed invention because they are in the same field of autonomous vehicle sensors. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Yang and Xia to incorporate the teachings of the virtual box parameters and thresholds as taught by Jo based on the motivation to improve the reliability of the obstacle detections and to filter irrelevant objects or noise detected by the sensors before processing them in the system. This provides the benefit of reducing false positives and enhancing the stability of the vehicle control system.
Regarding Claim 3, Yang and Xia remain as applied above in claim 1. Yang further teaches control the autonomous driving of the vehicle by determining the dynamic obstacle based on fusing second candidate sensor data, the first probability distribution, and the second probability distribution, wherein the second candidate sensor data corresponds to the second candidate virtual box (The sensor data classifies objects and determines whether the object is moving to control the vehicle; [0086]).
Yang and Xia do not explicitly teach obtain a second candidate virtual box based on an update age of the third sensor data being smaller than a first threshold value and at least one of a length of a virtual box obtained from the third sensor data or a width of the virtual box being smaller than a second threshold value.
However, Jo discloses an apparatus for controlling a vehicle that validates virtual boxes based on their dimensions and tracking history. Jo teaches obtaining a virtual box based on the length or width being smaller than a threshold value ([0149]). This teaching is equivalent to the claimed limitation because the classification uses size thresholds to validate the object classes and specifically classifies an object as a car, van, and/or truck ([0064]), and the measured dimensions must fall below the threshold for small objects or noise, otherwise the reliability is reduced. Jo further teaches obtaining a virtual box and tracking its validity by maintaining a fusion age that accumulates the number of times the box has been matched across frames ([0055-0056] [0104] [0145]). This teaching is equivalent to the claimed limitation of an update age of the third sensor data being smaller than a first threshold value because the system tracks the fusion age of an object, and an object with a low accumulated count, smaller than the threshold, is a newly detected object. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Yang and Xia to incorporate the teachings of the virtual box parameters and thresholds as taught by Jo based on the motivation to improve the reliability of the obstacle detections and to filter irrelevant objects or noise detected by the sensors before processing them in the system. This provides the benefit of reducing false positive detections in the system.
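For illustration only (the function name, threshold values, and return labels are hypothetical and are not drawn from the claims or from Jo), the age-and-size gating logic discussed in claims 2 and 3 can be sketched as:

```python
def classify_virtual_box(update_age, length_m, width_m,
                         age_threshold=5, size_threshold_m=1.0):
    """Hypothetical split of lidar-derived virtual boxes into candidates:
    a box tracked at least `age_threshold` frames with a large dimension
    becomes a first candidate (static-obstacle path); a newly seen box
    with a small dimension becomes a second candidate (dynamic path)."""
    if update_age >= age_threshold and (length_m >= size_threshold_m
                                        or width_m >= size_threshold_m):
        return "first_candidate"
    if update_age < age_threshold and (length_m < size_threshold_m
                                       or width_m < size_threshold_m):
        return "second_candidate"
    return "indeterminate"
```

Under these assumed thresholds, a box tracked for 6 frames measuring 2.0 m by 1.5 m would be routed to the first-candidate (static obstacle) branch, while a box seen for only 2 frames measuring 0.5 m on each side would be routed to the second-candidate (dynamic obstacle) branch.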
Regarding Claim 12, Yang and Xia remain as applied above in claim 11. Yang and Xia do not explicitly teach obtaining a first candidate virtual box based on an update age of the third sensor data being greater than or equal to a first threshold value and at least one of a length of a virtual box obtained from the third sensor data or a width of the virtual box being greater than or equal to a second threshold value; and
controlling the autonomous driving of the vehicle by determining the static obstacle based on fusing first candidate sensor data, the first probability distribution, and the second probability distribution, wherein the first candidate sensor data corresponds to the first candidate virtual box.
However, Jo discloses an apparatus for controlling an autonomous vehicle using virtual boxes obtained by a lidar sensor. Jo teaches obtaining a virtual box and tracking its validity by maintaining a fusion age that accumulates the number of times the box has been matched across frames ([0055-0056] [0104] [0145]). This teaching is equivalent to the claimed limitation of obtaining a first candidate virtual box based on an update age of the third sensor data being greater than or equal to a first threshold value because the fusion age is a counter that represents how long an object has been tracked and is compared to a specific number, which constitutes applying a threshold to the virtual box. Jo further teaches obtaining the first virtual box from a cluster of points and evaluating its reliability by comparing the measured length and/or width against specific size values ([0020] [0149]). This teaching is equivalent to the claimed limitation at least one of a length of a virtual box obtained from the third sensor data or a width of the virtual box being greater than or equal to a second threshold value because the measured dimensions of the virtual box, which are used to validate the object classification, are compared to a specific size threshold. Furthermore, Jo teaches controlling an autonomous vehicle by determining the presence of external objects based on the validated virtual box and generating signals for controlling the vehicle ([0021] [0092]). This teaching is equivalent to the claimed limitation controlling the autonomous driving of the vehicle by determining the static obstacle based on fusing first candidate sensor data, the first probability distribution, and the second probability distribution, wherein the first candidate sensor data corresponds to the first candidate virtual box because identifying and classifying an object using the fused size and age data allows the processor to confirm the obstacle for the autonomous system.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Yang and Xia to incorporate the teachings of the virtual box parameters and thresholds as taught by Jo based on the motivation to improve the reliability of the obstacle detections and to filter irrelevant objects or noise detected by the sensors before processing them in the system. This provides the benefit of reducing false positives and enhancing the stability of the vehicle control system.
Regarding Claim 13, Yang and Xia remain as applied above in claim 11. Yang further teaches controlling the autonomous driving of the vehicle by determining the dynamic obstacle based on fusing second candidate sensor data, the first probability distribution, and the second probability distribution, wherein the second candidate sensor data corresponds to the second candidate virtual box (The sensor data classifies objects and determines whether the object is moving to control the vehicle; [0086]).
Yang and Xia do not explicitly teach obtaining a second candidate virtual box based on an update age of the third sensor data being smaller than a first threshold value and at least one of a length of a virtual box obtained from the third sensor data or a width of the virtual box being smaller than a second threshold value.
However, Jo discloses an apparatus for controlling a vehicle that validates virtual boxes based on their dimensions and tracking history. Jo teaches obtaining a virtual box based on the length or width being smaller than a threshold value ([0149]). This teaching is equivalent to the claimed limitation because the classification uses size thresholds to validate the object classes and specifically classifies an object as a car, van, and/or truck ([0064]), and the measured dimensions must fall below the threshold for small objects or noise, otherwise the reliability is reduced. Jo further teaches obtaining a virtual box and tracking its validity by maintaining a fusion age that accumulates the number of times the box has been matched across frames ([0055-0056] [0104] [0145]). This teaching is equivalent to the claimed limitation of an update age of the third sensor data being smaller than a first threshold value because the system tracks the fusion age of an object, and an object with a low accumulated count, smaller than the threshold, is a newly detected object. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Yang and Xia to incorporate the teachings of the virtual box parameters and thresholds as taught by Jo based on the motivation to improve the reliability of the obstacle detections and to filter irrelevant objects or noise detected by the sensors before processing them in the system. This provides the benefit of reducing false positive detections in the system.
Prior Art
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Murahashi (US 20210309254 A1)
Ra (US 20250086807 A1)
Whittaker (US 20100026555 A1)
Choi (US 20250349127 A1)
Yershov (US 20210354690 A1)
Castro (US 20180281680 A1)
Lilja (US 20210291816 A1)
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to EDWARD ANDREW IZON DIZON whose telephone number is (571)272-4834. The examiner can normally be reached M-F 9AM-5PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Angela Ortiz can be reached at (571) 272-1206. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/EDWARD ANDREW IZON DIZON/Examiner, Art Unit 3663
/ANGELA Y ORTIZ/Supervisory Patent Examiner, Art Unit 3663