Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-7, 9-17, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. (U.S. 2022/0326382) in view of Hayes et al. (U.S. 2021/0382499).
1. As per claims 1 and 11, Wang disclosed an autonomous driving control apparatus, comprising:
a sensor device including at least one sensor [many (if not most) LiDAR points are associated with (e.g., grouped at) distances closer to the LiDAR sensors] (Paragraph 0054);
a memory storing at least one instruction [In an embodiment, the AV system 120 includes a data storage unit 142 and memory 144 for storing machine instructions associated with computer processors 146 or data collected by sensors 121.] (Paragraph 0075); and
a controller electrically connected with the sensor device and the memory [The predictive feedback module 1122 then provides information to the controller 1102 that the controller 1102 can use to adjust accordingly. For example, if the sensors of the AV 100 detect (“see”) a hill, this information can be used by the controller 1102 to prepare to engage the throttle at the appropriate time to avoid significant deceleration] (Paragraph 0123);
wherein the at least one instruction is configured to, when executed by the controller, cause the autonomous driving control apparatus to [The controller 1102 receives several inputs used to determine how to control the throttle/brake 1206 and steering angle actuator 1212. A planning module 404 provides information used by the controller 1102, for example, to choose a heading when the AV 100 begins operation and to determine which road segment to traverse when the AV 100 reaches an intersection. A localization module 408 provides information to the controller 1102 describing the current location of the AV 100, for example, so that the controller 1102 can determine if the AV 100 is at a location expected based on the manner in which the throttle/brake 1206 and steering angle actuator 1212 are being controlled. In an embodiment, the controller 1102 receives information from other inputs 1214, e.g., information received from databases, computer networks, etc.] (Paragraph 0126);
obtain first sensing data about a driving path of a driving device, using a first sensor among the at least one sensor; identify a first specified object included in the first sensing data and meeting a specified condition, and
identify a first sampling rate to be applied to a first point cloud data corresponding to the first specified object;
obtain second sensing data about the driving path, using a second sensor among the at least one sensor [In an embodiment, at least one processor of a vehicle receives multiple LiDAR points from a LiDAR system of the vehicle. The multiple LiDAR points represent at least one object in an environment traveled by the vehicle. The at least one processor determines a Euclidean distance of each LiDAR point of the multiple LiDAR points. The at least one processor compares the Euclidean distance of each LiDAR point of the multiple LiDAR points with a respective sampled Euclidean distance from a standard normal distribution of Euclidean distances. Responsive to the Euclidean distance of a LiDAR point of the multiple LiDAR points being less than the respective sampled Euclidean distance, the at least one processor removes the LiDAR point from the multiple LiDAR points to generate a point cloud. The at least one processor operates the vehicle based on the point cloud] (Paragraph 0003);
identify a second specified object included in the second sensing data and identify a second sampling rate to be applied to second point cloud data corresponding to the second specified object based on a result of comparing the first specified object with the second specified object [The point cloud 1404 is a multi-channel LiDAR scan raw point cloud. The point cloud 1404 acquired using the LiDAR system can have redundant information and a non-uniform density distribution. For example, the LiDAR points 1412 that are further away from the LiDAR system 602 are more sparse as compared to the LiDAR points 1416 that are closer to the LiDAR system 602. As such, the LiDAR points 1416 that are closer to the LiDAR system are more dense. Hence, the point cloud 1404 and the LiDAR points 1412, 1416 have a first density variation] (Paragraph 0137); and
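For illustration only (not part of either reference or of the claims), the distance-based downsampling Wang describes — comparing each LiDAR point's Euclidean distance against a respective distance sampled from a normal distribution and removing closer (denser) points — can be sketched as follows; the function name and the scaling of the sampled thresholds are assumptions, not taken from Wang:

```python
import numpy as np

def downsample_point_cloud(points, rng=None):
    """Illustrative sketch of Wang-style density equalization: each LiDAR
    point's Euclidean distance is compared with a distance sampled from a
    normal distribution, and points closer than their sampled threshold
    (i.e., from the denser near-field region) are removed.
    The threshold scaling by the mean distance is hypothetical."""
    rng = np.random.default_rng() if rng is None else rng
    dists = np.linalg.norm(points, axis=1)              # Euclidean distance per point
    sampled = np.abs(rng.standard_normal(len(points))) * dists.mean()
    keep = dists >= sampled                             # drop points closer than threshold
    return points[keep]
```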
However, Wang did not explicitly disclose wherein the first sampling rate is determined based on sampling rate information stored in the memory and information of the first specified object; sample the first sensing data and the second sensing data respectively using the first sampling rate and the second sampling rate.
In the same field of endeavor, Hayes disclosed: “FIG. 9 differs from FIG. 7 in that the method of FIG. 9 includes determining 902 a cost associated with the sampled data. The cost associated with the sampled data may be based on a cost to transmit the sampled data. For example, the cost may be calculated based on financial costs to transmit data (e.g., data rates), bandwidth usage or data caps. The cost may also be based on a cost to store the sampled data. For example, data ingress costs and data storage costs associated with a data center, cloud storage provider, or other resources may factor into the cost. The cost may also be based on a cost to process the sampled data. For example, the cost may be calculated as an amount of processing resources estimated to be used to process the sampled data, use the sampled data in machine learning training, etc. The cost may be expressed as a financial cost, a score or rating based at least in part on financial costs, another score or rating, etc.” (Paragraph 0065). The Examiner interpreted the sampling rate as a frequency-like quantity that counts events per second; in other words, the sampling rate of the sampled data corresponds to the data rates based on bandwidth usage. The Examiner interpreted the underlined limitation as: “In signal/data acquisition, the appropriate sampling rate depends on the bandwidth of the characteristic being measured and on system constraints (memory, processing).”
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have incorporated the method of FIG. 9, which determines 902 a cost associated with the sampled data, where the cost may be based on the cost to transmit the sampled data (e.g., data rates, bandwidth usage, or data caps), the cost to store the sampled data, and the cost to process the sampled data, as taught by Hayes, in the method and system of Wang to optimize the training process of the additional data.
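For illustration only (not part of the record), the Examiner's reading of Hayes — a sampling rate chosen against cost and bandwidth constraints — can be sketched as picking the highest candidate rate whose per-second transmission cost fits a budget; every name and the cost model here are assumptions for illustration:

```python
def select_sampling_rate(candidate_rates_hz, bytes_per_sample, cost_per_byte, budget):
    """Hypothetical sketch: per-second cost of a rate is
    rate * bytes_per_sample * cost_per_byte; return the highest
    candidate rate whose cost fits within the budget, else None."""
    affordable = [r for r in sorted(candidate_rates_hz)
                  if r * bytes_per_sample * cost_per_byte <= budget]
    return affordable[-1] if affordable else None
```

For example, with candidate rates of 10, 20, and 50 Hz, 4 bytes per sample, a cost of 0.01 per byte, and a budget of 1.0 per second, the 50 Hz rate (cost 2.0) is excluded and 20 Hz is selected.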
2. As per claims 2 and 12, Wang-Hayes disclosed wherein the at least one instruction is configured to, when executed by the controller, cause the autonomous driving control apparatus to:
sample the first point cloud data corresponding to the first specified object in the first sensing data, using the first sampling rate (Hayes, Paragraph 0065); and
sample the second point cloud data corresponding to the second specified object in the second sensing data, using the second sampling rate (Hayes, Paragraph 0048). Claim 2 has the same motivation as claim 1.
3. As per claims 3 and 13, Wang-Hayes disclosed wherein the at least one instruction is configured to, when executed by the controller, cause the autonomous driving control apparatus to:
identify the second sampling rate as the same value as the first sampling rate or a value greater than the first sampling rate, when it is identified that the first specified object and the second specified object are the same as each other (Wang, Paragraph 0082).
4. As per claims 4 and 14, Wang-Hayes disclosed wherein the at least one instruction is configured to, when executed by the controller, cause the autonomous driving control apparatus to:
sample at least a portion of the second sensing data using a predefined sampling rate, when the first specified object and the second specified object are not the same as each other or when the second specified object is not included in the second sensing data (Wang, Paragraph 0108).
5. As per claims 5 and 15, Wang-Hayes disclosed wherein the predefined sampling rate is less than the first sampling rate (Hayes, Paragraph 0059). Claim 5 has the same motivation as claim 1.
6. As per claims 6 and 16, Wang-Hayes disclosed wherein the at least one instruction is configured to, when executed by the controller, cause the autonomous driving control apparatus to: generate first voxel data including location information of at least one of the first specified object, the second specified object [the Examiner interprets voxel data as 2D data converted into 3D data] (Wang, Paragraph 0131), or a combination thereof, based on at least one of the sampled first sensing data, the sampled second sensing data, or a combination thereof; and store map data including the first voxel data in the memory (Wang, Paragraph 0102).
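For illustration only (not taken from either reference), generating voxel data with location information from sampled point cloud data can be sketched as mapping each 3D point to integer voxel coordinates and keeping the unique occupied voxels; the function name and voxel size are assumptions:

```python
import numpy as np

def voxelize(points, voxel_size=0.5):
    """Illustrative sketch of building voxel data from point cloud data:
    map each 3D point to the integer coordinates of the voxel containing
    it, then keep only the unique occupied voxel coordinates.
    voxel_size is an assumed, hypothetical parameter."""
    coords = np.floor(points / voxel_size).astype(np.int64)
    return np.unique(coords, axis=0)                    # occupied voxel coordinates
```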
7. As per claims 7 and 17, Wang-Hayes disclosed wherein the at least one instruction is configured to, when executed by the controller, cause the autonomous driving control apparatus to: generate second voxel data based on third sensing data obtained using the second sensor; and identify a current location of the driving device, based on a result of matching the first voxel data included in the map data with the second voxel data (Wang, Paragraph 0100).
9. As per claims 9 and 19, Wang-Hayes disclosed wherein the second voxel data includes voxel coordinates of a blob corresponding to at least one object meeting the specified condition (Wang, Paragraph 0111).
10. As per claim 10, Wang-Hayes disclosed wherein the at least one instruction is configured to, when executed by the controller, cause the autonomous driving control apparatus to: apply a point cloud registration algorithm to the first voxel data and the second voxel data to generate the result (Hayes, Paragraph 0047). Claim 10 has the same motivation as claim 1.
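For illustration only (not from the record), one step of a point cloud registration algorithm can be sketched as a least-squares rigid alignment (the Kabsch method) assuming known point correspondences; full registration such as ICP iterates this step after re-estimating correspondences, and the names here are illustrative:

```python
import numpy as np

def rigid_align(src, dst):
    """One least-squares rigid-alignment step (Kabsch): find rotation R
    and translation t minimizing sum ||R @ src_i + t - dst_i||^2,
    assuming point correspondences between src and dst are known."""
    sc, dc = src.mean(0), dst.mean(0)
    H = (src - sc).T @ (dst - dc)                       # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))              # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dc - R @ sc
    return R, t
```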
Claim Rejections - 35 USC § 103
Claims 8, 18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. (U.S. 2022/0326382) in view of Hayes et al. (U.S. 2021/0382499), and further in view of Chen et al. (U.S. 2021/0287037).
11. As per claims 8 and 18, Wang-Hayes did not explicitly disclose wherein the at least one instruction is configured to, when executed by the controller, cause the autonomous driving control apparatus to: set a weight for a portion that does not meet the specified condition between the first voxel data and the second voxel data to a first value and match the first voxel data with the second voxel data; and set a weight for a portion meeting the specified condition, the portion including at least one of the first specified object, the second specified object, or a combination thereof, to a second value greater than the first value and match the first voxel data with the second voxel data.
In the same field of endeavor, Chen disclosed: “referring to FIG. 1c, the feature information of the 3D voxel may be used as an input of a network and inputted into the network. A first 3D convolution layer performs a 3D convolution operation on the feature information of the 3D voxel by using a 3×3×2 (8) 3D convolution kernel, and inputs a convolution operation result into a second 3D convolution layer to perform a 3D convolution operation of which a 3D convolution kernel is 3×3×2 (16). The rest is deduced by analogy until the last 3D convolution layer in the 3D convolutional network performs a 3D convolution operation on inputted features by using a 3×3×2 (128) convolution kernel” (Paragraph 0076).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have incorporated the 3D convolutional network of FIG. 1c, in which successive 3D convolution layers perform 3D convolution operations on the feature information of the 3D voxel using 3×3×2 convolution kernels (8, 16, and so on up to 128), as taught by Chen, in the method and system of Wang-Hayes to improve the training data process.
12. As per claim 20, Wang-Hayes did not disclose wherein identifying the current location of the driving device, based on the result of matching the first voxel data included in the map data with the second voxel data, by the controller includes: applying, by the controller, a point cloud registration algorithm to the first voxel data and the second voxel data to generate the result.
In the same field of endeavor, Chen disclosed: “the points in the point cloud are mapped to the 3D voxel, so that a 3D voxel to which the target point is mapped may be determined according to the location information of the target point, to extract the convolution feature information corresponding to the 3D voxel from the convolution feature set. In the embodiments of this application, the two parts of information may be used as feature information to correct the initial positioning information of the candidate object region” (Paragraph 0119).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have incorporated the mapping of point cloud points to the 3D voxel, in which the 3D voxel to which the target point is mapped is determined according to the location information of the target point, and the corresponding convolution feature information is extracted from the convolution feature set to correct the initial positioning information of the candidate object region, as taught by Chen, in the method and system of Wang-Hayes to improve the training data process.
Response to Arguments
13. Applicant's arguments filed 11/25/2025 have been fully considered but are not persuasive. The response to Applicant's argument is as follows.
A. Applicant argued that prior art did not disclose, “wherein the first sampling rate is determined based on sampling rate information stored in the memory and information of the first specified object; sample the first sensing data and the second sensing data respectively using the first sampling rate and the second sampling rate”.
As to Applicant's argument, Hayes disclosed: “FIG. 9 differs from FIG. 7 in that the method of FIG. 9 includes determining 902 a cost associated with the sampled data. The cost associated with the sampled data may be based on a cost to transmit the sampled data. For example, the cost may be calculated based on financial costs to transmit data (e.g., data rates), bandwidth usage or data caps. The cost may also be based on a cost to store the sampled data. For example, data ingress costs and data storage costs associated with a data center, cloud storage provider, or other resources may factor into the cost. The cost may also be based on a cost to process the sampled data. For example, the cost may be calculated as an amount of processing resources estimated to be used to process the sampled data, use the sampled data in machine learning training, etc. The cost may be expressed as a financial cost, a score or rating based at least in part on financial costs, another score or rating, etc.” (Paragraph 0065). The Examiner interpreted the sampling rate as a frequency-like quantity that counts events per second; in other words, the sampling rate of the sampled data corresponds to the data rates based on bandwidth usage. The Examiner interpreted the underlined limitation as: “In signal/data acquisition, the appropriate sampling rate depends on the bandwidth of the characteristic being measured and on system constraints (memory, processing).”
Conclusion
14. THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
15. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Adnan Mirza, whose telephone number is (571)-272-3885.
16. The examiner can normally be reached Monday to Friday during normal business hours. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Faris Almatrahi, can be reached at (313)-446-4821.
17. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at (866)-217-9197 (toll-free).
/ADNAN M MIRZA/Primary Examiner, Art Unit 3667