Prosecution Insights
Last updated: April 19, 2026
Application No. 18/224,321

Method for Roadside Assisted Sensor Deviation Correction in Simultaneous Localization and Mapping

Status: Non-Final OA (§103)
Filed: Jul 20, 2023
Examiner: INGRAM, THOMAS P
Art Unit: 3668
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Hong Kong Applied Science and Technology Research Institute Company Limited
OA Round: 1 (Non-Final)
Grant Probability: 88% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 6m
Grant Probability with Interview: 94%

Examiner Intelligence

Career Allow Rate: 88% (512 granted / 585 resolved) — above average, +35.5% vs TC avg
Interview Lift: +6.0% (moderate)
Typical Timeline: 2y 6m avg prosecution; 11 applications currently pending
Career History: 596 total applications across all art units
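The headline figures above can be reproduced from the raw counts shown. Below is a minimal sketch of that arithmetic; the additive interview-lift model is an assumption about how the tool combines the numbers, not a documented formula.

```python
# Reproduce the dashboard's headline figures from the raw counts shown above.
# Assumption: "with interview" is the career allow rate plus the lift,
# applied additively in percentage points.
granted, resolved = 512, 585
career_allow_rate = granted / resolved          # 0.8752... shown as 88%
interview_lift = 0.06                           # +6.0 percentage points
with_interview = career_allow_rate + interview_lift

print(f"{career_allow_rate:.0%}")   # 88%
print(f"{with_interview:.0%}")      # 94%
```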

Statute-Specific Performance

§101: 21.1% (-18.9% vs TC avg)
§103: 45.7% (+5.7% vs TC avg)
§102: 16.6% (-23.4% vs TC avg)
§112: 9.4% (-30.6% vs TC avg)
Tech Center averages are estimates • Based on career data from 585 resolved cases

Office Action

§103
DETAILED ACTION

Status of Claims

This action is in response to application No. 18/224,321, filed on 7/20/2023. Claims 1-15 are pending for examination.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Election/Restrictions

Applicant's election with traverse of the restriction requirement in the reply filed on 10/8/2025 is acknowledged. The traversal is on the ground(s) that the claim sets overlap in scope and amount to the same method of using matched point cloud data to determine a corrected position and/or deviation of a moving object within a selected coordinate system. This is not found persuasive because, while the inventions of the on-board unit of claim 16 and the road-side unit of claim 17 could be used together to form a combined method, the current format of the claims, particularly the breadth of claim 1, allows for divergent interpretations and usages of the combined system versus the particulars of the on-board unit of claim 16 and the road-side unit of claim 17, such that the OBU and RSU can be used in methods separate from the one claimed broadly in claim 1. The requirement is still deemed proper and is therefore made FINAL.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4 are rejected under 35 U.S.C. 103 as being unpatentable over Alam et al., US 11,113,959 ("Alam"), in view of Gao et al., CN 115690746 ("Gao").

As to claim 1, Alam discloses a method for position and/or deviation correction of a moving object in a simultaneous localization and mapping (SLAM) environment, the method comprising the steps of: receiving at the moving object point cloud data for an area associated with the moving object from a sensor located at a [second] location (see at least Figs. 2-4; col. 6, lines 38-56: "The input data 92 may be processed by a set of perspective translators 94 that generate translated maps 96. In this regard, the translated maps 96 are generated with respect to each vehicle's vantage point and its reference frame"; col. 7, lines 55-58: "the octrees corresponding to each volumetric map may themselves be aligned without explicit computation of the transformation matrix T"); and at a processor of the moving object (see at least col. 12, lines 28-32: "The host processor 204 may include logic 224 (e.g., logic instructions, configurable logic, fixed-functionality hardware logic, etc., or any combination thereof) to perform one or more aspects of the method 40 (FIG. 2), the method 110 (FIG. 7), the method 170 (FIG. 11), and/or the method 180 (FIG. 12), already discussed"), performing the steps of: transforming the received point cloud data to a coordinate system of the moving object (see at least col. 6, lines 38-56, cited above); and matching the transformed point cloud data with point cloud data generated by a sensor at the moving object for the area associated with the moving object (see at least Fig. 7; col. 9, lines 10-21: "detecting one or more differences between a crowdsourced map of an ambient environment and a real-time volumetric map of the ambient environment. As already noted, the difference(s) may be, for example, volumetric deviations corresponding to a hazardous spatial obstruction in the road. In an embodiment, block 112 includes classifying the difference(s) as one or more objects based on size and/or temporal existence"). Alam fails to explicitly disclose that the received point cloud data is from a fixed location and using the matched point cloud data to determine a corrected position and/or deviation of the moving object within a selected coordinate system.
However, Gao teaches that the received point cloud data is from a fixed location (road-side LTE signal base station) and using the matched point cloud data to determine a corrected position and/or deviation of the moving object within a selected coordinate system (see at least pages 8-9: "using the preset registration method, the vehicle end point cloud data and the road side point cloud data for registration, determining the global positioning information…The invention firstly combines the vehicle-mounted end point cloud data and vehicle-mounted end visual data to obtain the vehicle-mounted end target detection result; then through the preset registration method, realizing the global positioning under the current scene, and obtaining the road side end target detection result according to the global positioning result, at last, combining the side detection result and the vehicle-mounted end target detection result, realizing the current scene global blind-free perception. The invention can provide accurate and rich environment sensing information for the automatic driving vehicle, it overcomes the problem of blind area in the sensing range of the automatic driving vehicle, it provides a basis for the vehicle future track planning analysis").

Thus, Alam discloses a system and method for comparing and translating various LiDAR sensor data into one unified coordinate system for detecting obstacles, and Gao teaches a similar multi-sensor fusion that uses road-side point cloud sensors and on-board vehicle sensors to determine vehicle position as well as obstacle detection. Therefore, it would have been obvious to one of ordinary skill in the art at the time the invention was filed to modify the system disclosed by Alam with the road-side sensor positioning taught by Gao, with a reasonable expectation of success, because it would allow the road-side sensors to be used to particularly detect the position of the vehicle as well as environmental data around the vehicle.
As to claim 2, Alam discloses wherein the SLAM environment comprises a road system; the moving object comprises an autonomous vehicle (AV) navigating its way around the road system (see at least Background: autonomous vehicle); the processor forms part of an on-board unit (OBU) of the AV (see at least col. 12, lines 28-32, cited above); the sensor of the moving object comprises a Light Detection and Ranging (LiDAR) sensor of the AV (see at least col. 5, lines 45-46: "a vehicle subsystem 60 that uses range sensing 62 (e.g., stereo vision, ultrasonic, radar, LiDAR)"); the area associated with the moving object comprises a field of view (FOV) of the LiDAR sensor of the AV (see at least the inherent characteristics of LiDAR sensors; col. 3, lines 46-47: "objects in the field of view"); and the selected coordinate system comprises a world coordinate system (see at least col. 7, lines 42-43: "global coordinate system that is used in the HD map"). Alam fails to explicitly disclose that the sensor located at a fixed location comprises a road-side unit (RSU) LiDAR sensor. However, Gao teaches the sensor located at a fixed location comprises a road-side unit (RSU) LiDAR sensor (road-side LTE signal base station). Thus, Alam discloses a system and method for comparing and translating various LiDAR sensor data into one unified coordinate system for detecting obstacles, and Gao teaches a similar multi-sensor fusion that uses road-side point cloud sensors and on-board vehicle sensors to determine vehicle position as well as obstacle detection. Therefore, it would have been obvious to one of ordinary skill in the art at the time the invention was filed to modify the system disclosed by Alam with the road-side sensor positioning taught by Gao, with a reasonable expectation of success, because it would allow the road-side sensors to be used to particularly detect the position of the vehicle as well as environmental data around the vehicle.

As to claim 3, Alam discloses wherein the point cloud data received at the OBU of the AV from the RSU includes point cloud data for any blind spot or spots in the point cloud data generated by the LiDAR sensor of the AV (see at least Figs. 2-4; col. 6, lines 38-56; col. 7, lines 55-58, cited above).

As to claim 4, Alam fails to explicitly disclose wherein the point cloud data received at the OBU of the AV from the RSU comprises a subset of point cloud data generated by one or more RSU LiDAR sensors for an area including the FOV of the LiDAR sensor of the AV.
However, Gao teaches wherein the point cloud data received at the OBU of the AV from the RSU comprises a subset of point cloud data generated by one or more RSU LiDAR sensors for an area including the FOV of the LiDAR sensor of the AV (see at least pages 8-9, quoted above with respect to claim 1). Thus, Alam discloses a system and method for comparing and translating various LiDAR sensor data into one unified coordinate system for detecting obstacles, and Gao teaches a similar multi-sensor fusion that uses road-side point cloud sensors and on-board vehicle sensors to determine vehicle position as well as obstacle detection. Therefore, it would have been obvious to one of ordinary skill in the art at the time the invention was filed to modify the system disclosed by Alam with the road-side sensor positioning taught by Gao, with a reasonable expectation of success, because it would allow the road-side sensors to be used to particularly detect the position of the vehicle as well as environmental data around the vehicle.
Allowable Subject Matter

Claims 5-15 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to THOMAS P INGRAM, whose telephone number is (571) 272-7864. The examiner can normally be reached M-F, 10-6 ET.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Fadey Jabr, can be reached at 571-272-1516. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Thomas Ingram/
Primary Examiner, Art Unit 3668

1. Citations are taken from a machine translation of CN 115690746, attached herein.
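As characterized in the rejection, claim 1 reduces to a point-cloud registration pipeline: transform the road-side (RSU) cloud into the vehicle frame, match it against the on-board cloud, and derive a position deviation from the matches. The sketch below illustrates that pipeline under simplifying assumptions (a nearest-neighbor matcher and a mean-residual estimator); all function names are hypothetical, and this is not the applicant's or the cited references' actual algorithm.

```python
# Illustrative sketch of the claim-1 pipeline: transform RSU point cloud data
# into the vehicle's coordinate system, match points against the on-board
# cloud, and estimate a translational deviation. Hypothetical names; the
# nearest-neighbor matching and mean-residual estimator are assumptions.
import numpy as np

def transform_to_vehicle_frame(rsu_points, R, t):
    """Apply the rigid transform (R, t) taking RSU coordinates to the vehicle frame."""
    return rsu_points @ R.T + t

def estimate_deviation(rsu_in_vehicle, onboard_points):
    """Match each on-board point to its nearest transformed RSU point and
    return the mean residual as a translational deviation estimate."""
    dists = np.linalg.norm(
        onboard_points[:, None, :] - rsu_in_vehicle[None, :, :], axis=2)
    nearest = rsu_in_vehicle[np.argmin(dists, axis=1)]
    return (onboard_points - nearest).mean(axis=0)

# Toy example: the on-board cloud is the RSU cloud shifted by a small known
# offset (simulated localization drift), which the estimator should recover.
rng = np.random.default_rng(0)
rsu = rng.uniform(-10, 10, size=(50, 3))
R, t = np.eye(3), np.zeros(3)              # identity extrinsics for simplicity
true_offset = np.array([0.05, -0.02, 0.01])
onboard = rsu + true_offset
deviation = estimate_deviation(transform_to_vehicle_frame(rsu, R, t), onboard)
# deviation ≈ true_offset
```

In practice this single-step estimator would be replaced by an iterative registration method (e.g., ICP) that also recovers rotation, but the structure mirrors the transform-match-correct steps recited in the claim.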

Prosecution Timeline

Jul 20, 2023: Application Filed
Mar 07, 2026: Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602056: AUTONOMOUS VEHICLE SOCIALIZATION (granted Apr 14, 2026; 2y 5m to grant)
Patent 12591250: ROBOT TASK EXECUTION METHOD AND APPARATUS, ROBOT, AND STORAGE MEDIUM (granted Mar 31, 2026; 2y 5m to grant)
Patent 12591251: Navigation Method and System Using Color Codes (granted Mar 31, 2026; 2y 5m to grant)
Patent 12590437: SYSTEMS AND METHODS OF PROVIDING RIDE CONTROL WITH A POWER MACHINE (granted Mar 31, 2026; 2y 5m to grant)
Patent 12585286: MAP GENERATION SYSTEM AND MAP GENERATION METHOD (granted Mar 24, 2026; 2y 5m to grant)
Based on this examiner's 5 most recent grants in similar technology.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 88%
With Interview: 94% (+6.0%)
Median Time to Grant: 2y 6m
PTA Risk: Low
Based on 585 resolved cases by this examiner. Grant probability derived from career allow rate.
