Prosecution Insights
Last updated: April 19, 2026
Application No. 18/983,240

AUTONOMOUS VEHICLE DRIVING SYSTEM AND METHOD

Non-Final OA (§103, §112)
Filed: Dec 16, 2024
Examiner: NGUYEN, NGA X
Art Unit: 3662
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Guangzhou Tufa Network Technology Co. Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 78% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 0m
With Interview: 84%

Examiner Intelligence

Career Allow Rate: 78% (609 granted / 784 resolved; +25.7% vs TC avg, above average)
Interview Lift: +6.5% (moderate), based on resolved cases with interview
Avg Prosecution: 3y 0m (typical timeline)
Total Applications: 821 across all art units; 37 currently pending

Statute-Specific Performance

§101: 11.5% (-28.5% vs TC avg)
§103: 46.9% (+6.9% vs TC avg)
§102: 13.9% (-26.1% vs TC avg)
§112: 21.5% (-18.5% vs TC avg)
Tech Center averages are estimates. Based on career data from 784 resolved cases.
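The per-statute deltas above are internally consistent. A minimal sketch, assuming each "vs TC avg" figure is simply the examiner's allowance-related rate minus a Tech Center baseline (an assumption about how the dashboard computes the delta), recovers the implied baseline for each statute:

```python
# Assumed relationship: delta = examiner_rate - tc_baseline,
# so tc_baseline = examiner_rate - delta for each statute.
rates = {"101": 11.5, "103": 46.9, "102": 13.9, "112": 21.5}
deltas = {"101": -28.5, "103": 6.9, "102": -26.1, "112": -18.5}

for statute, rate in rates.items():
    baseline = rate - deltas[statute]
    print(f"§{statute}: implied TC-average estimate {baseline:.1f}%")
```

Every statute implies the same baseline, 40.0%, suggesting the chart's Tech Center reference line sits at a single estimated value rather than varying per statute.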

Office Action

Rejections: §103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. The current application is a continuation of application No. PCT/CN2024/104136 and claims foreign priority to CN202311310170.5, filed on Oct. 10, 2023.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-10 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Regarding claim 1, at line 5 through line 6, the recitation "a vehicle-side locator …determine a vehicle-determined location of a vehicle based on a differential operation" is unclear and indefinite. The claim fails to define how and in what manner "a differential operation" is obtained. Also, "determine a vehicle-determined location of a vehicle" is unclear because it is ambiguous whether "an autonomous vehicle" in line 1 and "a vehicle" in line 6 are the same or different vehicles.

Regarding claim 2, the recitations "the vehicle includes a public vehicle …" and "the vehicle includes a private vehicle" are unclear. The claim inadequately provides how and in what manner "the vehicle" includes "a public vehicle" and/or "a private vehicle".
Also, the claim fails to define how the control device knows whether the "parking area" is a non-public and/or a public parking area. Claims 2-10 depend upon rejected claim 1. Below are cited references that teach the claimed subject matter as best understood.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5 and 7-20 are rejected under 35 U.S.C. 103 as being unpatentable over Gillett (20210165404) in view of Hyde (20210124348).
With regard to claim 1, Gillett discloses an autonomous vehicle driving system, comprising: a sensing device including an image sensor, wherein the sensing device is configured to output environmental sensing data (an environment recognition unit recognizes the surrounding environment of the autonomous scooter 100 based on the information acquired by the external sensor 201, cameras 202, etc., see [0059]); a locating device, including: a vehicle-side locator, configured to determine a vehicle-determined location of the vehicle (GPS 203 obtains the current location of the scooter as it is moving, see [0037]+), wherein the vehicle-determined location and the environmental image are used to determine a current vehicle location of the vehicle (the autonomous control system 200 is configured with combinations of the external sensors 201, cameras 202, and GPS 203 connected to a navigation system 205, see [0037] & claim 4); and a control device, configured to control vehicle driving based on the current vehicle location (the control unit 209 controls the traveling of the autonomous scooter 100 in the autonomous driving mode according to the control center plan, see [0039]-[0041]+).

Gillett does not clearly teach controlling vehicle driving based on the current vehicle location and the environmental sensing data. Hyde discloses an autonomous LEV 105 (which could be a scooter, see [0039]). The LEV 105 comprises a control system 175 which controls the LEV 105 based on the current vehicle location and the environmental sensing data (see [0072]+). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Gillett by controlling the autonomous vehicle driving based on the current vehicle location and the environmental sensing data as taught by Hyde, for improving navigation accuracy.
With regard to claim 2, Gillett teaches that the autonomous vehicle (autonomous scooter) serves for public transportation and/or leasing, from which it is obvious that the vehicle controls itself to park at the parking area, or to drive home if rented, see [0128]-[0135]+, which meets the scope of the claim.

With regard to claim 3, Hyde teaches the autonomous vehicle driving system according to claim 1, wherein the image sensor is configured to send the environmental image to a cloud server, so that the cloud server matches the environmental image with a pre-generated navigation map to determine a cloud-determined location of the vehicle (a remote computing system 190 as a cloud-based server system which receives sensor data from the autonomous LEV 105, see [0077]-[0079]+); and the autonomous driving system further comprises a computing device, wherein the computing device is configured to receive the cloud-determined location, and determine and output the current vehicle location of the vehicle based on the vehicle-determined location and the cloud-determined location (the cloud-based server 190 determines one or more navigational instructions for the LEV 105, see [0078]+), wherein the computing device is configured to output the cloud-determined location as the current vehicle location when the vehicle-determined location does not meet a preset criterion, or the computing device is configured to receive the vehicle-determined location and the cloud-determined location, and determine and output the current vehicle location based on fusion of the vehicle-determined location and the cloud-determined location (see [0079]-[0085]+).
With regard to claim 4, Hyde teaches the autonomous vehicle driving system according to claim 3, wherein the locating device further includes at least one of an inertial measurement device or a cyclometer, configured to determine a reckoning location of the vehicle by dead reckoning (see [0109]+); and the computing device is configured to determine and output the current vehicle location based on fusion of the vehicle-determined location, the cloud-determined location, and the reckoning location (see [0061]-[0062]+).

With regard to claim 5, Hyde teaches the autonomous vehicle driving system according to claim 1, wherein the environmental sensing data includes a drivable area and a road boundary; the sensing device includes a first sensing device, wherein the first sensing device includes the image sensor and a depth camera, and the first sensing device is configured to sense the drivable area and the road boundary; the image sensor includes a surround-view camera, the surround-view camera includes four cameras respectively mounted in four directions, directly front, directly rear, directly left, and directly right, of a riser of the vehicle; and the depth camera is mounted directly in front of a vehicle front of the vehicle, wherein the vehicle front is configured to control a turning direction of the vehicle, and the riser is connected to the vehicle front and turns along with the vehicle front (the positioning system 150 (at the LEV and the cloud-based server 190) uses various models (machine-learned model, deep neural network, etc.) for determining the drivable area and road boundary, see [0052]-[0055]+).
With regard to claim 7, Hyde teaches that the vehicle sensors 102 can include LIDAR, RADAR, and cameras (visible spectrum cameras, infrared cameras, etc., see [0047]+), which meets the scope of "the environmental sensing data includes an obstacle behind the vehicle; and the sensing device includes a third sensing device, wherein the third sensing device includes at least one time-of-flight (TOF) camera, and the third sensing device is configured to sense the obstacle behind the vehicle when the vehicle is reversing wherein, the at least one TOF camera includes: a pair of front TOF cameras respectively mounted on two sides of a front end of a vehicle footboard; and a rear TOF camera, mounted on a rear fender of a rear wheel of the vehicle".

With regard to claim 8, Hyde teaches the autonomous vehicle driving system according to claim 1, further comprising: a LiDAR offline installation interface, configured to install a LiDAR when the vehicle is in a non-operating state, wherein the LiDAR is configured to offline-collect point cloud data of the driving environment of the vehicle in the non-operating state, and the point cloud data is used to generate a navigation map of the driving environment (the cloud-based server can communicate with the autonomous LEV to provide navigational instructions based on stored data, see [0077]-[0080]+).

With regard to claim 9, Hyde teaches the autonomous vehicle driving system according to claim 1, further comprising: a computing device, configured to output a driving instruction based on the current vehicle location and the environmental sensing data, wherein the driving instruction includes a driving route and a driving speed, and the control device is configured to control the vehicle based on the driving instruction (see [0050]-[0055]+).
With regard to claim 10, Hyde teaches the autonomous vehicle driving system according to claim 9, wherein the computing device is configured to: determine a global path based on the current vehicle location, a pre-generated navigation map, and a destination location, determine a driving decision based on the global path and the environmental sensing data, wherein the driving decision includes at least one of yielding, detouring, going straight, following, changing lanes, or borrowing lanes, and output the driving instruction based on the driving decision; the control device includes: a steering device, configured to control a driving direction of the vehicle based on the driving route, and a motor controller, configured to control the vehicle driving based on the driving speed (see [0064]-[0067]+).

With regard to claim 11, Gillett discloses an autonomous vehicle driving method for an autonomous driving system, comprising: determining a current vehicle location of a vehicle based on a vehicle-determined location of the vehicle and an environmental image of a driving environment of the vehicle (GPS 203 obtains the current location of the scooter as it is moving, see [0037]+, and cameras 202 provide imaging of an external situation of the autonomous scooter, see [0046]+); determining environmental sensing data around the vehicle based on the environmental image (an environment recognition unit recognizes the surrounding environment of the autonomous scooter 100 based on the information acquired by the external sensor 201, cameras 202, etc., see [0059]); and controlling vehicle driving based on the current vehicle location (the control unit 209 controls the traveling of the autonomous scooter 100 in the autonomous driving mode according to the control center plan, see [0039]-[0041]+). Gillett does not clearly teach controlling vehicle driving based on the current vehicle location and the environmental sensing data.
Hyde discloses an autonomous LEV 105 (which could be a scooter, see [0039]). The LEV 105 comprises a control system 175 which controls the LEV 105 based on the current vehicle location and the environmental sensing data (see [0072]+). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Gillett by controlling the autonomous vehicle driving based on the current vehicle location and the environmental sensing data as taught by Hyde, for improving navigation accuracy.

With regard to claim 12, Hyde teaches the autonomous vehicle driving method according to claim 11, wherein the determining of the current vehicle location of the vehicle based on the vehicle-determined location of the vehicle and the environmental image of the driving environment of the vehicle includes: sending the environmental image to a cloud server, so that the cloud server matches the environmental image with a pre-generated navigation map to determine a cloud-determined location of the vehicle (a remote computing system 190 as a cloud-based server system which receives sensor data from the autonomous LEV 105, see [0077]-[0079]+); receiving the cloud-determined location sent by the cloud server (the cloud-based server 190 determines one or more navigational instructions for the LEV 105, see [0078]+); and determining the current vehicle location based on the vehicle-determined location of the vehicle and the cloud-determined location (see [0066]-[0067]+).
With regard to claim 13, Hyde teaches the autonomous vehicle driving method according to claim 12, further comprising: offline-collecting point cloud data of the driving environment of the vehicle by using a LiDAR installed when the vehicle is in a non-operating state (LEVs occasionally need to be repositioned when not in use, see [0023]+, wherein the LEV comprises sensors such as LIDAR systems and RADAR systems, see [0047]+, and the LEV can receive data from the cloud-based server, see [0077]+), wherein the point cloud data is used to generate a navigation map of the driving environment, wherein the determining of the current vehicle location based on the vehicle-side location of the vehicle and the cloud-side location includes: outputting the cloud-determined location as the current vehicle location when the vehicle-determined location does not meet a preset criterion, or determining and outputting the current vehicle location based on fusion of the vehicle-determined location and the cloud-determined location, or determining a reckoning location of the vehicle by dead reckoning, and determining and outputting the current vehicle location based on fusion of the vehicle-determined location, the cloud-determined location, and the reckoning location (see [0079]-[0085]+).

With regard to claim 14, Hyde teaches the autonomous vehicle driving method according to claim 11, wherein the determining of the environmental sensing data based on the environmental image includes: obtaining a depth image; and determining a drivable area and a road boundary based on the environmental image and the depth image, wherein the environmental sensing data includes the drivable area and the road boundary (the positioning system 150 (at the LEV and the cloud-based server 190) uses various models (machine-learned model, deep neural network, etc.) for determining the drivable area and road boundary, see [0052]-[0055]+).
With regard to claim 15, Gillett teaches the autonomous vehicle driving method according to claim 11, wherein the determining of the environmental sensing data based on the environmental image includes: obtaining a millimeter-wave radar image; and determining a moving obstacle based on the environmental image and the millimeter-wave radar image, wherein the moving obstacle includes at least one of a pedestrian or another vehicle, and the environmental sensing data includes the moving obstacle (see [0059]-[0061]+).

With regard to claim 16, Hyde teaches that the vehicle sensors 102 can include LIDAR, RADAR, and cameras (visible spectrum cameras, infrared cameras, etc., see [0047]+), which meets the scope of "obtaining at least one TOF image; and determining an obstacle behind the vehicle based on the environmental image and the at least one TOF image when the vehicle is reversing, wherein the environmental sensing data includes the obstacle behind the vehicle".

With regard to claim 17, Hyde teaches the autonomous vehicle driving method according to claim 11, wherein the controlling of vehicle driving based on the current vehicle location and the environmental sensing data includes: outputting a driving instruction based on the current vehicle location and the environmental sensing data, wherein the driving instruction includes a driving route and a driving speed; and controlling the vehicle driving based on the driving instruction (see [0050]-[0055]+).
With regard to claim 18, Hyde teaches the autonomous vehicle driving method according to claim 17, wherein the outputting of the driving instruction based on the current vehicle location and the environmental sensing data includes: determining a global path based on the current vehicle location, the pre-generated navigation map, and a destination location, determining a driving decision based on the global path and the environmental sensing data, wherein the driving decision includes at least one of yielding, detouring, going straight, following, changing lanes, or borrowing lanes, and outputting the driving instruction based on the driving decision; and the controlling of the vehicle driving based on the driving instruction includes: controlling a driving direction of the vehicle based on the driving route, and controlling the vehicle driving based on the driving speed (see [0064]-[0067]+).

With regard to claims 19-20, Hyde teaches that the LEVs may occasionally need to be repositioned when not in use, e.g., when a rider (renter) reaches his or her destination, or when LEVs are left in unauthorized parking locations, see [0023]+.
The LEVs are automatically controlled to navigate to desired locations, such as parking at authorized parking lots, see [0029]-[0030]+, which meets the scope of "controlling of the vehicle driving based on the current vehicle location and the environmental sensing data includes: controlling the public vehicle to drive from a non-public parking area to a public parking area, or controlling the public vehicle to drive from one public parking area to another public parking area" and "controlling of the vehicle driving based on the current vehicle location and the environmental sensing data includes: controlling the private vehicle to drive from a non-designated parking area to a designated parking area, wherein the designated parking area includes a dedicated parking space for the private vehicle and any vacant parking space in a parking area in which the private vehicle is authorized to park".

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Gillett (20210165404) in view of Hyde (20210124348) as applied to claim 1 above, and further in view of Agarwal (20230154195).

With regard to claim 6, Gillett teaches the autonomous vehicle driving system according to claim 1, wherein the environmental sensing data includes a moving obstacle located in front of the vehicle (detecting threats and obstacles in an environment of the scooter 100, see [0040]+); the sensing device includes a second sensing device, wherein the second sensing device includes the image sensor and a millimeter-wave radar, the second sensing device is configured to sense the moving obstacle, and the moving obstacle includes at least one of a pedestrian or another vehicle (sensors 201-202 (cameras, lidar, radar, etc.) positioned on the scooter's riser, see Fig.
1E, [0034]+). However, both Gillett and Hyde do not teach that the image sensor includes a surround-view camera, wherein the surround-view camera includes four cameras respectively mounted in four directions, directly front, directly rear, directly left, and directly right, of a riser of the vehicle. Agarwal discloses an ego vehicle 102 (which could be a robot, a bike, or a scooter, see [0038]) comprising a camera system 114 which includes multiple cameras positioned in multiple directions (front, rear, sides, etc.), see [0050]+. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Gillett by controlling the autonomous vehicle driving based on the current vehicle location and the environmental sensing data as taught by Hyde, and further by including multiple cameras positioned in multiple directions (front, rear, sides, etc.) as taught by Agarwal. The combination of Gillett, Hyde, and Agarwal is an adapted system for ensuring the precise determination of the scooter's position and navigation.

Prior Arts Cited

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Kim (20210116908) discloses a method for autonomous control of transport of a two-wheeled motorized scooter, see the abstract.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NGA X NGUYEN, whose telephone number is (571) 272-5217. The examiner can normally be reached M-F 5:30 AM - 2:30 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, JELANI SMITH, can be reached at 571-270-3969.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

NGA X. NGUYEN
Examiner, Art Unit 3662
/NGA X NGUYEN/
Primary Examiner, Art Unit 3662

Prosecution Timeline

Dec 16, 2024
Application Filed
Mar 19, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12600237
RECOMMENDED VEHICLE-RELATED FUNCTIONALITY
2y 5m to grant Granted Apr 14, 2026
Patent 12601610
METHOD, DATA PROCESSING APPARATUS AND COMPUTER PROGRAM PRODUCT FOR GENERATING MAP DATA
2y 5m to grant Granted Apr 14, 2026
Patent 12594968
VEHICLE DRIVING SWITCHING DEVICE, VEHICLE DRIVING SYSTEM, AND VEHICLE DRIVING SWITCHING METHOD
2y 5m to grant Granted Apr 07, 2026
Patent 12597351
VEHICULAR AUTOMATIC BRAKING SYSTEM
2y 5m to grant Granted Apr 07, 2026
Patent 12591247
UNMANNED VEHICLE MANAGEMENT SYSTEM AND METHOD
2y 5m to grant Granted Mar 31, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 78%
With Interview: 84% (+6.5%)
Median Time to Grant: 3y 0m
PTA Risk: Low
Based on 784 resolved cases by this examiner. Grant probability derived from career allow rate.
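The footnote's arithmetic can be reproduced directly from the career counts shown above. A minimal sketch, assuming the grant probability is the simple ratio of grants to resolved cases and that the interview figure adds the reported lift on top (both are assumptions about how the dashboard derives its headline numbers):

```python
# Career counts and lift as reported on this page.
granted = 609           # career grants
resolved = 784          # resolved cases (grants + abandonments)
interview_lift = 0.065  # reported lift for cases with an interview

allow_rate = granted / resolved           # career allow rate
with_interview = allow_rate + interview_lift

print(f"Career allow rate: {allow_rate:.1%}")      # ~77.7%, displayed as 78%
print(f"With interview:    {with_interview:.1%}")  # ~84.2%, displayed as 84%
```

The rounded results match the 78% and 84% headline figures, which supports the footnote's statement that grant probability is derived from the career allow rate.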
