Prosecution Insights
Last updated: April 19, 2026
Application No. 18/673,982

SELF-PROPELLED CONVEYANCE SYSTEM

Final Rejection (§103)

Filed: May 24, 2024
Examiner: RAMIREZ, ELLIS B
Art Unit: 3658
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Toyota Jidosha Kabushiki Kaisha
OA Round: 2 (Final)
Grant Probability: 80% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 3m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 80% — above average (156 granted / 194 resolved; +28.4% vs TC avg)
Interview Lift: +18.2% — strong lift in resolved cases with interview
Typical Timeline: 3y 3m avg prosecution; 39 currently pending
Career History: 233 total applications across all art units

Statute-Specific Performance

§101: 9.1% (-30.9% vs TC avg)
§103: 62.0% (+22.0% vs TC avg)
§102: 14.1% (-25.9% vs TC avg)
§112: 7.4% (-32.6% vs TC avg)

Black line = Tech Center average estimate. Based on career data from 194 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendments

The amendment and response filed on November 7, 2025, to the Non-Final Office Action dated September 8, 2025, has been entered. Claim 1 is amended. Claims 1-5 are pending in this application.

Response to Arguments

Applicant's arguments and amendments, see pages 4-6, filed November 7, 2025, with respect to the 35 U.S.C. § 101 rejections have been fully considered and are persuasive. Applicant has amended independent claim 1 to now require or recite, in part, the action of transmitting commands so as to cause the vehicle "to self-propel by itself" along a particular road. The 35 U.S.C. § 101 rejection of claims 1-5 has been withdrawn.

Applicant's arguments and amendments, see pages 4-6, filed November 7, 2025, with respect to the 35 U.S.C. § 102 rejection based on Singh et al. (US-10507841-B1) have been considered and are persuasive. The 35 U.S.C. § 102 rejection of claims 1-5 has been withdrawn. However, upon further consideration, a new ground of rejection is made in view of the further limiting amendments, which changed the scope of the claimed invention.

Claim Rejections — 35 U.S.C. § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-5 are rejected under 35 U.S.C. 103 as being unpatentable over Yohei Taniguchi (US-20220219692-A1) ("Taniguchi") and Singh et al. (US-10507841-B1) ("Singh").

As per claim 1, Taniguchi discloses a self-propelled conveyance system (Figure 1) comprising:

a vehicle configured for self-propelling (Taniguchi at Para. [0054] discloses a control causing a vehicle to travel autonomously: "control device 180 sets a lane change start point P because it is necessary to change lanes from the main road to the branching lane at the branching road of the interchange. The lane change start point is a point at which the steering control is started so that the vehicle moves in the lateral direction (vehicle width direction).");

a first sensor configured to acquire spatial information of the vehicle (Taniguchi at Para. [0059] discloses determining a first target point using a first sensor device such as a camera: "traveling in the unrecognizable area or after having passed through the unrecognizable area, the control device 180 calculates a target point (also referred to as a first target point, hereinafter) using the camera image. When the lane marks cannot be stably recognized from the camera image in the unrecognizable area, the first target point is calculated after a state is achieved in which the lane marks can be recognized.");

a second sensor configured to acquire spatial information of the vehicle (Taniguchi at Para. [0050] discloses acquiring road information from a map information system: "the vehicle is to travel in the unrecognizable area. In the unrecognizable area, the lane keeping function cannot serve using the information recognized from the camera image, and the control device 180 therefore executes the lane keeping function and/or the lane change function using the road information stored in the map database as substitute for the camera image."), the second sensor having a modality different from a modality of the first sensor (Taniguchi at Para. [0019] discloses that the road information is acquired through the combined use of GPS data and map data, which are of a different modality than imaging data: "subject vehicle position detection device 12 is composed of a GPS unit, a gyro-sensor, a vehicle speed sensor, etc. The subject vehicle position detection device 12 detects radio waves transmitted from a plurality of communication satellites using the GPS unit to periodically acquire the positional information of a target vehicle (subject vehicle) and detects the current position of the target vehicle based on the acquired positional information of the target vehicle, angle variation information acquired from the gyro-sensor, and the vehicle speed acquired from the vehicle speed sensor.");

having: at least one processor configured to process first spatial information of the vehicle acquired by the first sensor and second spatial information of the vehicle acquired by the second sensor (Taniguchi at Para. [0027] discloses use of a processor: "control device 18 is composed of ... a central processing unit (CPU) that executes the programs stored in the ROM, and a random access memory (RAM) that serves as an accessible storage device. As substitute for or in addition to the CPU, a micro processing unit (MPU), a digital signal processor (DSP)"); and

at least one memory storing a plurality of instructions to be executed by the at least one processor, wherein the plurality of instructions is configured to cause the at least one processor to execute (Taniguchi at Para. [0027] discloses storing instructions in a storage device such as RAM, ROM, and the like: "control device 18 is composed of a read only memory (ROM) that stores programs for controlling the travel of the subject vehicle, a central processing unit (CPU) that executes the programs stored in the ROM, and a random access memory (RAM) that serves as an accessible storage device. As substitute for or in addition to the CPU, a micro processing unit (MPU), a digital signal processor (DSP)."):

calculating a first position of the vehicle in a predetermined coordinate system based on the first spatial information (Taniguchi at Fig. 4B and Para. [0059] discloses using the first sensor (camera) to determine a first target point or position: "traveling in the unrecognizable area or after having passed through the unrecognizable area, the control device 180 calculates a target point (also referred to as a first target point, hereinafter) using the camera image. When the lane marks cannot be stably recognized from the camera image in the unrecognizable area, the first target point is calculated after a state is achieved in which the lane marks can be recognized."),

calculating a second position of the vehicle in the predetermined coordinate system based on the second spatial information (Taniguchi at Fig. 4B and Para. [0059] discloses using the GPS data with a map function to calculate a second position of the vehicle: "control device 180 calculates a target point (also referred to as a second target point, hereinafter) using the road information A."),

determining a deviation between the first position and the second position (Taniguchi at Para. [0051] discloses determining a deviation between the positions: "a deviation between the lane marks recognized from the camera image and the lane marks stored in the map information is large, the deviation of the target points is also large."),

generating a control instruction for the vehicle based on at least one of the first position and the second position on condition that the deviation is within an allowable range (Taniguchi at Para. [0058] discloses the use of a predetermined condition or deviation before switching control methods: "the control switching point is set at a position at which the deviation is less than a predetermined length between the center line between the right and left lane marks recognized from the camera image (or an extension line obtained by extending the center line) and the center line of the lane included in the map information (lane including the branching lane) (or an extension line obtained by extending the center line)."), and

transmitting the control instruction to the vehicle (Taniguchi at Para. [0058] discloses that the vehicle is notified when a switching point between methods is reached: "When the vehicle reaches the control switching point P (lane change start point P), the control device 180 switches from the camera control to the map control. After switching from the camera control to the map control, the autonomous control of the vehicle is executed so that the vehicle travels along the center line of the lane included in the road information A."),

wherein in response to receiving the control instruction, the vehicle is caused to self-propel by itself (Taniguchi at Figure 6C, steps S20 through S24, and Para. [0058] discloses that the vehicle implements autonomous control (self-propel) after switching due to an acceptable deviation: "switching from the camera control to the map control, the autonomous control of the vehicle is executed so that the vehicle travels along the center line of the lane included in the road information A, and the vehicle travels in the branching lane.").

However, Taniguchi does not disclose a server for performing the processing of the first and second sensor data. In the same field of endeavor, Singh discloses a method for an autonomous vehicle in which, when a difference between a second path curvature and a first path curvature exceeds a threshold, the controller is automatically operated according to a diagnostic mode. See Figures 1 & 3. In particular, Singh discloses a server for performing functions for controlling a vehicle (Singh at Figure 1, remote computer 64, and Column 7, lines 1-8, disclosing the use of the remote computer for performing functions useful for the autonomous vehicle: "computers 64 can include, for example: a service center computer where diagnostic information and other vehicle data can be uploaded from the vehicle via the wireless communication system 28 or a third party repository to or from which vehicle data or other information is provided, whether by communicating with the host vehicle 12, the remote access center 78, the mobile device 57, or some combination of these.").

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the remote processing of vehicle data taught in Singh in the autonomous vehicle in Taniguchi, with a reasonable expectation of success, because this results in an autonomous vehicle that automatically detects a discrepancy between sensors on the vehicle under current conditions and a cluster of signals for a geographic region maintained at a remote server, and that automatically takes appropriate corrective action in response to such an identified discrepancy by the remote server (see Singh at Column 12, lines 25-28).

As per claim 2, Taniguchi and Singh disclose the self-propelled conveyance system according to claim 1, wherein the plurality of instructions is configured to further cause the at least one processor to execute: generating the control instruction based on one of the first position and the second position (Taniguchi at Para. [0058] discloses generating a control instruction, such as a steering command, using the first or second position information: "After switching from the camera control to the map control, the autonomous control of the vehicle is executed so that the vehicle travels along the center line of the lane included in the road information A, and the vehicle travels in the branching lane."), and switching the position of the vehicle, which is a basis of the control instruction, from the first position to the second position or from the second position to the first position on condition that the deviation is within the allowable range (Taniguchi at Para. [0059] discloses switching between modes of control, such as camera or map, based on the deviation between differences in position: "control device 180 calculates a target point (also referred to as a first target point, hereinafter) using the camera image. When the lane marks cannot be stably recognized from the camera image in the unrecognizable area, the first target point is calculated after a state is achieved in which the lane marks can be recognized. In addition, the control device 180 calculates a target point (also referred to as a second target point, hereinafter) using the road information A. Then, when the difference between the first target point and the second target point becomes a predetermined value or less, the control device 180 switches from the map control to the camera control.").

As per claim 3, Taniguchi and Singh disclose the self-propelled conveyance system according to claim 1, wherein the plurality of instructions is configured to further cause the at least one processor to execute: instructing the vehicle to stop or take a retreat action when the deviation is outside the allowable range (Taniguchi at Para. [0086] discloses stopping a control action when the difference is outside a certain range: "when the state in which the difference between the first target point (target point C) and the second target point (target point M) is large continues for a predetermined time or more, the autonomous steering control is turned off and it is thereby possible to prevent the behavior of the vehicle from becoming large when the control is switched.").

As per claim 4, Taniguchi and Singh disclose the self-propelled conveyance system according to claim 1, wherein the first sensor is a sensor provided separately from the vehicle in a space in which the vehicle is conveyed (Taniguchi at Para. [0029] discloses that the first sensor, such as a camera, is external to the vehicle: "travel information the external image information around the vehicle captured by the front camera and rear camera included in the sensors 11 and/or the detection results by the front radar, rear radar, and side radars included in the sensors 11.").

As per claim 5, Taniguchi and Singh disclose the self-propelled conveyance system according to claim 4, wherein the first sensor is a camera configured to acquire a video as the first spatial information (Singh at Col. 5, Lines 42-27, discloses that a camera is foreseeable as the first sensor: "a plurality of sensors 26, which may include GNSS (global navigation satellite system, e.g. GPS and/or GLONASS), RADAR, LIDAR, optical cameras, thermal cameras, ultrasonic sensors, inertial measurement units (IMUs), wheel speed sensors, steering angle sensors, and/or additional sensors as appropriate."), and the second sensor is a LiDAR configured to acquire three-dimensional information as the second spatial information (Singh at Column 10, Lines 23-25, discloses using a LiDAR sensor: "the second source may be any additional sensor capable of detecting path curvature, including, but not limited to, LiDAR or RADAR.").
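The deviation-gated switching that the rejection maps onto Taniguchi (claims 1-3) can be sketched as a short control-selection routine. This is only an illustrative reading of the cited passages, not the applicant's or Taniguchi's actual implementation; the names `Position`, `deviation`, `select_command`, and the command strings are all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Position:
    """A vehicle position estimate in the shared (predetermined) coordinate system."""
    x: float
    y: float

def deviation(p1: Position, p2: Position) -> float:
    # Euclidean distance between the camera-derived and map/GNSS-derived estimates.
    return ((p1.x - p2.x) ** 2 + (p1.y - p2.y) ** 2) ** 0.5

def select_command(p_camera: Position, p_map: Position, allowable: float) -> str:
    # Within the allowable range: a control instruction may be generated from
    # either estimate (map-based steering shown here), mirroring the camera/map
    # control switch in Taniguchi Para. [0058]-[0059].
    if deviation(p_camera, p_map) <= allowable:
        return "steer_to_map_target"
    # Outside the range: instruct the vehicle to stop or take a retreat action,
    # as claim 3 recites and Para. [0086] (turning off autonomous steering) suggests.
    return "stop_or_retreat"
```

For example, two estimates 0.1 m apart against a 0.5 m allowance would yield the steering command, while estimates 2 m apart would yield the stop/retreat command.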
CONCLUSION

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ELLIS B. RAMIREZ, whose telephone number is (571) 272-8920. The examiner can normally be reached 7:30 am to 5:00 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ramon Mercado, can be reached at 571-270-5744. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ELLIS B. RAMIREZ/
Examiner, Art Unit 3658

Prosecution Timeline

May 24, 2024
Application Filed
Sep 05, 2025
Non-Final Rejection — §103
Nov 05, 2025
Applicant Interview (Telephonic)
Nov 06, 2025
Examiner Interview Summary
Nov 07, 2025
Response Filed
Jan 28, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12600034 — Compensation of Positional Tolerances in the Robot-assisted Surface Machining
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12584758 — VEHICLE DISPLAY DEVICE, VEHICLE DISPLAY PROCESSING METHOD, AND NON-TRANSITORY STORAGE MEDIUM
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12571639 — SYSTEM AND METHOD FOR IDENTIFYING TRIP PAIRS
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12551302 — CONTROLLING A SURGICAL INSTRUMENT
Granted Feb 17, 2026 (2y 5m to grant)

Patent 12552018 — INTEGRATING ROBOTIC PROCESS AUTOMATIONS INTO OPERATING AND SOFTWARE SYSTEMS
Granted Feb 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 80%
With Interview: 99% (+18.2%)
Median Time to Grant: 3y 3m
PTA Risk: Moderate

Based on 194 resolved cases by this examiner. Grant probability derived from career allow rate.
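The headline figures appear to follow directly from the career data shown above; a minimal sketch, assuming the interview lift is applied as additive percentage points (variable names are illustrative):

```python
# Career data from the examiner profile above.
granted, resolved = 156, 194

# Career allow rate: 156 / 194 ≈ 0.804, displayed as 80%.
allow_rate = granted / resolved

# Assumed model: the +18.2% interview lift is added as percentage points
# to the base rate, capped at 100%; 80.4% + 18.2% ≈ 98.6%, shown as ~99%.
interview_lift = 0.182
with_interview = min(allow_rate + interview_lift, 1.0)
```

This reproduces the 80% grant probability and the ~99% with-interview figure under that additive assumption.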
