Prosecution Insights
Last updated: April 19, 2026
Application No. 18/702,398

SIGNAL PROCESSING DEVICE

Non-Final OA — §103, §112
Filed: Apr 18, 2024
Examiner: EL SAYAH, MOHAMAD O
Art Unit: 3658
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Hitachi Astemo, Ltd.
OA Round: 3 (Non-Final)
Grant Probability: 76% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 9m
With Interview: 82%

Examiner Intelligence

Career Allow Rate: 76% (above average; 166 granted / 218 resolved; +24.1% vs TC avg)
Interview Lift: +5.4% (moderate ~+5% lift for resolved cases with interview vs. without)
Typical Timeline: 2y 9m average prosecution; 41 currently pending
Career History: 259 total applications across all art units

Statute-Specific Performance

§101: 16.9% (-23.1% vs TC avg)
§103: 50.2% (+10.2% vs TC avg)
§102: 16.7% (-23.3% vs TC avg)
§112: 12.1% (-27.9% vs TC avg)
Deltas shown vs. Tech Center average estimate • Based on career data from 218 resolved cases
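The per-statute figures above are internally consistent: each pair of rate and "vs TC avg" delta backs out the same Tech Center baseline. A minimal sketch of that arithmetic (variable names and layout are illustrative, not the dashboard's actual computation):

```python
# Back out the Tech Center average implied by each statute-specific stat:
# the dashboard shows the examiner's rate and a signed delta vs TC avg,
# so tc_avg = rate - delta. Names here are our own, for illustration.

STATUTE_STATS = {          # statute: (examiner rate %, delta vs TC avg, points)
    "§101": (16.9, -23.1),
    "§103": (50.2, +10.2),
    "§102": (16.7, -23.3),
    "§112": (12.1, -27.9),
}

for statute, (rate, delta) in STATUTE_STATS.items():
    tc_avg = rate - delta  # the Tech Center average estimate for this statute
    print(f"{statute}: examiner {rate:.1f}% vs TC avg ~{tc_avg:.1f}%")
```

All four statutes imply the same ~40.0% Tech Center baseline, which suggests the deltas were computed against a single TC 3600 average for this metric.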

Office Action

§103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to RCE

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 02/18/2026 has been entered.

Priority

Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d) and (f). The certified copy has been filed in parent application JP2021-173398, filed on 10/22/2021.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 1 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 1 recites the limitation "a second position of the other vehicle at a future time subsequent to the current time" and then "the road information comprising a first gradient value at the first position and a second gradient value at a second position".
It is unclear whether the second recitation of "a second position" refers to the same position as the first recitation of "a second position", which renders the claim indefinite. All claims dependent from the above claim are rejected for incorporating the deficiency by virtue of their dependencies.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1 and 3-6 are rejected under 35 U.S.C. 103 as being unpatentable over Yuto (JP2016103131, from IDS) in view of Ueyama (US20070005218) and Nino (US20150344034).

Regarding claim 1, Yuto teaches a signal processing device comprising a processor coupled with memory, configured to (at least [14]-[15], disclosing a CPU and memory):

Receive, during a current time, external environment information from an external environment sensor mounted on a host vehicle, the external information comprising object information including an indication of at least another vehicle detected by the external environment sensor within a distance from the host vehicle and parameters associated with the other vehicle ([16] discloses detecting the vehicle speed of the own vehicle; [17] discloses that the front recognition sensor detects the relative speed and distance of another vehicle, i.e., detects a vehicle in front and a parameter indicative of speed and distance to the other vehicle; [18] discloses collecting the vehicle position on the map from a GPS sensor; [21] discloses calculating the route of the own vehicle based on the navigation and the map, thus using the vehicle GPS sensor; [22] discloses using the front recognition sensor and the map information indicative of the position of the own vehicle, see [18], to determine a predicted route for a preceding vehicle; the determination of the route of the preceding vehicle is thus based on integrating the other vehicle's position on the map with the own vehicle's position on the map and predicting how the other vehicle will behave in the future based on the map).

Receive, during the current time, host vehicle travel information from a vehicle sensor of the host vehicle ([16], detecting the own vehicle speed at a current time using the own vehicle speed sensor; [18], detecting the position of the own vehicle using GPS).

Integrate the external environment information and the host vehicle travel information to generate an integration result ([26] discloses integrating a relative position of the other vehicle, determined based on the position of the host vehicle, i.e., integrating information, wherein the integrated information specifies the position of the preceding vehicle on the map; see also [34]-[41], disclosing integration of the routes, i.e., the own vehicle route information and the route of the preceding vehicle detected by external information, on the map to calculate a lost timing based on positions, i.e., the positional relationship at the future time).

Predict, based on the integration result, a behavior prediction result of the host vehicle and the other vehicle, the behavior prediction result comprising at least a first position of the host vehicle and a second position of the other vehicle at a future time subsequent to the current time ([16]-[18] and [21]-[22], as cited above; see also [26], disclosing integration of the vehicle information and the external sensor recognition results of the preceding vehicle and determining the position of the other vehicle on the map based on the integrated results; at least [16]-[19] and [26]-[35] disclose determining a host vehicle travel route and the travel route of the preceding vehicle based on the integrated information, including the host vehicle position, the relative position of the two vehicles, the relative speeds, and the detection result of the other vehicle; [34]-[41] disclose integrating the routes on the map to determine at least a first position of the host vehicle before a turn and the future position of the preceding vehicle having already turned, to determine that the other vehicle will be lost at that future time; the integration indicates the positional relationship, which is later used to determine the lost vehicle), wherein the behavior prediction includes:

Acquire road shape information on a traveling road surface associated with the first position of the host vehicle and the second position of the other vehicle at the future time ([34]-[42], disclosing acquiring the road shape, being at least an intersection where the vehicles turn at the future timing).

Determine that the other vehicle departs from a detection range of the external environment sensor of the host vehicle at the future time ([34]-[42], disclosing calculating a lost time at which the preceding vehicle will be lost based on the predicted routes of the own vehicle and the other vehicle, and specifically the position of the own vehicle before the turn, the other vehicle having turned, and the road shape being an intersection, thus not allowing the external sensor to detect the other vehicle given the relationship between the sensor's detection range and the road shape).

Store, in the memory, the behavior prediction result based on the other vehicle departing from the detection range at the future time ([30]-[47], disclosing comparing the estimated predicted timing at which the preceding vehicle is lost, i.e., departs from the detection range of the sensor, to the actual timing at which the other vehicle is lost; it is interpreted from the citation that the prediction result is stored).

Detect, when the current time corresponds to the future time, that the other vehicle is absent ([30]-[47], and more specifically [41]-[47], disclosing detecting the time at which the other vehicle is lost and thus confirming that the other vehicle is at a position at least after a turn while the host vehicle is before the turn).
Integrate, when the other vehicle is absent from the detection range, the second position of the other vehicle, using the behavior prediction result stored in memory, into the integration result comprising a reconstruction of the object information, the reconstructed information including an indication of the other vehicle based on the behavior prediction result and the parameters associated with the other vehicle ([30]-[47], and specifically [41]-[47], disclosing integration of the result indicating that the lost vehicle exists on the own vehicle route, i.e., the preceding vehicle has turned onto the route of the own vehicle, based on the actual lost timing coinciding with the predicted lost timing; this is indicative of a reconstruction of the position of the other vehicle by predicting that the other vehicle is still ahead of the own vehicle but is absent due to the road, based on the parameters of the position of the other vehicle at the future time and the prediction that the other vehicle is absent based on its predicted location matching the location after the turn).

Cause, according to the regenerated integration result, a control unit of the host vehicle to execute an action to maintain a predetermined distance or suppress acceleration during the absence of the other vehicle from the detection range (at least [41]-[47], disclosing maintaining the distance along the predicted route of the preceding vehicle, which exists in front of the own vehicle even though the sensor cannot detect it due to the road shape and the other vehicle's predicted location behind the turn).

Yuto does not teach the road shape information comprising a first gradient value at the first position and a second gradient value at a second position, or determining, based on a difference between the first gradient value and the second gradient value at the future time being greater than or equal to a threshold, that the other vehicle departs from a detection range.

Ueyama teaches the road shape information comprising a first gradient value at the first position and a second gradient value at a second position ([0030]-[0040], disclosing the difference between a first gradient and a second gradient at a first and a second position of the host vehicle and a preceding vehicle), and determining, based on a difference between the first gradient value and the second gradient value at the future time being greater than or equal to a threshold, that the other vehicle departs from a detection range ([0030]-[0040] and [0055]-[0056], disclosing determining a lost condition when the difference in gradient is greater than a threshold).

Yuto already teaches determining a lost vehicle based on the shape of the road; Ueyama determines the lost vehicle based on the shape of the road being a gradient. Thus the combination and/or substitution of Ueyama's gradient road shape with Yuto is obvious, yielding predictable results, with the motivation of controlling the own vehicle based on the gradient to determine when another vehicle disappears temporarily and controlling the vehicle accordingly, improving safety and driving performance by immediately detecting the cause of a lost condition of a preceding vehicle, as taught by Ueyama ([0007], [0035]-[0039]).

While Yuto does not explicitly disclose that the first position and the second position correspond to future positions, Nino teaches that the first position and the second position correspond to future positions ([0090]-[0096], disclosing determining the future grade for the vehicles and, based on the future grade, increasing the inter-vehicle distance to the other vehicle).
The combination of the teaching of Nino with the teaching of Yuto as modified by Ueyama is obvious, yielding predictable results, in order to allow controlling the vehicle ahead of time based on predicted gradient changes, thus improving vehicle control and avoiding sudden control when another vehicle slows down unexpectedly.

Regarding claim 4, Yuto teaches the device according to claim 1, wherein the processor is configured to determine whether the other vehicle departs from the detection range of the external environment sensor at the future time ([34]-[42], disclosing determining the departure from the detection range at a future time).

Ueyama teaches comparing gradient values on the traveling road surface of the host vehicle and the other vehicle at the future time ([0036]-[0040], disclosing determining a slope difference between the vehicle and another vehicle at the future position based on the descent of the road, wherein the future position of the other vehicle is denoted as a virtual preceding vehicle; in any event, the recalculation of the gradient difference takes place at multiple times, including a time in the future from the first calculation, thus the gradients are calculated at a future time), and determining, based on the comparison of the gradient values, whether the other vehicle departs from the detection range of the external environment sensor at a future time, the gradient values comprising the first gradient value and the second gradient value ([0036]-[0040], disclosing determining, based on a gradient difference, that the other vehicle will be lost from the detection range of the sensor).

Yuto already teaches determining a lost vehicle based on the shape of the road; Ueyama determines the lost vehicle based on the shape of the road being a gradient, and based on a gradient difference involving the virtual preceding vehicle's location on the route, i.e., its "predicted position in the future". Thus the combination and/or substitution of Ueyama's gradient road shape with Yuto is obvious, yielding predictable results, with the motivation of controlling the own vehicle based on the gradient to determine when another vehicle disappears temporarily and controlling the vehicle accordingly, improving safety and driving performance by immediately detecting the cause of a lost condition of a preceding vehicle, as taught by Ueyama ([0007], [0035]-[0040]).

While Yuto does not explicitly disclose that the first position and the second position correspond to future positions, Nino teaches that the first position and the second position correspond to future positions ([0090]-[0096], disclosing determining the future grade for the vehicles and, based on the future grade, increasing the inter-vehicle distance to the other vehicle). The combination of the teaching of Nino with the teaching of Yuto as modified by Ueyama is obvious, yielding predictable results, in order to allow controlling the vehicle ahead of time based on predicted gradient changes, thus improving vehicle control and avoiding sudden control when another vehicle slows down unexpectedly.
Regarding claim 5, Yuto teaches the device according to claim 1, wherein, when the other vehicle actually departs from the detection range of the external environment sensor at a current time that corresponds to the future time, the processor is configured to acquire, from the memory, the behavior prediction result corresponding to the current time and regenerate the integration result from the integration unit using the behavior prediction result ([34]-[43], disclosing comparing the actual time to the predicted time at which the lost condition occurs based on the integration result; the new information thus indicates that, if the vehicle disappears from the detection range, the predicted route is different and could be straight ahead instead of a right turn along the own predicted route, and otherwise the preceding vehicle will be detected, indicating the future route is correct along the own vehicle route; [41]-[47] further disclose integration of the result indicating that the lost vehicle exists on the own vehicle route, i.e., the preceding vehicle has turned onto the route of the own vehicle, based on the actual lost timing coinciding with the predicted lost timing, which is interpreted as integrating the second position of the other vehicle in response to the absence of the other vehicle from the sensor).

Regarding claim 6, Yuto teaches the signal processing device according to claim 1, wherein the processor is configured to obtain the road shape information based on map information on the traveling road surface of the host vehicle and the other vehicle at the future time ([16]-[22], [27]-[28], and [34]-[40], disclosing determining a road shape at the future time based on map information).

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Yuto (JP2016103131, from IDS) in view of Ueyama (US20070005218), Nino (US20150344034), and Kim (US20190041861).
Regarding claim 7, Yuto as modified by Ueyama and Nino teaches the signal processing device according to claim 1, but does not teach wherein the processor is configured to obtain the road shape information based on a road shape that can be acquired from the external environment information from the external environment sensor. Kim teaches this limitation ([0022]-[0024], disclosing determining a slope of the terrain ahead of the vehicle based on a slope sensor). Yuto teaches detecting lost vehicles based on road shapes; thus the combination with Kim, using a slope sensor to detect the slope of the terrain, is obvious, yielding predictable results, in order to determine that a moving object going up the hill is out of the field of view of the own vehicle and to reduce the speed as a precautionary measure (Kim [0025]).

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Yuto (JP2016103131, from IDS) in view of Ueyama (US20070005218), Nino (US20150344034), and Fairgrieve (US20150081189).

Regarding claim 8, Yuto as modified by Ueyama and Nino teaches the signal processing device according to claim 1, but does not teach wherein the processor is configured to obtain the road shape information based on a moving amount in a height direction of the other vehicle. Fairgrieve teaches this limitation (at least [0046]-[0055], disclosing the vertical shuffle being a change of a predetermined amount in the vertical direction; see also [0091]-[0092]). Fairgrieve is directed to the same problem of detecting the absence of a preceding vehicle due to the road shape ([0017]); thus the combination is obvious to determine a road shape based on the change in height, thereby detecting an inclination that would cause loss of the preceding vehicle, in order to control the vehicle to slow down as a precaution ([0017], [0045]-[0055], [0090]-[0092]).

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Yuto (JP2016103131, from IDS) in view of Ueyama (US20070005218), Nino (US20150344034), and Deguchi (US20160211788).

Regarding claim 9, Yuto as modified by Ueyama and Nino teaches the signal processing device according to claim 1, but does not teach wherein the processor is configured to obtain the road shape information based on an inclination of the host vehicle that can be acquired from the vehicle sensor. Deguchi teaches this limitation ([0037], disclosing determining that the road shape is inclined based on a vehicle inclination sensor). It would have been obvious to include the sensor of Deguchi with the sensor of Yuto, which would yield predictable results and determine a road shape being inclined, in order to control the vehicle based on the gradient to overcome road resistance, as taught by Deguchi ([0052]).

Response to Arguments

Applicant's arguments filed on 02/18/2026 have been fully considered, but they are not all persuasive. With respect to the 102/103 rejection, regarding the first argument that the cited references do not disclose determining that the other vehicle departs from a detection range of the host vehicle based on a difference between the gradient values at the future time: since the claim language is not clear (rejected under 112(b)), this argument is moot.
The claim as written does not necessitate that "a second position" is the same as the position of the preceding vehicle at the future time. Second, the examiner believes that Yuto discloses the reconstruction of the position of the preceding vehicle based on the determination that the preceding vehicle is lost at the position of the own vehicle before the curve, and thus predicts and reconstructs the position of the preceding vehicle to be behind the curve.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. The prior art cited in the PTO-892 and not mentioned above discloses related devices and methods: US20190061527, disclosing braking of the vehicle based on a predicted slope and the distance to a preceding vehicle.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMAD O EL SAYAH, whose telephone number is (571) 270-7734. The examiner can normally be reached M-Th 6:30-4:30.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ramon Mercado, can be reached at (571) 270-5744. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/MOHAMAD O EL SAYAH/
Examiner, Art Unit 3658B

Prosecution Timeline

Apr 18, 2024
Application Filed
Sep 10, 2025
Non-Final Rejection — §103, §112
Oct 27, 2025
Response Filed
Dec 23, 2025
Final Rejection — §103, §112
Feb 18, 2026
Request for Continued Examination
Feb 26, 2026
Response after Non-Final Action
Mar 23, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12600372
OPTIMIZATION OF VEHICLE PERFORMANCE TO SUPPORT VEHICLE CONTROL
2y 5m to grant · Granted Apr 14, 2026
Patent 12576838
PROCESS AND APPARATUS FOR CONTROLLING THE FORWARD MOVEMENT OF A MOTOR VEHICLE AS A FUNCTION OF ROUTE PARAMETERS IN A DRIVING MODE WITH A SINGLE PEDAL
2y 5m to grant · Granted Mar 17, 2026
Patent 12565239
AUTONOMOUS DRIVING PREDICTIVE DEFENSIVE DRIVING SYSTEM THROUGH INTERACTION BASED ON FORWARD VEHICLE DRIVING AND SITUATION JUDGEMENT INFORMATION
2y 5m to grant · Granted Mar 03, 2026
Patent 12554260
Iterative Feedback Motion Planning
2y 5m to grant · Granted Feb 17, 2026
Patent 12552364
VEHICLE TURNING CONTROL DEVICE
2y 5m to grant · Granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 76%
With Interview: 82% (+5.4%)
Median Time to Grant: 2y 9m
PTA Risk: High
Based on 218 resolved cases by this examiner. Grant probability derived from career allow rate.
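The headline grant probability can be reproduced from the career counts shown earlier. A minimal sketch, assuming (as the caption states) that the probability is simply the career allow rate; variable names are our own, and the interview-adjusted 82% figure is reported by the dashboard rather than derived here:

```python
# Reproduce the dashboard's 76% grant probability from raw career counts.
# Assumption (per the caption): probability = career allow rate.

GRANTED = 166    # career grants by this examiner
RESOLVED = 218   # resolved cases (grants plus abandonments)

allow_rate_pct = GRANTED / RESOLVED * 100
print(f"Career allow rate: {allow_rate_pct:.1f}%")  # 76.1%, displayed as 76%

# The +24.1% vs-TC-average delta implies a TC-wide allow rate of about:
tc_avg_pct = allow_rate_pct - 24.1
print(f"Implied TC average: {tc_avg_pct:.1f}%")     # ~52.0%
```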
