Prosecution Insights
Last updated: April 19, 2026
Application No. 18/540,964

VEHICLE CONTROL DEVICE, VEHICLE CONTROL METHOD, AND STORAGE MEDIUM

Status: Final Rejection (§103)
Filed: Dec 15, 2023
Examiner: FEES, CHRISTOPHER GEORGE
Art Unit: 3662
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Honda Motor Co., Ltd.
OA Round: 2 (Final)

Grant Probability: 54% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 3y 5m
With Interview: 80%

Examiner Intelligence

Career Allow Rate: 54% (76 granted / 141 resolved; +1.9% vs TC avg)
Interview Lift: +25.7% (strong; resolved cases with interview vs. without)
Avg Prosecution: 3y 5m (typical timeline)
Total Applications: 173 across all art units (32 currently pending)
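
These headline figures can be reproduced from the counts shown above. A minimal Python sketch (the function name is ours, and treating the interview lift as additive percentage points is an assumption, though it matches the 80% with-interview figure displayed):

    # Reproduce the headline examiner statistics from the raw counts above.
    # Assumption: the interview lift is additive in percentage points, which
    # is consistent with 54% + 25.7 points ~= the displayed 80% figure.

    GRANTED = 76           # from "76 granted / 141 resolved"
    RESOLVED = 141
    INTERVIEW_LIFT = 25.7  # percentage points, from "+25.7% Interview Lift"

    def career_allow_rate(granted: int, resolved: int) -> float:
        """Allowance rate over resolved cases, as a percentage."""
        return 100.0 * granted / resolved

    base = career_allow_rate(GRANTED, RESOLVED)
    print(f"Career allow rate: {base:.1f}%")                   # 53.9 -> shown as 54%
    print(f"With interview:    {base + INTERVIEW_LIFT:.1f}%")  # 79.6 -> shown as 80%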

Statute-Specific Performance

§101: 17.6% (-22.4% vs TC avg)
§103: 57.2% (+17.2% vs TC avg)
§102: 15.2% (-24.8% vs TC avg)
§112: 8.9% (-31.1% vs TC avg)
Tech Center averages are estimates • Based on career data from 141 resolved cases
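
Each rate is shown with its delta from the Tech Center average, and back-solving every delta yields the same implied baseline of 40.0%. A quick Python check (the dict literal simply restates the figures above):

    # Back out the implied Tech Center average from each (rate, delta) pair.
    # Since rate = tc_avg + delta, tc_avg = rate - delta for every statute.

    stats = {             # statute: (allow rate %, delta vs TC avg in points)
        "§101": (17.6, -22.4),
        "§103": (57.2, +17.2),
        "§102": (15.2, -24.8),
        "§112": ( 8.9, -31.1),
    }

    for statute, (rate, delta) in stats.items():
        print(f"{statute}: implied TC average = {rate - delta:.1f}%")  # 40.0% in every row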

Office Action

§103
DETAILED ACTION Response to Amendment This office action regarding application number 18/540,964, filed December 15, 2023, is in response to the applicant's arguments and amendments filed December 2, 2025. Claims 3-5 have been cancelled. Claims 1-2 and 6-10 have been amended. Claims 1-2 and 6-10 are currently pending and are addressed below. Notice of Pre-AIA or AIA Status The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Response to Arguments The applicant's arguments and amendments to the application have overcome some of the objections and rejections previously set forth in the Non-Final action mailed September 3, 2025. Claims 3-5 have been cancelled and therefore all associated objections and rejections are withdrawn. Applicant's amendments to the specification have been deemed sufficient to overcome the previous objections; therefore, the objections are withdrawn. Applicant's amendments to the claims have rendered portions of the previous interpretation under 35 USC 112(f) moot through the removal of the relevant language; however, the interpretations with regard to “a detection device” are maintained. Applicant's amendments to claims 1 and 9-10 have been deemed sufficient to overcome the previous 35 USC 102 rejections through the inclusion of “using the first information, the second information, true value data, a travel situation, and vehicle model information of the vehicle as input, generate a runway estimation model, and using the runway estimation model, estimate the runway of the vehicle based on the first information, the second information, the travel situation, and the vehicle model information of the vehicle”; therefore the rejections are withdrawn. However, as this changes the scope of the claims, new art rejections have been made based on the changes in scope. Additionally, the applicant's arguments have been fully considered but are not fully persuasive for the reasons seen below. Applicant’s arguments with respect to claim(s) 1 and 9-10 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. On page 9 the applicant argues “Assignee's representative submits that Tamura clearly does not disclose the presently claimed combination of features, considered in light of at least the above-highlighted portions of paragraph [0063], e.g., Tamura does not teach or suggest that a more accurate runway estimation may be determined because the installation position of the detection device DD, the number of detection devices DD, recognition performance, or the like differs for each vehicle model, and a recognition result accompanying it is also different.”; the examiner respectfully disagrees. MPEP 2142-2144 discusses the requirements for a case of obviousness using 35 USC 103 and provides examples of such cases. MPEP 2111 discusses Broadest Reasonable Interpretation and the interpretation of claims. In response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., installation position, number of devices, recognition performance) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir.
1993). Claim Interpretation The following is a quotation of 35 U.S.C. 112(f): (f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof. The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph: An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof. The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function. Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function. Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 
112, sixth paragraph, except as otherwise indicated in an Office action. This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “a detection device” in claims 1-8 and 10. Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. Regarding “a detection device” the specification recites the structure of “A combination of the camera 10, the radar device 12, the LIDAR sensor 14, and the physical object recognition device 16 is an example of a "detection device DD.”” in at least page 7 of the instant specification. Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. Claim(s) 1-2 and 6-10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Tamura (US-20210291830) in view of Tian (US-20200231179). 
Regarding claim 1, Tamura teaches a vehicle control device comprising (Paragraph [0004], "According to one embodiment of the present invention, there is provided a travel control apparatus comprising:") a processor that executes instructions to (Paragraph [0020], “A vehicle control apparatus of FIG. 1 includes a control unit 2. The control unit 2 includes a plurality of ECUs 20 to 29 communicably connected by an in-vehicle network. Each ECU includes a processor represented by a CPU, a storage device such as a semiconductor memory, an interface with an external device, and the like.”) control traveling of a vehicle on a runway of the vehicle (Paragraph [0022], "The ECU 20 executes control associated with automated driving of the vehicle 1. In automated driving, at least one of steering and acceleration/deceleration of the vehicle 1 is automatically controlled,” here the ECU is configured to control traveling of the vehicle) decided on according to first information based on an output of a detection device obtained by detecting a surrounding situation of the vehicle and second information based on map information (See Figure 4 showing first lane detection information S1, and second map information S2, both being used to determine a travel mode which controls a traveling of the vehicle) estimate the runway of the vehicle on the basis of at least the first information and the second information (Paragraph [0044], "In this embodiment, the control unit 2 executes lane shape comparison based on the map information and the information obtained by the cameras 41 of each of a distant section B, which is ahead of the vehicle 1 as the self-vehicle, and a nearby section A, which is ahead of the vehicle 1 but is closer to the side of the vehicle 1 than the distant section B. Subsequently, the control unit 2 determines, based on the result of the comparison, whether the map information and the information obtained by the cameras 41 match." here the system is using first camera information and second map information in order to estimate the lane/path/runway of the vehicle) and decide on the runway of the vehicle on the basis of the first information, the second information, and runway estimation information about the runway of the vehicle (Paragraph [0045-0046], "FIG. 3A shows a state in which the map information and the information obtained by the cameras 41 have matched. In FIG. 3A, in both the nearby section A and the distant section B, the shapes of left and right lanes L and R of the actual road, the shapes of left and right lanes ML1 and MR1 based on the map information, and the shapes of left and right lanes CL1 and CR1 based on the information obtained by the cameras 41 match each other.") (Paragraph [0038], "An example of travel control performed by the control unit 2 in the combination mode is, for example, lane maintenance control in which the vehicle 1 is controlled to travel in the center of the travel lane. Note that since the control unit 2 will use both the map information and the information obtained by the cameras 41 in the combination mode, travel control can be executed highly accurately. 
Hence, in the combination mode, the control unit 2 may execute the travel control in a “hands-off” state in which the driver is not requested to grip the steering wheel,” here the system is deciding a travel control method to follow the decided lane/path/runway, this decision is based on the first camera information, second map information, and a result of the earlier estimation which determines a match of the two information pieces) wherein the first information comprises information of a first marking for defining a travel lane of the vehicle recognized on the basis of an output of the detection device (Paragraph [0024], “When images captured by the cameras 41 are analyzed, the contour of a target or a division line (a white line or the like) of a lane on a road can be extracted,” here the system is using a camera image to determine markings of a travel lane) wherein the second information comprises information of a second marking for defining the travel lane of the vehicle acquired from the map information on the basis of position information of the vehicle (Paragraph [0037], “the shape of the lane ahead of the vehicle 1 based on the map information and the information of the current position obtained by the GPS sensor 24b”) (Paragraph [0016], “the shapes of the division lines recognized by a camera and the shape of the division lines based on the map information will not match,” here the map information includes division lines/markings) and wherein the processor further executes instructions to (Paragraph [0020], “A vehicle control apparatus of FIG. 1 includes a control unit 2. The control unit 2 includes a plurality of ECUs 20 to 29 communicably connected by an in-vehicle network. Each ECU includes a processor represented by a CPU, a storage device such as a semiconductor memory, an interface with an external device, and the like.”) estimate the runway of the vehicle based on the first information, the second information, the travel situation(Paragraph [0027], “a server that provides map information and traffic information and acquires these pieces of information. … The ECU 24 searches for a route from the current position to the destination.”) (Paragraph [0039], “The camera priority mode is a mode in which travel control is performed by prioritizing the information obtained by the cameras 41 over the map information. In this mode, for example, if the map information and the information obtained by the cameras 41 are not determined to be consistent with each other or if the map information cannot be obtained, the control unit 2 will execute travel control by prioritizing the information obtained by the cameras 41. An example of travel control to be performed by the control unit 2 in the camera priority mode is, for example, the lane maintenance control in which the vehicle 1 is controlled to travel in the center of the travel lane. Note that in the camera priority mode, the control unit 2 can execute travel control in a “hands-on” state in which the driver is requested to grip the steering wheel,” here the system is performing the estimating while taking into account a travel mode/travel situation of the vehicle). 
However, Tamura does not explicitly teach using the first information, the second information, true value data, a travel situation, and vehicle model information of the vehicle as input, generate a runway estimation model and using the runway estimation model, estimate the runway of the vehicle based on the first information, the second information, the travel situation and the vehicle model information of the vehicle. Tian teaches guidance systems and methods for a vehicle using acquired information to determine guidance information including a processor configured to generate a runway estimation model (Paragraph [0142], “Here, the learner 242 may additionally generate training data using features extracted from a data group based on external factors and learn the prediction model MDL based on the generated training data.”) using the first information, the second information, true value data, a travel situation, and vehicle model information of the vehicle as input (EXAMINER'S NOTE: Here the examiner is referring to the specification for “true value data”; the specification recites “The true value data is, for example, information about the runway on which the vehicle has actually traveled at a point in time when the first information and the second information have been recognized.”; therefore the true value data is being interpreted as information about the roadway on which the vehicle is currently traveling) (Paragraph [0127], “Next, the learner 242 classifies a plurality of pieces of driving data and profile data acquired by the acquirer 232 in the process of S400 into a data group”) (Paragraph [0167], “For example, the recognizer 430 compares a pattern of road lane lines (for example, an arrangement of solid lines and broken lines) obtained from the high-precision map data 362 with a pattern of road lane lines around the host vehicle M recognized from the image captured by the camera 310 and thus recognizes the host lane and the adjacent lane.”) (Paragraph [0047], “The driving data is data including the situation inside the vehicle when an occupant drives the vehicle M, the status of the vehicle M, and the like, and more specifically, a vehicle type of the vehicle M, the weather during traveling, the time during traveling, the speed of the vehicle M, the number of passengers, the current position of the vehicle M, the location of the destination, the traveling route from the current position to the destination, the duration of driving of the occupant, whether there is conversation inside the vehicle M, an occupant's feeling, and the like are included,” here the system is using a plurality of information pieces including first information/camera images, second information/map data, true value data/current lane/adjacent lanes, travel situation/position/route data, vehicle model/vehicle type; the system is inputting these plurality of data pieces into a learner which is generating a prediction model based on this data) (See also figures 13-14) and using the runway estimation model, estimate the runway of the vehicle based on the first information, the second information, the travel situation and the vehicle model information of the vehicle (See Figure 7 showing the various features and information pieces being input into the prediction model) (Paragraph [0133], “For example, the learner 242 inputs features included in the training data to the prediction model MDL and derives a difference between the occurrence probability output from the prediction model and the occurrence probability
associated as a training label for features input to the prediction model MDL. Then, the learner 242 determines parameters such as a weighting factor and a bias component of the prediction model MDL using a probabilistic gradient method or the like so that the derived difference becomes smaller,” here the system is using a learner to output a prediction model using training data; the new model is then used to output more accurate determinations such as the runway estimation of Tamura). Tamura and Tian are analogous art as they are both generally related to systems and methods for guiding and controlling a vehicle. It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant application to include using the first information, the second information, true value data, a travel situation, and vehicle model information of the vehicle as input, generate a runway estimation model and using the runway estimation model, estimate the runway of the vehicle based on the first information, the second information, the travel situation and the vehicle model information of the vehicle of Tian in the system of controlling a vehicle of Tamura with a reasonable expectation of success in order to improve the accuracy of the system by iteratively improving the model used to identify exterior information (Paragraph [0133], “For example, the learner 242 inputs features included in the training data to the prediction model MDL and derives a difference between the occurrence probability output from the prediction model and the occurrence probability associated as a training label for features input to the prediction model MDL. Then, the learner 242 determines parameters such as a weighting factor and a bias component of the prediction model MDL using a probabilistic gradient method or the like so that the derived difference becomes smaller”). Regarding claim 2, Tamura teaches the system as discussed above in claim 1; Tamura further teaches wherein the processor estimates the runway of the vehicle when a deviation degree between the first information and the second information is greater than or equal to a threshold value (Paragraph [0059], “In step S301, the ECU 20 confirms, based on the information obtained in the processes of steps S1 and S2, whether the angular difference Δθ1, between the lane based on the map information of the distant section B and the information obtained by the cameras 41 of the distant section B, is continuously equal to or greater than a threshold T1. If it is determined that the angular difference Δθ1 is continuously equal to or greater than the threshold T1, the ECU 20 will advance the process to step S302. Otherwise, the process will advance to step S303. For example, in terms of the example shown in FIG. 3B, the ECU 20 will confirm whether the angular difference Δθ1 between the lanes ML2 and MR2 and the lanes CL2 and CR2 is continuously equal to or greater than the threshold T1. The threshold T1 may be, for example, 1.0° to 3.0°. More specifically, the threshold T1 may be 1.5°.”). Regarding claim 6, Tamura teaches the system as discussed above in claim 1; Tamura further teaches wherein the processor executes driving control for controlling one or both of steering and a speed of the vehicle runway (Paragraph [0022], "The ECU 20 executes control associated with automated driving of the vehicle 1.
In automated driving, at least one of steering and acceleration/deceleration of the vehicle 1 is automatically controlled,” here the ECU is configured to control traveling of the vehicle) wherein the driving control comprises a first driving mode and a second driving mode having a heavier task imposed on a driver of the vehicle than the first driving mode or having a lower assistance degree for the driving than the first driving mode (Paragraph [0035], “FIG. 2 is a view showing travel control mode switching of the vehicle 1 performed by the control unit 2. In this embodiment, the control unit 2 controls the travel of the vehicle 1 by switching the control mode between a manual driving mode, a combination mode, and camera priority mode,” here the system includes a plurality of driving modes including the combination mode and manual mode, the manual mode having a heavier task imposed on a driver than the combination mode) and wherein the processor switches the driving mode from the first driving mode to the second driving mode when the deviation degree greater than or equal to a threshold value (Paragraph [0042], “For example, the control unit 2 can switch to the combination mode when it is determined that an occupant has made an operation such as turning on a switch to start automated driving and that the map information and the information obtained by the cameras 41 match in the manual driving mode. … In addition, for example, the control unit 2 may switch to the manual driving mode when the cameras become unable to recognize the division lines while one of the combination mode and the camera priority mode is set,” here the system can determine a switch to a manual mode from the autonomous mode based on a determination that the camera information is invalid using a threshold deviation) (See Figures 4, 5A and 5B which show the determination of a mode switch based on a matching determination using a deviation and an amount of time) has continued for a prescribed period of time or more in a state in which the first driving mode is being executed (Paragraph [0060], “In one embodiment, if a state in which the angular difference Δθ1≥the threshold T1 has continued for a predetermined time, the ECU 20 may determine that the angular difference Δθ1 is continuously equal to or greater than the threshold T1. For example, if a state in which the angular difference Δθ1 is continuously equal to or greater than the threshold T1 has continued for 0.5 sec to 3 sec, the ECU 20 may determine that the angular difference Δθ1 is continuously equal to or greater than the threshold T1.”). Regarding claim 7, Tamura teaches the system as discussed above in claim 1, Tamura further teaches wherein the processor further executes instructions to notify the driver of control content in the processor (Paragraph [0031], “The input/output device 9 outputs information to the driver and accepts input of information from the driver. A voice output device 91 notifies the driver of the information by voice (words). A display device 92 notifies the driver of information by displaying an image. 
The display device 92 is arranged, for example, in front of the driver's seat and constitutes an instrument panel or the like.”) wherein the processor changes content whose notification is provided to the driver in accordance with switching of the driving mode when the deviation degree greater than or equal to the threshold value has continued for the prescribed period of time or more (Paragraph [0042], “Note that in a case in which the travel mode is to be switched from one of the combination mode and the camera priority mode to the manual driving mode, the control unit 2 may request (takeover request) the driver to switch to manual driving,” here the system can output a request/notification to a driver to switch to manual driving in accordance to the determination that the deviation degree has exceeded the threshold) (See Figures 4, 5A and 5B which show the determination of a mode switch based on a matching determination using a deviation and an amount of time). Regarding claim 8, the combination of Tamura and Tian teaches the system as discussed above in claim 3, however Tamura does not explicitly teach wherein the processor relearns the runway estimation model using the runway of the vehicle estimated and a runway on which the vehicle has actually traveled. Tian further teaches wherein the learner relearns the runway estimation model using the runway of the vehicle estimated by the runway estimator and a runway on which the vehicle has actually traveled (Paragraph [0088], “the prediction model data 212 may be data in which a prediction model MDL is associated with a spot SP at which a predetermined event has occurred at least once in the past or a spot SP at which a predetermined event has occurred frequently in the past. The prediction model MDL associated with each spot SP is learned by a learner 242”) (Paragraph [0130], “Next, the learner 242 generates training data for learning a prediction model MDL associated with a spot at which the predetermined event has occurred using the features extracted by the feature extractor 234 in the process of S404 (Step S406),” here the system is using past data in order to relearn/retrain the model in order to attain more accurate results, while Tian is not explicitly directed towards a runway of the vehicle, the methodology could reasonably be applied to the runway estimation of Tamura). Tamura and Tian are analogous art as they are both generally related to systems and methods for guiding and controlling a vehicle. It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant application to include wherein the learner relearns the runway estimation model using the runway of the vehicle estimated by the runway estimator and a runway on which the vehicle has actually traveled of Tian in the system of controlling a vehicle of Tamura with a reasonable expectation of success in order to improve the accuracy of the system by iteratively improving the model used to identify exterior information (Paragraph [0133], “For example, the learner 242 inputs features included in the training data to the prediction model MDL and derives a difference between the occurrence probability output from the prediction model and the occurrence probability associated as a training label for features input to the prediction model MDL. 
Then, the learner 242 determines parameters such as a weighting factor and a bias component of the prediction model MDL using a probabilistic gradient method or the like so that the derived difference becomes smaller”). Regarding claim 9, Tamura teaches a vehicle control method comprising: (Paragraph [0002], “The present invention relates to a travel control apparatus, a vehicle, a travel control method, and a non-transitory computer-readable storage medium.”) controlling, by a computer, traveling of a vehicle on a runway of the vehicle (Paragraph [0022], "The ECU 20 executes control associated with automated driving of the vehicle 1. In automated driving, at least one of steering and acceleration/deceleration of the vehicle 1 is automatically controlled,” here the ECU is configured to control traveling of the vehicle) decided on according to first information based on an output of a detection device obtained by detecting a surrounding situation of the vehicle and second information based on map information (See Figure 4 showing first lane detection information S1, and second map information S2, both being used to determine a travel mode which controls a traveling of the vehicle) estimating, by the computer, the runway of the vehicle on the basis of the first information and the second information (Paragraph [0044], "In this embodiment, the control unit 2 executes lane shape comparison based on the map information and the information obtained by the cameras 41 of each of a distant section B, which is ahead of the vehicle 1 as the self-vehicle, and a nearby section A, which is ahead of the vehicle 1 but is closer to the side of the vehicle 1 than the distant section B. Subsequently, the control unit 2 determines, based on the result of the comparison, whether the map information and the information obtained by the cameras 41 match." here the system is using first camera information and second map information in order to estimate the lane/path/runway of the vehicle) deciding, by the computer, on the runway of the vehicle on the basis of the first information, the second information, and runway estimation information about the estimated runway of the vehicle (Paragraph [0045-0046], "FIG. 3A shows a state in which the map information and the information obtained by the cameras 41 have matched. In FIG. 3A, in both the nearby section A and the distant section B, the shapes of left and right lanes L and R of the actual road, the shapes of left and right lanes ML1 and MR1 based on the map information, and the shapes of left and right lanes CL1 and CR1 based on the information obtained by the cameras 41 match each other.") (Paragraph [0038], "An example of travel control performed by the control unit 2 in the combination mode is, for example, lane maintenance control in which the vehicle 1 is controlled to travel in the center of the travel lane. Note that since the control unit 2 will use both the map information and the information obtained by the cameras 41 in the combination mode, travel control can be executed highly accurately. 
Hence, in the combination mode, the control unit 2 may execute the travel control in a “hands-off” state in which the driver is not requested to grip the steering wheel,” here the system is deciding a travel control method to follow the decided lane/path/runway, this decision is based on the first camera information, second map information, and a result of the earlier estimation which determines a match of the two information pieces) wherein the first information comprises information of a first marking for defining a travel lane of the vehicle recognized on the basis of an output of the detection device (Paragraph [0024], “When images captured by the cameras 41 are analyzed, the contour of a target or a division line (a white line or the like) of a lane on a road can be extracted,” here the system is using a camera image to determine markings of a travel lane) and wherein the second information comprises information of a second marking for defining the travel lane of the vehicle acquired from the map information on the basis of a position information of the vehicle (Paragraph [0037], “the shape of the lane ahead of the vehicle 1 based on the map information and the information of the current position obtained by the GPS sensor 24b”) (Paragraph [0016], “the shapes of the division lines recognized by a camera and the shape of the division lines based on the map information will not match,” here the map information includes division lines/markings) estimate the runway of the vehicle based on the first information, the second information, the travel situation(Paragraph [0027], “a server that provides map information and traffic information and acquires these pieces of information. … The ECU 24 searches for a route from the current position to the destination.”) (Paragraph [0039], “The camera priority mode is a mode in which travel control is performed by prioritizing the information obtained by the cameras 41 over the map information. In this mode, for example, if the map information and the information obtained by the cameras 41 are not determined to be consistent with each other or if the map information cannot be obtained, the control unit 2 will execute travel control by prioritizing the information obtained by the cameras 41. An example of travel control to be performed by the control unit 2 in the camera priority mode is, for example, the lane maintenance control in which the vehicle 1 is controlled to travel in the center of the travel lane. Note that in the camera priority mode, the control unit 2 can execute travel control in a “hands-on” state in which the driver is requested to grip the steering wheel,” here the system is performing the estimating while taking into account a travel mode/travel situation of the vehicle). However Tamura does not explicitly teach generating, by the computer a runway estimation model in which the first information, the second information, true value data, a travel situation, and vehicle model information of the vehicle are input and the runway estimation information is output and estimating, by the computer, the runway of the vehicle based on the first information, the second information, the travel situation, and the vehicle model information of the vehicle using the runway estimation model. 
Tian teaches guidance systems and methods for a vehicle using acquired information to determine guidance information including generating, by the computer, a runway estimation model (Paragraph [0142], “Here, the learner 242 may additionally generate training data using features extracted from a data group based on external factors and learn the prediction model MDL based on the generated training data.”) in which the first information, the second information, true value data, a travel situation, and vehicle model information of the vehicle are input and the runway estimation information is output (EXAMINER'S NOTE: Here the examiner is referring to the specification for “true value data”; the specification recites “The true value data is, for example, information about the runway on which the vehicle has actually traveled at a point in time when the first information and the second information have been recognized.”; therefore the true value data is being interpreted as information about the roadway on which the vehicle is currently traveling) (Paragraph [0127], “Next, the learner 242 classifies a plurality of pieces of driving data and profile data acquired by the acquirer 232 in the process of S400 into a data group”) (Paragraph [0167], “For example, the recognizer 430 compares a pattern of road lane lines (for example, an arrangement of solid lines and broken lines) obtained from the high-precision map data 362 with a pattern of road lane lines around the host vehicle M recognized from the image captured by the camera 310 and thus recognizes the host lane and the adjacent lane.”) (Paragraph [0047], “The driving data is data including the situation inside the vehicle when an occupant drives the vehicle M, the status of the vehicle M, and the like, and more specifically, a vehicle type of the vehicle M, the weather during traveling, the time during traveling, the speed of the vehicle M, the number of passengers, the current position of the vehicle M, the location of the destination, the traveling route from the current position to the destination, the duration of driving of the occupant, whether there is conversation inside the vehicle M, an occupant's feeling, and the like are included,” here the system is using a plurality of information pieces including first information/camera images, second information/map data, true value data/current lane/adjacent lanes, travel situation/position/route data, vehicle model/vehicle type; the system is inputting these plurality of data pieces into a learner which is generating a prediction model based on this data) (See also figures 13-14) and estimating, by the computer, the runway of the vehicle based on the first information, the second information, the travel situation, and the vehicle model information of the vehicle using the runway estimation model (See Figure 7 showing the various features and information pieces being input into the prediction model) (Paragraph [0133], “For example, the learner 242 inputs features included in the training data to the prediction model MDL and derives a difference between the occurrence probability output from the prediction model and the occurrence probability associated as a training label for features input to the prediction model MDL.
Then, the learner 242 determines parameters such as a weighting factor and a bias component of the prediction model MDL using a probabilistic gradient method or the like so that the derived difference becomes smaller,” here the system is using a learner to output a prediction model using training data, the new model is then used to output more accurate determinations such as the runway estimation of Tamura). Tamura and Tian are analogous art as they are both generally related to systems and methods for guiding and controlling a vehicle. It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant application to include generating, by the computer a runway estimation model in which the first information, the second information, true value data, a travel situation, and vehicle model information of the vehicle are input and the runway estimation information is output and estimating, by the computer, the runway of the vehicle based on the first information, the second information, the travel situation, and the vehicle model information of the vehicle using the runway estimation model of Tian in the system of controlling a vehicle of Tamura with a reasonable expectation of success in order to improve the accuracy of the system by iteratively improving the model used to identify exterior information (Paragraph [0133], “For example, the learner 242 inputs features included in the training data to the prediction model MDL and derives a difference between the occurrence probability output from the prediction model and the occurrence probability associated as a training label for features input to the prediction model MDL. Then, the learner 242 determines parameters such as a weighting factor and a bias component of the prediction model MDL using a probabilistic gradient method or the like so that the derived difference becomes smaller”). Regarding claim 10, Tamura teaches a computer-readable non-transitory storage medium storing a program for causing a computer to: (Paragraph [0002], “The present invention relates to a travel control apparatus, a vehicle, a travel control method, and a non-transitory computer-readable storage medium.”) control traveling of a vehicle on a runway of the vehicle (Paragraph [0022], "The ECU 20 executes control associated with automated driving of the vehicle 1. In automated driving, at least one of steering and acceleration/deceleration of the vehicle 1 is automatically controlled,” here the ECU is configured to control traveling of the vehicle) decided on according to first information based on an output of a detection device obtained by detecting a surrounding situation of the vehicle and second information based on map information (See Figure 4 showing first lane detection information S1, and second map information S2, both being used to determine a travel mode which controls a traveling of the vehicle) estimate the runway of the vehicle on the basis of the first information and the second information (Paragraph [0044], "In this embodiment, the control unit 2 executes lane shape comparison based on the map information and the information obtained by the cameras 41 of each of a distant section B, which is ahead of the vehicle 1 as the self-vehicle, and a nearby section A, which is ahead of the vehicle 1 but is closer to the side of the vehicle 1 than the distant section B. 
Subsequently, the control unit 2 determines, based on the result of the comparison, whether the map information and the information obtained by the cameras 41 match." here the system is using first camera information and second map information in order to estimate the lane/path/runway of the vehicle) decide on the runway of the vehicle on the basis of the first information, the second information, and runway estimation information about the estimated runway of the vehicle (Paragraph [0045-0046], "FIG. 3A shows a state in which the map information and the information obtained by the cameras 41 have matched. In FIG. 3A, in both the nearby section A and the distant section B, the shapes of left and right lanes L and R of the actual road, the shapes of left and right lanes ML1 and MR1 based on the map information, and the shapes of left and right lanes CL1 and CR1 based on the information obtained by the cameras 41 match each other.") (Paragraph [0038], "An example of travel control performed by the control unit 2 in the combination mode is, for example, lane maintenance control in which the vehicle 1 is controlled to travel in the center of the travel lane. Note that since the control unit 2 will use both the map information and the information obtained by the cameras 41 in the combination mode, travel control can be executed highly accurately. Hence, in the combination mode, the control unit 2 may execute the travel control in a “hands-off” state in which the driver is not requested to grip the steering wheel,” here the system is deciding a travel control method to follow the decided lane/path/runway, this decision is based on the first camera information, second map information, and a result of the earlier estimation which determines a match of the two information pieces) wherein the first information comprises information of a first marking for defining a travel lane of the vehicle recognized on the basis of an output of the detection device (Paragraph [0024], “When images captured by the cameras 41 are analyzed, the contour of a target or a division line (a white line or the like) of a lane on a road can be extracted,” here the system is using a camera image to determine markings of a travel lane) wherein the second information comprises information of a second marking for defining the travel lane of the vehicle acquired from the map information on the basis of position information of the vehicle (Paragraph [0037], “the shape of the lane ahead of the vehicle 1 based on the map information and the information of the current position obtained by the GPS sensor 24b”) (Paragraph [0016], “the shapes of the division lines recognized by a camera and the shape of the division lines based on the map information will not match,” here the map information includes division lines/markings) and wherein the processor further executes instructions to (Paragraph [0020], “A vehicle control apparatus of FIG. 1 includes a control unit 2. The control unit 2 includes a plurality of ECUs 20 to 29 communicably connected by an in-vehicle network. Each ECU includes a processor represented by a CPU, a storage device such as a semiconductor memory, an interface with an external device, and the like.”) estimate the runway of the vehicle based on the first information, the second information, the travel situation(Paragraph [0027], “a server that provides map information and traffic information and acquires these pieces of information. 
… The ECU 24 searches for a route from the current position to the destination.”) (Paragraph [0039], “The camera priority mode is a mode in which travel control is performed by prioritizing the information obtained by the cameras 41 over the map information. In this mode, for example, if the map information and the information obtained by the cameras 41 are not determined to be consistent with each other or if the map information cannot be obtained, the control unit 2 will execute travel control by prioritizing the information obtained by the cameras 41. An example of travel control to be performed by the control unit 2 in the camera priority mode is, for example, the lane maintenance control in which the vehicle 1 is controlled to travel in the center of the travel lane. Note that in the camera priority mode, the control unit 2 can execute travel control in a “hands-on” state in which the driver is requested to grip the steering wheel,” here the system is performing the estimating while taking into account a travel mode/travel situation of the vehicle). However, Tamura does not explicitly teach generate a runway estimation model in which the first information, the second information, true value data, a travel situation, and vehicle model information of the vehicle are input and the runway estimation information is output and estimate the runway of the vehicle based on the first information, the second information, the travel situation, and the vehicle model information of the vehicle using the runway estimation model. Tian teaches guidance systems and methods for a vehicle using acquired information to determine guidance information including a processor configured to generate a runway estimation model (Paragraph [0142], “Here, the learner 242 may additionally generate training data using features extracted from a data group based on external factors and learn the prediction model MDL based on the generated training data.”) in which the first information, the second information, true value data, a travel situation, and vehicle model information of the vehicle are input and the runway estimation information is output (EXAMINER'S NOTE: Here the examiner is referring to the specification for “true value data”; the specification recites “The true value data is, for example, information about the runway on which the vehicle has actually traveled at a point in time when the first information and the second information have been recognized.”; therefore the true value data is being interpreted as information about the roadway on which the vehicle is currently traveling) (Paragraph [0127], “Next, the learner 242 classifies a plurality of pieces of driving data and profile data acquired by the acquirer 232 in the process of S400 into a data group”) (Paragraph [0167], “For example, the recognizer 430 compares a pattern of road lane lines (for example, an arrangement of solid lines and broken lines) obtained from the high-precision map data 362 with a pattern of road lane lines around the host vehicle M recognized from the image captured by the camera 310 and thus recognizes the host lane and the adjacent lane.”) (Paragraph [0047], “The driving data is data including the situation inside the vehicle when an occupant drives the vehicle M, the status of the vehicle M, and the like, and more specifically, a vehicle type of the vehicle M, the weather during traveling, the time during traveling, the speed of the vehicle M, the number of passengers, the current position of the vehicle M, the location of the
destination, the traveling route from the current position to the destination, the duration of driving of the occupant, whether there is conversation inside the vehicle M, an occupant's feeling, and the like are included,“ here the system is using a plurality of information pieces including first information/camera images, second information/map data, true value data/current lane/adjacent lanes, travel situation/position/route data, vehicle model/vehicle type; the system is inputting these plurality of data pieces into a learner which is generating a prediction model based on this data) (See also figures 13-14) and estimate the runway of the vehicle based on the first information, the second information, the travel situation, and the vehicle model information of the vehicle using the runway estimation model (See Figure 7 showing the various features and information pieces being input into the prediction model) (Paragraph [0133], “For example, the learner 242 inputs features included in the training data to the prediction model MDL and derives a difference between the occurrence probability output from the prediction model and the occurrence probability associated as a training label for features input to the prediction model MDL. Then, the learner 242 determines parameters such as a weighting factor and a bias component of the prediction model MDL using a probabilistic gradient method or the like so that the derived difference becomes smaller,” here the system is using a learner to output a prediction model using training data, the new model is then used to output more accurate determinations such as the runway estimation of Tamura). Tamura and Tian are analogous art as they are both generally related to systems and methods for guiding and controlling a vehicle. It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant application to include generate a runway estimation model in which the first information, the second information, true value data, a travel situation, and vehicle model information of the vehicle are input and the runway estimation information is output and estimate the runway of the vehicle based on the first information, the second information, the travel situation, and the vehicle model information of the vehicle using the runway estimation model of Tian in the system of controlling a vehicle of Tamura with a reasonable expectation of success in order to improve the accuracy of the system by iteratively improving the model used to identify exterior information (Paragraph [0133], “For example, the learner 242 inputs features included in the training data to the prediction model MDL and derives a difference between the occurrence probability output from the prediction model and the occurrence probability associated as a training label for features input to the prediction model MDL. Then, the learner 242 determines parameters such as a weighting factor and a bias component of the prediction model MDL using a probabilistic gradient method or the like so that the derived difference becomes smaller”). Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Foster (US-20230140569) teaches a digital map is updated by the in-vehicle control computer with detected roadway data that is a fusion of roadway perception data from at least one perception sensor on the autonomous vehicle and real time GPS signal from at least one GPS receiving devices on the autonomous vehicle. 
Naserian (US-20200278684) teaches determining, by the processor, a current lane of travel of the vehicle and a future lane of travel of the vehicle based on the intersection data and the position of the vehicle. Kaminade (US-20230249682) teaches a device that executes notification control for producing a predetermined warning sound in a situation in which an angle between a traveling direction of an own vehicle and an extending direction of a boundary line of the lane is smaller than a predetermined threshold value. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRISTOPHER FEES whose telephone number is (303)297-4343. The examiner can normally be reached Monday-Thursday 7:30 - 5:30 MT. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aniss Chad can be reached at (571) 270-3832. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /CHRISTOPHER GEORGE FEES/Examiner, Art Unit 3662
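
For readers tracing the §103 mapping above: the claim 2 and claim 6 discussion turns on Tamura's check that the angular difference Δθ1 between map-based and camera-based lane shapes stays at or above a threshold T1 (1.0° to 3.0°) continuously for roughly 0.5 to 3 seconds before the driving mode is downgraded. A minimal Python sketch of that deviation-persistence logic as we read the quoted paragraphs [0059]-[0060] (the class, names, and sampling scheme are ours, not Tamura's or the claims'):

    from dataclasses import dataclass

    @dataclass
    class DeviationMonitor:
        """Illustrative reading of Tamura [0059]-[0060]: flag a mode switch
        only when the map-vs-camera angular difference persists."""
        threshold_deg: float = 1.5  # T1; Tamura suggests 1.0 to 3.0 deg
        hold_time_s: float = 1.0    # persistence window; Tamura suggests 0.5 to 3 s
        elapsed_s: float = 0.0

        def update(self, angular_diff_deg: float, dt_s: float) -> bool:
            """Return True once the deviation has been continuous for long
            enough to justify a higher-driver-involvement mode."""
            if angular_diff_deg >= self.threshold_deg:
                self.elapsed_s += dt_s
            else:
                self.elapsed_s = 0.0  # the deviation must be continuous
            return self.elapsed_s >= self.hold_time_s

    monitor = DeviationMonitor()
    for diff_deg in [0.4, 1.8, 1.9, 2.1]:  # degrees, sampled every 0.5 s
        if monitor.update(diff_deg, dt_s=0.5):
            print("Deviation persisted -> request driver takeover")
            break  # fires once 1.0 s of continuous deviation accrues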

Prosecution Timeline

Dec 15, 2023 — Application Filed
Aug 28, 2025 — Non-Final Rejection (§103)
Dec 02, 2025 — Response Filed
Jan 05, 2026 — Final Rejection (§103)
Mar 27, 2026 — Applicant Interview (Telephonic)
Mar 27, 2026 — Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12601146 — WORK MACHINE (2y 5m to grant; granted Apr 14, 2026)
Patent 12600344 — CARBON DIOXIDE RECOVERY SYSTEM (2y 5m to grant; granted Apr 14, 2026)
Patent 12600608 — Assistance Systems and Methods for a Material Handling Vehicle (2y 5m to grant; granted Apr 14, 2026)
Patent 12603014 — CLOUD-BASED AREA OBSTACLE DETECTION (2y 5m to grant; granted Apr 14, 2026)
Patent 12600343 — VEHICLE DRIVING FORCE CONTROL DEVICE (2y 5m to grant; granted Apr 14, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 54%
With Interview: 80% (+25.7%)
Median Time to Grant: 3y 5m
PTA Risk: Moderate
Based on 141 resolved cases by this examiner. Grant probability derived from career allow rate.
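
Measured from the Dec 15, 2023 filing date, the 3y 5m median pendency points to a median grant in mid-2027. A small Python sketch of that date arithmetic (measuring pendency from filing, rather than from a later event, is our assumption):

    from datetime import date

    FILED = date(2023, 12, 15)
    YEARS, MONTHS = 3, 5  # "3y 5m" median time to grant

    # Add years and months by hand; the standard library has no month arithmetic.
    total_months = FILED.month - 1 + YEARS * 12 + MONTHS
    projected = date(FILED.year + total_months // 12, total_months % 12 + 1, FILED.day)
    print(projected)  # 2027-05-15 if pendency runs from the filing date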
