Prosecution Insights
Last updated: April 19, 2026
Application No. 18/978,117

VEHICLE CONTROL SYSTEM THAT LIMITS THE DRIVER'S DRIVING BEHAVIOR AND VEHICLE CONTROL METHOD USING THE SAME

Non-Final Office Action: §102, §103, §112

Filed: Dec 12, 2024
Examiner: DUNNE, KENNETH MICHAEL
Art Unit: 3669
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
OA Round: 1 (Non-Final)

Grant Probability: 76% (Favorable)
Expected OA Rounds: 1-2
Expected Time to Grant: 2y 7m
Grant Probability with Interview: 87%

Examiner Intelligence

Career Allowance Rate: 76% (above average; 217 granted / 285 resolved; +24.1% vs TC average)
Interview Lift: +11.1% (moderate; allowance rate with vs. without an interview, among resolved cases)
Average Prosecution: 2y 7m (23 applications currently pending)
Career History: 308 total applications across all art units

Statute-Specific Performance

§101: 10.2% (-29.8% vs TC avg)
§103: 42.5% (+2.5% vs TC avg)
§102: 22.8% (-17.2% vs TC avg)
§112: 17.7% (-22.3% vs TC avg)

Tech Center averages are estimates. Based on career data from 285 resolved cases.
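The headline figures above are related by simple arithmetic. As a sanity check, the sketch below recomputes them from the underlying counts; the formulas are an assumption about how the dashboard derives its numbers (the actual model behind "Grant Probability" is not disclosed):

```python
# Hypothetical recomputation of the dashboard figures above.
# Assumes simple ratios/differences; this is a sketch, not the tool's method.

granted, resolved = 217, 285

# Career allowance rate: granted / resolved
allow_rate = 100 * granted / resolved
print(f"allowance rate: {allow_rate:.1f}%")         # 76.1%, shown as 76%

# Implied Tech Center average, given the stated +24.1 point delta
tc_avg = allow_rate - 24.1
print(f"implied TC average: {tc_avg:.1f}%")         # 52.0%

# Interview lift: grant probability with vs. without an interview
base, with_interview = 76.0, 87.0
print(f"interview lift: +{with_interview - base:.1f} pts")
```

The computed lift (+11.0 points from the rounded 76%/87% figures) differs slightly from the displayed +11.1%, presumably because the dashboard works from unrounded inputs.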

Office Action

Rejections under §102, §103, and §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 12/12/2024 was filed before the first action on the merits of the application. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Specification

Applicant is reminded of the proper language and format for an abstract of the disclosure. The abstract should be in narrative form and generally limited to a single paragraph on a separate sheet, within the range of 50 to 150 words in length. The abstract should describe the disclosure sufficiently to assist readers in deciding whether there is a need for consulting the full patent text for details. The language should be clear and concise and should not repeat information given in the title. It should avoid using phrases which can be implied, such as, “The disclosure concerns,” “The disclosure defined by this invention,” “The disclosure describes,” etc. In addition, the form and legal phraseology often used in patent claims, such as “means” and “said,” should be avoided.

The abstract of the disclosure is objected to because the first sentence of the abstract is language which can be implied, and it repeats the title. A corrected abstract of the disclosure is required and must be presented on a separate sheet, apart from any other text. See MPEP § 608.01(b).

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination.
– An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:

“Input interface device,” claims 1-8: in claim 1, the “input interface device” uses the placeholder term “device,” functionally claimed (“configured to”) with no claimed structure for performing its claimed functions. The specification provides no clear teaching of what this device physically is; it is unclear whether the “input interface device” is meant to be only a receiver or whether it claims the physical inputs (e.g., an in-vehicle touchscreen, accelerator pedal, etc.).
Figure 7 references the interface device as element 1350; however, the figure does not depict what this device is beyond a labelled rectangular box, nor does the specification clearly detail the structure of element 1350 beyond stating that it is some part of the computer system. While it is part of the computer, it is unclear whether it is simply a program, a physical receiver/port (e.g., antenna, CAN bus port, etc.), both, or the actual input devices (e.g., a touch screen) that are part of the vehicle.

“Input module,” claims 16-20: uses the placeholder term “module,” functionally claimed (“configured to”) with no structural limitations. No clear teaching of what this “input module” is was found; in fact, the applicant’s specification details separate input modules (e.g., [0153]) for each type of information, but it never teaches one combined input module which receives all of these inputs.

“Vehicle control system,” claims 16-20: uses the placeholder term “system,” functionally claimed (“configured to”) with no structural limitations as to performing the recited functions. When the specification is reviewed, the “vehicle control system” is understood to be a processor, a memory, and an “input interface device” (from [0006] of the specification); however, this leads to an issue similar to claim 1’s, in that the “input interface device” is never clearly described structurally in the specification. As such, what is covered structurally by the “vehicle control system” is unclear in view of the specification. For example, is a touch screen display part of the “input interface device” (and by extension the vehicle control system)? Is the control system only an ECU and its input ports (i.e., is the input interface device a CAN bus port)?

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 16-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement.
The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Regarding Claim 16, it recites “an input module configured to receive surrounding environment recognition information, driver behavior detection information, driving information, and driver control input information.” However, when the applicant’s specification is reviewed, no single module is taught as receiving all of this information; the teachings for the “input module” are all in ipsis verbis (or at an equivalent level of generality) purely functional descriptions, and where the input modules are described in detail ([0053]; applicant’s figure 1) it can clearly be seen that each type of information has its own corresponding input module. As such, a single “input module” which receives all of these parameters lacks adequate written description in the specification.

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-8 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim limitation “input interface device” (of claim 1) invokes 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. The specification never clearly gives any examples of, or definition for, the structure of the “input interface device”; it is only taught functionally, and in figure 7 it is depicted as a labelled rectangular box. It is unclear whether the interface device is purely software, a receiver (e.g., antenna, CAN bus port, etc.), and/or the physical inputs (e.g., a touch screen display). As such, the metes and bounds of claim 1 are indefinite. Therefore, the claim is indefinite and is rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph.

Applicant may: (a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph; (b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or (c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).
If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either: (a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or (b) Stating on the record what the corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function. For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.

Claims 1-8 are additionally rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Regarding claim 1, the limitation “, to be not incorporated into the vehicle control without any change and controls the driver’s control input,” is idiomatically incorrect and appears to be a literal translation into English. The phrase “not incorporated … without any change” is a double negative, and double negatives are idiomatically incorrect in standard English. Claims 2-8 depend on claim 1 and inherit this double negative limitation.
One possible rephrasing to avoid the double negative would be: “…wherein the processor controls the driver’s control input, which belongs to first classification, to be with at least some change and controls the driver’s control input…”

Claims 16-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim limitation “input module” (claim 16) invokes 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. The structure of the “input module” is never clearly defined, nor are examples of this combined input module given. It is unclear whether this module is purely software, software plus hardware, or purely hardware; whether it uses an input receiver (antenna, CAN bus port, etc.); or whether it claims an actual input device (such as an accelerator pedal, touch screen, etc.). Therefore, the claim is indefinite and is rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph.

Applicant may: (a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph; (b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or (c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).
If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either: (a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or (b) Stating on the record what the corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function. For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.

Claims 16-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim limitations “input module” and “vehicle control system” invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function.

Regarding the “input module,” no clear teaching of the structure of a single module which receives all of the recited parameters was found; where the applicant’s specification does detail the input modules, they are separate modules ([0053] and fig. 7).

Regarding the “vehicle control system,” the specification teaches that this system is a processor, a memory, and an “input interface device.” While the processor and memory have grounds in the specification and by their naming connote structure, the “input interface device” lacks a clear description in the specification as to its structure. This renders the scope of the “vehicle control system” indefinite in that it is unclear what part(s) are or are not part of the “input interface device,” similar to claim 1’s 112(b) rejection due to the 112(f) interpretation of “input interface device.”

Therefore, the claims are indefinite and are rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph. Applicant may: (a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph; (b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or (c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).
If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either: (a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or (b) Stating on the record what the corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function. For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 1-3, 7-10, 13, and 15 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by US 20200218272 A1, “DRIVER-CENTRIC MODEL PREDICTIVE CONTROLLER”, Ellis et al.
Regarding Claim 1, Ellis et al. teaches “A vehicle control system that limits a driver’s driving behavior, the vehicle control system comprising: an input interface device configured to receive a driver’s control input;” ([0015]: “In the autonomous intervention mode, a human operates the vehicle and a vehicle control system monitors the human's commands as well as the surrounding environment.” Monitoring of the commands inherently teaches a device to receive those commands); and “memory in which a program that recognizes the driver’s control input and that determines whether to incorporate results of the recognition into vehicle control has been stored; and a processor configured to execute the program, wherein the processor controls the driver’s control input, which belongs to first classification, to be not incorporated into the vehicle control without any change and controls the driver’s control input, which belongs to second classification, to be immediately incorporated into the vehicle control based on the results of the recognition.” ([0019]: “In one configuration, the vehicle control system favor's the driver's intent so long as the intent is within safety boundaries set by the planner. The safety boundaries may include a maximum velocity, a minimum velocity, a maximum acceleration, and a maximum deceleration. If the driver's intent falls outside one of the safety boundaries, the vehicle control system favors the planned trajectory over the driver's intent. That is, the vehicle control system may blend the driver's intent with the planned trajectory to adjust to the driver's intent to fall within the safety boundaries.” This teaches that if the driver’s intent (the received driver input/command) falls outside the safety bounds, i.e. belongs to a first classification, it is adjusted to fall within the bounds; if it is not outside the safety bounds (i.e. belongs to the second classification), it is immediately implemented into the vehicle control.)

Regarding Claim 2, Ellis teaches “The vehicle control system of claim 1, wherein the processor incorporates a control input for a deceleration input, which belongs to the second classification, into the vehicle control.” ([0019]: “In one configuration, the vehicle control system favor's the driver's intent so long as the intent is within safety boundaries set by the planner. The safety boundaries may include a maximum velocity, a minimum velocity, a maximum acceleration, and a maximum deceleration.” This teaches that a deceleration driver intent which falls within the safety bounds (belongs to the second classification) is favored (implemented into the vehicle control).)

Regarding Claim 3, Ellis teaches “The vehicle control system of claim 1, wherein the processor executes an autonomous driving function with respect to a control function related to the driver’s control input, which belongs to the first classification, and controls manual driving to be performed with respect to the driver’s control input, which belongs to the second classification.” ([0019]: “In one configuration, the vehicle control system favor's the driver's intent so long as the intent is within safety boundaries set by the planner. The safety boundaries may include a maximum velocity, a minimum velocity, a maximum acceleration, and a maximum deceleration. If the driver's intent falls outside one of the safety boundaries, the vehicle control system favors the planned trajectory over the driver's intent. That is, the vehicle control system may blend the driver's intent with the planned trajectory to adjust to the driver's intent to fall within the safety boundaries.” This teaches that when a driver’s intent falls within the safety bounds (belongs to the second classification) it is implemented (i.e. the vehicle is manually controlled), whereas when it falls outside the safety bounds (belongs to the first classification) it is adjusted to fall within the bounds and the planned (autonomous) trajectory is favored (i.e. the vehicle performs autonomous control).)

Regarding Claim 7, Ellis teaches “The vehicle control system of claim 1, wherein the processor performs driving control by mixing the driver’s control input and a control calculation value for autonomous driving.” ([0019]: “In one configuration, the vehicle control system favor's the driver's intent so long as the intent is within safety boundaries set by the planner. The safety boundaries may include a maximum velocity, a minimum velocity, a maximum acceleration, and a maximum deceleration. If the driver's intent falls outside one of the safety boundaries, the vehicle control system favors the planned trajectory over the driver's intent. That is, the vehicle control system may blend the driver's intent with the planned trajectory to adjust to the driver's intent to fall within the safety boundaries.” Here [0019] teaches blending (mixing) of the driver’s intent (driver-inputted controls) with a planned trajectory (control calculation value) from the autonomous driving controller.)

Regarding Claim 8, Ellis teaches “The vehicle control system of claim 1, wherein the processor incorporates the driver’s control input, which belongs to the first classification, into the vehicle control when the input interface device receives an input value for a manual driving command.” ([0019]: “In one configuration, the vehicle control system favor's the driver's intent so long as the intent is within safety boundaries set by the planner. The safety boundaries may include a maximum velocity, a minimum velocity, a maximum acceleration, and a maximum deceleration. If the driver's intent falls outside one of the safety boundaries, the vehicle control system favors the planned trajectory over the driver's intent. That is, the vehicle control system may blend the driver's intent with the planned trajectory to adjust to the driver's intent to fall within the safety boundaries.” This teaches adjusting of a driver’s intent that is outside the safety range (belongs to the first classification); i.e., the manual command is still implemented, it is just first adjusted to a safe value.)

Regarding Claim 9, Ellis teaches “A vehicle control method being performed by a vehicle control system that limits a driver’s driving behavior and comprising steps of: (a) receiving and classifying a driver’s control input; and (b) incorporating the driver’s control input into manual driving or performing autonomous driving by disregarding the driver’s control input based on results of the classification in the step (a).” ([0019]: “In one configuration, the vehicle control system favor's the driver's intent so long as the intent is within safety boundaries set by the planner. The safety boundaries may include a maximum velocity, a minimum velocity, a maximum acceleration, and a maximum deceleration. If the driver's intent falls outside one of the safety boundaries, the vehicle control system favors the planned trajectory over the driver's intent. That is, the vehicle control system may blend the driver's intent with the planned trajectory to adjust to the driver's intent to fall within the safety boundaries.” This teaches receiving a driver’s control input (driver intent) and determining whether it falls within the safety boundaries (belongs to a first or second classification); if it falls within the boundaries (i.e. is part of the second classification) it is implemented, and if it does not fall within the boundaries (belongs to the first classification) it is adjusted to be within the safe boundaries (i.e. the vehicle “disregards” the instruction and drives autonomously). That this blending falls within the scope of “disregard” is taken in alignment with claim 15, which clearly claims that step (b) (“disregarding”) includes mixing of the received command and control calculation (autonomously determined/predicted command) values.)

Regarding Claim 10, Ellis teaches “The vehicle control method of claim 9, wherein the step (a) comprises additionally collecting driving information and surrounding environment information in addition to the driver’s control input.” ([0021] teaches accounting for rearward vehicles (surrounding environment information) in determining whether a command falls within the safety boundaries, and [0030]-[0031] teach that safe navigation is based on both surrounding information (environment information) and the current vehicle parameters (driving information), such as speed (initial velocity).)

Regarding Claim 13, Ellis teaches “wherein the step (b) comprises disregarding the driver’s control input for an acceleration input and incorporating the driver’s control input for a deceleration input into driving.” ([0019]: “In one configuration, the vehicle control system favor's the driver's intent so long as the intent is within safety boundaries set by the planner. The safety boundaries may include a maximum velocity, a minimum velocity, a maximum acceleration, and a maximum deceleration. If the driver's intent falls outside one of the safety boundaries, the vehicle control system favors the planned trajectory over the driver's intent. That is, the vehicle control system may blend the driver's intent with the planned trajectory to adjust to the driver's intent to fall within the safety boundaries.” This teaches that if a command (e.g. a deceleration input) is within the safety bounds it is incorporated into driving, whereas if a command is outside the safety bounds it is “disregarded” and modified to fit within the bounds; “disregard” is considered to include the mixing of a driver command and a control calculation value (the autonomous vehicle controller’s commanded/expected value) in view of claim 15.)

Regarding Claim 15, Ellis teaches “The vehicle control method of claim 9, wherein the step (b) comprises performing driving control by mixing the driver’s control input and a control calculation value for autonomous driving.” ([0019]: “In one configuration, the vehicle control system favor's the driver's intent so long as the intent is within safety boundaries set by the planner. The safety boundaries may include a maximum velocity, a minimum velocity, a maximum acceleration, and a maximum deceleration. If the driver's intent falls outside one of the safety boundaries, the vehicle control system favors the planned trajectory over the driver's intent. That is, the vehicle control system may blend the driver's intent with the planned trajectory to adjust to the driver's intent to fall within the safety boundaries.” This teaches blending of a driver’s command and the planned command (control calculation value).)

Claim(s) 1-3, 8-11, and 13-14 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by US 20240001950 A1, “VEHICULAR DRIVING ASSIST SYSTEM WITH STUDENT DRIVER MODE”, Ravuri.
Regarding Claim 1, Ravuri teaches “A vehicle control system that limits a driver’s driving behavior, the vehicle control system comprising: an input interface device configured to receive a driver’s control input; memory in which a program that recognizes the driver’s control input and that determines whether to incorporate results of the recognition into vehicle control has been stored; and a processor configured to execute the program, wherein the processor controls the driver’s control input, which belongs to first classification, to be not incorporated into the vehicle control without any change and controls the driver’s control input,”( [0017] The system, while in student mode, may detect the unintended acceleration. When an unintended acceleration is detected, the system may automatically reduce or halt the acceleration entirely. In some scenarios (e.g., when objects are detected in front of or near the vehicle or predicted to cross in front of the vehicle), the system may slow the vehicle by applying the brake. The student mode may limit vehicle acceleration/velocity (e.g., to a threshold amount of acceleration/velocity) even for intended acceleration/velocity. The system may not allow or may limit rapid changes in acceleration even when the requested acceleration is below the maximum acceleration allowed by the student mode. 
Whenever the student mode limits control of the vehicle (e.g., by reducing acceleration or applying the brake), the system may provide a visual, audible, and/or haptic alert or warning to the driver of the vehicle (i.e., notifying the driver that control was limited by the student mode).” Here teaches that when a sudden acceleration input is detected and classified as unintentional it is limited or completely disregarded;);” which belongs to second classification, to be immediately incorporated into the vehicle control based on the results of the recognition.”( “…[0015] The student driving mode or learning driver mode may limit some control of the vehicle such as unintended or unexpected acceleration.” Here teaches that “some control” may be limited; therefore there are other controls not limited under student mode, and such controls (belonging to a second classification) are implemented immediately)) Regarding Claim 2, Ravuri teaches “The vehicle control system of claim 1, wherein the processor incorporates a control input for a deceleration input, which belongs to the second classification, into the vehicle control.”( [0016] Referring now to FIG. 2, a student driver driving a vehicle enters a parking lot (at any speed) and approaches a parking space with the intent to slow and enter the parking space. As shown in FIG. 3, the student driver, instead of pressing the brake pedal in an attempt to slow the vehicle, presses the acceleration pedal with significant force. For example, the student driver becomes nervous about their speed and attempts to press the brake pedal with force to quickly stop the vehicle, but unintentionally presses the acceleration pedal instead. In this scenario, a sudden acceleration may cause a collision with another vehicle, pedestrians, and/or a building (see FIG. 4). [0017] The system, while in student mode, may detect the unintended acceleration.
When an unintended acceleration is detected, the system may automatically reduce or halt the acceleration entirely.” Here Ravuri teaches an implementation where, instead of braking, the accelerator is unintentionally pressed and the acceleration is thus limited. While Ravuri does not explicitly teach what would happen if the brake pedal were pressed, from the overall context (“student mode”) it is implicit that a braking command/input would be proper in this scenario (“intends to slow”) and thus would be implemented to decelerate the vehicle into the parking space.) Regarding Claim 3, Ravuri teaches “The vehicle control system of claim 1, wherein the processor executes an autonomous driving function with respect to a control function related to the driver’s control input, which belongs to the first classification, and controls manual driving to be performed with respect to the driver’s control input, which belongs to the second classification”( [0015] The student driving mode or learning driver mode may limit some control of the vehicle such as unintended or unexpected acceleration. That is, when a student driver or new driver unintentionally presses the acceleration pedal instead of the brake pedal, the vehicle, while in student driving mode, may be limited to a maximum acceleration/throttle/velocity that is lower than the vehicle's maximum acceleration. As another example, the system may limit acceleration/velocity when objects are detected in front (or predicted to cross in front of the vehicle) of the vehicle (e.g., via the sensing system 12 of FIG. 1) such as another vehicle, pedestrians, a building, etc.” Here Ravuri teaches that “some” controls (first classification) are limited, whereas implicitly from “some” (i.e., not “all”) other functions (second classification) are not limited) Regarding Claim 8, Ravuri teaches “The vehicle control system of claim 1, wherein the processor incorporates the driver’s control input, which belongs to the first classification, into the vehicle control when the input interface device receives an input value for a manual driving command.”( [0020] The student mode may be enabled manually by an occupant of the vehicle. For example, the student mode may be enabled in response to actuation of a user input or human machine interface (HMI) at a console, a gear selector, or a display of the vehicle. The student mode may be enabled via a user input (e.g., voice input, interaction with a user device in communication with the vehicle such as a mobile phone, touch screen, etc.).” Here teaches that the student mode (and its limiting) is enabled (or disabled) via the interface; thus, when disabled, the “unintended acceleration” (first classification) would be implemented into vehicle control) Regarding Claim 9, Ravuri teaches “A vehicle control method being performed by a vehicle control system that limits a driver’s driving behavior and comprising steps of: (a) receiving and classifying a driver’s control input; and (b) incorporating the driver’s control input into manual driving or performing autonomous driving by disregarding the driver’s control input based on results of the classification in the step (a).”( [0017] The system, while in student mode, may detect the unintended acceleration. When an unintended acceleration is detected, the system may automatically reduce or halt the acceleration entirely. In some scenarios (e.g., when objects are detected in front of or near the vehicle or predicted to cross in front of the vehicle), the system may slow the vehicle by applying the brake. The student mode may limit vehicle acceleration/velocity (e.g., to a threshold amount of acceleration/velocity) even for intended acceleration/velocity.
The system may not allow or may limit rapid changes in acceleration even when the requested acceleration is below the maximum acceleration allowed by the student mode. Whenever the student mode limits control of the vehicle (e.g., by reducing acceleration or applying the brake), the system may provide a visual, audible, and/or haptic alert or warning to the driver of the vehicle (i.e., notifying the driver that control was limited by the student mode).” Here teaches that when a sudden acceleration input is detected and classified as unintentional it is limited or completely disregarded, whereas other inputs may be implemented, following from the “some control” limits in [0015]) Regarding Claim 10, Ravuri teaches “The vehicle control method of claim 9, wherein the step (a) comprises additionally collecting driving information”([0018]”… The student mode may limit vehicle acceleration/velocity (e.g., to a threshold amount of acceleration/velocity) even for intended acceleration/velocity. The system may not allow or may limit rapid changes in acceleration even when the requested acceleration is below the maximum acceleration allowed by the student mode.” Here the limiting of rapid “changes” in acceleration implicitly teaches a recognition of the current acceleration (driving information))” and surrounding environment information in addition to the driver’s control input.”( [0018] Referring now to FIG. 5, in some examples, the system, while in the student mode, monitors the area surrounding the vehicle.
For example, the system uses cameras, radar, and/or lidar sensors to detect the presence of objects in front of or near the vehicle (e.g., parking spaces, vehicles, pedestrians, curbs, etc.).” Here teaches that monitoring of the surroundings (environment information) is used for determining intent/whether acceleration should be limited)) Regarding Claim 11, Ravuri teaches “The vehicle control method of claim 10, wherein the step (b) comprises generating input integration data by integrating the information collected in the step (a) and detecting the driver’s driving intention.”([0017]-[0019] give the example of a student entering the parking lot and how student mode operates during such, showing that environment information (location data determining the student is in a parking lot) is combined with the driver inputs and vehicle state (currently approaching a parking spot) to determine whether a pressing of the accelerator was intended or not) Regarding Claim 13, Ravuri teaches “The vehicle control method of claim 9, wherein the step (b) comprises disregarding the driver’s control input for an acceleration input and incorporating the driver’s control input for a deceleration input into driving.”( [0015] The student driving mode or learning driver mode may limit some control of the vehicle such as unintended or unexpected acceleration. That is, when a student driver or new driver unintentionally presses the acceleration pedal instead of the brake pedal, the vehicle, while in student driving mode, may be limited to a maximum acceleration/throttle/velocity that is lower than the vehicle's maximum acceleration. As another example, the system may limit acceleration/velocity when objects are detected in front (or predicted to cross in front of the vehicle) of the vehicle (e.g., via the sensing system 12 of FIG.
1) such as another vehicle, pedestrians, a building, etc.”, read in the context of [0016]. Here teaches that “some” controls, which include an acceleration input, are limited (disregarded); implicitly from “some” others would not be, and given that the context is to aid in improving the safety of learning how to park, it is implicit that the deceleration input is not changed (i.e., is incorporated), as this input is a proper/natural part of a parking sequence.) Regarding Claim 14, Ravuri teaches “The vehicle control method of claim 13, wherein the step (b) comprises incorporating the driver’s control input for the acceleration input into the driving when receiving a separate command for manual driving from a user.”( [0020] The student mode may be enabled manually by an occupant of the vehicle. For example, the student mode may be enabled in response to actuation of a user input or human machine interface (HMI) at a console, a gear selector, or a display of the vehicle. The student mode may be enabled via a user input (e.g., voice input, interaction with a user device in communication with the vehicle such as a mobile phone, touch screen, etc.).” Here teaches that student mode is enabled through an interface (separate command); thus, when not enabled (i.e., turned off through a “separate command” for manual driving via the interface), the unintentional acceleration would not be limited) Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made. Claim(s) 4-5, 16-18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Ravuri as applied to claim 1 above, and further in view of US 8106756 B2, “Adaptive Interface Providing Apparatus and Method”, Yoon et al. Regarding Claim 4, Ravuri teaches “wherein the input interface device receives surrounding environment information,”( [0017]-[0019] give the example of a student entering the parking lot and how student mode operates during such, showing that environment information (location data determining the student is in a parking lot) is combined with the driver inputs and vehicle state (currently approaching a parking spot) to determine whether a pressing of the accelerator was intended or not) Ravuri however does not teach recognizing/receiving “driver behavior detection information”. Yoon et al. teaches a vehicle control system which determines if the operation of a driver is safe or not based on the surrounding environment information, driver behavior, driving information, and driver control inputs. (Column 3, lines 19-41, “The statistics database unit 110 stores and manages information on an average degree of attention required when there is a change in at least one of a driving operation, the state of a car, and an external environment, on a degree of attention required for interface manipulation when a driver manipulates interfaces of a car, and on a similarity between the functions of the interfaces. The statistics database unit 110 is described in detail with reference to FIGS. 2A and 2B. (19) The state of the car includes speed, tire pressure, duration of use, and other information on the car. The external environment includes weather information such as temperature and humidity, the state of a road surface, the state of a road (for example, a curved road), and other factors which may externally influence driving of the car.
(20) In order to obtain the average degree of attention, a number of drivers are firstly classified into driver groups according to a predetermined driver classification criterion such as gender, age, race, and physical features. The average degree of attention is an average value of degrees of attention required for individual drivers of a specific driver group when there is a change in conditions, i.e., a change in at least one of, for example, a driving operation, the state of a car, and an external environment.” Here teaches accounting for surroundings, driving information, driver behavior, and driver control inputs to determine the needed amount of attention for safe driving) It would have been obvious to one of ordinary skill in the art, before the effective filing date of the application, to modify Ravuri to include accounting for driver behavior for determining if the inputted command was intended during student mode. Such a modification would be obvious under the KSR rationale of “Use of Known Technique To Improve Similar Devices (Methods, or Products) in the Same Way”. (I) Ravuri teaches the base device; however, it lacks the “improvement” of using “driver behavior” to determine if an operation was intended or not (i.e., belongs to a first or second classification). (II) Yoon teaches a similar device (a vehicle interface control system) which determines if a current level of attention is proper for a given “change” (changes in Yoon include manipulation of the accelerator/driver input controls), wherein the needed safe level of attention is based on all of surrounding information (e.g., other objects), driving information (e.g., car status), driver control inputs, and the driver’s behavior.
(III) Applying the improvement of detecting and accounting for driver behavior, for determining whether a change in driver input was intended or not in the current context, as taught by Yoon would improve the operation of Ravuri in the same way as it improves the operation in Yoon. In general, both devices account for the current context; the modification extends the context of Ravuri to include accounting for driver behavior as taught by Yoon, thus allowing intended-operation/danger determinations which can be adjusted more accurately. The underlying principle of operation of Ravuri is not changed in the modification, and the teachings of Yoon are still being used to determine the current situation/context. Regarding Claim 5, Ravuri teaches “The vehicle control system of claim 4, wherein the processor generates input integration data by integrating the data received by the input interface device and detects the driver’s driving intention.”( [0017]-[0019] give the example of a student entering the parking lot and how student mode operates during such, showing that environment information (location data determining the student is in a parking lot) is combined with the driver inputs and vehicle state (currently approaching a parking spot) to determine whether a pressing of the accelerator was intended or not) Regarding Claim 16, Ravuri teaches “A vehicle apparatus comprising: an input module configured to receive surrounding environment recognition information,”(See [0016]-[0018], teaching detection of driver control input (driver pressing of the accelerator or brake pedal) and examples of the vehicle monitoring/using environmental data (detected nearby objects) and driving information data (location data to determine if in a parking lot));” and a vehicle control system configured to control autonomous driving and manual driving based on the information received by the input module and to limit a driver’s driving behavior based on a previously classified category.”( [0017] The
system, while in student mode, may detect the unintended acceleration. When an unintended acceleration is detected, the system may automatically reduce or halt the acceleration entirely. In some scenarios (e.g., when objects are detected in front of or near the vehicle or predicted to cross in front of the vehicle), the system may slow the vehicle by applying the brake. The student mode may limit vehicle acceleration/velocity (e.g., to a threshold amount of acceleration/velocity) even for intended acceleration/velocity. The system may not allow or may limit rapid changes in acceleration even when the requested acceleration is below the maximum acceleration allowed by the student mode. Whenever the student mode limits control of the vehicle (e.g., by reducing acceleration or applying the brake), the system may provide a visual, audible, and/or haptic alert or warning to the driver of the vehicle (i.e., notifying the driver that control was limited by the student mode).) Ravuri however does not teach recognizing/receiving “driver behavior detection information”. Yoon et al. teaches a vehicle control system which determines if the operation of a driver is safe or not based on the surrounding environment information, driver behavior, driving information, and driver control inputs. (Column 3, lines 19-41, “The statistics database unit 110 stores and manages information on an average degree of attention required when there is a change in at least one of a driving operation, the state of a car, and an external environment, on a degree of attention required for interface manipulation when a driver manipulates interfaces of a car, and on a similarity between the functions of the interfaces. The statistics database unit 110 is described in detail with reference to FIGS. 2A and 2B. (19) The state of the car includes speed, tire pressure, duration of use, and other information on the car.
The external environment includes weather information such as temperature and humidity, the state of a road surface, the state of a road (for example, a curved road), and other factors which may externally influence driving of the car. (20) In order to obtain the average degree of attention, a number of drivers are firstly classified into driver groups according to a predetermined driver classification criterion such as gender, age, race, and physical features. The average degree of attention is an average value of degrees of attention required for individual drivers of a specific driver group when there is a change in conditions, i.e., a change in at least one of, for example, a driving operation, the state of a car, and an external environment.” Here teaches accounting for surroundings, driving information, driver behavior, and driver control inputs to determine the needed amount of attention for safe driving) It would have been obvious to one of ordinary skill in the art, before the effective filing date of the application, to modify Ravuri to include accounting for driver behavior for determining if the inputted command was intended during student mode. Such a modification would be obvious under the KSR rationale of “Use of Known Technique To Improve Similar Devices (Methods, or Products) in the Same Way”. (I) Ravuri teaches the base device; however, it lacks the “improvement” of using “driver behavior” to determine if an operation was intended or not (i.e., belongs to a first or second classification). (II) Yoon teaches a similar device (a vehicle interface control system) which determines if a current level of attention is proper for a given “change” (changes in Yoon include manipulation of the accelerator/driver input controls), wherein the needed safe level of attention is based on all of surrounding information (e.g., other objects), driving information (e.g., car status), driver control inputs, and the driver’s behavior.
(III) Applying the improvement of detecting and accounting for driver behavior, for determining whether a change in driver input was intended or not in the current context, as taught by Yoon would improve the operation of Ravuri in the same way as it improves the operation in Yoon. In general, both devices account for the current context; the modification extends the context of Ravuri to include accounting for driver behavior as taught by Yoon, thus allowing intended-operation/danger determinations which can be adjusted more accurately. The underlying principle of operation of Ravuri is not changed in the modification, and the teachings of Yoon are still being used to determine the current situation/context. Regarding Claim 17, Ravuri teaches “The vehicle apparatus of claim 16, wherein the vehicle control system incorporates the driver’s control input for deceleration control into driving without any change and controls the autonomous driving by disregarding the driver’s control input for acceleration control.”( [0015] The student driving mode or learning driver mode may limit some control of the vehicle such as unintended or unexpected acceleration. That is, when a student driver or new driver unintentionally presses the acceleration pedal instead of the brake pedal, the vehicle, while in student driving mode, may be limited to a maximum acceleration/throttle/velocity that is lower than the vehicle's maximum acceleration. As another example, the system may limit acceleration/velocity when objects are detected in front (or predicted to cross in front of the vehicle) of the vehicle (e.g., via the sensing system 12 of FIG.
1) such as another vehicle, pedestrians, a building, etc.”, read in the context of [0016]. Here teaches that “some” controls, which include an acceleration input, are limited (disregarded); implicitly from “some” others would not be, and given that the context is to aid in improving the safety of learning how to park, it is implicit that the deceleration input is not changed (i.e., is incorporated), as this input is a proper/natural part of a parking sequence) Regarding Claim 18, Ravuri teaches “The vehicle apparatus of claim 17, wherein the vehicle control system incorporates the acceleration control into the driving when receiving a separate user command for the manual driving.”( [0020] The student mode may be enabled manually by an occupant of the vehicle. For example, the student mode may be enabled in response to actuation of a user input or human machine interface (HMI) at a console, a gear selector, or a display of the vehicle. The student mode may be enabled via a user input (e.g., voice input, interaction with a user device in communication with the vehicle such as a mobile phone, touch screen, etc.).” Here teaches that the student mode is enabled (and implicitly disabled) via the interface; thus, when disabled through the interface (a separate user command for manual driving), the unintentional accelerator pedal presses would be implemented as the student driver mode is disabled.) Claim(s) 4, 16-17, and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Ellis as applied to claim 1 above, and further in view of US 8106756 B2, “Adaptive Interface Providing Apparatus And Method”, Yoon et al. Regarding Claim 4, Ellis teaches “The vehicle control system of claim 1, wherein the input interface device receives surrounding environment information,( [0015] In the autonomous intervention mode, a human operates the vehicle and a vehicle control system monitors the human's commands as well as the surrounding environment.
For example, the vehicle control system monitors actions of other objects, such as vehicles and pedestrians, in an environment surrounding the vehicle. The vehicle control system may override a human's commands when an imminent danger (e.g., potential collision) is detected. It is desirable to improve vehicle control systems to provide a driver-first control strategy while maintaining safety via autonomous intervention.” Here teaches the use of environment information (monitoring the surroundings) and the human commands (driver input controls); [0025] teaches monitoring of vehicle state information (driving information); and [0031] teaches monitoring of the current/initial velocity (driving information)) Ellis however does not teach the use of “driver behavior detection information”. Yoon et al. teaches a vehicle control system which determines if the operation of a driver is safe or not based on the surrounding environment information, driver behavior, driving information, and driver control inputs. (Column 3, lines 19-41, “The statistics database unit 110 stores and manages information on an average degree of attention required when there is a change in at least one of a driving operation, the state of a car, and an external environment, on a degree of attention required for interface manipulation when a driver manipulates interfaces of a car, and on a similarity between the functions of the interfaces. The statistics database unit 110 is described in detail with reference to FIGS. 2A and 2B. (19) The state of the car includes speed, tire pressure, duration of use, and other information on the car. The external environment includes weather information such as temperature and humidity, the state of a road surface, the state of a road (for example, a curved road), and other factors which may externally influence driving of the car.
(20) In order to obtain the average degree of attention, a number of drivers are firstly classified into driver groups according to a predetermined driver classification criterion such as gender, age, race, and physical features. The average degree of attention is an average value of degrees of attention required for individual drivers of a specific driver group when there is a change in conditions, i.e., a change in at least one of, for example, a driving operation, the state of a car, and an external environment.” Here teaches accounting for surroundings, driving information, driver behavior, and driver control inputs to determine the needed amount of attention for safe driving) It would have been obvious to one of ordinary skill in the art, before the effective filing date of the application, to modify Ellis to include accounting for driver behavior for determining the safety limits/whether the inputted driver command is within the safety limits. Such a modification would be obvious under the KSR rationale of “Use of Known Technique To Improve Similar Devices (Methods, or Products) in the Same Way”. (I) Ellis teaches the base device; however, it lacks the “improvement” of using “driver behavior” to determine if an operation is within safety limits or not (i.e., belongs to a first or second classification). (II) Yoon teaches a similar device (a vehicle interface control system) which determines if a current level of attention is proper, wherein the needed safe level of attention is based on all of surrounding information (e.g., other objects), driving information (e.g., car status), driver control inputs, and the driver’s behavior. (III) Applying the improvement of detecting and accounting for driver behavior, for determining whether a change in driver input is safe or not given the current context, as taught by Yoon would improve the operation of Ellis in the same way as it improves the operation in Yoon.
In general, both devices account for the current context; the modification extends the context of Ellis to include accounting for driver behavior as taught by Yoon, thus allowing safety-threshold/danger determinations which can be adjusted more accurately. The underlying principle of operation of Ellis is not changed in the modification, and the teachings of Yoon are still being used to determine the current situation/context. Regarding Claim 16, Ellis teaches “A vehicle apparatus comprising: an input module configured to receive surrounding environment recognition information,( [0015] In the autonomous intervention mode, a human operates the vehicle and a vehicle control system monitors the human's commands as well as the surrounding environment. For example, the vehicle control system monitors actions of other objects, such as vehicles and pedestrians, in an environment surrounding the vehicle. The vehicle control system may override a human's commands when an imminent danger (e.g., potential collision) is detected. It is desirable to improve vehicle control systems to provide a driver-first control strategy while maintaining safety via autonomous intervention.” Here teaches the use of environment information (monitoring the surroundings) and the human commands (driver input controls); [0025] teaches monitoring of vehicle state information (driving information); and [0031] teaches monitoring of the current/initial velocity (driving information)); “and a vehicle control system configured to control autonomous driving and manual driving based on the information received by the input module and to limit a driver’s driving behavior based on a previously classified category.”( [0019] In one configuration, the vehicle control system favor's the driver's intent so long as the intent is within safety boundaries set by the planner. The safety boundaries may include a maximum velocity, a minimum velocity, a maximum acceleration, and a maximum deceleration.
If the driver's intent falls outside one of the safety boundaries, the vehicle control system favors the planned trajectory over the driver's intent. That is, the vehicle control system may blend the driver's intent with the planned trajectory to adjust to the driver's intent to fall within the safety boundaries.” Here teaches that if the driver’s intent (received driver input/command) falls outside the safety bounds, i.e., belongs to a first classification, it is adjusted to fall within the bounds; if it is not outside the safety bounds (i.e., belongs to the second classification), it is implemented immediately into the vehicle control)) Ellis however does not teach the use of “driver behavior detection information”. Yoon et al. teaches a vehicle control system which determines if the operation of a driver is safe or not based on the surrounding environment information, driver behavior, driving information, and driver control inputs. (Column 3, lines 19-41, “The statistics database unit 110 stores and manages information on an average degree of attention required when there is a change in at least one of a driving operation, the state of a car, and an external environment, on a degree of attention required for interface manipulation when a driver manipulates interfaces of a car, and on a similarity between the functions of the interfaces. The statistics database unit 110 is described in detail with reference to FIGS. 2A and 2B. (19) The state of the car includes speed, tire pressure, duration of use, and other information on the car. The external environment includes weather information such as temperature and humidity, the state of a road surface, the state of a road (for example, a curved road), and other factors which may externally influence driving of the car.
(20) In order to obtain the average degree of attention, a number of drivers are firstly classified into driver groups according to a predetermined driver classification criterion such as gender, age, race, and physical features. The average degree of attention is an average value of degrees of attention required for individual drivers of a specific driver group when there is a change in conditions, i.e., a change in at least one of, for example, a driving operation, the state of a car, and an external environment.” Here teaches accounting for surroundings, driving information, driver behavior, and driver control inputs to determine the needed amount of attention for safe driving) It would have been obvious to one of ordinary skill in the art, before the effective filing date of the application, to modify Ellis to include accounting for driver behavior for determining the safety limits/whether the inputted driver command is within the safety limits. Such a modification would be obvious under the KSR rationale of “Use of Known Technique To Improve Similar Devices (Methods, or Products) in the Same Way”. (I) Ellis teaches the base device; however, it lacks the “improvement” of using “driver behavior” to determine if an operation is within safety limits or not (i.e., belongs to a first or second classification). (II) Yoon teaches a similar device (a vehicle interface control system) which determines if a current level of attention is proper, wherein the needed safe level of attention is based on all of surrounding information (e.g., other objects), driving information (e.g., car status), driver control inputs, and the driver’s behavior. (III) Applying the improvement of detecting and accounting for driver behavior, for determining whether a change in driver input is safe or not given the current context, as taught by Yoon would improve the operation of Ellis in the same way as it improves the operation in Yoon.
In general, both devices account for the current context; the modification extends the context considered by Ellis to include accounting for driver behavior as taught by Yoon, thus allowing safety thresholds/danger determinations to be adjusted more accurately. The underlying principle of operation of Ellis is not changed by the modification, and the teachings of Yoon are still used to evaluate the current situation/context. Regarding Claim 17, modified Ellis teaches “The vehicle apparatus of claim 16, wherein the vehicle control system incorporates the driver’s control input for deceleration control into driving without any change and controls the autonomous driving by disregarding the driver’s control input for acceleration control.” ([0019] In one configuration, the vehicle control system favor's the driver's intent so long as the intent is within safety boundaries set by the planner. The safety boundaries may include a maximum velocity, a minimum velocity, a maximum acceleration, and a maximum deceleration. If the driver's intent falls outside one of the safety boundaries, the vehicle control system favors the planned trajectory over the driver's intent. That is, the vehicle control system may blend the driver's intent with the planned trajectory to adjust to the driver's intent to fall within the safety boundaries.” If an input falls within the safety boundaries it is implemented without change, and this input can be a deceleration; if an input falls outside the safety bounds it is modified (disregarded) to fall within them. 
That “disregard” includes mixing/modifying of an input.) Regarding Claim 20, modified Ellis teaches “The vehicle apparatus of claim 16, wherein the vehicle control system performs driving control by mixing the driver’s control input and a control calculation value for the autonomous driving.” ([0019] In one configuration, the vehicle control system favor's the driver's intent so long as the intent is within safety boundaries set by the planner. The safety boundaries may include a maximum velocity, a minimum velocity, a maximum acceleration, and a maximum deceleration. If the driver's intent falls outside one of the safety boundaries, the vehicle control system favors the planned trajectory over the driver's intent. That is, the vehicle control system may blend the driver's intent with the planned trajectory to adjust to the driver's intent to fall within the safety boundaries.” Here [0019] teaches blending (mixing) of the driver’s intent (driver-inputted controls) with a planned trajectory (control calculation value) from the autonomous driving controller.) Claim(s) 5-6 and 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Ellis in view of Yoon as applied to claims 4 and 16 above, and further in view of US 20200086861 A1, “SYSTEMS AND METHODS FOR PREDICTING VEHICLE TRAJECTORY”, McGill et al. Regarding Claim 5, modified Ellis (Ellis + Yoon of claim 4) teaches “The vehicle control system of claim 4, wherein the processor generates input integration data by integrating the data received by the input interface device” ([0035] The vehicle control system uses various parameters. These may be user-defined and may be fixed. In one configuration, the user-defined parameters include as a maximum allowable deceleration (a.sup.min) and a maximum allowable acceleration (a.sup.max), where a.sup.min is less than zero and a.sup.max is greater than zero. A vehicle's acceleration cannot be greater than the maximum allowable acceleration. 
Likewise, the vehicle's deceleration cannot be greater than the maximum allowable deceleration. [0036] In one configuration, the parameters are dynamic. For example, acceleration limits may be contextual. The context may be provided via the speed limits provided to the controller. The controller parameters may represent fixed limits.” The “contextual” parameters of [0036] of Ellis are understood to correspond with [0025]. Additionally, while Ellis does use the term “driver intent”, the “driver intent” of Ellis corresponds to “driver input” in the applicant’s claim terminology, whereas, when read in light of the applicant’s specification, “driver intent” is understood to correspond with an intended trajectory/maneuver (e.g., intends a u-turn as opposed to a left turn) per [0066]-[0068].) Ellis in view of Yoon, however, does not detect “driver intent” as read in light of the specification; while Ellis does use the term “driver intent”, this equates more to “driver input” as read in light of the applicant’s specification, whereas “driver intent” in light of the applicant’s specification equates more to an intended maneuver or trajectory (u-turn, left turn, etc.). McGill et al. teaches a system for determining driver intent (a future trajectory or maneuver) based on all of driver behavior information, driver input information, driving information, and surrounding environment information. ([0027] In connection with predicting the trajectory of vehicle 100, trajectory prediction system 170 can store various kinds of model-related data 260 in database 250. As shown in FIG. 1, trajectory prediction system 170 receives sensor data from sensor system 120. For example, in some embodiments, trajectory prediction system 170 receives image data from one or more cameras 126. Trajectory prediction system 170 may also receive LIDAR data from LIDAR sensors 124, radar data from radar sensors 123, and/or sonar data from sonar sensors 125, depending on the particular embodiment. 
In some embodiments, trajectory prediction system 170 also receives inputs from vehicle systems 140. Examples include, without limitation, steering wheel angle, gas pedal (accelerator) position, linear velocity, and angular velocity. Steering-wheel-angle and gas-pedal-position data are examples of what may be termed controller-area-network (CAN bus) data, and linear velocity and angular velocity are examples of what may be termed Inertial Measurement Unit (IMU) data. As also indicated in FIG. 1, trajectory prediction system 170, in particular control module 235, can communicate with vehicle systems 140 to control, at least in part, certain aspects of the operation of vehicle 100 such as steering, in some situations.” This passage teaches environment information (from LIDAR), driver input (steering wheel angle and gas pedal position data), and driving information (linear velocity and angular velocity); [0070] additionally teaches that eye tracking (driver behavior) information is collected.) It would have been obvious to one of ordinary skill in the art, before the effective filing date of the application, to substitute the threshold-based safety limits with a safe/unsafe trajectory prediction and determination as taught by McGill et al. One would be motivated to implement a trajectory-based determination to improve the operation of the system by allowing it to more closely follow natural operation (avoiding unnecessarily overriding the driver). This improvement (collision avoidance and compatibility with natural operation) is taught by McGill ([0021] In various embodiments, the predicted vehicle trajectories and their associated confidence scores can be used to control, at least in part, the operation of a vehicle. For example, if a particular likely trajectory is determined to be unsafe, a system in accordance with the embodiments disclosed herein can intervene to prevent the vehicle from traversing the unsafe trajectory. 
This can be accomplished, in some embodiments, through the system taking partial or complete control of the steering of the vehicle for a period of sufficient duration to avoid the danger. In general, the techniques described herein can be applied to at least the following use cases: (1) predicting whether the vehicle is going to hit an object or obstacle to improve automatic collision avoidance; and (2) determining which possible vehicle trajectory is most compatible with the way the driver wants to drive to improve the quality of the driving experience for the driver.”) Regarding Claim 6, modified Ellis teaches “The vehicle control system of claim 5, wherein the processor performs learning based on a driving path prediction value and a driver’s driving intention prediction value and updates driver’s driving intention detection logic.” (McGill “[0048] In some embodiments, in training the confidence estimator 430, the loss function is defined as the L2 error between the predicted confidence scores computed using the coefficients output by the model and the actual confidence scores (i.e., confidence scores determined relative to the actual trajectory taken by the vehicle in the training data). In some embodiments, the error is computed with respect to the average predicted direction of travel. In other embodiments, the average error is computed over a set of samples. For example, in one embodiment, confidence estimator 430 samples from the distribution, computes the error between that and the path the vehicle actually took in the training data, and averages the error over a set of samples. In general, the confidence score can be represented by any loss metric well-defined over the variational predictor and the expert predictor(s). 
One illustrative choice is displacement error at the end (limit) of the predictive temporal horizon (e.g., the difference between an actual trajectory at the end of the predictive temporal horizon and the predicted trajectory at the end of the predictive temporal horizon). Another illustrative choice is root-mean-squared-error (RMSE) along the entire trajectory. Both of these metrics are used, in some embodiments.” This passage teaches that the path prediction model (driver intent) is trained via comparison to the actual path.) Regarding Claim 19, modified Ellis (Ellis + Yoon of claim 16) does not detect “driver intent” as read in light of the specification; while Ellis does use the term “driver intent”, this equates more to “driver input” as read in light of the applicant’s specification, whereas “driver intent” in light of the applicant’s specification equates more to an intended maneuver or trajectory (u-turn, left turn, etc.). McGill et al. teaches a system for determining driver intent (a future trajectory or maneuver) based on all of driver behavior information, driver input information, driving information, and surrounding environment information, which includes “vehicle control system detects the driver’s driving intention based on the information received by the input module, performs learning by using driving path prediction results and driver’s driving intention prediction results, and updates driver’s driving intention detection logic.” ([0027] In connection with predicting the trajectory of vehicle 100, trajectory prediction system 170 can store various kinds of model-related data 260 in database 250. As shown in FIG. 1, trajectory prediction system 170 receives sensor data from sensor system 120. For example, in some embodiments, trajectory prediction system 170 receives image data from one or more cameras 126. 
Trajectory prediction system 170 may also receive LIDAR data from LIDAR sensors 124, radar data from radar sensors 123, and/or sonar data from sonar sensors 125, depending on the particular embodiment. In some embodiments, trajectory prediction system 170 also receives inputs from vehicle systems 140. Examples include, without limitation, steering wheel angle, gas pedal (accelerator) position, linear velocity, and angular velocity. Steering-wheel-angle and gas-pedal-position data are examples of what may be termed controller-area-network (CAN bus) data, and linear velocity and angular velocity are examples of what may be termed Inertial Measurement Unit (IMU) data. As also indicated in FIG. 1, trajectory prediction system 170, in particular control module 235, can communicate with vehicle systems 140 to control, at least in part, certain aspects of the operation of vehicle 100 such as steering, in some situations.” This passage teaches environment information (from LIDAR), driver input (steering wheel angle and gas pedal position data), and driving information (linear velocity and angular velocity); [0070] additionally teaches that eye tracking (driver behavior) information is collected. Further, “[0048] In some embodiments, in training the confidence estimator 430, the loss function is defined as the L2 error between the predicted confidence scores computed using the coefficients output by the model and the actual confidence scores (i.e., confidence scores determined relative to the actual trajectory taken by the vehicle in the training data). In some embodiments, the error is computed with respect to the average predicted direction of travel. In other embodiments, the average error is computed over a set of samples. For example, in one embodiment, confidence estimator 430 samples from the distribution, computes the error between that and the path the vehicle actually took in the training data, and averages the error over a set of samples. 
In general, the confidence score can be represented by any loss metric well-defined over the variational predictor and the expert predictor(s). One illustrative choice is displacement error at the end (limit) of the predictive temporal horizon (e.g., the difference between an actual trajectory at the end of the predictive temporal horizon and the predicted trajectory at the end of the predictive temporal horizon). Another illustrative choice is root-mean-squared-error (RMSE) along the entire trajectory. Both of these metrics are used, in some embodiments.” This passage teaches that the path prediction model (driver intent) is trained via comparison to the actual path.) It would have been obvious to one of ordinary skill in the art, before the effective filing date of the application, to substitute the threshold-based safety limits with a safe/unsafe trajectory prediction and determination as taught by McGill et al. One would be motivated to implement a trajectory-based determination to improve the operation of the system by allowing it to more closely follow natural operation (avoiding unnecessarily overriding the driver). This improvement (collision avoidance and compatibility with natural operation) is taught by McGill ([0021] In various embodiments, the predicted vehicle trajectories and their associated confidence scores can be used to control, at least in part, the operation of a vehicle. For example, if a particular likely trajectory is determined to be unsafe, a system in accordance with the embodiments disclosed herein can intervene to prevent the vehicle from traversing the unsafe trajectory. This can be accomplished, in some embodiments, through the system taking partial or complete control of the steering of the vehicle for a period of sufficient duration to avoid the danger. 
In general, the techniques described herein can be applied to at least the following use cases: (1) predicting whether the vehicle is going to hit an object or obstacle to improve automatic collision avoidance; and (2) determining which possible vehicle trajectory is most compatible with the way the driver wants to drive to improve the quality of the driving experience for the driver.”) Claim(s) 11-12 is/are rejected under 35 U.S.C. 103 as being unpatentable over Ellis as applied to claim 10 above, and further in view of US 20200086861 A1, “SYSTEMS AND METHODS FOR PREDICTING VEHICLE TRAJECTORY”, McGill et al. Regarding Claim 11, Ellis teaches “The vehicle control method of claim 10, wherein the step (b) comprises generating input integration data by integrating the information collected in the step” ([0035] The vehicle control system uses various parameters. These may be user-defined and may be fixed. In one configuration, the user-defined parameters include as a maximum allowable deceleration (a.sup.min) and a maximum allowable acceleration (a.sup.max), where a.sup.min is less than zero and a.sup.max is greater than zero. A vehicle's acceleration cannot be greater than the maximum allowable acceleration. Likewise, the vehicle's deceleration cannot be greater than the maximum allowable deceleration. [0036] In one configuration, the parameters are dynamic. For example, acceleration limits may be contextual. The context may be provided via the speed limits provided to the controller. The controller parameters may represent fixed limits.” The “contextual” parameters of [0036] of Ellis are understood to correspond with [0025]. Additionally, while Ellis does use the term “driver intent”, the “driver intent” of Ellis corresponds to “driver input” in the applicant’s claim terminology, whereas, when read in light of the applicant’s specification, “driver intent” is understood to correspond with an intended trajectory/maneuver (e.g. 
intends a u-turn as opposed to a left turn) per [0066]-[0068].) Ellis, however, does not detect “driver intent” as read in light of the specification; while Ellis does use the term “driver intent”, this equates more to “driver input” as read in light of the applicant’s specification, whereas “driver intent” in light of the applicant’s specification equates more to an intended maneuver or trajectory (u-turn, left turn, etc.). McGill et al. teaches a system for determining driver intent (a future trajectory or maneuver) based on all of driver behavior information, driver input information, driving information, and surrounding environment information, i.e. “(a) and detecting the driver’s driving intention.” ([0027] In connection with predicting the trajectory of vehicle 100, trajectory prediction system 170 can store various kinds of model-related data 260 in database 250. As shown in FIG. 1, trajectory prediction system 170 receives sensor data from sensor system 120. For example, in some embodiments, trajectory prediction system 170 receives image data from one or more cameras 126. Trajectory prediction system 170 may also receive LIDAR data from LIDAR sensors 124, radar data from radar sensors 123, and/or sonar data from sonar sensors 125, depending on the particular embodiment. In some embodiments, trajectory prediction system 170 also receives inputs from vehicle systems 140. Examples include, without limitation, steering wheel angle, gas pedal (accelerator) position, linear velocity, and angular velocity. Steering-wheel-angle and gas-pedal-position data are examples of what may be termed controller-area-network (CAN bus) data, and linear velocity and angular velocity are examples of what may be termed Inertial Measurement Unit (IMU) data. As also indicated in FIG. 
1, trajectory prediction system 170, in particular control module 235, can communicate with vehicle systems 140 to control, at least in part, certain aspects of the operation of vehicle 100 such as steering, in some situations.” This passage teaches environment information (from LIDAR), driver input (steering wheel angle and gas pedal position data), and driving information (linear velocity and angular velocity); [0070] additionally teaches that eye tracking (driver behavior) information is collected.) It would have been obvious to one of ordinary skill in the art, before the effective filing date of the application, to substitute the threshold-based safety limits with a safe/unsafe trajectory prediction and determination as taught by McGill et al. One would be motivated to implement a trajectory-based determination to improve the operation of the system by allowing it to more closely follow natural operation (avoiding unnecessarily overriding the driver). This improvement (collision avoidance and compatibility with natural operation) is taught by McGill ([0021] In various embodiments, the predicted vehicle trajectories and their associated confidence scores can be used to control, at least in part, the operation of a vehicle. For example, if a particular likely trajectory is determined to be unsafe, a system in accordance with the embodiments disclosed herein can intervene to prevent the vehicle from traversing the unsafe trajectory. This can be accomplished, in some embodiments, through the system taking partial or complete control of the steering of the vehicle for a period of sufficient duration to avoid the danger. 
In general, the techniques described herein can be applied to at least the following use cases: (1) predicting whether the vehicle is going to hit an object or obstacle to improve automatic collision avoidance; and (2) determining which possible vehicle trajectory is most compatible with the way the driver wants to drive to improve the quality of the driving experience for the driver.”) Regarding Claim 12, modified Ellis teaches “The vehicle control method of claim 11, wherein the step (b) comprises performing learning based on driving path prediction results and results of the detection of the driver’s driving intention and updating driver’s driving intention detection logic.” (McGill [0048] In some embodiments, in training the confidence estimator 430, the loss function is defined as the L2 error between the predicted confidence scores computed using the coefficients output by the model and the actual confidence scores (i.e., confidence scores determined relative to the actual trajectory taken by the vehicle in the training data). In some embodiments, the error is computed with respect to the average predicted direction of travel. In other embodiments, the average error is computed over a set of samples. For example, in one embodiment, confidence estimator 430 samples from the distribution, computes the error between that and the path the vehicle actually took in the training data, and averages the error over a set of samples. In general, the confidence score can be represented by any loss metric well-defined over the variational predictor and the expert predictor(s). One illustrative choice is displacement error at the end (limit) of the predictive temporal horizon (e.g., the difference between an actual trajectory at the end of the predictive temporal horizon and the predicted trajectory at the end of the predictive temporal horizon). Another illustrative choice is root-mean-squared-error (RMSE) along the entire trajectory. 
Both of these metrics are used, in some embodiments.” This passage teaches that the path prediction (driver intent) model learns by comparing the predicted path to the driven path.)

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: US 20200331488 A1; US 20220073103 A1; US 20230099555 A1; US 20240034335 A1; US 20250074476 A1. Any inquiry concerning this communication or earlier communications from the examiner should be directed to KENNETH MICHAEL DUNNE whose telephone number is (571)270-7392. The examiner can normally be reached Mon-Thurs 8:30-6:30. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Navid Z Mehdizadeh, can be reached at (571) 272-7691. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /KENNETH M DUNNE/Primary Examiner, Art Unit 3669
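For readers mapping the cited disclosures to concrete behavior, the safety-boundary handling of Ellis ([0019], [0035]) and the trajectory-error metrics of McGill ([0048]) can be sketched in a few lines. This is a minimal illustration, not code from either reference: the function names, the waypoint format, and the numeric bounds are assumptions made for the example.

```python
import math

def clamp_to_safety_bounds(driver_accel, a_min, a_max):
    """Per Ellis [0019]/[0035] as characterized above: a driver
    acceleration command inside the safety boundaries passes through
    unchanged; one outside the boundaries is adjusted (clamped) to
    the nearest bound. Requires a_min < 0 < a_max per [0035]."""
    return max(a_min, min(a_max, driver_accel))

def final_displacement_error(actual, predicted):
    """One illustrative loss from McGill [0048]: displacement error at
    the end of the predictive temporal horizon, i.e. the Euclidean
    distance between the last actual and last predicted waypoints."""
    (xa, ya), (xp, yp) = actual[-1], predicted[-1]
    return math.hypot(xa - xp, ya - yp)

def trajectory_rmse(actual, predicted):
    """The other illustrative loss from McGill [0048]: root-mean-squared
    error over the entire trajectory, waypoint by waypoint."""
    sq = [(xa - xp) ** 2 + (ya - yp) ** 2
          for (xa, ya), (xp, yp) in zip(actual, predicted)]
    return math.sqrt(sum(sq) / len(sq))

# Illustrative bounds (m/s^2), chosen only for the example.
print(clamp_to_safety_bounds(4.0, a_min=-3.0, a_max=2.5))   # out of bounds -> 2.5
print(clamp_to_safety_bounds(-1.0, a_min=-3.0, a_max=2.5))  # within bounds -> -1.0
```

The clamp function corresponds to the examiner's "first classification" (adjusted to fall within bounds) versus "second classification" (implemented without change); the two loss functions are the displacement-at-horizon-end and RMSE choices quoted from McGill [0048].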

Prosecution Timeline

Dec 12, 2024
Application Filed
Feb 04, 2026
Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12600262
VEHICLE MANAGING ENERGY AT A LOCATION DURING AN EVENT
2y 5m to grant Granted Apr 14, 2026
Patent 12596290
DAY/NIGHT FILTER GLASS FOR AIRCRAFT CAMERA SYSTEMS
2y 5m to grant Granted Apr 07, 2026
Patent 12594956
METHOD FOR PROVIDING INFORMATION ON RAINY ENVIRONMENT BY REFERRING TO POINT DATA ACQUIRED FROM A LIDAR SENSOR AND COMPUTING DEVICE USING THE SAME
2y 5m to grant Granted Apr 07, 2026
Patent 12590815
INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND COMPUTER PROGRAM PRODUCT
2y 5m to grant Granted Mar 31, 2026
Patent 12582041
A FORAGE HARVESTER EQUIPPED WITH A CROP PICK-UP HEADER
2y 5m to grant Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
76%
Grant Probability
87%
With Interview (+11.1%)
2y 7m
Median Time to Grant
Low
PTA Risk
Based on 285 resolved cases by this examiner. Grant probability derived from career allow rate.
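The headline figures above follow from simple arithmetic on the stated counts. A short sketch reproduces them; treating the interview lift as additive percentage points is an assumption about how the tool combines the numbers, not a documented formula.

```python
# Career allow rate: 217 granted out of 285 resolved cases.
granted, resolved = 217, 285
allow_rate_pct = round(granted / resolved * 100)
print(allow_rate_pct)  # -> 76, matching the 76% grant probability

# Assumed additive interview lift of +11.1 percentage points.
interview_lift_pts = 11.1
with_interview_pct = round(allow_rate_pct + interview_lift_pts)
print(with_interview_pct)  # -> 87, matching the 87% with-interview figure
```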
