DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of the Claims
Claims 1-20 are currently pending and have been examined. Applicant has amended claims 1, 5, 7, 11, and 13.
Response to Arguments/Amendments
The amendment filed November 17, 2025 has been entered. Claims 1-20 are currently pending in the Application.
Applicant’s arguments with respect to Claims 1-20 under 35 U.S.C. 103 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-3, 5, 7-9, 11, 13-15, 18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over TAKEHARA (US 20160347328 A1) in view of Penilla (US 20180061415 A1) and HECKMANN (US 20170050642 A1).
Regarding Claim 1, TAKEHARA teaches A method comprising: obtaining a driving intent of a driver of a target vehicle, wherein the target vehicle is an autonomous driving vehicle (See at least paragraph [0013], “The driving assistance device of the invention is a driving assistance device for assisting driving of a vehicle using switching between an automatic driving mode and a manual driving mode by a driver, and comprises: an information receiver that acquires respective position information of the vehicle and feature amounts of driving operations by the driver, as triggered by occurrence of switching in driving of the vehicle from the automatic driving mode to the manual driving mode; a determination processor that determines from the feature amounts of the driving operations by the driver acquired by the information receiver.”); detecting, based on the driving intent, that a conflict exists between a desired driving behavior of the target vehicle and the driving intent (See at least paragraph [0013], “an information receiver that acquires respective position information of the vehicle and feature amounts of driving operations by the driver, as triggered by occurrence of switching in driving of the vehicle from the automatic driving mode to the manual driving mode; a determination processor that determines from the feature amounts of the driving operations by the driver acquired by the information receiver, a driving operation to be corrected in the automatic driving mode and a correction amount thereof; a storage that stores the driving operation to be corrected and the correction amount thereof that are determined by the determination processor, in a manner associated with their corresponding position information.” The manual override by the driver demonstrates the conflict.); obtaining a first feature parameter based on the driving intent and the desired driving behavior, wherein the first feature parameter represents the driving intent and driving data 
corresponding to the desired driving behavior (See at least paragraph [0013], “an information receiver that acquires respective position information of the vehicle and feature amounts of driving operations by the driver, as triggered by occurrence of switching in driving of the vehicle from the automatic driving mode to the manual driving mode; a determination processor that determines from the feature amounts of the driving operations by the driver acquired by the information receiver, a driving operation to be corrected in the automatic driving mode and a correction amount thereof; a storage that stores the driving operation to be corrected and the correction amount thereof that are determined by the determination processor, in a manner associated with their corresponding position information”, paragraph [0045], “The determination processor 22 determines from the feature amounts of the driving operations by the driver acquired by the information receiver 21, a driving operation to be corrected in the automatic driving mode and a correction amount thereof”, and paragraph [0046], “For example, at a current position of the vehicle, when there is a difference by a predetermined threshold value or more, between a vehicle speed set for the automatic driving mode and a vehicle speed (feature amount) in the manual driving mode, the driving operation thereat is determined to be corrected, and the vehicle speed in the manual driving mode is determined as the correction amount for the automatic driving mode.”); and controlling the target vehicle based on the updated autonomous driving system (See at least paragraph [0048], “The correction processor 24 is a correction processor that, using the driving operation to be corrected and the correction amount thereof that are read out from the storage 23, corrects a driving operation corresponding to the position information in the automatic driving mode”, paragraph [0049], “For example, when an object to be corrected corresponding 
to a current position of the vehicle is a vehicle speed, the vehicle speed set for the automatic driving mode is corrected with the correction amount for the vehicle speed read out from the storage 23”, and paragraph [0050], “The vehicle controller 25 controls driving of the vehicle using switching between the automatic driving mode and the manual driving mode. For example, it controls the position of the accelerator pedal in the accelerator actuator 11 according to a driving operation set in the automatic driving mode to thereby accelerate or decelerate the vehicle. Further, it controls the brake position in the brake actuator 13 according to a driving operation set in the automatic driving mode to thereby decelerate the vehicle. Or, it controls a steered amount and a steering direction of the steering wheel in the steering actuator 15.”).
TAKEHARA does not explicitly disclose, however, Penilla, in the same field of endeavor, teaches inputting the first feature parameter into a preset neural network model to obtain a driving behavior that matches the driving intent, wherein the driving behavior comprises one or more of a visual behavior of the driver, an emotional behavior of the driver, or a physical posture behavior of the driver (See at least paragraph [0036], “In some implementations, the learning and predicting embodiments may utilize learning and prediction algorithms that are used in machine learning. In one embodiment, certain algorithms may look to patterns of input, inputs to certain user interfaces, inputs that can be identified to biometric patterns, inputs for neural network processing, inputs for machine learning (e.g., identifying relationships between inputs, and filtering based on geo-location and/or vehicle state, in real-time), logic for identifying or recommending a result or a next input, a next screen, a suggested input, suggested data that would be relevant for a particular time, geo-location, state of a vehicle, and/or combinations thereof. In one embodiment, use of machine learning enables the vehicle to learn what is needed by the user, at a particular time, in view of one or more operating/status state of the vehicle, in view of one or more state of one or more sensors of the vehicle. Thus, one or more inputs or data presented to the user may be provided without explicit input, request or programming by a user at that time. Overtime, machine learning can be used to reinforce learned behavior, which can provide weighting to certain inputs” and paragraph [0136], “Detecting emotional information can also use passive sensors which capture data about the user's physical state or behavior without interpreting the input. The data gathered is analogous to the cues humans use to perceive emotions in others. 
For example, a video camera might capture facial expressions, body posture and gestures, while a microphone might capture speech. Other sensors can detect emotional cues by directly measuring physiological data, such as skin temperature and galvanic resistance. In some embodiments, a camera or IR camera can detect temperature changes in a person's skin. For instance, if a user is stressed, the blood rushing to a person's face may elevate the heat pattern or sensed heat from that person's face.” The driver-related features, including visual behaviors, emotional states, and physical posture cues, are collected and input into a trained neural network for analysis. The neural network processes these inputs to determine a driving behavior corresponding to the driver’s intent.).
TAKEHARA and Penilla do not explicitly disclose, however, HECKMANN, in the same field of endeavor, teaches presenting, to the driver, one or more questions corresponding to the conflict to confirm the driving intent (See at least paragraph [0035], “The driver information request generator 12 in addition to the identified ambiguous objects also receives the environment representation 9. Having all the information at hand in the driver information request generator 12, it is possible to more clearly define the question that is directed to the driver. For example, the driver information request generator 12 has knowledge not only about the single ambiguous object for which additional information is needed, but also on its position relative to the vehicle or other traffic participants. Thus, a question may be more detailed, because the relative position of the object for which information is necessary can be identified. This has the effect that the driver does not need to analyze the situation by himself in order to derive in a first step knowledge to which object the question is directed.”); receiving, from the driver, answers to the one or more questions indicating whether to update an autonomous driving system (See at least paragraph [0036], “In response to the question that was output by the inventive system, for example as a spoken question, the driver will answer, which is in the vehicle 1 according to FIG. 1 performed by a spoken answer received by microphone 9. The respective signals are, as indicated by arrow 14, input into the central processing unit 2 and in particular to a driver feedback input recognition unit 15. In the driver feedback input recognition unit 15, the response of the driver is analyzed and information is extracted from the spoken response or the input via a touch screen or the like. The information that can be extracted from the response is added as information on the previously ambiguous object and thus the ambiguity can be overcome. 
Thus, the enhanced information is now used to generate from the initial environment representation 9 and having added the additional information a full traffic environment representation.”); updating, based on the driving behavior and the answers, the autonomous driving system to obtain an updated autonomous driving system (See at least paragraph [0036], “In response to the question that was output by the inventive system, for example as a spoken question, the driver will answer, which is in the vehicle 1 according to FIG. 1 performed by a spoken answer received by microphone 9. The respective signals are, as indicated by arrow 14, input into the central processing unit 2 and in particular to a driver feedback input recognition unit 15. In the driver feedback input recognition unit 15, the response of the driver is analyzed and information is extracted from the spoken response or the input via a touch screen or the like. The information that can be extracted from the response is added as information on the previously ambiguous object and thus the ambiguity can be overcome. Thus, the enhanced information is now used to generate from the initial environment representation 9 and having added the additional information a full traffic environment representation.”).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date to combine the invention of TAKEHARA with the teachings of Penilla and HECKMANN such that the driving system of TAKEHARA is further configured to utilize inputting the first feature parameter into a preset neural network model to obtain a driving behavior that matches the driving intent, wherein the driving behavior comprises one or more of a visual behavior of the driver, an emotional behavior of the driver, or a physical posture behavior of the driver, as taught by Penilla (See paragraph [0036], [0136].), and present, to the driver, one or more questions corresponding to the conflict to confirm the driving intent; receive, from the driver, answers to the one or more questions indicating whether to update an autonomous driving system; and update, based on the driving behavior and the answers, the autonomous driving system to obtain an updated autonomous driving system, as taught by HECKMANN (See paragraph [0035], [0036].), with a reasonable expectation of success. The motivation for doing so would be enhancing driver experience and vehicle response, as taught by Penilla (See paragraph [0010].). The motivation for doing so would be improving the performance of autonomous driving systems, as taught by HECKMANN (See paragraph [0008].).
Regarding Claim 7, TAKEHARA teaches An apparatus comprising: a memory configured to store instructions and a processor coupled to the memory and configured to execute the instructions to cause the apparatus to: obtain a driving intent of a driver of a target vehicle, wherein the target vehicle is an autonomous driving vehicle (See at least paragraph [0013], “The driving assistance device of the invention is a driving assistance device for assisting driving of a vehicle using switching between an automatic driving mode and a manual driving mode by a driver, and comprises: an information receiver that acquires respective position information of the vehicle and feature amounts of driving operations by the driver, as triggered by occurrence of switching in driving of the vehicle from the automatic driving mode to the manual driving mode; a determination processor that determines from the feature amounts of the driving operations by the driver acquired by the information receiver”, paragraph [0042], “The ECU 20 is an ECU that performs controlling of the entire driving assistance system 1. For example, it is provided mainly with a CPU and includes a ROM, a RAM, an input signal circuit, an output signal circuit, a power supply circuit, and the like”, paragraph [0043], “Further, as shown in FIG. 
2, the ECU 20 includes as a functional configuration of the driving assistance device according to Embodiment 1, an information receiver 21, a determination processor 22, a storage 23, a correction processor 24 and a vehicle controller 25.”); detect, based on driving intent, that a conflict exists between a desired driving behavior of the target vehicle and the driving intent (See at least paragraph [0013], “an information receiver that acquires respective position information of the vehicle and feature amounts of driving operations by the driver, as triggered by occurrence of switching in driving of the vehicle from the automatic driving mode to the manual driving mode; a determination processor that determines from the feature amounts of the driving operations by the driver acquired by the information receiver, a driving operation to be corrected in the automatic driving mode and a correction amount thereof; a storage that stores the driving operation to be corrected and the correction amount thereof that are determined by the determination processor, in a manner associated with their corresponding position information.” The manual override by the driver demonstrates the conflict.); obtain a first feature parameter based on the driving intent and the desired driving behavior, wherein the first feature parameter represents the driving intent and driving data corresponding to the desired driving behavior (See at least paragraph [0013], “an information receiver that acquires respective position information of the vehicle and feature amounts of driving operations by the driver, as triggered by occurrence of switching in driving of the vehicle from the automatic driving mode to the manual driving mode; a determination processor that determines from the feature amounts of the driving operations by the driver acquired by the information receiver, a driving operation to be corrected in the automatic driving mode and a correction amount thereof; a storage that stores the 
driving operation to be corrected and the correction amount thereof that are determined by the determination processor, in a manner associated with their corresponding position information”, paragraph [0045], “The determination processor 22 determines from the feature amounts of the driving operations by the driver acquired by the information receiver 21, a driving operation to be corrected in the automatic driving mode and a correction amount thereof”, and paragraph [0046], “For example, at a current position of the vehicle, when there is a difference by a predetermined threshold value or more, between a vehicle speed set for the automatic driving mode and a vehicle speed (feature amount) in the manual driving mode, the driving operation thereat is determined to be corrected, and the vehicle speed in the manual driving mode is determined as the correction amount for the automatic driving mode.”); and control the target vehicle based on the updated autonomous driving system (See at least paragraph [0048], “The correction processor 24 is a correction processor that, using the driving operation to be corrected and the correction amount thereof that are read out from the storage 23, corrects a driving operation corresponding to the position information in the automatic driving mode”, paragraph [0049], “For example, when an object to be corrected corresponding to a current position of the vehicle is a vehicle speed, the vehicle speed set for the automatic driving mode is corrected with the correction amount for the vehicle speed read out from the storage 23”, and paragraph [0050], “The vehicle controller 25 controls driving of the vehicle using switching between the automatic driving mode and the manual driving mode. For example, it controls the position of the accelerator pedal in the accelerator actuator 11 according to a driving operation set in the automatic driving mode to thereby accelerate or decelerate the vehicle. 
Further, it controls the brake position in the brake actuator 13 according to a driving operation set in the automatic driving mode to thereby decelerate the vehicle. Or, it controls a steered amount and a steering direction of the steering wheel in the steering actuator 15.”).
TAKEHARA does not explicitly disclose, however, Penilla, in the same field of endeavor, teaches input the first feature parameter into a preset neural network model to obtain a driving behavior that matches the driving intent, wherein the driving behavior comprises one or more of a visual behavior of the driver, an emotional behavior of the driver, or a physical posture behavior of the driver (See at least paragraph [0036], “In some implementations, the learning and predicting embodiments may utilize learning and prediction algorithms that are used in machine learning. In one embodiment, certain algorithms may look to patterns of input, inputs to certain user interfaces, inputs that can be identified to biometric patterns, inputs for neural network processing, inputs for machine learning (e.g., identifying relationships between inputs, and filtering based on geo-location and/or vehicle state, in real-time), logic for identifying or recommending a result or a next input, a next screen, a suggested input, suggested data that would be relevant for a particular time, geo-location, state of a vehicle, and/or combinations thereof. In one embodiment, use of machine learning enables the vehicle to learn what is needed by the user, at a particular time, in view of one or more operating/status state of the vehicle, in view of one or more state of one or more sensors of the vehicle. Thus, one or more inputs or data presented to the user may be provided without explicit input, request or programming by a user at that time. Overtime, machine learning can be used to reinforce learned behavior, which can provide weighting to certain inputs” and paragraph [0136], “Detecting emotional information can also use passive sensors which capture data about the user's physical state or behavior without interpreting the input. The data gathered is analogous to the cues humans use to perceive emotions in others. 
For example, a video camera might capture facial expressions, body posture and gestures, while a microphone might capture speech. Other sensors can detect emotional cues by directly measuring physiological data, such as skin temperature and galvanic resistance. In some embodiments, a camera or IR camera can detect temperature changes in a person's skin. For instance, if a user is stressed, the blood rushing to a person's face may elevate the heat pattern or sensed heat from that person's face.” The driver-related features, including visual behaviors, emotional states, and physical posture cues, are collected and input into a trained neural network for analysis. The neural network processes these inputs to determine a driving behavior corresponding to the driver’s intent.).
TAKEHARA and Penilla do not explicitly disclose, however, HECKMANN, in the same field of endeavor, teaches present, to the driver, one or more questions corresponding to the conflict to confirm the driving intent (See at least paragraph [0035], “The driver information request generator 12 in addition to the identified ambiguous objects also receives the environment representation 9. Having all the information at hand in the driver information request generator 12, it is possible to more clearly define the question that is directed to the driver. For example, the driver information request generator 12 has knowledge not only about the single ambiguous object for which additional information is needed, but also on its position relative to the vehicle or other traffic participants. Thus, a question may be more detailed, because the relative position of the object for which information is necessary can be identified. This has the effect that the driver does not need to analyze the situation by himself in order to derive in a first step knowledge to which object the question is directed.”); receive, from the driver, answers to the one or more questions indicating whether to update an autonomous driving system (See at least paragraph [0036], “In response to the question that was output by the inventive system, for example as a spoken question, the driver will answer, which is in the vehicle 1 according to FIG. 1 performed by a spoken answer received by microphone 9. The respective signals are, as indicated by arrow 14, input into the central processing unit 2 and in particular to a driver feedback input recognition unit 15. In the driver feedback input recognition unit 15, the response of the driver is analyzed and information is extracted from the spoken response or the input via a touch screen or the like. The information that can be extracted from the response is added as information on the previously ambiguous object and thus the ambiguity can be overcome. 
Thus, the enhanced information is now used to generate from the initial environment representation 9 and having added the additional information a full traffic environment representation.”); update, based on the driving behavior and the answers, the autonomous driving system to obtain an updated autonomous driving system (See at least paragraph [0036], “In response to the question that was output by the inventive system, for example as a spoken question, the driver will answer, which is in the vehicle 1 according to FIG. 1 performed by a spoken answer received by microphone 9. The respective signals are, as indicated by arrow 14, input into the central processing unit 2 and in particular to a driver feedback input recognition unit 15. In the driver feedback input recognition unit 15, the response of the driver is analyzed and information is extracted from the spoken response or the input via a touch screen or the like. The information that can be extracted from the response is added as information on the previously ambiguous object and thus the ambiguity can be overcome. Thus, the enhanced information is now used to generate from the initial environment representation 9 and having added the additional information a full traffic environment representation.”).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date to combine the invention of TAKEHARA with the teachings of Penilla and HECKMANN such that the driving system of TAKEHARA is further configured to utilize inputting the first feature parameter into a preset neural network model to obtain a driving behavior that matches the driving intent, wherein the driving behavior comprises one or more of a visual behavior of the driver, an emotional behavior of the driver, or a physical posture behavior of the driver, as taught by Penilla (See paragraph [0036], [0136].), and present, to the driver, one or more questions corresponding to the conflict to confirm the driving intent; receive, from the driver, answers to the one or more questions indicating whether to update an autonomous driving system; and update, based on the driving behavior and the answers, the autonomous driving system to obtain an updated autonomous driving system, as taught by HECKMANN (See paragraph [0035], [0036].), with a reasonable expectation of success. The motivation for doing so would be enhancing driver experience and vehicle response, as taught by Penilla (See paragraph [0010].). The motivation for doing so would be improving the performance of autonomous driving systems, as taught by HECKMANN (See paragraph [0008].).
Regarding Claim 13, TAKEHARA teaches A computer program product comprising computer-executable instructions that are stored on a non-transitory computer-readable storage medium and that, when executed by a processor, cause an apparatus to: obtain a driving intent of a driver of a target vehicle, wherein the target vehicle is an autonomous driving vehicle (See at least paragraph [0013], “The driving assistance device of the invention is a driving assistance device for assisting driving of a vehicle using switching between an automatic driving mode and a manual driving mode by a driver, and comprises: an information receiver that acquires respective position information of the vehicle and feature amounts of driving operations by the driver, as triggered by occurrence of switching in driving of the vehicle from the automatic driving mode to the manual driving mode; a determination processor that determines from the feature amounts of the driving operations by the driver acquired by the information receiver”, paragraph [0042], “The ECU 20 is an ECU that performs controlling of the entire driving assistance system 1. For example, it is provided mainly with a CPU and includes a ROM, a RAM, an input signal circuit, an output signal circuit, a power supply circuit, and the like”, paragraph [0043], “Further, as shown in FIG. 
2, the ECU 20 includes as a functional configuration of the driving assistance device according to Embodiment 1, an information receiver 21, a determination processor 22, a storage 23, a correction processor 24 and a vehicle controller 25.”); detect, based on the driving intent, that a conflict exists between a desired driving behavior of the target vehicle and the driving intent (See at least paragraph [0013], “an information receiver that acquires respective position information of the vehicle and feature amounts of driving operations by the driver, as triggered by occurrence of switching in driving of the vehicle from the automatic driving mode to the manual driving mode; a determination processor that determines from the feature amounts of the driving operations by the driver acquired by the information receiver, a driving operation to be corrected in the automatic driving mode and a correction amount thereof; a storage that stores the driving operation to be corrected and the correction amount thereof that are determined by the determination processor, in a manner associated with their corresponding position information.” The manual override by the driver demonstrates the conflict.); obtain a first feature parameter based on the driving intent and the desired driving behavior, wherein the first feature parameter represents the driving intent and driving data corresponding to the desired driving behavior (See at least paragraph [0013], “an information receiver that acquires respective position information of the vehicle and feature amounts of driving operations by the driver, as triggered by occurrence of switching in driving of the vehicle from the automatic driving mode to the manual driving mode; a determination processor that determines from the feature amounts of the driving operations by the driver acquired by the information receiver, a driving operation to be corrected in the automatic driving mode and a correction amount thereof; a storage that stores 
the driving operation to be corrected and the correction amount thereof that are determined by the determination processor, in a manner associated with their corresponding position information”, paragraph [0045], “The determination processor 22 determines from the feature amounts of the driving operations by the driver acquired by the information receiver 21, a driving operation to be corrected in the automatic driving mode and a correction amount thereof”, and paragraph [0046], “For example, at a current position of the vehicle, when there is a difference by a predetermined threshold value or more, between a vehicle speed set for the automatic driving mode and a vehicle speed (feature amount) in the manual driving mode, the driving operation thereat is determined to be corrected, and the vehicle speed in the manual driving mode is determined as the correction amount for the automatic driving mode.”); and control the target vehicle based on the updated autonomous driving system (See at least paragraph [0048], “The correction processor 24 is a correction processor that, using the driving operation to be corrected and the correction amount thereof that are read out from the storage 23, corrects a driving operation corresponding to the position information in the automatic driving mode”, paragraph [0049], “For example, when an object to be corrected corresponding to a current position of the vehicle is a vehicle speed, the vehicle speed set for the automatic driving mode is corrected with the correction amount for the vehicle speed read out from the storage 23”, and paragraph [0050], “The vehicle controller 25 controls driving of the vehicle using switching between the automatic driving mode and the manual driving mode. For example, it controls the position of the accelerator pedal in the accelerator actuator 11 according to a driving operation set in the automatic driving mode to thereby accelerate or decelerate the vehicle. 
Further, it controls the brake position in the brake actuator 13 according to a driving operation set in the automatic driving mode to thereby decelerate the vehicle. Or, it controls a steered amount and a steering direction of the steering wheel in the steering actuator 15.”).
TAKEHARA does not explicitly disclose, however, Penilla, in the same field of endeavor, teaches input the first feature parameter into a preset neural network model to obtain a driving behavior that matches the driving intent, wherein the driving behavior comprises one or more of a visual behavior of the driver, an emotional behavior of the driver, or a physical posture behavior of the driver (See at least paragraph [0036], “In some implementations, the learning and predicting embodiments may utilize learning and prediction algorithms that are used in machine learning. In one embodiment, certain algorithms may look to patterns of input, inputs to certain user interfaces, inputs that can be identified to biometric patterns, inputs for neural network processing, inputs for machine learning (e.g., identifying relationships between inputs, and filtering based on geo-location and/or vehicle state, in real-time), logic for identifying or recommending a result or a next input, a next screen, a suggested input, suggested data that would be relevant for a particular time, geo-location, state of a vehicle, and/or combinations thereof. In one embodiment, use of machine learning enables the vehicle to learn what is needed by the user, at a particular time, in view of one or more operating/status state of the vehicle, in view of one or more state of one or more sensors of the vehicle. Thus, one or more inputs or data presented to the user may be provided without explicit input, request or programming by a user at that time. Overtime, machine learning can be used to reinforce learned behavior, which can provide weighting to certain inputs” and paragraph [0136], “Detecting emotional information can also use passive sensors which capture data about the user's physical state or behavior without interpreting the input. The data gathered is analogous to the cues humans use to perceive emotions in others. 
For example, a video camera might capture facial expressions, body posture and gestures, while a microphone might capture speech. Other sensors can detect emotional cues by directly measuring physiological data, such as skin temperature and galvanic resistance. In some embodiments, a camera or IR camera can detect temperature changes in a person's skin. For instance, if a user is stressed, the blood rushing to a person's face may elevate the heat pattern or sensed heat from that person's face.” The driver-related features, including visual behaviors, emotional states, and physical posture cues, are collected and input into a trained neural network for analysis. The neural network processes these inputs to determine a driving behavior corresponding to the driver’s intent.).
TAKEHARA and Penilla do not explicitly disclose, however, HECKMANN, in the same field of endeavor, teaches present, to the driver, one or more questions corresponding to the conflict to confirm the driving intent (See at least paragraph [0035], “The driver information request generator 12 in addition to the identified ambiguous objects also receives the environment representation 9. Having all the information at hand in the driver information request generator 12, it is possible to more clearly define the question that is directed to the driver. For example, the driver information request generator 12 has knowledge not only about the single ambiguous object for which additional information is needed, but also on its position relative to the vehicle or other traffic participants. Thus, a question may be more detailed, because the relative position of the object for which information is necessary can be identified. This has the effect that the driver does not need to analyze the situation by himself in order to derive in a first step knowledge to which object the question is directed.”); receive, from the driver, answers to the one or more questions indicating whether to update an autonomous driving system (See at least paragraph [0036], “In response to the question that was output by the inventive system, for example as a spoken question, the driver will answer, which is in the vehicle 1 according to FIG. 1 performed by a spoken answer received by microphone 9. The respective signals are, as indicated by arrow 14, input into the central processing unit 2 and in particular to a driver feedback input recognition unit 15. In the driver feedback input recognition unit 15, the response of the driver is analyzed and information is extracted from the spoken response or the input via a touch screen or the like. The information that can be extracted from the response is added as information on the previously ambiguous object and thus the ambiguity can be overcome. 
Thus, the enhanced information is now used to generate from the initial environment representation 9 and having added the additional information a full traffic environment representation.”); update, based on the driving behavior and the answers, the autonomous driving system to obtain an updated autonomous driving system (See at least paragraph [0036], “In response to the question that was output by the inventive system, for example as a spoken question, the driver will answer, which is in the vehicle 1 according to FIG. 1 performed by a spoken answer received by microphone 9. The respective signals are, as indicated by arrow 14, input into the central processing unit 2 and in particular to a driver feedback input recognition unit 15. In the driver feedback input recognition unit 15, the response of the driver is analyzed and information is extracted from the spoken response or the input via a touch screen or the like. The information that can be extracted from the response is added as information on the previously ambiguous object and thus the ambiguity can be overcome. Thus, the enhanced information is now used to generate from the initial environment representation 9 and having added the additional information a full traffic environment representation.”).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date to combine the invention of TAKEHARA with the teachings of Penilla and HECKMANN such that the driving system of TAKEHARA is further configured to input the first feature parameter into a preset neural network model to obtain a driving behavior that matches the driving intent, wherein the driving behavior comprises one or more of a visual behavior of the driver, an emotional behavior of the driver, or a physical posture behavior of the driver, as taught by Penilla (See paragraph [0036], [0136].), and present, to the driver, one or more questions corresponding to the conflict to confirm the driving intent; receive, from the driver, answers to the one or more questions indicating whether to update an autonomous driving system; and update, based on the driving behavior and the answers, the autonomous driving system to obtain an updated autonomous driving system, as taught by HECKMANN (See paragraph [0035], [0036].), with a reasonable expectation of success. The motivation for doing so would be enhancing driver experience and vehicle response, as taught by Penilla (See paragraph [0010].). A further motivation would be improving the performance of autonomous driving systems, as taught by HECKMANN (See paragraph [0008].).
Regarding Claim 2, TAKEHARA, Penilla, and HECKMANN teach The method of claim 1, as set forth in the obviousness rejection above. TAKEHARA teaches wherein the driving intent is based on a feature parameter corresponding to the driving behavior of the driver, and wherein the driving behavior comprises an operation behavior of the driver (See at least paragraph [0045], “The determination processor 22 determines from the feature amounts of the driving operations by the driver acquired by the information receiver 21, a driving operation to be corrected in the automatic driving mode and a correction amount thereof” and paragraph [0046], “For example, at a current position of the vehicle, when there is a difference by a predetermined threshold value or more, between a vehicle speed set for the automatic driving mode and a vehicle speed (feature amount) in the manual driving mode, the driving operation thereat is determined to be corrected, and the vehicle speed in the manual driving mode is determined as the correction amount for the automatic driving mode.”).
With respect to claim 8, please see the rejection above with respect to claim 2, which is commensurate in scope to claim 8, with claim 2 being drawn to a method and claim 8 being drawn to a corresponding apparatus.
With respect to claim 14, please see the rejection above with respect to claim 2, which is commensurate in scope to claim 14, with claim 2 being drawn to a method and claim 14 being drawn to a corresponding computer program product.
Regarding Claim 3, TAKEHARA, Penilla, and HECKMANN teach The method of claim 2, as set forth in the obviousness rejection above. TAKEHARA teaches wherein detecting that the conflict exists between the desired driving behavior and the driving intent comprises detecting that the feature parameter exceeds a preset range (See at least paragraph [0013], “an information receiver that acquires respective position information of the vehicle and feature amounts of driving operations by the driver, as triggered by occurrence of switching in driving of the vehicle from the automatic driving mode to the manual driving mode; a determination processor that determines from the feature amounts of the driving operations by the driver acquired by the information receiver, a driving operation to be corrected in the automatic driving mode and a correction amount thereof; a storage that stores the driving operation to be corrected and the correction amount thereof that are determined by the determination processor, in a manner associated with their corresponding position information”, paragraph [0045], “The determination processor 22 determines from the feature amounts of the driving operations by the driver acquired by the information receiver 21, a driving operation to be corrected in the automatic driving mode and a correction amount thereof”, and paragraph [0046], “For example, at a current position of the vehicle, when there is a difference by a predetermined threshold value or more, between a vehicle speed set for the automatic driving mode and a vehicle speed (feature amount) in the manual driving mode, the driving operation thereat is determined to be corrected, and the vehicle speed in the manual driving mode is determined as the correction amount for the automatic driving mode.” The manual override by the driver demonstrates the conflict.).
Regarding Claim 5, TAKEHARA, Penilla, and HECKMANN teach The method of claim 1, as set forth in the obviousness rejection above. TAKEHARA and Penilla do not explicitly disclose, however, HECKMANN, in the same field of endeavor, teaches wherein updating the autonomous driving system further comprises updating the autonomous driving system based on the driving behavior when the answer indicates to update the autonomous driving system (See at least paragraph [0036], “In response to the question that was output by the inventive system, for example as a spoken question, the driver will answer, which is in the vehicle 1 according to FIG. 1 performed by a spoken answer received by microphone 9. The respective signals are, as indicated by arrow 14, input into the central processing unit 2 and in particular to a driver feedback input recognition unit 15. In the driver feedback input recognition unit 15, the response of the driver is analyzed and information is extracted from the spoken response or the input via a touch screen or the like. The information that can be extracted from the response is added as information on the previously ambiguous object and thus the ambiguity can be overcome. Thus, the enhanced information is now used to generate from the initial environment representation 9 and having added the additional information a full traffic environment representation.”).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date to combine the invention of TAKEHARA with the teachings of Penilla and HECKMANN such that the driving system of TAKEHARA is further configured to input the first feature parameter into a preset neural network model to obtain a driving behavior that matches the driving intent, wherein the driving behavior comprises one or more of a visual behavior of the driver, an emotional behavior of the driver, or a physical posture behavior of the driver, as taught by Penilla (See paragraph [0036], [0136].), and present, to the driver, one or more questions corresponding to the conflict to confirm the driving intent; receive, from the driver, answers to the one or more questions indicating whether to update an autonomous driving system; and update, based on the driving behavior and the answers, the autonomous driving system to obtain an updated autonomous driving system; and wherein updating the autonomous driving system further comprises updating the autonomous driving system based on the driving behavior when the answer indicates to update the autonomous driving system, as taught by HECKMANN (See paragraph [0035], [0036].), with a reasonable expectation of success. The motivation for doing so would be enhancing driver experience and vehicle response, as taught by Penilla (See paragraph [0010].). A further motivation would be improving the performance of autonomous driving systems, as taught by HECKMANN (See paragraph [0008].).
With respect to claim 11, please see the rejection above with respect to claim 5, which is commensurate in scope to claim 11, with claim 5 being drawn to a method and claim 11 being drawn to a corresponding apparatus.
Regarding Claim 9, TAKEHARA, Penilla, and HECKMANN teach The apparatus of claim 8, as set forth in the obviousness rejection above. TAKEHARA teaches wherein the processor is further configured to detect that the feature parameter exceeds a preset range (See at least paragraph [0044], “The information receiver 21 acquires respective position information of the vehicle and driving operations by the driver, as triggered by occurrence of switching in driving of the vehicle from the automatic driving mode to the manual driving mode. For example, it acquires a current position of the vehicle measured by the GPS receiver 5, as position information of the vehicle. Further, the driving operations by the driver is each identified based, for example, on a traveling direction of the vehicle inputted from the direction indicator 6, an operation amount of the accelerator pedal detected by the accelerator position sensor 12, a pressed-down amount of the brake pedal detected by the brake position sensor 14, or a steered amount of the steering wheel and/or an operation amount about a steering direction, etc. that are detected by the steering sensor 16” and paragraph [0075], “For example, upon comparison between the section feature amount and a feature amount of the driving operation in the above section having been set in the automatic driving mode, when a difference therebetween exceeds a predetermined threshold value, such setting in the automatic driving mode in that section is determined to be unmatched to the driver's desire, so that the section feature amount is stored as the correction amount.”).
Regarding Claim 15, TAKEHARA, Penilla, and HECKMANN teach The computer program product of claim 14, as set forth in the obviousness rejection above. TAKEHARA teaches wherein the computer-executable instructions further cause the apparatus to: detect that the feature parameter exceeds a preset range; detect that a time period for which the feature parameter exceeds the preset range is greater than or equal to a first preset value; or detect that a quantity of times that the feature parameter exceeds the preset range is greater than or equal to a second preset value (See at least paragraph [0044], “The information receiver 21 acquires respective position information of the vehicle and driving operations by the driver, as triggered by occurrence of switching in driving of the vehicle from the automatic driving mode to the manual driving mode. For example, it acquires a current position of the vehicle measured by the GPS receiver 5, as position information of the vehicle. Further, the driving operations by the driver is each identified based, for example, on a traveling direction of the vehicle inputted from the direction indicator 6, an operation amount of the accelerator pedal detected by the accelerator position sensor 12, a pressed-down amount of the brake pedal detected by the brake position sensor 14, or a steered amount of the steering wheel and/or an operation amount about a steering direction, etc. that are detected by the steering sensor 16” and paragraph [0075], “For example, upon comparison between the section feature amount and a feature amount of the driving operation in the above section having been set in the automatic driving mode, when a difference therebetween exceeds a predetermined threshold value, such setting in the automatic driving mode in that section is determined to be unmatched to the driver's desire, so that the section feature amount is stored as the correction amount.”).
Regarding Claim 18, TAKEHARA, Penilla, and HECKMANN teach The apparatus of claim 8, as set forth in the obviousness rejection above. TAKEHARA teaches wherein the processor is further configured to execute the instructions to cause the apparatus to detect that a quantity of times that the feature parameter exceeds a preset range is greater than or equal to a preset value (See at least paragraph [0044], “The information receiver 21 acquires respective position information of the vehicle and driving operations by the driver, as triggered by occurrence of switching in driving of the vehicle from the automatic driving mode to the manual driving mode. For example, it acquires a current position of the vehicle measured by the GPS receiver 5, as position information of the vehicle. Further, the driving operations by the driver is each identified based, for example, on a traveling direction of the vehicle inputted from the direction indicator 6, an operation amount of the accelerator pedal detected by the accelerator position sensor 12, a pressed-down amount of the brake pedal detected by the brake position sensor 14, or a steered amount of the steering wheel and/or an operation amount about a steering direction, etc. 
that are detected by the steering sensor 16”, paragraph [0075], “For example, upon comparison between the section feature amount and a feature amount of the driving operation in the above section having been set in the automatic driving mode, when a difference therebetween exceeds a predetermined threshold value, such setting in the automatic driving mode in that section is determined to be unmatched to the driver's desire, so that the section feature amount is stored as the correction amount”, and paragraph [0102], “If previously traveled on the setup route, and there is a correction amount for the automatic driving (Step ST303; YES), the correction processor 24 confirms whether or not the number of switching times AN to the manual driving in an objective section exceeds a predetermined threshold value (Step ST304).”).
Regarding Claim 20, TAKEHARA, Penilla, and HECKMANN teach The method of claim 2, as set forth in the obviousness rejection above. TAKEHARA teaches wherein detecting that the conflict exists between the desired driving behavior and the driving intent comprises detecting that a quantity of times that the feature parameter exceeds a preset range is greater than or equal to a preset value (See at least paragraph [0044], “The information receiver 21 acquires respective position information of the vehicle and driving operations by the driver, as triggered by occurrence of switching in driving of the vehicle from the automatic driving mode to the manual driving mode. For example, it acquires a current position of the vehicle measured by the GPS receiver 5, as position information of the vehicle. Further, the driving operations by the driver is each identified based, for example, on a traveling direction of the vehicle inputted from the direction indicator 6, an operation amount of the accelerator pedal detected by the accelerator position sensor 12, a pressed-down amount of the brake pedal detected by the brake position sensor 14, or a steered amount of the steering wheel and/or an operation amount about a steering direction, etc. 
that are detected by the steering sensor 16”, paragraph [0075], “For example, upon comparison between the section feature amount and a feature amount of the driving operation in the above section having been set in the automatic driving mode, when a difference therebetween exceeds a predetermined threshold value, such setting in the automatic driving mode in that section is determined to be unmatched to the driver's desire, so that the section feature amount is stored as the correction amount”, paragraph [0101], “Then, the correction processor 24 in the ECU 20 searches data related to routes for which the speed maps have been generated, from among data stored in the storage 23, to thereby determine whether or not: the route is that on which the vehicle 100 has previously traveled; and a correction amount for the automatic driving is being stored (Step ST303). At this time, if not previously traveled on the setup route (Step ST303; NO), the flow moves to processing in Step ST309”, and paragraph [0102], “If previously traveled on the setup route, and there is a correction amount for the automatic driving (Step ST303; YES), the correction processor 24 confirms whether or not the number of switching times AN to the manual driving in an objective section exceeds a predetermined threshold value (Step ST304).”).
Claims 4, 10, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over TAKEHARA (US 20160347328 A1) in view of Penilla (US 20180061415 A1), HECKMANN (US 20170050642 A1), and BURKHART (DE 102016224291 A1).
Regarding Claim 4, TAKEHARA, Penilla, and HECKMANN teach The method of claim 1, as set forth in the obviousness rejection above. TAKEHARA, Penilla, and HECKMANN do not explicitly disclose, however, BURKHART, in the same field of endeavor, teaches wherein the preset neural network model is preconfigured (See at least paragraph [0024], “In the embodiment described here, the partially automated driving system TF is continuously improved during the driving operation of the motor vehicle 1 by means of training data. This is in block B1 of the Fig. 1 indicated. The training data includes a large number of digital training data sets TD , each of which specifies a limit situation GS and a driver reaction RE. In other words, both the limit situations GS and the driver reactions RE are specified by suitable parameters that are available as digital data and are typical for the corresponding limit situation and the subsequent driver reaction” and paragraph [0025], “In order to achieve this, the partially automated driving system TF is trained using a machine learning method with the training data TD previously determined during driving, whereby this training takes place in a computer unit within the vehicle 1 in the embodiment described here. The training step is completed by block B2 in Fig. 1 indicated. During training, a machine learning (ML) method is used, such as: B. known neural networks or regression methods. The result of machine learning is a new parameterization PA of the partially automated driving system TF. In other words, corresponding parameters of algorithms on the basis of which the functionality of the partially automated driving system TF is realized are redefined. 
In this way, an adapted partially automated driving system TF is obtained, which is improved by taking driver reactions into account, so that borderline situations in which the partially automated driving system returns the driving tasks to the driver occur less frequently.” The training data includes the driver reaction, which corresponds to the driving behavior, and the parameters, which correspond to the first feature parameter. The previous training data, including these parameters, is input into the neural network to obtain the training data TD, which includes the second driving behavior.).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date to combine the invention of TAKEHARA with the teachings of Penilla, HECKMANN, and BURKHART such that the driving system of TAKEHARA is further configured to input the first feature parameter into a preset neural network model to obtain a driving behavior that matches the driving intent, wherein the driving behavior comprises one or more of a visual behavior of the driver, an emotional behavior of the driver, or a physical posture behavior of the driver, as taught by Penilla (See paragraph [0036], [0136].), present, to the driver, one or more questions corresponding to the conflict to confirm the driving intent; receive, from the driver, answers to the one or more questions indicating whether to update an autonomous driving system; and update, based on the driving behavior and the answers, the autonomous driving system to obtain an updated autonomous driving system, as taught by HECKMANN (See paragraph [0035], [0036].), and the preset neural network model being preconfigured, as taught by BURKHART (See paragraph [0024], [0025].), with a reasonable expectation of success. The motivation for doing so would be enhancing driver experience and vehicle response, as taught by Penilla (See paragraph [0010].). A further motivation would be improving the performance of autonomous driving systems, as taught by HECKMANN (See paragraph [0008].). A further motivation would be decreasing the need for driver involvement and improving the driving system, as taught by BURKHART (See paragraph [0005].).
With respect to claim 10, please see the rejection above with respect to claim 4, which is commensurate in scope to claim 10, with claim 4 being drawn to a method and claim 10 being drawn to a corresponding apparatus.
With respect to claim 16, please see the rejection above with respect to claim 4, which is commensurate in scope to claim 16, with claim 4 being drawn to a method and claim 16 being drawn to a corresponding computer program product.
Claims 6 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over TAKEHARA (US 20160347328 A1) in view of Penilla (US 20180061415 A1), HECKMANN (US 20170050642 A1), and HU (US 20200039525 A1).
Regarding Claim 6, TAKEHARA, Penilla, and HECKMANN teach The method of claim 5, as set forth in the obviousness rejection above. TAKEHARA teaches wherein updating the autonomous driving system further comprises: setting the second driving behavior as the driving behavior (See at least paragraph [0072], “In contrast, when the specified section is terminated (Step ST203; YES), the information receiver 21 calculates a section feature amount (Step ST204). The section feature amount is a summarized feature amount for each specified section that is obtained from the plural feature amounts acquired in that specified section. For example, a moving average value of the plural feature amounts acquired in the specified section is determined as the section feature amount” and paragraph [0075], “For example, upon comparison between the section feature amount and a feature amount of the driving operation in the above section having been set in the automatic driving mode, when a difference therebetween exceeds a predetermined threshold value, such setting in the automatic driving mode in that section is determined to be unmatched to the driver's desire, so that the section feature amount is stored as the correction amount.” The section feature amount is the second driving behavior.); setting the safe driving behavior as the driving behavior (See at least paragraph [0066], “If the vehicle 100 is changed to the manual driving mode (Step ST201; YES), the information receiver 21 acquires a variety of feature amounts of the driving operations in the manual driving mode (Step ST202). Note that the feature amount is an operation amount of each of the driving operations in a series of vehicle controls by the driver. Examples thereof include a speed, a deceleration rate and an acceleration rate of the vehicle 100, a steered amount and a steering direction of the steering wheel, and the like, that are periodically acquired in the manual driving section.”).
TAKEHARA, Penilla, and HECKMANN do not explicitly disclose, however, HU, in the same field of endeavor, teaches when the driving behavior complies with a safe driving behavior of the target vehicle, and when the driving behavior does not comply with the safe driving behavior (See at least paragraph [0051], “In the present embodiment, the real-time safety detection model is obtained by training a neural network model by driving behavior feature data and a security marking score in a first training set. The first training set includes multiple pieces of training data, and each piece of training data is a set of data consisting of the driving behavior feature data of the vehicle and a corresponding security marking score” and paragraph [0058], “This embodiment of the present disclosure includes acquiring current driving data of a vehicle during a driving process of the vehicle; determining current driving behavior feature data of the vehicle according to the current driving data of the vehicle; inputting the current driving behavior feature data of the vehicle into a real-time safety detection model and calculating a security score corresponding to current driving behavior of the vehicle; and determining whether the current driving behavior of the vehicle is safe according to the security score corresponding to the current driving behavior of the vehicle. Usually, when a riding user feels insecure about the driving behavior of the vehicle, the vehicle has not been in danger, and the detection of the safety of the current driving behavior of the vehicle according to whether the driving behavior of the vehicle causes the user to feel insecure can assist an optimization of a vehicle driving system, reduce a safety risk of vehicle driving, and improve a riding experience of the user.”).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date to combine the invention of TAKEHARA with the teachings of Penilla, HECKMANN, and HU such that the driving system of TAKEHARA is further configured to input the first feature parameter into a preset neural network model to obtain a driving behavior that matches the driving intent, wherein the driving behavior comprises one or more of a visual behavior of the driver, an emotional behavior of the driver, or a physical posture behavior of the driver, as taught by Penilla (See paragraph [0036], [0136].), present, to the driver, one or more questions corresponding to the conflict to confirm the driving intent; receive, from the driver, answers to the one or more questions indicating whether to update an autonomous driving system; update, based on the driving behavior and the answers, the autonomous driving system to obtain an updated autonomous driving system; and wherein updating the autonomous driving system further comprises updating the autonomous driving system based on the driving behavior when the answer indicates to update the autonomous driving system, as taught by HECKMANN (See paragraph [0035], [0036].), and incorporate compliance of safe driving behaviors, as taught by HU (See paragraph [0058].), with a reasonable expectation of success. The motivation for doing so would be enhancing driver experience and vehicle response, as taught by Penilla (See paragraph [0010].). A further motivation would be improving the performance of autonomous driving systems, as taught by HECKMANN (See paragraph [0008].). A further motivation would be improving control performance, driving behavior, and safety, as taught by HU (See paragraph [0004].).
With respect to claim 12, please see the rejection above with respect to claim 6, which is commensurate in scope with claim 12, with claim 6 being drawn to a method and claim 12 being drawn to a corresponding apparatus.
Claims 17 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over TAKEHARA (US 20160347328 A1) in view of Penilla (US 20180061415 A1), HECKMANN (US 20170050642 A1), and TSUJI (US 20180052458 A1).
Regarding Claim 17, TAKEHARA, Penilla, and HECKMANN teach The apparatus of claim 8, as set forth in the obviousness rejection above. TAKEHARA, Penilla, and HECKMANN do not explicitly disclose this limitation; however, TSUJI, in the same field of endeavor, teaches wherein the processor is further configured to execute the instructions to cause the apparatus to detect that a time period for which the feature parameter exceeds a preset range is greater than or equal to a preset value (See at least paragraph [0118], “Moreover, vehicle controller 7 may: set a preset behavior from among a plurality of behavior candidates as the most appropriate behavior; information on a previously selected behavior may be sorted in storage 8 and vehicle controller 7 may determine the previously selected behavior to be the most appropriate behavior; a number of times each behavior has been selected may be stored in storage 8 and vehicle controller 7 may determine the behavior having the highest count to be the most appropriate behavior”, paragraph [0119], “When input interface 51 does not receive an input within the second predetermined period of time, vehicle controller 7 controls the vehicle by causing the vehicle to implement the primary behavior, and controls brake pedal 2, accelerator pedal 3, and turn signal lever 4 in accordance with the vehicle control result”, paragraph [0121], “Vehicle controller 7 further obtains information on an input received by input interface 51 from the driver. After notifying the primary behavior and the secondary behavior(s), vehicle controller 7 determines whether input interface 51 has received an input or not within a second predetermined period of time. An input is, for example, a selection of one of the secondary behaviors”, and paragraph [0122], “When input interface 51 does not receive an input within the second predetermined period of time, vehicle controller 7 controls the vehicle by causing the vehicle to implement the primary behavior, and controls brake pedal 2, accelerator pedal 3, and turn signal lever 4 in accordance with the vehicle control result.”).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date to combine the invention of TAKEHARA with the teachings of Penilla, HECKMANN, and TSUJI such that the driving system of TAKEHARA is further configured to utilize inputting the first feature parameter into a preset neural network model to obtain a driving behavior that matches the driving intent, wherein the driving behavior comprises one or more of a visual behavior of the driver, an emotional behavior of the driver, or a physical posture behavior of the driver, as taught by Penilla (See paragraphs [0036] and [0136].), present, to the driver, one or more questions corresponding to the conflict to confirm the driving intent; receive, from the driver, answers to the one or more questions indicating whether to update an autonomous driving system; and update, based on the driving behavior and the answers, the autonomous driving system to obtain an updated autonomous driving system, as taught by HECKMANN (See paragraphs [0035] and [0036].), and to incorporate detecting that the time period for which the feature parameters associated with driver behaviors exceed a preset range meets a preset value, as taught by TSUJI (See paragraph [0121].), with a reasonable expectation of success. The motivation for doing so would be enhancing driver experience and vehicle response, as taught by Penilla (See paragraph [0010].). The motivation for doing so would be improving the performance of autonomous driving systems, as taught by HECKMANN (See paragraph [0008].). The motivation for doing so would be estimating a driving conduct suited to the driver, as taught by TSUJI (See paragraph [0006].).
Regarding Claim 19, TAKEHARA, Penilla, and HECKMANN teach The method of claim 2, as set forth in the obviousness rejection above. TAKEHARA teaches wherein detecting that the conflict exists between the desired driving behavior and the driving intent (See at least paragraph [0013], “an information receiver that acquires respective position information of the vehicle and feature amounts of driving operations by the driver, as triggered by occurrence of switching in driving of the vehicle from the automatic driving mode to the manual driving mode; a determination processor that determines from the feature amounts of the driving operations by the driver acquired by the information receiver, a driving operation to be corrected in the automatic driving mode and a correction amount thereof; a storage that stores the driving operation to be corrected and the correction amount thereof that are determined by the determination processor, in a manner associated with their corresponding position information.” The manual override by the driver demonstrates the conflict.).
TAKEHARA, Penilla, and HECKMANN do not explicitly disclose this limitation; however, TSUJI, in the same field of endeavor, teaches comprises detecting that a time period for which the feature parameter exceeds a preset range is greater than or equal to a preset value (See at least paragraph [0118], “Moreover, vehicle controller 7 may: set a preset behavior from among a plurality of behavior candidates as the most appropriate behavior; information on a previously selected behavior may be sorted in storage 8 and vehicle controller 7 may determine the previously selected behavior to be the most appropriate behavior; a number of times each behavior has been selected may be stored in storage 8 and vehicle controller 7 may determine the behavior having the highest count to be the most appropriate behavior”, paragraph [0119], “When input interface 51 does not receive an input within the second predetermined period of time, vehicle controller 7 controls the vehicle by causing the vehicle to implement the primary behavior, and controls brake pedal 2, accelerator pedal 3, and turn signal lever 4 in accordance with the vehicle control result”, paragraph [0121], “Vehicle controller 7 further obtains information on an input received by input interface 51 from the driver. After notifying the primary behavior and the secondary behavior(s), vehicle controller 7 determines whether input interface 51 has received an input or not within a second predetermined period of time. An input is, for example, a selection of one of the secondary behaviors”, and paragraph [0122], “When input interface 51 does not receive an input within the second predetermined period of time, vehicle controller 7 controls the vehicle by causing the vehicle to implement the primary behavior, and controls brake pedal 2, accelerator pedal 3, and turn signal lever 4 in accordance with the vehicle control result.”).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date to combine the invention of TAKEHARA with the teachings of Penilla, HECKMANN, and TSUJI such that the driving system of TAKEHARA is further configured to utilize inputting the first feature parameter into a preset neural network model to obtain a driving behavior that matches the driving intent, wherein the driving behavior comprises one or more of a visual behavior of the driver, an emotional behavior of the driver, or a physical posture behavior of the driver, as taught by Penilla (See paragraphs [0036] and [0136].), present, to the driver, one or more questions corresponding to the conflict to confirm the driving intent; receive, from the driver, answers to the one or more questions indicating whether to update an autonomous driving system; and update, based on the driving behavior and the answers, the autonomous driving system to obtain an updated autonomous driving system, as taught by HECKMANN (See paragraphs [0035] and [0036].), and to incorporate detecting that the time period for which the feature parameters associated with driver behaviors exceed a preset range meets a preset value, as taught by TSUJI (See paragraph [0121].), with a reasonable expectation of success. The motivation for doing so would be enhancing driver experience and vehicle response, as taught by Penilla (See paragraph [0010].). The motivation for doing so would be improving the performance of autonomous driving systems, as taught by HECKMANN (See paragraph [0008].). The motivation for doing so would be estimating a driving conduct suited to the driver, as taught by TSUJI (See paragraph [0006].).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JEWEL ASHLEY KUNTZ whose telephone number is (571)270-5542. The examiner can normally be reached M-F 8:30am-5:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Anne Antonucci can be reached at (313) 446-6519. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JEWEL A KUNTZ/Examiner, Art Unit 3666
/ANNE MARIE ANTONUCCI/Supervisory Patent Examiner, Art Unit 3666