DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Response to Amendment
Applicant submitted amendments and remarks on October 23, 2025. Therein, Applicant submitted substantive arguments. No claims were amended, added, or cancelled.
The submitted claims are considered below.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Goldman-Shenhar, et al. (U.S. Patent No. 9815481) in view of Toyoda (U.S. Patent No. 11262755).
Regarding claim 1, Goldman-Shenhar, et al. teaches: A method, performed by a device of vehicle, for providing a human-machine interface (HMI) mode, the method comprising: determining an occupant's state; (Blocks (424-426), Col. 18, lines 6-13: "…the interaction module (150) is configured to cause the processor to at block (426) monitor the human user for, or at least receive from the human user, feedback responsive to the message communicated via block (424). The message of block (424) could be an inquiry for instance-"was that a comfortable passing maneuver?", for example-and the feedback at block (426) can include a response [getting feedback determining occupant's state].")
determining, based on the occupant's state, an HMI mode which is associated with a type of medium to be used for interaction with an occupant, a number of devices for interaction with the occupant, or a medium-specific output level corresponding to each media (Block (410), Col. 15, lines 31-37: "…the system (120) is configured to monitor the human driver. The monitoring can be performed in connection with block (410), for example. The monitoring can be performed more when the interaction mode is higher (e.g., novice mode (220)) than when the interaction mode is lower (e.g., expert passenger mode (230), et seq.) [HMI mode is determined based on occupant's state; used for interaction with occupant].")
providing, based on the determined HMI mode, to the occupant information on a driving situation of the vehicle or a behavior that the vehicle will autonomously perform (Blocks (402), (414), Col. 16, lines 19-26: "The communication of block (414) is provided based on the applicable interaction mode determined at (402) and related to one or more autonomous-driving activities or functions of the vehicle (100). In situations in which a communication is provided to the human user by the system (102) without the human user prompting for the communication, the communication, and system function, can be referred to as proactive [providing based on HMI mode - information related to autonomous vehicle activities].")
and wherein as the occupant's attention level to the external situation of the vehicle increases, the HMI mode is determined to interact with the occupant using a greater variety of media types, a larger number of devices, or a higher medium-specific output level (Block (410), Col. 15, lines 31-42: "As described above, in some embodiments the system (120) is configured to monitor the human driver. The monitoring can be performed in connection with block (410), for example. The monitoring can be performed more when the interaction mode is higher (e.g., novice mode (220)) than when the interaction mode is lower (e.g., expert passenger mode (230), et seq.). Monitoring more can include monitoring more frequently, for instance, and/or to a higher degree - e.g., configured to in addition to picking up communications made by way of a microphone or a touch-sensitive screen, pick up more communications, such as by a camera or laser-based sensor system detecting user gestures [HMI mode - as interaction mode increases; larger number of devices are used].").
Goldman-Shenhar, et al. does not teach wherein the occupant's state includes the occupant's attention level to an external situation of the vehicle.
In a similar field of endeavor (driver assistance with respect to vehicle user intent), Toyoda teaches: wherein the occupant's state includes the occupant's attention level to an external situation of the vehicle (Col. 15, lines 7-17: "…the HMI (109) may provide an interface enabling interaction between the user and the various capabilities via communications interfaces (16) and a suitable communications network (101). Data and information determinative of the existence of a predefined driving situation may be continuously gathered by sensors (28) and may be continuously transmitted to the processing facility(ies) for interpretation and formulation of vehicle control commands for generating associated indicators, if it is determined that the vehicle (11) is in a predefined driving situation [data analyzed as a function of occupant's state through interaction]." ; Col. 20, lines 55-58: "…FIG. 1, vehicle (11) may include an array (28) of vehicle sensors designed to monitor various vehicle operational status parameters and environmental conditions external to the vehicle [occupant's attention level to external situation of vehicle]."; Blocks (1373-1375), Col. 33, lines 26-40: "In block (1373), the computing system may evaluate the user-selected vehicle response mode (including the selected initial and secondary indicators) for safety and other considerations [monitoring communication between occupant and vehicle] […] the computing system may communicate with a user via HMI (109) [specific communication mode] […] In addition, in block (1375), the computing system may query a vehicle user or occupant to determine if the user/occupant wishes to disable implementation of the selected vehicle response mode in the current or pending predefined driving situation [vehicle changes based on occupant's attention to current external vehicle situation].").
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Goldman-Shenhar, et al. to include the teaching of Toyoda, based on a reasonable expectation of success and the motivation to improve the process of controlling a vehicle computing system to respond appropriately to a mode-based driving situation (Toyoda Col. 1, lines 32-62).
Regarding claim 12, Goldman-Shenhar, et al. teaches: A device for providing a human-machine interface (HMI) mode, the device comprising: a controller configured to determine an occupant's state in a vehicle, (Fig. 1, Col. 6, lines 15-17: "FIG. 1 shows schematically such a computerized controller, or control system (120) [controller], for use in accordance with embodiments of the present disclosure." ; Col. 11, lines 38-47: "…adjusting user preferences, the system (120) can determine that based on human-driver feedback during driving, the human driver would be more comfortable if the system (120) maintained a larger gap between the vehicle (100) and vehicle ahead. In one embodiment, the system can be configured to, given an applicable interaction mode, establish a maximum gap level, in terms of distance or time to stop (e.g., three seconds), for instance, and not change unless the driver requests or permits the change explicitly [determination of occupant's state based on feedback].")
determine, based on the occupant's state, an HMI mode which is associated with a type of media to be used for interaction with an occupant, a number of devices for interaction with the occupant, or a medium specific output level corresponding to each media (Col. 15, lines 31-37: "As […] the system (120) is configured to monitor the human driver. The monitoring can be performed in connection with block (410), for example. The monitoring can be performed more when the interaction mode is higher (e.g., novice mode (220)) than when the interaction mode is lower (e.g., expert passenger mode (230), et seq.) [HMI mode is determined based on occupant's state; used for interaction with occupant].")
and provide, based on the determined HMI mode, to the occupant information on a driving situation of the vehicle or a behavior that the vehicle will autonomously perform (Col. 16, lines 19-26: "The communication of block (414) is provided based on the applicable interaction mode determined at (402) and related to one or more autonomous-driving activities or functions of the vehicle (100). In situations in which a communication is provided to the human user by the system (102) without the human user prompting for the communication, the communication, and system function, can be referred to as proactive [providing based on HMI mode - information related to autonomous vehicle activities].")
and wherein as the occupant's attention level to the external situation of the vehicle increases, the HMI mode is determined to interact with the occupant using a greater variety of media types, a larger number of devices, or a higher medium-specific output level (Col. 15, lines 31-42: "As described above, in some embodiments the system (120) is configured to monitor the human driver. The monitoring can be performed in connection with block (410), for example. The monitoring can be performed more when the interaction mode is higher (e.g., novice mode (220)) than when the interaction mode is lower (e.g., expert passenger mode (230), et seq.). Monitoring more can include monitoring more frequently, for instance, and/or to a higher degree - e.g., configured to in addition to picking up communications made by way of a microphone or a touch-sensitive screen, pick up more communications, such as by a camera or laser-based sensor system detecting user gestures [HMI mode - as interaction mode increases; larger number of devices are used].")
Goldman-Shenhar, et al. does not teach wherein the occupant's state includes the occupant's attention level to an external situation of the vehicle.
In a similar field of endeavor (driver assistance with respect to vehicle user intent), Toyoda teaches: wherein the occupant's state includes the occupant's attention level to an external situation of the vehicle (Col. 15, lines 7-17: "…HMI (109) may provide an interface enabling interaction between the user and the various capabilities via communications interfaces (16) and a suitable communications network (101). Data and information determinative of the existence of a predefined driving situation may be continuously gathered by sensors (28) and may be continuously transmitted to the processing facility(ies) for interpretation and formulation of vehicle control commands for generating associated indicators, if it is determined that the vehicle (11) is in a predefined driving situation [data analyzed as a function of occupant's state through interaction]." ; Col. 20, lines 55-58: "…FIG. 1, vehicle (11) may include an array (28) of vehicle sensors designed to monitor various vehicle operational status parameters and environmental conditions external to the vehicle [occupant's attention level to external situation of vehicle]."; Col. 33, lines 26-40: "…the computing system may evaluate the user-selected vehicle response mode (including the selected initial and secondary indicators) for safety and other considerations [monitoring communication between occupant and vehicle],[…] the computing system may communicate with a user via HMI (109) [specific communication mode] to inform the user that the previously selected vehicle response mode should not be implemented in the current or pending predefined driving situation. In addition, in block (1375), the computing system may query a vehicle user or occupant to determine if the user/occupant wishes to disable implementation of the selected vehicle response mode in the current or pending predefined driving situation [vehicle changes based on occupant's attention to current external vehicle situation].").
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Goldman-Shenhar, et al. to include the teaching of Toyoda, based on a reasonable expectation of success and the motivation to improve the process of controlling a vehicle computing system to respond appropriately to a mode-based driving situation (Toyoda Col. 1, lines 32-62).
Claims 3 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Goldman-Shenhar, et al. (U.S. Patent No. 9815481) and Toyoda (U.S. Patent No. 11262755) in view of Katz, et al. (U.S. Patent Application Publication No. 20200207358).
Regarding claim 3, the combination of Goldman-Shenhar, et al. and Toyoda teaches: The method of claim 1, wherein the HMI mode is determined from among a plurality of predefined HMI modes includes one or more of a maximum guidance mode, an intermediate guidance mode, and a minimum guidance mode, each using different types of medium, different numbers of media, different medium-specific output schemes, or different medium-specific output levels (Col. 9, lines 15-28: "… system (120) can define more or less than five modes. In various embodiments, the system (120) includes at least three modes: a fully-manual mode, a lower or lowest autonomous-driving interaction mode and a higher or highest autonomous-driving interaction mode. The lowest autonomous-driving interaction mode is suitable for users having little or no experience, or at least having a low comfort level using autonomous-driving functions. The lowest mode of three can include the novice interaction mode (220) described, or a combination of that mode and features of the next higher mode or modes (e.g., (230), or (230) and (240)) described primarily herein. The highest mode, or expert, mode can correspond to any or a combination of the top three modes (230), (240), (250) of the five described primarily herein [HMI modes includes maximum/intermediate/minimum guidance mode with different medium-specific output schemes]." ; Block (410), Col. 15, lines 31-42: "As described above, in some embodiments the system (120) is configured to monitor the human driver. The monitoring can be performed in connection with block (410), for example. The monitoring can be performed more when the interaction mode is higher (e.g., novice mode (220)) than when the interaction mode is lower (e.g., expert passenger mode (230), et seq.). 
Monitoring more can include monitoring more frequently, for instance, and/or to a higher degree - e.g., configured to in addition to picking up communications made by way of a microphone or a touch-sensitive screen, pick up more communications, such as by a camera or laser-based sensor system detecting user gestures [HMI modes includes maximum/intermediate/minimum guidance mode with different medium-specific output schemes example].").
The combination of Goldman-Shenhar, et al. and Toyoda does not teach wherein the maximum guidance mode corresponds to a state in which the occupant continuously gazes outside of the vehicle, wherein the intermediate guidance mode corresponds to a state in which the occupant intermittently gazes outside of the vehicle, and wherein the minimum guidance mode corresponds to a state in which the occupant does not gaze outside of the vehicle or performs a task for a preset period of time or longer.
In a similar field of endeavor (driver monitoring), Katz, et al. teaches: wherein the maximum guidance mode corresponds to a state in which the occupant continuously gazes outside of the vehicle, wherein the intermediate guidance mode corresponds to a state in which the occupant intermittently gazes outside of the vehicle, and wherein the minimum guidance mode corresponds to a state in which the occupant does not gaze outside of the vehicle or performs a task for a preset period of time or longer (Paragraph [0074], stating that an attention level is determined and an alert is provided to the driver based on the attention level, where the attention level is computed based on the driver attentiveness state. See also Paragraphs [0045], [0049], and [0050], teaching that the system can determine that the driver's gaze is directed toward the outside of the vehicle and can determine the period of time of the gaze.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Goldman-Shenhar, et al. and Toyoda to include the teaching of Katz, et al., based on a reasonable expectation of success and the motivation to improve the process of determining the driver's attentiveness to the road and determining a corresponding action according to the driver's state of attentiveness (Katz, et al. Paragraph [0050]).
Regarding claim 17, the combination of Goldman-Shenhar, et al. and Toyoda teaches: The device of claim 12, wherein the HMI mode is determined from among a plurality of predefined HMI modes includes one or more of a maximum guidance mode, an intermediate guidance mode, and a minimum guidance mode, each using different types of medium, different numbers of media, different medium-specific output schemes, or different medium-specific output levels (Col. 9, lines 15-28: "…system (120) can define more or less than five modes. In various embodiments, the system (120) includes at least three modes: a fully-manual mode, a lower or lowest autonomous-driving interaction mode and a higher or highest autonomous-driving interaction mode. The lowest autonomous-driving interaction mode is suitable for users having little or no experience, or at least having a low comfort level using autonomous-driving functions. The lowest mode of three can include the novice interaction mode (220) described, or a combination of that mode and features of the next higher mode or modes (e.g., (230), or (230) and (240)) described primarily herein. The highest mode, or expert, mode can correspond to any or a combination of the top three modes (230), (240), (250) of the five described primarily herein [HMI modes includes maximum/intermediate/minimum guidance mode with different medium-specific output schemes]." ; Col. 15, lines 31-42: "As described above, in some embodiments the system (120) is configured to monitor the human driver. The monitoring can be performed in connection with block (410), for example. The monitoring can be performed more when the interaction mode is higher (e.g., novice mode (220)) than when the interaction mode is lower (e.g., expert passenger mode (230), et seq.). 
Monitoring more can include monitoring more frequently, for instance, and/or to a higher degree - e.g., configured to in addition to picking up communications made by way of a microphone or a touch-sensitive screen, pick up more communications, such as by a camera or laser-based sensor system detecting user gestures [HMI modes includes maximum/intermediate/minimum guidance mode with different medium-specific output schemes - example].").
The combination of Goldman-Shenhar, et al. and Toyoda does not teach wherein the maximum guidance mode corresponds to a state in which the occupant continuously gazes outside of the vehicle, wherein the intermediate guidance mode corresponds to a state in which the occupant intermittently gazes outside of the vehicle, and wherein the minimum guidance mode corresponds to a state in which the occupant does not gaze outside of the vehicle or performs a task for a preset period of time or longer.
In a similar field of endeavor (driver monitoring), Katz, et al. teaches: wherein the maximum guidance mode corresponds to a state in which the occupant continuously gazes outside of the vehicle, wherein the intermediate guidance mode corresponds to a state in which the occupant intermittently gazes outside of the vehicle, and wherein the minimum guidance mode corresponds to a state in which the occupant does not gaze outside of the vehicle or performs a task for a preset period of time or longer (Paragraph [0074], stating that an attention level is determined and an alert is provided to the driver based on the attention level, where the attention level is computed based on the driver attentiveness state. See also Paragraphs [0045], [0049], and [0050], teaching that the system can determine that the driver's gaze is directed toward the outside of the vehicle and can determine the period of time of the gaze.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Goldman-Shenhar, et al. and Toyoda to include the teaching of Katz, et al., based on a reasonable expectation of success and the motivation to improve the process of determining the driver's attentiveness to the road and determining a corresponding action according to the driver's state of attentiveness (Katz, et al. Paragraph [0050]).
Claims 5, 14, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Goldman-Shenhar, et al. (U.S. Patent No. 9815481) and Toyoda (U.S. Patent No. 11262755) in view of Yamamoto (U.S. Patent Application Publication No. 20230037467).
Regarding claim 5, the combination of Goldman-Shenhar, et al. and Toyoda does not teach the method of claim 1, wherein the HMI mode is determined to interact with the occupant using: at least one medium among a graphical user interface (GUI), an auditory user interface (AUI), and a physical user interface (PUI), in accordance with an output scheme determined for each medium.
In a similar field of endeavor (vehicle HMI control devices), Yamamoto, et al. teaches: the method of claim 1, wherein the HMI mode is determined to interact with the occupant using: at least one medium among a graphical user interface (GUI), an auditory user interface (AUI), and a physical user interface (PUI), in accordance with an output scheme determined for each medium (Paragraph [0123]: The display control unit controls the image and sound output operation of the HMI device and various information such as level-related information. See also Yamamoto, et al. Paragraph [0124], stating that the display control unit controls the information presentation depending on the automation level acquired by the automation level acquisition unit.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Goldman-Shenhar, et al. and Toyoda to include the teaching of Yamamoto, et al., based on a reasonable expectation of success and the motivation to improve the process by which HMI devices present information to a driver (Yamamoto, et al. Paragraph [0004]).
Regarding claim 14, the combination of Goldman-Shenhar, et al. and Toyoda does not teach the device of claim 12, determine wherein the HMI mode is determined to interact with the occupant using at least one medium to be used to provide guide information to the occupant among a graphical user interface (GUI), an auditory user interface (AUI), and a physical user interface (PUI), in accordance with an output scheme determined for each medium.
In a similar field of endeavor (vehicle HMI control devices), Yamamoto, et al. teaches: the device of claim 12, determine wherein the HMI mode is determined to interact with the occupant using at least one medium to be used to provide guide information to the occupant among a graphical user interface (GUI), an auditory user interface (AUI), and a physical user interface (PUI), in accordance with an output scheme determined for each medium (Paragraph [0123]: The display control unit controls the image and sound output operation of the HMI device and various information such as level-related information. See also Yamamoto, et al. Paragraph [0124], stating that the display control unit controls the information presentation depending on the automation level acquired by the automation level acquisition unit.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Goldman-Shenhar, et al. and Toyoda to include the teaching of Yamamoto, et al., based on a reasonable expectation of success and the motivation to improve the process by which HMI devices present information to a driver (Yamamoto, et al. Paragraph [0004]).
Regarding claim 19, the combination of Goldman-Shenhar, et al. and Toyoda does not teach a vehicle comprising the device of claim 12.
In a similar field of endeavor (vehicle HMI control devices), Yamamoto, et al. teaches: A vehicle comprising the device of claim 12 (The in-vehicle system (10) includes a vehicle state sensor (11), an external state sensor (12), a surrounding monitoring sensor (13), a locator (14), a DCM (15), a navigation device (16), a driver state detection unit (17), a driving control device (18), and an HMI device (20) (Yamamoto, et al. Paragraph [0070]). See also Yamamoto, et al. Paragraph [0123]: The in-vehicle system includes the HMI device, which provides guidance information through an image or audio.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Goldman-Shenhar, et al. and Toyoda to include the teaching of Yamamoto, et al., based on a reasonable expectation of success and the motivation to improve the process by which HMI devices present information to a driver (Yamamoto, et al. Paragraph [0004]).
Claims 6 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Goldman-Shenhar, et al. (U.S. Patent No. 9815481), Toyoda (U.S. Patent No. 11262755), and Yamamoto (U.S. Patent Application Publication No. 20230037467) in view of Martin, et al. (U.S. Patent Application Publication No. 20190308555).
Regarding claim 6, the combination of Goldman-Shenhar, et al., Toyoda, and Yamamoto, et al. does not teach the method of claim 5, wherein the output scheme for each medium is determined based on one or more tables in which a medium-specific output scheme or a medium specific output level is mapped to each of the plurality of predefined HMI modes.
In a similar field of endeavor (vehicle wheel torque adjustment based on occupant detection), Martin, et al. teaches: the method of claim 5, wherein the output scheme for each medium is determined based on one or more tables in which a medium-specific output scheme or a medium specific output level is mapped to each of the plurality of predefined HMI modes (Table 4, which displays various output levels and the corresponding mediums that will activate when that level is selected. See Paragraph [0050], stating that the output level corresponds to an intensity of an output signal and is based on the occupant alertness level.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Goldman-Shenhar, et al., Toyoda, and Yamamoto, et al. to include the teaching of Martin, et al., based on a reasonable expectation of success and the motivation to improve the process of providing a haptic output to vehicle occupants (Martin, et al. Paragraph [0027]).
Regarding claim 8, the combination of Goldman-Shenhar, et al., Toyoda, and Yamamoto, et al. does not teach the method of claim 5, wherein the output scheme for each medium includes one or more of whether to stop a task that has already been performed by a target device corresponding to the medium, a condition for the target device corresponding to the medium to output guidance information, an amount of information contained in the guidance information, and the number of the target device corresponding to the medium that is to provide the guidance information.
In a similar field of endeavor (vehicle wheel torque adjustment based on occupant detection), Martin, et al. teaches: The method of claim 5, wherein the output scheme for each medium includes one or more of whether to stop a task that has already been performed by a target device corresponding to the medium, a condition for the target device corresponding to the medium to output guidance information, an amount of information contained in the guidance information, and the number of the target device corresponding to the medium that is to provide the guidance information (Table 4 which displays at what output levels an audio, visual, or haptic signal is sent and how much information is sent. See also Paragraphs [0093]-[0094] stating that the outputs in Table 4 are examples and different haptic signals and messages may be used.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Goldman-Shenhar, et al., Toyoda, and Yamamoto, et al. to include the teaching of Martin, et al., based on a reasonable expectation of success and the motivation to improve the process of providing a haptic output to vehicle occupants (Martin, et al. Paragraph [0027]).
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Goldman-Shenhar, et al. (U.S. Patent No. 9815481), Toyoda (U.S. Patent No. 11262755), Yamamoto (U.S. Patent Application Publication No. 20230037467), and Martin, et al. (U.S. Patent Application Publication No. 20190308555) in view of Tian, et al. (U.S. Patent Application Publication No. 20200231179).
Regarding claim 7, the combination of Goldman-Shenhar, et al., Toyoda, Yamamoto, et al., and Martin, et al. does not teach the method of claim 6, wherein the output scheme for each medium is determined corresponding to an event that has occurred in the vehicle.
In a similar field of endeavor (vehicle guidance based on occupant parameters), Tian, et al. teaches: The method of claim 6, wherein the output scheme for each medium is determined corresponding to an event that has occurred in the vehicle (Paragraph [0143]: Guidance information will be provided if there is a possibility of an event occurring in the vehicle. See also Tian, et al. Paragraph [0074], in which the HMI outputs the information as an image or sound.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Goldman-Shenhar, et al., Toyoda, Yamamoto, et al., and Martin, et al. to include the teaching of Tian, et al. based on a reasonable expectation of success and motivation to improve the process of prompting the occupant to pay attention and providing information prompting a change of route according to the guidance information (Tian, et al. Paragraphs [0010]-[0011]).
The combination of Goldman-Shenhar, et al., Toyoda, Yamamoto, et al., Martin, et al., and Tian, et al. does not explicitly teach mapping the output scheme to a specific table. However, Tian, et al. teaches creating prediction output model data in a similar table-like format (Fig. 6, Paragraph [0088]), and it would have been within the ability of one of ordinary skill in the art to implement an output scheme in the form of a table to properly inform and prepare the occupant of the vehicle for an unexpected event. This teaching would have made it obvious to modify the combination of Goldman-Shenhar, et al., Toyoda, Yamamoto, et al., Martin, et al., and Tian, et al. to include an output scheme in the form of a specific table, based on the motivation to improve the process of prompting the occupant to pay attention and providing information prompting a change of route according to the guidance information.
Claims 9-10 are rejected under 35 U.S.C. 103 as being unpatentable over Goldman-Shenhar, et al. (U.S. Patent No. 9815481) and Toyoda (U.S. Patent No. 11262755) in view of Yamada (U.S. Patent No. 9783202).
Regarding claim 9, the combination of Goldman-Shenhar, et al. and Toyoda does not teach the method of claim 1, further comprising: determining a first reference position, which is a position occupied by a reference occupant, among a plurality of positions in the vehicle, and wherein the determining the occupant's state comprises: determining the occupant's state on the basis of occupant information acquired at the first reference position.
In a similar field of endeavor (vehicle occupant information acquisition and control), Yamada teaches: The method of claim 1, further comprising: determining a first reference position, which is a position occupied by a reference occupant, among a plurality of positions in the vehicle, and wherein the determining the occupant's state comprises: determining the occupant's state on the basis of occupant information acquired at the first reference position (Col. 4, lines 58-64: stating a method of capturing images of an occupant and determining the state of the occupant. The captured images are also used for facial recognition to identify occupants.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Goldman-Shenhar, et al. and Toyoda to include the teaching of Yamada based on a reasonable expectation of success and motivation to improve the process of controlling a vehicle based on vehicle occupant information (Yamada, Col. 1, line 66 to Col. 2, line 19).
Regarding claim 10, the combination of Goldman-Shenhar, et al. and Toyoda does not teach the method of claim 1, further comprising: guiding the occupant to a predefined second reference position; and determining whether the second reference position is occupied, and wherein the determining the occupant's state comprises: determining the occupant's state on the basis of occupant information acquired at the second reference position.
In a similar field of endeavor (vehicle occupant information acquisition and control), Yamada teaches: The method of claim 1, further comprising: guiding the occupant to a predefined second reference position; and determining whether the second reference position is occupied, and wherein the determining the occupant's state comprises: determining the occupant's state on the basis of occupant information acquired at the second reference position (Col. 12, lines 47-53 gives an example of identifying multiple occupants and retrieving respective conditions of the occupants. See also Col. 12, lines 8-16 stating that riding occupants are displayed at the display panel.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Goldman-Shenhar, et al. and Toyoda to include the teaching of Yamada based on a reasonable expectation of success and motivation to improve the process of controlling a vehicle based on vehicle occupant information (Yamada, Col. 1, line 66 to Col. 2, line 19).
Claims 11 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Goldman-Shenhar, et al. (U.S. Patent No. 9815481) and Toyoda (U.S. Patent No. 11262755) in view of Martin, et al. (U.S. Patent Application Publication No. 20190308555).
Regarding claim 11, the combination of Goldman-Shenhar, et al. and Toyoda does not teach the method of claim 1, further comprising: providing guidance information to the occupant in a medium-specific output scheme corresponding to a preset HMI mode in response to determining that a plurality of occupants are present in the vehicle or that a predefined second reference position is not occupied.
In a similar field of endeavor (vehicle wheel torque adjustment based on occupant detection), Martin, et al. teaches: The method of claim 1, further comprising: providing guidance information to the occupant in a medium-specific output scheme corresponding to a preset HMI mode in response to determining that a plurality of occupants are present in the vehicle or that a predefined second reference position is not occupied (Paragraph [0015]: stating that a second occupant alertness level may be calculated upon detecting a second occupant. See also Paragraph [0061] stating that the output level corresponds to the occupant alertness level.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Goldman-Shenhar, et al. and Toyoda to include the teaching of Martin, et al. based on a reasonable expectation of success and motivation to improve the process of providing a haptic output to vehicle occupants (Martin, et al. Paragraph [0027]).
Regarding claim 15, the combination of Goldman-Shenhar, et al. and Toyoda does not teach the device of claim 12, further comprising: a storage configured to store one or more tables in which a medium-specific output scheme or a medium-specific output level is mapped to each of a plurality of predefined HMI modes.
In a similar field of endeavor (vehicle wheel torque adjustment based on occupant detection), Martin, et al. teaches: The device of claim 12, further comprising: a storage configured to store one or more tables in which a medium-specific output scheme or a medium-specific output level is mapped to each of a plurality of predefined HMI modes (Table 4 which displays various output levels and the corresponding mediums that will activate when that level is selected. See also Paragraph [0050] stating that the output level corresponds to an intensity of an output signal and is based on the occupant alertness level and Paragraph [0097] stating that a storage medium is used to store instructions.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Goldman-Shenhar, et al. and Toyoda to include the teaching of Martin, et al. based on a reasonable expectation of success and motivation to improve the process of providing a haptic output to vehicle occupants (Martin, et al. Paragraph [0027]).
Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Goldman-Shenhar, et al. (U.S. Patent No. 9815481), Toyoda (U.S. Patent No. 11262755), and Martin, et al. (U.S. Patent Application Publication No. 20190308555) in view of Tian, et al. (U.S. Patent Application Publication No. 20200231179).
Regarding claim 16, the combination of Goldman-Shenhar, et al., Toyoda, and Martin, et al. does not teach the device of claim 15, wherein the controller is configured to: determine a medium-specific output scheme corresponding to the HMI mode using a table corresponding to an event occurring in the vehicle.
In a similar field of endeavor (vehicle guidance based on occupant parameters), Tian, et al. teaches: The device of claim 15, wherein the controller is configured to: determine a medium-specific output scheme corresponding to the HMI mode corresponding to an event occurring in the vehicle (Paragraph [0143]: Guidance information will be provided by the HMI if there is a possibility of an event occurring in the vehicle. See also Paragraph [0074], in which the HMI outputs the guidance information as an image or sound.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Goldman-Shenhar, et al., Toyoda, and Martin, et al. to include the teaching of Tian, et al. based on a reasonable expectation of success and motivation to improve the process of prompting the occupant to pay attention and providing information prompting a change of route according to the guidance information (Tian, et al. Paragraphs [0010]-[0011]).
The combination of Goldman-Shenhar, et al., Toyoda, Martin, et al., and Tian, et al. does not explicitly teach mapping the output scheme to a specific table. However, Tian, et al. teaches creating prediction output model data in a similar table-like format (Fig. 6, Paragraph [0088]), and it would have been within the ability of one of ordinary skill in the art to implement an output scheme in the form of a table to properly inform and prepare the occupant of the vehicle for an unexpected event. This teaching would have made it obvious to modify the combination of Goldman-Shenhar, et al., Toyoda, Martin, et al., and Tian, et al. to include an output scheme in the form of a specific table, based on the motivation to improve the process of prompting the occupant to pay attention and providing information prompting a change of route according to the guidance information.
Response to Arguments
Applicant's arguments filed on October 23, 2025 have been fully considered but they are not persuasive.
Applicant asserted that claims 1 and 12 were patentable over Goldman-Shenhar, et al. (U.S. Patent No. 9815481) in view of Toyoda (U.S. Patent No. 11262755) because the references did not meet the complete claim limitation “wherein the occupant's state includes the occupant's attention level to an external situation of the vehicle, and wherein as the occupant's attention level to the external situation of the vehicle increases, the HMI mode is determined to interact with the occupant using a greater variety of media types, a larger number of devices, or a higher medium-specific output level”. Specifically, Applicant alleged that Toyoda does not teach the specific concept of “wherein the occupant’s state includes the occupant’s attention level to an external situation of the vehicle”. The examiner disagrees. In Toyoda, the human machine interface, or HMI, has the ability to respond to the safety situation indicated by the user outside the vehicle with respect to a “…current […] driving situation” (Block (1373), Col. 33, lines 26-34), in which “…the computing system may query a vehicle user or occupant to determine if the user/occupant wishes to disable implementation of the selected vehicle response mode in the current or pending predefined driving situation” (Block (1375), Col. 33, lines 35-40). Accordingly, it would have been obvious to combine Toyoda with Goldman-Shenhar, et al. because Goldman-Shenhar, et al. teaches determining an occupant’s state and determining an HMI mode associated with an occupant’s state for the purpose of interaction (Blocks (424-426), Col. 18, lines 6-13 and Block (410), Col. 15, lines 31-37).
Therefore, it can be concluded that since the combination of Goldman-Shenhar, et al. and Toyoda reads on the claim limitation “wherein the occupant's state includes the occupant's attention level to an external situation of the vehicle, and wherein as the occupant's attention level to the external situation of the vehicle increases, the HMI mode is determined to interact with the occupant using a greater variety of media types, a larger number of devices, or a higher medium-specific output level”, as stated in claims 1 and 12, the arguments presented by the Applicant are not persuasive, and the rejection is maintained.
Conclusion
Applicant is considered to have implicit knowledge of the entire disclosure once a reference has been cited. Therefore, any previously cited figures, columns and lines should not be considered to limit the references in any way. The entire reference must be taken as a whole; accordingly, the Examiner contends that the art supports the rejection of the claims and the rejection is maintained.
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to TORRENCE S MARUNDA II whose telephone number is (571)272-5172. The examiner can normally be reached Monday-Friday 8:00-5:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ANGELA Y ORTIZ can be reached on 571-272-1206. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/TORRENCE S MARUNDA II/Examiner, Art Unit 3663
/ANGELA Y ORTIZ/Supervisory Patent Examiner, Art Unit 3663