DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s amendments and remarks filed on 11/04/2025 with respect to the previous claim rejections under 35 U.S.C. 101 have been fully considered and are persuasive. Accordingly, the previous rejection under 35 U.S.C. 101 is withdrawn.
Applicant’s amendments and remarks filed on 11/04/2025 with respect to the previous claim rejections under 35 U.S.C. 102 have been fully considered and are persuasive.
With respect to the newly amended subject matter and applicant’s arguments, the Examiner relies upon newly cited references Xu et al. (US 2020/0200558 A1) and Yasui (US 2023/0242141 A1).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-4, 7, 9-12, 14-17 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Yamane et al. (US 2021/0291841 A1) (hereinafter Yamane) in view of Xu et al. (US 2020/0200558 A1) (hereinafter Xu), and further in view of Yasui (US 2023/0242141 A1).
Regarding claim 1, Yamane discloses a message system, comprising: one or more processors; a memory communicably coupled to the one or more processors and storing instructions that, when executed by the one or more processors, cause the one or more processors to: (see Yamane paras “0033”, “0054-0056” and “0115” “an information processing device including a controller that is configured to acquire driving behavior information as an indication of prudence in driving a vehicle, create a message according to a basic personality of a vehicle avatar based on the driving behavior information, and output the message as an utterance of the vehicle avatar”),
determine a style for presenting messages associated with an occupant of a vehicle according to a context defined in relation to the occupant and an environment of the vehicle (see Yamane paras “0033-0035” “an information processing device including a controller that is configured to acquire driving behavior information as an indication of prudence in driving a vehicle, create a message according to a basic personality of a vehicle avatar based on the driving behavior information, and output the message as an utterance of the vehicle avatar” regarding that the system shows a message to the driver according to the basic personality of the driver (i.e., style)),
generate a message according to the style for the occupant; and provide the message to the occupant (see Yamane paras “0033-0035” “For example, in the case where the driving behavior information indicates that driving is prudent, the controller may create a message indicating that the vehicle avatar has a positive feeling. Furthermore, in a case where the driving behavior information indicates that driving is not prudent enough, the controller may create a message indicating that the vehicle avatar has a negative feeling. A positive feeling is a feeling such as happy, cheerful, gentle, amused or the like, for example. A negative feeling is a feeling such as angry, sad or the like, for example”),
But Yamane fails to explicitly teach the message indicating training instructions for instructing the occupant about controlling the vehicle to perform a maneuver and provide the message to the occupant to induce the driver to perform the maneuver.
However, Xu teaches the message indicating training instructions for instructing the occupant about controlling the vehicle to perform a maneuver (see Xu paras “0035”, “0051”, “0054”, “0063” and “0067” “the instructions model is a machine trained (e.g., trained via a machine learning algorithm) neural network, deep net, model, and/or the like that generates autonomous driving instructions based on the current data and/or historical autonomous driving pattern information/data” and “For example, if the autonomous driving instructions indicate that autonomous driving is not allowed, permitted, enabled, and/or the like along a portion of a route being traversed by the vehicle 5, the vehicle apparatus 20 may determine a new route along which autonomous driving is allowed, permitted, and/or enabled or may provide a message (e.g., via user interface 28) to a human operator of the vehicle 5 that the human operator will need to control the vehicle 5 along at least a portion of the route”),
and provide the message to the occupant to induce the driver to perform the maneuver (see Xu paras “0035”, “0051”, “0054”, “0063” and “0067” “the instructions model is a machine trained (e.g., trained via a machine learning algorithm) neural network, deep net, model, and/or the like that generates autonomous driving instructions based on the current data and/or historical autonomous driving pattern information/data” and “For example, if the autonomous driving instructions indicate that autonomous driving is not allowed, permitted, enabled, and/or the like along a portion of a route being traversed by the vehicle 5, the vehicle apparatus 20 may determine a new route along which autonomous driving is allowed, permitted, and/or enabled or may provide a message (e.g., via user interface 28) to a human operator of the vehicle 5 that the human operator will need to control the vehicle 5 along at least a portion of the route”),
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the information processing device, recording medium, and information processing method of Yamane to “prevent autonomous driving through a human operator zone within the geographic area”, as taught by Xu (paras [0051] – [0054]), in order to ensure that the human operator is properly informed and prepared to control the vehicle when autonomous operation is restricted or disabled, thereby ensuring that the vehicle is driven safely by the driver and reducing the likelihood of unsafe transitions.
But modified Yamane fails to explicitly teach including controlling a speaker to adapt at least a volume according to the style.
However, Yasui teaches including controlling a speaker to adapt at least a volume according to the style (see Yasui paras “0008” and “0016-0017” “a notification controller configured to output a notification sound… from a speaker”, “the notification controller may make the sound volume louder as a necessary degree of acceleration increases as the speed variation degree, and make the sound volume louder as a necessary degree of deceleration increases as the speed variation degree” and “the notification controller may change one or more elements among a tone height, a sound volume, a tone color, or an interval of the notification sound according to whether the acceleration or the deceleration is required for the host vehicle… and change one or more other elements among the tone height, the sound volume, the tone color, or the interval of the notification sound according to a speed variation degree of the host vehicle”),
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to further modify the information processing device, recording medium, and information processing method of modified Yamane to “assist smooth merging or lane change to a main driving lane” and to assist driving by a driver by outputting a guiding notification sound from a speaker, as taught by Yasui (paras [0008] and [0016] – [0017]), in order to ensure that the instructions are presented in a manner (e.g., volume, tone) that corresponds to the detected condition, thereby improving safety and communication.
Regarding claim 2, Yamane discloses wherein the instructions to determine the style include instructions to analyze sensor data about the occupant, the vehicle, and the environment using a style model, and wherein the style defines how the message is presented to the occupant, including defining grammar, content, timing, and cadence of a presentation of the message (see Yamane paras “0033-0036”, “0039”, “0041”, “0061” and “0107” “the controller may acquire information about an environment of surroundings on a road where the vehicle traveled, and may create the message based on the driving behavior information and the information about the environment. For example, the feeling of a driver is possibly affected by a surrounding environment. For example, when there is a traffic jam or a road repairing work…” and “The driving behavior information acquisition unit 11 generates the driving behavior information. Specifically, the driving behavior information acquisition unit 11 generates the driving behavior information in response to detection of occurrence of a predetermined event. The predetermined event is sudden braking, sudden starting, sudden steering, sounding of a horn” and “where the server is to create and post the message of the vehicle avatar… where the vehicle 10 traveled may be collected as the environment factor information, and be taken into account at the time of creation of the message of the vehicle avatar” regarding checking the feeling of the driver and the environment around the vehicle and determine what type of message and whether to write in a cheerful way or just send a message such as “nothing good happened today”).
Regarding claim 3, Yamane discloses wherein the instructions to generate the message according to the style include instructions to use a style model to generate the message upon receiving an indicator specifying content of the message (see Yamane paras “0033-0036” “create a message according to a basic personality of a vehicle avatar based on the driving behavior information, and output the message as an utterance of the vehicle avatar.” and “the controller may acquire the message that is an output that is obtained by inputting the driving behavior information to a learned model that is associated with the vehicle avatar for which the basic personality according to the information about the vehicle is set.” regarding a style model generating message content after receiving input (i.e., indicator)).
Regarding claim 4, Yamane discloses wherein the instructions to generate the message according to the style include instructions to use separate message models to generate variations of the message according to an indicator specifying content of the message, and wherein the separate message models have separate styles that define a form of the message (see Yamane paras “0047” and “0074” “For example, in the case where the driving behavior information indicates that driving of the driver of the vehicle 10 is prudent, a message indicating that the vehicle avatar has a positive feeling, such as “it was a good day today”, is created as the message of the vehicle avatar. For example, in the case where the driving behavior information indicates that driving of the driver of the vehicle 10 is not prudent, a message indicating that the vehicle avatar has a negative feeling, such as “nothing good happened today”, is created as the message of the vehicle avatar.” and “For example, even when the driving behavior information of same content is input, the learned models output messages of nuances, tones, expressions or endings of sentences according to the settings of corresponding avatars, without outputting an exactly same message.” regarding that even when the same driving behavior information is input, the learned models (i.e., separate message models) output messages of different nuances, tones, expressions or sentence endings (i.e., generate variations of the message according to an indicator specifying content of the message)).
Regarding claim 7, Yamane discloses further comprising instructions to: train a style model to at least determine the style according to annotations of inputs, wherein the inputs include one or more of an emotion of the occupant, a physiological response of the occupant, and a behavior of the vehicle (see Yamane paras “0033-0036”, “0047” and “0074” “in the case where the driving behavior information indicates that driving of the driver of the vehicle 10 is prudent, a message indicating that the vehicle avatar has a positive feeling, such as “it was a good day today”, is created as the message of the vehicle avatar. For example, in the case where the driving behavior information indicates that driving of the driver of the vehicle 10 is not prudent, a message indicating that the vehicle avatar has a negative feeling, such as “nothing good happened today”, is created as the message of the vehicle avatar” and “The learned model is a machine learning model that is obtained by learning the driving behavior information and the message as teacher data, the driving behavior information being given as input, the message being given as output, for example. The message of the teacher data is according to the setting of each avatar. That is, the message of the teacher data is a message that uses an expression, an ending of a sentence or a tone according to the sex, age and basic personality of the vehicle avatar”).
Regarding claim 9, Yamane discloses a non-transitory computer-readable medium including instructions that, when executed by one or more processors, cause the one or more processors to: (see Yamane paras “0033”, “0054-0056” and “0115” “an information processing device including a controller that is configured to acquire driving behavior information as an indication of prudence in driving a vehicle, create a message according to a basic personality of a vehicle avatar based on the driving behavior information, and output the message as an utterance of the vehicle avatar”),
determine a style for presenting messages associated with an occupant of a vehicle according to a context defined in relation to the occupant and an environment of the vehicle (see Yamane paras “0033-0035” “an information processing device including a controller that is configured to acquire driving behavior information as an indication of prudence in driving a vehicle, create a message according to a basic personality of a vehicle avatar based on the driving behavior information, and output the message as an utterance of the vehicle avatar” regarding that the system shows a message to the driver according to the basic personality of the driver (i.e., style)),
generate a message according to the style for the occupant; and provide the message to the occupant (see Yamane paras “0033-0035” “For example, in the case where the driving behavior information indicates that driving is prudent, the controller may create a message indicating that the vehicle avatar has a positive feeling. Furthermore, in a case where the driving behavior information indicates that driving is not prudent enough, the controller may create a message indicating that the vehicle avatar has a negative feeling. A positive feeling is a feeling such as happy, cheerful, gentle, amused or the like, for example. A negative feeling is a feeling such as angry, sad or the like, for example”),
But Yamane fails to explicitly teach the message indicating training instructions for instructing the occupant about controlling the vehicle to perform a maneuver and provide the message to the occupant to induce the driver to perform the maneuver.
However, Xu teaches the message indicating training instructions for instructing the occupant about controlling the vehicle to perform a maneuver (see Xu paras “0035”, “0051”, “0054”, “0063” and “0067” “the instructions model is a machine trained (e.g., trained via a machine learning algorithm) neural network, deep net, model, and/or the like that generates autonomous driving instructions based on the current data and/or historical autonomous driving pattern information/data” and “For example, if the autonomous driving instructions indicate that autonomous driving is not allowed, permitted, enabled, and/or the like along a portion of a route being traversed by the vehicle 5, the vehicle apparatus 20 may determine a new route along which autonomous driving is allowed, permitted, and/or enabled or may provide a message (e.g., via user interface 28) to a human operator of the vehicle 5 that the human operator will need to control the vehicle 5 along at least a portion of the route”),
and provide the message to the occupant to induce the driver to perform the maneuver (see Xu paras “0035”, “0051”, “0054”, “0063” and “0067” “the instructions model is a machine trained (e.g., trained via a machine learning algorithm) neural network, deep net, model, and/or the like that generates autonomous driving instructions based on the current data and/or historical autonomous driving pattern information/data” and “For example, if the autonomous driving instructions indicate that autonomous driving is not allowed, permitted, enabled, and/or the like along a portion of a route being traversed by the vehicle 5, the vehicle apparatus 20 may determine a new route along which autonomous driving is allowed, permitted, and/or enabled or may provide a message (e.g., via user interface 28) to a human operator of the vehicle 5 that the human operator will need to control the vehicle 5 along at least a portion of the route”),
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the information processing device, recording medium, and information processing method of Yamane to “prevent autonomous driving through a human operator zone within the geographic area”, as taught by Xu (paras [0051] – [0054]), in order to ensure that the human operator is properly informed and prepared to control the vehicle when autonomous operation is restricted or disabled, thereby ensuring that the vehicle is driven safely by the driver and reducing the likelihood of unsafe transitions.
But modified Yamane fails to explicitly teach including controlling a speaker to adapt at least a volume according to the style.
However, Yasui teaches including controlling a speaker to adapt at least a volume according to the style (see Yasui paras “0008” and “0016-0017” “a notification controller configured to output a notification sound… from a speaker”, “the notification controller may make the sound volume louder as a necessary degree of acceleration increases as the speed variation degree, and make the sound volume louder as a necessary degree of deceleration increases as the speed variation degree” and “the notification controller may change one or more elements among a tone height, a sound volume, a tone color, or an interval of the notification sound according to whether the acceleration or the deceleration is required for the host vehicle… and change one or more other elements among the tone height, the sound volume, the tone color, or the interval of the notification sound according to a speed variation degree of the host vehicle”),
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to further modify the information processing device, recording medium, and information processing method of modified Yamane to “assist smooth merging or lane change to a main driving lane” and to assist driving by a driver by outputting a guiding notification sound from a speaker, as taught by Yasui (paras [0008] and [0016] – [0017]), in order to ensure that the instructions are presented in a manner (e.g., volume, tone) that corresponds to the detected condition, thereby improving safety and communication.
Regarding claim 10, Yamane discloses wherein the instructions to determine the style include instructions to analyze sensor data about the occupant, the vehicle, and the environment using a style model, and wherein the style defines how the message is presented to the occupant, including defining grammar, content, timing, and cadence of a presentation of the message (see Yamane paras “0033-0036”, “0039”, “0041”, “0061” and “0107” “the controller may acquire information about an environment of surroundings on a road where the vehicle traveled, and may create the message based on the driving behavior information and the information about the environment. For example, the feeling of a driver is possibly affected by a surrounding environment. For example, when there is a traffic jam or a road repairing work…” and “The driving behavior information acquisition unit 11 generates the driving behavior information. Specifically, the driving behavior information acquisition unit 11 generates the driving behavior information in response to detection of occurrence of a predetermined event. The predetermined event is sudden braking, sudden starting, sudden steering, sounding of a horn” and “where the server is to create and post the message of the vehicle avatar… where the vehicle 10 traveled may be collected as the environment factor information, and be taken into account at the time of creation of the message of the vehicle avatar” regarding checking the feeling of the driver and the environment around the vehicle and determine what type of message and whether to write in a cheerful way or just send a message such as “nothing good happened today”).
Regarding claim 11, Yamane discloses wherein the instructions to generate the message according to the style include instructions to use a style model to generate the message upon receiving an indicator specifying content of the message (see Yamane paras “0033-0036” “create a message according to a basic personality of a vehicle avatar based on the driving behavior information, and output the message as an utterance of the vehicle avatar.” and “the controller may acquire the message that is an output that is obtained by inputting the driving behavior information to a learned model that is associated with the vehicle avatar for which the basic personality according to the information about the vehicle is set.” regarding a style model generating message content after receiving input (i.e., indicator)).
Regarding claim 12, Yamane discloses wherein the instructions to generate the message according to the style include instructions to use separate message models to generate variations of the message according to an indicator specifying content of the message, and wherein the separate message models have separate styles that define a form of the message (see Yamane paras “0047” and “0074” “For example, in the case where the driving behavior information indicates that driving of the driver of the vehicle 10 is prudent, a message indicating that the vehicle avatar has a positive feeling, such as “it was a good day today”, is created as the message of the vehicle avatar. For example, in the case where the driving behavior information indicates that driving of the driver of the vehicle 10 is not prudent, a message indicating that the vehicle avatar has a negative feeling, such as “nothing good happened today”, is created as the message of the vehicle avatar.” and “For example, even when the driving behavior information of same content is input, the learned models output messages of nuances, tones, expressions or endings of sentences according to the settings of corresponding avatars, without outputting an exactly same message.” regarding that even when the same driving behavior information is input, the learned models (i.e., separate message models) output messages of different nuances, tones, expressions or sentence endings (i.e., generate variations of the message according to an indicator specifying content of the message)).
Regarding claim 14, Yamane discloses a method, comprising: (see Yamane paras “0033”, “0054-0056” and “0115”),
determining a style for presenting messages associated with an occupant of a vehicle according to a context defined in relation to the occupant and an environment of the vehicle (see Yamane paras “0033-0035” “an information processing device including a controller that is configured to acquire driving behavior information as an indication of prudence in driving a vehicle, create a message according to a basic personality of a vehicle avatar based on the driving behavior information, and output the message as an utterance of the vehicle avatar” regarding that the system shows a message to the driver according to the basic personality of the driver (i.e., style)),
generating a message according to the style for the occupant; and providing the message to the occupant (see Yamane paras “0033-0035” “For example, in the case where the driving behavior information indicates that driving is prudent, the controller may create a message indicating that the vehicle avatar has a positive feeling. Furthermore, in a case where the driving behavior information indicates that driving is not prudent enough, the controller may create a message indicating that the vehicle avatar has a negative feeling. A positive feeling is a feeling such as happy, cheerful, gentle, amused or the like, for example. A negative feeling is a feeling such as angry, sad or the like, for example”),
But Yamane fails to explicitly teach the message indicating training instructions for instructing the occupant about controlling the vehicle to perform a maneuver and providing the message to the occupant to induce the driver to perform the maneuver.
However, Xu teaches the message indicating training instructions for instructing the occupant about controlling the vehicle to perform a maneuver (see Xu paras “0035”, “0051”, “0054”, “0063” and “0067” “the instructions model is a machine trained (e.g., trained via a machine learning algorithm) neural network, deep net, model, and/or the like that generates autonomous driving instructions based on the current data and/or historical autonomous driving pattern information/data” and “For example, if the autonomous driving instructions indicate that autonomous driving is not allowed, permitted, enabled, and/or the like along a portion of a route being traversed by the vehicle 5, the vehicle apparatus 20 may determine a new route along which autonomous driving is allowed, permitted, and/or enabled or may provide a message (e.g., via user interface 28) to a human operator of the vehicle 5 that the human operator will need to control the vehicle 5 along at least a portion of the route”),
and providing the message to the occupant to induce the driver to perform the maneuver (see Xu paras “0035”, “0051”, “0054”, “0063” and “0067” “the instructions model is a machine trained (e.g., trained via a machine learning algorithm) neural network, deep net, model, and/or the like that generates autonomous driving instructions based on the current data and/or historical autonomous driving pattern information/data” and “For example, if the autonomous driving instructions indicate that autonomous driving is not allowed, permitted, enabled, and/or the like along a portion of a route being traversed by the vehicle 5, the vehicle apparatus 20 may determine a new route along which autonomous driving is allowed, permitted, and/or enabled or may provide a message (e.g., via user interface 28) to a human operator of the vehicle 5 that the human operator will need to control the vehicle 5 along at least a portion of the route”),
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the information processing device, recording medium, and information processing method of Yamane to “prevent autonomous driving through a human operator zone within the geographic area”, as taught by Xu (paras [0051] – [0054]), in order to ensure that the human operator is properly informed and prepared to control the vehicle when autonomous operation is restricted or disabled, thereby ensuring that the vehicle is driven safely by the driver and reducing the likelihood of unsafe transitions.
But modified Yamane fails to explicitly teach including controlling a speaker to adapt at least a volume according to the style.
However, Yasui teaches including controlling a speaker to adapt at least a volume according to the style (see Yasui paras “0008” and “0016-0017” “a notification controller configured to output a notification sound… from a speaker”, “the notification controller may make the sound volume louder as a necessary degree of acceleration increases as the speed variation degree, and make the sound volume louder as a necessary degree of deceleration increases as the speed variation degree” and “the notification controller may change one or more elements among a tone height, a sound volume, a tone color, or an interval of the notification sound according to whether the acceleration or the deceleration is required for the host vehicle… and change one or more other elements among the tone height, the sound volume, the tone color, or the interval of the notification sound according to a speed variation degree of the host vehicle”),
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to further modify the information processing device, recording medium, and information processing method of modified Yamane to “assist smooth merging or lane change to a main driving lane” and to assist driving by a driver by outputting a guiding notification sound from a speaker, as taught by Yasui (paras [0008] and [0016] – [0017]), in order to ensure that the instructions are presented in a manner (e.g., volume, tone) that corresponds to the detected condition, thereby improving safety and communication.
Regarding claim 15, Yamane discloses wherein determining the style includes analyzing sensor data about the occupant, the vehicle, and the environment using a style model, wherein providing the message includes at least one of rendering the message as text on a display and playing the message as spoken audio, and wherein the style defines how the message is presented to the occupant, including defining grammar, content, timing, and cadence of a presentation of the message (see Yamane Figs. 7-8 and paras “0033-0036”, “0039”, “0041”, “0061”, “0107” and “0112” “the controller may acquire information about an environment of surroundings on a road where the vehicle traveled, and may create the message based on the driving behavior information and the information about the environment. For example, the feeling of a driver is possibly affected by a surrounding environment. For example, when there is a traffic jam or a road repairing work…” and “The driving behavior information acquisition unit 11 generates the driving behavior information. Specifically, the driving behavior information acquisition unit 11 generates the driving behavior information in response to detection of occurrence of a predetermined event. The predetermined event is sudden braking, sudden starting, sudden steering, sounding of a horn” and “where the server is to create and post the message of the vehicle avatar… where the vehicle 10 traveled may be collected as the environment factor information, and be taken into account at the time of creation of the message of the vehicle avatar” regarding checking the feeling of the driver and the environment around the vehicle and determine what type of message and whether to write in a cheerful way or just send a message such as “nothing good happened today”).
Regarding claim 16, Yamane discloses wherein generating the message according to the style includes using a style model to generate the message upon receiving an indicator specifying content of the message (see Yamane paras “0033-0036” “create a message according to a basic personality of a vehicle avatar based on the driving behavior information, and output the message as an utterance of the vehicle avatar.” and “the controller may acquire the message that is an output that is obtained by inputting the driving behavior information to a learned model that is associated with the vehicle avatar for which the basic personality according to the information about the vehicle is set.” regarding a style model generating message content after receiving input (i.e. indicator)),
and wherein determining the style includes analyzing the context, including an emotional state of the occupant and current operating characteristics of the vehicle (see Yamane paras “0035”, “0039-0040” and “0045” “the controller may acquire information about an environment of surroundings on a road where the vehicle traveled, and may create the message based on the driving behavior information and the information about the environment”, “the feeling of a driver is possibly affected by a surrounding environment… a driver may become irritate...”, “the information about the environment indicates that a surrounding environment negatively affects a feeling of the driver…” and “The driving behavior information is history information of predetermined events occurring during driving… detection of sudden braking, sudden steering, sudden starting, sounding of a horn, meandering driving” regarding how the system evaluates the feeling of the driver, determines whether surrounding conditions negatively affect that feeling, and then reflects that analysis of the emotional state of the occupant as part of message generation).
However, modified Yamane fails to explicitly teach including adapting at least the volume and a tone of how the message is provided as defined by the style.
Yasui teaches including adapting at least the volume and a tone of how the message is provided as defined by the style (see Yasui paras “0008” and “0016-0017” “a notification controller configured to output a notification sound… from a speaker”, “the notification controller may make the sound volume louder as a necessary degree of acceleration increases as the speed variation degree, and make the sound volume louder as a necessary degree of deceleration increases as the speed variation degree” and “the notification controller may change one or more elements among a tone height, a sound volume, a tone color, or an interval of the notification sound according to whether the acceleration or the deceleration is required for the host vehicle… and change one or more other elements among the tone height, the sound volume, the tone color, or the interval of the notification sound according to a speed variation degree of the host vehicle”).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of modified Yamane for an information processing device, recording medium, and information processing method “to assist smooth merging or lane change to a main driving lane and also assisting driving by a driver by outputting a guiding notification sound from a speaker”, as taught by Yasui (paras [0008] and [0016]-[0017]), in order to ensure that the instructions are presented in a manner (e.g., volume, tone) that corresponds to the detected condition, thereby improving safety and communication.
Regarding claim 17, Yamane discloses wherein generating the message according to the style includes using separate message models to generate variations of the message according to an indicator specifying content of the message, and wherein the separate message models have separate styles that define a form of the message (see Yamane paras “0047” and “0074” “For example, in the case where the driving behavior information indicates that driving of the driver of the vehicle 10 is prudent, a message indicating that the vehicle avatar has a positive feeling, such as “it was a good day today”, is created as the message of the vehicle avatar. For example, in the case where the driving behavior information indicates that driving of the driver of the vehicle 10 is not prudent, a message indicating that the vehicle avatar has a negative feeling, such as “nothing good happened today”, is created as the message of the vehicle avatar.” and “For example, even when the driving behavior information of same content is input, the learned models output messages of nuances, tones, expressions or endings of sentences according to the settings of corresponding avatars, without outputting an exactly same message.” regarding how, even when the same driving behavior information is input, the learned models (i.e. separate message models) output messages of different nuances, tones, expressions or sentence endings (i.e. generate variations of the message according to an indicator specifying content of the message)).
Regarding claim 20, Yamane discloses training a style model to at least determine the style according to annotations of inputs, wherein the inputs include one or more of an emotion of the occupant, a physiological response of the occupant, and a behavior of the vehicle (see Yamane paras “0033-0036”, “0047” and “0074” “in the case where the driving behavior information indicates that driving of the driver of the vehicle 10 is prudent, a message indicating that the vehicle avatar has a positive feeling, such as “it was a good day today”, is created as the message of the vehicle avatar. For example, in the case where the driving behavior information indicates that driving of the driver of the vehicle 10 is not prudent, a message indicating that the vehicle avatar has a negative feeling, such as “nothing good happened today”, is created as the message of the vehicle avatar” and “The learned model is a machine learning model that is obtained by learning the driving behavior information and the message as teacher data, the driving behavior information being given as input, the message being given as output, for example. The message of the teacher data is according to the setting of each avatar. That is, the message of the teacher data is a message that uses an expression, an ending of a sentence or a tone according to the sex, age and basic personality of the vehicle avatar”).
Claims 5, 13 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Yamane et al (US 2021/0291841 A1) (hereinafter Yamane) in view of Xu et al (US 2020/0200558 A1) (hereinafter Xu) and Yasui (US 2023/0242141 A1), as applied to claims 1, 9 and 14 above, and further in view of Mariko et al (US 11,501,088 B1) (hereinafter Mariko).
Regarding claim 5, Yamane fails to explicitly disclose wherein the instructions to generate the message include instructions to use a style model to rank the variations and select one of the variations according to the rank of the variations in relation to the style.
However, Mariko teaches wherein the instructions to generate the message include instructions to use a style model to rank the variations and select one of the variations according to the rank of the variations in relation to the style (see Mariko col 7, lines 45-67, col 15, lines 8-29 and col 16, lines 40-50 “Scoring module 114 performs scoring procedures. For example, scoring module 114 may assign scores to natural language text segments presented as options to user 10 based on the user's selection and, if available, current values for variables representing linguistic preferences stored in database 140. Ranking module 116 ranks text segments presented to the user based on their scores.” and “Any of numerous models may be used to refine the set of linguistic preferences. Some of such models utilize machine learning techniques and/or regression-based analyses to learn which linguistic preferences are likely to impact the readability of natural language text. Some of such models are described further below”).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Yamane for an information processing device, recording medium, and information processing method “to generate a natural language text customized to the user using the updated values”, as taught by Mariko (col 7, lines 45-67), in order to output the natural language text customized to the user preference.
Regarding claim 13, Yamane fails to explicitly disclose wherein the instructions to generate the message include instructions to use a style model to rank the variations and select one of the variations according to the rank of the variations in relation to the style.
However, Mariko teaches wherein the instructions to generate the message include instructions to use a style model to rank the variations and select one of the variations according to the rank of the variations in relation to the style (see Mariko col 7, lines 45-67, col 15, lines 8-29 and col 16, lines 40-50 “Scoring module 114 performs scoring procedures. For example, scoring module 114 may assign scores to natural language text segments presented as options to user 10 based on the user's selection and, if available, current values for variables representing linguistic preferences stored in database 140. Ranking module 116 ranks text segments presented to the user based on their scores.” and “Any of numerous models may be used to refine the set of linguistic preferences. Some of such models utilize machine learning techniques and/or regression-based analyses to learn which linguistic preferences are likely to impact the readability of natural language text. Some of such models are described further below”).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Yamane for an information processing device, recording medium, and information processing method “to generate a natural language text customized to the user using the updated values”, as taught by Mariko (col 7, lines 45-67), in order to output the natural language text customized to the user preference.
Regarding claim 18, Yamane fails to explicitly disclose wherein generating the message includes using a style model to rank the variations and select one of the variations according to the rank of the variations in relation to the style.
However, Mariko teaches wherein generating the message includes using a style model to rank the variations and select one of the variations according to the rank of the variations in relation to the style (see Mariko col 7, lines 45-67, col 15, lines 8-29 and col 16, lines 40-50 “Scoring module 114 performs scoring procedures. For example, scoring module 114 may assign scores to natural language text segments presented as options to user 10 based on the user's selection and, if available, current values for variables representing linguistic preferences stored in database 140. Ranking module 116 ranks text segments presented to the user based on their scores.” and “Any of numerous models may be used to refine the set of linguistic preferences. Some of such models utilize machine learning techniques and/or regression-based analyses to learn which linguistic preferences are likely to impact the readability of natural language text. Some of such models are described further below”).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Yamane for an information processing device, recording medium, and information processing method “to generate a natural language text customized to the user using the updated values”, as taught by Mariko (col 7, lines 45-67), in order to output the natural language text customized to the user preference.
Claims 6 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Yamane et al (US 2021/0291841 A1) (hereinafter Yamane) in view of Xu et al (US 2020/0200558 A1) (hereinafter Xu) and Yasui (US 2023/0242141 A1), as applied to claims 1 and 14 above, and further in view of Lundin et al (US 2022/0245322 A1) (hereinafter Lundin).
Regarding claim 6, Yamane fails to explicitly disclose train a style model to at least determine the style according to a metric that assesses a response of the occupant to the message, wherein the metric defines how to assess the response and reward the style model, and wherein training the style model occurs according to reinforcement learning and the style model is a machine learning algorithm.
However, Lundin teaches train a style model to at least determine the style according to a metric that assesses a response of the occupant to the message, wherein the metric defines how to assess the response and reward the style model, and wherein training the style model occurs according to reinforcement learning and the style model is a machine learning algorithm (see Lundin paras “0028” and “0036-0041” “The online system 130 receives requests to generate content item variations for one or more reference content items. A content item variation for a reference content item may include the original content of the reference content item but a text variation that retains the textual content of the reference text but stylizes the reference text in a respective style. In one embodiment, the online system 130 trains one or more machine-learned style transfer models coupled to receive an encoded version of the reference text and generate a set of text variations that stylizes the reference text in a set of text styles. The style transfer model includes a set of trained parameters”).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Yamane for an information processing device, recording medium, and information processing method “to train the style transfer model by collecting user interaction information that may indicate how users respond to different styles of text”, as taught by Lundin (paras [0028] and [0036]-[0041]), in order to automatically and efficiently generate such variations in text style.
Regarding claim 19, Yamane fails to explicitly disclose training a style model to at least determine the style according to a metric that assesses a response of the occupant to the message, wherein the metric defines how to assess the response and reward the style model, and wherein training the style model occurs according to reinforcement learning and the style model is a machine learning algorithm.
However, Lundin teaches training a style model to at least determine the style according to a metric that assesses a response of the occupant to the message, wherein the metric defines how to assess the response and reward the style model, and wherein training the style model occurs according to reinforcement learning and the style model is a machine learning algorithm (see Lundin paras “0028” and “0036-0041” “The online system 130 receives requests to generate content item variations for one or more reference content items. A content item variation for a reference content item may include the original content of the reference content item but a text variation that retains the textual content of the reference text but stylizes the reference text in a respective style. In one embodiment, the online system 130 trains one or more machine-learned style transfer models coupled to receive an encoded version of the reference text and generate a set of text variations that stylizes the reference text in a set of text styles. The style transfer model includes a set of trained parameters”).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Yamane for an information processing device, recording medium, and information processing method “to train the style transfer model by collecting user interaction information that may indicate how users respond to different styles of text”, as taught by Lundin (paras [0028] and [0036]-[0041]), in order to automatically and efficiently generate such variations in text style.
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Yamane et al (US 2021/0291841 A1) (hereinafter Yamane) in view of Xu et al (US 2020/0200558 A1) (hereinafter Xu) and Yasui (US 2023/0242141 A1), as applied to claims 1 and 14 above, and further in view of Camp et al (US 2019/0147736 A1) (hereinafter Camp).
Regarding claim 8, Yamane fails to explicitly disclose wherein the message system is embedded within a vehicle that operates at least semi-autonomously.
However, Camp teaches wherein the message system is embedded within a vehicle that operates at least semi-autonomously (see Camp para “0066” “transmitting a signal to operate an autonomous vehicle or a semi-autonomous vehicle based on the road event message; and (2) presenting a user interface element depicting a visual representation of the road event in a mapping user interface of a device based on the road event message.”).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Yamane for an information processing device, recording medium, and information processing method “to determine a road event associated with a geographic location and a confidence metric of the road event to determine whether to operate autonomously or semi-autonomously”, as taught by Camp (para [0066]), in order to improve the overall driving experience for end users.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HOSSAM M ABDELLATIF whose telephone number is (571)272-5869. The examiner can normally be reached on M-F 8 am-5 pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Rachid Bendidi can be reached on (571) 272-4896. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HOSSAM M ABD EL LATIF/Examiner, Art Unit 3664