Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-4, 6-14, and 16-19 have been amended. Claim 5 is cancelled. Claims 1-4 and 6-19 are currently under examination.
Claim Objections
Claims 1-4, 6-7, and 18-19 are objected to because of the following informalities: typographical errors and inconsistent terminology. Appropriate correction is required.
Claim 1, line 11: “estimation result of the present context of the user, and ”
Claim 2, line 4: “prediction result of the future affect of the user, and”
Claim 2, line 5: “the prediction result of the future affect of the user is obtained based on the first estimation”
Claim 2, line 6: “result of the present affect of the user and at least one of the first estimation result”
Claim 2, line 7: “context of the user or the obtained prediction result of the future context of the user.”
Claim 3, line 4: “the obtained prediction result of the future affect of the user, and”
Claim 3, line 5: “the reliability of the obtained prediction result of the future affect of the user is obtained based”
Claim 3, line 6: “on the first estimation result of the present affect of the user and at least one ”
Claim 3, line 7: “result of the present context of the user or the obtained prediction result of the future context of the user.”
Claim 4, line 3: “the first estimation result of the present affect of the user based on a biosignal of the”
Claim 6, line 3: “the first estimation result of the present context of the user based on at least one of”
Claim 7, line 5: “of the prediction result of the future affect of the user or the prediction result of”
Claim 7, line 6: “the future context of the user, wherein the support content is content of the support;”
Claim 18, line 10: “of the present context of the user, and”
Claim 19, line 12: “of the present context of the user, and”
Response to Arguments
Applicant’s arguments with respect to claims 1-4 and 6-19 have been considered but are moot because the new ground of rejection does not rely on the combination of references applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-2, 4, 7, 9, and 18-19 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Reinen et al. (Pub. No.: US 2020/0401933 A1) hereinafter referred to as Reinen.
With respect to Claim 1, Reinen discloses an information processing apparatus (fig. 7, item 701: computer system = fig. 8, item 12; ¶40; ¶52, “computer system/server 12 include, but are not limited to, personal computer systems, server computer systems, thin clients, thick clients, handheld or laptop devices …”), comprising: circuitry (fig. 7, item 701: computer system = fig. 8, item 12 comprises circuitry; ¶52) configured to: obtain a prediction result of a future context of a user based on a time series of context estimation results of the user (¶42, “The data provided to the system is logged (e.g., recorded in memory of the laptop computer) and fed to a training mechanism (e.g., unsupervised learning of a model) that links the user's biometric time course data to the output behavior. Over time, the model is updated, improving detection functionality, e.g., detection of changes in the feature space that allows the system to anticipate how a particular emotional/biological state predicts an outcome… the system anticipate a distinct pattern that predicts an adversarial social interaction, it may alert the user 507 (e.g., through an on-screen text alert, an audio alert or a similar alert)” – future context = adversarial social interaction); and support the user based on a first result and a second result (¶42, supporting the user is via an alert), wherein the first result includes at least one of a first estimation result of a present affect of the user (¶27-28; ¶40, “The computer system 701 can include an application performing the functions of a trainer 704. 
The trainer 704 trains a time series model (e.g., impulse response filter, feed forward neural network, or predictive auto regression) to predict user behavior from the data made available by the sensors 702 and the data logger 703”; ¶42, “Over time, the model is updated, improving detection functionality, e.g., detection of changes in the feature space that allows the system to anticipate how a particular emotional/biological state predicts an outcome”) or a first estimation result of a present context of the user (¶33, “The context can be, for example, a degree of cooperation or aggression in a game, a purchase behavior given an advertisement or study”), the time series of the context estimation results include the first estimation result of the present context (¶32, “a method 500 can include collecting biomarker data during a time-stamped process 501, determining time points at which the behavior of interest occurs 502”; ¶40, “The trainer 704 trains a time series model”), and the second result includes a prediction result of a future affect of the user and the obtained prediction result of the future context of the user (¶42, “the system to anticipate how a particular emotional/biological state predicts an outcome” – anticipating a particular emotional or biological state = future affect, adversarial social interaction = future context; ¶47, “responses and events are linked and the linked responses and events are used to predict a response at some time (t) in the future, e.g., at t=t+1”).
With respect to Claim 2, claim 1 is incorporated, Reinen discloses wherein the circuitry is further configured to obtain the prediction result of the future affect (¶42, “the system to anticipate how a particular emotional/biological state predicts an outcome” – anticipating a particular emotional or biological state = future affect; the prediction result is anticipating a distinct pattern), and the prediction result of the future affect is obtained based on the first estimation result of the present affect (¶27-28; ¶40; ¶42) and at least one of the first estimation result of the present context (¶33, “The context can be, for example, a degree of cooperation or aggression in a game, a purchase behavior given an advertisement or study”) or the obtained prediction result of the future context (¶42, “the system to anticipate how a particular emotional/biological state predicts an outcome” – anticipating a particular emotional or biological state = future affect, adversarial social interaction = future context; ¶47, “responses and events are linked and the linked responses and events are used to predict a response at some time (t) in the future, e.g., at t=t+1”).
With respect to Claim 4, claim 1 is incorporated, Reinen discloses wherein the circuitry is further configured to obtain the first estimation result of the present affect based on a biosignal of the user (fig. 3; ¶31).
With respect to Claim 7, claim 1 is incorporated, Reinen discloses wherein the circuitry is further configured to: set support content based on the first result (¶40, “The computer system 701 can include an application performing the functions of a trainer 704. The trainer 704 trains a time series model (e.g., impulse response filter, feed forward neural network, or predictive auto regression) to predict user behavior from the data made available by the sensors 702 and the data logger 703”) and at least one of the prediction result of the future affect (¶42, “the system to anticipate how a particular emotional/biological state predicts an outcome” – anticipating a particular emotional or biological state = future affect) or the prediction result of the future context (¶42, “should the system anticipate a distinct pattern that predicts an adversarial social interaction, it may alert the user 507 (e.g., through an on-screen text alert, an audio alert or a similar alert)”), wherein the support content is content of support (¶42, “it may alert the user 507 (e.g., through an on-screen text alert, an audio alert or a similar alert)”); and control the support of the support content (¶43, “the system displays an alert that must be acknowledged, e.g., by the user selecting a certain button, before a further interaction in the virtual environment can be completed”).
With respect to Claim 9, claim 7 is incorporated, Reinen discloses wherein the circuitry is further configured to: generate a support result based on the first result and a third result (¶32, the support is an alert to defuse a negative social interaction), wherein the first result corresponds to a result prior to the support of the user (¶32, training a machine-learning system; ¶40, “a database 705 storing the data made available by the sensors 702 and the data logger 703 and the models created by the trainer” – past games are stored in the database), the third result includes at least one of a second estimation result of the present affect of the user (¶27-28; ¶40, “The computer system 701 can include an application performing the functions of a trainer 704. The trainer 704 trains a time series model (e.g., impulse response filter, feed forward neural network, or predictive auto regression) to predict user behavior from the data made available by the sensors 702 and the data logger 703”; ¶42, “Over time, the model is updated, improving detection functionality, e.g., detection of changes in the feature space that allows the system to anticipate how a particular emotional/biological state predicts an outcome” – the third result is a later point in time corresponding to another game) or a second estimation result of the present context of the user (¶33, “The context can be, for example, a degree of cooperation or aggression in a game, a purchase behavior given an advertisement or study” – a game at a later point in time), the third result corresponds to a result subsequent to the support of the user, and the support result is associated with the user (¶32; ¶42, “the system to anticipate how a particular emotional/biological state predicts an outcome” – anticipating a particular emotional or biological state = future affect, adversarial social interaction = future context; ¶47, “responses and events are linked and the linked responses and events are used to predict a response at some time (t) in the future, e.g., at t=t+1”); and the generated support result is associated with the determined support method (¶42, “it may alert the user 507 (e.g., through an on-screen text alert, an audio alert or a similar alert)”).
With respect to Claim 18, Reinen discloses an information processing method (fig. 2, item 200; fig. 5, item 500; ¶28; ¶32; ¶51; ¶66), comprising: in an information processing apparatus (fig. 7, item 701: computer system = fig. 8, item 12; ¶40; ¶52, “computer system/server 12 include, but are not limited to, personal computer systems, server computer systems, thin clients, thick clients, handheld or laptop devices …”): obtaining a prediction result of a future context of a user based on a time series of context estimation results of the user (¶42, “The data provided to the system is logged (e.g., recorded in memory of the laptop computer) and fed to a training mechanism (e.g., unsupervised learning of a model) that links the user's biometric time course data to the output behavior. Over time, the model is updated, improving detection functionality, e.g., detection of changes in the feature space that allows the system to anticipate how a particular emotional/biological state predicts an outcome… the system anticipate a distinct pattern that predicts an adversarial social interaction, it may alert the user 507 (e.g., through an on-screen text alert, an audio alert or a similar alert)” – future context = adversarial social interaction); and supporting the user based on a first result and a second result (¶42, supporting the user is via an alert), wherein the first result includes at least one of an estimation result of a present affect of the user (¶27-28; ¶40, “The computer system 701 can include an application performing the functions of a trainer 704. 
The trainer 704 trains a time series model (e.g., impulse response filter, feed forward neural network, or predictive auto regression) to predict user behavior from the data made available by the sensors 702 and the data logger 703”; ¶42, “Over time, the model is updated, improving detection functionality, e.g., detection of changes in the feature space that allows the system to anticipate how a particular emotional/biological state predicts an outcome”) or an estimation result of a present context of the user (¶33, “The context can be, for example, a degree of cooperation or aggression in a game, a purchase behavior given an advertisement or study”), the time series of the context estimation results include the estimation result of the present context (¶32, “a method 500 can include collecting biomarker data during a time-stamped process 501, determining time points at which the behavior of interest occurs 502”; ¶40, “The trainer 704 trains a time series model”), and the second result includes a prediction result of a future affect of the user and the obtained prediction result of the future context of the user (¶42, “the system to anticipate how a particular emotional/biological state predicts an outcome” – anticipating a particular emotional or biological state = future affect, adversarial social interaction = future context; ¶47, “responses and events are linked and the linked responses and events are used to predict a response at some time (t) in the future, e.g., at t=t+1”).
With respect to Claim 19, Reinen discloses a non-transitory computer-readable medium (fig. 8, item 28; ¶57-58) having stored thereon, computer-executable instructions (¶67-69) which, when executed by a computer (¶74, “The computer readable program instructions may also be loaded onto a computer”), cause the computer to execute operations, the operations comprising: obtaining a prediction result of a future context of a user based on a time series of context estimation results of the user (¶42, “The data provided to the system is logged (e.g., recorded in memory of the laptop computer) and fed to a training mechanism (e.g., unsupervised learning of a model) that links the user's biometric time course data to the output behavior. Over time, the model is updated, improving detection functionality, e.g., detection of changes in the feature space that allows the system to anticipate how a particular emotional/biological state predicts an outcome… the system anticipate a distinct pattern that predicts an adversarial social interaction, it may alert the user 507 (e.g., through an on-screen text alert, an audio alert or a similar alert)” – future context = adversarial social interaction); and supporting the user based on a first result and a second result (¶42, supporting the user is via an alert), wherein the first result includes at least one of an estimation result of a present affect of the user (¶27-28; ¶40, “The computer system 701 can include an application performing the functions of a trainer 704. 
The trainer 704 trains a time series model (e.g., impulse response filter, feed forward neural network, or predictive auto regression) to predict user behavior from the data made available by the sensors 702 and the data logger 703”; ¶42, “Over time, the model is updated, improving detection functionality, e.g., detection of changes in the feature space that allows the system to anticipate how a particular emotional/biological state predicts an outcome”) or an estimation result of a present context of the user (¶33, “The context can be, for example, a degree of cooperation or aggression in a game, a purchase behavior given an advertisement or study”), the time series of the context estimation results include the estimation result of the present context (¶32, “a method 500 can include collecting biomarker data during a time-stamped process 501, determining time points at which the behavior of interest occurs 502”; ¶40, “The trainer 704 trains a time series model”), and the second result includes a prediction result of a future affect of the user and the obtained prediction result of the future context of the user (¶42, “the system to anticipate how a particular emotional/biological state predicts an outcome” – anticipating a particular emotional or biological state = future affect, adversarial social interaction = future context; ¶47, “responses and events are linked and the linked responses and events are used to predict a response at some time (t) in the future, e.g., at t=t+1”).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 6, 8, 10-14, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Reinen as applied to claims 1, 7, and 9 above, and further in view of Iwase et al. (Pub. No.: US 2021/0224066 A1).
With respect to Claim 6, claim 1 is incorporated, Reinen teaches that the information processing apparatus is implemented in a virtual reality environment (¶3; ¶33).
Reinen does not teach wherein the circuitry is further configured to obtain the first estimation result of the present context based on at least one of environmental information or input information, the environmental information is associated with an environment surrounding the user, and the input information is associated with the user.
Iwase teaches an information processing apparatus (fig. 3, item 10), comprising: circuitry (fig. 3, item 160; ¶85) configured to: obtain a prediction result of a future context of a user based on a time series of context estimation results of the user (¶185); support the user based on a result (¶185, “the response control unit 270 may control the execution of the macro with the name “Play the music while cooking” even when the user has said “Play music””); wherein the circuitry is further configured to obtain the first estimation result of the present context based on at least one of environmental information (¶81; ¶83; ¶96; ¶186; ¶223, “the utterance learning adaptation unit 250 according to the first embodiment of the present disclosure is characterized by determining a name for the estimated macro on the basis of a context which is acquired at the time of issuing the plurality of function execution instructions included in the cluster”) or input information (¶79), the environmental information is associated with an environment surrounding the user (¶81; ¶83), and the input information is associated with the user (¶79).
Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the information processing apparatus of Reinen, such that the apparatus is implemented in either a virtual reality environment or a real-life environment, resulting in wherein the circuitry is further configured to obtain the first estimation result of the present context based on at least one of environmental information or input information, the environmental information is associated with an environment surrounding the user, and the input information is associated with the user, as taught by Iwase so as to improve context awareness and provide implementation alternatives.
With respect to Claim 8, claim 7 is incorporated, Reinen does not teach wherein the circuitry is further configured to: determine a support method based on the first result; and control the support of the support content based on the determined support method.
Iwase teaches an information processing apparatus (fig. 3, item 10), comprising: circuitry (fig. 3, item 160; ¶85) configured to: obtain a prediction result of a future context of a user based on a time series of context estimation results of the user (¶185); support the user based on a result (¶185, “the response control unit 270 may control the execution of the macro with the name “Play the music while cooking” even when the user has said “Play music””); wherein the circuitry is further configured to: determine a support method based on the first result (the support method depends on the user request such as displaying a friend list or playing music; ¶118; ¶130; ¶134-135; ¶188); and control the support of the support content based on the determined support method (¶134-135; ¶188).
Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the information processing apparatus of Reinen, wherein the circuitry is further configured to: determine a support method based on the first result; and control the support of the support content based on the determined support method, as taught by Iwase so as to provide support alternatives depending on user requests.
With respect to Claim 10, claim 9 is incorporated, Reinen does not teach wherein the circuitry is further configured to determine the support method based on the generated support result.
Iwase teaches an information processing apparatus (fig. 3, item 10), comprising: circuitry (fig. 3, item 160; ¶85) configured to: obtain a prediction result of a future context of a user based on a time series of context estimation results of the user (¶185); support the user based on a generated support result (¶185, “the response control unit 270 may control the execution of the macro with the name “Play the music while cooking” even when the user has said “Play music””); wherein the circuitry is further configured to: determine a support method based on a generated support result (the support method depends on the user request such as displaying a friend list or playing music; ¶118; ¶130; ¶134-135; ¶188); and control the support of the support content based on the determined support method (¶134-135; ¶188).
Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the information processing apparatus of Reinen, wherein the circuitry is further configured to determine the support method based on the generated support result, as taught by Iwase so as to provide support alternatives depending on user requests.
With respect to Claim 11, claim 8 is incorporated, Reinen does not teach wherein the circuitry is further configured to: determine support output based on the first result; and control the support of the support content based on the determined support method and the determined support output.
Iwase teaches an information processing apparatus (fig. 3, item 10), comprising: circuitry (fig. 3, item 160; ¶85) configured to: obtain a prediction result of a future context of a user based on a time series of context estimation results of the user (¶185); support the user based on a generated support result (¶185, “the response control unit 270 may control the execution of the macro with the name “Play the music while cooking” even when the user has said “Play music””); wherein the circuitry is further configured to: determine support output based on a present context of the user (the support method depends on the user request such as displaying a friend list or playing music; ¶118; ¶130; ¶134-135; ¶188); and control the support of the support content based on the determined support method and the determined support output (¶134-135; ¶188).
Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the information processing apparatus of Reinen, wherein the circuitry is further configured to: determine support output based on the first result; and control the support of the support content based on the determined support method and the determined support output, as taught by Iwase so as to provide support alternatives depending on user requests.
With respect to Claim 12, claim 11 is incorporated, Reinen teaches wherein the circuitry is further configured to: generate a support result based on the first result and a third result (¶32, the support is an alert to defuse a negative social interaction), wherein the first result corresponds to a result prior to the support of the user (¶32, training a machine-learning system; ¶40, “a database 705 storing the data made available by the sensors 702 and the data logger 703 and the models created by the trainer” – past games are stored in the database), the third result includes at least one of a second estimation result of the present affect of the user (¶27-28; ¶40, “The computer system 701 can include an application performing the functions of a trainer 704. The trainer 704 trains a time series model (e.g., impulse response filter, feed forward neural network, or predictive auto regression) to predict user behavior from the data made available by the sensors 702 and the data logger 703”; ¶42, “Over time, the model is updated, improving detection functionality, e.g., detection of changes in the feature space that allows the system to anticipate how a particular emotional/biological state predicts an outcome” – the third result is a later point in time corresponding to another game) or a second estimation result of the present context of the user (¶33, “The context can be, for example, a degree of cooperation or aggression in a game, a purchase behavior given an advertisement or study” – a game at a later point in time), the third result corresponds to a result subsequent to the support of the user, and the support result is associated with the user (¶32; ¶42, “the system to anticipate how a particular emotional/biological state predicts an outcome” – anticipating a particular emotional or biological state = future affect, adversarial social interaction = future context; ¶47, “responses and events are linked and the linked responses and events are used to predict a response at some time (t) in the future, e.g., at t=t+1”); and the generated support result is associated with the determined support method (¶42, “it may alert the user 507 (e.g., through an on-screen text alert, an audio alert or a similar alert)”).
With respect to Claim 13, claim 12 is incorporated, Reinen does not teach wherein the circuitry is further configured to: determine the support method based on the generated support result; and determine the support output based on the generated support result.
Iwase teaches an information processing apparatus (fig. 3, item 10), comprising: circuitry (fig. 3, item 160; ¶85) configured to: obtain a prediction result of a future context of a user based on a time series of context estimation results of the user (¶185); support the user based on a generated support result (¶185, “the response control unit 270 may control the execution of the macro with the name “Play the music while cooking” even when the user has said “Play music””); wherein the circuitry is further configured to: determine the support method based on the generated support result (the support method depends on the user request such as displaying a friend list or playing music; ¶118; ¶130; ¶134-135; ¶188); and determine the support output based on the generated support result (¶134-135; ¶188).
Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the information processing apparatus of Reinen, wherein the circuitry is further configured to: determine the support method based on the generated support result; and determine the support output based on the generated support result, as taught by Iwase so as to provide support alternatives depending on user requests.
With respect to Claim 14, claim 11 is incorporated, Reinen teaches wherein the support content indicates transmission of a message to the user (¶40, “the output signals are audio feedback and/or video feedback” – the message is audio feedback), the support method corresponds to a method to transmit the message to the user via audio (¶42, “it may alert the user 507 (e.g., through an on-screen text alert, an audio alert or a similar alert)”).
Reinen does not teach that the support output is one of a tone of voice or a volume of the audio.
Iwase teaches an information processing apparatus (fig. 3, item 10), comprising: circuitry (fig. 3, item 160; ¶85) configured to: obtain a prediction result of a future context of a user based on a time series of context estimation results of the user (¶185); support the user based on a generated support result (¶185, “the response control unit 270 may control the execution of the macro with the name “Play the music while cooking” even when the user has said “Play music””); wherein the circuitry is further configured to: determine support output based on a present context of the user (the support method depends on the user request such as displaying a friend list or playing music; ¶118; ¶130; ¶134-135; ¶188); and control the support of the support content based on the determined support method and the determined support output (¶134-135; ¶188); wherein the support content indicates transmission of a message to the user (¶210, “an audio output device” – the message is audio output), the support method corresponds to a method to transmit the message to the user via audio (fig. 1; ¶48-50), and the support output is one of a tone of voice or a volume of the audio (¶50).
Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the information processing apparatus of Reinen, such that the support output is one of a tone of voice or a volume of the audio, as taught by Iwase so as to provide support alternatives depending on user requests.
With respect to Claim 16, claim 14 is incorporated, Reinen does not mention wherein the message is task support information, and the task support information is an information related to support of a task of the user.
Iwase teaches an information processing apparatus (fig. 3, item 10), comprising: circuitry (fig. 3, item 160; ¶85) configured to: obtain a prediction result of a future context of a user based on a time series of context estimation results of the user (¶185); support the user based on a generated support result (¶185, “the response control unit 270 may control the execution of the macro with the name “Play the music while cooking” even when the user has said “Play music””); wherein the circuitry is further configured to: determine support output based on a present context of the user (the support method depends on the user request such as displaying a friend list or playing music; ¶118; ¶130; ¶134-135; ¶188); and control the support of the support content based on the determined support method and the determined support output (¶134-135; ¶188); wherein the support content indicates transmission of a message to the user (¶210, “an audio output device” – the message is audio output), the support method corresponds to a method to transmit the message to the user via audio (fig. 1; ¶48-50), and the support output is one of a tone of voice or a volume of the audio (¶50); wherein the message is task support information, and the task support information is an information related to support of a task of the user (¶48-50).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the information processing apparatus of Reinen such that the message is task support information, and the task support information is information related to support of a task of the user, as taught by Iwase, so as to provide support for real-world user requests and implementations.
Claims 15 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Reinen and Iwase as applied to claims 11 and 14 above, and further in view of Weldemariam et al. (Pub. No.: US 2020/0387603 A1) hereinafter referred to as Weldemariam.
With respect to Claim 15, claim 14 is incorporated, Reinen and Iwase combined do not teach wherein the message is navigation information.
Weldemariam discloses an information processing apparatus (fig. 2; ¶62), comprising: a supporting unit (fig. 2, item 208; ¶69) configured to support a user (by performing one or more ameliorating actions) based on an estimation result of at least one of a present affect (fig. 2, item 202) and a present context of the user (¶65; ¶67) and a prediction result of at least one of a future affect and a future context of the user (fig. 3; ¶69-70; ¶82); further comprising: a setting unit (fig. 2, item 222; ¶70) configured to set a support content (¶70, “generate one or more user amelioration actions which are actions that can be applied to move the user's state into a lower risk level”) that is a content of support (¶91) by the supporting unit based on the estimation result of at least one of the affect and the context and the prediction result of at least one of the affect and the context (¶69-72), wherein the supporting unit is configured to perform support of the support content set by the setting unit (¶69; ¶81); further comprising: a determining unit (fig. 2, items 204, 206, 212, 242; ¶29; ¶31) configured to determine a support method that is a method of support by the supporting unit based on the estimation result of at least one of the affect and the context (¶39, “the method can detect such a scenario based on contextual situation and/or cognitive heuristics, and take an ameliorative action mid-way to cancel the user activity and activate a protective overlay interface to disallow the user from further writing or conversing”; ¶43, “The system and/or method may detect the user's or another user's impaired state while performing an activity on the device such as deleting a useful app (application) on the device”; ¶44, “the system and/or method may adjust as to serve as multi-purpose protective agent, for example, including protection of one or more robotic devices and various applications. For example, the system and/or method may help prevent a user selecting one or more inappropriate or risky features on an app, for example, a selection related to touch malfunction on a mobile device that includes touch keys mapped to corresponding functions and a touch screen”; ¶45-48), wherein the supporting unit is configured to perform support of the support content by the support method determined by the determining unit (¶43-48); wherein the determining unit (fig. 2, items 204, 206, 212, 242; ¶29; ¶31) is configured to also determine support means that is means of support by the supporting unit based on an estimation result of at least one of the affect and the context (¶43, “The system and/or method may detect the user's or another user's impaired state while performing an activity on the device such as deleting a useful app (application) on the device”; ¶44-48), and the supporting unit is configured to perform support of the support content (¶43, “the system and/or method can create an overlay GUI encapsulating the app or the set of apps (or like framework) so that the app or set of apps is protected or locked and cannot be deleted, used, modified in any way”; ¶44-48) by the support method and the support means determined by the determining unit; wherein the support content (¶17, “The amelioration action may include, but is not limited to, any of:”) is to transmit a message to the user (¶17, via “voice and/or tone being changed (e.g., of a speaking robot)”), the support method is a method of transmitting the message to the user by audio, and the support means is a tone of voice or a volume of the audio (¶17, “voice and/or tone being changed (e.g., of a speaking robot)”; ¶59, “For example, an in-car smart assistant may change the tone or voice responsive to detecting user's frustration during an interaction”); wherein the message is navigation information (¶90, “Examples of workloads and functions which may be provided from this layer include: mapping and navigation 91”; ¶134, “Examples of workloads and functions which may be provided from this layer include: mapping and navigation 91”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date to modify the combined information processing apparatus of Reinen and Iwase such that the message is navigation information, as taught by Weldemariam, so as to provide an alternative real-world implementation.
With respect to Claim 17, claim 11 is incorporated, Reinen and Iwase combined do not teach wherein the support content is content related to a suggestion to the user, the support method is a method of the suggestion to the user via motion of a robot, and the support output is a specific motion of the robot.
Weldemariam discloses an information processing apparatus (fig. 2; ¶62), comprising: a supporting unit (fig. 2, item 208; ¶69) configured to support a user (by performing one or more ameliorating actions) based on an estimation result of at least one of a present affect (fig. 2, item 202) and a present context of the user (¶65; ¶67) and a prediction result of at least one of a future affect and a future context of the user (fig. 3; ¶69-70; ¶82); further comprising: a setting unit (fig. 2, item 222; ¶70) configured to set a support content (¶70, “generate one or more user amelioration actions which are actions that can be applied to move the user's state into a lower risk level”) that is a content of support (¶91) by the supporting unit based on the estimation result of at least one of the affect and the context and the prediction result of at least one of the affect and the context (¶69-72), wherein the supporting unit is configured to perform support of the support content set by the setting unit (¶69; ¶81); further comprising: a determining unit (fig. 2, items 204, 206, 212, 242; ¶29; ¶31) configured to determine a support method that is a method of support by the supporting unit based on the estimation result of at least one of the affect and the context (¶39, “the method can detect such a scenario based on contextual situation and/or cognitive heuristics, and take an ameliorative action mid-way to cancel the user activity and activate a protective overlay interface to disallow the user from further writing or conversing”; ¶43, “The system and/or method may detect the user's or another user's impaired state while performing an activity on the device such as deleting a useful app (application) on the device”; ¶44, “the system and/or method may adjust as to serve as multi-purpose protective agent, for example, including protection of one or more robotic devices and various applications. For example, the system and/or method may help prevent a user selecting one or more inappropriate or risky features on an app, for example, a selection related to touch malfunction on a mobile device that includes touch keys mapped to corresponding functions and a touch screen”; ¶45-48), wherein the supporting unit is configured to perform support of the support content by the support method determined by the determining unit (¶43-48); wherein the determining unit (fig. 2, items 204, 206, 212, 242; ¶29; ¶31) is configured to also determine support means that is means of support by the supporting unit based on an estimation result of at least one of the affect and the context (¶43, “The system and/or method may detect the user's or another user's impaired state while performing an activity on the device such as deleting a useful app (application) on the device”; ¶44-48), and the supporting unit is configured to perform support of the support content (¶43, “the system and/or method can create an overlay GUI encapsulating the app or the set of apps (or like framework) so that the app or set of apps is protected or locked and cannot be deleted, used, modified in any way”; ¶44-48) by the support method and the support means determined by the determining unit; wherein the support content is content related to a suggestion to the user, the support method is a method of the suggestion to the user via motion of a robot, and the support output is a specific motion of the robot (¶51, “A bartender robot detects a customer misbehaving and shouting at it at the table, and the robot determines it is better to increase the distance at which it is programmed to approach the table”; ¶58, “For example, a bartender robot should prefer to keep attending a customer who is misbehaving by increasing the robot's serving distance and modulating a voice tone rather than to stop serving the customer completely …The robot may also send a signal to and consult with a local or remote artificial intelligence (AI) service for suggestions on how to best handle an evolving situation. This AI service may make use of a “protection database” storing suggestions on how to handle different challenges”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date to modify the combined information processing apparatus of Reinen and Iwase such that the support content is content related to a suggestion to the user, the support method is a method of the suggestion to the user via motion of a robot, and the support output is a specific motion of the robot, as taught by Weldemariam, so as to provide an alternative real-world implementation for support.
Allowable Subject Matter
Claim 3 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter: Yamamoto et al. (Pub. No.: US 2024/0386319), hereinafter referred to as Yamamoto, teaches using a mood series prediction model to calculate and output the current mood (Valence, Arousal) from the behavior feature data and the preprocessed mood series data (¶122), and teaches predicting future behavior as a mean and variance, representing the reliability of the result as the variance; however, none of the prior art teaches wherein the circuitry is further configured to obtain a reliability of the obtained prediction result of the future affect, and the reliability of the obtained prediction result of the future affect is obtained based on the first estimation result of the present affect and at least one of the first estimation result of the present context or the obtained prediction result of the future context, in combination with all the base limitations.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Yamamoto et al. (Pub. No.: US 2024/0386319).
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DONNA V Bocar whose telephone number is (571)272-0955. The examiner can normally be reached Monday - Friday 8:30am to 5pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amr A Awad can be reached at (571)272-7764. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DONNA V Bocar/ Examiner, Art Unit 2621