Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
This is the third Office Action on the merits. Claims 1-3 and 6-17 are currently pending. Claims 1-3 and 6-16 are currently amended, claims 4 and 5 are canceled, and claim 17 is new.
Priority
Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application No. JP2020-162193, filed on 03/15/2023.
Response to Amendment
The amendment filed on 02/19/2026 has been entered.
Applicant’s amendments to the claims have been acknowledged.
Response to Arguments
Applicant’s arguments, see pages 9-10, filed 02/19/2026, with respect to the rejection(s) of claims 1 and 16 under 35 U.S.C. 103 have been fully considered and are persuasive. Regarding claims 1 and 16 rejected under 35 U.S.C. 103, Applicant argues that Liu and Fu do not teach, suggest, or render obvious at least the features of "the set of expression contents comprises a first expression for the robot device and a second expression for an external device ... each of the first expression and the second expression is associated to form a combined expression ... control, based on the generated instruction signal, the robot device and the external device to perform a plurality of operations ... the first operation and the second operation are in synchronization to execute the combined expression." Examiner found Applicant’s argument persuasive. Therefore, the rejections have been withdrawn. However, upon further consideration, a new ground(s) of rejection is made over Liu et al. (CN108115695A) in view of Shukla (US20190202063A1), hereinafter Liu and Shukla, as discussed below. Claims 2, 3, 6-15, and 17 each depend, directly or indirectly, from claim 1 and are rejected under 35 U.S.C. 103 by virtue of that dependency, as discussed below.
Applicant’s arguments, see page 10, filed 02/19/2026, with respect to the rejections of dependent claims 2, 3, 6-8, 10, and 13 have been fully considered and are not persuasive.
Applicant’s arguments, see page 10, filed 02/19/2026, with respect to the rejection of dependent claim 9 have been fully considered and are not persuasive.
Applicant’s arguments, see page 11, filed 02/19/2026, with respect to the rejections of dependent claims 11 and 12 have been fully considered and are not persuasive.
Applicant’s arguments, see page 11, filed 02/19/2026, with respect to the rejections of dependent claims 14 and 15 have been fully considered and are not persuasive.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-3, 6-8, 10, 13, and 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Liu in view of Shukla.
Regarding claim 1, Liu teaches an information processing system ("This invention relates to the field of robotics, and in particular to an emotional color expression system", [0002]) comprising: an input acceptor configured to receive an input, wherein the received input corresponds to a user action of a first user associated with a robot device ("interactive information input module is used to obtain user operation data", [0022]); and a processor ("In addition, the functional units in the various embodiments of the present invention can be integrated into one processing unit, or each unit can exist physically separately, or two or more units can be integrated into one unit", [0060]) configured to: determine a set of expression contents based on the received input, wherein the set of expression contents comprises a first expression for the robot device ("the robot's emotions are expressed through the head LED panel, the limb LED light strips, limb movements, and voice output", [0033]); generate an instruction signal ("generate a control signal for robot's emotional expression, and transmit the control signal to the robot emotional expression module ... used to express corresponding emotions according to the control signal", [0022], [0049]); and the first operation is based on the first expression ("At this moment, the robot's head LED panel displays a happy expression, the LED panel and the LED strips on its limbs turn green, and the voice output says, "You've answered three questions correctly in a row! Great job!", [0049], which shows multiple expressions being expressed to form a combined expression).
However, Liu does not teach wherein the set of expression contents comprises a second expression for an external device, the external device is different from the robot device, and each of the first expression and the second expression is associated to form a combined expression; control, based on the generated instruction signal, the robot device and the external device to perform a plurality of operations, wherein the plurality of operations includes a first operation and a second operation, the second operation is based on the second expression, and the first operation and the second operation are in synchronization to execute the combined expression.
Shukla, in the same field of endeavor, teaches wherein the set of expression contents comprises a second expression for an external device ("the agent device or the user interaction engine 140 may control the client running on the user device to render the speech of the response to the user", [0059]), the external device is different from the robot device (FIG. 1 shows the User Device and the Agent Device are different), and each of the first expression and the second expression is associated to form a combined expression ("A response may also be delivered in speech coupled with a particular nonverbal expression as a part of the delivered response, such as a nod, a shake of the head, a blink of the eyes, or a shrug. There may be other forms of deliverable form of a response that is acoustic but non verbal, e.g., a whistle", [0090]; expressions can be split into verbal and non-verbal expressions to form a "combined expression"); control, based on the generated instruction signal, the robot device and the external device to perform a plurality of operations ("the user interaction engine 140 may control the state and the flow of conversations between users and agent devices. The flow of each of the conversations may be controlled based on different types of information associated with the conversation, e.g., information about the user engaged in the conversation…the user interaction engine 140 may be configured to obtain various sensory inputs such as, and without limitation, audio inputs, image inputs, haptic inputs, and/or contextual inputs, process these inputs, formulate an understanding of the human conversant, accordingly generate a response based on such understanding, and control the agent device and/or the user device to carry out the conversation based on the response", [0061]), wherein the plurality of operations includes a first operation and a second operation ("the user interaction engine 140 may be configured to obtain various sensory inputs such as, and without limitation, audio inputs, image inputs, haptic inputs, and/or contextual inputs, process these inputs, formulate an understanding of the human conversant, accordingly generate a response based on such understanding, and control the agent device and/or the user device to carry out the conversation based on the response", [0061]), the first operation is different from the second operation ("A response may also be delivered in speech coupled with a particular nonverbal expression as a part of the delivered response, such as a nod, a shake of the head, a blink of the eyes, or a shrug. There may be other forms of deliverable form of a response that is acoustic but non verbal, e.g., a whistle", [0090]), the second operation is based on the second expression ("the agent device or the user interaction engine 140 may control the client running on the user device to render the speech of the response to the user", [0059]), and the first operation and the second operation are in synchronization to execute the combined expression ("The control signals generated by the response control signal generator 1910 may then be used by the response delivery unit 1920 to deliver, at 1965, the response to the user in one or more modalities based on the control signals, as discussed herein", [0156]; it is implicit that the operations are synchronized since controls for the first and second expressions are generated at the same step, "accordingly generate a response based on such understanding, and control the agent device and/or the user device to carry out the conversation based on the response", [0061]).
Therefore, one of ordinary skill in the art, before the effective filing date of the claimed invention, would have modified the teaching of Liu with the teachings of Shukla to extend the robot’s expressions beyond a single robot component, with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification in order to overcome the limitations of machine-based dialogue systems and improve the overall user experience and emotional response (Shukla, [0005]).
Regarding claim 2, modified Liu, as discussed above, further teaches an operation control unit ("interaction information recognition module" 11, [0031]) configured to: generate a control signal; and control, based on the generated control signal, the robot device to execute the second operation ("...generate a control signal for robot's emotional expression, and transmit the control signal to the robot emotional expression module... used to express corresponding emotions according to the control signal", [0022]).
Regarding claim 3, modified Liu, as discussed above, further teaches wherein the second operation ("If current emotion score is 8,…robot should provide encouraging feedback…if current emotion score is 2…Robot should give affirmative and happy expression to encourage the user", [0049]) includes at least one of a variation in an attitude of the robot device, an orientation of the robot device, or a position of the robot device ("At the same time, the robot's upper limbs make an encouraging gesture of thumbs up", [0049]).
Regarding claim 6, modified Liu, as discussed above, further teaches wherein the robot device is configured to identify an emotion of the first user based on the received input ("the emotional color recognition module captures an image of a user smiling taken by a camera, or detects sentences with a positive emotion such as "I'm happy" or "I'm very happy" in the voice input", [0046]).
Regarding claim 7, modified Liu teaches all limitations of claim 1 as stated above, and additionally teaches wherein the input is received via a path different from a path associated with an interaction between the robot device and the first user ("Optionally, it is also includes a communication module for performing Internet of Things (IoT) functions, the communication module being connected to the main control device", [0019]).
Regarding claim 8, modified Liu teaches all limitations of claim 7 as discussed above.
However, Liu does not teach a user situation determination unit configured to determine a situation that is associated with an environment of the first user, wherein each of the set of expression contents represents the situation.
Shukla, in the same field of endeavor, teaches a user situation determination unit configured to determine a situation that is associated with an environment of the first user, wherein each of the set of expression contents represents the situation ("the sensor info based profile selector 1520 is configured to estimate the state of the user well as the environment of the dialogue based on process multimodal sensor input such as visual and audio inputs", [0131]; "After the user state and the environment are estimated, the dialogue setting based profile selector 1650 selects, at 1672, a profile that is considered to be appropriate to the user in the current dialogue environment…a child user may be known to prefer a soft and soothing female voice and a profile is selected that enables speech delivery in a soft and soothing female voice", [0136]-[0137], showing that there are different profiles that change the expressions depending on the environment).
Therefore, one of ordinary skill in the art, before the effective filing date of the claimed invention, would have modified the teaching of Liu with the teachings of Shukla to incorporate a dedicated unit configured to determine a situation associated with the user’s environment such that the set of expression contents represents that determined situation, with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification in order to improve and adapt the response of the robot by including environmental factors that affect a user’s emotions (Shukla, [0137]).
Regarding claim 10, modified Liu, as discussed above, further teaches a user attribute identification unit ("interactive information recognition module…") configured to identify an attribute of the first user ("…responsible for collecting and analyzing the data of each information input interface", [0041]; "user's current mood…", [0043]), wherein the processor is further configured to change at least one expression content from the set of expression contents based on the identified attribute ("…generating control signals for the robot's emotional expression...joy, anger, sorrow, happiness and peace", [0041]).
Regarding claim 13, modified Liu, as discussed above, further teaches an operation mode setting unit ("emotional color recognition function is jointly completed by the emotional information input module 4, the emotional color recognition module 13 and the user emotional expression module 51", [0033]) configured to set an operation mode of the robot device, wherein the processor is further configured to change at least one expression content from the set of expression contents based on the set operation mode ("User emotional color recognition function: The user's current mood is judged through the user's information input, and different language expressions and conversation contents are selected accordingly", [0042]-[0043]).
Regarding claim 16, Liu teaches an information processing device ("This invention relates to the field of robotics, and in particular to an emotional color expression system and robot", [0002]), comprising: an input acceptor configured to receive an input, wherein the received input corresponds to a user action of a user associated with a robot device ("interactive information input module is used to obtain user operation data", [0022]); a processor ("In addition, the functional units in the various embodiments of the present invention can be integrated into one processing unit, or each unit can exist physically separately, or two or more units can be integrated into one unit", [0060]) configured to: determine a set of expression contents based on the received input ("emotional expression"), wherein the set of expression contents comprises a first expression for the robot device ("the robot's emotions are expressed through the head LED panel, the limb LED light strips, limb movements, and voice output", [0033]); generate an instruction signal ("generate a control signal for robot's emotional expression, and transmit the control signal to the robot emotional expression module ... used to express corresponding emotions according to the control signal", [0022], [0049]); and the first operation is based on the first expression ("At this moment, the robot's head LED panel displays a happy expression, the LED panel and the LED strips on its limbs turn green, and the voice output says, "You've answered three questions correctly in a row! Great job!", [0049], which shows multiple expressions being expressed to form a combined expression); and an operation control unit ("interaction information recognition module 11", [0031]) configured to: generate a control signal; and control, based on the generated control signal, the robot device to execute the second operation ("generate a control signal for robot's emotional expression, and transmit the control signal to the robot emotional expression module… used to express corresponding emotions according to the control signal", [0022], [0049]).
However, Liu does not teach wherein the set of expression contents comprises a second expression for an external device, the external device is different from the robot device, and each of the first expression and the second expression is associated to form a combined expression; control, based on the generated instruction signal, the robot device and the external device to perform a plurality of operations, wherein the plurality of operations includes a first operation and a second operation, the second operation is based on the second expression, and the first operation and the second operation are in synchronization to execute the combined expression.
Shukla, in the same field of endeavor, teaches wherein the set of expression contents comprises a second expression for an external device ("the agent device or the user interaction engine 140 may control the client running on the user device to render the speech of the response to the user", [0059]), the external device is different from the robot device (FIG. 1 shows the User Device and the Agent Device are different), and each of the first expression and the second expression is associated to form a combined expression ("A response may also be delivered in speech coupled with a particular nonverbal expression as a part of the delivered response, such as a nod, a shake of the head, a blink of the eyes, or a shrug. There may be other forms of deliverable form of a response that is acoustic but non verbal, e.g., a whistle", [0090]; expressions can be split into verbal and non-verbal expressions to form a "combined expression"); control, based on the generated instruction signal, the robot device and the external device to perform a plurality of operations ("the user interaction engine 140 may control the state and the flow of conversations between users and agent devices. The flow of each of the conversations may be controlled based on different types of information associated with the conversation, e.g., information about the user engaged in the conversation…the user interaction engine 140 may be configured to obtain various sensory inputs such as, and without limitation, audio inputs, image inputs, haptic inputs, and/or contextual inputs, process these inputs, formulate an understanding of the human conversant, accordingly generate a response based on such understanding, and control the agent device and/or the user device to carry out the conversation based on the response", [0061]), wherein the plurality of operations includes a first operation and a second operation ("the user interaction engine 140 may be configured to obtain various sensory inputs such as, and without limitation, audio inputs, image inputs, haptic inputs, and/or contextual inputs, process these inputs, formulate an understanding of the human conversant, accordingly generate a response based on such understanding, and control the agent device and/or the user device to carry out the conversation based on the response", [0061]), the first operation is different from the second operation ("A response may also be delivered in speech coupled with a particular nonverbal expression as a part of the delivered response, such as a nod, a shake of the head, a blink of the eyes, or a shrug. There may be other forms of deliverable form of a response that is acoustic but non verbal, e.g., a whistle", [0090]), the second operation is based on the second expression ("the agent device or the user interaction engine 140 may control the client running on the user device to render the speech of the response to the user", [0059]), and the first operation and the second operation are in synchronization to execute the combined expression ("The control signals generated by the response control signal generator 1910 may then be used by the response delivery unit 1920 to deliver, at 1965, the response to the user in one or more modalities based on the control signals, as discussed herein", [0156]; it is implicit that the operations are synchronized since controls for the first and second expressions are generated at the same step, "accordingly generate a response based on such understanding, and control the agent device and/or the user device to carry out the conversation based on the response", [0061]).
Therefore, one of ordinary skill in the art, before the effective filing date of the claimed invention, would have modified the teaching of Liu with the teachings of Shukla to extend the robot’s expressions beyond a single robot component, with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification in order to overcome the limitations of machine-based dialogue systems and improve the overall user experience and emotional response (Shukla, [0005]).
Regarding claim 17, modified Liu, as discussed above, further teaches wherein each expression content of the set of expression contents represents an emotion of one of the robot device or the first user ("Optionally, the emotional expression includes five states: joy, anger, sorrow, happiness, and peace", [0016]).
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Liu and Shukla, further in view of Wang (CN107378971A), hereinafter Wang.
Regarding claim 9, modified Liu teaches all limitations of claim 8 as stated above.
However, modified Liu does not teach wherein each expression content of the set of expression contents includes alarm information.
Wang, in the same field of endeavor, teaches wherein each expression content of the set of expression contents further includes alarm information ("expression control module 500 is used to perform different expression actions… also responsible for collecting data such as temperature, humidity and PM 2.5 in the working environment. An alarm is issued when an abnormality occurs in the working environment", [0050]).
Therefore, one of ordinary skill in the art, before the effective filing date of the claimed invention, would have modified the teaching of modified Liu with the alarm of Wang, with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification so that, in the event of emergencies, environmental abnormalities, time constraints, and the like, the robot would be able to express such an alert.
Claims 11-12 are rejected under 35 U.S.C. 103 as being unpatentable over Liu and Shukla, further in view of Kuroki et al. (JP2001260063A), hereinafter Kuroki.
Regarding claim 11, modified Liu teaches all limitations of claim 10 as stated above, including the user attribute identification unit ("interactive information recognition module", 11) that identifies an attribute of the first user ("responsible for collecting and analyzing the data of each information input interface", [0041]; "user's current mood", [0043]).
However, modified Liu does not teach wherein a second user receives an expression associated with the first expression, and the second user is different from the first user.
Kuroki, in the same field of endeavor, teaches wherein a second user receives an expression associated with the first expression, and the second user is different from the first user ("robot...can communicate with external devices…the external device referred to here may be another robot", [0020]; implicit that the functions performed by the first robot can be replicated by a second user and another robot).
Therefore, one of ordinary skill in the art, before the effective filing date of the claimed invention, would have combined the user attribute identification unit of modified Liu, which identifies an attribute of the first user, with the teaching of Kuroki in which the input is performed by a first user and an expression is received by a second user through another device, to yield predictable results. One of ordinary skill in the art would have combined these elements to communicate the identified attributes of the first user to the second user, increasing awareness of the first user’s identified attributes. For example, the robot may identify that a child is in distress and then communicate this expression to a parent (the second user).
Regarding claim 12, modified Liu teaches all limitations of claim 10 as stated above, including the user attribute identification unit ("interactive information recognition module" 11) that identifies an attribute of the first user ("responsible for collecting and analyzing the data of each information input interface", [0041]; "user's current mood", [0043]).
However, modified Liu does not teach wherein a second user receives an expression associated with the first expression, the second user is different from the first user, and the user attribute identification unit is further configured to identify an attribute of the second user.
Kuroki, in the same field of endeavor, teaches wherein a second user receives an expression associated with the first expression, the second user is different from the first user, and the user attribute identification unit is further configured to identify an attribute of the second user ("robot...can communicate with external devices…the external device referred to here may be another robot", [0020]; implicit that the functions performed by the first robot can be replicated by the second user and robot).
Therefore, one of ordinary skill in the art, before the effective filing date of the claimed invention, would have combined the user attribute identification unit of modified Liu, which identifies an attribute of the first user, with the teaching of Kuroki in which the input is performed by a first user and an expression is received by a second user through another device, along with the implication that the external device can replicate the robot’s functions to identify and express the attributes of the second user, to yield predictable results. One of ordinary skill in the art would have combined these elements to communicate the identified attributes of the first user to the second user (increasing awareness of the first user’s identified attributes) and to identify and express the attribute of the second user in reaction to receiving the expression of the first user. For example, the robot may identify that a child is in distress and communicate this expression to a parent (the second user), and then identify the reaction of the parent, which may influence the external device’s or robot’s course of action, such as comforting the child in distress.
Claims 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Liu and Shukla, further in view of Kim et al. (US20180136615A1), hereinafter Kim.
Regarding claim 14, modified Liu teaches all limitations of claim 1 as stated above.
However, modified Liu does not teach wherein the processor is further configured to generate, as the instruction signal, an inquiry associated with a specific device.
Kim, in the same field of endeavor, teaches wherein the processor ("communication device" 133) is further configured to generate, as the instruction signal, an inquiry associated with a specific device ("transmits receives data by being connected with the external server 200", [0088]).
Therefore, one of ordinary skill in the art, before the effective filing date of the claimed invention, would have combined the elements of modified Liu with the communication device of Kim to yield predictable results. One of ordinary skill in the art would have combined these elements so that the robot device can communicate with another device.
Regarding claim 15, modified Liu teaches all limitations of claim 14 as stated above.
However, modified Liu does not teach wherein the processor is further configured to generate the inquiry voice signal.
Kim, in the same field of endeavor, teaches wherein the processor is further configured to generate the inquiry voice signal (implicit that the "received data" originates from a voice received through the "voice input unit" 109 and passed through the "concierge service processor" 115 to the "communication device" 133).
Therefore, one of ordinary skill in the art, before the effective filing date of the claimed invention, would have combined the elements of modified Liu with the voice input unit of Kim to yield predictable results. One of ordinary skill in the art would have combined these elements such that the inquiry made by voice can be communicated to another device.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ABIGAIL LEE ESPINOZA whose telephone number is (571)272-4889. The examiner can normally be reached Monday - Friday 9:00 am - 5:00 pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Adam Mott can be reached at (571) 270-5376. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
ABIGAIL LEE ESPINOZA
Examiner
Art Unit 3657
/ADAM R MOTT/Supervisory Patent Examiner, Art Unit 3657