DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 21-23 and 31-33 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Stiehl et al. (US 2009/0055019).
Regarding claim 21, Stiehl et al. teaches a method for operating an animal device (para. 0111, “enable the Huggable to function in multiple modes: as a fully autonomous robot”), comprising:
receiving, by the animal device, one or more inputs from a user (paragraphs 0160-0163 and 0228-0234; Fig. 24A);
accessing, by the animal device, a stored set of possible scenes for the animal device to perform (paragraphs 0236-0237; Fig. 24B, behavior system 2414);
determining, by the animal device, a scene type based on the one or more inputs (paragraphs 0144-0145), wherein the animal device maps the one or more inputs to the scene type based on properties of the one or more inputs (paragraphs 0171-0172, and 0257; Fig. 22C, classification data 2274), and the scene type is for the animal device to perform in response to different inputs such that the animal device is configured to perform different scenes in response to the one or more inputs (paragraphs 0296, 0304, and 0392);
selecting, by the animal device, one or more scenes from the plurality of scenes of the scene type, wherein each selected scene comprises scene parameters with instructions for operating a plurality of actuators of the animal device to perform physical output actions for the scene (Figures 24B-24C; paragraphs 0235-0238, and 0304); and
operating, by the animal device, the plurality of actuators according to instructions in the scene parameters of the selected scenes to perform the physical output actions of each of the selected scenes in a sequence of scenes (paragraphs 0240-0241, 0296, and 0306).
Regarding claim 22, Stiehl et al. teaches the method according to claim 21 as stated above wherein the one or more inputs comprise sensor input data (para. 0057), the sensor input data comprising at least one of: touch sensor data (paragraphs 0144-0149; Fig. 9, touch sensor 907; Figures 3-4), audio sensor data (paragraphs 0156 and 0182, “microphones”; Fig. 20), light sensor data, mechanical actuator sensor data, or biometric sensor data.
Regarding claim 23, Stiehl et al. teaches the method according to claim 21 as stated above wherein the one or more inputs comprise a petting input (para. 0171), the method further comprising:
detecting, by a touch sensor of the animal device, a set of touch inputs received by the animal device over a time period (paragraphs 0169-0172); and
selecting the one or more scenes based on the petting input (para. 0257).
Regarding claim 31, Stiehl et al. teaches a system (Abstract) comprising:
a set of sensors for receiving inputs from a user (paragraphs 0228-0234);
a plurality of mechanical actuators for causing an animal device to perform physical output actions (para. 0241; Fig. 24C); and
a non-transitory computer-readable storage medium storing instructions that when executed cause a processor (Fig. 24B, “embedded PC”) to:
receive one or more inputs from the user via the set of sensors (paragraphs 0160-0163; Fig. 24A);
access a stored set of possible scenes for the animal device to perform (paragraphs 0236-0237; Fig. 24B, behavior system 2414);
determine a scene type based on the one or more inputs (paragraphs 0144-0145), wherein the animal device maps the one or more inputs to the scene type based on properties of the one or more inputs (paragraphs 0171-0172, and 0257; Fig. 22C, classification data 2274), and the scene type is for the animal device to perform in response to different inputs such that the animal device is configured to perform different scenes in response to the one or more inputs (paragraphs 0296, 0304, and 0392);
select one or more scenes from the plurality of scenes of the scene type, wherein each selected scene comprises scene parameters with instructions for operating a plurality of actuators of the animal device to perform physical output actions for the scene (Figures 24B-24C; paragraphs 0235-0238, and 0304); and
operate the plurality of actuators according to instructions in the scene parameters of the selected scenes, to perform the physical output actions of each of the selected scenes in a sequence of scenes (paragraphs 0240-0241, 0296, and 0306).
Regarding claim 32, Stiehl et al. teaches the system according to claim 31 as stated above wherein the one or more inputs comprise sensor input data (para. 0057), the sensor input data comprising at least one of: touch sensor data (paragraphs 0144-0149; Fig. 9, touch sensor 907; Figures 3-4), audio sensor data (paragraphs 0156 and 0182, “microphones”; Fig. 20), light sensor data, mechanical actuator sensor data, or biometric sensor data.
Regarding claim 33, Stiehl et al. teaches the system according to claim 31 as stated above wherein the one or more inputs comprise a petting input (para. 0171), the instructions further causing the processor to:
detect, by a touch sensor of the animal device, a set of touch inputs received at the animal device over a time period (paragraphs 0169-0172); and
select the one or more scenes based on the petting input (para. 0257).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 24 and 34 are rejected under 35 U.S.C. 103 as being unpatentable over Stiehl et al. in view of Bucci et al. (US 2018/0311569).
Regarding claim 24, Stiehl et al. teaches the method according to claim 23 as stated above. Stiehl et al. further teaches wherein the scene type is based on the petting input (para. 0152); however, Stiehl et al. fails to specifically disclose that the plurality of scenes of the scene type comprises a fast petting input and a slow petting input.
Bucci et al. teaches an analogous robotic device and method wherein the plurality of scenes of the scene type comprises a fast petting input and a slow petting input (para. 0022; para. 0045, “direction of movement and the speed of movement could be derived”; para. 0052; Fig. 7).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the method of Stiehl et al. with the slow and fast petting inputs of Bucci et al. Doing so provides the robotic device with the ability to learn temporal patterns and therefore discriminate user inputs (Bucci et al., para. 0022).
Regarding claim 34, Stiehl et al. teaches the system according to claim 33 as stated above. Stiehl et al. further teaches wherein the scene type is based on the petting input (para. 0152); however, Stiehl et al. fails to specifically disclose that the plurality of scenes of the scene type comprises a fast petting input and a slow petting input.
Bucci et al. teaches an analogous robotic device wherein the plurality of scenes of the scene type comprises a fast petting input and a slow petting input (para. 0022; para. 0045, “direction of movement and the speed of movement could be derived”; para. 0052; Fig. 7).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the system of Stiehl et al. with the slow and fast petting inputs of Bucci et al. Doing so provides the robotic device with the ability to learn temporal patterns and therefore discriminate user inputs (Bucci et al., para. 0022).
Claims 25-26 and 35-36 are rejected under 35 U.S.C. 103 as being unpatentable over Stiehl et al. in view of el Kaliouby et al. (US 2018/0144649).
Regarding claim 25, Stiehl et al. teaches the method according to claim 21 as stated above. Stiehl et al. fails to teach wherein the one or more inputs comprise a voice recognition event input, the method further comprising: detecting, by an audio sensor of the animal device, the voice recognition input based on audio input received at the audio sensor; and selecting the one or more scenes based on the voice recognition input.
el Kaliouby et al. teaches an analogous robotic pet-like device and method wherein the one or more inputs comprise a voice recognition event input (Fig. 1, voice recognition 150), the method further comprising:
detecting, by an audio sensor of the animal device, the voice recognition input based on audio input received at the audio sensor (para. 0046); and
selecting the one or more scenes based on the voice recognition input (Fig. 1, provide stimuli by toy 190; para. 0044).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the method of Stiehl et al. with the voice recognition feature of el Kaliouby et al. The voice recognition input may provide context for the device; for instance, the voice data may be evaluated to determine the user’s cognitive state, emotional state, or mood based on the individual’s prosody, vocal register, pitch, speech rate, and/or loudness (el Kaliouby et al., para. 0046).
Regarding claim 26, Stiehl et al. in view of el Kaliouby et al. teaches the method according to claim 25 as stated above wherein operating the plurality of actuators to perform output actions of each of the selected scenes (Stiehl et al., para. 0241) comprises:
performing a mechanical output action (Stiehl et al., para. 0304) and an audio output (Stiehl et al., para. 0322, “the Huggable performs an autonomous behavior such as "laughing" in response to the gesture”) action based on the one or more scenes.
Regarding claim 35, Stiehl et al. teaches the system according to claim 31 as stated above. Stiehl et al. fails to teach wherein the one or more inputs comprise a voice recognition input, the instructions further causing the processor to: detect, by an audio sensor of the animal device, the voice recognition input based on audio input received at the audio sensor; and select the one or more scenes based on the voice recognition input.
el Kaliouby et al. teaches an analogous robotic pet-like device wherein the one or more inputs comprise a voice recognition input (Fig. 1, voice recognition 150), the instructions further causing the processor to:
detect, by an audio sensor of the animal device, the voice recognition input based on audio input received at the audio sensor (para. 0046); and
select the one or more scenes based on the voice recognition input (Fig. 1, provide stimuli by toy 190; para. 0044).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the system of Stiehl et al. with the voice recognition feature of el Kaliouby et al. The voice recognition input may provide context for the device; for instance, the voice data may be evaluated to determine the user’s cognitive state, emotional state, or mood based on the individual’s prosody, vocal register, pitch, speech rate, and/or loudness (el Kaliouby et al., para. 0046).
Regarding claim 36, Stiehl et al. in view of el Kaliouby et al. teaches the system according to claim 35 as stated above wherein the instructions for operating the plurality of actuators (Stiehl et al., para. 0241) further cause the processor to:
perform a mechanical output action (Stiehl et al., para. 0304) and an audio output action (Stiehl et al., para. 0322, “the Huggable performs an autonomous behavior such as "laughing" in response to the gesture”) based on the one or more scenes.
Claims 27 and 37 are rejected under 35 U.S.C. 103 as being unpatentable over Stiehl et al. in view of Fong et al. (US 2009/0055019).
Regarding claim 27, Stiehl et al. teaches the method according to claim 21 as stated above. Stiehl et al. further teaches monitoring for an input at a set of sensors of the animal device (para. 0145). However, Stiehl et al. does not specifically disclose determining a lack of an input after a predetermined time period threshold; and determining a sleep scene based on the lack of input.
Fong et al. teaches an analogous animal-like device and method further comprising determining a lack of an input after a predetermined time period threshold; and determining a sleep scene based on the lack of input (paragraphs 0075-0077; Fig. 10, sleep mode in step 302).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the method of Stiehl et al. with the sleep mode of Fong et al. This modification allows the animal device to enter an idle-like mode that is designed to await user input before initiating mechanical, vocal, or auditory responses (Fong et al., paragraphs 0075-0077).
Regarding claim 37, Stiehl et al. teaches the system according to claim 31 as stated above. Stiehl et al. further teaches instructions that cause the processor (Fig. 24B, “embedded PC”) to: monitor for an input at a set of sensors of the animal device (para. 0145). However, Stiehl et al. does not specifically disclose instructions that cause the processor to: determine a lack of an input after a predetermined time period threshold; and determine a sleep scene based on the lack of input.
Fong et al. teaches an analogous animal-like device further comprising instructions that cause the processor to: determine a lack of an input after a predetermined time period threshold; and determine a sleep scene based on the lack of input (paragraphs 0075-0077; Fig. 10, sleep mode in step 302).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the system of Stiehl et al. with the sleep setting of Fong et al. This modification allows the animal device to enter an idle-like mode that is designed to await user input before initiating mechanical, vocal, or auditory responses (Fong et al., paragraphs 0075-0077).
Claims 28-30 and 38-40 are rejected under 35 U.S.C. 103 as being unpatentable over Stiehl et al. in view of Hashiguchi et al. (US 2012/0048027).
Regarding claim 28, Stiehl et al. teaches the method according to claim 21 as stated above. Stiehl et al. further teaches operating the plurality of actuators (paragraphs 0236, 0241, and 0296); however, Stiehl et al. does not specifically disclose receiving actuator sensor data during the performance of a first physical output action by the animal device during performance of a scene by the animal device; determining a status of the performance of the first physical output action based on mechanical and actuator sensor data; and operating the plurality of actuators to cause the animal device to perform a second physical output action based on the status of the performance of the first physical output action.
Hashiguchi et al. teaches an analogous robotic device and method further comprising:
receiving actuator sensor data during the performance of a first physical output action by the animal device during performance of a scene by the animal device (para. 0142; Fig. 9, step 1140);
determining a status of the performance of the first physical output action based on mechanical and actuator sensor data (para. 0143; Fig. 9, step 1150; para. 0142, “sensor 1122”); and
operating the plurality of actuators to cause the animal device to perform a second physical output action based on the status of the performance of the first physical output action (para. 0144; Fig. 9, step 1160).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the method of Stiehl et al. with the operational steps of Hashiguchi et al. of receiving actuator sensor data, determining a status of the performance of the first physical output action, and operating the plurality of actuators. This modification incorporates a predetermined operational flow that ensures the device operates in accordance with a specific set of instructions originating from the control system. Furthermore, the sensors allow mechanical data to be collected from the actuators in use (Hashiguchi et al., paragraphs 0128-0151).
Regarding claim 29, a modified Stiehl et al. in view of Hashiguchi et al. teaches the method according to claim 28 as stated above wherein the second physical output action is a modified version of the first physical output action for completion of the scene (Hashiguchi et al., para. 0120).
Regarding claim 30, a modified Stiehl et al. in view of Hashiguchi et al. teaches the method according to claim 28 as stated above wherein determining the status of the performance of the first physical output action comprises:
measuring strain and temperature associated with the mechanical and actuator sensors (Hashiguchi et al., paragraphs 0111 and 0115); and
determining the status of the performance of the first physical output action based on the strain and temperature measurements (Hashiguchi et al., paragraphs 0142-0144).
Regarding claim 38, Stiehl et al. teaches the system according to claim 31 as stated above. Stiehl et al. further teaches instructions (paragraphs 0160-0161, where the embedded PC contains instructions) for operating the plurality of actuators (paragraphs 0236, 0241, and 0296); however, Stiehl et al. does not specifically disclose wherein the instructions for operating the plurality of actuators further cause the processor to: receive actuator sensor data during the performance of a first physical output action by the animal device during performance of a scene by the animal device; determine a status of the performance of the first physical output action based on mechanical and actuator sensor data; and operate the plurality of actuators to cause the animal device to perform a second physical output action based on the status of the performance of the first physical output action.
Hashiguchi et al. teaches an analogous robotic device wherein the instructions for operating the plurality of actuators further cause the processor to:
receive actuator sensor data during the performance of a first physical output action by the animal device during performance of a scene by the animal device (para. 0142; Fig. 9, step 1140);
determine a status of the performance of the first physical output action based on mechanical and actuator sensor data (para. 0143; Fig. 9, step 1150; para. 0142, “sensor 1122”); and
operate the plurality of actuators to cause the animal device to perform a second physical output action based on the status of the performance of the first physical output action (para. 0144; Fig. 9, step 1160).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the system of Stiehl et al. with the instructions of Hashiguchi et al. for the processor to receive actuator sensor data, determine a status of the performance of the first physical output action, and operate the plurality of actuators. This modification incorporates a predetermined operational flow that ensures the device operates in accordance with a specific set of instructions originating from the control system. Furthermore, the sensors allow mechanical data to be collected from the actuators in use (Hashiguchi et al., paragraphs 0128-0151).
Regarding claim 39, a modified Stiehl et al. in view of Hashiguchi et al. teaches the system according to claim 38 as stated above wherein the second physical output action is a modified version of the first physical output action for completion of the scene (Hashiguchi et al., para. 0120).
Regarding claim 40, a modified Stiehl et al. in view of Hashiguchi et al. teaches the system according to claim 38 as stated above wherein the instructions for determining the status of the performance of the first physical output action further cause the processor to:
measure strain and temperature associated with a set of mechanical and actuator sensors of the set of sensors (Hashiguchi et al., paragraphs 0111 and 0115); and
determine the status of the performance of the first physical output action based on the strain and temperature measurements (Hashiguchi et al., paragraphs 0142-0144).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Prescott et al. (2017) discloses a life-like animal robotic companion with a fully programmable mobile developer platform, six senses, and eight degrees of freedom.
Cross et al. (WO 2007/041295) teaches a companion robot that is able to communicate via gestures, voice synthesis, earcons, text, images, and movement.
Hayashi (WO 2018/008323) discloses an autonomous robot coupled to a server and external sensors, and supported with an emotion map that expresses the ups and downs of emotion as the internal state of the robot.
Saito (US 2002/0016128) discloses an interactive toy emulating a dog that is capable of detecting various input stimuli and commanding accompanying movements.
Makino (US 2018/0376069) discloses an animal-like robot comprising an operation unit, an imager, an operation controller, and a determiner; the robot operates in accordance with a motion or an expression.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BROGAN R LANDEEN whose telephone number is (571)272-1390. The examiner can normally be reached Monday - Friday 8:00am - 5:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jennifer Robertson, can be reached at (571) 272-5001. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/B.R.L./Examiner, Art Unit 3791
/JENNIFER ROBERTSON/Supervisory Patent Examiner, Art Unit 3791