DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim 20 is rejected under 35 U.S.C. 101 because the claim recites only a “program,” which is software per se and does not fall within any of the four statutory categories of invention (process, machine, manufacture, or composition of matter). Amending the claim to recite a non-transitory computer readable storage medium storing the program would place the claim within a statutory category.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 2, 11, 13-19, and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Hayashi (U.S. PGPUB 2019/0143528).
Re claims 1, 19, and 20: Hayashi discloses an autonomous mobile body that autonomously moves, the autonomous mobile body comprising:
a recognition unit that recognizes an external stimulus (see Abstract, paragraphs [0243]-[0245]); and
an audio control unit that controls a characteristic and an output timing of audio output in response to the stimulus on a basis of at least one of a behavior of the autonomous mobile body, a state of the autonomous mobile body, a partner who gives the stimulus, a surrounding situation, or a content of the stimulus (see paragraph [0225]: “The motion of delight M1 correlated to the state condition S1 is a unit motion of staring at an owner. The motion of delight M2 correlated to the state condition S2 is a compound motion including unit motions of directing the line of sight toward an owner who is hugging, moving the arm 106, and shaking the head to left and right while emitting a sound of delight.”).
Re claim 2: Hayashi discloses the autonomous mobile body according to claim 1, wherein
the audio control unit controls a characteristic and an output timing of a cry for the stimulus (see paragraph [0206]: “The robot 100 in this embodiment can output wordless speech like an animal's cry, such as a yawn, a shriek, or a purr, using the speech output unit 134.”).
Re claim 11: Hayashi discloses the autonomous mobile body according to claim 1, wherein the audio control unit controls the characteristic and the output timing of the audio on a basis of a reaction pattern of the autonomous mobile body to the stimulus and the content of the stimulus (see paragraph [0231]: “When a predetermined time (hereafter called an “introduction time”) elapses from hugging the robot 100, the robot 100 may express a behavioral aspect of falling asleep (hereafter called a “sleeping expression”). More specifically, when an owner hugs the robot 100, the speech control unit 154 outputs a “yawn”, after which the operation control unit 150 causes the robot 100 to gradually become lethargic by reducing the supply of power to each actuator. Subsequently, the pupil control unit 152 causes the eye image 176 to close. At this time, the speech control unit 154 may regularly output sleeping noises at a low volume.”).
Re claim 13: Hayashi discloses the autonomous mobile body according to claim 11, wherein the recognition unit sets the reaction pattern on a basis of an individual parameter regarding an individual of the autonomous mobile body (see paragraph [0231]: “When a predetermined time (hereafter called an “introduction time”) elapses from hugging the robot 100 . . .”, wherein hugging sets a sleeping/yawning reaction pattern).
Re claim 14: Hayashi discloses the autonomous mobile body according to claim 11, further comprising: a learning unit that sets the reaction pattern on a basis of a result of learning the content of the stimulus given to the autonomous mobile body in a past (see paragraphs [0141]-[0142]: the system uses deep learning to recognize the context of a situation, which is compared to previous content, wherein an action map is selected based upon a recognized content of said situation).
Re claim 15: Hayashi discloses the autonomous mobile body according to claim 1, wherein the audio control unit controls the characteristic of the audio by generating or processing audio data corresponding to the audio (see paragraph [0231]: “The robot 100 includes a speech output unit 134. The speech output unit 134 outputs speech. The robot 100 in this embodiment can output wordless speech like an animal's cry, such as a yawn, a shriek, or a purr, using the speech output unit 134.”).
Re claim 16: Hayashi discloses the autonomous mobile body according to claim 1, wherein the audio control unit controls transition of a sound production mode for switching an algorithm and a parameter used for generating or processing audio data corresponding to the audio on a basis of at least one of the behavior of the autonomous mobile body, the state of the autonomous mobile body, the partner who gives the stimulus, the surrounding situation, or the content of the stimulus (see paragraph [0231]: the audio control unit switches the functioning of the robot to generate audio corresponding to the surrounding situation or the content of the stimulus, wherein one of the parameters used is pleasantness; see paragraph [0200]).
Re claim 17: Hayashi discloses the autonomous mobile body according to claim 1, further comprising: an internal state control unit that controls transition of an internal state that is included in the state of the autonomous mobile body and is an inner state of the autonomous mobile body, wherein
the audio control unit controls the characteristic and the output timing of the audio on a basis of the internal state of the autonomous mobile body (see paragraphs [0140]-[0141]).
Re claim 18: Hayashi discloses the autonomous mobile body according to claim 1, wherein the recognition unit further recognizes at least one of the state of the autonomous mobile body, the partner who gives the stimulus, or the surrounding situation (see paragraph [0200]).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Hayashi in view of Richter (U.S. PGPUB 2009/0209170).
Re claim 6: Hayashi fails to disclose, with respect to the autonomous mobile body according to claim 2, wherein the audio control unit continues outputting the cry until a first time elapses while the stimulus continues, and stops outputting the cry or attenuates the cry after the first time elapses. However, Richter teaches a continuous stimulus accompanied by a crying output that changes, after a predetermined period of time as the stimulus continues, to a more joyful audio output (see paragraph [0057]: “For example, it can be determined whether a playing child has tickled the doll's foot once, twice, three times or more times. Independent of the evaluation of the monotony analyzer M the control register can now made a kind of “intensity evaluation” and control the speech output accordingly. A light tickle will cause the doll to giggle. Longer tickling will cause the doll to laugh. If the tickling continues, the doll will squeal for joy until with continued use he will start to whine and finally begin to cry and scream. If the monotonous attitude is changed, for example stroking the cheeks, the same process would run in reverse until the doll calms down and is again smiling cheerfully.”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the audio output of Hayashi to change said audio over the course of a continuing input stimulus, as taught by Richter, in order to reflect human input that calms an agitated being, wherein the being is taken from a crying state to a calmer state.
Allowable Subject Matter
Claims 3-5, 7-10, and 12 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to REGINALD A RENWICK whose telephone number is (571)270-1913. The examiner can normally be reached Monday-Friday 11am-7pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kang Hu can be reached at (571)270-1344. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
REGINALD A. RENWICK
Primary Examiner
Art Unit 3714
/REGINALD A RENWICK/Primary Examiner, Art Unit 3715