DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Status of Claims
Claims 10-24 are currently pending and are examined herein. Claims 1-9 are canceled. Claims 10 and 13-18 are amended. Claims 19-24 are new.
Joint Inventors
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 26 December 2025 has been entered.
Response to Amendment / Remarks
Any reference to the prior office action refers to the final rejection dated 24 September 2025.
Applicant’s arguments, filed 26 December 2025, with respect to the rejections under 35 U.S.C. 101 from the prior office action have been fully considered. Applicant’s argument that “…rotation caused by the motor is changed based on the set emotion data…” constitutes a practical application is persuasive. Therefore, the rejections under 35 U.S.C. 101 from the prior office action have been withdrawn. Applicant’s other arguments have been considered but are moot.
Applicant's arguments, filed 26 December 2025, that U.S. Pub. No. 2018/0229134 (hereinafter, Pascale) does not teach “the playback speed of a portion of the sound data corresponding to a portion of a desinence of the predetermined sound is changed” are not persuasive. Pascale teaches (see at least [0269]-[0271]) “The entire pre-recorded sound…is played back…”; therefore, the entire pre-recorded sound is “a portion of the sound data corresponding to a portion of a desinence of the predetermined sound”. Applicant’s other arguments with respect to the prior art have been considered but are moot because the new ground of rejection (see below) does not rely on any reference applied in the prior rejection of record for any other teaching or matter specifically challenged in the arguments.
Information Disclosure Statement
The information disclosure statement submitted 28 January 2026 has been considered by the examiner.
Claim Objections
The claims are objected to because of the following informalities:
Claim 14: “incudes” should be “includes”.
Claims 14 and 15: “the sound output processing” should be “the control processing”.
Claim 24: “The robot according to Claim 23” should be “The recording medium according to Claim 23”.
Appropriate corrections are required.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 10-24 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pub. No. 2018/0143645 (Lee et al., hereinafter, Lee) in view of Pascale.
Regarding Claim 10, Lee discloses A robot that resembles an animal (see at least FIGURE 1: robotic creature 100), the robot comprising:
a sensor configured to detect an external stimulus acting on the robot (see at least [0047] and FIGURE 1: “The sensors 150 of the robotic creature function to monitor the ambient environment surrounding the robot and/or as inputs into the robotic creature”);
a motor to cause rotation in a predetermined direction (see at least [0033]: “The head can additionally or alternatively include a head mechanism, which functions to actuate the head. The head mechanism preferably pans and tilts the head relative to the body housing (e.g., yaws and pitches, respectively), but can alternatively or additionally roll the head (e.g., to create a quizzical look) or otherwise actuate the head. Each head degrees of freedom (head DOF) is preferably directly driven by an independent drive mechanism (e.g., including a force generation mechanism, such as a motor, and a force translation system, such as a series of linkages). Alternatively, all or a subset of the head DOFs can be driven by a common drive mechanism. However, the head DOFs can be indirectly driven or otherwise driven”);
a memory storing sound data (see at least [0048], [0068], and [0071]: “the audio can be selected from a set of music box sounds”; “retrieved from on-board memory”); and
at least one processor (see at least [0048] and FIGURE 1: processing system 180) configured to, in a case in which a determination is made that a condition for outputting a predetermined sound is satisfied, cause the predetermined sound to be output from a speaker included in the robot by causing the sound data stored in the memory to be played back (see at least [0045], [0048], and FIGURE 18B: “The body 130 can additionally function to mount the outputs (e.g., speakers, LEDs, etc.)”; “In one variation, audio output by the speakers (e.g., the robotic creature's “voice”) is used in conjunction with other outputs to create an expressive action. In one example, the audio can be selected from a set of music box sounds. In another example, the audio is a string of instrumental sounds from an African thumb piano. However, the audio can be any other suitable audio. Preferably, various features of the audio can be manipulated, such as the pitch, pitch envelopes, attack envelopes, frequency, or any other feature, to express emotion, convey information, prompt a user to provide information, or for any other purpose”), wherein
the at least one processor
in a case in which the external stimulus acting on the robot is detected by the sensor, sets emotion data representing a pseudo-emotion of the robot based on the external stimulus acting on the robot (see at least [0074]: “The robotic creature mood can be determined based on historic user interaction with the robotic creature (e.g., happiness increased as a function of positive interactions, such as petting and detection of users smiling, degraded as a function of negative interactions, such as pushing the robotic creature over or detection of users yelling at the robotic creature, degraded as a function of time away from a given user, determined based on positivity scores), successful completion of assigned tasks by the robotic creature (e.g., happiness augmented upon successful completion and degraded upon failure), a baseline mood (e.g., happy), or otherwise determined.”),
…in accordance with the set emotion data, changes a tone of the portion of the desinence of the predetermined sound (see at least [0076]: “In one example, dynamically adjusting the parameter values based on the robotic creature mood includes: when the robotic creature mood is happy (e.g., as determined from a high robotic creature happiness score), increasing the acoustic frequency of the emitted sounds, increasing the brightness of the chest light, and increasing the duration of expressive action playback. When the robotic creature is sad (e.g., as determined from a low happiness score), decreasing the acoustic frequency of emitted sounds, decreasing the chest light brightness, and slowing the expressive action playback. However, the parameter values can be otherwise adjusted.”), and
causes a gesture in accordance with the set emotion data by causing the motor to cause the rotation, wherein at least one predetermined characteristic of the rotation is changed based on the set emotion data (see at least [0067] and [0076]: “In one example, dynamically adjusting the parameter values based on the robotic creature mood includes: when the robotic creature mood is happy (e.g., as determined from a high robotic creature happiness score), increasing the acoustic frequency of the emitted sounds, increasing the brightness of the chest light, and increasing the duration of expressive action playback. When the robotic creature is sad (e.g., as determined from a low happiness score), decreasing the acoustic frequency of emitted sounds, decreasing the chest light brightness, and slowing the expressive action playback. However, the parameter values can be otherwise adjusted.”; “Each action in the sequence is preferably associated with a set of action parameter values defining different parameters of the performed action. The action parameters can include: operated subcomponent (e.g., rangefinding system, camera system, head motor, drivetrain, etc.), subcomponent operation parameters (e.g., power provision, frequency, duration, speed, timing, etc.), action duration, action intensity (e.g., output amplitude or magnitude), or any other suitable set of parameters. For example, a “scanning” data-acquiring action can include a sequence of side-to-side and top-to-bottom head actuations to mimic scanning across and up-and-down a room, respectively. In this example, the action parameters for each head actuation include: the distance the head is actuated, the direction the head is actuated, the speed of head actuation, and the time duration separating successive head actuation actions.”).
Lee does not explicitly disclose by changing a playback speed of a portion of the sound data corresponding to a portion of a desinence of the predetermined sound…, changes a tone of the portion of the desinence of the predetermined sound.
Pascale, which is in the same field of changing sounds for entertainment purposes and is therefore analogous art, teaches by changing a playback speed of a portion of the sound data corresponding to a portion of a desinence of the predetermined sound…, changes a tone of the portion of the desinence of the predetermined sound (see at least [0030]-[0034], [0269]-[0271], [0275]-[0276], and FIG. 2: “increase playback speed of prerecorded sound stored in the variable sound generator when the magnitude of acceleration of the variable sound generator increases, which results in an increase in perceived pitch of the played back prerecorded sound”; “The entire pre-recorded sound (in this case the at least one stored sound 156) is played back for a duration of two seconds”; the entire pre-recorded sound is a portion of the sound data corresponding to a portion of a desinence of the predetermined sound).
It would have been obvious, before the effective filing date of the claimed invention, with a reasonable expectation of success, to one having ordinary skill in the art, to substitute the element of Pascale (changing the pitch of a sound by changing its playback speed) into Lee (manipulating audio, including pitch and frequency) because Lee discloses changing a sound in response to a robot mood but does not explicitly disclose how to do so. One of ordinary skill would find this to be a simple substitution of one known element for another to obtain predictable results. One of ordinary skill would be motivated to substitute the solution element of Pascale into Lee because manipulating the audio helps to express emotion and convey information (see at least Lee [0048]).
Regarding Claim 11, the Lee and Pascale combination teaches the limitations of Claim 10. Lee further discloses wherein the at least one processor causes the predetermined sound to be output by causing the sound data stored in the memory to be played back repeatedly (see at least [0066]-[0067], [0082], and FIGURE 10: “Preferably, S260 is performed after S250, but can additionally or alternatively be performed in the absence of an event associated with a technological imperfection, throughout a robot action, multiple times throughout the method (e.g. continuously), routinely (e.g. to update the robotic creature's internal map), or at any other time.”; “detecting the event of being ‘lost’, wherein the robotic creature is unfamiliar with its surroundings. The robotic creature can perform one or more data-acquiring actions, such as: a side-to-side head actuation, an up-and-down head actuation, and/or a side-to-side drivetrain actuation, which can function to increase the field of view and/or depth of the robotic creature's camera or another sensor (e.g. depth sensor); reducing the robotic creature driving speed; or any other suitable action. These actions can also serve as expressive actions, or a different expressive action (e.g. playing a ‘confused’ sound through a speaker) can be performed concurrently with the data-acquiring actions”).
Regarding Claim 12, the Lee and Pascale combination teaches the limitations of Claim 10. Lee further discloses wherein the at least one processor determines that the condition for outputting the predetermined sound is satisfied in a case in which a falling state, a rolling state, a picked-up state, or a rotating state is detected as an abnormal state of the robot (see at least [0048], [0057], [0061], [0081], and FIGURE 12: “In one variation, audio output by the speakers (e.g., the robotic creature's “voice”) is used in conjunction with other outputs to create an expressive action”; “Detecting an event associated with a technological imperfection S250 functions to trigger expressive action performance”; “the event includes determining unexpected robotic creature motion. Examples of unexpected robotic creature motion include: tilt beyond a threshold angle (e.g., falling over), lift (e.g., determination that the robotic creature is being lifted), unexpected robot motion (e.g., moving faster than expected, moving slower than expected, coming to a complete stop), or any other suitable motion along any other suitable axis”; “detecting robotic creature tilt beyond a threshold angle from vertical (e.g., using the gyroscope) and performing a “sad” action including: tilting the head downward, playing a sad sound, and actuating the eyelid mechanism to lower the eyelid (e.g., upper eyelid)”).
Regarding Claim 13, this claim is substantially similar to Claim 10* with an additional limitation also disclosed by Lee: A control method implementable by a robot that resembles an animal (see at least [0003]: “This invention relates generally to the robotics field, and more specifically to a new and useful robotic creature and method of operation in the robotics field”). For all other limitations, reference the rejection for Claim 10. *Examiner notes that limitations beginning “in a case…” are contingent limitations, so the method claims have a broader broadest reasonable interpretation than the similar system and non-transitory computer readable medium claims (See MPEP 2111.04(II): CONTINGENT LIMITATIONS).
Regarding Claim 14, this claim is substantially similar to Claim 11, and therefore rejected for the same reasons.
Regarding Claim 15, this claim is substantially similar to Claim 12, and therefore rejected for the same reasons.
Regarding Claim 16, this claim is substantially similar to Claim 10 with an additional limitation also disclosed by Lee: A non-transitory computer-readable recording medium storing a program that causes a computer of a robot that resembles an animal to execute (see at least [0068] and [0071]: “The performed data-acquiring action is preferably automatically determined by the robotic creature (e.g., retrieved from on-board memory, using an on-board adjustment module, etc.), but can additionally or alternatively be entirely or partially determined by a remote computing system (e.g., server), user device (e.g., connected smartphone or tablet), or by any other suitable computing system”). For all other limitations, reference the rejection for Claim 10.
Regarding Claim 17, this claim is substantially similar to Claim 11, and therefore rejected for the same reasons.
Regarding Claim 18, this claim is substantially similar to Claim 12, and therefore rejected for the same reasons.
Regarding Claim 19, the Lee and Pascale combination teaches the limitations of Claim 10. Lee further discloses the robot further comprising:
a head (see at least FIGURE 2: actuatable head 110); and
a torso (see at least FIGURE 2: body 130), wherein
the motor causes the head to rotate in the predetermined direction with respect to the torso (see at least [0033]: “The head can additionally or alternatively include a head mechanism, which functions to actuate the head. The head mechanism preferably pans and tilts the head relative to the body housing (e.g., yaws and pitches, respectively), but can alternatively or additionally roll the head (e.g., to create a quizzical look) or otherwise actuate the head. Each head degrees of freedom (head DOF) is preferably directly driven by an independent drive mechanism (e.g., including a force generation mechanism, such as a motor, and a force translation system, such as a series of linkages). Alternatively, all or a subset of the head DOFs can be driven by a common drive mechanism. However, the head DOFs can be indirectly driven or otherwise driven.”), and
the at least one processor causes the gesture in accordance with the set emotion data to be performed by causing the motor to cause the rotation of the head with respect to the torso (see at least [0071]-[0083]: “In one variation, the determined event is associated with a predetermined sequence of actions, each with a set of baseline action parameter values, wherein the action parameter values are dynamically adjusted based on the robotic creature mood and/or instantaneous operation context”; “the robotic creature can perform an expressive action, such as a series of head actuations to mimic a confused head shake and/or playing a ‘confused’ sound through a speaker”).
Regarding Claim 20, the Lee and Pascale combination teaches the limitations of Claim 19. Lee further discloses wherein the at least one characteristic of the rotation comprises at least one rotation speed and rotation range so that the at least one processor changes, in accordance with the set emotion data, at least one of the rotation speed and the rotation range of the rotation of the head with respect to the torso that is caused by the motor (see at least [0050] and [0067]-[0072]: “Each action in the sequence is preferably associated with a set of action parameter values defining different parameters of the performed action. The action parameters can include: operated subcomponent (e.g., rangefinding system, camera system, head motor, drivetrain, etc.), subcomponent operation parameters (e.g., power provision, frequency, duration, speed, timing, etc.), action duration, action intensity (e.g., output amplitude or magnitude), or any other suitable set of parameters. For example, a “scanning” data-acquiring action can include a sequence of side-to-side and top-to-bottom head actuations to mimic scanning across and up-and-down a room, respectively. In this example, the action parameters for each head actuation include: the distance the head is actuated, the direction the head is actuated, the speed of head actuation, and the time duration separating successive head actuation actions.”; “In one variation, the determined event is associated with a predetermined sequence of actions, each with a set of baseline action parameter values, wherein the action parameter values are dynamically adjusted based on the robotic creature mood and/or instantaneous operation context. Dynamically adjusting the action parameter values can include: scaling the values as a function of the mood and/or context; selecting a new equation to determine the action parameter values based on the mood and/or context; or otherwise adjusting the action parameter values. The action parameter value adjustment can be learned, changed over time according to a predetermined pattern, or otherwise determined.”).
Regarding Claim 21, this claim is substantially similar to Claim 19, and therefore rejected for the same reasons.
Regarding Claim 22, this claim is substantially similar to Claim 20, and therefore rejected for the same reasons.
Regarding Claim 23, this claim is substantially similar to Claim 19, and therefore rejected for the same reasons.
Regarding Claim 24, this claim is substantially similar to Claim 20, and therefore rejected for the same reasons.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALEXANDRA ROBYN MORFORD whose telephone number is (571)272-6109. The examiner can normally be reached Monday - Friday 8:00 AM - 4:00 PM ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Thomas Worden can be reached at (571) 272-4876. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/A.R.M./Examiner, Art Unit 3658
/JASON HOLLOWAY/Primary Examiner, Art Unit 3658