DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.
Claim 1 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent No. US12083690. Although the claims at issue are not identical, they are not patentably distinct from each other for the following reasons: Claim 1 of US12083690 discloses all the limitations of claim 1 of the current patent application.
Claims 3-9, 11, and 14-21 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 3-9, 11, and 14-21 of U.S. Patent No. US12083690. Claims 3-9, 11, and 14-21 of US12083690 teach limitations identical to those of claims 3-9, 11, and 14-21.
Claim 2 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent No. US12083690. Although the claims at issue are not identical, they are not patentably distinct from each other for the following reasons: US12083690 teaches the bolded limitations below.
identifying, by a multimodal review module, changes to be made to the voice input files, the visual effect files, the facial expression files and/or the mobility files;
generating revised voice files, revised visual effect files, revised facial expression files and/or revised mobility files based at least in part on the identified changes.
Breazeal (US20180133900) teaches communicating, to the editor module of the computing device, the changes to be made to the voice files, the visual effect files, the facial expression files and/or the mobility files (see [0125] and the full rejection below);
It would have been obvious to one of ordinary skill in the art to combine the teaching of Breazeal of communicating the changes to be made, which would yield predictable results; the use of multiple modules within computers may help reduce the load on a single computer or core processor.
Regarding claims 10 and 12-13, Breazeal teaches the hints/suggestions based on personality and context; see the rejections of claims 10 and 12-13. It would have been obvious to one of ordinary skill in the art to combine the teaching of Breazeal in order to assist in personalization and in synchronizing the speech and body movements to match the emotion of the robot and the personality of the person.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 3-9, 11, and 14-22 are rejected under 35 U.S.C. 101.
Claim 1 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
On January 7, 2019, the USPTO released new examination guidelines setting forth a two-step inquiry for determining whether a claim is directed to non-statutory subject matter. According to the guidelines, a claim is directed to non-statutory subject matter if:
STEP 1: the claim does not fall within one of the four statutory categories of invention (process, machine, manufacture or composition of matter), or
STEP 2: the claim recites a judicial exception, e.g. an abstract idea, without reciting additional elements that amount to significantly more than the judicial exception, as determined using the following analysis:
STEP 2A (PRONG 1): Does the claim recite an abstract idea, law of nature, or natural phenomenon?
STEP 2A (PRONG 2): Does the claim recite additional elements that integrate the judicial exception into a practical application?
STEP 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?
Using the two-step inquiry, it is clear that claim 1 is directed toward non-statutory subject matter, as shown below:
STEP 1: Do the claims fall within one of the statutory categories?
Yes, claim 1 is directed to a method.
STEP 2A (PRONG 1): Is the claim directed to a law of nature, a natural phenomenon or an abstract idea?
Yes, the claims are directed to an abstract idea.
With regard to STEP 2A (PRONG 1), the guidelines provide three groupings of subject matter that are considered abstract ideas:
Mathematical concepts – mathematical relationships, mathematical formulas or equations, mathematical calculations;
Certain methods of organizing human activity – fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions); and
Mental processes – concepts that are practicably performed in the human mind (including an observation, evaluation, judgment, opinion).
The process in claim 1 is a mental process that can be practicably performed in the human mind, or with the aid of pen and paper, and as such is directed toward an abstract idea. The claim consists of analyzing whether files follow guidelines, which is similar to a human analyzing a file and determining whether it contains correct grammar, for instance. Generating presentation conversation files based on received files is similar to a human generating conversation sentences from a group of words. Testing the presentation files to verify correct operation of a robot computing device is similar to a human comparing the conversation files against a set of rules of the computing system and determining whether they satisfy the rules, which is a determination of correct operation of the computing device. Identifying changes to be made to the voice input files is similar to a human identifying that an inappropriate word needs to be removed. Generating revised voice files based on the identified changes is similar to a human removing the inappropriate words to generate a child-appropriate conversation file. Verifying that the revised voice files are aligned with the robot computing device’s personality and operational characteristics is similar to a human determining whether the conversation is appropriate for, and aligns with, a happy robot or a sad robot. Notably, the claim does not positively recite any limitations regarding an actual determination of the attitude of the robot.
STEP 2A (PRONG 2): Does the claim recite additional elements that integrate the judicial exception into a practical application?
No, the claims do not recite additional elements that integrate the judicial exception into a practical application.
With regard to STEP 2A (prong 2), whether the claim recites additional elements that integrate the judicial exception into a practical application, the guidelines provide the following exemplary considerations that are indicative that an additional element (or combination of elements) may have integrated the judicial exception into a practical application:
an additional element reflects an improvement in the functioning of a computer, or an improvement to other technology or technical field;
an additional element that applies or uses a judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition;
an additional element implements a judicial exception with, or uses a judicial exception in conjunction with, a particular machine or manufacture that is integral to the claim;
an additional element effects a transformation or reduction of a particular article to a different state or thing; and
an additional element applies or uses the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception.
While the guidelines further state that the exemplary considerations are not an exhaustive list and that there may be other examples of integrating the exception into a practical application, the guidelines also list examples in which a judicial exception has not been integrated into a practical application:
an additional element merely recites the words “apply it” (or an equivalent) with the judicial exception, or merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea;
an additional element adds insignificant extra-solution activity to the judicial exception; and
an additional element does no more than generally link the use of a judicial exception to a particular technological environment or field of use.
Claim 1 does not recite any of the exemplary considerations that are indicative of an abstract idea having been integrated into a practical application. The additional limitations include accessing computer-readable instructions, which is considered generic linking, and executing the computer-readable instructions, which is considered “apply it” level technology. Receiving files is considered mere data gathering, which is insignificant extra-solution activity and routine data collection. While the claim recites that the mobility files cause parts of the robot to move, this is recited at a high level of generality and merely defines, at an “apply it” level, what the files can cause the robot to do; in any event, the claim only requires one of the voice files or the mobility files, and in this case the voice files comprise only signal data, which is data gathering. The memory device, computing device, processor, language processor module, renderer module, automatic testing system by simulation, and multimodal review module are considered “apply it” level technology used to apply a human abstract idea on a computer.
Thus, it is clear that the abstract idea is merely implemented on a computer at the “apply it” level, which is indicative of the abstract idea not having been integrated into a practical application.
STEP 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?
No, the claims do not recite additional elements that amount to significantly more than the judicial exception.
With regard to STEP 2B, whether the claims recite additional elements that provide significantly more than the recited judicial exception, the guidelines specify that the pre-guideline procedure is still in effect. Specifically, that examiners should continue to consider whether an additional element or combination of elements:
adds a specific limitation or combination of limitations that are not well-understood, routine, conventional activity in the field, which is indicative that an inventive concept may be present; or
simply appends well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception, which is indicative that an inventive concept may not be present.
Claim 1 does not recite any specific limitation or combination of limitations that is not well-understood, routine, conventional activity in the field. The additional limitations include accessing computer-readable instructions, which is considered generic linking, and executing the computer-readable instructions, which is considered “apply it” level technology. The memory device, computing device, processor, language processor module, renderer module, automatic testing system, and multimodal review module are considered “apply it” level technology. The receiving step referred to above is insignificant extra-solution activity and is not considered significantly more, because the acquiring step is mere data gathering or transmission of data over a network, which has been held to be routine and conventional activity. See Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network); but see DDR Holdings, LLC v. Hotels.com, L.P., 773 F.3d 1245, 1258, 113 USPQ2d 1097, 1106 (Fed. Cir. 2014) ("Unlike the claims in Ultramercial, the claims at issue here specify how interactions with the Internet are manipulated to yield a desired result‐‐a result that overrides the routine and conventional sequence of events ordinarily triggered by the click of a hyperlink." (emphasis added)). MPEP 2106.05(d)(II).
CONCLUSION
Thus, since claim 1: (a) is directed toward an abstract idea, (b) does not recite additional elements that integrate the judicial exception into a practical application, and (c) does not recite additional elements that amount to significantly more than the judicial exception, it is clear that claim 1 is directed toward non-statutory subject matter.
Regarding claim 2, identifying changes is similar to a human determining that an expression is wrong, and generating revised files is similar to a human determining the correct files/action/expression/grammar/text to be used. The communication is data gathering.
Regarding claim 3, communicating files to a processor is considered extra-solution data gathering. Verifying that the files follow guidelines is similar to a human analyzing the files to determine whether they are grammatically correct. The communicating step referred to above is insignificant extra-solution activity and is not considered significantly more, because the acquiring step is mere data gathering or transmission of data over a network, which has been held to be routine and conventional activity. See Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network); but see DDR Holdings, LLC v. Hotels.com, L.P., 773 F.3d 1245, 1258, 113 USPQ2d 1097, 1106 (Fed. Cir. 2014) ("Unlike the claims in Ultramercial, the claims at issue here specify how interactions with the Internet are manipulated to yield a desired result‐‐a result that overrides the routine and conventional sequence of events ordinarily triggered by the click of a hyperlink." (emphasis added)). MPEP 2106.05(d)(II).
Regarding claim 4, generating the new presentation conversation files is part of the abstract idea of claim 1, similar to a human fixing the grammar of a text, which generates new conversation files.
Regarding claims 5-8, generating files using a microphone, for instance, is considered extra-solution data gathering. The generating step referred to above is insignificant extra-solution activity and is not considered significantly more, because the acquiring step is mere data gathering or transmission of data over a network, which has been held to be routine and conventional activity. See Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network); but see DDR Holdings, LLC v. Hotels.com, L.P., 773 F.3d 1245, 1258, 113 USPQ2d 1097, 1106 (Fed. Cir. 2014) ("Unlike the claims in Ultramercial, the claims at issue here specify how interactions with the Internet are manipulated to yield a desired result‐‐a result that overrides the routine and conventional sequence of events ordinarily triggered by the click of a hyperlink." (emphasis added)). MPEP 2106.05(d)(II).
Regarding claim 9, the multimodal authoring system including autocompletion software to automatically edit files is considered “apply it” level technology.
Regarding claim 10, the hint generating is part of the abstract idea, such as a human determining a hint that a certain sentence should make a happy expression.
Regarding claim 11, providing suggestions is similar to a human determining suggestions to correct the grammatical structure of a text. The multimodal authoring system is considered “apply it” level technology.
Regarding claim 12, providing suggestions based on a companion’s characteristics is similar to a human determining that a funny human would laugh.
Claims 14-16 further define the robot characteristics, which are part of the abstract idea of claim 1.
Regarding claims 17-18, analyzing content based on characteristics of similar content is similar to a human analyzing a text based on similar synonyms that define a characteristic.
Regarding claim 19, learning synonyms is similar to a human associating the word “hot” with “scorching”, which is part of the abstract idea of claim 1.
Regarding claim 20, receiving performance analysis statistics from other robot computing devices is considered extra-solution data gathering. Generating a modified presentation conversation file based on received conversation files is similar to a human determining, based on statistical analysis, that a certain word should not be used, for instance, and modifying the file based on that determination. Testing a modified presentation conversation file is similar to a human analyzing it to see whether it follows guidelines.
The computing devices of claim 21 are considered “apply it” level technology or generic linking.
Claim Interpretation
Claim 1 recites the limitations “renderer module”, “language processor module”, and “multimodal review module”, with the corresponding structure in the specification being the “processor” of paragraph [0102]. These limitations are being interpreted as means-plus-function limitations.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-12, 14-19, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Breazeal (US20180133900) in view of Abraham (US20180376002).
Regarding claim 1, Breazeal teaches a method of authoring and modifying presentation conversation files, comprising:
accessing computer-readable instructions from one or more memory devices for execution by one or more processors of the computing device ([0321] disclosing the computer program is stored in a memory and accessed on a computer machine readable transitory or non-transitory media. See also [0324] disclosing the computing device with instructions executed on a machine readable medium);
executing the computer-readable instructions accessed from the one or more memory devices by the one or more processors of the computing device ([0324] disclosing the computing device with instructions executed on a machine readable medium); and
wherein executing the computer-readable instructions further comprising: receiving, at a renderer module of the computing device, voice files, visual effect files, facial expression files and/or mobility files, wherein the voice files cause a robot computing device providing the voice files to make sounds and the mobility files cause parts of the robot computing device providing the mobility files to move ([0028] disclosing the social robot, i.e., via computer “renderer module”, generates or outputs stored files. [0031]-[0034] disclosing the embodied speech system “renderer module” adjusts source files including natural language to generate an expressive response that is appropriate to what the user is saying, i.e., the files are received by the embodied speech system to adjust. See also [0098] disclosing a set of rules by adding tags to an utterance speech text where an effect should occur, such as emotional expressions. [0324] disclosing the processor to execute the program; the renderer module is interpreted as a processor. See at least [0011], [0098]-[0099], disclosing the movement of one or more body segments and the utterance of the speech caused by the multimodal expression files);
analyzing, by the language processor module of the computing device, whether the voice files, the visual effect files, the facial expression files and/or mobility files follow guidelines of a multimodal authoring system ([0098]-[0104] disclosing a set of rules by adding tags to an utterance speech text where an effect should occur, such as emotional expressions, postural shifts, and gaze, i.e., analyzing that voice files, facial expression files, and visual effects follow rules “guidelines”. [0100]-[0104] specifically disclosing adjusting behavior based on input and disclosing the editing of communicated intent content. [0112] disclosing analyzing developer-authored ESDS “multi-modal authoring system” and applying statistical machine learning methods to learn reliable associations of keywords with expressive assets. [0113] disclosing analyzing semantic content and multi-modal asset association. [0116]-[0119] disclosing receiving a text string or textual input of an utterance “voice data” and then adjusting it by adding markup tags after analyzing the speech. It is interpreted from the citations that the robot analyzes the received speech “voice data” and adds tags following the multi-modal authoring system. [0324] disclosing the processor to execute the program; the language processor module is interpreted as a processor);
generating, by the renderer module, one or more presentation conversation files based at least in part on the received voice files, visual effect files, facial expression files and/or mobility files ([0098]-[0104] and [0116]-[0118] disclosing receiving a text string or textual input of an utterance “voice data” and then adjusting it by adding markup tags after analyzing the speech. It is interpreted from the citations that the robot analyzes the received speech “voice data” and adds tags following the multi-modal authoring system to generate the utterance with expressions “conversation files”);
testing, at an automatic testing system, the one or more presentation conversation files to verify correct operation with respect to a robot computing device that receives the one or more presentation conversation files as an input and the automatic testing system is simulating ([0125] disclosing automatically generating and adjusting the utterance by automatically generating appropriate tags. [0127] disclosing an automation system testing the utterance speech by analyzing the textual input and adding appropriate tags, i.e., it is interpreted that the automatic system tests the speech and analyzes the meaning to add tags of expressions to verify appropriate expressions of the robot, i.e., correct operation of the robot computing device. [0277] further disclosing that the automatic testing is performed via simulation to verify the files are correct; while the user performs the checking, the test is run by the simulation, which meets the claim limitations);
verifying, by the language processor module, that the revised voice files, revised visual effect files, revised facial expression files and/or revised mobility files are aligned with the robot computing device’s personality and operational characteristics ([0125] disclosing the adjusted utterance is aligned with the sensed context “operational characteristic” and state “personality” of the robot. See at least [0131] disclosing the state represents emotional, effective state of the robot, cognitive state, i.e., personality).
Abraham teaches testing, at the automatic system, the one or more presentation conversation files on different device configuration files to simulate testing on different robot computing devices ([0041] disclosing testing different versions of the conversation robot on different phrases on multiple robots, indicative of a plurality of robots, thus different robots simulating different versions).
It would have been obvious to one of ordinary skill in the art to have modified the teaching of Breazeal to incorporate the teaching of Abraham in order to iteratively update and train the conversation robot while testing it, thereby improving performance and speeding up the training by using multiple devices for testing.
Regarding claim 2, Breazeal as modified by Abraham teaches the method of claim 1, wherein executing the computer-readable instructions further comprising:
identifying, by a multimodal review module, changes to be made to the voice input files, the visual effect files, the facial expression files and/or the mobility files (Breazeal [0125] disclosing automatically generating and adjusting the utterance by automatically generating appropriate tags. [0127] disclosing an automation system testing the utterance speech by analyzing the textual input and adding appropriate tags, i.e., it is interpreted that the automatic system tests the speech and analyzes the meaning to add tags of expressions to verify appropriate expressions of the robot, i.e., correct operation of the robot computing device. [0324] disclosing the processor to execute the program and the multimodal review module is interpreted as a processor);
communicating to the editor module of the computing device the changes to be made to the voice files, the visual files, the facial expression files and/or the mobility files (Breazeal [0124]-[0125] disclosing the adjusted utterance is aligned with the sensed context “operational characteristic” and state “personality” of the robot, thus the changes are communicated to the ESML tag to autogenerate tags based on the context. See at least [0131] disclosing the state represents emotional, effective state of the robot, cognitive state, i.e., personality. [0277] disclosing simulating on a computing system “communicating the revised files” the cues being authored until satisfaction is reached, i.e., verifying the files follow guidelines of the multimodal authoring system).
generating revised voice files, revised visual effect files, revised facial expression files and/or revised mobility files based at least in part on the identified changes (Breazeal [0125] disclosing automatically generating and adjusting the utterance by automatically generating appropriate tags. [0127] disclosing an automation system testing the utterance speech by analyzing the textual input and adding appropriate tags, i.e., it is interpreted that the automatic system tests the speech and analyzes the meaning to add tags of expressions to verify appropriate expressions of the robot, i.e., correct operation of the robot computing device).
Regarding claim 3, Breazeal as modified by Abraham teaches the method of claim 1, wherein executing the computer-readable instructions further comprising: communicating revised voice files, revised visual effect files, revised facial expression files and/or revised mobility files to the language processor module to verify these files follow the guidelines of the multimodal authoring system (Breazeal [0125] disclosing the adjusted utterance is aligned with the sensed context “operational characteristic” and state “personality” of the robot. See at least [0131] disclosing the state represents the emotional, affective state of the robot and cognitive state, i.e., personality. [0277] disclosing simulating on a computing system “communicating the revised files” the cues being authored until satisfaction is reached, i.e., verifying the files follow the guidelines of the multimodal authoring system).
Regarding claim 4, Breazeal as modified by Abraham teaches the method of claim 3, wherein executing the computer-readable instructions further comprising: wherein if the revised voice files, the revised visual effect files, the revised facial expression files and/or the revised mobility files follow the guidelines of the multimodal authoring system, communicating the revised voice files, revised visual effect files, revised facial expression files and/or revised mobility files to the renderer module to generate a new presentation conversation file (Breazeal [0125] disclosing the adjusted utterance is aligned with the sensed context “operational characteristic” and state “personality” of the robot. See at least [0131] disclosing the state represents the emotional, affective state of the robot and cognitive state, i.e., personality. [0277]-[0288] disclosing simulating on a computing system “communicating the revised files” the paralinguistic cues or multi-modal paralinguistic cues being authored until satisfaction is reached and saving them to be used, i.e., verifying the files follow the guidelines of the multimodal authoring system and, when authorized, generating a new presentation conversation file).
Regarding claim 5, Breazeal as modified by Abraham teaches the method of claim 1, wherein the voice files are generated utilizing one or more microphones and speech recognition software (Breazeal [0027] disclosing microphone. [0063] disclosing speech recognition software).
Regarding claim 6, Breazeal as modified by Abraham teaches the method of claim 1, wherein the visual effect files are generated utilizing one or more imaging devices, one or more microphones and/or special effect software (Breazeal [0037]-[0057] disclosing the graphic “visual effect” is generated based on microphone input when the user asks about the weather).
Regarding claim 7, Breazeal as modified by Abraham teaches the method of claim 1, wherein the facial expression files are generated utilizing one or more imaging devices, one or more microphones or graphical animation software (Breazeal [0037]-[0057] disclosing the robot generates an expression of looking down with eyes dimmed in response to a question about the weather by the user, i.e., it is interpreted that the user communicated with the robot via a microphone of the robot and the facial expression is generated utilizing microphone input).
Regarding claim 8, Breazeal as modified by Abraham teaches the method of claim 1, wherein the mobility files are generated utilizing one or more imaging devices, one or more microphones and mobility command generation software (Breazeal [0037]-[0057] disclosing the robot shifting posture in response to a question about the weather by the user, i.e., it is interpreted that the user communicated with the robot via a microphone of the robot and the posture shift is generated utilizing microphone input).
Regarding claim 9, Breazeal as modified by Abraham teaches the method of claim 1, wherein the multimodal authoring system includes autocompletion software, the autocompletion software automatically authorizing the voice files or logs, the visual effect files or logs, the facial expression files or logs or the mobility files (Breazeal [0098] disclosing a set of rules by adding tags to an utterance speech text where an effect should occur, such as emotional expressions, i.e., analyzing that voice files follow the rules “guidelines”. [0112] disclosing analyzing developer authored ESDS “multi-modal authoring system” and applying statistical machine learning methods to learn reliable associations of keywords with expressive assets. [0113] disclosing analyzing semantic content and multi-modal asset association. [0116]-[0117] disclosing receiving a text string or textual input of an utterance “voice data” and then adjusting it by analyzing the speech and adding markup tags. It is interpreted from the citations that the robot analyzes the received speech “voice data” and adds tags in accordance with the multi-modal authoring system. [0324] disclosing the processor to execute the program; the language processor module is interpreted as a processor).
Regarding claim 10, Breazeal as modified by Abraham teaches the method of claim 9, wherein the autocompletion software generates phrase hints to assist in generating voice input files (Breazeal [0098]-[0104] disclosing the tags and markups as hints to assist in generating the voice input files and expression files, also the words that are used as hints to generate the reactions).
Regarding claim 11, Breazeal as modified by Abraham teaches the method of claim 1, wherein the multimodal authoring system provides suggestions for generation of the voice files, the visual effect files, the facial expression files and/or the mobility files based on a current context (Breazeal [0112]-[0113] disclosing analyzing developer authored ESDS “multi-modal authoring system” and applying statistical machine learning methods to learn reliable associations of keywords with expressive assets. For instance, the rule would be associating the word “hot” with a specific animation, i.e., based on context).
Regarding claim 12, Breazeal as modified by Abraham teaches the method of claim 1, wherein the language processor module provides suggestions for generation of the voice files, the visual effect, the facial expression files or the mobility files based on a robot computing device’s characteristics which include a companion’s personality characteristics (Breazeal [0098]-[0104], [0124]-[0127] disclosing the generation of the files to be matching an emotion of the robot for a specific context “personality characteristic”, such as the robot being a guidance robot characteristic or an emotional companion characteristics).
Regarding claim 14, Breazeal as modified by Abraham teaches the method of claim 1, wherein the robot computing device’s characteristics include atypical vocabulary (Breazeal [0112]-[0113] disclosing a group of synonyms such as scorcher and hot, i.e., the robot computing device’s characteristics include atypical vocabulary).
Regarding claim 15, Breazeal as modified by Abraham teaches the method of claim 1, wherein the robot computing device’s characteristics include target user group characteristics (Breazeal [0124]-[0125] disclosing the robot generates different states “characteristic” when talking to a child vs. an adult based on the user’s emotional state and prior interaction “target group characteristic”).
Regarding claim 16, Breazeal as modified by Abraham teaches the method of claim 1, wherein the robot computing device’s characteristics include target user group’s needs, goals and/or abilities (Breazeal [0124]-[0125] disclosing the robot generates different states “characteristic” when talking to a child, wherein the robot talks slower, i.e., based on the child’s ability “target group ability”).
Regarding claim 17, Breazeal as modified by Abraham teaches the method of claim 2, wherein the revised voice files, the revised visual effect files, the revised facial expression files and/or the revised mobility files are based on the multimodal authoring system’s analyzing similar content based on characteristics of the similar content (Breazeal [0112]-[0113] disclosing analyzing developer authored ESDS “multi-modal authoring system” and applying statistical machine learning methods to learn reliable associations of keywords with expressive assets, i.e., it is interpreted that the analysis is obtained by analyzing the statistical characteristics of similar content).
Regarding claim 18, Breazeal as modified by Abraham teaches the method of claim 17, wherein the characteristics of the similar content comprise sentiment of the similar content, affect of the similar content and context of the similar content (Breazeal [0112]-[0113] disclosing analyzing developer authored ESDS “multi-modal authoring system” and applying statistical machine learning methods to learn reliable associations of keywords with expressive assets, i.e., it is interpreted that the analysis is obtained by analyzing the statistical characteristics of the context of similar content).
Regarding claim 19, Breazeal as modified by Abraham teaches the method of claim 1, wherein executing the computer-readable instructions further comprising learning synonymous pathways to the generated presentation conversation files and generating additional presentation conversation files that are acceptable to the multimodal authoring system (Breazeal [0112]-[0113] disclosing analyzing developer authored ESDS “multi-modal authoring system” and applying statistical machine learning methods to learn reliable associations of keywords with expressive assets, i.e., it is interpreted that the analysis is obtained by analyzing the statistical characteristics of the context of similar content. [0112]-[0113] disclosing using the learned synonyms to generate expressions. It is interpreted from the citations that additional presentation conversation files are generated by the multimodal authoring system, i.e., acceptable to the multimodal authoring system).
Regarding claim 21, Breazeal as modified by Abraham teaches the method of claim 1, wherein the robot computing device comprises a computing device, a chatbot, a voice recognition computing device, or an artificial intelligence computing device (Breazeal [0312] and [0324] disclosing the processor to execute the program, i.e., the robot computing device comprises a computing device “processor”).
Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Breazeal (US20180133900, from IDS) in view of Abraham (US20180376002) and Kuffner (US20150148957).
Regarding claim 20, Breazeal as modified by Abraham teaches the method of claim 1, wherein executing the computer-readable instructions further comprising receiving performance analysis statistics (Breazeal [0112]-[0113] disclosing analyzing developer authored ESDS “multi-modal authoring system” and applying statistical machine learning methods to learn reliable associations of keywords with expressive assets, i.e., the statistical analysis is from implemented conversation files);
generating one or more modified presentation conversation files based on the received performance analysis statistics (Breazeal [0112] disclosing analyzing developer authored ESDS “multi-modal authoring system” and applying statistical machine learning methods to learn reliable associations of keywords with expressive assets to automatically generate tags. [0113] disclosing analyzing semantic content and multi-modal asset association. [0116]-[0119] disclosing receiving a text string or textual input of an utterance “voice data” and then adjusting it by analyzing the speech and adding markup tags. It is interpreted from the citations that the modified presentation conversation file is based on the tags that are learned from statistical analysis); and further testing the modified presentation conversation file (Breazeal [0125] disclosing the adjusted utterance is aligned with the sensed context “operational characteristic” and state “personality” of the robot. See at least [0131] disclosing the state represents the emotional, affective state of the robot and cognitive state, i.e., personality. [0277] disclosing simulating on a computing system “communicating the revised files” the cues being authored until satisfaction is reached, i.e., verifying the files follow the guidelines of the multimodal authoring system).
Breazeal as modified by Abraham does not teach receiving the statistical analysis from other computing devices.
Kuffner teaches receiving the statistical analysis from other computing devices ([0079] disclosing receiving statistical analysis of the information received from similar robotic devices).
Breazeal as modified by Abraham and Kuffner are analogous art because they are in the same field of endeavor, robotic assistance devices. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teaching of Breazeal as modified by Abraham to incorporate the teaching of Kuffner of receiving the statistical analysis from other robotic devices in order to determine whether the performance of the components is within an acceptable limit of the same or similar components of other devices, thus improving performance of the device while reducing the number of tests to be performed by a single robot and speeding up the testing process. It would have been obvious to combine the statistical analysis of the information received from multiple robots with the testing of Breazeal in order to determine that the performance is within an acceptable limit relative to similar robotic devices.
Response to Arguments
Applicant’s arguments filed on 09/22/2024 have been fully considered but they are not persuasive.
Claim 1 remains rejected under 35 U.S.C. 101 because the claim does not recite a step of controlling the robot. The recitation that the mobility files cause the robot computing device to move serves only as a definition of the received mobility files, not as a control step; the files are recited at a high level of generality and can be considered mere data gathering or a recitation at the “apply it” level on a computer. Moreover, the mobility files are recited in the alternative, since the claim requires only one of the voice files, facial expression files, or mobility files.
For the same reasons, because the presentation conversation files require only one of the files, claim 1 remains rejected as unpatentable over Breazeal in view of Abraham; Abraham teaches testing different voice files on different robots.
Examiner suggests amending the claims so that the presentation conversation files incorporate all of the voice files, facial expression files… and mobility files in order to overcome both the 101 rejection and the 103 rejection.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. The prior art cited in PTO-892 and not mentioned above discloses related devices and methods.
US20200073938 disclosing testing a dialog for social robots.
US20200320975 disclosing generating a response and comparing it to an expected response from a plurality of expected responses for each test command.
US20170125008, from IDS, disclosing verifying that no bad words are said and changing the words of the conversation and/or the posture of the robot according to the context.
US20180229372, from IDS, disclosing interactive robot conversation.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMAD O EL SAYAH whose telephone number is (571)270-7734. The examiner can normally be reached on M-Th 6:30-4:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jeff Burke, can be reached at 469-295-9067. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MOHAMAD O EL SAYAH/Primary Examiner, Art Unit 3658B