Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-7, 9-17, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Favis (US 2019/0143527) in view of Wang (US 2024/0274122).
As per claim 1. Favis teaches a method, comprising: pre-processing a dataset, wherein the dataset includes data and/or metadata indicating attributes of a user, and the dataset also includes data and/or metadata that was generated as a result of an interaction between the user and a computing system. [0012], [0028], [0044], [0045], [0057]-[0059] (teaches a machine learning personality that takes into account attributes of a user, including age, location, gender, etc.)
Wang teaches, after the dataset is pre-processed, providing the dataset as an input to a machine learning model; and using the machine learning model to generate, based on the input, respective target variable value predictions for each target variable in a group of target variables, wherein each of the target variables corresponds to a respective attribute of the user. [0035], [0046] (training voice generation based on input voices and voice identity characteristics); [0138]-[0142] (teaches recognition of the user and selection of a language)
Favis teaches using the target variable value predictions to create, or modify, a digital human that has attributes corresponding to the attributes of the user, and deploying the digital human so that the digital human is available to interact with the user. [0038], [0057] (teaches that the bot chooses a personality, language, etc. based on the learned data, including attributes of the user, for interaction with the user)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Wang with Favis because Wang's recognition of the user and selection of a corresponding language and voice would further personalize the digital human interactions taught by Favis.
As per claim 2. The method as recited in claim 1, Favis teaches wherein the data and/or metadata including attributes of the user comprises a regional location of the user, user type, gender, and preferred language of the user. [0012], [0044], [0047], [0048]; Claim 8, Claim 18 (teaches preferences of the user and factoring in gender, location, and preferred accents/location/dialect)
As per claim 3. The method as recited in claim 1, Favis teaches wherein the data and/or metadata generated as a result of the interaction comprises data and/or metadata provided anonymously by the user in response to a query transmitted to the user by a computing system. [0012], [0032], [0048] (crowd sourcing to learn personality types; no identifying user information is submitted)
As per claim 4. The method as recited in claim 1, Favis teaches wherein the digital human is operable to interact with the user using one or more of the attributes of the user, and the attributes of the user comprise a language and an accent preferred by the user. [0059]; Claim 7, Claim 8 (teaches using preferred or localized languages and accents)
As per claim 5. The method as recited in claim 1, Wang teaches wherein the machine learning model comprises a multi-output neural network that includes multiple parallel branches, and each of the branches corresponds to a respective one of the target variables. [0144] (teaches a neural network model and decision trees)
As per claim 6. The method as recited in claim 1, Favis teaches wherein the target variable value predictions comprise a digital actor, a particular language, a particular accent, and a particular emotion. [0012], [0044], [0047], [0048]; Claim 8, Claim 18 (teaches preferences of the user and factoring in gender, location, and preferred accents/location/dialect)
As per claim 7. The method as recited in claim 1, Wang teaches wherein the model performs a respective softmax activation to obtain each of the predicted target values. [0090] (prediction function by softmax)
As per claim 9. The method as recited in claim 1, Favis teaches wherein the pre-processing comprises separating the target variables from other elements of the dataset. [0047], [0048], [0059] (preference data/target variables: choice of accent, language) versus metadata (location, gender, sex) [0012], [0032], [0044]
As per claim 10. The method as recited in claim 1, Favis teaches wherein the digital human communicates with the user. [0057] (robot interacts with users)
As per claim 11. Favis teaches a non-transitory storage medium having stored therein instructions that are executable by one or more hardware processors to perform operations comprising: pre-processing a dataset, wherein the dataset includes data and/or metadata indicating attributes of a user, and the dataset also includes data and/or metadata that was generated as a result of an interaction between the user and a computing system. [0012], [0028], [0044], [0045], [0057]-[0059] (teaches a machine learning personality that takes into account attributes of a user, including age, location, gender, etc.)
Wang teaches, after the dataset is pre-processed, providing the dataset as an input to a machine learning model; and using the machine learning model to generate, based on the input, respective target variable value predictions for each target variable in a group of target variables, wherein each of the target variables corresponds to a respective attribute of the user. [0035], [0046] (training voice generation based on input voices and voice identity characteristics); [0138]-[0142] (teaches recognition of the user and selection of a language)
Favis teaches using the target variable value predictions to create, or modify, a digital human that has attributes corresponding to the attributes of the user, and deploying the digital human so that the digital human is available to interact with the user. [0038], [0057] (teaches that the bot chooses a personality, language, etc. based on the learned data for interaction with the user)
As per claim 12. The non-transitory storage medium as recited in claim 11, Favis teaches wherein the data and/or metadata including attributes of the user comprises a regional location of the user, user type, gender, and preferred language of the user. [0012], [0044], [0047], [0048]; Claim 8, Claim 18 (teaches preferences of the user and factoring in gender, location, and preferred accents/location/dialect)
As per claim 13. The non-transitory storage medium as recited in claim 11, Favis teaches wherein the data and/or metadata generated as a result of the interaction comprises data and/or metadata provided anonymously by the user in response to a query transmitted to the user by a computing system. [0012], [0032], [0048] (crowd sourcing to learn personality types; no identifying user information is submitted)
As per claim 14. The non-transitory storage medium as recited in claim 11, Favis teaches wherein the digital human is operable to interact with the user using one or more of the attributes of the user, and the attributes of the user comprise a language and an accent preferred by the user. [0059]; Claim 7, Claim 8 (teaches using preferred or localized languages and accents)
As per claim 15. The non-transitory storage medium as recited in claim 11, Wang teaches wherein the machine learning model comprises a multi-output neural network that includes multiple parallel branches, and each of the branches corresponds to a respective one of the target variables. [0144] (teaches a neural network model and decision trees)
As per claim 16. The non-transitory storage medium as recited in claim 11, Favis teaches wherein the target variable value predictions comprise a digital actor, a particular language, a particular accent, and a particular emotion. [0012], [0044], [0047], [0048]; Claim 8, Claim 18 (teaches preferences of the user and factoring in gender, location, and preferred accents/location/dialect)
As per claim 17. The non-transitory storage medium as recited in claim 11, Wang teaches wherein the model performs a respective softmax activation to obtain each of the predicted target values. [0090] (prediction function by softmax)
As per claim 19. Favis teaches the non-transitory storage medium as recited in claim 11, wherein the pre-processing comprises separating the target variables from other elements of the dataset. [0047], [0048], [0059] (preference data/target variables: choice of accent, language) versus metadata (location, gender, sex) [0012], [0032], [0044]
As per claim 20. Favis teaches the non-transitory storage medium as recited in claim 11, wherein the digital human communicates with the user. [0057] (robot interacts with users)
Claims 8 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Favis (US 2019/0143527) in view of Wang (US 2024/0274122), and further in view of Cho (US 2023/0230320).
As per claim 8. Cho teaches the method as recited in claim 1, wherein the input is received by the model through a single input layer of the model. [0043] (single layer neural network)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the teaching of Cho with the prior art because a single input layer is a widely used neural network construction.
As per claim 18. Cho teaches the non-transitory storage medium as recited in claim 11, wherein the input is received by the model through a single input layer of the model. [0043] (single layer neural network)
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRISTOPHER BROWN whose telephone number is (571)272-3833. The examiner can normally be reached M-F 8-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Luu Pham can be reached at (571) 270-5002. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CHRISTOPHER J BROWN/Primary Examiner, Art Unit 2439