DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
Claims 16-20 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention.
Claims 16-20 recite “The method of claim 7,” but claim 7 does not recite a method. Because it cannot be determined which claim is being referenced, the scope of claims 16-20 is unclear. The Examiner believes this is a typographical error and that the claims are meant to recite “The method of claim 11.” For purposes of examination, the Examiner will treat claims 16-20 as depending from claim 11.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 6, 8, 9, 11, 16, 18, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Khorshid (US 2024/0045704) in view of Sagar et al. (US 2022/1018510; hereinafter “Sagar”).
Regarding claim 1, Khorshid discloses A system for generating interactive avatar-based conversational experiences (“interact with the assistant system … in stateful and multi-turn conversations,” para. 5), comprising: a computing device comprising at least a memory and a processor; a plurality of programming instructions that, when operating on the processor (“to execute instructions, processor may retrieve (or fetch) the instructions from … memory,” para. 196), cause the computing device to: implement an avatar creation tool allowing users to generate unique avatars using a user interface system which interacts with machine learning subsystems to direct avatar creation to the user's specifications (“the XR assistant avatar may be customized according to the user's context such that the user may find the assistant system more intelligent and interactive,” para. 10); employ one or more avatar animation machine learning subsystems configured to automatically generate avatar speech, body movements, gestures, and emotional responses in real-time based on the context of a conversation with a user (“the XR assistant avatar may be based on one or more of voice, speech, emotion, tone, pitch, appearance, size, shape, clothing, orientation, position, depth, movement, gesture, facial expression … The customizations may include modifying the voice, tone, and/or pitch of the XR assistant avatar … The customizations may also include modifying the movement, gestures, and/or facial expressions of the XR assistant avatar,” para. 141; “such customization may change over time as the AR/VR system learns more about the user,” para. 145); operate one or more avatar personality machine learning subsystems configured to direct the avatar's voice, personality traits, and emotional responses based on the context of the conversation and the perceived emotional state of the user (“customize/morph an AR/VR virtual assistant avatar … the customization may include changing the look and feel, voice, emotions, or other attributes,” para. 134; “If the user is excited, the XR assistant avatar may be excited too. If the user is angry, the XR assistant avatar may be apologetic. In other words, the XR assistant avatar may not merely mimic the user's emotions or actions, but rather react in a customized way to improve the user's experience,” para. 144); and integrate one or more large language models (LLMs) configured to generate contextually relevant and emotionally responsive conversations (“a natural-language generation (NLG) component … may use different language models and/or language templates to generate natural-language outputs,” para. 109; “the customization may include changing the look and feel, voice, emotions, or other attributes. The customization of the XR assistant avatar may be triggered by … user's actions or context,” para. 134).
Khorshid does not disclose execute a lip-sync animation subsystem configured to map phonetic voice patterns to viseme mouth shapes in real-time, including dynamic viseme chart generation and transition rule encoding.
In the same art of avatar animation, Sagar teaches execute a lip-sync animation subsystem (“lip sync animations,” para. 101) configured to map phonetic voice patterns to viseme mouth shapes in real-time (“blending between the animation weights of the Model Viseme and the animation weights of the Animation Snippet to animate the phoneme in context,” para. 10), including dynamic viseme chart generation (“develop viseme sequences and populate a Lookup Table,” para. 28; “The customizability of the viseme example poses and Gaussian function modifiers allow the user to adjust avatars' speaking styles and personalities,” para. 113) and transition rule encoding (“introduces variation in the dynamics of the viseme transitions and incorporates personal styles,” para. 28).
Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the teachings of Sagar to Khorshid. The motivation would have been “to improve real-time generation of speech animation” (Sagar, para. 7) and “To realistically animate a String of speech” (Sagar, para. 36).
Regarding claim 6, the combination of Khorshid and Sagar renders obvious an integration subsystem configured to enable the avatar to interact with user interface elements and data structures of a third-party application (“integrate first-party and third-party applications into the assistant system,” Khorshid, para. 11).
Regarding claim 8, the combination of Khorshid and Sagar renders obvious a two-way messaging architecture configured to enable real-time interactions between avatars and users (“enable the user to interact with the assistant system … in stateful and multi-turn conversations to receive assistance,” Khorshid, para. 5).
Regarding claim 9, the combination of Khorshid and Sagar renders obvious dynamically evolve the avatar's personality based on user interactions over time (“the capability of reasoning may enable the assistant system to, for example, pick up previous conversation threads at any point in the future, synthesize all signals to understand micro and personalized context, learn interaction patterns and preferences from users' historical behavior and accurately suggest interactions that they may value, generate highly predictive proactive suggestions based on micro-context understanding, understand what content a user may want to see at what time of a day, and/or understand the changes in a scene and how that may impact the user's desired content,” Khorshid, para. 83; “customize/morph an AR/VR virtual assistant avatar … the customization may include changing the look and feel, voice, emotions, or other attributes,” Khorshid, para. 134; “If the user is excited, the XR assistant avatar may be excited too. If the user is angry, the XR assistant avatar may be apologetic. In other words, the XR assistant avatar may not merely mimic the user's emotions or actions, but rather react in a customized way to improve the user's experience,” Khorshid, para. 144).
Regarding claim 11, it merely broadens the limitations of claim 1, and it is therefore rejected using the same citations and rationales set forth in the rejection of claim 1.
Regarding claims 16, 18, and 19, they are rejected using the same citations and rationales described in the rejections of claims 6, 8, and 9, respectively.
Claims 2 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Khorshid and Sagar, and further in view of Lembersky et al. (US 2019/0095775; hereinafter “Lembersky”).
Regarding claim 2, the combination of Khorshid and Sagar does not disclose the utilization of one or more avatar asset generator machine learning subsystems configured to create unique avatar configurations by allowing users or clients to upload photographs or videos of a human persona or illustrated character.
In the same art of avatar animation, Lembersky teaches the utilization of one or more avatar asset generator machine learning subsystems configured to create unique avatar configurations by allowing users or clients to upload photographs or videos of a human persona or illustrated character (“the following description details the methodology a client can use to upload unique visual and audio files into a system to create an interactive and responsive ‘face’ and/or ‘character’ that can be utilized for a variety of purposes,” para. 34; “the images files may capture an image (e.g., color, infrared, etc.) of a face, body, etc. of the user,” para. 101).
Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the teachings of Lembersky to the combination of Khorshid and Sagar. The motivation would have been “to engage the users in the most natural and human like way” (Lembersky, para. 6).
Regarding claim 12, it is rejected using the same citations and rationales described in the rejection of claim 2.
Claims 3, 10, 13, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Khorshid and Sagar, and further in view of Soon-Shiong et al. (US 2024/0264996; hereinafter “Soon-Shiong”).
Regarding claim 3, the combination of Khorshid and Sagar does not disclose the management of a blockchain-based system for avatar instance management and usage tracking.
In the same art of AI personal assistants, Soon-Shiong teaches the management of a blockchain-based system for avatar instance management and usage tracking (“a digital token can be implemented to protect an asset (e.g., virtual assets, physical or real-world assets, digital assets, asset rights, property, real-estate, artwork, etc.) … The asset can represent digital data, such as artificial intelligence (AI) data (e.g., training data, trained models, large language models (LLMs), AI personal assistants, etc.) … Any transactions with or changes to the digital token, such as to the ownership, can be tracked. An example of this tracking involves the use of a blockchain,” para. 36).
Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the teachings of Soon-Shiong to the combination of Khorshid and Sagar. The motivation would have been for “efficient computer-based indexing via digital tokens” (Soon-Shiong, para. 2).
Regarding claim 10, the combination of Khorshid and Sagar does not disclose a watermarking system for limiting and tracking instances of avatar configurations, including blockchain-based unique identifiers.
In the same art of AI personal assistants, Soon-Shiong teaches a watermarking system for limiting and tracking instances of avatar configurations, including blockchain-based unique identifiers (“a non-fungible token (NFT) is an example of a digital token that can include data stored in or on a blockchain,” para. 4; “as new personal assistants are created their corresponding NFTs may be compared to each other to determine similarity or lack thereof. Consider a case where a celebrity has trained an AI model on their own works (e.g., ChatGPT, LLMs, etc.). The celebrity may monetize their AI by offering a license, subscription, lease, or other access to their trained AI model. Should another person attempt to create a similar AI model, the similar AI model may be restricted from being minted as an NFT,” para. 230; “The comparison can span multiple dimensions and modalities (e.g., … digital watermarks, etc.), etc. depending on the digital asset type and can return a similarity score,” para. 50).
Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the teachings of Soon-Shiong to the combination of Khorshid and Sagar. The motivation would have been for “efficient computer-based indexing via digital tokens” (Soon-Shiong, para. 2).
Regarding claims 13 and 20, they are rejected using the same citations and rationales described in the rejections of claims 3 and 10, respectively.
Claims 4, 7, 14, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Khorshid and Sagar, and further in view of Mahindru et al. (US 2023/0140553; hereinafter “Mahindru”).
Regarding claim 4, the combination of Khorshid and Sagar does not disclose a reporting and analytics module for tracking and analyzing avatar usage and performance.
In the same art of virtual assistants, Mahindru teaches a reporting and analytics module for tracking and analyzing avatar usage and performance (“the disclosed techniques can be applied to an AI chatbot used by an online merchant or service provider to perform some automated customer service interactions,” para. 23; “a detection component that detects a change associated with a performance metric included amongst performance metrics tracked for an enterprise system in association with usage of an AI model,” para. 4; “Resource usage can be monitored, controlled and/or reported, providing transparency for both the provider and consumer of the utilized service,” para. 46).
Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the teachings of Mahindru to the combination of Khorshid and Sagar. The motivation would have been “for updating/refining the data model over time using continuous learning based on evaluating whether and how implemented solutions impact the relevant performance metrics” (Mahindru, abstract).
Regarding claim 7, the combination of Khorshid and Sagar does not disclose a deployment and asset management subsystem configured to track usage of avatar configurations and their individual components in live deployments.
In the same art of virtual assistants, Mahindru teaches a deployment and asset management subsystem configured to track usage of avatar configurations and their individual components in live deployments (“the disclosed techniques can be applied to an AI chatbot used by an online merchant or service provider to perform some automated customer service interactions,” para. 23; “address the interplay between AI model and business KPIs after the AI model is deployed in the field,” para. 2; “Resource usage can be monitored, controlled and/or reported, providing transparency for both the provider and consumer of the utilized service,” para. 46).
Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the teachings of Mahindru to the combination of Khorshid and Sagar. The motivation would have been “for updating/refining the data model over time using continuous learning based on evaluating whether and how implemented solutions impact the relevant performance metrics” (Mahindru, abstract).
Regarding claims 14 and 17, they are rejected using the same citations and rationales described in the rejections of claims 4 and 7, respectively.
Claims 5 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Khorshid and Sagar, and further in view of Ghosh et al. (US 2025/0191067; hereinafter “Ghosh”).
Regarding claim 5, the combination of Khorshid and Sagar does not disclose an authorization and licensing module using hashed authentication tokens for secure avatar deployment.
In the same art of managing digital assets, Ghosh teaches an authorization and licensing module using hashed authentication tokens for secure avatar deployment (“a non-fungible asset (e.g., NFT, real estate, domain names, event tickets, in-game items like avatars, etc.),” para. 139; “authorizing a user of the remote device to access at least a portion of the protected NFT,” para. 5; “a license code of the software application,” para. 169; “the token authenticator processor can be integrated with or store a hash,” para. 101).
Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the teachings of Ghosh to the avatars of the combination of Khorshid and Sagar. The motivation would have been “to protect asset exchanges” (Ghosh, para. 4).
Regarding claim 15, it is rejected using the same citations and rationales described in the rejection of claim 5.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Ryan McCulley whose telephone number is (571) 270-3754. The examiner can normally be reached Monday through Friday, 8:00 am - 4:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kee Tung, can be reached at (571) 272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/RYAN MCCULLEY/Primary Examiner, Art Unit 2611