DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
In response to the Non-Final Office Action mailed 5/19/2025, Applicant filed an amendment on 11/19/2025. In this reply, Applicant has amended independent claims 1, 11, and 20 to add steps for receiving a generalized persona specification to be synthesized; interpreting one or more features from the generalized persona specification; selecting, for each attribute, an algorithm to generate that attribute from its corresponding feature input, wherein the selected algorithm is either a static mapping algorithm or a machine learning algorithm chosen based on a predetermined value associated with each attribute, and wherein each algorithm has been further defined in the claims; inputting the corresponding feature input into the selected algorithm; and generating the set of attributes using the selected algorithm.
Applicant has also argued that the applied prior art fails to teach the claimed invention as currently amended (Remarks, Pages 15-16). These arguments have been fully considered but are moot in view of the new grounds of rejection, which were necessitated by the amended claims and rely further on Khan et al. (U.S. PG Publication: 2024/0193839 A1).
With regard to the 35 U.S.C. 112(b) rejections of claims 4-6, Applicant argues that the amendments to claims 4 and 6 address the concerns expressed in the rejection (Remarks, Page 12).
In response, because the amendments to claims 4 and 6 correct the noted antecedent basis issues, the 35 U.S.C. 112(b) rejection has been withdrawn.
With regard to the patent subject matter eligibility rejection of claims 1-20 under 35 U.S.C. 101, Applicant argues that independent claims 1, 11, and 20 are not directed to an abstract idea because the claimed processes allegedly cannot be performed by a human, mentally or otherwise. Applicant also contends that the claimed invention is directed to a practical application under Step 2A, Prong Two, in the field of online persona synthesis, in that it allows a user to input generalizations (Remarks, Pages 12-14).
In response, while it is noted that a number of steps of the amended independent claims can still practically be performed by a human (for example, receiving a generalized specification and interpreting that specification to generate attributes can be done by a sketch artist receiving a generalized description of a suspect from a law officer), the claimed invention selects specific machine-based algorithms based on attributes and proceeds to synthesize an avatar based on those attributes. As such, it is agreed that the invention set forth in claims 1, 11, and 20 includes a practical application, as also described in the specification, that renders these claims and their respective dependents patent eligible under Step 2A, Prong Two, of the 2019 Patent Subject Matter Eligibility Guidance (2019 PEG). Accordingly, the 35 U.S.C. 101 rejection has been withdrawn.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang et al. (U.S. PG Publication: 2024/0129437 A1) in view of Khan et al. (U.S. PG Publication: 2024/0193839 A1).
With respect to Claim 1, Zhang discloses:
One or more non-transitory computer-readable media storing computer-executable instructions that upon execution cause one or more processors to perform acts comprising (non-transitory computer-readable medium storing computer instructions, Paragraph 0083):
receiving a generalized persona specification for a persona to be synthesized to provide an online presence (user "request" of the computer system to generate a "video representation" or "avatar" for use as an online presence (e.g., during an online meeting) wherein the request includes generalized information such as the description or user input label of an event where the avatar would be used instead of specific attributes of that avatar (e.g., a specific mention of the type of attire of the avatar), Paragraphs 0022, 0033, 0035, and 0042-0043);
interpreting one or more features from the generalized specification (the generalized specification of the avatar can be interpreted to generate a plurality of features, such as those relating to clothing outfit, attire, posture, background, etc., Paragraphs 0022, 0033, 0035, 0039 (describing features and corresponding probabilities), 0042-0043, and 0048);
identifying a set of attributes to generate for the persona (the features that are identified relate to specific avatar attributes (e.g., clothing, attire, posture, etc.) and are considered with respect to a probability determination, Paragraphs 0022, 0039, and 0048);
specifying at least one feature of the interpreted one or more features for each attribute in the set of attributes to be generated, wherein each attribute is a function of its at least one feature (avatar aspects specified based on the features (e.g., outfits, hairstyles, accessories), Paragraphs 0003, 0022, and 0035; and input features associated with m avatars that "comprise an avatar selection model," Paragraph 0039);
assembling at least one feature as a corresponding feature input for the each attribute in the set of attributes (the features are assembled into a feature set or vector "<f1, f2, ..., fn>" that is used as an input in avatar selection/generation, Paragraphs 0039 and 0048);
wherein the machine learning algorithm implements a trained model trained on past persona synthesis (the "avatar selection model" may take the form of a "neural network" or "machine learning algorithm" that generates the avatar attributes based upon the "n features," wherein the model is trained on past avatar selections/generations (see "trained based on previous avatar selections by the user" in Paragraph 0035), Paragraphs 0038-0039 and 0048);
inputting the corresponding feature input into the selected algorithm for each attribute in the set of attributes; generating the set of attributes using the machine learning algorithm (the attributes of the generated/synthesized computer avatar are produced by the machine learning algorithm from the input features, Paragraphs 0022, 0038-0039, and 0048); and
synthesizing the persona from the generated set of attributes (the generated/synthesized computer avatar is formed from the attributes output via the machine-learning model, Paragraphs 0022, 0038-0039, and 0048-0049; "rendering," Paragraph 0051).
Zhang does not teach selecting, for each attribute in the set of attributes, an algorithm to generate each attribute from its corresponding feature input, wherein the selected algorithm is either a static mapping algorithm or a machine learning algorithm selected based on a predetermined value associated with each attribute. Khan, however, discloses an avatar generator interface using a generalized description "that uses natural words and phrases" (see Fig. 2, Element 204 and Paragraph 0040), which selects between a machine learning model "such as a neural network" for various features mapping to different attributes (i.e., voice, appearance, and personality) and static "available default datasets, based on the content of the user input" (e.g., the user simply indicates the general name of an actor), where the user input indicates the various features and supplements any other available user features (Paragraphs 0023, 0026-0029, 0032-0033, 0036, and 0056-0057).
Zhang and Khan are analogous art because they are from a similar field of endeavor: persona generation from natural language data inferences. Thus, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to add the default attribute dataset mapping taught by Khan to the avatar generation of Zhang, which uses only machine learning based upon features, to provide the predictable result of further simplifying the process of generating a custom character (Khan, Paragraphs 0029 and 0040).
With respect to Claim 2, Zhang further discloses:
The one or more non-transitory computer-readable media of claim 1, wherein the acts further comprise storing the generated set of attributes in a database for later retrieval and rendering (storing "selections of avatars, and the context of the selections...such as attributes" for future selections and retrieval, Paragraphs 0033, 0039, and 0054; see also rendering at paragraph 0051).
With respect to Claim 3, Zhang further discloses:
The one or more non-transitory computer-readable media of claim 1, wherein the acts further comprise providing the generated set of attributes as machine learning training data for training the machine learning algorithm to select additional corresponding features for synthesizing personas (attributes associated with an avatar used to train the system to select avatar attributes/features using a "machine learning algorithm" and based on "previous avatar selections by the user" and "data collected over time," Paragraphs 0033-0035).
With respect to Claim 4, Zhang further discloses:
The one or more non-transitory computer-readable media of claim 3, wherein one or more attributes in the set of attributes are associated with an aspect of the persona, and wherein the aspect is a corresponding feature input for the one or more attributes (use of an ML model such as a neural network, Paragraph 0039, to select an avatar composed of corresponding high probability features/attributes, Paragraphs 0048-0049).
With respect to Claim 5, Zhang further discloses:
The one or more non-transitory computer-readable media of claim 4, wherein the aspect of the persona includes a voice associated with the persona, a face associated with the persona, a figure associated with the persona (posture that constitutes a specific body figure or shape, Paragraph 0022), or a choice of clothing associated with the persona (attire/outfit, Paragraphs 0003 and 0022).
With respect to Claim 6, Zhang further discloses:
The one or more non-transitory computer-readable media of claim 1, wherein the interpreting includes inferring the one or more features in the generalized persona specification using a logical programming technique (features can be inferred by a logical programming technique/classifier based upon information such as current activities or user actions, Paragraphs 0039 and 0046-0047; note that Khan also teaches logical inference based upon a general description (e.g., of an actor) in the claim 1 citations).
With respect to Claim 7, Zhang further discloses:
The one or more non-transitory computer-readable media of claim 1, wherein the set of attributes of the persona include one or more voice attributes that are based on a voice sample (avatar attributes include voice information such as tone, dialect, and words wherein the voice input includes samples such as in a frequency spectra captured by a microphone, Paragraphs 0036, 0039, and 0041).
With respect to Claim 8, Zhang further discloses:
The one or more non-transitory computer-readable media of claim 1, wherein the acts further comprise generating a confidence score that is associated with each attribute in the set of attributes for the specifying of the at least one feature for the attribute (features for avatar attribute selection that are associated "with regression probabilities" and used by a ML "avatar selection model" that can take the form of a "neural network," Paragraphs 0039 and 0048).
With respect to Claim 9, Zhang further discloses:
The one or more non-transitory computer-readable media of claim 8, wherein the attribute is associated with an aspect of the persona (attributes are associated with "m avatars" having aspects that make up each specific avatar, such as "business-attire," Paragraphs 0003, 0022, 0032, 0039, and 0042), and wherein the acts further comprise amalgamating the confidence score with one or more other confidence scores associated with one or more other attributes in the set of attributes of the aspect of the persona to generate an overall score for the aspect (gathering probability scores into a vector "<P1, P2, ..., Pm>" that is used to identify an overall probability score that is assessed against a threshold to render a specific persona/avatar, Paragraphs 0039 and 0048).
With respect to Claim 10, Zhang further discloses:
In response to the confidence score and the one or more other confidence scores being below a predetermined threshold, surfacing a warning regarding the aspect or blocking a superimposition of the aspect with one or more other aspects of the persona (individual probability scores of features of the "m avatars" are used to determine an overall probability, Paragraphs 0039 and 0048; Zhang implies that an avatar with a probability below a threshold will not be selected in the statement that the "most likely avatar must satisfy a threshold to be selected" (Paragraph 0048), where only the selected avatar is rendered, Paragraph 0051. Accordingly, the non-selection of a most-likely avatar whose score still falls below the threshold, with its combination of aspects (e.g., Paragraphs 0003 and 0022), serves as a blocking of the superimposition of those aspects into a single persona/avatar, which is not rendered due to the failure to satisfy the threshold).
Claim 11 contains subject matter similar to claim 1, and thus, is rejected under similar rationale. Furthermore, Zhang discloses system components in the form of a memory storing computer-executable software components and one or more processors (Paragraph 0083).
Claims 12-15 contain subject matter respectively similar to claims 2-5, and thus, are rejected under similar rationale.
Claims 16-19 contain subject matter respectively similar to claims 7-10, and thus, are rejected under similar rationale.
Claim 20 relates to the method practiced by a computer processor of claim 1 when instructions stored on the non-transitory computer-readable medium are executed, and thus, is rejected for reasons similar to claim 1. Claim 20 also contains subject matter similar to claim 2, and is also rejected for reasons similar to this claim.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAMES S WOZNIAK whose telephone number is (571)272-7632. The examiner can normally be reached 7-3, off alternate Fridays.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant may use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Flanders can be reached at (571)272-7516. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
JAMES S. WOZNIAK
Primary Examiner
Art Unit 2655
/JAMES S WOZNIAK/ Primary Examiner, Art Unit 2655