Prosecution Insights
Last updated: April 17, 2026
Application No. 18/765,822

SYSTEM AND METHOD FOR FACILITATING INTERACTIONS WITH DIGITAL VIRTUAL CLONE

Non-Final OA — §102, §103
Filed: Jul 08, 2024
Examiner: HE, WEIMING
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: unknown
OA Round: 1 (Non-Final)
Grant Probability: 46% (Moderate)
OA Rounds: 1-2
To Grant: 3y 4m
With Interview: 60%

Examiner Intelligence

Career Allow Rate: 46% (grants 46% of resolved cases; 190 granted / 410 resolved; -15.7% vs TC avg)
Interview Lift: +13.8% (a moderate, roughly +14% lift, based on resolved cases with interview)
Typical Timeline: 3y 4m average prosecution; 40 applications currently pending
Career History: 450 total applications across all art units

Statute-Specific Performance

§101: 7.4% (-32.6% vs TC avg)
§103: 59.2% (+19.2% vs TC avg)
§102: 12.4% (-27.6% vs TC avg)
§112: 15.0% (-25.0% vs TC avg)
Tech Center average is an estimate. Based on career data from 410 resolved cases.
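As a consistency check on the statute-specific figures, the implied Tech Center average can be recovered from each examiner rate and its "vs TC avg" delta. This is a minimal illustrative sketch, not the dashboard's actual methodology (which is not disclosed):

```python
# Recover the implied Tech Center average from each statute-specific
# rate and its "vs TC avg" delta. Illustrative arithmetic only; the
# dashboard's exact methodology is an assumption here.
examiner_rate = {"101": 7.4, "103": 59.2, "102": 12.4, "112": 15.0}
delta_vs_tc   = {"101": -32.6, "103": 19.2, "102": -27.6, "112": -25.0}

implied_tc_avg = {s: round(examiner_rate[s] - delta_vs_tc[s], 1)
                  for s in examiner_rate}
print(implied_tc_avg)  # every statute works out to 40.0
```

That all four statutes imply the same 40.0% suggests the deltas are computed against a single TC-wide estimate rather than per-statute averages.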

Office Action

Rejections: §102, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 10, 15-16 and 18 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Kasaba (US 2022/0385700 A1).

As to Claim 1, Kasaba teaches A method for facilitating interactions with a digital avatar, the method comprising: generating, a plurality of digital avatars for a plurality of persons, wherein the plurality of digital avatars are based on artificial intelligence, wherein each of the plurality of digital avatars is configured to mimic a personality of a respective person of the plurality of persons (Kasaba discloses “In one embodiment, avatar database 116 may include one or more data collections containing information associated with each subject person and their associated avatar that may be digitally rendered using system 100” in [0055]; “CGI rendering module 112 may receive information or data about the subject person from AI engine 102, including information or data about the subject person stored in avatar database 116, that allows CGI rendering module 112 to digitally render and generate an interactive avatar of the subject person that physically resembles the subject person and that is configured to mimic or emulate the speech, mannerisms, and inflections of the subject person” in [0056]); and establishing, autonomously, interactions between two digital avatars of the plurality of digital avatars and between a first person and a digital avatar of the plurality of digital avatars, the digital avatar is of a second person, wherein the first person and the second person are different (Kasaba discloses “The example embodiments allow one or more users to virtually interact with the digitally rendered avatar of the subject person in a way that mimics or emulates the speech, mannerisms, and inflections of the subject person” in [0042]; “In this embodiment, first user 706 has made a request to start an interactive session with a digital avatar of subject person 702. In response, system 100 generates and renders a first interactive avatar 716 through a first avatar interface 714 to allow first user 706 to interact with first interactive avatar 716 of subject person 702” in [0092]; “Referring now to FIG. 11, a scenario 1100 of an example embodiment of users interacting with multiple interactive digitally rendered avatars of different subject people during a video 1102 is shown” in [0112].)

As to Claim 10, Kasaba teaches The method of claim 1, wherein the step of mimicking comprises talking and expressing emotions like the respective person while communicating, and learning traits, habits, and interests of the respective person (Kasaba discloses “These training sessions may be used to refine the interactive avatar of the subject person to accurately mimic or emulate the speech, mannerisms, and inflections of the subject person” in [0058]; “With this arrangement, interactive digital avatars of the same subject person at different ages can accurately represent the physical appearance and speech, mannerisms, and inflections of the subject person at different time periods in the subject person's life” in [0061]; “AI engine 102 may use video data 212 to accurately mimic facial expressions, hand movements, body posture, and other physical mannerisms of the subject person” in [0064]; see also [0066].)
As to Claim 15, Kasaba teaches The method of claim 1, wherein the method further comprises: receive, images and video feeds of person A and person B; and creating virtual images and videos simulating persons A and B together in real life (Kasaba discloses “In some embodiments, a video, such as video 1102, may include multiple subject persons, each of which has an associated interactive digital avatar stored in avatar database 116. As shown in scenario 1100 of FIG. 11, video 1102 may include at least two different subject persons, subject person A and subject person B” in [0112], see also [0113].)

As to Claim 16, Kasaba teaches The method of claim 1, wherein the digital avatar of the second person is configured to synthesize voice of the second person for voice communication with the first person (Kasaba discloses “Audio data 210 can include one or more voice files or recordings of the subject person speaking or reading so that AI engine 102 may use audio data 210 to accurately mimic the speech, voice inflections, and manner of speaking of the subject person. For example, audio data 210 may include archived speeches by the subject person, recorded audio messages, songs, or readings by the subject person. Additionally, audio data 210 may also include audio files of the subject person obtained from video data 212.” in [0063]; see also Fig 11-14.)

As to Claim 18, Kasaba teaches The method of claim 1, wherein one or more of the digital avatars are configured to replace one or more actors in a feature film (Kasaba discloses “Referring now to FIG. 7, an example embodiment of a scenario 700 in which a plurality of users are engaging with an interactive digitally rendered avatar of a subject person is shown. In this embodiment, a subject person 702, such as a celebrity or politician, is broadcasting or streaming a video to a plurality of users 704. In scenario 700, subject person 702 may be transmitting a pre-recorded video or may be live.
For example, the video may be a panel discussion or talk, a movie or television program, a political rally, a sporting event, a concert, or any other live or recorded activity or event that is intended for an audience” in [0088], see also [0092].)

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 2-5 are rejected under 35 U.S.C. 103 as being unpatentable over Kasaba (US 2022/0385700 A1) in view of Elbaz (US 2023/0126931 A1).

As to Claim 2, Kasaba teaches The method of claim 1. The combination of Elbaz further teaches wherein each of the plurality of digital avatars incorporates features comprising color schemes, textures, and lighting to generate accurate and photo-realistic avatars by a generator, the generator is machine learning based (Kasaba discloses “Image data 214 can include one or more image files or photographs of the subject person so that AI engine 102 may use image data 214 to accurately render and generate the physical characteristics of at least the face/head or the partial or full body of the subject person from a variety of different angles and perspectives” in [0065]; “For example, personalization data 310 may include the user's name, birthday, hair or eye color… can be used by AI engine 102 to personalize or customize interactions between the user and the digital avatar of the subject person” in [0072].
Elbaz further discloses “The GUI tool may accept variability parameters for use in controlling attributes of the generated synthetic image data, including, for example clutter level of the background of an image, lighting type, lighting direction, time of day, lighting source color…” in [0031], see also [0077]; “In some embodiments, each of the one or more variability inputs may relate to at least one variable characteristic associated with target object representations generated based on the at least one selected target object type. In some embodiments, the at least one variable characteristic may include at least one of target object size, shape, aspect ratio, texture, orientation, material, number, or color.” in [0158].)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the invention of Kasaba with the teaching of Elbaz so as to apply one or more variable characteristics to target object representation (Elbaz, [0158]).

As to Claim 3, Kasaba in view of Elbaz teaches The method of claim 2, wherein the generator is trained using a training dataset of images to learn how to generate avatars that mimic real people (Kasaba discloses “Additionally, AI engine 102 may also execute one or more training sessions using CGI rendering module 112 to generate a digital representation of the subject person for each subject person's avatar. These training sessions may be used to refine the interactive avatar of the subject person to accurately mimic or emulate the speech, mannerisms, and inflections of the subject person. In some embodiments, these training processes or sessions may be implemented using machine-learning techniques.” in [0058], see also [0065].)
As to Claim 4, Kasaba in view of Elbaz teaches The method of claim 3, wherein the method further comprises: receiving by the generator, feedback for one or more avatars of the plurality of digital avatars, and improving the performance of the generator based on the feedback (Elbaz discloses “The newly generated images may be used to continue refining training of models 151. Accordingly, system 100 may serve as an active feedback loop to automatically identify weaknesses in the capabilities of a trained model and quickly and automatically develop a refined dataset to update training datasets 152 to remedy performance weakness of the trained model” in [0320]; “The new images can be used to refine the training of the model to improve its performance (i.e., to improve the model's accuracy in recognizing open car doors)” in [0321].)

As to Claim 5, Kasaba in view of Elbaz teaches The method of claim 4, wherein the training dataset of images comprises high-quality images that depict different features comprising eye shape, hair texture, and facial structure (Kasaba discloses “Image data 214 can include one or more image files or photographs of the subject person so that AI engine 102 may use image data 214 to accurately render and generate the physical characteristics of at least the face/head or the partial or full body of the subject person from a variety of different angles and perspectives” in [0065]. Please note that claim term “high-quality” is a relative term and applicant doesn’t define what high-quality refers to in the specification. Elbaz further discloses “higher quality training images” in [0167]; “In some embodiments, the target object type input may indicate a human, and the one or more image parameter variability controls may enable anatomical variation in the synthetic dataset of one or more of eye color, eye shape, hair color, hair length, hair texture, face shape, weight, gender, height, skin tone, facial hair, or clothing” in [0180].)
Claims 6-9 are rejected under 35 U.S.C. 103 as being unpatentable over Kasaba in view of Elbaz and Suresh et al. (US 2020/0050893 A1).

As to Claim 6, Kasaba in view of Elbaz teaches The method of claim 5. The combination of Suresh further teaches wherein the method further comprises: generating the training dataset, wherein the training dataset is generated by: cleaning the high-quality images, resizing the high-quality images, converting the high-quality images to a format readable using machine learning, and labeling the high-quality images to help the generator learn to differentiate between the different features of the high-quality images (Suresh discloses “During training, algorithms can be used to clean images for maritime applications” in [0130]; “During preprocessing, a set of labeled images in a cloud storage bucket are preprocessed to extract the image features from the penultimate layer of the Inception network… Each image is processed to produce its feature representation in the form of a k-dimensional vector of floats (e.g., 2,048 dimensions). The preprocessing can include converting the image format, resizing images, and/or running the converted image through a pre-trained model to get the embeddings” in [0204].)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the invention of Kasaba and Elbaz with the teaching of Suresh so as to preprocess a set of labeled images to extract the feature during a machine learning procedure.

As to Claim 7, Kasaba in view of Elbaz and Suresh teaches The method of claim 6, wherein the method further comprises: upon preparing the training dataset, selecting a machine learning model that fits requirements and data size (Suresh discloses “Model training may generate one or more trained models, which may then be sent to model selection, which is performed using validation data.
The results that are produced by each one or more trained models for the validation data that is input to the one or more trained models may be compared to the labels assigned to the validation data to determine which of the models is the best model” in [0151]; “Three hyperparameters can control the size of the output volume of the convolutional layer: the depth, stride and zero-padding… The size of this zero-padding is a third hyperparameter. Zero padding provides control of the output volume spatial size. In particular, sometimes it is desirable to preserve exactly the spatial size of the input volume” in [0179].)

As to Claim 8, Kasaba in view of Elbaz and Suresh teaches The method of claim 7, wherein the machine learning model is based on Generative Adversarial Networks, wherein the Generative Adversarial Networks are configured to generate synthetic data that is similar to real data used in training (Elbaz discloses “The system may further use generative models (such as generative adversarial networks) to generate datasets (including training and test datasets)” in [0336]; “In some embodiments, the plurality of images of the test dataset may be synthetically generated… To properly evaluate the ML model, the ML model should be tested on a combination of synthetic images and real images, to help ensure that there is not something in the synthetic images (e.g., a portion of a synthetic image that is rendered in a particular manner) that provides testing metrics that are different from "real world" testing metrics” in [0325].)
As to Claim 9, Kasaba in view of Elbaz and Suresh teaches The method of claim 8, wherein the method further comprises: configuring hyperparameters of the machine learning model (Suresh discloses “Three hyperparameters can control the size of the output volume of the convolutional layer: the depth, stride and zero-padding” in [0179]); and training, using the machine learning model and the training dataset, the generator for generating the plurality of digital avatars (Kasaba discloses “In some embodiments, a data collection and/or training process may be executed by AI engine 102 of system 100 to obtain, sort, analyze, and process the various data forming plurality of data collections 200 that is stored in avatar database 116 associated with each avatar” in [0058].)

Claim 11 is rejected under 35 U.S.C. 102(a)(1) as being anticipated by Kasaba in view of Dolignon et al. (US 2020/0012916 A1).

As to Claim 11, Kasaba teaches The method of claim 1. The combination of Dolignon further teaches wherein the digital avatar is configured to help a person purchase goods and services and understand day-to-day life needs of the person (Dolignon discloses “The virtual assistant kiosk 104 includes one or more of a holographic projector 120 for displaying a holographic representation 122 and audio playback devices 124, e.g., speakers, for providing audio output to the user 116. A holographic representation 122 can include, for example, a human face/head avatar, a map, an object, e.g., a clothing item, a food item, a book, a furniture item. The holographic representation 122 can be animated to interact with the user 116.” in [0031]; “Conversational goals can include providing a user with help to purchase an item, directions to a location, suggested activities, generic information, or the like” in [0033].)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the invention of Kasaba with the teaching of Dolignon so as to provide an interaction between avatar and user to help user for shopping application.

Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Kasaba in view of Krawczyk-Wasilewska (Matchmaking through avatars: Social aspects of online dating).

As to Claim 12, Kasaba teaches The method of claim 1, wherein the first person interacts with the digital avatar of the second person to understand the likes and dislikes of the second person for dating in a non-judgmental environment avoiding hesitation and social rejection (Krawczyk-Wasilewska discloses “Virtual dating combines online dating with online gaming. Virtual dating involves people using their avatars to interact in a virtual space that resembles a real-life dating environment, complete with photo-realistic 3D avatars and scenic props, where they can listen to music and play various games that provoke online conversation between the people behind the avatars. For example, in Second Life individuals can meet, chat, and flirt in a romantic virtual café in a city, such as New York, Paris, Cracow, or a tropical island resort. They can eat and drink in a virtual restaurant with their animated date, even though in reality the users are relaxing at home in their pyjamas, with no more than their mouse and keyboard to keep them busy and no risk of ruining their best outfits or succumbing to the temptations of too many gourmet calories. The animated screen dolls do all the usual human work of building up a relationship. They react to each other’s every move, they gesture and speak at the whim of their owners, and when the date is done they even kiss goodbye, if the proceedings went as well as the pair of dreamers behind the avatars had hoped.” at p. 92.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the invention of Kasaba with the teaching of Krawczyk-Wasilewska so as to use virtual dating to interact the people in a virtual space that resembles a real-life dating environment.

Claims 13-14 are rejected under 35 U.S.C. 103 as being unpatentable over Kasaba in view of Krawczyk-Wasilewska and Marder (The Avatar's new clothes: Understanding why players purchase nonfunctional items in free-to-play games).

As to Claim 13, Kasaba in view of Krawczyk-Wasilewska teaches The method of claim 12, wherein the method further comprises: ordering, by the digital avatar of the second person, an item from an online store, for the first person (Marder discloses “Gifts were given to players known within-game but also to those to whom they already had strong connection offline, illustrated by the following quote. “So, there are 2 reasons why I gift a skin, one if a friend has gifted me a particularly expensive one, I feel bad that they spent money on me, so I usually gift them back. Another one is I use it as a present for like birthdays. I have given skins to cheer my boyfriend up. If he is particularly upset, I bought him a skin he always wanted.” (F3, student)” at p. 77.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the invention of Kasaba and Krawczyk-Wasilewska with the teaching of Marder so as to give a gift to the players known within-game in the game application.
As to Claim 14, Kasaba in view of Krawczyk-Wasilewska and Marder teaches The method of claim 13, wherein the digital avatar is configured for presenting a product for sale, wherein the digital avatar is configured to mimic a look, body, voice, and personality of a person desired by a buyer for selling the product (Kasaba discloses “These training sessions may be used to refine the interactive avatar of the subject person to accurately mimic or emulate the speech, mannerisms, and inflections of the subject person. In some embodiments, these training processes or sessions may be implemented using machine-learning techniques” in [0058]; “AI engine 102 may use image data 214 to accurately render and generate the physical characteristics of at least the face/head or the partial or full body of the subject person from a variety of different angles and perspectives” in [0065]; “Examples of a subject person include, but are not limited to: celebrities, politicians or elected officials, athletes, scholars, teachers or professors, authors, trainers, experts in various fields, family members, historical figures, private individuals, or any other person” in [0068]; “For example, personalization data 310 may include the user's name, birthday, hair or eye color, names of family members, the user's preferences (e.g., nicknames, topics of conversation, greeting types, favorite subjects, etc.), and other information that can be used by AI engine 102 to personalize or customize interactions between the user and the digital avatar of the subject person” in [0072].)

Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Kasaba in view of Hamilton, II et al. (US 2010/0057478 A1).

As to Claim 17, Kasaba teaches The method of claim 1.
The combination of Hamilton further teaches wherein the digital avatar is further configured to detect malicious applications on a smartphone (Kasaba discloses “User interface 500 may also be embodied in a mobile device 522, such as a smartphone or tablet computer, on which the user may engage with interactive digital avatar of the subject person through avatar interface 114” in [0083]. Hamilton further discloses “Another embodiment may be a system in a virtual universe that has at least one avatar, for scanning an item when triggered, the system may have a trigger identification unit for identifying an item scan trigger, an item scanner for scanning the item to determine whether the item is malicious…” in [0007].)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the invention of Kasaba with the teaching of Hamilton so as to optimally scan an item for malicious or malfunctioning code, or scripts, or for malicious or malfunctioning geometries to minimize further contamination of the virtual universe.

Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Kasaba in view of Jette et al. (US 2022/0215469 A1).

As to Claim 19, Kasaba teaches The method of claim 1.
The combination of Jette further teaches wherein the digital avatar is configured to assist in tokenization of an asset, wherein the assisting comprises: suggesting terms and conditions of a smart contract based on a type of asset, estimated a value of the asset, and geo-location of the asset; and creating a smart contract based on blockchain technology (Jette discloses “the SAN datastore 216 may comprise information relating to a loan backed by an asset including, but not limited to, the value of the loan, various physical or digital loan-related documents, asset appraisal information, public records relating to the ownership of an asset, party and counter party identification, title information, or information pertaining to the nature of the asset, as disclosed further herein” in [0072]; “a tokenized SAN smart contract may govern a transaction between a market maker, one or more administrators, and an originator, and may include all information pertaining to a loan origination, including: a loan amount (in USD), an appraised asset value, wherein the asset is collateral to the loan and verified by one or more oracles, a combined loan-to-value, a zip code corresponding to the asset, a unique smart contract identification, the identification of all parties participating in the transaction, the market value of tokens created by the tokenized SAN system 120, and the number of tokens created by the tokenized SAN system 120.” in [0075]; “To develop a smart contract, parts of the terms that make up a traditional contract are implemented in software code and uploaded to the blockchain, producing a decentralized smart contract that does not rely on a third party for recordkeeping or enforcement” in [0060].) 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the invention of Kasaba with the teaching of Jette so as to create and distribute cryptographically secure, digital tokens representing equity in assets corresponding to loan agreement (Jette, Abstract).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to WEIMING HE whose telephone number is (571)270-1221. The examiner can normally be reached on Monday-Friday, 8:30am-5:00pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tammy Goddard, can be reached on 571-272-7773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/WEIMING HE/
Primary Examiner, Art Unit 2611

Prosecution Timeline

Jul 08, 2024
Application Filed
Mar 04, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications with similar technology granted by this same examiner

Patent 12567135
MULTIMEDIA PLAYBACK MONITORING SYSTEM AND METHOD, AND ELECTRONIC APPARATUS
Granted Mar 03, 2026 (2y 5m to grant)
Patent 12561876
System and method for an audio-visual avatar creation
Granted Feb 24, 2026 (2y 5m to grant)
Patent 12514672
System, Method And Software Program For Aiding In Positioning Of Objects In A Surgical Environment
Granted Jan 06, 2026 (2y 5m to grant)
Patent 12494003
AUTOMATIC LAYER FLATTENING WITH REAL-TIME VISUAL DEPICTION
Granted Dec 09, 2025 (2y 5m to grant)
Patent 12468949
SYSTEMS AND METHODS FOR FEW-SHOT TRANSFER LEARNING
Granted Nov 11, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 46%
With Interview: 60% (+13.8%)
Median Time to Grant: 3y 4m
PTA Risk: Low

Based on 410 resolved cases by this examiner. Grant probability derived from career allow rate.
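The headline projections are simple functions of the examiner's career statistics. A minimal sketch of that arithmetic, assuming an additive interview-lift model (an assumption, though it matches the displayed figures):

```python
# Reproduce the dashboard's headline figures from the underlying counts.
# Assumes a simple additive-lift model; illustrative only, not the
# dashboard's disclosed methodology.
granted, resolved = 190, 410

career_allow_rate = granted / resolved   # 0.4634... -> shown as 46%
interview_lift = 0.138                   # +13.8 percentage points

with_interview = career_allow_rate + interview_lift  # 0.6014... -> shown as 60%

print(f"{career_allow_rate:.0%}")  # 46%
print(f"{with_interview:.0%}")     # 60%
```

Adding the lift in percentage points (rather than scaling multiplicatively) is what turns 46% into the displayed 60% figure.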
