Prosecution Insights
Last updated: April 19, 2026
Application No. 18/147,918

SYSTEMS AND METHODS FOR ENHANCED VIRTUAL REALITY INTERACTION

Status: Non-Final OA (§103)
Filed: Dec 29, 2022
Examiner: MAZUMDER, SAPTARSHI
Art Unit: 2612
Tech Center: 2600 — Communications
Assignee: State Farm Mutual Automobile Insurance Company
OA Round: 5 (Non-Final)
Grant Probability: 64% (Moderate)
OA Rounds: 5-6
Time to Grant: 2y 8m
Grant Probability With Interview: 76%

Examiner Intelligence

Career Allow Rate: 64% (grants 64% of resolved cases; 241 granted / 375 resolved; +2.3% vs TC avg)
Interview Lift: +11.8% (moderate lift among resolved cases with interview)
Avg Prosecution: 2y 8m typical timeline (27 applications currently pending)
Total Applications: 402 (career history, across all art units)

Statute-Specific Performance

§101: 10.2% (-29.8% vs TC avg)
§103: 50.6% (+10.6% vs TC avg)
§102: 6.8% (-33.2% vs TC avg)
§112: 19.5% (-20.5% vs TC avg)

Based on career data from 375 resolved cases; Tech Center averages are estimates.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 02/05/2026 has been entered.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-4, 7-15, 18-20, and 21-23 are rejected under 35 U.S.C. 103 as being unpatentable over Bendale et al. (US Pat. Pub. No. 20210201549 “Bendale”) in view of Nagar et al. (US Pat. Pub. No.
20230376328 “Nagar”) and Todasco et al. (US Pat. Pub. No. 20240004456 “Todasco”).

Regarding claim 1, Bendale teaches A computer system ([0060] “This disclosure contemplates any suitable number of computer systems 1300. This disclosure contemplates computer system 1300 taking any suitable physical form. As example and not by way of limitation, computer system 1300 may be an embedded computer system, a system-on-chip (SOC), a single-board computer system (SBC) (e.g., a computer-on-module (COM) or system-on-module (SOM)), a desktop computer system, a laptop or notebook computer system, an interactive kiosk, a mainframe, a mesh of computer systems, a mobile telephone, a personal digital assistant (PDA), a server, a tablet computer system, an augmented/virtual reality device, or a combination of two or more of these”) for generating a virtual reality replicant persona for interaction, the computer system comprising at least one memory device, and at least one processor in communication with at least one memory device and in communication with a user computer device associated with a user (“[0034] In particular embodiments, the one or more computing systems may send instructions to present the video output to a client device. In particular embodiments, a user may interface the one or more computing systems at a client device. [0062] In particular embodiments, computer system 1300 includes a processor 1302, memory 1304, storage 1306, an input/output (I/O) interface 1308, a communication interface 1310, and a bus 1312. Although this disclosure describes and illustrates a particular computer system having a particular number of particular components in a particular arrangement, this disclosure contemplates any suitable computer system having any suitable number of any suitable components in any suitable arrangement.
In particular embodiments, processor 1302 includes hardware for executing instructions”), the at least one processor programmed to: generate a replicant persona of an individual based upon a plurality of data, (“[0027] In particular embodiments, digital humans may be lifelike embodiments of real humans. Digital humans may provide a new form of fluid interaction that enable end-users to interact with machines in a natural way. In some embodiments of the disclosed technology, Digital humans may be created from data captured from real humans and have human-like attributes including (but not limited to) visual appearance, voice, expressions, emotions, behavioral, and personality traits. The disclosed technology may enable setting, programming, and updating these attributes. The disclosed technology may enable setting these attributes from data learned from a single individual or multiple individuals as desired. Digital humans may be driven completely or partially by real humans and/or one or more Artificial Intelligence (AI) processes/algorithms. These lifelike artificial humans may interact with an end-user through natural modalities such as speech, perception, reasoning, and other sensory mechanisms. [0038]…… In particular embodiments, the session information may include one or more of a history, previous conversations, and the like. [0028]…… For instance, video from a single individual or multiple individuals may be used to create the digital humans. 
[0039] In particular embodiments, the my vault 206 may store a user profile 222 (my profile), digital human profile 224 (my DH), and interactions 226 (my sessions) carried out during each session.”) but is silent about wherein the replicant persona is configured to support multiple digital avatars simultaneously in a virtual environment. Nagar teaches a replicant persona that is configured to support multiple digital avatars simultaneously in a virtual environment (ABSTRACT “The corpus that comprises various personas of real and/or fictional individuals can include data of the mannerisms and visual likeness of the various characters and people, allowing avatars depicting the virtual agents to be animated in the likeness of the selected persona in real-time during conversational workflows”. Because avatar animation happens in real time, the persona supports multiple avatars simultaneously). Bendale and Nagar are analogous art, as both relate to the generation and updating of digital personas. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Bendale to have a replicant persona that is configured to support multiple digital avatars simultaneously in a virtual environment, as taught by Nagar. The motivation for the above is to provide personalized custom interaction experiences.
Bendale modified by Nagar teaches receive one or more requests from one or more users present in the virtual environment to interact with the replicant persona (Nagar “[0095]…… In step 703, in response to the user 601 engaging with the user interface 525 as part of the interaction with the virtual agent 603, the personalized interaction system 501 may trigger an adoption of a persona that can be applied to the virtual agent 603 for the duration of the interaction between the user 601 and the virtual agent 603”); in response to the one or more requests: generate one or more digital avatars of the multiple digital avatars in one or more locations of the virtual environment; the one or more digital avatars each being generated in a customized manner, determined in association with the one or more requests (Nagar “[0068] Referring to the drawings, FIGS. 5-6 depict approaches to personalizing user interactions with virtual agents 603 of a user interface 525. [0097]…..Application of visual components can include generation of a visual avatar that may be animated or created in real-time, in a manner that reflects the likeness of the selected character or person used as the persona for the virtual agent 603. The visual avatar can visual mannerisms and personality characteristics associated with the selected.” Because there are multiple avatars/virtual agents, they are in different locations) but is silent about the one or more digital avatars each being generated in a customized manner based at least in part upon a respective use purpose of each of the one or more digital avatars. Todasco teaches one or more digital avatars each being generated in a customized manner based at least in part upon a respective use purpose of each of the one or more digital avatars (“[0023]…… Further, the avatar may be presented as one or more (e.g., face-mashing technique) of the user's contacts, social connects, friends/family, or the like.
When advertising to the user, the avatar may be configured and/or displayed based on a context of the user's behaviors, such as if the user is browsing, shopping, or performing a non-shopping action. Further, when configuring the avatars to present to the user in one or more users, the merchant, storefront owner, owner or controller of a location or environment in AR or VR, or the like may select avatars that may be presented to improve an image or appeal of that location or environment. Thus, the real or virtual location and/or proprietor for the location may select and/or approve avatar configurations in AR and VR environments or experiences”). Todasco and Bendale modified by Nagar are analogous art, as both relate to the generation and updating of avatars. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Bendale, as modified by Nagar, to have one or more digital avatars each being generated in a customized manner based at least in part upon a respective use purpose of each of the one or more digital avatars, as taught by Todasco. The motivation for the above is to display avatars matching the properties of the environment. Bendale modified by Nagar and Todasco teaches present the one or more digital avatars in the one or more locations of the virtual environment in correspondence with the respective use purpose of each of the one or more digital avatars (Todasco “[0023]…… Further, the avatar may be presented as one or more (e.g., face-mashing technique) of the user's contacts, social connects, friends/family, or the like. When advertising to the user, the avatar may be configured and/or displayed based on a context of the user's behaviors, such as if the user is browsing, shopping, or performing a non-shopping action.
Further, when configuring the avatars to present to the user in one or more users, the merchant, storefront owner, owner or controller of a location or environment in AR or VR, or the like may select avatars that may be presented to improve an image or appeal of that location or environment. Thus, the real or virtual location and/or proprietor for the location may select and/or approve avatar configurations in AR and VR environments or experiences”); cause replicant persona of individual to conduct a plurality of conversations via the one or more digital avatars with one or more users present in the virtual environment at the one or more locations (Nagar “[0021] Embodiments of the present disclosure may constantly receive direct and indirect feedback from the user or about the user as conversational workflows progress. With each iterative cycle of an interaction with a user, as the user interacts with the virtual agent of the disclosed system, method or program products, feedback (whether positive or negative) can influence subsequent decisions, conversation topics, integration of real-time events, actions, modifications to behaviors, the selected persona, audio or visual components being integrated into the conversation and/or other content being presented to the user via the interface. [0091]….. 
For example, UI module 513 may interface with the user 601 as part of the conversational workflow using the selected persona by conversing via audio components that include the persona of a real or fictitious person or character, including but not limited to dialogue provided by the virtual agent using voice signatures of the selected persona from a corpus of voice signatures; dialogue corresponding to the selected persona including speech patterns, slang, tone, grammar, speed and vocabulary thereof from a corpus of dialog options; visual mannerisms and likeness of the persona corresponding to the character or person which can be animated as a visual avatar; ”); update the replicant persona based upon reaction data captured from the plurality of conversations, the reaction data including data representing how the replicant persona and each of the one or more users reacted to each other during the plurality of conversations (Nagar [0091]…..The generated content provided to the user 601 via the user interface 525, can include the audio and visual components used to modify the selected personas, behaviors and actions selected by the AI/machine learning engine 509. For example, UI module 513 may interface with the user 601 as part of the conversational workflow using the selected persona by conversing via audio components that include the persona of a real or fictitious person or character, including but not limited to dialogue provided by the virtual agent using voice signatures of the selected persona from a corpus of voice signatures; dialogue corresponding to the selected persona including speech patterns, slang, tone, grammar, speed and vocabulary thereof from a corpus of dialog options; visual mannerisms and likeness of the persona corresponding to the character or person which can be animated as a visual avatar; and any text, video, audio, and/or images corresponding to responses”. 
Bendale “[0039]…… The digital human profile 224 may include user preferences about behavior, appearance, voice, and other characteristics of the digital human/avatar. In particular embodiments, the characteristics may be controlled, changed, and updated based on direct inputs from the user and/or by learning and adapting from data from user interactions. In particular embodiments, the my vault 206 may receive digital human customization data from a digital humans database 228”); and for subsequent interactions with the multiple digital avatars within the virtual environment, use the updated replicant persona to support the multiple digital avatars (Nagar “[0019]…… The virtual agent can engage in conversational workflows with the user using the selected persona as modified by the behavioral change or action to answer questions, and keep the user engage by using content such as text, video, audio, and images. In some embodiments, historical interactions with other users with similar likes and preferences can be considered as well. [0021] Embodiments of the present disclosure may constantly receive direct and indirect feedback from the user or about the user as conversational workflows progress. With each iterative cycle of an interaction with a user, as the user interacts with the virtual agent of the disclosed system, method or program products, feedback (whether positive or negative) can influence subsequent decisions, conversation topics, integration of real-time events, actions, modifications to behaviors, the selected persona, audio or visual components being integrated into the conversation and/or other content being presented to the user via the interface”).

Claim 12 is directed to a computer-implemented method, and its steps are similar in scope and function to those performed by the elements of system claim 1; therefore, claim 12 is also rejected with the same rationale as specified in the rejection of claim 1.
Claim 23 is directed to a non-transitory computer-readable media (Bendale [0079] “Herein, a computer-readable non-transitory storage medium or media may include one or more semiconductor-based or other integrated circuits (ICs) (such, as for example, field-programmable gate arrays (FPGAs) or application-specific ICs (ASICs)), hard disk drives (HDDs), hybrid hard drives (HHDs), optical discs, optical disc drives (ODDs), magneto-optical discs, magneto-optical drives, floppy diskettes, floppy disk drives (FDDs), magnetic tapes, solid-state drives (SSDs), RAM-drives, SECURE DIGITAL cards or drives, any other suitable computer-readable non-transitory storage media, or any suitable combination of two or more of these, where appropriate. A computer-readable non-transitory storage medium may be volatile, non-volatile, or a combination of volatile and non-volatile, where appropriate”), and its elements are similar in scope and function to those performed by the elements of system claim 1; therefore, claim 23 is also rejected with the same rationale as specified in the rejection of claim 1.

Regarding claims 2 and 13, Bendale modified by Nagar and Todasco teaches generate, in addition to the one or more digital avatars, a new digital avatar supported by the updated replicant persona.
(Nagar “[0097]….Application of visual components can include generation of a visual avatar that may be animated or created in real-time, in a manner that reflects the likeness of the selected character or person used as the persona for the virtual agent 603.”).

Regarding claims 3 and 14, Bendale modified by Nagar and Todasco teaches the one or more users includes a plurality of users, and wherein the at least one processor is further programmed to: receive, from the plurality of conversations, a plurality of responses of the plurality of users; and perform sentiment analysis on the plurality of responses, wherein the sentiment analysis is configured to be performed for at least one of: (i) each individual user of the plurality of users, or (ii) a group of users included within the plurality of users (Nagar [0018]…. Embodiments of the ranking algorithm may consider past interactions with the user while conversing using one or more personas, the learned preferences of the user for particular types of content, including likes and dislikes of the user, and/or the inferred mental or emotional state of the user as evidenced by facial expressions, body language, tone, and other audio or visual indicators. [0101] …. In step 733, as the user 601 interacts with the persona of the virtual agent 603 using the selected behaviors and actions presented by the virtual agent 603, the user feedback module 517 can collect feedback about the user 601 in response to the virtual agent 603. For example, whether the user responds positively or negatively, body language of the user, sentiment, engagement and/or whether the user continues to respond or ends the interactive session with the user interface 525”).
Regarding claims 4 and 15, Bendale modified by Nagar and Todasco teaches further comprising one or more databases in operative communication with the computer system and configured to store as part of the plurality of data, replicant persona data associated with the replicant persona of the individual (Bendale “[0028]…… For instance, video from a single individual or multiple individuals may be used to create the digital humans. [0039] In particular embodiments, the my vault 206 may store a user profile 222 (my profile), digital human profile 224 (my DH), and interactions 226 (my sessions) carried out during each session.”).

Regarding claims 7 and 18, Bendale modified by Nagar and Todasco teaches wherein the replicant persona of the individual includes a plurality of replicant personas of the individual and the one or more digital avatars includes a plurality of digital avatars that are supported by the plurality of replicant personas of the individual (Nagar ABSTRACT “Personas can emulate voice signatures of popular characters, actors, celebrities and sports figures, and access a corpus of dialogue of the available personas to learn unique speech patterns, slang, tone, grammar, speed, and vocabulary. The corpus that comprises various personas of real and/or fictional individuals can include data of the mannerisms and visual likeness of the various characters and people, allowing avatars depicting the virtual agents to be animated in the likeness of the selected persona in real-time during conversational workflows.”).

Regarding claims 8 and 19, Bendale modified by Nagar and Todasco teaches wherein the plurality of replicant personas of the individual includes different replicant personas of the individual (Nagar [0018] “….Moreover, the corpus that comprises various personas of real and/or fictional individuals can include data of the mannerisms and visual likeness of the various characters and people”).
Regarding claims 9 and 20, Bendale modified by Nagar and Todasco teaches wherein the one or more digital avatars includes a plurality of digital avatars controlled by the same replicant persona of the individual (Nagar “[0097]…. For example, audio components may include modification of virtual agent 603 with a voice signature of a popular character, actor, celebrity, sports figure, etc”).

Regarding claims 10 and 21, Bendale modified by Nagar and Todasco teaches wherein the plurality of data is sourced from a plurality of data sources and to present the one or more digital avatars in the one or more locations of the virtual environment includes the at least one processor being[[is]] further programmed to: receive user information of a first user of the one or more users prior to conducting an avatar interaction with the first user; select, based upon the user information, a portion of data sources of the plurality of data sources to use for generating a first individualized digital avatar of the one or more digital avatars for interacting with the first user (Nagar “[0073] In the exemplary embodiment of the computing environment 500, the data collection module 505 is shown to have access to a plurality of data sources 519, 521, 523, 527, 531. For example, the databases and/or repositories containing the data can include internal data 519, historical data 521, real-time data 523, IoT data 531 and one or more external data sources 527. Internal data 519 may comprise user-specific data or profiles created by the personalized interaction system 501. For instance, internal data 519 may include user-specific profiles, settings, preferences for each user 601 that interacts with an application or service interface that is customized by the personalized interaction system 501.
External data source(s) 527 may include user profiles, configurations, settings or preferences from data sources that may be available outside of the personalized interaction system 501”), and generate the first individualized digital avatar based upon a portion of data of the plurality of data corresponding to the portion of data, the first individualized digital avatar being supported by the replicant persona (Nagar ABSTRACT “The corpus that comprises various personas of real and/or fictional individuals can include data of the mannerisms and visual likeness of the various characters and people, allowing avatars depicting the virtual agents to be animated in the likeness of the selected persona in real-time during conversational workflows. [0074] Personalized interaction system 501 may have access to multiple data sources containing historical data 521. Embodiments of historical data 521 can include one or more corpuses comprising audio and/or visual components of one or more available personas that may be adopted and applied to the virtual agent 603”).

Regarding claims 11 and 22, Bendale modified by Nagar and Todasco teaches wherein to generate the replicant persona to support the first individualized digital avatar based upon the portion of data of the plurality of data corresponding to the portion of data sources includes the at least one processor being further programmed to: ignore data from other data sources of the plurality of data sources that are not included in the portion of data (Nagar uses specific data: “[0074] Personalized interaction system 501 may have access to multiple data sources containing historical data 521. Embodiments of historical data 521 can include one or more corpuses comprising audio and/or visual components of one or more available personas that may be adopted and applied to the virtual agent 603”).

Claim(s) 5 and 16 are rejected under 35 U.S.C.
103 as being unpatentable over Bendale modified by Nagar and Todasco as applied to claims 3 and 13 above, and further in view of Borchetta (US Pat. Pub. No. 20170103432 “Borchetta”).

Regarding claims 5 and 16, Bendale modified by Nagar and Todasco is silent about determine which digital avatar of the one or more digital avatars is associated with which user of the one or more users. Borchetta teaches determine which digital avatar of one or more digital avatars is associated with which user of one or more users (Claim 21 “……identify an avatar associated with the second user, thereby identifying a second user avatar, wherein the second user avatar was created for the second user based on third data provided by the second user”). Borchetta and Bendale modified by Nagar and Todasco are analogous art, as both relate to the generation and updating of digital personas. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Bendale, as modified by Nagar and Todasco, by determining which digital avatar of the one or more digital avatars is associated with which user of the one or more users, as taught by Borchetta. The motivation for the above is to find the best-matched and correlated avatar based on user data.

Claim(s) 6 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Bendale modified by Nagar and Todasco as applied to claims 1 and 12 above, and further in view of Zavesky et al. (US Pat. Pub. No. 20230053308 “Zavesky”).
Regarding claims 6 and 17, Bendale modified by Nagar and Todasco teaches wherein the plurality of data includes at least one of (i) social media data from social media featuring the individual, (ii) scripts, and historical data about the individual (Bendale “[0038]…… In particular embodiments, the session information may include one or more of a history, previous conversations, and the like”. Nagar “[0095]…. historical data 521, internal data 519 collected about the user 601, IoT data 531 associated with the user 601 and external data from one or more external data sources 527, such as social media data accessible over one or more networks, including the internet. [0097]….. Settings of the virtual agent 603 may be modified to speak to users using dialog associated with the character, actor, celebrity, etc., including speech patterns, slang, tone, grammar, speed, vocabular”) but is silent about behavior data from interviews, recordings, images, data from video clips from television shows featuring the individual, and data from movies featuring the individual. Zavesky teaches a plurality of data that includes social media, behavior data from interviews, recordings, images, data from video clips from television shows, and movies featuring the individual (“[0036]…. (e.g., video captured from specialized cameras utilized in nature or scientific recordings of wildlife), and other types of video footage. [0037] In one example, the video footage may be obtained from a variety of sources. For instance, where the first subject is an actor, the video footage may include footage from movies and television shows in which the actor has appeared, awards shows and interviews at which the actor has been a guest, amateur video footage (e.g., videos uploaded to social media), and the like.
Where the first subject is not a public figure, the video footage may include amateur video footage (e.g., videos uploaded to social media, homes movies, and the like)”). Zavesky and Bendale modified by Nagar and Todasco are analogous art, as both relate to the generation and updating of digital personas. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Bendale, as modified by Nagar and Todasco, to include behavior data from interviews, recordings, images, data from video clips from television shows featuring the individual, and data from movies featuring the individual, as taught by Zavesky. The motivation for the above is to enrich the action/participant data based on data from different sources.

Bendale modified by Nagar, Todasco, and Zavesky teaches the at least one processor is further programmed to: synthesize the plurality of data using one or more of machine learning, natural language processing, voice intelligence, and sentiment analysis; and cause the replicant persona to respond to the plurality of conversations based upon the synthesized plurality of data (Nagar “[0087] In the exemplary embodiment of the AI/machine learning engine 509, when a user begins an interaction with a virtual agent 603 of the user interface 525, the AI/machine learning engine 509 of the personalized interaction system 501 can be triggered to determine the best persona to adopt for the interaction using one or more ranking algorithms.
In the exemplary embodiment, the AI/machine learning engine 509 may be trained to rank available personas from a corpus of personas that may be applied to a virtual agent 603 using a machine learning algorithm for regression and classification in order to identify the persona that includes visual and tonal characteristics that will most likely appeal to the user based on user preferences and insights obtained from the datasets gathered by the data collection module 505”).

Response to Arguments

Applicant's arguments filed on 02/05/2026 with respect to the rejection under 35 USC 112(a) for claims 11 and 22 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. Applicant's arguments filed on 02/05/2026 with respect to the rejection under 35 USC 103 for claim 1 have been fully considered, but they are not persuasive. Therefore, the rejection has been maintained.

Applicant argues (see Remarks, page 12): “Notably, no combination of Bendale, Nagar, and Todasco describes or suggests at least the at least one processor programmed to: "update the replicant persona based upon reaction data captured from the plurality of conversations, the reaction data including data representing how the replicant persona and each of the one or more users reacted to each other during the plurality of conversations;" and "for subsequent interactions with the multiple digital avatars within the virtual environment, use the updated replicant persona to support the multiple digital avatars," as recited in amended Claim 1. For example, Bendale merely describes a general digital avatar for interacting with a user (see paragraph [0027] of Bendale). Nagar merely describes a ranking algorithm for incorporating real-time events into the conversational workflow between the user and virtual agent (see paragraph [0090] of Nagar). Todasco merely describes creating custom avatars based on a user's contacts, social media connections, etc. (see paragraph [0023] of Todasco)”.
The examiner notes that Bendale as modified by Nagar and Todasco teaches the argued and amended limitations of claim 1. Nagar [0091] describes modifying the persona based on behavior and reaction data from the conversational workflow. Nagar: “[0091]….. The generated content provided to the user 601 via the user interface 525, can include the audio and visual components used to modify the selected personas, behaviors and actions selected by the AI/machine learning engine 509. For example, UI module 513 may interface with the user 601 as part of the conversational workflow using the selected persona by conversing via audio components that include the persona of a real or fictitious person or character, including but not limited to dialogue provided by the virtual agent using voice signatures of the selected persona from a corpus of voice signatures; dialogue corresponding to the selected persona including speech patterns, slang, tone, grammar, speed and vocabulary thereof from a corpus of dialog options”.

Bendale also updates the persona based on behavior and voice data. See “[0039]…… The digital human profile 224 may include user preferences about behavior, appearance, voice, and other characteristics of the digital human/avatar. In particular embodiments, the characteristics may be controlled, changed, and updated based on direct inputs from the user and/or by learning and adapting from data from user interactions.”

Nagar uses the updated persona for subsequent interactions/conversations. Nagar describes its workflow as an iterative cycle in which subsequent decisions/actions depend on feedback. See Nagar “[0019]…… The virtual agent can engage in conversational workflows with the user using the selected persona as modified by the behavioral change or action to answer questions, and keep the user engage by using content such as text, video, audio, and images.
[0021] Embodiments of the present disclosure may constantly receive direct and indirect feedback from the user or about the user as conversational workflows progress. With each iterative cycle of an interaction with a user, as the user interacts with the virtual agent of the disclosed system, method or program products, feedback (whether positive or negative) can influence subsequent decisions, conversation topics, integration of real-time events, actions, modifications to behaviors, the selected persona, audio or visual components being integrated into the conversation and/or other content being presented to the user via the interface”.

With respect to applicant’s arguments for independent claims 12 and 23 and the dependent claims, the examiner refers applicant to the response given above, as no additional arguments are presented.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SAPTARSHI MAZUMDER, whose telephone number is (571) 270-3454. The examiner can normally be reached 8 am-4 pm PST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Said Broome, can be reached at (571) 272-2931. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/SAPTARSHI MAZUMDER/
Primary Examiner, Art Unit 2612

Prosecution Timeline

Dec 29, 2022: Application Filed
Jul 12, 2024: Non-Final Rejection — §103
Oct 16, 2024: Response Filed
Nov 28, 2024: Final Rejection — §103
Feb 03, 2025: Response after Non-Final Action
Feb 11, 2025: Applicant Interview (Telephonic)
Mar 25, 2025: Request for Continued Examination
Mar 26, 2025: Response after Non-Final Action
Apr 28, 2025: Non-Final Rejection — §103
Jul 24, 2025: Interview Requested
Jul 31, 2025: Response Filed
Aug 04, 2025: Examiner Interview (Telephonic)
Aug 04, 2025: Examiner Interview Summary
Oct 02, 2025: Final Rejection — §103
Feb 05, 2026: Request for Continued Examination
Feb 17, 2026: Response after Non-Final Action
Mar 20, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597211: GENERATING VARIANTS OF VIRTUAL OBJECTS BASED ON ADJUSTABLE EXTERNAL FACTORS
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12586316: METHOD FOR MIRRORING 3D OBJECTS TO LIGHT FIELD DISPLAYS
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12582488: USER INTERFACE FOR CONNECTING MODEL STRUCTURES AND ASSOCIATED SYSTEMS AND METHODS
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12579745: Curvature-Guided Inter-Patch 3D Inpainting for Dynamic Mesh Coding
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12567210: Multipath Artifact Avoidance in Mobile Dimensioning
Granted Mar 03, 2026 (2y 5m to grant)
Based on this examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 64%
With Interview: 76% (+11.8%)
Median Time to Grant: 2y 8m
PTA Risk: High

Based on 375 resolved cases by this examiner. Grant probability derived from career allow rate.
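The headline figures above can be reproduced from the examiner's career data shown on this page. A minimal sketch, assuming the simple additive interview adjustment the page's numbers imply (the platform's actual model is not disclosed):

```python
# Illustrative only: deriving the page's headline stats from career data.
granted = 241          # granted cases ("241 granted / 375 resolved")
resolved = 375         # resolved cases
interview_lift = 11.8  # percentage-point lift observed with an interview

allow_rate = granted / resolved * 100          # career allow rate, in %
with_interview = allow_rate + interview_lift   # naive additive adjustment

print(f"Grant probability: {allow_rate:.0f}%")     # → 64%
print(f"With interview: {with_interview:.0f}%")    # → 76%
```

Note that 241/375 is about 64.3%, so the displayed 64% and 76% are rounded figures; the real model likely also weights recency and art-unit mix.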
