Prosecution Insights
Last updated: April 19, 2026
Application No. 17/506,734

AVATAR-BASED INTERACTION SERVICE METHOD AND APPARATUS

Non-Final OA (§103, §112)
Filed: Oct 21, 2021
Examiner: COBB, MICHAEL J
Art Unit: 2615
Tech Center: 2600 — Communications
Assignee: Datum Point Labs Inc.
OA Round: 3 (Non-Final)
Grant Probability: 76% (Favorable)
Expected OA Rounds: 3-4
Expected Time to Grant: 2y 7m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 76% (329 granted / 432 resolved; +14.2% vs TC avg, above average)
Interview Lift: +37.9% higher allow rate for resolved cases with an interview
Typical Timeline: 2y 7m average prosecution; 19 applications currently pending
Career History: 451 total applications across all art units

Statute-Specific Performance

§101: 10.0% (-30.0% vs TC avg)
§103: 42.0% (+2.0% vs TC avg)
§102: 4.4% (-35.6% vs TC avg)
§112: 34.7% (-5.3% vs TC avg)
Tech Center averages are estimates; figures are based on career data from 432 resolved cases.
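The headline figures above follow from simple arithmetic on the displayed counts. A minimal sketch (the with/without-interview case split is not shown in the data, so the lift is echoed from the reported delta rather than recomputed):

```python
# Sketch: deriving the dashboard's headline figures from the raw counts.
# Only `granted`, `resolved`, and the displayed deltas come from the data;
# the TC average is implied, not independently reported.

granted, resolved = 329, 432

allow_rate = granted / resolved      # ~0.762, displayed as 76%
tc_delta = 0.142                     # displayed "+14.2% vs TC avg"
tc_avg = allow_rate - tc_delta       # implied Tech Center average, ~62%

# Interview lift = allow-rate difference between resolved cases with and
# without an examiner interview; the underlying split is not displayed,
# so the reported figure is used directly.
interview_lift = 0.379

print(f"career allow rate:  {allow_rate:.1%}")
print(f"implied TC average: {tc_avg:.1%}")
print(f"interview lift:     {interview_lift:+.1%}")
```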

Office Action

§103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11 February 2026 has been entered.

Status of the Claims

Claims 1-25 are pending in the current application, with claims 1, 8, 15, and 22 being independent. Claims 15-25 are withdrawn from consideration. Claims 1-3 and 8 have been amended.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 11 February 2026 has been considered by the examiner.

Response to Arguments

Applicant’s arguments, see page 7, filed 11 February 2026, with respect to the objection to the claims, along with accompanying amendments received on the same date, have been fully considered and are persuasive. The objection to the claims has been withdrawn. Applicant’s arguments, see page 7, filed 11 February 2026, with respect to the 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph rejection of claims 1-14, along with accompanying amendments received on the same date, have been fully considered and are partially persuasive. The 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph rejection of claims 1-14 has been maintained. With respect to claims 1 and 8, given the plain and ordinary meaning of the words themselves and/or interpreted in light of the disclosure, the scope of the claimed limitations remains unclear.
For instance, it remains unclear as to how the “based on a pre-stored learning model” fits in with training an AI avatar using the image and voice data portion of the claim. The originally filed disclosure does not appear to describe said pre-stored learning model with more specificity than what is currently recited in the claims (see, for instance, paragraph 37). It is unclear as to what constitutes the pre-stored learning model and how the training of the AI avatar is based on the generically recited pre-stored learning model. How is it based on the model? It is also unclear as to the scope of without human intervention from the service provider. Is it only the providing step that is done without human intervention, such that the second terminal simply receives the generated AI avatar? Is it the interaction with the second user terminal that is done without human intervention? The examiner respectfully requests that applicant clarify the scope of the claimed limitation. Accordingly, the 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph rejection of claims 1-14 has been maintained. With respect to claims 2 and 9, the scope of the claim remains unclear when interpreted using the plain and ordinary meaning of the words themselves or when interpreted in light of the corresponding disclosure. What does databasing content within a storage unit reference? Is that similar to storing content in a database? Selecting content from a database? Is it indexing content? Furthermore, what is an interaction service field from the image and voice of the service provider? Is that where the image and voice data are saved? Finally, what content is selected and databased?
The claim calls for selecting and databasing content within a storage unit related to an interaction service field from the image and voice data; the phrase “related to” makes it unclear what data is being selected and databased, and how the image and voice of the service provider have an interaction service field. Accordingly, the 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph rejection of claims 2 and 9 and claims depending thereon has been maintained. With respect to claims 3 and 10, the scope of the claim remains unclear when interpreted using the plain and ordinary meaning of the words themselves or when interpreted in light of the corresponding disclosure. For instance, it is not immediately clear as to what is meant by the interaction service field including a customer service, counseling, education, and entertainment. For instance, how does the interaction service field include a customer service or education? Is applicant attempting to claim that the interaction service is being used in the field of customer service, education, etc., as opposed to an interaction service field, which would be more like an input? The specification appears to indicate that it is used in a field/profession, such as those discussed in paragraphs 79-90 and illustrated in figs. 6-8. In addition, the type has not been previously defined. Did Applicant intend for the different fields listed to be types of interaction services? If so, that may be one way forward on how to clarify the scope of the claimed invention. With respect to claims 5 and 12, the scope of the claim is clear. The specification provides ample support for modulating a voice into the voice of an avatar and providing the modulated voice to another terminal. For instance, paragraph 81 sets forth that a voice of the teacher is modulated into a voice of an avatar character and output to the second user terminal.
Paragraph 64 sets forth “the voice of the service provider is modulated into a voice of the avatar character and provided to the first user terminal”. That is, as claimed, when interpreted in light of the originally filed disclosure, the scope of the claimed limitation is clear. Accordingly, the 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph rejection of claims 5 and 12 and claims depending thereon has been withdrawn. With respect to claims 6 and 13, the scope of the claim remains unclear when interpreted using the plain and ordinary meaning of the words themselves or when interpreted in light of the corresponding disclosure. For instance, it is not immediately clear as to how the facial expressions, gesture, and voice are perceived from a second user via the second terminal and how that information is used to determine an emotional state of the user. Finally, a facial expression, gesture, and voice tone are used multiple times, and it is unclear whether they are different from or the same as those previously used. Paragraph 40 sets forth “the avatar system may perform emotional recognition that recognizes an emotional state of a user through facial expressions, gestures, and voice tones of the user, and may perform an emotional expression that expresses emotions of the avatar through the appropriate determination of the response to the recognized emotion, the selection of the voice tone for each emotion corresponding to the facial expression, and the choice of the right word. The implementation of such an avatar will be described later with reference to FIGS. 4 and 5”. However, it remains unclear as to the scope of perceiving a facial expression...and whether the various facial expressions, etc. defined are related to one another. Accordingly, the 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph rejection of claims 6 and 13 and claims depending thereon has been maintained.
For the purposes of further examination, the examiner is interpreting the claims along the lines of paragraph 40, similar to “recognizing/determining a facial expression, a gesture, and a voice tone of a second user by analyzing a real-time image received via the second user terminal to determine an emotional state of the second user; and changing an AI avatar facial expression, an AI avatar gesture, and an AI avatar voice tone of the AI avatar in response to the determined emotional state”. With respect to claims 7 and 14, the scope of the claims remains unclear when interpreted using the plain and ordinary meaning of the words themselves or when interpreted in light of the corresponding disclosure. For instance, it is not immediately clear as to how the voice of the user is perceived and to what extent it is perceived. Paragraph 72 of applicant’s disclosure sets forth “To this end, the AI avatar interaction unit 223 may recognize, understand, and respond to a voice of a second user received from the second user terminal through at least any one of automatic speech recognition (ASR), speech-to-text (STT), natural language understanding (NLU) and text-to-speech (TTS)”. Accordingly, the 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph rejection of claims 7 and 14 and claims depending thereon has been maintained. For the purposes of further examination, the examiner is interpreting claims 7 and 14 to align with paragraph 72, such that perception is being interpreted as recognizing and understanding. Applicant’s arguments, see pages 7-11, filed 11 February 2026, with respect to the prior art rejection of claims 1-14, along with accompanying amendments received on the same date, have been fully considered and are not persuasive.
Applicant argues a) that, as amended, claim 1 (and similarly claim 8) “now recites two critical limitations that are neither taught or suggested by the cited combination: (1) that the training of the response is performed by a learning unit, and (2) that the artificial intelligence avatar provides interaction service to the second user terminal without access from the service provider terminal”, b) that the cited references do not teach or render obvious “the claimed two-phase architecture, in which the service provider terminal is involved only during an initial real-time interaction and training phase, after which the trained artificial intelligence avatar autonomously provides the interaction service completely without access to the service provider terminal”, c) that there is “no teaching, suggestion, or motivation in Kasaba or Scholar to modify the system to include an initial live provider interaction phase followed by full terminal decoupling” and that “such a modification would be contrary to Kasaba’s core design of fully autonomous avatars based on pre-recorded data”, d) that the “autonomy limitation is not a mere matter of design choice but represents a distinct architectural requirement in which the learned model, residing within and executed by the system’s learning unit, is used to enable independent AI behavior without any continuing connection to the or access from the service provider terminal”, e) that neither Kasaba nor Scholar teaches or suggests the claimed additional limitation that the system performs the claimed “selecting and databasing” operation using a “content selector” and that the databasing is performed in a “storage unit”, as recited, and f) that the Office Action “does not articulate a rationale explaining how or why one of ordinary skill in the art would have modified the cited references to arrive at the claimed configuration”. The examiner respectfully disagrees.
Kasaba teaches the data collection and/or training process may be executed by AI engine of system 100 to obtain, sort, analyze, and process the various data forming plurality of data collection that is stored in avatar database associated with each avatar, see for instance, column 5, lines 18-45. The data collection can be associated with a first subject person, see for instance, column 5, lines 46-65. Examples of subject person may include any living or deceased person, such as celebrities, politicians, athletes, scholars, teachers, authors, experts in various fields, private individuals, or any other person, see for instance, column 7, lines 1-9. By storing different data collections for avatars of the same subject person at different ages or age ranges, the subject person may have an interactive digital avatar that mimics or emulates the speech, mannerisms, and inflections of the subject person at a first age, see for instance, column 6, lines 1-11. First data collection may include audio data, video data, image data, and/or text data associated with a subject person, see for instance, column 6, lines 12-20. The digital avatar generated by AI engine can accurately reflect the subject person at that particular age or age range, see for instance, column 6, lines 30-67. A data collection and/or training process may be executed by AI engine to obtain, sort, analyze, and process the various data forming the plurality of data collections, see for instance, column 5, lines 33-37. 
Additionally, AI engine may also execute one or more training sessions using CGI rendering module to generate a digital representation of the subject person’s avatar...training sessions may be used to refine the interactive avatar of the subject person to accurately mimic or emulate speech, mannerisms, and inflections of the subject person...these training processes or sessions may be implemented using machine learning techniques, see for instance, column 5, lines 35-46 and column 11, lines 49-67. The avatar can be displayed in a conversation environment between a service provider and a first user terminal, see for instance, column 12, lines 1-67 and figs. 6, 7, 9-14. The AI avatar is displayed at a second terminal without human intervention by the service provider, see for instance, column 12, lines 1-67 and figs. 6, 7, 9-14. Scholar teaches an artificial intelligence platform providing an interactive avatar, see for instance, paragraph 14. The avatar can, for instance, answer questions, guide users, and present results to individuals in an intuitive and easy-to-understand manner, see for instance, paragraph 14. The avatar can be trained to ask questions and provide functions of life coaching for users to address situations in which they may find themselves over the course of the day, see for instance, paragraph 14. The training component then uses machine learning to create a trained NLP model based on the corpus of questions and assigned intents, see for instance, paragraph 812, and figs. 42, 43, 58, and 59. The trained NLP model is then used to match intents to questions asked during user interaction, see for instance, paragraph 812. Given a set of probable answers, and a training set mapping existing questions to these answers, the system can formulate a probabilistic weighting of how likely each answer is for a new question never-before-seen by the system; this may require use of natural language processing, see for instance, paragraphs 792 and 793 and figs.
42 and 43. Fig. 58 is a flow diagram illustrating example NLP model creation with intents, see for instance, paragraph 805. That is, the combination of Kasaba and Scholar teaches the broadest reasonable interpretation of newly amended claim 1. With respect to claim 2, the combination of Kasaba and Scholar teaches that the interaction data can include various information and data associated with one or more interactive avatar sessions between the user and one or more avatars of a subject person or subject persons, see for instance, Kasaba, column 8, lines 4-14. The information or data included in the interaction data obtained from an interactive session...may be added to personalization data in the first data set for the first user to be used by AI engine in subsequent interactive sessions to provide a personalized or customized interaction with the user, see for instance, Kasaba, column 8, lines 4-22. Avatar database may include one or more data collections comprising data or information associated with a subject person that allows AI engine to generate an interactive digital experience of the subject person, see for instance, Kasaba, column 5, lines 19-23. Avatar database can include a plurality of data collections for one or more avatars of a subject person or subject persons, see for instance, Kasaba, column 5, lines 23-26. A data collection and/or training process may be executed by AI engine of system to obtain, sort, analyze, and process the various data forming plurality of data collections that is stored in avatar database and associated with each avatar, see for instance, Kasaba, column 5, lines 33-37. That is, the combination teaches selecting and databasing content in a storage unit related to an interaction service field from the image and voice data as currently claimed.
With respect to applicant’s argument that claim 1 (and similarly claim 8) “now recites two critical limitations that are neither taught or suggested by the cited combination: (1) that the training of the response is performed by a learning unit, and (2) that the artificial intelligence avatar provides interaction service to the second user terminal without access from the service provider terminal”, it is noted that the features upon which applicant relies (i.e., any special function of a “learning unit” as designed by the applicant, and an artificial intelligence avatar that provides interaction service to the second user without access from the service provider terminal) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). In the present case, the claims recite “training, via a learning unit, an artificial intelligence (AI) avatar, based on a pre-stored learning model, using image and voice data; and providing the interaction service to the second user terminal by generating an artificial intelligence avatar based on the trained learning model without human intervention from the service provider”. That is, the claims recite a generic learning unit that trains an AI avatar, based on a pre-stored learning model, using the image and voice data. The claims do not offer specificity with respect to the generically worded learning unit or pre-stored learning model, and the specification does not define specific structure for the learning model or unit. Thus, the broadest reasonable interpretation of a learning unit is any routine, algorithm, hardware, or software that can perform the training step using the image and voice data.
In the present case, Kasaba teaches a data collection and/or training process may be executed by AI engine to obtain, sort, analyze, and process the various data forming the plurality of data collections, see for instance, column 5, lines 33-37. Additionally, AI engine may also execute one or more training sessions using CGI rendering module to generate a digital representation of the subject person’s avatar...training sessions may be used to refine the interactive avatar of the subject person to accurately mimic or emulate speech, mannerisms, and inflections of the subject person...these training processes or sessions may be implemented using machine learning techniques, see for instance, column 5, lines 35-46 and column 11, lines 49-67. Scholar adds an artificial intelligence platform providing an interactive avatar, see for instance, paragraph 14. The avatar can be trained to ask questions and provide functions of life coaching for users to address situations in which they may find themselves over the course of the day, see for instance, paragraph 14. The training component then uses machine learning to create a trained NLP model based on the corpus of questions and assigned intents, see for instance, paragraph 812, and figs. 42, 43, 58, and 59. The claims further recite “providing the interaction service to the second user terminal by generating an artificial intelligence (AI) avatar based on the trained learning model without human intervention from the service provider”. As noted with respect to the 112(b) rejection, it is not clear whether “without human intervention from the service provider” attaches to the “generating an artificial intelligence (AI) avatar”, the “providing the interaction service”, both, or to something else.
Furthermore, while applicant’s arguments recite that the artificial intelligence avatar provides the interaction service to the second user terminal without access from the service provider terminal, the claims do not recite “without access from the service provider terminal”, and even if they did, it would be unclear as to what is being done without access and how the providing is done without access. The claims recite providing the interaction service to the second user terminal by generating an AI avatar based on the trained learning model (the trained learning model being the result of the training step in the previous limitation) without human intervention from the service provider. Kasaba teaches providing interaction service to the second user terminal by generating an AI avatar based on the trained learning model without human intervention by the service provider, see for instance, column 12, lines 1-67 and figs. 6, 7, 9-14. Scholar adds an artificial intelligence platform providing an interactive avatar, see for instance, paragraph 14. The avatar can, for instance, answer questions, guide users, and present results to individuals in an intuitive and easy-to-understand manner, see for instance, paragraph 14. The trained NLP model is then used to match intents to questions asked during user interaction, see for instance, paragraph 812. Given a set of probable answers, and a training set mapping existing questions to these answers, the system can formulate a probabilistic weighting of how likely each answer is for a new question never-before-seen by the system; this may require use of natural language processing, see for instance, paragraphs 792 and 793 and figs. 42 and 43. Fig. 58 is a flow diagram illustrating example NLP model creation with intents, see for instance, paragraph 805. That is, the combination of Kasaba in view of Scholar would teach the newly amended limitations as currently recited.
That is, Kasaba in view of Scholar teaches training, via a learning unit, an artificial intelligence (AI) avatar, based on a pre-stored learning model, using image and voice data; and providing the interaction service to the second user terminal by generating an artificial intelligence avatar based on the trained learning model without human intervention from the service provider. With respect to applicant’s argument that the cited references do not teach or render obvious “the claimed two-phase architecture, in which the service provider terminal is involved only during an initial real-time interaction and training phase, after which the trained artificial intelligence avatar autonomously provides the interaction service completely without access to the service provider terminal”, it is noted that the features upon which applicant relies (i.e., a two-phase architecture, in which the service provider terminal is involved only during an initial real-time interaction and training phase, after which the trained artificial intelligence avatar autonomously provides the interaction service completely without access to the service provider terminal) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). In the present case, the claims do not recite the argued two-phase architecture, real-time interaction and training phase, or providing the AI avatar autonomously to the interaction service completely without access to the service provider terminal. Instead, the claims recite “collecting image and voice data from a service provider via the service provider terminal”; the image and voice data are not required to be collected from a real-time interaction and can be previously recorded.
The claims further recite “generating, based on the image and voice data, an avatar; displaying, via the first terminal, the avatar in a conversation between the service provider terminal and the first user terminal; training, via a learning unit, an artificial intelligence (AI) avatar, based on a pre-stored learning model, using the image and voice data”. The claim goes on to recite “providing the interaction service to the second user terminal by generating an artificial intelligence (AI) avatar based on the trained learning model without human intervention from the service provider”. As previously discussed, the prior art of record teaches the claimed limitations noted above. With respect to applicant’s arguments that there is “no teaching, suggestion, or motivation in Kasaba or Scholar to modify the system to include an initial live provider interaction phase followed by full terminal decoupling” and that “such a modification would be contrary to Kasaba’s core design of fully autonomous avatars based on pre-recorded data”, the claims do not require an initial live provider interaction phase followed by full terminal decoupling. It is noted that Kasaba does teach live interactions, in at least column 10, lines 48-62, column 11, lines 35-40, column 16, lines 20-25, and column 17, lines 1-29. It is also noted that it is not immediately clear as to what would comprise a full terminal decoupling, as the disclosure does not appear to discuss decoupling. It does discuss without human intervention in paragraphs 26 and 82 and without the use of a service provider terminal in paragraphs 26 and 36. For instance, paragraph 26 sets forth that once trained or pre-programmed, it is possible to perform learning guidance on the second user terminal as the student (103), without access from the service provider terminal 101 as the teacher, through the AI avatar in the non-face-to-face conversation environment.
In this embodiment, once the AI avatar is trained or pre-programmed, there is no need for user terminals 101 or 102. Is applicant attempting to claim that the trained models are stored on a server and then part of an interaction service provided to a user terminal as a stand-alone service that is accessed by a client device, such as by initiating interaction on the client side to interact via an AI avatar with stored pre-trained models (that were previously trained based on a live interaction with someone)? As currently claimed, the claims do not require either a live provider interaction or a “full terminal decoupling”, nor, as previously noted, would it be contrary to the current combination of references to modify them to teach such a limitation (though again, that is not claimed and thus not under examination). With respect to applicant’s arguments that the “autonomy limitation is not a mere matter of design choice but represents a distinct architectural requirement in which the learned model, residing within and executed by the system’s learning unit, is used to enable independent AI behavior without any continuing connection to the or access from the service provider terminal”, the claims do not require either the autonomy limitation or the distinct architectural requirement in which the learned model, residing within and executed by the system’s learning unit, is used to enable independent AI behavior without any continuing connection to, or access from, the service provider terminal. Furthermore, the claims do not require a specific architectural layout. In the present case, as previously set forth, the prior art of record teaches each and every limitation as currently claimed.
With respect to applicant’s arguments that neither Kasaba nor Scholar teaches or suggests the claimed additional limitation that the system performs the claimed “selecting and databasing” operation using a “content selector” and that the databasing is performed in a “storage unit”, as recited, the prior art of record teaches interaction data can include various information and data associated with one or more interactive avatar sessions between the user and one or more avatars of a subject person or subject persons, see for instance, Kasaba, column 8, lines 4-14. The information or data included in the interaction data obtained from an interactive session...may be added to personalization data in the first data set for the first user to be used by AI engine in subsequent interactive sessions to provide a personalized or customized interaction with the user, see for instance, Kasaba, column 8, lines 4-22. Avatar database may include one or more data collections comprising data or information associated with a subject person that allows AI engine to generate an interactive digital experience of the subject person, see for instance, Kasaba, column 5, lines 19-23. Avatar database can include a plurality of data collections for one or more avatars of a subject person or subject persons, see for instance, Kasaba, column 5, lines 23-26. A data collection and/or training process may be executed by AI engine of system to obtain, sort, analyze, and process the various data forming plurality of data collections that is stored in avatar database and associated with each avatar, see for instance, Kasaba, column 5, lines 33-37. That is, the combination teaches selecting and databasing content in a storage unit related to an interaction service field from the image and voice data as currently claimed.
With respect to applicant’s arguments that the Office Action does not articulate a rationale explaining how or why one of ordinary skill in the art would have modified the cited references to arrive at the claimed configuration, the examiner recognizes that obviousness may be established by combining or modifying the teachings of the prior art to produce the claimed invention where there is some teaching, suggestion, or motivation to do so found either in the references themselves or in the knowledge generally available to one of ordinary skill in the art. See In re Fine, 837 F.2d 1071, 5 USPQ2d 1596 (Fed. Cir. 1988), In re Jones, 958 F.2d 347, 21 USPQ2d 1941 (Fed. Cir. 1992), and KSR International Co. v. Teleflex, Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007). It is noted that applicant merely asserted that the Office Action does not articulate a rationale, rather than pointing out distinctly why one of ordinary skill in the art would not be so motivated. In this case, as pointed out in the previous Office Action, one of ordinary skill in the art would have been so motivated to improve the user experience, enhance functionality, and improve the intelligence of the system, see for instance, Scholar, abstract, and paragraphs 2, 14, 15, and 792.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f): (f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof. The following is a quotation of pre-AIA 35 U.S.C.
112, sixth paragraph: An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof. The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function. Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 
112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function. Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: a communication unit in claim 8. Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 
112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. Claim Rejections - 35 USC § 112 The following is a quotation of the first paragraph of 35 U.S.C. 112(a): (a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention. The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112: The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention. Claims 1-14 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA ), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 
112, the inventor(s), at the time the application was filed, had possession of the claimed invention. Claim 1 (and similarly claim 8) recites “collecting image and voice data from a service provider via the service provider terminal; ...training, via a learning unit, an artificial intelligence (AI) avatar, based on a pre-stored learning model, using the image and voice data; and providing the interaction service to the second user terminal by generating an artificial intelligence (AI) avatar based on the trained learning model without human intervention from the service provider”. These limitations correspond to computer-implemented functional claim limitations as discussed in MPEP 2161.01. Applicant’s disclosure regarding these functions does not appear to describe the methodology by which the claimed functions are performed. With respect to training, via a learning unit, an artificial intelligence (AI) avatar, based on a pre-stored learning model, using the image and voice data, outside of repeating the claim language in the summary section, the originally filed disclosure sets forth in paragraph 37 that “In particular, as will be described later, the control unit 220 may train the image and voice of the service provider acquired from the service provider terminal, which are received by the communication unit 210, with a pre-stored learning model, thereby generating an avatar”. The disclosure does not appear to provide an explanation, algorithm, or description of what constitutes the learning model and how the learning model is integrated into the training of the AI avatar via the learning unit using the image and voice data. As explained in MPEP 2161.01 I, paragraphs 6-8, “original claims may lack written description when the claims define the invention in functional language specifying a desired result but the specification does not sufficiently describe how the function is performed or the result is achieved. 
For software, this can occur when the algorithm or steps/procedure for performing the computer function are not explained at all or are not explained in sufficient detail (simply restating the function recited in the claim is not necessarily sufficient). In other words, the algorithm or steps/procedure taken to perform the function must be described with sufficient detail so that one of ordinary skill in the art would understand how the inventor intended the function to be performed. … An algorithm is defined, for example, as "a finite sequence of steps for solving a logical or mathematical problem or performing a task." Microsoft Computer Dictionary (5th ed., 2002). Applicant may "express that algorithm in any understandable terms including as a mathematical formula, in prose, or as a flow chart, or in any other manner that provides sufficient structure." Finisar Corp. v. DirecTV Grp., Inc., 523 F.3d 1323, 1340, 86 USPQ2d 1609, 1623 (Fed. Cir. 2008) (internal citation omitted). It is not enough that one skilled in the art could write a program to achieve the claimed function because the specification must explain how the inventor intends to achieve the claimed function to satisfy the written description requirement.” With respect to the claimed computer functional limitations of “training, via a learning unit, an artificial intelligence (AI) avatar, based on a pre-stored learning model, using the image and voice data”, although the disclosure provides examples of desired outcomes with respect to the functional limitations, there is no description of an algorithm, steps, or procedure showing how the inventor(s) intended the functions to be performed. Therefore, the functions recited in claims 1 and 8 correspond to subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 
112, the inventor(s), at the time the application was filed, had possession of the claimed invention. Claims depending thereon do not cure the noted deficiencies and are also rejected using substantially similar rationale as to that set forth with respect to the claims from which they depend. The following is a quotation of 35 U.S.C. 112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention. The following is a quotation of 35 U.S.C. 112 (pre-AIA ), second paragraph: The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention. Claims 1-14 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA ), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. With respect to claim 1, given the plain and ordinary meaning of the words themselves and/or interpreted in light of the disclosure, the scope of the claimed limitations is unclear. For instance, it is not immediately clear as to how the “based on a pre-stored learning model” fits in with training an AI avatar using the image and voice data portion of the claim. The originally filed disclosure does not appear to describe said pre-stored learning model with more specificity than what is currently recited in the claims (see for instance, paragraph 37). What constitutes the pre-stored learning model? What is meant by pre-stored? How is the training of the AI avatar, via a learning unit, based on the generically recited pre-stored learning model? It is also unclear as to the scope of “without human intervention from the service provider”. 
Is it only the providing step that is done without human intervention – such that the second terminal simply receives the generated AI avatar? Is it saying the last limitation is automated? Or does the machine decide to transmit to the second user terminal based on a received command? What specifically is being done without human intervention from the service provider? The examiner respectfully requests the applicant clarify the scope of the claimed limitation. Claim 8 recites substantially similar subject matter as to that of claim 1 and is accordingly rejected using substantially similar rationale as to that in claim 1. Claims depending thereon do not cure the noted deficiencies and are accordingly also rejected using substantially similar rationale as to that of the claims from which they depend. With respect to claim 2, the scope of the claim is unclear when interpreted using the plain and ordinary meaning of the words themselves or when interpreted in light of the corresponding disclosure. What does “databasing content” reference? Is that similar to storing content in a database? Selecting content from a database? Furthermore, what is an interaction service field from the image and voice of the service provider? Is that where the image and voice data are saved? Finally, what content is selected and databased? The claim calls for selecting and databasing content within a storage unit related to an interaction service field from the image and voice data – “related to” makes it unclear what data is being selected and databased, and how the image and voice of the service provider have an interaction service field. The examiner respectfully requests the applicant clarify the scope of the claimed limitation. Claim 9 recites substantially similar subject matter as to that of claim 2 and is accordingly rejected using substantially similar rationale as to that in claim 2. 
Claims depending thereon do not cure the noted deficiencies and are accordingly also rejected using substantially similar rationale as to that of the claims from which they depend. With respect to claim 3, the scope of the claim is unclear when interpreted using the plain and ordinary meaning of the words themselves or when interpreted in light of the corresponding disclosure. For instance, it is not immediately clear as to what is meant by the interaction service field including a customer service, counseling, education, and entertainment. For example, how does the interaction service field include a customer service or education? Is applicant attempting to claim that the interaction service is being used in the field of customer service, education, etc., as opposed to an interaction service field – which would be more like an input? The specification appears to indicate that it is used in a field/profession – such as those discussed in paragraphs 79-90 and illustrated in figs. 6-8. In addition, the recited “type” has not been previously defined. Did Applicant intend for the different fields listed to be types of interaction services? If so, that may be one way forward on how to clarify the scope of the claimed invention. The examiner respectfully requests the applicant clarify the scope of the claimed limitation. Claim 10 recites substantially similar subject matter as to that of claim 3 and is accordingly rejected using substantially similar rationale as to that in claim 3. With respect to claim 6, the scope of the claim is unclear when interpreted using the plain and ordinary meaning of the words themselves or when interpreted in light of the corresponding disclosure. For instance, it is not immediately clear as to how the facial expressions, gesture, and voice are perceived from a second user via the second terminal and how that information is used to determine an emotional state of the user. 
Finally, a facial expression, gesture and voice tone are used multiple times and it is unclear whether they are different from or the same as those previously used. Paragraph 40 sets forth “the avatar system may perform emotional recognition that recognizes an emotional state of a user through facial expressions, gestures, and voice tones of the user, and may perform an emotional expression that expresses emotions of the avatar through the appropriate determination of the response to the recognized emotion, the selection of the voice tone for each emotion corresponding to the facial expression, and the choice of the right word. The implementation of such an avatar will be described later with reference to FIGS. 4 and 5”. However, it remains unclear as to the scope of perceiving a facial expression...and whether the various facial expressions, etc., defined are related to one another. The examiner respectfully requests the applicant clarify the scope of the claimed limitation. Claim 13 recites substantially similar subject matter as to that of claim 6 and is accordingly rejected using substantially similar rationale as to that in claim 6. For the purposes of further examination, the examiner is interpreting claims 6 and 13 along the lines of paragraph 40, similar to “recognizing/determining a facial expression, a gesture, and a voice tone of a second user by analyzing a real-time image received via the second user terminal to determine an emotional state of the second user; and changing an AI avatar facial expression, an AI avatar gesture, and an AI avatar voice tone of the AI avatar in response to the determined emotional state”. With respect to claim 7, the scope of the claim is unclear when interpreted using the plain and ordinary meaning of the words themselves or when interpreted in light of the corresponding disclosure. For instance, it is not immediately clear as to how the voice of the user is perceived and to what extent it is perceived. 
Paragraph 72 of applicant’s disclosure sets forth “To this end, the AI avatar interaction unit 223 may recognize, understand, and respond to a voice of a second user received from the second user terminal through at least any one of automatic speech recognition (ASR), speech-to-text (STT), natural language understanding (NLU) and text-to-speech (TTS)”. The examiner respectfully requests the applicant clarify the scope of the claimed limitation. Claim 14 recites substantially similar subject matter as to that of claim 7 and is accordingly rejected using substantially similar rationale as to that in claim 7. For the purposes of further examination, the examiner is interpreting claims 7 and 14 to align with paragraph 72, such that perception is being interpreted as recognizing and understanding. Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claim(s) 1-5, 7-12, and 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kasaba (US Patent 11,463,657) in view of Scholar (US PG Publication 2018/0308473). Regarding claim 1, Kasaba teaches an avatar-based interaction service method performed by a computer system using a service provider terminal, a first user terminal and a second user terminal (see for instance, abstract and figs. 
1-14), the method comprising: collecting image and voice data from a service provider via the service provider terminal (A data collection and/or training process may be executed by AI engine of system 100 to obtain, sort, analyze, and process the various data forming the plurality of data collections that are stored in avatar database and associated with each avatar, see for instance, column 5, lines 18-45. The data collection can be associated with a first subject person, see for instance, column 5, lines 46-65. Examples of subject person may include any living or deceased person, such as celebrities, politicians, athletes, scholars, teachers, authors, experts in various fields, private individuals, or any other person, see for instance, column 7, lines 1-9); generating, based on the image and voice data, an avatar (By storing different data collections for avatars of the same subject person at different ages or age ranges, the subject person may have an interactive digital avatar that mimics or emulates the speech, mannerisms, and inflections of the subject person at a first age, see for instance, column 6, lines 1-11. First data collection may include audio data, video data, image data, and/or text data associated with a subject person, see for instance, column 6, lines 12-20. 
The digital avatar generated by AI engine can accurately reflect the subject person at that particular age or age range, see for instance, column 6, lines 30-67); displaying, via the first user terminal, the avatar in a conversation environment between the service provider terminal and the first user terminal (The avatar can be displayed in a conversation environment between a service provider and a first user terminal, see for instance, column 12, lines 1-67 and figs. 6, 7, 9-14); training, via a learning unit, an artificial intelligence (AI) avatar, based on a pre-stored learning model, using the image and voice data (A data collection and/or training process may be executed by AI engine to obtain, sort, analyze, and process the various data forming the plurality of data collections, see for instance, column 5, lines 33-37. Additionally, AI engine may also execute one or more training sessions using CGI rendering module to generate a digital representation of the subject person’s avatar...training sessions may be used to refine the interactive avatar of the subject person to accurately mimic or emulate speech, mannerisms, and inflections of the subject person...these training processes or sessions may be implemented using machine learning techniques, see for instance, column 5, lines 35-46 and column 11, lines 49-67); and providing the interaction service to the second user terminal by generating an artificial intelligence (AI) avatar based on the trained learning model without human intervention from the service provider (The interaction service is provided to the second user terminal by generating an AI avatar based on the trained learning model without human intervention by the service provider, see for instance, column 12, lines 1-67 and figs. 6, 7, 9-14). While Kasaba teaches the broadest reasonable interpretation of claim 1, Scholar is being brought in to explicitly teach that in training an AI avatar, a pre-stored model is used. 
In the same art of avatars, Scholar teaches an artificial intelligence platform providing an interactive avatar, see for instance, paragraph 14. The avatar can, for instance, answer questions, guide users, and present results to individuals in an intuitive and easy-to-understand manner, see for instance, paragraph 14. The avatar can be trained to ask questions and provide functions of life coaching for users to address situations in which they may find themselves over the course of the day, see for instance, paragraph 14. The training component then uses machine learning to create a trained NLP model based on the corpus of questions and assigned intents, see for instance, paragraph 812, and figs. 42, 43, 58, and 59. The trained NLP model is then used to match intents to questions asked during user interaction, see for instance, paragraph 812. Given a set of probable answers, and a training set mapping existing questions to these answers, the system can formulate a probabilistic weighting of how likely each answer is for a new question never-before-seen by the system – this may require use of natural language processing, see for instance, paragraphs 792 and 793 and figs. 42 and 43. Fig. 58 is a flow diagram illustrating example NLP model creation with intents, see for instance, paragraph 805. It would have been obvious to one of ordinary skill in the art having the teachings of Kasaba and Scholar in front of them before the effective filing date of the claimed invention to incorporate the artificial intelligence avatar as taught by Scholar into Kasaba’s interactive avatar system, as having a machine learning algorithm train a pre-stored learning model, such as that described by Scholar, was well known at the time of the effective filing date of the claimed invention and would have yielded predictable results in combination with Kasaba. 
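The intent-training scheme attributed to Scholar above (a model trained on a corpus of questions mapped to intents, then used to match intents to new questions) can be sketched, purely for illustration, as a toy bag-of-words matcher. None of the names below come from Scholar, Kasaba, or the application; this is only a minimal sketch of the general technique, not either reference's implementation:

```python
# Illustrative toy sketch: train an NLP model on question->intent pairs,
# then match a never-before-seen question to the most probable intent.
# All names are hypothetical; real systems would use proper NLP.
from collections import Counter

def train_model(corpus):
    """Build a 'pre-stored' model: per-intent word-frequency profiles
    learned from a corpus of (question, intent) pairs."""
    model = {}
    for question, intent in corpus:
        profile = model.setdefault(intent, Counter())
        profile.update(question.lower().split())
    return model

def match_intent(model, question):
    """Score each intent by word overlap with the trained profiles and
    return the best-matching intent for a new question."""
    words = question.lower().split()
    def score(intent):
        return sum(model[intent][w] for w in words)
    return max(model, key=score)
```

The point of the sketch is only the two-phase structure the rejection relies on: a training phase that produces a stored model, and an interaction phase that consults that model without further human input.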
The modification of Kasaba with Scholar would have explicitly allowed training the AI avatar, based on a pre-stored learning model, using the image and voice data; and displaying the AI avatar to a second user terminal without human intervention from the service provider. The motivation for combining Kasaba with Scholar would have been to improve the user experience, enhance functionality, and improve the intelligence of the system, see for instance, Scholar, abstract, and paragraphs 2, 14, 15, and 792. Regarding claim 2, Kasaba in view of Scholar teach the avatar-based interaction service method of claim 1 and further teach selecting and databasing content within a storage unit related to an interaction service field from the image and voice data (see for instance, Kasaba, column 5, lines 18-67, column 6, lines 12-20, column 8, lines 4-22, and column 13, lines 34-60. Interaction data can include various information and data associated with one or more interactive avatar sessions between the user and one or more avatars of a subject person or subject persons, see for instance, Kasaba, column 8, lines 4-14. The information or data included in the interaction data obtained from an interactive session...may be added to personalization data in the first data set for the first user to be used by AI engine in subsequent interactive sessions to provide a personalized or customized interaction with the user, see for instance, Kasaba, column 8, lines 4-22). The motivation to combine Kasaba and Scholar is the same as that which was set forth with respect to claim 1. Unless otherwise stated, citations are to Kasaba. Regarding claim 3, Kasaba in view of Scholar teach the avatar-based interaction service method of claim 2 and further teach wherein the interaction service field includes a customer service, counseling, education, and entertainment (see for instance, Kasaba, column 7, lines 1-10 and figs. 
1-12), and further comprising: generating the AI avatar based on the type of interaction service field (The information or data included in the interaction data obtained from an interactive session...may be added to personalization data in the first data set for the first user to be used by AI engine in subsequent interactive sessions to provide a personalized or customized interaction with the user, see for instance, Kasaba, column 8, lines 4-22). The motivation to combine Kasaba and Scholar is the same as that which was set forth with respect to claim 1. Unless otherwise stated, citations are to Kasaba. Regarding claim 4, Kasaba in view of Scholar teach the avatar-based interaction service method of claim 1 and further teach, wherein the generating the avatar comprises: analyzing the image and voice data to reflect a motion, a gesture, and an emotion of the service provider to the avatar (The AI engine processes and analyzes a plurality of data associated with one or more subject persons to render and generate an interactive avatar of the subject person that is configured to mimic and emulate the speech, mannerisms, and inflections of the subject person, see for instance, Kasaba, column 3, lines 15-27. Additionally, AI engine may also execute one or more training sessions using CGI rendering module to generate a digital representation of the subject person’s avatar...training sessions may be used to refine the interactive avatar of the subject person to accurately mimic or emulate speech, mannerisms, and inflections of the subject person...these training processes or sessions may be implemented using machine learning techniques, see for instance, Kasaba, column 5, lines 35-46 and column 11, lines 49-67. 
The AI engine may use video, text and image data to accurately mimic facial expressions, hand movements, body posture, physical characteristics, and other physical mannerisms of the subject person, see for instance, column 6, lines 30-50. The AI engine may use audio data to accurately mimic the speech, voice inflections, and manner of speaking of the subject person, see for instance, column 6, lines 21-29). The motivation to combine Kasaba and Scholar is the same as that which was set forth with respect to claim 1. Unless otherwise stated, citations are to Kasaba. Regarding claim 5, Kasaba in view of Scholar teach the avatar-based interaction service method of claim 1 and further teach modulating the voice data into a voice of the avatar; and providing the modulated voice data via the first user terminal (The AI engine processes and analyzes a plurality of data associated with one or more subject persons to render and generate an interactive avatar of the subject person that is configured to mimic and emulate the speech, mannerisms, and inflections of the subject person, see for instance, Kasaba, column 3, lines 15-27. Additionally, AI engine may also execute one or more training sessions using CGI rendering module to generate a digital representation of the subject person’s avatar...training sessions may be used to refine the interactive avatar of the subject person to accurately mimic or emulate speech, mannerisms, and inflections of the subject person...these training processes or sessions may be implemented using machine learning techniques, see for instance, Kasaba, column 5, lines 35-46 and column 11, lines 49-67. The AI engine may use video, text and image data to accurately mimic facial expressions, hand movements, body posture, physical characteristics, and other physical mannerisms of the subject person, see for instance, column 6, lines 30-50. 
The AI engine may use audio data to accurately mimic the speech, voice inflections, and manner of speaking of the subject person, see for instance, column 6, lines 21-29). The motivation to combine Kasaba and Scholar is the same as that which was set forth with respect to claim 1. Unless otherwise stated, citations are to Kasaba. Regarding claim 7, Kasaba in view of Scholar teach the avatar-based interaction service method of claim 1 and further teach perceiving a voice of a second user received from the second user terminal; and responding to the voice of the second user through any one or more of automatic speech recognition (ASR), speech-to-text (STT), natural language understanding (NLU) and text-to-speech (TTS) (see for instance, Kasaba, column 11, lines 1-5, and column 13, lines 6-48). The motivation to combine Kasaba and Scholar is the same as that which was set forth with respect to claim 1. Unless otherwise stated, citations are to Kasaba. Regarding claim 8, Claim 8 is the apparatus claim of the method claim 1 and is rejected using substantially similar rationale as to that set forth with respect to claim 1. In addition, Kasaba in view of Scholar teach a communication unit configured to transmit and receive information through a communication network with a service provider terminal, a first user terminal, and a second user terminal; and one or more processors configured to perform operations (see for instance, Kasaba, column 4, lines 12-60 and figs. 1-3 and 5). Regarding claim 9, Claim 9 is the apparatus claim of the method claim 2 and is rejected using substantially similar rationale as to that set forth with respect to claim 2. Regarding claim 10, Claim 10 is the apparatus claim of the method claim 3 and is rejected using substantially similar rationale as to that set forth with respect to claim 3. 
Regarding claim 11, claim 11 is the apparatus claim corresponding to method claim 4 and is rejected using substantially similar rationale as that set forth with respect to claim 4.

Regarding claim 12, claim 12 is the apparatus claim corresponding to method claim 5 and is rejected using substantially similar rationale as that set forth with respect to claim 5.

Regarding claim 14, claim 14 is the apparatus claim corresponding to method claim 7 and is rejected using substantially similar rationale as that set forth with respect to claim 7.

Allowable Subject Matter

Since no prior art is being applied to claims 6 and 13, based on the current scope of the claims, claims 6 and 13 would be allowable if rewritten to overcome the rejection(s) under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, set forth in this Office action and to include all of the limitations of the base claim and any intervening claims.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. US PG Publication 2019/0095775 to Lembersky teaches an AI character system capable of natural verbal and visual interactions with a human, see for instance, abstract.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL J COBB whose telephone number is (571) 270-3875. The examiner can normally be reached Monday - Friday, 11am - 7pm ET. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Alicia Harrington, can be reached at 571-272-2330. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MICHAEL J COBB/
Primary Examiner, Art Unit 2615

Prosecution Timeline

Oct 21, 2021
Application Filed
May 04, 2024
Non-Final Rejection — §103, §112
Nov 16, 2024
Response after Non-Final Action
Feb 04, 2025
Response Filed
Aug 07, 2025
Final Rejection — §103, §112
Feb 11, 2026
Request for Continued Examination
Feb 20, 2026
Response after Non-Final Action
Feb 28, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597182
DATA INTERPOLATION PLATFORM FOR GENERATING PREDICTIVE AND INTERPOLATED PRICING DATA
2y 5m to grant Granted Apr 07, 2026
Patent 12586321
AUTOMATED MEASUREMENT OF INTERIOR SPACES THROUGH GUIDED MODELING OF DIMENSIONS
2y 5m to grant Granted Mar 24, 2026
Patent 12579736
METHOD AND DEVICE FOR GENERATING THREE-DIMENSIONAL IMAGE BY USING PLURALITY OF CAMERAS
2y 5m to grant Granted Mar 17, 2026
Patent 12561105
ONLINE ELECTRONIC WHITEBOARD CONTENT SYNCHRONIZATION AND SHARING SYSTEM
2y 5m to grant Granted Feb 24, 2026
Patent 12561859
Method and System for Visualizing a Graph
2y 5m to grant Granted Feb 24, 2026
Based on this examiner's 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
76%
Grant Probability
99%
With Interview (+37.9%)
2y 7m
Median Time to Grant
High
PTA Risk
Based on 432 resolved cases by this examiner. Grant probability derived from career allow rate.
