DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This communication is in response to Application No. 18/795454, filed on 8/6/2024. Claims 1-20 are currently pending and have been examined. Claims 1-20 have been rejected as follows.
Priority
Applicant’s claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged. Applicant has not complied with one or more conditions for receiving the benefit of an earlier filing date under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) as follows:
The later-filed application must be an application for a patent for an invention which is also disclosed in the prior application (the parent or original nonprovisional application or provisional application). The disclosure of the invention in the parent application and in the later-filed application must be sufficient to comply with the requirements of 35 U.S.C. 112(a) or the first paragraph of pre-AIA 35 U.S.C. 112, except for the best mode requirement. See Transco Products, Inc. v. Performance Contracting, Inc., 38 F.3d 551, 32 USPQ2d 1077 (Fed. Cir. 1994).
The disclosure of the prior-filed application, Application No. 17/839069 fails to provide adequate support or enablement in the manner provided by 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph for one or more claims of this application.
Originally filed claims 2, 8, and 15 of the instant application recite “wherein the knowledge base of the first virtual comprises a large language model trained on the interaction data from the first virtual avatar”. However, parent application 17/839069 does not recite any reference to a large language model. The specification merely states: “[0024] Further, the voice of the real person may be recorded and the virtual avatar may speak similarly as the real person. In some embodiments, an artificial intelligence engine may generate one or more machine learning models trained to answer questions asked by a customer using the virtual marketplace platform. The machine learning models may be trained to use natural language processing and training data to determine how to respond to a question or statement made by a customer.” As large language models are a narrower subset of natural language processing, the LLM recited in claims 2, 8, and 15 is not supported in the parent application and is therefore not given the earlier priority date. It is noted that support for claims 2, 8, and 15 in the instant application is found in [0211] and the originally filed claims. For these reasons, claims 2, 8, and 15 are examined with the priority date of 8/6/2024 (the filing date of the CIP/instant application).
Drawings
The drawings are objected to because Figures 4-48 are not legible copies of the drawings and correction is needed. Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
In addition to Replacement Sheets containing the corrected drawing figure(s), applicant is required to submit a marked-up copy of each Replacement Sheet including annotations indicating the changes made to the previous version. The marked-up copy must be clearly labeled as “Annotated Sheets” and must be presented in the amendment or remarks section that explains the change(s) to the drawings. See 37 CFR 1.121(d)(1). Failure to timely submit the proposed drawing and marked-up copy will result in the abandonment of the application.
Claim Objections
Claims 2, 8, and 15 are objected to because of the following informalities: there is a typographical error. The claims recite “wherein the knowledge base of the first virtual comprises” and, based on the independent claims, should recite “wherein the knowledge base of the first virtual avatar comprises”. The claims have been interpreted as such for the purposes of compact prosecution.
Claim 20 states “The system of claim 12, wherein one or more additional virtual avatar” and should recite “The system of claim 14, wherein one or more additional virtual avatar” based on the independent claim. The claim has been interpreted as such for the purposes of compact prosecution.
Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Step 1: Claims 1-6 are directed to a method, claims 7-13 to a computer-readable medium, and claims 14-20 to a system. Thus, each independent claim, on its face, is directed to one of the statutory categories of 35 U.S.C. §101. However, claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 2A Prong 1: The independent claims (1, 7 and 14, taking claim 1 as a representative claim) recite:
A method for sharing knowledge between avatars in a virtual marketplace platform, comprising:
generating, via an artificial intelligence engine, one or more machine learning models trained to receive input from a customer virtual avatar and determine an output to respond to the input, wherein the input comprises a question and the output comprises an answer;
receiving interaction data from a first virtual avatar, the interaction data comprising the output within a first virtual marketplace environment and performance metrics associated with the output;
generating, via the artificial intelligence engine, a knowledge base for the first virtual avatar using the interaction data, wherein the knowledge base is operatively configured to store the interaction data corresponding to the performance metrics;
transferring the knowledge base from the first virtual avatar to a second virtual avatar, wherein the second virtual avatar operates in a second virtual marketplace environment; and
updating the second virtual avatar with the knowledge base, enabling the second virtual avatar to provide one or more targeted responses to user inquiries based on the knowledge base.
These limitations, except for the italicized portions, under their broadest reasonable interpretations, recite certain methods of organizing human activity for managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions) as well as commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations). The claimed invention recites steps for enabling targeted responses to user inquiries based on information gathered into a knowledge base. This allows for interaction with users tailored to the specific needs of a brand (see [0003] of the instant specification). The steps, under their broadest reasonable interpretation, specifically fall under sales activities. The Examiner notes that although the claim limitations are summarized, the analysis regarding subject matter eligibility considers the entirety of the claim and all of the claim elements individually, as a whole, and in ordered combination.
Prong 2: This judicial exception is not integrated into a practical application. In particular, the claims recite the additional elements of
A method for sharing knowledge between avatars in a virtual marketplace platform, comprising: (claim 1)
A tangible, non-transitory computer-readable medium storing instructions that, when executed, cause a processing device to: (claim 7)
A system comprising: a memory device storing instructions; a processing device communicatively coupled to the memory device, wherein the processing device executes the instructions to: (claim 14)
generating, via an artificial intelligence engine, one or more machine learning models trained to receive input from a customer virtual avatar and determine an output to respond to the input, wherein the input comprises a question and the output comprises an answer;
receiving interaction data from a first virtual avatar, the interaction data comprising the output within a first virtual marketplace environment and performance metrics associated with the output;
generating, via the artificial intelligence engine, a knowledge base for the first virtual avatar using the interaction data, wherein the knowledge base is operatively configured to store the interaction data corresponding to the performance metrics;
transferring the knowledge base from the first virtual avatar to a second virtual avatar, wherein the second virtual avatar operates in a second virtual marketplace environment; and
updating the second virtual avatar with the knowledge base, enabling the second virtual avatar to provide one or more targeted responses to user inquiries based on the knowledge base.
The additional elements of A method for sharing knowledge between avatars in a virtual marketplace platform, comprising: (claim 1); A tangible, non-transitory computer-readable medium storing instructions that, when executed, cause a processing device to: (claim 7); A system comprising: a memory device storing instructions; a processing device communicatively coupled to the memory device, wherein the processing device executes the instructions to: (claim 14); and via an artificial intelligence engine, one or more machine learning models trained to receive are recited at a high level of generality (i.e., as a generic processor performing a generic computer function of processing data) such that they amount to no more than mere instructions to apply the exception using a generic computer component. The instant specification states in [0067]-[0068]: “The machine learning models 132 may be trained to answer questions asked by customer virtual avatars and/or users of the virtual marketplace platform. The machine learning models 132 may be trained with training data including a corpus of labeled questions and a corpus of labeled answers. In some embodiments, the machine learning models 132 may perform natural language processing and/or sentiment analysis and/or tone analysis. [0068] The training engine 130 may be a rackmount server, a router, a personal computer, a portable digital assistant, a smartphone, a laptop computer, a tablet computer, a netbook, a desktop computer, an Internet of Things (IoT) device, any other desired computing device, or any combination of the above. The training engine 130 may be cloud-based, be a real-time software platform, include privacy software or protocols, or include security software or protocols.” The limitations do not impose any meaningful limits on practicing the abstract idea, and therefore do not integrate the abstract idea into a practical application – MPEP 2106.05(f).
The additional elements of the […] a customer virtual avatar; receiving interaction data from a first virtual avatar, the interaction data comprising the output within a first virtual marketplace environment […]; a knowledge base for the first virtual avatar using the interaction data […]; from the first virtual avatar to a second virtual avatar, wherein the second virtual avatar operates in a second virtual marketplace environment; and updating the second virtual avatar with the knowledge base, enabling the second virtual avatar to provide one or more targeted responses to user inquiries based on the knowledge base merely indicate a field of use or technological environment in which the judicial exception is performed. Although these additional elements limit the identified judicial exception, this type of limitation merely confines the use of the abstract idea to a particular technological environment (virtual reality/environment) and thus fails to add an inventive concept to the claims. See MPEP 2106.05(h). These limitations do not impose any meaningful limits on practicing the abstract idea, and therefore do not integrate the abstract idea into a practical application (see MPEP 2106.05(g)).
Accordingly, these additional elements when considered individually or as a whole do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The independent claims are directed to an abstract idea.
Step 2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed with respect to Step 2A Prong 2, the additional elements in the claims amount to no more than mere instructions to apply the judicial exception using a generic computer component and technological environment.
Even when considered as an ordered combination, the additional elements of claim 1, 7, and 14 do not add anything that is not already present when they are considered individually. Therefore, under Step 2B, there are no meaningful limitations in claims 1, 7 and 14 that transform the judicial exception into a patent eligible application such that the claims amount to significantly more than the judicial exception itself (see MPEP 2106.05).
As such, independent claims 1, 7, and 14 are ineligible.
Dependent claims 2-6, 8-13, and 15-20 when analyzed as a whole, are held to be patent ineligible under 35 U.S.C. §101 because the additional recited limitations fail to establish that the claims are not directed to the same abstract idea of Independent Claims 1, 7 and 14 without significantly more.
2. The method of claim 1, wherein the knowledge base of the first virtual [avatar] comprises a large language model trained on the interaction data from the first virtual avatar. The limitation adds the additional element of the large language model; however, as shown in the rejection of the independent claims, the machine learning models of the instant application are recited at a high level of generality and therefore do not integrate the judicial exception into a practical application.
3. The method of claim 1, further comprising updating the knowledge base based on monitoring user interactions with the second virtual avatar, wherein the updating of the knowledge base results in an updated knowledge base. The limitation merely further limits the abstract idea and does not further integrate the judicial exception into a practical application.
4. The method of claim 3, further comprising transferring the updated knowledge base back to the first virtual avatar. The limitation merely further limits the abstract idea and does not further integrate the judicial exception into a practical application.
5. The method of claim 1, wherein at least one of the first virtual avatar or the second virtual avatar comprises an entity virtual avatar associated with an entity occupying a virtual building in the virtual cityscape, and the entity virtual avatar is included inside the virtual building associated with the entity. The limitation merely indicates a field of use or technological environment in which the judicial exception is performed. Although these additional elements limit the identified judicial exception, this type of limitation merely confines the use of the abstract idea to a particular technological environment (virtual reality/environment) and thus fails to add an inventive concept to the claims. See MPEP 2106.05(h). These limitations do not impose any meaningful limits on practicing the abstract idea, and therefore do not integrate the abstract idea into a practical application (see MPEP 2106.05(g)).
6. The method of claim 1, wherein the knowledge base comprises interaction data from one or more additional virtual avatars operating in one or more additional virtual marketplace environments. The limitation merely further limits the abstract idea and does not further integrate the judicial exception into a practical application.
13. The computer-readable medium of claim 12, wherein one or more additional virtual avatars are accessible via an application programming interface associated with another system and one or more additional virtual avatars are implemented in the another system. The limitation merely indicates a field of use or technological environment in which the judicial exception is performed. Although these additional elements limit the identified judicial exception, this type of limitation merely confines the use of the abstract idea to a particular technological environment (virtual reality/environment) and thus fails to add an inventive concept to the claims. See MPEP 2106.05(h). These limitations do not impose any meaningful limits on practicing the abstract idea, and therefore do not integrate the abstract idea into a practical application (see MPEP 2106.05(g)).
Claims 8-12 and 15-20 recite parallel claim language and therefore are rejected for the same reasons set forth above. For these reasons claims 1-20 are rejected under 35 USC 101.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 3, 4, 7, 9, 10, 14, 16, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Ge (US12400129) in view of Sibai (US20160048908).
Regarding claim 1, Ge discloses:
A method for sharing knowledge between avatars in a virtual marketplace platform, comprising: generating, via an artificial intelligence engine, one or more machine learning models trained to receive input [Col. 6 lines 10-15] The form of knowledge may be question-and-answer pairs, knowledge bases or knowledge graphs, and dialogue generation algorithms obtained by machine learning methods. For example, a dialogue system A has its own knowledge set and may share its knowledge with a dialogue system B so that the dialogue system B may answer all questions corresponding to the knowledge in the dialogue system A and the dialogue system B. […] avatar and determine an output to respond to the input, wherein the input comprises a question and the output comprises an answer; [Col. 2 lines 25-40] parsing a question submitted by a user; searching for an answer corresponding to the question in the shared knowledge base according to a result of the parsing; when no answer is found by the search from the shared knowledge base, notifying the user that there is no answer corresponding to the question when; when a unique answer is found by the search from the shared knowledge base, presenting the unique answer to the user; and when a plurality of answers is found by the search from the shared knowledge base, determining a priority of each answer and presenting an answer with the highest priority to the user.
receiving interaction data from a first virtual avatar, the interaction data comprising the output within a […] environment and performance metrics associated with the output; [Col. 9 lines 15-31] In some embodiments, the dialogue method according to the present application further includes updating associated information of a knowledge point corresponding to the unique answer, specifically, the updating includes: updating a number of times the knowledge point corresponding to the unique answer is adopted, and/or updating points of an external dialogue system corresponding to the knowledge point corresponding to the unique answer.
generating, via the artificial intelligence engine, a knowledge base for the first virtual avatar using the interaction data, wherein the knowledge base is operatively configured to store the interaction data [Col. 2 lines 49-55] an information parsing program module configured to parse the knowledge sharing request to determine the feature information of the knowledge point to be shared; and a base updating program module configured to add the feature information of the knowledge point to be shared to a knowledge base of the first dialogue system to form a shared knowledge base. corresponding to the performance metrics [Col. 9 lines 20-25] updating a number of times the knowledge point corresponding to the unique answer is adopted, and/or
transferring the knowledge base from the first virtual avatar to a second virtual avatar, wherein the second virtual avatar operates […] in an environment; and [Col. 6 lines 25-40] The configuration information includes at least avatar setting information of the external dialogue system, which is used to display an avatar of the external dialogue system on a dialogue interface when the first dialogue system answers questions involving the knowledge points of the external dialogue system. In this embodiment, when a knowledge point is shared, the configuration information of the corresponding external dialogue system is also shared with the first dialogue system, so that when there are questions beyond the scope of the knowledge points of the first dialogue system in an actual dialogue, not only an answer may be acquired from the knowledge points of the external dialogue system, but also the avatar information of the external dialogue system that provides the answer may be presented as a third party. The examiner interprets the first dialogue system and the external dialogue systems to be the first and second avatars.
updating the second virtual avatar with the knowledge base, enabling the second virtual avatar to provide one or more targeted responses to user inquiries based on the knowledge base. [Col. 9 lines 15-31] In some embodiments, the dialogue method according to the present application further includes updating associated information of a knowledge point corresponding to the unique answer, specifically, the updating includes: updating a number of times the knowledge point corresponding to the unique answer is adopted, and/or updating points of an external dialogue system corresponding to the knowledge point corresponding to the unique answer. Through real-time updating of a number of times a knowledge point is adopted according to actual usage and corresponding scoring of a dialogue system, the accuracy of the data is ensured, thereby providing a reliable data source for evaluating a priority of the knowledge point, which is helpful to improve the accuracy of recommending an answer to a user.
While Ge discloses the sharing of knowledge between two systems incorporating avatars, the reference does not expressly disclose:
A customer avatar
A first and second virtual marketplace environment
However Sibai teaches:
A customer avatar [0046] The seller and the buyer may separately choose the vantage point from which their avatar is viewed in their respective client devices 110, 112. For example, the seller may view the environment, on a display (not shown) of the seller client device 110, from a first person perspective, i.e., through the “eyes” of the digital seller avatar. Similarly, the buyer may view the environment, on a display (not shown) of the buyer client device 112, from a first person perspective, i.e., through the “eyes” of the digital buyer avatar. A 3D, first person perspective is illustrated in FIG. 3 from the vantage point of the digital buyer avatar. The buyer may, alternatively, view the environment, on a display of the buyer client device 112, from a third person perspective. A 3D, third person perspective is illustrated in FIG. 4, which perspective allows the buyer to view the digital buyer avatar within the environment. Further alternatively, the third person perspective may be presented on the display of the buyer client device 112 in a 2D format. A 2D, third person perspective is illustrated in FIG. 5. And see the corresponding seller in [0040] and [0043]
A first and second virtual marketplace environment [0049] FIG. 6 schematically illustrates activity related to a virtual asset marketplace aspect of the present application. In particular, a developer 602 may create designs 604 that may be used in virtual stores. Examples of designs may include, without limitation: store signs; shelving; sale posters; trees; shrubs; and statues. Furthermore, the developer 602 may create avatars 606. Such avatars 606 may be employed by a buyer as a digital buyer avatar, by a seller as a digital seller avatar or by a seller as a digital NPC. The developer 602 may also create additional assets 608, such as dialogs for digitals NPCs. The developer 602 may upload (step 610) various virtual assets 604, 606, 608 to a virtual asset marketplace 614 hosted at the server 102 (see FIG. 1). [0026] The virtual, interactive, global marketplace system allows sellers/advertisers and consumers to interact in a digital, virtual environment to facilitate the buying and selling of goods and services. It is expected that the system may allow prospective purchasers to view, interact with, and customize aspects of the virtual environment, including digital representations of the goods or services being sold, in ways not practical to accomplish in the real world, in real-time, in order facilitate the decision-making process of the prospective purchasers.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the knowledge sharing between two systems incorporating avatar functionality, as disclosed in Ge, to include the features of a first and second virtual marketplace and a customer and seller avatar relationship, as taught in Sibai, in order to customize aspects of the virtual environment, including digital representations of the goods or services being sold, in ways not practical to accomplish in the real world, in real-time, in order to facilitate the decision-making process of the prospective purchasers (see Sibai, paragraph [0026]).
Regarding claim 3, Ge in view of Sibai teaches the limitations set forth above. Ge further discloses:
further comprising updating the knowledge base based on monitoring user interactions with the second virtual avatar, wherein the updating of the knowledge base results in an updated knowledge base. [Col. 9 lines 15-31] In some embodiments, the dialogue method according to the present application further includes updating associated information of a knowledge point corresponding to the unique answer, specifically, the updating includes: updating a number of times the knowledge point corresponding to the unique answer is adopted, and/or updating points of an external dialogue system corresponding to the knowledge point corresponding to the unique answer. Through real-time updating of a number of times a knowledge point is adopted according to actual usage and corresponding scoring of a dialogue system, the accuracy of the data is ensured, thereby providing a reliable data source for evaluating a priority of the knowledge point, which is helpful to improve the accuracy of recommending an answer to a user. The examiner interprets the updating and tracking of the knowledge points within the external dialogue system to be monitoring user interactions with the second virtual avatar
Regarding claim 4, Ge in view of Sibai teaches the limitations set forth above. Ge further discloses:
further comprising transferring the updated knowledge base back to the first virtual avatar. [Col. 9 lines 15-31] In some embodiments, the dialogue method according to the present application further includes updating associated information of a knowledge point corresponding to the unique answer, specifically, the updating includes: updating a number of times the knowledge point corresponding to the unique answer is adopted, and/or updating points of an external dialogue system corresponding to the knowledge point corresponding to the unique answer. Through real-time updating of a number of times a knowledge point is adopted according to actual usage and corresponding scoring of a dialogue system, the accuracy of the data is ensured, thereby providing a reliable data source for evaluating a priority of the knowledge point, which is helpful to improve the accuracy of recommending an answer to a user. [Col. 2 lines 50-55] (20) a base updating program module configured to add the feature information of the knowledge point to be shared to a knowledge base of the first dialogue system to form a shared knowledge base. The examiner interprets the sharing of updated knowledge between the two parties (avatars or systems) as transferring the updated knowledge base back to the first virtual avatar.
Regarding claim 7, Ge discloses:
A tangible, non-transitory computer-readable medium storing instructions that, when executed, cause a processing device to [Col. 3 lines 15-25]: generating, via an artificial intelligence engine, one or more machine learning models trained to receive input [Col. 6 lines 10-15] The form of knowledge may be question-and-answer pairs, knowledge bases or knowledge graphs, and dialogue generation algorithms obtained by machine learning methods. For example, a dialogue system A has its own knowledge set and may share its knowledge with a dialogue system B so that the dialogue system B may answer all questions corresponding to the knowledge in the dialogue system A and the dialogue system B. […] avatar and determine an output to respond to the input, wherein the input comprises a question and the output comprises an answer; [Col. 2 lines 25-40] parsing a question submitted by a user; searching for an answer corresponding to the question in the shared knowledge base according to a result of the parsing; when no answer is found by the search from the shared knowledge base, notifying the user that there is no answer corresponding to the question when; when a unique answer is found by the search from the shared knowledge base, presenting the unique answer to the user; and when a plurality of answers is found by the search from the shared knowledge base, determining a priority of each answer and presenting an answer with the highest priority to the user.
receiving interaction data from a first virtual avatar, the interaction data comprising the output within a […] environment and performance metrics associated with the output; [Col. 9 lines 15-31] In some embodiments, the dialogue method according to the present application further includes updating associated information of a knowledge point corresponding to the unique answer, specifically, the updating includes: updating a number of times the knowledge point corresponding to the unique answer is adopted, and/or updating points of an external dialogue system corresponding to the knowledge point corresponding to the unique answer.
generating, via the artificial intelligence engine, a knowledge base for the first virtual avatar using the interaction data, wherein the knowledge base is operatively configured to store the interaction data [Col. 2 lines 49-55] an information parsing program module configured to parse the knowledge sharing request to determine the feature information of the knowledge point to be shared; and a base updating program module configured to add the feature information of the knowledge point to be shared to a knowledge base of the first dialogue system to form a shared knowledge base. corresponding to the performance metrics [Col. 9 lines 20-25] updating a number of times the knowledge point corresponding to the unique answer is adopted, and/or
transferring the knowledge base from the first virtual avatar to a second virtual avatar, wherein the second virtual avatar operates […] in an environment; and [Col. 6 lines 25-40] The configuration information includes at least avatar setting information of the external dialogue system, which is used to display an avatar of the external dialogue system on a dialogue interface when the first dialogue system answers questions involving the knowledge points of the external dialogue system. In this embodiment, when a knowledge point is shared, the configuration information of the corresponding external dialogue system is also shared with the first dialogue system, so that when there are questions beyond the scope of the knowledge points of the first dialogue system in an actual dialogue, not only an answer may be acquired from the knowledge points of the external dialogue system, but also the avatar information of the external dialogue system that provides the answer may be presented as a third party. The examiner interprets the first dialogue system and the external dialogue systems to be the first and second avatars.
updating the second virtual avatar with the knowledge base, enabling the second virtual avatar to provide one or more targeted responses to user inquiries based on the knowledge base. [Col. 9 lines 15-31] In some embodiments, the dialogue method according to the present application further includes updating associated information of a knowledge point corresponding to the unique answer, specifically, the updating includes: updating a number of times the knowledge point corresponding to the unique answer is adopted, and/or updating points of an external dialogue system corresponding to the knowledge point corresponding to the unique answer. Through real-time updating of a number of times a knowledge point is adopted according to actual usage and corresponding scoring of a dialogue system, the accuracy of the data is ensured, thereby providing a reliable data source for evaluating a priority of the knowledge point, which is helpful to improve the accuracy of recommending an answer to a user.
While Ge discloses the sharing of knowledge between two systems incorporating avatars, the reference does not expressly disclose:
A customer avatar
A first and second virtual marketplace environment
However, Sibai teaches:
A customer avatar [0046] The seller and the buyer may separately choose the vantage point from which their avatar is viewed in their respective client devices 110, 112. For example, the seller may view the environment, on a display (not shown) of the seller client device 110, from a first person perspective, i.e., through the “eyes” of the digital seller avatar. Similarly, the buyer may view the environment, on a display (not shown) of the buyer client device 112, from a first person perspective, i.e., through the “eyes” of the digital buyer avatar. A 3D, first person perspective is illustrated in FIG. 3 from the vantage point of the digital buyer avatar. The buyer may, alternatively, view the environment, on a display of the buyer client device 112, from a third person perspective. A 3D, third person perspective is illustrated in FIG. 4, which perspective allows the buyer to view the digital buyer avatar within the environment. Further alternatively, the third person perspective may be presented on the display of the buyer client device 112 in a 2D format. A 2D, third person perspective is illustrated in FIG. 5. See also the corresponding description of the seller in [0040] and [0043].
A first and second virtual marketplace environment [0049] FIG. 6 schematically illustrates activity related to a virtual asset marketplace aspect of the present application. In particular, a developer 602 may create designs 604 that may be used in virtual stores. Examples of designs may include, without limitation: store signs; shelving; sale posters; trees; shrubs; and statues. Furthermore, the developer 602 may create avatars 606. Such avatars 606 may be employed by a buyer as a digital buyer avatar, by a seller as a digital seller avatar or by a seller as a digital NPC. The developer 602 may also create additional assets 608, such as dialogs for digitals NPCs. The developer 602 may upload (step 610) various virtual assets 604, 606, 608 to a virtual asset marketplace 614 hosted at the server 102 (see FIG. 1). [0026] The virtual, interactive, global marketplace system allows sellers/advertisers and consumers to interact in a digital, virtual environment to facilitate the buying and selling of goods and services. It is expected that the system may allow prospective purchasers to view, interact with, and customize aspects of the virtual environment, including digital representations of the goods or services being sold, in ways not practical to accomplish in the real world, in real-time, in order facilitate the decision-making process of the prospective purchasers.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the knowledge sharing between two systems incorporating avatar functionality to include the features of a first and second virtual marketplace and a customer and seller avatar relationship, as taught in Sibai, in order to customize aspects of the virtual environment, including digital representations of the goods or services being sold, in ways not practical to accomplish in the real world, in real-time, in order to facilitate the decision-making process of the prospective purchasers. (see paragraph 0049)
Regarding claim 9, Ge in view of Sibai teaches the limitations set forth above. Ge further discloses:
further comprising updating the knowledge base based on monitoring user interactions with the second virtual avatar, wherein the updating of the knowledge base results in an updated knowledge base. [Col. 9 lines 15-31] In some embodiments, the dialogue method according to the present application further includes updating associated information of a knowledge point corresponding to the unique answer, specifically, the updating includes: updating a number of times the knowledge point corresponding to the unique answer is adopted, and/or updating points of an external dialogue system corresponding to the knowledge point corresponding to the unique answer. Through real-time updating of a number of times a knowledge point is adopted according to actual usage and corresponding scoring of a dialogue system, the accuracy of the data is ensured, thereby providing a reliable data source for evaluating a priority of the knowledge point, which is helpful to improve the accuracy of recommending an answer to a user. The examiner interprets the updating and tracking of the knowledge points within the external dialogue system to be monitoring user interactions with the second virtual avatar.
Regarding claim 10, Ge in view of Sibai teaches the limitations set forth above. Ge further discloses:
further comprising transferring the updated knowledge base back to the first virtual avatar. [Col. 9 lines 15-31] In some embodiments, the dialogue method according to the present application further includes updating associated information of a knowledge point corresponding to the unique answer, specifically, the updating includes: updating a number of times the knowledge point corresponding to the unique answer is adopted, and/or updating points of an external dialogue system corresponding to the knowledge point corresponding to the unique answer. Through real-time updating of a number of times a knowledge point is adopted according to actual usage and corresponding scoring of a dialogue system, the accuracy of the data is ensured, thereby providing a reliable data source for evaluating a priority of the knowledge point, which is helpful to improve the accuracy of recommending an answer to a user. [Col. 2 lines 50-55] (20) a base updating program module configured to add the feature information of the knowledge point to be shared to a knowledge base of the first dialogue system to form a shared knowledge base. The examiner interprets the sharing of knowledge between the two parties (avatars or systems) as teaching the transfer of the updated knowledge base back to the first virtual avatar.
Regarding claim 14, Ge discloses:
A system comprising: a memory device storing instructions; a processing device communicatively coupled to the memory device, wherein the processing device executes the instructions to: [Col. 3 lines 15-25]: generating, via an artificial intelligence engine, one or more machine learning models trained to receive input [Col. 6 lines 10-15] The form of knowledge may be question-and-answer pairs, knowledge bases or knowledge graphs, and dialogue generation algorithms obtained by machine learning methods. For example, a dialogue system A has its own knowledge set and may share its knowledge with a dialogue system B so that the dialogue system B may answer all questions corresponding to the knowledge in the dialogue system A and the dialogue system B. […] avatar and determine an output to respond to the input, wherein the input comprises a question and the output comprises an answer; [Col. 2 lines 25-40] parsing a question submitted by a user; searching for an answer corresponding to the question in the shared knowledge base according to a result of the parsing; when no answer is found by the search from the shared knowledge base, notifying the user that there is no answer corresponding to the question; when a unique answer is found by the search from the shared knowledge base, presenting the unique answer to the user; and when a plurality of answers is found by the search from the shared knowledge base, determining a priority of each answer and presenting an answer with the highest priority to the user.
receiving interaction data from a first virtual avatar, the interaction data comprising the output within a […] environment and performance metrics associated with the output; [Col. 9 lines 15-31] In some embodiments, the dialogue method according to the present application further includes updating associated information of a knowledge point corresponding to the unique answer, specifically, the updating includes: updating a number of times the knowledge point corresponding to the unique answer is adopted, and/or updating points of an external dialogue system corresponding to the knowledge point corresponding to the unique answer.
generating, via the artificial intelligence engine, a knowledge base for the first virtual avatar using the interaction data, wherein the knowledge base is operatively configured to store the interaction data [Col. 2 lines 49-55] an information parsing program module configured to parse the knowledge sharing request to determine the feature information of the knowledge point to be shared; and a base updating program module configured to add the feature information of the knowledge point to be shared to a knowledge base of the first dialogue system to form a shared knowledge base. corresponding to the performance metrics [Col. 9 lines 20-25] updating a number of times the knowledge point corresponding to the unique answer is adopted, and/or
transferring the knowledge base from the first virtual avatar to a second virtual avatar, wherein the second virtual avatar operates […] in an environment; and [Col. 6 lines 25-40] The configuration information includes at least avatar setting information of the external dialogue system, which is used to display an avatar of the external dialogue system on a dialogue interface when the first dialogue system answers questions involving the knowledge points of the external dialogue system. In this embodiment, when a knowledge point is shared, the configuration information of the corresponding external dialogue system is also shared with the first dialogue system, so that when there are questions beyond the scope of the knowledge points of the first dialogue system in an actual dialogue, not only an answer may be acquired from the knowledge points of the external dialogue system, but also the avatar information of the external dialogue system that provides the answer may be presented as a third party. The examiner interprets the first dialogue system and the external dialogue systems to be the first and second avatars.
updating the second virtual avatar with the knowledge base, enabling the second virtual avatar to provide one or more targeted responses to user inquiries based on the knowledge base. [Col. 9 lines 15-31] In some embodiments, the dialogue method according to the present application further includes updating associated information of a knowledge point corresponding to the unique answer, specifically, the updating includes: updating a number of times the knowledge point corresponding to the unique answer is adopted, and/or updating points of an external dialogue system corresponding to the knowledge point corresponding to the unique answer. Through real-time updating of a number of times a knowledge point is adopted according to actual usage and corresponding scoring of a dialogue system, the accuracy of the data is ensured, thereby providing a reliable data source for evaluating a priority of the knowledge point, which is helpful to improve the accuracy of recommending an answer to a user.
While Ge discloses the sharing of knowledge between two systems incorporating avatars, the reference does not expressly disclose:
A customer avatar
A first and second virtual marketplace environment
However, Sibai teaches:
A customer avatar [0046] The seller and the buyer may separately choose the vantage point from which their avatar is viewed in their respective client devices 110, 112. For example, the seller may view the environment, on a display (not shown) of the seller client device 110, from a first person perspective, i.e., through the “eyes” of the digital seller avatar. Similarly, the buyer may view the environment, on a display (not shown) of the buyer client device 112, from a first person perspective, i.e., through the “eyes” of the digital buyer avatar. A 3D, first person perspective is illustrated in FIG. 3 from the vantage point of the digital buyer avatar. The buyer may, alternatively, view the environment, on a display of the buyer client device 112, from a third person perspective. A 3D, third person perspective is illustrated in FIG. 4, which perspective allows the buyer to view the digital buyer avatar within the environment. Further alternatively, the third person perspective may be presented on the display of the buyer client device 112 in a 2D format. A 2D, third person perspective is illustrated in FIG. 5. See also the corresponding description of the seller in [0040] and [0043].
A first and second virtual marketplace environment [0049] FIG. 6 schematically illustrates activity related to a virtual asset marketplace aspect of the present application. In particular, a developer 602 may create designs 604 that may be used in virtual stores. Examples of designs may include, without limitation: store signs; shelving; sale posters; trees; shrubs; and statues. Furthermore, the developer 602 may create avatars 606. Such avatars 606 may be employed by a buyer as a digital buyer avatar, by a seller as a digital seller avatar or by a seller as a digital NPC. The developer 602 may also create additional assets 608, such as dialogs for digitals NPCs. The developer 602 may upload (step 610) various virtual assets 604, 606, 608 to a virtual asset marketplace 614 hosted at the server 102 (see FIG. 1). [0026] The virtual, interactive, global marketplace system allows sellers/advertisers and consumers to interact in a digital, virtual environment to facilitate the buying and selling of goods and services. It is expected that the system may allow prospective purchasers to view, interact with, and customize aspects of the virtual environment, including digital representations of the goods or services being sold, in ways not practical to accomplish in the real world, in real-time, in order facilitate the decision-making process of the prospective purchasers.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the knowledge sharing between two systems incorporating avatar functionality to include the features of a first and second virtual marketplace and a customer and seller avatar relationship, as taught in Sibai, in order to customize aspects of the virtual environment, including digital representations of the goods or services being sold, in ways not practical to accomplish in the real world, in real-time, in order to facilitate the decision-making process of the prospective purchasers. (see paragraph 0049)
Regarding claim 16, Ge in view of Sibai teaches the limitations set forth above. Ge further discloses:
further comprising updating the knowledge base based on monitoring user interactions with the second virtual avatar, wherein the updating of the knowledge base results in an updated knowledge base. [Col. 9 lines 15-31] In some embodiments, the dialogue method according to the present application further includes updating associated information of a knowledge point corresponding to the unique answer, specifically, the updating includes: updating a number of times the knowledge point corresponding to the unique answer is adopted, and/or updating points of an external dialogue system corresponding to the knowledge point corresponding to the unique answer. Through real-time updating of a number of times a knowledge point is adopted according to actual usage and corresponding scoring of a dialogue system, the accuracy of the data is ensured, thereby providing a reliable data source for evaluating a priority of the knowledge point, which is helpful to improve the accuracy of recommending an answer to a user. The examiner interprets the updating and tracking of the knowledge points within the external dialogue system to be monitoring user interactions with the second virtual avatar.
Regarding claim 17, Ge in view of Sibai teaches the limitations set forth above. Ge further discloses:
further comprising transferring the updated knowledge base back to the first virtual avatar. [Col. 9 lines 15-31] In some embodiments, the dialogue method according to the present application further includes updating associated information of a knowledge point corresponding to the unique answer, specifically, the updating includes: updating a number of times the knowledge point corresponding to the unique answer is adopted, and/or updating points of an external dialogue system corresponding to the knowledge point corresponding to the unique answer. Through real-time updating of a number of times a knowledge point is adopted according to actual usage and corresponding scoring of a dialogue system, the accuracy of the data is ensured, thereby providing a reliable data source for evaluating a priority of the knowledge point, which is helpful to improve the accuracy of recommending an answer to a user. [Col. 2 lines 50-55] (20) a base updating program module configured to add the feature information of the knowledge point to be shared to a knowledge base of the first dialogue system to form a shared knowledge base. The examiner interprets the sharing of knowledge between the two parties (avatars or systems) as teaching the transfer of the updated knowledge base back to the first virtual avatar.
Claims 2, 8, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Ge (US12400129) in view of Sibai (US20160048908), and further in view of Gogin (US 20240296753).
Regarding claims 2, 8 and 15, Ge in view of Sibai teaches the limitations set forth above. While Ge discloses the sharing of knowledge between two systems incorporating avatars and Sibai teaches the virtual marketplace environment, the reference combination does not expressly disclose:
wherein the knowledge base of the first virtual comprises a large language model trained on the interaction data from the first virtual avatar.
However, Gogin teaches:
wherein the knowledge base of the first virtual comprises a large language model trained on the interaction data from the first virtual avatar. [0051] As indicated in FIG. 6A, the first avatar 602 may correspond to a first generative artificial intelligence (AI) model 612 and the second avatar 604 may correspond to a second generative artificial intelligence (AI) model 614. As described in further detail herein, the respective generative AI models 612, 614 may be accessed by the server 102 to control the output of the corresponding avatars 602, 604. The respective generative AI models 612, 614 may each be a trained deep neural network, such as a large language model (LLM) chatbot that are able to generate responses to prompts. The respective generative AI models 612, 614 may be trained with training data that includes, for example, books, articles, texts, and the like. In some examples, the respective generative AI models 612, 614 may be different, unique, or distinct instances or sessions of the same generative AI model, thus acting as different generative AI models 612, 614 from the perspective of devices interacting with the models.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the knowledge sharing between two systems incorporating avatar functionality and the virtual marketplace environment to include wherein the knowledge base of the first virtual comprises a large language model trained on the interaction data from the first virtual avatar, as taught in Gogin, in order to create a human-like conversational language person that would look like a human (in a virtual reality environment), speak like a human, and that will provide personalized responses to users (paragraph 0018).
Claims 5, 11, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Ge (US12400129) in view of Sibai (US20160048908), and further in view of Ghazanfari (US 20190369742).
Regarding claims 5, 11 and 18, Ge in view of Sibai teaches the limitations set forth above. While Ge discloses the sharing of knowledge between two systems incorporating avatars and Sibai teaches the virtual marketplace environment, the reference combination does not expressly disclose:
wherein at least one of the first virtual avatar or the second virtual avatar comprises an entity virtual avatar associated with an entity occupying a virtual building in the virtual cityscape, and the entity virtual avatar is included inside the virtual building associated with the entity.
However, Ghazanfari teaches:
wherein at least one of the first virtual avatar or the second virtual avatar comprises an entity virtual avatar associated with an entity occupying a virtual building in the virtual cityscape, and the entity virtual avatar is included inside the virtual building associated with the entity. [0044] In some embodiments, the virtual environment can be interactive. More specifically, an artificial intelligence (AI) character, as shown in FIG. 7, can be created in the virtual environment to interact with the user. For example, an AI sales person can be embedded in a virtual shopping destination (e.g., virtual car dealership). Using various AI technologies (e.g., CNN, speech recognition, etc.), the AI character can have a meaningful conversation with the user (e.g., answering questions or introducing products). In addition to chatting, the AI character may also perform actions in response to the user's actions or commands, thus providing the user with a shopping experience similar to the one in the physical world. For example, a car salesman may “follow” the user around as the user “walks” in the virtual car dealership; or a clothing model may turn or move (e.g., raising an arm) in response to the user's verbal request.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the knowledge sharing between two systems incorporating avatar functionality and the virtual marketplace environment to include wherein at least one of the first virtual avatar or the second virtual avatar comprises an entity virtual avatar associated with an entity occupying a virtual building in the virtual cityscape, and the entity virtual avatar is included inside the virtual building associated with the entity, as taught in Ghazanfari, in order to provide the user with a shopping experience similar to the one in the physical world (paragraph 0044).
Claims 6, 12, 13, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Ge (US12400129) in view of Sibai (US20160048908), and further in view of DeLuca (US 8484158).
Regarding claims 6, 12, and 19, Ge in view of Sibai teaches the limitations set forth above. While Ge discloses the sharing of knowledge between two systems incorporating avatars and Sibai teaches the virtual marketplace environment, the reference combination does not expressly disclose:
wherein the knowledge base comprises interaction data from one or more additional virtual avatars operating in one or more additional virtual marketplace environments.
However, DeLuca teaches:
wherein the knowledge base comprises interaction data from one or more additional virtual avatars operating in one or more additional virtual marketplace environments. [Col. 6 lines 27-40] For example, in the course of creating and/or using an avatar 204 in a virtual world 206, a user 202 may provide information about the user 202 and/or about the avatar 204 to the virtual world 206. The information may include a name of the user 202, a name for the avatar 204, contact information for the user 202, etc. Further, the user 202 may also provide information about the user 202 and/or about the avatar 204 to the virtual world 206 via activity of the user in the virtual world 206. An example of an activity of the user 202 in the virtual world 206 is purchasing a product or service via the virtual world 206. For example, the user may purchase footwear or order a magazine subscription via the virtual world 206.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the knowledge sharing between two systems incorporating avatar functionality and the virtual marketplace environment to include wherein the knowledge base comprises interaction data from one or more additional virtual avatars operating in one or more additional virtual marketplace environments, as taught in DeLuca, in order to identify marketing opportunities (Col. 2 lines 50-55).
Regarding claims 13 and 20, Ge in view of Sibai in further view of DeLuca teaches the limitations set forth above. While Ge discloses the sharing of knowledge between two systems incorporating avatars and Sibai teaches the virtual marketplace environment, the reference combination does not expressly disclose:
wherein one or more additional virtual avatars are accessible via an application programming interface associated with another system and one or more additional virtual avatars are implemented in the another system.
However, DeLuca teaches:
wherein one or more additional virtual avatars (avatars 3-4) are accessible via an application programming interface (application 150) associated with another system and one or more additional virtual avatars are implemented in the another system (virtual world N and virtual world connector N) [elements shown in Figure 2].
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the knowledge sharing between two systems incorporating avatar functionality and the virtual marketplace environment to include wherein one or more additional virtual avatars are accessible via an application programming interface associated with another system and one or more additional virtual avatars are implemented in the another system, as taught in DeLuca, in order to identify marketing opportunities (Col. 2 lines 50-55).
Relevant Art Not Cited
“A Survey on Chatbot Implementation in Customer Service Industry through Deep Neural Networks” discusses the state of the art with regard to the types of existing virtual assistant chatbots.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to VICTORIA E. FRUNZI whose telephone number is (571)270-1031. The examiner can normally be reached Monday- Friday 7-4 (EST).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Marissa Thein, can be reached at (571) 272-6764. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
VICTORIA E. FRUNZI
Primary Examiner
Art Unit 3689
/VICTORIA E. FRUNZI/Primary Examiner, Art Unit 3689 12/18/2025