DETAILED ACTION
Acknowledgement
This final office action is in response to the amendment filed on 09/09/2025.
Status of Claims
Claims 1, 3-4, 13-15, 25, and 27-28 have been amended.
Claims 1-36 are now pending.
Response to Arguments
The objection to the claims is withdrawn in light of the amendments.
The 35 U.S.C. 101 “signal per se” rejection is withdrawn in light of claim 13 amendments.
Applicant's arguments filed on 09/09/2025 regarding the 35 U.S.C. 101 and 102 rejections of claims 1-36 have been fully considered. The Applicant argues the following.
(1) As per the 101 rejection, the Applicant argues, in summary, that amended claims 1, 13, and 25 recite a practical application of the alleged abstract idea. The specification describes the invention as providing an improvement, namely a process that enables communication of requests between non-human distributed actors such that the non-human distributed actors are able to perform a particular skill associated with the request without human intervention. Independent claims 1, 13, and 25 include components or steps of the invention that provide the improvement, for example, “mapping the one or more portions of the request…; parsing the request into a subject portion, a verb portion, and input requirement portion…; filtering out the subset of distributed actors...; assigning one or more distributed actors to address the unfulfilled need…” by processing content and functionality for the respective electronic interface.
The Examiner respectfully disagrees. The Examiner submits that the additional elements recited in the amended claims and highlighted in Steps 2A(2) and 2B do not integrate the abstract idea into a practical application because the additional elements neither improve the functioning of a computer beyond its original capacity nor improve another technology or technological field. The claim limitations/steps that the Applicant argues provide the improvement are themselves considered abstract, and, as per MPEP 2106.05, abstract elements cannot furnish the improvement. Providing instructions to a computing device/system to perform a specific function that it is already capable of performing (e.g. executing one or more interactions on a respective electronic interface) is not an improvement in technology. The interaction on a respective electronic interface could simply be to display data in response to a request; the claims do not recite a specific function or interaction that reflects an improvement. Therefore, the 35 U.S.C. 101 rejection is maintained.
(2) As per the 102 rejection, the Applicant argues, in summary, that Horvitz fails to teach, disclose, or even suggest the limitations of amended claim 1.
The Examiner finds the Applicant’s arguments persuasive. Therefore, the 102 rejection has been withdrawn. However, upon further search and consideration, a new ground of rejection under 35 U.S.C. 103 is set forth for claim 1 and, similarly, for claims 13 and 25. See details below.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-36 are rejected under 35 U.S.C. 101 because the claimed invention, “Distributed Actor-Based Information System & Method”, is directed to an abstract idea, specifically Certain Methods of Organizing Human Activity, without significantly more. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements individually or in combination provide mere instructions to implement the abstract idea on a computer.
Step 1: Claims 1-36 are directed to a statutory category, namely a process (claims 1-12), a manufacture (claims 13-24), and a machine (claims 25-36).
Step 2A (1): Claims 1-4, 6-16, 18-28, and 30-36 are directed to an abstract idea of Certain Methods of Organizing Human Activity, based on the following claim limitations: maintaining a group of distributed actors, wherein each of the distributed actors offers at least one skill…; receiving a request from a user concerning an unfulfilled need; mapping one or more portions of the request to one or more skills offered by the group of distributed actors, thus defining one or more skill mappings, wherein mapping the one or more portions of the request to the one or more skills offered by the group of distributed actors includes: parsing the request into a subject portion, a verb portion, an input requirement portion, and an output constraint portion, and filtering out a subset of distributed actors from the group of distributed actors with incompatible constraints based upon, at least in part, the subject portion, the verb portion, the input requirement portion, and the output constraint portion of the request; and assigning one or more distributed actors to address the unfulfilled need based, at least in part, upon the one or more skill mappings, thus defining one or more assigned distributed actors (claims 1, 13, and 25); wherein assigning one or more distributed actors to address the unfulfilled need includes one or more of: immediately assigning to the one or more distributed actors; inquiring on the availability of the one or more distributed actors; and allowing the user to choose the one or more distributed actors from a group of potential distributed actors (claims 2, 14, and 26); wherein receiving a request from a user concerning the unfulfilled need includes: receiving the request from a human distributed actor (claims 3, 15, and 27); wherein receiving a request from a user concerning the unfulfilled need includes: receiving the request from a non-human distributed actor (claims 4, 16, and 28); wherein the one or more assigned distributed actors interact, 
directly or indirectly, with one or more distributed sub-actors to address at least a portion of the unfulfilled need (claims 6, 18, and 30); addressing at least a portion of the unfulfilled need with the at least one skill offered by the one or more assigned distributed actors (claims 7, 19, and 31); wherein addressing at least a portion of the unfulfilled need with the at least one skill offered by the one or more assigned distributed actors includes: generating one or more response portions with the at least one skill offered by the one or more assigned distributed actors (claims 8, 20, and 32); forming a bespoke response to the unfulfilled need based, at least in part, upon the one or more response portions (claims 9, 21, and 33); providing the bespoke response to a party associated with the unfulfilled need (claims 10, 22, and 34); effectuating, in whole or in part, the bespoke response (claims 11, 23, and 35); wherein maintaining a group of distributed actors includes: maintaining a database that defines the group of distributed actors (claims 12, 24, and 36)”. The claims describe a process of maintaining skills for a group of distributed actors (i.e. resources), receiving a request from a user concerning an unfulfilled need (i.e. task), mapping the request to skills offered by the distributed actors (i.e. resources), and assigning distributed actors (i.e. resources) to address the request/unfulfilled need (i.e. task). Maintaining skills of distributed actors (i.e. resources), receiving requests from users, and assigning distributed actors (i.e. resources) to address the requests reflect certain methods of organizing human activity, as the assigning facilitates interaction between a person (i.e. user) that has an unfulfilled need or request and a computer (i.e. resource) that is fulfilling the need or request.
Therefore, these limitations, under the broadest reasonable interpretation, fall within the abstract grouping of Certain Methods of Organizing Human Activity. Certain Methods of Organizing Human Activity includes managing personal behavior or relationships or interactions between people, including social activities, teaching, and following rules or instructions. Certain Methods of Organizing Human Activity can encompass the activity of a single person (e.g. a person following a set of instructions), activities that involve multiple people (e.g. a commercial interaction), and certain activity between a person and a computer (e.g. a method of anonymous loan shopping). Therefore, claims 1-36 recite an abstract idea.
Step 2A (2): This judicial exception is not integrated into a practical application. In particular, claims 1, 5, 12, 13, 17, 24, 25, 29, and 36 recite additional elements of “a computer-implemented method, executed on a computing device (claim 1); wherein each of the distributed actors offers at least one skill, wherein the group of distributed actors are one or more non-human distributed actors that perform a respective skill without human intervention; wherein addressing the unfulfilled need includes performing the at least one skill offered by the one or more assigned non-human distributed actors by executing one or more interactions on a respective electronic interface specific to each assigned non-human distributed actor without human intervention, wherein executing the one or more interactions includes: processing the respective electronic interface specific to each assigned non-human distributed actor by identifying content of the respective electronic interface and functionality of the respective electronic interface; and processing the content and the functionality for the respective electronic interface to perform the one or more interactions on the electronic interface (claims 1, 13, and 25); wherein the group of distributed actors include one or more of: a software platform; a software application; a virtual machine; and a web-based service (claims 5, 17, and 29); database (claims 12, 24, and 36); a computer program product residing on a non-transitory computer readable medium having a plurality of instructions stored thereon which, when executed by a processor, cause the processor to perform operations (claim 13); and a computing system including a processor and memory configured to perform operations (claim 25)”.
These additional elements do not integrate the abstract idea into a practical application because the claims do not recite (a) an improvement to another technology or technical field, (b) an improvement to the functioning of the computer itself, (c) implementation of the abstract idea with, or by use of, a particular machine, (d) a particular transformation or reduction of an article to a different state or thing, or (e) application of the judicial exception in some other meaningful way beyond generally linking the use of the abstract idea to a particular technological environment. These additional elements, evaluated individually and in combination, are viewed as computing devices that are used to perform the abstract idea and communicate results. Limitations that recite mere instructions to implement an abstract idea on a computer, or that merely use a computer as a tool to perform an abstract idea, are not indicative of integration into a practical application (see MPEP 2106.05(f)). Therefore, claims 1-36 do not include, individually or in combination, additional elements that integrate the judicial exception into a practical application and thus are not patent eligible.
Step 2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. Claims 1, 5, 12, 13, 17, 24, 25, 29, and 36 recite additional elements of “a computer-implemented method, executed on a computing device (claim 1); wherein each of the distributed actors offers at least one skill, wherein the group of distributed actors are one or more non-human distributed actors that perform a respective skill without human intervention; wherein addressing the unfulfilled need includes performing the at least one skill offered by the one or more assigned non-human distributed actors by executing one or more interactions on a respective electronic interface specific to each assigned non-human distributed actor without human intervention, wherein executing the one or more interactions includes: processing the respective electronic interface specific to each assigned non-human distributed actor by identifying content of the respective electronic interface and functionality of the respective electronic interface; and processing the content and the functionality for the respective electronic interface to perform the one or more interactions on the electronic interface (claims 1, 13, and 25); wherein the group of distributed actors include one or more of: a software platform; a software application; a virtual machine; and a web-based service (claims 5, 17, and 29); database (claims 12, 24, and 36); a computer program product residing on a non-transitory computer readable medium having a plurality of instructions stored thereon which, when executed by a processor, cause the processor to perform operations (claim 13); and a computing system including a processor and memory configured to perform operations (claim 25)”. These additional elements, evaluated individually and in combination, are viewed as mere instructions to apply or implement the abstract idea on a computer.
Applying an abstract idea on a computer does not integrate a judicial exception into a practical application or provide an inventive concept (see MPEP 2106.05(f)). Therefore, claims 1-36 do not include individual or a combination of additional elements that are sufficient to amount to significantly more than the judicial exception and thus are not patent eligible.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-36 are rejected under 35 U.S.C. 103 as being unpatentable over Horvitz et al. (US 2014/0278634 A1) in view of Murakhovs’ka et al. (US 2023/0086668 A1).
As per claims 1, 13, and 25 (Currently Amended), Horvitz teaches a computer-implemented method, executed on a computing device, comprising (Horvitz e.g. FIG. 4 is a flow diagram summarizing some example steps that may be taken by a geospatial crowdsourcing service or the like when a task is received, which may be in real time, or based upon a scheduled task [0057]. The techniques described herein can be applied to any device. Accordingly, the general purpose remote computer described below in FIG. 5 is but one example of a computing device [0062]. A method implemented at least in part on at least one processor, comprising (claim 1).); Horvitz teaches a computer program product residing on a computer readable medium having a plurality of instructions stored thereon which, when executed by a processor, cause the processor to perform operations comprising (Horvitz e.g. Computer 510 typically includes a variety of computer readable media and can be any available media that can be accessed by computer 510 (Fig. 5 and [0066]). One or more computer-readable media having computer-executable instructions, which when executed on at least one processor perform step (claim 19).); Horvitz teaches a computing system including a processor and memory configured to perform operations comprising (Horvitz e.g. With reference to FIG. 5, an example remote device for implementing one or more embodiments includes a general purpose computing device in the form of a computer 510 [0065]. A system comprising, one or more processors, a memory communicatively coupled to the processor, and a spatiotemporal crowdsourcing configured to execute on the one or more processors from memory… (claim 13).):
Horvitz teaches maintaining a group of distributed actors, wherein each of the distributed actors offers at least one skill, wherein the group of distributed actors are one or more non-human distributed actors that perform a respective skill without human intervention; (Horvitz e.g. The subject matter described herein are directed towards spatiotemporal crowdsourcing technology (e.g., implemented as a service) configured to receive a task that includes task-related criteria [0005]. Actor-related data is accessed to select an actor set needed for accomplishing the task, including selecting one or more actors until the actor set is sufficient to accomplish the task [0007]. The actor set includes at least one human actor and at least one non-human actor (claim 14). FIG. 1 is a block diagram showing example components of an example spatiotemporal crowdsourcing implementation, in the form of a service 102 [0018]. In general, users 104 register as actors (members) in the spatiotemporal crowdsourcing service via a network (e.g., Internet) interface, shown as coupling to a user task preference and ability component 106 [0018]. The preference and qualification information is stored in an actor data store 108, and may include a list of capabilities, competencies and/or experience (e.g., a math tutor for algebra to calculus, five years teaching), price/rate (including overtime), preferences (e.g., evenings OK but not weekends, will work within thirty minutes of a general location), assets (e.g., have a bike or car), calendar data, schedule data and/or possibly other pertinent information. This information may be updated in real time as it changes, either with explicit instructions from the user and/or automatically through sensing and inference [0019]. Users are one type of actor that may be summoned to accomplish a task. Although not explicitly shown in FIG.
1, other example types of actors that may have actor data stored in the data store 108 include non-human participants, such as a sensor (e.g., as a security camera, traffic camera and/or microphone), an automobile equipped with communication capabilities, tagged equipment, and so forth. Such other non-human actors may be summoned as part of accomplishing a task, as described below [0020].)
Horvitz teaches receiving a request from a user concerning an unfulfilled need; (Horvitz e.g. The subject matter described herein are directed towards spatiotemporal crowdsourcing technology (e.g., implemented as a service) configured to receive a task that includes task-related criteria [0005]. A "task" may be a complete, full task, or may be a task that is actually a subtask of a larger task [0016]. FIG. 1 shows a task being received at a planning component 112, in which the task includes any number of criteria. Example criteria include a task deadline, a maximum cost, any reputation requirements, any acquaintance requirements and so on [0024]. FIG. 4 is a flow diagram summarizing some example steps that may be taken by a geospatial crowdsourcing service or the like when a task is received, which may be in real time, or based upon a scheduled task [0057].)
Horvitz teaches mapping one or more portions of the request to one or more skills offered by the group of distributed actors, thus defining one or more skill mappings, wherein mapping the one or more portions of the request to the one or more skills offered by the group of distributed actors includes… (Horvitz e.g. FIG. 1 shows a task being received at a planning component 112, in which the task includes any number of criteria. Example criteria include a task deadline, a maximum cost, any reputation requirements, any acquaintance requirements and so on [0024]. Other criteria may specify a number of workers, skill sets required for the workers, non-human assets needed, and so forth. Basically any task requirement that may be matched against known data regarding actors' preferences and abilities to perform that task may be used as part of the task criteria [0024]. When a task needs to be performed, a subset of one or more actors is summoned by the planning component 112, which in the example of FIG. 1. In general, the planning component 112 uses the task criteria matching module 114 to work with the preference and ability component 106 to match the criteria associated with the task with an actor set ( e.g., one or more users and/or any other actor or actors) [0025].):
Horvitz teaches filtering out a subset of distributed actors from the group of distributed actors with incompatible constraints… (Horvitz e.g. Actor-related data is accessed to select an actor set needed for accomplishing the task, including selecting one or more actors until the actor set is sufficient to accomplish the task [0007]. When a task needs to be performed, a subset of one or more actors is summoned by the planning component 112, which in the example of FIG. 1. In general, the planning component 112 uses the task criteria matching module 114 to work with the preference and ability component 106 to match the criteria associated with the task with an actor set ( e.g., one or more users and/or any other actor or actors) [0025]. Filtering may be used to determine the subset of actors for the actor set, and/or a cost function [0026]. FIG. 4 is a flow diagram summarizing some example steps that may be taken by a geospatial crowdsourcing service or the like when a task is received, which may be in real time, or based upon a scheduled task [0057]. Step 404 represents accessing the actor data store to select actors that meet the criteria [0058]. The actors may be ranked, sorted and so forth, e.g., by reputation and/or cost, or by random or round-robin selection [0058].)
Horvitz teaches assigning one or more distributed actors to address the unfulfilled need based, at least in part, upon the one or more skill mappings, thus defining one or more assigned distributed actors… (Horvitz e.g. The spatiotemporal crowdsourcing service selects an actor set (e.g., one or more human workers and/or entities) for accomplishing the task, including having the task-related criteria and actor-related data used to determine inclusion in the actor set [0005]. There is described receiving a task including task criteria, accessing actor data of one or more actors, including task preference data and task ability data, and selecting an actor set based upon the task criteria and the task preference data and task ability data [0006]. One or more various criteria may be used to select suitable actors to summon [0015]. One or more actors are matched to task-related criteria and summoned to accomplish a task, which may be divided into a set of coordinated tasks (subtasks) [0061].)
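For illustration only (not part of the record), the criteria-matching, filtering, and ranking flow that Horvitz describes ([0025]-[0026], [0058]) can be sketched as follows; the data shapes, names, and ranking key below are hypothetical assumptions chosen to clarify the mapping:

```python
# Illustrative sketch of Horvitz-style actor selection ([0025]-[0026], [0058]):
# match task criteria against stored actor abilities, filter out actors with
# incompatible constraints (missing skills), and rank the remainder, e.g., by
# reputation. All names and data shapes are hypothetical.

def select_actor_set(task, actors):
    """Return actors whose skills cover the task's required skills,
    ranked by reputation (highest first)."""
    required = set(task["required_skills"])
    # Filter out actors that cannot satisfy the task criteria.
    compatible = [a for a in actors if required <= set(a["skills"])]
    # Rank the surviving actors (reputation used as the example key).
    return sorted(compatible, key=lambda a: a["reputation"], reverse=True)

task = {"required_skills": ["translation"]}
actors = [
    {"name": "camera-1", "skills": ["imaging"], "reputation": 0.9},
    {"name": "worker-a", "skills": ["translation", "typing"], "reputation": 0.7},
    {"name": "worker-b", "skills": ["translation"], "reputation": 0.8},
]
selected = select_actor_set(task, actors)
# camera-1 is filtered out; worker-b ranks above worker-a.
```

The sketch mirrors Horvitz's described flow in which filtering determines the actor subset and ranking (e.g., by reputation and/or cost) orders the candidates.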
Horvitz does not explicitly teach the following limitations; however, Murakhovs’ka teaches them:
Murakhovs’ka teaches parsing the request into a subject portion, a verb portion, an input requirement portion, and an output constraint portion, and (Murakhovs’ka e.g. One or more implementations relate to the field of database systems, and more specifically, to automatically assigning metadata to unstructured conversations to support analytics, recommendations and other automations [0002]. The subject matter described herein generally relates to computing systems and methods for automatically mapping conversations to different high-level semantic groups for determining performance metrics or other key performance indicators (KPIs) for a particular semantic group [0021]. FIG. 6 depicts an exemplary representative utterance identification process 600 that may be implemented or otherwise performed by a computing system in connection with the conversation mapping process 500 (e.g., at task 504) to identify the representative utterance associated with a respective conversation and perform additional tasks, functions, and/or operations described herein [0078]. The illustrated utterance identification process 600 extracts or otherwise identifies, from the transcript of a conversation, the subset of utterances that are associated with a particular speaker or source (task 602) [0079]. Thereafter, the representative utterance identification process 600 performs parts of speech tagging on the subset of utterances by the customer before applying one or more natural language processing logic rules to the sequence of tagged customer utterances to identify the earliest utterance in the sequence of tagged customer utterances that is most likely to express the customer's intent (e.g., the contact reason) (tasks 604, 606) [0079]. In this regard, utterances are parsed by applying NLP to identify syntax that consists of a verb followed by a noun or other subject, which is capable of expressing intent [0079]. 
In one implementation, a NLP rules-based algorithm is utilized to extract intent spans from representative utterances using parts of speech tagging to identify discrete combinations of a noun and its associated verb contained within a respective representative utterance [0112]. For example, for a representative utterance of "I want to cancel my order," the potential candidate name of "cancel my order" may be extracted by identifying the verb "cancel" and its associated noun "order" as a potential intent associated with the representative utterance [0112].)
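For illustration only (not part of the record), a minimal sketch of the verb-noun intent extraction cited above from Murakhovs’ka [0112] follows; a real implementation would use an NLP pipeline for parts-of-speech tagging, so the toy word lists and function name here are assumptions:

```python
# Toy sketch of the verb+noun intent extraction in Murakhovs'ka [0112]:
# crudely tag tokens, then pair the first verb with a following noun.
# The tiny word lists stand in for real parts-of-speech tagging.

VERBS = {"cancel", "return", "track", "change"}
NOUNS = {"order", "item", "package", "address"}

def extract_intent(utterance):
    """Return (verb, noun) for the first verb-noun pair, else None."""
    tokens = [t.strip(".,!?").lower() for t in utterance.split()]
    for i, tok in enumerate(tokens):
        if tok in VERBS:
            for later in tokens[i + 1:]:
                if later in NOUNS:
                    return (tok, later)
    return None

# The example from [0112]: "I want to cancel my order"
intent = extract_intent("I want to cancel my order")
# -> ("cancel", "order")
```

The reference's own example, extracting "cancel my order" from the verb "cancel" and its associated noun "order", follows the same pattern.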
Murakhovs’ka teaches filtering…based upon, at least in part, the subject portion, the verb portion, the input requirement portion, and the output constraint portion of the request; and (Murakhovs’ka e.g. Using chat-bots, automated AI systems conduct text-based chat conversations with users, through which users request and receive information. Chat-bots generally provide information to users for predetermined situations and applications [0003]. The subject matter described herein generally relates to computing systems and methods for automatically mapping conversations to different high-level semantic groups for determining performance metrics or other key performance indicators (KPIs) for a particular semantic group [0021]. Additionally, within each semantic group, the constituent conversations are automatically grouped into different clusters of similar conversations (e.g., based on similar semantics, syntax, intents, nouns, verbs, and/or the like), which likewise support determining performance metrics or other KPIs on a per-cluster basis [0021]. The conversations being analyzed are unstructured and free-form using natural language that is not constrained to any particular syntax or ordering of speakers or utterances thereby [0023]. In this regard, an utterance should be understood as a discrete uninterrupted chain of language provided by an individual conversation participant or actor or otherwise associated with a particular source of the content of the utterance, which could be a human user or speaker (e.g., a customer, a sales representative, a customer support representative, a live agent, and/or the like) or an automated actor or speaker (e.g., a "chat-bot" or other automated system) [0023]. 
Thereafter, the representative utterance identification process 600 performs parts of speech tagging on the subset of utterances by the customer before applying one or more natural language processing logic rules to the sequence of tagged customer utterances to identify the earliest utterance in the sequence of tagged customer utterances that is most likely to express the customer's intent (e.g., the contact reason) (tasks 604, 606) [0079]. In some implementations, the cluster groups are analyzed to filter, exclude, or otherwise remove cluster groups exhibiting low quality that are unlikely to be representative of something semantically significant, for example, by eliminating cluster groups having an intra-cluster distance greater than a threshold, an inter-cluster distance less than a threshold, and/or the like [0106]. Thus, the clustering step may identify different sets of cluster groups with associated utterances, for each potential combination of agent speaker (e.g., chat bot or live agent) and desired level of granularity [0107].)
Murakhovs’ka teaches wherein addressing the unfulfilled need includes performing the at least one skill offered by the one or more assigned non-human distributed actors by executing one or more interactions on a respective electronic interface specific to each assigned non-human distributed actor without human intervention, wherein executing the one or more interactions includes: (Murakhovs’ka e.g. Businesses also increasingly interface with customers using different electronic communications channels, including online chats, text messaging, email or other forms of remote support. Artificial intelligence (AI) may also be used to provide information to users via online communications with "chat-bots" or other automated interactive tools [0003]. The subject matter described herein derives business intelligence from unstructured conversational data associated with historical conversations or interactions maintained by a computing platform to facilitate creation of recommendations or automations with respect to subsequent conversations or interactions on the platform [0022]. The conversational interactions between customers and businesses representatives are semantically organized into cohesive contact reason groups with associated KPIs that enable CRM leaders to take action to better solve and support these contact reasons [0028]. Some implementations support a chat messaging interface, which is a graphical element provided by a GUI or other presentation interface that enables a user to communicate with another chat participant [0052]. The chat messaging interface is configured to present user entered communications and communications received by the client device and directed to the user from other chat participants [0052]. FIG. 3 depicts a block diagram of a system 300 for providing browser-based, communication session continuity for rendering conversation content via a messaging application for a browser-based presentation interface [0053]. 
The system 300 includes a client device 302 for operation by a user. The client device 302 may be implemented using a standalone personal computer, a portable computer (e.g., a laptop, a tablet computer, or a handheld computing device), a computer integrated into another device or system (e.g., a "smart" television, a smartphone, or a smartwatch), or any other device or platform including at least one processor 310, a data storage element 312 (or memory), and a user interface 314 to allow a user to interact with the client device 302 [0053]. The user interface 314 may include various human-to-machine interfaces, e.g., a keypad, keys, a keyboard, buttons, switches, knobs, a touchpad, a joystick, a pointing device, a virtual writing tablet, a touch screen, a microphone, or any device, component, or function that enables the user to select options, input information, or otherwise control the operation of the client device 302 [0053]. During typical operation, the client device 302 executes a browser application 320 that presents a GUI display for the browser application, with the browser application 320 being utilized to establish a communication session with the server system 306 to exchange communications between the client device 302 and the server system 306 ( e.g., by a user inputting a network address for the server system 306 via the GUI display of the browser application) [0057]. The GUI display may be realized as a browser tab or browser window that provides a corresponding chat messaging interface or "chat window" through which a user can exchange chat messages with other parties [0057]. Alternatively, the computer system 304 could be configured to support or otherwise provide an automated agent (e.g., a "chat-bot") configured to exchange chat messages with users originating from the computer system 304 or the server system 306 [0057]. 
Chat messages exchanged via the chat messaging interface may include text-based messages that include plain-text words only, and/or rich content messages that include graphical elements, enhanced formatting, interactive functionality, or the like [0057]. The cluster group analysis GUI display includes one or more selectable GUI elements that are selectable by a user to create or otherwise define one or more automated actions to be associated with a particular representative utterance [0027]. The user may select a GUI element to create a recommended response for a live agent to provide to a customer responsive to a subsequent occurrence of that utterance (or a semantically-similar utterance) by the customer, an automated response for a chat bot to provide to a customer responsive to a subsequent occurrence of that utterance (or a semantically-similar utterance) by the customer, and/or the like [0027]. In this manner, one or more automated actions may be created or otherwise defined in association with a particular semantic group, cluster group, representative utterance and/or speaker(s) and subsequently performed or applied in real-time with respect to subsequent conversations that are mapped to that same semantic group, cluster group, representative utterance and/or speaker(s) [0131]. For example, in one or more implementations, the automated action may include a recommended reply to a particular representative utterance for a conversation with a live agent, an automated reply to a particular representative utterance for a conversation with a chat bot, or the like [0132].)
Murakhovs’ka teaches processing the respective electronic interface specific to each assigned non-human distributed actor by identifying content of the respective electronic interface and functionality of the respective electronic interface; and (Murakhovs’ka e.g. When the current conversation is assigned a representative utterance by a customer, client or other end user that matches or is within a threshold similarity to the representative utterance assigned with an automated action (e.g., based on cosine similarity between encoded numerical representations), the server system 306 may automatically initiate the automated action, for example, by providing a graphical representation of a recommended agent response utterance that includes the recommended reply to a live agent at the computer system 304 or configuring the chat bot at the computer system 304 to automatically generate an utterance that includes the automated reply, and/or the like [0132]. In this regard, the server system 306 may perform the steps of identifying a representative utterance associated with a current conversation (e.g., task 504) and assigning the current conversation to a cluster group and/or a semantic group (e.g., tasks 506, 510) in real-time to detect when the structural metadata associated with the current conversation matches one or more triggering criteria for the automated action [0132].)
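The threshold-similarity mechanism Murakhovs’ka describes in [0132] (matching an incoming utterance to a representative utterance via cosine similarity between encoded numerical representations, then triggering the associated automated action) can be illustrated with a minimal sketch. The embedding vectors, action table, threshold value, and function names below are illustrative assumptions for explanatory purposes only; the reference does not disclose a specific encoder or threshold.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length numeric vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical table mapping representative-utterance embeddings to
# automated actions (e.g., an automated chat-bot reply per [0132]).
ACTION_TABLE = [
    {"embedding": [0.9, 0.1, 0.2], "reply": "Please confirm your order number."},
    {"embedding": [0.1, 0.8, 0.5], "reply": "A password-reset link has been sent."},
]

SIMILARITY_THRESHOLD = 0.95  # assumed value, not taken from the reference

def automated_action(utterance_embedding):
    """Return the automated reply whose representative utterance is within
    the similarity threshold of the incoming utterance, if any."""
    best = max(ACTION_TABLE,
               key=lambda row: cosine_similarity(utterance_embedding,
                                                 row["embedding"]))
    if cosine_similarity(utterance_embedding, best["embedding"]) >= SIMILARITY_THRESHOLD:
        return best["reply"]
    return None  # no trigger; the conversation is routed to a live agent
```

An incoming utterance embedding identical to a stored representative utterance (similarity 1.0) triggers the associated reply, while a dissimilar one returns no automated action.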
Murakhovs’ka teaches processing the content and the functionality for the respective electronic interface to perform the one or more interactions on the electronic interface. (Murakhovs’ka e.g. For purposes of explanation, but without limitation, the subject matter may be described herein in the context of a customer relationship management (CRM) system or service, where conversational interactions between customers and business representatives (e.g., a sales representative, a customer support representative, a chat bot or other automated agent, and/or the like) are automatically mapped to different contact reason semantic groups, which contain different clusters (or contact reason subgroups) and constituent conversations associated with that particular contact reason [0028]. By automatically identifying and mapping unstructured conversations to a structured form that supports CRM automations, the subject matter described herein allows CRM leaders to understand what their customers are needing support for, track key KPIs across these issues, and plan and implement automations (such as creating intents and chat bots) using the provided insights around what contact reasons are driving KPIs [0028]. In this manner, one or more automated actions may be created or otherwise defined in association with a particular semantic group, cluster group, representative utterance and/or speaker(s) and subsequently performed or applied in real-time with respect to subsequent conversations that are mapped to that same semantic group, cluster group, representative utterance and/or speaker(s) [0131].
When the current conversation is assigned a representative utterance by a customer, client or other end user that matches or is within a threshold similarity to the representative utterance assigned with an automated action (e.g., based on cosine similarity between encoded numerical representations), the server system 306 may automatically initiate the automated action, for example, by providing a graphical representation of a recommended agent response utterance that includes the recommended reply to a live agent at the computer system 304 or configuring the chat bot at the computer system 304 to automatically generate an utterance that includes the automated reply, and/or the like [0132]. For example, in one or more implementations, the automated action may include a recommended reply to a particular representative utterance for a conversation with a live agent, an automated reply to a particular representative utterance for a conversation with a chat bot, or the like [0132]. In this manner, the semantic content of the utterance provided by a live human agent or chat bot in response to the customer utterance includes or otherwise reflects the recommended reply that is designed, configured or otherwise intended to improve performance or KPIs with respect to the current conversation (e.g., by reducing the conversation duration, improving the likelihood of resolution of a related case, and/or the like) [0132]. FIG. 13 depicts an exemplary automation assistance process 1300 that may be implemented or otherwise performed by a computing system to create automations using structural conversation metadata derived from the conversation mapping process 500 and perform additional tasks, functions, and/or operations described herein [0133].)
The Examiner submits that before the effective filing date, it would have been obvious to one of ordinary skill in the art to combine Horvitz’s crowdsourcing service’s mapping of requests and assignment of non-human distributed actors with Murakhovs’ka’s system and method for analyzing (e.g. parsing) customer conversations with agents to derive intent (e.g. reason for contact) and create automated responses and recommendations on an electronic interface, in order to assist agents, improve KPIs of contact reasons and processes, and improve user experience while reducing the cost and time devoted to recurring common contacts (Murakhovs’ka e.g. [0028]).
As per claims 2, 14, and 26 (Original), Horvitz in view of Murakhovs’ka teach the computer-implemented method of claim 1, the computer program product of claim 13, and the computing system of claim 25, Horvitz teaches wherein assigning one or more distributed actors to address the unfulfilled need includes one or more of: immediately assigning to the one or more distributed actors; inquiring on the availability of the one or more distributed actors; and allowing the user to choose the one or more distributed actors from a group of potential distributed actors (Horvitz e.g. The spatiotemporal crowdsourcing service selects an actor set (e.g., one or more human workers and/or entities) for accomplishing the task, including having the task-related criteria and actor-related data used to determine inclusion in the actor set [0005]. Note that the selection of actors for the actor set may be dynamic, e.g., selection may change as a task progresses [0025]. The planning component 112 operates in conjunction with the other components 106, 114 and 116 to coordinate the summoning of the equipment and personnel to a specified location at a desired time, based upon who is available, when and where, and their pricing, along with any other criteria such as experience, reputation and so forth [0027]. When the summoning is done and the appropriate actors have confirmed their availability, step 414 represents tracking the task completion state, e.g., versus the deadline, as task state information becomes available (Fig. 4 and [0060]).).
As per claims 3, 15, and 27 (Currently Amended), Horvitz in view of Murakhovs’ka teach the computer-implemented method of claim 1, the computer program product of claim 13, and the computing system of claim 25, Horvitz teaches wherein receiving a request from a user concerning the unfulfilled need includes: receiving the request from a human distributed actor (Horvitz e.g. FIG. 4 is a flow diagram summarizing some example steps that may be taken by a geospatial crowdsourcing service or the like when a task is received, which may be in real time, or based upon a scheduled task [0057]. Step 410 evaluates whether the needed distributed actors have confirmed and the summoning is done. Note that if the task criteria cannot be met, step 412 represents notifying the owner of the issue (i.e. human) [0059]. The actor set includes at least one human actor and at least one non-human actor (claim 14).).
As per claims 4, 16, and 28 (Currently Amended), Horvitz in view of Murakhovs’ka teach the computer-implemented method of claim 1, the computer program product of claim 13, and the computing system of claim 25, Horvitz teaches wherein receiving a request from a user concerning an unfulfilled need includes: receiving the request from a non-human distributed actor (Horvitz e.g. FIG. 4 is a flow diagram summarizing some example steps that may be taken by a geospatial crowdsourcing service or the like when a task is received, which may be in real time, or based upon a scheduled task [0057]. Step 410 evaluates whether the needed distributed actors have confirmed and the summoning is done. Note that if the task criteria cannot be met, step 412 represents notifying the owner of the issue (i.e. human) [0059]. The actor set includes at least one human actor and at least one non-human actor (claim 14).).
As per claims 5, 17, and 29 (Original), Horvitz in view of Murakhovs’ka teach the computer-implemented method of claim 1, the computer program product of claim 13, and the computing system of claim 25, Horvitz teaches wherein the group of distributed actors include one or more of: a software platform; a software application; a virtual machine; and a web-based service (Horvitz e.g. Users are one type of actor that may be summoned to accomplish a task. Although not explicitly shown in FIG. 1, other example types of actors that may have actor data stored in the data store 108 include non-human participants, such as a sensor (e.g., a security camera, traffic camera and/or microphone), an automobile equipped with communication capabilities, tagged equipment, and so forth. Such other non-human actors may be summoned as part of accomplishing a task, as described below [0020]. Via contemporary computer-aware connectedness, mobile actors also may provide current state information (e.g., via a mobile device application) such as including current GPS coordinates and velocity at a certain sampling rate, and possibly a destination. A non-human mobile actor may likewise provide such state information, e.g., via GPS coordinates and velocity; a nearby truck may be summoned to help accomplish a task, regardless of who is actually driving the truck [0021]. Embodiments can partly be implemented via an operating system, for use by a developer of services for a device or object, and/or included within application software that operates to perform one or more functional aspects of the various embodiments described herein [0063].).
As per claims 6, 18, and 30 (Original), Horvitz in view of Murakhovs’ka teach the computer-implemented method of claim 1, the computer program product of claim 13, and the computing system of claim 25, Horvitz teaches wherein the one or more assigned distributed actors interact, directly or indirectly, with one or more distributed sub-actors to address at least a portion of the unfulfilled need (Horvitz e.g. Note that the selection of actors for the actor set may be dynamic, e.g., selection may change as a task progresses. If a larger task is broken up into smaller tasks, or subtasks, each subtask may have an actor set selected for that task at whatever time is appropriate including dynamically; for example, a single actor may be selected for a subtask such as part of a package delivery, with a next single actor selected (e.g., dynamically based on proximity and availability) for the next subtask part of the delivery, and so on [0025]. The planning component 112 operates in conjunction with the other components 106, 114 and 116 to coordinate the summoning of the equipment and personnel to a specified location at a desired time, based upon who is available, when and where, and their pricing, along with any other criteria such as experience, reputation and so forth [0027]. One or more actors are matched to task-related criteria and summoned to accomplish a task, which may be divided into a set of coordinated tasks (subtasks) [0061].).
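The dynamic, per-subtask actor selection Horvitz describes in [0025] (a single actor selected for each subtask based on proximity and availability as the task progresses) can be sketched as follows. The actor records, the squared-distance proximity metric, and all field names are illustrative assumptions, not anything disclosed in the reference.

```python
def assign_subtasks(subtasks, actors):
    """Greedily assign each subtask to the nearest available actor.

    subtasks: list of dicts, each with an 'id' and a 'location' (x, y).
    actors:   list of dicts with 'name', 'location' (x, y), and 'available'.
    Selection happens one subtask at a time, so the chosen actor can
    change dynamically as the overall task progresses.
    """
    assignments = {}
    for sub in subtasks:
        candidates = [a for a in actors if a["available"]]
        if not candidates:
            break  # no actor can be summoned for this subtask
        sx, sy = sub["location"]
        nearest = min(candidates,
                      key=lambda a: (a["location"][0] - sx) ** 2
                                    + (a["location"][1] - sy) ** 2)
        assignments[sub["id"]] = nearest["name"]
        nearest["available"] = False  # actor is now engaged on this leg
    return assignments
```

For a package-delivery task split into two legs, the first leg goes to the actor closest to its pickup point, and the second leg is then assigned from the actors that remain available.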
As per claims 7, 19, and 31 (Original), Horvitz in view of Murakhovs’ka teach the computer-implemented method of claim 1, the computer program product of claim 13, and the computing system of claim 25 further comprising: Horvitz teaches addressing at least a portion of the unfulfilled need with the at least one skill offered by the one or more assigned distributed actors (Horvitz e.g. One or more actors are matched to task-related criteria and summoned to accomplish a task, which may be divided into a set of coordinated tasks (subtasks) [0061]. Other criteria may specify a number of workers, skill sets required for the workers, non-human assets needed, and so forth. Basically any task requirement that may be matched against known data regarding actors' preferences and abilities to perform that task may be used as part of the task criteria [0024].).
As per claims 8, 20, and 32 (Original), Horvitz in view of Murakhovs’ka teach the computer-implemented method of claim 7, the computer program product of claim 19, and the computing system of claim 31, Horvitz teaches wherein addressing at least a portion of the unfulfilled need with the at least one skill offered by the one or more assigned distributed actors includes: generating one or more response portions with the at least one skill offered by the one or more assigned distributed actors (Horvitz e.g. FIG. 4 is a flow diagram summarizing some example steps that may be taken by a geospatial crowdsourcing service or the like when a task is received [0057]. If the actor data store is arranged as a database, an optimized query may be made against the data store to obtain the actor set of actors that meet the criteria (Fig. 4 step 402 and [0058]). One or more actors are matched to task-related criteria and summoned to accomplish a task, which may be divided into a set of coordinated tasks (subtasks) [0061]. Step 406 represents summoning the actor to appear at the specified location at the specified time. If the actor does not confirm (non-human actors may have automated confirmation or a person confirm on their behalf) within a confirmation time [0059]. The planning component 112 may specify that to be hired, each worker needs to confirm that he or she will be at the specified location at the specified time with any specified equipment [0028]. Step 410 evaluates whether the needed actors have confirmed and the summoning is done. The process continues until the needed actors have done so [0059]. Note that if the task criteria cannot be met, step 412 represents notifying the owner of the issue (Fig. 4 and [0059]).)
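The criteria-matching query Horvitz describes (Fig. 4 step 402, [0024], [0058]) amounts to filtering an actor store against task requirements such as skill sets and availability, and [0059] notes the candidate pool may exceed the number of actors needed so the query need not be re-run when an actor fails to confirm. The sketch below is an assumed in-memory analogue with hypothetical field names, not the reference's actual data-store query.

```python
def select_candidate_pool(actor_store, criteria, needed, pool_factor=2):
    """Return a candidate pool of actors that meet the task criteria.

    The pool is sized larger than the number of actors needed so that
    the query need not be re-run each time a summoned actor fails to
    confirm within the confirmation time (per [0059]).
    """
    matches = [
        actor for actor in actor_store
        if criteria["skills"].issubset(actor["skills"]) and actor["available"]
    ]
    # Rank by price; pricing is among the selection criteria in [0027].
    matches.sort(key=lambda a: a["price"])
    return matches[: needed * pool_factor]
```

Unavailable actors and actors lacking a required skill are filtered out of the pool, and the cheapest qualifying actors are summoned first.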
As per claims 9, 21, and 33 (Original), Horvitz in view of Murakhovs’ka teach the computer-implemented method of claim 8, the computer program product of claim 20, and the computing system of claim 32 further comprising: Horvitz teaches forming a bespoke response to the unfulfilled need based, at least in part, upon the one or more response portions (Horvitz e.g. Step 406 represents summoning the actor to appear at the specified location at the specified time. If the actor does not confirm (non-human actors may have automated confirmation or a person confirm on their behalf) within a confirmation time. Note that the candidate pool may be larger than the number of actors needed at step 404 so that the query and/or function need not be re-run each time an actor does not confirm in time [0059]. Step 410 evaluates whether the needed actors have confirmed and the summoning is done. The process continues until the needed actors have done so [0059]. The planning component 112 (or another component of the