Prosecution Insights
Last updated: April 19, 2026
Application No. 18/144,018

Distributed Actor-Based Information System and Method

Non-Final OA: §101, §102, §103, §112, §DP
Filed: May 05, 2023
Examiner: MINOR, AYANNA YVETTE
Art Unit: 3624
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Grokit Data Inc.
OA Round: 3 (Non-Final)
Grant Probability: 18% (At Risk)
Projected OA Rounds: 3-4
Estimated Time to Grant: 3y 6m
Grant Probability with Interview: 43%

Examiner Intelligence

Career Allow Rate: 18% (33 granted / 179 resolved; -33.6% vs TC avg)
Interview Lift: +24.7% (resolved cases with interview)
Typical Timeline: 3y 6m avg prosecution; 47 currently pending
Career History: 226 total applications across all art units

Statute-Specific Performance

§101: 37.9% (-2.1% vs TC avg)
§103: 33.6% (-6.4% vs TC avg)
§102: 12.0% (-28.0% vs TC avg)
§112: 14.1% (-25.9% vs TC avg)

Tech Center averages are estimates. Based on career data from 179 resolved cases.

Office Action

DETAILED ACTION

Acknowledgement

This non-final office action is in response to the request for continued examination (RCE) filed on 02/13/2026.

Status of Claims

Claims 4, 17, and 30 have been cancelled. Claims 1, 14, and 27 have been amended. Claims 1-3, 5-16, 18-29, and 31-39 are now pending.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 02/13/2026 has been entered.

Response to Arguments

The double patenting rejection is withdrawn in light of the present claim amendments being distinct from the claims of application 18/143,994. The previous 35 U.S.C. 112(a) rejection is withdrawn in light of the amendments. Applicant's arguments filed on 02/13/2026 regarding the 35 U.S.C. 101 and 103 rejections of the claims have been fully considered. The Applicant argues the following.

(1) As per the 101 rejection, the Applicant argues, in summary, that (i) amended independent claims 1, 14, and 27 do not involve any human activity as part of the Certain Methods of Organizing Human Activity grouping alleged by the Office; rather, the claims manage communications and operations processed by and between non-human distributed actors; and (ii) the amended claims recite a practical application of the alleged abstract idea. The Examiner respectfully disagrees. The Examiner maintains the position that the claims are directed to the abstract grouping of Certain Methods of Organizing Human Activity because the claims describe a process of managing and assigning distributed actor resources to perform skills to address an unfulfilled need, which can be performed by a human.
Although the claims describe these distributed actors as non-human, the claims still recite user involvement with the limitations of "previous content provided by the user", "allowing the user to choose the one or more non-human distributed actors from a group of potential distributed actors", and "providing the bespoke response to a party associated with the unfulfilled need", thus reflecting interaction between a human and a computer-based component. The claims do not recite specific skills performed by the distributed actors that would exclude a human from performing them. The Examiner also maintains the position that the additional elements recited in the amended claims and highlighted in Step 2A (2) do not integrate the abstract idea into a practical application because the additional elements do not improve the functioning of a computer beyond its original capacity or improve upon another technology or technological component. The additional elements are viewed as computing components/devices that are used to perform the abstract idea stated above. Limitations that recite mere instructions to implement an abstract idea on a computer, or that merely use a computer as a tool to perform an abstract idea, are not indicative of integration into a practical application (see MPEP 2106.05(f)). The Applicant has not presented any new arguments to counter this position. Therefore, the 35 U.S.C. 101 rejection is maintained.

(2) As per the 103 rejections, the Applicant argues, in summary, that the combination of Horvitz and Lynch does not teach the limitations of amended claims 1, 14, and 27. The Examiner finds the Applicant's arguments persuasive. Therefore, the previous 103 rejection has been withdrawn. However, upon further search and consideration, new grounds of rejection under 102 and 103 have been made for the claims. See details below.
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 09/18/2025, 12/02/2025, 02/18/2026, 02/23/2026, and 03/17/2026 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

Claims 1-3, 5-16, 18-29, and 31-39 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Claims 1, 14, and 27 recite the limitation of “…wherein the previous content includes a description of a previous interaction between a non-human distributed actor and the first non-human distributed actor…”, which is not supported by the Applicant's specification.
The Applicant's specification in paragraphs [00371], [00381], [00382], [00383], [00384], [00390], [00391], [00392], and [00428] states that the previous content is provided by a user, may be associated with a distributed actor, and may include user conversations. The specification does not state that the previous content includes a previous interaction between non-human distributed actors. Therefore, claims 1, 14, and 27 contain new matter and are rejected under 35 U.S.C. 112(a). Dependent claims 2-3, 5-13, 15-16, 18-26, 28-29, and 31-39 are also rejected under 35 U.S.C. 112(a).

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

Claims 1-3, 5-16, 18-29, and 31-39 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claims 1-2, 14-15, and 27-28 recite the limitations of “providing a second level of access to the previous content provided by the user” and “allowing the user to choose the one or more non-human distributed actors from a group of potential distributed actors”. There is insufficient antecedent basis for “the user” because the preceding claim limitations do not recite “a user”. Therefore, claims 1-2, 14-15, and 27-28 are considered indefinite and are rejected under 35 U.S.C. 112(b). Dependent claims 3, 5-13, 16, 18-26, 29, and 31-39 are also rejected under 35 U.S.C. 112(b). For examination purposes, “the user” will be interpreted as “a user”.

Claims 1, 14, and 27 include the limitation of “…at least one skill offered by the one or more non-human distributed actors”.
The preceding claim limitations refer to “a first non-human distributed actor” and “one or more second non-human distributed actors”, so it is not clear whether “the one or more non-human distributed actors” refers to the first or the second non-human distributed actors. Therefore, claims 1, 14, and 27 are considered indefinite and are rejected under 35 U.S.C. 112(b). Dependent claims 2-3, 5-13, 15-16, 18-26, 28-29, and 31-39 are also rejected under 35 U.S.C. 112(b).

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-3, 5-16, 18-29, and 31-39 are rejected under 35 U.S.C. 101 because the claimed invention, “Distributed Actor-Based Information System & Method”, is directed to an abstract idea, specifically Certain Methods of Organizing Human Activity, without significantly more. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements, individually or in combination, provide mere instructions to implement the abstract idea on a computer.

Step 1: Claims 1-3, 5-16, 18-29, and 31-39 are directed to a statutory category, namely a process (claims 1-3 and 5-13), a manufacture (claims 14-16 and 18-26), and a machine (claims 27-29 and 31-39).
Step 2A (1): Claims 1-3, 6-16, 19-29, and 32-39 are directed to an abstract idea of Certain Methods of Organizing Human Activity, based on the following claim limitations: monitoring an environment to detect the existence of an unfulfilled need associated with a first non-human distributed actor; assigning one or more second non-human distributed actors… to address the unfulfilled need based, at least in part, upon the at least one skill offered by the one or more non-human distributed actors, thus defining one or more assigned distributed actors; providing a first level of access to previous content…to the one or more assigned distributed actors, wherein the previous content includes a description of a previous interaction between a non-human distributed actor and the first non-human distributed actor…following performance of a corresponding skill offered by the non-human distributed actor; assigning, via the one or more assigned distributed actors, at least a portion of the unfulfilled need to one or more additional non-human distributed actors; providing a second level of access to the previous content provided by the user to the one or more additional non-human distributed actors, wherein the reduced access is less than the first level of access provided to the one or more assigned distributed actors and concern one or more corresponding skills offered by the one or more additional non-human distributed actors; determining that the one or more additional non-human distributed actors offer their respective skills through one or more non-human distributed sub-actors; assigning, via the one or more assigned distributed actors, at least a portion of the unfulfilled need to one or more non-human distributed sub-actors, thus defining one or more non-human assigned distributed sub-actors; and addressing the unfulfilled need by performing the respective skills of the one or more assigned distributed actors and the one or more additional non-human distributed actors, 
wherein performing the respective skills of the one or more additional non-human distributed actors includes performing the respective skills of the one or more assigned non-human distributed sub-actors. (claims 1, 14, and 27); wherein assigning the one or more non-human distributed actors to address the unfulfilled need includes one or more of: immediately assigning to the one or more non-human distributed actors; inquiring on the availability of the one or more non-human distributed actors; and allowing the user to choose the one or more non-human distributed actors from a group of potential distributed actors (claims 2, 15, and 28); wherein monitoring an environment to detect the existence of an unfulfilled need includes: detecting the existence of a request (claims 3, 16, and 29); wherein the one or more distributed sub-actors address at least a portion of the unfulfilled need (claims 6, 19, and 32); addressing at least a portion of the unfulfilled need with the at least one skill offered by the one or more assigned distributed actors (claims 7, 20, and 33); wherein addressing at least a portion of the unfulfilled need with the at least one skill offered by the one or more assigned distributed actors includes: generating one or more response portions with the at least one skill offered by the one or more assigned distributed actors (claims 8, 21, and 34); forming a bespoke response to the unfulfilled need based, at least in part, upon the one or more response portions (claims 9, 22, and 35); providing the bespoke response to a party associated with the unfulfilled need (claims 10, 23, and 36); effectuating, in whole or in part, the bespoke response (claims 11, 24, and 37); maintaining a group of distributed actors, wherein each of the distributed actors offers at least one skill (claims 12 and 38); wherein maintaining a group of distributed actors includes: maintaining the distributed database with the group of distributed actors (claims 13, 25, and 39); and
wherein maintaining a group of distributed actors includes: maintaining a database that defines the group of distributed actors (claim 26). These claims describe a process of managing and assigning distributed actor resources to perform skills to address an unfulfilled need, which can be performed by a human. Although the claims describe these distributed actors as non-human, the claims still recite user involvement with the limitations of "previous content provided by the user", "allowing the user to choose the one or more non-human distributed actors from a group of potential distributed actors", and “providing the bespoke response to a party associated with the unfulfilled need”, thus reflecting interaction between a human and a computer-based component. Also, the claims do not recite specific skills performed by the distributed actors that would exclude a human from performing them. This interpretation is supported by the Applicant's specification, which refers to skills performed by the distributed actor as travel skills, temperature skills, traffic skills, etc. These types of skills can be performed by a human and thus may reflect mere automation of a manual process. Therefore, these limitations, under the broadest reasonable interpretation, fall within the abstract grouping of Certain Methods of Organizing Human Activity. Certain Methods of Organizing Human Activity can encompass the activity of a single person (e.g., a person following a set of instructions), activity that involves multiple people (e.g., a commercial interaction), and certain activity between a person and a computer (e.g., a method of anonymous loan shopping). Therefore, claims 1-3, 5-16, 18-29, and 31-39 are directed to an abstract idea and are not patent eligible.

Step 2A (2): This judicial exception is not integrated into a practical application.
In particular, claims 1, 5, 13, 14, 18, 25, 26, 27, 31, and 39 recite additional elements of “a computer-implemented method, executed on a computing device; a distributed database; wherein the group of distributed actors include one or more of: a software platform; a software application; a virtual machine; and a web-based service; a database; a computer program product residing on a computer readable medium having a plurality of instructions stored thereon which, when executed by a processor, cause the processor to perform operations; and a computing system including a processor and memory configured to perform operations”. These additional elements do not integrate the abstract idea into a practical application because the claims do not recite (a) an improvement to another technology or technical field, (b) an improvement to the functioning of the computer itself, (c) implementation of the abstract idea with or by use of a particular machine, (d) a particular transformation or reduction of an article, or (e) application of the judicial exception in some other meaningful way beyond generally linking the use of an abstract idea to a particular technological environment. These additional elements, evaluated individually and in combination, are viewed as computing components/devices that are used to perform the abstract idea stated above. Limitations that recite mere instructions to implement an abstract idea on a computer, or that merely use a computer as a tool to perform an abstract idea, are not indicative of integration into a practical application (see MPEP 2106.05(f)). Therefore, claims 1-3, 5-16, 18-29, and 31-39 do not include, individually or in combination, additional elements that integrate the judicial exception into a practical application, and thus are not patent eligible.

Step 2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Claims 1, 5, 13, 14, 18, 25, 26, 27, 31, and 39 recite additional elements of “a computer-implemented method, executed on a computing device; a distributed database; wherein the group of distributed actors include one or more of: a software platform; a software application; a virtual machine; and a web-based service; a database; a computer program product residing on a computer readable medium having a plurality of instructions stored thereon which, when executed by a processor, cause the processor to perform operations; and a computing system including a processor and memory configured to perform operations”. These additional elements, evaluated individually and in combination, are viewed as mere instructions to apply or implement the abstract idea on a computer. Applying an abstract idea on a computer does not integrate a judicial exception into a practical application or provide an inventive concept (see MPEP 2106.05(f)). Therefore, claims 1-3, 5-16, 18-29, and 31-39 do not include, individually or in combination, additional elements that are sufficient to amount to significantly more than the judicial exception, and thus are not patent eligible.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-3, 5-11, 14-16, 18-24, 27-29, and 31-37 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Lynch et al. (US 2014/0164317 A1).
As per claims 1, 14, and 27 (Currently Amended), Lynch teaches a computer-implemented method, executed on a computing device, comprising; a computer program product residing on a non-transitory computer readable medium having a plurality of instructions stored thereon which, when executed by a processor, cause the processor to perform operations comprising; a computing system including a processor and memory configured to perform operations comprising: (Lynch e.g. Systems, methods, and apparatus for use with at least one virtual agent (Abstract). In some further embodiments, at least one computer-readable medium is provided, having encoded thereon instructions that, when executed by at least one processor, perform a method for use in connection with at least one virtual agent,...[0006]. FIG. 4 shows an illustrative process that may be used by a virtual agent to formulate a task to be performed and/or to perform the task, in accordance with some embodiments of the present disclosure [0011]. FIG. 6 shows an illustrative system in which multiple virtual agents interact with each other in formulating a task to be performed and/or in performing the task, in accordance with some embodiments of the present disclosure [0013]. FIG. 1 shows an illustrative system 100 in which the concepts disclosed herein may be implemented [0068].) Lynch teaches monitoring an environment to detect an existence of an unfulfilled need associated with a first non-human distributed actor; (Lynch e.g. Some electronic devices such as smartphones and tablet computers include applications known as virtual agents [0001]. Some virtual agents are programmed to assist a user in performing various tasks. For example, a virtual agent may be programmed to send electronic messages, make appointments, place phone calls, and get directions [0002]. A virtual agent may be invoked in any suitable manner to perform a task for one or more persons. 
In accordance with some embodiments, a process may monitor a conversation taking place over a messaging application and listen for a “trigger,” which may be a word or phrase designated for invoking a virtual agent [0029]. The process that monitors the conversation may be the messaging application itself, or some other process that is given access to one or more portions of the conversation content in some suitable manner. In alternative embodiments, the process may execute on a server, such as a server handling the communication traffic associated with the messaging application, or a separate server to which one or more portions of the conversation content is forwarded [0030]. In some further embodiments, a process may intercept user input to a messaging application and determine whether the user input includes a trigger that is designated for invoking a virtual agent. For example, the process may be programmed to intercept input from various types of input devices (e.g., keyboard, mouse, touchscreen, hardware buttons, etc.) on a device used by a conversation participant to detect the designated trigger [0031].) Lynch teaches assigning one or more second non-human distributed actors from a distributed database to address the unfulfilled need based, at least in part, upon at least one skill offered by the one or more non-human distributed actors, thus defining one or more assigned distributed actors; (Lynch e.g. Upon detecting the designated trigger (which may be in the form of a keystroke, mouse click, touchscreen gesture, button press, speaking or typing a trigger word or phrase, etc., or any suitable combination thereof), the process may invoke the virtual agent, for example, by injecting the virtual agent into a conversation taking place over the messaging application [0031]. Invocation of a virtual agent from a multiparty conversation may cause the virtual agent to be injected into the conversation as an additional participant. 
For example, the virtual agent may be asked to make a recommendation for the group (e.g., for a restaurant, shop, movie, etc.) [0021]. In completing such tasks, the virtual agent may interact with other applications (e.g., an email client) and may search for information either locally (e.g., from a user's electronic address book) or via one or more networks (e.g., from the World Wide Web, or the Web) [0002]. In the example illustrated in FIG. 1, the electronic device 110 also includes a virtual agent 124. The virtual agent 124 may be programmed to perform any of the functionalities described herein. For example, the virtual agent may be programmed to assist a user in performing any of numerous tasks (e.g., sending messages, placing calls, launching applications, accessing information from the Web, etc.). In performing a task, the virtual agent 124 may interact with the user 102 via the user interface(s) 114. The virtual agent 124 may also interact with the operating system 116 and/or one or more of the application(s) 118, access the user data 120, and/or obtain information from a sensor such as the location sensor 122 [0079]. In some embodiments, the virtual agent 124 may be implemented as an application that resides locally on the electronic device 110. In some further embodiments, the virtual agent may be distributed and may execute partially on the device 110 and partially on one or more remote computers [0080]. In some embodiments, the virtual agent may be invoked on a server (e.g., one or more of the server(s) 170 shown in FIG. 1) that is remote from both users [0093]. FIG. 6 shows an illustrative system 600 in which multiple virtual agents interact with each other in formulating a task to be performed and/or in performing the task for a group of one or more users. These virtual agents may execute on different devices (e.g., electronic devices 110A and 110B, respectively) associated with the respective users [0173].)
Lynch teaches providing a first level of access to previous content persisted within the distributed database to the one or more assigned distributed actors, wherein the previous content includes a description of a previous interaction between a non-human distributed actor and the first non-human distributed actor that is persisted in the distributed database following performance of a corresponding skill offered by the non-human distributed actor; (Lynch e.g. Systems, methods, and apparatus for use with at least one virtual agent. In some embodiments, at least one processor is programmed to store a receipt for an interaction between the at least one virtual agent and one or more users, wherein the receipt comprises at least some information provided by the one or more users to the at least one virtual agent during the interaction (Abstract). It should be appreciated that the virtual agent may be programmed to analyze any suitable types of records of previous interactions, such as a full discussion thread between the virtual agent and one or more users, or an abridged version containing the virtual agent's previous recommendation and/or one or more pieces of salient information [0041]. The virtual agent may access information regarding one or more relevant persons in any suitable way. The virtual agent may be programmed to obtain such information from one or more other sources. For example, in some embodiments, the virtual agent may be programmed to access information stored locally on a user's device, such as scheduling and contact information stored by a calendar application, user preference information stored by the virtual agent or some other application, web browsing history, etc. [0047]. In some embodiments, the virtual agent may be further programmed to access information from a remote device via one or more networks [0047]. In accordance with some embodiments, a virtual agent may be programmed to maintain a profile for a user. 
The profile may store information that may be used by the virtual agent in interactions with the user. Any suitable type of information may be stored, such as information derived from the virtual agent's prior interactions with the user (e.g., preferences expressed by the user, decisions made by the user, information requested by the user to make certain types of decisions, etc.), information collected from a third party service provider, or any other information that may be useful to the virtual agent in formulating a task to be performed for the user or in performing the task [0048]. In accordance with some embodiments, a record may be stored for an interaction between a virtual agent and one or more users [0051]. Having a record of the prior interaction may facilitate the virtual agent making a recommendation. For instance, in some embodiments, the user may modify the record of the previous interaction (e.g., by adding, modifying, and/or removing information) and provide the modified record to the virtual agent to request a new recommendation, without having to recreate the interaction or otherwise manually input all information desired to be provided to the virtual agent [0051]. A record of an interaction between a single user and a virtual agent and/or a record of a virtual agent action for a single user may be stored [0053]. FIG. 5 shows an illustrative data store 500 (e.g., a database or some other suitable data store) for storing receipts for virtual agent interactions [0161].) Lynch teaches assigning, via the one or more assigned distributed actors, at least a portion of the unfulfilled need to one or more additional non-human distributed actors; (Lynch e.g. FIG. 6 shows an illustrative system in which multiple virtual agents interact with each other in formulating a task to be performed and/or in performing the task, in accordance with some embodiments of the present disclosure [0013]. 
In some further embodiments, a virtual agent may be invoked on a device in response to input received from another device. For example, a first device having virtual agent capability may receive from a second device a communication and invoke a virtual agent upon detecting a designated trigger in the communication. The communication may be received via a messaging application (e.g., SMS, IM, email, voice chat, etc.), via telephone, or in any other suitable way. In this manner, even if the second device does not have virtual agent capability, a user of the second device may be able to take advantage of the virtual agent capability of the first device [0032].) Lynch teaches providing a second level of access to the previous content provided by the user to the one or more additional non-human distributed actors, wherein the reduced access is less than the first level of access provided to the one or more assigned distributed actors and concern one or more corresponding skills offered by the one or more additional non-human distributed actors; (Lynch e.g. Multiple virtual agents may interact with each other in formulating a task to be performed and/or in performing the task. For instance, in some embodiments, each virtual agent may be associated with a different user in the group and may execute on a different device associated with the respective user. In this manner, each virtual agent may have access to various types of information regarding the respective user, such as contact information (e.g., physical addresses, phone numbers, email and/or other virtual addresses, etc.), location information (e.g., present location, recently visited locations, e.g., as determined based on a threshold length of time, frequently visited locations, e.g., as determined by a threshold number of visits during a certain time interval, etc.), preference information (e.g., gleaned from activity histories, reviews, etc.), and/or any other suitable information [0061]. 
In some embodiments, the virtual agents may be programmed to share information with each other within constraints set by the respective users. Such constraints may be established for privacy reasons or any other reason. For example, a user may wish to share different types of information with different groups of people. In some embodiments, the user may make certain information (e.g., preference and/or location information) available to a group only if all members of the group belong to a trusted circle of friends, or by applying any other desired constraint [0062]. For instance, in making a recommendation, the virtual agents may be programmed to negotiate with each other to reach a compromise based on the respective users' preferences and/or constraints. In conducting such a negotiation, a virtual agent may make a proposal to other virtual agents, or accept or reject a proposal made by another virtual agent, with or without divulging to the other virtual agents the underlying information used by the virtual agent to make, accept, or reject the proposal [0063].) Lynch teaches determining that the one or more additional non-human distributed actors offer their respective skills through one or more non-human distributed sub-actors; assigning, via the one or more assigned distributed actors, at least a portion of the unfulfilled need to one or more non-human distributed sub-actors, thus defining one or more non-human assigned distributed sub-actors; and addressing the unfulfilled need by performing the respective skills of the one or more assigned distributed actors and the one or more additional non-human distributed actors, wherein performing the respective skills of the one or more additional non-human distributed actors includes performing the respective skills of the one or more assigned non-human distributed sub-actors. (Lynch e.g. Multiple virtual agents may interact with each other in formulating a task to be performed and/or in performing the task.
For instance, in some embodiments, each virtual agent may be associated with a different user in the group and may execute on a different device associated with the respective user. In this manner, each virtual agent may have access to various types of information regarding the respective user, such as contact information (e.g., physical addresses, phone numbers, email and/or other virtual addresses, etc.), location information (e.g., present location, recently visited locations, e.g., as determined based on a threshold length of time, frequently visited locations, e.g., as determined by a threshold number of visits during a certain time interval, etc.), preference information (e.g., gleaned from activity histories, reviews, etc.), and/or any other suitable information [0061]. In some embodiments, the virtual agents may be programmed to share information with each other within constraints set by the respective users [0062]. In some further embodiments, the virtual agents may be programmed to collaborate with each other in formulating a task to be performed and/or in performing the task, regardless of how much information the virtual agents share with each other. For instance, in making a recommendation, the virtual agents may be programmed to negotiate with each other to reach a compromise based on the respective users' preferences and/or constraints. In conducting such a negotiation, a virtual agent may make a proposal to other virtual agents, or accept or reject a proposal made by another virtual agent, with or without divulging to the other virtual agents the underlying information used by the virtual agent to make, accept, or reject the proposal [0063]. In some further embodiments, the virtual agents may be programmed to collaborate with each other in formulating a task to be performed and/or in performing the task, regardless of whether the task arose from a multiparty conversation. 
For instance, in some embodiments, a virtual agent associated with a first user may (e.g., upon the first user's request) obtain information regarding a second user from a virtual agent associated with the second user. Any suitable types of information may be obtained in this manner. As one non-limiting example, the virtual agent associated with the first user may request from the virtual agent associated with the second user location and/or ETA information regarding the second user, even if neither virtual agent has assisted in arranging the meeting between the first and second users [0064]. As one non-limiting example, a virtual agent running on a user device may interact with a virtual agent running on a server (e.g., in the cloud), for example, by forwarding information to and receiving a recommendation from the server-side virtual agent. The server-side virtual agent may interact with a single client-side virtual agent (e.g., when making a recommendation for a single user) or multiple client-side virtual agents (e.g., when making a recommendation for multiple users), as aspects of the present disclosure relating to multiple virtual agents collaborating with each other are not limited to any particular arrangement among the virtual agents [0065].) As per claims 2, 15, and 28 (Previously Presented), Lynch teaches the computer-implemented method of claim 1, the computer program product of claim 14, and the computing system of claim 27, Lynch teaches wherein assigning the one or more non-human distributed actors to address the unfulfilled need includes one or more of: immediately assigning to the one or more distributed actors; inquiring on the availability of the one or more distributed actors; and allowing the user to choose the one or more distributed actors from a group of potential distributed actors. (Lynch e.g. 
Upon detecting the designated trigger (which may be in the form of a keystroke, mouse click, touchscreen gesture, button press, speaking or typing a trigger word or phrase, etc., or any suitable combination thereof), the process may invoke the virtual agent, for example, by injecting the virtual agent into a conversation taking place over the messaging application [0031]. Once invoked, the virtual agent may inject itself into the conversation to present the requested information and/or recommendation to the participants [0022]. In some further embodiments, a virtual agent may be invoked on a device in response to input received from another device. For example, a first device having virtual agent capability may receive from a second device a communication and invoke a virtual agent upon detecting a designated trigger in the communication. The communication may be received via a messaging application (e.g., SMS, IM, email, voice chat, etc.), via telephone, or in any other suitable way. In this manner, even if the second device does not have virtual agent capability, a user of the second device may be able to take advantage of the virtual agent capability of the first device [0032].) As per claims 3, 16, and 29 (Original), Lynch teaches the computer-implemented method of claim 1, the computer program product of claim 14, and the computing system of claim 27 Lynch teaches wherein monitoring an environment to detect the existence of an unfulfilled need includes: detecting the existence of a request. (Lynch e.g. A virtual agent may be invoked in any suitable manner to perform a task for one or more persons. In accordance with some embodiments, a process may monitor a conversation taking place over a messaging application and listen for a “trigger,” which may be a word or phrase designated for invoking a virtual agent [0029]. 
Upon detecting the designated trigger (which may be in the form of a keystroke, mouse click, touchscreen gesture, button press, speaking or typing a trigger word or phrase, etc., or any suitable combination thereof), the process may invoke the virtual agent, for example, by injecting the virtual agent into a conversation taking place over the messaging application [0031]. A user (who may or may not be a participant in a conversation) may invoke the virtual agent to gather information and/or make a recommendation for multiple participants in the conversation. Once invoked, the virtual agent may inject itself into the conversation to present the requested information and/or recommendation to the participants [0022].) As per claims 5, 18, and 31 (Original), Lynch teaches the computer-implemented method of claim 1, the computer program product of claim 14, and the computing system of claim 27, Lynch teaches wherein the group of distributed actors include one or more of: a software platform; a software application; a virtual machine; and a web-based service. (Lynch e.g. Some electronic devices such as smartphones and tablet computers include applications known as virtual agents [0001]. Some virtual agents are programmed to assist a user in performing various tasks. For example, a virtual agent may be programmed to send electronic messages, make appointments, place phone calls, and get directions [0002]. In the example illustrated in FIG. 1, the electronic device 110 also includes a virtual agent 124. The virtual agent 124 may be programmed to perform any of the functionalities described herein. For example, the virtual agent may be programmed to assist a user in performing any of numerous tasks (e.g., sending messages, placing calls, launching applications, accessing information from the Web, etc.) [0079]. The virtual agent 124 may be implemented as an application that resides locally on the electronic device 110. 
In other embodiments, the virtual agent 124 may execute on one or more remote computers (e.g., the server(s) 170) and may be accessible from the electronic device 110 via a web interface, a remote access protocol, or some other suitable technology. In some further embodiments, the virtual agent may be distributed and may execute partially on the device 110 and partially on one or more remote computers [0080].) As per claims 6, 19, and 32 (Original), Lynch teaches the computer-implemented method of claim 1, the computer program product of claim 14, and the computing system of claim 27, Lynch teaches wherein the one or more assigned distributed sub-actors address at least a portion of the unfulfilled need. (Lynch e.g. Some virtual agents are programmed to assist a user in performing various tasks. For example, a virtual agent may be programmed to send electronic messages, make appointments, place phone calls, and get directions [0002]. In completing such tasks, the virtual agent may interact with other applications (e.g., an email client) and may search for information either locally (e.g., from a user's electronic address book) or via one or more networks (e.g., from the World Wide Web, or the Web) [0002]. Multiple virtual agents may interact with each other in formulating a task to be performed and/or in performing the task [0061]. In accordance with some embodiments, multiple virtual agents running on different devices may interact with each other in formulating a task to be performed and/or in performing the task, irrespective of whether the task is performed for a single user or for multiple users. As one non-limiting example, a virtual agent running on a user device may interact with a virtual agent running on a server (e.g., in the cloud), for example, by forwarding information to and receiving a recommendation from the server-side virtual agent. 
The server-side virtual agent may interact with a single client-side virtual agent (e.g., when making a recommendation for a single user) or multiple client-side virtual agents (e.g., when making a recommendation for multiple users), as aspects of the present disclosure relating to multiple virtual agents collaborating with each other are not limited to any particular arrangement among the virtual agents [0065]. The virtual agent 124 may also interact with the operating system 116 and/or one or more of the application(s) 118, access the user data 120, and/or obtain information from a sensor such as the location sensor 122 [0079].) As per claims 7, 20, and 33 (Original), Lynch teaches the computer-implemented method of claim 1, the computer program product of claim 14, and the computing system of claim 27, Lynch teaches further comprising: addressing at least a portion of the unfulfilled need with the at least one skill offered by the one or more assigned distributed actors. (Lynch e.g. Some virtual agents are programmed to assist a user in performing various tasks. For example, a virtual agent may be programmed to send electronic messages, make appointments, place phone calls, and get directions [0002]. In completing such tasks, the virtual agent may interact with other applications (e.g., an email client) and may search for information either locally (e.g., from a user's electronic address book) or via one or more networks (e.g., from the World Wide Web, or the Web) [0002]. Multiple virtual agents may interact with each other in formulating a task to be performed and/or in performing the task [0061]. In the example illustrated in FIG. 1, the electronic device 110 also includes a virtual agent 124. The virtual agent 124 may be programmed to perform any of the functionalities described herein. 
For example, the virtual agent may be programmed to assist a user in performing any of numerous tasks (e.g., sending messages, placing calls, launching applications, accessing information from the Web, etc.). In performing a task, the virtual agent 124 may interact with the user 102 via the user interface(s) 114. The virtual agent 124 may also interact with the operating system 116 and/or one or more of the application(s) 118, access the user data 120, and/or obtain information from a sensor such as the location sensor 122. [0079].) As per claims 8, 21, and 34 (Original), Lynch teaches the computer-implemented method of claim 7, the computer program product of claim 20, and the computing system of claim 33, Lynch teaches wherein addressing at least a portion of the unfulfilled need with the at least one skill offered by the one or more assigned distributed actors includes: generating one or more response portions with the at least one skill offered by the one or more assigned distributed actors. (Lynch e.g. Some virtual agents are programmed to assist a user in performing various tasks. For example, a virtual agent may be programmed to send electronic messages, make appointments, place phone calls, and get directions [0002]. Invocation of a virtual agent from a multiparty conversation may cause the virtual agent to be injected into the conversation as an additional participant. For example, the virtual agent may be asked to make a recommendation for the group (e.g., for a restaurant, shop, movie, etc.) [0021]. A user (who may or may not be a participant in a conversation) may invoke the virtual agent to gather information and/or make a recommendation for multiple participants in the conversation. Once invoked, the virtual agent may inject itself into the conversation to present the requested information and/or recommendation to the participants [0022].) 
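The trigger-based invocation Lynch describes in paragraphs [0029]-[0032], cited in the claim mapping above, amounts to scanning incoming messages for a designated word or phrase and, on a match, injecting the virtual agent into the conversation as an additional participant. The following is a minimal illustrative sketch of that mechanism; every name here (`TRIGGER_PHRASES`, `VirtualAgent`, `monitor`, the reply text) is a hypothetical stand-in, not taken from the reference.

```python
# Illustrative sketch (hypothetical names): monitor a conversation for a
# designated trigger phrase and inject a virtual agent on a match [0029]-[0032].
TRIGGER_PHRASES = {"@agent", "hey agent"}  # assumed trigger words

class VirtualAgent:
    """Stand-in for Lynch's virtual agent."""
    def handle(self, conversation, message):
        # Once invoked, the agent joins the conversation as a participant
        # and presents requested information or a recommendation [0022].
        conversation.append(("agent", f"How can I help with: {message!r}?"))

def monitor(conversation, agent):
    """Listen for a trigger in the latest message and invoke the agent."""
    _sender, text = conversation[-1]
    if any(t in text.lower() for t in TRIGGER_PHRASES):
        agent.handle(conversation, text)

conversation = [("alice", "Where should we eat?")]
monitor(conversation, VirtualAgent())  # no trigger: nothing happens
conversation.append(("bob", "hey agent, find us a restaurant"))
monitor(conversation, VirtualAgent())  # trigger detected: agent injected
```

In this reading, the trigger detector and the agent are separate components, which is consistent with Lynch's point that the monitoring process, not the agent itself, listens for the trigger.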
As per claims 9, 22, and 35 (Original), Lynch teaches the computer-implemented method of claim 8, the computer program product of claim 21, and the computing system of claim 34, Lynch teaches further comprising: forming a bespoke response to the unfulfilled need based, at least in part, upon the one or more response portions. (Lynch e.g. A user (who may or may not be a participant in a conversation) may invoke the virtual agent to gather information and/or make a recommendation for multiple participants in the conversation. Once invoked, the virtual agent may inject itself into the conversation to present the requested information and/or recommendation to the participants [0022]. In some embodiments, the virtual agent may be injected into the conversation to interact with one or more participants, for example, to prompt for additional information to further define the requested task and/or to provide a recommendation or result of a task to the participants in the conversation [0037].) As per claims 10, 23, and 36 (Original), Lynch teaches the computer-implemented method of claim 9, the computer program product of claim 22, and the computing system of claim 35, Lynch teaches further comprising: providing the bespoke response to a party associated with the unfulfilled need. (Lynch e.g. A user (who may or may not be a participant in a conversation) may invoke the virtual agent to gather information and/or make a recommendation for multiple participants in the conversation. Once invoked, the virtual agent may inject itself into the conversation to present the requested information and/or recommendation to the participants [0022].) As per claims 11, 24, and 37 (Original), Lynch teaches the computer-implemented method of claim 9, the computer program product of claim 22, and the computing system of claim 35, Lynch teaches further comprising: effectuating, in whole or in part, the bespoke response. (Lynch e.g. 
A user (who may or may not be a participant in a conversation) may invoke the virtual agent to gather information and/or make a recommendation for multiple participants in the conversation. Once invoked, the virtual agent may inject itself into the conversation to present the requested information and/or recommendation to the participants [0022]. In some embodiments, the virtual agent may be injected into the conversation to interact with one or more participants, for example, to prompt for additional information to further define the requested task and/or to provide a recommendation or result of a task to the participants in the conversation [0037].) Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. Claims 12-13, 25-26, and 38-39 are rejected under 35 U.S.C. 103 as being unpatentable over Lynch et al. (US 2014/0164317 A1) in view of Charisius et al. (US 2002/0107914 A1). 
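The proposal/accept/reject negotiation Lynch describes in paragraphs [0062]-[0063], cited throughout the mapping above, has agents reach a compromise without divulging the underlying preference data behind their decisions. A minimal sketch of that protocol follows; the class, method, and preference names are all hypothetical, not from the reference.

```python
# Illustrative sketch (hypothetical names): agents negotiate a choice without
# divulging the preference data behind accept/reject decisions [0062]-[0063].
class Agent:
    def __init__(self, name, preferences):
        self.name = name
        self._preferences = preferences  # private to this agent [0062]

    def propose(self):
        # Offer this agent's top-ranked option to the group.
        return self._preferences[0]

    def accepts(self, proposal):
        # Accept anything in this agent's acceptable set; the preference
        # list itself is never shared with the other agents [0063].
        return proposal in self._preferences

def negotiate(agents):
    """Each agent proposes in turn until every agent accepts an offer."""
    for proposer in agents:
        offer = proposer.propose()
        if all(a.accepts(offer) for a in agents):
            return offer
    return None  # no compromise found

a = Agent("a", ["thai", "pizza"])
b = Agent("b", ["sushi", "thai"])
print(negotiate([a, b]))  # "thai": acceptable to both agents
```

Only proposals and yes/no answers cross agent boundaries here, matching Lynch's "with or without divulging ... the underlying information" language.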
As per claims 12 and 38 (Original), Lynch teaches the computer-implemented method of claim 1 and the computing system of claim 27, Lynch does not explicitly teach, however, Charisius teaches further comprising: maintaining a group of distributed actors, wherein each of the distributed actors offers at least one skill (Charisius e.g. The invention relates to methods and systems for optimizing resource allocation and resource profiles used in resource allocation based on data mined from plans created from a workflow [0006]. FIG. 1 depicts a data processing system 100 suitable for practicing methods and systems consistent with the present invention [0074]. The Client Interface 134 allows any enterprise affiliate to create, delete, move, and copy workflows, project plans, and associated roles/resource lists on WebDAV server 140 [0079]. FIG. 2 depicts a functional architectural overview of the workflow modeling and project planning integration tool 200 used to integrate workflow modeling and project planning [0087]. The Resource Manager Module 206 further allows an enterprise affiliate to create, modify, and store the resource profiles (e.g., the person, equipment, or systems, such as a development facility) that may be assigned to a task of a plan created from a workflow [0093]. The resource profile includes a resource ID and a unique identifier for the role profile so that the Client Interface 134 may communicate to the Tool Server 144 that the identified resource has skills or capabilities corresponding to the role profile [0093]. The Client Interface 134 may also receive other resource information (not shown) for other types of resources (e.g., equipment, facilities, computer systems, or other known entities) that may be assigned to any task of a plan [0162]. Resource information 5404 may also include one or more skill identifiers that indicate one or more capabilities that a task of a plan may require for the task to be completed. 
Skill identifiers may include any foreseeable skill for the named resource, including a user, equipment, facilities, computer systems, or other known entities that may be assigned to any task of a plan [0163].) The Examiner submits that before the effective filing date, it would have been obvious to one of ordinary skill in the art to combine Lynch’s virtual agent system with Charisius’s data processing system that maintains a group of distributed actors with associated skills in order to improve resource allocation to a given plan (Charisius e.g. Abstract). As per claims 13, 25, and 39 (Previously Presented), Lynch in view of Charisius teach the computer-implemented method of claim 12 and the computing system of claim 38, Lynch teaches the computer program product of claim 14, Lynch does not explicitly teach, however, Charisius teaches wherein maintaining a group of distributed actors includes: maintaining the distributed database with the group of distributed actors. (Charisius e.g. FIG. 1 depicts a data processing system 100 suitable for practicing methods and systems consistent with the present invention [0074]. Memory 110 in computer 102a includes a Client Interface 134 to a Web-based “Distributed Authoring and Versioning” (WebDAV) server 140 in memory 112 [0076]. The WebDAV server 140 in memory 112 of computer 104 operates as a virtual file system for computers 102a, 102n, and 106 on the network 108 [0078]. The Client Interface 134 allows any enterprise affiliate to create, delete, move, and copy workflows, project plans, and associated roles/resource lists on WebDAV server 140 [0079]. The WebDAV protocol defines a WebDAV resource to be a collection (e.g., a directory or folder on WebDAV Storage 142) or a collection member (e.g., a file or Web page on WebDAV Storage 142). Each WebDAV resource has a content file and properties associated with the content file [0081]. 
The various types of client files include a condition model, a user profile, a resource profile, a work flow definition file, and a plan definition file [0090]. The Resource Manager Module 206 further allows an enterprise affiliate to create, modify, and store the resource profiles (e.g., the person, equipment, or systems, such as a development facility) that may be assigned to a task of a plan created from a workflow [0093]. The resource profile includes a resource ID and a unique identifier for the role profile so that the Client Interface 134 may communicate to the Tool Server 144 that the identified resource has skills or capabilities corresponding to the role profile [0093]. The Resource/Role Management Module 220 checks the resource profile corresponding to the assigned resource on the WebDAV Storage 142 to verify that the resource is not overloaded. For example, the Resource/Role Management Module 220 determines whether a resource is already assigned to another task in another plan during the same time frame, thus preventing it from being able to complete one of the tasks to which it is assigned [0105].) The Examiner submits that before the effective filing date, it would have been obvious to one of ordinary skill in the art to combine Lynch’s virtual agent system with Charisius’s data processing system that maintains a group of distributed actors with associated skills in order to improve resource allocation to a given plan (Charisius e.g. Abstract). As per claim 26 (Original), Lynch in view of Charisius teach the computer program product of claim 25, Lynch does not explicitly teach, however, Charisius teaches wherein maintaining a group of distributed actors includes: maintaining a database that defines the group of distributed actors. (Charisius e.g. FIG. 1 depicts a data processing system 100 suitable for practicing methods and systems consistent with the present invention [0074]. 
Memory 110 in computer 102a includes a Client Interface 134 to a Web-based “Distributed Authoring and Versioning” (WebDAV) server 140 in memory 112 [0076]. The WebDAV server 140 in memory 112 of computer 104 operates as a virtual file system for computers 102a, 102n, and 106 on the network 108 [0078]. The Client Interface 134 allows any enterprise affiliate to create, delete, move, and copy workflows, project plans, and associated roles/resource lists on WebDAV server 140 [0079]. The WebDAV protocol defines a WebDAV resource to be a collection (e.g., a directory or folder on WebDAV Storage 142) or a collection member (e.g., a file or Web page on WebDAV Storage 142). Each WebDAV resource has a content file and properties associated with the content file [0081]. The various types of client files include a condition model, a user profile, a resource profile, a work flow definition file, and a plan definition file [0090]. The Resource Manager Module 206 further allows an enterprise affiliate to create, modify, and store the resource profiles (e.g., the person, equipment, or systems, such as a development facility) that may be assigned to a task of a plan created from a workflow [0093]. The resource profile includes a resource ID and a unique identifier for the role profile so that the Client Interface 134 may communicate to the Tool Server 144 that the identified resource has skills or capabilities corresponding to the role profile [0093]. The Resource/Role Management Module 220 checks the resource profile corresponding to the assigned resource on the WebDAV Storage 142 to verify that the resource is not overloaded. For example, the Resource/Role Management Module 220 determines whether a resource is already assigned to another task in another plan during the same time frame, thus preventing it from being able to complete one of the tasks to which it is assigned [0105].) 
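Charisius's overload check ([0105]) reduces to a standard interval-overlap test over a resource's existing assignments, gated by the skill-identifier matching of [0163]. The sketch below combines the two under those assumptions; the `Resource` class, field names, and `assign` helper are hypothetical illustrations, not structures from the reference.

```python
# Illustrative sketch (hypothetical names): skill matching [0163] plus the
# overload check of Charisius [0105] as an interval-overlap test.
from dataclasses import dataclass, field

@dataclass
class Resource:
    resource_id: str
    skills: set                                      # skill identifiers [0163]
    assignments: list = field(default_factory=list)  # (start, end) pairs

    def qualified(self, required_skills):
        # The resource must cover every skill the task requires.
        return required_skills <= self.skills

    def available(self, start, end):
        # Overloaded if any existing assignment overlaps [start, end) [0105].
        return all(end <= s or e <= start for s, e in self.assignments)

def assign(resource, required_skills, start, end):
    """Assign the task only if the resource is qualified and not overloaded."""
    if resource.qualified(required_skills) and resource.available(start, end):
        resource.assignments.append((start, end))
        return True
    return False

dev = Resource("r1", {"java", "sql"})
print(assign(dev, {"java"}, 1, 5))  # True: qualified and free
print(assign(dev, {"java"}, 3, 6))  # False: overlaps the 1-5 assignment
print(assign(dev, {"c++"}, 6, 8))   # False: lacks the required skill
```

The half-open-interval convention lets one assignment end exactly where the next begins without counting as a conflict; whether Charisius treats boundaries that way is not specified.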
The Examiner submits that before the effective filing date, it would have been obvious to one of ordinary skill in the art to combine Lynch’s virtual agent system with Charisius’s data processing system that maintains a group of distributed actors with associated skills in order to improve resource allocation to a given plan (Charisius e.g. Abstract). Conclusion The prior art made of record and not relied upon that is considered pertinent to applicant's disclosure includes FOR: White, R. (WO-2023191968-A1) “Auto-Managing Requestor Communications to Accommodate Pending Activities of Diverse Actors” and NPL: I. Djordjevic and C. Phillips, "Architecture for secure work of dynamic distributed groups," First IEEE Consumer Communications and Networking Conference, 2004. CCNC 2004., Las Vegas, NV, USA, 2004, pp. 495-500. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Ayanna Minor whose telephone number is (571) 272-3605. The examiner can normally be reached M-F 9am-5pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jerry O'Connor, can be reached at 571-272-6787. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. 
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /A.M./ Examiner, Art Unit 3624 /Jerry O'Connor/ Supervisory Patent Examiner, Group Art Unit 3624

Prosecution Timeline

May 05, 2023: Application Filed
Apr 10, 2025: Non-Final Rejection — §101, §102, §103
Jul 01, 2025: Interview Requested
Jul 15, 2025: Response Filed
Jul 17, 2025: Applicant Interview (Telephonic)
Jul 17, 2025: Examiner Interview Summary
Aug 13, 2025: Final Rejection — §101, §102, §103
Feb 13, 2026: Request for Continued Examination
Mar 11, 2026: Response after Non-Final Action
Mar 19, 2026: Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12556890: ACTIVE TRANSPORT BASED NOTIFICATIONS (granted Feb 17, 2026; 2y 5m to grant)
Patent 12518234: CONVERSATIONAL BUSINESS TOOL (granted Jan 06, 2026; 2y 5m to grant)
Patent 12455761: TECHNIQUES FOR WORKFLOW ANALYSIS AND DESIGN TASK OPTIMIZATION (granted Oct 28, 2025; 2y 5m to grant)
Patent 12450542: CONVERSATIONAL BUSINESS TOOL (granted Oct 21, 2025; 2y 5m to grant)
Patent 12450543: CONVERSATIONAL BUSINESS TOOL (granted Oct 21, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 18%
With Interview: 43% (+24.7%)
Median Time to Grant: 3y 6m
PTA Risk: High
Based on 179 resolved cases by this examiner. Grant probability derived from career allow rate.
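The headline projections are consistent with the examiner's career data shown above: 33 allowances out of 179 resolved cases, plus the observed +24.7-point lift when an interview is held. A quick check of that arithmetic (the variable names are illustrative):

```python
# Reproduce the headline projections from the career data shown on this page.
granted, resolved = 33, 179
allow_rate = granted / resolved    # career allow rate
interview_lift = 0.247             # observed lift, in percentage points
with_interview = allow_rate + interview_lift

print(f"{allow_rate:.0%}")       # 18%: the baseline grant probability
print(f"{with_interview:.0%}")   # 43%: grant probability with an interview
```

This assumes the "with interview" figure is simply the baseline rate plus the lift in percentage points, which matches the displayed numbers but is an inference about the page's methodology.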
