Prosecution Insights
Last updated: April 19, 2026
Application No. 18/510,151

SYSTEM AND METHOD THEREOF FOR AUTOMATICALLY UPDATING A DECISION-MAKING MODEL OF AN ELECTRONIC SOCIAL AGENT BY ACTIVELY COLLECTING AT LEAST A USER RESPONSE

Final Rejection: §101, §102, §103, §DP
Filed
Nov 15, 2023
Examiner
OGUNBIYI, OLUWADAMILOL M
Art Unit
2653
Tech Center
2600 — Communications
Assignee
Intuition Robotics Ltd.
OA Round
2 (Final)
Grant Probability: 78% (Favorable)
OA Rounds: 3-4
To Grant: 2y 12m
With Interview: 96%

Examiner Intelligence

Career Allow Rate: 78% (236 granted / 304 resolved; +15.6% vs TC avg), above average
Interview Lift: +18.6% among resolved cases with an interview (a strong lift)
Typical Timeline: 2y 12m avg prosecution; 31 applications currently pending
Career History: 335 total applications across all art units

Statute-Specific Performance

§101: 20.1% (-19.9% vs TC avg)
§103: 47.0% (+7.0% vs TC avg)
§102: 12.1% (-27.9% vs TC avg)
§112: 13.7% (-26.3% vs TC avg)
Tech Center averages are estimates. Based on career data from 304 resolved cases.

Office Action

§101 §102 §103 §DP
DETAILED ACTION

Claims 1, 3-8, 10-15, 17-22, and 24-27 are pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

With regard to the Non-Final Office Action of 12 June 2025, the Applicant filed a response on 22 September 2025. Claims 2, 9, 16 and 23 have been cancelled. Claim 23 was objected to for a minor informality; because the claim has been cancelled, the objection is rendered moot.

Response to Arguments

Regarding the nonstatutory double patenting rejection of the most recent claim listing, the Applicant has requested that the ‘rejection be held in abeyance until at least one claim has been found otherwise allowable.’ The Examiner has considered the current claim listing and observes that the limitations of the claims remain subject to a nonstatutory double patenting rejection over claims of U.S. Patent No. 11,907,298 B2. The double patenting rejection is maintained.

Regarding the 35 U.S.C. 101 rejection of the claims as being directed to a judicial exception without significantly more, the Applicant disagrees that a judicial exception is contained within the claims, arguing that the claims do not pre-empt or substantially pre-empt any abstract idea, nor do they fall within the three defined categories of abstract ideas (Remarks: page 2 par 2), and that, as currently presented, the claims instead ‘provide for an improvement in the functioning of computers or other technology and apply any judicial exception to effect improved operation of a computer system’ (Remarks: page 2 par 3). The Applicant refers to the Specification (Remarks: page 2 - page 3) to show the indicated improvements, but the Examiner notes that, while this may be so, the contents of the indicated sections of the Specification are not recited in the claim listing and are not considered for the purpose of the 35 U.S.C. 101 abstract idea rejection.

The Applicant also argues that an improvement to a computer can be effectuated by better software (Remarks: page 3 cols 2-3). This would appear to relate to the updating of the decision-making model of the claimed electronic social agent. The Examiner holds that, while it is true that ‘a person is not an electronic social agent’ (Remarks: page 4 par 1), the functions of the electronic social agent provided here can be entirely performed by a person. Regarding the updating of the decision-making model, the Examiner notes that a person is also capable of updating a decision-making process based on available information, or based on a determination that there is no need to collect further information. The electronic social agent is embodied on a computer and, by this analysis, qualifies as a tool for the implementation of the abstract idea. The Examiner maintains that the entire process of claim 1 can be performed mentally by a human being. The Examiner disagrees with the indicated improvements to the functioning of a computer, particularly because the indicated improvements are contained in the Specification rather than the claims themselves. The Examiner also holds that a human is able to update a decision-making process by making use of newly received information pertinent to the entire process, as well as by making further decisions based on a determination that further information need not be collected. The 35 U.S.C. 101 abstract idea rejection is maintained.

Regarding the 35 U.S.C. 103 rejection, the Applicant indicates (Remarks: page 4 par 5-6) that the Office Action of 12 June 2025 is incomplete because claim 9 was rejected as being unpatentable over ‘Dahan (US 2018/0054524 A1) in view of Petill (US 2021/0012113 A1)’ rather than over the Wangikar reference applied earlier to claim 2.
The Applicant holds that, because of this, the Office Action is incomplete and a further Office Action cannot be made final. The Examiner holds that, while there was an error in the presentation of the reference applied to previously presented claim 9, the Petill reference was not directly relied upon in the rejection of previously presented claim 9. Previously presented claim 2 was addressed using the appropriate prior art references applicable to it, and the same follows for claim 9. While this was an error in the Office Action, it does not render the Office Action incomplete. MPEP § 706.07(a) provides that second or any subsequent actions on the merits shall be final, except where the Examiner introduces a new ground of rejection that is not necessitated by the Applicant’s amendment of the claims. Here, the Examiner is not introducing new references to address the independent claims, the Applicant has not presented claims in a way that would necessitate a new ground of rejection, and the Examiner has made use of prior art references that were made available in the previous Office Action and are applied here as well.

Regarding instant claim 1, the Applicant has amended the claim to incorporate the limitations of cancelled claims 2 and 9.
The Applicant indicates (Remarks: page 5 par 3) that updating the decision-making model is not just revising outputs or storing logs but ‘requires altering underlying decision-making model, such as its algorithm, coefficients, or structure, that governs the future behavior of the electronic social agent.’ The Examiner applied the Wangikar reference to provide teaching for the updating of the decision-making model, to which the Applicant responds (Remarks: page 6 par 3) that, while the Wangikar reference discloses machine learning model updates, its field and function are very different from the claim, the applied reference being limited to deployment error handling for code, unrelated to a social agent’s decision-making. The Applicant thereby admits that the Wangikar reference teaches model updating, albeit in a different domain. The Dahan et al. reference has already been applied to teach an electronic social agent, and the incorporated Wangikar reference, as indicated by its [0045], does update a decision-making model, rendering it suitable to address the claim limitation. This was presented with regard to the rejection of previously presented claim 2.

The Applicant also argues (Remarks: page 6 par 4-5) against the further reference of Dechu et al., which was directly applied to teach previously presented claim 9, providing that this reference is unrelated to the two previously applied references and that its updating teaches nothing about a decision-making model of an electronic social agent. The Examiner applied FIG. 1 of this reference to teach the claimed limitation. The process shows an updating of a state disambiguation model in Step 108, this being taken as the updating of the claimed decision-making model. Before that, the Examiner referred to Steps 106→107, which provide a check that a final mode has been reached and, if so, proceed to updating the model.
This does appear to be an iterative process, as the Applicant indicates, but its iterative nature does not remove it from being applicable here: the ‘No’ decision of Step 106 need not be observed, and when that ‘No’ decision is not observed, the procedure proceeds to updating the model after determining that a second input from the user is indeed not required, making it suitable to teach the claimed limitation as presented. The Examiner hereby maintains the claim rejection.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement.
See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines which form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Instant claim 1 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent No. 11,907,298 B2 in view of Dechu et al. (US 2018/0341684 A1).
Although the claims at issue are not identical, they are not patentably distinct from each other because both are directed to receiving a first user input and determining that it is necessary to collect a second user input, collecting the second user input through at least one sensor, and then updating the decision-making model of the electronic social agent based on the collected second user input. The instant claim fails to teach the limitation of ‘updating the decision-making model of the electronic social agent based on the first user input, upon determination that generation of the at least one second user input is not required.’ This is, however, taught by the reference of Dechu et al. (US 2018/0341684 A1). It would have been obvious to one of ordinary skill in the art to incorporate this reference based on the predictable result of keeping each state of the conversation between the user and the social agent up-to-date so that the system knows what type of response to provide at any point in the conversation.

The remaining instant claims are likewise rejected on the ground of nonstatutory double patenting as being unpatentable over the indicated claims of U.S. Patent No. 11,907,298 B2 in view of Dechu et al. (US 2018/0341684 A1): instant claims 3, 5, 6, 7 and 8 over claim 1; instant claim 4 over claim 2; instant claim 10 over claim 7; instant claim 11 over claim 8; instant claim 12 over claim 9; instant claim 14 over claim 10; instant claims 15, 17, 19, 20, 21 and 22 over claim 11; instant claim 18 over claim 12; instant claim 24 over claim 17; instant claim 25 over claim 18; instant claim 26 over claim 12; and instant claim 27 over claim 9 (claim 9 being directed to a method rather than the system of instant claim 27).

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1, 3-8, 10-15, 17-22, and 24-27 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception without significantly more.
Independent claims 1, 14 and 15 recite the limitations of managing an electronic social agent by processing a first user input, determining based on the first user input whether to collect a second user input, collecting the second user input when it is determined that collection is required, updating a decision-making model based on the at least one collected second user input, and also updating the decision-making model based on the first user input upon determination that generation of the at least one second user input is not required. Nothing in the claims precludes them from being performed in the human mind. The entire process involves data collection and data analysis. A human, acting as the electronic social agent (which would typically require maintaining a conversation or performing an action based on an input), may receive a first user input, consider it to determine whether it is necessary to obtain further input and, if necessary, collect another user input, thereby gaining more information/knowledge to be applied to making a decision at a later time; likewise, where it is determined that another user input is not required, the human would gain more information/knowledge to be applied to making a decision at a later time. The claims thereby recite a mental process.

This judicial exception is not integrated into a practical application, as the claims simply teach collecting data in the form of the first and second user inputs (and through updating the decision-making model under both situations) and analysing data in the form of determining whether collecting the second user input is required. The mentioned processing circuitry and memory are also recited in generic terms. The invention is not tied to any particular defining structure and simply provides instructions to apply the judicial exception. The technique can be performed by a generic computer, presented as a tool to implement the abstract idea (classifiable as automation of the mental process steps). The Specification in [18] shows the implementation of the electronic social agent on a computer, which can be a generic computer. The claims also refer to an electronic social agent which, in [3], is provided as software systems that simplify the user experience. As currently presented, it appears to be an application on a computer that is able to simplify the user experience, a function that is routine and well-known in the art and can also be performed by a human. The one or more sensors used for collecting the at least one second user input serve as general-purpose sensors utilised for collecting the necessary data (data gathering); they are additional elements but are not sufficient to amount to significantly more than the mentioned judicial exception. The claims are recited at such a high level of generality that they amount to no more than mere instructions to apply the exception using a generic computer, and they do not provide any additional detail.

The claims therefore do not include additional elements sufficient to amount to significantly more than the judicial exception, because the invention is not tied to a practical application. The claims provide techniques that amount to no more than mere instructions to apply the judicial exception, which can be performed by a generic device. Merely mentioning the processing circuitry and memory amounts to no more than general-purpose hardware used as a tool to implement the abstract idea and does not provide any particular application other than implementing the judicial exception. Mere instructions to apply an exception using a generic device cannot provide an inventive concept. Claims 1, 14 and 15 are not eligible.
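For readability, the two-branch flow recited in independent claim 1 (and characterized above as a mental process) can be reduced to a few lines of code. This is an editorial sketch only, not part of the Office Action or the application, and every name in it is hypothetical.

```python
class DecisionModel:
    """Hypothetical stand-in for the claimed decision-making model."""

    def __init__(self):
        self.updates = []

    def update(self, user_input):
        # Record the input used to revise the model's future behavior.
        self.updates.append(user_input)


def handle_interaction(model, first_input, needs_second_input, collect_second_input):
    """Sketch of the claim-1 flow: apply a process to a first user input,
    determine whether a second input is required, and update the
    decision-making model on either branch."""
    if needs_second_input(first_input):
        # Branch 1: collect at least one second user input (via a sensor).
        second_input = collect_second_input()
        model.update(second_input)
    else:
        # Branch 2: no second input required; update from the first input alone.
        model.update(first_input)
    return model
```

The point of the Examiner's analysis is that both branches (ask a clarifying question, or proceed on the first input) are steps a person could perform mentally; the sketch merely makes the branch structure explicit.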
Claims 3 and 17 provide the application of a process to at least a portion of a first dataset collected by the electronic social agent, with the first dataset indicating at least a current state. A human may simply collect information about a state of a conversation.

Claims 4 and 18 provide that the current state is associated with a user and the user’s environment, the first dataset being collected using sensors connected to the electronic social agent. The one or more sensors simply serve as additional tools for data gathering, and a human through observation may collect state information about the user and the user’s environment. This does not integrate any practical application, nor does it provide any additional element sufficient to amount to more than the mentioned judicial exception.

Claims 5 and 19 provide determining whether it is required to collect at least one second user input based on the first dataset and the first user input. A human may analyse the available user information to determine whether further information must be obtained from the user. This does not integrate any practical application, nor does it provide any additional element sufficient to amount to more than the mentioned judicial exception.

Claims 6 and 20 provide generating at least one question for collecting at least one second user input. A human may generate a question to collect further user input. This does not integrate any practical application, nor does it provide any additional element sufficient to amount to more than the mentioned judicial exception.

Claims 7 and 21 provide presenting the at least one question to the user. A human may present a generated question to the user by pen and paper. This does not integrate any practical application, nor does it provide any additional element sufficient to amount to more than the mentioned judicial exception.

Claims 8 and 22 provide that the at least one process is a machine learning algorithm.
The Specification in [33] simply provides a generic machine learning algorithm without any specifics on how it is implemented. The machine learning algorithm serves as a mathematical tool for implementing the abstract idea. This does not integrate any practical application, nor does it provide any additional element sufficient to amount to more than the mentioned judicial exception.

Claims 10 and 24 provide that the sensors are virtual sensors receiving inputs from online sources. A human may receive further information about a situation from another human at a different location. This does not integrate any practical application, nor does it provide any additional element sufficient to amount to more than the mentioned judicial exception.

Claims 11 and 25 provide applying a pre-determined threshold to determine whether to collect the at least one second user input. A human may apply a mental threshold to a first user input to determine whether to request a second user input. This does not integrate any practical application, nor does it provide any additional element sufficient to amount to more than the mentioned judicial exception.

Claims 12 and 27 provide that the at least one second user input is at least one of a verbal or non-verbal input. The user, being a human, can provide either verbal or non-verbal input, which would be understood by a second human. This does not integrate any practical application, nor does it provide any additional element sufficient to amount to more than the mentioned judicial exception.

Claims 13 and 26 provide execution of one or more actions based on at least the second user input. A human may perform an action that is clarified by a second user input. This does not integrate any practical application, nor does it provide any additional element sufficient to amount to more than the mentioned judicial exception.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claims 1, 3, 6, 7, 8, 12, 13, 14, 15, 17, 20, 21, 22, 26 and 27 are rejected under 35 U.S.C. 103 as being unpatentable over Dahan et al. (US 2018/0054524 A1: hereafter Dahan) in view of Wangikar (US 2021/0132926 A1), further in view of Dechu et al. (US 2018/0341684 A1: hereafter Dechu).

For claim 1, Dahan discloses a method for managing an electronic social agent (Dahan: [0016] — engaging in a conversation with a chatbot (the chatbot here is taken as the claimed electronic social agent)), comprising: applying at least one process to a first user input, wherein the first user input is collected by the electronic social agent (Dahan: [0084] — applying natural language processing to a first user input (the voice command here can be taken as the first user input); [0018] — a chatbot which engages in a conversation with the user (indicating that the chatbot (electronic social agent) receives the first user input)); determining based on the first user input whether collection of at least one second user input is required, wherein at least one process is used in determining whether collection of the at least one second user input is required (Dahan: [0084] — applying natural language processing to the user’s input and an AI unit handling the user’s request; [0085] — an AI unit that is able to determine that a request for clarification is required and creates the request for clarification to be presented to the user (indicating the use of the NLP and AI unit to determine that at least a second user input is required); [0080] — prompting the user to enter or utter a request (indicating the presence of a sensor to receive the user’s input));
and collecting, using one or more sensors, the at least one second user input upon determination that the collection of at least one second user input is required (Dahan: [0080] — prompting the user to enter or utter a request (indicating the presence of a sensor to receive the user’s input); [0053] — the system comprises an interface with a chat channel, voice channel and telephone module, these being used to engage a user with a request, and a user can connect using a telephone module (all indicating the presence of a sensor for receiving the user’s input); [0083] — a local interface that can receive a voice command (indicating the presence of a sensor for receiving the user’s input); [0113] — receiving feedback from a user).

The reference of Dahan fails to provide the further teachings of this claim, for which the reference of Wangikar is now introduced to teach: updating a decision-making model of the electronic social agent based on the at least one collected second user input (Wangikar: [0045] — updating a decision model of the system based on the user’s response, the decision model having been trained using machine learning techniques; [0108] — allowing a user to interact with social media servers).

The reference of Dahan provides teaching for the presence of an electronic social agent as a chatbot, but differs from the claimed invention in that the claimed invention further provides teaching for updating a decision-making model of the electronic social agent with the collected user input. This is not new to the art, as the reference of Wangikar is seen to teach above. Hence, before the effective filing date of the claimed invention, one of ordinary skill in the art would have found it obvious to combine the known teaching of Wangikar, which updates a decision model, with the teaching of Dahan, which provides a chatbot that can maintain a dialogue with a user, to thereby arrive at the claimed invention.
The combination of both prior art elements would have provided the predictable result of ensuring that the social agent performs its required processing based on the most recently available information rather than on outdated information. See KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 415-421, 82 USPQ2d 1385, 1395-97 (2007).

The combination of Dahan in view of Wangikar teaches the above limitations but fails to teach the further limitations of this claim, for which the reference of Dechu is now introduced to teach: updating the decision-making model of the electronic social agent based on the first user input, upon determination that generation of the at least one second user input is not required (Dechu: FIG. 1 Steps 106→108 — updating a disambiguation model after a final response has been provided when no further disambiguation is required (indicating the updating of a decision model when it is determined that the generation of a second user input is not required, with Steps 106→107 providing a check that a final mode has been reached and, if so, proceeding to updating the model; observing a ‘Yes’ at Step 106 makes the procedure proceed to updating the model after a determination that a second input from the user is not required)).

The combination of Dahan in view of Wangikar provides teaching for updating a decision-making model but differs from the claimed invention in that the claimed invention further provides teaching for updating the decision-making model based on a determination that the generation of the at least one second user input is not required. This is not new to the art, as the reference of Dechu is seen to teach above.
Hence, before the effective filing date of the claimed invention, one of ordinary skill in the art would have found it obvious to combine the known teaching of Dechu, which updates a decision-making model in the form of a state disambiguation model after it has been determined that no further disambiguation is needed (so that a second user input would not be required), with the teaching of the combination of Dahan in view of Wangikar, which simply provides the updating of a decision-making model, to thereby arrive at the claimed invention. The combination of both prior art elements would have provided the predictable result of keeping each state of the conversation between the user and the social agent up-to-date so that the system knows what type of response to provide at any point in the conversation. See KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 415-421, 82 USPQ2d 1385, 1395-97 (2007).

For claim 3, claim 1 is incorporated, and the combination of Dahan in view of Wangikar, further in view of Dechu, discloses the method, wherein the at least one process is applied to at least a portion of a first dataset collected by the electronic social agent (Dahan: [0016] — extracting attached data from the chat conversation; [0127] — collecting session data (as another portion of a first dataset collected by the electronic social agent)), wherein the first dataset indicates at least a current state (Dahan: [0029] — carrying over session information (the session information here is indicative of the current state)).

For claim 6, claim 1 is incorporated, and the combination of Dahan in view of Wangikar, further in view of Dechu, discloses the method, further comprising: generating at least one question for collecting the at least one second user input (Dahan: [0080], [0085] — creating and prompting the user to enter or utter a request).
For claim 7, claim 1 is incorporated, and the combination of Dahan in view of Wangikar further in view of Dechu discloses the method, further comprising: presenting the at least one question to the user (Dahan: [0080], [0085] — creating and prompting the user to enter or utter a request).

For claim 8, claim 1 is incorporated, and the combination of Dahan in view of Wangikar further in view of Dechu discloses the method, wherein the at least one process is a machine learning (ML) algorithm (Dahan: [0089] — applying deep learning (which is a machine learning model)).

For claim 12, claim 1 is incorporated, and the combination of Dahan in view of Wangikar further in view of Dechu discloses the method, wherein the at least one second user input is at least one of: a verbal input, and a non-verbal input (Dahan: [0014] — chatbot that receives user input; [0085] — the AI unit that creates a request for clarification from the user; [0080] — receiving a speech request (indicating a verbal input)).

For claim 13, claim 1 is incorporated, and the combination of Dahan in view of Wangikar further in view of Dechu discloses the method, further comprising: executing one or more actions by the electronic social agent based on the at least one second user input (Dahan: [0116] — after feedback (the second user input), it can show that a course of action turns out to be satisfactory (indicating the execution of an action)).

As for claim 14, computer program product claim 14 and method claim 1 are related as a computer program product storing the executable instructions required for performing the claimed method steps on a computer. Dahan in [0079] provides a mobile device such as a laptop/computer or tablet, and in [0021] provides a computerised method for implementing a chatbot conversation, inherently showing the storage of the system instructions; these read upon the limitations of this claim.
Accordingly, claim 14 is similarly rejected under the same rationale as applied above with respect to method claim 1.

As for claim 15, system claim 15 and method claim 1 are related as system and the method of using same, with each claimed element’s function corresponding to the claimed method step. Dahan in [0079] provides a processor, and in [0146] provides data storage, these being suitable to read upon the limitations of this claim. Accordingly, claim 15 is similarly rejected under the same rationale as applied above with respect to method claim 1.

As for claim 17, system claim 17 and method claim 3 are related as system and the method of using same, with each claimed element’s function corresponding to the claimed method step. Accordingly, claim 17 is similarly rejected under the same rationale as applied above with respect to method claim 3.

As for claim 20, system claim 20 and method claim 6 are related as system and the method of using same, with each claimed element’s function corresponding to the claimed method step. Accordingly, claim 20 is similarly rejected under the same rationale as applied above with respect to method claim 6.

As for claim 21, system claim 21 and method claim 7 are related as system and the method of using same, with each claimed element’s function corresponding to the claimed method step. Accordingly, claim 21 is similarly rejected under the same rationale as applied above with respect to method claim 7.

As for claim 22, system claim 22 and method claim 8 are related as system and the method of using same, with each claimed element’s function corresponding to the claimed method step. Accordingly, claim 22 is similarly rejected under the same rationale as applied above with respect to method claim 8.

As for claim 26, system claim 26 and method claim 13 are related as system and the method of using same, with each claimed element’s function corresponding to the claimed method step.
Accordingly, claim 26 is similarly rejected under the same rationale as applied above with respect to method claim 13.

As for claim 27, system claim 27 and method claim 12 are related as system and the method of using same, with each claimed element’s function corresponding to the claimed method step. Accordingly, claim 27 is similarly rejected under the same rationale as applied above with respect to method claim 12.

Claims 4 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Dahan (US 2018/0054524 A1) in view of Wangikar (US 2021/0132926 A1), further in view of Dechu (US 2018/0341684 A1) as applied to claims 3 and 17, and further in view of Lecaros Easton et al. (US 2021/0026896 A1: hereafter — Lecaros Easton).

For claim 4, claim 3 is incorporated, but the combination of Dahan in view of Wangikar further in view of Dechu fails to disclose the limitation of this claim, for which Lecaros Easton is now introduced to teach: the method, wherein the current state is associated with a user and an environment of the user, wherein the at least portion of the first dataset is collected using the one or more sensors that are connected to the electronic social agent (Lecaros Easton: [0045] — sensor data received from the hardware sensor on a bot; [0067] — receiving at the bot, contextual data as well as speech data; [0064]–[0065] — contextual data from sensors that are in the environment).

The combination of Dahan in view of Wangikar further in view of Dechu provides teaching for obtaining a portion of a first dataset as current state information, but differs from the claimed invention in that the claimed invention further provides teaching for obtaining current state information as that associated with the user and an environment of the user.
This is not new to the art, as the reference of Lecaros Easton is seen to teach above. Hence, before the effective filing date of the claimed invention, one of ordinary skill in the art would have found it obvious to combine the known teaching of Lecaros Easton, which obtains information associated with the user and the user’s environment through sensors, with the teaching of the combination of Dahan in view of Wangikar further in view of Dechu, which teaches only the obtaining of current state information, to thereby arrive at the claimed invention. The combination of both prior art elements would have provided the predictable result of delivering responses to a user in a manner that considers the environmental situation of the user. See KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 415-421, 82 USPQ2d 1385, 1395-97 (2007).

As for claim 18, system claim 18 and method claim 4 are related as system and the method of using same, with each claimed element’s function corresponding to the claimed method step. Accordingly, claim 18 is similarly rejected under the same rationale as applied above with respect to method claim 4.

Claims 5 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Dahan (US 2018/0054524 A1) in view of Wangikar (US 2021/0132926 A1), further in view of Dechu (US 2018/0341684 A1) as applied to claims 3 and 17, and further in view of PETILL et al. (US 2021/0012113 A1: hereafter — Petill).
For claim 5, claim 3 is incorporated, but the combination of Dahan in view of Wangikar further in view of Dechu fails to disclose the limitation of this claim, for which the reference of Petill is now introduced to teach: the method, further comprising: determining, based on the first dataset and the first user input, whether the collection of at least one second user input is required (Petill: [0055] — after a user provides input, the system attempts to disambiguate the user’s request and intention by applying the first dataset, which includes information such as gaze direction and pointing or tapping gestures, to decide whether the system needs a second user input).

The combination of Dahan in view of Wangikar further in view of Dechu provides teaching for obtaining a portion of a first dataset as current state information collected by the chatbot, but differs from the claimed invention in that the claimed invention further provides teaching for determining whether the collection of at least one second user input is required based on the first dataset and the first user input. This is not new to the art, as the reference of Petill is seen to teach above.

Hence, before the effective filing date of the claimed invention, one of ordinary skill in the art would have found it obvious to combine the known teaching of Petill, which determines whether a second user input is required based on the first dataset and the first user input, with the teaching of the combination of Dahan in view of Wangikar further in view of Dechu, which teaches only the obtaining of current state information, to thereby arrive at the claimed invention. The combination of both prior art elements would have provided the predictable result of clarifying a situation where the user’s first input raises an ambiguity. See KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 415-421, 82 USPQ2d 1385, 1395-97 (2007).
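The kind of determination attributed to Petill [0055] — combining the first dataset (e.g., gaze direction, pointing or tapping gestures) with the first user input to decide whether a second input is required — can be sketched roughly as follows. The dictionary keys and the referent heuristic are illustrative assumptions, not Petill’s implementation.

```python
def needs_second_input(first_dataset, first_input):
    """Decide whether a clarifying second user input must be collected,
    using contextual signals from the first dataset plus the first utterance.
    Hypothetical sketch: keys and heuristics are assumed for illustration."""
    tokens = first_input.lower().split()
    # An unresolved referent ("it", "that", "there") makes the request ambiguous.
    referent_ambiguous = any(t in ("it", "that", "there") for t in tokens)
    # Contextual data such as a gaze target or a pointing gesture can
    # resolve the referent without asking the user again.
    context_resolves = bool(first_dataset.get("gaze_target")
                            or first_dataset.get("pointed_at"))
    return referent_ambiguous and not context_resolves

assert needs_second_input({}, "turn it on") is True
assert needs_second_input({"gaze_target": "lamp"}, "turn it on") is False
assert needs_second_input({}, "turn on the lamp") is False
```

The sketch mirrors the claim mapping: the second input is requested only when the first dataset fails to disambiguate the first input.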
As for claim 19, system claim 19 and method claim 5 are related as system and the method of using same, with each claimed element’s function corresponding to the claimed method step. Accordingly, claim 19 is similarly rejected under the same rationale as applied above with respect to method claim 5.

Claims 10 and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Dahan (US 2018/0054524 A1) in view of Wangikar (US 2021/0132926 A1), further in view of Dechu (US 2018/0341684 A1) as applied to claims 1 and 15, and further in view of Price et al. (US 2020/0279490 A1: hereafter — Price).

For claim 10, claim 1 is incorporated, but the combination of Dahan in view of Wangikar further in view of Dechu fails to disclose the limitation of this claim, for which Price is now introduced to teach: the method, wherein the one or more sensors are virtual sensors which receive inputs from online sources (Price: [0018] — a computer system that receives a signal from a remote sensor (teaching this as a virtual sensor that receives remote signals, the remote signal interpreted as being from an online source) so that the system can query a database (to further generate a query)).

The combination of Dahan in view of Wangikar further in view of Dechu provides teaching for the presence of one or more sensors for collecting a second user input, but differs from the claimed invention in that the claimed invention further provides teaching for the presence of virtual sensors which receive input from online sources. This is not new to the art, as the reference of Price is seen to teach above.
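The "virtual sensor" notion attributed to Price [0018] — a sensor fed by an online source rather than by a physical transducer — could be sketched, purely as an illustrative assumption (the class and field names appear in no cited reference), as:

```python
class VirtualSensor:
    """A sensor with no physical hardware: its reading comes from an
    injected fetch function that pulls from an online source.
    Hypothetical sketch for illustration only."""
    def __init__(self, fetch):
        self.fetch = fetch  # callable returning the latest remote reading

    def read(self):
        return self.fetch()

# In practice, fetch might wrap an HTTP call to a weather or calendar API;
# a stubbed source is used here so the sketch is self-contained.
sensor = VirtualSensor(lambda: {"condition": "rain", "temp_c": 12})
assert sensor.read()["condition"] == "rain"
```

Injecting the fetch function keeps the agent's sensor interface uniform, so physical and virtual sensors can be polled the same way.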
Hence, before the effective filing date of the claimed invention, one of ordinary skill in the art would have found it obvious to combine the known teaching of Price, which obtains further input from virtual sensors connected to online sources, with the obtaining of information from one or more sources as provided by the combination of Dahan in view of Wangikar further in view of Dechu, to thereby arrive at the claimed invention. The combination of both prior art elements would have provided the predictable result of increasing the number of sources to draw from in order to generate the right information to clarify a first user input. See KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 415-421, 82 USPQ2d 1385, 1395-97 (2007).

As for claim 24, system claim 24 and method claim 10 are related as system and the method of using same, with each claimed element’s function corresponding to the claimed method step. Accordingly, claim 24 is similarly rejected under the same rationale as applied above with respect to method claim 10.

Claims 11 and 25 are rejected under 35 U.S.C. 103 as being unpatentable over Dahan (US 2018/0054524 A1) in view of Wangikar (US 2021/0132926 A1), further in view of Dechu (US 2018/0341684 A1) as applied to claims 1 and 15, and further in view of Eakin et al. (US 11,211,058 B1: hereafter — Eakin).
For claim 11, claim 1 is incorporated, but the combination of Dahan in view of Wangikar further in view of Dechu fails to disclose the limitation of this claim, for which Eakin is now introduced to teach: the method, further comprising: applying a pre-determined threshold to determine whether it is required to collect the at least one second user input (Eakin: Col 39 lines 38–45 — checking the natural language understanding results for interpretations exceeding a threshold; if they exceed the threshold, a disambiguation is triggered in order to determine the correct interpretation (the triggering of the disambiguation would elicit a response from the user as a second user input)).

The combination of Dahan in view of Wangikar further in view of Dechu provides teaching for collecting at least one second user input, but differs from the claimed invention in that the claimed invention further provides teaching for applying a pre-determined threshold to determine whether a second user input should be collected. This is not new to the art, as the reference of Eakin is seen to teach above.

Hence, before the effective filing date of the claimed invention, one of ordinary skill in the art would have found it obvious to combine the known teaching of Eakin, which applies a threshold to determine whether a second user input should be collected, with the collection of the one or more second user inputs as provided by the combination of Dahan in view of Wangikar further in view of Dechu, to thereby arrive at the claimed invention. The combination of both prior art elements would have provided the predictable result of establishing a level of confidence that the first user input was properly understood, the threshold serving the purpose of requesting/collecting disambiguation input only when it is required by the system for obtaining clarification. See KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 415-421, 82 USPQ2d 1385, 1395-97 (2007).
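The threshold-gated disambiguation attributed to Eakin (col. 39, lines 38–45) can be illustrated with a minimal, hypothetical sketch. The specific heuristic used here — trigger disambiguation when more than one NLU interpretation clears the confidence threshold, or when none does — is an assumption for illustration, not Eakin’s actual algorithm.

```python
def should_disambiguate(hypotheses, threshold=0.6):
    """Return True when a second user input should be collected.

    hypotheses: list of (intent, confidence) pairs from the NLU stage.
    Hypothetical rule: exactly one interpretation above the pre-determined
    threshold means the first input was understood; zero or several mean
    a disambiguating second input is needed.
    """
    confident = [h for h in hypotheses if h[1] >= threshold]
    return len(confident) != 1

# Two competing confident readings -> ask the user to disambiguate.
assert should_disambiguate([("play_music", 0.9), ("play_video", 0.8)]) is True
# One clear winner -> no second input required.
assert should_disambiguate([("play_music", 0.9), ("play_video", 0.3)]) is False
# Nothing confident -> also ask again.
assert should_disambiguate([("play_music", 0.2)]) is True
```

The pre-determined threshold is what gates the collection of the second input, matching the claim limitation as mapped in the rejection.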
As for claim 25, system claim 25 and method claim 11 are related as system and the method of using same, with each claimed element’s function corresponding to the claimed method step. Accordingly, claim 25 is similarly rejected under the same rationale as applied above with respect to method claim 11.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the Examiner should be directed to OLUWADAMILOLA M. OGUNBIYI, whose telephone number is (571) 272-4708. The Examiner can normally be reached Monday – Thursday (8:00 AM – 5:30 PM Eastern Standard Time). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, Applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the Examiner by telephone are unsuccessful, the Examiner’s Supervisor, PARAS D. SHAH, can be reached at (571) 270-1650. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/OLUWADAMILOLA M OGUNBIYI/
Examiner, Art Unit 2653

/Paras D Shah/
Supervisory Patent Examiner, Art Unit 2653

01/13/2026

Prosecution Timeline

Nov 15, 2023
Application Filed
Jun 10, 2025
Non-Final Rejection — §101, §102, §103
Sep 22, 2025
Response Filed
Jan 13, 2026
Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579979
NAMING DEVICES VIA VOICE COMMANDS
2y 5m to grant Granted Mar 17, 2026
Patent 12537007
METHOD FOR DETECTING AIRCRAFT AIR CONFLICT BASED ON SEMANTIC PARSING OF CONTROL SPEECH
2y 5m to grant Granted Jan 27, 2026
Patent 12508086
SYSTEM AND METHOD FOR VOICE-CONTROL OF OPERATING ROOM EQUIPMENT
2y 5m to grant Granted Dec 30, 2025
Patent 12499885
VOICE-BASED PARAMETER ASSIGNMENT FOR VOICE-CAPTURING DEVICES
2y 5m to grant Granted Dec 16, 2025
Patent 12469510
TRANSFORMING SPEECH SIGNALS TO ATTENUATE SPEECH OF COMPETING INDIVIDUALS AND OTHER NOISE
2y 5m to grant Granted Nov 11, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
78%
Grant Probability
96%
With Interview (+18.6%)
2y 12m
Median Time to Grant
Moderate
PTA Risk
Based on 304 resolved cases by this examiner. Grant probability derived from career allow rate.
