Prosecution Insights
Last updated: April 19, 2026
Application No. 17/708,226

SYSTEMS AND METHODS FOR TASK DETERMINATION, DELEGATION, AND AUTOMATION

Status: Final Rejection (§103)
Filed: Mar 30, 2022
Examiner: HO, THOMAS Y
Art Unit: 3624
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Yohana LLC
OA Round: 6 (Final)

Grant Probability: 15% (At Risk)
Projected OA Rounds: 7-8
Projected Time to Grant: 3y 10m
Grant Probability with Interview: 47%

Examiner Intelligence

- Career allow rate: 15% (27 granted / 175 resolved) — grants only 15% of cases, -36.6% vs Tech Center average
- Interview lift: strong, +31.7% among resolved cases with interview
- Typical timeline: 3y 10m average prosecution; 46 applications currently pending
- Career history: 221 total applications across all art units
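The headline figures above follow from the raw counts. A minimal sketch of the arithmetic (the 15.3% without-interview baseline is an assumption inferred from the displayed +31.7% lift, not a number stated in the dashboard):

```python
# Reproduce the examiner's headline metrics from the raw counts shown above.
granted = 27
resolved = 175

career_allow_rate = 100 * granted / resolved
print(f"Career allow rate: {career_allow_rate:.1f}%")  # ~15.4%, displayed as 15%

# Interview lift = allow rate with interview minus allow rate without.
# 47.0% "with interview" is given; 15.3% "without" is an assumed baseline
# chosen because it reproduces the displayed +31.7% lift.
with_interview = 47.0
without_interview = 15.3  # assumption, not shown in the dashboard
lift = with_interview - without_interview
print(f"Interview lift: +{lift:.1f} points")  # +31.7
```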

Statute-Specific Performance

§101: 35.3% (-4.7% vs TC avg)
§103: 41.8% (+1.8% vs TC avg)
§102: 10.5% (-29.5% vs TC avg)
§112: 11.7% (-28.3% vs TC avg)

Tech Center average figures are estimates. Based on career data from 175 resolved cases.
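The per-statute deltas are internally consistent with a single Tech Center average estimate; a quick check (the 40.0% figure is inferred from the deltas themselves, not stated in the dashboard):

```python
# Verify each "vs TC avg" delta equals the statute rate minus a single
# 40.0% Tech Center average estimate (inferred from the deltas, not stated).
TC_AVG_ESTIMATE = 40.0  # assumption implied by all four deltas

rates = {"§101": (35.3, -4.7), "§103": (41.8, 1.8),
         "§102": (10.5, -29.5), "§112": (11.7, -28.3)}

for statute, (rate, delta) in rates.items():
    # Round to one decimal to avoid floating-point noise in the comparison.
    assert round(rate - TC_AVG_ESTIMATE, 1) == delta
    print(f"{statute}: {rate}% ({delta:+.1f} vs TC avg)")
```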

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Status of the Claims

The pending claims in the present application are claims 1-21 as presented in the Amendment filed 01 December 2025.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 02 October 2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the IDS is being considered by the examiner.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 3, 4, 6-8, 10, 11, 13-15, 17, 18, 20, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over WIPO Int’l Pub. No. 2021/181095 A1 to Yusuf et al. (international filing date of 10 March 2021, hereinafter referred to as “Yusuf”), in view of U.S. Pat. App. Pub. No. 2022/0147944 A1 to Kendall et al. (hereinafter referred to as “Kendall”), further in view of U.S. Pat. App. Pub. No. 2019/0394289 A1 to Lehrian et al. (hereinafter referred to as “Lehrian”), and further in view of U.S. Pat. App. Pub. No. 2020/0044996 A1 to Johnson et al. (hereinafter referred to as “Johnson”).

Regarding independent claim 1, Yusuf discloses the following limitations:

“A computer-implemented method comprising: ...” - Yusuf discloses, “FIG. 1 schematically shows a concierge network or platform 100 in which the method and system for assisting agents to provide personalized services can be implemented. A platform 100 may include one or more user devices 101-1, 101-2, 101-3, a server 120, an agent responder system 121, one or more third-party systems 130, and a database 111, 123” (para. [0041]).

“... receiving a set of messages exchanged between the member and the representative, wherein the set of messages were exchanged via a communication interface associated with the member and the representative; ...” - See the aspects of Yusuf that have been mentioned above. Yusuf also discloses, “In some embodiments, a user (e.g., agent) 103-1, 103-2 may be associated with one or more user devices 101-1, 101-2, 101-3. In some cases, a user (e.g., agent) may communicate with customers using a user device. For example, the user 103-1 may receive one or more customer requests, receive one or more auto-generated recommendations, edit and customize recommendations or proposals within an agent responder user interface rendered on the user device 101-1.
The user 103-1 may also communicate with the customer via an instant communication channel running on the user device 101-1” (para. [0047]), and “network 110 may be a network that is configured to provide communication between the various components illustrated in FIG. 1. The network may be implemented, in some embodiments, as one or more networks that connect devices and/or components in the network layout for allowing communication between them. For example, user device 101-1, 101-2, 101-3 third-party system 130, server 120, agent responder system 121, and database 111, 123 may be in operable communication with one another over network 110” (para. [0066]). The agents obtaining the additional information via in-app messaging or communications with customers, wherein the messaging or communications take place via communication channels and the network between devices of the customers and the agents, in Yusuf, reads on the recited limitation.

The combination of Yusuf and Kendall (hereinafter referred to as “Yusuf/Kendall”) teaches limitations below of independent claim 1 that do not appear to be disclosed in their entirety by Yusuf:

“... transmitting, in real-time a request for additional information associated with one or more family members of the member, wherein the request is automatically generated based on the set of messages; ...” - Kendall discloses, “The chat bot, functioning as a virtual assistant, is programmed to 'reach out' to clients and ask them questions that relate to major life events. Based on the client answers the computer system 1 records the client against one or more demographics. Examples of such questions are whether the client is married, has children or has been to college/university, etc. Based on the client answers, the system 1 builds a profile for each client. For communications with each client the virtual assistant accesses their profile and tailors communications based on the information there. For example the virtual assistant generates an action plan for each client and adopts a different course of communication depending on the demographic that client is in. The action plans may involve the system generating communications about the client preparing for a new job, about getting ready to meet someone's family for the first time or about attending their child's wedding” (para. [0089]). The chat bot reaching out, during chat communications with clients, with questions relating to major life events like marriage, children, weddings of children, and the like, wherein the question is generated by the chat bot based on profiles built by prior communications, in Kendall, when applied in the context of the operations of the system and chat features of Yusuf, reads on the recited limitation.

“... receiving the additional information; ...” - See the aspects of Kendall that have been cited above. Receiving answers to the questions, in Kendall, reads on the recited limitation.

“... applying a machine learning algorithm to the set of messages and the additional information to identify a set of tasks, wherein the machine learning algorithm identifies the set of tasks as other messages exchanged between the member and the representative are continually received; ...” - See the aspects of Yusuf and Kendall that have been mentioned above. Yusuf also discloses, “generate, using a first machine learning algorithm trained model, one or more proposals based on request data related to a service request” (para. [0006]), “The provided systems may employ artificial intelligence techniques to analyze customer request to extract data points, triage the requests based on the extracted data points, generate recommended proposals with editable fields and insight data extracted from customer feedback, and guide human assistant to customize the recommended proposals in an optimized flow. In some cases, personalized feedback survey may also be generated using artificial intelligence techniques. Artificial intelligence, including machine learning algorithms, may be used to train a predictive model for predicting a customer intent, generating customizable proposals and/or personalized survey, and various other functionalities as described above” (para. [0036]), and “The processed customer data and customer feedback data may be used to form at least part of the input feature vector to train a predictive model” (para. [0060]). Applying the machine learning algorithms to data points extracted from customer communications to generate recommended proposals for activities or events, in Yusuf, reads on the recited “applying a machine learning algorithm to the set of messages ... to identify a set of tasks, wherein the machine learning algorithm identifies the set of tasks as other messages exchanged between the member and the representative are continually received” limitation. The communications including the answers to questions, in Kendall, reads on the recited limitation. Kendall discloses a “chat bot, functioning as a virtual assistant” (para. [0089]), similar to the claimed invention and to Yusuf. It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the system operations and chat features, of Yusuf, to include the question and answer aspects of the chat bot, in Kendall, because doing so helps to build profiles for generating tailored action plans, per Kendall (para. [0089]).

The combination of Yusuf, Kendall, and Lehrian (hereinafter referred to as “Yusuf/Kendall/Lehrian”) teaches limitations below of independent claim 1 that do not appear to be taught in their entirety by Yusuf/Kendall:

“...
predicting a task of the set of tasks that is most likely to be delegated by the member to the representative, wherein the machine learning algorithm continues to process the other messages to identify additional tasks that are likely to be delegated by the member; ...” - See the aspects of Yusuf that have been mentioned above. Yusuf also discloses, “The agent responder system 121 may be configured to assist agents to provide concierge-type services (e.g., making dinner reservations, purchasing event tickets, making travel arrangements, etc). The agent responder system 121 may implement one or more trained predictive models to assist human agents throughout various stages such as customer request analysis and triage, recommendations, proposal, booking, fulfillment tracking and post-fulfillment stage (e.g., feedback survey generation and analysis)” (para. [0042]). FIG. 7 of Yusuf shows a continual stream of received messages requiring replies, and different activities associated with each (left side of FIG. 7). Performing customer requests analysis and triage to predict proposals regarding tasks that the users might perform for customers, wherein machine learning is used on continual incoming messages to generate requests and proposals, in Yusuf, reads on the recited “predicting a task of the set of tasks that is ... to be delegated by the member to the representative, wherein the machine learning algorithm continues to process the other messages to identify additional tasks that are likely to be delegated by the member” limitation. Lehrian discloses, “priority score may quantify a priority associated with the task and/or the user who may potentially be assigned performance of the task. A priority score may indicate a higher priority for tasks that have relatively small timing windows, approaching deadlines, and the like. In some cases, the type of task may be factored into a priority score. That is, if the task was directed to picking up diabetic medication for the user 304, the priority score for the task may be higher than if the task was directed to picking up flowers” and “relevance scores may indicate whether a user’s current, planned, and/or inferred activities are relevant to the task (indicating a greater likelihood that the user could/would in fact perform the task), which priority scores may indicate some urgency with respect to performing the tasks” (para. [0050]), and “At 320, the user who will be chosen to perform the task may be selected” (para. [0051]). A task having a higher priority score and lower relevance score than others, wherein the task is to be performed for the user, in Lehrian, reads on the recited “task of the set of tasks that is most likely to be delegated by the member to the representative” limitation.

“... transmitting the predicted task, wherein, when a computing device associated with the member receives the predicted task, the computing device automatically launches a task-specific interface to display the predicted task and corresponding task data, and wherein the computing device automatically launches the task-specific interface without interaction from the representative; ...” - See the aspects of Yusuf that have been mentioned above. Yusuf also discloses, “The customer user interface (UI) module 211 may be configured for representing and delivering proposals, establishing instant communications with virtual or human agent, and providing a payment platform for conducting various transactions. In some cases, the customer interface may permit a customer submit a request with simple user input (e.g., text message). The customer interface (UI) module 211 may provide a graphical user interface (GUI) that can be integrated into other applications, or via any suitable communication channels (e.g., email, Slack, SMS) for presenting the feedback survey, proposals or messages from a virtual/human agents” (para. [0087]), “After the proposals are completed by the agent, one or more proposals may be delivered to the customer via the customer interface. A customer may select a proposal to proceed with (operation 601)” (para. [0106]), and “FIG. 14 shows example of graphical user interfaces allowing a customer to provide simple input for requesting a service. FIG. 14B shows an example of graphical user interface for a customer to make a reservation instantly. FIG. 15 shows a graphical user interface displaying proposals to a customer. The proposals are generated using the methods as described above. FIG. 16A shows an example of graphical user interface for a customer to perform instant in-app chatting with an agent” (para. [0119]). Delivering proposals to customers, wherein, when devices associated with the customers receive the proposals, the devices provide the GUI for displaying the proposals and related data, wherein proposal-specific interface screens are presented by the system without (e.g., prior to) interactions with agents (see FIG. 14A), in Yusuf, reads on the recited limitation. Lehrian discloses “improving configuration aspects and reminders related to tasks associated with a user” (para. [0002]), similar to the claimed invention and to Yusuf/Kendall. It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the tasking processes, of Yusuf/Kendall, to include consideration of priority and relevance scores, as in Lehrian, so tasks of more importance are emphasized, and can be taken care of more efficiently, per Lehrian (see paras. [0033] and [0049]-[0051]).

The combination of Yusuf, Kendall, Lehrian, and Johnson (hereinafter referred to as “Yusuf/Kendall/Lehrian/Johnson”) teaches limitations below of independent claim 1 that do not appear to be taught in their entirety by Yusuf/Kendall/Lehrian:

“...
configuring the task-specific interface to be distinct from a general communications channel between the member and the representative and from other task-related communications channels between the member and the representative, thereby reducing transmission of messages associated with the predicted task in the other task-related communications channels and ensuring communications within the task-specific interface are directed to the predicted task; ...” - Johnson discloses, “In an aspect, in response to identification of a sub-conversation that develops within a primary conversation topic, sub-dialogue component 108 can generate a sub-dialogue messaging area (e.g., a different graphical display area, window, panel, tab, etc.) that is separated from the initial or primary messaging area. In particular, sub-dialogue component can take a sub-conversation identified by conversation platform 102 and render it in a manner that distinguishes it from the primary conversation topic. For example, sub-dialogue component 108 can generate a separate sub-dialogue messaging area for each sub-conversation and arrange them in columns side-by-side. In another example, sub-dialogue component 108 can show each sub-conversation in a different window that can be resized and repositioned” (para. [0039]). Separating sub-conversations out from primary conversations in different windows or other messaging areas, in Johnson, when applied in the context of the interfaces of Yusuf, reads on the recited limitation.

“... tracking user interactions performed via the task-specific interface; ...” - Yusuf also discloses, “the personalized feedback survey may also be presented to the customer via the customer interface 213 provided by the customer interface module 211” (para. [0082]), “The agent responder interface module 207 may be configured for agents to interact with customer requests, recommendations generated by the system, proposals, tracked fulfillment information, notifications, and various other data as described above. The agent responder user interface (UI) module may provide a graphical user interface (GUI) that can be integrated into other applications (e.g., customer application), or via any suitable communication channels (e.g., email, Slack, SMS) for delivering notifications. A user (e.g., agent) may preview, edit, save, create recommendation or proposals via the GUI” (para. [0085]), and “A customer may provide user input or feedback via the GUI to interact with the system. For example, the user feedback may be provided via the graphical user interface (GUI)” (para. [0088]). Receiving interactions from agents and customers via use of the GUIs, in Yusuf, reads on the recited limitation.

“... modifying the task data based on the user interactions tracked via the task-specific interface; and ...” - See the aspects of Yusuf that have been mentioned above. Previewing, editing, saving, and creating proposals via the GUIs, in Yusuf, reads on the recited limitation.

“... transmitting the modified task data associated with the predicted task, wherein, when the computing device receives the modified task data, the task-specific interface is updated to display the modified task data instead of the task data.” - See the aspects of Yusuf that have been mentioned above. Yusuf also discloses, “One or more fields of the recommended proposal can be edited and further customized by an agent as described above. A preview of the proposal may be displayed on a graphical user interface along with editable fields of the proposal. In some cases, at least a portion of the proposal content (e.g., images, texts, etc.) is retrieved from the database 520 for generating a preview of the proposal. After the agent completes editing a selected proposal 513, the proposal may be pushed to the customer interface module and displayed to the customer” (para. [0105]). Generating and providing editable fields of proposals, previews of edited proposals, and pushing edited proposals for display on GUIs, in Yusuf, reads on the recited limitation. Johnson discloses, “systems and methods that automatically identify and extract or distinguish sub-conversations within a live chat session” (para. [0001]), similar to the claimed invention and to Yusuf/Kendall/Lehrian. It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the interfaces of Yusuf, to include the separate windows or messaging areas, as in Johnson, to prevent conversations from breaking down quickly and to develop long-form discussions, as taught by Johnson (see paras. [0018] and [0019]).

Regarding claim 3, Yusuf/Kendall/Lehrian/Johnson teaches the following limitations:

“The computer-implemented method of claim 1, wherein the representative is assigned to the member based on vectors of similarity between a member profile associated with the member and the representative.” - Yusuf discloses, “methods provided herein may utilize customer intent extracted by clustering analysis as part of the labeled dataset” (para. [0037]), “The triage stage may comprise allocating (operation 305) the requests to one or more agents and/or prioritized (operation 307) the requests based on the extracted request data points. In some cases, the customer requests may be assigned to different agents based on the type of request, date of the request, specific request details and the agent profile (e.g., availability of the agent, specialties of the agent, location of the agent, etc.). For instance, the allocation operation may be performed by an agent profile matching unit which is invoked to determine an agent via profile matching.
For example, if the extracted request data points indicate the type of request is travel, a travel agent may be selected. The profile matching may be conducted in one or more fields (e.g., request type, location, time, availability of agent, etc.) based on the extracted data points. In some cases, the allocation operation may also determine that an agent has a domain or service matching an estimated customer intent. The user intent may be extracted using a machine learning algorithm trained model” (para. [0091]), “the customer data (e.g., demographic data, purchase data, social graph, etc.) may be augmented with insight data. The customer data may include customer profile data as described elsewhere herein. For instance, the customer data may include customer profile information (e.g., name, address, spouse, age, gender, activation date, etc.), user preferences (questionnaire results, user feedback collected during registration or subscription), App Usage (content viewed / engaged with), Request History (history of requests and statuses), Fulfillment History (all bookings made through the platform), Transaction History (transactional data through platform), and various others” (para. [0103]), and “the customer data may be augmented with insight data extracted from the personalized feedback survey or an expert input in a particular field. For instance, the insight data may be an implicit user preference extracted from the personalized feedback survey or an expert input tailored to the customer and the service. In some cases, the implicit user preference and/or the insight data may be extracted from the feedback survey and tracked fulfillment data using a machine learning trained model” (para. [0104]). Assigning agents to customers based on extracted request data points of the customers, or estimated customer intents, being matched to agent profiles, in Yusuf, reads on the recited limitation.

Regarding claim 4, Yusuf/Kendall/Lehrian/Johnson teaches the following limitations:

“The computer-implemented method of claim 1, further comprising: generating one or more experience recommendations for experiences offerable to the member, wherein the one or more experience recommendations are generated based on a member profile associated with the member; and ...” - Yusuf discloses, “user of the provided system may be an individual human agent, or the user may be an entity (e.g., business, travel organization, etc.), a group of human agents who are responding to and fulfilling customer requests to provide concierge-type services (e.g., making dinner reservations, purchasing event tickets, making travel arrangements etc.) and highly personalized services tailored (e.g., favorite seat in a restaurant, temperature in a hotel room, special local events such as swim with orca whales, etc.)” (para. [0034]), “the one or more database systems 123, 111, which may be configured for storing or retrieving relevant data. Relevant data may comprise user data (e.g., agent ID, specialties, expertise/skills, training credentials, ratings, availability, geolocation, service category/field, etc.), customer profile data (e.g., customer preferences, personal data such as identity, age, gender, contact information, demographic data, ratings, subscription, member redemption of loyalty points, etc.), augmented customer data records (e.g., labeled with additional data related to customer intent, service type, expert insight, customer segmentation, etc.), historical data (e.g., social graph, transportation history, transportation subscription plan data, purchase or transaction history, loyalty programs, etc.)” (para. [0069]), “the customer input data 501 received from the customer interface may be analyzed and the request may be allocated to a matching agent. One or more recommended proposals 503 may be generated and presented to an agent via the agent responder interface. As described above, the recommended proposals may be generated based on customer data” (para. [0102]), “The augmented customer database 510 may store customer data and augmented data. In some embodiments, the customer data (e.g., demographic data, purchase data, social graph, etc.) may be augmented with insight data. The customer data may include customer profile data as described elsewhere herein. For instance, the customer data may include customer profile information (e.g., name, address, spouse, age, gender, activation date, etc.), user preferences (questionnaire results, user feedback collected during registration or subscription), App Usage (content viewed / engaged with), Request History (history of requests and statuses), Fulfillment History (all bookings made through the platform), Transaction History (transactional data through platform)” (para. [0103]). Generating proposals for trips offerable to customers, wherein the proposals are generated based on customer data, including customer profiles, in Yusuf, reads on the recited limitation.

“... providing the one or more experience recommendations, wherein when the one or more experience recommendations are provided, the representative presents the one or more experience recommendations to the member.” - See the aspects of Yusuf that have been mentioned above. Pushing the proposals from the agents to the customers, in Yusuf, reads on the recited limitation.

Regarding claim 6, Yusuf/Kendall/Lehrian/Johnson teaches the following limitations:

“The computer-implemented method of claim 1, wherein the machine learning algorithm includes a Natural Language Processing (NLP) algorithm.” - Yusuf discloses, “NLP engine may be used to process the input data (e.g., input text captured from chatbot) and produce a structured output including the linguistic information. The NLP engine may employ any suitable NLP techniques such as a parser to perform parsing on the input text” (para. [0090]).
Regarding claim 7, Yusuf/Kendall/Lehrian/Johnson teaches the following limitations:

“The computer-implemented method of claim 1, further comprising: automatically processing a member profile associated with the member in real-time to populate one or more data fields associated with the predicted task, wherein the one or more data fields correspond to information provided during an onboarding of the member.” - Yusuf discloses, “the one or more database systems 123, 111, which may be configured for storing or retrieving relevant data. Relevant data may comprise user data (e.g., agent ID, specialties, expertise/skills, training credentials, ratings, availability, geolocation, service category/field, etc.), customer profile data (e.g., customer preferences, personal data such as identity, age, gender, contact information, demographic data, ratings, subscription, member redemption of loyalty points, etc.), augmented customer data records (e.g., labeled with additional data related to customer intent, service type, expert insight, customer segmentation, etc.), historical data (e.g., social graph, transportation history, transportation subscription plan data, purchase or transaction history, loyalty programs, etc.)” (para. [0069]), “manage real-time tasks” (para. [0094]), “The machine learning algorithm trained model may output one or more recommended proposals to the agent for selection (operation 421) or further customization” (para. [0096]), “proposal may be presented in a hierarchical data structure such that the editable fields are arranged in hierarchical levels” (para. [0098]), and “The augmented customer database 510 may store customer data and augmented data. In some embodiments, the customer data (e.g., demographic data, purchase data, social graph, etc.) may be augmented with insight data. The customer data may include customer profile data as described elsewhere herein. For instance, the customer data may include customer profile information (e.g., name, address, spouse, age, gender, activation date, etc.), user preferences (questionnaire results, user feedback collected during registration or subscription), App Usage (content viewed / engaged with), Request History (history of requests and statuses)” (para. [0103]). The machine learning algorithm using customer data, including the customer profile data, in real-time, to generate fields for proposals, wherein the fields relate to customer data obtained at registration of the customer, in Yusuf, reads on the recited limitation.

Regarding claims 8, 10, 11, 13, and 14, while the claims are of different scope relative to claims 1, 3, 4, 6, and 7, the claims recite limitations similar to those recited by claims 1, 3, 4, 6, and 7. As such, the rationales applied to reject claims 1, 3, 4, 6, and 7 also apply for purposes of rejecting claims 8, 10, 11, 13, and 14. Any limitations recited by claims 8-14 that do not have a counterpart in claims 1, 3, 4, 6, and 7, such as the recited “system, comprising: one or more processors; and memory storing thereon instructions that, as a result of being executed by the one or more processors, cause the system to” of independent claim 8, are taught by Yusuf/Kendall/Lehrian/Johnson (see, e.g., para. [0015] of Yusuf). Claims 8, 10, 11, 13, and 14 are, therefore, also rejected under 35 U.S.C. 103 as obvious in view of Yusuf/Kendall/Lehrian/Johnson.

Regarding claims 15, 17, 18, 20, and 21, while the claims are of different scope relative to claims 1, 3, 4, 6, and 7 and to claims 8, 10, 11, 13, and 14, the claims recite limitations similar to those recited by claims 1, 3, 4, 6-8, 10, 11, 13, and 14. As such, the rationales applied to reject claims 1, 3, 4, 6-8, 10, 11, 13, and 14 also apply for purposes of rejecting claims 15, 17, 18, 20, and 21.
Claims 15, 17, 18, 20, and 21 are, therefore, also rejected under 35 USC 103 as obvious in view of Yusuf/Kendall/Lehrian/Johnson. Claims 2, 5, 9, 12, 16, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Yusuf, in view of Kendall, further in view of Lehrian, further in view of Johnson, and further in view of U.S. Pat. No. 10,467,122 to Doyle (patent date of 04 November 2019, hereinafter referred to as “Doyle”). Regarding claim 2, the combination of Yusuf, Kendall, Lehrian, Johnson, and Doyle (hereinafter referred to as “Yusuf/Kendall/Lehrian/Johnson/Doyle”) teaches limitations below that do not appear to be taught in their entirety by Yusuf/Kendall/Lehrian/Johnson: “The computer-implemented method of claim 1, wherein predicting includes ranking the set of tasks and selecting the predicted task from the ranked set of tasks, and wherein the method further comprising: ...” - Doyle teaches, “A class for the inquiry may correspond to a plurality of actions that may be optionally ranked according to one or more criteria. For example, recommended actions may be ranked based on the percentage or number of similar users' inquiries (e.g., inquiries belonging to the same class) that have been reflected as helpful or well accepted. Other criteria such as domain expert reviews, etc. may also be used to rank the recommended actions to reflect the corresponding significance of these recommendations in response to users' inquiries. For example, recommended actions may be ranked based on a weighted combination of a plurality of criteria (e.g., a percentage or a number of identical or similar inquiries in the class, etc.)” (col. 35, ll. 3-15) and “The recommended actions or the class thereof described so far may be selected from a plurality of actions associated with a class” (col. 35, ll. 16-18). Ranking actions and then selecting recommended actions therefrom, in Doyle, reads on the recited limitation. “... 
receiving a request to generate a proposal for a task associated with the ranked set of tasks; ...” - See the aspects of Yusuf and Doyle that have been mentioned above. Yusuf also discloses, “Customer request data may be captured (operation 301). The customer request data may be received via a customer interface (e.g., chat bot, instant messaging, etc.) as described above” (para. [0090]). Receiving the customer request data for generating proposals involving performance of tasks, in Yusuf, reads on the recited “receiving a request to generate a proposal for a task associated with the ... set of tasks” limitation. Recommended actions being ranked, in Doyle, reads on the recited “ranked” limitation. “... providing a proposal template corresponding to a task type, wherein the task type corresponds to the task associated with the set of tasks, wherein the proposal template is provided with a set of data fields, and wherein the set of data fields are provided according to a member profile associated with the member; and ...” - Yusuf discloses, “The one or more recommended proposals may be provided to an agent via an interactive graphical user interface such that the agent is prompted to edit one or more fields of the proposals. The proposal may include information such as the details of the service (e.g., time, location, hotel room, restaurant seat, etc.) and personalized insights (e.g., recommended dishes, events). The user (e.g., agent) may be permitted to modify one or more fields of the recommended proposal” (para. [0097]), and “In some cases, proposal may be presented in a hierarchical data structure such that the editable fields are arranged in hierarchical levels. The hierarchical data structure may be service-specific” (para. [0098]). Providing proposals in the form of an interactive GUI with arranged, editable fields having information about the proposals, and wherein said fields are service-specific, in Yusuf, reads on the recited limitation. “... 
presenting a completed proposal, wherein the completed proposal is presented as a result of receiving the proposal template, and wherein when the completed proposal is presented, member interaction with the completed proposal is monitored to identify revisions to the proposal template.” - See the aspects of Yusuf that have been mentioned above. Yusuf also discloses, “In some cases, additional information may be obtained via in-app messaging or communication with the customer to generate different proposals (operation 417). For instance, customer may provide input in response to an initial proposal. The additional customer input may be analyzed by the NLP engine to extract opinion such as deny or not satisfied, or request data points such as additional requirement, and such additional customer input data may be used to produce a new proposal” (para. [0100]). Presenting the customer with the initial proposal, resulting from the completing of fields of the initial proposal by the agent, and wherein after the initial proposal is presented, customer inputs including opinions or additional requirements for the initial proposal are received via the GUIs to modify the fields to make the new proposal, in Yusuf, reads on the recited limitation. Doyle teaches “capturing and classification of digital data and providing recommendations in real-time” (abstract), similar to the claimed invention and to Yusuf/Kendall/Lehrian/Johnson. It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the proposals of Yusuf/Kendall/Lehrian/Johnson, to be ranked, as in Doyle, to highlight those that were most helpful or well accepted, as taught by Doyle (see col. 35, ll. 3-15). 
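The weighted-combination ranking that Doyle describes (col. 35, ll. 3-15) can be illustrated with a short sketch. This is not code from any cited reference; the criterion names, weights, and action data below are hypothetical.

```python
# Hypothetical sketch of Doyle-style ranking: score each recommended action
# by a weighted combination of criteria, rank the actions by that score,
# and select the highest-ranked action.

def rank_actions(actions, weights):
    """Return actions sorted best-first by weighted criterion scores."""
    def weighted_score(action):
        return sum(w * action["scores"][criterion]
                   for criterion, w in weights.items())
    return sorted(actions, key=weighted_score, reverse=True)

# Example criteria (hypothetical): fraction of similar inquiries where the
# action was marked helpful, and a domain-expert review score.
actions = [
    {"name": "rebook hotel", "scores": {"similar_inquiries": 0.6, "expert_review": 0.9}},
    {"name": "suggest restaurant", "scores": {"similar_inquiries": 0.8, "expert_review": 0.7}},
]
weights = {"similar_inquiries": 0.7, "expert_review": 0.3}

ranked = rank_actions(actions, weights)
selected = ranked[0]  # highest-ranked recommended action
```

Selecting `ranked[0]` mirrors the claimed "selecting the predicted task from the ranked set of tasks"; tie-breaking and score normalization are implementation choices the cited passages leave open.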
Regarding claim 5, Yusuf/Kendall/Lehrian/Johnson/Doyle teaches the following limitations: “The computer-implemented method of claim 1, wherein predicting includes ranking the set of tasks and selecting the predicted task from the ranked set of tasks, and wherein the method further comprising: ...” - Yusuf discloses, “The machine learning algorithm trained model may output one or more recommended proposals to the agent for selection (operation 421)” (para. [0096]). Selecting a proposal from multiple recommended proposals, in Yusuf, reads on the recited “wherein predicting includes ... selecting the predicted task from the ... set of tasks, and wherein the method further comprising” limitation. Doyle teaches, “A class for the inquiry may correspond to a plurality of actions that may be optionally ranked according to one or more criteria. For example, recommended actions may be ranked based on the percentage or number of similar users' inquiries (e.g., inquiries belonging to the same class) that have been reflected as helpful or well accepted. Other criteria such as domain expert reviews, etc. may also be used to rank the recommended actions to reflect the corresponding significance of these recommendations in response to users' inquiries. For example, recommended actions may be ranked based on a weighted combination of a plurality of criteria (e.g., a percentage or a number of identical or similar inquiries in the class, etc.)” (col. 35, ll. 3-15) and “The recommended actions or the class thereof described so far may be selected from a plurality of actions associated with a class” (col. 35, ll. 16-18). The ranking of actions, in Doyle, reads on the recited “ranking the set of tasks” and “from the ranked set of tasks” limitations. The rationales for combining the teachings of the cited references, from the rejections of claims 1 and 2, also apply for purposes of this rejection of claim 5. “... 
detecting input to one or more data fields corresponding a task associated with the ranked set of tasks; and ...” - Yusuf discloses, “receive a user input for modifying one or more fields of the one or more proposals via a first graphical user interface” (para. [0006]), and “The proposal may include editable objective fields and subjective fields” (para. [0099]). Receiving a user input to objective and/or subjective fields corresponding to one or more proposals, in Yusuf, reads on the recited “detecting input to one or more data fields corresponding a task associated with the ... set of tasks” limitation. See the aspects of Doyle that have been mentioned above. The ranking of actions, in Doyle, reads on the recited “ranked” limitation. “... automatically updating a member profile associated with the member in real-time to incorporate the input to the one or more data fields.” - See the aspects of Yusuf that have been mentioned above. Yusuf also discloses, “manage real-time tasks” (para. [0094]), “additional information may be obtained via in-app messaging or communication with the customer to generate different proposals (operation 417). For instance, customer may provide input in response to an initial proposal. The additional customer input may be analyzed by the NLP engine to extract opinion such as deny or not satisfied, or request data points such as additional requirement, and such additional customer input data may be used to produce a new proposal” (para. [0100]), “The augmented customer database 510 may store customer data and augmented data. In some embodiments, the customer data (e.g., demographic data, purchase data, social graph, etc.) may be augmented with insight data. The customer data may include customer profile data as described elsewhere herein. 
For instance, the customer data may include customer profile information (e.g., name, address, spouse, age, gender, activation date, etc.), user preferences (questionnaire results, user feedback collected during registration or subscription), App Usage (content viewed / engaged with), Request History (history of requests and statuses), Fulfillment History (all bookings made through the platform), Transaction History (transactional data through platform)” (para. [0103]), and “data generated during the process such as booking information, transaction information, feedback data, and insight may be stored in the augmented customer database 510. At least a portion of the data can be used to update and continually train one or more predictive models of the system as described elsewhere herein” (para. [0108]). Updating the customer data, including customer profile information, during processing involving presenting completed fields in proposals, receiving customer insights and feedback, and incorporating them into the customer data, in Yusuf, reads on the recited limitation. The rationales for combining the teachings of the cited references, from the rejection of claim 2, also apply to this rejection of claim 5. Regarding claims 9 and 12, while the claims are of different scope relative to claims 2 and 5, the claims recite limitations similar to those recited by claims 2 and 5. As such, the rationales applied to reject claims 2 and 5 also apply for purposes of rejecting claims 9 and 12. Claims 9 and 12 are, therefore, also rejected under 35 USC 103 as obvious in view of Yusuf/Kendall/Lehrian/Johnson/Doyle. Regarding claims 16 and 19, while the claims are of different scope relative to claims 2 and 5, and to claims 9 and 12, the claims recite limitations similar to those recited by claims 2, 5, 9, and 12. As such, the rationales applied to reject claims 2, 5, 9, and 12 also apply for purposes of rejecting claims 16 and 19. 
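The real-time profile update that Yusuf describes (paras. [0100], [0103], [0108]) can be sketched as follows; the profile fields and function names are hypothetical illustrations, not taken from Yusuf.

```python
# Hypothetical sketch of the Yusuf-style flow: input detected in a
# proposal's data fields is incorporated into the member (customer)
# profile immediately, so subsequent proposals draw on the updated data.

member_profile = {
    "name": "A. Customer",
    "preferences": {},       # e.g., questionnaire results, feedback
    "request_history": [],   # history of requests and statuses
}

def on_field_input(profile, field, value):
    """Incorporate an edited proposal field into the profile in real time."""
    profile["preferences"][field] = value
    profile["request_history"].append({"field": field, "value": value})
    return profile

# An agent or member edits two proposal fields; the profile updates at once.
on_field_input(member_profile, "dietary", "vegetarian")
on_field_input(member_profile, "seating", "window table")
```

In Yusuf the updated data would also feed continual retraining of the predictive models (para. [0108]); that step is omitted here.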
Claims 16 and 19 are, therefore, also rejected under 35 USC 103 as obvious in view of Yusuf/Kendall/Lehrian/Johnson/Doyle. Response to Arguments On pp. 11 and 12 of the Amendment, the applicant requests reconsideration and withdrawal of the claim rejections under 35 USC 103. The applicant’s arguments have been considered but are moot because the new grounds of rejection do not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. See the 35 USC 103 section above and the rejection’s additional reliance on the cited Kendall and Lehrian references. Conclusion Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to THOMAS Y. HO, whose telephone number is (571)270-7918. The examiner can normally be reached Monday through Friday, 9:30 AM to 5:30 PM Eastern. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. 
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jerry O'Connor, can be reached at 571-272-6787. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /THOMAS YIH HO/Primary Examiner, Art Unit 3624

Prosecution Timeline

Mar 30, 2022: Application Filed
May 06, 2023: Non-Final Rejection — §103
Sep 06, 2023: Interview Requested
Oct 06, 2023: Examiner Interview Summary
Oct 06, 2023: Applicant Interview (Telephonic)
Oct 10, 2023: Response Filed
Jan 13, 2024: Final Rejection — §103
Feb 27, 2024: Interview Requested
Apr 01, 2024: Examiner Interview Summary
Apr 22, 2024: Request for Continued Examination
Apr 24, 2024: Response after Non-Final Action
Jul 27, 2024: Non-Final Rejection — §103
Sep 18, 2024: Interview Requested
Oct 15, 2024: Applicant Interview (Telephonic)
Oct 15, 2024: Examiner Interview Summary
Oct 31, 2024: Response Filed
Jan 20, 2025: Final Rejection — §103
Jun 18, 2025: Interview Requested
Jul 02, 2025: Applicant Interview (Telephonic)
Jul 08, 2025: Examiner Interview Summary
Jul 24, 2025: Request for Continued Examination
Jul 30, 2025: Response after Non-Final Action
Sep 24, 2025: Non-Final Rejection — §103
Nov 10, 2025: Interview Requested
Nov 25, 2025: Examiner Interview Summary
Dec 01, 2025: Response Filed
Jan 29, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12572893: DECISION SUPPORT SYSTEM OF INDUSTRIAL COPPER PROCUREMENT (granted Mar 10, 2026; 2y 5m to grant)
Patent 12456126: SYSTEMS AND PROCESSES THAT AUGMENT TRANSPARENCY OF TRANSACTION DATA (granted Oct 28, 2025; 2y 5m to grant)
Patent 12406215: SCALABLE EVALUATION OF THE EXISTENCE OF ONE OR MORE CONDITIONS BASED ON APPLICATION OF ONE OR MORE EVALUATION TIERS (granted Sep 02, 2025; 2y 5m to grant)
Patent 12393902: CONTINUOUS AND ANONYMOUS RISK EVALUATION (granted Aug 19, 2025; 2y 5m to grant)
Patent 12367438: Parallelized and Modular Planning Systems and Methods for Orchestrated Control of Different Actors (granted Jul 22, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 7-8
Grant Probability: 15%
Grant Probability with Interview: 47% (+31.7%)
Median Time to Grant: 3y 10m
PTA Risk: High
Based on 175 resolved cases by this examiner. Grant probability derived from career allow rate.
