Prosecution Insights
Last updated: April 19, 2026
Application No. 18/428,323

ADJUSTING INCIDENT PRIORITY

Final Rejection: §101, §102
Filed: Jan 31, 2024
Examiner: LEMIEUX, JESSICA
Art Unit: 3626
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: PagerDuty Inc.
OA Round: 2 (Final)
Grant Probability: 66% (Favorable)
OA Rounds: 3-4
To Grant: 4y 0m
With Interview: 89%

Examiner Intelligence

Career Allow Rate: 66% (297 granted / 452 resolved), +13.7% vs TC avg (above average)
Interview Lift: +23.4% on resolved cases with interview
Avg Prosecution: 4y 0m typical timeline; 22 applications currently pending
Career History: 474 total applications across all art units

Statute-Specific Performance

§101: 41.2% (+1.2% vs TC avg)
§103: 27.9% (-12.1% vs TC avg)
§102: 9.3% (-30.7% vs TC avg)
§112: 10.4% (-29.6% vs TC avg)

Tech Center averages are estimates • Based on career data from 452 resolved cases
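The headline figures above can be reproduced with simple arithmetic. A minimal sketch (the variable names are mine, not the analytics tool's), assuming the allow rate is granted/resolved and each delta is a percentage-point difference against the Tech Center average:

```python
# Reproduce the headline examiner statistics from the raw counts above.
granted, resolved = 297, 452

# Career allow rate: granted cases as a share of resolved cases.
allow_rate = 100 * granted / resolved        # 65.7%, displayed rounded to 66%

# Applying the +23.4% interview lift recovers the "With Interview" figure.
with_interview = allow_rate + 23.4           # ~89.1%, displayed as 89%

# Statute-specific allowance rates with their deltas vs. the TC average.
statutes = {
    "101": (41.2, +1.2),
    "103": (27.9, -12.1),
    "102": (9.3, -30.7),
    "112": (10.4, -29.6),
}
# Subtracting each delta recovers the implied Tech Center average, which
# comes out to the same ~40% baseline for every statute.
implied_tc = {s: round(rate - delta, 1) for s, (rate, delta) in statutes.items()}

print(f"allow rate: {allow_rate:.1f}%, with interview: {with_interview:.1f}%")
print(implied_tc)
```

Note that all four statute deltas imply the same ~40% Tech Center baseline, consistent with a single TC-average reference line behind the chart.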

Office Action

Rejections under §101 and §102
Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

2. This Non-Final Office action is in response to the application filed on January 31, 2024. Claims 1-20 are pending.

Priority

3. Application 18/428,323 was filed on January 31, 2024.

Examiner Request

4. Should Applicant amend the claims, the Applicant is requested to indicate where in the specification there is support for the amendments. The purpose of this is to reduce potential 35 U.S.C. §112(a) or §112, first paragraph, issues that can arise when claims are amended without support in the specification. The Examiner thanks the Applicant in advance.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

5. Claims 16-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to nonstatutory subject matter. The claims do not fall within at least one of the four categories of patent-eligible subject matter because the broadest reasonable interpretation (BRI) of the "computer readable storage medium" encompasses signals per se. A claim whose BRI covers both statutory and non-statutory embodiments embraces subject matter that is not eligible for patent protection and is therefore directed to non-statutory subject matter. See MPEP 2106.03(II). It is suggested that claim 16 be amended to recite a "non-transitory" computer readable medium to overcome this rejection. Despite this analysis, the claims are reanalyzed under the full two-step process for purposes of compact prosecution.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.

Claims 1-15 are directed to a system, method, or product, each of which is one of the statutory categories of invention. (Step 1: YES.) Although claims 16-20 do not fall within one of the four statutory categories, they have nonetheless been included in the two-part analysis for purposes of compact prosecution and will be treated as if they fall into one of the statutory categories. (Step 1: YES.)

Claims 1, 10, and 16 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite adjusting priority levels of incidents. Claims 1, 10, and 16 (claim 1 being representative) recite the limitations of:

receiving, […], event data for one or more events;
generating, […], and based on the event data, one or more incident objects for one or more incidents, wherein the one or more incident objects include incident data including a priority level;
generating, […], an incident workflow for each of the one or more incident objects;
applying, […], […], a […] model to determine an adjusted priority level for each of the one or more incident objects, wherein the […] model is configured to receive a first […] prompt indicative of incident data included in the one or more incident objects;
receiving, […], the adjusted priority level for each of the one or more incident objects; and
updating, […], the incident workflow for each of the one or more incident objects with the adjusted priority level for each of the one or more incident objects.

These limitations, as drafted, recite a process that, under the broadest reasonable interpretation, covers certain methods of organizing human activity (i.e., commercial or legal interactions and managing personal behavior, including following rules or instructions) but for the recitation of generic computer components.
That is, other than reciting a computing system (claim 1), an application programming interface (claims 1, 10, and 16), a machine learning model and natural language prompt (claims 1, 10, and 16), a memory and one or more processors (claim 10), and a computer-readable storage medium and processor of a computing system (claim 16), the claimed invention amounts to commercial or legal interactions and managing personal behavior, including following rules or instructions. If a claim limitation, under its broadest reasonable interpretation, covers commercial or legal interactions and managing personal behavior or interactions between people but for the recitation of generic computer components (see MPEP 2106.04(a)(2)(II)), then it falls within the "certain methods of organizing human activity" grouping of abstract ideas. Accordingly, claims 1, 10, and 16 recite an abstract idea. (Step 2A, Prong 1: YES. The claims are abstract.)

Alternately, as drafted, the limitations recite a process that, under the broadest reasonable interpretation, covers performance of the limitations in the mind but for the recitation of generic computer components. That is, other than reciting a computing system (claim 1), an application programming interface (claims 1, 10, and 16), a machine learning model and natural language prompt (claims 1, 10, and 16), a memory and one or more processors (claim 10), and a computer-readable storage medium and processor of a computing system (claim 16), nothing in the claims precludes the steps from practically being performed in the mind. For example, but for the generic computer components recited above, the claims encompass receiving event data, generating an incident priority and workflow based on the event data, applying a model to adjust the priority level, receiving the adjusted priority level, and updating the workflow to reflect the adjusted priority level, as described in the identified abstract idea, supra.

If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the "Mental Processes" grouping of abstract ideas. Accordingly, claims 1, 10, and 16 recite an abstract idea. The identified types of abstract ideas are considered together as a single abstract idea for analysis purposes. (Step 2A, Prong 1: YES. The claims are abstract.)

This judicial exception is not integrated into a practical application. Claims 1, 10, and 16 recite the additional elements of a computing system (claim 1), an application programming interface (claims 1, 10, and 16), a machine learning model and natural language prompt (claims 1, 10, and 16), a memory and one or more processors (claim 10), and a computer-readable storage medium and processor of a computing system (claim 16). These additional elements are not described in detail by the Applicant and are recited at a high level of generality (i.e., generic computers performing generic computer functions), such that they amount to no more than mere instructions to apply the exception using generic computer components. Alternatively or in addition, the implementation of machine learning merely confines the use of the abstract idea to a particular technological environment or field of use (machine learning). MPEP 2106.04(d)(I) and MPEP 2106.05(A) indicate that merely "generally linking" the abstract idea to a particular technological environment or field of use cannot provide a practical application. Accordingly, these additional elements, when considered separately and as an ordered combination, do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Claims 1, 10, and 16 are directed to an abstract idea without a practical application. (Step 2A, Prong 2: NO. The additional claimed elements are not integrated into a practical application.)
Claims 1, 10, and 16 do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, when considered separately and as an ordered combination, they do not add significantly more (also known as an "inventive concept") to the exception. As discussed above with respect to integration of the abstract idea into a practical application, using the additional elements of a computing system (claim 1), an application programming interface (claims 1, 10, and 16), a machine learning model and natural language prompt (claims 1, 10, and 16), a memory and one or more processors (claim 10), and a computer-readable storage medium and processor of a computing system (claim 16) to perform the noted steps amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept ("significantly more"). Alternatively or in addition, the implementation of machine learning merely confines the use of the abstract idea to a particular technological environment or field of use (machine learning). MPEP 2106.04(d)(I) and MPEP 2106.05(A) indicate that merely "generally linking" the abstract idea to a particular technological environment or field of use cannot provide an inventive concept ("significantly more"). Accordingly, even when considered separately and as an ordered combination, nothing in the claims adds significantly more (i.e., an inventive concept) to the abstract idea. Thus, claims 1, 10, and 16 are not patent eligible. (Step 2B: NO. The claims do not provide significantly more.)

Dependent claims 2, 3, 11, and 17 are similarly rejected because they merely further narrow the same abstract idea of independent claims 1, 10, and 16 discussed above and hence are abstract for at least the reasons presented above. Claims 2, 3, 11, and 17 merely describe incident data and the priority level. Therefore, claims 2, 3, 11, and 17 are considered patent ineligible for the reasons given above.

Dependent claims 4-9, 12-15, and 18-20 recite limitations that further define the same abstract idea of independent claims 1, 10, and 16 discussed above. In addition, they recite the additional elements of a second natural language prompt (claims 4 and 18), the machine learning model (claims 5, 12, and 18), the computing system (claims 5-9, 12-15, and 19-20), the application programming interface (claims 8, 14, and 19), and the first natural language prompt (claims 8, 9, 14, 15, 19, and 20). These additional elements are again recited at a high level of generality (i.e., generic computers performing generic computer functions), such that they amount to no more than mere instructions to apply the exception using generic computer components. Even in combination, these additional elements do not integrate the abstract idea into a practical application and do not amount to significantly more than the abstract idea itself. Alternatively or in addition, the implementation of machine learning (natural language prompts) merely confines the use of the abstract idea to a particular technological environment or field of use (machine learning). MPEP 2106.04(d)(I) and MPEP 2106.05(A) indicate that merely "generally linking" the abstract idea to a particular technological environment or field of use cannot provide a practical application or significantly more. Therefore, claims 4-9, 12-15, and 18-20 are patent ineligible.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless - (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

6. Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Azmoon (US 2019/0227822).

As per claim 1, Azmoon teaches a method comprising: receiving, by a computing system, event data for one or more events (paragraph [0163]: The process illustrated by FIGS. 8A and 8B may be carried out by a computing device, such as computing device 100, a computational instance, such as computational instance 322, and/or a cluster of computing devices, such as server cluster 200; paragraph [0005]: obtaining, from a database, an incident record relating to a user… The incident record region may include the incident record, where components of the incident record contain at least one of an identifier of the user, an incident status, and an incident description; and paragraph [0112]: receipt of certain information from the user related to the task); generating, by the computing system, and based on the event data, one or more incident objects for one or more incidents, wherein the one or more incident objects include incident data including a priority level (paragraph [0112]: The incident record can be generated in various ways. For instance, the user assistance system can be configured to autonomously generate the incident record upon receipt of certain information from the user related to the task.
Additionally or alternatively, an agent or other authorized administrator may generate the incident record either manually or with the help of the user assistance system, and paragraph [0113] The incident record can take the form of a set of data that represents a variety of information, or “components,” associated with the incident. Such components can include, for example: (i) an identifier of the user (e.g., the user's name, or a unique string of characters associated with the user), (ii) a status of the incident (e.g., open, unassigned, in progress, closed), (iii) a description of the incident (e.g., a manually, semi-autonomously, or fully-autonomously generated textual summary of the problem the user has encountered), (iv) a date/time when the incident record is created, (v) dates/times when the status of the incident or any other information of the incident record is changed, (vi) a current owner of the incident record (e.g., the agent or group of agents to which the task is assigned), (vii) a priority level for the incident (e.g., low, medium, or high), (viii) information indicating any efforts that has been made towards resolving the incident (e.g., dates/times such efforts were started and/or completed, and a description of such efforts), (ix) an incident number, and/or other possible information); generating, by the computing system, an incident workflow for each of the one or more incident objects (paragraph [0006]: determining a plurality of candidate messages by incorporating the components of the incident record into predetermined message templates. The predetermined message templates may include sentence fragments and define fields in which to incorporate the components. The first example embodiment may also involve determining a scoring for the plurality of candidate messages based on a relevance to messages from the conversation. 
The first example embodiment may also involve, based on the scoring, selecting one or more of the plurality of candidate messages to include in the set of suggested messages displayed in the suggestion region. and paragraph: [0132] In some embodiments, the user assistance system may assess the relevance of the candidate message based on a set of rules that define how the agent might want to respond based on a received message or messages. For example, the rules may define a particular sentence structure or particular terms that should appear in a message from the agent to the user based on a semantic analysis of one or more messages of the conversation. As another example, the rules may define paths (e.g., branches of a tree-based structure) that the conversation may follow to achieve a desired result (e.g., resolution of the incident), and the user assistance system may implement techniques such as heuristics to determine which path to follow in the conversation); applying, by the computing system, and using an application programming interface, a machine learning model to determine an adjusted priority level for each of the one or more incident objects, wherein the machine learning model is configured to receive a first natural language prompt indicative of incident data included in the one or more incident objects (paragraph [0138]: User feedback may take various forms, such as messages received from users in response to suggested messages sent by agents and/or user-submitted reviews of agent performance. In an example implementation, the user assistance system may be configured to parse one or more messages received from the user in response to a suggested message that the agent added to the conversation, such as the first message that was received from the user after the suggested message was added. 
Then, using natural language processing and/or other techniques, the user assistance system may evaluate the user's feedback regarding the suggested message and may score future candidate messages based on the evaluation. For example, if the agent adds a suggested message that reads “I will try and find your lost luggage and get back to you by the end of the day” and the user then responds with a message that reads “No! Find my luggage NOW!!!!”, the user assistance system may determine from the user's response that the suggested message was not acceptable to the user and may thus take this into account when scoring a candidate message that has a similar or identical template as the suggested message (e.g., may decrease the score of the candidate message) and paragraph [0156]: Such embodiments may be advantageous in various scenarios, such as if there is incorrect information discovered in the incident record, or the user explicitly or implicitly expresses a desire for the incident to be resolved more quickly. For example, if during the conversation the agent receives a message from the user that includes text indicating frustration or an unexpected urgency, the agent can increase the priority level of the incident in the incident record (e.g., from Medium to High) so that the incident might be resolved faster); receiving, by the computing system, the adjusted priority level for each of the one or more incident objects (paragraph [0160]: if the user assistance system determines that the user has expressed frustration, impatience, and/or another indication of urgency with regard to the incident, the user assistance system may responsively increase the priority level of the incident in the incident record. With respect to message 710 of FIGS. 
7A, 7B, and 7C, for instance, a sentiment analysis of message 710 may cause the user assistance system to change the priority of the incident from Medium to High, and the user assistance system may update the window to display the change in the incident record region 704. Conversely, if the user assistance system determines that the user is in little or no hurry for the incident to be resolved, the user assistance system may responsively lower the priority level of the incident in the incident record (e.g., from Medium to Low), or may make no changes to the priority level. Other types of natural language processing can be performed and used as a basis for updating the incident record as well); and updating, by the computing system, the incident workflow for each of the one or more incident objects with the adjusted priority level for each of the one or more incident objects (abstract: The embodiment may also involve based on the scoring, selecting one or more of the candidate messages to include in a set of suggested messages displayed in the suggestion region, paragraph [0127]: the user assistance system may have access to stored past incident records and may refer to such past incident records as a basis for providing certain candidate messages and/or scoring candidate messages. This may occur in scenarios in which a current incident and a past incident have similar or identical users or circumstances. Such embodiments may have various advantages. For instance, the user assistance system may be configured to determine and learn over time a user's preferences, emotions, and/or circumstances surrounding repeated incidents, and may thus use past interactions with that user as a basis for determining how to help an agent interact with the user in handing a current incident. 
Further, different users may encounter the same incidents over time, and thus the user assistance system may be configured to learn over time how to more efficiently resolve such incidents; and paragraph [0132]: For instance, the user assistance system may over time monitor various aspects of user assistance including conversation messages, agent feedback regarding suggested messages, etc., and use these aspects as a basis for making/refining various determinations of how to score candidate messages, perhaps improving the quality and relevance of such candidate messages over time).

As per claim 10, Azmoon teaches a system comprising: a memory; and one or more processors having access to the memory, wherein the one or more processors are configured to (paragraph [0008]: a computing system may include at least one processor, as well as memory and program instructions): receive event data for one or more events (paragraph [0005]: obtaining, from a database, an incident record relating to a user… The incident record region may include the incident record, where components of the incident record contain at least one of an identifier of the user, an incident status, and an incident description; and paragraph [0112]: receipt of certain information from the user related to the task); generate, based on the event data, one or more incident objects for one or more incidents, wherein the one or more incident objects include incident data including a priority level (paragraph [0112]: The incident record can be generated in various ways. For instance, the user assistance system can be configured to autonomously generate the incident record upon receipt of certain information from the user related to the task.
Additionally or alternatively, an agent or other authorized administrator may generate the incident record either manually or with the help of the user assistance system, and paragraph [0113] The incident record can take the form of a set of data that represents a variety of information, or “components,” associated with the incident. Such components can include, for example: (i) an identifier of the user (e.g., the user's name, or a unique string of characters associated with the user), (ii) a status of the incident (e.g., open, unassigned, in progress, closed), (iii) a description of the incident (e.g., a manually, semi-autonomously, or fully-autonomously generated textual summary of the problem the user has encountered), (iv) a date/time when the incident record is created, (v) dates/times when the status of the incident or any other information of the incident record is changed, (vi) a current owner of the incident record (e.g., the agent or group of agents to which the task is assigned), (vii) a priority level for the incident (e.g., low, medium, or high), (viii) information indicating any efforts that has been made towards resolving the incident (e.g., dates/times such efforts were started and/or completed, and a description of such efforts), (ix) an incident number, and/or other possible information); generate an incident workflow for each of the one or more incident objects (paragraph [0006]: determining a plurality of candidate messages by incorporating the components of the incident record into predetermined message templates. The predetermined message templates may include sentence fragments and define fields in which to incorporate the components. The first example embodiment may also involve determining a scoring for the plurality of candidate messages based on a relevance to messages from the conversation. 
The first example embodiment may also involve, based on the scoring, selecting one or more of the plurality of candidate messages to include in the set of suggested messages displayed in the suggestion region. and paragraph: [0132] In some embodiments, the user assistance system may assess the relevance of the candidate message based on a set of rules that define how the agent might want to respond based on a received message or messages. For example, the rules may define a particular sentence structure or particular terms that should appear in a message from the agent to the user based on a semantic analysis of one or more messages of the conversation. As another example, the rules may define paths (e.g., branches of a tree-based structure) that the conversation may follow to achieve a desired result (e.g., resolution of the incident), and the user assistance system may implement techniques such as heuristics to determine which path to follow in the conversation); apply, using an application programming interface, a machine learning model to determine an adjusted priority level for each of the one or more incident objects, wherein the machine learning model is configured to receive a first natural language prompt indicative of incident data included in the one or more incident objects; and wherein the adjusted priority level is determined based on a severity level and an urgency level (paragraph [0113]: (vii) a priority level for the incident (e.g., low, medium, or high), paragraph [0138]: User feedback may take various forms, such as messages received from users in response to suggested messages sent by agents and/or user-submitted reviews of agent performance. In an example implementation, the user assistance system may be configured to parse one or more messages received from the user in response to a suggested message that the agent added to the conversation, such as the first message that was received from the user after the suggested message was added. 
Then, using natural language processing and/or other techniques, the user assistance system may evaluate the user's feedback regarding the suggested message and may score future candidate messages based on the evaluation. For example, if the agent adds a suggested message that reads “I will try and find your lost luggage and get back to you by the end of the day” and the user then responds with a message that reads “No! Find my luggage NOW!!!!”, the user assistance system may determine from the user's response that the suggested message was not acceptable to the user and may thus take this into account when scoring a candidate message that has a similar or identical template as the suggested message (e.g., may decrease the score of the candidate message) and paragraph [0156]: Such embodiments may be advantageous in various scenarios, such as if there is incorrect information discovered in the incident record, or the user explicitly or implicitly expresses a desire for the incident to be resolved more quickly. For example, if during the conversation the agent receives a message from the user that includes text indicating frustration or an unexpected urgency, the agent can increase the priority level of the incident in the incident record (e.g., from Medium to High) so that the incident might be resolved faster); receive the adjusted priority level for each of the one or more incident objects (paragraph [0160]: if the user assistance system determines that the user has expressed frustration, impatience, and/or another indication of urgency with regard to the incident, the user assistance system may responsively increase the priority level of the incident in the incident record. With respect to message 710 of FIGS. 
7A, 7B, and 7C, for instance, a sentiment analysis of message 710 may cause the user assistance system to change the priority of the incident from Medium to High, and the user assistance system may update the window to display the change in the incident record region 704. Conversely, if the user assistance system determines that the user is in little or no hurry for the incident to be resolved, the user assistance system may responsively lower the priority level of the incident in the incident record (e.g., from Medium to Low), or may make no changes to the priority level. Other types of natural language processing can be performed and used as a basis for updating the incident record as well); and update the incident workflow for each of the one or more incident objects with the adjusted priority level for each of the one or more incident objects, wherein the incident workflow includes one or more actions for addressing an incident from the one or more incidents, and wherein the system is configured to perform the one or more actions based on the adjusted priority level for the incident workflow (abstract: The embodiment may also involve based on the scoring, selecting one or more of the candidate messages to include in a set of suggested messages displayed in the suggestion region, paragraph [0127]: the user assistance system may have access to stored past incident records and may refer to such past incident records as a basis for providing certain candidate messages and/or scoring candidate messages. This may occur in scenarios in which a current incident and a past incident have similar or identical users or circumstances. Such embodiments may have various advantages. 
For instance, the user assistance system may be configured to determine and learn over time a user's preferences, emotions, and/or circumstances surrounding repeated incidents, and may thus use past interactions with that user as a basis for determining how to help an agent interact with the user in handling a current incident. Further, different users may encounter the same incidents over time, and thus the user assistance system may be configured to learn over time how to more efficiently resolve such incidents; and paragraph [0132]: For instance, the user assistance system may over time monitor various aspects of user assistance including conversation messages, agent feedback regarding suggested messages, etc., and use these aspects as a basis for making/refining various determinations of how to score candidate messages, perhaps improving the quality and relevance of such candidate messages over time).

As per claim 16, Azmoon teaches a computer-readable storage medium encoded with instructions that, when executed, cause at least one processor of a computing system to (paragraph [0007]: a non-transitory computer-readable medium, having stored thereon program instructions): receive event data for one or more events (paragraph [0005]: obtaining, from a database, an incident record relating to a user… The incident record region may include the incident record, where components of the incident record contain at least one of an identifier of the user, an incident status, and an incident description; and paragraph [0112]: receipt of certain information from the user related to the task); generate, based on the event data, one or more incident objects for one or more incidents, wherein the one or more incident objects include incident data including a priority level (paragraph [0112]: The incident record can be generated in various ways.
For instance, the user assistance system can be configured to autonomously generate the incident record upon receipt of certain information from the user related to the task. Additionally or alternatively, an agent or other authorized administrator may generate the incident record either manually or with the help of the user assistance system, and paragraph [0113] The incident record can take the form of a set of data that represents a variety of information, or “components,” associated with the incident. Such components can include, for example: (i) an identifier of the user (e.g., the user's name, or a unique string of characters associated with the user), (ii) a status of the incident (e.g., open, unassigned, in progress, closed), (iii) a description of the incident (e.g., a manually, semi-autonomously, or fully-autonomously generated textual summary of the problem the user has encountered), (iv) a date/time when the incident record is created, (v) dates/times when the status of the incident or any other information of the incident record is changed, (vi) a current owner of the incident record (e.g., the agent or group of agents to which the task is assigned), (vii) a priority level for the incident (e.g., low, medium, or high), (viii) information indicating any efforts that has been made towards resolving the incident (e.g., dates/times such efforts were started and/or completed, and a description of such efforts), (ix) an incident number, and/or other possible information); generate an incident workflow for each of the one or more incident objects (paragraph [0006]: determining a plurality of candidate messages by incorporating the components of the incident record into predetermined message templates. The predetermined message templates may include sentence fragments and define fields in which to incorporate the components. 
The first example embodiment may also involve determining a scoring for the plurality of candidate messages based on a relevance to messages from the conversation. The first example embodiment may also involve, based on the scoring, selecting one or more of the plurality of candidate messages to include in the set of suggested messages displayed in the suggestion region. and paragraph: [0132] In some embodiments, the user assistance system may assess the relevance of the candidate message based on a set of rules that define how the agent might want to respond based on a received message or messages. For example, the rules may define a particular sentence structure or particular terms that should appear in a message from the agent to the user based on a semantic analysis of one or more messages of the conversation. As another example, the rules may define paths (e.g., branches of a tree-based structure) that the conversation may follow to achieve a desired result (e.g., resolution of the incident), and the user assistance system may implement techniques such as heuristics to determine which path to follow in the conversation); apply, using an application programming interface, a machine learning model to determine an adjusted priority level for each of the one or more incident objects, wherein the machine learning model is configured to receive a first natural language prompt indicative of incident data included in the one or more incident objects, and wherein the adjusted priority level is determined based on a severity level and an urgency level (paragraph [0113]: (vii) a priority level for the incident (e.g., low, medium, or high), paragraph [0138]: User feedback may take various forms, such as messages received from users in response to suggested messages sent by agents and/or user-submitted reviews of agent performance. 
In an example implementation, the user assistance system may be configured to parse one or more messages received from the user in response to a suggested message that the agent added to the conversation, such as the first message that was received from the user after the suggested message was added. Then, using natural language processing and/or other techniques, the user assistance system may evaluate the user's feedback regarding the suggested message and may score future candidate messages based on the evaluation. For example, if the agent adds a suggested message that reads “I will try and find your lost luggage and get back to you by the end of the day” and the user then responds with a message that reads “No! Find my luggage NOW!!!!”, the user assistance system may determine from the user's response that the suggested message was not acceptable to the user and may thus take this into account when scoring a candidate message that has a similar or identical template as the suggested message (e.g., may decrease the score of the candidate message) and paragraph [0156]: Such embodiments may be advantageous in various scenarios, such as if there is incorrect information discovered in the incident record, or the user explicitly or implicitly expresses a desire for the incident to be resolved more quickly. 
For example, if during the conversation the agent receives a message from the user that includes text indicating frustration or an unexpected urgency, the agent can increase the priority level of the incident in the incident record (e.g., from Medium to High) so that the incident might be resolved faster); receive the adjusted priority level for each of the one or more incident objects (paragraph [0160]: if the user assistance system determines that the user has expressed frustration, impatience, and/or another indication of urgency with regard to the incident, the user assistance system may responsively increase the priority level of the incident in the incident record. With respect to message 710 of FIGS. 7A, 7B, and 7C, for instance, a sentiment analysis of message 710 may cause the user assistance system to change the priority of the incident from Medium to High, and the user assistance system may update the window to display the change in the incident record region 704. Conversely, if the user assistance system determines that the user is in little or no hurry for the incident to be resolved, the user assistance system may responsively lower the priority level of the incident in the incident record (e.g., from Medium to Low), or may make no changes to the priority level. 
Other types of natural language processing can be performed and used as a basis for updating the incident record as well); and update the incident workflow for each of the one or more incident objects with the adjusted priority level for each of the one or more incident objects, wherein the computing system is further configured to receive user input to further adjust the adjusted priority level for each of the one or more incident objects, wherein the incident workflow includes one or more actions for addressing an incident from the one or more incidents, and wherein the computing system is configured to perform the one or more actions based on the adjusted priority level for the incident workflow (abstract: The embodiment may also involve based on the scoring, selecting one or more of the candidate messages to include in a set of suggested messages displayed in the suggestion region, paragraph [0127]: the user assistance system may have access to stored past incident records and may refer to such past incident records as a basis for providing certain candidate messages and/or scoring candidate messages. This may occur in scenarios in which a current incident and a past incident have similar or identical users or circumstances. Such embodiments may have various advantages. For instance, the user assistance system may be configured to determine and learn over time a user's preferences, emotions, and/or circumstances surrounding repeated incidents, and may thus use past interactions with that user as a basis for determining how to help an agent interact with the user in handing a current incident. 
Further, different users may encounter the same incidents over time, and thus the user assistance system may be configured to learn over time how to more efficiently resolve such incidents, and paragraph [0132]: For instance, the user assistance system may over time monitor various aspects of user assistance including conversation messages, agent feedback regarding suggested messages, etc., and use these aspects as a basis for making/refining various determinations of how to score candidate messages, perhaps improving the quality and relevance of such candidate messages over time). As per claims 2, 11, and 17 Azmoon teaches wherein each of the one or more incident objects is a structured representation of an incident, and wherein the incident data further includes one or more of the event data, an identifier, timestamps, incident type, incident source, severity level, urgency level, current incident status, one or more response actions, incident resolution, associated support tickets, and an action log (paragraph [0005]: The incident record region may include the incident record, where components of the incident record contain at least one of an identifier of the user, an incident status, and an incident description and paragraph [0113] The incident record can take the form of a set of data that represents a variety of information, or “components,” associated with the incident. 
Such components can include, for example: (i) an identifier of the user (e.g., the user's name, or a unique string of characters associated with the user), (ii) a status of the incident (e.g., open, unassigned, in progress, closed), (iii) a description of the incident (e.g., a manually, semi-autonomously, or fully-autonomously generated textual summary of the problem the user has encountered), (iv) a date/time when the incident record is created, (v) dates/times when the status of the incident or any other information of the incident record is changed, (vi) a current owner of the incident record (e.g., the agent or group of agents to which the task is assigned), (vii) a priority level for the incident (e.g., low, medium, or high), (viii) information indicating any efforts that has been made towards resolving the incident (e.g., dates/times such efforts were started and/or completed, and a description of such efforts), (ix) an incident number, and/or other possible information). As per claim 3 Azmoon teaches wherein the adjusted priority level is determined based on the severity level and the urgency level (paragraph [0113]: (vii) a priority level for the incident (e.g., low, medium, or high), paragraph [0138]: User feedback may take various forms, such as messages received from users in response to suggested messages sent by agents and/or user-submitted reviews of agent performance. In an example implementation, the user assistance system may be configured to parse one or more messages received from the user in response to a suggested message that the agent added to the conversation, such as the first message that was received from the user after the suggested message was added. Then, using natural language processing and/or other techniques, the user assistance system may evaluate the user's feedback regarding the suggested message and may score future candidate messages based on the evaluation. 
For example, if the agent adds a suggested message that reads “I will try and find your lost luggage and get back to you by the end of the day” and the user then responds with a message that reads “No! Find my luggage NOW!!!!”, the user assistance system may determine from the user's response that the suggested message was not acceptable to the user and may thus take this into account when scoring a candidate message that has a similar or identical template as the suggested message (e.g., may decrease the score of the candidate message) and paragraph [0156]: Such embodiments may be advantageous in various scenarios, such as if there is incorrect information discovered in the incident record, or the user explicitly or implicitly expresses a desire for the incident to be resolved more quickly. For example, if during the conversation the agent receives a message from the user that includes text indicating frustration or an unexpected urgency, the agent can increase the priority level of the incident in the incident record (e.g., from Medium to High) so that the incident might be resolved faster). As per claim 4 Azmoon teaches wherein the machine learning model is configured to receive a second natural language prompt indicative of additional instructions for determining the adjusted priority level for each of the one or more incident objects (paragraph [0127]: the user assistance system may have access to stored past incident records and may refer to such past incident records as a basis for providing certain candidate messages and/or scoring candidate messages. This may occur in scenarios in which a current incident and a past incident have similar or identical users or circumstances. Such embodiments may have various advantages. 
For instance, the user assistance system may be configured to determine and learn over time a user's preferences, emotions, and/or circumstances surrounding repeated incidents, and may thus use past interactions with that user as a basis for determining how to help an agent interact with the user in handing a current incident. Further, different users may encounter the same incidents over time, and thus the user assistance system may be configured to learn over time how to more efficiently resolve such incidents, and paragraph [0132]: For instance, the user assistance system may over time monitor various aspects of user assistance including conversation messages, agent feedback regarding suggested messages, etc., and use these aspects as a basis for making/refining various determinations of how to score candidate messages, perhaps improving the quality and relevance of such candidate messages over time, paragraph [0138]: User feedback may take various forms, such as messages received from users in response to suggested messages sent by agents and/or user-submitted reviews of agent performance. In an example implementation, the user assistance system may be configured to parse one or more messages received from the user in response to a suggested message that the agent added to the conversation, such as the first message that was received from the user after the suggested message was added. Then, using natural language processing and/or other techniques, the user assistance system may evaluate the user's feedback regarding the suggested message and may score future candidate messages based on the evaluation. For example, if the agent adds a suggested message that reads “I will try and find your lost luggage and get back to you by the end of the day” and the user then responds with a message that reads “No! 
Find my luggage NOW!!!!”, the user assistance system may determine from the user's response that the suggested message was not acceptable to the user and may thus take this into account when scoring a candidate message that has a similar or identical template as the suggested message (e.g., may decrease the score of the candidate message) and paragraph [0156]: Such embodiments may be advantageous in various scenarios, such as if there is incorrect information discovered in the incident record, or the user explicitly or implicitly expresses a desire for the incident to be resolved more quickly. For example, if during the conversation the agent receives a message from the user that includes text indicating frustration or an unexpected urgency, the agent can increase the priority level of the incident in the incident record (e.g., from Medium to High) so that the incident might be resolved faster). As per claims 5, and 12 Azmoon teaches receiving, by the computing system and from the machine learning model, a description indicative of how the adjusted priority level for each of the one or more incident objects was determined; and updating, by the computing system, the incident workflow to include the description (paragraph [0127]: the user assistance system may have access to stored past incident records and may refer to such past incident records as a basis for providing certain candidate messages and/or scoring candidate messages. This may occur in scenarios in which a current incident and a past incident have similar or identical users or circumstances. Such embodiments may have various advantages. For instance, the user assistance system may be configured to determine and learn over time a user's preferences, emotions, and/or circumstances surrounding repeated incidents, and may thus use past interactions with that user as a basis for determining how to help an agent interact with the user in handing a current incident. 
Further, different users may encounter the same incidents over time, and thus the user assistance system may be configured to learn over time how to more efficiently resolve such incidents, and paragraph [0132]: For instance, the user assistance system may over time monitor various aspects of user assistance including conversation messages, agent feedback regarding suggested messages, etc., and use these aspects as a basis for making/refining various determinations of how to score candidate messages, perhaps improving the quality and relevance of such candidate messages over time, paragraph [0138]: User feedback may take various forms, such as messages received from users in response to suggested messages sent by agents and/or user-submitted reviews of agent performance. In an example implementation, the user assistance system may be configured to parse one or more messages received from the user in response to a suggested message that the agent added to the conversation, such as the first message that was received from the user after the suggested message was added. Then, using natural language processing and/or other techniques, the user assistance system may evaluate the user's feedback regarding the suggested message and may score future candidate messages based on the evaluation. For example, if the agent adds a suggested message that reads “I will try and find your lost luggage and get b
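The cited passages describe deriving a priority level from severity and urgency and then adjusting it when natural-language analysis of a user message signals urgency or frustration (e.g., Medium to High). A minimal sketch of that behavior follows; the names `IncidentRecord`, `base_priority`, and `adjust_priority`, the three-level scale, and the keyword heuristics are all illustrative assumptions, not code from the application or the Azmoon reference.

```python
from dataclasses import dataclass

# Ordered priority scale assumed for illustration.
LEVELS = ["Low", "Medium", "High"]

@dataclass
class IncidentRecord:
    # A few of the record "components" the cited paragraph [0113] enumerates.
    identifier: str
    status: str
    description: str
    severity: str            # "Low" | "Medium" | "High"
    urgency: str             # "Low" | "Medium" | "High"
    priority: str = "Medium"

def base_priority(severity: str, urgency: str) -> str:
    """Take the higher of severity and urgency as the starting priority."""
    return LEVELS[max(LEVELS.index(severity), LEVELS.index(urgency))]

def adjust_priority(record: IncidentRecord, message: str) -> str:
    """Raise or lower priority from a crude sentiment check of one message.

    Stands in for the sentiment/NLP analysis the passages describe; a real
    system would use an actual model rather than keyword matching.
    """
    idx = LEVELS.index(record.priority)
    text = message.lower()
    if "now" in text or "!" in message:            # frustration / urgency
        idx = min(idx + 1, len(LEVELS) - 1)
    elif "no hurry" in text or "take your time" in text:
        idx = max(idx - 1, 0)                      # user is not in a hurry
    record.priority = LEVELS[idx]
    return record.priority
```

On the office action's own example, a Medium-priority incident whose user replies "No! Find my luggage NOW!!!!" would be raised to High, while a reply indicating no hurry would lower it.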

Prosecution Timeline

Jan 31, 2024
Application Filed
Jun 12, 2025
Non-Final Rejection — §101, §102
Sep 15, 2025
Applicant Interview (Telephonic)
Sep 15, 2025
Examiner Interview Summary
Sep 16, 2025
Response Filed
Dec 18, 2025
Final Rejection — §101, §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12499453
Anti-counterfeiting System for Bottled Products
2y 5m to grant (granted Dec 16, 2025)
Patent 12211094
SYSTEMS AND METHODS FOR PREVENTING UNNECESSARY PAYMENTS
2y 5m to grant (granted Jan 28, 2025)
Patent 12147975
MOBILE WALLET REGISTRATION VIA ATM
2y 5m to grant (granted Nov 19, 2024)
Patent 12148037
SYSTEM AND METHOD FOR PROCESSING A TRADE ORDER
2y 5m to grant (granted Nov 19, 2024)
Patent 12067626
SYSTEMS AND METHODS FOR MAINTAINING A DISTRIBUTED LEDGER PERTAINING TO SMART CONTRACTS
2y 5m to grant (granted Aug 20, 2024)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

3-4
Expected OA Rounds
66%
Grant Probability
89%
With Interview (+23.4%)
4y 0m
Median Time to Grant
Moderate
PTA Risk
Based on 452 resolved cases by this examiner. Grant probability derived from career allow rate.
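The projection figures above follow directly from the examiner's career statistics reported earlier on this page (297 granted of 452 resolved, with a +23.4-point lift in cases that had an interview). The arithmetic can be reproduced as below, assuming the tool simply rounds the career allow rate and adds the lift in percentage points.

```python
granted, resolved = 297, 452           # examiner's resolved career cases
allow_rate = granted / resolved * 100  # 65.7% career allow rate
interview_lift = 23.4                  # percentage-point lift with interview

print(round(allow_rate))                   # 66 -> "Grant Probability"
print(round(allow_rate + interview_lift))  # 89 -> "With Interview"
```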
