Prosecution Insights
Last updated: April 19, 2026

Application No. 18/615,686
METHODS AND ARRANGEMENTS FOR SIMILARITY SEARCH BASED ON MULTI-LABEL TEXT CLASSIFICATION

Status: Final Rejection (§103, §112)
Filed: Mar 25, 2024
Examiner: DWIVEDI, MAHESH H
Art Unit: 2168
Tech Center: 2100 — Computer Architecture & Software
Assignee: Capital One Services LLC
OA Round: 2 (Final)

Grant Probability: 69% (Favorable)
OA Rounds: 3-4
To Grant: 3y 6m
With Interview: 74%

Examiner Intelligence

Career Allow Rate: 69% (above average; 521 granted / 751 resolved; +14.4% vs TC avg)
Interview Lift: +4.3% across resolved cases with interview (minimal, roughly +4%)
Typical Timeline: 3y 6m avg prosecution; 21 applications currently pending
Career History: 772 total applications across all art units
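As a quick sanity check, the panel's headline figures fall straight out of the raw counts. A minimal sketch (the 55% Tech Center baseline is backed out from the displayed delta, not stated on the page):

```python
# Reconstructing the examiner panel's headline stats from its raw counts.
# The TC baseline is implied by the displayed delta (69.4% - 14.4%),
# not given directly on the page.

granted, resolved = 521, 751
tc_avg_allow_rate = 0.550            # implied Tech Center baseline

allow_rate = granted / resolved      # 0.6937... -> shown as 69%
vs_tc_delta = allow_rate - tc_avg_allow_rate

print(f"Career allow rate: {allow_rate:.1%}")       # 69.4%
print(f"Delta vs TC average: {vs_tc_delta:+.1%}")   # +14.4%

# The interview lift (+4.3%) is the same kind of delta: the allow rate
# among resolved cases with at least one interview minus the rate without.
```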

Statute-Specific Performance

§101: 16.5% (-23.5% vs TC avg)
§102: 17.2% (-22.8% vs TC avg)
§103: 40.2% (+0.2% vs TC avg)
§112: 19.5% (-20.5% vs TC avg)

Comparisons are against a Tech Center average estimate; based on career data from 751 resolved cases.
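One detail worth noting: backing the Tech Center baseline out of each row (rate minus displayed delta) gives roughly 40.0% for every statute, which suggests the original chart compared against a single overall TC average rather than per-statute baselines. A sketch of that check:

```python
# Back out the Tech Center baseline behind each "vs TC avg" delta.
rates = {
    "§101": (0.165, -0.235),
    "§102": (0.172, -0.228),
    "§103": (0.402, +0.002),
    "§112": (0.195, -0.205),
}
for statute, (rate, delta) in rates.items():
    tc_avg = rate - delta
    print(f"{statute}: examiner {rate:.1%}, implied TC avg {tc_avg:.1%}")
# Every row backs out to ~40.0%, i.e. a single baseline.
```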

Office Action (§103, §112)
DETAILED ACTION

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

2. Receipt of Applicant’s Amendment filed on 03/04/2026 is acknowledged. The amendment includes amendments to claims 1, 6, 8, 15, and 20 and to the specification.

Specification

3. The objections raised in the Office Action mailed on 12/05/2025 have been overcome by applicant’s amendment received on 03/04/2026.

Claim Objections

4. The objections raised in the Office Action mailed on 12/05/2025 have been overcome by applicant’s amendment received on 03/04/2026.

Claim Rejections - 35 USC § 112

5. The rejections raised in the Office Action mailed on 12/05/2025 have been overcome by applicant’s amendment received on 03/04/2026.

6. The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

7. The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

8. Claims 1, 8, and 15 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Specifically, it is unclear whether the claimed “document” in the newly added limitation of “perform a similarity search between a vector for a label structure of a new document and a vector of the performance data for labels associated with each assignee in an identified set of assignees, the identified set comprising one or more of the assignees in the complete set of assignees, to select an assignee for processing the document” refers to the earlier claimed “document” in the limitation of “the label profile comprising a predicted set of labels associated with each document in the set of documents” or to the “new document” in the same newly added limitation. For the purposes of examining the instant application, the examiner interprets the claimed “document” in the newly added limitation as referring to the earlier claimed “new document”. Dependent claims 2-7, 9-14, and 16-20 are rejected for incorporating the deficiencies of independent claims 1, 8, and 15, respectively.

Claim Rejections - 35 USC § 103

9. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

10. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

11. This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

12. Claims 8 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Rathod et al. (U.S. PGPUB 2020/0202302), in view of Wang (U.S. PGPUB 2023/0129123), and further in view of Mujumdar et al. (U.S. PGPUB 2022/0270019).

13. Regarding claim 8, Rathod teaches a non-transitory storage medium comprising: A) instructions, which when executed by a processor, cause the processor to perform operations, the operations to: receive a label profile for a set of documents (Paragraphs 22 and 26); B) the label profile comprising a predicted set of labels associated with each document in the set of documents (Paragraphs 22 and 26); C) add log entries in a database for one or more documents in the set of documents (Paragraphs 17 and 26); D) each log entry comprising one or more labels identified in the label profile for a document of the set of documents (Paragraphs 17 and 26); E) determine performance data for the one or more labels in the log entries in the database based on feedback from a client system associated with at least one assignee of a set of assignees (Paragraphs 26-28); and F) add the performance data associated with the document and the at least one assignee associated with the performance data in the log entry (Paragraphs 17 and 26-28).

The examiner notes that Rathod teaches “instructions, which when executed by a processor, cause the processor to perform operations, the operations to: receive a label profile for a set of documents” as “categorization apparatus 102 and/or another component of the system uses clusters 142 of related words to generate incident categories 114 to which the incident tickets can be assigned” (Paragraph 22) and “Classification apparatus 108 uses match scores 112 between each incident ticket and incident categories 114 to assign one or more incident categories 114 to the incident ticket.
For example, classification apparatus 108 may assign the incident category 114 with the highest match score to the incident ticket. In another example, classification apparatus 108 and/or another component of the system may display a subset of incident categories 114 with highest match scores 112 (e.g., the three highest-scoring incident categories 114 for a given incident ticket) to a user (e.g., an agent), and the user may select one of the incident categories as the incident category to assign to the incident ticket. Alternatively, the user may override the displayed incident categories 114 with a manual selection of an incident category that is not one of the highest-scoring incident categories 114. After an incident category is selected for an incident ticket, classification apparatus 108 stores a mapping of the incident ticket to the incident category in incident repository 134 and/or another data store” (Paragraph 26). The examiner further notes that the assigned generated categories (i.e. a profile of labels) for each incident ticket (i.e. a document) in the set of incident tickets (i.e. a set of documents) are received for subsequent processing. The examiner further notes that Rathod teaches “the label profile comprising a predicted set of labels associated with each document in the set of documents” as “categorization apparatus 102 and/or another component of the system uses clusters 142 of related words to generate incident categories 114 to which the incident tickets can be assigned” (Paragraph 22) and “Classification apparatus 108 uses match scores 112 between each incident ticket and incident categories 114 to assign one or more incident categories 114 to the incident ticket. For example, classification apparatus 108 may assign the incident category 114 with the highest match score to the incident ticket. In another example, classification apparatus 108 and/or another component of the system may display a subset of incident categories 114 with highest match scores 112 (e.g., the three highest-scoring incident categories 114 for a given incident ticket) to a user (e.g., an agent), and the user may select one of the incident categories as the incident category to assign to the incident ticket. Alternatively, the user may override the displayed incident categories 114 with a manual selection of an incident category that is not one of the highest-scoring incident categories 114. After an incident category is selected for an incident ticket, classification apparatus 108 stores a mapping of the incident ticket to the incident category in incident repository 134 and/or another data store” (Paragraph 26). The examiner further notes that the assigned generated categories (i.e. a profile of labels) for each incident ticket (i.e. a document) in the set of incident tickets (i.e. a set of documents) are predicted via classification apparatus 108. The examiner further notes that Rathod teaches “add log entries in a database for one or more documents in the set of documents” as “After an incident ticket is received, ITSM system 132 stores the incident ticket in an incident repository 134. 
For example, ITSM system 132 may create and/or persist a record of the incident ticket in a database, flat file, distributed filesystem, issue-tracking system, bug-tracking system, and/or another data store providing incident repository 134” (Paragraph 17) and “Classification apparatus 108 uses match scores 112 between each incident ticket and incident categories 114 to assign one or more incident categories 114 to the incident ticket. For example, classification apparatus 108 may assign the incident category 114 with the highest match score to the incident ticket. In another example, classification apparatus 108 and/or another component of the system may display a subset of incident categories 114 with highest match scores 112 (e.g., the three highest-scoring incident categories 114 for a given incident ticket) to a user (e.g., an agent), and the user may select one of the incident categories as the incident category to assign to the incident ticket. Alternatively, the user may override the displayed incident categories 114 with a manual selection of an incident category that is not one of the highest-scoring incident categories 114. After an incident category is selected for an incident ticket, classification apparatus 108 stores a mapping of the incident ticket to the incident category in incident repository 134 and/or another data store” (Paragraph 26). The examiner further notes that the storage of records (i.e. log entries) for each incident ticket (i.e. document) in the set of incident tickets (i.e. the set of documents) in the incident repository (which can be a database) teaches the claimed adding. The examiner further notes that Rathod teaches “each log entry comprising one or more labels identified in the label profile for a document of the set of documents” as “After an incident ticket is received, ITSM system 132 stores the incident ticket in an incident repository 134. For example, ITSM system 132 may create and/or persist a record of the incident ticket in a database, flat file, distributed filesystem, issue-tracking system, bug-tracking system, and/or another data store providing incident repository 134” (Paragraph 17) and “Classification apparatus 108 uses match scores 112 between each incident ticket and incident categories 114 to assign one or more incident categories 114 to the incident ticket. For example, classification apparatus 108 may assign the incident category 114 with the highest match score to the incident ticket. In another example, classification apparatus 108 and/or another component of the system may display a subset of incident categories 114 with highest match scores 112 (e.g., the three highest-scoring incident categories 114 for a given incident ticket) to a user (e.g., an agent), and the user may select one of the incident categories as the incident category to assign to the incident ticket. Alternatively, the user may override the displayed incident categories 114 with a manual selection of an incident category that is not one of the highest-scoring incident categories 114. After an incident category is selected for an incident ticket, classification apparatus 108 stores a mapping of the incident ticket to the incident category in incident repository 134 and/or another data store” (Paragraph 26). The examiner further notes that the storage of records (i.e. log entries) for each incident ticket (i.e. document) in the set of incident tickets (i.e. 
the set of documents) in the incident repository (which can be a database) includes the assigned incident categories (i.e. the claimed one or more labels in the label profile) for that incident ticket (i.e. document). The examiner further notes that Rathod teaches “determine performance data for the one or more labels in the log entries in the database based on feedback from a client system associated with at least one assignee of a set of assignees” as “classification apparatus 108 may assign the incident category 114 with the highest match score to the incident ticket. In another example, classification apparatus 108 and/or another component of the system may display a subset of incident categories 114 with highest match scores 112 (e.g., the three highest-scoring incident categories 114 for a given incident ticket) to a user (e.g., an agent), and the user may select one of the incident categories as the incident category to assign to the incident ticket. Alternatively, the user may override the displayed incident categories 114 with a manual selection of an incident category that is not one of the highest-scoring incident categories 114. After an incident category is selected for an incident ticket, classification apparatus 108 stores a mapping of the incident ticket to the incident category in incident repository 134 and/or another data store” (Paragraph 26), “Management apparatus 110 then generates routings 144 of incident tickets to agents and/or groups of agents according to incident categories 114 assigned to the incident tickets. For example, management apparatus 110 may assign each incident ticket to an agent and/or group of agents with experience and/or expertise in handling issues described in the incident ticket. Management apparatus 110 may also update incident repository 134 and/or another data store with the assignment of the ticket to the agent(s)” (Paragraph 27), and “Management apparatus 110 additionally collects feedback 146 related to incident categories 114 and/or routings 144 of the incident tickets, and management apparatus 110 and/or another component of the system updates clusters 142 and/or incident categories 114 based on feedback 146” (Paragraph 28). The examiner further notes that collected feedback (i.e. the claimed feedback) from a client system of an assigned agent (i.e. the claimed at least one assignee) amongst multiple agents (i.e. the claimed set of assignees) regarding the assigned categories (i.e. the claimed one or more labels) results in updated categories (i.e. labels) for the incident ticket. Such updated categories (i.e. labels) from an agent teaches the claimed undefined performance data in the broadest reasonable interpretation. The examiner further notes that Rathod teaches “add the performance data associated with the document and the at least one assignee associated with the performance data in the log entry” as “After an incident ticket is received, ITSM system 132 stores the incident ticket in an incident repository 134. For example, ITSM system 132 may create and/or persist a record of the incident ticket in a database, flat file, distributed filesystem, issue-tracking system, bug-tracking system, and/or another data store providing incident repository 134” (Paragraph 17), “Classification apparatus 108 uses match scores 112 between each incident ticket and incident categories 114 to assign one or more incident categories 114 to the incident ticket. 
For example, classification apparatus 108 may assign the incident category 114 with the highest match score to the incident ticket. In another example, classification apparatus 108 and/or another component of the system may display a subset of incident categories 114 with highest match scores 112 (e.g., the three highest-scoring incident categories 114 for a given incident ticket) to a user (e.g., an agent), and the user may select one of the incident categories as the incident category to assign to the incident ticket. Alternatively, the user may override the displayed incident categories 114 with a manual selection of an incident category that is not one of the highest-scoring incident categories 114. After an incident category is selected for an incident ticket, classification apparatus 108 stores a mapping of the incident ticket to the incident category in incident repository 134 and/or another data store” (Paragraph 26), “Management apparatus 110 then generates routings 144 of incident tickets to agents and/or groups of agents according to incident categories 114 assigned to the incident tickets. For example, management apparatus 110 may assign each incident ticket to an agent and/or group of agents with experience and/or expertise in handling issues described in the incident ticket. Management apparatus 110 may also update incident repository 134 and/or another data store with the assignment of the ticket to the agent(s)” (Paragraph 27), and “Management apparatus 110 additionally collects feedback 146 related to incident categories 114 and/or routings 144 of the incident tickets, and management apparatus 110 and/or another component of the system updates clusters 142 and/or incident categories 114 based on feedback 146” (Paragraph 28). The examiner further notes that collected feedback from a client system of an assigned agent (i.e. the claimed at least one assignee) regarding the assigned categories results in updated categories (i.e. labels) for the incident ticket. Such updated categories (i.e. labels) from an agent teaches the claimed undefined performance data in the broadest reasonable interpretation. Moreover, the updated labels (i.e. performance data) as well as the assigned agent (i.e. the claimed at least one assignee) are stored in records (i.e. log entries) in the repository 134 (which can be a database). Rathod does not explicitly teach: A, B, & D) hierarchical label profile; B & G) hierarchical labels; D & E) one or more hierarchical labels; G) hierarchical label Wang, however, teaches “hierarchical label profile” as “management module 305 may also perform certain operations such as classifying submitted trouble tickets and assigning certain unique identifiers. Management module 305 may also contain a predefined set of event type and root cause classifications which may be utilized for issue prediction as described below. An event type may be a hierarchically based description of the event type that has occurred (e.g., hard disk not providing requested data) and a root cause may be a hierarchically based description of a cause of the event type (e.g., hard disk spindle failure). 
The predefined set of event type and root cause classifications may be generated or modified by the IT vendor or other authorized entities or persons” (Paragraph 33) and “a label data item, also referred to herein as a label record, may be generated by label generator 370 and stored in label database 340 for each session event that is associated with a filtered log data line that passed filtering step 570. These label data items stored in label database 340 may be utilized in modeling as described below. Each label data item may include the log data lines associated with the session event, even if not all of those log data lines passed the filtration process described above. Each label data item may include information from the session event and trouble ticket associated with that session event such as event type, root cause, IT agent ID, timing information, remediation, etc” (Paragraph 60), “hierarchical labels” as “management module 305 may also perform certain operations such as classifying submitted trouble tickets and assigning certain unique identifiers. Management module 305 may also contain a predefined set of event type and root cause classifications which may be utilized for issue prediction as described below. An event type may be a hierarchically based description of the event type that has occurred (e.g., hard disk not providing requested data) and a root cause may be a hierarchically based description of a cause of the event type (e.g., hard disk spindle failure). The predefined set of event type and root cause classifications may be generated or modified by the IT vendor or other authorized entities or persons” (Paragraph 33) and “a label data item, also referred to herein as a label record, may be generated by label generator 370 and stored in label database 340 for each session event that is associated with a filtered log data line that passed filtering step 570. These label data items stored in label database 340 may be utilized in modeling as described below. Each label data item may include the log data lines associated with the session event, even if not all of those log data lines passed the filtration process described above. Each label data item may include information from the session event and trouble ticket associated with that session event such as event type, root cause, IT agent ID, timing information, remediation, etc” (Paragraph 60), “one or more hierarchical labels” as “management module 305 may also perform certain operations such as classifying submitted trouble tickets and assigning certain unique identifiers. Management module 305 may also contain a predefined set of event type and root cause classifications which may be utilized for issue prediction as described below. An event type may be a hierarchically based description of the event type that has occurred (e.g., hard disk not providing requested data) and a root cause may be a hierarchically based description of a cause of the event type (e.g., hard disk spindle failure). The predefined set of event type and root cause classifications may be generated or modified by the IT vendor or other authorized entities or persons” (Paragraph 33) and “a label data item, also referred to herein as a label record, may be generated by label generator 370 and stored in label database 340 for each session event that is associated with a filtered log data line that passed filtering step 570. These label data items stored in label database 340 may be utilized in modeling as described below. 
Each label data item may include the log data lines associated with the session event, even if not all of those log data lines passed the filtration process described above. Each label data item may include information from the session event and trouble ticket associated with that session event such as event type, root cause, IT agent ID, timing information, remediation, etc” (Paragraph 60), and “hierarchical label” as “management module 305 may also perform certain operations such as classifying submitted trouble tickets and assigning certain unique identifiers. Management module 305 may also contain a predefined set of event type and root cause classifications which may be utilized for issue prediction as described below. An event type may be a hierarchically based description of the event type that has occurred (e.g., hard disk not providing requested data) and a root cause may be a hierarchically based description of a cause of the event type (e.g., hard disk spindle failure). The predefined set of event type and root cause classifications may be generated or modified by the IT vendor or other authorized entities or persons” (Paragraph 33) and “a label data item, also referred to herein as a label record, may be generated by label generator 370 and stored in label database 340 for each session event that is associated with a filtered log data line that passed filtering step 570. These label data items stored in label database 340 may be utilized in modeling as described below. Each label data item may include the log data lines associated with the session event, even if not all of those log data lines passed the filtration process described above. Each label data item may include information from the session event and trouble ticket associated with that session event such as event type, root cause, IT agent ID, timing information, remediation, etc” (Paragraph 60).

The examiner further notes that although the primary reference of Rathod teaches the assigning of categories (i.e. labels) to tickets (i.e. documents) (including receiving a “profile” of such labels), there is no explicit teaching that such categories (i.e. labels) are hierarchical. Nevertheless, the secondary reference of Wang teaches the concept of assigning hierarchical event type and root cause classifications (i.e. examples of hierarchical labels) to tickets (i.e. documents). The combination would result in the categories (i.e. labels) of the tickets of Rathod being hierarchical (including its profile of labels).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant invention to combine the teachings of the cited references because Wang’s teachings would have allowed Rathod’s system to provide a method for helping to predict issues for a support ticket, as noted by Wang (Paragraph 61).

Rathod and Wang do not explicitly teach: G) perform a similarity search between a vector for a label structure of a new document and a vector of the performance data for labels associated with each assignee in an identified set of assignees, the identified set comprising one or more of the assignees in the complete set of assignees, to select an assignee for processing the document.
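Before the Mujumdar mapping that follows, it may help to see the shape of this limitation in code. A minimal, hypothetical sketch of limitation G read the way the examiner reads it onto Mujumdar (cosine similarity between same-dimension profile vectors, per Mujumdar Paragraph 54); names and numbers are illustrative, not the applicant's or Mujumdar's implementation:

```python
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two same-dimension vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# One dimension per label; the new document's label structure as a vector.
new_doc = np.array([1.0, 0.0, 1.0, 0.0])

# Per-assignee performance data over the same label dimensions, for an
# identified subset of the complete set of assignees.
assignees = {
    "assignee_a": np.array([0.9, 0.1, 0.7, 0.0]),
    "assignee_b": np.array([0.1, 0.8, 0.0, 0.9]),
}

# Similarity search: select the best-matching assignee to process the document.
selected = max(assignees, key=lambda a: cosine(new_doc, assignees[a]))
print(selected)  # -> assignee_a
```

Mujumdar's Paragraph 54 notes the similarity function could equally be a Minkowski or Manhattan distance; only the cosine form is sketched here.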
Mujumdar, however, teaches “perform a similarity search between a vector for a label structure of a new document and a vector of the performance data for labels associated with each assignee in an identified set of assignees, the identified set comprising one or more of the assignees in the complete set of assignees, to select an assignee for processing the document” as “The ticket profile subsystem 110 can use a classification model 114 to assess the complexity of the received annotated tickets 102. As such, the ticket profile subsystem 110 can assess the complexity of the annotated ticket 104 by applying the classification model 114 to the annotated ticket 104. More specifically in some embodiments, as is illustrated in FIG. 2A, the ticket profile subsystem 110 can include an evaluation component 210 that can generate a complexity attribute 214 by applying the classification model 114 to the annotated ticket 104. The complexity attribute 214 is denoted by C in FIG. 2A. The classification model 114 can perform a multi-class classification task in response to being applied to the annotated ticket 104. The multi-class classification task can include discerning a particular difficulty category among a group of multiple difficulty categories. The group of multiple difficulty categories can be ordered and can be represented by an ordered set of parameters. Those parameters can be numerical, alphabetical, or alphanumeric. In some cases, the ordered set of parameters can be an ordered set of integers or real numbers. In one example, the number of difficulty categories in that group can be five, represented by the following integer numbers: 1, 2, 3, 4, and 5, where 1 represents the least difficulty and 5 represents the greatest difficulty). The number of difficulty categories that constitute the group of difficulty categories is configurable, and, thus, more or fewer than five difficulty categories can be contemplated” (Paragraph 33), “The ticket profile subsystem 110 can use the assessed complexity of annotated tickets included in the received annotated tickets 102 in order to generate respective ticket profiles 116, as is shown in FIG. 1” (Paragraph 37), “The ticket profile subsystem 110 can generate tickets profiles for other annotated tickets included in the received annotated tickets 102 as is described above. As a result, in some embodiments, each one of those ticket profiles can be included in the ticket profiles 116 and can be embodied in a skill complexity vector” (Paragraph 40), “the agent profile subsystem 120 can generate multiple agent profiles 128, including an agent profile 128. Each one of the agent profiles 126 characterizes a respective agent in one or many of those terms. To that end, the agent profile subsystem 120 can be functionally coupled to a performance data repository 130 and to a skillset data repository 140. The performance data repository 130 can include, in some cases, data representative of historical performance of each agent in the pool of agents with respect to a group of skills. The skillset data repository 140 can include data representative of skills available to an agent and proficiency in those skills. The agent profile subsystem 120 can obtain first data from the performance data repository 130 for the agent and also can obtain second data from the skillset data repository 140 for the agent. The agent profile subsystem 120 can then generate the agent profile 128 using the first data and the second data. 
The agent profile subsystem 120 can generate other agent profiles for respective agents by obtaining such first data and second data for those other agents” (Paragraph 41), “the ticket profile and the agent profile are embodied in respective vectors V and V′ having a same dimension d. The magnitude of d can be the number of skills collective available in a pool of agents. For instance, the ticket profile can be embodied in the ticket profile 118 and the agent profile can be embodied in the agent profile 128. As described herein, the ticket profile can be embodied in a SCV and the agent profile can be embodied in a SPV, where the SCV and the SPV have a same dimension. Thus, in those cases, the similarity function ƒ can be embodied in the cosine similarity among V and V′. The similarity function ƒ also can be embodied in another type of function, such as Minkowsky distance or Manhattan distance” (Paragraph 54), and “the ticket-agent matching subsystem 150 can generate list of ticket-agent matches. Each match can be represented by a pair including a ticket ID identifying a ticket and an agent ID identifying an agent in a pool of agents. The list can be referred to as a ticket-agent match list 160 (represented by match list 160 in FIG. 1)” (Paragraph 59).

The examiner further notes that the secondary reference of Mujumdar teaches the concept of executing a similarity search of vectorized ticket profile(s) (i.e. new document(s)) (which include classifications (i.e. labels)) against vectorized agent profiles (which include performance data) to identify suitable agent(s) to process the ticket(s). The combination would result in the hierarchical labels of Wang and the performance data of the labels of Rathod being vectorized for performing the similarity search to identify suitable agents.

It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant invention to combine the teachings of the cited references because Mujumdar’s teachings would have allowed the combined system of Rathod and Wang to provide a method for improving the performance of ticket resolution systems, as noted by Mujumdar (Paragraph 31).

Regarding claim 11, Rathod further teaches a non-transitory storage medium comprising: A) wherein the operations further comprise operations to add a subsequently received document to the set of documents (Paragraphs 25 and 41, Figure 1); and B) create another log entry for the subsequently received document (Paragraphs 17, 25, 26, and 41, Figure 1).

The examiner notes that Rathod teaches “wherein the operations further comprise operations to add a subsequently received document to the set of documents” as “When a new incident ticket is received by ITSM system 132 and/or in incident repository 134, classification apparatus 108 may perform stemming and/or removal of stop words and infrequent words from the incident ticket” (Paragraph 25) and “Operations 202-210 may be repeated for remaining incident tickets (operation 212). For example, incident categories may be assigned to each new incident ticket received through the incident management system, and incident tickets may be routed within the incident management system according to the assigned incident categories to streamline resolution of issues associated with the incident tickets. Feedback related to the assignments may additionally be used to improve the accuracy of the incident categories and/or assignments of subsequent incident tickets to the incident categories” (Paragraph 41).
The examiner further notes that new ticket(s) (i.e. the claimed subsequently received document) are added to the repository with the other tickets (i.e. the claimed set of documents).

The examiner further notes that Rathod teaches “create another log entry for the subsequently received document” as “After an incident ticket is received, ITSM system 132 stores the incident ticket in an incident repository 134. For example, ITSM system 132 may create and/or persist a record of the incident ticket in a database, flat file, distributed filesystem, issue-tracking system, bug-tracking system, and/or another data store providing incident repository 134” (Paragraph 17), “When a new incident ticket is received by ITSM system 132 and/or in incident repository 134, classification apparatus 108 may perform stemming and/or removal of stop words and infrequent words from the incident ticket” (Paragraph 25), “Classification apparatus 108 uses match scores 112 between each incident ticket and incident categories 114 to assign one or more incident categories 114 to the incident ticket. For example, classification apparatus 108 may assign the incident category 114 with the highest match score to the incident ticket. In another example, classification apparatus 108 and/or another component of the system may display a subset of incident categories 114 with highest match scores 112 (e.g., the three highest-scoring incident categories 114 for a given incident ticket) to a user (e.g., an agent), and the user may select one of the incident categories as the incident category to assign to the incident ticket. Alternatively, the user may override the displayed incident categories 114 with a manual selection of an incident category that is not one of the highest-scoring incident categories 114. After an incident category is selected for an incident ticket, classification apparatus 108 stores a mapping of the incident ticket to the incident category in incident repository 134 and/or another data store” (Paragraph 26), and “Operations 202-210 may be repeated for remaining incident tickets (operation 212). For example, incident categories may be assigned to each new incident ticket received through the incident management system, and incident tickets may be routed within the incident management system according to the assigned incident categories to streamline resolution of issues associated with the incident tickets. Feedback related to the assignments may additionally be used to improve the accuracy of the incident categories and/or assignments of subsequent incident tickets to the incident categories” (Paragraph 41). The examiner further notes that the storage of records (i.e. log entries) for a new ticket (i.e. the subsequently received document) in the incident repository (which can be a database) teaches the claimed adding.

14. Claims 1-4, 7, 9-10, and 15-18 are rejected under 35 U.S.C. 103 as being unpatentable over Rathod et al. (U.S. PGPUB 2020/0202302), in view of Wang (U.S. PGPUB 2023/0129123), and further in view of Mujumdar et al. (U.S. PGPUB 2022/0270019) as applied to claims 8 and 11 above, and further in view of Gelbukh et al. (Article entitled “Use of a Weighted Topic Hierarchy for Document Classification”, dated 1999).
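Paragraph 15 below walks Rathod's pipeline limitation by limitation, so a compact sketch of that flow as the examiner reads it may help: score candidate categories for a ticket, assign the highest-scoring one, persist a log entry mapping the ticket to its labels and routed agent, and attach feedback-derived data to that entry. All names are hypothetical, not Rathod's code:

```python
from dataclasses import dataclass, field

@dataclass
class LogEntry:
    document_id: str
    labels: list[str]                  # categories assigned to the ticket
    assignee: str | None = None        # agent the ticket was routed to (para. 27)
    performance_data: dict[str, str] = field(default_factory=dict)  # from feedback (para. 28)

def classify(match_scores: dict[str, float]) -> str:
    # "assign the incident category with the highest match score" (para. 26)
    return max(match_scores, key=match_scores.get)

repository: dict[str, LogEntry] = {}   # stands in for incident repository 134

scores = {"network": 0.31, "storage": 0.87, "auth": 0.12}
entry = LogEntry(document_id="ticket-001", labels=[classify(scores)])
entry.assignee = "agent-7"                        # routing step
entry.performance_data["storage"] = "confirmed"   # feedback-derived update
repository[entry.document_id] = entry             # persisted log entry
```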
15. Regarding claim 1, Rathod teaches an apparatus comprising: A) memory (Paragraph 47); and B) logic circuitry coupled with the memory to: provide a complete set of assignees (Paragraph 27); C) receive a label profile for a set of documents (Paragraphs 22 and 26); D) the label profile comprising a predicted set of labels associated with each document in the set of documents (Paragraphs 22 and 26); E) for each document in the set of documents: create a log entry in a database for a document (Paragraphs 17 and 26); F) the log entry comprising unique labels identified in the label profile for the document (Paragraphs 17 and 26); G) determine performance data for each label in the log entry in the database based on feedback from a client system (Paragraphs 26-28); H) the performance data for the log entry associated with at least one assignee of the complete set of assignees (Paragraphs 17 and 26-28); and I) store the performance data associated with the document and the at least one assignee associated with the performance data in the log entry (Paragraphs 17 and 26-28); and K) wherein the at least one assignee of the complete set of assignees is associated with the performance data for more than one document in the set of documents (Paragraphs 17 and 26-28).

The examiner notes that Rathod teaches “memory” as “FIG. 4 shows a computer system 400 in accordance with the disclosed embodiments. Computer system 400 includes a processor 402, memory 404, storage 406, and/or other components found in electronic computing devices” (Paragraph 47). The examiner further notes that memory 404 teaches the claimed memory.

The examiner further notes that Rathod teaches “logic circuitry coupled with the memory to: provide a complete set of assignees” as “Management apparatus 110 then generates routings 144 of incident tickets to agents and/or groups of agents according to incident categories 114 assigned to the incident tickets. For example, management apparatus 110 may assign each incident ticket to an agent and/or group of agents with experience and/or expertise in handling issues described in the incident ticket. Management apparatus 110 may also update incident repository 134 and/or another data store with the assignment of the ticket to the agent(s)” (Paragraph 27). The examiner further notes that the example routing of ticket(s) to groups of agents (i.e. a complete set of assignees) teaches the claimed providing.

The examiner further notes that Rathod teaches “receive a label profile for a set of documents” as “categorization apparatus 102 and/or another component of the system uses clusters 142 of related words to generate incident categories 114 to which the incident tickets can be assigned” (Paragraph 22) and “Classification apparatus 108 uses match scores 112 between each incident ticket and incident categories 114 to assign one or more incident categories 114 to the incident ticket. For example, classification apparatus 108 may assign the incident category 114 with the highest match score to the incident ticket. In another example, classification apparatus 108 and/or another component of the system may display a subset of incident categories 114 with highest match scores 112 (e.g., the three highest-scoring incident categories 114 for a given incident ticket) to a user (e.g., an agent), and the user may select one of the incident categories as the incident category to assign to the incident ticket.
Alternatively, the user may override the displayed incident categories 114 with a manual selection of an incident category that is not one of the highest-scoring incident categories 114. After an incident category is selected for an incident ticket, classification apparatus 108 stores a mapping of the incident ticket to the incident category in incident repository 134 and/or another data store” (Paragraph 26). The examiner further notes that the assigned generated categories (i.e. a profile of labels) for each incident ticket (i.e. a document) in the set of incident tickets (i.e. a set of documents) are received for subsequent processing. The examiner further notes that Rathod teaches “the label profile comprising a predicted set of labels associated with each document in the set of documents” as “categorization apparatus 102 and/or another component of the system uses clusters 142 of related words to generate incident categories 114 to which the incident tickets can be assigned” (Paragraph 22) and “Classification apparatus 108 uses match scores 112 between each incident ticket and incident categories 114 to assign one or more incident categories 114 to the incident ticket. For example, classification apparatus 108 may assign the incident category 114 with the highest match score to the incident ticket. In another example, classification apparatus 108 and/or another component of the system may display a subset of incident categories 114 with highest match scores 112 (e.g., the three highest-scoring incident categories 114 for a given incident ticket) to a user (e.g., an agent), and the user may select one of the incident categories as the incident category to assign to the incident ticket. Alternatively, the user may override the displayed incident categories 114 with a manual selection of an incident category that is not one of the highest-scoring incident categories 114. After an incident category is selected for an incident ticket, classification apparatus 108 stores a mapping of the incident ticket to the incident category in incident repository 134 and/or another data store” (Paragraph 26). The examiner further notes that the assigned generated categories (i.e. a profile of labels) for each incident ticket (i.e. a document) in the set of incident tickets (i.e. a set of documents) are predicted via classification apparatus 108. The examiner further notes that Rathod teaches “for each document in the set of documents: create a log entry in a database for a document” as “After an incident ticket is received, ITSM system 132 stores the incident ticket in an incident repository 134. For example, ITSM system 132 may create and/or persist a record of the incident ticket in a database, flat file, distributed filesystem, issue-tracking system, bug-tracking system, and/or another data store providing incident repository 134” (Paragraph 17) and “Classification apparatus 108 uses match scores 112 between each incident ticket and incident categories 114 to assign one or more incident categories 114 to the incident ticket. For example, classification apparatus 108 may assign the incident category 114 with the highest match score to the incident ticket. 
In another example, classification apparatus 108 and/or another component of the system may display a subset of incident categories 114 with highest match scores 112 (e.g., the three highest-scoring incident categories 114 for a given incident ticket) to a user (e.g., an agent), and the user may select one of the incident categories as the incident category to assign to the incident ticket. Alternatively, the user may override the displayed incident categories 114 with a manual selection of an incident category that is not one of the highest-scoring incident categories 114. After an incident category is selected for an incident ticket, classification apparatus 108 stores a mapping of the incident ticket to the incident category in incident repository 134 and/or another data store” (Paragraph 26). The examiner further notes that the storage of records (i.e. log entries) for each incident ticket (i.e. document) in the set of incident tickets (i.e. the set of documents) in the incident repository (which can be a database) teaches the claimed creating. The examiner further notes that Rathod teaches “the log entry comprising unique labels identified in the label profile for the document” as “After an incident ticket is received, ITSM system 132 stores the incident ticket in an incident repository 134. For example, ITSM system 132 may create and/or persist a record of the incident ticket in a database, flat file, distributed filesystem, issue-tracking system, bug-tracking system, and/or another data store providing incident repository 134” (Paragraph 17) and “Classification apparatus 108 uses match scores 112 between each incident ticket and incident categories 114 to assign one or more incident categories 114 to the incident ticket. For example, classification apparatus 108 may assign the incident category 114 with the highest match score to the incident ticket. In another example, classification apparatus 108 and/or another component of the system may display a subset of incident categories 114 with highest match scores 112 (e.g., the three highest-scoring incident categories 114 for a given incident ticket) to a user (e.g., an agent), and the user may select one of the incident categories as the incident category to assign to the incident ticket. Alternatively, the user may override the displayed incident categories 114 with a manual selection of an incident category that is not one of the highest-scoring incident categories 114. After an incident category is selected for an incident ticket, classification apparatus 108 stores a mapping of the incident ticket to the incident category in incident repository 134 and/or another data store” (Paragraph 26). The examiner further notes that the storage of records (i.e. log entries) for each incident ticket (i.e. document) in the set of incident tickets in the incident repository (which can be a database) includes the assigned incident categories (i.e. the claimed unique labels in the label profile) for that incident ticket (i.e. document). The examiner further notes that Rathod teaches “determine performance data for each label in the log entry in the database based on feedback from a client system” as “classification apparatus 108 may assign the incident category 114 with the highest match score to the incident ticket. 
In another example, classification apparatus 108 and/or another component of the system may display a subset of incident categories 114 with highest match scores 112 (e.g., the three highest-scoring incident categories 114 for a given incident ticket) to a user (e.g., an agent), and the user may select one of the incident categories as the incident category to assign to the incident ticket. Alternatively, the user may override the displayed incident categories 114 with a manual selection of an incident category that is not one of the highest-scoring incident categories 114. After an incident category is selected for an incident ticket, classification apparatus 108 stores a mapping of the incident ticket to the incident category in incident repository 134 and/or another data store” (Paragraph 26), “Management apparatus 110 then generates routings 144 of incident tickets to agents and/or groups of agents according to incident categories 114 assigned to the incident tickets. For example, management apparatus 110 may assign each incident ticket to an agent and/or group of agents with experience and/or expertise in handling issues described in the incident ticket. Management apparatus 110 may also update incident repository 134 and/or another data store with the assignment of the ticket to the agent(s)” (Paragraph 27), and “Management apparatus 110 additionally collects feedback 146 related to incident categories 114 and/or routings 144 of the incident tickets, and management apparatus 110 and/or another component of the system updates clusters 142 and/or incident categories 114 based on feedback 146” (Paragraph 28). The examiner further notes that collected feedback (i.e. the claimed feedback) from a client system of an assigned agent amongst multiple agents regarding the assigned categories (i.e. the claimed one or more labels) results in updated categories (i.e. labels) for the incident ticket. Such updated categories (i.e. labels) from an agent teaches the claimed undefined performance data in the broadest reasonable interpretation. The examiner further notes that Rathod teaches “the performance data for the log entry associated with at least one assignee of the complete set of assignees” as “After an incident ticket is received, ITSM system 132 stores the incident ticket in an incident repository 134. For example, ITSM system 132 may create and/or persist a record of the incident ticket in a database, flat file, distributed filesystem, issue-tracking system, bug-tracking system, and/or another data store providing incident repository 134” (Paragraph 17), “Classification apparatus 108 uses match scores 112 between each incident ticket and incident categories 114 to assign one or more incident categories 114 to the incident ticket. For example, classification apparatus 108 may assign the incident category 114 with the highest match score to the incident ticket. In another example, classification apparatus 108 and/or another component of the system may display a subset of incident categories 114 with highest match scores 112 (e.g., the three highest-scoring incident categories 114 for a given incident ticket) to a user (e.g., an agent), and the user may select one of the incident categories as the incident category to assign to the incident ticket. Alternatively, the user may override the displayed incident categories 114 with a manual selection of an incident category that is not one of the highest-scoring incident categories 114. 
After an incident category is selected for an incident ticket, classification apparatus 108 stores a mapping of the incident ticket to the incident category in incident repository 134 and/or another data store” (Paragraph 26), “Management apparatus 110 then generates routings 144 of incident tickets to agents and/or groups of agents according to incident categories 114 assigned to the incident tickets. For example, management apparatus 110 may assign each incident ticket to an agent and/or group of agents with experience and/or expertise in handling issues described in the incident ticket. Management apparatus 110 may also update incident repository 134 and/or another data store with the assignment of the ticket to the agent(s)” (Paragraph 27), and “Management apparatus 110 additionally collects feedback 146 related to incident categories 114 and/or routings 144 of the incident tickets, and management apparatus 110 and/or another component of the system updates clusters 142 and/or incident categories 114 based on feedback 146” (Paragraph 28). The examiner further notes that collected feedback from a client system of an assigned agent (i.e. the claimed at least one assignee) regarding the assigned categories results in updated categories (i.e. labels) for the incident ticket. Such updated categories (i.e. labels) from an agent teaches the claimed undefined performance data in the broadest reasonable interpretation. Moreover, the updated labels (i.e. performance data) as well as the assigned agent (i.e. the claimed at least one assignee) are associated with one another via the storage of such records (i.e. log entries) in the repository 134 (which can be a database). The examiner further notes that Rathod teaches “store the performance data associated with the document and the at least one assignee associated with the performance data in the log entry” as “After an incident ticket is received, ITSM system 132 stores the incident ticket in an incident repository 134. For example, ITSM system 132 may create and/or persist a record of the incident ticket in a database, flat file, distributed filesystem, issue-tracking system, bug-tracking system, and/or another data store providing incident repository 134” (Paragraph 17), “Classification apparatus 108 uses match scores 112 between each incident ticket and incident categories 114 to assign one or more incident categories 114 to the incident ticket. For example, classification apparatus 108 may assign the incident category 114 with the highest match score to the incident ticket. In another example, classification apparatus 108 and/or another component of the system may display a subset of incident categories 114 with highest match scores 112 (e.g., the three highest-scoring incident categories 114 for a given incident ticket) to a user (e.g., an agent), and the user may select one of the incident categories as the incident category to assign to the incident ticket. Alternatively, the user may override the displayed incident categories 114 with a manual selection of an incident category that is not one of the highest-scoring incident categories 114. After an incident category is selected for an incident ticket, classification apparatus 108 stores a mapping of the incident ticket to the incident category in incident repository 134 and/or another data store” (Paragraph 26), “Management apparatus 110 then generates routings 144 of incident tickets to agents and/or groups of agents according to incident categories 114 assigned to the incident tickets. 
For example, management apparatus 110 may assign each incident ticket to an agent and/or group of agents with experience and/or expertise in handling issues described in the incident ticket. Management apparatus 110 may also update incident repository 134 and/or another data store with the assignment of the ticket to the agent(s)” (Paragraph 27), and “Management apparatus 110 additionally collects feedback 146 related to incident categories 114 and/or routings 144 of the incident tickets, and management apparatus 110 and/or another component of the system updates clusters 142 and/or incident categories 114 based on feedback 146” (Paragraph 28). The examiner further notes that collected feedback from a client system of an assigned agent (i.e. the claimed at least one assignee) regarding the assigned categories results in updated categories (i.e. labels) for the incident ticket. Such updated categories (i.e. labels) from an agent teaches the claimed undefined performance data in the broadest reasonable interpretation. Moreover, the updated labels (i.e. performance data) as well as the assigned agent (i.e. the claimed at least one assignee) are stored in records (i.e. log entries) in the repository 134 (which can be a database). The examiner further notes that Rathod teaches “wherein the at least one assignee of the complete set of assignees is associated with the performance data for more than one document in the set of documents” as “After an incident ticket is received, ITSM system 132 stores the incident ticket in an incident repository 134. For example, ITSM system 132 may create and/or persist a record of the incident ticket in a database, flat file, distributed filesystem, issue-tracking system, bug-tracking system, and/or another data store providing incident repository 134” (Paragraph 17), “Classification apparatus 108 uses match scores 112 between each incident ticket and incident categories 114 to assign one or more incident categories 114 to the incident ticket. For example, classification apparatus 108 may assign the incident category 114 with the highest match score to the incident ticket. In another example, classification apparatus 108 and/or another component of the system may display a subset of incident categories 114 with highest match scores 112 (e.g., the three highest-scoring incident categories 114 for a given incident ticket) to a user (e.g., an agent), and the user may select one of the incident categories as the incident category to assign to the incident ticket. Alternatively, the user may override the displayed incident categories 114 with a manual selection of an incident category that is not one of the highest-scoring incident categories 114. After an incident category is selected for an incident ticket, classification apparatus 108 stores a mapping of the incident ticket to the incident category in incident repository 134 and/or another data store” (Paragraph 26), “Management apparatus 110 then generates routings 144 of incident tickets to agents and/or groups of agents according to incident categories 114 assigned to the incident tickets. For example, management apparatus 110 may assign each incident ticket to an agent and/or group of agents with experience and/or expertise in handling issues described in the incident ticket. 
Management apparatus 110 may also update incident repository 134 and/or another data store with the assignment of the ticket to the agent(s)” (Paragraph 27), and “Management apparatus 110 additionally collects feedback 146 related to incident categories 114 and/or routings 144 of the incident tickets, and management apparatus 110 and/or another component of the system updates clusters 142 and/or incident categories 114 based on feedback 146” (Paragraph 28). The examiner further notes that collected feedback from a client system of an assigned agent (i.e. the claimed at least one assignee) regarding the assigned categories results in updated categories (i.e. labels) for the incident ticket. Such an assigned agent can clearly be assigned multiple tickets (i.e. documents) based on the assigned categories (i.e. labels) of those tickets. Moreover, such updated categories (i.e. labels) from an agent teach the claimed undefined performance data in the broadest reasonable interpretation. Moreover, the updated labels (i.e. performance data) as well as the assigned agent (i.e. the claimed at least one assignee) are associated with one another via the storage of such records (i.e. log entries) in the repository 134 (which can be a database) for each ticket (i.e. document). Rathod does not explicitly teach: C, D, & F) hierarchical label profile; D, F, J, & L) hierarchical labels; G, J, & L) hierarchical label. Wang, however, teaches “hierarchical label profile” as “management module 305 may also perform certain operations such as classifying submitted trouble tickets and assigning certain unique identifiers. Management module 305 may also contain a predefined set of event type and root cause classifications which may be utilized for issue prediction as described below. An event type may be a hierarchically based description of the event type that has occurred (e.g., hard disk not providing requested data) and a root cause may be a hierarchically based description of a cause of the event type (e.g., hard disk spindle failure). The predefined set of event type and root cause classifications may be generated or modified by the IT vendor or other authorized entities or persons” (Paragraph 33) and “a label data item, also referred to herein as a label record, may be generated by label generator 370 and stored in label database 340 for each session event that is associated with a filtered log data line that passed filtering step 570. These label data items stored in label database 340 may be utilized in modeling as described below. Each label data item may include the log data lines associated with the session event, even if not all of those log data lines passed the filtration process described above. Each label data item may include information from the session event and trouble ticket associated with that session event such as event type, root cause, IT agent ID, timing information, remediation, etc” (Paragraph 60), “hierarchical labels” as “management module 305 may also perform certain operations such as classifying submitted trouble tickets and assigning certain unique identifiers. Management module 305 may also contain a predefined set of event type and root cause classifications which may be utilized for issue prediction as described below. An event type may be a hierarchically based description of the event type that has occurred (e.g., hard disk not providing requested data) and a root cause may be a hierarchically based description of a cause of the event type (e.g., hard disk spindle failure).
The predefined set of event type and root cause classifications may be generated or modified by the IT vendor or other authorized entities or persons” (Paragraph 33) and “a label data item, also referred to herein as a label record, may be generated by label generator 370 and stored in label database 340 for each session event that is associated with a filtered log data line that passed filtering step 570. These label data items stored in label database 340 may be utilized in modeling as described below. Each label data item may include the log data lines associated with the session event, even if not all of those log data lines passed the filtration process described above. Each label data item may include information from the session event and trouble ticket associated with that session event such as event type, root cause, IT agent ID, timing information, remediation, etc” (Paragraph 60), and “hierarchical label” as “management module 305 may also perform certain operations such as classifying submitted trouble tickets and assigning certain unique identifiers. Management module 305 may also contain a predefined set of event type and root cause classifications which may be utilized for issue prediction as described below. An event type may be a hierarchically based description of the event type that has occurred (e.g., hard disk not providing requested data) and a root cause may be a hierarchically based description of a cause of the event type (e.g., hard disk spindle failure). The predefined set of event type and root cause classifications may be generated or modified by the IT vendor or other authorized entities or persons” (Paragraph 33) and “a label data item, also referred to herein as a label record, may be generated by label generator 370 and stored in label database 340 for each session event that is associated with a filtered log data line that passed filtering step 570. These label data items stored in label database 340 may be utilized in modeling as described below. Each label data item may include the log data lines associated with the session event, even if not all of those log data lines passed the filtration process described above. Each label data item may include information from the session event and trouble ticket associated with that session event such as event type, root cause, IT agent ID, timing information, remediation, etc” (Paragraph 60). The examiner further notes that although the primary reference of Rathod teaches the assigning of categories (i.e. labels) to tickets (i.e. documents) (including receiving a “profile” of such labels), there is no explicit teaching that such categories (i.e. labels) are hierarchical. Nevertheless, the secondary reference of Wang teaches the concept of assigning hierarchical event type and root cause classifications (i.e. examples of hierarchical labels) to tickets (i.e. documents). The combination would result in the categories (i.e. labels) of the tickets of Rathod being hierarchical (including its profile of labels). It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant invention to combine the teachings of the cited references because Wang’s teaching would have allowed Rathod’s system to provide a method that helps in predicting issues for a support ticket, as noted by Wang (Paragraph 61).
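To make the hierarchical-label concept concrete, the following is a minimal Python sketch of how hierarchical event-type and root-cause classifications of the kind Wang describes (Paragraph 33) might be represented and attached to a ticket record. All identifiers and values here are hypothetical illustrations, not code from Wang, Rathod, or the instant application.

    # Illustrative only: a hierarchical label modeled as a root-to-leaf path,
    # attached to a ticket record the way Wang's label data items pair an
    # event type and a root cause with a trouble ticket.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class HierarchicalLabel:
        path: tuple  # e.g. ("hardware", "hard_disk", "spindle_failure")

        def ancestors(self):
            # Every prefix of the path is itself a node (label) in the hierarchy.
            return [self.path[:i] for i in range(1, len(self.path) + 1)]

    @dataclass
    class TicketRecord:
        ticket_id: str
        event_type: HierarchicalLabel
        root_cause: HierarchicalLabel
        agent_id: Optional[str] = None  # the assignee, set when the ticket is routed

    ticket = TicketRecord(
        ticket_id="INC-1001",
        event_type=HierarchicalLabel(("hardware", "hard_disk", "not_providing_data")),
        root_cause=HierarchicalLabel(("hardware", "hard_disk", "spindle_failure")),
    )
    print(ticket.root_cause.ancestors())  # three nested labels, most general first

Under this reading, Rathod’s flat incident categories would simply become label paths of depth greater than one.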
Rathod and Wang do not explicitly teach: L) perform a similarity search between a vector for a label structure of a new document and a vector of the performance data for labels associated with each assignee in an identified set of assignees, the identified set comprising one or more of the assignees in the complete set of assignees, to select an assignee for processing the document. Mujumdar, however, teaches “perform a similarity search between a vector for a label structure of a new document and a vector of the performance data for labels associated with each assignee in an identified set of assignees, the identified set comprising one or more of the assignees in the complete set of assignees, to select an assignee for processing the document” as “The ticket profile subsystem 110 can use a classification model 114 to assess the complexity of the received annotated tickets 102. As such, the ticket profile subsystem 110 can assess the complexity of the annotated ticket 104 by applying the classification model 114 to the annotated ticket 104. More specifically in some embodiments, as is illustrated in FIG. 2A, the ticket profile subsystem 110 can include an evaluation component 210 that can generate a complexity attribute 214 by applying the classification model 114 to the annotated ticket 104. The complexity attribute 214 is denoted by C in FIG. 2A. The classification model 114 can perform a multi-class classification task in response to being applied to the annotated ticket 104. The multi-class classification task can include discerning a particular difficulty category among a group of multiple difficulty categories. The group of multiple difficulty categories can be ordered and can be represented by an ordered set of parameters. Those parameters can be numerical, alphabetical, or alphanumeric. In some cases, the ordered set of parameters can be an ordered set of integers or real numbers. In one example, the number of difficulty categories in that group can be five, represented by the following integer numbers: 1, 2, 3, 4, and 5, where 1 represents the least difficulty and 5 represents the greatest difficulty). The number of difficulty categories that constitute the group of difficulty categories is configurable, and, thus, more or fewer than five difficulty categories can be contemplated” (Paragraph 33), “The ticket profile subsystem 110 can use the assessed complexity of annotated tickets included in the received annotated tickets 102 in order to generate respective ticket profiles 116, as is shown in FIG. 1” (Paragraph 37), “The ticket profile subsystem 110 can generate tickets profiles for other annotated tickets included in the received annotated tickets 102 as is described above. As a result, in some embodiments, each one of those ticket profiles can be included in the ticket profiles 116 and can be embodied in a skill complexity vector” (Paragraph 40), “the agent profile subsystem 120 can generate multiple agent profiles 128, including an agent profile 128. Each one of the agent profiles 126 characterizes a respective agent in one or many of those terms. To that end, the agent profile subsystem 120 can be functionally coupled to a performance data repository 130 and to a skillset data repository 140. The performance data repository 130 can include, in some cases, data representative of historical performance of each agent in the pool of agents with respect to a group of skills. 
The skillset data repository 140 can include data representative of skills available to an agent and proficiency in those skills. The agent profile subsystem 120 can obtain first data from the performance data repository 130 for the agent and also can obtain second data from the skillset data repository 140 for the agent. The agent profile subsystem 120 can then generate the agent profile 128 using the first data and the second data. The agent profile subsystem 120 can generate other agent profiles for respective agents by obtaining such first data and second data for those other agents” (Paragraph 41), “the ticket profile and the agent profile are embodied in respective vectors V and V′ having a same dimension d. The magnitude of d can be the number of skills collective available in a pool of agents. For instance, the ticket profile can be embodied in the ticket profile 118 and the agent profile can be embodied in the agent profile 128. As described herein, the ticket profile can be embodied in a SCV and the agent profile can be embodied in a SPV, where the SCV and the SPV have a same dimension. Thus, in those cases, the similarity function ƒ can be embodied in the cosine similarity among V and V′. The similarity function ƒ also can be embodied in another type of function, such as Minkowsky distance or Manhattan distance” (Paragraph 54), and “the ticket-agent matching subsystem 150 can generate list of ticket-agent matches. Each match can be represented by a pair including a ticket ID identifying a ticket and an agent ID identifying an agent in a pool of agents. The list can be referred to as a ticket-agent match list 160 (represented by match list 160 in FIG. 1)” (Paragraph 59). The examiner further notes that the secondary reference of Mujumdar teaches the concept of executing a similarity search of vectorized ticket profile(s) (i.e. new document(s)) (which include classifications (i.e. labels)) against vectorized agent profiles (which include performance data) to identify suitable agent(s) to process the ticket(s). The combination would result in the hierarchical labels of Wang and the performance data for the labels of Rathod being vectorized for performing the similarity search to identify suitable agents. It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant invention to combine the teachings of the cited references because Mujumdar’s teaching would have allowed the systems of Rathod and Wang to provide a method for improving the performance of ticket resolution systems, as noted by Mujumdar (Paragraph 31). Rathod, Wang, and Mujumdar do not explicitly teach: J) generate a weight for each hierarchical label in a complete set of the hierarchical labels for each assignee in the complete set of assignees. Gelbukh, however, teaches “generate a weight for each hierarchical label in a complete set of the hierarchical labels for each assignee in the complete set of assignees” as “The other part of the dictionary is the topic tree, which organizes the topics, as integral units, into a hierarchy or, more generally, a lattice (since some topics can belong to several nodes of the hierarchy)” (Page 134, Section 2), “Instead of simple lists of words, some numeric weights can be used by the algorithm to define the quantitative measures of relevance of the words for topics and the measure of importance of the nodes of the hierarchy.
Thus, there are two kind of such weights: the weights of links in the hierarchy and the weights associated with the individual nodes” (Page 135, Section 3), “The first type of weights is associated with the links between words and topics and between the nodes in the tree (actually, the former type is a kind of the latter since the individual words can be considered as terminal tree nodes related to the corresponding topic). For example, if the document mentions the word carburetor, is it about cars? And the word wheel? Intuitively, the contribution of the word carburetor into the topic cars is more than that of the word wheel; thus, the link between wheel and cars is assigned a less weight. The algorithm of classification takes into account these weights when compiling the accumulated relevance of the topics” (Page 135, Section 4), and “An interesting application of the method is classification of the documents by similarity with respect to a given topic. Clearly, a document mentioning the use of animals for military purposes and the document mentioning feeding of animals are similar (both mention animals) from the point of view of a biologist, but not from the point of view of a military man they are very different. The comparison is made on the basis of the weights of the topics for the two documents” (Page 137, Section 6). The examiner further notes that the secondary reference of Gelbukh teaches the concept of weighting hierarchical classifications/topics (i.e. labels) that are subsequently assigned to documents. The combination would result in weighting the hierarchical labels of Wang and the labels of Rathod for each assignee of the complete set of assignees of Rathod. It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant invention to combine the teachings of the cited references because Gelbukh’s teaching would have allowed the systems of Rathod, Wang, and Mujumdar to provide a method for optimally assigning classifications to documents in order to identify their principal topics, as noted by Gelbukh (Page 133, Section 1 & Page 137, Section 6). Regarding claim 2, Rathod further teaches an apparatus comprising: A) the logic circuitry coupled with the memory to further add a subsequently received document to the set of documents (Paragraphs 25 and 41, Figure 1); and B) create another log entry for the subsequently received document (Paragraphs 17, 25, 26, and 41, Figure 1). The examiner notes that Rathod teaches “the logic circuitry coupled with the memory to further add a subsequently received document to the set of documents” as “When a new incident ticket is received by ITSM system 132 and/or in incident repository 134, classification apparatus 108 may perform stemming and/or removal of stop words and infrequent words from the incident ticket” (Paragraph 25) and “Operations 202-210 may be repeated for remaining incident tickets (operation 212). For example, incident categories may be assigned to each new incident ticket received through the incident management system, and incident tickets may be routed within the incident management system according to the assigned incident categories to streamline resolution of issues associated with the incident tickets. Feedback related to the assignments may additionally be used to improve the accuracy of the incident categories and/or assignments of subsequent incident tickets to the incident categories” (Paragraph 41). The examiner further notes that new ticket(s) (i.e.
the claimed subsequently received document) are added to the repository with the other tickets (i.e. the claimed set of documents). The examiner further notes that Rathod teaches “create another log entry for the subsequently received document” as “After an incident ticket is received, ITSM system 132 stores the incident ticket in an incident repository 134. For example, ITSM system 132 may create and/or persist a record of the incident ticket in a database, flat file, distributed filesystem, issue-tracking system, bug-tracking system, and/or another data store providing incident repository 134” (Paragraph 17), “When a new incident ticket is received by ITSM system 132 and/or in incident repository 134, classification apparatus 108 may perform stemming and/or removal of stop words and infrequent words from the incident ticket” (Paragraph 25), “Classification apparatus 108 uses match scores 112 between each incident ticket and incident categories 114 to assign one or more incident categories 114 to the incident ticket. For example, classification apparatus 108 may assign the incident category 114 with the highest match score to the incident ticket. In another example, classification apparatus 108 and/or another component of the system may display a subset of incident categories 114 with highest match scores 112 (e.g., the three highest-scoring incident categories 114 for a given incident ticket) to a user (e.g., an agent), and the user may select one of the incident categories as the incident category to assign to the incident ticket. Alternatively, the user may override the displayed incident categories 114 with a manual selection of an incident category that is not one of the highest-scoring incident categories 114. After an incident category is selected for an incident ticket, classification apparatus 108 stores a mapping of the incident ticket to the incident category in incident repository 134 and/or another data store” (Paragraph 26), and “Operations 202-210 may be repeated for remaining incident tickets (operation 212). For example, incident categories may be assigned to each new incident ticket received through the incident management system, and incident tickets may be routed within the incident management system according to the assigned incident categories to streamline resolution of issues associated with the incident tickets. Feedback related to the assignments may additionally be used to improve the accuracy of the incident categories and/or assignments of subsequent incident tickets to the incident categories” (Paragraph 41). The examiner further notes that the storage of records (i.e. log entries) for a new ticket (i.e. the subsequently received document) in the incident repository (which can be a database) teaches the claimed creating. Regarding claim 3, Rathod further teaches an apparatus comprising: A) the logic circuitry coupled with the memory to further add an indication or value for each label associated with the subsequently received document in one or more label fields of the log entry (Paragraphs 17, 25, 26, and 41, Figure 1). The examiner notes that Rathod teaches “the logic circuitry coupled with the memory to further add an indication or value for each label associated with the subsequently received document in one or more label fields of the log entry” as “After an incident ticket is received, ITSM system 132 stores the incident ticket in an incident repository 134. 
For example, ITSM system 132 may create and/or persist a record of the incident ticket in a database, flat file, distributed filesystem, issue-tracking system, bug-tracking system, and/or another data store providing incident repository 134” (Paragraph 17), “When a new incident ticket is received by ITSM system 132 and/or in incident repository 134, classification apparatus 108 may perform stemming and/or removal of stop words and infrequent words from the incident ticket” (Paragraph 25), “Classification apparatus 108 uses match scores 112 between each incident ticket and incident categories 114 to assign one or more incident categories 114 to the incident ticket. For example, classification apparatus 108 may assign the incident category 114 with the highest match score to the incident ticket. In another example, classification apparatus 108 and/or another component of the system may display a subset of incident categories 114 with highest match scores 112 (e.g., the three highest-scoring incident categories 114 for a given incident ticket) to a user (e.g., an agent), and the user may select one of the incident categories as the incident category to assign to the incident ticket. Alternatively, the user may override the displayed incident categories 114 with a manual selection of an incident category that is not one of the highest-scoring incident categories 114. After an incident category is selected for an incident ticket, classification apparatus 108 stores a mapping of the incident ticket to the incident category in incident repository 134 and/or another data store” (Paragraph 26), and “Operations 202-210 may be repeated for remaining incident tickets (operation 212). For example, incident categories may be assigned to each new incident ticket received through the incident management system, and incident tickets may be routed within the incident management system according to the assigned incident categories to streamline resolution of issues associated with the incident tickets. Feedback related to the assignments may additionally be used to improve the accuracy of the incident categories and/or assignments of subsequent incident tickets to the incident categories” (Paragraph 41). The examiner further notes that the storage of records (i.e. log entries) for a new ticket (i.e. the subsequently received document) in the incident repository (which can be a database) includes the assigned incident categories (i.e. labels). Such stored assigned incident categories (i.e. labels) in the database records (which entails a field for such assigned incident categories (i.e. labels)) teach the claimed indication or value in the broadest reasonable interpretation. Rathod does not explicitly teach: A) hierarchical label. Wang, however, teaches “hierarchical label” as “management module 305 may also perform certain operations such as classifying submitted trouble tickets and assigning certain unique identifiers. Management module 305 may also contain a predefined set of event type and root cause classifications which may be utilized for issue prediction as described below. An event type may be a hierarchically based description of the event type that has occurred (e.g., hard disk not providing requested data) and a root cause may be a hierarchically based description of a cause of the event type (e.g., hard disk spindle failure). 
The predefined set of event type and root cause classifications may be generated or modified by the IT vendor or other authorized entities or persons” (Paragraph 33) and “a label data item, also referred to herein as a label record, may be generated by label generator 370 and stored in label database 340 for each session event that is associated with a filtered log data line that passed filtering step 570. These label data items stored in label database 340 may be utilized in modeling as described below. Each label data item may include the log data lines associated with the session event, even if not all of those log data lines passed the filtration process described above. Each label data item may include information from the session event and trouble ticket associated with that session event such as event type, root cause, IT agent ID, timing information, remediation, etc” (Paragraph 60). The examiner further notes that although the primary reference of Rathod teaches the assigning of categories (i.e. labels) to tickets (i.e. documents), there is no explicit teaching that such categories (i.e. labels) are hierarchical. Nevertheless, the secondary reference of Wang teaches the concept of assigning hierarchical event type and root cause classifications (i.e. examples of hierarchical labels) to tickets (i.e. documents). The combination would result in the categories (i.e. labels) of the tickets of Rathod being hierarchical (including its database fields of labels). It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant invention to combine the teachings of the cited references because Wang’s teaching would have allowed Rathod’s system to provide a method that helps in predicting issues for a support ticket, as noted by Wang (Paragraph 61). Regarding claim 4, Rathod further teaches an apparatus comprising: A) wherein the complete set of assignees comprises each assignee that may be assigned subsequently received documents for processing (Paragraphs 25, 27, and 41, Figure 1). The examiner notes that Rathod teaches “wherein the complete set of assignees comprises each assignee that may be assigned subsequently received documents for processing” as “When a new incident ticket is received by ITSM system 132 and/or in incident repository 134, classification apparatus 108 may perform stemming and/or removal of stop words and infrequent words from the incident ticket” (Paragraph 25), “Management apparatus 110 then generates routings 144 of incident tickets to agents and/or groups of agents according to incident categories 114 assigned to the incident tickets. For example, management apparatus 110 may assign each incident ticket to an agent and/or group of agents with experience and/or expertise in handling issues described in the incident ticket. Management apparatus 110 may also update incident repository 134 and/or another data store with the assignment of the ticket to the agent(s)” (Paragraph 27), and “Operations 202-210 may be repeated for remaining incident tickets (operation 212). For example, incident categories may be assigned to each new incident ticket received through the incident management system, and incident tickets may be routed within the incident management system according to the assigned incident categories to streamline resolution of issues associated with the incident tickets.
Feedback related to the assignments may additionally be used to improve the accuracy of the incident categories and/or assignments of subsequent incident tickets to the incident categories” (Paragraph 41). The examiner further notes that new ticket(s) (i.e. the claimed subsequently received documents) are assigned to agents/groups of agents (i.e. the claimed complete set of assignees). Regarding claim 7, Rathod further teaches an apparatus comprising: A) wherein the performance data associated with the document may comprise one or more values based on feedback about performance by the at least one assignee of one or more tasks to resolve the document associated with the log entry (Paragraphs 17 and 26-28). The examiner notes that Rathod teaches “wherein the performance data associated with the document may comprise one or more values based on feedback about performance by the at least one assignee of one or more tasks to resolve the document associated with the log entry” as “classification apparatus 108 may assign the incident category 114 with the highest match score to the incident ticket. In another example, classification apparatus 108 and/or another component of the system may display a subset of incident categories 114 with highest match scores 112 (e.g., the three highest-scoring incident categories 114 for a given incident ticket) to a user (e.g., an agent), and the user may select one of the incident categories as the incident category to assign to the incident ticket. Alternatively, the user may override the displayed incident categories 114 with a manual selection of an incident category that is not one of the highest-scoring incident categories 114. After an incident category is selected for an incident ticket, classification apparatus 108 stores a mapping of the incident ticket to the incident category in incident repository 134 and/or another data store” (Paragraph 26), “Management apparatus 110 then generates routings 144 of incident tickets to agents and/or groups of agents according to incident categories 114 assigned to the incident tickets. For example, management apparatus 110 may assign each incident ticket to an agent and/or group of agents with experience and/or expertise in handling issues described in the incident ticket. Management apparatus 110 may also update incident repository 134 and/or another data store with the assignment of the ticket to the agent(s)” (Paragraph 27), and “Management apparatus 110 additionally collects feedback 146 related to incident categories 114 and/or routings 144 of the incident tickets, and management apparatus 110 and/or another component of the system updates clusters 142 and/or incident categories 114 based on feedback 146” (Paragraph 28). The examiner further notes that due to the diction of “may” (See “wherein the performance data associated with the document may comprise”), the claimed “one or more values based on feedback about performance by the at least one assignee of one or more tasks to resolve the document associated with the log entry” is deemed as an optional limitation in the broadest reasonable interpretation. Regarding claim 9, Rathod further teaches a non-transitory storage medium comprising: B) wherein at least one assignee of the complete set of assignees is associated with the performance data for more than one document in the set of documents (Paragraphs 17 and 26-28). 
The examiner notes that Rathod teaches “wherein at least one assignee of the complete set of assignees is associated with the performance data for more than one document in the set of documents” as “After an incident ticket is received, ITSM system 132 stores the incident ticket in an incident repository 134. For example, ITSM system 132 may create and/or persist a record of the incident ticket in a database, flat file, distributed filesystem, issue-tracking system, bug-tracking system, and/or another data store providing incident repository 134” (Paragraph 17), “Classification apparatus 108 uses match scores 112 between each incident ticket and incident categories 114 to assign one or more incident categories 114 to the incident ticket. For example, classification apparatus 108 may assign the incident category 114 with the highest match score to the incident ticket. In another example, classification apparatus 108 and/or another component of the system may display a subset of incident categories 114 with highest match scores 112 (e.g., the three highest-scoring incident categories 114 for a given incident ticket) to a user (e.g., an agent), and the user may select one of the incident categories as the incident category to assign to the incident ticket. Alternatively, the user may override the displayed incident categories 114 with a manual selection of an incident category that is not one of the highest-scoring incident categories 114. After an incident category is selected for an incident ticket, classification apparatus 108 stores a mapping of the incident ticket to the incident category in incident repository 134 and/or another data store” (Paragraph 26), “Management apparatus 110 then generates routings 144 of incident tickets to agents and/or groups of agents according to incident categories 114 assigned to the incident tickets. For example, management apparatus 110 may assign each incident ticket to an agent and/or group of agents with experience and/or expertise in handling issues described in the incident ticket. Management apparatus 110 may also update incident repository 134 and/or another data store with the assignment of the ticket to the agent(s)” (Paragraph 27), and “Management apparatus 110 additionally collects feedback 146 related to incident categories 114 and/or routings 144 of the incident tickets, and management apparatus 110 and/or another component of the system updates clusters 142 and/or incident categories 114 based on feedback 146” (Paragraph 28). The examiner further notes that collected feedback from a client system of an assigned agent (i.e. the claimed at least one assignee) regarding the assigned categories results in updated categories (i.e. labels) for the incident ticket. Such an assigned agent can clearly be assigned multiple tickets (i.e. documents) based on the assigned categories (i.e. labels) of those tickets. Moreover, such updated categories (i.e. labels) from an agent teach the claimed undefined performance data in the broadest reasonable interpretation. Moreover, the updated labels (i.e. performance data) as well as the assigned agent (i.e. the claimed at least one assignee) are associated with one another via the storage of such records (i.e. log entries) in the repository 134 (which can be a database) for each ticket (i.e. document).
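As a purely illustrative sketch of this mapping (hypothetical names and feedback values, not drawn from any cited reference), per-label feedback recorded in ticket records can be rolled up so that a single agent accumulates performance data across many documents:

    # Illustrative only: one agent (assignee) accumulates performance data
    # across many ticket records (log entries), keyed by label.
    from collections import defaultdict

    log_entries = [
        {"ticket": "INC-1001", "agent": "agent_7", "label": "hardware/disk", "feedback": 0.9},
        {"ticket": "INC-1002", "agent": "agent_7", "label": "hardware/disk", "feedback": 0.7},
        {"ticket": "INC-1003", "agent": "agent_7", "label": "network/vpn", "feedback": 0.4},
    ]

    # performance[agent][label] -> feedback values collected across documents
    performance = defaultdict(lambda: defaultdict(list))
    for entry in log_entries:
        performance[entry["agent"]][entry["label"]].append(entry["feedback"])

    # agent_7 is thus associated with performance data for more than one
    # document in the set, which is the reading the rejection relies on.
    for label, values in performance["agent_7"].items():
        print(label, sum(values) / len(values))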
Rathod, Wang, and Mujumdar do not explicitly teach: A) wherein the operations further comprise operations to generate a weight for each hierarchical label in a complete set of the hierarchical labels for each assignee in a complete set of assignees. Gelbukh, however, teaches “wherein the operations further comprise operations to generate a weight for each hierarchical label in a complete set of the hierarchical labels for each assignee in a complete set of assignees” as “The other part of the dictionary is the topic tree, which organizes the topics, as integral units, into a hierarchy or, more generally, a lattice (since some topics can belong to several nodes of the hierarchy)” (Page 134, Section 2), “Instead of simple lists of words, some numeric weights can be used by the algorithm to define the quantitative measures of relevance of the words for topics and the measure of importance of the nodes of the hierarchy. Thus, there are two kind of such weights: the weights of links in the hierarchy and the weights associated with the individual nodes” (Page 135, Section 3), “The first type of weights is associated with the links between words and topics and between the nodes in the tree (actually, the former type is a kind of the latter since the individual words can be considered as terminal tree nodes related to the corresponding topic). For example, if the document mentions the word carburetor, is it about cars? And the word wheel? Intuitively, the contribution of the word carburetor into the topic cars is more than that of the word wheel; thus, the link between wheel and cars is assigned a less weight. The algorithm of classification takes into account these weights when compiling the accumulated relevance of the topics” (Page 135, Section 4), and “An interesting application of the method is classification of the documents by similarity with respect to a given topic. Clearly, a document mentioning the use of animals for military purposes and the document mentioning feeding of animals are similar (both mention animals) from the point of view of a biologist, but not from the point of view of a military man they are very different. The comparison is made on the basis of the weights of the topics for the two documents” (Page 137, Section 6). The examiner further notes that the secondary reference of Gelbukh teaches the concept of weighting hierarchical classifications/topics (i.e. labels) that are subsequently assigned to documents. The combination would result in weighting the hierarchical labels of Wang and the labels of Rathod for each assignee of the complete set of assignees of Rathod. It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant invention to combine the teachings of the cited references because Gelbukh’s teaching would have allowed the systems of Rathod, Wang, and Mujumdar to provide a method for optimally assigning classifications to documents in order to identify their principal topics, as noted by Gelbukh (Page 133, Section 1 & Page 137, Section 6). Regarding claim 10, Rathod further teaches a non-transitory storage medium comprising: A) wherein the complete set of assignees comprises each assignee that may be assigned subsequently received documents for processing (Paragraphs 25, 27, and 41, Figure 1).
The examiner notes that Rathod teaches “wherein the complete set of assignees comprises each assignee that may be assigned subsequently received documents for processing” as “When a new incident ticket is received by ITSM system 132 and/or in incident repository 134, classification apparatus 108 may perform stemming and/or removal of stop words and infrequent words from the incident ticket” (Paragraph 25), “Management apparatus 110 then generates routings 144 of incident tickets to agents and/or groups of agents according to incident categories 114 assigned to the incident tickets. For example, management apparatus 110 may assign each incident ticket to an agent and/or group of agents with experience and/or expertise in handling issues described in the incident ticket. Management apparatus 110 may also update incident repository 134 and/or another data store with the assignment of the ticket to the agent(s)” (Paragraph 27), and “Operations 202-210 may be repeated for remaining incident tickets (operation 212). For example, incident categories may be assigned to each new incident ticket received through the incident management system, and incident tickets may be routed within the incident management system according to the assigned incident categories to streamline resolution of issues associated with the incident tickets. Feedback related to the assignments may additionally be used to improve the accuracy of the incident categories and/or assignments of subsequent incident tickets to the incident categories” (Paragraph 41). The examiner further notes that new ticket(s) (i.e. the claimed subsequently received documents) are assigned to agents/groups of agents (i.e. the claimed complete set of assignees). Regarding claim 15, Rathod teaches a method comprising: A) instructions, which when executed by a processor, cause the processor to perform operations, the operations to: receive a label profile for a set of documents (Paragraphs 22 and 26); B) the label profile comprising a predicted set of labels associated with each document in the set of documents (Paragraphs 22 and 26); C) for each document in the set of documents: creating a log entry in a database for a document (Paragraphs 17 and 26); D) the log entry comprising unique labels identified in the label profile for the document (Paragraphs 17 and 26); E) determining performance data for each label in the log entry in the database based on feedback from a client system (Paragraphs 26-28); and F) adding the performance data associated the document and at least one assignee associated with the performance data in the log entry (Paragraphs 17 and 26-28). The examiner notes that Rathod teaches “receiving a label profile for a set of documents” as “categorization apparatus 102 and/or another component of the system uses clusters 142 of related words to generate incident categories 114 to which the incident tickets can be assigned” (Paragraph 22) and “Classification apparatus 108 uses match scores 112 between each incident ticket and incident categories 114 to assign one or more incident categories 114 to the incident ticket. For example, classification apparatus 108 may assign the incident category 114 with the highest match score to the incident ticket. 
In another example, classification apparatus 108 and/or another component of the system may display a subset of incident categories 114 with highest match scores 112 (e.g., the three highest-scoring incident categories 114 for a given incident ticket) to a user (e.g., an agent), and the user may select one of the incident categories as the incident category to assign to the incident ticket. Alternatively, the user may override the displayed incident categories 114 with a manual selection of an incident category that is not one of the highest-scoring incident categories 114. After an incident category is selected for an incident ticket, classification apparatus 108 stores a mapping of the incident ticket to the incident category in incident repository 134 and/or another data store” (Paragraph 26). The examiner further notes that the assigned generated categories (i.e. a profile of labels) for each incident ticket (i.e. a document) in the set of incident tickets (i.e. a set of documents) are received for subsequent processing. The examiner further notes that Rathod teaches “the label profile comprising a predicted set of labels associated with each document in the set of documents” as “categorization apparatus 102 and/or another component of the system uses clusters 142 of related words to generate incident categories 114 to which the incident tickets can be assigned” (Paragraph 22) and “Classification apparatus 108 uses match scores 112 between each incident ticket and incident categories 114 to assign one or more incident categories 114 to the incident ticket. For example, classification apparatus 108 may assign the incident category 114 with the highest match score to the incident ticket. In another example, classification apparatus 108 and/or another component of the system may display a subset of incident categories 114 with highest match scores 112 (e.g., the three highest-scoring incident categories 114 for a given incident ticket) to a user (e.g., an agent), and the user may select one of the incident categories as the incident category to assign to the incident ticket. Alternatively, the user may override the displayed incident categories 114 with a manual selection of an incident category that is not one of the highest-scoring incident categories 114. After an incident category is selected for an incident ticket, classification apparatus 108 stores a mapping of the incident ticket to the incident category in incident repository 134 and/or another data store” (Paragraph 26). The examiner further notes that the assigned generated categories (i.e. a profile of labels) for each incident ticket (i.e. a document) in the set of incident tickets (i.e. a set of documents) are predicted via classification apparatus 108. The examiner further notes that Rathod teaches “for each document in the set of documents: creating a log entry in a database for a document” as “After an incident ticket is received, ITSM system 132 stores the incident ticket in an incident repository 134. For example, ITSM system 132 may create and/or persist a record of the incident ticket in a database, flat file, distributed filesystem, issue-tracking system, bug-tracking system, and/or another data store providing incident repository 134” (Paragraph 17) and “Classification apparatus 108 uses match scores 112 between each incident ticket and incident categories 114 to assign one or more incident categories 114 to the incident ticket. 
For example, classification apparatus 108 may assign the incident category 114 with the highest match score to the incident ticket. In another example, classification apparatus 108 and/or another component of the system may display a subset of incident categories 114 with highest match scores 112 (e.g., the three highest-scoring incident categories 114 for a given incident ticket) to a user (e.g., an agent), and the user may select one of the incident categories as the incident category to assign to the incident ticket. Alternatively, the user may override the displayed incident categories 114 with a manual selection of an incident category that is not one of the highest-scoring incident categories 114. After an incident category is selected for an incident ticket, classification apparatus 108 stores a mapping of the incident ticket to the incident category in incident repository 134 and/or another data store” (Paragraph 26). The examiner further notes that the storage of records (i.e. log entries) for each incident ticket (i.e. document) in the set of incident tickets (i.e. the set of documents) in the incident repository (which can be a database) teaches the claimed creating. The examiner further notes that Rathod teaches “the log entry comprising unique labels identified in the label profile for the document” as “After an incident ticket is received, ITSM system 132 stores the incident ticket in an incident repository 134. For example, ITSM system 132 may create and/or persist a record of the incident ticket in a database, flat file, distributed filesystem, issue-tracking system, bug-tracking system, and/or another data store providing incident repository 134” (Paragraph 17) and “Classification apparatus 108 uses match scores 112 between each incident ticket and incident categories 114 to assign one or more incident categories 114 to the incident ticket. For example, classification apparatus 108 may assign the incident category 114 with the highest match score to the incident ticket. In another example, classification apparatus 108 and/or another component of the system may display a subset of incident categories 114 with highest match scores 112 (e.g., the three highest-scoring incident categories 114 for a given incident ticket) to a user (e.g., an agent), and the user may select one of the incident categories as the incident category to assign to the incident ticket. Alternatively, the user may override the displayed incident categories 114 with a manual selection of an incident category that is not one of the highest-scoring incident categories 114. After an incident category is selected for an incident ticket, classification apparatus 108 stores a mapping of the incident ticket to the incident category in incident repository 134 and/or another data store” (Paragraph 26). The examiner further notes that the storage of records (i.e. log entries) for each incident ticket (i.e. document) in the set of incident tickets in the incident repository (which can be a database) includes the assigned incident categories (i.e. the claimed unique labels in the label profile) for that incident ticket (i.e. document). The examiner further notes that Rathod teaches “determining performance data for each label in the log entry in the database based on feedback from a client system” as “classification apparatus 108 may assign the incident category 114 with the highest match score to the incident ticket.
In another example, classification apparatus 108 and/or another component of the system may display a subset of incident categories 114 with highest match scores 112 (e.g., the three highest-scoring incident categories 114 for a given incident ticket) to a user (e.g., an agent), and the user may select one of the incident categories as the incident category to assign to the incident ticket. Alternatively, the user may override the displayed incident categories 114 with a manual selection of an incident category that is not one of the highest-scoring incident categories 114. After an incident category is selected for an incident ticket, classification apparatus 108 stores a mapping of the incident ticket to the incident category in incident repository 134 and/or another data store” (Paragraph 26), “Management apparatus 110 then generates routings 144 of incident tickets to agents and/or groups of agents according to incident categories 114 assigned to the incident tickets. For example, management apparatus 110 may assign each incident ticket to an agent and/or group of agents with experience and/or expertise in handling issues described in the incident ticket. Management apparatus 110 may also update incident repository 134 and/or another data store with the assignment of the ticket to the agent(s)” (Paragraph 27), and “Management apparatus 110 additionally collects feedback 146 related to incident categories 114 and/or routings 144 of the incident tickets, and management apparatus 110 and/or another component of the system updates clusters 142 and/or incident categories 114 based on feedback 146” (Paragraph 28). The examiner further notes that collected feedback (i.e. the claimed feedback) from a client system of an assigned agent amongst multiple agents regarding the assigned categories (i.e. the claimed one or more labels) results in updated categories (i.e. labels) for the incident ticket. Such updated categories (i.e. labels) from an agent teach the claimed undefined performance data in the broadest reasonable interpretation. The examiner further notes that Rathod teaches “adding the performance data associated the document and at least one assignee associated with the performance data in the log entry” as “After an incident ticket is received, ITSM system 132 stores the incident ticket in an incident repository 134. For example, ITSM system 132 may create and/or persist a record of the incident ticket in a database, flat file, distributed filesystem, issue-tracking system, bug-tracking system, and/or another data store providing incident repository 134” (Paragraph 17), “Classification apparatus 108 uses match scores 112 between each incident ticket and incident categories 114 to assign one or more incident categories 114 to the incident ticket. For example, classification apparatus 108 may assign the incident category 114 with the highest match score to the incident ticket. In another example, classification apparatus 108 and/or another component of the system may display a subset of incident categories 114 with highest match scores 112 (e.g., the three highest-scoring incident categories 114 for a given incident ticket) to a user (e.g., an agent), and the user may select one of the incident categories as the incident category to assign to the incident ticket. Alternatively, the user may override the displayed incident categories 114 with a manual selection of an incident category that is not one of the highest-scoring incident categories 114.
After an incident category is selected for an incident ticket, classification apparatus 108 stores a mapping of the incident ticket to the incident category in incident repository 134 and/or another data store” (Paragraph 26), “Management apparatus 110 then generates routings 144 of incident tickets to agents and/or groups of agents according to incident categories 114 assigned to the incident tickets. For example, management apparatus 110 may assign each incident ticket to an agent and/or group of agents with experience and/or expertise in handling issues described in the incident ticket. Management apparatus 110 may also update incident repository 134 and/or another data store with the assignment of the ticket to the agent(s)” (Paragraph 27), and “Management apparatus 110 additionally collects feedback 146 related to incident categories 114 and/or routings 144 of the incident tickets, and management apparatus 110 and/or another component of the system updates clusters 142 and/or incident categories 114 based on feedback 146” (Paragraph 28). The examiner further notes that collected feedback from a client system of an assigned agent (i.e. the claimed at least one assignee) regarding the assigned categories results in updated categories (i.e. labels) for the incident ticket. Such updated categories (i.e. labels) from an agent teach the claimed undefined performance data in the broadest reasonable interpretation. Moreover, the updated labels (i.e. performance data) as well as the assigned agent (i.e. the claimed at least one assignee) are stored in records (i.e. log entries) in the repository 134 (which can be a database). Rathod does not explicitly teach: A, B, & D) hierarchical label profile; B, D, G, & H) hierarchical labels; E, G, & H) hierarchical label. Wang, however, teaches “hierarchical label profile” as “management module 305 may also perform certain operations such as classifying submitted trouble tickets and assigning certain unique identifiers. Management module 305 may also contain a predefined set of event type and root cause classifications which may be utilized for issue prediction as described below. An event type may be a hierarchically based description of the event type that has occurred (e.g., hard disk not providing requested data) and a root cause may be a hierarchically based description of a cause of the event type (e.g., hard disk spindle failure). The predefined set of event type and root cause classifications may be generated or modified by the IT vendor or other authorized entities or persons” (Paragraph 33) and “a label data item, also referred to herein as a label record, may be generated by label generator 370 and stored in label database 340 for each session event that is associated with a filtered log data line that passed filtering step 570. These label data items stored in label database 340 may be utilized in modeling as described below. Each label data item may include the log data lines associated with the session event, even if not all of those log data lines passed the filtration process described above. Each label data item may include information from the session event and trouble ticket associated with that session event such as event type, root cause, IT agent ID, timing information, remediation, etc” (Paragraph 60), “hierarchical labels” as “management module 305 may also perform certain operations such as classifying submitted trouble tickets and assigning certain unique identifiers.
Management module 305 may also contain a predefined set of event type and root cause classifications which may be utilized for issue prediction as described below. An event type may be a hierarchically based description of the event type that has occurred (e.g., hard disk not providing requested data) and a root cause may be a hierarchically based description of a cause of the event type (e.g., hard disk spindle failure). The predefined set of event type and root cause classifications may be generated or modified by the IT vendor or other authorized entities or persons” (Paragraph 33) and “a label data item, also referred to herein as a label record, may be generated by label generator 370 and stored in label database 340 for each session event that is associated with a filtered log data line that passed filtering step 570. These label data items stored in label database 340 may be utilized in modeling as described below. Each label data item may include the log data lines associated with the session event, even if not all of those log data lines passed the filtration process described above. Each label data item may include information from the session event and trouble ticket associated with that session event such as event type, root cause, IT agent ID, timing information, remediation, etc” (Paragraph 60), and “hierarchical label” as “management module 305 may also perform certain operations such as classifying submitted trouble tickets and assigning certain unique identifiers. Management module 305 may also contain a predefined set of event type and root cause classifications which may be utilized for issue prediction as described below. An event type may be a hierarchically based description of the event type that has occurred (e.g., hard disk not providing requested data) and a root cause may be a hierarchically based description of a cause of the event type (e.g., hard disk spindle failure). The predefined set of event type and root cause classifications may be generated or modified by the IT vendor or other authorized entities or persons” (Paragraph 33) and “a label data item, also referred to herein as a label record, may be generated by label generator 370 and stored in label database 340 for each session event that is associated with a filtered log data line that passed filtering step 570. These label data items stored in label database 340 may be utilized in modeling as described below. Each label data item may include the log data lines associated with the session event, even if not all of those log data lines passed the filtration process described above. Each label data item may include information from the session event and trouble ticket associated with that session event such as event type, root cause, IT agent ID, timing information, remediation, etc” (Paragraph 60). The examiner further notes that although the primary reference of Rathod teaches the assigning of categories (i.e. labels) to tickets (i.e. documents) (including receiving a “profile” of such labels), there is no explicit teaching that such categories (i.e. labels) are hierarchical. Nevertheless, the secondary reference of Wang teaches the concept of assigning hierarchical event type and root cause classifications (i.e. examples of hierarchical labels) to tickets (i.e. documents). The combination would result in the categories (i.e. labels) of the tickets of Rathod being hierarchical (including its profile of labels).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant invention to combine the teachings of the cited references because Wang’s teachings would have allowed Rathod’s system to provide a method for helping to predict issues for a support ticket, as noted by Wang (Paragraph 61).

Rathod and Wang do not explicitly teach: H) perform a similarity search between a vector for a label structure of a new document and a vector of the performance data for labels associated with each assignee in an identified set of assignees, the identified set comprising one or more of the assignees in the complete set of assignees, to select an assignee for processing the document. Mujumdar, however, teaches “perform a similarity search between a vector for a label structure of a new document and a vector of the performance data for labels associated with each assignee in an identified set of assignees, the identified set comprising one or more of the assignees in the complete set of assignees, to select an assignee for processing the document” as “The ticket profile subsystem 110 can use a classification model 114 to assess the complexity of the received annotated tickets 102. As such, the ticket profile subsystem 110 can assess the complexity of the annotated ticket 104 by applying the classification model 114 to the annotated ticket 104. More specifically in some embodiments, as is illustrated in FIG. 2A, the ticket profile subsystem 110 can include an evaluation component 210 that can generate a complexity attribute 214 by applying the classification model 114 to the annotated ticket 104. The complexity attribute 214 is denoted by C in FIG. 2A. The classification model 114 can perform a multi-class classification task in response to being applied to the annotated ticket 104. The multi-class classification task can include discerning a particular difficulty category among a group of multiple difficulty categories. The group of multiple difficulty categories can be ordered and can be represented by an ordered set of parameters. Those parameters can be numerical, alphabetical, or alphanumeric. In some cases, the ordered set of parameters can be an ordered set of integers or real numbers. In one example, the number of difficulty categories in that group can be five, represented by the following integer numbers: 1, 2, 3, 4, and 5, where 1 represents the least difficulty and 5 represents the greatest difficulty. The number of difficulty categories that constitute the group of difficulty categories is configurable, and, thus, more or fewer than five difficulty categories can be contemplated” (Paragraph 33), “The ticket profile subsystem 110 can use the assessed complexity of annotated tickets included in the received annotated tickets 102 in order to generate respective ticket profiles 116, as is shown in FIG. 1” (Paragraph 37), “The ticket profile subsystem 110 can generate tickets profiles for other annotated tickets included in the received annotated tickets 102 as is described above. As a result, in some embodiments, each one of those ticket profiles can be included in the ticket profiles 116 and can be embodied in a skill complexity vector” (Paragraph 40), “the agent profile subsystem 120 can generate multiple agent profiles 128, including an agent profile 128. Each one of the agent profiles 126 characterizes a respective agent in one or many of those terms.
To that end, the agent profile subsystem 120 can be functionally coupled to a performance data repository 130 and to a skillset data repository 140. The performance data repository 130 can include, in some cases, data representative of historical performance of each agent in the pool of agents with respect to a group of skills. The skillset data repository 140 can include data representative of skills available to an agent and proficiency in those skills. The agent profile subsystem 120 can obtain first data from the performance data repository 130 for the agent and also can obtain second data from the skillset data repository 140 for the agent. The agent profile subsystem 120 can then generate the agent profile 128 using the first data and the second data. The agent profile subsystem 120 can generate other agent profiles for respective agents by obtaining such first data and second data for those other agents” (Paragraph 41), “the ticket profile and the agent profile are embodied in respective vectors V and V′ having a same dimension d. The magnitude of d can be the number of skills collective available in a pool of agents. For instance, the ticket profile can be embodied in the ticket profile 118 and the agent profile can be embodied in the agent profile 128. As described herein, the ticket profile can be embodied in a SCV and the agent profile can be embodied in a SPV, where the SCV and the SPV have a same dimension. Thus, in those cases, the similarity function ƒ can be embodied in the cosine similarity among V and V′. The similarity function ƒ also can be embodied in another type of function, such as Minkowsky distance or Manhattan distance” (Paragraph 54), and “the ticket-agent matching subsystem 150 can generate list of ticket-agent matches. Each match can be represented by a pair including a ticket ID identifying a ticket and an agent ID identifying an agent in a pool of agents. The list can be referred to as a ticket-agent match list 160 (represented by match list 160 in FIG. 1)” (Paragraph 59). The examiner further notes that the secondary reference of Mujumdar teaches the concept of executing a similarity search of vectorized ticket profile(s) (i.e. new document(s)), which include classifications (i.e. labels), against vectorized agent profiles (which include performance data) to identify suitable agent(s) to process the ticket(s). The combination would result in the hierarchical labels of Wang and the performance data for the labels of Rathod being vectorized for performing the similarity search to identify suitable agents. It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant invention to combine the teachings of the cited references because Mujumdar’s teachings would have allowed Rathod’s and Wang’s systems to provide a method for improving the performance of ticket resolution systems, as noted by Mujumdar (Paragraph 31).
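To make the mechanism cited from Mujumdar’s Paragraph 54 concrete: the ticket profile and each agent profile are vectors of the same dimension d, compared with cosine similarity, and the best-scoring agent is selected. The Python sketch below is a minimal, hypothetical illustration of that pattern; the vectors, agent names, and helper function are assumptions for the example, not code from Mujumdar.

import math

def cosine_similarity(v, w):
    # Cosine similarity between two equal-length vectors (cf. Mujumdar, Para. 54).
    dot = sum(a * b for a, b in zip(v, w))
    norm = math.sqrt(sum(a * a for a in v)) * math.sqrt(sum(b * b for b in w))
    return dot / norm if norm else 0.0

ticket_vector = [0.9, 0.1, 0.0, 0.4]  # hypothetical skill-complexity vector, d = 4
agent_vectors = {                     # hypothetical skill-proficiency vectors
    "agent_a": [0.8, 0.2, 0.1, 0.5],
    "agent_b": [0.1, 0.9, 0.7, 0.0],
}

# Select the assignee whose profile vector is most similar to the ticket vector.
best_agent = max(agent_vectors,
                 key=lambda a: cosine_similarity(ticket_vector, agent_vectors[a]))
print(best_agent)  # agent_a

Swapping cosine similarity for a Minkowski or Manhattan distance, which Paragraph 54 also contemplates, would change only the scoring function, not the selection pattern.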
Rathod, Wang, and Mujumdar do not explicitly teach: G) generating a weight for each hierarchical label in a complete set of the hierarchical labels for each assignee in a complete set of assignees. Gelbukh, however, teaches “generating a weight for each hierarchical label in a complete set of the hierarchical labels for each assignee in a complete set of assignees” as “The other part of the dictionary is the topic tree, which organizes the topics, as integral units, into a hierarchy or, more generally, a lattice (since some topics can belong to several nodes of the hierarchy)” (Page 134, Section 2), “Instead of simple lists of words, some numeric weights can be used by the algorithm to define the quantitative measures of relevance of the words for topics and the measure of importance of the nodes of the hierarchy. Thus, there are two kind of such weights: the weights of links in the hierarchy and the weights associated with the individual nodes” (Page 135, Section 3), “The first type of weights is associated with the links between words and topics and between the nodes in the tree (actually, the former type is a kind of the latter since the individual words can be considered as terminal tree nodes related to the corresponding topic). For example, if the document mentions the word carburetor, is it about cars? And the word wheel? Intuitively, the contribution of the word carburetor into the topic cars is more than that of the word wheel; thus, the link between wheel and cars is assigned a less weight. The algorithm of classification takes into account these weights when compiling the accumulated relevance of the topics” (Page 135, Section 4), and “An interesting application of the method is classification of the documents by similarity with respect to a given topic. Clearly, a document mentioning the use of animals for military purposes and the document mentioning feeding of animals are similar (both mention animals) from the point of view of a biologist, but not from the point of view of a military man they are very different. The comparison is made on the basis of the weights of the topics for the two documents” (Page 137, Section 6). The examiner further notes that the secondary reference of Gelbukh teaches the concept of weighting hierarchical classifications/topics (i.e. labels) that are subsequently assigned to documents. The combination would result in weighting the hierarchical labels of Wang and the labels of Rathod for each assignee in the complete set of assignees of Rathod. It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant invention to combine the teachings of the cited references because Gelbukh’s teachings would have allowed Rathod’s and Wang’s systems to provide a method for optimally assigning classifications to documents in order to identify their principal topics, as noted by Gelbukh (Page 133, Section 1 & Page 137, Section 6).
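For illustration, Gelbukh’s weighted links can be read as a table mapping word-to-topic edges to numeric contributions, with a document’s relevance to each topic accumulated over its words. The Python sketch below is a hypothetical rendering of that accumulation step only; the words, topics, and weights are invented for the example and do not come from Gelbukh.

# Weights on word-to-topic links (cf. Gelbukh, Page 135, Section 4: carburetor
# contributes more to the topic "cars" than wheel does). Values are invented.
link_weights = {
    ("carburetor", "cars"): 0.9,
    ("wheel", "cars"): 0.3,
    ("wheel", "machinery"): 0.5,
}

def topic_relevance(document_words):
    # Accumulate the weighted relevance of each topic over the document's words.
    scores = {}
    for word in document_words:
        for (linked_word, topic), weight in link_weights.items():
            if linked_word == word:
                scores[topic] = scores.get(topic, 0.0) + weight
    return scores

print(topic_relevance(["carburetor", "wheel"]))  # cars scores higher than machinery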
Regarding claim 16, Rathod further teaches a method comprising: A) adding a subsequently received document to the set of documents (Paragraphs 25 and 41, Figure 1); and B) create another log entry for the subsequently received document (Paragraphs 17, 25, 26, and 41, Figure 1). The examiner notes that Rathod teaches “adding a subsequently received document to the set of documents” as “When a new incident ticket is received by ITSM system 132 and/or in incident repository 134, classification apparatus 108 may perform stemming and/or removal of stop words and infrequent words from the incident ticket” (Paragraph 25) and “Operations 202-210 may be repeated for remaining incident tickets (operation 212). For example, incident categories may be assigned to each new incident ticket received through the incident management system, and incident tickets may be routed within the incident management system according to the assigned incident categories to streamline resolution of issues associated with the incident tickets. Feedback related to the assignments may additionally be used to improve the accuracy of the incident categories and/or assignments of subsequent incident tickets to the incident categories” (Paragraph 41). The examiner further notes that new ticket(s) (i.e. the claimed subsequently received document) are added to the repository with the other tickets (i.e. the claimed set of documents). The examiner further notes that Rathod teaches “create another log entry for the subsequently received document” as “After an incident ticket is received, ITSM system 132 stores the incident ticket in an incident repository 134. For example, ITSM system 132 may create and/or persist a record of the incident ticket in a database, flat file, distributed filesystem, issue-tracking system, bug-tracking system, and/or another data store providing incident repository 134” (Paragraph 17) and the portions of Paragraphs 25, 26, and 41 reproduced above. The examiner further notes that the storage of records (i.e. log entries) for a new ticket (i.e. the subsequently received document) in the incident repository (which can be a database) teaches the claimed creating.
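As a purely illustrative reading of this mapping, the claim 16 operations reduce to appending a newly received document to the document set and creating a corresponding repository record. The Python sketch below assumes hypothetical record layouts chosen for the example; it is not Rathod’s implementation.

documents = []    # the claimed set of documents
log_entries = []  # records in the incident repository (e.g., database rows)

def receive_document(doc_id, text, labels):
    # Append the subsequently received document and create its log entry,
    # with one label field per assigned category (cf. claims 16-17).
    documents.append({"id": doc_id, "text": text})
    log_entries.append({"document_id": doc_id,
                        "labels": {label: True for label in labels}})

receive_document("T-1001", "disk not providing requested data",
                 ["hardware/disk", "hardware/disk/spindle_failure"])
print(log_entries[0]["labels"])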
Regarding claim 17, Rathod further teaches a method comprising: A) adding an indication or value for each label associated with the subsequently received document in one or more label fields of the log entry (Paragraphs 17, 25, 26, and 41, Figure 1). The examiner notes that Rathod teaches this limitation in the portions of Paragraphs 17, 25, 26, and 41 reproduced above for claim 16. The examiner further notes that the storage of records (i.e. log entries) for a new ticket (i.e. the subsequently received document) in the incident repository (which can be a database) includes the assigned incident categories (i.e. labels). Such stored assigned incident categories (i.e. labels) in the database records (which entail fields for such assigned incident categories (i.e. labels)) teach the claimed indication or value under the broadest reasonable interpretation. Rathod does not explicitly teach: A) hierarchical label. Wang, however, teaches “hierarchical label” in the portions of Paragraphs 33 and 60 reproduced above. The examiner further notes that although the primary reference of Rathod teaches the assigning of categories (i.e. labels) to tickets (i.e. documents), there is no explicit teaching that such categories (i.e. labels) are hierarchical. Nevertheless, the secondary reference of Wang teaches the concept of assigning hierarchical event type and root cause classifications (i.e. examples of hierarchical labels) to tickets (i.e. documents). The combination would result in the categories (i.e. labels) of Rathod’s tickets, including its database fields of labels, being hierarchical. It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant invention to combine the teachings of the cited references because Wang’s teachings would have allowed Rathod’s system to provide a method for helping to predict issues for a support ticket, as noted by Wang (Paragraph 61).

Regarding claim 18, Rathod further teaches a method comprising: A) wherein the complete set of assignees comprises each assignee that may be assigned subsequently received documents for processing (Paragraphs 25, 27, and 41, Figure 1). The examiner notes that Rathod teaches this limitation in the portions of Paragraphs 25, 27, and 41 reproduced above. The examiner further notes that new ticket(s) (i.e. the claimed subsequently received documents) are assigned to agents/groups of agents (i.e. the claimed complete set of assignees).

16. Claims 5-6, 12-14, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Rathod et al. (U.S. PGPUB 2020/0202302), in view of Wang (U.S. PGPUB 2023/0129123), and further in view of Mujumdar et al. (U.S. PGPUB 2022/0270019) as applied to claims 8 and 11 above, and further in view of Gelbukh et al. (Article entitled “Use of a Weighted Topic Hierarchy for Document Classification”, dated 1999) as applied to claims 1-4, 7, 9-10, and 15-18 above, and further in view of Szczepanik et al. (U.S. Patent 11,310,129).

17. Regarding claim 5, Rathod does not explicitly teach an apparatus comprising: A) wherein the log entry comprises an assignee field, one or more performance fields, and one or more hierarchical label fields. Wang, however, teaches “wherein the log entry comprises an assignee field, one or more performance fields, and one or more hierarchical label fields” as “remediation may include replacing a hardware IT asset, upgrading some software to a newer version, correcting a software bug, modifying some IT configuration parameters, or other changes to some IT assets” (Paragraph 16) and the portions of Paragraphs 33 and 60 reproduced above. The examiner further notes that although Rathod clearly stores records (that can be stored in a database) that house multiple types of information such as assigned agents, labels, and performance data, there is no explicit teaching of “fields” housing such information. Nevertheless, the secondary reference of Wang teaches the concept of a label database record (i.e. an example log entry) specifically housing multiple “fields,” including an agent ID (i.e. the claimed assignee field), event type and/or root cause (i.e. examples of the claimed one or more hierarchical label fields), and remediation (i.e. the claimed one or more performance fields under the broadest reasonable interpretation).
The combination would result in Rathod’s log entries explicitly having fields to store its assignee information, performance data, and labels. It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant invention to combine the teachings of the cited references because Wang’s teachings would have allowed Rathod’s system to provide a method for helping to predict issues for a support ticket, as noted by Wang (Paragraph 61). Rathod, Wang, Mujumdar, and Gelbukh do not explicitly teach: A) wherein the log entry comprises a document identifier field. Szczepanik, however, teaches “wherein the log entry comprises a document identifier field” as “the dispatch list 140b stores data that defines assignments of tickets to agents. In one example, the dispatch list 140b contains a respective entry corresponding to each ticket received and dispatched by the dispatcher server 120, the entry containing data such as: ticket identifier (e.g., ticket number or other data that uniquely identifies this ticket); support system identifier (e.g., data that defines which one of the plural different support systems this ticket originated in); agent identifier (e.g., a name or number of an agent to whom this ticket is assigned); and date information (e.g., date the ticket was created, date the ticket was assigned to the agent)” (Column 11, line 58 to Column 12, line 2). The examiner further notes that the secondary reference of Szczepanik teaches the concept of a specific field housing a ticket identifier (i.e. an example of the claimed document identifier). The combination would result in Wang’s log entry also having a field housing such a ticket ID for its tickets. It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant invention to combine the teachings of the cited references because Szczepanik’s teachings would have allowed Rathod’s, Wang’s, Mujumdar’s, and Gelbukh’s systems to provide a method for improving the dispatching of tickets from different support systems to agents, as noted by Szczepanik (Column 2, lines 33-35).

Regarding claim 6, Rathod does not explicitly teach an apparatus comprising: A) wherein the assignee field comprises a value or indication associated with an assignee; C) the one or more performance fields comprise a value or indication of a performance factor associated with processing the document; and D) the one or more hierarchical label fields comprise a value or indication to identify a hierarchical label in the complete set of the hierarchical labels. Wang, however, teaches each of these limitations in the portions of Paragraphs 16, 33, and 60 reproduced above. The examiner further notes that, as set forth for claim 5, Wang’s label database record (i.e. an example log entry) specifically houses multiple “fields,” including an agent ID (i.e. the claimed value or indication associated with an assignee), event type and/or root cause (i.e. examples of the claimed value or indication to identify a hierarchical label in the complete set of the hierarchical labels), and remediation (i.e. the claimed value or indication of a performance factor associated with processing the document under the broadest reasonable interpretation). The combination would result in Rathod’s log entries explicitly having fields to store its assignee information, performance data, and labels.
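Read together, the combination describes a log entry whose named fields carry an assignee identifier, performance values, hierarchical labels, and, per Szczepanik, a document identifier. The Python sketch below is a hypothetical schema illustrating that field layout; the field names and types are assumptions for the example, not the schema of any cited reference.

from dataclasses import dataclass

@dataclass
class LogEntry:
    document_id: str           # document identifier field (cf. Szczepanik's ticket identifier)
    assignee_id: str           # assignee field (cf. Wang's IT agent ID)
    hierarchical_labels: list  # hierarchical label fields (cf. event type / root cause)
    performance: dict          # performance fields (cf. remediation / feedback values)

entry = LogEntry(document_id="T-1001",
                 assignee_id="agent_a",
                 hierarchical_labels=["hardware/disk", "hardware/disk/spindle_failure"],
                 performance={"resolved": True, "hours_to_resolve": 4.5})
print(entry.document_id, entry.assignee_id)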
It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant invention to combine the teachings of the cited references because Wang’s teachings would have allowed Rathod’s system to provide a method for helping to predict issues for a support ticket, as noted by Wang (Paragraph 61). Rathod, Wang, Mujumdar, and Gelbukh do not explicitly teach: B) the document identifier field comprises a value to identify a document processed by the assignee. Szczepanik, however, teaches “the document identifier field comprises a value to identify a document processed by the assignee” in the portion of Column 11, line 58 to Column 12, line 2 reproduced above. The examiner further notes that the secondary reference of Szczepanik teaches the concept of a specific field housing a ticket identifier (i.e. an example of the claimed value to identify a document processed by the assignee). The combination would result in Wang’s log entry also having a field housing such a ticket ID for its tickets. It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant invention to combine the teachings of the cited references because Szczepanik’s teachings would have allowed Rathod’s, Wang’s, Mujumdar’s, and Gelbukh’s systems to provide a method for improving the dispatching of tickets from different support systems to agents, as noted by Szczepanik (Column 2, lines 33-35).

Regarding claim 12, Rathod does not explicitly teach a non-transitory storage medium comprising: A) wherein the log entry comprises an assignee field, one or more performance fields, one or more hierarchical label fields, and a document identifier field. These limitations correspond to those of claim 5, and the same teachings of Wang (Paragraphs 16, 33, and 60) and Szczepanik (Column 11, line 58 to Column 12, line 2), the same combinations, and the same motivations to combine set forth above for claim 5 apply equally to claim 12.

Regarding claim 13, Rathod does not explicitly teach a non-transitory storage medium comprising: A) wherein the assignee field comprises a value or indication associated with an assignee; B) the document identifier field comprises a value to identify a document processed by the assignee; C) the one or more performance fields comprise a value or indication of a performance factor associated with processing the document; and D) the one or more hierarchical label fields comprise a value or indication to identify a hierarchical label in a complete set of the hierarchical labels. These limitations correspond to those of claim 6, and the same teachings of Wang (Paragraphs 16, 33, and 60) and Szczepanik (Column 11, line 58 to Column 12, line 2), the same combinations, and the same motivations to combine set forth above for claim 6 apply equally to claim 13.

Regarding claim 14, Rathod further teaches a non-transitory storage medium comprising: A) wherein the performance data associated with the document may comprise one or more values based on feedback about performance by the at least one assignee of one or more tasks to resolve the document associated with the log entry (Paragraphs 17 and 26-28). The examiner notes that Rathod teaches this limitation in the portions of Paragraphs 17, 26, 27, and 28 reproduced above.
The examiner further notes that due to the claim’s recitation of “may” (see “wherein the performance data associated with the document may comprise”), the claimed “one or more values based on feedback about performance by the at least one assignee of one or more tasks to resolve the document associated with the log entry” is deemed an optional limitation under the broadest reasonable interpretation.

Regarding claim 19, Rathod does not explicitly teach a method comprising: A) wherein the log entry comprises an assignee field, one or more performance fields, one or more hierarchical label fields, and a document identifier field. These limitations correspond to those of claims 5 and 12, and the same teachings of Wang (Paragraphs 16, 33, and 60) and Szczepanik (Column 11, line 58 to Column 12, line 2), the same combinations, and the same motivations to combine set forth above for claim 5 apply equally to claim 19.

Regarding claim 20, Rathod does not explicitly teach a method comprising: A) wherein the assignee field comprises a value or indication associated with an assignee; B) the document identifier field comprises a value to identify a document processed by the assignee; C) the one or more performance fields comprise a value or indication of a performance factor associated with processing the document; and D) the one or more hierarchical label fields comprise a value or indication to identify a hierarchical label in the complete set of the hierarchical labels. These limitations correspond to those of claims 6 and 13, and the same teachings of Wang (Paragraphs 16, 33, and 60) and Szczepanik (Column 11, line 58 to Column 12, line 2), the same combinations, and the same motivations to combine set forth above for claim 6 apply equally to claim 20.

Response to Arguments

18. Applicant’s arguments with respect to claims 1-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument (see the newly applied reference of Mujumdar).

Conclusion

19. The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure. U.S. PGPUB 2020/0125992 to Agarwal et al., published 23 April 2020. The subject matter disclosed therein is pertinent to that of claims 1-20 (e.g., methods to hierarchically label documents). U.S. PGPUB 2020/0409982 to Buchanan et al., published 31 December 2020. The subject matter disclosed therein is pertinent to that of claims 1-20 (e.g., methods to hierarchically label documents).
Response to Arguments

18. Applicant’s arguments with respect to claims 1-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument (see the newly applied reference of Majumdar).

Conclusion

19. The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure.

U.S. PGPUB 2020/0125992 to Agarwal et al., published 23 April 2020. The subject matter disclosed therein is pertinent to that of claims 1-20 (e.g., methods to hierarchically label documents).

U.S. PGPUB 2020/0409982 to Buchanan et al., published 31 December 2020. The subject matter disclosed therein is pertinent to that of claims 1-20 (e.g., methods to hierarchically label documents).

Article entitled “Hierarchical Incident Ticket Classification with Minimal Supervision,” by Maksai et al., dated 2014. The subject matter disclosed therein is pertinent to that of claims 1-20 (e.g., methods to hierarchically label documents).

Article entitled “Hierarchical Multi-Label Classification over Ticket Data using Contextual Loss,” by Zeng et al., dated 2014. The subject matter disclosed therein is pertinent to that of claims 1-20 (e.g., methods to hierarchically label documents).

Article entitled “Hierarchical Document Classification as a Sequence Generation Task,” by Risch et al., dated 05 August 2020. The subject matter disclosed therein is pertinent to that of claims 1-20 (e.g., methods to hierarchically label documents).

U.S. PGPUB 2019/0378028 to Chaudhuri et al., published 12 December 2019. The subject matter disclosed therein is pertinent to that of claims 1-20 (e.g., methods to determine document labels).

U.S. PGPUB 2020/0356851 to Li et al., published 12 November 2020. The subject matter disclosed therein is pertinent to that of claims 1-20 (e.g., methods to determine document labels).

20. Applicant’s amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Contact Information

21. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Mahesh Dwivedi, whose telephone number is (571) 272-2731. The examiner can normally be reached Monday to Friday, 8:20 am to 4:40 pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Charles Rones, can be reached at (571) 272-4085. The fax number for the organization where this application or proceeding is assigned is (571) 273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

Mahesh Dwivedi
Primary Examiner
Art Unit 2168
March 19, 2026

/MAHESH H DWIVEDI/
Primary Examiner, Art Unit 2168

Prosecution Timeline

Mar 25, 2024
Application Filed
Dec 02, 2025
Non-Final Rejection — §103, §112
Mar 03, 2026
Examiner Interview Summary
Mar 03, 2026
Applicant Interview (Telephonic)
Mar 04, 2026
Response Filed
Mar 19, 2026
Final Rejection — §103, §112 (current)

Precedent Cases

Applications with similar technology granted by the same examiner

Patent 12591818
FORECASTING AND MITIGATING CONCEPT DRIFT USING NATURAL LANGUAGE PROCESSING
2y 5m to grant Granted Mar 31, 2026
Patent 12585690
COMPUTER-READABLE RECORDING MEDIUM STORING INFORMATION VERIFICATION PROGRAM, INFORMATION PROCESSING APPARATUS, AND INFORMATION PROCESSING SYSTEM
2y 5m to grant Granted Mar 24, 2026
Patent 12561366
Real-Time Micro-Profile Generation Using a Dynamic Tree Structure
2y 5m to grant Granted Feb 24, 2026
Patent 12561469
INFERRING SCHEMA STRUCTURE OF FLAT FILE
2y 5m to grant Granted Feb 24, 2026
Patent 12554730
HYBRID DATABASE IMPLEMENTATIONS
2y 5m to grant Granted Feb 17, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
69%
Grant Probability
74%
With Interview (+4.3%)
3y 6m
Median Time to Grant
Moderate
PTA Risk
Based on 751 resolved cases by this examiner. Grant probability derived from career allow rate.
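For readers who want to see how the projection figures fit together, here is a small arithmetic sketch in Python. It assumes the grant probability is simply the career allow rate over the 751 resolved cases and that the with-interview figure adds the +4.3% lift to the unrounded base rate; the variable names and the exact unrounded value are our assumptions, not the provider's published methodology.

# Rough reconstruction of the projection figures shown above.
# Assumption: grant probability == career allow rate, and the
# with-interview figure == base rate + interview lift.

resolved_cases = 751           # resolved cases by this examiner (shown above)
base_rate = 0.694              # assumed unrounded allow rate, displayed as 69%
interview_lift = 0.043         # +4.3% lift among resolved cases with interview

with_interview = base_rate + interview_lift   # 0.737, displayed as 74%

print(f"Grant probability: {base_rate:.0%}")       # -> 69%
print(f"With interview:    {with_interview:.0%}")  # -> 74%

Adding the lift to the rounded 69% would display as 73%, so the page presumably works from an unrounded allow rate; that inference is ours, not the provider's.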
