DETAILED ACTION
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/23/2025 has been entered.
In response to the Final Communication of 11/26/2024, Applicant, on 12/23/2025, amended Claims 1, 8, and 15, and cancelled Claims 22-24. Claims 1, 4-8, 11-15, and 18-21 are pending in this case, have been considered, and are rejected below.
Response to Arguments
Arguments regarding 35 USC §112(a) – The rejection is hereby withdrawn in light of Applicant's amendment and citation of paragraph [0067].
Arguments regarding 35 USC §101 (Alice) – Applicant asserts that the amended limitations of the claims provide a technical solution to the problem of manual skill management, and cites Desjardins in stating that there is an improvement to the machine learning module using agent feedback as a ground truth score to update weight embeddings using an L2 loss function in a supervised manner. Applicant further likens the claims to Desjardins, stating that there is a cascading effect on the other elements leading to the automated routing of new tickets, which would reduce the likelihood of kickbacks and thus constitute an improvement. Examiner disagrees. First, Desjardins and this case are dissimilar: in Desjardins the Specification and Claims are focused on the improvement of machine learning, whereas the present case uses machine learning in order to improve, as Applicant stated in the Remarks and cited above, "the problem of manual skill management." The use of agent feedback here does not create a feedback loop; rather, it is new input being utilized with the machine learning, and the system is not improved but instead is utilized to perform the abstract limitations of the claims. This amounts to updating scores based on a machine learning clustering algorithm and agent feedback, which is then processed using such tools as an L2 loss function, and it is clearly both a Mental Process and a Certain Method of Organizing Human Activity as per the rejection below. The routing of a new ticket without user intervention is at best a transmission step, as there are no cited additional elements with which to practically integrate it. The limitations are not practically integrated, as the claim limitations merely utilize current technologies, such as machine learning on a computer and processor, to perform the abstract limitations of the claims, similar to Alice, essentially "Applying It" for scoring of agents or "manual skill management".
There is no improvement to a technology or any technological process, including the machine learning algorithm, as there is no feedback from the limitations where the algorithm is used, and any inventive concept would be contained wholly within the abstraction.
Therefore, the arguments are non-persuasive, the Claims are ineligible as there is no inventive concept, and the rejection of the Claims and their dependents is maintained under 35 USC 101.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Alice - Claims 1, 4-8, 11-15, and 18-21 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claim 15 is directed to the limitations for receiving a plurality of tickets, wherein each ticket in the plurality of tickets includes a plurality of fields and at least one agent that resolved the ticket (Collecting Information, an observation, a Mental Process; Organizing and Tracking Information for Managing Human Behavior, i.e. Identifying Skills of Agents; a Certain Method of Organizing Human Activity); determining skills from the plurality of tickets by inputting the plurality of tickets and one or more of the plurality of fields into a clustering algorithm and outputting the skills (Analyzing and Transmitting the Information, an evaluation and judgment, a Mental Process; Organizing and Tracking Information for Managing Human Behavior, i.e. Identifying Skills of Agents; a Certain Method of Organizing Human Activity); generating a taxonomy of the skills (Analyzing the Information, an evaluation, a Mental Process; Organizing and Tracking Information for Managing Human Behavior, i.e. Identifying Skills of Agents; a Certain Method of Organizing Human Activity); creating and outputting a skills knowledge graph using the taxonomy of the skills with agents connected to the skills (Transmitting the Information, a judgment, a Mental Process; Organizing and Tracking Information for Managing Human Behavior, i.e. Identifying Skills of Agents; a Certain Method of Organizing Human Activity); computing a skills score for each agent (Analyzing and Transmitting the Information, an evaluation, a Mental Process; Organizing and Tracking Information for Managing Human Behavior, i.e. Identifying Skills of Agents; a Certain Method of Organizing Human Activity); updating the skills using the machine learning clustering algorithm to output updated skills (Analyzing and Transmitting the Information, an evaluation and judgment, a Mental Process; Organizing and Tracking Information for Managing Human Behavior, i.e. Identifying Skills of Agents; a Certain Method of Organizing Human Activity); inputting agent feedback to a machine learning module and improving/utilizing the machine learning module using the agent feedback as a ground truth score to update weight embeddings using an L2 loss function in a supervised manner (Collecting and Analyzing the Information, an observation and evaluation, a Mental Process; Organizing and Tracking Information for Managing Human Behavior, i.e. Identifying Skills of Agents; a Certain Method of Organizing Human Activity); updating the skills using the machine learning clustering algorithm to output updated skills (Analyzing and Transmitting the Information, an evaluation and judgment, a Mental Process; Organizing and Tracking Information for Managing Human Behavior, i.e. Identifying Skills of Agents; a Certain Method of Organizing Human Activity); updating the skills knowledge graph with the updated skills score and the updated skills to generate an updated skills knowledge graph (Analyzing and Transmitting the Information, an evaluation and judgment, a Mental Process; Organizing and Tracking Information for Managing Human Behavior, i.e. Identifying Skills of Agents; a Certain Method of Organizing Human Activity); receiving a new ticket (Collecting Information, an observation, a Mental Process; Organizing and Tracking Information for Managing Human Behavior, i.e. Identifying Skills of Agents; a Certain Method of Organizing Human Activity); determining the skills needed to resolve the new ticket (Analyzing the Information, an evaluation, a Mental Process; Organizing and Tracking Information for Managing Human Behavior, i.e. Identifying Skills of Agents; a Certain Method of Organizing Human Activity); using a search engine to search for the determined skills in the updated skills knowledge graph and an agent with a high skills score for the determined skills (Analyzing the Information, an evaluation, a Mental Process; Organizing and Tracking Information for Managing Human Behavior, i.e. Identifying Skills of Agents; a Certain Method of Organizing Human Activity); and automatically routing the new ticket, without user intervention, to the agent with the high skills score for the determined skills from the updated skills knowledge graph (Transmitting the Information, a judgment, a Mental Process; Organizing and Tracking Information for Managing Human Behavior, i.e. Identifying Skills of Agents; a Certain Method of Organizing Human Activity), which, under their broadest reasonable interpretation, cover performance of the limitations in the mind for the purposes of Organizing and Tracking Information for Managing Human Behavior, but for the recitation of generic computer components. That is, other than reciting a system, at least one processor, a machine learning module, and a non-transitory computer-readable medium, nothing in the claim elements precludes the steps from practically being performed in the mind for the purposes of Organizing and Tracking Information for Managing Human Behavior, i.e. Tracking Skills of Agents. For example, determining skills from the plurality of tickets encompasses what a supervisor or manager would do while observing and evaluating how agents perform on calls, assigning skills to them, or ranking those skills: an observation, evaluation, and judgment. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the "Mental Processes" grouping of abstract ideas.
Further, as described above, the claims recite limitations for organizing and tracking information for Managing Human Behavior, a “Certain Method of Organizing Human Activity”. Accordingly, the claim recites an abstract idea.
This judicial exception is not integrated into a practical application. In particular, the claim recites the above stated additional elements to perform the abstract limitations as above. The system, processors, and computer readable medium are recited at a high-level of generality (i.e., as a generic software/module performing a generic computer function of storing, retrieving, sending, and processing data) such that they amount to no more than mere instructions to apply the exception using generic computer components. Even if taken as an additional element, the receiving and transmission steps above are insignificant extra-solution activity as these are receiving, storing, and transmitting data as per the MPEP 2106.05(d). Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, when considered both individually and as an ordered combination. As discussed above with respect to integration of the abstract idea into a practical application, the additional element being used to perform the abstract limitations stated above amount to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept. The claim is not patent eligible. Applicant’s Specification states:
“[0029] The system 100 may be implemented on a computing device 101. The computing device 101 includes at least one memory 154, at least one processor 156, and at least one application 158. The computing device 101 may communicate with one or more other computing devices over a network (not shown). The computing device 101 may be implemented as a server (e.g., an application server), a desktop computer, a laptop computer, a mobile device such as a tablet device or mobile phone device, a mainframe, as well as other types of computing devices.”
This passage states that any computing device or server can be used, such as any personal computer, smart phone, or tablet, to perform the abstract limitations. From this interpretation, one would reasonably deduce that the aforementioned steps are all functions that can be performed on generic components, and thus constitute application of an abstract idea on a generic computer per the Alice decision, not requiring further analysis under Berkheimer; but for edification, Applicant's Specification has been cited as above, satisfying any such requirement. This is "Applying It" by utilizing current technologies. For the receiving and transmitting steps that were considered extra-solution activity in Step 2A above, if they were to be considered additional elements, they have been re-evaluated in Step 2B and determined to be well-understood, routine, conventional activity in the field. The background does not provide any indication that the additional elements, such as the processor, system, etc., or the receiving and transmitting steps as above, are anything other than generic, and MPEP 2106.05(d) indicates that mere collection or receipt, storing, or transmission of data is a well-understood, routine, and conventional function when it is claimed in a merely generic manner (as it is here). For these reasons, there is no inventive concept. The claim is not patent eligible.
Independent Claims 1 and 8 contain the identified abstract ideas, with no other additional elements to be considered as part of a practical application under Prong Two of the 2019 PEG; thus they are not integrated into a practical application, nor are they significantly more, for the same reasons and rationale as above.
Claims 4-7, 11-14, and 18-21 contain the identified abstract ideas, further narrowing them, with no additional elements to be considered as part of a practical application under Prong Two of the 2019 PEG; thus they are not integrated into a practical application, nor are they significantly more, for the same reasons and rationale as above.
After considering all claim elements, both individually and in combination, Examiner has determined that the claims are directed to the above abstract ideas and do not amount to significantly more. Therefore, the claims and dependent claims are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter. See Alice Corporation Pty. Ltd. v. CLS Bank International, No. 13–298.
Allowable Subject Matter
Claims 1, 4-8, 11-15, and 18-21 are objected to as being dependent upon a rejected base claim, but would be allowable if the independent claims were amended in such a way as to overcome the 35 USC 101 rejection and any other rejections.
The closest prior art of record are Rath (U.S. Publication No. 2021/0014136), Dialani (U.S. Publication No. 2018/0232421), and Skomoroch (U.S. Publication No. 2014/0081928). Rath, a system and method for assigning support tickets to support agents, teaches at least one processor; receiving a plurality of tickets, wherein each ticket in the plurality of tickets includes a plurality of fields and at least one agent that resolved the ticket; computing a skills score for each agent and a related skill; inputting agent feedback to a machine learning module, which is used to update the scores for agents based on the feedback collected; receiving a new ticket; determining skills needed to resolve the new ticket with use of a machine learning model; automatically routing the new ticket to the agent with the high skills score for the determined skills; determining skills from the plurality of tickets using a clustering algorithm on one or more of the plurality of fields; outputting clusters of experts in pods (groups) based on the scoring; and searching for the determined skills, and an agent with a high skills score for the determined skills, in the skills matrix or the skills knowledge graph. However, Rath does not teach doing this using a search engine, does not explicitly state using a taxonomy of skills, and does not explicitly state a skills matrix or knowledge graph in use with the skills score or using a ground truth score.
Dialani, a system and method for query intent clustering for automated sourcing, teaches generating a taxonomy of the skills using an algorithm; creating and outputting a skills matrix or a skills knowledge graph using the taxonomy of the skills with agents connected to the skills; updating the skills matrix or the skills knowledge graph with the skills score, where the expertise/skill scores are updated in the profile for the taxonomy; and using a search engine to search for skills and professional expertise; but it does not teach use of a ground truth score for improving a machine learning model. Skomoroch, a skill extraction system and method, teaches weighting from agent feedback to arrive at an initial score for nodes, and use of machine learning, but does not explicitly state a ground truth score. None of the above prior art explicitly teaches this particular manner in which a ground truth score is utilized along with a machine learning model, along with the other limitations of the claims in combination, as Applicant points out on pgs. 2-3 of the Remarks of 11/12/2024. These are the reasons that adequately reflect the Examiner's opinion as to why Claims 1, 4-8, 11-15, and 18-21 are allowable over the prior art of record and are objected to as provided above.
Conclusion
The prior art made of record is considered pertinent to applicant's disclosure.
US 20210014136 A1
Rath; Poonam
ASSIGNING SUPPORT TICKETS TO SUPPORT AGENTS
US 20200184347 A1
Lindsley; Hannah R.
Structurally Defining Knowledge Elements Within a Cognitive Graph
US 20200034776 A1
PERAN; Michael et al.
MANAGING SKILLS AS CLUSTERS USING MACHINE LEARNING AND DOMAIN KNOWLEDGE EXPERT
US 20190197487 A1
Jersin; John Robert et al.
AUTOMATED MESSAGE GENERATION FOR HIRING SEARCHES
US 20180232421 A1
Dialani; Vijay et al.
QUERY INTENT CLUSTERING FOR AUTOMATED SOURCING
US 20180173501 A1
SRINIVASAN; Ramya Malur et al.
FORECASTING WORKER APTITUDE USING A MACHINE LEARNING COLLECTIVE MATRIX FACTORIZATION FRAMEWORK
US 20180121823 A1
Bauer; John H. et al.
Method, System and Computer Program Product for Automating Expertise Management Using Social and Enterprise Data
US 20170061550 A1
Lin; Song et al.
GENERATING GRAPHICAL PRESENTATIONS USING SKILLS CLUSTERING
US 20170039527 A1
Rangan; Venkat
AUTOMATIC RANKING AND SCORING OF MEETINGS AND ITS ATTENDEES WITHIN AN ORGANIZATION
US 20140081928 A1
Skomoroch; Peter N. et al.
SKILL EXTRACTION SYSTEM
US 11238411 B1
Wong; Wang-Chan et al.
Artificial neural networks-based domain- and company-specific talent selection processes
US 20200394222 A1
Saxena; Manoj et al.
Cognitive Session Graphs Including Blockchains
US 20200126022 A1
Gaspar; Brian et al.
Automated Systems and Methods for Determining Jobs, Skills, and Training Recommendations
US 20190220695 A1
Nefedov; Nikolai
CLUSTERING AND TAGGING ENGINE FOR USE IN PRODUCT SUPPORT SYSTEMS
US 20190197486 A1
Jersin; John Robert et al.
PROBABILITY OF HIRE SCORING FOR JOB CANDIDATE SEARCHES
US 20190042988 A1
Brown; Stephen et al.
OMNICHANNEL, INTELLIGENT, PROACTIVE VIRTUAL AGENT
US 20180189380 A1
HOLTZ; Manuel et al.
JOB SEARCH ENGINE
US 20160048579 A1
Hall; Patrick et al.
PROBABILISTIC CLUSTER ASSIGNMENT
US 20100005087 A1
Basco; Stephen et al.
Facilitating collaborative searching using semantic contexts associated with information
US 20220382989 A1
Poddar; Shivani et al.
Multimodal Entity and Coreference Resolution for Assistant Systems
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSEPH M WAESCO whose telephone number is (571)272-9913. The examiner can normally be reached 8 AM - 5 PM, M-F.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, BETH BOSWELL can be reached on (571) 272-6737. The fax phone number for the organization where this application or proceeding is assigned is 571-273-1348.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JOSEPH M WAESCO/Primary Examiner, Art Unit 3625B 2/9/2026