DETAILED ACTION
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
2. Claims 1-9 are currently pending. Claims 1-2, 4 and 7-9 have been amended. Claims 1-9 have been rejected.
Status of the Application
3. Claims 1-9 are currently pending and have been examined in this application. This communication is the first action on the merits.
Response to Amendments
4. Applicant’s amendment filed on 02/27/2026 necessitated new grounds of rejection in this office action.
Continued Examination under 37 CFR 1.114
5. A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 02/27/2026 has been entered.
Priority
6. The Examiner notes that Applicant claims priority to Foreign Application IN202241050850, filed on 09/06/2022. Receipt is acknowledged of papers submitted under 35 U.S.C. § 119(a)-(d), which papers have been placed of record in the file. Therefore, Examiner notes the effective filing date of this application as examined on the record is 09/06/2022.
Response to Arguments
7. Applicant’s arguments, see page 7 of 12 filed on 02/27/2026, with respect to the Claim Objections for Claims 7-8 have been fully considered and are found to be partially persuasive. The Claim Objection for Claim 8 has been withdrawn. However, due to Applicant’s proposed amendments, Examiner adds Claim Objections to Claims 2 and 7. See Claim Objections Section shown below for further details.
8. Applicant’s arguments, see pages 7-8 of 12 filed on 02/27/2026, with respect to the 35 U.S.C. § 112(b) Claim Rejections for Claims 1-9 have been fully considered and are found to be partially persuasive. The 35 U.S.C. § 112(b) Claim Rejections for Claims 1-8 have been withdrawn. However, in view of Applicant’s proposed amendments, Examiner maintains the 35 U.S.C. § 112(b) Claim Rejection of Independent Claim 9. See the 35 U.S.C. § 112(b) Claim Rejection Section shown below for further details.
9. Applicant’s arguments, see pages 10-11 of 12 filed on 02/27/2026, with respect to the 35 U.S.C. § 102 (a) (1) Claim Rejections for Claims 1-9 have been fully considered and are found to be persuasive. Therefore, the 35 U.S.C. § 102 (a) (1) Claim Rejections for Claims 1-9 have been withdrawn. See Examining Claims with Respect to Prior Art Section shown below.
Response to 35 U.S.C. § 101 Arguments
10. Applicant’s 35 U.S.C. § 101 arguments with respect to Claims 1-9 have been fully considered, but they are found not persuasive (see Applicant Remarks, Pages 8-10 of 12, dated 02/27/2026). Examiner respectfully disagrees.
Argument #1:
(A). Applicant argues that Claims 1-9 recite additional elements that integrate the judicial exception into a practical application under revised step 2a prong two of the 35 U.S.C. § 101 analysis (see Applicant Remarks, 1st ¶ of Page 8 of 12, dated 02/27/2026). Examiner respectfully disagrees.
Specifically, Applicant asserts that for Independent Claims 1, 4 and 9, the amendments apply the concepts to produce an electronic output displaying workflow assignments for a project; that the output is a tangible manifestation that can be viewed and used by employees to plan out and start on tasks within the greater scope of a project; and that the amendments also add specific technical features that provide meaningful limitations beyond merely applying an abstract idea on a generic computer.
In response, for Independent Claims 1, 4 and 9, Examiner notes that the “technical features” are merely recitations of generic computer functions that do not change the underlying nature of the abstract idea.

Reason #1: The “Output” is Insignificant Extra-Solution Activity. The Applicant’s argument that the electronic output is a “tangible manifestation” fails because the mere displaying of information is considered insignificant extra-solution activity. Displaying a workflow assignment on a user interface is the “final step” of the abstract process. Per Flook and Electric Power Group, simply presenting the results of a mathematical calculation or mental process on a screen does not transform an abstract idea into a practical application. The “tangibility” of the screen does not provide a technical solution to a technical problem. Also, according to MPEP § 2106.04(d)(i): “It is notable that mere physicality or tangibility of an additional element or elements is not a relevant consideration in Step 2A Prong Two. As the Supreme Court explained in Alice Corp., mere physical or tangible implementation of an exception does not guarantee eligibility. Alice Corp. Pty. Ltd. v. CLS Bank Int’l, 573 U.S. 208, 224, 110 USPQ2d 1976, 1983-84 (2014) ("The fact that a computer ‘necessarily exist[s] in the physical, rather than purely conceptual, realm,’ is beside the point"). See also Genetic Technologies Ltd. v. Merial LLC, 818 F.3d 1369, 1377, 118 USPQ2d 1541, 1547 (Fed. Cir. 2016) (steps of DNA amplification and analysis are not "sufficient" to render claim 1 patent eligible merely because they are physical steps).”

Reason #2: Lack of Improvement to Computer Functionality. For example, the features described (beta updates, thresholds and ensembles) improve the business process of evaluation, not the operation of the computer itself.
The “beta update process” and “machine learning instructions” are described in terms of their functional mathematical results (calculating scores) rather than a change to how a computer stores data or processes signals. Under Alice, if the “improvement” is merely a more accurate calculation or a better organizational chart, it remains an abstract idea. These claims do not improve the speed, memory usage, or security of the processing circuitry.

Reason #3: “Machine Learning” as a Functional Black Box. Reciting “machine learning” and “ensembles” without specific technical implementation details is a “drafting effort” to monopolize an abstract concept. These claims recite “machine learning instructions” and “threshold buckets” as functional black boxes. They do not provide a specific algorithm or a new hardware architecture. According to the USPTO 2024 Guidance, simply saying “do it with machine learning” is no different than saying “do it on a computer”. Without a specific technical constraint on how the ensemble is constructed, it remains a high-level mathematical concept.

Reason #4: Failure to Impose a “Meaningful Limit”. The “technical features” do not narrow the claims to a specific practical application; they merely describe the abstract idea in a specialized environment. Assigning a “highest ranked resource” based on “availability” is a fundamental human activity. Adding “time weights” to “avoid bias” is a policy decision (business logic), not a technical one. These features do not impose a “meaningful limit” because they broadly preempt the entire field of data-driven workforce management. Any system doing this work would inevitably use the same mathematical logic steps.

Reason #5: The “Practical Application” is a Business Method. The “integration” claimed by the Applicant is simply the automation of a human management task. The Applicant states the output is used by employees to “plan out and start on tasks”. This is a method of organizing human activity.
“Integrating an abstract idea into the field of project management” does not satisfy Prong 2 if the integration is merely the automated execution of the abstract steps themselves.
In Conclusion: Independent Claims 1, 4 and 9 fail Step 2A Prong 2 because they lack an “inventive concept” that transforms the abstract mathematical scoring and human resource management into a specific technical improvement in the field of computer science. Therefore, Examiner maintains that Claims 1-9 as currently recited do not contain additional elements that integrate the judicial exception into a practical application under Step 2A Prong 2 of the 35 U.S.C. § 101 analysis.
Argument #2:
(B). Applicant argues that Independent Claims 1, 4 and 9 now require a specific workflow interface and display elements. Independent Claim 1 as amended recites that the processing circuitry is configured “for a given task in a workflow display in an interface of a user device” and further recites “display, through the interface of the user device, an assignment of the corresponding employee to the workflow for the given task, based on the index of fit score.” These limitations provide a specific technological implementation that goes beyond mere data output and provides a concrete technological application under revised step 2a prong two of the 35 U.S.C. § 101 analysis (see Applicant Remarks, 2nd ¶ of Page 8 of 12, dated 02/27/2026). Examiner respectfully disagrees.
In response, for Independent Claims 1, 4 and 9, Examiner notes that the “workflow interface” and “display elements” are mere insignificant post-solution activities that do not provide a technical improvement to computer functionality.

Reason #1: No Improvement to the Functioning of the Computer. The Applicant argues that the interface elements provide a “specific technological implementation.” However, under Federal Circuit precedent, a claim must disclose a technical improvement to how computer applications are used, rather than just identifying and presenting data. The recited “workflow interface” and “display” do not improve the functioning of the computer itself (e.g., making it faster or more efficient). Instead, they use the computer as a generic tool to perform the abstract idea of “resource assignment”.

Reason #2: Displaying Data is Insignificant Extra-Solution Activity. The Applicant claims that displaying a “specific assignment” distinguishes the claims from USPTO Example 47 Claim 2. This is a distinction without a legal difference. The USPTO Guidance clarifies that insignificant extra-solution activity, such as displaying information, does not integrate an abstract idea into a practical application. Whether the output is “anomaly data” (Example 47) or an “employee assignment” (these claims), it is still just a communication of a result from an abstract mental process or mathematical calculation.

Reason #3: Generic Interface Elements (MPEP § 2106.04(d)). The Applicant cites the specification’s mention of “digital displays”, “touch screens”, and “GUIs” as support for a concrete implementation. These are well-understood, routine, and conventional (WURC) hardware components.
Per Alice, adding an “apply it on a computer” or “display it on a GUI” limitation does not satisfy Step 2A Prong 2 if those elements perform only their generic, intended functions. These claims describe what is displayed (the assignment), but fail to describe a specific technical method for how the interface itself has been modified or improved.

Reason #4: Directed to a Business Method, not a Technology. The Applicant argues the output is a “tangible manifestation” for employees to use. This confirms the claim is directed to a method of organizing human activity (project management). Integration requires the exception to be applied in a way that solves a technological problem. Planning tasks is a business problem, not a technological one.
Therefore, Examiner maintains that Claims 1-9 as currently recited do not contain additional elements that integrate the judicial exception into a practical application under step 2a prong 2 of the 35 U.S.C. § 101 analysis.
Argument #3:
(C). Applicant argues that the amended claims for Independent Claims 1, 4 and 9 recite that “the time weight adjusts the base score and the recent performance index score to predict the compound performance index score over time or transaction count to avoid biasness of years of experience. This limitation specifies a technical function which avoid bias in scoring based on years of experience. This is a concrete improvement to resource planning technology” (see Applicant Remarks, 1st ¶ of Page 9 of 12, dated 02/27/2026). Examiner respectfully disagrees.
In response to Applicant’s 35 U.S.C. § 101 arguments here, Examiner notes that “avoiding bias” is a social or business objective, not a technical one, and that mathematical complexity does not equal patent eligibility.

Reason #1: Avoiding “Bias” is a Non-Technical Purpose. The Applicant argues that avoiding experience bias is a “technical function”. However, under USPTO Guidance, an improvement must be to a technology or computer functionality, not to a business process or a social policy. “Biasness of years of experience” is a subjective human resource concern. Adjusting scores to level the playing field between junior and senior employees is a method of organizing human activity (personnel management). In Electric Power Group, the Federal Circuit held that even if a process is complex, if the “innovation” is in the type of information filtered or the social/business goal achieved, it is not a technical improvement.

Reason #2: Mathematical Complexity Does Not Equate to a Practical Application. The Applicant claims the computation “cannot be practically performed in the human mind.” This is a common “Mental Processes” argument, but it fails Prong 2. The inability of a human to perform a calculation quickly does not transform a mathematical concept into a practical application. Per Gottschalk v. Benson, a formula is an abstract idea even if it is too complex for manual calculation. The “dynamic adjustment” described is merely a mathematical relationship (weighting variables over time). Reciting that a computer performs this calculation more efficiently than a human is a statement of the computer’s generic utility, not a technical improvement to the computer itself.

Reason #3: Lack of a Technological “How”. The specification describes the purpose (avoiding bias) but does not disclose a technical solution to a computer-centric problem. The claim uses result-oriented language.
It recites that a time weight adjusts scores to avoid bias, but it does not recite a specific, unconventional technical architecture that changes how the computer processes data. According to the 2024 AI Guidance, if a claim simply applies a mathematical model (like a beta update or weight adjustment) to achieve a result, without an improvement to the underlying AI or computer system, it does not move beyond the abstract idea.

Reason #4: An Administrative/Organizational “Improvement” is not “Technical”. The Applicant claims that this is a “concrete improvement to resource planning technology”. “Resource planning” is a quintessential business method. An improvement to the accuracy or fairness of a business decision is not a “technical improvement” within the meaning of 35 U.S.C. § 101. The technology (the computer/interface) remains unchanged; only the data being processed and the logic of the business rule have been modified.

Reason #5: Preemption of a Mathematical Concept. By claiming the “dynamic adjustment of scores over time or transaction count”, the Applicant is attempting to monopolize a mathematical principle of normalization. Any system attempting to normalize performance data over time would necessarily use some form of time-weighting or transaction-counting. Allowing this claim would effectively preempt other developers from using basic statistical normalization in workforce management software.
In conclusion, for Independent Claims 1, 4 and 9, Examiner maintains that the “time weight” and “bias avoidance” are mathematical concepts applied to a business method, failing to provide the “significantly more” or the “technical integration” required to satisfy Step 2A Prong 2.
Argument #4:
(D). Applicant argues that the amended claims for Independent Claims 1, 4 and 9 recite “generating an index of fit score that combines the compound performance index score with employee availability calculated based on expected time of a certain job and workload details. This amendment specifies how the index of fit score is generated by combining the compound performance index score with availability calculated based on expected job time… This is not merely a field-of-use limitation, but specifies a particular technical methodology for calculating employee fit scores” (see Applicant Remarks, 2nd ¶ of Page 9 of 12, dated 02/27/2026). Examiner respectfully disagrees.
In response, Examiner notes that the “index of fit” calculation is merely a mathematical exercise or a business method that lacks a technical “how-to” to transform the computer’s operation.

Reason #1: A Functional Result, not a Technical Methodology. The Applicant characterizes the calculation as a “technical methodology”, but it is actually a functional result. The claim recites what the system does (combining a performance score with availability and workload) rather than how the computer’s hardware or software architecture is technically modified. Under MPEP § 2106.04(d), a “practical application” requires more than just an “apply it” instruction. Calculating a “fit” based on “expected time” is a standard administrative task that humans have performed for decades; automating this logic on a computer does not create a “technical methodology”.

Reason #2: Mere Automation of “Mental Processes”. The “Index of Fit” (IoF) mimics a mental process used by project managers. A human manager regularly assesses whether an employee is a “fit” for a job by looking at their past performance (CPI) and their current calendar (availability). The Federal Circuit in Credit Acceptance Corp. v. Westlake Services established that taking a well-known manual process and automating it using a computer’s generic functions does not constitute a “practical application”. The claim simply replaces a manager’s judgment with a mathematical algorithm.

Reason #3: “Expected Time” and “Workload” are Non-Technical Data Points. The inputs for the IoF score are administrative data, not technical parameters. “Expected time of a job” and “workload details” are business variables. Processing these specific variables does not solve a technological problem. If the “improvement” is simply that the computer provides a more accurate ranking than a human, that is an improvement in the quality of information, not an improvement in the technology of the computer.

Reason #4: Failure to Impose a “Meaningful Limit” (Preemption).
The Applicant claims this is more than a “field of use” limitation, yet it preempts the entire concept of automated scheduling. Any automated resource allocation system must consider performance and availability to function. By claiming the combination of these two factors, the Applicant is attempting to patent the logical concept of scheduling. According to the USPTO 2024 Guidance, if the additional elements are “inextricably linked” to the abstract idea itself (the scoring and ranking), they do not integrate that idea into a practical application.

Reason #5: Absence of a “Technical Solution” to a “Technical Problem”. The specification describes the IoF as a “final score used to rank employees”. This is a business solution to the problem of “who should do this task?” It is not a technical solution to a computer-centric problem like data latency, network bandwidth, or memory management. Because the problem being solved is organizational (workforce management), the integration into a computer interface is viewed as an “insignificant extra-solution activity.”
In conclusion, the “Index of Fit” scoring is a mathematical algorithm directed to an administrative task. The claims fail Step 2A Prong 2 because they do not provide a technical improvement to a technological field; they merely use a computer to perform a sophisticated version of a method of organizing human activity. Therefore, Examiner maintains that Claims 1-9 as currently recited do not contain additional elements that integrate the judicial exception into a practical application under Step 2A Prong 2 of the 35 U.S.C. § 101 analysis.
Argument #5:
(E). Applicant argues that “the combination of these technical elements: the workflow interface display, the time weight adjustment to avoid bias, and the specific index of fit score calculation, provides specific improvements to the field of autonomous resource planning systems” (see Applicant Remarks, 3rd ¶ of Page 9 of 12, dated 02/27/2026). Examiner respectfully disagrees.
The combination of elements fails to integrate the abstract idea into a practical application because the elements are merely generic computer functions or insignificant post-solution activities.

Reason #1: Applicant Misapplies USPTO Example 47. In Example 47, Claim 3 was found eligible because it took a physical/technical remedial action, “using artificial neural network (ANN) to detect malicious network packets, demonstrating eligibility under patent law by integrating an abstract idea into a practical application”. In contrast, the current claims merely display an assignment. According to the USPTO 2024 Guidance, displaying information or notifying a user is “insignificant extra-solution activity” that does not constitute a “remedial action” or an “improvement to technology or a technical field”.

Reason #2: “Autonomous Resource Planning” is a Business Method, not a Technology. The Applicant claims an improvement to “resource planning systems”. “Resource planning” is a fundamental method of organizing human activity. Improving the logic of a business process (e.g., making it more “fair” or “efficient”) is not a technical improvement to a computer’s hardware or software architecture. Under Alice, if the “improvement” is in the concept of management rather than the functionality of the computer, it remains ineligible.

Reason #3: “Avoiding Bias” is Social/Business Policy. The “time weight adjustment to avoid bias” is a policy-driven mathematical rule. Reducing “bias” is a qualitative business objective. The mathematical weights used to achieve this are Mathematical Concepts that do not solve a technical problem in computer science (such as reducing latency or increasing security). Simply automating a “fairer” management decision does not create a “technical function” under Prong 2.

Reason #4: The Ordered Combination Lacks “Significantly More”. The Applicant argues that the combination of elements provides the integration. The combination is merely the sum of its abstract parts: (1) an abstract mathematical score, (2) an abstract mental evaluation of “fit”, and (3) a generic display of the result. Per Electric Power Group, an ordered combination of non-inventive, abstract steps (gathering data, analyzing it, and displaying it) does not satisfy the requirement for a practical application.
The “workflow interface” is a conventional tool for displaying data and does not provide a specialized technical environment.

Reason #5: Failure to Transform the Computer’s Operational State. The claim does not change how the computer’s memory is managed or how its processor handles tasks. The “practical application” described by the Applicant (employees starting on tasks) happens outside of the computer and is performed by humans, not by the technology system itself.
In conclusion, these claims remain ineligible because they use a generic computer to automate a complex business method. The “display” and “bias avoidance” are non-technical features that fail to integrate the abstract idea into a specific technical improvement under Step 2A Prong 2. The ordered combination of elements in the Dependent Claims (including the limitations inherited from the parent claim(s)) adds nothing that is not already present when the elements are taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Accordingly, the subject matter encompassed by the dependent claims fails to amount to a practical application or significantly more than the abstract idea itself. Therefore, under Step 2B, Claims 1-9 do not include additional elements that are sufficient to amount to significantly more than the recited judicial exceptions.
Thus, Claims 1-9 are ineligible with respect to the 35 U.S.C. § 101 analysis.
Claim Objections
11. Claims 2 and 7 are objected to because of the following informalities:
(A). Dependent Claim 2 recites the following claim limitation: “The autonomous resource planning device as claimed in claim 1, further comprising a wherein the user device that is coupled with the processing circuitry and adapted to receive the one or more inputs representing the static scores associated with the one or more employees, such that the processing circuitry is configured to compute the base score associated with each employee of the one or more employees”. Examiner notes that the unnecessary article in the phrase “a wherein the user device” results in a minor claim informality; the claim should recite just “wherein the user device” with the “a” deleted/removed. Therefore, for the purposes of examination, Examiner suggests to Applicant to amend Dependent Claim 2 to recite the following: “The autonomous resource planning device as claimed in claim 1, further comprising [[a]] wherein the user device that is coupled with the processing circuitry and adapted to receive the one or more inputs representing the static scores associated with the one or more employees, such that the processing circuitry is configured to compute the base score associated with each employee of the one or more employees”.
(B). Dependent Claim 7 recites the following claim limitation: “The autonomous resource planning system as claimed in claim 4, wherein the time weight facilitates the autonomous resource planning system to adjust the base score and recent performance index scores the recent performance index score and further facilitates to predict the compound performance index score”. Examiner points out that there appears to be an unnecessary duplication in the phrase “adjust the base score and recent performance index scores the recent performance index score”, with “recent performance index scores” and “the recent performance index score” duplicated, which results in a minor claim informality here. Please delete the plural phrase “recent performance index scores” and keep the singular phrase “the recent performance index score”. Therefore, for the purposes of examination, Examiner suggests to Applicant to amend Dependent Claim 7 to recite the following: “The autonomous resource planning system as claimed in claim 4, wherein the time weight facilitates the autonomous resource planning system to adjust the base score and [[recent performance index scores]] the recent performance index score and further facilitates to predict the compound performance index score”. Appropriate corrections are required.
Claim Rejections - 35 USC § 112
12. The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
13. Claim 9 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
(A). Independent Claim 9 recites the following claim limitation: “generating, by way of the processing circuitry, a compound performance index score based on computing the base score and the recent performance index score of the one or more employees with time weight, wherein the time weight adjusts the base score and the recent performance index score to predict the compound performance index score over time or transaction count to avoid biasness of years of experience, wherein the weights are determined through a beta update process that aggregates multiple machine learning instructions, and wherein depending upon the volume of transactions different thresholds will be activated and each threshold bucket contains an ensemble of machine learning instruction”. There appears to be insufficient antecedent basis for this limitation in Independent Claim 9 concerning the phrase “the volume of transactions”, which has not been previously introduced in the preceding claim limitations of Independent Claim 9. The preceding limitations recite “transaction count”, which is not the same as or consistent with the recitation of “the volume of transactions”. Applicant should either introduce the phrase earlier or rewrite it here as “the transaction count” instead of “the volume of transactions”.
For the purposes of examination, Examiner suggests to Applicant to amend the claim limitations of Independent Claim 9 to recite the following: “generating, by way of the processing circuitry, a compound performance index score based on computing the base score and the recent performance index score of the one or more employees with time weight, wherein the time weight adjusts the base score and the recent performance index score to predict the compound performance index score over time or transaction count to avoid biasness of years of experience, wherein the weights are determined through a beta update process that aggregates multiple machine learning instructions, and wherein depending upon [[the]] a volume of transactions different thresholds will be activated and each threshold bucket contains an ensemble of machine learning instruction”. Appropriate corrections are required.
Claim Rejections - 35 USC § 101
14. 35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
15. Claims 1-9 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1: Claims 1-9 each fall within a statutory category, namely an “apparatus” or a “device” (Claims 1-3), an “apparatus” or a “system” (Claims 4-8), and a “method” or a “process” (Claim 9).
Step 2A Prong One: Independent Claims 1, 4 and 9 recite limitations that set forth the abstract idea(s), namely (abstract idea limitations shown in bold; additional elements shown via strikethrough):
“receive one or more inputs representing static scores associated with one or more employees” (see Independent Claim 4);
“” (see Independent Claim 4);
“” (see Independent Claims 1 and 4);
“for a given task in a workflow displayed ” (see Independent Claims 1 and 4);
“(i) receive one or more inputs representing static scores associated with one or more employees” (see Independent Claims 1 and 4);
“(ii) compute a base score associated with each employee of one or more employees using a multicriteria decision making process” (see Independent Claims 1 and 4);
“(iii) allocate a recent performance index score for each employee of one or more employees based on” (see Independent Claims 1 and 4);
“(a) performances associated with a respective employee of one or more employees” (see Independent Claims 1 and 4)
“(b) corresponding weights associated with historic details” (see Independent Claims 1 and 4)
“(iv) generate a compound performance index score based on computing the base score and the recent performance index score of the one or more employees with time weight, wherein the time weight adjusts the base score and the recent performance index score to predict the compound performance index score over time or transaction count to avoid biasness of years of experience, wherein the weights are determined through a beta update process that aggregates multiple instructions, and wherein depending upon a volume of transactions different threshold will be activated and each threshold bucket contains an ensemble of instructions” (see Independent Claims 1 and 4);
“(v) generate an index of fit score based on the respective employee of the one or more employees availability and workload details that combines the compound performance index score with employee availability calculated based on expected time of a certain job and workload details” (see Independent Claims 1 and 4);
“(vi) allocate highest ranked resource to a corresponding employee of the one or more employees with maximum index of fit score” (see Independent Claims 1 and 4);
“display, , an assignment of the corresponding employee to the workflow for the given task, based on the index of fit score” (see Independent Claims 1 and 4);
“” (see Independent Claim 4);
“(i) receive data representing the performances associated with each employee of the one or more employees” (see Independent Claim 4);
“(ii) generate the compound performance index score based on computing the base score and the recent performance index score of the one or more employees with time weight” (see Independent Claim 4);
“display the assignment of the corresponding employee to the workflow for the given task” (see Independent Claim 4);
“receiving, , a given task in a workflow and one or more inputs representing static scores associated with one or more employees” (see Independent Claim 9);
“computing, , a base score associated with each employee of one or more employees using a multicriteria decision making process” (see Independent Claim 9);
“allocating, , a recent performance index score for each employee of one or more employees based on” (see Independent Claim 9);
“(a) performances associated with a respective employee of one or more employees” (see Independent Claim 9);
“(b) corresponding weights associated with historic details” (see Independent Claim 9);
“generating, , a compound performance index score based on computing the base score and the recent performance index score of the one or more employees with time weight, wherein the time weight adjusts the base score and the recent performance index score to predict the compound performance index score over time or transaction count to avoid biasness of years of experience, wherein the weights are determined through a beta update process that aggregates multiple instructions, and wherein depending upon the volume of transactions different thresholds will be activated and each threshold bucket contains an ensemble of instruction” (see Independent Claim 9);
“generating, , an index of fit score based on the respective employee of the one or more employees availability and workload details that combines the compound performance index score with employee availability calculated based on expected time of a certain job and workload details” (see Independent Claim 9);
“allocating , highest ranked resource to a corresponding employee of the one or more employees with maximum index of fit score” (see Independent Claim 9);
“displaying an assignment of the corresponding employee to the workflow for the given task” (see Independent Claim 9).
Here, for Independent Claims 1, 4 and 9, these claim limitation steps are directed to the abstract idea of organizing and automating employee performance evaluation, performance indexing, and workload assignment through the application of mathematical concepts (compound indexing) and data-driven analysis to allocate tasks.
Examiner notes that the evaluation of performance and determining “fit” based on availability are concepts that can be performed in the human mind or with pen and paper. These steps are hereby classified under the “Mental Processes” grouping.
These claim limitation steps also recite “computing a base score”; “allocating a recent performance index score”; “beta update process” and “computing a compound performance index score”. These are all mathematical calculations or mathematical relationships performed on data, which are hereby classified under the “Mathematical Concepts” grouping. Moreover, the ultimate goal of Independent Claims 1, 4 and 9 is “managing personal behavior or relationships or interactions between people”, specifically managing a workflow by assigning employees to tasks, which is hereby classified under the “Certain Methods of Organizing Human Activities” grouping. This includes, for example, the steps of “allocating highest ranked resource” and generating an index of fit score (availability/workload).
Therefore, in summary, the abstract idea limitations (as identified above in bold), under the broadest reasonable interpretation of the claims as a whole, cover performance of their limitations as “Mental Processes”, which pertains to (1) concepts performed in the human mind (including observations or evaluations or judgments) or (2) using pen and paper as a physical aid. Using pen and paper to help perform these mental steps does not negate the mental nature of these limitations, and the use of “physical aids” in implementing the abstract mental process does not preclude these claims from reciting an abstract idea. See MPEP § 2106.04(a) III C.
Additionally, or alternatively, these abstract idea limitations (as identified above in bold), under the broadest reasonable interpretation of the claims as a whole, cover performance of their limitations as “Certain Methods of Organizing Human Activities” which pertains to (3) managing personal behavior or relationships or interactions between people (including teachings or following rules or instructions).
Additionally, or alternatively, these abstract idea limitations (as identified above in bold), under the broadest reasonable interpretation of the claims as a whole, cover performance of their limitations as “Mathematical Concepts” which pertains to (4) mathematical calculations or (5) mathematical relationships.
That is, other than reciting the additional elements of (e.g., “processing circuitry” & “a user device” & “interface” & “autonomous resource planning device” & “a desktop computer” & “laptop computer” & “smartphone” & “tablet computer” & “wearable device” & “one or more employee devices”, etc…), nothing in the claim elements precludes the steps from being performed as “Certain Methods of Organizing Human Activities” which pertains to (1) managing personal behavior or relationships or interactions between people (including teachings or following rules or instructions) and additionally or alternatively as “Mental Processes” which pertains to (2) concepts performed in the human mind (including observations or evaluations or judgments) or (3) using pen and paper as a physical aid and additionally or alternatively as “Mathematical Concepts” which pertains to (4) mathematical calculations or (5) mathematical relationships.
Moreover, the mere recitation of generic computer components such as (e.g., “processing circuitry” & “user device”) does not take the claims out of “Certain Methods of Organizing Human Activities” or “Mental Processes” or “Mathematical Concepts” Groupings.
Therefore, at Step 2A Prong One, Claims 1-9 recite an abstract idea, and the analysis proceeds to Step 2A Prong Two.
Step 2A Prong Two: With respect to Step 2A Prong Two of the eligibility inquiry (as explained in MPEP § 2106.04(d)), the judicial exception is not integrated into a practical application. Independent Claim 1 recites additional elements directed to: (e.g., “processing circuitry” & “user device” & “interface”). Independent Claim 4 recites additional elements directed to: (e.g., “processing circuitry” & “a desktop computer” & “laptop computer” & “smartphone” & “tablet computer” & “wearable device” & “user device” & “one or more employee devices” & “interface”). Independent Claim 9 recites additional elements directed to: (e.g., “processing circuitry” & “user device” & “autonomous resource planning (ARP) system”). These additional elements have been considered individually and in combination, but fail to integrate the abstract idea into a practical application because they amount to using computing elements or instructions (software) to perform the abstract idea, similar to adding the words “apply it” (or an equivalent), which merely serves to link the use of the judicial exception to a particular technological environment. See MPEP § 2106.05(f) and MPEP § 2106.05(h).
Independent Claims 1, 4 and 9: With respect to reliance on (e.g., “multiple machine learning instructions” & “autonomous resource planning device” & “an ensemble of machine learning instructions”) as additional elements, when considered individually and as an ordered combination (as a whole) for the claim limitations of Independent Claims 1, 4 and 9, these additional elements do not provide limitations that are indicative of integration into a practical application under Step 2A Prong Two because: (1) the claims as a whole are limited to a particular field of use or technological environment, namely performance scoring of employees based on availability and workload details in order to allocate the highest ranked resource to a corresponding employee in the field of business management or performance evaluation and scoring of employees (see MPEP § 2106.05(h)); or (2) the claims recite mere instructions to implement an abstract idea on a computer, or use a computer as a tool to “apply” the recited judicial exceptions (see MPEP § 2106.05(f)). Furthermore, certain limitations in Independent Claims 1, 4 and 9, even if viewed as steps of “mere data gathering” (e.g., “receive one or more inputs representing static scores associated with one or more employees” (see Independent Claims 1, 4 and 9) & “receive data representing the performances associated with each employee of the one or more employees” (see Independent Claim 4)) and “mere data outputting/displaying” (e.g., “display, through the interface of the user device, an assignment of the corresponding employee to the workflow for the given task, based on the index of fit score” (see Independent Claims 1, 4 and 9)), when evaluated as additional elements, at most amount to insignificant extra-solution activities (see MPEP § 2106.05(g)).
In addition, these limitations fail to provide an improvement to the functioning of a computer or to any other technology or technical field, fail to apply the exception with a particular machine, fail to apply the judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition, fail to effect a transformation of a particular article to a different state or thing, and fail to apply/use the abstract idea in a meaningful way beyond generally linking the use of the judicial exception to a particular technological environment.
Accordingly, because the Step 2A Prong One and Prong Two analysis resulted in the conclusion that the claims are directed to an abstract idea, additional analysis under Step 2B of the eligibility inquiry must be conducted in order to determine whether any claim element or combination of elements amounts to significantly more than the judicial exception. Therefore, at Step 2A Prong Two, Claims 1-9 are directed to the abstract idea and do not recite additional elements that integrate the abstract idea into a practical application.
Step 2B: (As explained in MPEP § 2106.05), it has been determined that the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. Independent Claim 1 recites additional elements directed to: (e.g., “processing circuitry” & “user device” & “interface”). Independent Claim 4 recites additional elements directed to: (e.g., “processing circuitry” & “a desktop computer” & “laptop computer” & “smartphone” & “tablet computer” & “wearable device” & “user device” & “one or more employee devices” & “interface”). Independent Claim 9 recites additional elements directed to: (e.g., “processing circuitry” & “user device” & “autonomous resource planning (ARP) system”). These elements have been considered individually and in combination, but fail to add significantly more to the claims because they amount to using computing elements or instructions (software) to perform the abstract idea, similar to adding the words “apply it” (or an equivalent), which merely serves to link the use of the judicial exception to a particular technological environment (computing environment) and does not amount to significantly more than the abstract idea itself. See MPEP § 2106.05(h) and MPEP § 2106.05(f). Notably, Applicant’s Specification suggests that the claimed invention relies on nothing more than a general-purpose computer executing instructions to implement the invention (see at least Applicant’s Specification at Page 11, Lns. 7-16 & Page 12, Lns. 29-31).
Independent Claims 1, 4 and 9: With respect to reliance on (e.g., “multiple machine learning instructions” & “autonomous resource planning device” & “an ensemble of machine learning instructions”) as additional elements, when considered individually and as an ordered combination (as a whole) for the claim limitations of Independent Claims 1, 4 and 9, these additional elements do not amount to significantly more than the recited judicial exceptions under Step 2B because: (1) the claims as a whole are limited to a particular field of use or technological environment, namely performance scoring of employees based on availability and workload details in order to allocate the highest ranked resource to a corresponding employee in the field of business management or performance evaluation and scoring of employees (see MPEP § 2106.05(h)); or (2) the claims recite mere instructions to implement an abstract idea on a computer, or use a computer as a tool to “apply” the recited judicial exceptions (see MPEP § 2106.05(f)). Furthermore, certain limitations in Independent Claims 1, 4 and 9, even if viewed as steps of “mere data gathering” (e.g., “receive one or more inputs representing static scores associated with one or more employees” (see Independent Claims 1, 4 and 9) & “receive data representing the performances associated with each employee of the one or more employees” (see Independent Claim 4)), when evaluated as additional elements, at most amount to insignificant extra-solution activities (see MPEP § 2106.05(g)), and such activities have been recognized as Well-Understood, Routine and Conventional (WURC), and are thus insufficient to add significantly more to the abstract idea.
See MPEP § 2106.05(d)(II) - Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network).
The additional element of “machine learning” in Independent Claims 1, 4 and 9 does not amount to significantly more than the judicial exceptions under Step 2B due to being expressly recognized as Well-Understood, Routine and Conventional (WURC) in the art.
See for example US PG Pub (US 2022/0391801 A1), hereinafter Kober et al. Kober at ¶ [0067] recites the following: “The processor 230 may utilize data stored in the memory 240 as a neural network. The neural network may include a machine learning architecture. In some aspects, the neural network may be configured for decision making processes based on Analytic Hierarchy Processing (AHP), example aspects of which are described herein. In some cases, the neural network may be or include an artificial neural network (ANN). In some other aspects, the neural network may be or include any machine learning network such as, for example, a deep learning network, a convolutional neural network, or the like. Some elements stored in memory 240 may be described as or referred to as instructions or instruction sets, and some functions of the communication device 205 may be implemented using machine learning techniques.” See also, for example, US PG Pub (US 2023/0004923 A1), hereinafter Luch et al. Luch at ¶ [0067] recites the following: “The neural network may include a machine learning architecture. In some aspects, the neural network may be configured for decision making processes based on Analytic Hierarchy Processing (AHP), example aspects of which are described herein. In some cases, the neural network may be or include an artificial neural network (ANN). In some other aspects, the neural network may be or include any machine learning network such as, for example, a deep learning network, a convolutional neural network, or the like.”
In addition, when taken as an ordered combination, the ordered combination adds nothing that is not already present when the elements are taken individually. There is no indication that the combination of elements integrates the abstract idea into a practical application. Therefore, when viewed as a whole, these additional claim elements do not provide meaningful limitations that transform the abstract idea into a practical application or that, as an ordered combination, amount to significantly more than the abstract idea itself.
Dependent Claims 2-3 and 5-8 recite additional elements directed to: (e.g., “one or more employee devices” & “autonomous resource planning system”), and, when considered individually and as an ordered combination (as a whole) with their limitations, recite the same abstract idea(s) as shown in Independent Claims 1, 4 and 9, along with further steps/details that could be performed as “Mental Processes”, which pertains to (1) concepts performed in the human mind (including observations or evaluations or judgments) or (2) using pen and paper as a physical aid; additionally or alternatively as “Certain Methods of Organizing Human Activities”, which pertains to (3) managing personal behavior or relationships or interactions between people (including teachings or following rules or instructions); and additionally or alternatively as “Mathematical Concepts”, which pertains to (4) mathematical calculations or (5) mathematical relationships.
Dependent Claims 2-3, 5-6 and 8 further narrow the abstract ideas, and are therefore still ineligible for the reasons previously provided in Step 2A Prong Two and Step 2B for Independent Claims 1, 4 and 9.
Dependent Claim 7: With respect to reliance on “autonomous resource planning system” as an additional element shown in Dependent Claim 7, when considered individually and as an ordered combination (as a whole) in view of these claim limitations, this additional element does not provide limitations that are indicative of integration into a practical application under Step 2A Prong Two and also does not amount to significantly more than the recited judicial exceptions under Step 2B because: (1) the claims as a whole are limited to a particular field of use or technological environment, namely performance scoring of employees based on availability and workload details in order to allocate the highest ranked resource to a corresponding employee in the field of business management or performance evaluation and scoring of employees (see MPEP § 2106.05(h)); or (2) the claims recite mere instructions to implement an abstract idea on a computer, or use a computer as a tool to “apply” the recited judicial exceptions (see MPEP § 2106.05(f)).
The ordered combination of elements in the Dependent Claims (including the limitations inherited from the parent claim(s)) adds nothing that is not already present when the elements are taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Accordingly, the subject matter encompassed by the dependent claims fails to amount to a practical application of, or significantly more than, the abstract idea itself. Therefore, under Step 2B, Claims 1-9 do not include additional elements that are sufficient to amount to significantly more than the recited judicial exceptions. Thus, Claims 1-9 are ineligible under 35 U.S.C. § 101.
Examining Claims with Respect to Prior Art
16. Applicant’s arguments with respect to the prior art rejections of Independent Claims 1, 4 and 9 (see Applicant’s Remarks, Pages 10-11 of 12 filed on 02/27/2026) have been fully considered and are found to be persuasive. Therefore, the 35 U.S.C. § 102(a)(1) prior art rejections of Claims 1-9 are withdrawn.
However, Claims 1-9 remain rejected under 35 U.S.C. § 101. Additionally, the 35 U.S.C. § 112 (b) rejection of Claim 9 and the Claim Objections to Claims 2 and 7 remain as well.
For Independent Claims 1, 4 and 9, there is no disclosure in the existing prior art, or in any newly discovered art, that teaches and/or discloses the sequence of operations of each of the following features, either individually or in combination:
“(iv) generate a compound performance index score based on computing the base score and the recent performance index score of the one or more employees with time weight, wherein the time weight adjusts the base score and the recent performance index score to predict the compound performance index score over time or transaction count to avoid biasness of years of experience, wherein the weights are determined through a beta update process that aggregates multiple machine learning instructions, and wherein depending upon a volume of transactions different threshold will be activated and each threshold bucket contains an ensemble of machine learning instructions”;
“generating an index of fit score based on the respective employee of the one or more employees availability and workload details that combines the compound performance index score with employee availability calculated based on expected time of a certain job and workload details”.
The closest prior art references are as follows:
(1). US PG Pub (US 2022/0391801 A1) – “Composite Worker Ranking”, hereinafter Kober et al.;
(2). US PG Pub (US 2021/006509 A1) – “Data Driven Systems and Methods for Optimization of a Target Business”, hereinafter Bhattacharyya et al.;
(3). US PG Pub (US 2020/0104777 A1) – “Adaptive Artificial Intelligence for User Training and Task Management”, hereinafter Bouhini et al.
Regarding the Kober reference, Kober teaches or suggests the sequence of operations comprising the following:
- (i) receive one or more inputs (see at least Kober: ¶ [0093] & ¶ [0120-0122] & ¶ [0166]. Kober teaches that the application manager 241 may build any number of user profiles using automatic processing, using artificial intelligence and/or using input from one or more users associated with the communication devices 205. The subjective evaluations may be input, for example, via a ranking application 241-b described herein. The evaluation may be autonomously or semi-autonomously (e.g., based on user inputs by upper management) performed by the ranking application 106.) representing static scores associated with one or more employees (see at least Kober: ¶ [0037-0039] & ¶ [0100]. Kober notes that the ranking application 106 may support features associated with ranking of members (e.g., employees, contract workers) of one or more workforces. See Kober at ¶ [0039]: The ranking manager 111 may support features associated with ranking of members (e.g., employees, contract workers) of one or more workforces. See Kober at ¶ [0120]: “One or more workers (e.g., supervisors, managers) may provide subjective evaluations associated with another worker (e.g., employee, independent contractor) via a communication device 205 using the ranking application 241-b.);
- (ii) compute a base score associated with each employee of one or more employees using a multicriteria decision making process (see at least Kober: ¶ [0044-0047] & ¶ [0084-0085] & ¶ [0100]. Kober teaches that the system 200 (e.g., via AHP) may support using numerical data for comparative ranking among workers with respect to any criteria. In some aspects, the system 200 may support merging comparative rankings (e.g., based on employee tardiness and other criteria) with other AHP qualitative evaluations described herein. An example of comparative ranking and pair-wise ratio judgements is described herein with respect to an example objective metric (e.g., worker tardiness). In an example, the system 200 may support using numerical data for employee tardiness for comparative ranking among workers. See at least Kober at ¶ [0044-0047]: “Techniques are described for applying analytic hierarchy process (AHP) techniques for scheduling workers with respect to available work shifts or time-slots of a schedule. The analytic hierarchy process (AHP) which is a “multicriteria decision making process is shown for ranking/scheduling of workers.” The subjective metrics may be, for example, subjective and sparser data provided by one or more judges (e.g., current managers, former managers) with respect to worker performance. In an example, the server may combine the objective metrics and subjective metrics using AHP, based on which the server may produce a composite ranking (e.g., a single overall metric, also referred to herein as composite worker ranking) for each worker. See at least Kober at ¶ [0100]: The ranking application 241-b may support features associated with ranking of members (e.g., employees, contract workers) of one or more workforces.);
- (iii) allocate a recent performance index score for each employee of one or more employees based on (see at least Kober: ¶ [0027] & ¶ [0045] & ¶ [0083] & ¶ [0125-0128]. Kober notes that The AHP rankings described herein may be based on a scale. That is, for example, the AHP scale for pairwise ratio comparisons may include a numerical range of 1 to 9. The dynamic range of the ratios of various metrics described herein (e.g., objective metrics, subjective evaluations, scores) may include values in the range of 1 to 9. See also Kober at ¶ [0027]: HCM techniques for monitoring, evaluating, scheduling, and managing members (e.g., workers, employees, managers) of one or more workforces based on parameters such as associated skillsets, objective performance metrics, and subjective performance metrics. See also Kober at ¶ [0045]: The productivity of a worker may be based on a combination of subjective ratings (e.g., reliability, flexibility, skill level, etc.) and objective performance metrics (e.g., measurable metrics based on criteria such as punctuality, efficiency, etc.). See also Kober at ¶ [0047]: The subjective metrics may be, for example, subjective and sparser data provided by one or more judges (e.g., current managers, former managers) with respect to worker performance. See also Kober at ¶ [0125-0128].)
- (a) performances associated with a respective employee of one or more employees (see at least Kober: ¶ [0125-0129] & ¶ [0131]. Kober notes that the performance data 269-b may include objective metrics (also referred to herein as performance metrics or objective assessments of performance). The objective metrics may include, for example, assessments identified or derived from any measured or other unbiased classification of the performance of a work task. The objective assessments may be measurable based on a scale such that the objective assessments are independent of a manager (e.g., supervisor, worker) reporting the data. The assessment data 269-a and the performance data 269-b may be employer driver and/or employee driven. For example, the server 210 may support objective and subjective rankings provided by any worker (e.g., supervisor, manager, supervisee, worker, independent contractor, etc.) with respect to another worker. For example, a supervisor may provide subjective evaluations and/or objective metrics associated with a worker (e.g., supervisee), and a worker (e.g., supervisee) may provide subjective evaluations and/or objective metrics associated with a supervisor.);
- (b) corresponding weights associated with historic details (see at least Kober: ¶ [0048] & ¶ [0093] & ¶ [0158]. Kober notes that the objective function may be parameterized to provide varying weights to different criteria in the scheduling process. See also Kober at ¶ [0093]: The application manager 241 may be configured to analyze content, which may be any type of information, including information that is historical or in real-time. See also Kober at ¶ [0158]: The system 300 may support applying weighting factors to the managers and/or judgements and rankings provided by the managers. See also Kober at ¶ [0185-0186]: At 430, the server 110 (e.g., ranking manager 111) may apply AHP on the upper manager's PCM to calculate a total ordering of managers' relative weights. See also Kober at ¶ [0337]: “Historical scheduling information associated with the set of first members.”)
- (iv) generate a compound performance index score (see at least Kober: ¶ [0047-0049] & ¶ [0051-0054]. Kober teaches that example aspects applying AHP techniques for producing composite worker rankings and scheduling workers based on the composite worker rankings are described herein with reference to FIGS. 2 through 5 and FIG. 8. Techniques are described for iteratively scheduling workers based on composite worker rankings and feedback from workers. In an example, the techniques may include multiple scheduling passes or iterations for proposing work shifts (and/or time-slots) to a set of workers based on a priority order corresponding to composite worker rankings.) based on computing the base score (see at least Kober: ¶ [0044-0047] & ¶ [0084-0085] & ¶ [0100]. Kober teaches that the system 200 (e.g., via AHP) may support using numerical data for comparative ranking among workers with respect to any criteria. In some aspects, the system 200 may support merging comparative rankings (e.g., based on employee tardiness and other criteria) with other AHP qualitative evaluations described herein. An example of comparative ranking and pair-wise ratio judgements is described herein with respect to an example objective metric (e.g., worker tardiness). In an example, the system 200 may support using numerical data for employee tardiness for comparative ranking among workers. See at least Kober at ¶ [0044-0047]: “Techniques are described for applying analytic hierarchy process (AHP) techniques for scheduling workers with respect to available work shifts or time-slots of a schedule. The analytic hierarchy process (AHP) which is a “multicriteria decision making process is shown for ranking/scheduling of workers.” The subjective metrics may be, for example, subjective and sparser data provided by one or more judges (e.g., current managers, former managers) with respect to worker performance. 
In an example, the server may combine the objective metrics and subjective metrics using AHP, based on which the server may produce a composite ranking (e.g., a single overall metric, also referred to herein as composite worker ranking) for each worker. See at least Kober at ¶ [0100]: The ranking application 241-b may support features associated with ranking of members (e.g., employees, contract workers) of one or more workforces.) and the recent performance index score of the one or more employees (see at least Kober: ¶ [0027] & ¶ [0045] & ¶ [0083] & ¶ [0125-0128]. Kober notes that The AHP rankings described herein may be based on a scale. That is, for example, the AHP scale for pairwise ratio comparisons may include a numerical range of 1 to 9. The dynamic range of the ratios of various metrics described herein (e.g., objective metrics, subjective evaluations, scores) may include values in the range of 1 to 9. See also Kober at ¶ [0027]: HCM techniques for monitoring, evaluating, scheduling, and managing members (e.g., workers, employees, managers) of one or more workforces based on parameters such as associated skillsets, objective performance metrics, and subjective performance metrics. See also Kober at ¶ [0045]: The productivity of a worker may be based on a combination of subjective ratings (e.g., reliability, flexibility, skill level, etc.) and objective performance metrics (e.g., measurable metrics based on criteria such as punctuality, efficiency, etc.). See also Kober at ¶ [0047]: The subjective metrics may be, for example, subjective and sparser data provided by one or more judges (e.g., current managers, former managers) with respect to worker performance. See also Kober at ¶ [0125-0128].) with time weight, wherein the weights (see at least Kober: ¶ [0172-0175] & ¶ [0185-0187] & ¶ [0233]. 
Kober notes that the server 110 may distinguish between available workers based on criteria such as seniority, proficiency, or other configurable categories, in which case W>1.) are determined through a beta update process that aggregates multiple machine learning instructions (see at least Kober: ¶ [0067-0070] & ¶ [0097] & ¶ [0355]. Kober notes that the neural network may include a machine learning architecture. The neural network may be configured for decision making processes based on Analytic Hierarchy Processing (AHP), example aspects of which are described herein. Data that may be stored in memory for use by components thereof includes a data model(s) (inclusive of a neural network model(s) and/or AHP model(s)) and/or training data (also referred to herein as training data and feedback). See also Kober at ¶ [0070]: The communication device 205 (e.g., the application manager 241) may update one or more data models 242 based on learned information included in the training data 243. See also Kober at ¶ [0097]: Data within the database of the memory 240 may be updated, revised, edited, or deleted by the application manager 241. The application manager 241 may support continuous, periodic, and/or batch fetching of content (e.g., content referenced within member evaluations or rankings, member schedules, member information, preferences or parameters related to a user, etc.) and content aggregation. See also Kober at ¶ [0355]: The server 110 may identify response information associated with the first set of candidate temporal periods and provide the response information to a machine learning network. 
The response information may include at least one of: a set of responses corresponding to a first set of proposed candidate temporal periods; and an indication that an elapsed time satisfies a threshold associated with receiving responses for a second set of proposed candidate temporal periods.), and wherein depending upon the volume of transactions different thresholds will be activated and each threshold bucket contains an ensemble of machine learning instructions (see at least Kober: ¶ [0085-0087] & ¶ [0109] & ¶ [0355-0356]. Kober notes that incidents of on-time arrivals may be included in the aggregate computation (e.g., where on-time arrivals correspond to an entry value of ‘0’). In some aspects, units of measure associated with tardiness may be expressed in any temporal units (e.g., minutes, seconds, etc.). The system 200 may support penalizing workers for arriving early (e.g., with respect to a temporal threshold) for a time-slot or shift. The server 210 (e.g., ranking manager 266-a) may compute an aggregate measure over a time period for each worker. The aggregate measure may be, for example, an average tardiness or a median tardiness. See at least Kober at ¶ [0109], noting that the server 210 (e.g., ranking manager 266-a, scheduling manager 266-b, enterprise manager 266-c) may update one or more data models 267 based on learned information included in the training data 268. See Kober at ¶ [0149] & ¶ [0275] & ¶ [0484-0487], each of which notes “threshold buckets”. See Kober at ¶ [0125-0126] noting “number of transactions completed” and “efficiency associated with transactions associated with the transactional information”.);
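For illustration only, one possible reading of claim element (iv) as mapped above can be sketched as follows. All function names, weight values, and bucket boundaries below are hypothetical assumptions for explanatory purposes; they are not drawn from Kober or from the claims.

```python
# Hypothetical sketch of claim element (iv). Bucket boundaries, weight
# values, and model names are illustrative assumptions only.

def select_ensemble(transaction_volume):
    """Pick a (hypothetical) ensemble of models by volume threshold bucket."""
    if transaction_volume < 100:        # low-volume bucket
        return ["model_a", "model_b"]
    elif transaction_volume < 1000:     # mid-volume bucket
        return ["model_c", "model_d", "model_e"]
    return ["model_f", "model_g"]       # high-volume bucket

def beta_update(prior_weight, model_outputs, learning_rate=0.1):
    """Aggregate multiple model outputs into an updated time weight."""
    aggregate = sum(model_outputs) / len(model_outputs)
    return (1 - learning_rate) * prior_weight + learning_rate * aggregate

def compound_index(base_score, recent_score, transaction_volume, prior_weight=0.5):
    ensemble = select_ensemble(transaction_volume)
    # Stand-in outputs; a real system would run the ensemble models here.
    model_outputs = [0.6 for _ in ensemble]
    time_weight = beta_update(prior_weight, model_outputs)
    # The time weight shifts emphasis toward recent performance, reducing
    # the bias from years of experience embedded in the base score.
    return (1 - time_weight) * base_score + time_weight * recent_score
```

This sketch merely illustrates the claimed sequence (threshold-bucket ensemble selection, beta-style weight aggregation, time-weighted combination of base and recent scores) in concrete form.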
- generate an index of fit score based on the respective employee of the one or more employees availability and workload details (see at least Kober: ¶ [0044] & ¶ [0362] & ¶ [0386-0388]. Kober teaches that the labor sharing platform 1000 may support placement of the worker with another member based on best fit (e.g., based on worker preferences, worker qualifications, member preferences, scheduling parameters, etc.). See at least Kober at ¶ [0044]: Numerous constraints specific to an individual business must be adhered to when creating a schedule that accommodates for worker availability, worker ability, and open time slots to be filled. For example, some workers may wish to maximize their respective hours, but the same workers may have availability preferences (e.g., desired working times, desired positions or tasks, etc.), task preferences, and/or skillsets which may not align with time-slots to be filled. See at least Kober at ¶ [0362]: The labor sharing platform 1000 may support the identification and reallocation of workers' excess capacity (e.g., scheduling availability) to members of the consortium 1005. See at least Kober at ¶ [0386]: Kober teaches that the server 110 (e.g., enterprise manager 113) may identify a work task associated with a first member of the consortium 1005. The server 110 may select, from among a set of workers associated with a different member (e.g., the second member) of the consortium 1005, one or more workers that may be compatible with the work task. For example, the server 110 may identify and select a worker that is compatible with the work task based on parameters associated with the work task (e.g., task type, scheduling information associated with the work task, compensation, location) and/or worker data associated with the worker (e.g., skill set information, scheduling information, preference information associated with work tasks).);
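As a purely illustrative sketch, an "index of fit" combining a compound performance index with availability and workload, as the claim recites, might look like the following. The availability test and workload weighting are hypothetical assumptions, not taken from any cited reference.

```python
# Hypothetical sketch of an index-of-fit computation. The availability
# check and workload discount are illustrative assumptions only.

def index_of_fit(compound_index, available_hours, expected_job_hours, open_tasks):
    """Combine performance, availability, and workload into one fit score."""
    if available_hours < expected_job_hours:
        return 0.0                             # cannot fit the job at all
    workload_factor = 1.0 / (1 + open_tasks)   # busier workers score lower
    return compound_index * workload_factor
```

The sketch shows only the claimed combination of inputs (performance score, availability relative to expected job time, and workload) into a single fit score.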
- (vi) allocate highest ranked resource to a corresponding employee of the one or more employees with maximum index of fit score (see at least Kober: ¶ [0052] & ¶ [0189] & ¶ [0274-0276] & ¶ [0388]. Kober teaches that a server (e.g., scheduling manager) may offer a shift to a first worker having the highest composite worker ranking (e.g., most productive workers). If the first worker rejects the shift (or the first worker does not accept the shift within a minimum decision time), the server may offer the same shift to a second worker having the second highest composite worker ranking. See also Kober at ¶ [0274-0276]: At 705, the server 110 (e.g., scheduling manager 112) may select a ‘worker A’ having a highest composite worker ranking (e.g., based on composite evaluation data described herein). For example, ‘worker A’ may have the highest productivity among workers included in the composite evaluation data. For example, at 705, the server 110 may select a ‘worker B’ having the next highest composite worker ranking (e.g., the next highest productivity). At 710, the server 110 may offer ‘worker B’ a proposed work shift (e.g., ‘proposed work shift 2’). See also Kober at ¶ [0189]: Part I may include accommodating worker temporal preferences (e.g., time-slot preferences) when allocating time-slots to workers. In an example, the time-slot preferences may be indicated as preference scores in a PCM (e.g., a worker pairwise matrix 310 described with reference to FIG. 3). The labor sharing platform 1000 may support placement of the worker with another member based on best fit (e.g., based on worker preferences, worker qualifications, member preferences, scheduling parameters, etc.).).
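The iterative offer process Kober describes at ¶ [0274-0276] (offer the shift to the highest-ranked worker, then fall back to the next-highest on rejection) can be sketched as follows. The `accepts` callback is a hypothetical stand-in for the worker's accept/reject response.

```python
# Illustrative sketch of iterative shift offering in descending
# composite-ranking order, per Kober ¶ [0274-0276]. The accepts()
# callback is a hypothetical stand-in for worker responses.

def allocate_shift(shift, workers, accepts):
    """workers: list of (name, composite_ranking); accepts: name -> bool."""
    for name, _rank in sorted(workers, key=lambda w: w[1], reverse=True):
        if accepts(name):
            return name    # highest-ranked worker who accepts the shift
    return None            # every worker declined; shift remains unfilled
```

For example, with workers A (ranking 0.9) and B (ranking 0.8), the shift is offered to A first; if A declines, it goes to B.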
However, Kober, et al., specifically does not teach or suggest the sequence of operations comprising:
“(iv) generate a compound performance index score based on computing the base score and the recent performance index score of the one or more employees with time weight, wherein the time weight adjusts the base score and the recent performance index score to predict the compound performance index score over time or transaction count to avoid biasness of years of experience, wherein the weights are determined through a beta update process that aggregates multiple machine learning instructions, and wherein depending upon a volume of transactions different threshold will be activated and each threshold bucket contains an ensemble of machine learning instructions”;
“generating an index of fit score based on the respective employee of the one or more employes availability and workload details that combines the compound performance index score with employee availability calculated based on expected time of a certain job and workload details”.
Regarding the Bhattacharyya reference, Bhattacharyya teaches or suggests the sequence of operations comprising the following:
Bhattacharyya at ¶ [0219] notes that for the “Procurement” domain and the question “To what extent is data analytics use . . . ,” the cost weight is 40%, the quality weight is 20%, and the time weight is 40%. The performance driver weights are determined based on a ratio between the numbers of occurrences of the performance drivers. For example, the ratio between the numbers of occurrences of cost:quality:time is 1:0:1. Based on the ratio, a higher weight (e.g., percentage) is determined for cost and time, and a lower weight (e.g., percentage) is determined for quality. As another example, if the ratio between the numbers of occurrences of cost:quality:time is 1:1:1 or 0:0:0, the cost, quality, and time weights are equal at 33.33%. Table 11 illustrates a matrix including exemplary performance driver weights for different occurrences of the performance drivers. Bhattacharyya at ¶ [0220]: “The performance drivers A, B, and C are cost, quality, and time respectively. As an example, for the “Compliance” domain and the question “Does the management of pharmac . . . ,” the ratio of the numbers of occurrences of cost:quality:time is 0:0:1 (e.g., a ratio of A:B:C). From the exemplary performance driver weight matrix, a cost weight would be 25%, a quality weight would be 25%, and a time weight would be 50%.” Bhattacharyya at ¶ [0412]: Determining the factor weights may include determining at least one factor based on the performance variables. Determining the factor weights may further include determining eigenvalues corresponding to each of the factors, selecting a first set of the factors based on a factor threshold, applying a factor rotation to the first set of factors, determining at least one variance associated with the first set of factors, and determining a first set of the factor weights based on the factor rotation and the at least one variance.
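The occurrence-ratio-to-weight examples quoted from Bhattacharyya ¶ [0219-0220] can be captured in a small lookup for illustration. Only the combinations actually quoted in the text are included; the full Table 11 matrix is not reproduced, and the dictionary structure is an illustrative choice.

```python
# Lookup of the example performance-driver weights quoted from
# Bhattacharyya ¶ [0219-0220]. Keys are (cost, quality, time) occurrence
# ratios; values are (cost%, quality%, time%) weights. Only the quoted
# combinations are included here.

WEIGHTS = {
    (1, 0, 1): (40.0, 20.0, 40.0),       # Procurement example
    (0, 0, 1): (25.0, 25.0, 50.0),       # Compliance example
    (1, 1, 1): (100/3, 100/3, 100/3),    # equal occurrences
    (0, 0, 0): (100/3, 100/3, 100/3),    # no occurrences
}

def driver_weights(cost, quality, time):
    """Return the (cost, quality, time) weight percentages for a ratio."""
    return WEIGHTS[(cost, quality, time)]
```

Note that in each quoted example the three weights sum to 100%, and drivers with more occurrences receive the larger share.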
However, Bhattacharyya, et al., specifically does not teach or suggest the sequence of operations comprising:
“(iv) generate a compound performance index score based on computing the base score and the recent performance index score of the one or more employees with time weight, wherein the time weight adjusts the base score and the recent performance index score to predict the compound performance index score over time or transaction count to avoid biasness of years of experience, wherein the weights are determined through a beta update process that aggregates multiple machine learning instructions, and wherein depending upon a volume of transactions different threshold will be activated and each threshold bucket contains an ensemble of machine learning instructions”;
“generating an index of fit score based on the respective employee of the one or more employes availability and workload details that combines the compound performance index score with employee availability calculated based on expected time of a certain job and workload details”.
Regarding the Bouhini reference, Bouhini teaches or suggests the sequence of operations comprising the following:
Bouhini at ¶ [0053]: An employee entity has a performance level defined by an achievement time of a task, a speed, and a concentration level when executing an activity and/or task. The artificial intelligence module may take actions according to the employee learned behavior. For example, the artificial intelligence module may give a nudge to an employee with a relatively low concentration level (e.g., as defined by input from monitoring devices) or a notification to an employee taking too much time on a task, and/or the like. Furthermore, the artificial intelligence module can adapt the training material regarding the performance level of the employee on one or more tasks (e.g., by subsequently providing easier or more complex training materials). Bouhini at ¶ [0084]: Process 800 may include determining, using the plurality of performance parameters as inputs to the machine learning model, that the level of expertise of the user does not satisfy an expertise threshold (block 830). For example, the artificial intelligence task management system (e.g., using computing resource 625, processor 720, and/or the like) may determine, using the plurality of performance parameters as inputs to the machine learning model, that the level of expertise of the user does not satisfy an expertise threshold.
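The expertise-threshold check Bouhini describes at ¶ [0084], combined with the training-material adaptation at ¶ [0053], can be sketched minimally as follows. The averaging "model" below is a hypothetical stand-in for Bouhini's machine learning model, and the threshold value is an assumption.

```python
# Minimal sketch of Bouhini's expertise-threshold check (¶ [0084]) and
# training adaptation (¶ [0053]). The averaging model and the 0.7
# threshold are hypothetical stand-ins.

def expertise_level(performance_parameters):
    """Stand-in for the ML model: score from performance parameters."""
    return sum(performance_parameters) / len(performance_parameters)

def next_training_material(performance_parameters, threshold=0.7):
    if expertise_level(performance_parameters) < threshold:
        return "easier"        # below threshold: provide simpler material
    return "more complex"      # at/above threshold: advance the material
```

The sketch shows only the two-step logic (threshold comparison, then material adaptation); it does not reproduce Bouhini's actual model.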
However, Bouhini, et al., specifically does not teach or suggest the sequence of operations comprising:
“(iv) generate a compound performance index score based on computing the base score and the recent performance index score of the one or more employees with time weight, wherein the time weight adjusts the base score and the recent performance index score to predict the compound performance index score over time or transaction count to avoid biasness of years of experience, wherein the weights are determined through a beta update process that aggregates multiple machine learning instructions, and wherein depending upon a volume of transactions different threshold will be activated and each threshold bucket contains an ensemble of machine learning instructions”;
“generating an index of fit score based on the respective employee of the one or more employes availability and workload details that combines the compound performance index score with employee availability calculated based on expected time of a certain job and workload details”.
Therefore, when taken as a whole, the claims are not rendered obvious, as the available prior art does not teach or suggest the noted features, nor does the available art suggest or otherwise render obvious further modification of the evidence at hand. Such modification would require substantial reconstruction relying solely on improper hindsight reasoning, and thus would not be obvious.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DERICK HOLZMACHER whose telephone number is (571) 270-7853. The examiner can normally be reached on Monday-Friday 9:00 AM – 6:30 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, Applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Brian Epstein can be reached on 571-270-5389. The fax phone number for the organization where this application or proceeding is assigned is 571-270-8853.
Information regarding the status of published applications may be obtained from Patent Center. Status information for unpublished applications is available through Patent Center for authorized users only. Should you have questions about access to Patent Center, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
/DERICK J HOLZMACHER/
Patent Examiner, Art Unit 3625

/BRIAN M EPSTEIN/
Supervisory Patent Examiner, Art Unit 3625