DETAILED ACTION
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
2. Claims 1-8, 10-17 and 19 are currently pending. Claims 1, 6-8, 10, 15-17 and 19 have been amended. Claims 9 and 18 have been canceled. Claims 1-8, 10-17 and 19 have been rejected.
Status of the Application
3. Claims 1-8, 10-17 and 19 are currently pending and have been examined in this application. This communication is the first action on the merits.
Response to Amendments
4. Applicant’s amendment filed on 01/20/2026 necessitated new grounds of rejection in this office action.
Continued Examination under 37 CFR 1.114
5. A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/20/2026 has been entered.
Response to Arguments
6. Applicant’s arguments, see pages 13-16 of the response filed on 01/20/2026, with respect to the 35 U.S.C. § 103 rejections of Claims 1-19 have been fully considered but are not persuasive. Moreover, Applicant’s arguments with respect to Claims 1-8, 10-17 and 19 are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Response to 35 U.S.C. § 101 Arguments
7. Applicant’s 35 U.S.C. § 101 arguments with respect to Claims 1-8, 10-17 and 19 (see Applicant Remarks, Pages 9-12, dated 01/20/2026) have been fully considered, but they are not persuasive. Examiner respectfully disagrees for the reasons set forth below.
Argument #1:
(A). Applicant argues that Claims 1-8, 10-17 and 19 recite additional elements that amount to significantly more than the recited judicial exceptions under revised step 2B of the 35 U.S.C. § 101 analysis (see Applicant Remarks, Pages 9-12, dated 01/20/2026). Examiner respectfully disagrees.
Specifically, Applicant argues that the present claims are patent eligible under Step 2B for Independent Claims 1, 10 and 19 because the additional elements recite improvements to the technical field of enterprise resource planning (ERP) application analytics, and because the claims recite specific limitations beyond what is well-understood, routine and conventional (WURC) in ERP systems, or unconventional steps that confine the claims to a particular useful application (see MPEP 2106.05 (I) (A) (v)) (see Applicant Remarks, Page 9, dated 01/20/2026). Examiner respectfully disagrees.
In response, Examiner refers Applicant to Examiner’s 35 U.S.C. § 101 analysis section (e.g., the Claim Rejections - 35 U.S.C. § 101 section shown below) for step 2B, particularly for Independent Claims 1, 10 and 19. The claims do not recite additional elements that amount to significantly more than the recited judicial exceptions, because they are merely directed to the particulars of the abstract idea and likewise do not add significantly more to the above-identified judicial exceptions. The limitations are of the types referenced in MPEP § 2106.05(I)(A) that are not enough to qualify as significantly more when recited in these claims with the abstract idea, which include: (1) adding the words “apply it” (or an equivalent) with the judicial exception, (2) mere instructions to implement an abstract idea on a computer and providing the results to the user on a computer, and (3) generally linking the use of the judicial exception to a particular technological environment or field of use.
Examiner notes that for Independent Claims 1, 10 and 19, regarding the step 2B analysis, the steps of “configuring a service”, “selecting templates”, “sending/receiving messages” and “obtaining data from a database” are the most basic building blocks of modern computing. These elements represent insignificant extra-solution activities (see MPEP § 2106.05 (g)) that do not change the nature of the abstract idea (evaluating an entity). While a generative adversarial network (GAN) is a sophisticated tool, these claims recite it as a “black box” to perform the mathematical task of “applying weights”. The use of GANs and ML models for data weighting is considered a computer implementation of a statistical process. These claims do not recite an improvement to the GAN itself (e.g., a specific loss function or architectural change). Conventional Output: Populating a “user interface” and publishing to a “content library” are standard methods of data display and storage. They do not solve a technical problem in the computer’s operation but merely report the results of the administrative evaluation. Ordered Combination: The sequence – collecting data, processing it through an algorithm, and displaying the results – is a generic workflow for a data processing task. It lacks a synergistic effect that transforms the abstract idea into a technological invention.
Moreover, regarding Applicant’s assertion that the claims recite an “ERP analytics improvement”, Examiner points out the following reasons why this is not accurate. First Reason: The improvement is administrative, not technical. The argument that the claims improve “ERP analytics” confuses a business utility with a technological improvement. Under USPTO Guidance (and established case law like Alice and Electric Power Group), improving the accuracy or consistency of a report (e.g., an “evaluation of an entity”) is an administrative benefit. To be eligible, the claim must improve the performance of the ERP system – such as reducing its memory footprint, increasing its processing speed, or solving a specific data-synchronization bottleneck. Here the “improvement” is merely better data for human decision-makers, which is not a technical solution. Second Reason: Specificity of the GAN does not equal inventiveness. The assertion that a GAN is “unconventional” in ERP systems is insufficient. A claim that merely takes an existing high-level tool (a GAN) and applies it to a specific field (ERP) is a classic example of “applying it on a computer” in a particular technological environment. These claims do not provide a technical explanation of how the GAN is uniquely modified to handle ERP data differently than any other dataset. Without a specific technical implementation that deviates from standard GAN usage, the choice of the model remains a “mathematical concept” used as a tool. Third Reason: Failure to confine these claims technologically. The argument that the steps “confine the claims to a particular useful application” (ERP analytics) is a “field of use” limitation. The Supreme Court has explicitly stated that limiting an abstract idea to a specific technological field (like ERP systems) does not make it patent-eligible.
If the underlying processing (collecting scores and weighting them) can be done with pen and paper, adding the label “ERP” does not provide an inventive concept.
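For illustration only (using Examiner’s shorthand notation, not claim language), the claimed scoring operation reduces to a simple weighted sum that could be performed by hand:

    Total Score = w1·s1 + w2·s2 + … + wn·sn

where s1 … sk denote the questionnaire-derived first scores, sk+1 … sn denote the KPI-based second scores, and w1 … wn denote the weights applied by the claimed model. Nothing in this computation, as claimed, requires anything beyond generic computation.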
Because the claims do not solve a technical problem with a technical solution – but instead automate an administrative task using standard AI tools – they do not provide “significantly more” than the abstract idea. Claims 1-8, 10-17 and 19 are therefore patent ineligible under 35 U.S.C. § 101.
Argument #2:
(B). Applicant argues that for Independent Claims 1, 10 and 19 the claimed machine learning model, which “comprises a scoring engine that includes a generative adversarial network … trained using training data comprising a) questionnaires for evaluation of the one or more entities and b) responses to the questionnaires”, is recited at such a specific, granular level that it is distinguishable from mere “generic ML techniques”. In other words, the claimed ML-based approach for generating a total score for an entity that is used to populate a user interface is not well-understood, routine and conventional (WURC) within ERP systems (see Applicant Remarks, Page 12, dated 01/20/2026). Examiner respectfully disagrees.
In response to Applicant’s 35 U.S.C. § 101 arguments here, Examiner refers Applicant to the BSG Tech LLC v. BuySeasons, Inc. decision (Fed. Cir. Aug. 15, 2018), noting that: “But the relevant inquiry is not whether the claimed invention as a whole is unconventional or non-routine. At step two, we ‘search for an “inventive concept” … that is sufficient to ensure that the patent in practice amounts to significantly more than a patent upon the [ineligible concept] itself.’ Alice, 134 S. Ct. at 2355 (internal quotation marks omitted) (quoting Mayo, 566 U.S. at 72-73). But this simply restates what we have already determined is an abstract idea. At Alice step two, it is irrelevant whether considering historical usage information while inputting data may have been non-routine or unconventional as a factual matter. As a matter of law, narrowing or reformulating an abstract idea does not add ‘significantly more’ to it. See SAP Am., Inc. v. InvestPic, LLC, No. 2017-2081, slip op. at 14 (Fed. Cir. 2018).” Applicant’s suggestion that specific limitations (or the claimed invention as a whole) must be shown to be well-understood, routine, and conventional to support the conclusion of subject matter ineligibility is not persuasive. Examiner submits that evidence of novelty and non-obviousness (application of prior art) is not relevant to the question of determining whether the claims as constructed contain an inventive concept. Examiner cites Two-Way Media v. Comcast (Fed. Cir. 2017), in which the District Court concluded that “the proffered materials are irrelevant to the § 101 motion for judgment on the pleadings. None of the proffered materials addresses a § 101 challenge to claims of the asserted patents. The novelty and non-obviousness of the claims under §§ 102 and 103 does not bear on whether the claims are directed to patent-eligible subject matter under § 101. . . .
Because the proffered materials are irrelevant to the instant § 101 issue, I have not considered them.” On appeal, the Federal Circuit affirmed the District Court’s ruling that “eligibility and novelty are separate inquiries”.
Argument #3:
(C). Applicant argues for Independent Claims 1, 10 and 19 that the amended steps represent an application of ML-based techniques that is unconventional in the field of ERP systems, confining the claims to a particular useful application in which an ERP application can generate analytics for many different suppliers using a single consistent process, which saves memory, bandwidth, and processor resources, and, second, that the claimed approach is recited at such a specific, granular level that it is distinguishable from mere generic machine learning techniques (see Applicant Remarks, Page 12, dated 01/20/2026). Examiner respectfully disagrees.
Examiner responds by stating that the argument that using GANs is “unconventional in the field of ERP systems” is insufficient to establish eligibility. The Supreme Court (e.g., Alice Corp.) and subsequent USPTO Guidance state that limiting an abstract idea to a particular technological field (e.g., ERP analytics) does not make it patent-eligible. Merely taking a known mathematical tool (GAN weighting) and applying it to a specific dataset (supplier scores) is a “field of use” limitation, not a technical improvement to the ERP system’s underlying code or hardware. The assertion that a “single, consistent process” saves memory, bandwidth, and processor resources is a conclusory statement not supported by the claim language. No Technical Mechanism: Independent Claims 1, 10 and 19 do not recite how these resources are saved. For example, they do not describe a novel data compression algorithm or a specific memory management protocol. Any automated system is “faster” or “more consistent” than a human process. These are the inherent benefits of automation, not a “technical solution to a technical problem.” Speed or consistency gains derived from automating a business process do not constitute an inventive concept. The argument that the ML is recited at a “granular level” is misplaced. Functional vs. Structural: These claims describe the GAN functionally (what it does: “generate total score”, “apply weights”) rather than structurally (how it is built: specific layers, loss functions, or mathematical breakthroughs). Identifying specific training data (questionnaires and responses) is a data gathering step. Specifying the type of data an algorithm processes does not transform the algorithm into a technical invention.
Because the “granular” detail focuses on the mathematical weighting of scores for an administrative outcome, it remains directed to a judicial exception (Mathematical Concept / Mental Processes) without adding an unconventional technical step that changes the nature of these claims. These claims are patent ineligible because they use generic tools to perform a fundamental business task. The cited “efficiencies” are the result of basic automation rather than a technical breakthrough in ERP system architecture. Thus, Claims 1-8, 10-17 and 19 are ineligible with respect to the 35 U.S.C. § 101 analysis.
Claim Rejections - 35 USC § 101
8. 35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
9. Claims 1-8, 10-17 and 19 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1: Claims 1-8, 10-17 and 19 each fall within a statutory category, namely, a “system” or an “apparatus” (Claims 1-8), a “process” or a “method” (Claims 10-17), and a “non-transitory computer readable medium” or an “article of manufacture” (Claim 19).
Step 2A Prong One: Independent Claims 1, 10 and 19 recite limitations that set forth the abstract idea(s), namely (shown in bold except where struck through):
“creating, an evaluation service by at least configuring the evaluation service to evaluate one or more entities in an enterprise , the configuring comprising selecting a first template and a second template” (see Independent Claims 1, 10 and 19);
“in response to create of the evaluation service, the operations further comprise:” (see Independent Claims 1, 10 and 19);
“causing one or more messages to be sent to one or more evaluators” (see Independent Claims 1, 10 and 19);
“receiving one or more responses to the one or more messages” (see Independent Claims 1, 10 and 19);
“determining one or more first scores based on the one or more responses” (see Independent Claims 1, 10 and 19);
“obtaining one or more second scores , the one or more second scores comprising one or more quantitative key indicators associated with the one or more entities” (see Independent Claims 1, 10 and 19);
“inputting the one or more first scores and the one or more second scores to that is configured to output a total score for an entity of the one or more entities, trained to generate the total score for the entity by applying one or more weights to the one or more first scores and the one or more second scores, being trained using training data comprising a) questionnaires for evaluation of the one or more entities and b) responses to the questionnaires” (see Independent Claims 1, 10 and 19);
“in response to the total score for the entity being output populating with the one or more first scores, the one or more second scores, and the total score for the entity, generated at least in part based on the second template” (see Independent Claims 1, 10 and 19);
“publishing to a content accessible by users in the enterprise such that any of the users can access for consumption, wherein provides an evaluation of the entity of the one or more entities” (see Independent Claims 1, 10 and 19).
Here, for Independent Claims 1, 10 and 19, these steps recite an abstract idea directed to a methodical, automated evaluation and scoring of business entities (or ERP systems) by collecting, aggregating, and analyzing both subjective survey data (questionnaires) and objective quantitative key performance indicators (KPIs) to generate a consolidated score. The steps of “receiving responses”, “determining first scores” and “harmonizing” scores are essentially types of evaluations and judgments under “Mental Processes”. The steps of “causing one or more messages to be sent to one or more evaluators” and “publishing the populated first user interface … for consumption” are methods of managing interactions between people and managing personal behavior within an enterprise. These steps organize how evaluators provide data and how users consume the final product, and thus are categorized under “Certain Methods of Organizing Human Activities”. In sum, the data gathering and evaluation steps fall under “Mental Processes”, the calculation of weights and total scores falls under “Mathematical Concepts”, and the management of evaluator communications and enterprise data access falls under “Certain Methods of Organizing Human Activities”.
Therefore, these abstract idea limitations (as identified above in bold), under their broadest reasonable interpretation of the claims as a whole, cover performance of their limitations as “Mental Processes”, which pertains to (1) concepts performed in the human mind (including observations or evaluations or judgments) or (2) using pen and paper as a physical aid; the use of pen and paper to help perform these mental steps does not negate the mental nature of these limitations. The use of “physical aids” in implementing the abstract mental process does not preclude the claim from reciting an abstract idea. See MPEP § 2106.04(a) III C.
Additionally, or alternatively, these abstract idea limitations (as identified above in bold), under their broadest reasonable interpretation of the claims as a whole, cover performance of their limitations “Certain Methods of Organizing Human Activities” which pertains to (3) managing personal behavior or relationships or interactions between people (including teachings or following rules or instructions) and additionally or alternatively as “Mathematical Concepts” which pertains to (4) mathematical calculations.
That is, other than reciting (e.g., “at least one processor” & “at least one memory” & “a first user interface” & “content library” & “enterprise resource planning (ERP) system” & “a communication network” & “a database” & “program code” & “an evaluation harmonizer”), nothing in the claim elements precludes the steps from being performed as “Mental Processes” which pertains to (1) concepts performed in the human mind (including observations or evaluations or judgments) or (2) using pen and paper as a physical aid and additionally or alternatively as “Certain Methods of Organizing Human Activities” which pertains to (3) managing personal behavior or relationships or interactions between people (including teachings or following rules or instructions) and additionally or alternatively as “Mathematical Concepts” which pertains to (4) mathematical calculations.
Therefore, at step 2a prong 1, Yes, Claims 1-8, 10-17 and 19 recite an abstract idea. We proceed onto analyzing the claims at step 2a prong 2.
Step 2A Prong Two: With respect to Step 2A Prong Two of the eligibility inquiry (as explained in MPEP § 2106.04(d)), the judicial exception is not integrated into a practical application. Independent Claims 1, 10 and 19 recite additional elements directed to: (e.g., “at least one processor” & “a communication network” & “at least one memory” & “a database” & “program code”). These additional elements have been considered individually and in combination, but fail to integrate the abstract idea into a practical application because they amount to using generic computing elements or instructions (software) to perform the abstract idea, similar to adding the words “apply it” (or an equivalent), which merely serves to link the use of the judicial exception to a particular technological environment. See MPEP § 2106.05(f) and MPEP § 2106.05(h). The “evaluation harmonizer” is described in terms of a business or administrative problem (organizing and scoring evaluations) rather than a technical one.
Independent Claims 1, 10 and 19: With respect to reliance on (e.g., “evaluation harmonizer” & “generative adversarial network” & “machine learning (ML) model” & “content library” & “enterprise resource planning (ERP) system” & “a first user interface”) as additional elements shown in Independent Claims 1, 10 and 19, when considered individually and as an ordered combination (as a whole) in view of these claim limitations, these additional elements do not provide limitations that are indicative of integration into a practical application under step 2a prong 2 because each: (1) recites mere instructions to implement an abstract idea on a computer, or uses a computer as a tool to “apply” the recited judicial exceptions by providing the results to the user on a computer (see MPEP § 2106.05 (f)); or (2) limits the claims to a particular field of use or technological environment pertaining to creating an evaluation service to evaluate one or more entities (e.g., in this case suppliers) by selecting a first scorecard template and a second scorecard template and publishing the results on a user interface showing an aggregated score of evaluating suppliers using a computer in a business operations enterprise environment (see MPEP § 2106.05(h)). While Independent Claims 1, 10 and 19 use a generative adversarial network (GAN), the claims do not recite how the GAN architecture is modified or improved. They simply use the GAN to “apply weights” to scores, which is a use of an ML model for data processing.
In addition, these limitations fail to provide an improvement to the functioning of a computer or to any other technology or technical field, fail to apply the exception with a particular machine, fail to apply the judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition, fail to effect a transformation of a particular article to a different state or thing, and fail to apply/use the abstract idea in a meaningful way beyond generally linking the use of the judicial exception to a particular technological environment.
Accordingly, because the Step 2A Prong One and Prong Two analysis resulted in the conclusion that the claims are directed to an abstract idea, additional analysis under Step 2B of the eligibility inquiry must be conducted in order to determine whether any claim element or combination of elements amount to significantly more than the judicial exception. Therefore, at step 2a prong 2, Claims 1-8, 10-17 and 19 are directed to the abstract idea and do not recite additional elements that integrate into a practical application.
Step 2B: (As explained in MPEP § 2106.05), it has been determined that the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. Independent Claims 1, 10 and 19 recite additional elements directed to: (e.g., “at least one processor” & “a communication network” & “at least one memory” & “a database” & “program code”). These elements have been considered individually and in combination, but fail to add significantly more to the claims because they amount to using computing elements or instructions (software) to perform the abstract idea, similar to adding the words “apply it” (or an equivalent), which merely serves to link the use of the judicial exception to a particular technological environment (computing environment) and does not amount to significantly more than the abstract idea itself. See MPEP § 2106.05 (f) and MPEP § 2106.05 (h). Notably, Applicant’s Specification suggests that the claimed invention relies on nothing more than a general-purpose computer executing the instructions to implement the invention (e.g., see at least Applicant’s Specification ¶ [0111]: “These various aspects or features can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which can be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device.”).
Independent Claims 1, 10 and 19: With respect to reliance on (e.g., “evaluation harmonizer” & “generative adversarial network” & “machine learning (ML) model” & “content library” & “enterprise resource planning (ERP) system” & “a first user interface”) as additional elements shown in Independent Claims 1, 10 and 19, when considered individually and as an ordered combination (as a whole) in view of these claim limitations, these additional elements do not amount to significantly more than the judicial exceptions under step 2B because each: (1) recites mere instructions to implement an abstract idea on a computer, or uses a computer as a tool to “apply” the recited judicial exceptions by providing the results to the user on a computer (see MPEP § 2106.05 (f)); or (2) limits the claims to a particular field of use or technological environment pertaining to creating an evaluation service to evaluate one or more entities (e.g., in this case suppliers) by selecting a first scorecard template and a second scorecard template and publishing the results on a user interface showing an aggregated score of evaluating suppliers using a computer in a business operations enterprise environment (see MPEP § 2106.05(h)).
With respect to Independent Claims 1, 10 and 19, certain/particular limitations recite (1) “mere data gathering” (e.g., “receiving one or more responses to the one or more messages” (see Independent Claims 1, 10 and 19) & “obtaining one or more second scores from a database, the one or more second scores comprising one or more quantitative key indicators associated with the one or more entities” (see Independent Claims 1, 10 and 19)) and (2) “mere data outputting” (e.g., “causing one or more messages to be sent to one or more evaluators” (see Independent Claims 1, 10 and 19)), wherein each of these claim limitations reflects mere insignificant extra-solution activity (see MPEP § 2106.05 (g)). Furthermore, these certain/particular claim limitations as demonstrated above for Independent Claims 1, 10 and 19 reflect Well-Understood, Routine and Conventional (WURC) activities under MPEP § 2106.05(d)(II): see Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network).
The additional element of a “machine learning model” in the claims does not amount to significantly more than the judicial exceptions under step 2B due to being expressly recognized as Well-Understood, Routine and Conventional (WURC) in the art. See for example US PG Pub (US 2021/0019674 A1), hereinafter Crabtree et al. Crabtree at ¶ [0117]: “System 2000 may also take into consideration feedback from a plurality of feedback sources 2210a-n, which may include, expert judgement 2210b, generative adversarial networks (GAN's) 2210c.” See also Crabtree at ¶ [0118]: “From this analysis of business impact 2412, a network resilience rating is assigned 2405, representing a weighted and adjusted total of relative exposure the organization has to various types of risks, each of which may be assigned a sub-rating. The network resilience rating 2405 may be a single score for all factors, a combination of scores, or a score for a particular risk or area of concern.” See also Crabtree at ¶ [0127]: “The risk rating engine 3111 then sums all scores and produces a risk rating a profile 3140 to the client comprising the knowledge graph and numerical risk score.” See for example US PG Pub (US 2022/0366345 A1), hereinafter Jones et al. Jones at ¶ [0036]: “It is also contemplated that the analytics engine 126 may include one or more machine learning algorithms that may analyze questionnaire response to determine performance scores associated with an institution, such as a police agency. In the manner, the analytics engine 126 may leverage historical questionnaire response and performance scores to train the machine learning algorithms and better predict performance scores based on questionnaire responses.
The analytics engine 126 may also include one or more machine learning algorithms that analyze performance scores to determine correlations between performance scores and any data in the databases 104.” See for example US PG Pub (US 2023/0186219 A1) – “System and Method for Enterprise Change Management Evaluation”, hereinafter Savage et al. Savage at ¶ [0027]: “The present invention is directed to more than merely a computer implementation of a routine or conventional activity previously known in the industry as it provides a specific advancement in the area of electronic record availability, consistency, and analysis by providing improvements in the operation of a computer system that uses machine learning and a weighted average model to implement a change management evaluation.” In addition, when taken as an ordered combination, the ordered combination adds nothing that is not already present when the elements are taken individually. There is no indication that the combination of elements integrates the abstract idea into a practical application. Therefore, when viewed as a whole, these additional claim elements do not provide meaningful limitations to transform the abstract idea into a practical application of the abstract idea or that, as an ordered combination, amount to significantly more than the abstract idea itself.
Dependent Claims 2-8 and 11-17 recite substantially the same or similar additional elements as addressed above and, when considered individually and as an ordered combination (as a whole) with these limitations, recite the same abstract idea(s) as shown in Independent Claims 1, 10 and 19, along with further steps/details pertaining to “Mental Processes” such as (1) concepts performed in the human mind (including observations or evaluations or judgments) or (2) using pen and paper as a physical aid, and additionally or alternatively “Certain Methods of Organizing Human Activities”, which pertains to (3) managing personal behavior or relationships or interactions between people (including teachings or following rules or instructions), and additionally or alternatively “Mathematical Concepts”, which pertains to (4) mathematical calculations.
Dependent Claims 3, 5-6, 12 and 14-15 further narrow the abstract ideas, and are therefore still ineligible for the reasons previously provided in Steps 2A Prong 2 and 2B for Independent Claims 1, 10 and 19. Dependent Claims 2, 4, 7-8, 11, 13, and 16-17: With respect to reliance on the additional elements of (e.g., “a library” (see Dependent Claims 2 & 11) & “second user interface” (see Dependent Claims 4 & 13) & “generative adversarial network (GAN)” (see Dependent Claims 7 and 16) & “machine learning (ML) model” (see Dependent Claims 8 and 17)), these additional elements do not provide limitations that are indicative of integration into a practical application under step 2a prong 2 and also do not recite additional elements that amount to significantly more than the recited judicial exceptions under step 2B due to: (1) limiting a particular field of use or technological environment pertaining to the second scorecard template of the suppliers being selected from a library using a computer in a business operations enterprise environment (see MPEP § 2106.05(h)) or (2) alternatively recites mere instructions to implement an abstract idea on a computer or using a computer as a tool to “apply” the recited judicial exceptions by providing the results to the user on a computer (see MPEP § 2106.05 (f)).
The ordered combination of elements in the Dependent Claims (including the limitations inherited from the parent claim(s)) adds nothing that is not already present when the elements are taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Accordingly, the subject matter encompassed by the dependent claims fails to amount to a practical application or significantly more than the abstract idea itself. Therefore, under Step 2B, Claims 1-8, 10-17 and 19 do not include additional elements that are sufficient to amount to significantly more than the recited judicial exceptions. Thus, Claims 1-8, 10-17 and 19 are ineligible under the 35 U.S.C. § 101 analysis.
Claim Rejections - 35 USC § 103
10. This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
11. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
12. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
13. Claims 1-6, 10-15 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over US PG Pub (US 2015/0039359 A1) hereinafter Katakol et al., in view of US PG Pub (US 2013/0132233 A1) hereinafter Rothley et al., in view of US PG Pub (US 2021/0019674 A1) hereinafter Crabtree et al., in view of US PG Pub (US 2012/0209890 A1) hereinafter Nowacki et al., and in further view of US PG Pub (US 2007/0055564 A1) to Fourman.
Regarding Independent Claim 1, Katakol's system for an evaluation harmonizer teaches the following:
- at least one processor (see at least Katakol: ¶ [0149]. Katakol notes that “for example, it is well known that a “computer processor” is also called a “central processing unit (CPU).”)
- at least one memory including program code which when executed by the at least one processor causes operations (see at least Katakol: ¶ [0149]. Katakol notes that as another example, it is well known that the terms “flash RAM” and “flash memory” are used interchangeably.) comprising:
- creating, by an evaluation harmonizer (see at least Katakol: Fig. 3 & Fig. 6 & ¶ [0088-0090]. Katakol notes automated aggregation, cleansing, and normalizing of data using an Artificial Intelligence based system that learns. Real-time human feedback and ability to change classification.), an evaluation service for an enterprise resource planning (ERP) system (see at least Katakol: ¶ [0015] & ¶ [0070] & ¶ [0129]. Katakol notes that the platform further includes a unified supplier portal for sourcing and procurement tasks and is easy to integrate with ERP systems. Integrates directly with ERP system to ensure real time updates of system. The scoring component can be used to evaluate suppliers during the RFP (Request for Proposal) proposal or can also be used for the strategic evaluation of the existing suppliers for balanced score card purposes.) by at least configuring the evaluation service to evaluate one or more entities in an enterprise (see at least Katakol: See also FIGS. 8, 12, 12-a, and 12-b of Katakol noting the evaluation of one or more suppliers.) associated with the ERP system (see at least Katakol: ¶ [0015] & ¶ [0070] & ¶ [0129].), the configuring comprising selecting a first template and a second template (see at least Katakol: ¶ [0015] & ¶ [0070] & ¶ [0116]. Katakol notes category specific supplier performance and risk management templates. See also Katakol at ¶ [0070]: The scoring component can be used to evaluate suppliers during the RFP (Request for Proposal) proposal or can also be used for the strategic evaluation of the existing suppliers for balanced score card purposes. See also Katakol at ¶ [0116]: Katakol teaches providing templates and clause library for easy creation. See also FIGS. 8, 12, 12-a, and 12-b of Katakol noting the evaluation of one or more suppliers. Also Examiner notes guided sourcing events with category specific templates (FIG. 15 a, b, c and d).)
- in response to creation of the evaluation service, the operations further comprise (see at least Katakol: ¶ [0070] & Fig. 12, Fig. 12-a and Fig. 12-b. Katakol teaches that the scoring component can be used to evaluate suppliers during the RFP (Request for Proposal) proposal or can also be used for the strategic evaluation of the existing suppliers for balanced score card purposes.)
- causing one or more messages to be sent to one or more evaluators (see at least Katakol: ¶ [0050] & Fig. 12, Fig. 12-a and Fig. 12-b. Katakol notes the Survey Section (FIG. 1), which provides means to select pre-set questions for which answers may be required to be submitted by a supplier or vendor and which answers may be automatically scored and/or a rating provided, as well as means for returning a vendor list in accordance with characteristics selected by a user, and for employing the assigned rating to a supplier's response or responses. Also Examiner notes guided sourcing events with category specific templates (FIG. 15 a, b, c and d).)
- receiving one or more responses to the one or more messages (see at least Katakol: ¶ [0050] & Fig. 12, Fig. 12-a and Fig. 12-b. Katakol notes the scoring engine for creating ratings related to a supplier's response to a set of questions served by the Survey Section. Also Examiner notes guided sourcing events with category specific templates (FIG. 15 a, b, c and d).)
- determining one or more first scores based on the one or more responses (see at least Katakol: Fig. 12, 12-a, 12-b & ¶ [0050]. Katakol at Survey Section (FIG. 1) which provides means to select pre-set questions for which answers may be required to be submitted by a supplier or vendor and which answers may be automatically scored and/or a rating provided, or for employing the assigned rating to a supplier's response or responses; Auction Engine (FIG. 1) for setting up and conducting auctions; Scoring Engine for creating ratings related to a supplier's response to a set of questions served by the Survey Section. See also Katakol at ¶ [0070]: This concept is also extended to business components such as a scoring process (see FIG. 1 at “Scoring Engine”) related to suppliers or a response from a supplier is identical using scoring component. The scoring component can be used to evaluate suppliers during the RFP (Request for Proposal) proposal or can also be used for the strategic evaluation of the existing suppliers for balanced score card purposes.)
Moreover, Katakol's system for an evaluation harmonizer does not explicitly disclose, but Rothley does disclose the following:
- obtaining one or more second scores from a database (see at least Rothley: Figs. 3-4 & Fig. 5 & ¶ [0035]. Rothley notes scoring each of the one or more suppliers based on the customized green sourcing metrics and providing a ranked list of suppliers according to their respective individual and overall scores. Performing the sourcing analysis includes identifying one or more suppliers whose green scores fall under a pre-defined threshold limit.), the one or more second scores comprising one or more quantitative key indicators associated with the one or more entities (see at least Rothley: Figs. 3-4 & ¶ [0039] & ¶ [0047]. Rothley notes that the product data may be received from the supplier 245 through a questionnaire or be extracted from product factsheets and supplier factsheets stored in databases within the ERP system 200. See also Rothley at ¶ [0017]: The product data for a supplier can be automatically extracted from supplier factsheets, product factsheets, supplier master records, product category factsheets, product master records, supplier invoices, contracts, surveys, questionnaires, integrated ERP systems, web services, and external data feeds. See also Rothley at ¶ [0047]: The product data relating to the chemical compound supplied by supplier C is (if applicable) automatically extracted from a filled-in Questionnaire sent along with an RFx. In addition, other product or supplier related data such as the supplier's location, climatic conditions, governing standards and laws pertaining to supplier's location etc., are received through external data source systems in real-time.).
It would have been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the teachings of Katakol's system for an evaluation harmonizer with the aforementioned teachings of: obtaining one or more second scores from a database, the one or more second scores comprising one or more quantitative key indicators associated with the one or more entities, in view of Rothley, whereby the ERP system is enabled with automated pull mechanisms allowing real-time processing and execution of inspection data. As used herein, the term “real-time” refers to a time frame that is brief, appearing to be immediate or near concurrent (see at least Rothley: ¶ [0014].). Also, if the one or more constraints are fulfilled, the green sourcing metrics for evaluating the supplier are customized according to the one or more constraints that are fulfilled, i.e., the green sourcing metrics are assigned an adjusted score, automatically by the system, according to the one or more constraints that are fulfilled (see at least Rothley: ¶ [0030].). The base scores under each metric table are automatically populated by the system based on predefined scores stored in the system, and are then automatically adjusted based on pre-configured settings involving various constraints (see at least Rothley: ¶ [0043].).
Further, the claimed invention is merely a combination of old elements in a similar field for an evaluation harmonizer, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Rothley, the results of the combination were predictable.
Moreover, the Katakol / Rothley system for an evaluation harmonizer does not explicitly disclose, but Crabtree does disclose the following:
- inputting the one or more first scores and the one or more second scores to a machine learning model that is configured to output a total score for an entity of the one or more entities (see at least Crabtree: ¶ [0054] & ¶ [0118] & ¶ [0127]. Crabtree teaches that ML algorithms assist in determining the impact and severity of the risk by consulting actuarial tables and commercial-off-the-shelf (COTS) modeling tools, and together with the system's semantic computing, assign a summed total of the risk rating. The risk rating scale is customizable but, as an example, it may be configured where a negative numerical score means a higher risk, a risk rating of zero is neutral, and a positive numerical rating is of low risk or beneficial relationship to the user. See also Crabtree at ¶ [0118]: From this analysis of business impact 2412, a network resilience rating is assigned 2405, representing a weighted and adjusted total of relative exposure the organization has to various types of risks, each of which may be assigned a sub-rating. The network resilience rating 2405 may be a single score for all factors, a combination of scores, or a score for a particular risk or area of concern. The network resilience rating 2411 may then be adjusted or filtered depending on the context in which it is to be used 2409. See also Crabtree at ¶ [0127]: This information is used by the risk rating engine's 3111 semantic computing and machine learning algorithms to determine the risk impact likelihood to the entity. A machine learning algorithm identifies, categorizes, and scores each relation with a risk score. The risk rating engine 3111 then sums all scores and produces a risk rating profile 3140 to the client comprising the knowledge graph and numerical risk score.), wherein the machine learning model (see at least Crabtree: Fig. 19 & ¶ [0103]. Crabtree notes machine learning models 1901 shown at Fig. 19.) comprises a scoring engine (see at least Crabtree: ¶ [0104] & Fig. 24. 
Crabtree teaches that Fig. 24 denotes an architecture diagram for the scoring engine. The cybersecurity profile is sent to the scoring engine 1910 along with event and loss data 1914 and context data 1909 for the scoring engine 1910 to develop a score and/or rating for the organization that takes into consideration the cybersecurity profile, context, and other information.) that includes a generative adversarial network (see at least Crabtree: ¶ [0117] & ¶ [0123]. Crabtree teaches generative adversarial networks (GANs) in Fig. 22, denoted as 2210c, and Fig. 28, denoted as 2812.) trained to generate the total score for the entity (see at least Crabtree: Figs. 32-33 & ¶ [0118] & ¶ [0127].) by applying one or more weights (see at least Crabtree: ¶ [0122] & ¶ [0149] & Fig. 24. Crabtree notes that the edges may also be assigned numerical weights or probabilities, indicating, for example, the likelihood of a successful attack gaining access from one node to another. The next step in the process is to assign a risk category 3303. This is critical, as each category 3304 is weighted based on the impact the type of risk would have on the entity. See also Crabtree at ¶ [0093]: Operations may be assigned a score up to 400 points, along with up to 200 additional points for web/application recon results, 100 points for patch frequency, and 50 points each for additional endpoints and open-source intel results. This yields a weighted score incorporating all available information from all scanned sources, allowing a meaningful and readily-appreciable representation of an organization's overall cybersecurity strength. See also Fig. 32 of Crabtree.) to the one or more first scores and the one or more second scores (see at least Crabtree: ¶ [0054] & ¶ [0118] & ¶ [0127].), the generative adversarial network (see at least Crabtree: ¶ [0117] & ¶ [0123]. Crabtree teaches generative adversarial networks (GANs) in Fig. 22, denoted as 2210c, and Fig. 28, denoted as 2812.) 
being trained using training data (see at least Crabtree: ¶ [0074]. Crabtree notes that machine learning algorithms develop models of behavior or understanding based on information fed to them as training sets, and can modify those models based on new incoming information.) comprising a) questionnaires for evaluation of the one or more entities (see at least Crabtree: Figs. 32-33 & ¶ [0078] & ¶ [0103]. Crabtree teaches that the directed computational graph module 155 retrieves one or more streams of data from a plurality of sources, which includes, but is in no way limited to, a plurality of physical sensors, network service providers, web-based questionnaires and surveys, monitoring of electronic infrastructure, crowd sourcing campaigns, and human input device information. See also Crabtree at ¶ [0103]: The cyber-physical graph 1902 plus the analyses of data directed by the directed computational graph on the reconnaissance data received from the reconnaissance engine 1906 are combined to represent the cyber-security profile of the client organization whose network 1907 is being evaluated.) and b) responses to the questionnaires (see at least Crabtree: ¶ [0053] & ¶ [0078] & ¶ [0085]. Crabtree teaches that received scan responses may be collected and processed through a plurality of data pipelines 155 a to analyze the collected information. See also Crabtree at ¶ [0053]: A knowledge graph is generated which may be presented to the user for advanced insight and analysis into the risk factors and relationships associated with the queried entity, and is also used by the system to answer additional queries through various procedures. Crabtree teaches that the directed computational graph module 155 retrieves one or more streams of data from a plurality of sources, which includes, but is in no way limited to, a plurality of physical sensors, network service providers, web-based questionnaires and surveys.)
It would have been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the teachings of the Katakol / Rothley system for an evaluation harmonizer with the aforementioned teachings of: inputting the one or more first scores and the one or more second scores to a machine learning model that is configured to output a total score for an entity of the one or more entities, wherein the machine learning model comprises a scoring engine that includes a GAN trained to generate the total score for the entity by applying one or more weights to the one or more first scores and the one or more second scores, the generative adversarial network being trained using training data comprising a) questionnaires for evaluation of the one or more entities and b) responses to the questionnaires, in further view of Crabtree, whereby the automated planning service module supplements the analysis with situational information external to the already available data, and also runs powerful information theory based predictive statistics functions and machine learning algorithms to allow future trends and outcomes to be rapidly forecast based upon the current system derived results and each of a plurality of possible business decisions (see at least Crabtree: ¶ [0079].). Moreover, a machine learning algorithm identifies, categorizes, and scores each relation with a risk score; the risk rating engine then sums all scores and produces a risk rating profile to the client comprising the knowledge graph and numerical risk score (see at least Crabtree: ¶ [0127].).
Further, the claimed invention is merely a combination of old elements in a similar field for an evaluation harmonizer, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Crabtree, the results of the combination were predictable.
Moreover, the Katakol / Rothley / Crabtree system for an evaluation harmonizer does not explicitly disclose, but Nowacki does disclose the following:
- in response to the total score for the entity being output by the machine learning model (see at least Nowacki: Fig. 9 & ¶ [0079-0080] & ¶ [0139-0141]. Nowacki notes that when the process 900 begins (901), normalized values are obtained (902). The normalized values may be received from a receiver entity, or the receiver entity may provide one or more actual values and one or more expected values, and the one or more normalized values may be generated using the received values. The normalized value may include a combined risk score for all or some ingredients and attributes provided by a particular supplier, or the normalized value may be a risk score for a particular ingredient or attribute provided by the particular supplier. The risk score may be a second derivative risk score, reflecting whether the particular supplier is trending in a good or bad direction. See also Nowacki at ¶ [0079-0080]: “The risk score may also be calculated using an algorithm or a look-up table that accepts the raw or actual deviation amount as an input, and that maps the raw or actual deviation amounts to normalized values. To make the normalized values meaningful for use in a comparison, different algorithms may be used to normalize the values to fit within different ranges.”), populating a first user interface with the one or more first scores, the one or more second scores, and the total score for the entity (see at least Nowacki: Figs. 5-6 & Fig. 8 & Fig. 10.), the first user interface generated at least in part based on the second template (see at least Nowacki: Figs. 5-6 & Fig. 8 & Fig. 10 & ¶ [0130]. 
Nowacki notes that a user of the user device 804 enters information through a user interface 815, where the information identifies a particular ingredient, supplier entity, and/or attribute, and/or information associated with a credibility or reliability expectation, such as information indicating that the user wishes to determine the extent to which a particular supplier satisfies or does not satisfy credibility or reliability expectations. See also Nowacki at ¶ [0043-0045]: The supplier-specific templates stored on the specification compliance server 101 may specify that, for a particular supplier, a particular value for a particular attribute is shown in a particular region of a certificate of analysis that is provided by that supplier. The templates may also be updated on an other-than-periodic basis, such as after determining that certain attribute values that have been automatically read from a certificate of analysis using a template fall outside the usual, normal, possible, or acceptable range of values that are associated with the supplier, attribute, and/or the ingredient. See also Nowacki at Fig. 8 & Fig. 10.)
It would have been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the teachings of the Katakol / Rothley / Crabtree system for an evaluation harmonizer with the aforementioned teachings of: in response to the total score for the entity being output by the machine learning model, populating a first user interface with the one or more first scores, the one or more second scores, and the total score for the entity, the first user interface generated at least in part based on the second template, in further view of Nowacki, whereby the dashboards may be populated with data that the enterprise generates, and/or data that is received from other enterprises, for instance, from a global ingredient database. The information may include, for instance, trend reports that show the trend of risk scores for a supplier, attribute, or ingredient, over a user-selectable period of time, as observed by a particular plant within an enterprise. Risk scores may also be used to develop risk scorecard reports that rank different suppliers based on past shipments, and benchmarking reports 530 that compare the performance of suppliers to industry standards (see at least Nowacki: ¶ [0100].).
Further, the claimed invention is merely a combination of old elements in a similar field for an evaluation harmonizer, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Nowacki, the results of the combination were predictable.
Moreover, the Katakol / Rothley / Crabtree / Nowacki system for an evaluation harmonizer does not explicitly disclose, but Fourman does disclose the following:
- publishing the populated first user interface to a content library accessible by users (see at least Fourman: ¶ [0082] & ¶ [0218] & Fig. 10. Fourman notes that the KPIs, KSIs and KTIs may be saved in a library for re-use within template definitions and/or plan definitions. See also Fourman at ¶ [0218]: Risk Management indicators are saved in a library by different master users and combined together in a Template to support the Hierarchy of Intent shown in FIG. 3 and reflected in FIG. 2. The same approach has been used in the following example of developing a Service Improvement Plan and scorecard for a local government organization. See also Fourman at Fig. 10.) in the enterprise via a communication network (see at least Fourman: ¶ [0168] & Figs. 1A-1B.), such that any of the users can access the populated first user interface for consumption (see at least Fourman: ¶ [0233-0235] & ¶ [0250]. Fourman notes that creators of expertise for re-use 703 use the User Interface 704 to interact with the system. Communities of Practice 711 use the system, via the portal, as a means to collaborate. Users, or subscribers to the hub, then access benchmark organizations' information via the Hub, shown schematically with the linkages of three users to a central hub represented 1802. To request access to a further scorecard, the owner of Entity 1 has several options, including use of e-mail. While the owner of Entity 1 knew of the existence of Entity 2, they were unaware of the existence of Entity 3 until the system identified Entity 3 (using the Purposeful Clustering approach described below) as an appropriate organization for benchmarking and learning and displayed its name to the owner of Entity 1.), wherein the populated first user interface provides an evaluation of the entity of the one or more entities (see at least Fourman: (Claim 1 of Fourman) & ¶ [0171] & ¶ [0223-0225]. 
Fourman teaches that entities for which plans exist are labelled organization units or scorecards 1601. Additional frames within the portal 1602 and 1603 show associated knowledge related to the selected scorecard since no indicator is selected. Other approaches to feedback include Plan-Do-Check-Act, Plan-Do-Study-Act, Plan-Do-Review and Plan-Implement-Evaluate, all of the above being known in the field of management and particularly quality management and continuous improvement. See also Fourman at ¶ [0171]: The PC 102 supports a Graphical User Interface (GUI) capable of displaying a scorecard 210 (FIG. 2) that is a representation of an intention of an entity in a measurable form. See also Claim 1 of Fourman: “A graphical user interface arranged to display, when in use, a scorecard or other representation of information.” See also Fourman at Figs. 9-10.).
It would have been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the teachings of the Katakol / Rothley / Crabtree / Nowacki system for an evaluation harmonizer with the aforementioned teachings of: publishing the populated first user interface to a content library accessible by users in the enterprise via a communication network, such that any of the users can access the populated first user interface for consumption, wherein the populated first user interface provides an evaluation of the entity of the one or more entities, in further view of Fourman, whereby the scorecard includes a representation of a plurality of indicators associated with the entity, the processor being responsive to selection from the plurality of indicators of an indicator using the input device so as to provide access to a plurality of selectable discrete elements that constitute a basis upon which a state of the indicator is determined (see at least Fourman: ¶ [0007].). Scorecard templates may be defined by a named group of users. Scorecard templates may be used to complete plans by a further named group of users. Parts of a scorecard template may only be accessible to designated users (see at least Fourman: ¶ [0079-0080].). Moreover, the PC supports a Graphical User Interface (GUI) capable of displaying a scorecard 210 (FIG. 2) that is a representation of an intention of an entity in a measurable form (see at least Fourman: ¶ [0171].).
Further, the claimed invention is merely a combination of old elements in a similar field for an evaluation harmonizer, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Fourman, the results of the combination were predictable.
Regarding Independent Claims 10 and 19, Katakol's method / non-transitory computer-readable medium for an evaluation harmonizer teaches the following:
- creating, by an evaluation harmonizer (see at least Katakol: Fig. 3 & Fig. 6 & ¶ [0088-0090]. Katakol notes automated aggregation, cleansing, and normalizing of data using an Artificial Intelligence based system that learns. Real-time human feedback and ability to change classification.), an evaluation service for an enterprise resource planning (ERP) system (see at least Katakol: ¶ [0015] & ¶ [0070] & ¶ [0129]. Katakol notes that the platform further includes a unified supplier portal for sourcing and procurement tasks and is easy to integrate with ERP systems. Integrates directly with ERP system to ensure real time updates of system. The scoring component can be used to evaluate suppliers during the RFP (Request for Proposal) proposal or can also be used for the strategic evaluation of the existing suppliers for balanced score card purposes.) by at least configuring the evaluation service to evaluate one or more entities in an enterprise (see at least Katakol: See also FIGS. 8, 12, 12-a, and 12-b of Katakol noting the evaluation of one or more suppliers.) associated with the ERP system (see at least Katakol: ¶ [0015] & ¶ [0070] & ¶ [0129].), the configuring comprising selecting a first template and a second template (see at least Katakol: ¶ [0015] & ¶ [0070] & ¶ [0116]. Katakol notes category specific supplier performance and risk management templates. See also Katakol at ¶ [0070]: The scoring component can be used to evaluate suppliers during the RFP (Request for Proposal) proposal or can also be used for the strategic evaluation of the existing suppliers for balanced score card purposes. See also Katakol at ¶ [0116]: Katakol teaches providing templates and clause library for easy creation. See also FIGS. 8, 12, 12-a, and 12-b of Katakol noting the evaluation of one or more suppliers. Also Examiner notes guided sourcing events with category specific templates (FIG. 15 a, b, c and d).)
- in response to creation of the evaluation service, the operations further comprise (see at least Katakol: ¶ [0070] & Fig. 12, Fig. 12-a and Fig. 12-b. Katakol teaches that the scoring component can be used to evaluate suppliers during the RFP (Request for Proposal) proposal or can also be used for the strategic evaluation of the existing suppliers for balanced score card purposes.)
- causing one or more messages to be sent to one or more evaluators (see at least Katakol: ¶ [0050] & Fig. 12, Fig. 12-a and Fig. 12-b. Katakol notes the Survey Section (FIG. 1), which provides means to select pre-set questions for which answers may be required to be submitted by a supplier or vendor and which answers may be automatically scored and/or a rating provided, as well as means for returning a vendor list in accordance with characteristics selected by a user, and for employing the assigned rating to a supplier's response or responses. Also Examiner notes guided sourcing events with category specific templates (FIG. 15 a, b, c and d).)
- receiving one or more responses to the one or more messages (see at least Katakol: ¶ [0050] & Fig. 12, Fig. 12-a and Fig. 12-b. Katakol notes the Scoring Engine for creating ratings related to a supplier's response to a set of questions served by the Survey Section. The Examiner also notes guided sourcing events with category-specific templates (FIGS. 15-a through 15-d).)
- determining one or more first scores based on the one or more responses (see at least Katakol: Fig. 12, 12-a, 12-b & ¶ [0050]. Katakol notes the Survey Section (FIG. 1), which provides means to select pre-set questions for which answers may be required to be submitted by a supplier or vendor and which answers may be automatically scored and/or given a rating, or for employing the assigned rating to a supplier's response or responses; the Auction Engine (FIG. 1) for setting up and conducting auctions; and the Scoring Engine for creating ratings related to a supplier's response to a set of questions served by the Survey Section. See also Katakol at ¶ [0070]: this concept is also extended to business components, such that a scoring process (see FIG. 1 at "Scoring Engine") related to suppliers or to a response from a supplier is identical using the scoring component. The scoring component can be used to evaluate suppliers during the RFP (Request for Proposal) process or for the strategic evaluation of existing suppliers for balanced scorecard purposes.)
Moreover, the Katakol method / non-transitory computer-readable medium for an evaluation harmonizer does not explicitly disclose, but Rothley does disclose, the following:
- obtaining one or more second scores from a database (see at least Rothley: Figs. 3-4 & Fig. 5 & ¶ [0035]. Rothley notes scoring each of the one or more suppliers based on the customized green sourcing metrics and providing a ranked list of suppliers according to their respective individual and overall scores. Performing the sourcing analysis includes identifying one or more suppliers whose green scores fall under a pre-defined threshold limit.), the one or more second scores comprising one or more quantitative key indicators associated with the one or more entities (see at least Rothley: Figs. 3-4 & ¶ [0039] & ¶ [0047]. Rothley notes that the product data may be received from the supplier 245 through a questionnaire or be extracted from product factsheets and supplier factsheets stored in databases within the ERP system 200. See also Rothley at ¶ [0017]: the product data for a supplier can be automatically extracted from supplier factsheets, product factsheets, supplier master records, product category factsheets, product master records, supplier invoices, contracts, surveys, questionnaires, integrated ERP systems, web services, and external data feeds. See also Rothley at ¶ [0047]: the product data relating to the chemical compound supplied by supplier C is (if applicable) automatically extracted from a filled-in questionnaire sent along with an RFx. In addition, other product or supplier related data, such as the supplier's location, climatic conditions, and governing standards and laws pertaining to the supplier's location, are received through external data source systems in real time.).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the teachings of the Katakol method / non-transitory computer-readable medium for an evaluation harmonizer with the aforementioned teachings of Rothley: obtaining one or more second scores from a database, the one or more second scores comprising one or more quantitative key indicators associated with the one or more entities, whereby the ERP system is enabled with automated pull mechanisms allowing real-time processing and execution of inspection data. As used herein, the term “real-time” refers to a time frame that is brief, appearing to be immediate or near concurrent (see at least Rothley: ¶ [0014].). Also, if the one or more constraints are fulfilled, the green sourcing metrics for evaluating the supplier are customized according to the one or more constraints that are fulfilled, i.e., the green sourcing metrics are assigned an adjusted score, automatically by the system, according to the one or more constraints that are fulfilled (see at least Rothley: ¶ [0030].). The base scores under each metric table are automatically populated by the system based on predefined scores stored in the system, and are then automatically adapted based on pre-configured settings involving various constraints (see at least Rothley: ¶ [0043].).
Further, the claimed invention is merely a combination of old elements in a similar field for an evaluation harmonizer, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Rothley, the results of the combination were predictable.
Moreover, the Katakol / Rothley method / non-transitory computer-readable medium for an evaluation harmonizer does not explicitly disclose, but Crabtree does disclose, the following:
- inputting the one or more first scores and the one or more second scores to a machine learning model that is configured to output a total score for an entity of the one or more entities (see at least Crabtree: ¶ [0054] & ¶ [0118] & ¶ [0127]. Crabtree teaches that ML algorithms assist in determining the impact and severity of the risk by consulting actuarial tables and commercial-off-the-shelf (COTS) modeling tools and, together with the system's semantic computing, assign a summed total risk rating. The risk rating scale is customizable, but as an example it may be configured such that a negative numerical score means a higher risk, a risk rating of zero is neutral, and a positive numerical rating indicates a low-risk or beneficial relationship to the user. See also Crabtree at ¶ [0118]: from this analysis of business impact 2412, a network resilience rating is assigned 2405, representing a weighted and adjusted total of relative exposure the organization has to various types of risks, each of which may be assigned a sub-rating. The network resilience rating 2405 may be a single score for all factors, a combination of scores, or a score for a particular risk or area of concern. The network resilience rating 2411 may then be adjusted or filtered depending on the context in which it is to be used 2409. See also Crabtree at ¶ [0127]: this information is used by the risk rating engine's 3111 semantic computing and machine learning algorithms to determine the risk impact likelihood to the entity. A machine learning algorithm identifies, categorizes, and scores each relation with a risk score. The risk rating engine 3111 then sums all scores and produces a risk rating profile 3140 to the client comprising the knowledge graph and numerical risk score.), wherein the machine learning model (see at least Crabtree: Fig. 19 & ¶ [0103]. Crabtree notes machine learning models 1901 shown at Fig. 19.) comprises a scoring engine (see at least Crabtree: ¶ [0104] & Fig. 24. 
Crabtree teaches that Fig. 24 denotes an architecture diagram for the scoring engine. The cybersecurity profile is sent to the scoring engine 1910 along with event and loss data 1914 and context data 1909 for the scoring engine 1910 to develop a score and/or rating for the organization that takes into consideration the cybersecurity profile, context, and other information.) that includes a generative adversarial network (see at least Crabtree: ¶ [0117] & ¶ [0123]. Crabtree teaches generative adversarial networks (GANs) in Fig. 22, denoted as 2210c, and Fig. 28, denoted as 2812.) trained to generate the total score for the entity (see at least Crabtree: Figs. 32-33 & ¶ [0118] & ¶ [0127].) by applying one or more weights (see at least Crabtree: ¶ [0122] & ¶ [0149] & Fig. 24. Crabtree notes that the edges may also be assigned numerical weights or probabilities, indicating, for example, the likelihood of a successful attack gaining access from one node to another. The next step in the process is to assign a risk category 3303. This is critical, as each category 3304 is weighted based on the impact the type of risk would have on the entity. See also Crabtree at ¶ [0093]: operations may be assigned a score up to 400 points, along with up to 200 additional points for web/application recon results, 100 points for patch frequency, and 50 points each for additional endpoints and open-source intel results. This yields a weighted score incorporating all available information from all scanned sources, allowing a meaningful and readily-appreciable representation of an organization's overall cybersecurity strength. See also Fig. 32 of Crabtree.) to the one or more first scores and the one or more second scores (see at least Crabtree: ¶ [0054] & ¶ [0118] & ¶ [0127].), the generative adversarial network (see at least Crabtree: ¶ [0117] & ¶ [0123]. Crabtree teaches generative adversarial networks (GANs) in Fig. 22, denoted as 2210c, and Fig. 28, denoted as 2812.) 
being trained using training data (see at least Crabtree: ¶ [0074]. Crabtree notes that machine learning algorithms develop models of behavior or understanding based on information fed to them as training sets, and can modify those models based on new incoming information.) comprising a) questionnaires for evaluation of the one or more entities (see at least Crabtree: Figs. 32-33 & ¶ [0078] & ¶ [0103]. Crabtree teaches that the directed computational graph module 155 retrieves one or more streams of data from a plurality of sources, which includes, but is in no way limited to, a plurality of physical sensors, network service providers, web-based questionnaires and surveys, monitoring of electronic infrastructure, crowd sourcing campaigns, and human input device information. See also Crabtree at ¶ [0103]: the cyber-physical graph 1902, plus the analyses of data directed by the directed computational graph on the reconnaissance data received from the reconnaissance engine 1906, are combined to represent the cyber-security profile of the client organization whose network 1907 is being evaluated.) and b) responses to the questionnaires (see at least Crabtree: ¶ [0053] & ¶ [0078] & ¶ [0085]. Crabtree teaches that received scan responses may be collected and processed through a plurality of data pipelines 155 a to analyze the collected information. See also Crabtree at ¶ [0053]: a knowledge graph is generated which may be presented to the user for advanced insight and analysis into the risk factors and relationships associated with the queried entity, and which is also used by the system to answer additional queries through various procedures. Crabtree further teaches that the directed computational graph module 155 retrieves one or more streams of data from a plurality of sources, which includes, but is in no way limited to, a plurality of physical sensors, network service providers, and web-based questionnaires and surveys.)
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the teachings of the Katakol / Rothley method / non-transitory computer-readable medium for an evaluation harmonizer with the aforementioned teachings of Crabtree: inputting the one or more first scores and the one or more second scores to a machine learning model that is configured to output a total score for an entity of the one or more entities, wherein the machine learning model comprises a scoring engine that includes a GAN trained to generate the total score for the entity by applying one or more weights to the one or more first scores and the one or more second scores, the generative adversarial network being trained using training data comprising a) questionnaires for evaluation of the one or more entities and b) responses to the questionnaires, whereby analysis and situational information external to the already available data are provided to the automated planning service module, which also runs powerful information theory based predictive statistics functions and machine learning algorithms to allow future trends and outcomes to be rapidly forecast based upon the current system-derived results and the choosing of each of a plurality of possible business decisions (see at least Crabtree: ¶ [0079].). Moreover, a machine learning algorithm identifies, categorizes, and scores each relation with a risk score, and the risk rating engine then sums all scores and produces a risk rating profile to the client comprising the knowledge graph and numerical risk score (see at least Crabtree: ¶ [0127].).
Further, the claimed invention is merely a combination of old elements in a similar field for an evaluation harmonizer, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Crabtree, the results of the combination were predictable.
Moreover, the Katakol / Rothley / Crabtree method / non-transitory computer-readable medium for an evaluation harmonizer does not explicitly disclose, but Nowacki does disclose, the following:
- in response to the total score for the entity being output by the machine learning model (see at least Nowacki: Fig. 9 & ¶ [0079-0080] & ¶ [0139-0141]. Nowacki notes that when the process 900 begins (901), normalized values are obtained (902). The normalized values may be received from a receiver entity, or the receiver entity may provide one or more actual values and one or more expected values, and the one or more normalized values may be generated using the received values. The normalized value may include a combined risk score for all or some ingredients and attributes provided by a particular supplier, or the normalized value may be a risk score for a particular ingredient or attribute provided by the particular supplier. The risk score may be a second derivative risk score, reflecting whether the particular supplier is trending in a good or bad direction. See also Nowacki at ¶ [0079-0080]: “The risk score may also be calculated using an algorithm or a look-up table that accepts the raw or actual deviation amount as an input, and that maps the raw or actual deviation amounts to normalized values. To make the normalized values meaningful for use in a comparison, different algorithms may be used to normalize the values to fit within different ranges.”), populating a first user interface with the one or more first scores, the one or more second scores, and the total score for the entity (see at least Nowacki: Figs. 5-6 & Fig. 8 & Fig. 10.), the first user interface generated at least in part based on the second template (see at least Nowacki: Figs. 5-6 & Fig. 8 & Fig. 10 & ¶ [0130]. 
Nowacki notes that a user of the user device 804 enters information through a user interface 815, where the information identifies a particular ingredient, supplier entity, and/or attribute, and/or information associated with a credibility or reliability expectation, such as information indicating that the user wishes to determine the extent to which a particular supplier satisfies or does not satisfy credibility or reliability expectations. See also Nowacki at ¶ [0043-0045]: The supplier-specific templates stored on the specification compliance server 101 may specify that, for a particular supplier, a particular value for a particular attribute is shown in a particular region of a certificate of analysis that is provided by that supplier. The templates may also be updated on an other-than-periodic basis, such as after determining that certain attribute values that have been automatically read from a certificate of analysis using a template fall outside the usual, normal, possible, or acceptable range of values that are associated with the supplier, attribute, and/or the ingredient. See also Nowacki at Fig. 8 & Fig. 10.)
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the teachings of the Katakol / Rothley / Crabtree method / non-transitory computer-readable medium for an evaluation harmonizer with the aforementioned teachings of Nowacki: in response to the total score for the entity being output by the machine learning model, populating a first user interface with the one or more first scores, the one or more second scores, and the total score for the entity, the first user interface generated at least in part based on the second template, whereby the dashboards may be populated with data that the enterprise generates and/or data that is received from other enterprises, for instance from a global ingredient database. The information may include, for instance, trend reports that show the trend of risk scores for a supplier, attribute, or ingredient, over a user-selectable period of time, as observed by a particular plant within an enterprise. Risk scores may also be used to develop risk scorecard reports that rank different suppliers based on past shipments, and benchmarking reports 530 that compare the performance of suppliers to industry standards (see at least Nowacki: ¶ [0100].).
Further, the claimed invention is merely a combination of old elements in a similar field for an evaluation harmonizer, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Nowacki, the results of the combination were predictable.
Moreover, the Katakol / Rothley / Crabtree / Nowacki method / non-transitory computer-readable medium for an evaluation harmonizer does not explicitly disclose, but Fourman does disclose, the following:
- publishing the populated first user interface to a content library accessible by users (see at least Fourman: ¶ [0082] & ¶ [0218] & Fig. 10. Fourman notes that the KPIs, KSIs and KTIs may be saved in a library for re-use within template definitions and/or plan definitions. See also Fourman at ¶ [0218]: Risk Management indicators are saved in a library by different master users and combined together in a Template to support the Hierarchy of Intent shown in FIG. 3 and reflected in FIG. 2. The same approach has been used in the following example of developing a Service Improvement Plan and scorecard for a local government organization. See also Fourman at Fig. 10.) in the enterprise via a communication network (see at least Fourman: ¶ [0168] & Figs. 1A-1B.), such that any of the users can access the populated first user interface for consumption (see at least Fourman: ¶ [0233-0235] & ¶ [0250]. Fourman notes that creators of expertise for re-use 703 use the User Interface 704 to interact with the system. Communities of Practice 711 use the system, via the portal, as a means to collaborate. Users, or subscribers to the hub, then access benchmark organizations' information via the Hub, shown schematically with the linkages of three users to a central hub represented at 1802. To request access to a further scorecard, the owner of Entity 1 has several options, including use of e-mail. While the owner of Entity 1 knew of the existence of Entity 2, they were unaware of the existence of Entity 3 until the system identified Entity 3 (using the Purposeful Clustering approach described below) as an appropriate organization for benchmarking and learning and displayed its name to the owner of Entity 1.), wherein the populated first user interface provides an evaluation of the entity of the one or more entities (see at least Fourman: (Claim 1 of Fourman) & ¶ [0171] & ¶ [0223-0225]. 
Fourman teaches that entities for which plans exist are labelled organization units or scorecards 1601. Additional frames within the portal 1602 and 1603 show associated knowledge related to the selected scorecard since no indicator is selected. Other approaches to feedback include Plan-Do-Check-Act, Plan-Do-Study-Act, Plan-Do-Review and Plan-Implement-Evaluate, all of the above being known in the field of management and particularly quality management and continuous improvement. See also Fourman at ¶ [0171]: The PC 102 supports a Graphical User Interface (GUI) capable of displaying a scorecard 210 (FIG. 2) that is a representation of an intention of an entity in a measurable form. See also Claim 1 of Fourman: “A graphical user interface arranged to display, when in use, a scorecard or other representation of information.” See also Fourman at Figs. 9-10.).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the teachings of the Katakol / Rothley / Crabtree / Nowacki method / non-transitory computer-readable medium for an evaluation harmonizer with the aforementioned teachings of Fourman: publishing the populated first user interface to a content library accessible by users in the enterprise via a communication network, such that any of the users can access the populated first user interface for consumption, wherein the populated first user interface provides an evaluation of the entity of the one or more entities, whereby the scorecard includes a representation of a plurality of indicators associated with the entity, the processor being responsive to selection of an indicator from the plurality of indicators using the input device so as to provide access to a plurality of selectable discrete elements that constitute a basis upon which a state of the indicator is determined (see at least Fourman: ¶ [0007].). Scorecard templates may be defined by a named group of users and may be used to complete plans by a further named group of users; parts of a scorecard template may be accessible only to designated users (see at least Fourman: ¶ [0079-0080].). Moreover, the PC supports a Graphical User Interface (GUI) capable of displaying a scorecard 210 (FIG. 2) that is a representation of an intention of an entity in a measurable form (see at least Fourman: ¶ [0171].).
Further, the claimed invention is merely a combination of old elements in a similar field for an evaluation harmonizer, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Fourman, the results of the combination were predictable.
Regarding Dependent Claims 2 and 11, Katakol / Rothley / Crabtree / Nowacki / Fourman system / method for an evaluation harmonizer teaches the limitations of Independent Claims 1 and 10 above, and Katakol further teaches the system / method for an evaluation harmonizer comprising:
- wherein the second template comprises a scorecard template selected from a library (see at least Katakol: Fig. 12 & Fig. 12-a & ¶ [0116]. Katakol notes that FIG. 12 shows an example scorecard to be completed by a user of the platform, and FIG. 12-a shows a screenshot of the sourcing scorecard analysis data for multiple suppliers by item and team member. See also Katakol at ¶ [0070]: the scoring component can be used to evaluate suppliers during the RFP (Request for Proposal) process or for the strategic evaluation of existing suppliers for balanced scorecard purposes. See also Katakol at ¶ [0116]: Katakol teaches providing templates and a clause library for easy creation.).
Regarding Dependent Claims 3 and 12, Katakol / Rothley / Crabtree / Nowacki / Fourman system / method for an evaluation harmonizer teaches the limitations of Independent Claims 1 and 10 above, and Katakol further teaches the system / method for an evaluation harmonizer comprising:
- wherein the first template comprises a questionnaire template comprising one or more questions (see at least Katakol: Fig. 12 & Figs. 15A-15D & ¶ [0050]. Katakol notes the Survey Section (FIG. 1), which provides means to select pre-set questions for which answers may be required to be submitted by a supplier or vendor and which answers may be automatically scored and/or given a rating. FIG. 15-a through d are screenshots of the step-wise progression to create a sourcing event using category-specific templates already provided by the system. Katakol also notes the Scoring Engine for creating ratings related to a supplier's response to a set of questions served by the Survey Section, and the Organization and Account Structure (FIG. 1), Customizable Fields (FIG. 1), and/or an Exchange Rate Monitor (FIG. 1). See also Katakol at Fig. 12.), and wherein the second template is linked to one or more first templates stored at a questionnaire service (see at least Katakol: ¶ [0015] & ¶ [0050] & ¶ [0091]. Katakol notes supplier consolidation and other effective factors, and may include a “sourcing workbench” which includes the ability to house research, strategy, templates and models in a shared workspace, allowing the user to build a common knowledge and tools base for a given team. See also Katakol at ¶ [0034]: FIG. 15-a through d are screenshots of the step-wise progression to create a sourcing event using category-specific templates already provided by the system. See also Katakol at ¶ [0050]. See also Katakol at ¶ [0091]: single consolidated supplier spend view through parent-child linkage to every single variation (FIG. 13 or 13a).) and is further linked to the database (see at least Katakol: ¶ [0056]. Katakol notes the Data Access Application Block (FIG. 1): an enterprise application in the cloud deployed in a Platform as a Service model requires “data” to persist to multiple storage types. Examples include a relational database, big data, table storage, or others.) 
that stores the one or more quantitative key indicators associated with the one or more entities (see at least Katakol: FIGS. 19, 19-a and 19-b. Katakol provides graphic representations of a variety of metrics related to a specific supplier, including savings.), wherein the one or more entities comprise one or more suppliers (see at least Katakol: Fig. 12-a & Fig. 12-b.).
Regarding Dependent Claims 4 and 13, Katakol / Rothley / Crabtree / Nowacki / Fourman system / method for an evaluation harmonizer teaches the limitations of Independent Claims 1 and 10 above, and Rothley further teaches the system / method for an evaluation harmonizer comprising:
- wherein the creating further comprises selecting, via a second user interface (see at least Rothley: ¶ [0038] & (Dependent Claim 5 of Rothley). Rothley teaches that the user interface of the computer 230 is configured to display a dashboard showing results of the supplier evaluation generated in the backend system 210, as will be discussed in more detail later with reference to FIGS. 3 and 4. Customizing the green sourcing metrics comprises selecting a set of metrics of the green sourcing metrics over another set of metrics of the green sourcing metrics based on the at least one of the one or more constraints that is fulfilled.), a name of the evaluation service (see at least Rothley: ¶ [0030] & Figs. 3-4. Rothley teaches that the name of the evaluation service is “Evaluation for Sustainability Performance” for each supplier.), a description of the evaluation service (see at least Rothley: ¶ [0030] & Figs. 3-4. Rothley notes that the description of the evaluation service is waste disposal, packaging and transportation, in order to ascertain sustainability scores or green scores for each supplier.), a type of evaluation to be performed by the evaluation service (see at least Rothley: ¶ [0014] & ¶ [0040-0042] & Figs. 3-4. Rothley teaches that the one or more criteria, used to evaluate the sustainability score of a supplier, refer to material(s) used for manufacturing the product, design and functionality of the product, extraction and processing of materials, packaging and distribution, manufacturing processes, mode of transport, emission control measures, distance between a supplier site and a procurement site (of either purchaser or customer), energy management measures, waste disposal measures, environmental policies, and compliance certificates.), a frequency for performing the evaluation (see at least Rothley: ¶ [0036] & ¶ [0040] & ¶ [0046]. Rothley notes that the processor may perform an analysis of a supplier's green scores for a certain number of years. 
The supplier's green scores that are archived over a period of time may be analyzed to track the green trend of a particular supplier. The green supplier evaluation process may be performed as a part of supplier lifecycle management, sustainability studies, certification programs, compliance audits by an inspecting body, supplier development programs, and regularly scheduled intervals. The suppliers under the development plan may be re-evaluated after a certain period of time to measure their improved green scores.), the one or more entities (see at least Rothley: Figs. 3-4. Rothley notes that the one or more entities are the one or more suppliers factored into the evaluation service. In this case, the one or more entities/suppliers are AMS, Omega, Fargo and Bertek.), and the one or more evaluators (see at least Rothley: ¶ [0046] & Figs. 3-4. Rothley notes that the sourcing personnel may, based on the results displayed on the dashboard, opt to source goods from a supplier with a better green score than the supplier currently engaged with.).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the teachings of the Katakol / Rothley / Crabtree / Nowacki / Fourman system / method for an evaluation harmonizer with the aforementioned teachings of Rothley: wherein the creating further comprises selecting, via a second user interface, a name of the evaluation service, a description of the evaluation service, a type of evaluation to be performed by the evaluation service, a frequency for performing the evaluation, the one or more entities, and the one or more evaluators, whereby the ERP system is enabled with automated pull mechanisms allowing real-time processing and execution of inspection data. As used herein, the term “real-time” refers to a time frame that is brief, appearing to be immediate or near concurrent (see at least Rothley: ¶ [0014].). Also, if the one or more constraints are fulfilled, the green sourcing metrics for evaluating the supplier are customized according to the one or more constraints that are fulfilled, i.e., the green sourcing metrics are assigned an adjusted score, automatically by the system, according to the one or more constraints that are fulfilled (see at least Rothley: ¶ [0030].). The base scores under each metric table are automatically populated by the system based on predefined scores stored in the system, and are then automatically adapted based on pre-configured settings involving various constraints (see at least Rothley: ¶ [0043].).
Further, the claimed invention is merely a combination of old elements in a similar field for an evaluation harmonizer, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Rothley, the results of the combination were predictable.
Regarding Dependent Claims 5 and 14, Katakol / Rothley / Crabtree / Nowacki / Fourman system / method for an evaluation harmonizer teaches the limitations of Independent Claims 1 and 10 above, and Rothley further teaches the system / method for an evaluation harmonizer comprising:
- wherein the one or more messages comprise one or more questionnaires generated at least in part based on the first template selected during the creating of the evaluation service (see at least Rothley: ¶ [0017] & ¶ [0039-0041] & ¶ [0047]. Rothley notes that the product data for a supplier can be automatically extracted from supplier factsheets, product factsheets, supplier master records, product category factsheets, product master records, supplier invoices, contracts, surveys, questionnaires, integrated ERP systems, web services, and external data feeds. See also Rothley at ¶ [0039]: The product data may be received from external data sources such as external data feeds, web services, market research data, surveys and statistics. The product data may be received from the supplier 245 through a questionnaire or be extracted from product factsheets and supplier factsheets stored in databases within the ERP system 200. See also Rothley at ¶ [0041]: The data collection template may be auto-populated with information regarding the minimum number of metrics that need to be applied. See also Rothley at ¶ [0047]: The product data relating to the chemical compound supplied by supplier C is (if applicable) automatically extracted from a filled-in Questionnaire sent along with an RFx.).
It would have been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the teachings of the Katakol / Rothley / Crabtree / Nowacki / Fourman system / method for an evaluation harmonizer with the aforementioned teachings of: wherein the one or more messages comprise one or more questionnaires generated at least in part based on the first template selected during the creating of the evaluation service, and in further view of Rothley, whereby the ERP system is enabled with automated pull mechanisms allowing real-time processing and execution of inspection data. As used herein, the term “real-time” refers to a time frame that is brief, appearing to be immediate or near concurrent (see at least Rothley: ¶ [0014].). Also, if the one or more constraints are fulfilled, the green sourcing metrics for evaluating the supplier are customized according to the one or more constraints that are fulfilled, i.e., the green sourcing metrics are assigned an adjusted score, automatically by the system, according to the one or more constraints that are fulfilled (see at least Rothley: ¶ [0030].). The base scores under each metric table are automatically populated by the system based on predefined scores stored in the system. These scores are then automatically adapted from the base scores based on pre-configured settings involving various constraints (see at least Rothley: ¶ [0043].).
Further, the claimed invention is merely a combination of old elements in a similar field for an evaluation harmonizer, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Rothley, the results of the combination were predictable.
Regarding Dependent Claims 6 and 15, Katakol / Rothley / Crabtree / Nowacki / Fourman system / method for an evaluation harmonizer teaches the limitations of Independent Claims 1 and 10 above, and Nowacki further teaches the system / method for an evaluation harmonizer comprising:
- wherein the operations further comprise normalizing the one or more first scores into a predetermined range (see at least Nowacki: ¶ [0087] & ¶ [0139-0140] & Fig. 9. Nowacki teaches that the normalized values may be received from a receiver entity, or the receiver entity may provide one or more actual values and one or more expected values, and the one or more normalized values may be generated using the received values. The normalized values reflect an extent to which a supplier entity has satisfied an expectation of a receiver entity. See also Nowacki at ¶ [0087]: These ranges may include an ‘out of spec, high—reject’ range of values, an ‘out of spec, high—ask’ range of values, an ‘inspect, warning track, high—accept’ range of values, an ‘accept’ value or range of values, an ‘inspect, warning track, low—accept’ range of values, an ‘out of spec, low—ask’ range of values, and an ‘out of spec, low—reject’ range of values. See also Nowacki at ¶ [0095]: A value or a range of values for each of one or more attributes associated with the supplier may be automatically determined (e.g., a date of a last audit).), and normalizing the one or more second scores into the predetermined range (see at least Nowacki: ¶ [0087] & ¶ [0095] & ¶ [0117]. Nowacki teaches that the user 704 includes information 711 that identifies the user's selection, and a first representation 712 of the normalized values, i.e., a raw risk score, and a second representation 714 of the normalized values, i.e., an analysis of the risk score. See also Nowacki at ¶ [0087]: These ranges may include an ‘out of spec, high—reject’ range of values, an ‘out of spec, high—ask’ range of values, an ‘inspect, warning track, high—accept’ range of values, an ‘accept’ value or range of values, an ‘inspect, warning track, low—accept’ range of values, an ‘out of spec, low—ask’ range of values, and an ‘out of spec, low—reject’ range of values. 
See also Nowacki at ¶ [0095]: A value or a range of values for each of one or more attributes associated with the supplier may be automatically determined (e.g., a date of a last audit). See also Nowacki at Fig. 9 noting “obtaining normalized values at step 902.”), and combining the normalized one or more first scores (see at least Nowacki: ¶ [0039] & ¶ [0078] & ¶ [0141]. Nowacki notes that the duplicate form detection process may also aggregate or combine values that are supplied on different forms, e.g., by averaging values, ignoring values, filling in missing values on one form with supplied values on another form, selecting the values associated with the most recent date, etc. The normalized value may include a combined risk score for all or some ingredients and attributes provided by a particular supplier, or the normalized value may be a risk score for a particular ingredient or attribute provided by the particular supplier.)
It would have been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the teachings of the Katakol / Rothley / Crabtree / Nowacki / Fourman system / method for an evaluation harmonizer with the aforementioned teachings of: wherein the operations further comprise normalizing the one or more first scores into a predetermined range, and normalizing the one or more second scores into the predetermined range, and in further view of Nowacki, whereby the dashboards may be populated with data that the enterprise generates, and/or data that is received from other enterprises, for instance, from a global ingredient database. The information may include, for instance, trend reports that show the trend of risk scores for a supplier, attribute, or ingredient, over a user-selectable period of time, as observed by a particular plant within an enterprise. Risk scores may also be used to develop risk scorecard reports that rank different suppliers based on past shipments, and benchmarking reports 530 that compare the performance of suppliers to industry standards (see at least Nowacki: ¶ [0100].)
Further, the claimed invention is merely a combination of old elements in a similar field for an evaluation harmonizer, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Nowacki, the results of the combination were predictable.
Examining Claims with Respect to Prior Art
14. The proposed claim amendments to Dependent Claims 7 and 16 have overcome the prior art only. Dependent Claim 8 likewise overcomes the prior art by virtue of its dependency on Claims 1 and 7, and Dependent Claim 17 likewise overcomes the prior art by virtue of its dependency on Claims 10 and 16.
However, the following issues remain pending: (1) the 35 U.S.C. § 101 rejection for Claims 1-8, 10-17 and 19 and (2) the remaining 35 U.S.C. § 103 rejection for Claims 1-6, 10-15 and 19, each of which is maintained on the record. Please note that Claims 7-8 and 16-17 are not allowed, however, because they stand rejected under one or more of the 35 U.S.C. § 101 and 35 U.S.C. § 103 rejections set forth above.
Regarding Dependent Claims 7 and 16, none of the prior art of record teaches or renders obvious the sequence of limitations directed to:
- wherein the generative adversarial network is trained using training data comprising scores for a particular entity considered to be accurate given a set of input data and scores for the particular entity considered to be inaccurate given the set of input data
The closest prior art references are as follows:
#1) US PG Pub (US 2015/0039359 A1) hereinafter Katakol;
#2) US PG Pub (US 2013/0132233 A1) hereinafter Rothley;
#3) US PG Pub (US 2021/0019674 A1) hereinafter Crabtree.
Katakol method for an evaluation harmonizer teaches or renders obvious the sequence of limitations directed to:
- creating, by an evaluation harmonizer (see at least Katakol: Fig. 3 & Fig. 6 & ¶ [0088-0090]. Katakol notes automated aggregation, cleansing, and normalizing of data using an Artificial Intelligence based system that learns, with real-time human feedback and the ability to change classification.), an evaluation service for an enterprise resource planning (ERP) system (see at least Katakol: ¶ [0015] & ¶ [0070] & ¶ [0129]. Katakol notes that the platform further includes a unified supplier portal for sourcing and procurement tasks, is easy to integrate with ERP systems, and integrates directly with the ERP system to ensure real-time updates. The scoring component can be used to evaluate suppliers during the RFP (Request for Proposal) proposal or can also be used for the strategic evaluation of the existing suppliers for balanced score card purposes.) by at least configuring the evaluation service to evaluate one or more entities in an enterprise (see at least Katakol: FIGS. 8, 12, 12-a, and 12-b, noting the evaluation of one or more suppliers.) associated with the ERP system (see at least Katakol: ¶ [0015] & ¶ [0070] & ¶ [0129].), the configuring comprising selecting a first template and a second template (see at least Katakol: ¶ [0015] & ¶ [0070] & ¶ [0116]. Katakol notes category specific supplier performance and risk management templates. See also Katakol at ¶ [0070]: The scoring component can be used to evaluate suppliers during the RFP (Request for Proposal) proposal or can also be used for the strategic evaluation of the existing suppliers for balanced score card purposes. See also Katakol at ¶ [0116]: Katakol teaches providing templates and a clause library for easy creation. See also FIGS. 8, 12, 12-a, and 12-b of Katakol noting the evaluation of one or more suppliers. Also, Examiner notes guided sourcing events with category specific templates (FIG. 15 a, b, c and d).)
- in response to creation of the evaluation service, the operations further comprise (see at least Katakol: ¶ [0070] & Fig. 12, Fig. 12-a and Fig. 12-b. Katakol teaches that the scoring component can be used to evaluate suppliers during the RFP (Request for Proposal) proposal or can also be used for the strategic evaluation of the existing suppliers for balanced score card purposes.)
- causing one or more messages to be sent to one or more evaluators (see at least Katakol: ¶ [0050] & Fig. 12, Fig. 12-a and Fig. 12-b. Katakol notes the Survey Section (FIG. 1), which provides means to select pre-set questions for which answers may be required to be submitted by a supplier or vendor and which answers may be automatically scored and/or a rating provided. Katakol also notes returning a vendor list in accordance with characteristics selected by a user, or employing the assigned rating to a supplier's response or responses. Also, Examiner notes guided sourcing events with category specific templates (FIG. 15 a, b, c and d).)
- receiving one or more responses to the one or more messages (see at least Katakol: ¶ [0050] & Fig. 12, Fig. 12-a and Fig. 12-b. Katakol notes the Scoring Engine for creating ratings related to a supplier's response to a set of questions served by the Survey Section. Also, Examiner notes guided sourcing events with category specific templates (FIG. 15 a, b, c and d).)
- determining one or more first scores based on the one or more responses (see at least Katakol: Fig. 12, 12-a, 12-b & ¶ [0050]. Katakol notes the Survey Section (FIG. 1), which provides means to select pre-set questions for which answers may be required to be submitted by a supplier or vendor and which answers may be automatically scored and/or a rating provided, or for employing the assigned rating to a supplier's response or responses; the Auction Engine (FIG. 1) for setting up and conducting auctions; and the Scoring Engine for creating ratings related to a supplier's response to a set of questions served by the Survey Section. See also Katakol at ¶ [0070]: This concept is also extended to business components, such that a scoring process (see FIG. 1 at “Scoring Engine”) related to suppliers or to a response from a supplier is identical using the scoring component. The scoring component can be used to evaluate suppliers during the RFP (Request for Proposal) proposal or can also be used for the strategic evaluation of the existing suppliers for balanced score card purposes.)
Regarding the Katakol reference, this prior art of record does not teach or render obvious the sequence of limitations directed to:
- wherein the generative adversarial network is trained using training data comprising scores for a particular entity considered to be accurate given a set of input data and scores for the particular entity considered to be inaccurate given the set of input data.
Rothley method for an evaluation harmonizer teaches or renders obvious the sequence of limitations directed to:
- obtaining one or more second scores from a database (see at least Rothley: Figs. 3-4 & Fig. 5 & ¶ [0035]. Rothley notes scoring each of the one or more suppliers based on the customized green sourcing metrics and providing a ranked list of suppliers according to their respective individual and overall scores. Performing the sourcing analysis includes identifying one or more suppliers whose green scores fall under a pre-defined threshold limit.), the one or more second scores comprising one or more quantitative key indicators associated with the one or more entities (see at least Rothley: Figs. 3-4 & ¶ [0039] & ¶ [0047]. Rothley notes that the product data may be received from the supplier 245 through a questionnaire or be extracted from product factsheets and supplier factsheets stored in databases within the ERP system 200. See also Rothley at ¶ [0017]: The product data for a supplier can be automatically extracted from supplier factsheets, product factsheets, supplier master records, product category factsheets, product master records, supplier invoices, contracts, surveys, questionnaires, integrated ERP systems, web services, and external data feeds. See also Rothley at ¶ [0047]: The product data relating to the chemical compound supplied by supplier C is (if applicable) automatically extracted from a filled-in Questionnaire sent along with an RFx. In addition, other product or supplier related data such as the supplier's location, climatic conditions, governing standards and laws pertaining to the supplier's location, etc., are received through external data source systems in real-time.).
Regarding the Rothley reference, this prior art of record does not teach or render obvious the sequence of limitations directed to:
- wherein the generative adversarial network is trained using training data comprising scores for a particular entity considered to be accurate given a set of input data and scores for the particular entity considered to be inaccurate given the set of input data.
Crabtree method for an evaluation harmonizer teaches or renders obvious the sequence of limitations directed to:
- inputting the one or more first scores and the one or more second scores to a machine learning model that is configured to output a total score for an entity of the one or more entities (see at least Crabtree: ¶ [0054] & ¶ [0118] & ¶ [0127]. Crabtree teaches that ML algorithms assist in determining the impact and severity of the risk by consulting actuarial tables and commercial-off-the-shelf (COTS) modeling tools, and together with the system's semantic computing, assign a summed total of the risk rating. The risk rating scale is customizable but, as an example, it may be configured such that a negative numerical score means a higher risk, a risk rating of zero is neutral, and a positive numerical rating is of low risk or beneficial relationship to the user. See also Crabtree at ¶ [0118]: From this analysis of business impact 2412, a network resilience rating is assigned 2405, representing a weighted and adjusted total of relative exposure the organization has to various types of risks, each of which may be assigned a sub-rating. The network resilience rating 2405 may be a single score for all factors, a combination of scores, or a score for a particular risk or area of concern. The network resilience rating 2411 may then be adjusted or filtered depending on the context in which it is to be used 2409. See also Crabtree at ¶ [0127]: This information is used by the risk rating engine's 3111 semantic computing and machine learning algorithms to determine the risk impact likelihood to the entity. A machine learning algorithm identifies, categorizes, and scores each relation with a risk score. The risk rating engine 3111 then sums all scores and produces a risk rating profile 3140 to the client comprising the knowledge graph and numerical risk score.), wherein the machine learning model (see at least Crabtree: Fig. 19 & ¶ [0103]. Crabtree notes machine learning models 1901 shown at Fig. 19.) comprises a scoring engine (see at least Crabtree: ¶ [0104] & Fig. 24. 
Crabtree teaches that Fig. 24 denotes an architecture diagram for the scoring engine. The cybersecurity profile is sent to the scoring engine 1910 along with event and loss data 1914 and context data 1909 for the scoring engine 1910 to develop a score and/or rating for the organization that takes into consideration the cybersecurity profile, context, and other information.) that includes a generative adversarial network (see at least Crabtree: ¶ [0117] & ¶ [0123]. Crabtree teaches generative adversarial networks (GANs) in Fig. 22, denoted as 2210c, and Fig. 28, denoted as 2812.) trained to generate the total score for the entity (see at least Crabtree: Figs. 32-33 & ¶ [0118] & ¶ [0127].) by applying one or more weights (see at least Crabtree: ¶ [0122] & ¶ [0149] & Fig. 24. Crabtree notes that the edges may also be assigned numerical weights or probabilities, indicating, for example, the likelihood of a successful attack gaining access from one node to another. The next step in the process is to assign a risk category 3303. This is critical as each category 3304 is weighted based on the impact the type of risk would have on the entity. See also Crabtree at ¶ [0093]: Operations may be assigned a score up to 400 points, along with up to 200 additional points for web/application recon results, 100 points for patch frequency, and 50 points each for additional endpoints and open-source intel results. This yields a weighted score incorporating all available information from all scanned sources, allowing a meaningful and readily-appreciable representation of an organization's overall cybersecurity strength. See also Fig. 32 of Crabtree.) to the one or more first scores and the one or more second scores (see at least Crabtree: ¶ [0054] & ¶ [0118] & ¶ [0127].), the generative adversarial network (see at least Crabtree: ¶ [0117] & ¶ [0123]. Crabtree teaches generative adversarial networks (GANs) in Fig. 22, denoted as 2210c, and Fig. 28, denoted as 2812.) 
being trained using training data (see at least Crabtree: ¶ [0074]. Crabtree notes that machine learning algorithms develop models of behavior or understanding based on information fed to them as training sets, and can modify those models based on new incoming information.) comprising a) questionnaires for evaluation of the one or more entities (see at least Crabtree: Figs. 32-33 & ¶ [0078] & ¶ [0103]. Crabtree teaches that the directed computational graph module 155 retrieves one or more streams of data from a plurality of sources, which includes, but is in no way not limited to, a plurality of physical sensors, network service providers, web-based questionnaires and surveys, monitoring of electronic infrastructure, crowd sourcing campaigns, and human input device information. See also Crabtree at ¶ [0103]: The cyber-physical graph 1902 plus the analyses of data directed by the directed computational graph on the reconnaissance data received from the reconnaissance engine 1906 are combined to represent the cyber-security profile of the client organization whose network 1907 is being evaluated.) and b) responses to the questionnaires (see at least Crabtree: ¶ [0053] & ¶ [0078] & ¶ [0085]. Crabtree teaches that received scan responses may be collected and processed through a plurality of data pipelines 155 a to analyze the collected information. See also Crabtree at ¶ [0053]: A knowledge graph is generated which may be presented to the user for advanced insight and analysis into the risk factors and relationships associated with the queried entity, but also is used by the system to answer additional queries through various procedures. Crabtree teaches that the directed computational graph module 155 retrieves one or more streams of data from a plurality of sources, which includes, but is in no way not limited to, a plurality of physical sensors, network service providers, web-based questionnaires and surveys.)
Regarding the Crabtree reference, this prior art of record does not teach or render obvious the sequence of limitations directed to:
- wherein the generative adversarial network is trained using training data comprising scores for a particular entity considered to be accurate given a set of input data and scores for the particular entity considered to be inaccurate given the set of input data.
Therefore, when taken as a whole, the claims are not rendered obvious, as the available prior art does not suggest or otherwise render obvious the noted features, nor does the available art suggest or otherwise render obvious further modification of the evidence at hand. Such modification would require substantial reconstruction relying solely on improper hindsight bias, and thus would not be obvious.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DERICK HOLZMACHER whose telephone number is (571) 270-7853. The examiner can normally be reached on Monday-Friday 9:00 AM – 6:30 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, Applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Brian Epstein can be reached on 571-270-5389. The fax phone number for the organization where this application or proceeding is assigned is 571-270-8853.
Information regarding the status of an application may be obtained from Patent Center. Status information for unpublished applications is available through Patent Center for authorized users only. Should you have questions about access to Patent Center, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
/DERICK J HOLZMACHER/Patent Examiner, Art Unit 3625A
/BRIAN M EPSTEIN/Supervisory Patent Examiner, Art Unit 3625