Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Status of the Application
The following is a Final Office Action.
In response to Examiner's communication of 5/30/2025, Applicant responded on 10/30/2025, amending claims 1-9, 13, 14, 19, and 20 and cancelling claim 11.
The IDS filed on 10/30/2025 is acknowledged and has been considered by the Examiner.
Claims 1-10 and 12-20 are pending in this application and have been examined.
Response to Amendment
Applicant's amendments to claims 1-9, 13, 14, 19, and 20 are not sufficient to overcome the 35 USC § 101 rejections set forth in the previous action.
Applicant's amendments to claims 1-9, 13, 14, 19, and 20 are not sufficient to overcome the prior art rejections set forth in the previous action.
Response to Arguments – 35 USC § 101
Applicant’s arguments with respect to the rejections have been fully considered, but they are not persuasive.
Applicant submits, “…The Examiner's conclusion that the amended claims recite abstract ideas is fundamentally flawed and contradicts established USPTO guidance regarding the proper scope of abstract idea groupings. The amended claims do not recite any of the enumerated categories of abstract ideas and therefore do not require further eligibility analysis.…The claimed machine learning model must analyze historical patterns of compensation actions, detect anomalies or outliers relative to learned patterns, and flag anomalous compensation actions for review. These technological operations are far beyond the practical capabilities of human mental processing, even with pen and paper assistance...The Examiner's characterization of the amended claims as "mental processes" and "certain methods of organizing human activity" misapprehends the technological nature of the claimed invention. The claims are not directed to fundamental economic principles, commercial interactions, or personal behavior management-the enumerated categories within methods of organizing human activity. Instead, they recite specific technological operations for processing data through machine learning systems...The amended claims reflect substantial improvements to computer technology in compensation management systems through specific technological implementations. The claims now specifically recite "detecting anomalies or outliers relative to learned patterns of typical compensation amounts, distributions, or adjustments;" and "flagging anomalous compensation actions for revies via a graphical user interface." This represents a particular technological solution that goes beyond generic computer implementation...The amended claims provide a particular technological solution through machine learning model application to automatically detect anomalies or outliers and flag anomalous compensation actions. 
This specific implementation addresses the technological problem of efficiently identifying anomalies in employee compensation…The amended claims recite exactly this type of specific AI application machine learning model application to the technological field of organizational compensation management for identifying anomalies…The "flexible ontology comprising one or more customizable compensation components, relationships, and properties" represents sophisticated data structure design. The machine learning model application for anomaly detection provides algorithmic processing that cannot be performed through conventional database operations or manual organizational management techniques… in McRO demonstrates how specific technological implementations can establish practical application. As the court found, claims that describe "a specific way" to solve technological problems through "incorporation of the particular claimed rules" that "improved [the] existing technological process" are eligible. Similarly, the amended claims achieve technological improvement through incorporation of machine learning model prediction capabilities that detect anomalies and outliers and flag anomalous compensation actions for review…The amended claims, when analyzed as integrated technological systems, demonstrate that machine learning model application works in combination with a flexible ontology to detect anomalies and flag anomalous compensation actions…The claims reflect improvements beyond merely automating existing processes. The combination of machine learning-based anomaly detection with real-time updating of one or more interactive compensation representation views provides real-time interactive functionality that improves upon conventional compensation management approaches. 
This represents the type of technological advancement that the 2024 AI SME Update identifies as demonstrating practical application in AI-related inventions...This technological integration distinguishes the claims from mere automation of abstract ideas. As the Deputy Commissioner noted, examiners should consider whether claims "purport to improve computer capabilities or to improve an existing technology" rather than merely invoking "computers or other machinery merely as a tool to perform an existing process." The amended claims clearly fall into the former category through their specific machine learning implementations that enhance compensation management system capabilities. In conclusion, the amended claims integrate any recited judicial exception into practical applications through specific technological improvements in computer-based compensation management systems and technological solutions that improve both computer functionality and organizational management technology. These claims are not directed to any judicial exception and are eligible under Step 2A, Prong Two…The amended claims now specifically require "analyzing historical patterns of compensation actions using one or more machine learning algorithms," "detecting anomalies or outliers relative to learned patterns of typical compensation amounts, distributions, or adjustments," and "flagging anomalous compensation actions for revies via a graphical user interface." This machine learning implementation provides a concrete inventive concept that transforms any abstract concept into specific technological innovation…The 2024 AISME Update emphasizes that AI-related inventions can demonstrate patent eligibility through specific technological implementations that go beyond well-understood, routine, conventional activity. 
The detection of anomalies and flagging of anomalous compensation actions through machine learning algorithms represents precisely the type of technological advancement that provides an inventive concept under Step 2B analysis...The amended claims integrate machine learning model application with sophisticated ontological processing and real-time interactive compensation representation views represent comprehensive technological solution. This integration represents more than the sum of its parts and provides an inventive concept that transforms any abstract concepts into concrete technological implementation. The McRO court recognized that specific technological combinations can provide inventive concepts even when individual elements might be known. Here, the specific combination of machine learning-based anomaly detection, a flexible ontology, and real-time interactive compensation representation views creates a technological solution that provides significantly more than any alleged abstract idea… the amended independent claims 1, 19, and 20 provide significantly more than any alleged abstract idea. These elements, individually and in combination, provide an inventive concept sufficient to render the claims patent-eligible under Step 2B analysis…” The Examiner respectfully disagrees.
Although Applicant's amendments further prosecution, unlike McRO, the 2024 AI SME Update, and the 2025 SME Memo, by Applicant's own admission in Applicant's remarks, the claims are directed to, …compensation management… to automatically detect anomalies or outliers and flag anomalous compensation actions...problem of efficiently identifying anomalies in employee compensation…organizational compensation management for identifying anomalies…prediction capabilities that detect anomalies and outliers and flag anomalous compensation actions for review…to detect anomalies and flag anomalous compensation actions…, which is a problem directed to a mental process (i.e., a human observing and analyzing payments to humans and flagging anomalous payments for human review) and certain methods of organizing human activity (i.e., a human observing and analyzing payments to humans and flagging anomalous payments for human review, which are fundamental economic principles or practices, commercial or legal interactions, or managing personal behavior or relationships or interactions between people), as established in Step 2A Prong 1. This problem does not specifically arise in the realm of computer technology; rather, it existed and was addressed long before the advent of computers. Thus, the claims do not recite a technical improvement to a technical problem, nor are they necessarily rooted in computing technology. Pursuant to the broadest reasonable interpretation, as an ordered combination, each of the additional elements is a computing element recited at a high level of generality implementing the abstract idea, and thus amounts to no more than applying the abstract idea with generic computer components. Further, these additional elements generally link the abstract idea to a technological environment, namely the environment of a computer and machine learning, and perform extra-solution activities.
Therefore, as a whole, the additional elements do not integrate the abstract ideas into a practical application under Step 2A Prong 2 or amount to significantly more under Step 2B. Even novel and newly discovered judicial exceptions are still exceptions, despite their novelty. July 2015 Update, p. 3; see SAP America, Inc. v. InvestPic, LLC, No. 2017-2081, slip op. at 2 (Fed. Cir. May 15, 2018).
Simply reciting specific limitations that narrow the abstract idea does not make an abstract idea non-abstract. 79 Fed. Reg. 74631; buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355 (Fed. Cir. 2014); see SAP America at p. 12. As discussed in SAP America, no matter how much of an advance the claims recite, when "the advance lies entirely in the realm of abstract ideas, with no plausibly alleged innovation in the non-abstract application realm," "[a]n advance of that nature is ineligible for patenting." Id. at p. 3.
Response to Arguments – Prior Art
Applicant’s arguments with respect to the rejections have been fully considered, but they are not persuasive.
Applicant submits, “…The Examiner cited to Psenka as allegedly teaching all of the elements of each of independent claims 1, 19, and 20 and to Qamar as allegedly teaching the elements of claim 11, which have been incorporated into each of independent claims 1, 19, and 20. Applicant respectfully disagrees. Psenka fails to teach or suggest several key limitations of claims 1, 19, and 20. Specifically, Psenka does not disclose a "flexible ontology comprising customizable compensation components, relationships, and properties." While Psenka appears to discuss data mapping and operational datasets, it does not teach a flexible ontology structure specifically designed for compensation components with customizable relationships and properties. Psenka's system is focused on general data visualization and reporting, not compensation-specific ontological structures. Additionally, although Psenka appears to discuss data refreshing and updating, it does not teach truly real-time updates that occur automatically in response to changes in "compensation data, user context, or access permissions" without requiring manual regeneration. Instead, Psenka's updates require user-initiated refresh actions. Furthermore, Psenka is directed to general data visualization and analysis tools, not specifically to compensation management systems with the specialized requirements and workflows inherent in compensation data handling. The Examiner explicitly conceded that Psenka "does not expressly disclose" the following key limitations of claim 11 that have been incorporated into amended claims 1, 19, and 20:"analyzing historical patterns of compensation actions using one or more machine learning algorithms" and the machine learning-based approach to pattern analysis. For these admitted deficiencies, the Examiner relied solely on Qamar to supply the missing machine learning elements. Qamar fails to cure the deficiencies of Psenka for several reasons. 
First, Qamar is directed to employee retention prediction and assessment modification techniques, not compensation management systems. Qamar's machine learning models are specifically designed to "determine retention risk" and provide "retention suggestions," which is fundamentally different from analyzing compensation patterns for compensation management purposes. Second, Qamar's machine learning approach involves "regression models" built using "training subset of the organization data" to predict employee retention outcomes. This retention-focused machine learning framework cannot be directly applied to compensation pattern analysis, as the underlying data relationships, training objectives, and output requirements are entirely different. Third, Qamar's system is designed to identify "predictors for retention" and generate "retention suggestions" such as "one-time bonus, a pay increase, a promotion.". This retention-prediction focus is incompatible with the compensation pattern analysis required by claim 11, which involves detecting anomalies in compensation amounts, distributions, and adjustments. Finally, Qamar does not teach machine learning algorithms specifically adapted for compensation data analysis. The machine learning techniques disclosed in Qamar are tailored for retention prediction using performance metrics and retention probabilities, not for analyzing compensation patterns, amounts, or distributions. One skilled in the art would not have been motivated to combine Psenka and Qamar for several compelling reasons. Psenka addresses general data visualization and reporting, while Qamar focuses specifically on employee retention prediction. These represent different technical domains with different objectives, data requirements, and user needs. A person skilled in the art working on compensation visualization systems would not naturally look to retention prediction systems for guidance. 
Moreover, Psenka's architecture is built around "databooks" and interactive data visualization interfaces, while Qamar's architecture centers on predictive modeling for retention analysis using "Kaplan-Meier estimator curves" and "clustering analysis." These architectural differences would make integration technically challenging and unnatural. Additionally, Psenka is designed for interactive data exploration and visualization by business users, while Qamar is designed for automated retention risk assessment and prediction. The user workflows, interface requirements, and interaction patterns are fundamentally different. The combination would not provide any technical advantages or solve any recognized problems in the field. Psenka's data visualization capabilities and Qamar's retention prediction algorithms serve entirely different purposes and would not enhance each other's functionality....Finally, Qamar actually teaches away from the claims by focusing exclusively on retention prediction rather than compensation analysis. Qamar's disclosure emphasizes retention-specific metrics and outcomes, which would discourage a skilled artisan from applying its teachings to compensation management systems. The Examiner's combination appears to be an improper hindsight reconstruction that artificially combines unrelated technologies without proper motivation or technical rationale. Therefore, because no combination of Psenka and Qamar teaches or suggests all of the elements of each of independent claims 1, 19, and 20, one skilled in the art would not have been motivated to combine the references in the manner suggested by the Examiner, and Qamar teaches away from the claims at issue, each of independent claims 1, 19, and 20, and their respective claims are patentable over the cited references…” Examiner respectfully disagrees.
Respectfully, Applicant's argument requires that each of the features of the secondary references be bodily incorporated into the primary reference and that each and every element be individually taught by a single reference. However, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981). The test for obviousness is not whether the features of a secondary reference may be bodily incorporated into the structure of the primary reference; nor is it that the claimed invention must be expressly suggested in any one single reference or in all of the references combined. See id. Rather, the test is what the combined teachings of the references would have suggested to those of ordinary skill in the art. See id.; In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).
Under the broadest reasonable interpretation, Psenka teaches: A system comprising: one or more computer processors; one or more computer memories; a set of instructions incorporated into the one or more computer memories, the set of instructions configuring the one or more computer processors to perform operations, the operations comprising: ([0159])
generating one or more interactive compensation representation views based on an evaluation of mapping rules against a flexible ontology and current data state, the flexible ontology comprising one or more customizable compensation components, relationships, and properties; and (in at least [0057] User interface 12 is used to provide queries to, and render results based upon, data from an operational data set 16. Operational data set 16 can comprise, for example, a relational or other databases, and includes data ultimately maintained in one or more of a plurality of data sources 20. In some embodiments, operational data set 16 may be maintained in a suitable computer-readable medium and can comprise data imported from one or more of sources 20. However, operational data set 16 may comprise a mapping or other metadata providing for direct access to other sources in some embodiments. In FIG. 1, data sources 20 are indicated as 20-1, 20-2, 20-N to indicate that the number of sources may vary from one on upward. Source 20-3 is shown to indicate that, in some embodiments, solution 10 can integrate locally-stored data (e.g. data available at a user's terminal). [0129] FIG. 10D indicates how columns from a particular data source are to be mapped to columns in the operational dataset. In this example, column names from the data source are correlated to names in the operational dataset and types are indicated. Additionally, one or more columns can be indicated as “keys.” One or more components, processes, scripts, or the like can copy information returned from the data source into the operational dataset. Depending upon the configuration parameters, recently-accessed data may overwrite previously-written data in the operational dataset and/or recently-accessed data may be appended to the operational dataset. 
In some embodiments, data “freshness” is verified by comparing metadata (such as the last update for the operational dataset versus the data source) and only “new” data is written into or appended to the operational dataset. [0067] FIG. 3 provides an example of a window 50 which may be rendered to select a databook of interest. In this example, contextual menu 52 is provided, along with first and second selection areas 54 and 58. In some embodiments, solution 10 may provide users with multiple collections of databooks. For example, as shown in FIG. 3, a group 56 includes different sets (SET1 through SET4) available for selection. While four sets are shown, an infinite number of collections are possible. Area 58 provides a list 60 of one or more databooks associated with a given set when the set is clicked on by a user. [0151] FIG. 11, another conceptual aspect of the present subject matter can be appreciated. Operational data set 500 comprises a plurality of rows and columns. As was noted before, the dataset can originate from one or more sources. The data set may be updated irregularly based on user commands, on a scheduled basis, or even in response to an indication that an underlying data source has changed. [0070] FIG. 4 depicts a window 62 showing a simplified databook rendered in accordance with one or more aspects of the present subject matter. In this example, the window also includes a contextual menu 52. Contextual menu 52 may, in some embodiments, lead to options whereby a user can define the contents and appearance of the databook and/or export or otherwise distribute the databook contents. [0152] the databooks are defined by parameters 502, the access controls (and databooks) do not need to be redefined every time the underlying data sources change. For example, in a large manufacturing enterprise, sales records may be updated daily, with the operational dataset constantly including more records. 
A manager may be interested in which salespeople in each state are in the top 10% of total sales for the year. The manager can construct a databook, such as by grouping sales figures by state and salesperson, with sales amount totaled and the results filtered to show only the top 10% of sales.)
dynamically updating the one or more interactive compensation representation views in real- time in response to changes in compensation data, user context, or access permissions without requiring manual regeneration of the one or more interactive compensation representation views, wherein the one or more interactive compensation representation views include a user interface element displayed in response to a user selection of a compensation data item, wherein the user interface element provides direct access to compensation information relevant to the compensation data item without requiring a change to another screen or another user context; (in at least [0057] User interface 12 is used to provide queries to, and render results based upon, data from an operational data set 16. Operational data set 16 can comprise, for example, a relational or other databases, and includes data ultimately maintained in one or more of a plurality of data sources 20. In some embodiments, operational data set 16 may be maintained in a suitable computer-readable medium and can comprise data imported from one or more of sources 20. However, operational data set 16 may comprise a mapping or other metadata providing for direct access to other sources in some embodiments. In FIG. 1, data sources 20 are indicated as 20-1, 20-2, 20-N to indicate that the number of sources may vary from one on upward. Source 20-3 is shown to indicate that, in some embodiments, solution 10 can integrate locally-stored data (e.g. data available at a user's terminal). [0129] FIG. 10D indicates how columns from a particular data source are to be mapped to columns in the operational dataset. In this example, column names from the data source are correlated to names in the operational dataset and types are indicated. Additionally, one or more columns can be indicated as “keys.” One or more components, processes, scripts, or the like can copy information returned from the data source into the operational dataset. 
Depending upon the configuration parameters, recently-accessed data may overwrite previously-written data in the operational dataset and/or recently-accessed data may be appended to the operational dataset. In some embodiments, data “freshness” is verified by comparing metadata (such as the last update for the operational dataset versus the data source) and only “new” data is written into or appended to the operational dataset. [0067] FIG. 3 provides an example of a window 50 which may be rendered to select a databook of interest. In this example, contextual menu 52 is provided, along with first and second selection areas 54 and 58. In some embodiments, solution 10 may provide users with multiple collections of databooks. For example, as shown in FIG. 3, a group 56 includes different sets (SET1 through SET4) available for selection. While four sets are shown, an infinite number of collections are possible. Area 58 provides a list 60 of one or more databooks associated with a given set when the set is clicked on by a user. [0151] FIG. 11, another conceptual aspect of the present subject matter can be appreciated. Operational data set 500 comprises a plurality of rows and columns. As was noted before, the dataset can originate from one or more sources. The data set may be updated irregularly based on user commands, on a scheduled basis, or even in response to an indication that an underlying data source has changed. [0070] FIG. 4 depicts a window 62 showing a simplified databook rendered in accordance with one or more aspects of the present subject matter. In this example, the window also includes a contextual menu 52. Contextual menu 52 may, in some embodiments, lead to options whereby a user can define the contents and appearance of the databook and/or export or otherwise distribute the databook contents. [0152] the databooks are defined by parameters 502, the access controls (and databooks) do not need to be redefined every time the underlying data sources change. 
For example, in a large manufacturing enterprise, sales records may be updated daily, with the operational dataset constantly including more records. A manager may be interested in which salespeople in each state are in the top 10% of total sales for the year. The manager can construct a databook, such as by grouping sales figures by state and salesperson, with sales amount totaled and the results filtered to show only the top 10% of sales. [0153] The databook can then be accessed at any time and (assuming the operational dataset is updated), the report will automatically be updated with the latest data when the databook is refreshed. Databook definition parameters can be provided to various users to share the databook. As was noted above, if one or more security policies are in effect, the users may access the operational dataset using the same databook definition parameters, but end up with different databooks depending upon access rights and/or permissions.)
analyzing historical patterns of compensation actions using one or more … algorithms;(in at least [0114] FIG. 8E shows an example of a trend analysis, namely a trend analysis chart 410 showing salary for John Smoltz over time. Chart 410 includes a trend evaluation indicator 411 which reads “Fit HIGH, r: 0.962” to indicate the confidence in the trend analysis. One of skill in the art will recognize that any suitable number or type of curve-fitting algorithms can be applied to the data to attempt to identify a trend. In some embodiments, the trend analysis output is not made available unless the confidence value meets a predetermined threshold. [0115] Chart 410 also includes a forecast interface 412, in this example, a slider bar. FIG. 8F shows an example of the chart being extended to the maximum amount based on input provided via the slider bar. As shown at 414, the trend has been continued based on the trend analysis as indicated at 416.)
detecting anomalies or outliers relative to learned patterns of typical compensation amounts, distributions, or adjustments; and (in at least [0116] FIG. 8G shows another way in which data can be visualized based on the evaluate function. In this example, salary data is presented as a histogram. The vertical axis 418 indicates the number of records with data in the column of interest that fall within a particular sub-range, with the horizontal axis showing a plurality of sub-ranges in the range of values for the column of interest across the dataset. In this example, as shown at 422, the histogram includes a highlighted portion showing where the record of interest falls in the histogram. In this particular example, about 86,000 records have a “salary” value between zero and 2.8 million. The record being evaluated (John Smoltz for 2004 at $11,677,000) falls within a range of $11,571,429 to $14,166,667. The histogram view may allow for easy identification of whether a particular record is an outlier.)
flagging anomalous … for reviews via a graphical user interface. (in at least [0075] Once the filter has been defined, it can be applied to the data. For instance, the databook may include a button or other suitable interface mechanism that a user can click to trigger a refresh of the data, or the data may refresh automatically. In any event, to refresh the data, the user interface can provide a query to the operational dataset, with the query in this example based on the defined filter. The data returned for display will then include only those records where “salary” is above $40,000 for this example. [0117] For this particular inquiry, the user may wish to return to the data view and filter out records having a salary below $2.8 million. Then, upon highlighting the same record and again triggering “evaluate,” the user can see where the record lies amongst the 200 or so records with high salaries.)
Although implied, Psenka does not expressly disclose the following limitations, which, however, are taught by Qamar:
analyzing historical patterns of compensation actions using one or more machine learning algorithms (in at least [0054] a series of regression models may be built and evaluated using a training subset of the organization data and/or the optional external data. In these regression models, factors may be removed one at a time, and the remaining factors may be reordered. These permutations and combinations on subsets of the set of factors may provide a table of predictions for the different regression models (i.e., statistical comparison between predictions of the regression models for a test subset of the organization data and/or optional external data relative to the training subset). The average model performance for the factors, the cross-correlations among the factors and/or the ordering of the factors in these predictions may be used to select the polynomial (factors, exponents n and amplitude weights wi) using to calculate the performance metric and/or to determine the retention risk. Thus, variance decomposition may allow the number of factors in the organization data and/or the optional external data to be pruned to reduce the risk of over fitting. [0058] the retention suggestion may be to offer additional training opportunities to the employee to help them improve their skills. Thus retention suggestion may cost $20,000, but may be predicted to keep the employee from leaving for several months, which may more than offset the incremental expense (thereby justifying the use of the retention suggestion). More generally, the retention suggestion may include an action that may keep the employee from leaving (such as: a one-time bonus, a pay increase, a promotion, a change in title, a change in work responsibility, additional training, changing the employee's supervisor, recognition among other employees, etc.). 
The retention suggestion and/or the cost-benefit analysis may be provided to the manager or the supervisor of the employee in the organization, and/or to the representative of human resources for the organization. [0082] Using the regression model (which may be used for one employee or multiple employees), and the aforementioned factors in the organization data and the optional external data, the performance metric and retention risk of employee Bob Smith at the company may be determined. The results may indicate that Bob's customer satisfaction performance during the last week has been extremely (relative to his historic baseline) varied, and that his overtime has reduced. This may indicate an 82% increased likelihood that Bob may leave the company within a week. [0083] However, Bob may be a high performing employee. In particular, company ABC may consider employees that produce more widgets per hour valuable. Based on his average productivity in this regard (holding constant factors such as work location or job type), Bob may be in the top 5% of employees. Consequently, a retention suggestion may be provided. This retention suggestion may indicate that by giving Bob a financial award as an ‘outstanding performer’ is likely to ensure that he stays at the company for at least six months, and that the incremental cost is more than offset by his high productivity.)
…compensation actions…(in at least [0083] However, Bob may be a high performing employee. In particular, company ABC may consider employees that produce more widgets per hour valuable. Based on his average productivity in this regard (holding constant factors such as work location or job type), Bob may be in the top 5% of employees. Consequently, a retention suggestion may be provided. This retention suggestion may indicate that by giving Bob a financial award as an ‘outstanding performer’ is likely to ensure that he stays at the company for at least six months, and that the incremental cost is more than offset by his high productivity. [0226] the computer system optionally regularizes the organization data (operation 2012) to correct for anomalies (such as differences relative to an expected data format, missing data, normalizing the data so that data having different ranges can be compared, etc.).)
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, in the same field of endeavor, to have modified the teachings of Psenka with the teachings of Qamar above, with a reasonable expectation of success in arriving at the claimed invention. One of ordinary skill in the art would have been motivated to make this modification to the teachings of Psenka with the motivation of, … the analysis technique: speeds up computation of the Kaplan-Meier estimator curves and the clustering analysis; reduces memory consumption when performing the computations; improves reliability of the computations (as evidenced by increased retention); reduces network latency; improves the user-friendliness of a user interface that displays results of the computations; and/or improves other performance metrics related to the function of the computer or the computer system.… offer additional training opportunities to an employee to help them improve their skills... to improve the hiring practices of the organization. In this way, the analysis technique may help the organization improve its human capital in a targeted manner (specific to a particular position or job type in the organization), which may help the organization compete and succeed in the marketplace…. to manage its own employees and to facilitate improved hiring… facilitate a systematic and hierarchical study of computed data to eventually build predictive models that are more accurate, especially in the domain of selection-science and psychometry, thereby facilitating optimal hiring decisions and employee-workforce profitability management.….produce more accurate estimates from smaller samples… generate an ensemble of accurate predictive models such as: panel-methods and random-effects regression models, kernel-methods based regression models, decision forests, neural-nets and/or support vector machines. 
These machine-learning models or estimators may facilitate the analysis of causative factors behind employee-workforce attrition. Furthermore, the estimators may help differentiate between the behaviors of the specific values of a categorical predictor and may provide accurate predictive models…, as recited in Qamar.
Regarding Applicant’s assertion that the references teach away from the claimed features, Examiner respectfully notes that the cited references do not teach away from the claimed invention because the prior art’s mere disclosure of an alternative does not constitute a teaching away from any of these alternatives, and the disclosure does not criticize, discredit, or otherwise discourage the solution claimed. As noted in the MPEP, the disclosure of desirable alternatives does not necessarily negate a suggestion for modifying the prior art to arrive at the claimed invention. In In re Fulton, 391 F.3d 1195, 73 USPQ2d 1141 (Fed. Cir. 2004), the claims of a utility patent application were directed to a shoe sole with increased traction having hexagonal projections in a "facing orientation." 391 F.3d at 1196-97, 73 USPQ2d at 1142. The Board combined a design patent having hexagonal projections in a facing orientation with a utility patent having other limitations of the independent claim. 391 F.3d at 1199, 73 USPQ2d at 1144. Applicant argued that the combination was improper because (1) the prior art did not suggest having the hexagonal projections in a facing (as opposed to a "pointing") orientation was the "most desirable" configuration for the projections, and (2) the prior art "taught away" by showing desirability of the "pointing orientation." 391 F.3d at 1200-01, 73 USPQ2d at 1145-46. The court stated that "the prior art’s mere disclosure of more than one alternative does not constitute a teaching away from any of these alternatives because such disclosure does not criticize, discredit, or otherwise discourage the solution claimed…." Id. 
In affirming the Board’s obviousness rejection, the court held that the prior art as a whole suggested the desirability of the combination of shoe sole limitations claimed, thus providing a motivation to combine, which need not be supported by a finding that the prior art suggested that the combination claimed by the applicant was the preferred, or most desirable combination over the other alternatives. Id. See also In re Urbanski, 809 F.3d 1237, 1244, 117 USPQ2d 1499, 1504 (Fed. Cir. 2016). See MPEP 2143.01.
Claim Rejections – 35 USC § 112(b)
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 10 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as failing to set forth the subject matter which the inventor or a joint inventor, or for pre-AIA the applicant(s), regards as the invention.
Claim 10 recites “…for display via a graphical user interface.” It is unclear whether this element refers to the “…for reviews via a graphical user interface” recited in claim 1. Appropriate correction is required.
Claim Rejections – 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-10 and 12-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.
Claim 1 (and, similarly, claims 19 and 20) recites, in part:
generating one or more interactive compensation representation views based on an evaluation of mapping rules against a flexible ontology and current data state, the flexible ontology comprising one or more customizable compensation components, relationships, and properties;
dynamically updating the one or more interactive compensation representation views in real-time in response to changes in compensation data, user context, or access permissions without requiring manual regeneration of the one or more interactive compensation representation views, wherein the one or more interactive compensation representation views include a … element displayed in response to a user selection of a compensation data item, wherein the user interface element provides direct access to compensation information relevant to the compensation data item without requiring a change to another screen or another user context;
analyzing historical patterns of compensation actions using one or more … algorithms;
detecting anomalies or outliers relative to learned patterns of typical compensation amounts, distributions, or adjustments; and
flagging anomalous compensation actions for reviews via a ….
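For reference, one simple way the broadly recited "detecting anomalies or outliers relative to learned patterns" and "flagging" steps could be carried out is a z-score comparison against a historical distribution. The sketch below is purely illustrative of the breadth of the limitation; the data, threshold, and function name are hypothetical.

```python
# Illustrative sketch only: flag compensation actions whose raise
# percentage deviates from the learned historical distribution by more
# than a z-score threshold (hypothetical data and threshold).
from statistics import mean, stdev

def flag_anomalous(history, actions, z_threshold=3.0):
    # Learn the typical pattern (mean and spread) from historical raises,
    # then flag any proposed action that falls outside the threshold.
    mu, sigma = mean(history), stdev(history)
    return [a for a in actions if abs(a - mu) / sigma > z_threshold]

history = [3.0, 3.5, 2.8, 4.0, 3.2, 3.7, 2.9, 3.4]   # typical raise %
flagged = flag_anomalous(history, [3.1, 25.0, 3.6])
print(flagged)  # only the outlier raise is flagged for review: [25.0]
```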
Analyzing under Step 2A, Prong 1:
The limitations regarding, …generating one or more interactive compensation representation views based on an evaluation of mapping rules against a flexible ontology and current data state, the flexible ontology comprising one or more customizable compensation components, relationships, and properties; dynamically updating the one or more interactive compensation representation views in real-time in response to changes in compensation data, user context, or access permissions without requiring manual regeneration of the one or more interactive compensation representation views, wherein the one or more interactive compensation representation views include a … element displayed in response to a user selection of a compensation data item, wherein the user interface element provides direct access to compensation information relevant to the compensation data item without requiring a change to another screen or another user context; analyzing historical patterns of compensation actions using one or more … algorithms; detecting anomalies or outliers relative to learned patterns of typical compensation amounts, distributions, or adjustments; and flagging anomalous compensation actions for reviews via a…, under the broadest reasonable interpretation, can be performed by a human mentally or with the aid of pen and paper; therefore, the claims are directed to a mental process.
Further, the limitations of …generating one or more interactive compensation representation views based on an evaluation of mapping rules against a flexible ontology and current data state, the flexible ontology comprising one or more customizable compensation components, relationships, and properties; dynamically updating the one or more interactive compensation representation views in real-time in response to changes in compensation data, user context, or access permissions without requiring manual regeneration of the one or more interactive compensation representation views, wherein the one or more interactive compensation representation views include a … element displayed in response to a user selection of a compensation data item, wherein the user interface element provides direct access to compensation information relevant to the compensation data item without requiring a change to another screen or another user context; analyzing historical patterns of compensation actions using one or more … algorithms; detecting anomalies or outliers relative to learned patterns of typical compensation amounts, distributions, or adjustments; and flagging anomalous compensation actions for reviews via a…, under the broadest reasonable interpretation, describe humans managing compensation and relationship rules and, therefore, fall within fundamental economic principles or practices, commercial or legal interactions, and managing personal behavior or relationships or interactions between people. Thus, the claims are directed to certain methods of organizing human activity.
Accordingly, the claims are directed to a mental process and to certain methods of organizing human activity and, thus, are directed to an abstract idea under the first prong of Step 2A.
Analyzing under Step 2A, Prong 2:
This judicial exception is not integrated into a practical application under the second prong of Step 2A.
In particular, the claims recite the additional elements beyond the recited abstract idea identified under Step 2A, Prong 1, such as:
Claims 1, 19, 20: a system comprising: one or more computer processors; one or more computer memories; a set of instructions incorporated into the one or more computer memories, the set of instructions configuring the one or more computer processors to perform operations; a non-transitory computer-readable storage medium storing a set of instructions that, when executed by one or more computer processors, causes the one or more computer processors to perform operations; user interface element; another screen; machine learning algorithms; graphical user interface
Claim 2: user interface element is a sidebar element
Claim 5: user interface element is overlaid
Claim 10: machine learning model, graphical user interface
Claim 17: application programming interfaces (APIs), payroll, human resources, or talent management systems
Pursuant to the broadest reasonable interpretation, as an ordered combination, each of the additional elements is a computing element recited at a high level of generality implementing the abstract idea and, thus, amounts to no more than applying the abstract idea with generic computer components. Further, these additional elements generally link the abstract idea to a technical environment, namely the environment of a computer.
Additionally, with respect to “generating…”, “dynamically updating…”, “incrementally updating…”, “outputting…”, “rendering…”, “flagging…”, and “propagating…”, these elements do not add meaningful limitations that integrate the abstract idea into a practical application because they are insignificant extra-solution activity (pre- and post-solution activity), i.e., data gathering (“generating…”, “propagating…”) and data output (“dynamically updating…”, “incrementally updating…”, “outputting…”, “rendering…”, “flagging…”).
Analyzing under Step 2B:
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception under Step 2B.
As noted above, the aforementioned additional elements beyond the recited abstract idea are not sufficient to amount to significantly more than the recited abstract idea because, as an ordered combination, the additional elements are no more than mere instructions to implement the idea using generic computer components (i.e., apply it).
Additionally, as an ordered combination, the additional elements append the recited abstract idea to well-understood, routine, and conventional activities in the field, as individually evidenced by the applicant’s own disclosure, as required by the Berkheimer Memo, in at least:
[0316] The system also enables integration with third-party systems using tailored APIs and protocols to synchronize goal data. For instance, one or more external system APIs (e.g., SalesForce API and/or Jira API) automatically updates progress on synced goals when a change occurs in Salesforce, without needing manual duplication of efforts and/or integrate issue tracking with goals to automatically update goal progress as issues are opened/closed in the external systems.
[0348] A dynamic graphical user interface (GUI) module 210 is configured to provide one or more specialized graphical user interfaces, as described herein, to, for example, allow users to manage and/or link data pertaining to goals, compensation, and/or OKRs. In example embodiments, the one or more specialized user interfaces and/or elements included in the one or more specialized user interfaces, and/or combinations thereof, are asserted to be unconventional. Additionally, the one or more specialized user interfaces described include one or more features that specially adapt the one or more specialized user interfaces for devices with small screens, such as mobile phones or other mobile devices.
[0993] FIG. 250 is a schematic of a user interface displaying the compensation settings page.
[1018] FIG. 257 is a schematic of a user interface displaying a compensation bands page with the option to assign employees to the bands.
[1126] FIG. 283 is a schematic of a user interface displaying a confirmation page for reviewing compensation adjustments.
[1131] FIG. 284 is a schematic of a user interface displaying a notifications settings page with the option to send compensation notifications through Slack, Microsoft Teams, or email.
[1132] To send these through Slack/Microsoft Teams or email, users can check on the boxes to the right of Compensation, as shown in FIG. 283.
[1225] Here are some examples of insights that the machine learning described herein may be configured to reveal regarding compensation and goals:
[1226] Predictive Modeling for Budgeting: A regression model could forecast next year's payroll budget needs by analyzing historical trends in compensation changes, hiring forecasts, attrition predictions, and economic indicators. The predictions can help guide budget planning cycles.
[1273] In example embodiments, one or more user interfaces may be caused to be presented that include advanced visualizations, embedded analytics, mobile optimization, drag-and-drop manipulation, visual cues, animations, recommendations, or simulations, as described herein. For example, one or more user interfaces my include one or more of the following features:
[1286] The user interface(s) may include, for example, contextual recommendations of potentially relevant goals, structures, and/or associations generated using a trained machine learning model. The machine learning model may be trained on historical goal data to identify patterns and correlations between goals, structures, and associations. The recommendations may suggest goals entities, mappings, hierarchies and representations to users and administrators to enhance goal definition workflows. The recommendations are provided via interactive overlays and previews allowing non-disruptive, low-risk exploration of recommendations. For example, the recommendations may aim to increase efficiency, alignment, and structure of goal definitions based on learned patterns and relationships between goals from organizational data.
[1376] Certain embodiments are described herein as including logic or a number of components, modules, or mechanisms. Modules may constitute either software modules (e.g., code embodied (1) on a non-transitory machine-readable medium or (2) in a transmission signal) or hardware-implemented modules. A hardware-implemented module is a tangible unit capable of performing certain operations and may be configured or arranged in a certain manner. In example embodiments, one or more computer systems (e.g., a standalone, client or server computer system) or one or more processors may be configured by software (e.g., an application or application portion) as a hardware-implemented module that operates to perform certain operations as described herein.
[1377] In various embodiments, a hardware-implemented module may be implemented mechanically or electronically. For example, a hardware-implemented module may comprise dedicated circuitry or logic that is permanently configured (e.g., as a special-purpose processor, such as a field programmable gate array (FPGA) or an application-specific integrated circuit (ASIC)) to perform certain operations. A hardware-implemented module may also comprise programmable logic or circuitry (e.g., as encompassed within a general-purpose processor or other programmable processor) that is temporarily configured by software to perform certain operations. It will be appreciated that the decision to implement a hardware- implemented module mechanically, in dedicated and permanently configured circuitry, or in temporarily configured circuitry (e.g., configured by software) may be driven by cost and time considerations.
[1378] Accordingly, the term "hardware-implemented module" should be understood to encompass a tangible entity, be that an entity that is physically constructed, permanently configured (e.g., hardwired) or temporarily or transitorily configured (e.g., programmed) to operate in a certain manner and/or to perform certain operations described herein. Considering embodiments in which hardware- implemented modules are temporarily configured (e.g., programmed), each of the hardware-implemented modules need not be configured or instantiated at any one instance in time. For example, where the hardware-implemented modules comprise a general-purpose processor configured using software, the general-purpose processor may be configured as respective different hardware-implemented modules at different times. Software may accordingly configure a processor, for example, to constitute a particular hardware-implemented module at one instance of time and to constitute a different hardware-implemented module at a different instance of time.
[1379] Hardware-implemented modules can provide information to, and receive information from, other hardware-implemented modules. Accordingly, the described hardware-implemented modules may be regarded as being communicatively coupled. Where multiple of such hardware-implemented modules exist contemporaneously, communications may be achieved through signal transmission (e.g., over appropriate circuits and buses) that connect the hardware- implemented modules. In embodiments in which multiple hardware-implemented modules are configured or instantiated at different times, communications between such hardware-implemented modules may be achieved, for example, through the storage and retrieval of information in memory structures to which the multiple hardware-implemented modules have access. For example, one hardware- implemented module may perform an operation, and store the output of that operation in a memory device to which it is communicatively coupled. A further hardware-implemented module may then, at a later time, access the memory device to retrieve and process the stored output. Hardware-implemented modules may also initiate communications with input or output devices, and can operate on a resource (e.g., a collection of information).
[1380] The various operations of example methods described herein may be performed, at least partially, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such processors may constitute processor-implemented modules that operate to perform one or more operations or functions. The modules referred to herein may, in some example embodiments, comprise processor-implemented modules.
[1381] Similarly, the methods described herein may be at least partially processor-implemented. For example, at least some of the operations of a method may be performed by one or more processors or processor-implemented modules. The performance of certain of the operations may be distributed among the one or more processors, not only residing within a single machine, but deployed across a number of machines. In some example embodiments, the processor or processors may be located in a single location (e.g., within a home environment, an office environment or as a server farm), while in other embodiments the processors may be distributed across a number of locations.
[1382] The one or more processors may also operate to support performance
of the relevant operations in a "cloud computing" environment or as a "software as a service" (SaaS). For example, at least some of the operations may be performed by a group of computers (as examples of machines including processors), these operations being accessible via a network (e.g., the Internet) and via one or more appropriate interfaces (e.g., Application Program Interfaces (APIs).)
[1387] FIG. 300 is a block diagram of an example computer system 30000 on which methodologies and operations described herein may be executed, in accordance with an example embodiment.
[1388] In alternative embodiments, the machine operates as a standalone device or may be connected (e.g., networked) to other machines. In a networked deployment, the machine may operate in the capacity of a server or a client machine in server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a network router, switch or bridge, or any machine capable of executing instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.
[1389] The example computer system 30000 includes a processor 30002 (e.g., a central processing unit (CPU), a graphics processing unit (GPU) or both), a main memory 30004 and a static memory 30006, which communicate with each other via a bus 30008. The computer system 30000 may further include a graphics display unit 30010 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)). The computer system 30000 also includes an alphanumeric input device 30012 (e.g., a keyboard or a touch-sensitive display screen), a user interface (UI) navigation device 30014 (e.g., a mouse), a storage unit 30016, a signal generation device 30018 (e.g., a speaker) and a network interface device 30020.
[1393] Although an embodiment has been described with reference to specific example embodiments, it will be evident that various modifications and changes may be made to these embodiments without departing from the broader spirit and scope of the present disclosure. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense. The accompanying drawings that form a part hereof, show by way of illustration, and not of limitation, specific embodiments in which the subject matter may be practiced. The embodiments illustrated are described in sufficient detail to enable those skilled in the art to practice the teachings disclosed herein. Other embodiments may be utilized and derived therefrom, such that structural and logical substitutions and changes may be made without departing from the scope of this disclosure. This Detailed Description, therefore, is not to be taken in a limiting sense, and the scope of various embodiments is defined only by the appended claims, along with the full range of equivalents to which such claims are entitled.
[1394] Although specific embodiments have been illustrated and described herein, it should be appreciated that any arrangement calculated to achieve the same purpose may be substituted for the specific embodiments shown. This disclosure is intended to cover any and all adaptations or variations of various embodiments. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent to those of skill in the art upon reviewing the above description.
Furthermore, as an ordered combination, these elements amount to generic computer components receiving or transmitting data over a network, performing repetitive calculations, electronic record keeping, and storing and retrieving information in memory, which, as held by the courts, are well-understood, routine, and conventional. See MPEP 2106.05(d).
Moreover, the remaining elements of dependent claims do not transform the recited abstract idea into a patent eligible invention because these remaining elements merely recite further abstract limitations that provide nothing more than simply a narrowing of the abstract idea recited in the independent claims.
Looking at these limitations as an ordered combination adds nothing additional that is sufficient to amount to significantly more than the recited abstract idea because they simply provide instructions to use a generic arrangement of generic computer components to “apply” the recited abstract idea, perform insignificant extra-solution activity, and generally link the abstract idea to a technical environment. Thus, the elements of the claims, considered both individually and as an ordered combination, are not sufficient to ensure that the claim as a whole amounts to significantly more than the abstract idea itself. Since there are no limitations in these claims that transform the exception into a patent-eligible application such that these claims amount to significantly more than the exception itself, claims 1-10 and 12-20 are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter.
Claim Rejections – 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
Determining the scope and contents of the prior art.
Ascertaining the differences between the prior art and the claims at issue.
Resolving the level of ordinary skill in the pertinent art.
Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-10 and 12-20 are rejected under 35 U.S.C. 103 as being unpatentable over US Patent Publication No. US20090300544A1 to Psenka et al. (hereinafter “Psenka”) in view of US Patent Publication No. US20150269244A1 to Qamar et al. (hereinafter “Qamar”).
As per Claim 1, Psenka teaches: A system comprising: one or more computer processors; one or more computer memories; a set of instructions incorporated into the one or more computer memories, the set of instructions configuring the one or more computer processors to perform operations, the operations comprising: ([0159])
generating one or more interactive compensation representation views based on an evaluation of mapping rules against a flexible ontology and current data state, the flexible ontology comprising one or more customizable compensation components, relationships, and properties; and (in at least [0057] User interface 12 is used to provide queries to, and render results based upon, data from an operational data set 16. Operational data set 16 can comprise, for example, a relational or other databases, and includes data ultimately maintained in one or more of a plurality of data sources 20. In some embodiments, operational data set 16 may be maintained in a suitable computer-readable medium and can comprise data imported from one or more of sources 20. However, operational data set 16 may comprise a mapping or other metadata providing for direct access to other sources in some embodiments. In FIG. 1, data sources 20 are indicated as 20-1, 20-2, 20-N to indicate that the number of sources may vary from one on upward. Source 20-3 is shown to indicate that, in some embodiments, solution 10 can integrate locally-stored data (e.g. data available at a user's terminal). [0129] FIG. 10D indicates how columns from a particular data source are to be mapped to columns in the operational dataset. In this example, column names from the data source are correlated to names in the operational dataset and types are indicated. Additionally, one or more columns can be indicated as “keys.” One or more components, processes, scripts, or the like can copy information returned from the data source into the operational dataset. Depending upon the configuration parameters, recently-accessed data may overwrite previously-written data in the operational dataset and/or recently-accessed data may be appended to the operational dataset. 
In some embodiments, data “freshness” is verified by comparing metadata (such as the last update for the operational dataset versus the data source) and only “new” data is written into or appended to the operational dataset. [0067] FIG. 3 provides an example of a window 50 which may be rendered to select a databook of interest. In this example, contextual menu 52 is provided, along with first and second selection areas 54 and 58. In some embodiments, solution 10 may provide users with multiple collections of databooks. For example, as shown in FIG. 3, a group 56 includes different sets (SET1 through SET4) available for selection. While four sets are shown, an infinite number of collections are possible. Area 58 provides a list 60 of one or more databooks associated with a given set when the set is clicked on by a user. [0151] FIG. 11, another conceptual aspect of the present subject matter can be appreciated. Operational data set 500 comprises a plurality of rows and columns. As was noted before, the dataset can originate from one or more sources. The data set may be updated irregularly based on user commands, on a scheduled basis, or even in response to an indication that an underlying data source has changed. [0070] FIG. 4 depicts a window 62 showing a simplified databook rendered in accordance with one or more aspects of the present subject matter. In this example, the window also includes a contextual menu 52. Contextual menu 52 may, in some embodiments, lead to options whereby a user can define the contents and appearance of the databook and/or export or otherwise distribute the databook contents. [0152] the databooks are defined by parameters 502, the access controls (and databooks) do not need to be redefined every time the underlying data sources change. For example, in a large manufacturing enterprise, sales records may be updated daily, with the operational dataset constantly including more records. 
A manager may be interested in which salespeople in each state are in the top 10% of total sales for the year. The manager can construct a databook, such as by grouping sales figures by state and salesperson, with sales amount totaled and the results filtered to show only the top 10% of sales.)
dynamically updating the one or more interactive compensation representation views in real-time in response to changes in compensation data, user context, or access permissions without requiring manual regeneration of the one or more interactive compensation representation views, wherein the one or more interactive compensation representation views include a user interface element displayed in response to a user selection of a compensation data item, wherein the user interface element provides direct access to compensation information relevant to the compensation data item without requiring a change to another screen or another user context; (in at least [0057] User interface 12 is used to provide queries to, and render results based upon, data from an operational data set 16. Operational data set 16 can comprise, for example, a relational or other databases, and includes data ultimately maintained in one or more of a plurality of data sources 20. In some embodiments, operational data set 16 may be maintained in a suitable computer-readable medium and can comprise data imported from one or more of sources 20. However, operational data set 16 may comprise a mapping or other metadata providing for direct access to other sources in some embodiments. In FIG. 1, data sources 20 are indicated as 20-1, 20-2, 20-N to indicate that the number of sources may vary from one on upward. Source 20-3 is shown to indicate that, in some embodiments, solution 10 can integrate locally-stored data (e.g. data available at a user's terminal). [0129] FIG. 10D indicates how columns from a particular data source are to be mapped to columns in the operational dataset. In this example, column names from the data source are correlated to names in the operational dataset and types are indicated. Additionally, one or more columns can be indicated as “keys.” One or more components, processes, scripts, or the like can copy information returned from the data source into the operational dataset. 
Depending upon the configuration parameters, recently-accessed data may overwrite previously-written data in the operational dataset and/or recently-accessed data may be appended to the operational dataset. In some embodiments, data “freshness” is verified by comparing metadata (such as the last update for the operational dataset versus the data source) and only “new” data is written into or appended to the operational dataset. [0067] FIG. 3 provides an example of a window 50 which may be rendered to select a databook of interest. In this example, contextual menu 52 is provided, along with first and second selection areas 54 and 58. In some embodiments, solution 10 may provide users with multiple collections of databooks. For example, as shown in FIG. 3, a group 56 includes different sets (SET1 through SET4) available for selection. While four sets are shown, an infinite number of collections are possible. Area 58 provides a list 60 of one or more databooks associated with a given set when the set is clicked on by a user. [0151] FIG. 11, another conceptual aspect of the present subject matter can be appreciated. Operational data set 500 comprises a plurality of rows and columns. As was noted before, the dataset can originate from one or more sources. The data set may be updated irregularly based on user commands, on a scheduled basis, or even in response to an indication that an underlying data source has changed. [0070] FIG. 4 depicts a window 62 showing a simplified databook rendered in accordance with one or more aspects of the present subject matter. In this example, the window also includes a contextual menu 52. Contextual menu 52 may, in some embodiments, lead to options whereby a user can define the contents and appearance of the databook and/or export or otherwise distribute the databook contents. [0152] the databooks are defined by parameters 502, the access controls (and databooks) do not need to be redefined every time the underlying data sources change. 
For example, in a large manufacturing enterprise, sales records may be updated daily, with the operational dataset constantly including more records. A manager may be interested in which salespeople in each state are in the top 10% of total sales for the year. The manager can construct a databook, such as by grouping sales figures by state and salesperson, with sales amount totaled and the results filtered to show only the top 10% of sales. [0153] The databook can then be accessed at any time and (assuming the operational dataset is updated), the report will automatically be updated with the latest data when the databook is refreshed. Databook definition parameters can be provided to various users to share the databook. As was noted above, if one or more security policies are in effect, the users may access the operational dataset using the same databook definition parameters, but end up with different databooks depending upon access rights and/or permissions.)
analyzing historical patterns of compensation actions using one or more … algorithms; (in at least [0114] FIG. 8E shows an example of a trend analysis, namely a trend analysis chart 410 showing salary for John Smoltz over time. Chart 410 includes a trend evaluation indicator 411 which reads “Fit HIGH, r: 0.962” to indicate the confidence in the trend analysis. One of skill in the art will recognize that any suitable number or type of curve-fitting algorithms can be applied to the data to attempt to identify a trend. In some embodiments, the trend analysis output is not made available unless the confidence value meets a predetermined threshold. [0115] Chart 410 also includes a forecast interface 412, in this example, a slider bar. FIG. 8F shows an example of the chart being extended to the maximum amount based on input provided via the slider bar. As shown at 414, the trend has been continued based on the trend analysis as indicated at 416.)
detecting anomalies or outliers relative to learned patterns of typical compensation amounts, distributions, or adjustments; and (in at least [0116] FIG. 8G shows another way in which data can be visualized based on the evaluate function. In this example, salary data is presented as a histogram. The vertical axis 418 indicates the number of records with data in the column of interest that fall within a particular sub-range, with the horizontal axis showing a plurality of sub-ranges in the range of values for the column of interest across the dataset. In this example, as shown at 422, the histogram includes a highlighted portion showing where the record of interest falls in the histogram. In this particular example, about 86,000 records have a “salary” value between zero and 2.8 million. The record being evaluated (John Smoltz for 2004 at $11,677,000) falls within a range of $11,571,429 to $14,166,667. The histogram view may allow for easy identification of whether a particular record is an outlier.)
flagging anomalous … for reviews via a graphical user interface. (in at least [0075] Once the filter has been defined, it can be applied to the data. For instance, the databook may include a button or other suitable interface mechanism that a user can click to trigger a refresh of the data, or the data may refresh automatically. In any event, to refresh the data, the user interface can provide a query to the operational dataset, with the query in this example based on the defined filter. The data returned for display will then include only those records where “salary” is above $40,000 for this example. [0117] For this particular inquiry, the user may wish to return to the data view and filter out records having a salary below $2.8 million. Then, upon highlighting the same record and again triggering “evaluate,” the user can see where the record lies amongst the 200 or so records with high salaries.)
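The curve-fitting trend analysis of Psenka [0114]-[0115] and the histogram outlier view of [0116] can be sketched as follows. This is an illustrative sketch only, not the reference's implementation; the function names, the linear fit, and the bin count are assumptions introduced here for demonstration.

```python
import math

def fit_trend(years, salaries, r_threshold=0.9):
    """Least-squares linear fit with a Pearson-r confidence gate,
    illustrating the 'Fit HIGH, r: 0.962' indicator of Psenka [0114].
    Per [0114], the trend output is suppressed unless confidence
    meets a predetermined threshold."""
    n = len(years)
    mx = sum(years) / n
    my = sum(salaries) / n
    sxx = sum((x - mx) ** 2 for x in years)
    sxy = sum((x - mx) * (y - my) for x, y in zip(years, salaries))
    syy = sum((y - my) ** 2 for y in salaries)
    slope = sxy / sxx
    intercept = my - slope * mx
    r = sxy / math.sqrt(sxx * syy)  # Pearson correlation as fit confidence
    if abs(r) < r_threshold:
        return None  # confidence below threshold: no trend made available
    return slope, intercept, r

def histogram_bin(values, record_value, bins=10):
    """Place a record of interest within a histogram of the column's
    values across the dataset, as in the outlier view of Psenka [0116]."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins
    counts = [0] * bins
    for v in values:
        counts[min(int((v - lo) / width), bins - 1)] += 1
    rec_bin = min(int((record_value - lo) / width), bins - 1)
    return counts, rec_bin
```

A record landing in a sparsely populated high bin (as with the $11,677,000 salary among roughly 86,000 lower-paid records in [0116]) is readily identified as an outlier.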
Although implied, Psenka does not expressly disclose the following limitations, which, however, are taught by Qamar:
analyzing historical patterns of compensation actions using one or more machine learning algorithms (in at least [0054] a series of regression models may be built and evaluated using a training subset of the organization data and/or the optional external data. In these regression models, factors may be removed one at a time, and the remaining factors may be reordered. These permutations and combinations on subsets of the set of factors may provide a table of predictions for the different regression models (i.e., statistical comparison between predictions of the regression models for a test subset of the organization data and/or optional external data relative to the training subset). The average model performance for the factors, the cross-correlations among the factors and/or the ordering of the factors in these predictions may be used to select the polynomial (factors, exponents n and amplitude weights wi) using to calculate the performance metric and/or to determine the retention risk. Thus, variance decomposition may allow the number of factors in the organization data and/or the optional external data to be pruned to reduce the risk of over fitting. [0058] the retention suggestion may be to offer additional training opportunities to the employee to help them improve their skills. Thus retention suggestion may cost $20,000, but may be predicted to keep the employee from leaving for several months, which may more than offset the incremental expense (thereby justifying the use of the retention suggestion). More generally, the retention suggestion may include an action that may keep the employee from leaving (such as: a one-time bonus, a pay increase, a promotion, a change in title, a change in work responsibility, additional training, changing the employee's supervisor, recognition among other employees, etc.). 
The retention suggestion and/or the cost-benefit analysis may be provided to the manager or the supervisor of the employee in the organization, and/or to the representative of human resources for the organization. [0082] Using the regression model (which may be used for one employee or multiple employees), and the aforementioned factors in the organization data and the optional external data, the performance metric and retention risk of employee Bob Smith at the company may be determined. The results may indicate that Bob's customer satisfaction performance during the last week has been extremely (relative to his historic baseline) varied, and that his overtime has reduced. This may indicate an 82% increased likelihood that Bob may leave the company within a week. [0083] However, Bob may be a high performing employee. In particular, company ABC may consider employees that produce more widgets per hour valuable. Based on his average productivity in this regard (holding constant factors such as work location or job type), Bob may be in the top 5% of employees. Consequently, a retention suggestion may be provided. This retention suggestion may indicate that by giving Bob a financial award as an ‘outstanding performer’ is likely to ensure that he stays at the company for at least six months, and that the incremental cost is more than offset by his high productivity.)
…compensation actions…(in at least [0083] However, Bob may be a high performing employee. In particular, company ABC may consider employees that produce more widgets per hour valuable. Based on his average productivity in this regard (holding constant factors such as work location or job type), Bob may be in the top 5% of employees. Consequently, a retention suggestion may be provided. This retention suggestion may indicate that by giving Bob a financial award as an ‘outstanding performer’ is likely to ensure that he stays at the company for at least six months, and that the incremental cost is more than offset by his high productivity. [0226] the computer system optionally regularizes the organization data (operation 2012) to correct for anomalies (such as differences relative to an expected data format, missing data, normalizing the data so that data having different ranges can be compared, etc.).)
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, in the same field of endeavor, to have modified the teachings of Psenka with the teachings of Qamar above, with a reasonable expectation of success in arriving at the claimed invention. One of ordinary skill in the art would have been motivated to make this modification to the teachings of Psenka with the motivation of, … the analysis technique: speeds up computation of the Kaplan-Meier estimator curves and the clustering analysis; reduces memory consumption when performing the computations; improves reliability of the computations (as evidenced by increased retention); reduces network latency; improves the user-friendliness of a user interface that displays results of the computations; and/or improves other performance metrics related to the function of the computer or the computer system.… offer additional training opportunities to an employee to help them improve their skills... to improve the hiring practices of the organization. In this way, the analysis technique may help the organization improve its human capital in a targeted manner (specific to a particular position or job type in the organization), which may help the organization compete and succeed in the marketplace…. to manage its own employees and to facilitate improved hiring… facilitate a systematic and hierarchical study of computed data to eventually build predictive models that are more accurate, especially in the domain of selection-science and psychometry, thereby facilitating optimal hiring decisions and employee-workforce profitability management.….produce more accurate estimates from smaller samples… generate an ensemble of accurate predictive models such as: panel-methods and random-effects regression models, kernel-methods based regression models, decision forests, neural-nets and/or support vector machines. 
These machine-learning models or estimators may facilitate the analysis of causative factors behind employee-workforce attrition. Furthermore, the estimators may help differentiate between the behaviors of the specific values of a categorical predictor and may provide accurate predictive models…, as recited in Qamar.
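The claimed detection of anomalies relative to learned patterns of typical compensation amounts, in the manner of the models cited from Qamar [0054], can be sketched as follows. This is a minimal illustrative sketch, not the reference's method: a learned pattern is reduced here to a mean and spread, and the function names, z-score threshold, and action tuples are assumptions introduced for demonstration.

```python
from statistics import mean, stdev

def learn_pattern(historical_amounts):
    """Learn a simple pattern (mean and spread) from historical
    compensation actions; a stand-in for the trained regression
    models described in Qamar [0054]."""
    return mean(historical_amounts), stdev(historical_amounts)

def flag_anomalies(actions, pattern, z_threshold=3.0):
    """Flag compensation actions whose amounts are outliers relative
    to the learned pattern, for presentation for review in a GUI.
    Each action is a hypothetical (action_id, amount) pair."""
    mu, sigma = pattern
    flagged = []
    for action_id, amount in actions:
        z = (amount - mu) / sigma  # distance from the learned typical amount
        if abs(z) > z_threshold:
            flagged.append((action_id, amount, round(z, 2)))
    return flagged
```

For example, against a history of raises clustered near a common amount, a raise many standard deviations above the learned mean would be flagged, while a typical raise would pass without review.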
As per Claim 2, Psenka teaches: The system of claim 1,
wherein the user interface element is a sidebar element that is dynamically constructed based on properties of the user selection. (in at least [0075] Once the filter has been defined, it can be applied to the data. For instance, the databook may include a button or other suitable interface mechanism that a user can click to trigger a refresh of the data, or the data may refresh automatically. In any event, to refresh the data, the user interface can provide a query to the operational dataset, with the query in this example based on the defined filter. The data returned for display will then include only those records where “salary” is above $40,000 for this example. [0093] Databook 262 includes data tab 264, triggering the current view, graph tab 266, evaluate tab 268, and a “superpivot” tab 284, which can trigger another type of data visualization. Palettes 270 in this example include group palette 272, sort palette 274, and filter palette 276. Contextual menu 252 is included, along with a “refresh” button 292 (since in this example, the databook does not automatically refresh itself). Databook 262 further includes a “swiftseek” tab 286 positioned on its left side, which may be used to rapidly locate one or more rows of interest in the data view, as will be discussed later below. Like databook 162, databook 262 includes zoom in/out control 290, page browsing interface 288, and a total row count indicator 289 (showing a total of 16,638 rows in databook as currently configured). [0094] FIG. 6B shows the result of applying a “gender” grouping to the data. As shown in FIG. 6B, the databook now has grouped the records into “female” and “male.” Additionally, the “annual salary” column has been right-clicked to trigger a group analysis—namely to average the salary. Additionally, the “head count” column has been right-clicked to trigger counting each record. Accordingly, the databook now indicates average salary by gender, as well as a total head count for each gender. 
In this example, the average salary across the entire set has been computed and a total head count provided at “grand total.” An interested user could browse the records by expanding either gender list. [0095] FIG. 6C demonstrates nested grouping functionality that can be supported in some embodiments. In this example, the “gender” grouping has been kept, but the “state” header has been dragged to palette 272 so that, within each “gender” grouping, the results have been grouped by state. Databook 262 now displays average salary and total employees by gender and state. In this example, the average annual salary and head count for female employees in 6 exemplary states are provided. An interested user could drill down to individual records in a particular state by expanding the nested category lists. [0098] FIG. 6E presents a detailed example of a contextual menu that may be generated in order to define a filter. In this example, the unfiltered databook of FIG. 6A will be filtered to produce a databook indicating which faculty members have a visa expiration date within a given time range. The user begins by dragging the “visa expire” column to filter palette 276. In this example, the user is presented with filter builder window 234, which provides a plurality of filtering options 236 based on the data type of the column. In this example, the column contains numerical data, namely a date. The user is presented with a range option 230, along with other options (“equals,” where a specific date can be provided, “in last” and “in next” options, where the user can indicate a time window in days, weeks, or other time units relative to the current date, “prior, current,” and “next” options, where the user can indicate a window about the current date, and “is empty,” where the results can be filtered to exclude records with no value for visa expiration). Query area 232 provides the end result of the selection. 
In this example, the “exclude filtered rows” allows a user to define a filter based on data he/she desires to exclude, rather than include. [0118] the databook includes a “file” and “format” menu. FIG. 9A shows an example of an expanded “file” menu, whereby a user can select options to print, export, send, or save a databook. Additionally, in this example, the user interface allows the user to define “quicklinks” which allow access to a particular databook from a “quicklinks” screen available at any point in the application.)
As per Claim 3, Psenka teaches: The system of claim 2,
wherein the dynamic updating includes constructing a sidebar and populating the sidebar with one or more contextual actions relevant to the user selection. (in at least [0075] Once the filter has been defined, it can be applied to the data. For instance, the databook may include a button or other suitable interface mechanism that a user can click to trigger a refresh of the data, or the data may refresh automatically. In any event, to refresh the data, the user interface can provide a query to the operational dataset, with the query in this example based on the defined filter. The data returned for display will then include only those records where “salary” is above $40,000 for this example. [0093] Databook 262 includes data tab 264, triggering the current view, graph tab 266, evaluate tab 268, and a “superpivot” tab 284, which can trigger another type of data visualization. Palettes 270 in this example include group palette 272, sort palette 274, and filter palette 276. Contextual menu 252 is included, along with a “refresh” button 292 (since in this example, the databook does not automatically refresh itself). Databook 262 further includes a “swiftseek” tab 286 positioned on its left side, which may be used to rapidly locate one or more rows of interest in the data view, as will be discussed later below. Like databook 162, databook 262 includes zoom in/out control 290, page browsing interface 288, and a total row count indicator 289 (showing a total of 16,638 rows in databook as currently configured). [0094] FIG. 6B shows the result of applying a “gender” grouping to the data. As shown in FIG. 6B, the databook now has grouped the records into “female” and “male.” Additionally, the “annual salary” column has been right-clicked to trigger a group analysis—namely to average the salary. Additionally, the “head count” column has been right-clicked to trigger counting each record. 
Accordingly, the databook now indicates average salary by gender, as well as a total head count for each gender. In this example, the average salary across the entire set has been computed and a total head count provided at “grand total.” An interested user could browse the records by expanding either gender list. [0095] FIG. 6C demonstrates nested grouping functionality that can be supported in some embodiments. In this example, the “gender” grouping has been kept, but the “state” header has been dragged to palette 272 so that, within each “gender” grouping, the results have been grouped by state. Databook 262 now displays average salary and total employees by gender and state. In this example, the average annual salary and head count for female employees in 6 exemplary states are provided. An interested user could drill down to individual records in a particular state by expanding the nested category lists. [0098] FIG. 6E presents a detailed example of a contextual menu that may be generated in order to define a filter. In this example, the unfiltered databook of FIG. 6A will be filtered to produce a databook indicating which faculty members have a visa expiration date within a given time range. The user begins by dragging the “visa expire” column to filter palette 276. In this example, the user is presented with filter builder window 234, which provides a plurality of filtering options 236 based on the data type of the column. In this example, the column contains numerical data, namely a date. The user is presented with a range option 230, along with other options (“equals,” where a specific date can be provided, “in last” and “in next” options, where the user can indicate a time window in days, weeks, or other time units relative to the current date, “prior, current,” and “next” options, where the user can indicate a window about the current date, and “is empty,” where the results can be filtered to exclude records with no value for visa expiration). 
Query area 232 provides the end result of the selection. In this example, the “exclude filtered rows” allows a user to define a filter based on data he/she desires to exclude, rather than include. [0118] the databook includes a “file” and “format” menu. FIG. 9A shows an example of an expanded “file” menu, whereby a user can select options to print, export, send, or save a databook. Additionally, in this example, the user interface allows the user to define “quicklinks” which allow access to a particular databook from a “quicklinks” screen available at any point in the application)
As per Claim 4, Psenka teaches: The system of claim 3,
wherein the one or more contextual actions are determined based on roles or permissions associated with the user selection. (in at least [0149] the security policy includes user permissions regarding databook definition parameters. For example, the security policy may prevent a user from changing a filtering, grouping, or sorting parameter, or may prevent the user from adding or removing columns to or from a databook. Access restrictions may extend to other configuration parameters, such as which users can view/change live request specification data such as database identifiers, relationships, login information, and the like. )
As per Claim 5, Psenka teaches: The system of claim 1,
wherein the user interface element is overlaid over the one or more interactive compensation representation views without disrupting a current user workflow. (in at least [0107] Swiftseek tab 386 may be especially useful in light of the volume of data. FIG. 8B shows an example of an interface that may be triggered by a user clicking on a swiftseek tab. Namely, in this example, a window 400 is overlain on databook 362. Column listing 402 shows values from many rows of the “name” column in a visually-compressed (i.e. shrunken) view. However, part of the column is magnified in a “fisheye” effect as indicated at 404. “Fisheye” portion 404 represents the currently-viewed page, and a user can move rapidly through the data by providing suitable input (for example by clicking and dragging the fisheye portion or by mousing over a different portion of list 402).)
As per Claim 6, Psenka teaches: The system of claim 1,
further comprising dismissing the user interface element without navigating away from the one or more interactive compensation representation views. (in at least [0093] Databook 262 includes data tab 264, triggering the current view, graph tab 266, evaluate tab 268, and a “superpivot” tab 284, which can trigger another type of data visualization. Palettes 270 in this example include group palette 272, sort palette 274, and filter palette 276. Contextual menu 252 is included, along with a “refresh” button 292 (since in this example, the databook does not automatically refresh itself). Databook 262 further includes a “swiftseek” tab 286 positioned on its left side, which may be used to rapidly locate one or more rows of interest in the data view, as will be discussed later below. Like databook 162, databook 262 includes zoom in/out control 290, page browsing interface 288, and a total row count indicator 289 (showing a total of 16,638 rows in databook as currently configured). [0094] FIG. 6B shows the result of applying a “gender” grouping to the data. As shown in FIG. 6B, the databook now has grouped the records into “female” and “male.” Additionally, the “annual salary” column has been right-clicked to trigger a group analysis—namely to average the salary. Additionally, the “head count” column has been right-clicked to trigger counting each record. Accordingly, the databook now indicates average salary by gender, as well as a total head count for each gender. In this example, the average salary across the entire set has been computed and a total head count provided at “grand total.” An interested user could browse the records by expanding either gender list. [0095] FIG. 6C demonstrates nested grouping functionality that can be supported in some embodiments. In this example, the “gender” grouping has been kept, but the “state” header has been dragged to palette 272 so that, within each “gender” grouping, the results have been grouped by state. 
Databook 262 now displays average salary and total employees by gender and state. In this example, the average annual salary and head count for female employees in 6 exemplary states are provided. An interested user could drill down to individual records in a particular state by expanding the nested category lists. [0118] the databook includes a “file” and “format” menu. FIG. 9A shows an example of an expanded “file” menu, whereby a user can select options to print, export, send, or save a databook. Additionally, in this example, the user interface allows the user to define “quicklinks” which allow access to a particular databook from a “quicklinks” screen available at any point in the application)
As per Claim 7, Psenka teaches: The system of claim 3, wherein the one or more contextual actions include one or more of
reviewing details of the compensation data item, editing the properties or relationships of the compensation data item, adding or removing relationships between the compensation data item and other related compensation data items, simulating or projecting changes to the compensation data item; submitting changes to the compensation data item for approval; annotating the compensation data item with notes or explanations, or comparing the compensation data item to peer or industry benchmark data. (in at least [0088] FIG. 5C shows an additional display option which may be triggered to provide context for the rank graphs. For example, in some embodiments, the contextual information is triggered by triggered by clicking the “show range” checkbox at the top right of FIG. 5B. Namely, each rank graph 194 now includes a visual representation of where the row value lies within the population. Rank graph 194A shows that, while Professor Black's salary may be high, in reality his salary is far from the maximum. On the other hand, graph 194B indicates that having a single publication indeed places the professor near the bottom of his cohort. In some embodiments, the ranges are displayed using multiple colors to indicate where the median or mean value lies. [0092] FIG. 6A is an example of another databook 262 containing personnel/HR data. Databook 262 features rows 282 and columns 280 of data. In this example, columns 280 include entries as follows: Last Name, First Name, Annual Salary, Headcount (a placeholder for counting records), city, state, country, zip code, home phone, work phone, emergency contact, visa expiration (for some records only), birth date, age, building, department, gender, hire data, and hiring source (not visible in FIG. 6A). For example, databook 262 may originate from a combination of HR and department management data for a university, with the databook being used for administrative purposes [0118] the databook includes a “file” and “format” menu. FIG. 
9A shows an example of an expanded “file” menu, whereby a user can select options to print, export, send, or save a databook. Additionally, in this example, the user interface allows the user to define “quicklinks” which allow access to a particular databook from a “quicklinks” screen available at any point in the application)
As per Claim 8, Psenka teaches: The system of claim 3, wherein the one or more contextual actions include one or more of
navigating to summary or detailed views of compensation data related to the compensation data item; filtering, sorting, or grouping related compensation data based on properties of the compensation data item; highlighting or flagging the compensation data item for follow up; adding tasks, reminders, or appointments related to the compensation data item; notifying other users or roles of changes related to the compensation data item. (in at least [0077] Group palette 72 may be useful in visualizing or ordering data without the need to sort or filter the data. For example, a management-level user may be interested viewing data by department. In some embodiments, software solution 10 allows for more in-depth analysis based on grouping data. Rather than merely sorting by department, the management-level user may wish to invoke the additional functionality by dragging the “department” header to group palette 72. [0092] FIG. 6A is an example of another databook 262 containing personnel/HR data. Databook 262 features rows 282 and columns 280 of data. In this example, columns 280 include entries as follows: Last Name, First Name, Annual Salary, Headcount (a placeholder for counting records), city, state, country, zip code, home phone, work phone, emergency contact, visa expiration (for some records only), birth date, age, building, department, gender, hire data, and hiring source (not visible in FIG. 6A). For example, databook 262 may originate from a combination of HR and department management data for a university, with the databook being used for administrative purposes [0118] the databook includes a “file” and “format” menu. FIG. 9A shows an example of an expanded “file” menu, whereby a user can select options to print, export, send, or save a databook. Additionally, in this example, the user interface allows the user to define “quicklinks” which allow access to a particular databook from a “quicklinks” screen available at any point in the application)
As per Claim 9, Psenka teaches: The system of claim 1, wherein the evaluation of the mapping rules against the flexible ontology and current data state comprises:
accessing a flexible ontology data structure, the flexible ontology data structure including the one or more customizable compensation components, relationships, and properties; (in at least [0057] User interface 12 is used to provide queries to, and render results based upon, data from an operational data set 16. Operational data set 16 can comprise, for example, a relational or other databases, and includes data ultimately maintained in one or more of a plurality of data sources 20. In some embodiments, operational data set 16 may be maintained in a suitable computer-readable medium and can comprise data imported from one or more of sources 20. However, operational data set 16 may comprise a mapping or other metadata providing for direct access to other sources in some embodiments. In FIG. 1, data sources 20 are indicated as 20-1, 20-2, 20-N to indicate that the number of sources may vary from one on upward. Source 20-3 is shown to indicate that, in some embodiments, solution 10 can integrate locally-stored data (e.g. data available at a user's terminal). [0129] FIG. 10D indicates how columns from a particular data source are to be mapped to columns in the operational dataset. In this example, column names from the data source are correlated to names in the operational dataset and types are indicated. Additionally, one or more columns can be indicated as “keys.” One or more components, processes, scripts, or the like can copy information returned from the data source into the operational dataset. Depending upon the configuration parameters, recently-accessed data may overwrite previously-written data in the operational dataset and/or recently-accessed data may be appended to the operational dataset. In some embodiments, data “freshness” is verified by comparing metadata (such as the last update for the operational dataset versus the data source) and only “new” data is written into or appended to the operational dataset. [0067] FIG. 
3 provides an example of a window 50 which may be rendered to select a databook of interest. In this example, contextual menu 52 is provided, along with first and second selection areas 54 and 58. In some embodiments, solution 10 may provide users with multiple collections of databooks. For example, as shown in FIG. 3, a group 56 includes different sets (SET1 through SET4) available for selection. While four sets are shown, an infinite number of collections are possible. Area 58 provides a list 60 of one or more databooks associated with a given set when the set is clicked on by a user. [0151] FIG. 11, another conceptual aspect of the present subject matter can be appreciated. Operational data set 500 comprises a plurality of rows and columns. As was noted before, the dataset can originate from one or more sources. The data set may be updated irregularly based on user commands, on a scheduled basis, or even in response to an indication that an underlying data source has changed. [0070] FIG. 4 depicts a window 62 showing a simplified databook rendered in accordance with one or more aspects of the present subject matter. In this example, the window also includes a contextual menu 52. Contextual menu 52 may, in some embodiments, lead to options whereby a user can define the contents and appearance of the databook and/or export or otherwise distribute the databook contents. [0152] the databooks are defined by parameters 502, the access controls (and databooks) do not need to be redefined every time the underlying data sources change. For example, in a large manufacturing enterprise, sales records may be updated daily, with the operational dataset constantly including more records. A manager may be interested in which salespeople in each state are in the top 10% of total sales for the year. 
The manager can construct a databook, such as by grouping sales figures by state and salesperson, with sales amount totaled and the results filtered to show only the top 10% of sales.)
accessing a current data state structure, the current data state structure including one or more of an employee, role, or compensation data instance mapped to elements of the flexible ontology; and (in at least [0057] User interface 12 is used to provide queries to, and render results based upon, data from an operational data set 16. Operational data set 16 can comprise, for example, a relational or other databases, and includes data ultimately maintained in one or more of a plurality of data sources 20. In some embodiments, operational data set 16 may be maintained in a suitable computer-readable medium and can comprise data imported from one or more of sources 20. However, operational data set 16 may comprise a mapping or other metadata providing for direct access to other sources in some embodiments. In FIG. 1, data sources 20 are indicated as 20-1, 20-2, 20-N to indicate that the number of sources may vary from one on upward. Source 20-3 is shown to indicate that, in some embodiments, solution 10 can integrate locally-stored data (e.g. data available at a user's terminal). [0129] FIG. 10D indicates how columns from a particular data source are to be mapped to columns in the operational dataset. In this example, column names from the data source are correlated to names in the operational dataset and types are indicated. Additionally, one or more columns can be indicated as “keys.” One or more components, processes, scripts, or the like can copy information returned from the data source into the operational dataset. Depending upon the configuration parameters, recently-accessed data may overwrite previously-written data in the operational dataset and/or recently-accessed data may be appended to the operational dataset. In some embodiments, data “freshness” is verified by comparing metadata (such as the last update for the operational dataset versus the data source) and only “new” data is written into or appended to the operational dataset. [0067] FIG. 
3 provides an example of a window 50 which may be rendered to select a databook of interest. In this example, contextual menu 52 is provided, along with first and second selection areas 54 and 58. In some embodiments, solution 10 may provide users with multiple collections of databooks. For example, as shown in FIG. 3, a group 56 includes different sets (SET1 through SET4) available for selection. While four sets are shown, an infinite number of collections are possible. Area 58 provides a list 60 of one or more databooks associated with a given set when the set is clicked on by a user. [0151] FIG. 11, another conceptual aspect of the present subject matter can be appreciated. Operational data set 500 comprises a plurality of rows and columns. As was noted before, the dataset can originate from one or more sources. The data set may be updated irregularly based on user commands, on a scheduled basis, or even in response to an indication that an underlying data source has changed. [0070] FIG. 4 depicts a window 62 showing a simplified databook rendered in accordance with one or more aspects of the present subject matter. In this example, the window also includes a contextual menu 52. Contextual menu 52 may, in some embodiments, lead to options whereby a user can define the contents and appearance of the databook and/or export or otherwise distribute the databook contents. [0152] the databooks are defined by parameters 502, the access controls (and databooks) do not need to be redefined every time the underlying data sources change. For example, in a large manufacturing enterprise, sales records may be updated daily, with the operational dataset constantly including more records. A manager may be interested in which salespeople in each state are in the top 10% of total sales for the year. 
The manager can construct a databook, such as by grouping sales figures by state and salesperson, with sales amount totaled and the results filtered to show only the top 10% of sales.)
applying the mapping rules, the mapping rules comparing one or more properties of compensation data instances to specified thresholds or values. (in at least [0057] User interface 12 is used to provide queries to, and render results based upon, data from an operational data set 16. Operational data set 16 can comprise, for example, a relational or other databases, and includes data ultimately maintained in one or more of a plurality of data sources 20. In some embodiments, operational data set 16 may be maintained in a suitable computer-readable medium and can comprise data imported from one or more of sources 20. However, operational data set 16 may comprise a mapping or other metadata providing for direct access to other sources in some embodiments. In FIG. 1, data sources 20 are indicated as 20-1, 20-2, 20-N to indicate that the number of sources may vary from one on upward. Source 20-3 is shown to indicate that, in some embodiments, solution 10 can integrate locally-stored data (e.g. data available at a user's terminal). [0129] FIG. 10D indicates how columns from a particular data source are to be mapped to columns in the operational dataset. In this example, column names from the data source are correlated to names in the operational dataset and types are indicated. Additionally, one or more columns can be indicated as “keys.” One or more components, processes, scripts, or the like can copy information returned from the data source into the operational dataset. Depending upon the configuration parameters, recently-accessed data may overwrite previously-written data in the operational dataset and/or recently-accessed data may be appended to the operational dataset. In some embodiments, data “freshness” is verified by comparing metadata (such as the last update for the operational dataset versus the data source) and only “new” data is written into or appended to the operational dataset. [0114] FIG. 
8E shows an example of a trend analysis, namely a trend analysis chart 410 showing salary for John Smoltz over time. Chart 410 includes a trend evaluation indicator 411 which reads “Fit HIGH, r: 0.962” to indicate the confidence in the trend analysis. One of skill in the art will recognize that any suitable number or type of curve-fitting algorithms can be applied to the data to attempt to identify a trend. In some embodiments, the trend analysis output is not made available unless the confidence value meets a predetermined threshold.)
As per Claim 10, Psenka teaches: The system of claim 1, further comprising:
…on historical compensation data to identify relationships between compensation … and employee performance or retention outcomes; (in at least [0019] an end user evaluating performance over time may be interested in salary histories, and thus may desire a report with all salaries for each person, while another end user evaluating cash flow during the current time period may desire a report with current salaries only.)
… to simulate or project potential impacts of … on performance or retention; and (in at least [0025] trend analysis for one or more columns. For example, if rows of a particular dataset can be recognized as referring to the same subject (e.g. by sharing a particular column value such as “name”) and those rows include a plurality of changing values (e.g. “salary” by “year”), the data visualization mechanism can include a trend analysis for the changing value for each other value of interest (e.g. a “salary” trend for a particular “name”). The trend analysis can, for example, be implemented by using any suitable curve fitting algorithm. The trend analysis can further include a forecast element—for instance, in some embodiments, a sliding bar or other input can be used in order for an end-user to indicate a desired forecast range.)
outputting projected outcomes of compensation … for display via a graphical user interface. (in at least [0115] Chart 410 also includes a forecast interface 412, in this example, a slider bar. FIG. 8F shows an example of the chart being extended to the maximum amount based on input provided via the slider bar. As shown at 414, the trend has been continued based on the trend analysis as indicated at 416.)
Although implied, Psenka does not expressly disclose the following limitations, which are, however, taught by Qamar:
training a machine learning model on historical compensation data to identify relationships between compensation actions and employee performance or retention outcomes; (in at least [0054] a series of regression models may be built and evaluated using a training subset of the organization data and/or the optional external data. In these regression models, factors may be removed one at a time, and the remaining factors may be reordered. These permutations and combinations on subsets of the set of factors may provide a table of predictions for the different regression models (i.e., statistical comparison between predictions of the regression models for a test subset of the organization data and/or optional external data relative to the training subset). The average model performance for the factors, the cross-correlations among the factors and/or the ordering of the factors in these predictions may be used to select the polynomial (factors, exponents n and amplitude weights wi) using to calculate the performance metric and/or to determine the retention risk. Thus, variance decomposition may allow the number of factors in the organization data and/or the optional external data to be pruned to reduce the risk of over fitting. [0058] the retention suggestion may be to offer additional training opportunities to the employee to help them improve their skills. Thus retention suggestion may cost $20,000, but may be predicted to keep the employee from leaving for several months, which may more than offset the incremental expense (thereby justifying the use of the retention suggestion). More generally, the retention suggestion may include an action that may keep the employee from leaving (such as: a one-time bonus, a pay increase, a promotion, a change in title, a change in work responsibility, additional training, changing the employee's supervisor, recognition among other employees, etc.). 
The retention suggestion and/or the cost-benefit analysis may be provided to the manager or the supervisor of the employee in the organization, and/or to the representative of human resources for the organization. [0082] Using the regression model (which may be used for one employee or multiple employees), and the aforementioned factors in the organization data and the optional external data, the performance metric and retention risk of employee Bob Smith at the company may be determined. The results may indicate that Bob's customer satisfaction performance during the last week has been extremely (relative to his historic baseline) varied, and that his overtime has reduced. This may indicate an 82% increased likelihood that Bob may leave the company within a week. [0083] However, Bob may be a high performing employee. In particular, company ABC may consider employees that produce more widgets per hour valuable. Based on his average productivity in this regard (holding constant factors such as work location or job type), Bob may be in the top 5% of employees. Consequently, a retention suggestion may be provided. This retention suggestion may indicate that by giving Bob a financial award as an ‘outstanding performer’ is likely to ensure that he stays at the company for at least six months, and that the incremental cost is more than offset by his high productivity.)
applying the trained machine learning model to simulate or project potential impacts of proposed compensation changes on performance or retention (in at least [0054] a series of regression models may be built and evaluated using a training subset of the organization data and/or the optional external data. In these regression models, factors may be removed one at a time, and the remaining factors may be reordered. These permutations and combinations on subsets of the set of factors may provide a table of predictions for the different regression models (i.e., statistical comparison between predictions of the regression models for a test subset of the organization data and/or optional external data relative to the training subset). The average model performance for the factors, the cross-correlations among the factors and/or the ordering of the factors in these predictions may be used to select the polynomial (factors, exponents n and amplitude weights wi) using to calculate the performance metric and/or to determine the retention risk. Thus, variance decomposition may allow the number of factors in the organization data and/or the optional external data to be pruned to reduce the risk of over fitting. [0058] the retention suggestion may be to offer additional training opportunities to the employee to help them improve their skills. Thus retention suggestion may cost $20,000, but may be predicted to keep the employee from leaving for several months, which may more than offset the incremental expense (thereby justifying the use of the retention suggestion). More generally, the retention suggestion may include an action that may keep the employee from leaving (such as: a one-time bonus, a pay increase, a promotion, a change in title, a change in work responsibility, additional training, changing the employee's supervisor, recognition among other employees, etc.). 
The retention suggestion and/or the cost-benefit analysis may be provided to the manager or the supervisor of the employee in the organization, and/or to the representative of human resources for the organization. [0082] Using the regression model (which may be used for one employee or multiple employees), and the aforementioned factors in the organization data and the optional external data, the performance metric and retention risk of employee Bob Smith at the company may be determined. The results may indicate that Bob's customer satisfaction performance during the last week has been extremely (relative to his historic baseline) varied, and that his overtime has reduced. This may indicate an 82% increased likelihood that Bob may leave the company within a week. [0083] However, Bob may be a high performing employee. In particular, company ABC may consider employees that produce more widgets per hour valuable. Based on his average productivity in this regard (holding constant factors such as work location or job type), Bob may be in the top 5% of employees. Consequently, a retention suggestion may be provided. This retention suggestion may indicate that by giving Bob a financial award as an ‘outstanding performer’ is likely to ensure that he stays at the company for at least six months, and that the incremental cost is more than offset by his high productivity.)
outputting projected outcomes of compensation change proposals for display via a graphical user interface (in at least [0073] by right-clicking on or touching a data point in user interface 300 (FIG. 3) (or by selecting the data point for an employee and activating a ‘history’ icon), a menu may be displayed. Selecting a ‘history’ option may result in the display of a graph of employee value 310 and retention risk 312 as a function of time 510 (FIG. 5) for an employee. This is shown in FIG. 5, which presents a drawing of a user interface 500. This user interface may allow the user to visually assess trends for the employee.)
The rationale to combine Psenka and Qamar is the same as recited above.
As per Claim 12, Psenka teaches: The system of claim 1,
wherein the flexible ontology further comprises one or more configurable compensation band entities representing pay ranges for a job, each associated with one or more properties including a job function, job level, minimum pay, or currency. (in at least [0025] rows of a particular dataset can be recognized as referring to the same subject (e.g. by sharing a particular column value such as “name”) and those rows include a plurality of changing values (e.g. “salary” by “year”), the data visualization mechanism can include a trend analysis for the changing value for each other value of interest (e.g. a “salary” trend for a particular “name”). The trend analysis can, for example, be implemented by using any suitable curve fitting algorithm. The trend analysis can further include a forecast element—for instance, in some embodiments, a sliding bar or other input can be used in order for an end-user to indicate a desired forecast range. [0074] this databook includes column headers for record number, name, address, salary, department, and awards. A user interested in viewing only those employees earning above $40,000 could drag the “salary” column header to filter palette 74. In some embodiments, this will trigger the rendering of one or more input windows where a filter can be defined in more detail. For example, the input window(s) may allow a user to specify a desired range for the salary value. [0110] In FIG. 8C, swiftseek has been used (or records have otherwise been browsed) so that a plurality of rows corresponding to records for player “John Smoltz” are visible. In this example, several rows for “John Smoltz” are shown, with each row having a different year, slug percentage, and salary. A user interested in evaluating this player over time can click on “evaluate” tab 368 for an in-depth analysis. For example, a baseball manager may be interested in John Smoltz's salary over time. [0111] FIG. 8D shows the “evaluate” results. 
In this case, the evaluate function has compared John Smoltz's slug percentage to all other rows. Unless additional grouping or filtering is applied, this particular comparison may not be informative, since the data point for this row is zero (e.g. the dataset may have been empty) as indicated by graph 394A. However, the item of interest, salary, has been compared to the salary value for each of the 88,685 other rows and indicates that Smoltz's salary ranks 156th out of the set, putting him above 99% of the records as shown by graph 394B. [0116] salary data is presented as a histogram. The vertical axis 418 indicates the number of records with data in the column of interest that fall within a particular sub-range, with the horizontal axis showing a plurality of sub-ranges in the range of values for the column of interest across the dataset. In this example, as shown at 422, the histogram includes a highlighted portion showing where the record of interest falls in the histogram. In this particular example, about 86,000 records have a “salary” value between zero and 2.8 million. The record being evaluated (John Smoltz for 2004 at $11,677,000) falls within a range of $11,571,429 to $14,166,667. The histogram view may allow for easy identification of whether a particular record is an outlier.)
As per Claim 13, Psenka teaches: The system of claim 12,
wherein one or more configurable compensation band rules are used to evaluate the one or more properties and dynamically map bands to different hierarchy levels. (in at least [0077] Group palette 72 may be useful in visualizing or ordering data without the need to sort or filter the data. For example, a management-level user may be interested viewing data by department. In some embodiments, software solution 10 allows for more in-depth analysis based on grouping data. Rather than merely sorting by department, the management-level user may wish to invoke the additional functionality by dragging the “department” header to group palette 72. [0078] a user groups data by “department,” the resulting data view will depict rows grouped together based on the value of the “department” column. In this example, the databook will show a grouping for “finance,” “executive,” “sales,” “maintenance,” and “engineering.” For instance, in some embodiments, each group is depicted as a nested list of records. [0079] The “group” command may trigger one or more contextual menus based on the type of data upon which the group is based. For example, if a column upon which grouping will be based contains date information, the user may be presented with options to group by exact date, month, week, quarter, fiscal year, or other suitable organizational units by which the column values could be divided. As another example, a “location” type of column could group by city, state, county, country, etc. Generally speaking, the “group” command can support grouping by different levels within an organizational structure; to do so, of course, the databook should be configured to consult an appropriate record indicating the different levels of organization when constructing a particular group. In some embodiments, the different levels of organization are determined manually and/or automatically by ETL module 18 during construction or updating of the operational dataset upon which the databook is based. 
[0081] a user may right-click the “salary” or “award” column headers to indicate one or more operations to be performed on salary or award data in the group analysis. As one example, a manager may be interested in total awards by department. The manager can right-click the award column header and select “sum” before refreshing the data. The resulting databook view can include each group and can indicate a sum of awards for each group. Similarly, an available option may be the average of a column value. The manager may right-click the “salary” column header before refreshing the data and indicate that the “salary” value is to be averaged. The resulting data view will then include a listing of groups, the average salary, and total awards for each group. [0088] FIG. 5C shows an additional display option which may be triggered to provide context for the rank graphs. For example, in some embodiments, the contextual information is triggered by clicking the “show range” checkbox at the top right of FIG. 5B. Namely, each rank graph 194 now includes a visual representation of where the row value lies within the population. Rank graph 194A shows that, while Professor Black's salary may be high, in reality his salary is far from the maximum. On the other hand, graph 194B indicates that having a single publication indeed places the professor near the bottom of his cohort. In some embodiments, the ranges are displayed using multiple colors to indicate where the median or mean value lies)
As per Claim 14, Psenka teaches: The system of claim 13, the operations further comprising
rendering interactive band hierarchy visualizations based on the one or more configurable compensation band rules, responding to changes to band definitions or rules by incrementally updating the interactive band hierarchy visualizations, allowing users to filter, expand, collapse or drill-down through the interactive band hierarchy visualizations. (in at least [0075] Once the filter has been defined, it can be applied to the data. For instance, the databook may include a button or other suitable interface mechanism that a user can click to trigger a refresh of the data, or the data may refresh automatically. In any event, to refresh the data, the user interface can provide a query to the operational dataset, with the query in this example based on the defined filter. The data returned for display will then include only those records where “salary” is above $40,000 for this example. [0088] FIG. 5C shows an additional display option which may be triggered to provide context for the rank graphs. For example, in some embodiments, the contextual information is triggered by clicking the “show range” checkbox at the top right of FIG. 5B. Namely, each rank graph 194 now includes a visual representation of where the row value lies within the population. Rank graph 194A shows that, while Professor Black's salary may be high, in reality his salary is far from the maximum. On the other hand, graph 194B indicates that having a single publication indeed places the professor near the bottom of his cohort. In some embodiments, the ranges are displayed using multiple colors to indicate where the median or mean value lies [0095] FIG. 6C demonstrates nested grouping functionality that can be supported in some embodiments. In this example, the “gender” grouping has been kept, but the “state” header has been dragged to palette 272 so that, within each “gender” grouping, the results have been grouped by state.
Databook 262 now displays average salary and total employees by gender and state. In this example, the average annual salary and head count for female employees in 6 exemplary states are provided. An interested user could drill down to individual records in a particular state by expanding the nested category lists. [0099] The “filter builder” window context will depend on the type of data in the column to be filtered. As another example, non-date numerical data may be filtered by range, value, percentage within the set, and the like. Textual data may be filtered using Boolean operators. In some embodiments, when filtering by textual data, the software provides a pick list based on values of the column. For example, if a filter were being constructed based on “department,” the filter builder window could provide a listing of values appearing in the department column of the dataset for easy selection. [0129] Depending upon the configuration parameters, recently-accessed data may overwrite previously-written data in the operational dataset and/or recently-accessed data may be appended to the operational dataset. In some embodiments, data “freshness” is verified by comparing metadata (such as the last update for the operational dataset versus the data source) and only “new” data is written into or appended to the operational dataset.)
As per Claim 15, Psenka teaches: The system of claim 1,
wherein the flexible ontology further comprises configurable workflow process entities defining compensation review and approval chains, including process steps, rules, or routing between configurable system roles. (in at least [0136] Users in a “supervisor” role may have access to data from all columns. Users (such as B) in an “assistant” role may only have access to names, addresses, and non-sensitive data. Users in a “finance” role, such as user C, may have access to salary data, but not necessarily to personal data. For example, the dataset may include much more data than shown in FIG. 4, such as social security number, emergency contact, and the like. [0138] RBAC may be implemented in any suitable way, such as by accessing a policy limiting column availability to certain roles, with each user classified in a role as part of establishing an account (i.e. login, password) for the user. In some instances, it may be advantageous to restrict users from adding prohibited columns to databooks or to define new databooks. In some embodiments, a column may be “present” in a databook, but the RBAC policy may render the data invisible or inhibit display of the data to avoid the need to remove or add columns to a databook based on role. In other embodiments, the RBAC policy adjusts the query made to the operational dataset so that the results that are returned do not include the excluded columns. [0147] first department head is concerned, the databook “contains” only information from his/her department, so the “grand total,” even if displayed, will be limited to his/her department. This may be the case, for example, if the security policy (or policies) are implemented as restrictions on what queries are actually performed for certain users and/or adjustments to queries performed for certain users.)
As per Claim 16, Psenka teaches: The system of claim 1, the operations further comprising
rendering projected compensation metrics … based on rules encoded within the flexible ontology. (in at least [0025] the data visualization mechanism can include a trend analysis for the changing value for each other value of interest (e.g. a “salary” trend for a particular “name”). The trend analysis can, for example, be implemented by using any suitable curve fitting algorithm. The trend analysis can further include a forecast element—for instance, in some embodiments, a sliding bar or other input can be used in order for an end-user to indicate a desired forecast range. [0081] a user may right-click the “salary” or “award” column headers to indicate one or more operations to be performed on salary or award data in the group analysis. As one example, a manager may be interested in total awards by department. The manager can right-click the award column header and select “sum” before refreshing the data. The resulting databook view can include each group and can indicate a sum of awards for each group. Similarly, an available option may be the average of a column value. The manager may right-click the “salary” column header before refreshing the data and indicate that the “salary” value is to be averaged. The resulting data view will then include a listing of groups, the average salary, and total awards for each group. [0087] the databook of FIG. 5A contained 309 rows. This may be the “raw” databook view in some cases, while in other cases, the “evaluate” function is carried out on data that has been filtered, sorted, and/or grouped based on other databook definition parameters. In this example, ranking graph 194A depicts the relative position of the row's “total salary” value, while ranking graph 194B depicts the relative position of the row's “total pubs” value. 
In this example, the row corresponding to hypothetical Professor Black was selected, and it can be seen that his total salary of $122,046.40 puts him at 66% relative to the salaries of others in the population, while his total publication value (“1”) puts him only at 29% when compared to other publication counts in the population. In this example, each ranking graph includes the row value and a rank number. [0115] Chart 410 also includes a forecast interface 412, in this example, a slider bar. FIG. 8F shows an example of the chart being extended to the maximum amount based on input provided via the slider bar. As shown at 414, the trend has been continued based on the trend analysis as indicated at 416.)
Although implied, Psenka does not expressly disclose the following limitations, which, however, are taught by Qamar:
rendering projected compensation metrics and providing guidance during compensation determination based on rules encoded within the flexible ontology (in at least [0072] By activating an icon, such as by clicking on or touching a slider, the user may change the scale in the organization that is presented. For example, by moving slider 314, the user may view the aggregate value and retention risk for employees in different groups or departments in the organization. Alternatively, the user may view the aggregate value and retention risk for the employees of different managers. This is shown in FIG. 4, which presents a drawing of a user interface 400. Note that data points in user interface 300 (FIG. 3) may be color coded to indicate associations of particular employees with different groups in the organization and/or with different managers. [0074] by right-clicking on or touching a data point in user interface 300 (FIG. 3) (or by selecting the data point for an employee and activating a ‘retention’ icon), and then selecting a ‘retention’ option, may result in the display of one or more retention suggestions 610 (FIG. 6) and an associated cost-benefit analysis 612 (FIG. 6) for the employee. This is shown in FIG. 6, which presents a drawing of a user interface 600. Note that the one or more retention suggestions 610 may be ordered or ranked. This information may present options for the user to use in retaining the employee. In addition, the displayed cost-benefit analysis 612 may allow the user to determine whether a particular retention suggestion is worthwhile or pays for itself. User interface 600 may include intuitive information to assist the user in this regard. For example, retention suggestions that are likely to be worthwhile (either financially or per predefined user criteria) may have a different color than those that are marginal or unlikely to be worthwhile.)
The reason and rationale to combine Psenka and Qamar are the same as recited above.
As per Claim 17, Psenka teaches: The system of claim 1, the operations further comprising
propagating compensation changes bidirectionally through integration application programming interfaces (APIs) to synchronize with one or more external payroll, human resources, or talent management systems. (in at least [0003] an organization may maintain employee performance data using a first data management system, insurance information in a second system, payroll information in a third system, and employee performance data in a fourth system. [0026] The use of the databook interface may advantageously decouple the data reporting operations from data collection operations in some embodiments. For example, an end-user who is relatively un-technologically-savvy may be able to visualize data of interest by dragging grouping, filtering, sorting, and other parameters to define particular reports of interest. Data defining the current databook view can be stored in any particular format, and will generally specify the columns displayed, and the user-defined parameters (e.g. grouping, filtering, sorting, and/or other parameters). Thus, the databook can be shared amongst multiple users simply by providing the data defining the databook (assuming such other users have access to the operational dataset). If such access is unavailable or unwarranted, the software can support exporting data in any suitable format. [0057] operational data set 16 may be maintained in a suitable computer-readable medium and can comprise data imported from one or more of sources 20. However, operational data set 16 may comprise a mapping or other metadata providing for direct access to other sources in some embodiments. In FIG. 1, data sources 20 are indicated as 20-1, 20-2, 20-N to indicate that the number of sources may vary from one on upward. Source 20-3 is shown to indicate that, in some embodiments, solution 10 can integrate locally-stored data (e.g. data available at a user's terminal). 
[0126] A “live” import can be supported in some embodiments in which, instead of extracting data from a data source and bringing the data in the ODS, the data remains in the data source and the solution 10 queries the data source directly based on the configuration parameters. This is advantageous in the scenario where the data source contains a very large volume of data and it is not possible to extract, transform, and load the data into the ODS in a timely fashion. Additionally or alternately, this may be advantageous in the scenario where the data source is collecting data in real-time and the user needs up-to-the-minute reporting capabilities. In this case, copying the data from the data source to the ODS would inject a time-delay in the reporting thereby creating databooks with less than real-time accuracy.)
As per Claim 18, Psenka teaches: The system of claim 1, the operations further comprising
generating immutable audit trails of compensation actions, context, explanations, approvals, or supporting decisions for regulatory compliance. (in at least [0068] for a very large organization, solution 10 may be configured to assemble a plurality of operational data sets. For instance, an operational data set of HR data may be collected, an operational data set of manufacturing/sales data may be collected, and an operational data set of data related to governmental compliance may be collected. The composition of the data sets will, of course, depend on what ultimately will be desired for databooks.)
As per Claims 19 and 20, directed to a method (see at least Psenka [0160]) and a non-transitory computer-readable medium (see at least Psenka [0161]), respectively, these claims substantially recite the subject matter of Claim 1 and are rejected based on the same reasoning and rationale.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PO HAN MAX LEE whose telephone number is (571)272-3821. The examiner can normally be reached on Mon-Thurs 8:00 am - 7:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Rutao Wu can be reached on (571) 272-6045. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PO HAN LEE/Primary Examiner, Art Unit 3623