Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Status of the Application
The following is a Final Office Action.
The IDS filed on 2/10/2026 is acknowledged and has been considered by the Examiner.
In response to the Examiner's communication of 11/3/2025, Applicant responded on 2/10/2026, amending claims 1, 3, 6, 7, 12, 13, and 18.
Claims 1-21 are pending in this application and have been examined.
Response to Amendment
Applicant's amendments to claims 1, 3, 6, 7, 12, 13, and 18 are not sufficient to overcome the 35 USC § 101 rejections set forth in the previous action.
Response to Arguments – 35 USC § 101
Applicant’s arguments with respect to the rejections have been fully considered, but they are not persuasive.
Applicant submits, “…The Interactive Radar Chart Recites Dynamic Computer Functionality, Not a Mental Process...claim 1 recites an interactive graphical user interface with specific dynamic behavior that, contrary to the Office's characterization, cannot be "practicably performed in the human mind." As amended, claim 1 requires "generating a graphical user interface displaying... an interactive radar chart" within a structured report and further requires that upon "receiving second user input" updating the selection of evaluations, the system "generat[es] an updated graphical user interface displaying an updated structured report" based on the updated selection. These are stateful, event-driven UI updates to a rendered visualization, not mental steps or static paper forms that, as alleged by the Office, can, "under the broadest reasonable interpretation," be performed by "a human mind and using pen and paper…The Office's Prong 1 analysis then treats this abbreviated version as part of its judicial exception identification, reserving consideration of "interactive" solely as an additional element in the discussion of Step 2A Prong 2. This piecewise treatment of the claims is improper. As claimed, "interactive" is a qualifier that adds a meaningful limitation on the nature of the radar chart. It is not an additional element that is separable from the term that it qualifies; doing so to read out the claim's interactive, computer-implemented nature is unreasonable under broadest reasonable interpretation...regarding any characterization of the claimed functionality as reducible to mental or pen-and-paper-based activity, doing so would be contrary to the invention's objective of a secure, trusted environment confined to the authenticated application. Because the architecture of these claims center on a private user space, anonymization, and invite-only access permissions that confine evaluations and analytics to authenticated clients and servers. 
Physical, off-system artifacts do not benefit from these security measures and result in an increased risk inadvertent disclosure to colleagues and external parties, thus undermining user trust…when properly construed in light of the foregoing remarks, the interactive radar chart reflects dynamic, system-driven behavior that cannot reasonably be characterized as a mental process. Accordingly, the identification of this and other as judicial exceptions under Step 2A, Prong 1 is inappropriate such that Applicant respectfully requests reapplication of the Alice/Mayo framework considering the foregoing remarks…The Claims Integrate Any Judicial Exceptions Present Into a Practical Application… Applicant would like to draw attention to the Office's memorandum dated August 4, 2025….The technological elements required by claims 1-21 are not incidental mechanisms for implementing an abstract concept; they are integral, problem-specific structures whose functionality is expressly shaped by various challenges that the claims are designed to address….As stated in the specification, "traditional performance management processes are not adapted to provide interactive tools for employee growth and development" (Specification, p. 2). The specification further explains that conventional, manager-driven evaluations "often suffer[] from drawbacks, causing dissatisfaction for both the manager and the employee and failing to achieve the primary goal of improving the performance of the individuals and the organization," and that a core "problem can be complicated by not having real and identifiable numerical measurements that are normalized across an organization to accurately and objectively measure employee performance and growth" (pp. 1-2). It criticizes existing applications as being oriented to HR administration, "limit[ing] the employees accesses to the evaluations" (id.). 
The document also notes that, in typical systems, the "meaning of the combination of evaluation information is left for the subjective understanding of the employee and the human resources department," and that "conventional tools do not implement advanced features that allows for self-evaluation analysis and detection of indirect information as a function of the inter-relationship of the different attributes" (pp. 8-9), underscoring the absence of standardized, comparable, multi- source analytics.…The claims address the shortcomings of traditional, static performance-management tools by implementing a specific, stateful GUI workflow that consolidates detected classifications and an interactive radar chart within a structured report, and then dynamically re-generates the interface in direct response to user-selection changes, thereby providing real-time, user-driven insights rather than leaving meaning to subjective interpretation or forcing users to traverse multiple screens to access comparable, multi-source analytics. The architecture confines operations to a server-implemented application exposed via a browser within an access-controlled private space, which structures where analytics execute, how results are delivered, and how state is persisted for historical review. These concrete GUI mechanisms, defined system boundaries, and stateful event-driven updates collectively provide a practical, technological solution to the specification-identified problems of non-interactive evaluations, limited and non-comparable inputs, and the lack of real-time, objective insight...the specification describes various security issues in existing approaches that are addressed through the integration of computational elements. It stresses the need for a "trusted environment" so users "are not random people from the public ... that can view evaluation or can comment (or troll) the system participants," (pp. 
46-47) highlighting risks from open or poorly gated systems; it further warns about "unauthorized individuals or individuals outside the enterprise" gaining "access to the enterprise applications," (p. 18) underscoring enterprise-bound authentication requirements; and it recognizes privacy threats where "human access to the data ... may compromise an evaluator's identity," (p. 33) pointing to the need for controls that minimize the risk of identity exposures….The claims address these security and privacy concerns by operating within an access-controlled private space that the user configures and populates, thereby ensuring participants are not "random people from the public," while the server-implemented architecture enforces enterprise-bound authentication and controlled participation to prevent unauthorized individuals outside the enterprise from accessing evaluation data…Within this integral client server context, the system maintains state and coordinates interactions through defined GUI behaviors, including embedding the interactive radar chart within a structured report and generating an updated interface in response to user selections. These controlled behaviors keep sensitive analytics within the application's secure workflows rather than exposing them through ad hoc exports or uncontrolled interfaces. Further, by channeling multi-source ratings through a defined pipeline that detects classifications from fixed inputs and renders them only within the private, authenticated session, the design reduces unnecessary human access to raw contributor identities and supports privacy-preserving presentation that mitigates identity exposure risks recognized in the specification. These architectural and interface-level constraints do more than state a field of use; they are the technical scaffolding that integrates the analytics into a secure, trustworthy computing environment tailored to the problem context…. 
the specification describes a critical technical implementation challenge: integrating additional systems and objectives "without negatively impacting the operation and effectiveness of the evaluation systems presents a technological problem to developers" (p. 11), situating the invention's architecture and workflow as answers to recognized technical hurdles in deploying secure, responsive, and employee-centric evaluation tooling. The claimed invention addresses the specification's technical challenge of integrating additional systems and objectives without degrading evaluation effectiveness. It does so by organizing computation and presentation into an ordered, event-driven workflow: detecting classifications from defined inputs, generating a structured report with an embedded interactive radar chart, accepting state-changing user inputs, and re-rendering the updated report in real time. This structure isolates integration concerns behind stable, system-defined interfaces and preserves responsiveness. Confined to a particular client server framework and access controlled private space, the system delineates where analytics run and how results are delivered, allowing external objectives or data sources to be incorporated without altering the GUI's concrete behaviors or the persistence model that enables historical review, thereby preventing negative impacts on operation and effectiveness. Because the claimed GUI specifies a particular way of updating and presenting evaluative content, consolidating classifications with an interactive visualization and reducing navigation steps, it mirrors the Core Wireless teaching that an improved user interface can "improve[] the efficiency of using the electronic device" and can "specif[y] a particular manner of summarizing and presenting information." Core Wireless Licensing S.A.R.L., v. LG Electronics, Inc., 880 F.3d 1356 (Fed. Cir. 2018) (referred to in M.P.E.P. § 2106.05(a)). 
These concrete interface constraints preserve technical improvements in surfacing, updating, and using evaluative insights even as additional enterprise systems are linked, which addresses the specification's deployment concerns through defined architecture and interface constraints. Therefore, any judicial exceptions present in the claims are integrated into a practical application under Step 2A Prong 2.…” The Examiner respectfully disagrees.
Unlike the claims at issue in the 2025 Memo and Core Wireless, the claims and the argued elements are, by Applicant's own admission, directed to "…employee growth and development…changing the evaluators used to generate an evaluation report…" As stated in the specification, "traditional performance management processes are not adapted to provide interactive tools for employee growth and development" (Specification, p. 2). The specification further explains that conventional, manager-driven evaluations "often suffer[] from drawbacks, causing dissatisfaction for both the manager and the employee and failing to achieve the primary goal of improving the performance of the individuals and the organization," and that a core "problem can be complicated by not having real and identifiable numerical measurements that are normalized across an organization to accurately and objectively measure employee performance and growth." This is a problem directed to certain methods of organizing human activity (i.e., humans performing self-assessment and peer evaluation in a private room or at home with pen and paper, returning the evaluations to an anonymous ballot box, and mentally tallying and performing mathematical analysis to generate a human performance radar chart with pen and paper), mathematical concepts (i.e., a human using mathematical algorithms to classify human performance patterns), and a mental process (i.e., the same pen-and-paper self-assessment and peer-evaluation activity described above), as established in Step 2A, Prong 1. This problem does not specifically arise in the realm of computer technology; rather, it existed and was addressed long before the advent of computers. Thus, the claims do not recite a technical improvement to a technical problem.
Improvement over the prior art does not make the abstract idea non-abstract. Additionally, pursuant to the broadest reasonable interpretation, as an ordered combination, each of the additional elements is a computing element recited at a high level of generality implementing the abstract idea, and thus amounts to no more than applying the abstract idea with generic computer components (i.e., a graphical user interface), performing extra-solution activities (gathering data and outputting data), and generally linking the abstract idea to a technical environment (i.e., a computer). Therefore, as a whole, the additional elements do not integrate the abstract ideas into a practical application under Step 2A, Prong 2.
Even novel and newly discovered judicial exceptions are still exceptions, despite their novelty. July 2015 Update, p. 3; see SAP America, Inc. v. InvestPic, LLC, No. 2017-2081, slip op. at 2 (Fed. Cir. May 15, 2018).
Simply reciting specific limitations that narrow the abstract idea does not make an abstract idea non-abstract. 79 Fed. Reg. 74631; buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355 (Fed. Cir. 2014); see SAP America at p. 12. As discussed in SAP America, no matter how much of an advance the claims recite, when "the advance lies entirely in the realm of abstract ideas, with no plausibly alleged innovation in the non-abstract application realm," "[a]n advance of that nature is ineligible for patenting." Id. at p. 3.
Claims can recite a mental process even if they are claimed as being performed on a computer. The Supreme Court recognized this in Benson, determining that a mathematical algorithm for converting binary coded decimal to pure binary within a computer’s shift register was an abstract idea. The Court concluded that the algorithm could be performed purely mentally even though the claimed procedures “can be carried out in existing computers long in use, no new machinery being necessary.” 409 U.S. at 67, 175 USPQ at 675. See also Mortgage Grader, 811 F.3d at 1324, 117 USPQ2d at 1699 (concluding that concept of “anonymous loan shopping” recited in a computer system claim is an abstract idea because it could be “performed by humans without a computer”).
In evaluating whether a claim that requires a computer recites a mental process, examiners should carefully consider the broadest reasonable interpretation of the claim in light of the specification. For instance, examiners should review the specification to determine if the claimed invention is described as a concept that is performed in the human mind and applicant is merely claiming that concept performed 1) on a generic computer, or 2) in a computer environment, or 3) is merely using a computer as a tool to perform the concept. In these situations, the claim is considered to recite a mental process.
1. Performing a mental process on a generic computer. An example of a case identifying a mental process performed on a generic computer as an abstract idea is Voter Verified, Inc. v. Election Systems & Software, LLC, 887 F.3d 1376, 1385, 126 USPQ2d 1498, 1504 (Fed. Cir. 2018). In this case, the Federal Circuit relied upon the specification in explaining that the claimed steps of voting, verifying the vote, and submitting the vote for tabulation are “human cognitive actions” that humans have performed for hundreds of years. The claims therefore recited an abstract idea, despite the fact that the claimed voting steps were performed on a computer. 887 F.3d at 1385, 126 USPQ2d at 1504. Another example is Versata, in which the patentee claimed a system and method for determining a price of a product offered to a purchasing organization that was implemented using general purpose computer hardware. 793 F.3d at 1312-13, 1331, 115 USPQ2d at 1685, 1699. The Federal Circuit acknowledged that the claims were performed on a generic computer, but still described the claims as “directed to the abstract idea of determining a price, using organizational and product group hierarchies, in the same way that the claims in Alice were directed to the abstract idea of intermediated settlement, and the claims in Bilski were directed to the abstract idea of risk hedging.” 793 F.3d at 1333; 115 USPQ2d at 1700-01.
2. Performing a mental process in a computer environment. An example of a case identifying a mental process performed in a computer environment as an abstract idea is Symantec Corp., 838 F.3d at 1316-18, 120 USPQ2d at 1360. In this case, the Federal Circuit relied upon the specification when explaining that the claimed electronic post office, which recited limitations describing how the system would receive, screen and distribute email on a computer network, was analogous to how a person decides whether to read or dispose of a particular piece of mail and that “with the exception of generic computer-implemented steps, there is nothing in the claims themselves that foreclose them from being performed by a human, mentally or with pen and paper”. 838 F.3d at 1318, 120 USPQ2d at 1360. Another example is FairWarning IP, LLC v. Iatric Sys., Inc., 839 F.3d 1089, 120 USPQ2d 1293 (Fed. Cir. 2016). The patentee in FairWarning claimed a system and method of detecting fraud and/or misuse in a computer environment, in which information regarding accesses of a patient’s personal health information was analyzed according to one of several rules (i.e., related to accesses in excess of a specific volume, accesses during a pre-determined time interval, or accesses by a specific user) to determine if the activity indicates improper access. 839 F.3d. at 1092, 120 USPQ2d at 1294. The court determined that these claims were directed to a mental process of detecting misuse, and that the claimed rules here were “the same questions (though perhaps phrased with different words) that humans in analogous situations detecting fraud have asked for decades, if not centuries.” 839 F.3d. at 1094-95, 120 USPQ2d at 1296.
3. Using a computer as a tool to perform a mental process. An example of a case in which a computer was used as a tool to perform a mental process is Mortgage Grader, 811 F.3d. at 1324, 117 USPQ2d at 1699. The patentee in Mortgage Grader claimed a computer-implemented system for enabling borrowers to anonymously shop for loan packages offered by a plurality of lenders, comprising a database that stores loan package data from the lenders, and a computer system providing an interface and a grading module. The interface prompts a borrower to enter personal information, which the grading module uses to calculate the borrower’s credit grading, and allows the borrower to identify and compare loan packages in the database using the credit grading. 811 F.3d. at 1318, 117 USPQ2d at 1695. The Federal Circuit determined that these claims were directed to the concept of “anonymous loan shopping”, which was a concept that could be “performed by humans without a computer.” 811 F.3d. at 1324, 117 USPQ2d at 1699. Another example is Berkheimer v. HP, Inc., 881 F.3d 1360, 125 USPQ2d 1649 (Fed. Cir. 2018), in which the patentee claimed methods for parsing and evaluating data using a computer processing system. The Federal Circuit determined that these claims were directed to mental processes of parsing and comparing data, because the steps were recited at a high level of generality and merely used computers as a tool to perform the processes. 881 F.3d at 1366, 125 USPQ2d at 1652-53.
Examiners should keep in mind that both product claims (e.g., computer system, computer-readable medium, etc.) and process claims may recite mental processes. For example, in Mortgage Grader, the patentee claimed a computer-implemented system and a method for enabling borrowers to anonymously shop for loan packages offered by a plurality of lenders, comprising a database that stores loan package data from the lenders, and a computer system providing an interface and a grading module. The Federal Circuit determined that both the computer-implemented system and method claims were directed to “anonymous loan shopping”, which was an abstract idea because it could be “performed by humans without a computer.” 811 F.3d. at 1318, 1324-25, 117 USPQ2d at 1695, 1699-1700. See also FairWarning IP, 839 F.3d at 1092, 120 USPQ2d at 1294 (identifying both system and process claims for detecting improper access of a patient's protected health information in a health-care system computer environment as directed to abstract idea of detecting fraud); Content Extraction & Transmission LLC v. Wells Fargo Bank, N.A., 776 F.3d 1343, 1345, 113 USPQ2d 1354, 1356 (Fed. Cir. 2014) (system and method claims of inputting information from a hard copy document into a computer program). Accordingly, the phrase “mental processes” should be understood as referring to the type of abstract idea, and not to the statutory category of the claim.
Use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data) or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation) does not integrate a judicial exception into a practical application or provide significantly more. See Affinity Labs v. DirecTV, 838 F.3d 1253, 1262, 120 USPQ2d 1201, 1207 (Fed. Cir. 2016) (cellular telephone); TLI Communications LLC v. AV Auto, LLC, 823 F.3d 607, 613, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016) (computer server and telephone unit). Similarly, “claiming the improved speed or efficiency inherent with applying the abstract idea on a computer” does not integrate a judicial exception into a practical application or provide an inventive concept. Intellectual Ventures I LLC v. Capital One Bank (USA), 792 F.3d 1363, 1367, 115 USPQ2d 1636, 1639 (Fed. Cir. 2015).
Claim Rejections – 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-21 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claim 1 (and similarly claims 7 and 13) recites, "A … method of providing a guided … self-evaluation service, comprising the steps of:
…provide employees of a company with the service;
…provide individual users with an automated self- assessment process;
…establish a private space that the individual user can configure to limit access by other employees to the user's private space, … provide access to the private space via a … to the individual users and is configured to invite one or more other users to have access to the individual user's private space;
permitting the individual user to perform an automated self-evaluation using the …, the evaluation comprising numerical ratings for at least five attributes and a free-style text narrative;
permitting the individual users to self-select, using the …, other employees to perform a peer-evaluation of the individual user on the service, the peer evaluation comprising the numerical ratings for the at least five attributes and one or more additional free style text narrative;
receiving and storing, using the …, the numerical ratings for the at least five attributes and the free-style text narratives;
… to store a library of algorithms that are each configured to detect a classification by detecting patterns in the numerical ratings that satisfy one of the algorithms;
detecting one or more of the classifications for individual users using the received numerical ratings for the at least five attributes from the self-evaluation of the individual user and a plurality of the peer evaluations for that individual user;
receiving first user input selecting at least one of the self-evaluation or the plurality of the peer evaluations;
… displaying, for individual users in the individual user's private space a structured report based on the numerical ratings from the selected at least one of the self-evaluation or the plurality of the peer evaluations, the report comprising:
the detected classifications for the individual user, and
an … radar chart displaying the numerical ratings of each of the selected at least one the self-evaluation or the plurality of the peer evaluations for that individual user as a connected polygon shape;
storing the structured report including the detected classifications of the individual user;
receiving second user input updating the selected at least one of the self-evaluation or the plurality of the peer evaluations;
in response to receipt of the second user input:
generating an updated … displaying an updated structured report based on the numerical ratings for the updated selection of the at least one of the self-evaluation or the plurality of peer evaluations; and
storing the updated structured report; and
… to allow the users to return to use the … at later times to again perform the self-evaluation and peer evaluations and allow the user review historical data from previous reports in the private space.”
Analyzing under Step 2A, Prong 1:
The limitations regarding, …providing a guided … self-evaluation service…provide employees of a company with the service; …provide individual users with an automated self- assessment process; …establish a private space that the individual user can configure to limit access by other employees to the user's private space, … provide access to the private space via a … to the individual users and is configured to invite one or more other users to have access to the individual user's private space;
permitting the individual user to perform an automated self-evaluation using the …, the evaluation comprising numerical ratings for at least five attributes and a free-style text narrative; permitting the individual users to self-select, using the …, other employees to perform a peer-evaluation of the individual user on the service, the peer evaluation comprising the numerical ratings for the at least five attributes and one or more additional free style text narrative; receiving and storing, using the …, the numerical ratings for the at least five attributes and the free-style text narratives; … to store a library of algorithms that are each configured to detect a classification by detecting patterns in the numerical ratings that satisfy one of the algorithms; detecting one or more of the classifications for individual users using the received numerical ratings for the at least five attributes from the self-evaluation of the individual user and a plurality of the peer evaluations for that individual user; receiving first user input selecting at least one of the self-evaluation or the plurality of the peer evaluations; … displaying, for individual users in the individual user's private space a structured report based on the numerical ratings from the selected at least one of the self-evaluation or the plurality of the peer evaluations, the report comprising: the detected classifications for the individual user, and an … radar chart displaying the numerical ratings of each of the selected at least one the self-evaluation or the plurality of the peer evaluations for that individual user as a connected polygon shape; storing the structured report including the detected classifications of the individual user; receiving second user input updating the selected at least one of the self-evaluation or the plurality of the peer evaluations; in response to receipt of the second user input: generating an updated … displaying an updated structured report based on the numerical 
ratings for the updated selection of the at least one of the self-evaluation or the plurality of peer evaluations; and storing the updated structured report; and… to allow the users to return to use the … at later times to again perform the self-evaluation and peer evaluations and allow the user review historical data from previous reports in the private space…, under the broadest reasonable interpretation, can be performed by a human mentally or with pen and paper; therefore, the claims are directed to a mental process.
Further, …providing a guided … self-evaluation service…provide employees of a company with the service; …provide individual users with an automated self- assessment process; …establish a private space that the individual user can configure to limit access by other employees to the user's private space, … provide access to the private space via a … to the individual users and is configured to invite one or more other users to have access to the individual user's private space; permitting the individual user to perform an automated self-evaluation using the …, the evaluation comprising numerical ratings for at least five attributes and a free-style text narrative; permitting the individual users to self-select, using the …, other employees to perform a peer-evaluation of the individual user on the service, the peer evaluation comprising the numerical ratings for the at least five attributes and one or more additional free style text narrative; receiving and storing, using the …, the numerical ratings for the at least five attributes and the free-style text narratives; … to store a library of algorithms that are each configured to detect a classification by detecting patterns in the numerical ratings that satisfy one of the algorithms; detecting one or more of the classifications for individual users using the received numerical ratings for the at least five attributes from the self-evaluation of the individual user and a plurality of the peer evaluations for that individual user; receiving first user input selecting at least one of the self-evaluation or the plurality of the peer evaluations; … displaying, for individual users in the individual user's private space a structured report based on the numerical ratings from the selected at least one of the self-evaluation or the plurality of the peer evaluations, the report comprising: the detected classifications for the individual user, and an … radar chart displaying the numerical ratings of each of the selected at 
least one the self-evaluation or the plurality of the peer evaluations for that individual user as a connected polygon shape; storing the structured report including the detected classifications of the individual user; receiving second user input updating the selected at least one of the self-evaluation or the plurality of the peer evaluations; in response to receipt of the second user input: generating an updated … displaying an updated structured report based on the numerical ratings for the updated selection of the at least one of the self-evaluation or the plurality of peer evaluations; and storing the updated structured report; and… to allow the users to return to use the … at later times to again perform the self-evaluation and peer evaluations and allow the user review historical data from previous reports in the private space…, under the broadest reasonable interpretation, describe humans performing self-assessment and peer evaluation, which constitutes managing personal behavior or relationships or interactions between people. Thus, the claims are directed to certain methods of organizing human activity.
Additionally, the limitations …store a library of algorithms that are each configured to detect a classification by detecting patterns in the numerical ratings that satisfy one of the algorithms; detecting one or more of the classifications for individual users using the received numerical ratings for the at least five attributes from the self-evaluation of the individual user and a plurality of the peer evaluations for that individual user;… are mathematical concepts.
Accordingly, the claims are directed to a mental process, certain methods of organizing human activity, and mathematical concepts, and thus the claims are directed to an abstract idea under the first prong of Step 2A.
Analyzing under Step 2A, Prong 2:
This judicial exception is not integrated into a practical application under the second prong of Step 2A.
In particular, the claims recite additional elements beyond the abstract idea identified under Step 2A, Prong 1, such as:
Claims 1, 7, 13: computer-implemented, implementing an application on a server, wherein the application is configured to, browser, implementing the application to, interactive, generating a graphical user interface, configuring the application, a non-transitory computer readable medium storing one or more software applications that causes a computer system to execute, computer-implemented system, one or more computers configured using computer readable instructions stored in non-transitory computer memory
, and pursuant to the broadest reasonable interpretation, as an ordered combination, each of the additional elements is a computing element recited at a high level of generality implementing the abstract idea, and thus amounts to no more than applying the abstract idea with generic computer components. Further, these additional elements generally link the abstract idea to a technical environment, namely the environment of a computer.
Additionally, with respect to "…to perform an automated self-evaluation…", "…receiving and storing…", "…providing…", "…display…", "generating … displaying, for individual users…", and "generating an updated … displaying an updated…", these elements do not add meaningful limitations to integrate the abstract idea into a practical application because they are insignificant extra-solution activity (pre- and post-solution activity), i.e., data gathering – "…to perform an automated self-evaluation…", "…receiving and storing…", "generating … displaying, for individual users…" – and data output – "…providing…", "…display…", "generating an updated … displaying an updated…".
Analyzing under Step 2B:
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception under Step 2B.
As noted above, the aforementioned additional elements beyond the recited abstract idea are not sufficient to amount to significantly more than the recited abstract idea because, as an ordered combination, the additional elements are no more than mere instructions to implement the idea using generic computer components (i.e., apply it).
Additionally, as an ordered combination, the additional elements append the recited abstract idea to well-understood, routine, and conventional activities in the field, as individually evinced by the applicant's own disclosure, as required by the Berkheimer Memo, in at least:
Fig. 1 illustrates one embodiment of the system 100 for soliciting, analyzing, and using employee evaluations. The system includes an evaluation software application or assessment software application (which for brevity is referred to herein as evaluation application) 105 installed on a first electronic device 101(a), and at least a second electronic device 101(b) that can communicate with the evaluation software application 105 (collectively, the evaluation portal system). The first electronic device 101(a) may be used by an evaluatee for, for example, selection of peers and/or coaches, soliciting evaluations from peers and/or coaches, providing self-evaluations, reviewing evaluations (or corresponding analyses), developing skills and/or receiving training based on the evaluations (self or with a coach), or the like, via the evaluation application 105(a). The second electronic device 101(b) may be used by an evaluator (e.g., peer or coach) to, for example, receive solicitations for evaluation, provide secure and anonymous evaluations, and/or assist the evaluatee in development of skills and/or training (e.g., as a coach), or the like, via the evaluation software application 105. The system further includes an evaluation processing software application 110 implemented on one or more servers (evaluation processing system), an analytics software application 115 implemented on one or more servers (evaluation analytics system), and a training software application 120 implemented on one or more servers (training system). The electronic device is preferably a mobile smartphone that is handheld and capable of downloading and installing mobile applications that can communicate through the mobile phone with a server via mobile networks or other wireless networks.
Optionally, the electronic devices can be personal computers, servers, mainframes, virtual machines, containers, gaming systems, televisions, and mobile electronic devices such as tablet computers, laptop computers, and the like. Each of the electronic devices and servers is a computer system that includes a microprocessor and volatile and non-volatile memory to configure the computer system. The computer system also includes a network connection interface that allows the computer system to communicate with another computer system over a network. The evaluation processing software application 110, the analytics software application 115, and the training software application 120 may be implemented on the same servers or different servers. The system may include one or more of the aforementioned software applications, instead of all three. Each software application may also be used independently or in conjunction with another software application (a software application other than the above three software applications) to strengthen or supplement functionalities of the other software application.
For clarification, in FIG. 1, for example, the application is illustrated as a mobile application, but the figures can also be illustrative of a web server implementation by having a web server implemented as part of the application environment and running the service application. The evaluation application can be configured to be implemented as a software as a cloud type application on a browser on the mobile phone or other type of device. The application provided by the web server is provided over a communication network, which can include the Internet. Other web or types of implementations are contemplated.
The system (e.g., available on an employee's mobile phone, which is almost always with the user) provides for quick entry of evaluations. The system is configured to provide employee-to-employee evaluation (one to one), preferably without restrictions such as seniority or role. The system is also configured to maintain the identity of an evaluator who has given an evaluation anonymously. Furthermore, the system is configured to have the evaluations accessed only by an evaluatee or coaches invited by the evaluatee, and not by other employees of the company.
The mobile application (e.g., evaluation software application) is preferably configured to be a lightweight application that places minimal processing or storage requirements on the personal mobile telephone. The mobile application generates the interactive display screen, providing interactive tools. The information that is displayed in the mobile application is retrieved after the user opens the application (from a closed state) and it is running on the mobile phone. The mobile application will request the information in response to the user selecting an interactive option (e.g., the user selects to give an evaluation, and a list of employee names is retrieved). An HTTP GET command can be used for this process. For example, when the user selects the option to view the queue or report of employee-to-employee evaluations given in the enterprise, the mobile application in response transmits a command requesting the messages. The service (e.g., evaluation processing software application) responds by sending a set of the most recent messages that the service received and saved on the enterprise search platform (e.g., system 150 in Fig. 1) implemented in the service. The service can respond by sending a first set and, in response to the user scrolling or searching the messages, the service can send supplemental messages to allow the user to see additional messages in the feed. Preferably, the service and mobile device operate in this incremental process rather than transmitting all or a significant portion of the messages to the mobile application, which can slow down the mobile phone and the service and can raise security issues.
Fig. 8B illustrates a detailed report including sections such as, without limitation, the evaluation progress bar 861, overall scores as a radar chart 862, analysis reports 863 (including sub-parts such as 863(a), 863(b), and 863(c)), and an evaluation viewing section 864 that allows the user to view ratings and free-text evaluations consolidated with respect to each of the attributes. These are described in more detail below. It is highlighted that the system is configured to perform certain processing and in response generate the report in a certain structure. In particular, subparts 863(a), (b), and (c) are configured to report on the corresponding groups of algorithms. In the first group, which in this example is relationship gaps in awareness, the group of algorithms is configured to analyze gaps in the data between self and peers and between self and coach. This can be a geometric area type algorithm to determine geometric or area differences. The system reports on the identified classification and generates information that is displayed that corresponds to the specific classification. The second group, imbalances, corresponds to using algorithms that are configured to evaluate each of the self, peer, and coach generated data and identify a data pattern classification for each. The third group corresponds to an overall score, and the algorithm is configured to generate an overall score. As shown in the example, there are scores that are generated from the five numerical ratings by the coach that are determined by way of grouping and averaging.
Furthermore, as an ordered combination, these elements amount to generic computer components receiving or transmitting data over a network, performing repetitive calculations, electronic record keeping, and storing and retrieving information in memory, which, as held by the courts, are well-understood, routine, and conventional. See MPEP 2106.05(d).
Moreover, the remaining elements of the dependent claims do not transform the recited abstract idea into a patent-eligible invention because these remaining elements merely recite further abstract limitations that provide nothing more than a narrowing of the abstract idea recited in the independent claims.
Looking at these limitations as an ordered combination adds nothing additional that is sufficient to amount to significantly more than the recited abstract idea because they simply provide instructions to use a generic arrangement of generic computer components to "apply" the recited abstract idea, perform insignificant extra-solution activity, and generally link the abstract idea to a technical environment. Thus, the elements of the claims, considered both individually and as an ordered combination, are not sufficient to ensure that the claim as a whole amounts to significantly more than the abstract idea itself. Since there are no limitations in these claims that transform the exception into a patent-eligible application such that these claims amount to significantly more than the exception itself, claims 1-21 are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PO HAN MAX LEE whose telephone number is (571)272-3821. The examiner can normally be reached on Mon-Thurs 8:00 am - 7:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Rutao Wu can be reached on (571) 272-6045. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PO HAN LEE/Primary Examiner, Art Unit 3623