Prosecution Insights
Last updated: April 19, 2026
Application No. 18/456,454

WORKFORCE SENTIMENT MONITORING AND DETECTION SYSTEMS AND METHODS

Final Rejection: §101, §103
Filed: Aug 25, 2023
Examiner: BOYCE, ANDRE D
Art Unit: 3623
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Macorva Inc.
OA Round: 2 (Final)
Grant Probability: 36% (At Risk)
OA Rounds: 3-4
To Grant: 4y 7m
With Interview: 56%

Examiner Intelligence

Career Allow Rate: 36% (224 granted / 620 resolved; -15.9% vs TC avg)
Interview Lift: +19.8% (resolved cases with interview)
Avg Prosecution: 4y 7m; 41 currently pending
Total Applications: 661 across all art units
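As a sanity check, the headline figures above are reproducible from the raw counts. The sketch below assumes the "with interview" figure is simply the career allow rate plus the interview lift; the dashboard does not state its exact methodology, so that additive model is an assumption:

```python
# Reproduce the dashboard's headline examiner statistics from its raw counts.
granted, resolved = 224, 620

# Career allow rate: granted / resolved cases
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # ~36.1%

# Assumed model: allow rate with interview = career rate + interview lift
interview_lift = 0.198
print(f"With interview: {allow_rate + interview_lift:.1%}")  # ~55.9%
```

The result (about 55.9%) matches the dashboard's rounded 56% "With Interview" figure.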

Statute-Specific Performance

§101: 33.6% (-6.4% vs TC avg)
§103: 34.1% (-5.9% vs TC avg)
§102: 17.5% (-22.5% vs TC avg)
§112: 10.8% (-29.2% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 620 resolved cases
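The per-statute deltas above are internally consistent: subtracting each stated delta from its allow rate recovers the same Tech Center average estimate (40%, a figure derived here from the chart data rather than stated in the dashboard). A minimal check:

```python
# Each statute's allow rate and its stated delta vs the Tech Center average.
data = {
    "101": (33.6, -6.4),
    "103": (34.1, -5.9),
    "102": (17.5, -22.5),
    "112": (10.8, -29.2),
}

# Recover the implied TC average for each statute: rate - delta
for statute, (rate, delta) in data.items():
    implied_avg = rate - delta
    print(f"§{statute}: {rate}% ({delta:+}% vs implied TC avg {implied_avg:.1f}%)")
```

Every row implies the same 40.0% TC average, consistent with the "Tech Center average estimate" legend above.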

Office Action

§101, §103
DETAILED ACTION

Response to Amendment

This final Office action is in response to Applicant's amendment filed 7/23/2025. Claims 1-3, 5, 6 and 10-19 have been amended. Claim 20 has been canceled, while claim 21 has been added. Claims 1-19 and 21 are pending. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. The previously pending objection to the specification has been withdrawn. Applicant's arguments filed 7/23/2025 have been fully considered but they are not persuasive. Additionally, Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-19 and 21 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claims are directed to an abstract idea without significantly more. Here, under step 1 of the Alice analysis, apparatus claims 1-15 are directed to at least one memory; and at least one processor that executes instructions stored in the at least one memory, and method claims 16-20 are directed to a series of steps. Thus the claims are directed to a machine and process, respectively.

Under step 2A Prong One of the analysis, the claimed invention is directed to an abstract idea without significantly more. The claims recite sentiment identification and processing, including receiving, generating, processing, summarizing, and providing steps. The limitations of receiving, generating, processing, summarizing, and providing are a process that, under its broadest reasonable interpretation, covers organizing human activity concepts, but for the recitation of generic computer components.
Specifically, the claim elements recite receive ratings data from at least one client device, the ratings data including a plurality of ratings of an individual who is part of an organization with respect to at least one characteristic of the individual, the ratings data responsive to at least one survey; generate a plurality of weights corresponding to the plurality of ratings; process at least the ratings data and the plurality of weights using at least one trained machine learning model to generate an insight associated with the at least one characteristic of the individual based on the ratings data; summarize the ratings data and the insight associated with the at least one characteristic of the individual to generate summary information for use in an interactive interface; and provide the interactive interface, wherein the interactive interface includes the summary information while the at least one recipient device is in use by an authorized user, and wherein at least a portion of the interactive interface expands to include a visualization of the ratings data in response to an interaction with an interactive element of the interactive interface. That is, other than reciting at least one memory, at least one processor, and at least one recipient device, the claim limitations merely cover commercial interactions, including business relations, thus falling within the “Certain Methods of Organizing Human Activity” grouping of abstract ideas. Accordingly, the claims recite an abstract idea.

Under Step 2A Prong Two, the eligibility analysis evaluates whether the claim as a whole integrates the recited judicial exception into a practical application of the exception. This judicial exception is not integrated into a practical application. The claims include at least one memory, at least one processor, and at least one recipient device.
The at least one memory, at least one processor, and at least one recipient device in the steps is recited at a high level of generality, such that it amounts to no more than mere instructions to apply the exception using a generic computer component. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. As a result, the claims are directed to an abstract idea.

The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of at least one memory, at least one processor, and at least one recipient device amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept.

None of the dependent claims recite additional limitations that are sufficient to amount to significantly more than the abstract idea. Claims 2 and 3 further describe the attendees and suggested order of topics. Claims 4-6 further describe analyzing the attendees’ relationships. Claims 7 and 8 recite additional analyzing and suggesting steps. Claims 9-12 recite additional arranging, applying, considering and re-arranging steps. Claim 21 further describes the plurality of ratings of the individual. Similarly, dependent claims 14-16, 18 and 19 recite additional details that further restrict/define the abstract idea. A more detailed abstract idea remains an abstract idea.

Under step 2B of the analysis, the claims include, inter alia, at least one memory, at least one processor, and at least one recipient device.
As discussed with respect to Step 2A Prong Two, the additional elements in the claim amount to no more than mere instructions to apply the exception using a generic computer component. The same analysis applies here in 2B, i.e., mere instructions to apply an exception on a generic computer cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept in Step 2B. There isn’t any improvement to another technology or technical field, or the functioning of the computer itself. Moreover, individually, there are not any meaningful limitations beyond generally linking the abstract idea to a particular technological environment, i.e., implementation via a computer system. Further, taken as a combination, the limitations add nothing more than what is present when the limitations are considered individually. There is no indication that the combination provides any effect regarding the functioning of the computer or any improvement to another technology.

In addition, as discussed in paragraph 00119 of the specification, “FIG. 14 is an example computing system 1400 that may implement various systems and methods discussed herein. The computer system 1400 includes one or more computing components in communication via a bus 1402. In one implementation, the computing system 1400 includes one or more processors 1414.” As such, this disclosure supports the finding that no more than a general purpose computer, performing generic computer functions, is required by the claims.

Viewed as a whole, these additional claim element(s) do not provide meaningful limitation(s) to transform the abstract idea into a patent eligible application of the abstract idea such that the claim(s) amounts to significantly more than the abstract idea itself. Therefore, the claim(s) are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter. See Alice Corporation Pty. Ltd. v. CLS Bank Int’l et al., No. 13-298 (U.S. June 19, 2014).
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-6, 8-19 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Fisher et al (US 20190228357 A1), in view of McLaughlin et al (US 20210256545 A1).
As per claim 1, Fisher et al disclose an apparatus for sentiment identification and processing, the apparatus comprising: at least one memory; and at least one processor that executes instructions stored in the at least one memory (i.e., a computer system 800 configured for operating and processing one or more components of the insight and learning server and system, ¶ 0111) to: receive ratings data from at least one client device, the ratings data including a plurality of ratings of one individual who is part of an organization with respect to at least one characteristic of the individual, the ratings data responsive to at least one survey (i.e., assessment reasons can be a categorized list of one or more reasons which qualify the assessment rating given or selected, where the manager can select one or more for each assessment. For example, in reference to the dashboard or display 100a and extension 100b, some embodiments comprise a display portion 110 that can include selectable answers to a question of why the manager thinks a specific way about an employee, ¶ 0053, wherein an employee check-in will be used to collect the employee's feelings about their job. The system may use the check-in evaluation entered by the manager and the check-in entered by the employee to determine a more accurate and comprehensive sentiment profile of the employee, ¶ 0059); process at least the ratings data using at least one trained machine learning model to generate an insight associated with the at least one characteristic of the individual based on the ratings data (i.e., system may use the check-in evaluation entered by the manager and the check-in entered by the employee to determine a more accurate and comprehensive sentiment profile of the employee. 
Statistical analysis as well as machine learning techniques can be used to determine the sentiment profile, ¶ 0059); summarize the ratings data and the insight associated with the at least one characteristic of the individual to generate summary information for use in an interactive interface; and provide the interactive interface to at least one recipient device for output through the at least one recipient device (i.e., an individual employee's display 600 in accordance with some embodiments of the invention. In some embodiments, the individual employee's display 600 can include a display portion 610 where the manager can view his or her employee's check-in history, rating or sentiment summary comprising a rating, ¶ 0078), wherein the interactive interface includes the summary information while the at least one recipient device is in use by an authorized user (a display of selectable icons, ¶ 0051, wherein the individual employee's display 600 can include a display portion 610 where the manager can view his or her employee's check-in history, rating or sentiment summary comprising a rating, sentiment, or feeling, ¶ 0078), and wherein at least a portion of the interactive interface expands to include a visualization of the ratings data in response to an interaction with an interactive element of the interactive interface (window 570 comprises a section 572 that presents root causes that are typically associated with the identified concern, and a section 573 that contains manager coaching material associated with a root cause. The manager may select a root cause and specific coaching material will be presented in section 573, ¶ 0068). 
Fisher et al does not disclose generate a plurality of weights corresponding to the plurality of ratings, and process at least the ratings data and the plurality of weights using at least one trained machine learning model to generate an insight associated with the at least one characteristic of the individual based on the ratings data. McLaughlin et al disclose the impact-factor-insight system 102 generates the impact-factor scores 308a-308n using a relative weights analysis. In some cases, to prevent inflating the importance of unimportant impact factors, the impact-factor-insight system 102 can determine a correlation between an impact factor and the target 306 by controlling for other impact factors. For example, the impact-factor-insight system 102 can determine relative weights for each impact factor based on examples where the impact factors and the target change at different amounts (¶ 0069). The impact-factor-insight system 102 can include a response analyzer 504 to analyze responses to an electronic survey question. Specifically, the response analyzer 504 can use machine-learning or other language analysis techniques to determine content and characteristics of the responses. For instance, the response analyzer 504 can interpret the meaning of responses for identifying impact factors associated with a target (¶ 0104). Fisher et al and McLaughlin et al are concerned with effective employee assessment. 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include generate a plurality of weights corresponding to the plurality of ratings, and process at least the ratings data and the plurality of weights using at least one trained machine learning model to generate an insight associated with the at least one characteristic of the individual based on the ratings data in Fisher et al, as seen in McLaughlin et al, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. As per claim 2, Fisher et al disclose the at least one insight associated with the at least one characteristic of the individual includes a score for the individual, the score rating the individual according to the at least one characteristic and based on the ratings data (i.e., a rating can provide an overall assessment score that serves as a summary of the state of mind of an employee after a check-in, ¶ 0049). As per claim 3, Fisher et al disclose select a follow-up action from a plurality of possible follow-up actions to generate the insight associated with the at least one characteristic of the individual, wherein the at least one insight includes the follow-up action, the follow-up action to improve the individual with respect to the at least one characteristic (i.e., the insight and learning server and system can propose approaches, strategies, and/or specific actions for improvement at both the employee and organizational level, ¶ 0050). 
As per claim 4, Fisher et al disclose the follow-up action is associated with a training resource to be reviewed by the individual, the training resource selected from a plurality of training resources based on the training resource being associated with the at least one characteristic (i.e., selectable responses to the positive or negative reason of their personal growth at the company includes a career growth plan. In some embodiments, the selectable responses to the positive or negative reason of their role, duties, and challenges includes feeling challenged, appropriate resources and resourcing, personal empowerment, and job or interest alignment, ¶ 0053). As per claim 5, Fisher et al disclose process at least the ratings data using the at least one trained machine learning model to generate a score, wherein the follow-up action is selected based also on the score (i.e., insight and learning server and system can use a combination of analytics-driven logic and machine learning techniques to identify at-risk employees. In some embodiments, the analytics-driven logic can use several periods of assessments in its determination of at-risk employees, and can rely on the overall rating, ¶ 0082, wherein the insight and learning server and system can use one or more analytics-driven logic and machine learning techniques to suggest recommendations to help the manager address issues identified during the assessments, ¶ 0084). Fisher et al does not disclose process at least the ratings data and the plurality of weights using the at least one trained machine learning model to generate a score. McLaughlin et al disclose the impact-factor-insight system 102 generates the impact-factor scores 308a-308n using a relative weights analysis. In some cases, to prevent inflating the importance of unimportant impact factors, the impact-factor-insight system 102 can determine a correlation between an impact factor and the target 306 by controlling for other impact factors. 
For example, the impact-factor-insight system 102 can determine relative weights for each impact factor based on examples where the impact factors and the target change at different amounts (¶ 0069). The impact-factor-insight system 102 can analyze the unstructured responses 302a-302n using machine-learning techniques or other computer language-analysis techniques to parse and interpret the unstructured responses 302a-302n. In one or more embodiments, the impact-factor-insight system 102 generates sentiment scores and/or textual quality scores for the unstructured responses 302a-302n to determine the content and characteristics of the content of each unstructured response (¶ 0062). The impact-factor-insight system 102 can include a response analyzer 504 to analyze responses to an electronic survey question. Specifically, the response analyzer 504 can use machine-learning or other language analysis techniques to determine content and characteristics of the responses. For instance, the response analyzer 504 can interpret the meaning of responses for identifying impact factors associated with a target (¶ 0104). Fisher et al and McLaughlin et al are concerned with effective employee assessment. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include process at least the ratings data and the plurality of weights using the at least one trained machine learning model to generate a score in Fisher et al, as seen in McLaughlin et al, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. 
As per claim 6, Fisher et al disclose the at least one insight associated with the at least one characteristic of the individual includes customized content generated using the at least one trained machine learning model based on at least the ratings data, wherein the customized content is generated to be associated with the at least one characteristic (i.e., one or more analytics-driven logic and machine learning techniques to suggest recommendations to help the manager address issues identified during the assessments. One of the goals of the service is to identify recommendations that help a manager proactively address issues that an employee might be having. In some embodiments, the service can use machine learning to determine if one or more recommendations should be suggested for an employee, ¶ 0084). As per claim 8, Fisher et al disclose the customized content includes a development plan for the individual, the development plan identifying at least one action to improve the individual with respect to the at least one characteristic (i.e., by identifying individual employee trends, the insight and learning server and system can enable a manager to have better visibility into that employee's state of mind, and thereby determine whether an individual plan is necessary, ¶ 0063). As per claim 9, Fisher et al disclose the customized content includes a summary of the ratings data (i.e., the individual employee's display 600 can include a display portion 610 where the manager can view his or her employee's check-in history, rating or sentiment summary, ¶ 0078). 
As per claim 10, Fisher et al disclose the rating data is received at a first time, wherein the customized content includes a prediction of performance of the individual at a second time with respect to the at least one characteristic, wherein the second time is after the first time (i.e., the ML component 725 of the service component architecture 700 can be used primarily to identify at-risk employees, select one or more relevant recommendations for an employee, and identify predictive trends of future behavior, ¶ 0110). As per claim 11, Fisher et al disclose process at least the ratings using the at least one trained machine learning model to generate a score, wherein the customized content is generated based also on the score (i.e., insight and learning server and system can use a combination of analytics-driven logic and machine learning techniques to identify at-risk employees. In some embodiments, the analytics-driven logic can use several periods of assessments in its determination of at-risk employees, and can rely on the overall rating, ¶ 0082, wherein the insight and learning server and system can use one or more analytics-driven logic and machine learning techniques to suggest recommendations to help the manager address issues identified during the assessments, ¶ 0084). Fisher et al does not disclose process at least the ratings data and the plurality of weights using the at least one trained machine learning model to generate a score. McLaughlin et al disclose the impact-factor-insight system 102 generates the impact-factor scores 308a-308n using a relative weights analysis. In some cases, to prevent inflating the importance of unimportant impact factors, the impact-factor-insight system 102 can determine a correlation between an impact factor and the target 306 by controlling for other impact factors. 
For example, the impact-factor-insight system 102 can determine relative weights for each impact factor based on examples where the impact factors and the target change at different amounts (¶ 0069). The impact-factor-insight system 102 can analyze the unstructured responses 302a-302n using machine-learning techniques or other computer language-analysis techniques to parse and interpret the unstructured responses 302a-302n. In one or more embodiments, the impact-factor-insight system 102 generates sentiment scores and/or textual quality scores for the unstructured responses 302a-302n to determine the content and characteristics of the content of each unstructured response (¶ 0062). The impact-factor-insight system 102 can include a response analyzer 504 to analyze responses to an electronic survey question. Specifically, the response analyzer 504 can use machine-learning or other language analysis techniques to determine content and characteristics of the responses. For instance, the response analyzer 504 can interpret the meaning of responses for identifying impact factors associated with a target (¶ 0104). Fisher et al and McLaughlin et al are concerned with effective employee assessment. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include process at least the ratings data and the plurality of weights using the at least one trained machine learning model to generate a score in Fisher et al, as seen in McLaughlin et al, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. 
As per claim 12, Fisher et al disclose process at least the ratings data using the at least one trained machine learning model to select a follow-up action from a plurality of possible follow-up actions, the follow-up action to improve the individual with respect to the at least one characteristic, wherein the customized content is generated based also on the follow-up action (i.e., the analytics and benchmarks can be used to benchmark assessment reasons which in turn are used to suggest recommendations that provide approaches, strategies, and/or specific actions to help improve an employee's disposition. In some embodiments, the insight and learning server and system can use one or more analytics-driven logic and machine learning techniques to suggest recommendations, ¶ 0084). Fisher et al does not disclose process at least the ratings data and the plurality of weights using the at least one trained machine learning model to select a follow-up action. McLaughlin et al disclose the impact-factor-insight system 102 generates the impact-factor scores 308a-308n using a relative weights analysis. In some cases, to prevent inflating the importance of unimportant impact factors, the impact-factor-insight system 102 can determine a correlation between an impact factor and the target 306 by controlling for other impact factors. For example, the impact-factor-insight system 102 can determine relative weights for each impact factor based on examples where the impact factors and the target change at different amounts (¶ 0069). Furthermore, the impact-factor-insight system 102 can provide an option to begin planning actions relative to an impact factor (e.g., action planning option 424). For instance, the impact-factor-insight system 102 can determine that the entity can perform various options associated with an impact factor to help improve the entity's performance relative to the impact factor (¶ 0093).
The impact-factor-insight system 102 can include a response analyzer 504 to analyze responses to an electronic survey question. Specifically, the response analyzer 504 can use machine-learning or other language analysis techniques to determine content and characteristics of the responses. For instance, the response analyzer 504 can interpret the meaning of responses for identifying impact factors associated with a target (¶ 0104). Fisher et al and McLaughlin et al are concerned with effective employee assessment. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include process at least the ratings data and the plurality of weights using the at least one trained machine learning model to select a follow-up action in Fisher et al, as seen in McLaughlin et al, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. As per claim 13, Fisher et al disclose update the trained machine learning model based on training data that includes at least the insight (i.e., a second feedback includes the service following up with a manager and to ask him/her to rate the effectiveness of the recommendation that had been selected for that employee. In some embodiments, both of these feedback loops can provide a useful result on which the model may be trained further, ¶ 0118). 
As per claim 14, Fisher et al disclose receive an indication of performance of the individual at a second time with respect to the at least one characteristic, the ratings data being received at a first time before the second time; and update the trained machine learning model based on training data that includes a comparison between at least the insight and the indication (i.e., the model is improved over time as more data is accumulated, especially by feedback steps including a first feedback when a manager is presented with recommendation suggestions and is asked to select one or more that are relevant for that employee's situation, where the selections that he or she makes will reinforce the correct selection for that situation. In some embodiments, a second feedback includes the service following up with a manager and to ask him/her to rate the effectiveness of the recommendation that had been selected for that employee. In some embodiments, both of these feedback loops can provide a useful result on which the model may be trained further, ¶ 0118).

As per claim 15, Fisher et al disclose update the trained machine learning model based on training data that includes at least the insight and an indication of an interaction with the interactive interface (i.e., a second feedback includes the service following up with a manager and to ask him/her to rate the effectiveness of the recommendation that had been selected for that employee. In some embodiments, both of these feedback loops can provide a useful result on which the model may be trained further, ¶ 0118).

Claims 16-19 are rejected based upon the same rationale as the rejection of claims 1-3, 6 and 13, respectively, since they are the method claims corresponding to the apparatus claims.

As per claim 21, Fisher et al does disclose the plurality of ratings of the individual correspond to a plurality of raters (i.e., an employee check-in will be used to collect the employee's feelings about their job.
The system may use the check-in evaluation entered by the manager and the check-in entered by the employee to determine a more accurate and comprehensive sentiment profile of the employee, ¶ 0059). Fisher et al does not disclose wherein the plurality of weights are based on the plurality of raters. McLaughlin et al disclose the electronic survey question can include a request for an employee of a company to describe how well the company maintains a work environment or how satisfied the employee is with the company overall. Furthermore, each entity can provide a different set of electronic survey questions to respondents (e.g., entity 200b can provide the electronic survey question 202b) (¶ 0044). The impact-factor-insight system 102 generates the impact-factor scores 308a-308n using a relative weights analysis. In some cases, to prevent inflating the importance of unimportant impact factors, the impact-factor-insight system 102 can determine a correlation between an impact factor and the target 306 by controlling for other impact factors. For example, the impact-factor-insight system 102 can determine relative weights for each impact factor based on examples where the impact factors and the target change at different amounts (¶ 0069). Fisher et al and McLaughlin et al are concerned with effective employee assessment. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include wherein the plurality of weights are based on the plurality of raters in Fisher et al, as seen in McLaughlin et al, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Claim 7 is rejected under 35 U.S.C.
103 as being unpatentable over Fisher et al (US 20190228357 A1), in view of McLaughlin et al (US 20210256545 A1), and further in view of Jesneck et al (US 20240249831 A1).

As per claim 7, Fisher et al does not disclose the customized content includes text that is customized to the individual, wherein the at least one trained machine learning model includes at least one large language model (LLM) that generates the text of the customized content.

Jesneck et al disclose a platform utilizing a computation that is based on deep-learning modeling. Deep learning (also known as deep structured learning) is part of a broader family of machine learning methods. Deep learning has been used in fields such as computer vision, speech recognition, natural language processing, machine translation, bioinformatics, drug design, medical image analysis, material inspection and board game programs (¶ 0157). The platform connects with, indexes, and profiles large amounts of educational content, for example journal articles, anatomy diagrams, and medical procedure videos. The Firefly™ targeted education system associates each piece of content with relevant medical activities, using techniques including machine learning and natural language processing (¶ 0170). Fisher et al and Jesneck et al are concerned with effective employee assessment.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the customized content includes text that is customized to the individual, wherein the at least one trained machine learning model includes at least one large language model (LLM) that generates the text of the customized content in Fisher et al, as seen in Jesneck et al, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.

Response to Arguments

In the Remarks, Applicant argues that the currently amended claims are non-abstract and patent-eligible for at least the same reasons as in Core Wireless Licensing S.A.R.L. v. LG Electronics, Inc., 880 F.3d 1356 (Fed. Cir. 2018) ("Core Wireless"). For instance, the claims at issue in Core Wireless concern "display[ing] on [a] screen a main menu listing at least a first application" and an "application summary window that [...] displays a limited list of at least one function offered within the first application [...] being selectable to launch the first application and initiate the selected function [...] while the application is in an un-launched state." Core Wireless, 1360. The Federal Circuit indicated that the claims at issue in Core Wireless were non-abstract and thus patent-eligible because they were "directed to an improved user interface for computing devices," "a particular manner of summarizing and presenting information in electronic devices," and "a specific manner of displaying a limited set of information to the user." Id., 1362.
Applicant submits that the "interactive interface" of Applicant's currently amended claim 1 is analogous to the "application summary window," the "main menu," and/or the "limited list of at least one function" of Core Wireless. Applicant further submits that "at least a portion of the interactive interface expand[ing] to include a visualization of the ratings data in response to an interaction with an interactive element of the interactive interface" of Applicant's currently amended claim 1 is analogous to "initiat[ing] the selected function" based on "select[ion]" of the "limited list of at least one function" of Core Wireless. Ultimately, like Core Wireless, Applicant's currently amended claims are "directed to an improved user interface for computing devices," "a particular manner of summarizing and presenting information in electronic devices," and "a specific manner of displaying a limited set of information to the user." Id., 1362.

Applicant submits that the "at least one trained machine learning model" of currently amended claim 1 is analogous to the "neural network" of Example 39. Applicant further submits that "process[ing] at least the ratings data and the plurality of weights using at least one trained machine learning model to generate an insight" of currently amended claim 1 is analogous to the "facial detection" of Example 39. Ultimately, like Example 39, Applicant's currently amended claims "do[] not recite any of the judicial exceptions" because "the claim does not recite any mathematical relationships, formulas, or calculations" and because "the claim does not recite a mental process because the steps are not practically performed in the human mind." USPTO's Subject Matter Eligibility Examples 37-42, p. 9.
Applicant submits that the "summariz[ing] the ratings data and the insight associated with the at least one characteristic of the individual to generate summary information for use in an interactive interface" of currently amended claim 1 is analogous to "converting [...] non-standardized updated information into the standardized format" of claim 1 of Example 42. Applicant further submits that "provid[ing] the interactive interface to at least one recipient device for output through the at least one recipient device, wherein the interactive interface includes the summary information while the at least one recipient device is in use by an authorized user, and wherein at least a portion of the interactive interface expands to include a visualization of the ratings data in response to an interaction with an interactive element of the interactive interface" of currently amended claim 1 is analogous to "storing the standardized updated information about the patient's condition in the collection of medical records in the standardized format," and "transmitting [a] message [...] so that each user has immediate access to up-to-date patient information" of claim 1 of Example 42. Ultimately, like claim 1 of Example 42, Applicant's currently amended claims "recite a specific improvement [...] by allowing [...] users to share information in real time in a standardized format regardless of the format in which the information was input," and thus "the claim is eligible because it is not directed to the recited judicial exception (abstract idea)."

Applicant submits that the "at least one trained machine learning model" of currently amended claim 1 is analogous to the "deep neural network (DNN)" of claim 3 of Example 48.
Applicant further submits that "process[ing] at least the ratings data and the plurality of weights using at least one trained machine learning model to generate an insight" and "summariz[ing] the ratings data and the insight associated with the at least one characteristic of the individual to generate summary information for use in an interactive interface" of currently amended claim 1 are analogous to "processing the speech signal to produce masked clusters" using the "deep neural network (DNN)," "extracting spectral features," and "generating a sequence of words from the extracted spectral features to produce a transcript" of claim 3 of Example 48. Ultimately, like claim 3 of Example 48, Applicant's currently amended claims "integrate[] [...] into a practical application of [...] conversion of a [...] signal." Subject Matter Eligibility Examples 47-49, p. 28.

For instance, as discussed in the specification as filed, the claimed technology "provide[s] improved efficiency by summarizing the ratings and insights via the interactive interface, and improved flexibility based on the interactivity." See specification as filed, para. [0006], [0061] (emphasis added). Furthermore, the claimed technology "provide[s] improved accuracy, precision, and quality of insights by reviewing and using information (e.g., the ratings data) as input(s) to the at least one machine learning model in real-time as the information is received." Id.

The claimed concept is also "use[d] [...] in conjunction with a particular machine or manufacture that is integral to the claim," another one of the five "considerations" indicative of a practical application under MPEP § 2106.04(d)(I). For instance, currently amended independent claim 1 integrally relies on at least the "apparatus," the "at least one memory," the "at least one processor," the "at least one client device," the "at least one trained machine learning model," the "interactive interface," and the "at least one recipient device."
Currently amended independent claim 16 integrally relies on at least the "at least one client device," the "at least one trained machine learning model," the "interactive interface," and the "at least one recipient device." The Office Action acknowledges some of these elements, but fails to address their being "integral to the claim."

The claims are also "more than a drafting effort designed to monopolize the exception," another one of the five "considerations" indicative of a practical application under MPEP § 2106.04(d)(I). Applicant submits that the currently amended independent claims recite a significant amount of details of how the "insight" is "generate[d]" (by "process[ing] at least the ratings data and the plurality of weights using at least one trained machine learning model to generate [the] insight"), how the "summary information" is "generate[d]" (by "summariz[ing] the ratings data and the insight associated with the at least one characteristic of the individual to generate [the] summary information for use in an interactive interface"), and how the "interactive interface" works (by "provid[ing] the interactive interface to at least one recipient device for output through the at least one recipient device, wherein the interactive interface includes the summary information while the at least one recipient device is in use by an authorized user, and wherein at least a portion of the interactive interface expands to include a visualization of the ratings data in response to an interaction with an interactive element of the interactive interface"), among other "details of how a solution to a problem is accomplished."

The Office Action also cites para. [0119] of Applicant's specification as filed, arguing that it purportedly "supports the finding that no more than a general purpose computer, performing generic computer functions, is required by the claims." Office Action, 5.
If this is an attempt to argue that the additional elements are purportedly "well-understood, routine, or conventional," then Applicant respectfully disagrees with such an argument. Since the Office Action does not "expressly support [this] rejection in writing with" any of the other types of evidence listed in MPEP § 2106.07(a)(III), the claimed elements are not well-understood, routine or conventional. Instead, the additional elements, considered together, represent an inventive concept ("something more") at least because they represent "[i]mprovements to the functioning of a computer," "[i]mprovements to any other technology or technical field," and "[a]ppl[ication] [...] [of] a particular machine" under MPEP § 2106.05(I)(A)(i)-(iii) for at least the reasons discussed above with respect to the considerations for Step 2A (prong 2).

The Examiner respectfully disagrees. In Core Wireless, the court held in applying the first step of the Alice test that the claims at issue specifically improved the functioning of computers. The disclosed invention improves the efficiency of using the electronic device by bringing together "a limited list of common functions and commonly accessed stored data," which can be accessed directly from the main menu. Displaying selected data or functions of interest in the summary window allows the user to see the most relevant data or functions "without actually opening the application up." (emphasis added). However, and contrary to Applicant's assertion, there is no similar improvement to the functioning of computers. Here, there are no steps saved or applications that do not need to be accessed. Rather, here the claims recite an interactive interface that includes the summary information.
It follows that the limitations of the claimed interactive interface including the summary information while the at least one recipient device is in use by an authorized user, and of at least a portion of the interactive interface expanding to include a visualization… in response to an interaction with an interactive element of the interactive interface, do not represent a technical solution to a technical problem, do not improve the functioning of the underlying computer/technology, do not recite an improvement to another technology or technical field, and do not integrate the abstract idea into a practical application.

Regarding claim 1 of Example 39, as an initial point, the example is hypothetical and only intended to be illustrative of the claim analysis under the 2019 PEG. The example should be interpreted based on the fact patterns set forth, as other fact patterns may have different eligibility outcomes. Moreover, Applicant’s claim language is wholly unrelated to Example 39 (Method for Training a Neural Network for Facial Detection), as Applicant is likely aware. As discussed in the analysis of claim 1 of Example 39, “The claim does not recite any of the judicial exceptions enumerated in the 2019 PEG. For instance, the claim does not recite any mathematical relationships, formulas, or calculations. While some of the limitations may be based on mathematical concepts, the mathematical concepts are not recited in the claims. Further, the claim does not recite a mental process because the steps are not practically performed in the human mind. Finally, the claim does not recite any method of organizing human activity such as a fundamental economic concept or managing interactions between people. Thus, the claim is eligible because it does not recite a judicial exception.”
By contrast, here, other than reciting at least one memory, at least one processor, and at least one recipient device, the claim limitations merely cover commercial interactions, including business relations, thus falling within the “Certain Methods of Organizing Human Activity” grouping of abstract ideas. Accordingly, the claims recite an abstract idea.

Regarding claim 1 of Example 42, as an initial point, the example is hypothetical and only intended to be illustrative of the claim analysis under the 2019 PEG. The example should be interpreted based on the fact patterns set forth, as other fact patterns may have different eligibility outcomes. Moreover, Applicant’s claim language is wholly unrelated to Example 42 (Method for Transmission of Notifications When Medical Records Are Updated), as Applicant is likely aware. As discussed in the analysis of claim 1 of Example 42, “The claim recites a combination of additional elements including storing information, providing remote access over a network, converting updated information that was input by a user in a non-standardized form to a standardized format, automatically generating a message whenever updated information is stored, and transmitting the message to all of the users. The claim as a whole integrates the method of organizing human activity into a practical application.” By contrast, here there are no additional elements such as converting updated information that was input by a user in a non-standardized form to a standardized format, or automatically generating a message whenever updated information is stored.

Regarding claim 3 of Example 48, as an initial point, the example is hypothetical and only intended to be illustrative of the claim analysis. The example should be interpreted based on the fact patterns set forth, as other fact patterns may have different eligibility outcomes.
Moreover, Applicant’s claim language is wholly unrelated to Example 48 (The application of the eligibility analysis to claims that recite artificial intelligence-based methods of analyzing speech signals and separating desired speech from extraneous or background speech), as Applicant is likely aware. As discussed in the analysis of claim 3 of Example 48, “The disclosure explains that devices that capture audio perform poorly in distinguishing conversations between individuals of interest from unwanted utterances due to their inability in distinguishing different speech sources belonging to the same class, thereby resulting in poor quality transcriptions of the recorded speech. The disclosure states that this invention offers an improvement over existing speech-separation methods by providing a particular speech separation technique that solves the problem of separating speech from different speech sources belonging to the same class, and also performing well with inter-speaker variability within the same audio class for transcriptions. The disclosure states that the invention derives embeddings by the DNN based on the global properties of the input signal, which is an improvement over prior art speech separation methods. In addition, the invention uses both temporal and spatial features of the speech signal; this feature of the invention helps a downstream conventional speech-to-text system reduce the gap in transcription performance for accented speakers over traditional speech-to-text methods.” “Here, the claim reflects these technical improvements discussed in the disclosure by reciting details of how the DNN trained on source separation aids in the cluster assignments to correspond to the sources identified in the mixed speech signal, which are then converted into separate speech signals in the time domain to generate a sequence of words from

Prosecution Timeline

Aug 25, 2023
Application Filed
Apr 18, 2025
Non-Final Rejection — §101, §103
Jul 03, 2025
Interview Requested
Jul 17, 2025
Applicant Interview (Telephonic)
Jul 17, 2025
Examiner Interview Summary
Jul 23, 2025
Response Filed
Oct 28, 2025
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12524722
ISSUE TRACKING METHODS FOR QUEUE MANAGEMENT
2y 5m to grant · Granted Jan 13, 2026
Patent 12488363
TREND PREDICTION
2y 5m to grant · Granted Dec 02, 2025
Patent 12475421
METHODS AND INTERNET OF THINGS SYSTEMS FOR PROCESSING WORK ORDERS OF GAS PLATFORMS BASED ON SMART GAS OPERATION
2y 5m to grant · Granted Nov 18, 2025
Patent 12423719
TREND PREDICTION
2y 5m to grant · Granted Sep 23, 2025
Patent 12423637
SYSTEMS AND METHODS FOR PROVIDING DIAGNOSTICS FOR A SUPPLY CHAIN
2y 5m to grant · Granted Sep 23, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
36%
Grant Probability
56%
With Interview (+19.8%)
4y 7m
Median Time to Grant
Moderate
PTA Risk
Based on 620 resolved cases by this examiner. Grant probability derived from career allow rate.
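The projections above follow from simple arithmetic on the examiner's career statistics: 224 grants out of 620 resolved cases gives the 36% baseline, and adding the observed +19.8 percentage-point interview lift gives the 56% "With Interview" figure. A minimal sketch of that calculation, assuming the dashboard simply adds the lift to the base rate (the function names and the additive-blending approach are hypothetical, not the vendor's actual model):

```python
# Hypothetical reconstruction of the dashboard's headline numbers.
# Assumption: "With Interview" = career allow rate + interview lift (pp).

def grant_probability(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100 * granted / resolved

def with_interview(base_pct: float, interview_lift_pp: float) -> float:
    """Adjust the base rate by the interview lift, in percentage points."""
    return base_pct + interview_lift_pp

base = grant_probability(224, 620)      # 224 grants / 620 resolved cases
print(round(base))                      # 36, matching the displayed 36%

adjusted = with_interview(base, 19.8)   # +19.8 pp interview lift
print(round(adjusted))                  # 56, matching the displayed 56%
```

Note that real prosecution models would likely condition on art unit, statute mix, and claim amendments rather than applying a flat additive lift; this sketch only shows that the displayed figures are mutually consistent.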
