Prosecution Insights
Last updated: April 19, 2026
Application No. 18/485,228

GENERATING A UNIFIED USER EXPERIENCE SCORE USING SENTIMENT ANALYSIS AND THEME CLASSIFICATION

Final Rejection (§101, §102)
Filed: Oct 11, 2023
Examiner: OUELLETTE, JONATHAN P
Art Unit: 3629
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Oracle International Corporation
OA Round: 2 (Final)
Grant Probability: 66% (Favorable)
OA Rounds: 3-4
To Grant: 3y 9m
With Interview: 96%

Examiner Intelligence

Career Allow Rate: 66% (above average; 755 granted / 1140 resolved; +14.2% vs TC avg)
Interview Lift: +30.0% (resolved cases with interview vs without)
Avg Prosecution: 3y 9m (35 currently pending)
Total Applications: 1175 (across all art units)
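The examiner statistics above are internally consistent, which a quick recomputation confirms. The Python sketch below is our own check (variable names are ours, not the analytics tool's); it derives the displayed allow rate and the implied with-interview probability from the raw counts:

```python
# Recompute the examiner statistics shown above from the raw counts.
granted = 755
resolved = 1140

allow_rate = granted / resolved * 100   # career allow rate
implied_tc_avg = allow_rate - 14.2      # dashboard reports +14.2% vs TC avg
with_interview = allow_rate + 30.0      # dashboard reports a +30.0% interview lift

print(f"Career allow rate:  {allow_rate:.1f}%")     # 66.2%, displayed as 66%
print(f"Implied TC average: {implied_tc_avg:.1f}%")
print(f"With interview:     {with_interview:.1f}%")  # ~96%, matching the headline
```

Note that 66.2% + 30.0% lands on the 96% "With Interview" headline, suggesting the interview lift is expressed in absolute percentage points rather than as a ratio.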

Statute-Specific Performance

§101: 28.9% (-11.1% vs TC avg)
§103: 18.5% (-21.5% vs TC avg)
§102: 27.8% (-12.2% vs TC avg)
§112: 10.9% (-29.1% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 1140 resolved cases
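A quick sanity check on the statute-specific deltas: subtracting each reported delta from the corresponding allowance rate recovers the Tech Center baseline. The Python sketch below is our own recomputation (not part of the analytics tool); it shows every statute implies the same 40.0% TC average, consistent with a single TC-wide baseline:

```python
# Derive the implied Tech Center average from each statute's
# allowance rate and its reported delta (rate - delta = TC average).
rates  = {"101": 28.9, "103": 18.5, "102": 27.8, "112": 10.9}
deltas = {"101": -11.1, "103": -21.5, "102": -12.2, "112": -29.1}

implied_tc_avg = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(implied_tc_avg)  # every statute implies a 40.0% baseline
```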

Office Action

Rejections: §101, §102
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

Claims 1-20 are currently pending in application 18/485,228.

Claim Rejections - 35 U.S.C. § 101

35 U.S.C. § 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to non-statutory subject matter, specifically an abstract idea. Claims 1-20 are directed to a judicial exception (i.e., abstract idea), without providing a practical application, and without providing significantly more.

Under the 35 U.S.C. § 101 subject matter eligibility two-part analysis, Step 1 addresses whether the claim is directed to one of the four statutory categories of invention, i.e., process, machine, manufacture, or composition of matter. See MPEP § 2106.03. If the claim does fall within one of the statutory categories, it must then be determined in Step 2A [prong 1] whether the claim is directed to a judicial exception (i.e., law of nature, natural phenomenon, or abstract idea). See MPEP § 2106.04. If the claim is directed toward a judicial exception, it must then be determined in Step 2A [prong 2] whether the judicial exception is integrated into a practical application. See MPEP § 2106.04(d). Finally, if the judicial exception is not integrated into a practical application, it must additionally be determined in Step 2B whether the claim recites "significantly more" than the abstract idea. See MPEP § 2106.05.
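The Step 1 / Step 2A / Step 2B framework described above is a sequential decision procedure. The following Python sketch is illustrative only; the boolean inputs and return strings are this sketch's own simplification of the MPEP flow, not language from the Office Action, and it is of course not legal advice:

```python
def eligibility(statutory, recites_exception,
                practical_application, significantly_more):
    """Simplified Alice/Mayo flow per MPEP 2106.03-2106.05."""
    if not statutory:              # Step 1: statutory category?
        return "ineligible: not a statutory category"
    if not recites_exception:      # Step 2A, prong 1: judicial exception recited?
        return "eligible"
    if practical_application:      # Step 2A, prong 2: practical application?
        return "eligible"
    if significantly_more:         # Step 2B: significantly more?
        return "eligible"
    return "ineligible: abstract idea without significantly more"

# The posture this Office Action takes for claims 1-20: statutory (yes),
# recites an abstract idea (yes), practical application (no), more (no).
print(eligibility(True, True, False, False))
```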
Examiner note: The Office's 2019 Revised Patent Subject Matter Eligibility Guidance (2019 PEG) is currently found in the Ninth Edition, Revision 10.2019 (revised June 2020) of the Manual of Patent Examining Procedure (MPEP), specifically incorporated in MPEP § 2106.03 through MPEP § 2106.07(c).

Regarding Step 1, Claims 1-12 are directed toward a process (method). Claims 13-18 are directed toward an apparatus (system). Claims 19-20 are directed toward a computer program product having computer-readable tangible storage media (article of manufacture). Thus, all claims fall within one of the four statutory categories as required by Step 1.

Regarding Step 2A [prong 1], Claims 1-20 are directed toward the judicial exception of an abstract idea. Independent claims 1, 13 and 19 are directed specifically to the abstract idea of determining customer satisfaction. Regarding independent claims 1, 13 and 19, the limitations emphasized below (underlined in the original action) correspond to the abstract ideas of the claimed invention:

A method, comprising: determining, via one or more processors, user feedback data, wherein the user feedback data includes first user feedback data that is obtained from at least one of one or more computing devices of a cloud service provider (CSP), and second user feedback data that is obtained from one or more external computing devices associated with at least one of a social network, a messaging service, an Internet forum, or a chat room; monitoring operations and generating, via the one or more processors, metrics associated with categories including a mindshare category, a happiness category, and one or more other categories that include at least one of an adoption category, a success of tasks category, an engagement category, or a retention category; generating, via the one or more processors, a happiness score based, at least in part, on the first user feedback data, wherein the happiness score indicates responses to one or more products of the CSP, one or more services of the CSP, or one or more features of the CSP; generating, via the one or more processors, a mindshare score based, at least in part, on the second user feedback data, wherein the mindshare score indicates responses to one or more products of the CSP, one or more services of the CSP, or one or more features of the CSP; generating, via the one or more processors, one or more other scores for individual ones of the one or more other categories; generating a unified user experience (UX) score, via the one or more processors, based at least in part on the happiness score, the mindshare score and the one or more other scores, wherein each category associated with the scores contributes a predetermined percentage of the unified UX score; and causing information about the unified UX score to be provided to a computing device associated with a user.

As the emphasized claim limitations above demonstrate, independent claims 1, 13 and 19 are directed to the abstract idea of certain methods of organizing human activity (commercial or legal interactions, including agreements in the form of marketing or sales activities or behaviors and business relations; managing personal behavior or relationships or interactions between people, including social activities, teaching, and following rules or instructions). Dependent claims 2-12, 14-18, and 20 provide further details of the abstract idea of claims 1, 13, and 19 regarding the received data; therefore, these claims recite certain methods of organizing human activity for reasons similar to those provided above for claims 1, 13 and 19. After considering all claim elements, both individually and in combination, and as an ordered combination, it has been determined that the claims do not amount to significantly more than the abstract idea itself.

Regarding Step 2A [prong 2], Claims 1-20 fail to integrate the recited judicial exception into any practical application.
The claims recite additional limitations which are hardware or software elements or a particular technological environment, such as a "system", a "non-transitory computer-readable medium storing a set of instructions", "processors", internal/external "computing devices", a "cloud service provider (CSP)", a "social network" / "social networking platform", a "messaging service", an "Internet forum", a "chat room", a "network", a "theme machine learning model", and a "machine learning environment". However, these limitations are not enough to qualify as a "practical application" when recited in the claims along with the abstract idea, since they are merely invoked as a tool to perform the instructions of the abstract idea in a particular technological environment and/or generally link the use of the abstract idea to a particular technological environment or field of use; merely applying an abstract idea in a particular technological environment, and merely limiting use of an abstract idea to a particular field or technological environment, do not provide a practical application for an abstract idea (MPEP 2106.05(f) & (h)).

The claims do not amount to a "practical application" of the abstract idea because they neither (1) recite any improvements to another technology or technical field; (2) recite any improvements to the functioning of the computer itself; (3) apply the judicial exception with, or by use of, a particular machine; (4) effect a transformation or reduction of a particular article to a different state or thing; nor (5) provide other meaningful limitations beyond generally linking the use of the judicial exception to a particular technological environment. The presence of a machine learning algorithm or a computer implementation does not necessarily prevent the claim from reciting an abstract idea.
The machine learning algorithm and computer limitations claimed herein are simply used as a tool to apply the abstract idea without transforming the underlying abstract idea into patent-eligible subject matter. As claimed, the machine learning algorithm is not iteratively trained to improve the accuracy of the model itself; it merely processes data to determine customer satisfaction based on factors, function objectives, and received input. Examiner notes that the additional limitations of machine learning and computer processing do not result in improved computer functionality or a technical/technology improvement and hence do not result in a practical application. The machine learning algorithm and the computer limitations simply process the data by inputting and outputting data.

Processing data is mere automation of manual processes, such as using a generic computer to process an application for financing a purchase, Credit Acceptance Corp. v. Westlake Services, 859 F.3d 1044, 1055, 123 USPQ2d 1100, 1108-09 (Fed. Cir. 2017), or speeding up a loan application process by enabling borrowers to avoid physically going to or calling each lender and filling out a loan application, LendingTree, LLC v. Zillow, Inc., 656 F. App'x 991, 996-97 (Fed. Cir. 2016) (non-precedential). Thus, the additional machine learning algorithm and computer limitations do not transform the abstract idea into a practical application.

The relevant question under Step 2A [prong 2] is not whether the claimed invention itself is a practical application; instead, the question is whether the claimed invention includes additional elements beyond the judicial exception that integrate the judicial exception into a practical application by imposing a meaningful limit on the judicial exception. This is not the case with Applicant's claimed invention.
Automating the recited claimed features as a combination of computer instructions implemented by computer hardware and/or software elements as recited above does not qualify an otherwise unpatentable abstract idea as patent eligible. Examples where the Courts have found selecting a particular data source or type of data to be manipulated to be insignificant extra-solution activity include selecting information, based on types of information and availability of information in a power-grid environment, for collection, analysis and display, Electric Power Group, LLC v. Alstom S.A., 830 F.3d 1350, 1354-55, 119 USPQ2d 1739, 1742 (Fed. Cir. 2016). Applicant's limitations as recited above do nothing more than supplement the abstract idea, using additional hardware/software computer components as a tool to perform the abstract idea and generally linking the use of the abstract idea to a technological environment, which is not sufficient to integrate the judicial exception into a practical application since they do not impose any meaningful limits.

Dependent claims 2-12, 14-18, and 20 merely incorporate the additional elements recited above, along with further embellishments of the abstract idea of the respective independent claims, but these features only serve to further limit the abstract idea of the independent claims. Therefore, the additional elements recited in the claimed invention, individually and in combination, fail to integrate the recited judicial exception into any practical application.

Regarding Step 2B, Claims 1-20 fail to amount to "significantly more" than an abstract idea.
The claims recite additional limitations which are hardware or software elements or a particular technological environment, such as a "system", a "non-transitory computer-readable medium storing a set of instructions", "processors", internal/external "computing devices", a "cloud service provider (CSP)", a "social network" / "social networking platform", a "messaging service", an "Internet forum", a "chat room", a "network", a "theme machine learning model", and a "machine learning environment". However, these limitations are not enough to qualify as "significantly more" when recited in the claims along with the abstract idea, since they are merely invoked as a tool to perform the instructions of the abstract idea in a particular technological environment and/or generally link the use of the abstract idea to a particular technological environment or field of use; merely applying an abstract idea in a particular technological environment, and merely limiting use of an abstract idea to a particular field or technological environment, do not provide significantly more than an abstract idea (MPEP 2106.05(f) & (h)).

The claims do not amount to "significantly more" than the abstract idea because they neither (1) recite any improvements to another technology or technical field; (2) recite any improvements to the functioning of the computer itself; (3) apply the judicial exception with, or by use of, a particular machine; (4) effect a transformation or reduction of a particular article to a different state or thing; (5) add a specific limitation other than what is well-understood, routine and conventional in the field; (6) add unconventional steps that confine the claim to a particular useful application; nor (7) provide other meaningful limitations beyond generally linking the use of the judicial exception to a particular technological environment.
Dependent claims 2-12, 14-18, and 20 merely recite further embellishments of the abstract idea of independent claims 1, 13 and 19, respectively, but these features only serve to further limit the abstract idea of independent claims 1, 13 and 19; none of the dependent claims recites an improvement to a technology or technical field or provides any meaningful limits. The addition of another abstract concept to the limitations of the claims does not render the claims other than abstract. The 2019 PEG specifically states that narrowing an abstract idea does not make the claims amount to "significantly more" than the abstract idea. Thus, the additional elements in the dependent claims only serve to further limit the abstract idea, utilizing the computer components as a tool and/or generally linking the use of the abstract idea to a particular technological environment.

Therefore, since there are no limitations in claims 1-20 that transform the exception into a patent-eligible application such that the claims amount to significantly more than the exception itself, and since looking at the limitations as a combination and as an ordered combination adds nothing that is not already present when looking at the elements taken individually, claims 1-20 are rejected under 35 U.S.C. § 101 as being directed to non-statutory subject matter.

Claim Rejections - 35 U.S.C. § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C.
102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Garvey et al. (US 2024/0320591 A1).

As per independent Claims 1, 13, and 19, Garvey discloses a method (a system comprising: one or more processors; and non-transitory computer-readable medium storing a set of instructions, the set of instructions when executed by the one or more processors cause processing to be performed) (a non-transitory computer-readable medium storing a set of instructions, the set of instructions when executed by one or more processors cause processing to be performed) (See at least Figs. 1-4; Claims 1, 11, and 20; Para 0029-0030), comprising:

Monitor operations and (to*) generate, via one or more processors, user feedback data, wherein the user feedback data includes first user feedback data that is obtained from at least one of one or more computing devices of a cloud service provider (CSP) [Applicant Specification, Para 0024; user-submitted feedback (e.g., user feedback that is sent directly to the cloud service provider)] (See at least Para 0025, "Product 102 refers to an item or service with which users may interact. Examples include articles of manufacture, software applications, cloud computing services, websites, virtual assistants, and other computing-based systems. 102 includes user interface 104 for interacting with one or more users. In some embodiments, 102 is a good or service that exists in a digital form (a "digital" product).
Examples include websites, mobile applications, cloud services, digital media, and/or other digital assets.”; Para 0028, “UX test framework 120 includes components for composing and running UX tests. The components may include UX test editor 122, UX test engine 124, result parser 126, and AI integration engine 128. A UX test may comprise applications, tools, and/or processes for evaluating the performance of various facets of one or more user experiences with product 102. For example, a UX test may comprise a survey or questionnaire. Users of a website or a mobile application may be prompted to complete the UX test to evaluate their experience with product 102, which may be the website or application itself or a separate product. If the user accepts the prompt, the user may be redirected to a webpage with a set of queries to describe and/or rank various facets of the user experience with product 102”; Para 0029, “Additionally or alternatively, a UX test may obtain performance data for one or more UX facets using mechanisms for tracking how a user interacts with product 102. For example, scripting tags that embed executable code in a website or backend processes, such as daemons, may track and collect metrics and/or other information about user interactions with product 102. Example metrics may include how long it takes a user to first interact with a user interface element, how long it takes a user to complete a task using user interface 104, how long a user engages with product 102, how long it takes for pages of user interface 104 to load, which products features are most frequently accessed, and which product features are least frequently accessed”; Para 0031, “UX test editor 122 is a tool through which users may compose and customize UX tests. 
For example, UX test editor 122 may include one or more GUI elements through which a user may select predefined survey questions, input new questions, define scripts for capturing performance metrics, and/or otherwise customize test applications to evaluate user experiences with product 102. UX test editor 122 may further allow users to define parameters associated with running a UX test, such as what segment to target, what platform to use running the test, and/or other parameters controlling how the UX test is run”), and second user feedback data that is obtained from one or more external computing devices associated with at least one of a social network, a messaging service, an Internet forum, or a chat room [Applicant Specification, Para 0024; e.g., feedback that is submitted to a third-party service such as a social media platform] (See at least Para 0030, “Additionally or alternatively, a UX test may obtain information about user experiences from other data sources. For example, a web scraper may crawl one or more websites for user reviews of a product to extract information about which product features are viewed most positively, which product features are viewed most negatively, what scores have been assigned for different features of the product, and what overall product score has been assigned. Additionally or alternatively, the UX test may scrape social media sites for posts tagged with a product identifier and extract information from the posts about how users interact with the product. In yet another example, a UX test may search customer databases and/or other sources to determine what percentage of users have returned a product, submitted a customer support ticket, or submitted a product complaint. A UX test may assign scores based on the extracted information using a scoring function or machine learning, where a UX test score quantifies one or more user experiences with respect to one or more facets of the user experience. 
Although only one product is illustrated in FIG. 1, a given UX test may be run for several different products and several different UX tests may be run for the same product.”); determining/ generating, via the one or more processors, metrics associated with categories including: a mindshare category (metric derived from external data (social media, forums) to indicate how the product is perceived or discussed in the wider public sphere; See at least Para 0030, i.e. User review data from Online social media), a happiness category (Measures attitudinal responses, such as user satisfaction and perceived quality, derived from direct feedback; See at least Para 0029-0031, see above; Para 0055, “Additionally or alternatively, expectation elements may be captured by a UX test. An expectation element is a result that identifies an expectation of a user with respect to a product and whether the expectation was met or unmet during the UX test. For example, an expectation element may include an “expectation quote” that describes the user's expectations without being confined to a schema, an “outcome quote” that describes the outcome for an associated expectation (also without being confined to a schema), and an outcome selected from a predefined schema (e.g., “fully met”, “somewhat met”, “unmet”, etc.). The triplet of the unstructured expectation quote, unstructured outcome quote, and selected outcome may be part of an expectation element collected by UX test framework 118. 
In other embodiments, an expectation element may include additional information associated with a user's expectations with product 102 and/or may omit one or more items from the triplet”; See Para 0033, “Result parser 126 may further extract additional information about individual qualitative elements and/or groups of qualitative elements, including attributes about the author of a quotation, what question a quotation is responding to, and what quantitative score the respondent gave to a facet of the user experience that is described by the quotation..”; and Para 0030, “… A UX test may assign scores based on the extracted information using a scoring function or machine learning, where a UX test score quantifies one or more user experiences with respect to one or more facets of the user experience..”), an adoption category (Rate at which new users begin using a product or feature, See at least Para 0029, “Example metrics may include…how long it takes a user to first interact with a user interface element”), a success of tasks category (Efficiency and effectiveness, such as completion rates or error rates for specific features, See at least Para 0029, “Example metrics may include … how long it takes a user to complete a function, … how long it takes for pages of user interface 104 to load.”; See also Para 0035, “… a UX test may search customer databases and/or other sources to determine what percentage of users have returned a product, submitted a customer support ticket, or submitted a product complaint.”), an engagement category (Level of user involvement, such as frequency or depth of interaction, See at least Para 0029, “Example metrics may include … how long a user engages with product 102, … which products features are most frequently accessed, and which product features are least frequently accessed.”), and a retention category (Rate at which existing users return over time, See at least Para 0029, “Example metrics may include … which products features are 
most frequently accessed, and which product features are least frequently accessed.”; See also Para 0035, “… a UX test may search customer databases and/or other sources to determine what percentage of users have returned a product”; Para 0037, “As another example, UX test engine 122 may capture webpage usage metrics from the set of visitors using scripting tags and/or scrape review sites for information describing product 102, as previously described.”) (See at least Para 0029, “Example metrics may include how long it takes a user to first interact with a user interface element, how long it takes a user to complete a function, how long a user engages with product 102, how long it takes for pages of user interface 104 to load, which products features are most frequently accessed, and which product features are least frequently accessed.”; Para 0032, “As another example, UX test engine 124 may capture webpage usage metrics from the set of visitors using scripting tags and/or scrape review sites for information describing product 102, as previously described. The tests may be run in accordance with the parameters input through UX test editor 122. 
The results of a UX test may include qualitative elements describing the user experience and/or quantitative elements that quantify the user experience.”); generating, via the one or more processors, a happiness score based, at least in part on the first user feedback data, wherein the happiness score indicates responses to one or more products of the CSP, one or more services of the CSP, or one or more features of the CSP (See at least Para 0030, “A UX test may assign scores based on the extracted information using a scoring function or machine learning, where a UX test score quantifies one or more user experiences with respect to one or more facets of the user experience.”; Para 0033, and Para 0054-0055); generating, via the one or more processors, a mindshare score based, at least in part on the second user feedback data, wherein the mindshare score indicates responses to one or more products of the CSP, one or more services of the CSP, or one or more features of the CSP (See at least Para 0030, “Additionally or alternatively, a UX test may obtain information about user experiences from other data sources. For example, a web scraper may crawl one or more websites for user reviews of a product to extract information about which product features are viewed most positively, which product features are viewed most negatively, what scores have been assigned for different features of the product, and what overall product score has been assigned. 
… A UX test may assign scores based on the extracted information using a scoring function or machine learning, where a UX test score quantifies one or more user experiences with respect to one or more facets of the user experience."; and Para 0054-0055); generating, via the one or more processors, one or more other scores for individual ones of the one or more other categories (See at least Para 0030-0033, Para 0043-0044, and Para 0054-0055); generating a unified user experience (UX) score, via the one or more processors, based at least in part on the happiness score, the mindshare score and the one or more other scores, wherein each category associated with the scores contributes a predetermined percentage of the unified UX score; and causing information about the unified UX score to be provided to a computing device associated with a user (See at least Figs. 2-3, Para 0052-0054, Para 0068-0070 and Table 1, and Para 0080, summarization and display of findings).

*Please note: A recitation of the intended use of the claimed invention must result in a structural difference between the claimed invention and the prior art in order to patentably distinguish the claimed invention from the prior art. If the prior art structure is capable of performing the intended use, then it meets the claim. Functional recitations have been considered but given less patentable weight because they fail to add any steps and are thereby regarded as intended-use language. A recitation of the intended use of the claimed invention must result in additional steps. See Bristol-Myers Squibb Co. v. Ben Venue Laboratories, Inc., 246 F.3d 1368, 1375-76, 58 USPQ2d 1508, 1513 (Fed. Cir. 2001) (where the language in a method claim states only a purpose and intended result, the expression does not result in a manipulative difference in the steps of the claim).
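The "predetermined percentage" limitation describes a fixed-weight aggregation. As a hedged sketch only: the claim recites no specific weights, so the percentages below are invented for illustration; only the structure (fixed weights summing to 100% across the happiness, mindshare, and other categories) tracks the claim language:

```python
# Fixed category weights (illustrative values only; the claim recites none).
WEIGHTS = {"happiness": 0.30, "mindshare": 0.25, "adoption": 0.15,
           "task_success": 0.10, "engagement": 0.10, "retention": 0.10}
assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # percentages total 100%

def unified_ux_score(scores):
    """Weighted sum: each category contributes its predetermined share."""
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

category_scores = {"happiness": 80, "mindshare": 70, "adoption": 60,
                   "task_success": 90, "engagement": 75, "retention": 85}
print(round(unified_ux_score(category_scores), 1))  # 75.5
```

Because the weights are fixed in advance rather than learned or data-dependent, the unified score is a deterministic function of the per-category scores, which is the structure the examiner characterizes as "correlating/calculating the data."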
As per Claims 2 and 14, Garvey discloses wherein the operations are associated with at least one of a service, a product, or a feature performed within a network associated with the cloud service provider, and wherein generating the metrics is based, at least in part, on the monitoring (See at least Para 0029-0030, and 0052).

As per Claim 3, Garvey discloses wherein generating the one or more other scores comprises: determining, via the one or more processors, an adoption score associated with the adoption category; determining, via the one or more processors, a success of tasks score associated with the success of tasks category; determining, via the one or more processors, an engagement score associated with the engagement category; and determining, via the one or more processors, a retention score associated with the retention category (BRI, See at least Para 0029-0035, 0043-0044).

As per Claims 4 and 15, Garvey discloses wherein determining the happiness score comprises: performing sentiment analysis on instances of the first user feedback data associated with a predetermined time period to generate happiness scores; and generating an overall happiness score based on the happiness scores (See at least Para 0029-0034, 0054-0055).

As per Claims 5 and 16, Garvey discloses wherein determining the mindshare score comprises: performing sentiment analysis on instances of the second user feedback data associated with a predetermined time period to generate mindshare scores; and generating an overall mindshare score based on the mindshare scores (See at least Para 0029-0034, 0054-0055).
As per Claims 6 and 17, Garvey discloses wherein generating at least one of the happiness score or the mindshare score comprises providing individual instances of the user feedback to a machine learning environment that uses a sentiment model to generate an output, wherein the output comprises one of a positive output value that is indicative of a satisfied sentiment, a neutral output value that is indicative of a neutral sentiment, or a negative output value that is indicative of a negative sentiment (See at least Fig. 1, Para 0014, 0024, 0029-0038, 0054-0055).

As per Claim 7, Garvey discloses wherein generating the unified UX score comprises: generating a happiness contribution of the happiness score to the unified UX score; generating a mindshare contribution of the mindshare score to the unified UX score; and generating one or more other contributions for the one or more other scores to the unified UX score (See at least Para 0029-0034, 0054-0055).

As per Claim 8, Garvey discloses wherein generating the mindshare score comprises performing sentiment analysis on the second user feedback that is obtained from a social networking platform (See at least Para 0029-0034).

As per Claim 9, Garvey discloses wherein the user feedback is associated with an organization (See at least Para 0029-0030).

As per Claim 10, Garvey discloses determining a theme for individual instances of the user feedback data (See at least Para 0029-0034, 0061-0063).

As per Claim 11 (10), Garvey discloses wherein determining the theme comprises generating, by the one or more processors and using a theme machine learning model, individual themes for the individual instances of the user feedback data (See at least Para 0029-0035, 0061-0064).

As per Claims 12 (11) and 18, Garvey discloses wherein one or more of the happiness score or the mindshare score is adjusted based on one or more of the individual themes generated by the theme machine learning model (See at least Para 0029-0035, 0061-0064).
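Claims 6 and 17 recite a three-valued sentiment output, and claims 12 and 18 recite a theme-based score adjustment. The sketch below illustrates one plausible reading of those limitations; the label names, the mapping of [-1, 1] sentiment onto a 0-100 score, and the per-theme factors are all assumptions for illustration, not values disclosed in the application or in Garvey:

```python
# Three-valued sentiment output (claims 6/17): satisfied / neutral / negative.
SENTIMENT_VALUE = {"satisfied": 1.0, "neutral": 0.0, "negative": -1.0}

def happiness_score(labels):
    """Average per-instance sentiment outputs, mapped onto a 0-100 score."""
    mean = sum(SENTIMENT_VALUE[l] for l in labels) / len(labels)
    return (mean + 1.0) / 2.0 * 100.0  # map [-1, 1] -> [0, 100]

def adjust_for_theme(score, theme):
    """Theme-based adjustment (claims 12/18); factors are hypothetical."""
    factors = {"billing": 0.90, "performance": 1.00, "ui": 1.05}
    return score * factors.get(theme, 1.0)

labels = ["satisfied", "satisfied", "neutral", "negative"]
print(round(adjust_for_theme(happiness_score(labels), "ui"), 1))
```

The same aggregation shape would apply to the mindshare score over the externally sourced feedback instances.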
As per Claim 20, Garvey discloses wherein the processing to be performed further comprises: performing sentiment analysis on first instances of the first user feedback data to generate happiness scores; performing sentiment analysis on second instances of the second user feedback data to generate mindshare scores; generating an overall happiness score based on the happiness scores; and generating an overall mindshare score based on the mindshare scores (See at least Para 0029-0034, 0054-0055).

Response to Arguments

Applicant's arguments filed on 11/10/2025, with respect to Claims 1-20, have been considered but are not persuasive. The claimed limitations are found in the prior art as stated and mapped in the rejection above. The rejection remains FINAL.

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

The Applicant has argued that the claims are directed to patent-eligible subject matter. However, while the claims are directed to a process, machine, manufacture, or composition of matter (Step 1), they fail to recite limitations that are "significantly more" than an abstract idea (Steps 2A-2B).
The claim limitations, under their broadest reasonable interpretation, recite certain methods of organizing human activity as defined in the guidance set forth in the 2019 Memorandum. This is so because the claimed limitations recite steps that all relate to the business concept of determining customer satisfaction: gathering/measuring data, correlating/calculating the data, and presenting the final customer-satisfaction data. Accordingly, the Examiner concludes that the claims recite the judicial exception of certain methods of organizing human activity.

Having determined that the claims recite a judicial exception, the analysis under the Memorandum turns to whether there are "additional elements that integrate the judicial exception into a practical application." See Memorandum (Step 2A, prong 2); see also MPEP § 2106.05(a)-(c), (e)-(h). This judicial exception is not integrated into a practical application because the combination of additional elements fails to integrate it into a practical application within the meaning defined in the Subject Matter Eligibility Guidelines. The Examiner notes the following: while computer technology makes the steps easier to perform, the steps can, in principle, be performed without such a computer, and the notion of 'practicality' is not evidenced.
'Practicality' is based on whether the invention demonstrates:

- Improvements to the functioning of a computer, or to any other technology or technical field - see MPEP 2106.05(a)
- Applying or using a judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition - see Vanda Memo
- Applying the judicial exception with, or by use of, a particular machine - see MPEP 2106.05(b)
- Effecting a transformation or reduction of a particular article to a different state or thing - see MPEP 2106.05(c)
- Applying or using the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception - see MPEP 2106.05(e) and Vanda Memo

The claims are simply directed to an abstract idea (searching, correlating, and transmitting/displaying data based on saved rules and characteristics) with additional generic computer elements, because the generically recited computer elements do not add a meaningful limitation to the abstract idea, and because they amount to simply implementing the abstract idea on a computer.

Finally, the examination proceeds to evaluating whether the claims add specific limitations beyond the judicial exception that are not "well-understood, routine, conventional" in the field (see MPEP § 2106.05(d)), or simply append well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception. See Memorandum (Step 2B). The claims do not add specific limitations beyond what is well-understood, routine, and conventional. As such, there is no inventive concept sufficient to transform the claimed subject matter into a patent-eligible application, and the claims do not amount to significantly more than the abstract idea itself. The Examiner therefore maintains the 35 USC 101 rejections.
Applicant's remaining arguments are addressed in the rejection above.

Conclusion

The prior art made of record and not relied upon, which is considered pertinent to applicant's disclosure, can be found in the PTO-892 Notice of References Cited. The Examiner suggests the applicant review all of these documents before submitting any amendments, especially the following, which are incorporated by reference into Garvey et al. (US 2024/0320591 A1, Para 0002), cited in the 102(a)(2) rejection above:

Garvey et al. (US 11,836,591 B1) - Scalable Systems And Methods For Curating User Experience Test Results.

Garvey et al. (US 12,079,585 B1) - Scalable Systems And Methods For Discovering And Summarizing Test Result Facts (See at least Abstract, "Techniques are described herein for producing machine-generated findings given a set of user experience test results. In some embodiments, the system generates the findings using an artificial intelligence and machine learning engine. The findings may highlight areas that are predicted to provide the most insight into optimizing a product's design. A finding may be generated based on all or a subset of the test result elements, including qualitative and/or quantitative data contained therein. A finding may summarize a subset of the UX test results that are interrelated."; C9L58-C10L3, "FIG. 3 illustrates an example process for summarizing test result facts in accordance with some embodiments. As previously noted, the operations for generating findings may vary between different types of UX test elements and finding generators. Examples of how the operations may vary are described in the subsections below. However, FIG. 3 illustrates a generalized process for summarizing test results, which may be executed by different types of generators to produce findings. One or more operations illustrated in FIG. 3 may be modified, rearranged, or omitted all together. Accordingly, the particular sequence of operations illustrated in FIG. 3 should not be construed as limiting the scope of one or more embodiments."; C35L33-L38, "In some embodiments, the system may provide recommendations and/or trigger actions directed to optimizing a product based on the machine-generated findings. The recommendations and/or actions that are triggered may vary depending on the text of the finding summary, associated child findings, and/or associated references.").

Garvey et al. (US 2024/0354789 A1) - QUANTITATIVE SPLIT DRIVEN QUOTE SEGMENTATION.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JONATHAN P OUELLETTE, whose telephone number is (571) 272-6807. The examiner can normally be reached M-F, 8am-6pm.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Lynda C Jasmin, can be reached at (571) 272-6782. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from Patent Center. Status information for published applications may be obtained from Patent Center. Status information for unpublished applications is available through Patent Center for authorized users only. Should you have questions about access to Patent Center, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.

December 29, 2025
/JONATHAN P OUELLETTE/
Primary Examiner, Art Unit 3629

Prosecution Timeline

Oct 11, 2023
Application Filed
Jul 09, 2025
Non-Final Rejection — §101, §102
Nov 10, 2025
Response Filed
Dec 29, 2025
Final Rejection — §101, §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591860 · OPERATIONAL SIMULATIONS OF PLANNED MAINTENANCE FOR VEHICLES
2y 5m to grant · Granted Mar 31, 2026
Patent 12586043 · Social Match Platform Apparatus, Method, and System
2y 5m to grant · Granted Mar 24, 2026
Patent 12586038 · INTELLIGENT SYSTEM AND METHOD OF OPTIMIZING CROSS-TEAM INFORMATION FLOW
2y 5m to grant · Granted Mar 24, 2026
Patent 12572599 · SYSTEMS AND METHODS OF GENERATING DYNAMIC ASSOCIATIONS BASED ON USER OBJECT ATTRIBUTES
2y 5m to grant · Granted Mar 10, 2026
Patent 12567037 · LEARNING ACCELERATION USING INSIGHT-ASSISTED INTRODUCTIONS
2y 5m to grant · Granted Mar 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
66%
Grant Probability
96%
With Interview (+30.0%)
3y 9m
Median Time to Grant
Moderate
PTA Risk
Based on 1140 resolved cases by this examiner. Grant probability derived from career allow rate.
