Prosecution Insights
Last updated: April 18, 2026
Application No. 18/796,205

METHODS FOR GENERATING DATA INSIGHTS USING AI AND NATURAL LANGUAGE PROCESSING

Final Rejection under §103

Filed: Aug 06, 2024
Examiner: WARNER, PHILIP N
Art Unit: 3624
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Jones Lang LaSalle IP, Inc.
OA Round: 2 (Final)

Grant Probability: 36% (At Risk)
Expected OA Rounds: 3-4
Expected Time to Grant: 3y 7m
Grant Probability With Interview: 65%

Examiner Intelligence

Career Allow Rate: 36% (39 granted / 107 resolved; -15.6% vs TC avg)
Interview Lift: +28.6% allowance for resolved cases with an interview vs without
Avg Prosecution: 3y 7m typical timeline (28 applications currently pending)
Career History: 135 total applications across all art units
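The interview-lift figure follows directly from the other numbers on the card. A minimal sketch of the arithmetic (assuming, as the card layout suggests, that the lift is the with-interview allowance rate minus the overall career rate; the underlying per-case data is not published in this report):

```python
# Reconstruct the Examiner Intelligence headline figures from the
# published aggregates. Only the counts and the 65% "With Interview"
# rate appear in the report; per-case interview data is not available.
granted, resolved = 39, 107

career_allow_rate = granted / resolved          # 39/107
with_interview_rate = 0.65                      # "65% With Interview"
interview_lift = with_interview_rate - career_allow_rate

print(f"Career allow rate: {career_allow_rate:.1%}")  # 36.4%
print(f"Interview lift:    {interview_lift:+.1%}")    # +28.6%
```

The 36% headline is the rounded 39/107 ratio, and the +28.6% lift is exactly the gap between the 65% with-interview rate and that career baseline.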

Statute-Specific Performance

§101: 31.8% (-8.2% vs TC avg)
§103: 53.8% (+13.8% vs TC avg)
§102: 9.5% (-30.5% vs TC avg)
§112: 4.9% (-35.1% vs TC avg)

Deltas are measured against the Tech Center average estimate • Based on career data from 107 resolved cases
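Each delta above is the examiner's per-statute allowance rate minus the Tech Center baseline. As a quick consistency check (a sketch, not data from the report), subtracting each published delta from its rate recovers the same 40% baseline estimate for every statute:

```python
# Published (allowance rate %, delta vs TC avg %) pairs from the
# Statute-Specific Performance section above.
stats = {
    "§101": (31.8, -8.2),
    "§103": (53.8, +13.8),
    "§102": (9.5, -30.5),
    "§112": (4.9, -35.1),
}

# Implied Tech Center baseline is rate - delta for each statute.
for statute, (rate, delta) in stats.items():
    print(f"{statute}: implied TC avg = {rate - delta:.1f}%")
```

All four statutes imply a 40.0% baseline, consistent with a single Tech Center average estimate behind the reported deltas.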

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. The following FINAL Office Action is in response to Applicant’s communication filed 01/22/2026 regarding Application 18/796,205.

Status of Claim(s)

Claim(s) 1-20 is/are currently pending and are rejected as follows.

Response to Arguments – 102/103 Rejection

Applicant’s arguments and amendments regarding the previously applied prior art rejection have been fully considered but are not deemed persuasive. Applicant argues that Garvey fails to teach the limitation of “a survey comprising inquiries to collect market data” and therefore does not read on Applicant’s claimed invention. Examiner does not find the argument persuasive: Garvey is directed to an invention with the capacity to use surveys to collect information from users regarding their experience with a product. One of ordinary skill in the art would regard this type of information, a user’s experience or ranking of various facets of their interactions, as market data, since it is indicative of how users prefer various products and of their opinions on the objects they interact with and purchase. This information is then used to determine insights regarding the user experience which, under the broadest reasonable interpretation, is equivalent to “inquiries related to market data,” such as how a customer prefers the design of their products. Therefore, the previously applied prior art of Garvey remains applicable to the recited limitation. Further citations and elaborations regarding this determination are given in the prior art rejection below.

Applicant’s additional arguments are rendered moot in view of the amended prior art rejection below.
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim(s) 1-3, 5-10, 12-17, and 19-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Garvey (US 2024/0320591 A1) in view of Sanders (US 2014/0012780 A1).

Claim(s) 1, 8, and 15 – Garvey discloses the following:

one or more processors; (Garvey: Paragraph 130, "For example, FIG. 4 illustrates a computer system in accordance with some embodiments. Computer system 400 includes a bus 402 or other communication mechanism for communicating information, and a hardware processor 404 coupled with bus 402 for processing information. Hardware processor 404 may be, for example, a general-purpose microprocessor.")

a memory comprising programmed instructions (Garvey: Paragraph 131, "Computer system 400 also includes a main memory 406, such as a random-access memory (RAM) or other dynamic storage device, coupled to bus 402 for storing information and instructions to be executed by processor 404. Main memory 406 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 404. Such instructions, when stored in non-transitory storage media accessible to processor 404, render computer system 400 into a special-purpose machine that is customized to perform the operations specified in the instructions.")

A non-transitory computer readable medium (Garvey: Paragraph 131, "Computer system 400 also includes a main memory 406, such as a random-access memory (RAM) or other dynamic storage device, coupled to bus 402 for storing information and instructions to be executed by processor 404. Main memory 406 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 404. Such instructions, when stored in non-transitory storage media accessible to processor 404, render computer system 400 into a special-purpose machine that is customized to perform the operations specified in the instructions.")

receiving, by a computing device, a response from a client device to a survey comprising inquiries to collect market data, wherein the survey is transmitted to the client device and the response is received from the client device via a link; (Garvey: Paragraph 28, "UX test framework 120 includes components for composing and running UX tests.
The components may include UX test editor 122, UX test engine 124, result parser 126, and AI integration engine 128. A UX test may comprise applications, tools, and/or processes for evaluating the performance of various facets of one or more user experiences with product 102. For example, a UX test may comprise a survey or questionnaire. Users of a website or a mobile application may be prompted to complete the UX test to evaluate their experience with product 102, which may be the website or application itself or a separate product. If the user accepts the prompt, the user may be redirected to a webpage with a set of queries to describe and/or rank various facets of the user experience with product 102."; Paragraph 31, "For example, UX test editor 122 may include one or more GUI elements through which a user may select predefined survey questions, input new questions, define scripts for capturing performance metrics, and/or otherwise customize test applications to evaluate user experiences with product 102. UX test editor 122 may further allow users to define parameters associated with running a UX test, such as what segment to target, what platform to use running the test, and/or other parameters controlling how the UX test is run."; Paragraph 32, "A UX test may include a query mechanism to prompt or search for data describing or quantifying one or more facets of a user experience. For example, UX test engine 124 may prompt a sample set of visitors to a webpage to complete a survey describing and/or ranking various facets of a user experience with product 102. As another example, UX test engine 124 may capture webpage usage metrics from the set of visitors using scripting tags and/or scrape review sites for information describing product 102, as previously described. The tests may be run in accordance with the parameters input through UX test editor 122.
The results of a UX test may include qualitative elements describing the user experience and/or quantitative elements that quantify the user experience."; Paragraph 105, "In some embodiments, the AI-generated comparison data may trigger one or more automated actions. For example, the comparison data may be used to select between different versions of a website to launch live based on which version satisfied performance goals with respect to a group of test respondents. As another example, the data may be used to merge different versions of a website, selecting user interface components that yielded more positive insights. In yet another example, the AI-generated comparison data may be used to dynamically select between different versions of a website for different visitors to the website based on one or more user attributes. For instance, if the server detects, based on user attributes extracted through HTTP cookies or survey questions, that the user is a hiker, then one version of the website may be rendered on the visitor's browser. Otherwise, the server may select and render a different version of the website. Additionally or alternatively, the comparison data may be used to decorate, annotate, and/or otherwise highlight different aspects of a prototype that performed comparatively well and/or poorly with respect to different versions of the same user experience or competing user experiences. Thus, the comparison data may drive design decisions and actions to optimize user experiences."; Paragraph 120, "In some embodiments, network resources (such as digital devices, virtual machines, application instances, and threads) corresponding to different tenants are isolated to tenant-specific overlay networks maintained by the multi-tenant computer network. As an example, packets from any source device in a tenant overlay network may only be transmitted to other devices within the same tenant overlay network.
Encapsulation tunnels are used to prohibit any transmissions from a source device on a tenant overlay network to devices in other tenant overlay networks. Specifically, the packets, received from the source device, are encapsulated within an outer packet. The outer packet is transmitted from a first encapsulation tunnel endpoint (in communication with the source device in the tenant overlay network) to a second encapsulation tunnel endpoint (in communication with the destination device in the tenant overlay network). The second encapsulation tunnel endpoint decapsulates the outer packet to obtain the original packet transmitted by the source device. The original packet is transmitted from the second encapsulation tunnel endpoint to the destination device in the same particular overlay network.")

providing, by the computing device to the client device, a graphical user interface comprising the insight data. (Garvey: Paragraph 82, "Additionally or alternatively, the AI-generated analysis may be consumed by other applications and processes to trigger other actions directed at optimizing product designs. For example, the first finding in the AI-generated analysis illustrated in Table 2 includes insights into recommended design modifications such as "changing the color palette, reducing the visual complexity, adding content about other degrees, adding some variation to the imagery, and adding cost." As previously noted, the analytics may be mapped to recommended or automatically-implemented actions. In the present example, the AI-generated insights may be mapped to actions such as changing the color palette of the webpage, adding/modifying user interface elements on the webpage to include additional content, or removing user interface elements to reduce clutter.
Additionally or alternatively, the insights may be used to decorate or create a prototype for an updated version of the webpage, which may be presented to the webpage designer for review."; Paragraph 90, "The result of applying generative language model 322 at generate analysis operation 206 is test analyses 308a-n. The analysis document that is created by generative language model 322 may include detailed insights into the results of each user experience. For example, the analysis document may include summaries of user expectations, diagnostics, heatmaps, responses to custom questions, test goals, quantitative splits, and analytics for other types of test elements as previously discussed. The analysis document may further include supporting test results references with links to the references.")

Garvey does not explicitly disclose the following; however, in analogous art of market research and analysis, Sanders discloses the following:

generating, by the computing device, insight data comprising portfolio optimization data based on marketing data using a machine learning model, wherein the marketing data is generated using a natural language processor by analyzing a prompt based on the response received via the link from the client device; and (Sanders: Paragraph 48, “The solution generator 101 may also process information provided by an unaccredited individual crowd investor computer device 109 via an input device or a survey generator 114, which receives the information via a survey. The solution generator 101 performs the processing based on a variety of different types of information, such as the unaccredited individual crowd investor's role, expertise, interest in product/service, and amount available for investment commitment.”; Paragraph 145, “Referring to FIG.
4 previously, showing a listing of All Companies 802, New Deals, Deal Prospects, Qualified Deals, Portfolio, Deal Rounds, Portfolio Tracking, Board Meetings and Investor Rounds that are stored for viewing and for management of the process. Clicking on the sub-tab All Companies displays a list view of all companies in the system that can be viewed as a company list 806 and that includes the company name, company type, city, industry focus, website and phone number in line format 808. Clicking on a specific company name to drill down provides additional details and displays many more details pertaining to the specific company. A user can also search for companies by entering the name, city, or company type in the search area 804 and clicking on the search button 805. A user can also view tasks, events, activities, reports, funds and LPs associated with the company 810 by either clicking on the respective tab or scrolling down below the company detail view for summary information once in the detail view of a specific company.”; Paragraph 151, “FIG. 13 is a schematic diagram illustrating the planning loop structure. The diagram is a representation of an iterative process that refines results in a loop. Input goes through a controller 1302, which processes the information to output an action, where such action then is influenced and corrected by external non-related factors that the system 1304 adjusts and fine tunes to output a resulting system state. The resulting system state then may become the input into the controller once again and go through the iterative process of refinement once again to end up in an even more refined and enhanced resulting system state that becomes an ever more accurate representation relative to the previous resulting system state. This is precisely what planning loop structures refer to as described herein. 
The iterative process of self-correction so as to improve the resulting system state uses past experiences, correlations, clusters, and other relevant data including random factors depending on the process. This is a form of machine learning.”; Paragraph 166, “A system that combines unaccredited investors with strategic institutional and nonprofit investors for simultaneous participation,”; Paragraph 174, “A method of crowd funding that automates investment decision-making and recommends companies to investors,”; Paragraph 262, “Recommending clusters that have highest coefficient of determination of the indices so computed to automate matching and closing of the investment cycle for unaccredited individual crowd investors and institutional/nonprofit investors simultaneously with companies in multiple rounds of funding using planning loop structures.”)

Garvey discloses a method of machine-learning-assisted test analysis. Sanders discloses a method for survey-inclusive market analysis regarding portfolios. At the time of Applicant’s filed invention, one of ordinary skill in the art would have deemed it obvious to combine the methods of Garvey with the teachings of Sanders in order to improve the efficiency of investment decisions as disclosed by Sanders (Sanders: Paragraph 10, “Recommended investments are expected to be more capital efficient than what is possible with other systems and methods of the prior art.”).

Claim(s) 2, 9, and 16 – Garvey in view of Sanders discloses the limitations of claims 1, 8, and 15. Garvey further discloses the following:

wherein the survey comprises inquiries related to a market, a transaction, or real estate client preferences. (Garvey: Paragraph 98, "A detailed example is described below for purposes of clarity. Components and/or operations described below should be understood as one specific example which may not be applicable to certain embodiments.
Accordingly, components and/or operations described below should not be construed as limiting the scope of any of the claims."; Paragraph 99, "In the example below, the process begins with an analysis of two UX tests. The two tests are for the same experience, but the first targets previous customers and the second targets a more general population of hikers. Table 3 shows the AI-generated analysis conducted for previous customers."; Table 3)

Claim(s) 3, 10, and 17 – Garvey discloses the limitations of claims 1, 8, and 15. Garvey further discloses the following:

store the marketing data and the insight data in a centralized repository; (Garvey: Paragraph 61, "At operation 204, the process selects a first context for analysis. In some embodiments, the process determines what contexts are associated with the UX test that was run. To determine the context, the process may parse the UX test, UX test results, and/or UX test metadata to identify categories and sub-categories associated with various test elements. The UX test may determine which categories and/or sub-categories are mapped to a stored context within context database 202. For example, the UX test may determine that a portion of a UX test includes a set of diagnostic elements, including positive, neutral, and negative sub-categories. Each sub-category may be mapped to a different context or the entire category may be mapped to the same context, depending on the particular implementation."; Paragraph 65, "At operation 216, the process interacts with a generative language model to generate findings, which are stored in findings database 210. In this operation, the process fetches the prompt fragments that are relevant to the context from prompt fragments database 220 and uses the fragments to orchestrate a dialogue with the generative language model. In some embodiments, the prompt fragments vary depending on the context.
For example, a context for positive diagnostics may be mapped to a different set of prompt fragments than a context for expectations that have been met, which may be different than heatmap clicks, etc. The process may construct a dialogue using the prompt fragments, conditioned UX test result data, and/or conditioned findings.")

receive a query from the client device relating to the marketing data or the insight data in the centralized repository; and (Garvey: Paragraph 32, "UX test engine 124 runs tests defined through UX test editor 122. A UX test may include a query mechanism to prompt or search for data describing or quantifying one or more facets of a user experience. For example, UX test engine 124 may prompt a sample set of visitors to a webpage to complete a survey describing and/or ranking various facets of a user experience with product 102. As another example, UX test engine 124 may capture webpage usage metrics from the set of visitors using scripting tags and/or scrape review sites for information describing product 102, as previously described. The tests may be run in accordance with the parameters input through UX test editor 122. The results of a UX test may include qualitative elements describing the user experience and/or quantitative elements that quantify the user experience."; Paragraph 44, "Data repository 148 stores and fetches data including UX test results 150, trained models 152, and rules 154. In some embodiments, data repository 148 is any type of storage unit and/or device (e.g., a file system, database, collection of tables, or any other storage mechanism) for storing data. Further, data repository 148 may include multiple different storage units and/or devices. The multiple different storage units and/or devices may or may not be of the same type or located at the same physical site. Further, data repository 148 may be implemented or executed on the same computing system as one or more other components of system architecture 100.
Alternatively or additionally, data repository 148 may be implemented or executed on a computing system separate from one or more other system components. Data repository 148 may be communicatively coupled to remote components via a direct connection or via a network.")

transmit a query response to the client device. (Garvey: Paragraph 61, "At operation 204, the process selects a first context for analysis. In some embodiments, the process determines what contexts are associated with the UX test that was run. To determine the context, the process may parse the UX test, UX test results, and/or UX test metadata to identify categories and sub-categories associated with various test elements. The UX test may determine which categories and/or sub-categories are mapped to a stored context within context database 202. For example, the UX test may determine that a portion of a UX test includes a set of diagnostic elements, including positive, neutral, and negative sub-categories. Each sub-category may be mapped to a different context or the entire category may be mapped to the same context, depending on the particular implementation."; Paragraph 65, "At operation 216, the process interacts with a generative language model to generate findings, which are stored in findings database 210. In this operation, the process fetches the prompt fragments that are relevant to the context from prompt fragments database 220 and uses the fragments to orchestrate a dialogue with the generative language model. In some embodiments, the prompt fragments vary depending on the context. For example, a context for positive diagnostics may be mapped to a different set of prompt fragments than a context for expectations that have been met, which may be different than heatmap clicks, etc.
The process may construct a dialogue using the prompt fragments, conditioned UX test result data, and/or conditioned findings."; Paragraph 72, "If any findings result from operation 216, then the findings are persisted for later use in findings database 210. Additionally, the contexts may be updated based on the generated result. An example is the addition of a "test goal" context if the automation has generated at least one "diagnostic & positive" or "diagnostic & negative" finding. A test goal context may interface with the generative language model to generate findings relating to a goal of running the UX test as a function of the one or more previously generated diagnostic findings. Additionally or alternatively, the generated finding results may trigger other types of context updates to create more complex findings. Another example is finding quantitative splits in the UX test results, where quotes may be segmented into "in" groups and "out" groups based on previous quantitative findings, such as described in U.S. application Ser. No. 18/306,030. Such context updates also provide feedback to the system, which may be used to make runtime adjustments to the system to optimize results."; Paragraph 73, "After generating findings for a given context, the process loops back to operation 204, where the next context is fetched and the process proceeds accordingly. This cyclic structure allows the process to perform specific analysis and dynamically populate paths based on collected results. The process may generate findings for each result type and recursively uplevel the findings until the results satisfy a threshold set of parameters.")

Claim(s) 5, 12, and 19 – Garvey in view of Sanders discloses the limitations of claims 1, 8, and 15. Garvey further discloses the following:

receiving, by the computing device from the client device, an executive summary request and input relating to the executive summary request; (Garvey: Paragraph 60, "Referring to FIG.
2, the process starts at operation 200. In some embodiments, the process may be triggered in response to detecting that a new set of UX test results has been received from a plurality of test respondents. In other embodiments, a user, such as an analyst or product designer, may initiate the process. For example, UX test framework 120 may provide a user interface component that allows the user to request an AI-based analysis for a specified set of UX test results."; Paragraph 70, "The next message of the automated process shown in Table 1 includes a summary of the UX test and expectation element. Specifically, the message includes an outcome distribution for a common expectation and a curated set of quotes for the different outcomes. The summary and conditioned result data is combined with the message fragment specifying the task, "Summarize this result by describing why respondents felt their expectation was and was not met." The next element in the dialogue is a finding summary output by the generative language model for the expectation element. In response to receiving the output, the automated process follows up with a third prompt as follows: "Rephrase with as few words as possible. Drop sentences that describe statistics. Do not include a setup sentence and avoid repetition." The subsequent output by the generative language model is then packaged as the finding for the given expectation element.")

retrieving, by the computing device, related data to the executive summary request or the input from a centralized repository; (Garvey: Paragraph 70, "The next message of the automated process shown in Table 1 includes a summary of the UX test and expectation element. Specifically, the message includes an outcome distribution for a common expectation and a curated set of quotes for the different outcomes.
The summary and conditioned result data is combined with the message fragment specifying the task, "Summarize this result by describing why respondents felt their expectation was and was not met." The next element in the dialogue is a finding summary output by the generative language model for the expectation element. In response to receiving the output, the automated process follows up with a third prompt as follows: "Rephrase with as few words as possible. Drop sentences that describe statistics. Do not include a setup sentence and avoid repetition." The subsequent output by the generative language model is then packaged as the finding for the given expectation element."; Paragraph 79, "The process cycles through all contexts until there are none left. When the process has finished with the contexts, then at operation 212, the process combines the available findings into a draft version of the AI-generated analysis. Thus, the analysis document may include the set of packaged summaries, decorated reference quotes, and/or hyperlinks to utilized quotes across a set of different UX test contexts. Table 2 depicts an example analysis of a webpage created using the techniques above.")

transmitting, by the computing device, a prompt comprising the executive summary request, the input, and the related data to a large language model; (Garvey: Paragraph 79, "The process cycles through all contexts until there are none left. When the process has finished with the contexts, then at operation 212, the process combines the available findings into a draft version of the AI-generated analysis. Thus, the analysis document may include the set of packaged summaries, decorated reference quotes, and/or hyperlinks to utilized quotes across a set of different UX test contexts. Table 2 depicts an example analysis of a webpage created using the techniques above."; Paragraph 80, "At operation 218, an analyst may curate and supplement the analysis.
An analyst may wish to review the AI-generated analysis before providing the results to a customer to ensure the quality of the results. The analyst may remove, add to, or otherwise modify the analysis, including the finding summaries, supporting references, and hyperlinks. Any changes may be used as feedback to the AI system to retrain or tune the ML models. For instance, a change in the selected quotes may be fed back into the curation model, which may use a learning algorithm, such as backpropagation, to adjust weights, bias values, and/or other model parameters. The AI-generated analysis may be added to analysis database 222, before the process ends at operation 224.")

receiving, by the computing device, the executive summary from the large language model; (Garvey: Paragraph 79, "The process cycles through all contexts until there are none left. When the process has finished with the contexts, then at operation 212, the process combines the available findings into a draft version of the AI-generated analysis. Thus, the analysis document may include the set of packaged summaries, decorated reference quotes, and/or hyperlinks to utilized quotes across a set of different UX test contexts. Table 2 depicts an example analysis of a webpage created using the techniques above."; Paragraph 80, "At operation 218, an analyst may curate and supplement the analysis. An analyst may wish to review the AI-generated analysis before providing the results to a customer to ensure the quality of the results. The analyst may remove, add to, or otherwise modify the analysis, including the finding summaries, supporting references, and hyperlinks. Any changes may be used as feedback to the AI system to retrain or tune the ML models. For instance, a change in the selected quotes may be fed back into the curation model, which may use a learning algorithm, such as backpropagation, to adjust weights, bias values, and/or other model parameters.
The AI-generated analysis may be added to analysis database 222, before the process ends at operation 224.")

and transmitting, by the computing device, the executive summary to the client device. (Garvey: Paragraph 81, "In some embodiments, analysis database 222 may store a queue of analyses for an analyst to review before presentation to a customer. The AI-generated analysis may significantly increase the work throughput of the analyst and response time between receiving UX test results and providing the customer with insights into their product design. In other embodiments, the AI-generated analysis may be presented directly to the product designer or other end users. The process may create a webpage, application page, and/or other interface that renders the analysis through a browser or client application. The page may include the selectable hyperlinks to the AI-curated quotes that, when selected by the user, direct the user to the location of the quote within the UX test results."; Paragraph 90, "The result of applying generative language model 322 at generate analysis operation 206 is test analyses 308a-n. The analysis document that is created by generative language model 322 may include detailed insights into the results of each user experience. For example, the analysis document may include summaries of user expectations, diagnostics, heatmaps, responses to custom questions, test goals, quantitative splits, and analytics for other types of test elements as previously discussed. The analysis document may further include supporting test results references with links to the references."; Paragraph 96, "Process 300 next combines each summary (test summaries 314a-n) with content fragments 316. The content fragments may be used to construct dialogue prompts and perform a dialogue with generative language model 322 at a final stage (e.g., 2nd or 3rd stage in the illustrated process diagram) of the analysis at operation 318.
For example, a dialogue prompt may be created requesting the model to compare the key findings of different analyses. The dialogue may prompt the model to identify key strengths and/or weaknesses of a user experience relative to others (e.g., different versions of the same experience or different experiences)."; Paragraph 128, "In some embodiments, the microservice application may generate and provide an output based on input that identifies, locates, or provides historical data, and defines the extent or scope of the requested output. The action, when triggered, causes the microservice application to provide, store, or display the output, for example, as a data model or as aggregate data that describes a data model.") Claim(s) 6, 13, and 20 – Garvey in view of Sanders discloses the limitations of claims 1, 8, and 15. Garvey further discloses the following: receiving, by the computing device from the client device, a document request; (Garvey: Paragraph 60, "Referring to FIG. 2, the process starts at operation 200. In some embodiments, the process may be triggered in response to detecting that a new set of UX test results has been received from a plurality of test respondents. In other embodiments, a user, such as an analyst or product designer, may initiate the process. For example, UX test framework 120 may provide a user interface component that allows the user to request an AI-based analysis for a specified set of UX test results."; Paragraph 79, "The process cycles through all contexts until there are none left. When the process has finished with the contexts, then at operation 212, the process combines the available findings into a draft version of the AI-generated analysis. Thus, the analysis document may include the set of packaged summaries, decorated reference quotes, and/or hyperlinks to utilized quotes across a set of different UX test contexts.
Table 2 depicts an example analysis of a webpage created using the techniques above.") modifying and providing, by the computing device, the graphical user interface to the client device, wherein the modified graphical user interface comprises an interactive chat configured to request and receive input from the client device for the document request; (Garvey: Paragraph 79, "The process cycles through all contexts until there are none left. When the process has finished with the contexts, then at operation 212, the process combines the available findings into a draft version of the AI-generated analysis. Thus, the analysis document may include the set of packaged summaries, decorated reference quotes, and/or hyperlinks to utilized quotes across a set of different UX test contexts. Table 2 depicts an example analysis of a webpage created using the techniques above."; Paragraph 93, "Process 300 next combines each analysis (test analyses 308a-n) with summarization fragments 310. Summarization fragments 310 may be used to construct dialogue prompts and conduct a dialogue with generative language model 322 at a second stage of the analysis (summarize analysis operation 312). For example, a dialogue prompt may be created that requests the model to generate a bullet point summary of the key findings related to an analysis. Additionally or alternatively other prompts may be generated and submitted to generative language model 322. Additional examples are given below."; Paragraph 96, "Process 300 next combines each summary (test summaries 314a-n) with content fragments 316. The content fragments may be used to construct dialogue prompts and perform a dialogue with generative language model 322 at a final stage (e.g., 2.sup.nd or 3.sup.rd stage in the illustrates process diagram) of the analysis at operation 318. For example, a dialogue prompt may be created requesting the model to compare the key findings of different analyses.
The dialogue may prompt the model to identify key strengths and/or weaknesses of a user experience relative to others (e.g., different versions of the same experience or different experiences)."; Paragraph 104, "Also, as illustrated above, various prompts are created and submitted to the generative language model to generate the final comparison result set. In some cases, the prompts may be submitted in a predefined order. In other embodiments, the prompts may be generated dynamically based on feedback/model output. For instance, the prompts "Make sure the result is a bulleted list without organizational headers" may be generated responsive to detecting that the output is not in a bulleted list form and/or includes an organizational header but be omitted otherwise. Similarly, the process may parse the dialogue outputs of the model and determine which prompts to submit next dynamically.") transmitting, by the computing device, a prompt comprising the document request and the input to a large language model; (Garvey: Paragraph 79, "The process cycles through all contexts until there are none left. When the process has finished with the contexts, then at operation 212, the process combines the available findings into a draft version of the AI-generated analysis. Thus, the analysis document may include the set of packaged summaries, decorated reference quotes, and/or hyperlinks to utilized quotes across a set of different UX test contexts. Table 2 depicts an example analysis of a webpage created using the techniques above."; Paragraph 93, "Process 300 next combines each analysis (test analyses 308a-n) with summarization fragments 310. Summarization fragments 310 may be used to construct dialogue prompts and conduct a dialogue with generative language model 322 at a second stage of the analysis (summarize analysis operation 312).
For example, a dialogue prompt may be created that requests the model to generate a bullet point summary of the key findings related to an analysis. Additionally or alternatively other prompts may be generated and submitted to generative language model 322. Additional examples are given below."; Paragraph 96, "Process 300 next combines each summary (test summaries 314a-n) with content fragments 316. The content fragments may be used to construct dialogue prompts and perform a dialogue with generative language model 322 at a final stage (e.g., 2.sup.nd or 3.sup.rd stage in the illustrates process diagram) of the analysis at operation 318. For example, a dialogue prompt may be created requesting the model to compare the key findings of different analyses. The dialogue may prompt the model to identify key strengths and/or weaknesses of a user experience relative to others (e.g., different versions of the same experience or different experiences)."; Paragraph 104, "Also, as illustrated above, various prompts are created and submitted to the generative language model to generate the final comparison result set. In some cases, the prompts may be submitted in a predefined order. In other embodiments, the prompts may be generated dynamically based on feedback/model output. For instance, the prompts "Make sure the result is a bulleted list without organizational headers" may be generated responsive to detecting that the output is not in a bulleted list form and/or includes an organizational header but be omitted otherwise. Similarly, the process may parse the dialogue outputs of the model and determine which prompts to submit next dynamically.") receiving, by the computing device, a document from the large language model, wherein the document meets requirements of the document request and comprises the input from the client device; and (Garvey: Paragraph 79, "The process cycles through all contexts until there are none left.
When the process has finished with the contexts, then at operation 212, the process combines the available findings into a draft version of the AI-generated analysis. Thus, the analysis document may include the set of packaged summaries, decorated reference quotes, and/or hyperlinks to utilized quotes across a set of different UX test contexts. Table 2 depicts an example analysis of a webpage created using the techniques above."; Paragraph 93, "Process 300 next combines each analysis (test analyses 308a-n) with summarization fragments 310. Summarization fragments 310 may be used to construct dialogue prompts and conduct a dialogue with generative language model 322 at a second stage of the analysis (summarize analysis operation 312). For example, a dialogue prompt may be created that requests the model to generate a bullet point summary of the key findings related to an analysis. Additionally or alternatively other prompts may be generated and submitted to generative language model 322. Additional examples are given below."; Paragraph 96, "Process 300 next combines each summary (test summaries 314a-n) with content fragments 316. The content fragments may be used to construct dialogue prompts and perform a dialogue with generative language model 322 at a final stage (e.g., 2.sup.nd or 3.sup.rd stage in the illustrates process diagram) of the analysis at operation 318. For example, a dialogue prompt may be created requesting the model to compare the key findings of different analyses. The dialogue may prompt the model to identify key strengths and/or weaknesses of a user experience relative to others (e.g., different versions of the same experience or different experiences)."; Paragraph 104, "Also, as illustrated above, various prompts are created and submitted to the generative language model to generate the final comparison result set. In some cases, the prompts may be submitted in a predefined order.
In other embodiments, the prompts may be generated dynamically based on feedback/model output. For instance, the prompts "Make sure the result is a bulleted list without organizational headers" may be generated responsive to detecting that the output is not in a bulleted list form and/or includes an organizational header but be omitted otherwise. Similarly, the process may parse the dialogue outputs of the model and determine which prompts to submit next dynamically.") transmitting, by the computing device, the document to the client device. (Garvey: Paragraph 79, "The process cycles through all contexts until there are none left. When the process has finished with the contexts, then at operation 212, the process combines the available findings into a draft version of the AI-generated analysis. Thus, the analysis document may include the set of packaged summaries, decorated reference quotes, and/or hyperlinks to utilized quotes across a set of different UX test contexts. Table 2 depicts an example analysis of a webpage created using the techniques above."; Paragraph 93, "Process 300 next combines each analysis (test analyses 308a-n) with summarization fragments 310. Summarization fragments 310 may be used to construct dialogue prompts and conduct a dialogue with generative language model 322 at a second stage of the analysis (summarize analysis operation 312). For example, a dialogue prompt may be created that requests the model to generate a bullet point summary of the key findings related to an analysis. Additionally or alternatively other prompts may be generated and submitted to generative language model 322. Additional examples are given below."; Paragraph 96, "Process 300 next combines each summary (test summaries 314a-n) with content fragments 316. 
The content fragments may be used to construct dialogue prompts and perform a dialogue with generative language model 322 at a final stage (e.g., 2.sup.nd or 3.sup.rd stage in the illustrates process diagram) of the analysis at operation 318. For example, a dialogue prompt may be created requesting the model to compare the key findings of different analyses. The dialogue may prompt the model to identify key strengths and/or weaknesses of a user experience relative to others (e.g., different versions of the same experience or different experiences)."; Paragraph 104, "Also, as illustrated above, various prompts are created and submitted to the generative language model to generate the final comparison result set. In some cases, the prompts may be submitted in a predefined order. In other embodiments, the prompts may be generated dynamically based on feedback/model output. For instance, the prompts "Make sure the result is a bulleted list without organizational headers" may be generated responsive to detecting that the output is not in a bulleted list form and/or includes an organizational header but be omitted otherwise. Similarly, the process may parse the dialogue outputs of the model and determine which prompts to submit next dynamically.") Claim(s) 7 and 14 – Garvey in view of Sanders discloses the limitations of claims 1, 6, 8, and 13. Garvey further discloses the following: receiving, by the computing device, edits for the document from the client device; (Garvey: Paragraph 80, "At operation 218, an analyst may curate and supplement the analysis. An analyst may wish to review the AI-generated analysis before providing the results to a customer to ensure the quality of the results. The analyst may remove, add to, or otherwise modify the analysis, including the finding summaries, supporting references, and hyperlinks. Any changes may used as feedback to the AI system to retrain or tune the ML models.
For instance, a change in the selected quotes may be fed back into the curation model, which may use a learning algorithm, such as backpropagation, to adjust weights, bias values, and/or other model parameters. The AI-generated analysis may be added to analysis database 222, before the process ends at operation 224."; Paragraph 81, "In some embodiments, analysis database 222 may store a queue of analyses for an analyst to review before presentation to a customer. The AI-generated analysis may significantly increase the work throughput of the analyst and response time between receiving UX test results and providing the customer with insights into their product design. In other embodiments, the AI-generated analysis may be presented directly to the product designer or other end users. The process may create a webpage, application page, and/or other interface that renders the analysis through a browser or client application. The page may include the selectable hyperlinks to the AI-curated quotes that, when selected by the user, direct the user to the location of the quote within the UX test results."; Paragraph 82, "Additionally or alternatively, the AI-generated analysis may be consumed by other applications and processes to trigger other actions directed at optimizing product designs. For example, the first finding in the AI-generated analysis illustrated in Table 2 includes insights into recommended design modifications such as "changing the color palette, reducing the visual complexity, adding content about other degrees, adding some variation to the imagery, and adding cost." As previously noted, the analytics may be mapped to recommended or automatically-implemented actions. In the present example, the AI-generated insights may be mapped to actions such as changing the color palette of the webpage, adding/modifying user interface elements on the webpage to include additional content, or removing user interface elements to reduce clutter. 
Additionally or alternatively, the insights may be used to decorate or create a prototype for an updated version of the webpage, which may be presented to the webpage designer for review."; Paragraph 104, "Also, as illustrated above, various prompts are created and submitted to the generative language model to generate the final comparison result set. In some cases, the prompts may be submitted in a predefined order. In other embodiments, the prompts may be generated dynamically based on feedback/model output. For instance, the prompts "Make sure the result is a bulleted list without organizational headers" may be generated responsive to detecting that the output is not in a bulleted list form and/or includes an organizational header but be omitted otherwise. Similarly, the process may parse the dialogue outputs of the model and determine which prompts to submit next dynamically.") modifying, by the computing device, the document using the edits from the client device; and (Garvey: Paragraph 80, "At operation 218, an analyst may curate and supplement the analysis. An analyst may wish to review the AI-generated analysis before providing the results to a customer to ensure the quality of the results. The analyst may remove, add to, or otherwise modify the analysis, including the finding summaries, supporting references, and hyperlinks. Any changes may used as feedback to the AI system to retrain or tune the ML models. For instance, a change in the selected quotes may be fed back into the curation model, which may use a learning algorithm, such as backpropagation, to adjust weights, bias values, and/or other model parameters. The AI-generated analysis may be added to analysis database 222, before the process ends at operation 224."; Paragraph 81, "In some embodiments, analysis database 222 may store a queue of analyses for an analyst to review before presentation to a customer. 
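The feedback loop quoted from Garvey's paragraph 80 — analyst edits fed back to adjust the curation model's weights via a learning algorithm such as backpropagation — reduces, in its simplest form, to a gradient-descent update. The following is an assumed one-weight sketch of that idea, not Garvey's actual model; the feature and label encoding are hypothetical:

```python
# Illustrative sketch (assumed, not Garvey's implementation): an analyst's
# quote edits become training signal, and a gradient step nudges a one-weight
# linear scorer toward ranking kept quotes higher than removed ones.

def gradient_step(weight: float, feature: float, label: float, lr: float = 0.1) -> float:
    """One gradient-descent update on squared error for a 1-weight linear scorer.

    label is 1.0 if the analyst kept the quote, 0.0 if the analyst removed it.
    """
    prediction = weight * feature
    error = prediction - label
    # d/dw of 0.5 * error**2 is error * feature
    return weight - lr * error * feature

w = 0.0
# Analyst kept a quote with feature value 1.0, so repeated updates move the
# weight toward scoring such quotes at 1.0.
for _ in range(50):
    w = gradient_step(w, feature=1.0, label=1.0)
```

A real curation model would have many parameters updated by backpropagation through multiple layers, but each parameter update follows this same error-times-gradient pattern.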
The AI-generated analysis may significantly increase the work throughput of the analyst and response time between receiving UX test results and providing the customer with insights into their product design. In other embodiments, the AI-generated analysis may be presented directly to the product designer or other end users. The process may create a webpage, application page, and/or other interface that renders the analysis through a browser or client application. The page may include the selectable hyperlinks to the AI-curated quotes that, when selected by the user, direct the user to the location of the quote within the UX test results."; Paragraph 82, "Additionally or alternatively, the AI-generated analysis may be consumed by other applications and processes to trigger other actions directed at optimizing product designs. For example, the first finding in the AI-generated analysis illustrated in Table 2 includes insights into recommended design modifications such as "changing the color palette, reducing the visual complexity, adding content about other degrees, adding some variation to the imagery, and adding cost." As previously noted, the analytics may be mapped to recommended or automatically-implemented actions. In the present example, the AI-generated insights may be mapped to actions such as changing the color palette of the webpage, adding/modifying user interface elements on the webpage to include additional content, or removing user interface elements to reduce clutter. Additionally or alternatively, the insights may be used to decorate or create a prototype for an updated version of the webpage, which may be presented to the webpage designer for review."; Paragraph 104, "Also, as illustrated above, various prompts are created and submitted to the generative language model to generate the final comparison result set. In some cases, the prompts may be submitted in a predefined order. 
In other embodiments, the prompts may be generated dynamically based on feedback/model output. For instance, the prompts "Make sure the result is a bulleted list without organizational headers" may be generated responsive to detecting that the output is not in a bulleted list form and/or includes an organizational header but be omitted otherwise. Similarly, the process may parse the dialogue outputs of the model and determine which prompts to submit next dynamically.") providing, by the computing device, the modified document to the client device. (Garvey: Paragraph 80, "At operation 218, an analyst may curate and supplement the analysis. An analyst may wish to review the AI-generated analysis before providing the results to a customer to ensure the quality of the results. The analyst may remove, add to, or otherwise modify the analysis, including the finding summaries, supporting references, and hyperlinks. Any changes may used as feedback to the AI system to retrain or tune the ML models. For instance, a change in the selected quotes may be fed back into the curation model, which may use a learning algorithm, such as backpropagation, to adjust weights, bias values, and/or other model parameters. The AI-generated analysis may be added to analysis database 222, before the process ends at operation 224."; Paragraph 81, "In some embodiments, analysis database 222 may store a queue of analyses for an analyst to review before presentation to a customer. The AI-generated analysis may significantly increase the work throughput of the analyst and response time between receiving UX test results and providing the customer with insights into their product design. In other embodiments, the AI-generated analysis may be presented directly to the product designer or other end users. The process may create a webpage, application page, and/or other interface that renders the analysis through a browser or client application.
The page may include the selectable hyperlinks to the AI-curated quotes that, when selected by the user, direct the user to the location of the quote within the UX test results."; Paragraph 82, "Additionally or alternatively, the AI-generated analysis may be consumed by other applications and processes to trigger other actions directed at optimizing product designs. For example, the first finding in the AI-generated analysis illustrated in Table 2 includes insights into recommended design modifications such as "changing the color palette, reducing the visual complexity, adding content about other degrees, adding some variation to the imagery, and adding cost." As previously noted, the analytics may be mapped to recommended or automatically-implemented actions. In the present example, the AI-generated insights may be mapped to actions such as changing the color palette of the webpage, adding/modifying user interface elements on the webpage to include additional content, or removing user interface elements to reduce clutter. Additionally or alternatively, the insights may be used to decorate or create a prototype for an updated version of the webpage, which may be presented to the webpage designer for review."; Paragraph 104, "Also, as illustrated above, various prompts are created and submitted to the generative language model to generate the final comparison result set. In some cases, the prompts may be submitted in a predefined order. In other embodiments, the prompts may be generated dynamically based on feedback/model output. For instance, the prompts "Make sure the result is a bulleted list without organizational headers" may be generated responsive to detecting that the output is not in a bulleted list form and/or includes an organizational header but be omitted otherwise. Similarly, the process may parse the dialogue outputs of the model and determine which prompts to submit next dynamically.") Claim(s) 4, 11, and 18 is/are rejected under 35 U.S.C. 
103 as being unpatentable over Garvey (US 2024/0320591 A1) in view of Sanders (US 2014/0012780 A1) and Panuganty (US 2020/0210647 A1). Claim(s) 4, 11, and 18 – Garvey in view of Sanders discloses the limitations of claims 1, 3, 8, 10, 15, and 17. Garvey in view of Sanders does not explicitly disclose the following; however, in analogous art of research and data analysis, Panuganty discloses the following: receiving, by the computing device, a login request from the client device; (Panuganty: Paragraph 149, "Various implementations identify user distinctions for ambiguous words as anecdotal data. To further illustrate, consider a scenario illustrated by tablet 906 that is in process of outputting playlist content 908. In this scenario, the personalized analytics system receives an input request for analytic assistance from the personalized analytics system, such as via an input query through a search field similar search field 804 of FIG. 8A (not illustrated here). The input query includes an ambiguous term which the personalized analytics does not have enough data to resolve, such as the term "Washington" that can refer to Washington State or Washington D.C. Various implementations prompt for input corresponding to additional context information and/or clarification, and store the additional information as anecdotal data associated with a corresponding user profile and/or workspace. For instance, the user interface of tablet 906 includes control 922 that corresponds to Washington State, and control 924 that corresponds to Washington D.C. In turn, user 926 actuates control 922 to provide additional context information that is received and stored by the personalized analytics system as anecdotal data."; Paragraph 162, "In various implementations, curation engine module 1102 applies and/or utilizes user-defined rules, such as rules that prioritize database access, rules that prioritize what data to update more frequently relative to other data, etc.
For instance, a user can create a workspace associated with the personalized analytics system such that the user assigns each workspace to a particular database and/or data source. This directs the curation engine module 1102 to curate data from the identified data source. Alternately or additionally, the user can assign a collection of particular databases and/or data sources to the workspace. As yet another example, a user can assign a login name and password to the workspace to secure and/or restrict access to curated data so that only authorized users with valid user credentials can access to the curated data."; Paragraph 163, "Some implementations of the curation engine module 1102 identify and/or generate inter-data relationship information, and store this information in relational module 1108. Alternately or additionally, relational module 1108 represents data relationships identified by the curation engine module that are used to form data structures within a corresponding database. In one or more implementations, curation engine module 1102 automatically triggers the data curation process without receiving explicit input associated with initiating the process, but alternate or additional implementations trigger the data curation process in response to receiving explicit input to initiate data curation. Access to the curated data can be restricted and/or limited to a single user profile and/or workspace, and/or can be distributed across multiple user profiles and/or workspaces, such as user profiles and/or workspaces associated with a same organization. 
This allows the curated data and/or analytics generated for a first user in the organization to be leveraged for analytics associated with a second user of the organization, thus improving the efficiency of the personalized analytics system across the organization since the information is shared, rather than repeatedly generated for each user profile."; Paragraph 163, "Personalized analytics system 1100 also includes story narrator module 1116 and animator module 1118 to generate a narrated analytic playlist from the identified insights. Story narrator module 1116 receives the output generated by the insight engine module 1114, and determines how to articulate, explain, and/or augment a corresponding description of the output. To illustrate, consider a scenario in which story narrator module 1116 receives, from the insight engine module, an insight that corresponds to a graph and/or data corresponding to a sales trend for a product in a particular state. In response to receiving this input, the story narrator module determines to generate a graph to visually display this information. Alternately or additionally, the story narrator module determines that supplemental information, such as sales trends for the product in neighboring states, could augment, explain, or further clarify a context associated with the sales trend in the particular state. Accordingly, in some implementations, the story narrator module includes a feedback loop to parser module 1110, query magnifier module 1112, and/or insight engine module 1114 to request additional insight information and/or request a query analysis be performed for the supplemental information. In various implementations, the story narrator module 1116 bundles and forwards information to the animator module to indicate what visual and/or audible information to include in the narrated analytics playlist. 
For example, the story narrator module 1116 can include charts, facts, text-based descriptive narratives, metadata, and other information corresponding to the insights, in the bundled information."; Paragraph 151, "At 1002, various implementations access a personalized analytics system. For example, a client device that includes a client application of the personalized analytics system (e.g., client analytics module 108) and/or a browser can access a server application of the personalized analytics system. This can include logging on to a particular workspace associated with the personalized analytics system, such as through the use of various types of authentication procedures. Thus, accessing the personalized analytics system can include logging onto a locally executing application and/or accessing remote applications as further described herein. Any suitable type of client device can be utilized, examples of which are provided herein.") authenticating, by the computing device, the client device based on login data in the login request; (Panuganty: Paragraph 149, "Various implementations identify user distinctions for ambiguous words as anecdotal data. To further illustrate, consider a scenario illustrated by tablet 906 that is in process of outputting playlist content 908. In this scenario, the personalized analytics system receives an input request for analytic assistance from the personalized analytics system, such as via an input query through a search field similar search field 804 of FIG. 8A (not illustrated here). The input query includes an ambiguous term which the personalized analytics does not have enough data to resolve, such as the term "Washington" that can refer to Washington State or Washington D.C. Various implementations prompt for input corresponding to additional context information and/or clarification, and store the additional information as anecdotal data associated with a corresponding user profile and/or workspace.
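The workspace-level login that Panuganty's paragraphs 151 and 162 describe — a login name and password gating access to curated data, checked through an authentication procedure — can be sketched as a minimal credential check. The registry contents, user name, and salted-hash scheme below are hypothetical illustrations, not Panuganty's disclosed implementation:

```python
# Illustrative sketch (assumed names): workspace credential check of the kind
# Panuganty describes, where only authorized users with valid credentials
# can access a workspace's curated data.
import hashlib
import hmac

# Hypothetical workspace registry mapping login names to (salt, password hash).
_WORKSPACE_USERS = {
    "analyst01": ("salt123", hashlib.sha256(b"salt123" + b"s3cret").hexdigest()),
}

def authenticate(login: str, password: str) -> bool:
    """Return True only for a known login whose password hash matches."""
    record = _WORKSPACE_USERS.get(login)
    if record is None:
        return False
    salt, stored_hash = record
    candidate = hashlib.sha256(salt.encode() + password.encode()).hexdigest()
    # Constant-time comparison to avoid leaking information via timing.
    return hmac.compare_digest(candidate, stored_hash)
```

A production system would use a dedicated password-hashing function and per-user random salts; the point here is only the shape of the login-then-authenticate flow recited in the claim.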
For instance, the user interface of tablet 906 includes control 922 that corresponds to Washington State, and control 924 that corresponds to Washington D.C. In turn, user 926 actuates control 922 to provide additional context information that is received and stored by the personalized analytics system as anecdotal data."; Paragraph 162, "In various implementations, curation engine module 1102 applies and/or utilizes user-defined rules, such as rules that prioritize database access, rules that prioritize what data to update more frequently relative to other data, etc. For instance, a user can create a workspace associated with the personalized analytics system such that the user assigns each workspace to a particular database and/or data source. This directs the curation engine module 1102 to curate data from the identified data source. Alternately or additionally, the user can assign a collection of particular databases and/or data sources to the workspace. As yet another example, a user can assign a login name and password to the workspace to secure and/or restrict access to curated data so that only authorized users with valid user credentials can access to the curated data."; Paragraph 163, "Some implementations of the curation engine module 1102 identify and/or generate inter-data relationship information, and store this information in relational module 1108. Alternately or additionally, relational module 1108 represents data relationships identified by the curation engine module that are used to form data structures within a corresponding database. In one or more implementations, curation engine module 1102 automatically triggers the data curation process without receiving explicit input associated with initiating the process, but alternate or additional implementations trigger the data curation process in response to receiving explicit input to initiate data curation.
Access to the curated data can be restricted and/or limited to a single user profile and/or workspace, and/or can be distributed across multiple user profiles and/or workspaces, such as user profiles and/or workspaces associated with a same organization. This allows the curated data and/or analytics generated for a first user in the organization to be leveraged for analytics associated with a second user of the organization, thus improving the efficiency of the personalized analytics system across the organization since the information is shared, rather than repeatedly generated for each user profile."; Paragraph 163, "Personalized analytics system 1100 also includes story narrator module 1116 and animator module 1118 to generate a narrated analytic playlist from the identified insights. Story narrator module 1116 receives the output generated by the insight engine module 1114, and determines how to articulate, explain, and/or augment a corresponding description of the output. To illustrate, consider a scenario in which story narrator module 1116 receives, from the insight engine module, an insight that corresponds to a graph and/or data corresponding to a sales trend for a product in a particular state. In response to receiving this input, the story narrator module determines to generate a graph to visually display this information. Alternately or additionally, the story narrator module determines that supplemental information, such as sales trends for the product in neighboring states, could augment, explain, or further clarify a context associated with the sales trend in the particular state. Accordingly, in some implementations, the story narrator module includes a feedback loop to parser module 1110, query magnifier module 1112, and/or insight engine module 1114 to request additional insight information and/or request a query analysis be performed for the supplemental information. 
In various implementations, the story narrator module 1116 bundles and forwards information to the animator module to indicate what visual and/or audible information to include in the narrated analytics playlist. For example, the story narrator module 1116 can include charts, facts, text-based descriptive narratives, metadata, and other information corresponding to the insights, in the bundled information."; Paragraph 151, "At 1002, various implementations access a personalized analytics system. For example, a client device that includes a client application of the personalized analytics system (e.g., client analytics module 108) and/or a browser can access a server application of the personalized analytics system. This can include logging on to a particular workspace associated with the personalized analytics system, such as through the use of various types of authentication procedures. Thus, accessing the personalized analytics system can include logging onto a locally executing application and/or accessing remote applications a further described herein. Any suitable type of client device can be utilized, examples of which are provided herein.") receiving, by the computing device, the query related to the marketing data or the insight data; and (Panuganty: Paragraph 149, "Various implementations identify user distinctions for ambiguous words as anecdotal data. To further illustrate, consider a scenario illustrated by tablet 906 that is in process of outputting play list content 908. In this scenario, the personalized analytics system receives an input request for analytic assistance from the personalized analytics system, such as via an input query through a search field similar search field 804 of FIG. 8A (not illustrated here). The input query includes an ambiguous term which the personalized analytics does not have enough data to resolve, such as the term "Washington" that can refer to Washington State or Washington D.C. 
Various implementations prompt for input corresponding to additional context information and/or clarification, and store the additional information as anecdotal data associated with a corresponding user profile and/or workspace. For instance, the user interface of tablet 906 includes control 922 that corresponds to Washington State, and control 924 that corresponds to Washington D.C. In turn, user 926 actuates control 922 to provide additional context information that is received and stored by the personalized analytics system as anecdotal data."; Paragraph 162, "In various implementations, curation engine module 1102 applies and/or utilizes user-defined rules, such as rules that prioritize database access, rules that prioritize what data to update more frequently relative to other data, etc. For instance, a user can create a workspace associated with the personalized analytics system such that the user assigns each workspace to a particular database and/or data source. This directs the curation engine module 1102 to curate data from the identified data source. Alternately or additionally, the user can assign a collection of particular databases and/or data sources to the workspace. As yet another example, a user can assign a login name and password to the workspace to secure and/or restrict access to curated data so that only authorized users with valid user credentials can access to the curated data."; Paragraph 163, "Some implementations of the curation engine module 1102 identify and/or generate inter-data relationship information, and store this information in relational module 1108. Alternately or additionally, relational module 1108 represents data relationships identified by the curation engine module that are used to form data structures within a corresponding database. 
In one or more implementations, curation engine module 1102 automatically triggers the data curation process without receiving explicit input associated with initiating the process, but alternate or additional implementations trigger the data curation process in response to receiving explicit input to initiate data curation. Access to the curated data can be restricted and/or limited to a single user profile and/or workspace, and/or can be distributed across multiple user profiles and/or workspaces, such as user profiles and/or workspaces associated with a same organization. This allows the curated data and/or analytics generated for a first user in the organization to be leveraged for analytics associated with a second user of the organization, thus improving the efficiency of the personalized analytics system across the organization since the information is shared, rather than repeatedly generated for each user profile."; Paragraph 163, "Personalized analytics system 1100 also includes story narrator module 1116 and animator module 1118 to generate a narrated analytic playlist from the identified insights. Story narrator module 1116 receives the output generated by the insight engine module 1114, and determines how to articulate, explain, and/or augment a corresponding description of the output. To illustrate, consider a scenario in which story narrator module 1116 receives, from the insight engine module, an insight that corresponds to a graph and/or data corresponding to a sales trend for a product in a particular state. In response to receiving this input, the story narrator module determines to generate a graph to visually display this information. Alternately or additionally, the story narrator module determines that supplemental information, such as sales trends for the product in neighboring states, could augment, explain, or further clarify a context associated with the sales trend in the particular state. 
Accordingly, in some implementations, the story narrator module includes a feedback loop to parser module 1110, query magnifier module 1112, and/or insight engine module 1114 to request additional insight information and/or request a query analysis be performed for the supplemental information. In various implementations, the story narrator module 1116 bundles and forwards information to the animator module to indicate what visual and/or audible information to include in the narrated analytics playlist. For example, the story narrator module 1116 can include charts, facts, text-based descriptive narratives, metadata, and other information corresponding to the insights, in the bundled information."; Paragraph 151, "At 1002, various implementations access a personalized analytics system. For example, a client device that includes a client application of the personalized analytics system (e.g., client analytics module 108) and/or a browser can access a server application of the personalized analytics system. This can include logging on to a particular workspace associated with the personalized analytics system, such as through the use of various types of authentication procedures. Thus, accessing the personalized analytics system can include logging onto a locally executing application and/or accessing remote applications a further described herein. Any suitable type of client device can be utilized, examples of which are provided herein.") generating, by the computing device, the query response using the machine learning model by: o tokenizing the query for key components; (Garvey: Paragraph 164, "Personalized analytics system 1100 also includes parser module 1110 and query magnifier module 1112 to analyze input query strings, and identify various permutations of the input query to use in extracting information from the curated data. 
For instance, parser module 1110 can parse input query strings into individual tokens and/or units, where the analyzed input query string originates from any suitable source, such as curation engine module 1102, user-defined schedules, event-based triggers, feedback loops from other modules included in the personalized analytics system, etc. Thus, parsing an input query string can be done in real-time based on receiving an explicit user-input query, based on receiving a trigger event, based on scheduled query interval(s), based on determining the personalized analytics system 1100 is idle ( e.g., a lack of user interaction with the personalized analytics system), etc. In response to parsing the input query string into individual tokens, various implementations of the parser module further analyze the individual tokens to identify keywords, context information, etc.") matching the key components to a cluster of vectors; and (Garvey: Paragraph 181, "Curation engine module 1200 also includes vocabulary generation module 1214 that determines alternate wording options for the data and/or information being curated. For example, various natural language processing algorithms and/or models can be employed to identify similar wording, such as sematic matching algorithms, approximate string matching, text classifier algorithms, word2vec algorithms, latent semantic analysis, clustering algorithms, bag-of-words models, document-term matrices, automatic summarization algorithms, tagging operations, etc. Curation engine module 1200 applies the alternate wordings in the curation process as a way to identify similar data and/or entities, and then adds the information generated using the alternate wordings into the various facets of curating data. As one example, a company entitled "My Big Company" can alternately be referred to as "MBG", "My Big Co.", "Big Co.", and so forth. 
Vocabulary generation module 1214 discerns when information with alternate naming conventions apply to a same entity, and builds corresponding attributes and/or relationship information to combine and/or associate the information from different sources of information to a same data point and/or entity, thus further enriching the information about that entity."; Paragraph 166, "The newly generated queries and/or the original input query are then used by insight engine module 1114 to extract information from the curated data. Insight engine module 1114 analyzes the extracted information to identify one or more insights, such as by applying various machine-learning algorithms to the extracted information. An insight can include any suitable type of information, such as a trend, a pattern, an anomaly, an outlier, predictive behavior, a contradiction, connections, benchmarks, market segments, etc. Accordingly, an insight sometimes corresponds to an actionable finding that is based upon data analysis. For example, a rate of growth in sales for a product corresponds to a factual insight that a user can base future actions off of, such as a low rate of growth indicating a change is needed, a high rate of growth indicating that current solutions are working, and so forth. 
Insight engine module 1114 can apply any suitable type of machine-learning model and/or algorithm to discover an insight, such as cluster analysis algorithms, association rule learning, anomaly detection algorithms, regression analysis algorithms, classification algorithms, summarization algorithms, deep learning algorithms, ensemble algorithms, Neural Network based algorithms, regularization algorithms, rule system algorithms, regression algorithms, Bayesian algorithms, decision tree algorithms, dimensionality reduction algorithms, Instance based algorithms, clustering algorithms, K-nearest neighbors algorithms, gradient descent algorithms, linear discriminant analysis, classification and regression trees, learning vector quantization, supporting vector machines, Bagged Decision Trees and Random Forest algorithms, boosting, etc. While the various algorithms described here are described in the context of being utilized to generate insights by the insight engine module 1114, it is to be appreciated that these algorithms can alternately or additionally be employed in other modules of the personalized analytics system 1100, such as a curation engine module 1102, a parser module 1110, query magnifier module 1112, a story narrator module 1116, an animator module 1118, and so forth.") generating the query response based on the cluster of vectors. (Panuganty: Paragraph 171, "The personalized analytics system 1100 also includes proximity platform module 1122. As further described herein, various modules included in the personalized analytics system incorporate machine-learning algorithms, modules, and/or models to aid in curating and/or analyzing data. Accordingly, as the machine-learning algorithms evolve, the corresponding output becomes more personalized, more relevant, and more accurate for the corresponding user profiles and/or workspaces relative to unevolved algorithms. 
Proximity platform module 1122 acquires the learned information and/or the evolved algorithm parameters without having visibility into the curated data and/or queries used to generate the learned information. To illustrate, consider a scenario in which a first organization has sensitive sales growth charts that plot an organization product against a competitor's product. In generating this sales growth chart, the personalized analytics system modifies various configurable parameters of a machine-learning algorithm. Proximity platform module 1122 extracts changes to the parameters and/or the absolute values of the changed parameters without visibility into the curated data and/or query analyses used to evolve the algorithm. The proximity platform can then propagate these changed parameters to a") Garvey discloses a method of machine learning assisted test analysis. Sanders discloses a method for survey inclusive market analysis in regards to portfolios. Panuganty discloses a method for machine learning assisted summarization of insight data.
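The claim limitations mapped above describe a three-step pipeline: tokenizing the query for key components, matching the key components to a cluster of vectors, and generating the query response based on the matched cluster. As a reading aid only, the sketch below illustrates one minimal way such a pipeline can work; every name in it (`tokenize`, `match_cluster`, `answer_query`, the toy clusters and stopword list) is hypothetical and is not code from the application or from any cited reference.

```python
import math
from collections import Counter

# Toy "curated data": each cluster name maps to representative documents.
# Entirely hypothetical data, for illustration only.
CLUSTERS = {
    "sales_trend": ["product sales trend state growth rate"],
    "survey_market": ["survey inquiry market data customer preference"],
}

STOPWORDS = {"the", "a", "is", "for", "in", "of", "what"}

def tokenize(query: str) -> list[str]:
    # Step 1: tokenize the query, keeping only key components
    # (non-stopword terms).
    return [t for t in query.lower().split() if t not in STOPWORDS]

def vectorize(tokens: list[str]) -> Counter:
    # Bag-of-words vector: term -> count.
    return Counter(tokens)

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def match_cluster(tokens: list[str]) -> str:
    # Step 2: match the key components to the nearest cluster of vectors.
    qv = vectorize(tokens)
    centroids = {
        name: vectorize(tokenize(" ".join(docs)))
        for name, docs in CLUSTERS.items()
    }
    return max(centroids, key=lambda name: cosine(qv, centroids[name]))

def answer_query(query: str) -> str:
    # Step 3: generate the query response based on the matched cluster.
    tokens = tokenize(query)
    cluster = match_cluster(tokens)
    return f"Insight drawn from cluster '{cluster}' for key components {tokens}"
```

For example, `answer_query("sales trend for the product")` resolves to the `sales_trend` cluster. A production system of the kind the references describe would use learned embeddings (e.g., word2vec, as Paragraph 181 mentions) rather than bag-of-words counts; the structure of the pipeline is the same.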
At the time of Applicant's filed invention, one of ordinary skill in the art would have deemed it obvious to combine the methods of Garvey in view of Sanders with the teachings of Panuganty in order to improve the efficiency and accuracy of large data sets as disclosed by Panuganty (Panuganty: Paragraph 67, "Accordingly, the volume of data accumulated by an organization from varying computer-based data sources, the speed at which the computer-based data is accumulated, as well as the differing formats in which the data can be stored, makes extracting accurate, current, and reliable insights from the data manually by a user insurmountable and difficult.").

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Philip N Warner whose telephone number is (571)270-7407. The examiner can normally be reached Monday-Friday 7am-4:00pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jerry O'Connor, can be reached at 571-272-787. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /Philip N Warner/Examiner, Art Unit 3624 /Jerry O'Connor/Supervisory Patent Examiner, Group Art Unit 3624
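The reply-deadline rules recited in the conclusion reduce to simple date arithmetic: a three-month shortened statutory period from the mailing date, extendable under 37 CFR 1.136(a), but never beyond the six-month statutory maximum. The sketch below illustrates that arithmetic with a hypothetical mailing date; it is an informal illustration, not USPTO tooling and not legal advice (in particular, it ignores the advisory-action interplay described above).

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    # Advance a date by whole calendar months, clamping the day-of-month
    # when the target month is shorter (e.g., Jan 31 + 1 month -> Feb 28).
    month_index = d.month - 1 + months
    year = d.year + month_index // 12
    month = month_index % 12 + 1
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    days = [31, 29 if leap else 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    return date(year, month, min(d.day, days[month - 1]))

def reply_deadlines(mailing_date: date) -> dict[str, date]:
    # Three-month shortened statutory period, extendable month-by-month
    # under 37 CFR 1.136(a) up to the six-month statutory cap.
    return {
        "shortened_statutory": add_months(mailing_date, 3),
        "absolute_statutory": add_months(mailing_date, 6),
    }
```

For a final action mailed April 2, 2026 (the date shown in the timeline below), `reply_deadlines(date(2026, 4, 2))` gives a shortened statutory deadline of July 2, 2026 and an absolute deadline of October 2, 2026.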

Prosecution Timeline

Aug 06, 2024
Application Filed
Oct 17, 2025
Non-Final Rejection — §103
Jan 22, 2026
Response Filed
Apr 02, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596974
MULTI-LAYER ABRASIVE TOOLS FOR CONCRETE SURFACE PROCESSING
2y 5m to grant Granted Apr 07, 2026
Patent 12596984
INFORMATION GENERATION APPARATUS, INFORMATION GENERATION METHOD AND PROGRAM
2y 5m to grant Granted Apr 07, 2026
Patent 12579490
GENERATING SUGGESTIONS WITHIN A DATA INTEGRATION SYSTEM
2y 5m to grant Granted Mar 17, 2026
Patent 12567011
BATTERY LEDGER MANAGEMENT SYSTEM AND METHOD OF BATTERY LEDGER MANAGEMENT
2y 5m to grant Granted Mar 03, 2026
Patent 12493819
UTILIZING MACHINE LEARNING MODELS TO GENERATE INITIATIVE PLANS
2y 5m to grant Granted Dec 09, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

3-4
Expected OA Rounds
36%
Grant Probability
65%
With Interview (+28.6%)
3y 7m
Median Time to Grant
Moderate
PTA Risk
Based on 107 resolved cases by this examiner. Grant probability derived from career allow rate.
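The with-interview figure above appears to combine the numbers additively: the 36% baseline grant probability plus the examiner's +28.6-point interview lift yields the ~65% estimate. A minimal sketch of that arithmetic; the additive model and the rounding behavior are assumptions inferred from the displayed figures, not a documented methodology.

```python
def with_interview(base_pct: float, lift_pct_points: float) -> int:
    # Assumed additive model: baseline grant probability plus interview
    # lift, both in percentage points, capped at 100 and rounded to the
    # nearest whole percent.
    return round(min(base_pct + lift_pct_points, 100.0))
```

Under this assumption, `with_interview(36.0, 28.6)` reproduces the 65% shown above.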
