Prosecution Insights
Last updated: April 19, 2026
Application No. 18/421,344

GENERATION OF DATA STORIES AND DATA SUMMARIES BASED ON USER QUERIES

Final Rejection: §101, §103
Filed: Jan 24, 2024
Examiner: ROSTAMI, MOHAMMAD S
Art Unit: 2154
Tech Center: 2100 — Computer Architecture & Software
Assignee: Adobe Inc.
OA Round: 4 (Final)
Grant Probability: 67% (Favorable)
OA Rounds: 5-6
To Grant: 3y 10m
With Interview: 93%

Examiner Intelligence

Career Allow Rate: 67% (above average; 425 granted / 635 resolved; +11.9% vs TC avg)
Interview Lift: +26.3% (strong; resolved cases with interview)
Avg Prosecution: 3y 10m (typical timeline; 37 currently pending)
Total Applications: 672 (career history, across all art units)

Statute-Specific Performance

§101: 21.3% (-18.7% vs TC avg)
§103: 54.9% (+14.9% vs TC avg)
§102: 9.7% (-30.3% vs TC avg)
§112: 4.4% (-35.6% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 635 resolved cases

Office Action

§101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

Claims 1-10, 12-16, 18 and 19 are pending, of which claims 1, 9 and 15 are in independent form. Claims 1-10, 12-16, 18 and 19 are rejected under 35 U.S.C. 101 (abstract idea). Claims 1-10, 12-16, 18 and 19 are rejected under 35 U.S.C. 103.

Response to Arguments

Applicant's arguments filed January 21, 2025 have been fully considered but they are not persuasive.

Applicant's Argument: Applicant argues, in pages 10-15 of the "Remarks", that:
- Applicant cannot find any teaching or suggestion in the Vangala reference of using a set of facts relevant to the user query to generate a data summary of the set of relevant facts. Vangala is further devoid of any teaching or suggestion of generating the data summary by (1) generating fact captions that each include natural language text representing a corresponding fact of the set of facts, and (2) generating fact contextual data, including numerical or categorical values related to one or more attributes associated with the fact captions, for the set of relevant facts, as in claim 1.
- Moreover, the cited references fail to teach or suggest providing a prompt, having the fact contextual data including the numerical or categorical values related to the one or more attributes appended to the corresponding fact captions for the set of relevant facts, to a large language model to obtain the data summary as output, as in claim 1.
- Each of the references, either alone or in combination, fails to teach or suggest any such generation of a fact score that is based on diversity of the particular fact and importance of the particular fact, as in claim 15.
- Applicant cannot find user feedback indicating a desired or undesired fact visualization associated with a fact visualization of a data story, as in claim 9.
- The reference, however, fails to teach or suggest using a refined user query to identify a refined set of facts relevant to the refined user query including the desired or undesired fact visualization, as in claim 9.
- The cited references fail to teach or suggest "initiating generation of a data summary, via a large language model, using the refined set of facts relevant to the refined user query including the desired or undesired fact visualization," as in claim 9.

Examiner's Response: Examiner respectfully disagrees; the combination of Xu, Vangala and Hughes clearly teaches:

using the set of facts relevant to the user query to generate a data summary of the set of relevant facts by: generating fact captions that each include natural language text representing a corresponding fact of the set of facts (Vangala: 1) "ranking" user-centric facts in the graph according to a scoring function and retrieving relatively higher-ranked user-centric facts (e.g., finding user-centric facts about the most relevant entities with regard to the context); and 2) "slicing" the user-centric graph to filter user-centric facts that satisfy a particular constraint (e.g., finding email-related user-centric facts that are connected to the context). Ranking, slicing, and other approaches to traversing/processing the user-centric graph will be described in further detail below with regard to FIGS. 3A-3B ¶ [0038]. In addition to being based on user-centric facts, the intelligently-selected content 121 may further be based on external and/or derived facts.
Non-limiting examples of external and/or derived facts include: 1) results of a web search related to the user-centric facts; 2) dictionary/encyclopedia/reference information related to the user-centric facts; 3) website content related to the user-centric facts; and/or 4) results of processing the user-centric facts according to programmatic, mathematical, statistical, and/or machine learning functions (e.g., computing a date or duration based on date values of user-centric facts associated with the context, for example, to suggest a possible scheduling of a future task; computing a natural-language summary or an answer to a query using a machine learning model) ¶ [0040]. Also see ¶ [0039] and [0066]. In some examples, multiple different candidate triggering facts may be recognized in the user-centric graph, to determine whether, for each candidate triggering fact, intelligently-selected content should be presented based on a context related to the triggering fact ¶ [0059] and [0144]. Examiner specifies that the NL summaries constitute fact captions under the BRI because they are textual descriptions corresponding to individual facts).

generating fact contextual data, including numerical or categorical values related to one or more attributes associated with the fact captions (Vangala: The signal strength value numerically represents a salience of the triggering fact for recognizing contexts ¶ [0060]. A query is a rank query to rank the plurality of user-centric facts based at least on a confidence value associated with each user-centric fact ¶ [0111]. Categorical values associated with facts ¶ [0033]-[0034]. Examiner specifies that Vangala discloses numerical values (signal strength, confidence value), categorical groupings (context/categories), and metadata associated with the facts; these correspond to fact contextual data with numerical or categorical attributes).
providing a prompt, having the fact contextual data including the numerical or categorical values related to the one or more attributes appended to the corresponding fact captions for the set of relevant facts, to a large language model to obtain the data summary as output (Hughes: reports generated by the method could include generating a natural language summary of the relevant facts that caused a correlation to arise by converting the relevant facts represented in the knowledge graph to natural language format, using such natural language reports as inputs to a LLM that had been sufficiently trained on domain relevant texts such as, in the case of a legal system, historical law reports and generating a further natural language report on the existence ¶ [0065]. Sending a text input (prompt) to the LLM, with NL text representing facts (fact captions) derived from a knowledge graph. Matches, or near matches according to a match threshold specified as a matter of policy by the user, are reported on. Various algorithms that efficiently search for pattern occurrences, matches or proximate matches may be deployed for this purpose. In an exemplary embodiment, continuous querying of a federated graph database system for correlations with the conditions required to establish a cause of action may be performed using node and node path similarity algorithms and may include using a lower-limit similarity value or score as a threshold condition to generate an output report ¶ [0065]; this corresponds to numerical values associated with facts. Therefore, examiner specifies that Hughes clearly teaches facts extracted from a knowledge graph, enriched with scores/similarity values, converted to NL reports, and the reports used as LLM input (prompt); this clearly indicates a prompt including contextual fact data).
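The claim-1 limitation at issue above describes appending numerical or categorical contextual values to natural-language fact captions and supplying the result to a large language model as a prompt. A minimal sketch of that idea, for orientation only: this is neither the application's implementation nor code from any cited reference, and the names (`Fact`, `build_prompt`) and prompt format are invented; the downstream LLM call is omitted.

```python
from dataclasses import dataclass

@dataclass
class Fact:
    caption: str   # natural-language text representing the fact
    context: dict  # numerical or categorical attribute values

def build_prompt(query: str, facts: list[Fact]) -> str:
    """Append each fact's contextual data to its caption, then assemble
    a single prompt string for a large language model."""
    lines = [f"User query: {query}", "Relevant facts:"]
    for i, fact in enumerate(facts, start=1):
        ctx = "; ".join(f"{k}={v}" for k, v in fact.context.items())
        lines.append(f"{i}. {fact.caption} [{ctx}]")
    lines.append("Write a short data summary of these facts.")
    return "\n".join(lines)

facts = [
    Fact("Revenue grew fastest in Q3.", {"growth_pct": 18.2, "quarter": "Q3"}),
    Fact("The West region ranked first.", {"region": "West", "rank": 1}),
]
prompt = build_prompt("How did revenue trend this year?", facts)
```

Whether such concatenation reads on the claim language is of course the legal question in dispute; the sketch only illustrates the mechanism being argued over.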
query, wherein a fact score for a particular fact is generated based on diversity of the particular fact and importance of the particular fact (Applicant's arguments with respect to claim(s) 15 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument).

user feedback indicating a desired or undesired fact visualization associated with a fact visualization of a data story (Adada: The semantic SERP 406 comprises several regions, including a summary field 408 and a summary area 410 that comprises (optionally) one or more images 412, text description 414, and additional facts 416. The semantic SERP 406 also comprises suggestion chips 418, 420 and a text field 422 into which the user can enter text input either manually or by voice. Additionally, a graphical icon for a microphone 424 is provided ¶ [0053]. The SERP conversation is grounded with the information in the pole 426 (e.g., Story, Creative content, Semantic SERP, etc.). A semantic SERP "peek" can be provided for certain inputs to illustrate the reasoning power of the GLM 112. E.g., the user is shown a peek (a small set) of the semantic SERP 406. This is a highly visual story-like experience where the classic SERP data is reasoned over, re-organized and flowed into a visual story-like experience with a summary 410 that entices the user to engage to see analysis and links in each category ¶ [0059]. The GUI 500 comprises a SERP interface 504 that includes a query field 506, an instant answer field 508 (when applicable), an entity description field 510 (when applicable), and a results field 512. In the results field 512 are displayed one or more results (labeled A through D) returned in response to a query entered into the query field 506 and optionally one or more images 514 ¶ [0091].
Examiner specifies that such user interactions indicate user interest or preference regarding the displayed information, which constitutes feedback regarding desired displayed content).

using a refined user query to identify a refined set of facts relevant to the refined user query including the desired or undesired fact visualization (the GLM obtains such information as part of the prompt along with the aforementioned query. Because the prompt includes season-by-season home run totals for Babe Ruth, the GLM reasons over such data and provides output that is based upon the information identified as being relevant to the query by the search engine. Accordingly, the GLM can output "Babe Ruth hit 284 home runs before he turned 30." This information can be streamed into a chat window presented on or alongside a SERP that is being viewed by the user. In another example, the chat window is presented next to a webpage to which the user has navigated from the SERP ¶ [0013], [0034]. At 712, an indication of user interaction with the SERP is received. The user interaction may be a user click on a selectable icon, word, or phrase presented in the SERP, or may be a user voice command such as "tell me more about that" during a particular portion of the streaming data as it is first streamed into the SERP. At 714, additional search results are retrieved based on user interest as determined by the indication of user interaction with the SERP. At 716, new queries are generated for the GLM based on the user interaction. At 718, an updated GLM response narrative is received. At 720, the updated search results and updated GLM response narrative are streamed to the SERP on the client device ¶ [0110]. The dialog history data store 122 includes dialog history, where the dialog history includes dialog information with respect to users and the GLM 112.
For instance, the dialog history can include, for a user, identities of conversations undertaken between the user and the GLM 112, input provided to the GLM 112 by the user for multiple dialog turns during the conversation, responses in the conversation generated by the GLM 112 in response to the inputs from the user, queries generated by the GLM during the conversation that are used by the GLM 112 to generate responses, and so forth. In addition, the dialog history can include context obtained by the search engine 110 during conversations; for instance, with respect to a conversation, the dialog history 122 can include content from SERPs generated based upon queries set forth by the user and/or the GLM 112 during the conversation, content from webpages identified by the search engine 110 based upon queries set forth by the user and/or the GLM 112 during the conversation, and so forth ¶ [0033]).

initiating generation (Adada: see ¶ [0013], [0033], [0034], [0110], GLM generates output response) of a data summary (summary field/summary area ¶ [0039], [0053], [0059]), via a large language model (Described herein are various technologies pertaining to providing streaming chat in a SERP and/or related webpage using a search engine and a generative language model (GLM), also referred to as a large language model (LLM) ¶ [0027]), using the refined set of facts relevant to the refined user query (see ¶ [0013], [0033], [0034], [0039], [0110]; additional search results are retrieved and new queries are generated, and the retrieved information is used to generate summaries, instant answers, and entity descriptions, which correspond to facts relevant to the refined/updated query) including the desired or undesired fact visualization (see ¶ [0053], [0056], [0059], and [0091]; Examiner specifies that such user interactions indicate user interest or preference regarding the displayed information, which constitutes feedback regarding desired displayed content).
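The claim-9 flow argued above runs: user feedback on a fact visualization → refined user query → refined set of relevant facts → LLM-generated summary. A toy sketch of that loop, under stated assumptions: the function names, the feedback dictionary shape, and the keyword-match stand-in for similarity search are all invented for illustration and do not reflect the application's or any reference's actual implementation.

```python
def refine_query(query: str, feedback: dict) -> str:
    """Fold desired/undesired-visualization feedback into the query."""
    verb = "including" if feedback["desired"] else "excluding"
    return f"{query} ({verb} {feedback['visualization']})"

def retrieve_facts(query: str, fact_corpus: list[str]) -> list[str]:
    """Toy stand-in for a similarity search over a precomputed fact corpus."""
    tokens = query.lower().split()
    return [f for f in fact_corpus if any(t in f.lower() for t in tokens)]

corpus = ["home runs per season", "batting average by year", "stolen bases"]
feedback = {"desired": True, "visualization": "per-season bar chart"}
refined = refine_query("home run totals", feedback)
refined_facts = retrieve_facts(refined, corpus)
# refined_facts would next be captioned and packaged into the LLM prompt
```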
Regarding 35 USC 101, Abstract Idea: Applicant's arguments/amendments regarding 35 USC 101 were considered. However, the newly added amendment fails to overcome the 35 USC 101 rejection. More specifically:

Step 2A, Prong One (Judicial Exception)

The claims recite operations that are generally abstract in nature:
- Obtaining a user query: information gathering.
- Identifying relevant facts: data analysis/filtering information.
- Generating a data story with visualizations: organizing and processing information into a narrative.
- Generating fact captions in NL describing facts: generating descriptive text summaries.
- Generating fact contextual data: numerical and categorical values.
- Providing a prompt including contextual data to an LLM.
- Obtaining data summaries from the LLM.
- Providing the results through a GUI: presenting results (insignificant/generic computer functions).

The claims fall within:
- Mental Process (identifying relevant facts, summarizing information, creating captions and explanations).
- Information Processing and Presentation (selecting data from datasets; summarizing or explaining data; generating reports or narratives; presenting results via a GUI).

These are all recognized categories of abstract ideas (data collection, analysis, summarization, and display).

Step 2A, Prong Two (Practical Application)

The claims do not integrate the abstract idea into a practical application. The claims merely recite generic components: processors, computer storage media, a GUI, and an LLM. The LLM is used as a tool to generate a summary, which is simply another form of information processing. These steps do not improve:
- computer functionality;
- ML architecture;
- data storage;
- visualization technology;
- the query-processing mechanism.

The additional elements merely apply the abstract idea to a dataset and display results. The claimed components are recited at a high level of generality and perform their ordinary and expected functions of storing, retrieving, and displaying data.
Accordingly, the computer is used as a tool to perform the abstract data-management concept and is not improved by the claims. Therefore, the claims merely use generic computing components to execute an abstract idea, which is insufficient to "integrate" it into a practical application.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-10, 12-16, 18 and 19 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. The claim(s) recite(s) generating summarized/story results based on the user query.

With respect to step 1 of the patent subject matter eligibility analysis, the claims are directed to a process, machine, manufacture, or composition of matter. Independent claim 1 is directed to computer storage media, which is one of the four statutory subject matters (non-transitory according to the specification). Independent claim 15 is directed to a system, including one or more processors and a memory, which is a machine. Independent claim 9 is directed to a method, which is a process. All other claims depend on claims 1, 9 and 15. As such, claims 1-10, 12-16, 18 and 19 are directed to a statutory category.

With respect to step 2A, prong one, the claims recite an abstract idea, law of nature, or natural phenomenon. Specifically, the following limitations recite mathematical concepts and/or mental processes and/or certain methods of organizing human activity. The claims recite operations that are generally abstract in nature:
- Obtaining a user query: information gathering.
- Identifying relevant facts: data analysis/filtering information.
- Generating a data story with visualizations: organizing and processing information into a narrative.
- Generating fact captions in NL describing facts: generating descriptive text summaries.
- Generating fact contextual data: numerical and categorical values.
- Providing a prompt including contextual data to an LLM.
- Obtaining data summaries from the LLM.
- Providing the results through a GUI: presenting results (insignificant/generic computer functions).

The claims fall within:
- Mental Process (identifying relevant facts, summarizing information, creating captions and explanations).
- Information Processing and Presentation (selecting data from datasets; summarizing or explaining data; generating reports or narratives; presenting results via a GUI).

These are all recognized categories of abstract ideas (data collection, analysis, summarization, and display).

With respect to step 2A, prong two, the claims do not recite additional elements that integrate the judicial exception into a practical application. The following limitations are considered "additional elements," and explanation will be given as to why these "additional elements" do not integrate the judicial exception into a practical application. The claims do not integrate the abstract idea into a practical application. The claims merely recite generic components: processors, computer storage media, a GUI, and an LLM. The LLM is used as a tool to generate a summary, which is simply another form of information processing. These steps do not improve:
- computer functionality;
- ML architecture;
- data storage;
- visualization technology;
- the query-processing mechanism.

The additional elements merely apply the abstract idea to a dataset and display results. The claimed components are recited at a high level of generality and perform their ordinary and expected functions of storing, retrieving, and displaying data. Accordingly, the computer is used as a tool to perform the abstract data-management concept and is not improved by the claims.
Therefore, the claims merely use generic computing components to execute an abstract idea, which is insufficient to "integrate" it into a practical application.

With respect to Step 2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The additional limitations are directed to a computer readable storage medium, computer, memory, and processor, at a very high level of generality and without imposing meaningful limitations on the scope of the claim. The additional elements are well-understood, routine and conventional computer components performing their ordinary functions. Using an LLM to summarize data is merely using a known tool for information analysis, which does not amount to significantly more than the abstract idea and is not enough to transform an abstract idea into eligible subject matter.

Such generic, high-level, and nominal involvement of a computer or computer-based elements for carrying out the invention merely serves to tie the abstract idea to a particular technological environment, which is not enough to render the claims patent-eligible, as noted at pg. 74624 of Federal Register/Vol. 79, No. 241, citing Alice, which in turn cites Mayo. Further, see, e.g., Alice Corp. Pty. Ltd. v. CLS Bank Int'l, 134 S. Ct. 2347, 2359-60, 110 USPQ2d 1976, 1984 (2014). See also OIP Techs. v. Amazon.com, 788 F.3d 1359, 1364, 115 USPQ2d 1090, 1093-94 (Fed. Cir. 2015) ("Just as Diehr could not save the claims in Alice, which were directed to 'implement[ing] the abstract idea of intermediated settlement on a generic computer', it cannot save OIP's claims directed to implementing the abstract idea of price optimization on a generic computer.") (citations omitted). See also Affinity Labs of Texas LLC v. DirecTV LLC, 838 F.3d 1253, 1257-1258 (Fed. Cir. 2016) (mere recitation of a GUI does not make a claim patent-eligible); Intellectual Ventures I LLC v. Capital One Bank, 792 F.3d 1363, 1370 (Fed.
Cir. 2015) ("the interactive interface limitation is a generic computer element"). The additional elements are broadly applied to the abstract idea at a high level of generality ("similar to how the recitation of the computer in the claims in Alice amounted to mere instructions to apply the abstract idea of intermediated settlement on a generic computer," as explained in MPEP § 2106.05(f)), and they operate in a well-understood, routine, and conventional manner. MPEP § 2106.05(d)(II) sets forth the following:

The courts have recognized the following computer functions as well-understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity:
- Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec ...; TLI Communications LLC v. AV Auto. LLC ...; OIP Techs., Inc. v. Amazon.com, Inc. ...; buySAFE, Inc. v. Google, Inc. ...;
- Performing repetitive calculations, Flook ...; Bancorp Services v. Sun Life ...;
- Electronic recordkeeping, Alice Corp. ...; Ultramercial ...;
- Storing and retrieving information in memory, Versata Dev. Group, Inc. v. SAP Am., Inc. ...;
- Electronically scanning or extracting data from a physical document, Content Extraction and Transmission, LLC v. Wells Fargo Bank ...; and
- A web browser's back and forward button functionality, Internet Patents Corp. v. Active Network, Inc. ....

Courts have held computer-implemented processes not to be significantly more than an abstract idea (and thus ineligible) where the claim as a whole amounts to nothing more than generic computer functions merely used to implement an abstract idea, such as an idea that could be done by a human analog (i.e., by hand or by merely thinking). In addition, when taken as an ordered combination, the ordered combination adds nothing that is not already present as when the elements are taken individually.
There is no indication that the combination of elements integrates the abstract idea into a practical application. Their collective functions merely provide conventional computer implementation. Therefore, when viewed as a whole, these additional claim elements do not provide meaningful limitations to transform the abstract idea into a practical application of the abstract idea, nor does the ordered combination amount to significantly more than the abstract idea itself.

The dependent claims have been fully considered as well; however, similar to the findings for the claims above, these claims are similarly directed to the "Mental Processes" grouping of abstract ideas set forth in the 2019 PEG, without integrating it into a practical application and with, at most, a general-purpose computer that serves to tie the idea to a particular technological environment, which does not add significantly more to the claims. The ordered combination of elements in the dependent claims (including the limitations inherited from the parent claim(s)) adds nothing that is not already present when the elements are taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation. Accordingly, the subject matter encompassed by the dependent claims fails to amount to significantly more than the abstract idea. Looking at the claims as a whole does not change this conclusion, and the claims are ineligible.
Regarding claims 2, 3, 10, and 12 (Similarity Search for Identifying Relevant Facts): These claims further specify that the facts relevant to the user query are identified using similarity search, including:
- performing a similarity search (claim 2);
- similarity search using key phrases and the entire query relative to a precomputed fact corpus (claim 3);
- similarity search using a refined user query (claims 10 and 12).

These claims merely specify how relevant information is retrieved from a dataset using similarity comparison. Such operations represent data comparison and information retrieval, which are forms of mental process and data analysis. These claims do not: improve database architecture; improve the similarity search algorithm; provide a new retrieval mechanism; improve computer functionality. The claims merely add conventional data querying techniques to the abstract idea. The claims fall under information analysis and retrieval, which is not integrated into a practical application. The claims seem routine and conventional.

Regarding claims 4 and 18 (Ranker or Selection Algorithm): These claims specify selecting facts using:
- an MMR algorithm (claim 4);
- an integer program for selecting facts using fact scores (claim 18).

These claims merely recite mathematical optimization techniques used to rank or select information. Such techniques involve mathematical relationships and calculation, which fall within the mathematical concept category of abstract ideas. These claims do not: improve the algorithm itself; provide a new mathematical model; improve computer performance. The claims merely add mathematical techniques applied to organize or rank information for presentation, which remains an abstract idea. The claims fall under mathematical concepts and mental processes, which are not integrated into a practical application. The claims seem routine and conventional.
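For context on the MMR (maximal marginal relevance) algorithm named in claim 4, here is a minimal generic sketch of diversity-aware selection; the scoring functions, similarity values, and `lam` trade-off parameter are toy assumptions for illustration, not the application's claimed algorithm or its actual scores.

```python
def mmr_select(query_sim, pairwise_sim, k, lam=0.7):
    """Greedy MMR: trade off relevance to the query against redundancy
    with already-selected items.

    query_sim[i]    -- relevance of fact i to the query
    pairwise_sim[i][j] -- similarity between facts i and j
    """
    selected, candidates = [], list(range(len(query_sim)))
    while candidates and len(selected) < k:
        def score(i):
            # redundancy = similarity to the closest already-selected fact
            redundancy = max((pairwise_sim[i][j] for j in selected), default=0.0)
            return lam * query_sim[i] - (1 - lam) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

# three facts: facts 0 and 1 are near-duplicates, fact 2 is distinct
query_sim = [0.9, 0.85, 0.6]
pairwise = [[1.0, 0.95, 0.1],
            [0.95, 1.0, 0.1],
            [0.1, 0.1, 1.0]]
picked = mmr_select(query_sim, pairwise, k=2)  # picks 0, then the diverse 2
```

Note how the near-duplicate fact 1 loses to the less relevant but more diverse fact 2, which is the behavior MMR-style rankers are used for.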
Regarding claims 5, 13, and 14 (User Feedback and Query Refinement): These claims include:
- receiving user feedback about a data story (claim 5);
- binary feedback using a relevance indicator (claim 13);
- refining the query based on user selection (claim 14).

These claims merely describe interactive information analysis (receiving feedback, refining the query, and updating the results). Such operations correspond to human decision-making and iterative data analysis, which fall within the mental process category of abstract ideas. These claims do not: improve human-computer interaction technology; improve query processing systems; provide a new interface architecture. The claims merely add interactive feedback and query-refinement steps to the abstract idea, which remain abstract. The claims fall under mental processes, which are not integrated into a practical application. The claims seem routine and conventional.

Regarding claims 6, 7, 16, and 19 (Prompt Construction and Source Indicators): These claims include:
- generating the prompt (claim 6);
- including a unique source indicator in the prompt (claim 7);
- generating contextual facts by concatenating fact captions and contextual data (claim 16);
- including additional data context in the prompt (claim 19).

These claims merely specify how information is formatted or structured before being provided to a language model. Constructing prompts or organizing contextual information represents preparing data for analysis, which is an information-organization task and is considered an abstract idea. These claims do not: improve LLM architecture; improve NLP technology; modify how the model operates. The claims merely format input information for processing, which remains part of the abstract idea. The claims fall under mental processes (organizing and preparing information), which are not integrated into a practical application. The claims seem routine and conventional.
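As a hedged illustration of the prompt-construction limitations summarized above (claims 6, 7, and 16), the concatenation of a fact caption with its contextual data, tagged with a unique source indicator that a later summary could cite, might look like the following. The bracketed-ID format and all names here are invented for this sketch and are not taken from the application.

```python
def contextual_fact(caption: str, context: dict, source_id: str) -> str:
    """Concatenate a fact caption with its contextual data and tag it
    with a unique source indicator."""
    ctx = ", ".join(f"{k}: {v}" for k, v in context.items())
    return f"[{source_id}] {caption} ({ctx})"

line = contextual_fact("Q3 led all quarters.", {"growth": "18%"}, "F1")
# a generated summary could then cite "[F1]" to reference this fact
```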
Regarding claim 8 (Citations in Data Summaries): This claim specifies that the data summary includes a citation referencing facts or a visualization (claim 8). This merely references sources of information in a generated report or explanation, which is a conventional reporting technique. This claim does not: improve computer functionality; improve citation generation technology; provide a new technical mechanism. The claim merely adds source references to the generated output, which remains part of the abstract idea. The claim falls under insignificant extra-solution activity, which is not integrated into a practical application. The claim seems routine and conventional.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1, 6, and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Xu; Shenyu et al. (US 20220237228 A1) [Xu] in view of VANGALA; Vipindeep et al. (US 20200110519 A1) [Vangala] in view of Hughes; Brendan John (US 20240185039 A1) [Hughes].
Regarding claim 1, Xu discloses one or more computer storage media having computer-executable instructions embodied thereon that, when executed by one or more processors, cause the one or more processors to perform a method, the method comprising:

obtaining a user query in association with a dataset (In addition, in some embodiments, the visual-data-story system 106 provides, for display within a graphical user interface, a search bar (e.g., for text input) to search through visual data stories. For instance, the visual-data-story system 106 receives a keyword search through the search bar and utilizes the keywords to search for relevant visual data stories from a visual-data-story graph. As an example, the visual-data-story system 106 searches for the keywords provided via the search bar in the content and/or visual-data-story properties of the visual data stories in the visual-data-story graph ¶ [0136]. Also see ¶ [0002], [0003]);

identifying a set of facts relevant to the user query (In addition, in some embodiments, the visual-data-story system 106 provides, for display within a graphical user interface, a search bar (e.g., for text input) to search through visual data stories. For instance, the visual-data-story system 106 receives a keyword search through the search bar and utilizes the keywords to search for relevant visual data stories from a visual-data-story graph. As an example, the visual-data-story system 106 searches for the keywords provided via the search bar in the content and/or visual-data-story properties of the visual data stories in the visual-data-story graph. Upon identifying one or more visual data stories based on the keyword search, the visual-data-story system 106 provides, for display within a graphical user interface, the one or more visual data stories. In one or more embodiments, the visual-data-story system 106 provides, for display within a graphical user interface, a visual data story (e.g., in accordance with FIG.
8) upon receiving a selection of the visual data story from the search result visual data stories ¶ [0136]. Also see Figs. 3 and 4, identifying facts ); generating a data story using a portion of the set of facts relevant to the user query, the data story including a set of visualizations corresponding with the portion of the set of facts relevant to the user query (For instance, the disclosed systems intelligently and automatically analyze input data and generate visual data stories depicting graphical visualizations from data insights determined from the input data. As suggested, the disclosed systems can automatically extract data insights utilizing an in-depth statistical analysis of dataset groups from data-attribute categories within the input data. Based on the data insights, the disclosed systems can automatically generate exportable visual data stories to visualize the data insights, provide textual or audio-based natural language summaries of the data insights, and animate such data insights in videos. Such a visual data story may, for instance, include graphs and natural language summaries comparing particular groups within the larger dataset, such as visual data stories comparing data trends between countries (or other data groups) in time-series data for viral infections, stock-index values, or various other raw-data counts ¶ [0006], [0021], [0034], [0026]); providing for display, via a graphical user interface, the data story and the data summary (see Fig. 2, element 210. Figs. 3, 4, 6, 8A-E). However Xu does not explicitly facilitate using the set of facts relevant to the user query to generate a data summary of the set of relevant facts by: generating fact captions that each include natural language text representing a corresponding fact of the set of facts; generating fact contextual data, including numerical or categorical values related to one or more attributes associated with the fact captions. 
Vangala discloses, using the set of facts relevant to the user query to generate a data summary of the set of relevant facts by: generating fact captions that each include natural language text representing a corresponding fact of the set of facts (Vangala: 1) “ranking” user-centric facts in the graph according to a scoring function and retrieving relatively higher-ranked user-centric facts (e.g., finding user-centric facts about the most relevant entities with regard to the context); and 2) “slicing” the user-centric graph to filter user-centric facts that satisfy a particular constraint (e.g., finding email-related user-centric facts that are connected to the context). Ranking, slicing, and other approaches to traversing/processing the user-centric graph will be described in further detail below with regard to FIGS. 3A-3B ¶ [0038]. In addition to being based on user-centric facts, the intelligently-selected content 121 may be further be based on external and/or derived facts. Non-limiting examples of external and/or derived facts include: 1) results of a web search related to the user-centric facts; 2) dictionary/encyclopedia/reference information related to the user-centric facts; 3) website content related to the user-centric facts; and/or 4) results of processing the user-centric facts according to programmatic, mathematical, statistical, and/or machine learning functions (e.g., computing a date or duration based on date values of user-centric facts associated with the context, for example, to suggest a possible scheduling of a future task; computing a natural-language summary or an answer to a query using a machine learning model) ¶ [0040]. Also see ¶ [0039] and [0066]. In some examples, multiple different candidate triggering facts may be recognized in the user-centric graph, to determine whether, for each candidate triggering fact, intelligently-selected content should be presented based on a context related to the triggering fact ¶ [0059] and [0144]. 
Examiner specifies that the NL summary constitutes fact captions under the BRI because they are textual descriptions corresponding to individual facts); generating fact contextual data, including numerical or categorical values related to one or more attributes associated with the fact captions (Vangala: The signal strength value numerically represents a salience of the triggering fact for recognizing contexts ¶ [0060]. A query is a rank query to rank the plurality of user-centric facts based at least on a confidence value associated with each user-centric fact ¶ [0111]. Categorical values associated with facts ¶ [0033]-[0034]. Examiner specifies that Vangala discloses: numerical values (signal strength, confidence value), categorical groupings (context/categories), and metadata associated with the facts; these correspond to fact contextual data with numerical or categorical attributes). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references because Vangala's system would have allowed Xu to facilitate using the set of facts relevant to the user query to generate a data summary of the set of relevant facts by: generating fact captions that each include natural language text representing a corresponding fact of the set of facts; and generating fact contextual data, including numerical or categorical values related to one or more attributes associated with the fact captions. The motivation to combine is apparent from the Xu reference, because there is a need to improve intelligently presenting information of particular interest to the user. 
However, neither Xu nor Vangala explicitly facilitates providing a prompt, having the fact contextual data including the numerical or categorical values related to the one or more attributes appended to the corresponding fact captions for the set of relevant facts, to a large language model to obtain the data summary as output; each fact of the set of facts representing an item of information extracted from the dataset. Hughes discloses, providing a prompt, having the fact contextual data including the numerical or categorical values related to the one or more attributes appended to the corresponding fact captions for the set of relevant facts, to a large language model to obtain the data summary as output (Hughes: reports generated by the method could include generating a natural language summary of the relevant facts that caused a correlation to arise by converting the relevant facts represented in the knowledge graph to natural language format, using such natural language reports as inputs to a LLM that had been sufficiently trained on domain relevant texts such as, in the case of a legal system, historical law reports and generating a further natural language report on the existence ¶ [0065]. Sending a text input (prompt) to LLM, NL text representing facts (fact captions) derived from a knowledge graph. matches, or near matches according to a match threshold specified as a matter of policy by the user, reported on. Various algorithms that efficiently search for pattern occurrences, matches or proximate matches may be deployed for this purpose. In an exemplary embodiment, continuous querying of a federated graph database system for correlations with the conditions required to establish a cause of action may be performed using node and node path similarity algorithms and may include using a lower-limit similarity value or score as a threshold condition to generate an output report ¶ [0065], this corresponds to numerical values associated with facts. 
Therefore, examiner specifies that Hughes clearly teaches facts extracted from a knowledge graph, enriched with scores/similarity values, converted to NL reports, and the reports used as LLM input (prompt); this clearly indicates a prompt including contextual fact data). Each fact of the set of facts representing an item of information extracted from the dataset (extracting and classifying facts from the datasets ¶ [0089]-[0091], [0111]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references because Hughes' system would have allowed Xu and Vangala to facilitate providing a prompt, having the fact contextual data including the numerical or categorical values related to the one or more attributes appended to the corresponding fact captions for the set of relevant facts, to a large language model to obtain the data summary as output; each fact of the set of facts representing an item of information extracted from the dataset. The motivation to combine is apparent from the Xu and Vangala references, because there is a need for large language models (“LLMs”) that analyze data to recognize, summarize, predict, and/or generate text and other forms of content based on data inputs and/or user queries. Regarding claim 6, the combination of Xu, Vangala, and Hughes discloses further comprising generating the prompt (Hughes: reports generated by the method could include generating a natural language summary of the relevant facts that caused a correlation to arise by converting the relevant facts represented in the knowledge graph to natural language format, using such natural language reports as inputs to a LLM that had been sufficiently trained on domain relevant texts such as, in the case of a legal system, historical law reports and generating a further natural language report on the existence ¶ [0065]. 
Sending a text input (prompt) to the LLM, with NL text representing facts (fact captions) derived from a knowledge graph: matches, or near matches according to a match threshold specified as a matter of policy by the user, reported on. Various algorithms that efficiently search for pattern occurrences, matches or proximate matches may be deployed for this purpose. In an exemplary embodiment, continuous querying of a federated graph database system for correlations with the conditions required to establish a cause of action may be performed using node and node path similarity algorithms and may include using a lower-limit similarity value or score as a threshold condition to generate an output report ¶ [0065]. Therefore, examiner specifies that Hughes clearly teaches facts extracted from a knowledge graph, enriched with scores/similarity values, converted to NL reports, and the reports used as LLM input (prompt); this clearly indicates a prompt including contextual fact data). Regarding claim 8, the combination of Xu, Vangala, and Hughes discloses wherein the data summary includes citations that reference corresponding facts or fact visualizations (Xu: see Fig. 2, element 210. Figs. 3, 4, 6, 8A-E). Claim(s) 2, 3, and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Xu in view of Vangala in view of Hughes in view of Bathwal; Rahil et al. (US 20240281487 A1) [Bathwal]. Regarding claim 2, the combination of Xu, Vangala, and Hughes teaches all the limitations of claim 1. However, none of Xu, Vangala, or Hughes explicitly teaches wherein the set of facts relevant to the user query are identified using a similarity search. 
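For context on the prompt limitation addressed in the rejections of claims 1 and 6 above (fact captions with numerical or categorical contextual values appended, provided to a large language model), a minimal Python sketch follows. This is a hedged illustration only; the `Fact` and `build_prompt` names are hypothetical and come from neither the application nor the cited references:

```python
# Hypothetical sketch of the claimed prompt arrangement: each fact has a
# natural-language caption plus contextual attributes, and the contextual
# values are appended to the caption before the whole set is wrapped in
# summarization instructions for an LLM.
from dataclasses import dataclass, field

@dataclass
class Fact:
    caption: str                                  # natural-language text for the fact
    context: dict = field(default_factory=dict)   # numerical/categorical attributes

def build_prompt(query: str, facts: list) -> str:
    """Append each fact's contextual values to its caption, then prepend
    an instruction tying the summary back to the user query."""
    lines = []
    for i, f in enumerate(facts, 1):
        ctx = "; ".join(f"{k}={v}" for k, v in f.context.items())
        lines.append(f"[{i}] {f.caption} ({ctx})" if ctx else f"[{i}] {f.caption}")
    return (
        f"Summarize the following facts in response to the query: {query}\n"
        + "\n".join(lines)
    )

facts = [
    Fact("Revenue grew fastest in the APAC region.",
         {"growth_pct": 18.2, "region": "APAC"}),
    Fact("Q4 churn declined year over year.",
         {"churn_pct": 2.1, "quarter": "Q4"}),
]
prompt = build_prompt("How did regional performance change?", facts)
print(prompt)
```

The resulting string would then be sent to a language model; the bracketed indices also illustrate how unique source indicators (claim 7) could support citations in the output.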
Bathwal discloses, wherein the set of facts relevant to the user query are identified using a similarity search (The example controller 102 includes a user information processing component 132 that interprets and/or stores user information utilized by aspects of the system 100, for example determining user intents, user behavior, user search history, user preferences, or the like. In certain embodiments, the user information processing component 132 is further able to integrate user information from offset users (e.g., other users that have a similarity to the current user, for example based on a classification of the user type, similarity in search terminology and/or search logic, similarity in search responses such as the type of sources trusted, language in responses that the user tends to favor as responsive, etc.) ¶ [0041]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references because Bathwal's system would have allowed Xu, Vangala, and Hughes to facilitate using the set of facts relevant to the user query to generate a data summary of the set of relevant facts. The motivation to combine is apparent from the Xu, Vangala, and Hughes references, because there is a need to improve the accuracy and contextuality of search results. 
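The similarity search recited in claim 2 (ranking fact captions against the user query) can be sketched minimally as follows. This is an illustration under stated assumptions: bag-of-words vectors stand in for real embeddings, and all names are hypothetical:

```python
# Hypothetical sketch of a similarity search over a precomputed fact
# corpus: embed the query and each fact caption, score with cosine
# similarity, and return the top-k most relevant captions.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in for a learned embedding: token counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_k(query: str, fact_captions: list, k: int = 2) -> list:
    q = embed(query)
    ranked = sorted(fact_captions, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

corpus = [
    "sales increased in the west region",
    "the logo was redesigned in 2020",
    "west region sales lead all regions",
]
print(top_k("sales in the west region", corpus))
```

Claim 3's variant would run the same search twice, once per extracted key phrase and once with the full query, against the precomputed caption corpus.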
Regarding claim 3, the combination of Xu, Vangala, Hughes, and Bathwal discloses, wherein the set of facts relevant to the user query are identified using a similarity search performed using one or more key phrases identified from the user query and using the user query in its entirety in relation to a precomputed fact corpus including a set of fact captions (Bathwal: The example controller 102 includes a result construction component 130 that queries the knowledge corpus 106 for responsive documents to a search, parses the documents for responsive portions (e.g., sentences, paragraphs, phrases, tables, graphical information, etc.), constructs a single best answer that is responsive to the search, and provides the single best answer to the user responsive to the search ¶ [0039] and [0049]). Regarding claim 7, the combination of Xu, Vangala, Hughes, and Bathwal discloses, wherein the prompt includes a unique source indicator associated with at least one fact relevant to the user query and an instruction to use the unique source indicator in generating the data summary (Bathwal: In some examples, cross-document citation may involve the use of unique identifiers for each source document, such as URLs, Digital Object Identifiers (DOIs), or database record numbers ¶ [0085]. Existing technologies typically employ algorithms to rank search results based on relevance to the user's query ¶ [0019]. Also see ¶ [0021], [0024], [0025]). Claim(s) 4 is rejected under 35 U.S.C. 103 as being unpatentable over Xu in view of Vangala in view of Hughes in view of Bhattacharjee; Kasturi et al. (US 20240160651 A1) [Kasturi]. Regarding claim 4, the combination of Xu, Vangala, and Hughes teaches all the limitations of claim 1. However, none of Xu, Vangala, or Hughes explicitly teaches further comprising selecting the portion of the set of facts relevant to the query using a maximum margin relevance algorithm. 
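As background on the maximum margin(al) relevance selection recited in claim 4, the standard MMR trade-off between query relevance (importance) and similarity to already-selected items (diversity) can be sketched as follows; this is a hedged illustration with hypothetical names, not code from any cited reference:

```python
# Hypothetical MMR sketch: greedily pick the candidate maximizing
# lam * relevance(c) - (1 - lam) * max similarity to selected items.
def mmr_select(relevance: dict, similarity, k: int = 2, lam: float = 0.7):
    """relevance: fact -> query-relevance score; similarity(a, b) -> [0, 1]."""
    selected = []
    candidates = set(relevance)
    while candidates and len(selected) < k:
        def score(c):
            redundancy = max((similarity(c, s) for s in selected), default=0.0)
            return lam * relevance[c] - (1 - lam) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

# Toy similarity: facts sharing a topic prefix count as near-duplicates.
sim = lambda a, b: 1.0 if a.split(":")[0] == b.split(":")[0] else 0.0
rel = {"sales:west": 0.9, "sales:east": 0.8, "churn:q4": 0.5}
print(mmr_select(rel, sim, k=2))
```

The second pick skips the near-duplicate "sales:east" in favor of the lower-relevance but more diverse "churn:q4", which is the behavior MMR is designed to produce.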
Kasturi discloses, selecting the portion of the set of facts relevant to the query using a maximum margin relevance algorithm (candidate phrases are ranked using maximum marginal relevance (MMR), a diversity-based ranking approach, using the document embedding ¶ [0027]. Also see ¶ [0028], [0033]-[0035]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references because Kasturi's system would have allowed Xu, Vangala, and Hughes to facilitate selecting the portion of the set of facts relevant to the query using a maximum margin relevance algorithm. The motivation to combine is apparent from the Xu, Vangala, and Hughes references, because there is a need for faster, more accurate, and more diverse meaningful information. Claim(s) 5 is rejected under 35 U.S.C. 103 as being unpatentable over Xu in view of Vangala in view of Hughes in view of ADADA; Mahmoud et al. (US 20240256618 A1) [Adada]. Regarding claim 5, the combination of Xu, Vangala, and Hughes teaches all the limitations of claim 1. However, none of Xu, Vangala, or Hughes explicitly teaches obtaining user feedback indicating an interest related to an aspect or attribute of the displayed data story; refining the user query based on the user feedback; using the refined user query to identify a new set of facts relevant to the refined user query; and generating a refined data story using a new portion of the new set of facts relevant to the user query. Adada discloses, obtaining user feedback indicating an interest related to an aspect or attribute of the displayed data story; refining the user query based on the user feedback; using the refined user query to identify a new set of facts relevant to the refined user query; and generating a refined data story using a new portion of the new set of facts relevant to the user query (At 712, an indication of user interaction with the SERP is received. 
The user interaction may be a user click on a selectable icon, word, or phrase presented in the SERP, or may be a user voice commands such as “tell me more about that” during a particular portion of the streaming data as it is first streamed into the SERP. At 714, additional search results are retrieved based on user interest as determined by the indication of user interaction with the SERP. At 716, the new queries generated for the GLM based on the user interaction. At 718, updated GLM response narrative is received. At 720, the updated search results and updated GLM response narrative are streamed to the SERP on the client device ¶ [0110]. Also see Fig. 7, and ¶ [0149]-[0150], [0156]-[0158]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references because Adada's system would have allowed Xu, Vangala, and Hughes to facilitate obtaining user feedback indicating an interest related to an aspect or attribute of the displayed data story; refining the user query based on the user feedback; using the refined user query to identify a new set of facts relevant to the refined user query; and generating a refined data story using a new portion of the new set of facts relevant to the user query. The motivation to combine is apparent from the Xu, Vangala, and Hughes references, because there is a need for improved techniques for generating better search results. Claim(s) 9, 10, and 12-14 are rejected under 35 U.S.C. 103 as being unpatentable over Xu in view of Vangala in view of Hughes in view of Adada in view of Bhattacharjee; Kasturi et al. (US 20240160651 A1) [Kasturi]. Regarding claim 9, the combination of Xu, Vangala, Hughes, and Adada clearly show a computer-implemented method for performing the computer-executable instructions stored on one or more computer storage media of claims 1 and 5. Therefore, the rejection of claims 1 and 5 applies to claim 9. 
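The feedback-driven refinement loop recited in claim 5 (feedback on a displayed data story refines the query, which drives a new retrieval) can be sketched minimally as follows. `refine_query` is a hypothetical illustration; retrieval and story regeneration are omitted:

```python
# Hypothetical sketch of query refinement from user feedback: feedback
# maps an attribute of the displayed data story to "more" (interest) or
# "less" (disinterest), and the query text is adjusted accordingly.
def refine_query(query: str, feedback: dict) -> str:
    boosts = [attr for attr, v in feedback.items() if v == "more"]
    drops = [attr for attr, v in feedback.items() if v == "less"]
    refined = query
    if boosts:
        refined += " focusing on " + ", ".join(boosts)
    if drops:
        refined += " excluding " + ", ".join(drops)
    return refined

q = refine_query("regional sales trends", {"APAC": "more", "logos": "less"})
print(q)
```

The refined string would then be fed back into the fact-retrieval step to produce a new set of relevant facts and a refined data story; claim 13's binary (desired/undesired) relevance indicator maps naturally onto the "more"/"less" values here.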
Vangala further discloses, indicating a desired or undesired attribute (For example, training the machine learning model may include supervised training with user-labelled samples (e.g., derived from direct user feedback during application with an application), and/or unsupervised training ¶ [0090]. Also see ¶ [0098], [0129]). However, none of Xu, Vangala, or Hughes explicitly teaches identifying a portion of the refined set of facts relevant to the refined user query using a maximum margin relevance algorithm. Kasturi discloses, identifying a portion of the refined set of facts relevant to the refined user query using a maximum margin relevance algorithm (candidate phrases are ranked using maximum marginal relevance (MMR), a diversity-based ranking approach, using the document embedding ¶ [0027]. Also see ¶ [0028], [0033]-[0035]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references because Kasturi's system would have allowed Xu, Vangala, and Hughes to facilitate identifying a portion of the refined set of facts relevant to the refined user query using a maximum margin relevance algorithm. The motivation to combine is apparent from the Xu, Vangala, and Hughes references, because there is a need for faster, more accurate, and more diverse meaningful information. Regarding claim 10, the combination of Xu, Vangala, Hughes, Adada, and Kasturi clearly show a device for performing the methods of claim 5. Therefore, the rejection of claim 5 applies to claim 10. Regarding claim 11 (canceled). Regarding claim 12, the combination of Xu, Vangala, Hughes, Adada, and Kasturi clearly show a device for performing the methods of claim 5. Therefore, the rejection of claim 5 applies to claim 12. 
Regarding claim 13, the combination of Xu, Vangala, Hughes, Adada, and Kasturi discloses, wherein the user feedback comprises binary feedback using a relevance indicator associated with the desired or undesired fact visualization of the data story (Adada: At 712, an indication of user interaction with the SERP is received. The user interaction may be a user click on a selectable icon, word, or phrase presented in the SERP, or may be a user voice commands such as “tell me more about that” during a particular portion of the streaming data as it is first streamed into the SERP. At 714, additional search results are retrieved based on user interest as determined by the indication of user interaction with the SERP. At 716, the new queries generated for the GLM based on the user interaction. At 718, updated GLM response narrative is received. At 720, the updated search results and updated GLM response narrative are streamed to the SERP on the client device ¶ [0110]. Also see Fig. 7, and ¶ [0149]-[0150], [0156]-[0158]). Regarding claim 14, the combination of Xu, Vangala, Hughes, Adada, and Kasturi discloses, wherein refining the user query is performed based on a user selection to refine the data story based on the user feedback (Adada: At 712, an indication of user interaction with the SERP is received. The user interaction may be a user click on a selectable icon, word, or phrase presented in the SERP, or may be a user voice commands such as “tell me more about that” during a particular portion of the streaming data as it is first streamed into the SERP. At 714, additional search results are retrieved based on user interest as determined by the indication of user interaction with the SERP. At 716, the new queries generated for the GLM based on the user interaction. At 718, updated GLM response narrative is received. At 720, the updated search results and updated GLM response narrative are streamed to the SERP on the client device ¶ [0110]. Also see Fig. 
7, and ¶ [0149]-[0150], [0156]-[0158]). Claim(s) 15, 16, 18 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Xu in view of Vangala in view of Hughes in view of Bhattacharjee; Kasturi et al. (US 20240160651 A1) [Kasturi]. Regarding claim 15, the combination of Xu, Vangala, and Hughes clearly show a computer-implemented method for performing the computer-executable instructions stored on one or more computer storage media of claims 1 and 6. Therefore, the rejection of claims 1 and 6 applies to claim 15. Hughes discloses, each contextual fact comprising a fact caption and a fact contextual data pair (reports generated by the method could include generating a natural language summary of the relevant facts that caused a correlation to arise by converting the relevant facts represented in the knowledge graph to natural language format, using such natural language reports as inputs to a LLM that had been sufficiently trained on domain relevant texts such as, in the case of a legal system, historical law reports and generating a further natural language report on the existence, nature and extent of the legal remedies available to an entity in relation to the causes of action arising in terms of the rules reported on ¶ [0065]. Also see ¶ [0294]). However, none of Xu, Vangala, or Hughes explicitly teaches wherein a fact score for a particular fact is generated based on diversity of the particular fact and importance of the particular fact. Kasturi discloses, wherein a fact score for a particular fact is generated based on diversity of the particular fact and importance of the particular fact (candidate phrases are ranked using maximum marginal relevance (MMR), a diversity-based ranking approach, using the document embedding ¶ [0027]. Also see [abstract], ¶ [0026], [0028], [0033]). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references because Kasturi's system would have allowed Xu, Vangala, and Hughes to facilitate wherein a fact score for a particular fact is generated based on diversity of the particular fact and importance of the particular fact. The motivation to combine is apparent from the Xu, Vangala, and Hughes references, because there is a need for faster, more accurate, and more diverse meaningful information. Regarding claim 16, the combination of Xu, Vangala, Hughes, and Kasturi clearly show a device for performing the methods of claim 6. Therefore, the rejection of claim 6 applies to claim 16. Regarding claim 17 (canceled). Regarding claim 18, the combination of Xu, Vangala, Hughes, and Kasturi discloses, wherein an integer linear program is used to select the portion of facts using the fact scores for the facts relevant to the user query (Xu: As an example, the visual-data-story system utilizes a linear least-squares regression and/or a sliding time-window with data-attribute values of a dataset group in a time series to determine a data-trend insight ¶ [0023]. Then, the visual-data-story system 106 determines an increasing trend (e.g., the linear data trend 406c) for the dataset group (U.S.A.). In addition, in reference to FIG. 4, the visual-data-story system 106 also determines an increasing trend (e.g., the linear data trend 406c) for the dataset group (Global) ¶ [0062], [0070], [0076]). 
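As background on the integer linear program recited in claim 18, the underlying 0/1 selection problem (maximize total fact score subject to a length budget) can be sketched as follows. The brute-force solve and all names are illustrative assumptions, not from the application or any cited reference; a real system might hand the same formulation to a dedicated ILP solver:

```python
# Hypothetical 0/1 ILP sketch for selecting a portion of facts:
#   maximize  sum(score_i * x_i)
#   subject to sum(length_i * x_i) <= budget,  x_i in {0, 1}
# Solved here by exhaustive enumeration, which is exact for tiny inputs.
from itertools import combinations

def select_facts(scores: dict, lengths: dict, budget: int) -> set:
    facts = list(scores)
    best, best_val = (), -1.0
    for r in range(len(facts) + 1):
        for subset in combinations(facts, r):
            if sum(lengths[f] for f in subset) <= budget:
                val = sum(scores[f] for f in subset)
                if val > best_val:
                    best, best_val = subset, val
    return set(best)

scores = {"f1": 0.9, "f2": 0.6, "f3": 0.5}
lengths = {"f1": 2, "f2": 1, "f3": 1}
print(select_facts(scores, lengths, budget=2))
```

With a budget of 2, the two shorter facts (combined score 1.1) beat the single highest-scoring fact (0.9), showing how the program trades individual scores against the overall constraint.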
Regarding claim 19, the combination of Xu, Vangala, Hughes, and Kasturi discloses, wherein an integer linear program is used to select the portion of facts using the fact scores for the facts relevant to the user query (Vangala: 1) “ranking” user-centric facts in the graph according to a scoring function and retrieving relatively higher-ranked user-centric facts (e.g., finding user-centric facts about the most relevant entities with regard to the context); and 2) “slicing” the user-centric graph to filter user-centric facts that satisfy a particular constraint (e.g., finding email-related user-centric facts that are connected to the context). Ranking, slicing, and other approaches to traversing/processing the user-centric graph will be described in further detail below with regard to FIGS. 3A-3B ¶ [0038]. In addition to being based on user-centric facts, the intelligently-selected content 121 may be further be based on external and/or derived facts. Non-limiting examples of external and/or derived facts include: 1) results of a web search related to the user-centric facts; 2) dictionary/encyclopedia/reference information related to the user-centric facts; 3) website content related to the user-centric facts; and/or 4) results of processing the user-centric facts according to programmatic, mathematical, statistical, and/or machine learning functions (e.g., computing a date or duration based on date values of user-centric facts associated with the context, for example, to suggest a possible scheduling of a future task; computing a natural-language summary or an answer to a query using a machine learning model) ¶ [0040]. Also see ¶ [0039] and [0066]. In some examples, multiple different candidate triggering facts may be recognized in the user-centric graph, to determine whether, for each candidate triggering fact, intelligently-selected content should be presented based on a context related to the triggering fact ¶ [0059] and [0144]. 
Examiner specifies that the NL summary constitutes fact captions under the BRI because they are textual descriptions corresponding to individual facts). Regarding claim 20 (canceled). Conclusion THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMMAD S ROSTAMI whose telephone number is (571)270-1980. The examiner can normally be reached Mon-Fri from 9 a.m. to 5 p.m. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Boris Gorney, can be reached at (571)270-5626. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. 
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. 3/11/2026 /MOHAMMAD S ROSTAMI/Primary Examiner, Art Unit 2154

Prosecution Timeline

Jan 24, 2024
Application Filed
Sep 25, 2024
Non-Final Rejection — §101, §103
Dec 09, 2024
Interview Requested
Dec 19, 2024
Response Filed
Dec 19, 2024
Applicant Interview (Telephonic)
Dec 19, 2024
Examiner Interview Summary
Mar 27, 2025
Final Rejection — §101, §103
May 29, 2025
Interview Requested
Jun 17, 2025
Applicant Interview (Telephonic)
Jun 18, 2025
Request for Continued Examination
Jun 24, 2025
Response after Non-Final Action
Jun 27, 2025
Examiner Interview Summary
Aug 27, 2025
Non-Final Rejection — §101, §103
Nov 25, 2025
Interview Requested
Dec 01, 2025
Applicant Interview (Telephonic)
Dec 01, 2025
Response Filed
Dec 13, 2025
Examiner Interview Summary
Mar 11, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596705
CHANGE CONTROL AND VERSION MANAGEMENT OF DATA
2y 5m to grant Granted Apr 07, 2026
Patent 12579127
DETECTING LABELS OF A DATA CATALOG INCORRECTLY ASSIGNED TO DATA SET FIELDS
2y 5m to grant Granted Mar 17, 2026
Patent 12561392
RELATIVE FUZZINESS FOR FAST REDUCTION OF FALSE POSITIVES AND FALSE NEGATIVES IN COMPUTATIONAL TEXT SEARCHES
2y 5m to grant Granted Feb 24, 2026
Patent 12561360
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, AND NON-TRANSITORY RECORDING MEDIUM
2y 5m to grant Granted Feb 24, 2026
Patent 12561312
DISTRIBUTED STREAM-BASED ACID TRANSACTIONS
2y 5m to grant Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
67%
Grant Probability
93%
With Interview (+26.3%)
3y 10m
Median Time to Grant
High
PTA Risk
Based on 635 resolved cases by this examiner. Grant probability derived from career allow rate.
