Prosecution Insights
Last updated: April 18, 2026
Application No. 18/933,899

Systems, Methods, and Media for Automated Creation of Analytics-Driven Audio-Visual Interactive Episodes

Status: Non-Final Office Action (§103)
Filed: Oct 31, 2024
Examiner: FIBBI, CHRISTOPHER J
Art Unit: 2174
Tech Center: 2100 — Computer Architecture & Software
Assignee: Storyline AI, Inc.
OA Round: 3 (Non-Final)
Grant Probability: 53% (Moderate)
Estimated OA Rounds: 3-4
Estimated Time to Grant: 4y 3m
Grant Probability with Interview: 90%

Examiner Intelligence

Career Allow Rate: 53% (199 granted / 376 resolved; -2.1% vs TC avg)
Interview Lift: +37.6% (strong), based on resolved cases with interview
Typical Timeline: 4y 3m average prosecution; 40 applications currently pending
Career History: 416 total applications across all art units
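The headline numbers on this card can be reproduced from the raw counts shown above. A minimal sketch, illustrative only (not the dashboard's actual code); the with/without-interview rates used are the rounded card values displayed on this page:

```python
# Recompute the examiner-card metrics from the figures shown on this page.
granted, resolved = 199, 376            # career totals for this examiner
allow_rate = granted / resolved         # "Career Allow Rate"

# Interview lift: grant probability with vs. without an interview. Only the
# rounded card values (53% and 90%) are shown here, so the recomputed lift
# lands near, not exactly on, the displayed +37.6%.
without_interview, with_interview = 53, 90
lift_points = with_interview - without_interview

print(f"Career allow rate: {allow_rate:.1%}")    # 52.9%, shown as 53%
print(f"Interview lift: +{lift_points} points")  # vs. +37.6% displayed
```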

Statute-Specific Performance

Statute   Career Allow Rate   vs TC Avg
§101      9.8%                -30.2%
§103      62.9%               +22.9%
§102      10.7%               -29.3%
§112      10.2%               -29.8%

Deltas are relative to a Tech Center average estimate • Based on career data from 376 resolved cases
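The "vs TC Avg" deltas above are mutually consistent with a single Tech Center average estimate, which can be back-solved from any row (e.g. 62.9% - 22.9% = 40.0% for §103). A small sketch, assuming only the numbers displayed in this table:

```python
# Back-solve the Tech Center average implied by the statute-level deltas.
statute_rates = {"101": 9.8, "103": 62.9, "102": 10.7, "112": 10.2}
tc_average = 40.0  # back-solved: 62.9 - 22.9; every row yields the same value

deltas = {s: round(r - tc_average, 1) for s, r in statute_rates.items()}
print(deltas)  # {'101': -30.2, '103': 22.9, '102': -29.3, '112': -29.8}
```

That every statute row implies the same 40.0% suggests the dashboard applies one TC-wide baseline rather than per-statute averages.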

Office Action (§103)

DETAILED ACTION

This action is in response to the RCE and Amendment dated 05 September 2025. Claims 1, 3, 9, 13, 17 and 20 are amended. No claims have been added or cancelled. Claims 1-10 and 12-21 remain pending and have been considered below.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claims Interpreted as Invoking 35 U.S.C. 112(f)/Sixth Paragraph

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 
112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function. Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function. Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. 
Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: “a data ingestion subsystem to ingest,” “a domain modeling subsystem to apply,” “an episode configuration subsystem to define,” “an analytic engine subsystem to generate,” “an episode production subsystem to generate,” “an episode player subsystem to generate,” “an episode interaction subsystem to facilitate,” and “an episode customization subsystem to enable” in claims 1-10 and 12. Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. 
Claim Rejections - 35 USC § 103

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-10 and 12-21 are rejected under 35 U.S.C. 103 as being unpatentable over Man et al. (US 2025/0077563 A1) in view of Panuganty et al. (US 2021/0248136 A1) and further in view of Bojanic et al. (US 2011/0276603 A1) and further in view of Kirk (US 2025/0124024 A1). As for independent claim 1, Man teaches a system comprising: a data ingestion subsystem to ingest data from external databases, the data from the external databases comprising financial data, insurance data, retail sales data, or real estate data [(e.g. 
see Man paragraphs 0012, 0017, 0059) ”For implementing automatic construction activity summary generation within a computing platform, various systems, software, and/or databases utilizing machine learning and/or artificial intelligence (AI) can be leveraged for intelligent ingestion and processing of construction project information. For example, machine learning models can be utilized to ingest new data that is generated throughout the planning, design, and implementation phases … ingesting and compiling data for use in the preparation of construction activity summaries. Such applications include … financial applications (e.g., budget applications, invoicing applications, payment processors, etc.) … the back-end computing platform 102 may also be configured to receive data from one or more external data sources that may be used to facilitate functions related to the processes disclosed herein”]. an episode configuration subsystem to define and manage episode configurations [(e.g. see Man paragraph 0087) ”FIG. 7A illustrates an example interface 700A, which may be utilized for automatically generating construction activity summaries. As illustrated in FIG. 7A, the interface 700A is configured to accept a user input or prompt to automatically generate a user summary for the project, titled “Project Name.” For example, the interface 700A may ask the end user to input a requested timeframe or date for the requested summary, as illustrated by the input prompt 710 of the interface 700A. Then, the user may click or otherwise interface with a confirmation button, 712, via the client device, to send the request to generate the summary for the timeframe input in the input prompt 710. 
While illustrated as requiring user input, the illustrated interface 700 and/or an application it is associated with (e.g., a “Summary Generator” application) may not require user input and may automatically populate the timeframe or may automatically generate the summary, via API request or by automatic generation at a given frequency”]. an analytic engine subsystem to generate results objects containing analytics from the data according to the episode configurations by leveraging the interface with the external databases [(e.g. see Man paragraphs 0079, 0085, 0088, 0091) ”FIG. 7B illustrates an example interface 700B, which illustrates a construction activity summary 720 for a given timeframe 725 … Based on an evaluation of the context-based prompt 532, in view of, at least, the trained construction-based data set 505A and the ongoing construction project data 450, the LLM 500A may utilize a language evaluator and/or generator 540 to output a contextual response to the context-based prompt … The method 600 includes determining what data categories and/or parameters are to be utilized in generating the contextual response, based on the context-based prompt, as illustrated in block 610. Thus, when the context-based prompt includes instructions to generate a construction activity summary for a given time frame, determining categories and parameters may include determining what are the most important parameters and/or categories for a response, based on ranking of importance that is determined during the training of the trained construction-based data set 505A. 
Thus, with categories/parameters for the construction activity summary determined, the method 600 may include utilizing the ongoing construction project data 450, in view of an evaluation with respect to the trained construction based data set, to determine and/or parse the ongoing construction project data 450 to determine the subjects and associated data for use in the construction activity summary, as illustrated in block 620 … the interface 800A may result from execution of the actions of block 416 of the method 400. The interface 800A may take the form of a “Solutions Engine” application, which utilizes output of the LLM 500A to automatically generate solutions to issues, which are predicted, based on the results of the automatic summary generation. As illustrated, the solutions engine of the interface 800A may output/present to a user a plurality of suggested actions 831, 832, 833, 834, 835, 836, where each of the suggested actions are associated with a respective update 731, 732, 733, 734, 735, 736 of the construction activity summary 720”]. an episode production subsystem to generate episode data packages for the episodes based on the results objects, the episode data packages comprising language for the episodes [(e.g. see Man paragraph 0086 and Fig. 7B) ”the parsed data can be used to generate natural language for a plurality of updates for use in the construction activity summary and/or contextual response … The output of blocks 631, 632, 633, 634, 635, 636 may then be intelligently ordered and/or organized by the LLM 500A, thereby generating a natural language, automated construction activity summary, output as summary data”]. Man does not specifically teach visualizations specifications for the episodes or an episode player subsystem to play the episodes based on the episode data packages on a computing device via a user interface. However, in the same field of invention, Panuganty teaches: and visualizations specifications for the episodes [(e.g. 
see Panuganty paragraphs 0121, 0207, 0342, 0343 and Fig. 31) ”base the narrated analytics playlist on predefined design themes, branding themes, etc … output from the insight engine module 116, and determines how to describe and/or articulate the output. As one example, in response to receiving an insight from the insight engine that corresponds to chartable data, story narrator module 118 determines to include a chart and a descriptive narrative of the chart within the narrated analytics playlist … story narrator module 118 identifies how to augment the insights identified by the insight engine module with additional information, such as visual information (e.g., charts, graphs, etc.) … a plurality of scenes to include in a narrated analytics playlist is identified. For example, story narrator module 1116 of FIG. 11 identifies scene 3106, scene 3108, scene 3110, and scene 3112 … a type of chart included in the scene”]. and an episode player subsystem to play the episodes based on the episode data packages on a computing device via a user interface [(e.g. see Panuganty paragraphs 0127, 0207, 0471) ”Playback module 132 receives a narrated analytics playlist, and outputs the content for consumption. This can include playing out audio, rendering video and/or images, displaying text-based content, and so forth … Animator module 1118 receives the bundled information, and uses the bundled information to generate audio and/or video outputs that are consumable by a playback engine … this includes generating and/or obtaining result content 5506 from content sources 5508 for inclusion in the query result 5502 as specified in the scripts 5504, such as visuals (text strings, images, charts, videos, animations, and so forth), audio (e.g., audio files generated based on the scripts 5504), and so on. Thus, the query result 5502 includes the result content 5506 for output, as well as instructions for outputting the content, such as content ordering and timing”]. 
Therefore, considering the teachings of Man and Panuganty, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to add visualizations specifications for the episodes and an episode player subsystem to play the episodes based on the episode data packages on a computing device via a user interface, as taught by Panuganty, to the teachings of Man because it efficiently presents relevant data of interest which saves the user’s time and system resources (e.g. see Panuganty paragraph 0002). Man and Panuganty do not specifically teach a domain modeling subsystem to apply semantics to the data and provide an interface with the external databases. However, in the same field of invention or solving similar problems, Bojanic teaches: a domain modeling subsystem to apply semantics to the data and provide an interface with the external databases [(e.g. see Bojanic paragraphs 0032-0035) ”a dependency graph extracted from a particular domain (242, 252, or 262) can be parsed by the corresponding provider module (240, 250, or 260) to identify references to external objects that do not belong to the domain for that provider module (240, 250, or 260) … If such external object references are found, they can be used to generate addition extraction operation representations, which can be executed to extract additional dependency graphs, which may reveal new dependencies of those objects in their native domains. For example, a first database table in one database domain may depend on a second database table in a second database domain, and that second database table in the second database domain may in turn depend on a third database table, also in the second database domain … A provider that locates a reference to an external object can generate an object that identifies the external object and includes properties to assist a provider in the object's native domain in extracting dependencies of that object”]. 
Therefore, considering the teachings of Man, Panuganty and Bojanic, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to add a domain modeling subsystem to apply semantics to the data and provide an interface with the external databases, as taught by Bojanic, to the teachings of Man and Panuganty because it provides permission benefits by limiting access to external databases based on domain origin (e.g. see Bojanic paragraph 0014). Man, Panuganty and Bojanic do not specifically teach and to generate diagnostics associated with the episodes … the diagnostics to provide traceability as to how the episode data packages are generated based on the results objects containing the analytics from the data ingested from the external databases. However, in the same field of invention or solving similar problems, Kirk teaches: and to generate diagnostics associated with the episodes … the diagnostics to provide traceability as to how the episode data packages are generated based on the results objects containing the analytics from the data ingested from the external databases [(e.g. see Kirk paragraphs 0034, 0036) ”the data sets 245 may include … external data sets 245 such as external databases … since the LLM 220 may include data from multiple data sets 245 and inferences from data obtained from the multiple data sets 245 within the response 265, the user may be unable to determine the origins of the information within the response 265 … the techniques of the present disclosure may result in an increase in trust in the LLM 220 and may allow the AI system using the LLM 220 to be more accessible, reliable, and trustworthy. In some examples, the response 265 may display the source of the information included in the response 265 via the user interface 210 of the client device 205 via a user-friendly presentation. 
For example, to present the source information in an accessible and understandable manner, the sources may be displayed within the response 265 as footnotes, hyperlinks, in-line citations, or any combination thereof. As such, by presenting the sources in a clear manner, the AI system of the LLM 220 may alleviate the trust issues of the LLM 220 and provide a clear insight into how the LLM 220 obtained the information in the response 265 (e.g., from the data sets 245), generated the information in the response 265, or both”]. Therefore, considering the teachings of Man, Panuganty, Bojanic and Kirk, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to add generate diagnostics associated with the episodes … the diagnostics to provide traceability as to how the episode data packages are generated based on the results objects containing the analytics from the data ingested from the external databases, as taught by Kirk, to the teachings of Man, Panuganty and Bojanic because it allows the system to be more accessible, reliable, and trustworthy (e.g. see Kirk paragraph 0036). As for dependent claim 2, Man, Panuganty, Bojanic and Kirk teach the system as described in claim 1, but Man does not specifically teach the following limitation. However, Panuganty teaches: further comprising an episode interaction subsystem to facilitate interactivity with the episodes based on user inputs provided on the computing device via the user interface [(e.g. see Panuganty paragraphs 0127, 0185) ”Playback module 132 receives a narrated analytics playlist, and outputs the content for consumption. This can include playing out audio, rendering video and/or images, displaying text-based content, and so forth. 
As one example, a user can interact with a particular narrated analytics playlist via controls displayed by playback module 132, such as pausing playback, skipping content in the playlist, requesting drill-up content and/or drill-down content, inputting a search query during playback of content, etc. In various implementations, the playback module includes feedback controls, such as controls corresponding to giving explicit positive feedback and/or explicit negative feedback of the content being played out at a particular point in time … includes playback controls 914 that interface with a playback module to allow input that modifies the rendering and/or playback of playlist content 908, such as pausing the content, rewinding the content, skipping the content, etc.”]. The motivation to combine is the same as that used for claim 1. As for dependent claim 3, Man, Panuganty, Bojanic and Kirk teach the system as described in claim 1 and Man further teaches: further comprising an episode customization subsystem to enable a user to manage automated creation and distribution frequency of the episodes on the computing device via the user interface [(e.g. see Man paragraphs 0087, 0088) ”The construction activity summary 720 may be generated in response to, for example, user input via the interface 700A, may be generated automatically for a given timeframe via an API, and/or may be automatically populated based on a scheduled frequency for generating summaries … may automatically generate the summary … by automatic generation at a given frequency”]. As for dependent claim 4, Man, Panuganty, Bojanic and Kirk teach the system as described in claim 3, but Man does not specifically teach the following limitation. However, Panuganty teaches: wherein the episode customization subsystem is to enable the user to personalize the episodes in accordance with theming elements [(e.g. 
see Panuganty paragraphs 0173, 0207) ”base the narrated analytics playlist on predefined design themes, branding themes, etc … various implementations provide the ability to customize themes that control multiple facets of what is displayed (e.g., a font type, a font size, a color pallet, cursor types, etc.), such as through the use of selectable user interface controls.”]. The motivation to combine is the same as that used for claim 1. As for dependent claim 5, Man, Panuganty, Bojanic and Kirk teach the system as described in claim 1 and Man further teaches: wherein, to generate the language for the episodes, the episode production system is to interface with a large language model [(e.g. see Man paragraphs 0043, 0061) ”FIG. 5 depicts an example diagram illustrating aspects of a large language model (LLM) utilized in conjunction with the process … configured to automatically generate summaries of construction projects, by utilizing Large Language Models (LLMs)”]. As for dependent claim 6, Man, Panuganty, Bojanic and Kirk teach the system as described in claim 5 and Man further teaches: wherein, to interface with the large language model, the episode production subsystem is to provide prompts that constrain the large language model to a context window associated with the results objects [(e.g. see Man paragraphs 0079, 0085) ”The LLM 500A is further configured to receive, as input, a context-based prompt 532. Based on an evaluation of the context-based prompt 532, in view of, at least, the trained construction-based data set 505A and the ongoing construction project data 450, the LLM 500A may utilize a language evaluator and/or generator 540 to output a contextual response to the context-based prompt … The method 600 includes determining what data categories and/or parameters are to be utilized in generating the contextual response, based on the context-based prompt, as illustrated in block 610. 
Thus, when the context-based prompt includes instructions to generate a construction activity summary for a given time frame, determining categories and parameters may include determining what are the most important parameters and/or categories for a response, based on ranking of importance that is determined during the training of the trained construction-based data set 505A. Thus, with categories/parameters for the construction activity summary determined, the method 600 may include utilizing the ongoing construction project data 450, in view of an evaluation with respect to the trained construction based data set, to determine and/or parse the ongoing construction project data 450 to determine the subjects and associated data for use in the construction activity summary, as illustrated in block 620”]. As for dependent claim 7, Man, Panuganty, Bojanic and Kirk teach the system as described in claim 6 and Man further teaches: wherein, to generate the language for the episodes, the episode production subsystem is to fact check outputs provided by the large language model by comparing the outputs provided by the large language model to the results objects [(e.g. see Man paragraph 0026) ”the LLM will have been trained to understand the form and contextual meaning of various forms of data on the construction management platform, it may have capabilities for determining discrepancies in data or actions performed in the construction project, based on its learnings from the ingestion of historical construction data. For example, an invoice sent out during the timeframe for the generated construction activity summary may be flagged, in the construction activity summary by the LLM, as improper. In this example, the LLM may predict that the invoice is improper by comparing the invoice amount (e.g., $10,000) versus historical, similar invoices (e.g., generally in the range of $1,000) and determining the invoiced cost is far too high. 
In the construction activity summary, or as a separate notification, the LLM may indicate that someone should review the invoice and may even indicate the likely cause of the discrepancy”]. As for dependent claim 8, Man, Panuganty, Bojanic and Kirk teach the system as described in claim 1 and Man further teaches: wherein, to generate the language for the episodes, the episode production subsystem is to populate a template based on the results objects [(e.g. see Man paragraphs 0024, 0086) ”various restraints/parameters (e.g., rankings of importance, ordering of data, etc.) … The output of blocks 631, 632, 633, 634, 635, 636 may then be intelligently ordered and/or organized by the LLM 500A”]. As for dependent claim 9, Man, Panuganty, Bojanic and Kirk teach the system as described in claim 1, but Man and Panuganty do not specifically teach the following limitation. However, Bojanic teaches: wherein the domain modeling subsystem is to provide the interface with the external databases using a domain graph and a mapping between the domain graph and the external databases, the mapping between the domain graph and the external databases specifying tables to utilize for entities in the domain graph [(e.g. see Bojanic paragraphs 0001, 0033) ”a dependency graph that represents dependencies between different database objects, such as different databases, database tables, columns in database tables, etc … If such external object references are found, they can be used to generate addition extraction operation representations, which can be executed to extract additional dependency graphs, which may reveal new dependencies of those objects in their native domains. For example, a first database table in one database domain may depend on a second database table in a second database domain, and that second database table in the second database domain may in turn depend on a third database table, also in the second database domain”]. 
The motivation to combine is the same as that used for claim 1. As for dependent claim 10, Man, Panuganty, Bojanic and Kirk teach the system as described in claim 1, but Man does not specifically teach the following limitation. However, Panuganty teaches: wherein the data ingestion subsystem is to ingest unstructured data from the external databases [(e.g. see Panuganty paragraphs 0094, 0124, 0213) ”having multiple sources of data oftentimes corresponds to the data being acquired in multiple formats, such as each source providing the respective data in a respective format that is from data originating from other sources … a second data source may correspond to unstructured text data … communicate with external devices … indications of whether the data is … external to an organization”]. The motivation to combine is the same as that used for claim 1. As for dependent claim 12, Man, Panuganty, Bojanic and Kirk teach the system as described in claim 1, but Man does not specifically teach the following limitation. However, Panuganty teaches: wherein the episode data packages comprises text files and audio files, and wherein the episode player subsystem comprises a library to interpret the text files and the audio files [(e.g. see Panuganty paragraphs 0127, 0207, 0471) ”Playback module 132 receives a narrated analytics playlist, and outputs the content for consumption. This can include playing out audio, rendering video and/or images, displaying text-based content, and so forth… Animator module 1118 receives the bundled information, and uses the bundled information to generate audio and/or video outputs that are consumable by a playback engine … this includes generating and/or obtaining result content 5506 from content sources 5508 for inclusion in the query result 5502 as specified in the scripts 5504, such as visuals (text strings, images, charts, videos, animations, and so forth), audio (e.g., audio files generated based on the scripts 5504), and so on. 
Thus, the query result 5502 includes the result content 5506 for output, as well as instructions for outputting the content, such as content ordering and timing”]. The motivation to combine is the same as that used for claim 1. As for independent claim 13, Man, Panuganty, Bojanic and Kirk teach a method. Claim 13 discloses substantially the same limitations as claims 1, 4 and 9. Therefore, it is rejected with the same rationale as claims 1, 4 and 9. Further, Man teaches receiving a request to generate an episode that is associated with a dataset [(e.g. see Man paragraph 0087) ”the user may click or otherwise interface with a confirmation button, 712, via the client device, to send the request to generate the summary for the timeframe input in the input prompt 710”]. As for dependent claim 14, Man, Panuganty, Bojanic and Kirk teach the method as described in claim 13; further, claim 14 discloses substantially the same limitations as claim 5. Therefore, it is rejected with the same rationale as claim 5. As for dependent claim 15, Man, Panuganty, Bojanic and Kirk teach the method as described in claim 14; further, claim 15 discloses substantially the same limitations as claim 6. Therefore, it is rejected with the same rationale as claim 6. As for dependent claim 16, Man, Panuganty, Bojanic and Kirk teach the method as described in claim 15; further, claim 16 discloses substantially the same limitations as claim 7. Therefore, it is rejected with the same rationale as claim 7. As for dependent claim 17, Man, Panuganty, Bojanic and Kirk teach the method as described in claim 13, but Man does not specifically teach the following limitation. However, Panuganty teaches: wherein retrieving the dataset from the external database comprises retrieving an unstructured dataset, and wherein the method further comprises applying semantics to the unstructured dataset by leveraging the domain model [(e.g. 
see Panuganty paragraphs 0094, 0124, 0213, 0219) ” having multiple sources of data oftentimes corresponds to the data being acquired in multiple formats, such as each source providing the respective data in a respective format that is from data originating from other sources … a second data source may correspond to unstructured text data … communicate with external devices … indications of whether the data is … external to an organization … various natural language processing algorithms and/or models can be employed to identify similar wording, such as sematic matching algorithms … latent semantic analysis”]. The motivation to combine is the same as that used for claim 1. As for dependent claim 18, Man, Panuganty, Bojanic and Kirk teach the method as described in claim 13; further, claim 18 discloses substantially the same limitations as claim 2. Therefore, it is rejected with the same rational as claim 2. As for dependent claim 19, Man, Panuganty, Bojanic and Kirk teach the method as described in claim 13 and Man further teaches: wherein receiving the request to generate the episode comprises receiving the request to generate the episode based on a user input [(e.g. see Man paragraph 0087) ”the user may click or otherwise interface with a confirmation button, 712, via the client device, to send the request to generate the summary for the timeframe input in the input prompt 710”]. As for independent claim 20, Man, Panuganty, Bojanic and Kirk teach a non-transitory computer-readable storage medium. Claim 20 discloses substantially the same limitations as claim 13. Therefore, it is rejected with the same rational as claim 13. As for dependent claim 21, Man, Panuganty, Bojanic and Kirk teach the system as described in claim 4, but Man does not specifically teach the following limitation. 
However, Panuganty teaches: wherein the theming elements comprise at least one of voice settings for the episodes, background images for the episodes, a color palette for the episodes, or background music for the episodes [(e.g. see Panuganty paragraphs 0173, 0207) ”base the narrated analytics playlist on predefined design themes, branding themes, etc … various implementations provide the ability to customize themes that control multiple facets of what is displayed (e.g., a font type, a font size, a color pallet, cursor types, etc.), such as through the use of selectable user interface controls.”]. The motivation to combine is the same as that used for claim 1. Response to Arguments Applicant’s arguments, filed 05 September 2025, have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. U.S. PGPub 2018/0225281 A1 issued to Song et al. on 09 August 2018. The subject matter disclosed therein is pertinent to that of claims 1-10 and 12-21 (e.g. graphs for various external domains of semantically tagged data). U.S. Patent 12,008,332 B1 issued to Gardner et al. on 11 June 2024. The subject matter disclosed therein is pertinent to that of claims 1-10 and 12-21 (e.g. summarization of data using an LLM to create audio/video). Contact Information Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRISTOPHER J FIBBI whose telephone number is (571)-270-3358. The examiner can normally be reached Monday - Thursday (8am-6pm). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. 
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William Bashore can be reached at (571)-272-4088. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /CHRISTOPHER J FIBBI/Primary Examiner, Art Unit 2174

Prosecution Timeline

Oct 31, 2024 · Application Filed
Feb 22, 2025 · Non-Final Rejection — §103
Apr 08, 2025 · Interview Requested
May 02, 2025 · Examiner Interview Summary
May 23, 2025 · Response Filed
Jun 04, 2025 · Final Rejection — §103
Jul 29, 2025 · Applicant Interview (Telephonic)
Aug 06, 2025 · Examiner Interview Summary
Sep 05, 2025 · Request for Continued Examination
Sep 10, 2025 · Response after Non-Final Action
Dec 12, 2025 · Non-Final Rejection — §103
Feb 19, 2026 · Examiner Interview Summary
Feb 19, 2026 · Applicant Interview (Telephonic)
Mar 13, 2026 · Response Filed

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585866
AUTOMATED ENTRY OF EXTRACTED DATA AND VERIFICATION OF ACCURACY OF ENTERED DATA THROUGH A GRAPHICAL USER INTERFACE
Granted Mar 24, 2026 · 2y 5m to grant
Patent 12561152
METHODS AND SYSTEMS FOR ADAPTIVE CONFIGURATION
Granted Feb 24, 2026 · 2y 5m to grant
Patent 12535930
INTEROPERABILITY FOR TRANSLATING AND TRAVERSING 3D EXPERIENCES IN AN ACCESSIBILITY ENVIRONMENT
Granted Jan 27, 2026 · 2y 5m to grant
Patent 12535941
USER INTERFACE FOR MANAGING INPUT TECHNIQUES
Granted Jan 27, 2026 · 2y 5m to grant
Patent 12519999
Location Based Playback System Control
Granted Jan 06, 2026 · 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4 · Expected OA Rounds
53% · Grant Probability
90% · Grant Probability With Interview (+37.6%)
4y 3m · Median Time to Grant
High · PTA Risk
Based on 376 resolved cases by this examiner. Grant probability derived from career allow rate.
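The arithmetic behind these headline figures is simple; a minimal sketch, assuming the dashboard computes them as shown (function names are illustrative, the 199-granted / 376-resolved counts come from the examiner stats above, and the 52.4% without-interview rate is inferred from the stated 90% with-interview figure minus the +37.6-point lift):

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Interview lift, in percentage points: allow rate among cases with an
    examiner interview minus the rate among cases without one."""
    return rate_with - rate_without

career = allow_rate(199, 376)       # ~52.9, displayed rounded as 53%
lift = interview_lift(90.0, 52.4)   # ~+37.6 percentage points
```

Note that the lift is reported in percentage points (a difference of rates), not as a relative percentage increase, so a 52.4% baseline plus a 37.6-point lift yields the 90% with-interview figure.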
