DETAILED ACTION
This communication is in response to the Amendments and Arguments filed on 12/9/2025. Claims 1, 2, 4-9, 11-17, 19, and 20 are pending and have been examined.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
With respect to the 35 U.S.C. 112 rejections for claims 1, 8, and 15, the applicant has amended the claims to address the informalities.
In view of the amendments to the claims, the rejections have been withdrawn.
With respect to the 35 U.S.C. 101 rejections for claims 1-2, 4-9, 11-16, and 18-20, the applicant asserts the claim now requires:
1. a user input comprising selection of an interaction feature presented on the first UI (wherein the interaction feature comprises a swipe-up gesture),
2. semantic parsing of information associated with that first UI using NLP algorithms,
3. capture of contextual information by extracting at least one of n-grams, noun phrases, themes, or facets,
4. determining network data sources based on that captured context,
5. retrieving information from those sources,
6. initiating extractive summarization using the NLP algorithms,
7. generating information summaries, and
8. generating a personalized dashboard with designated data slots corresponding to the context-determined sources, then dynamically mapping the generated summaries to those slots based on the summary-to-source association derived from the contextual information.
This claim language describes a specific computer UI-driven data processing and presentation architecture that cannot be practically performed in a human mind, particularly given the required extraction of structured NLP artifacts from a live user interface, the context-based selection of network sources, and the algorithmic generation and association of summaries with source-corresponding designated display regions. The USPTO's recent reminder on Step 2A cautions examiners to distinguish claims that recite a mental process from claims that merely involve analysis and to focus on what can practically be performed in the human mind.
Accordingly, the amended independent claim does not recite a judicial exception under Step 2A, Prong One.
Examiner respectfully disagrees. While the amended claims do attempt to overcome the 101 rejections by introducing claim language regarding the selection of an interaction feature and a swipe-up gesture, these elements lack an explicit connection to the limitations that follow. The limitation merely serves to initiate the remainder of the method and is therefore considered pre-solution activity. Furthermore, the abilities to decipher context in text, associate it with an information source, and summarize that information source are still things the human mind is capable of doing.
Applicant asserts the amended claim now ties the GUI structure directly to the upstream computational steps. The claim does not merely retrieve and display information. It requires a particular sequence in which:
1. the system extracts defined contextual features from a first UI using NLP,
2. uses those features to determine the relevant network data sources,
3. performs extractive summarization to produce structured information summaries, and
4. generates a second UI dashboard in which the system creates designated data slots corresponding to the context-determined sources and dynamically maps the summaries into those slots based on the summary-source association derived from the contextual information.
This is a UI-specific technical solution to the technical problem identified in the specification: context-appropriate aggregation and presentment that avoids overload across diverse data sources while reducing unnecessary retrieval and display of irrelevant data. The claim makes the dashboard generation and mapping logic part of the mechanism that operationalizes the semantic parsing and contextual extraction steps.
This type of claim structure aligns with the principles in Enfish, McRO, DDR Holdings, and Thales, where eligibility turns on whether the claim recites a specific computer-rooted solution that improves computer functionality or another technical field, rather than using a computer as a generic tool. MPEP §2106.04(d) reflects these standards.
Recent USPTO guidance for AI-related examinations reinforces that examiners should evaluate whether claims reflect an improvement in computer functioning or another technical field and should analyze the claim as a whole. The PTAB's precedential designation of Ex parte Desjardins similarly emphasizes that AI-related claims can be eligible when they reflect improvements in AI technology and that overbreadth concerns should be addressed under §§ 102/103/112 where appropriate.
Here, the amended claim recites a practical application in the form of a context-derived, source-aware, dynamically structured dashboard that is generated in response to a specific interaction on a prior interface, with the dashboard's internal layout constraints (designated slots) and mapping rules anchored to the same contextual and source determinations recited earlier in the claim.
This integrates any alleged abstract idea into a practical technological application under Step 2A, Prong Two.
Examiner respectfully disagrees. The overarching concept of contextualizing information from an interface, finding more related information, summarizing the new information, and displaying the summaries is not a computer-specific task. Thus, merely stating that it improves computer functionality is not adequate, as the current claim language does not explicitly frame the method as an improvement to a computer function. The outcome of the claimed method is the creation of a user interface dashboard, which is considered post-solution activity of organizing and presenting the information output by the method. Also, the output/UI display is not described with specificity in the claim language; a generic description of a UI that could appear in countless different ways would not be considered an improvement to computer functionality.
Furthermore, regarding the PTAB's precedential designation of Ex parte Desjardins, the current independent claims do not include any AI/ML language that would make this precedent applicable. Also, Desjardins identified specific ways the claims altered the training of the AI/ML model; the present independent claims are silent on the aspects that made Ex parte Desjardins relevant.
Applicant asserts the claim now includes a non-conventional, integrated architecture that links:
1. a specific graphical interaction trigger on a first UI,
2. machine extraction of defined contextual artifacts from that UI,
3. context-based selection of network sources,
4. extractive summarization to generate structured summaries, and
5. generation of a second UI whose slot structure corresponds to the context-determined sources and whose content placement follows a dynamic mapping rule based on the summary-to-source association derived from the contextual information.
These limitations function together to produce a specific improvement to how the system aggregates, summarizes, and presents context-relevant data through a dynamically structured user interface. The claim goes beyond generic data retrieval and generic display. It recites a constrained, source-aware UI generation mechanism rooted in the preceding semantic and contextual computations.
Under Berkheimer, the Office must rely on evidence when characterizing additional elements as well-understood, routine, or conventional. The amended claim's integrated UI generation and mapping logic provides concrete, specific limitations that the Office has not shown to be routine in the art.
Accordingly, the amended independent claim recites significantly more under Step 2B.
Examiner respectfully disagrees. As currently claimed, the graphical interaction trigger is stated to be the user selecting an interaction feature. The interaction feature is not present in any other step of the method; thus, the user interaction merely acts as pre-solution activity for the method. Similarly, the output/second user interface merely acts as a post-solution or data presentment step for the method.
Furthermore, regarding Berkheimer, the user-interface-based elements are considered to be pre-solution and post-solution activity, thus satisfying Berkheimer. As currently claimed, the UI is used exclusively as a data gathering step (the user's input for initiation) and a presentment step (displaying the generated summaries for the user). The remaining additional elements of a device, display, processor, and storage device are all supported by evidence from the specification showing that they are general-purpose computer components (details on this can be found in the 101 rejection for the independent claims below).
Therefore, the applicant’s arguments are not persuasive.
With respect to the 35 U.S.C. 103 rejections for claims 1-2, 4-9, 11-16, and 18-20, the applicant asserts the Office's cited references are unable to overcome the newly amended claims and that, therefore, the Office's cited references, either alone or in combination, fail to teach or suggest each and every recitation of the claimed invention.
In view of the updated prior art search and changed rejections to the independent claims, the argument is considered moot. An updated rejection under 35 U.S.C. 103 for the claims is detailed below.
Claim Objections
Applicant is advised that should claims 1, 8, and 15 be found allowable, claims 2, 9, and 16 will be objected to under 37 CFR 1.75 as being a substantial duplicate thereof. When two claims in an application are duplicates or else are so close in content that they both cover the same thing, despite a slight difference in wording, it is proper after allowing one claim to object to the other as being a substantial duplicate of the allowed claim. See MPEP § 608.01(m).
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-2, 4-9, 11-16, and 18-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claims 1, 8, and 15 recite a system for context-based data aggregation and presentment, the system comprising: a processing device; a non-transitory storage device containing instructions when executed by the processing device, causes the processing device to: receive, from a user input device, a user input triggering data aggregation and presentment from a first user interface, wherein the user input comprises selection of an interaction feature presented on the first user interface, wherein the interaction feature comprises a swipe-up gesture; initiate semantic parsing of information associated with the first user interface using natural language processing algorithms; capture contextual information associated with the first user interface based on at least the semantic parsing, wherein capturing contextual information comprises extracting at least one of n-grams, noun phrases, themes, or facets from the information associated with the first user interface; determine one or more sources of network data based on at least the contextual information; retrieve information from the one or more sources of network data; initiate extractive summarization of information associated with the one or more sources of network data using the natural language processing algorithms; generate information summaries based on at least the extractive summarization; and display, via a second user interface, the information summaries on the user input device, wherein displaying further comprises: generate a personalized dashboard for the user based on at least the one or more sources of network data, wherein generating the personalized dashboard comprises generating one or more designated data slots corresponding to the one or more sources of network data determined based on the contextual information; and dynamically map the information summaries for the information associated with the one or more sources of network data with the one or more designated data slots based on an association between the information summary and a respective source of network data determined from the contextual information.
Each step in these claims, as drafted, is a process that, under broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. A human is capable of receiving a gesture as confirmation to start a task, for example, giving a thumbs up when you're ready. A human can semantically parse information and capture contextual meaning from it; this is reading comprehension and is done whenever a human reads a new piece of information. Reading comprehension can include things such as identifying n-grams (specific words), noun phrases, themes, and facets. Natural language processing algorithms are mathematical equations which a human could perform. Determining sources of network data could be, for example, a human looking for a relevant book in a library or even looking through printed-out website pages to find relevant information. Retrieving information from these sources could be done by writing/drawing a copy of the information you want. The human mind is capable of comprehending text they have read and summarizing it. The displaying of information can also be done by a human using a writing utensil and paper. A human can create a personalized dashboard for presenting sources of network data. For example, someone doing a research presentation may create a poster board to present their ideas. The poster board would contain various sources of research they found and summaries for them. The poster board would have designated areas for each source of data that correspond to where the research came from and pieces from the initial question that correspond to that information. The poster could also have fold-out components that dynamically reveal summaries for the information.
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
This judicial exception is not integrated into a practical application. In particular, claims 1, 8, and 15 recite a user input device with a user interface and a display device with a second user interface. The input device containing a user interface is necessary to gather data that the system will act upon; thus, it is extra pre-solution activity and does not integrate the judicial exception. To this same extent, the swipe-up gesture would be considered part of the pre-solution activity done on this user interface. Furthermore, generic examples of the types of equipment used are provided in the specification in paragraph 22. The display device that could contain the second user interface is necessary to present the data the system produces; thus, it is extra post-solution activity and does not integrate the judicial exception. Furthermore, generic examples of the types of equipment used are provided in the specification in paragraph 34. Claim 1 specifically lists additional components of a processing device and a non-transitory storage device. Generic examples of the processor and storage device are listed in the specification in paragraphs 38 and 40, respectively. Each of these additional components amounts to a general-purpose computing device. Accordingly, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because they do not impose any meaningful limits on practicing the abstract idea. The claims are not patent eligible.
Claims 2, 9, and 16 recite retrieving the information from the one or more sources of network data, the system is further configured to: initiate extractive summarization of information associated with the one or more sources of network data using the natural language processing algorithms; and generate information summaries for the information associated with the one or more sources of network data based on at least the extractive summarization.
The limitations in these claims, as drafted, recite a process that, under broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. A human can extract information from something by taking notes, highlighting, or even cutting out specific pieces. A human can then summarize a piece of information using their own mind and interpretation of the extracted information. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
This judicial exception is not integrated into a practical application. The claims do not include additional elements that weren’t in the independent claims. Accordingly, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claims are not patent eligible.
Claims 4, 11, and 18 recite receiving from the user input device, a user interaction input associated with a first information summary, wherein the first information summary is associated with a first data source; and generate a third user interface to be overlaid on the second user interface to display a first information associated with the first data source, wherein the first information is an unabridged version of the first information summary.
The limitations in these claims, as drafted, recite a process that, under broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. An “unabridged version of the first information summary” is being interpreted as providing the original source material without the summarization. A human could include both a summary they created and a copy of the original material on a note sheet, for example. Furthermore, the generation of a third user interface could be the human equivalent of having another page of notes or even unfolding a piece of paper to reveal more information. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
This judicial exception is not integrated into a practical application. As mentioned previously the recitation of “user input device” and “user interface” amount to general-purpose computing devices. Accordingly, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claims are not patent eligible.
Claims 5, 12, and 19 recite that the one or more sources of network data are associated with the user.
The limitation in these claims, as drafted, is a process that, under broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. A human can select sources of information based on their own preferences/history. For example, choosing to use an article from a writer they enjoy/trust. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
This judicial exception is not integrated into a practical application. The claims do not include additional elements that weren’t in the independent claims. Accordingly, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claims are not patent eligible.
Claims 6, 13, and 20 recite receiving from the user input device, a user customization input selecting a first subset of the one or more sources of network data; and update the second user interface to display the information from the first subset of the one or more sources of network data based on at least the user customization input.
The limitation in these claims, as drafted, is a process that, under broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. A human can customize how they want a collection of information gathered to look when being displayed. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
This judicial exception is not integrated into a practical application. As mentioned previously, the recitation of “a user interface” amounts to a general-purpose computing device. Accordingly, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claims are not patent eligible.
Claims 7 and 14 recite deploying, using a machine learning (ML) subsystem, a trained ML model on the information from the one or more sources of network data; generate predictive analytics for the user based on at least the information from the one or more sources of network data using the trained ML model; and display the predictive analytics for the user on the second user interface.
Each step in these claims, as drafted, is a process that, under broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. A human can perform analysis and predictive analytics on a piece of information. For example, if looking at a line graph a human could see the current upward/downward trend and deduce that it will continue in that direction. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
This judicial exception is not integrated into a practical application. In particular, claims 7 and 14 recite a “machine learning subsystem” and a “machine learning model” which as described in paragraphs 61-64 of the specification are generic implementations of these models with no specific construction. As mentioned previously the recitation of “a user interface” amounts to a general-purpose computing device. Accordingly, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claims are not patent eligible.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-2, 4-6, 8-9, 11-13, 15-16, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over US Patent 11,803,401 B1 (Ferrucci et al.) in view of US Patent Application Publication 2016/0360382 A1 (Gross et al.), US Patent 11,740,770 B1 (Neervannan et al.), and US Patent 10,783,192 B1 (Soubbotin).
Regarding Claims 1, 8, and 15, Ferrucci et al. teaches a system for context-based data aggregation and presentment, the system comprising: a processing device (Col. 17, lines 20-26); a non-transitory storage device containing instructions (Col. 17, lines 27-47).
A computer program product for context-based data aggregation and presentment, the computer program product comprising a non-transitory computer-readable medium comprising code (Col. 17, lines 27-47).
A method for context-based data aggregation and presentment (Col. 53, lines 32-48).
Receive, from a user input device, a user input triggering data aggregation and presentment from a first user interface.
Ferrucci et al. includes various (UI Components) to receive inputs from a user through their device. (In some examples, the research assistant UI component 112 may receive user input for specifying an input query and call the query component 114 to process the input query.) (Col. 10, lines 29-45). “Triggering data aggregation and presentment” is being interpreted as beginning to process the input which is equivalent to (process the input query) in the quote from above. Furthermore, (Fig. 10, also referred to as Example User Interface 124 in Fig. 1) Can be mapped to the “first user interface” with element 1006 (As depicted, the example user interface 1002 is has been researching concepts related to “Syndrome A.” The example user interface element 1006 is highlighting one of the proposition nodes.) (Col. 36, lines 46-55) where (1006) contains information from the user input.
Initiate semantic parsing of information associated with the first user interface using natural language processing algorithms.
Ferrucci et al. teaches information associated with the first user interface (input query) undergoing semantic parsing (In some examples, the query component may receive an input query that includes a natural language question and use a semantic parser to convert the natural language question to a structured question. The semantic parser may parse the text of the natural language question and convert the text into machine language (e.g., structured representation), which is a machine-understandable representation of the meaning of the text.) (Col. 4, lines 36-63). It can also be seen in (Fig. 1) that the (Query Component) and (Natural Language Understanding Engine) are in the system together. With regard to the claim's recitation of a plurality of algorithms, Ferrucci et al. also utilizes various (engines) that are detailed further at (Col. 9, lines 3-19).
Capture contextual information associated with the first user interface based on at least the semantic parsing.
As mentioned previously the semantic parsing is interpreting meaning which can be considered contextual information. (The semantic parser may parse the text of the natural language question and convert the text into machine language (e.g., structured representation), which is a machine-understandable representation of the meaning of the text) (Col. 4, lines 36-63).
Determine one or more sources of network data based on at least the contextual information.
Network data is being interpreted as any source of data the system is pulling from. The (query component) mentioned previously contains contextual information. (The query component 210 may receive an input query and perform a search based on the input query.) (Col. 19, lines 35-46). The search is the equivalent to the determination of network data as can be further explained (As described herein, the research process may include a series of research steps, including, but not limited to: receiving a research topic as an input query, searching for documents/text related to the input query (i.e., “information”), parsing the evidence documents/text to understand the information...) (Col. 9, lines 44-67). “One or more sources of network data” can be further exhibited by (non-limiting examples of the knowledge sources may include unstructured, semi-structured, and structured knowledge (e.g., medical ontologies, knowledge graphs, research papers, clinical studies, etc.).) (Col. 3, lines 42-51).
Retrieve information from the one or more sources of network data.
The section referenced previously (Col. 9, lines 44-67) showing the overall process of Ferrucci et al. shows steps such as (parsing the evidence documents/text to understand the information) and (linking the evidence together to find logical reasoning to support research results) that could not be done without first retrieving said information.
Initiate extractive summarization of information associated with the one or more sources of network data using the natural language processing algorithms.
Ferrucci et al. uses natural language understanding algorithms on the “network data” or query result (The NLU engine 116 may apply a multi-dimensional interpretation process with a domain-independent interpretation schema to analyze the query results.) (Col. 12, lines 19-28). Then uses a knowledge aggregation and synthesis engine to summarize data (The knowledge aggregation and synthesis engine 118 may apply clustering and similarity algorithms to aggregate information in the interpreted query results.) (Col. 12 line 55 to Col. 13 line 12).
And generate information summaries for the information associated with the one or more sources of network data based on at least the extractive summarization.
Ferrucci et al. teaches an evidence summary component for generating information summaries (The evidence summary component 122 may process the ranked aggregate results with the evidence texts. The evidence summary component 122 may process the ranked aggregate results with the evidence texts to generate results data, including one or more result clusters annotated with the related portion of evidence texts. The one or more result clusters include at least one concept cluster, a relation cluster, and a propositional cluster. Each cluster of the one or more result clusters annotated with the related portion of evidence texts includes a link to a summarized evidence passage.) (Col. 13, lines 42-58).
Display, via a second user interface, the information from the one or more sources of network data on the user input device.
Ferrucci et al. states (The research assistant UI component 112 may generate a user interface to guide user input to enter the query and explore the evidence chains. In some examples, the research assistant UI component 112 may configure the user interface to guide the user input and repeat the research process by iteratively exploring evidentiary chains to connect the dots through a large body of knowledge (“data sources”), including natural language text (e.g., journals, literature, documents, knowledge base, market research documents, and/or structured databases).) (Col. 10, lines 1-28). This shows “network data” being displayed on a user interface. While this system uses multiple user interfaces and transitions between them (Figs. 6-20), Fig. 10 specifically shows (The example user interface 1002 may include a visual presentation of query graph that is generated in response to a research, and the nodes of the graph include propositions constructed from combined evidence links from previous research (e.g., from research process illustrated in FIG. 9) with selectable nodes to explore the supporting evidence associated with the node) (Col. 36, lines 36-42). Where the second user interface (1002) contains information from sources of network data (query graph).
wherein displaying further comprises: generate a personalized dashboard for the user based on at least the one or more sources of network data,
Ferrucci et al. teaches building a query graph with the search results (The query component 114 may generate a query graph (“research results graph”) to store search results (“findings”) for an iterative exploration of the input query.) (Col. 10, lines 46-64). The query graph is then visually represented to the user based on the inputs they initially provided, making it personalized (The example user interface 1002 may include a visual presentation of query graph that is generated in response to a research, and the nodes of the graph include propositions constructed from combined evidence links from previous research (e.g., from research process illustrated in FIG. 9) with selectable nodes to explore the supporting evidence associated with the node.) (Col. 36, lines 24-55).
Ferrucci et al. does not explicitly teach: wherein the user input comprises selection of an interaction feature presented on the first user interface, wherein the interaction feature comprises a swipe-up gesture; wherein capturing contextual information comprises extracting at least one of n-grams, noun phrases, themes, or facets from the information associated with the first user interface; wherein generating the personalized dashboard comprises generating one or more designated data slots corresponding to the one or more sources of network data determined based on the contextual information; and dynamically map the information summaries for the information associated with the one or more sources of network data with the one or more designated data slots based on an association between the information summary and a respective source of network data determined from the contextual information.
However, Gross et al. teaches wherein the user input comprises selection of an interaction feature presented on the first user interface,
(The method further includes: detecting, via the touch-sensitive surface, a swipe gesture that, when detected, causes the electronic device to enter a search mode that is distinct from the application. The method also includes: in response to detecting the swipe gesture, entering the search mode, the search mode including a search interface that is displayed on the display. In conjunction with entering the search mode, the method includes: determining at least one suggested search query based at least in part on information associated with the content.) (Paragraph 126)
Gross et al. teaches a display that, when interacted with by the user, causes search, summarization, and display methods to occur.
wherein the interaction feature comprises a swipe-up gesture;
(detecting, via the touch-sensitive surface, a swipe gesture that, when detected, causes the electronic device to enter a search mode that is distinct from the application.) (Paragraph 126).
The user interaction is a swipe gesture.
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to modify the search/summarization system as taught by Ferrucci et al. to include an interaction feature that initiates the method as taught by Gross et al. This would have been an obvious improvement, as it is an intuitive, effective, and efficient way for the user to interact with a device (Gross et al. Paragraph 125).
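The swipe-gesture trigger described above can be sketched at a high level as follows. This is an illustrative sketch only; the function names, coordinate convention, and distance threshold are assumptions for exposition and are not taken from Gross et al.

```python
# Sketch: classify a touch trace as a swipe-up gesture and, if so, trigger
# entry into a search mode, loosely following the interaction described in
# Gross et al. All identifiers here are illustrative, not from the reference.

def is_swipe_up(trace, min_distance=100):
    """trace is a list of (x, y) touch points; y decreases toward screen top."""
    if len(trace) < 2:
        return False
    (x0, y0), (x1, y1) = trace[0], trace[-1]
    dy = y0 - y1          # positive when the finger moved up the screen
    dx = abs(x1 - x0)
    return dy >= min_distance and dy > dx  # mostly vertical, upward

def handle_gesture(trace, enter_search_mode):
    """Invoke the search-mode callback only when the trace is a swipe-up."""
    if is_swipe_up(trace):
        return enter_search_mode()
    return None
```

A real device framework would deliver touch traces through its own gesture-recognizer APIs; the point here is only that a simple geometric test on the trace suffices to gate entry into the search mode.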
Ferrucci et al. in view of Gross et al. does not explicitly teach: wherein capturing contextual information comprises extracting at least one of n-grams, noun phrases, themes, or facets from the information associated with the first user interface; wherein generating the personalized dashboard comprises generating one or more designated data slots corresponding to the one or more sources of network data determined based on the contextual information; and dynamically map the information summaries for the information associated with the one or more sources of network data with the one or more designated data slots based on an association between the information summary and a respective source of network data determined from the contextual information.
However, Neervannan et al. teaches wherein capturing contextual information comprises extracting at least one of n-grams, noun phrases, themes, or facets from the information associated with the first user interface;
(FIG. 2 illustrates an overview of the deep search process 50. In the process, the search system receives feeds, that may be real-time, of pieces of content (52) … The linguistic unit also discerns the topic of the content using special linguistic rules which is different from traditional search engines where a search is performed using word and phrases without contextual understanding of the text. For example, the linguistic analysis unit tags sentences based on their tense, to determine whether they talk about something that happened in the past, is continuing, or is expected to happen in the future. This is accomplished through a combination of linguistic analysis and domain-based language models that understand, for example, that a noun phrase like “deferred expenses” implies something about the future.) (Col. 4, Line 53 to Col. 5, Line 19).
(Feature extraction: Terms, phrases, or co-occurring words that are judged to be relevant from the point of view of sentiment classification are selected by a domain expert according to the approaches described in above. Another alternative is using n-grams or a combination of features.) (Col. 10, Lines 32-36)
Neervannan et al. teaches a method of rendering context-based information that can learn context from text based on features such as noun phrases and n-grams.
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to modify the search/summarization system as taught by Ferrucci et al. to include using noun phrases and n-grams to identify context as taught by Neervannan et al. This would have been an obvious improvement, as adding context analysis to a search system can help retrieve relevant results that were not specifically stated in a query (Neervannan et al. Col. 1, Lines 50-67).
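The kind of contextual-feature extraction recited in the claim (n-grams and noun phrases) can be illustrated with a minimal sketch. The noun-phrase heuristic below (runs of capitalized words) is a deliberately crude stand-in for the domain-based linguistic analysis Neervannan et al. describes, not its actual algorithm.

```python
# Illustrative sketch of n-gram and noun-phrase extraction from UI text.
import re

def ngrams(text, n=2):
    """Return word n-grams (default bigrams) from lowercased text."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def noun_phrase_candidates(text):
    """Heuristic: runs of two or more capitalized words, e.g. 'Deferred Expenses'.
    A real system would use POS tagging and domain language models instead."""
    return re.findall(r"(?:[A-Z][a-z]+\s+)+[A-Z][a-z]+", text)
```

For example, `ngrams("deferred expenses imply future costs")` yields the four bigrams "deferred expenses", "expenses imply", "imply future", and "future costs", which could then drive source selection as the claim recites.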
Ferrucci et al. in view of Gross et al. and Neervannan et al. does not explicitly teach: wherein generating the personalized dashboard comprises generating one or more designated data slots corresponding to the one or more sources of network data determined based on the contextual information; and dynamically map the information summaries for the information associated with the one or more sources of network data with the one or more designated data slots based on an association between the information summary and a respective source of network data determined from the contextual information.
However, Soubbotin teaches wherein generating the personalized dashboard comprises generating one or more designated data slots corresponding to the one or more sources of network data determined based on the contextual information;
(In step 430 the multi-document summarization engine receives the query and the links to the documents. Then, the multi-document summarization engine parses each individual document to extract key semantic concepts and corresponding text fragments or sentences and performs multi-document summarization of the documents based on the extracted concepts, producing a text summary representing a digest of the search results. The summary may comprise key semantic concepts, text fragments or sentences matching these concepts, or a combination of the two. The user query may be taken into account, to focus the summary on a particular topic using means such as, but not limited to, NLP or metadata to help determine relevance of the results. … A relevance score may be calculated for each concept or text fragment, and this relevance score may or may not be displayed to the user in the summary and may be used by the system for learning.) (Col. 21, Line 65 to Col. 22, Line 26).
(Furthermore, some embodiments of the present invention may specify how to order the text fragments, for example, without limitation, alphabetically by source, by relevance, etc.) (Col. 21, Lines 29-32).
Soubbotin summarizes multiple sources of network data in response to a user query and displays them on a second user interface. Fig. 1 shows a first user interface (a typical search engine interface) with a button to “Get text summary of results,” which transitions to the second user interface seen in Fig. 2. This is a personalized dashboard in the sense that it is generated based on the context of the query and each result's detected relevance to it. There are designated slots for each summarized source of network data, as well as links to the websites.
and dynamically map the information summaries for the information associated with the one or more sources of network data with the one or more designated data slots based on an association between the information summary and a respective source of network data determined from the contextual information.
(The summary may comprise key semantic concepts, text fragments or sentences matching these concepts, or a combination of the two. The user query may be taken into account, to focus the summary on a particular topic using means such as, but not limited to, NLP or metadata to help determine relevance of the results. … A relevance score may be calculated for each concept or text fragment, and this relevance score may or may not be displayed to the user in the summary and may be used by the system for learning.) (Col. 21, Line 65 to Col. 22, Line 26).
(Furthermore, some embodiments of the present invention may specify how to order the text fragments, for example, without limitation, alphabetically by source, by relevance, etc.) (Col. 21, Lines 29-32).
The order that the text fragments are arranged in the text summary can be based on a detected relevance to the initial query. Thus, the summary displayed in Fig. 2 shows dynamically mapped data slots that are associated with information summaries and network data determined from contextual information (as can be seen in the associated links in Fig. 2).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to modify the search/summarization system as taught by Ferrucci et al. to include a dynamically mapped user interface with data slots for each summary as taught by Soubbotin. This would have been an obvious improvement, as search summaries and/or results that consider the context of the user/query may provide more accurate search summaries and/or results for the user (Soubbotin Col. 1, Lines 58-64).
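The claimed dashboard behavior, one designated slot per context-determined source, with each generated summary mapped to the slot of the source it was derived from, can be sketched as follows. The data shapes here are assumptions for illustration, not structures from any of the cited references.

```python
# Sketch: build a dashboard with one designated slot per network-data source,
# then dynamically map each summary into the slot of its originating source.

def build_dashboard(sources, summaries):
    """sources: iterable of source ids determined from contextual information;
    summaries: list of (source_id, summary_text) pairs."""
    slots = {src: [] for src in sources}        # one designated slot per source
    for source_id, text in summaries:
        if source_id in slots:                  # map summary to its source's slot
            slots[source_id].append(text)
    return slots
```

The mapping key (the source id carried alongside each summary) plays the role of the claimed "association between the information summary and a respective source of network data."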
Regarding Claims 2, 9, and 16, Ferrucci et al. in view of Gross et al., Neervannan et al., and Soubbotin teaches the system of claims 1, 8, and 15.
Furthermore, Ferrucci et al. teaches wherein, in retrieving the information from the one or more sources of network data, the system is further configured to: initiate extractive summarization of information associated with the one or more sources of network data using the natural language processing algorithms.
Ferrucci et al. uses natural language understanding algorithms on the “network data” or query result (The NLU engine 116 may apply a multi-dimensional interpretation process with a domain-independent interpretation schema to analyze the query results.) (Col. 12, lines 19-28). Then uses a knowledge aggregation and synthesis engine to summarize data (The knowledge aggregation and synthesis engine 118 may apply clustering and similarity algorithms to aggregate information in the interpreted query results.) (Col. 12 line 55 to Col. 13 line 12).
And generate information summaries for the information associated with the one or more sources of network data based on at least the extractive summarization.
Ferrucci et al. teaches an evidence summary component for generating information summaries (The evidence summary component 122 may process the ranked aggregate results with the evidence texts. The evidence summary component 122 may process the ranked aggregate results with the evidence texts to generate results data, including one or more result clusters annotated with the related portion of evidence texts. The one or more result clusters include at least one concept cluster, a relation cluster, and a propositional cluster. Each cluster of the one or more result clusters annotated with the related portion of evidence texts includes a link to a summarized evidence passage.) (Col. 13, lines 42-58).
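Extractive summarization in the generic sense the claim uses can be illustrated with a textbook frequency-based baseline: score each sentence by the average corpus frequency of its words and keep the top-ranked sentences. This sketch is not the aggregation/clustering method of Ferrucci et al.; it only shows the class of technique.

```python
# Sketch: frequency-based extractive summarization of retrieved text.
import re
from collections import Counter

def extractive_summary(text, max_sentences=2):
    """Select the highest-scoring sentences, preserving their original order."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence):
        words = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[w] for w in words) / max(len(words), 1)

    ranked = sorted(sentences, key=score, reverse=True)[:max_sentences]
    return [s for s in sentences if s in ranked]   # keep document order
```

Because the selected sentences are copied verbatim from the source text, this is extractive (rather than abstractive) summarization, matching the claim term.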
Regarding Claims 4, 11, and 18, Ferrucci et al. in view of Gross et al., Neervannan et al., and Soubbotin teaches the system of claims 1, 8, and 15.
Furthermore, Ferrucci et al. teaches wherein the system is further configured to: receive, from the user input device, a user interaction input associated with a first information summary, wherein the first information summary is associated with a first data source.
A user selecting a summary on the user interface can be seen in Ferrucci et al. in (Fig. 10) and further described as (In response to selecting the example user interface element 1006, the research assistant UI component 208 may generate the example user interface element 1004.) (Col. 36, lines 43-45).
and generate a third user interface to be overlaid on the second user interface to display a first information associated with the first data source, wherein the first information is an unabridged version of the first information summary.
The generating of the third user interface can be seen in the previously mentioned quote and Fig. 10 where (1004) represents the third user interface and (1002) represents the second user interface in which the third user interface is overlaid on. The “unabridged version” is represented in 1004 with evidence links (The example user interface element 1006 is highlighting one of the proposition nodes. The example user interface element 1004 allows user input to explore support evidence for the evidence links used to generate the proposition “Syndrome A has symptom dry eyes caused by lacrimal gland inflammation.”) (Col. 36, lines 46-55). Furthermore, Ferrucci et al. teaches providing “unabridged” material in the evidence summary component (The evidence summary component 234 may provide citations and links to the evidence texts.) (Col. 25 line 63 to Col. 26 line 21).
Regarding Claims 5, 12, and 19, Ferrucci et al. in view of Gross et al., Neervannan et al., and Soubbotin teaches the system of claims 1, 8, and 15.
Furthermore, Ferrucci et al. teaches wherein the one or more sources of network data are associated with the user.
Ferrucci et al. teaches multiple ways that the user can affect the data provided by the system, one being (Additionally, as described herein, the present system may request user feedback (e.g., thumbs up or thumbs down) for supporting/refuting evidence for a proposition. The system can use this feedback to (1) dynamically re-rank the list of evidence passages and provide immediate visual feedback by removing the evidence passage with negative feedback and up-ranking the evidence passage with positive feedback;) (Col. 8 line 38 to Col. 9 line 2).
Regarding Claims 6, 13, and 20, Ferrucci et al. in view of Gross et al., Neervannan et al., and Soubbotin teaches the system of claims 1, 8, and 15.
Furthermore, Ferrucci et al. teaches wherein the system is further configured to: receive, from the user input device, a user customization input selecting a first subset of the one or more sources of network data.
Fig. 7 shows an example user interface receiving input from the user on multiple sources of data. (The example user interface 702 presents the example user interface element 704, which includes an exploration window to allow user input to explore relations or concepts relative to the specific concept “Syndrome A.”) and (As depicted, based on user input, “Syndrome A” has the relation link “has symptoms” relative to the concepts: “Dry eyes,” “Nocturnal cough,” and “Dry mouth.” The user has selected those three concepts for further exploration.) (Col. 34 line 54 to Col. 35 line 8).
And update the second user interface to display the information from the first subset of the one or more sources of network data based on at least the user customization input.
Fig. 7 shows a user interface being updated based on user input (The research assistant UI component 208 may generate the example user interface 802 to continue guiding user input to enter the query following the examples illustrated in FIG. 7. As depicted, following the example in FIG. 7, the user has added an additional relation “manifest as” and an additional concept “parotid gland enlargement.”) (Col. 34 line 54 to Col. 35 line 8). Then Fig. 8 shows an example of how the user interface of Fig. 7 can generate new interface options based on the input received (As described herein, an input query may include a search schema that specifies a causal schema. The causal schema may trigger automatic repeat searches for a causal pathway from a starting point (“source concept”) and connected to an ending point (“target concept”).) (Col. 35, lines 9-58).
Claims 7 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over US Patent Publication 11803401 B1 (Ferrucci et al.) in view of US Patent Application Publication 20160360382 A1 (Gross et al.), US Patent Publication 11740770 B1 (Neervannan et al.), and US Patent Publication 10783192 B1 (Soubbotin), and further in view of US Patent Publication 10963518 B2 (Aggour et al.).
Regarding Claims 7 and 14, Ferrucci et al. in view of Gross et al., Neervannan et al., and Soubbotin teaches the system of claims 1 and 8.
Ferrucci et al. in view of Gross et al., Neervannan et al., and Soubbotin does not explicitly teach: wherein the system is further configured to: deploy, using a machine learning (ML) subsystem, a trained ML model on the information from the one or more sources of network data; generate predictive analytics for the user based on at least the information from the one or more sources of network data using the trained ML model; and display the predictive analytics for the user on the second user interface.
However, Aggour et al. teaches a query system that is further configured to: deploy, using a machine learning (ML) subsystem, a trained ML model on the information from the one or more sources of network data;
(In accordance with implementations, scalable analytic execution layer 116 can optionally apply machine learning and artificial intelligence techniques to the query results, step 325. These techniques identify data correlations responsive to the consumer's query details.) (Col. 8, lines 18-30). In this reference the (query results) are being mapped to “network data.”
generate predictive analytics for the user based on at least the information from the one or more sources of network data using the trained ML model;
“Predictive analytics” are being mapped to (identifying data correlations) and (visualizations of the raw data or analytic results can be generated) as any task being performed by a machine learning algorithm could be considered “predictive”.
and display the predictive analytics for the user on the second user interface.
(Visualizations of the raw data or analytic results can be generated, step 330. The visualizations of raw data and/or analytic results, or the raw data and/or analytic results in native format (e.g., relational data, time series data, images, document, etc.) can be presented to the data consumer, step 335.) (Col. 8, lines 18-30).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to modify the search/summarization system as taught by Ferrucci et al. in view of Gross et al., Neervannan et al., and Soubbotin to extract information from query results using machine learning as taught by Aggour et al. This would have been an obvious improvement, as it efficiently and scalably provides data relationships and rapid analytics to the user, as described by Aggour et al. (Col. 6 line 65 to Col. 7 line 10).
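The claimed step of deploying a trained ML model on the retrieved network data can be sketched, at its simplest, as applying pre-trained model parameters to numeric features of each result. The feature names, weights, and linear form below are invented for illustration and are not the analytics of Aggour et al.

```python
# Sketch: apply a pre-trained linear scoring model to features of each query
# result to produce per-result predictive scores for display on a dashboard.

def predict_scores(results, weights, bias=0.0):
    """results: list of feature dicts per retrieved item;
    weights/bias: parameters of an already-trained linear model."""
    scores = []
    for features in results:
        s = bias + sum(weights.get(name, 0.0) * value
                       for name, value in features.items())
        scores.append(s)
    return scores
```

In a deployed system the trained model would typically be loaded from an ML subsystem rather than passed as a dict, but the deploy-then-score pattern is the same.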
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NICHOLAS DANIEL LOWEN whose telephone number is (571)272-5828. The examiner can normally be reached Mon-Fri 8:00am - 4:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Paras D Shah can be reached at (571) 270-1650. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/NICHOLAS D LOWEN/Examiner, Art Unit 2653
/Paras D Shah/Supervisory Patent Examiner, Art Unit 2653
01/16/2026