DETAILED ACTION
This Office action is responsive to the filing of 11/2/2023. Claims 1-20 are pending and have been considered below.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Allowable Subject Matter
Claims 8-11, 16, 18, and 20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is an examiner's statement of reasons for allowance. The prior art of record fails to disclose detecting a burst of conversation strings generated in a time period, with its frequency represented by a curve, and filtering the conversation in the burst, in combination with the other limitations recited within the claimed context. The claims present a combination of limitations that differs from the cited art, and no reasonable combination of references would teach that combination.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 6-7, 14, and 16-18 recite the limitation “the summary.” There is insufficient antecedent basis for this limitation in the claims. Claim 1, however, recites “a summary of interactions” and “the summary of a topic.”
Claims 13, 14, and 16-17 recite the limitation “the summary of the conversation strings.” There is insufficient antecedent basis for this limitation in the claims. Claim 1, however, recites “a summary of interactions” and “the summary of a topic.”
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-7, 12-15, 17, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Kruger (US 2009/0157709) in view of Kestell (US 2024/0325930), and further in view of Neervannan (US 2025/0061141).
Claims 1, 19: Kruger discloses a method, comprising:
receiving a request for a summary of interactions exchanged between two or more users (Fig. 4: 256, Show chat summary) during a time window of the conversation and the interactions include conversation strings (par. 48, as the conversation proceeds a record of the words used in the conversation may be stored) related to one or more topics discussed (Fig. 4: 260 tag cloud conversation summary showing topics such as ‘energy’ ‘green-initiative’ etc.);
identifying a subset of the interactions captured during conversation and correspond with the time window (par. 49, time stamp may be used to generate conversation summary 260 (e.g. tag cloud). In this way, conversation display process 10 may further include determining a weighting for each of the words based upon the count and the time stamp (110). The weighting for each word may be determined using a variety of different factors, including but not limited to, the word's count, the word's recent usage, etc.);
filtering the subset of the interactions to retain select ones of the interactions with the conversation strings that are relevant (par. 49, The weighting for each word may be determined using a variety of different factors, including but not limited to, the word's count, the word's recent usage, etc.);
analyzing the conversation strings included in the select ones of the interactions to identify keywords representing topics discussed within (Fig. 4: 260 tag cloud conversation summary showing topics such as ‘energy’ ‘green-initiative’ etc.); and
presenting the keywords in a user interface for user selection, the keywords presented using visual representation defining a level of prominence assigned to each of the keywords (Fig. 4: 260 tag cloud conversation summary showing topics such as ‘energy’ ‘green-initiative’ etc. showing levels of prominence of certain keywords);
the keyword discussed in the conversation strings of the select ones of the interactions exchanged between the two or more users during the time window (Fig. 4: 260 tag cloud conversation summary showing topics such as ‘energy’ ‘green-initiative’ etc.).
However, Kruger does not explicitly disclose:
(1) during a time window of game play of a video game, the game play generating game data defining game state of the video game and the interactions include conversation strings related to one or more topics discussed during the game play; identifying a subset of the interactions captured during the game play of the video game and correspond with the time window; filtering the subset of the interactions to retain select ones of the interactions with the conversation strings that are relevant to the game play of the video game; conversation strings that are relevant to the game play of the video game;
(2) wherein a keyword, when selected at the user interface, is configured to provide the summary of a topic associated.
Kestell discloses a similar method for conversations between users, including:
(1) during a time window of game play of a video game, the game play generating game data defining game state of the video game (par. 33, video game application 130 includes game engine 132, game data 134, and party system 136. As known to a person of ordinary skill in the art, a game engine uses game data (e.g., state data.)) and the interactions include conversation strings related to one or more topics discussed during the game play;
identifying a subset of the interactions captured during the game play of the video game and correspond with the time window;
filtering the subset of the interactions to retain select ones of the interactions with the conversation strings that are relevant to the game play of the video game;
conversation strings that are relevant to the game play of the video game (par. 35, voice and/or text chat communications, gameplay streaming, subset parties, gameplay recommendations, and other functionality based in part on game data corresponding to a video game or game mode thereof and/or corresponding to one or more members of a party.)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Kruger and Kestell so as to provide video game players with the ability to summarize conversations within the potentially fast-paced environment of a video game.
Neervannan discloses a similar interface for searching and summarizing content, including:
(2) wherein a keyword, when selected at the user interface, is configured to provide the summary of a topic associated (Fig. 19: 1920, 1930; par. 263, Topic summaries 1930 includes LLM generated summaries relevant to the selected topic shown in topics 1920.)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Kruger and Neervannan so as to provide the user with a more detailed summary of one of the topics discussed in the conversation, beyond what the tag cloud could provide.
Claim 2: Kruger, Kestell, and Neervannan disclose the method of claim 1, wherein a keyword from the identified keywords is mapped to a game event occurring in a specific portion of the video game that falls within the time window, and the summary of the topic represented by the keyword includes details of the game event that occurred in the specific portion of the video game during gameplay (Kestell par. 49, events in a video game application.)
Claim 3: Kruger, Kestell, and Neervannan disclose the method of claim 1, wherein presenting the keywords using visual representation includes stylizing each keyword of the identified keywords presented in the user interface by adjusting one or more rendering attributes of said keyword, the stylizing performed to distinctly represent the level of prominence assigned to said keyword when rendered at the user interface (Kruger Fig. 4: 260, size and color.)
Claim 4: Kruger, Kestell, and Neervannan disclose the method of claim 3, wherein stylizing each keyword of the identified keywords includes any one or a combination of bolding, italicizing, color-coding, sizing, flashing and frequency of flashing, and underlining, wherein an amount of stylizing done to said each keyword is done in accordance to the level of prominence assigned to said each keyword, the amount of stylizing performed to provide visually distinguishable representation of said each keyword (Kruger Fig. 4: 260, the more prominent tags are larger and darker in color, in accordance with prominence.)
Claim 5: Kruger, Kestell, and Neervannan disclose the method of claim 1, wherein the level of prominence of each keyword of the identified keywords is determined by analyzing the conversation strings of the interactions captured during game play using a generative AI engine, the level of prominence determined as a function of any one or combination of a percentage of time spent in discussing a topic associated with said each keyword, a number of interactions discussing said topic (Kruger par. 49, the time stamp may be used to generate conversation summary 260 (e.g., tag cloud). In this way, conversation display process 10 may further include determining a weighting for each of the words based upon the count and the time stamp (110). The weighting for each word may be determined using a variety of different factors, including but not limited to, the word's count, the word's recent usage, etc.), a number of users participating in the interactions discussing said topic and a number of portions of the video game where said topic associated with said each keyword is discussed.
Claim 6: Kruger, Kestell, and Neervannan disclose the method of claim 5, wherein the summary identifies each portion of the number of portions of the video game where the topic mapped to said each keyword is discussed (Neervannan par. 263, the topic summarization includes multiple bullet-point summaries 1932, with one or more citations 1934 (e.g., “1”, “2”, “3”, “4”, and “5”) that link to the underlying documents from which the topic summaries 1932 were generated.)
Claim 7: Kruger, Kestell, and Neervannan disclose the method of claim 1, wherein the summary is customized and presented in accordance to a type of user requesting the summary (Neervannan par. 213, the market intelligence platform may provide a daily summary (or a summary on another time frame) that is generated based on a user's preferences, recommendations, and/or predicted actions. As the user interacts with the system, the system is configured to learn about the types of information the user is interested in, and when the user is interested in that information.)
Claim 12: Kruger, Kestell, and Neervannan disclose the method of claim 1, wherein the subset of the interactions is filtered to remove one or more interactions with language that are deemed inappropriate for presenting to the two or more users or include content that do not add value to discussions on any one of the topics included in the conversation strings (Neervannan par. 79, retrieve relevant text that enables them to make better investment decisions in the financial industry.)
Claim 13: Kruger, Kestell, and Neervannan disclose the method of claim 1, wherein the keywords are visually represented in a word cloud (Kruger Fig. 4: 260, tag cloud) with each keyword of the identified keywords presented as an interactive word-bubble, each word-bubble, when selected, is configured to render the summary of the conversation strings related to a topic associated with the keyword represented by the selected interactive word-bubble (Neervannan Fig. 19: 1920, 1930; par. 263, Topic summaries 1930 includes LLM generated summaries relevant to the selected topic shown in topics 1920.)
Claim 14: Kruger, Kestell, and Neervannan disclose the method of claim 13, wherein presenting the summary further includes, identifying one or more sentiments expressed in the conversation strings related to the topic associated with the keyword of the word-bubble; and presenting the summary of the conversation strings separately for each sentiment expressed, at the user interface (Neervannan Fig. 7: Sentiment visualization search, sentiment heat map by sector; Fig. 20.)
Claim 15: Kruger, Kestell, and Neervannan disclose the method of claim 13, wherein presenting the summary of the conversation strings further includes presenting summary of an event occurring within a portion of the video game that is mapped to the keyword of the word-bubble (Kestell par. 49, events in a video game application.)
Claim 17: Kruger, Kestell, and Neervannan disclose the method of claim 1, wherein the time window for receiving the summary is defined by a start time and an end time selectable using a sliding scale rendered alongside the game data, and wherein the summary is presented in an audio format (Kruger Fig. 4: 266; par. 47, a precision selector 266.)
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Barsoba (US 10177926), visualizing conversations.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDREY BELOUSOV, whose telephone number is (571) 270-1695 and whose email address is Andrew.belousov@uspto.gov. The examiner can normally be reached Monday-Friday, EST.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Adam Queler, can be reached at telephone number 571-272-4140. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from Patent Center and the Private Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from Patent Center or Private PAIR. Status information for unpublished applications is available through Patent Center and Private PAIR for authorized users only. Should you have questions about access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.
/Andrey Belousov/
Primary Examiner
Art Unit 2172
1/15/26