DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
Response to Amendment
This communication is responsive to the applicant’s amendment dated 12/16/2025. Claims 1-4, 12, and 13 have been amended.
Response to Arguments
Applicant's arguments with respect to independent claims 1, 12, and 13 have been considered but are moot in view of the new ground(s) of rejection because the arguments pertain to the newly amended limitations.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-13 are rejected under 35 U.S.C. 103 as being unpatentable over Donaldson (US 20150348538 A1) in view of Chhabra et al. (US 20200293618 A1).
Regarding claims 1, 12, and 13, Donaldson teaches:
“A summarizing system comprising processing circuitry” (par. 0018; ‘Various embodiments or examples may be implemented in numerous ways, including as a system, a process, an apparatus, a user interface, or a series of program instructions on a computer readable medium such as a computer readable storage medium or a computer network where the program instructions are sent over optical, electronic, or wireless communication links.’) configured to:
“acquire speech” (par. 0020; ‘The audio signal may include speech.’);
“convert the speech into a plurality of texts” (par. 0026; ‘Speech recognizer 221 may translate or convert spoken words into text.’);
“generate a summary of the plurality of texts when the plurality of texts satisfy a summarizing execution condition” (par. 0029; ‘Summary generator 213 may be configured to generate a summary of the speech.’; par. 0030; ‘A content summary may be determined based on the words, vocal fingerprints, speakers, acoustic properties, or other parameters determined by speech analyzer 212. For example, based on word counts, and a comparison to the frequency that the words are used in the general English language, one or more keywords may be identified.’);
“output the summary to a user” (par. 0052; ‘Before connecting the caller to the conference, a speech summary manager may present the summary or keyword to the caller. The summary or keyword may be presented using a loudspeaker local to the caller, which may be remote from the speech summary manager.’).
However, Donaldson does not expressly teach:
“and in response to receiving a change in at least a part of the plurality of texts, update the summary based on the change.”
In a similar field of endeavor (generating summaries of text), Chhabra teaches:
“and in response to receiving a change in at least a part of the plurality of texts, update the summary based on the change” (par. 0036; ‘In some configurations, the summary can be updated dynamically. Thus, as the user adjusts the selection of the segments, the summary can be updated in response to each adjustment to the input.’).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Donaldson’s summary generator by incorporating Chhabra’s summary updating method in order to update a summary in response to adjustments to the selected text. The combination would improve user interactions with computing devices. (Chhabra: par. 0009)
Regarding claim 2 (dep. on claim 1), the combination of Donaldson in view of Chhabra further teaches:
“wherein the processing circuitry determines that the summarizing execution condition is satisfied when an amount of the plurality of texts that are converted reached a first threshold value, the amount of the plurality of texts being calculated by counting a number of characters included in the plurality of texts respectively” (Donaldson: par. 0047; ‘FIG. 7A illustrates examples of a flowchart for determining keywords based on one or more speech parameters, such as word count, vocal fingerprint, acoustic properties, and the like, according to some examples.’; par. 0052; ‘A keyword may be determined by assigning weights words referenced in the speech based on word counts, vocal fingerprint durations, acoustic properties, and other parameters, and determining a significance of a word. The significance of a vocal fingerprint may also be determined based on word counts and other parameters, and the significance of a vocal fingerprint may in turn affect the significance of a word.’).
Regarding claim 3 (dep. on claim 1), the combination of Donaldson in view of Chhabra further teaches:
“wherein the processing circuitry: generates chunk data of the plurality of texts” (Chhabra: par. 0065; ‘Next, at operation 1006, the system 100 can analyze the selected segments. As described herein, one or more techniques can be utilized to interpret the content of the selected segments for the purpose of generating a summary.’);
“vectorizes the chunk data” (Chhabra: par. 0040; ‘For example, a linear regression mechanism may be used to generate a score that indicates a likelihood that a file has a threshold level of relevancy with one or more selected segments of a thread or document.’);
“calculates a similarity based on the vectorized chunk data” (Chhabra: par. 0040; ‘For example, a linear regression mechanism may be used to generate a score that indicates a likelihood that a file has a threshold level of relevancy with one or more selected segments of a thread or document.’);
“determines that the summarizing execution condition is satisfied when the similarity is lower than a second threshold value” (Donaldson: par. 0046; ‘A match may be determined if there is substantial similarity or a match within a tolerance, or may be determined based on statistical analysis, machine learning, neural networks, natural language processing, and the like.’).
Regarding claim 4 (dep. on claim 1), the combination of Donaldson in view of Chhabra further teaches:
“a memory that stores the summary and the plurality of texts in association with each other” (Donaldson: par. 0051; ‘A speech summary manager may cause data representing an event to be stored in an electronic calendar, which may be stored in a local or remote memory.’; Chhabra: par. 0036; ‘In some configurations, the summary can be updated dynamically. Thus, as the user adjusts the selection of the segments, the summary can be updated in response to each adjustment to the input.’ Updating summaries implies that the summaries are stored prior to the updating and are stored as the updated versions.).
Regarding claim 5 (dep. on claim 1), the combination of Donaldson in view of Chhabra further teaches:
“wherein the processing circuitry distributes the summary and the plurality of texts used for generating the summary to a terminal apparatus used by the user, and display, on a display of the terminal apparatus, a display screen that displays at least the summary” (Donaldson: par. 0023; ‘Speech summary manager 110 may be implemented on media device 101 (as shown), mobile device 102, a server, or another device, or distributed across any combination of devices.’ ‘Speech summary manager 110 may also present summary 160 on a display, or via another user interface.’).
Regarding claim 6 (dep. on claim 5), the combination of Donaldson in view of Chhabra further teaches:
“wherein the display screen displays the plurality of texts used for generating the summary in association with the summary in a manner that is editable by the user” (Donaldson: par. 0033; ‘Further, user interface 234 may be used to configure speech summary manager 210, such as adding a user profile to user profile database 241, modifying rules for creating action items, correcting a word that is repeatedly misrecognized by speech recognizer 221, and the like.’).
Regarding claim 7 (dep. on claim 5), the combination of Donaldson in view of Chhabra further teaches:
“wherein the display screen displays the summary in a manner that allows the user to extract, copy, or edit the summary” (Donaldson: par. 0033; ‘Further, user interface 234 may be used to configure speech summary manager 210, such as adding a user profile to user profile database 241, modifying rules for creating action items, correcting a word that is repeatedly misrecognized by speech recognizer 221, and the like.’).
Regarding claim 8 (dep. on claim 5), the combination of Donaldson in view of Chhabra further teaches:
“wherein the speech is collected during a conference in which content data is transmitted or received between a plurality of terminal apparatuses, and the processing circuitry is configured to display the display screen superimposed on a conference screen provided for the conference” (Donaldson: par. 0022; ‘A speech session may be associated with a variety of purposes, such as, delivering an address to an audience, giving a lecture or presentation, having a discussion, meeting, debate, chat, brainstorming session, and the like.’; par. 0023; ‘Speech summary manager 110 may be implemented on media device 101 (as shown), mobile device 102, a server, or another device, or distributed across any combination of devices.’ ‘Speech summary manager 110 may also present summary 160 on a display, or via another user interface.’).
Regarding claim 9 (dep. on claim 4), the combination of Donaldson in view of Chhabra further teaches:
“wherein the processing circuitry is configured to update the summary, after a first time period has elapsed since at least a part of the plurality of texts was changed for the first time or after a second time period has elapsed since at least a part of the plurality of texts was changed for the last time” (Donaldson: par. 0033; ‘Further, user interface 234 may be used to configure speech summary manager 210, such as adding a user profile to user profile database 241, modifying rules for creating action items, correcting a word that is repeatedly misrecognized by speech recognizer 221, and the like. Still, user interface 234 may be used for other purposes.’).
Regarding claim 10 (dep. on claim 1), the combination of Donaldson in view of Chhabra further teaches:
“wherein the processing circuitry stops updating the summary, in a case where the summary has been changed by a user input” (Donaldson: par. 0023; ‘Before connecting him to the conference call, speech summary manager 110 may provide the tardy user with an option to listen to a summary 160 of what has been discussed in the conference call thus far.’).
Regarding claim 11 (dep. on claim 4), the combination of Donaldson in view of Chhabra further teaches:
“wherein in response to reception of a predetermined operation by the user, the processing circuitry is configured to update the summary” (Donaldson: par. 0033; ‘Further, user interface 234 may be used to configure speech summary manager 210, such as adding a user profile to user profile database 241, modifying rules for creating action items, correcting a word that is repeatedly misrecognized by speech recognizer 221, and the like. Still, user interface 234 may be used for other purposes.’).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARK VILLENA whose telephone number is (571) 270-3191. The examiner can normally be reached 10 am - 6 pm EST Monday through Friday.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Richemond Dorvil, can be reached at (571) 272-7602. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
MARK VILLENA
Examiner
Art Unit 2658
/MARK VILLENA/Examiner, Art Unit 2658