DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 02/11/2025 was filed in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-4, 6, 7, 9-17, and 19-21 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The independent claims 1, 19, and 21 relate to the statutory categories of method/process and machine/apparatus. The independent claims recite “receiving a generation request for generating a meeting summary of a target meeting; extracting a meeting record file of the target meeting according to the generation request, wherein the meeting record file comprises a meeting audio recording and display data, and the meeting audio recording and the display data are collected through an intelligent meeting interaction device; and parsing the meeting record file to generate the meeting summary of the target meeting, wherein the meeting summary comprises a spoken text generated according to the meeting audio recording and the display data, and a time of the display data corresponds to a time of the meeting audio recording”.
The limitations of claims 1, 19, and 21 of “receiving…”, “extracting…”, and “parsing…”, as drafted, cover mental activity. More specifically, a human, while viewing a video recording of a meeting, can determine whether the meeting has audio, can generate pictures of the meeting, can transcribe the audio portion, and can make notes about where in the audio certain topics were discussed. The human can listen to the audio and take notes of what is being discussed and of the particular time in the meeting at which each topic was discussed.
This judicial exception is not integrated into a practical application. In particular, claims 19 and 21 recite the additional elements of a “memory”, a “program”, and a “processor”, which are recited at a high level of generality in the specification. For example, paragraph [0127] of the as-filed specification describes using a general operating system. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract idea into a practical application, the additional element of using a computer amounts to no more than a generic computer. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claims are not patent eligible.
With respect to claim 2, the claim relates to summarizing the meeting into different topics, where each topic is discussed in the audio of the meeting. This relates to a mental activity of making an outline when summarizing the meeting and noting where in the audio and video of the meeting the topic was covered. No additional limitations are present.
With respect to claim 3, the claim relates to determining where in the meeting a particular topic was covered. This relates to a mental activity of making an outline when summarizing the meeting and noting where in the audio of the meeting the topic was covered. No additional limitations are present.
With respect to claim 4, the claim relates to identifying the different participants in the meeting by voice recognition and making an outline of what each participant discussed. This relates to a mental activity of recognizing who is speaking and making an outline of what they spoke about. No additional limitations are present.
With respect to claims 6 and 9, the claims relate to capturing an image of who is speaking. This relates to a mental activity of sketching an image of who is on the screen. No additional limitations are present.
With respect to claim 7, the claim relates to the layout of the meeting summary. This relates to the mental activity of determining how the outline of the meeting will look, including the sketch of the person speaking. No additional limitations are present.
With respect to claims 10, 14, and 15, the claims relate to determining how long a particular participant has spoken and when in the image capture they spoke. This relates to a mental activity of recognizing who is speaking in the video and for how long. No additional limitations are present.
With respect to claim 11, the claim relates to capturing an image of who is speaking and for how long. This relates to a mental activity of sketching an image of who is on the screen and noting how long they spoke. No additional limitations are present.
With respect to claim 12, the claim relates to determining what topic was being discussed at a particular time and for how long it was discussed. This relates to a mental activity of determining what was being discussed at a particular time and for how long it was discussed. No additional limitations are present.
With respect to claim 13, the claim relates to determining when something was written during the meeting. This relates to a mental activity of determining when a particular topic was discussed using a whiteboard. No additional limitations are present.
With respect to claim 16, the claim relates to viewing a live video feed of the meeting. This relates to a mental activity of being present in real-time while the meeting is going on. No additional limitations are present.
With respect to claim 17, the claim relates to how the meeting summary is outputted. This relates to a mental activity of printing out a hard copy of the meeting summary. No additional limitations are present.
With respect to claim 20, the claim relates to how the audio of the meeting is recorded. This relates to using a microphone to record the meeting, which is considered insignificant pre-solution activity. No additional limitations are present.
Claim 21 is rejected under 35 U.S.C. 101 because the claim is drawn to a “signal” per se, as recited in the preamble, and as such is non-statutory subject matter. In paragraphs [0130]-[0131] of the as-filed specification, the term “readable storage medium” is not defined as to the scope the term is meant to encompass. Hence, one of ordinary skill in the art can interpret such term to include transitory signals as well as non-transitory media. It does not appear that a claim reciting a signal encoded with functional descriptive material falls within any of the categories of patentable subject matter set forth in § 101. First, a claimed signal is clearly not a “process” under § 101 because it is not a series of steps. The other three § 101 classes of machines, compositions of matter, and manufactures “relate to structural entities and can be grouped as 'product' claims in order to contrast them with process claims.” 1 D. Chisum, Patents § 1.02 (1994). The Applicant's specification presents a broad definition of what the “readable storage medium” covers, broad enough to include transitory signals. The Applicant's filed specification, in paragraphs [0130]-[0131], refers to the “storage medium”. Hence, the claims appear to be drawn toward transitory signals, which are not subject matter eligible. In order to overcome the present rejection, the Applicant is advised to amend the claims by using the following terminology: “non-transitory machine readable storage medium.” Such example terminology has also been found in the Official Gazette, 1351 OG 212.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-16 and 19-21 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Shen et al. (US 10,248,934).
Regarding Claim 1, Shen et al. discloses a method of generating a meeting summary, comprising: receiving a generation request for generating a meeting summary of a target meeting (The user may be able to provide input selections and/or meeting parameters via the GUI. These meeting parameters may include, for example, a date, a time, and/or a title of a particular meeting that the user wishes to review) (col. 13, lines 35-39); extracting a meeting record file of the target meeting according to the generation request, wherein the meeting record file comprises a meeting audio recording and display data (A selection from the user may then be received, including search criteria associated with the different data streams and options discussed above (Step 425). For example, the user may be able to pick a particular topic to follow, input one or more key words, identify an attendee of the meeting, choose a particular display to view, and/or select a time period within the meeting. Based on these selections, any number of different searches and/or filters of the separated data streams may then be applied) (col. 13, lines 54-62), and the meeting audio recording and the display data are collected through an intelligent meeting interaction device (For example, equipment such as a camera device 20, a microphone device 22, and a display 32 may facilitate this communication, and/or the collection, processing, and displaying of communication-related data) (col. 3, lines 33-38); and parsing the meeting record file to generate the meeting summary of the target meeting (For example, the user may be able to pick a particular topic to follow, input one or more key words, identify an attendee of the meeting, choose a particular display to view, and/or select a time period within the meeting. Based on these selections, any number of different searches and/or filters of the separated data streams may then be applied) (col. 13, lines 54-62), wherein the meeting summary comprises a spoken text generated according to the meeting audio recording (For example, the search may return transcript associated with only the last three minutes of the meeting based on only the audio that originated from only the position(s) of attendee1 and/or the projector display screen) (col. 14, lines 50-54) and the display data (The video temporal search may return only portions of the above-described cropped video from the DPS and/or APS that correspond with the user-selected timing (e.g., only the last three minutes of the video of only the locations within the bounding boxes where attendee1 and/or the projector screen are positioned)) (col. 14, lines 22-28), and a time of the display data corresponds to a time of the meeting audio recording (last 3 minutes of the meeting) (col. 14, lines 18-54).
Regarding Claim 2, Shen et al. discloses the method, wherein the meeting summary comprises a plurality of sub-contents (It should be noted that multiple video clips (e.g., of an attendee and a display, of different attendees, of an attendee and a meeting log reviewer, etc.) may be played back at the same time, if desired (e.g., via picture-in-picture, multiple virtual windows, etc. that may be available via the GUIs of FIGS. 5, 6, 7, and 8)) (col. 14, lines 55-65), each sub-content comprises the spoken text and the display data (Once all of the user selections have been made and the corresponding audio, video, and transcript returned, the meeting data may be played back on display 32 of portal 18 (Step 450)) (col. 14, lines 55-65).
Regarding Claim 3, Shen et al. discloses the method, wherein a time of the display data comprised in each sub-content corresponds to a time of the spoken text (For example, the user may be able to pick a particular topic to follow, input one or more key words, identify an attendee of the meeting, choose a particular display to view, and/or select a time period within the meeting) (col. 13, lines 54-62) comprised in the sub-content (Once all of the user selections have been made and the corresponding audio, video, and transcript returned, the meeting data may be played back on display 32 of portal 18 (Step 450)) (col. 14, lines 55-65).
Regarding Claim 4, Shen et al. discloses the method, wherein parsing the meeting record file to generate the meeting summary of the target meeting comprises: identifying a plurality of speaking objects corresponding to the meeting audio recording according to voiceprint information (In some embodiments, the individual voice streams may be recognized and linked to identified attendees. This may be accomplished, for example, via voiceprinting technology. A voiceprint, like a fingerprint, includes a collection of characteristics that are unique to a particular attendee (e.g., to the attendee's voice)) (col. 10, lines 50-55); and forming the plurality of sub-contents according to a speaking order of the speaking objects (The returned audio may then be filtered based on the user-selection made in Step 430. Specifically, the returned audio may be filtered according to attendee identification and/or display identification) (col. 14, lines 34-37).
Regarding Claim 5, Shen et al. discloses the method, wherein the meeting summary comprises the meeting audio recording, and after forming the plurality of sub-contents according to the speaking order of the speaking objects, the method further comprises: displaying an audio play control identifier and the spoken text corresponding to each sub-content, wherein the audio play control identifier is configured to control playing the meeting audio recording corresponding to the sub-content, and the spoken text is obtained by identifying the meeting audio recording corresponding to the sub-content (In some embodiments, one or more additional buttons may be selectively displayed within secondary area 58. These buttons may allow the user to alter the way in which audio and/or video content is displayed. As can be seen in FIG. 8, these buttons may include a “Follow This Topic” button 70, a “Follow This Attendee” button 72, and a “Sequential Play” button 74) (col. 16, line 57-col. 17, line 13); and displaying at least one data display region in the meeting summary, wherein the data display region is configured to display the display data of which the time corresponds to the time of the meeting audio recording (During viewing of video content following the topic-focused format or the sequential playback format, when the user selects (e.g., taps or clicks on) button 72, the current playback of meeting content may switch to playback that follows the attendee that is actively speaking at the time of button-selection) (col. 17, lines 4-9).
Regarding Claim 6, Shen et al. discloses the method, wherein the display data (Once all of the user selections have been made and the corresponding audio, video, and transcript returned, the meeting data may be played back on display 32 of portal 18 (Step 450)) (col. 14, lines 55-65) comprises one or more of a screen-recording video and a screenshot image captured by the intelligent meeting interaction device during the target meeting (Camera device 20 can include various components such as one or more processors, a camera, a memory, and a transceiver. It is contemplated that camera device 20 can include additional or fewer components. Camera device 20 may include one or more sensors for converting optical images to digital still image and/or video data) (col. 4, lines 8-15).
Regarding Claim 7, Shen et al. discloses the method, wherein the quantity of the data display regions is multiple, each data display region corresponds to one sub-content, and the data display region is configured to play a screen-recording video corresponding to the sub-content or display a screenshot image corresponding to the sub-content (It should be noted that multiple video clips (e.g., of an attendee and a display, of different attendees, of an attendee and a meeting log reviewer, etc.) may be played back at the same time, if desired (e.g., via picture-in-picture, multiple virtual windows, etc. that may be available via the GUIs of FIGS. 5, 6, 7, and 8)) (col. 14, lines 55-65).
Regarding Claim 8, Shen et al. discloses the method, wherein after parsing the meeting record file to generate the meeting summary of the target meeting, the method further comprises: receiving a control request for a target control identifier among audio play control identifiers (As can be seen in FIG. 8, these buttons may include a “Follow This Topic” button 70, a “Follow This Attendee” button 72, and a “Sequential Play” button 74) (col. 16, line 57-col. 17, line 13); playing a target meeting audio recording corresponding to the target control identifier according to the control request (During viewing of any video content, when the user selects (e.g., taps or clicks on) button 70, the current playback of meeting content may switch from a current playback format (e.g., from either an attendee-focused playback or a sequential playback) to playback that follows a current topic being discussed) (col. 16, line 57-col. 17, line 13); and synchronously displaying the display data in the data display region according to a correspondence between the time of the display data and a time of the target meeting audio recording (During viewing of video content following the topic-focused format or the sequential playback format, when the user selects (e.g., taps or clicks on) button 72, the current playback of meeting content may switch to playback that follows the attendee that is actively speaking at the time of button-selection. Similarly, during viewing of video content following the topic-focused or attendee-focused format, when the user selects (e.g., taps or clicks on) button 74, the current playback of meeting content may switch to playback that follows a temporal sequence) (col. 16, line 57-col. 17, line 13).
Regarding Claim 9, Shen et al. discloses the method, wherein the display data comprises a screenshot image at an end time of a speaking time of a corresponding speaking object in the meeting audio recording or at a preset time after the speaking time is ended (The video temporal search may be a search for a user-selected timing (e.g., start time, end time, and/or duration) of the cropped views) (col. 14, lines 18-21).
Regarding Claim 10, Shen et al. discloses the method, wherein the display data comprises the screen-recording video, and extracting the meeting record file of the target meeting according to the generation request comprises: determining a speaking time of each speaking object according to a recognition result of the speaking object in the meeting audio recording (An audio temporal search and ID filtering may also be implemented (Step 440). The audio temporal search may be a search for the user-selected timing (e.g., start time, end time, and/or duration) of audio recorded during the meeting) (col. 14, lines 29-40); and determining the screen-recording video corresponding to the speaking time according to the speaking time (This search may return all audio from the VS recorded in association with the user-selected timing) (col. 14, lines 29-40).
Regarding Claim 11, Shen et al. discloses the method, wherein the display data comprises display data of an operation region determined according to the speaking time (e.g., only the last three minutes of the video of only the locations within the bounding boxes where attendee1 and/or the projector screen are positioned) (col. 14, lines 18-28).
Regarding Claim 12, Shen et al. discloses the method, further comprising: obtaining the display data of the operation region determined according to the speaking time (The audio temporal search may be a search for the user-selected timing (e.g., start time, end time, and/or duration) of audio recorded during the meeting) (col. 14, lines 29-40) and (Once all of the user selections have been made and the corresponding audio, video, and transcript returned, the meeting data may be played back on display 32 of portal 18 (Step 450)) (col. 14, line 58); wherein obtaining the display data of the operation region determined according to the speaking time comprises: determining a target operation record corresponding to the speaking time (By way of temporal progress bar 66, the user may be able to manipulate (e.g., rewind, fast-forward, skip, pause, stop, accelerate, etc.) playback of the audio and video content) (col. 15, lines 47-49); identifying an operation region corresponding to a position where the target operation record is located (As shown in FIG. 6, a temporal progress bar 66 may alternatively or additionally overlay the video content, for example also at the lower edge of primary area 56) (col. 15, lines 44-49); and determining the display data corresponding to the speaking time according to the operation region corresponding to the position where the target operation record is located (An audio temporal search and ID filtering may also be implemented (Step 440). The audio temporal search may be a search for the user-selected timing (e.g., start time, end time, and/or duration) of audio recorded during the meeting) (col. 14, lines 29-40).
Regarding Claim 13, Shen et al. discloses the method, wherein the target operation record comprises an operation record of a writing operation (Camera device 20 can be configured to capture content presented or otherwise displayed during the meeting, such as writing and drawings on a whiteboard or paper flipper, and projected content on a projector screen 33) (col. 4, lines 8-15).
Regarding Claim 14, Shen et al. discloses the method, wherein determining the screen-recording video corresponding to the speaking time according to the speaking time comprises: determining an operation time corresponding to the speaking time, wherein the operation time covers the speaking time (An audio temporal search and ID filtering may also be implemented (Step 440). The audio temporal search may be a search for the user-selected timing (e.g., start time, end time, and/or duration) of audio recorded during the meeting) (col. 14, lines 29-40); and determining the screen-recording video corresponding to the speaking time according to the operation time (This search may return all audio from the VS recorded in association with the user-selected timing) (col. 14, lines 29-40).
Regarding Claim 15, Shen et al. discloses the method, wherein the operation time comprises the speaking time, and the operation time further comprises at least one of a first time period or a second time period (The individual voice streams may be continuously recorded and packaged together with attendee identification and/or time information to generate a Voice Stream (VS) (Step 335)) (col. 11, lines 7-18), wherein the first time period is a time period of a first preset duration before the speaking time, and the second time period is a time period of a second preset duration after the speaking time (The individual voice streams may be continuously recorded and packaged together with attendee identification and/or time information to generate a Voice Stream (VS) (Step 335). The VS may consist of one or more records and have the following format: [Time Duration (TimeStamp_start, TimeStamp_end), User ID]; wherein: Time Duration is a duration from a start of the associated voice stream to a current time period or a stop of the voice stream) (col. 11, lines 7-18).
Regarding Claim 16, Shen et al. discloses the method, wherein the meeting record file further comprises a live video file of the target meeting, and the meeting summary further comprises a live video clip corresponding in time to the meeting audio recording (For example, meeting logging and reviewing app 52 may be able to configure portal 18 to perform operations including: capturing a real-time (e.g., live) video stream, capturing a real-time (e.g., live) voice stream, displaying a graphical user interface (GUI) for receiving control instructions, receiving control instructions from the associated user via I/O devices 34 and/or the user interface, processing the control instructions, sending the real-time video and/or audio based on the control instructions, receiving real-time video and/or audio from other portals 18, and playing back selected streams of the video and audio in a manner customized by the user) (col. 6, lines 30-46).
Regarding Claim 19, Shen et al. discloses an electronic device comprising: a memory, a processor, and a program stored on the memory and executable on the processor (Fig. 2) (col. 5, lines 20-28), wherein the processor is configured to read the program in the memory to implement receiving a generation request for generating a meeting summary of a target meeting (The user may be able to provide input selections and/or meeting parameters via the GUI. These meeting parameters may include, for example, a date, a time, and/or a title of a particular meeting that the user wishes to review) (col. 13, lines 35-39); extracting a meeting record file of the target meeting according to the generation request, wherein the meeting record file comprises a meeting audio recording and display data (A selection from the user may then be received, including search criteria associated with the different data streams and options discussed above (Step 425). For example, the user may be able to pick a particular topic to follow, input one or more key words, identify an attendee of the meeting, choose a particular display to view, and/or select a time period within the meeting. Based on these selections, any number of different searches and/or filters of the separated data streams may then be applied) (col. 13, lines 54-62), and the meeting audio recording and the display data are collected through an intelligent meeting interaction device (For example, equipment such as a camera device 20, a microphone device 22, and a display 32 may facilitate this communication, and/or the collection, processing, and displaying of communication-related data) (col. 3, lines 33-38); and parsing the meeting record file to generate the meeting summary of the target meeting (For example, the user may be able to pick a particular topic to follow, input one or more key words, identify an attendee of the meeting, choose a particular display to view, and/or select a time period within the meeting. Based on these selections, any number of different searches and/or filters of the separated data streams may then be applied) (col. 13, lines 54-62), wherein the meeting summary comprises a spoken text generated according to the meeting audio recording (For example, the search may return transcript associated with only the last three minutes of the meeting based on only the audio that originated from only the position(s) of attendee1 and/or the projector display screen) (col. 14, lines 50-54) and the display data (The video temporal search may return only portions of the above-described cropped video from the DPS and/or APS that correspond with the user-selected timing (e.g., only the last three minutes of the video of only the locations within the bounding boxes where attendee1 and/or the projector screen are positioned)) (col. 14, lines 22-28), and a time of the display data corresponds to a time of the meeting audio recording (last 3 minutes of the meeting) (col. 14, lines 18-54).
Regarding Claim 20, Shen et al. discloses the electronic device, wherein the electronic device is the intelligent meeting interaction device, the intelligent meeting interaction device comprises a microphone, and the microphone is configured to capture the meeting audio recording (For example, equipment such as a camera device 20, a microphone device 22, and a display 32 may facilitate this communication, and/or the collection, processing, and displaying of communication-related data) (col. 3, lines 33-38).
Regarding Claim 21, Shen et al. discloses a readable storage medium having a program stored thereon, wherein the program, when executed by a processor (Another aspect of the disclosure is directed to a non-transitory computer-readable medium that stores instructions, which, when executed, cause one or more of the disclosed processors (e.g., processor 24) to perform the methods discussed above) (col. 17, lines 37-8), implements receiving a generation request for generating a meeting summary of a target meeting (The user may be able to provide input selections and/or meeting parameters via the GUI. These meeting parameters may include, for example, a date, a time, and/or a title of a particular meeting that the user wishes to review) (col. 13, lines 35-39); extracting a meeting record file of the target meeting according to the generation request, wherein the meeting record file comprises a meeting audio recording and display data (A selection from the user may then be received, including search criteria associated with the different data streams and options discussed above (Step 425). For example, the user may be able to pick a particular topic to follow, input one or more key words, identify an attendee of the meeting, choose a particular display to view, and/or select a time period within the meeting. Based on these selections, any number of different searches and/or filters of the separated data streams may then be applied) (col. 13, lines 54-62), and the meeting audio recording and the display data are collected through an intelligent meeting interaction device (For example, equipment such as a camera device 20, a microphone device 22, and a display 32 may facilitate this communication, and/or the collection, processing, and displaying of communication-related data) (col. 3, lines 33-38); and parsing the meeting record file to generate the meeting summary of the target meeting (For example, the user may be able to pick a particular topic to follow, input one or more key words, identify an attendee of the meeting, choose a particular display to view, and/or select a time period within the meeting. Based on these selections, any number of different searches and/or filters of the separated data streams may then be applied) (col. 13, lines 54-62), wherein the meeting summary comprises a spoken text generated according to the meeting audio recording (For example, the search may return transcript associated with only the last three minutes of the meeting based on only the audio that originated from only the position(s) of attendee1 and/or the projector display screen) (col. 14, lines 50-54) and the display data (The video temporal search may return only portions of the above-described cropped video from the DPS and/or APS that correspond with the user-selected timing (e.g., only the last three minutes of the video of only the locations within the bounding boxes where attendee1 and/or the projector screen are positioned)) (col. 14, lines 22-28), and a time of the display data corresponds to a time of the meeting audio recording (last 3 minutes of the meeting) (col. 14, lines 18-54).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Shen et al. in view of Mahmoud et al. (US 2021/0021558).
Regarding claim 17, Shen et al. fails to teach the method, wherein a format of the meeting record file and/or the meeting summary is a hypertext markup language (HTML) format.
Mahmoud et al. teaches the method, wherein a format of the meeting record file and/or the meeting summary is a hypertext markup language (HTML) format (In particular, analysis engine 246 may analyze video feed 226 from virtual display 208, audio feed 228 from virtual speaker 210, data received from resource 250 (e.g., HyperText Markup Language (HTML) and/or other content from resource 250), and/or other information related to the meeting to detect commands 230-232 to interactive virtual meeting assistant 132 that are issued by participants in the meeting) (page 4, paragraph [0036]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Shen with the teachings of Mahmoud to format the output of the transcription of the meeting audio recording into an HTML format so that it can be easily read and understood by the user.
Cited Art
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Oluyemi et al. (US 2022/0385858) discloses real-time, event-driven video conference analytics.
Hou et al. (US 2022/0207392) discloses generating summary and next actions in real-time for multiple users from interaction records in natural language.
Adlersberg et al. (US 11,334,618) discloses capturing the moment in audio discussions and recordings.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SATWANT K SINGH whose telephone number is (571)272-7468. The examiner can normally be reached Monday through Friday, 9:00 AM to 6:00 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Paras D Shah, can be reached at (571)270-1650. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SATWANT K SINGH/Primary Examiner, Art Unit 2653