DETAILED ACTION
The following is a Final Office action. In response to Examiner’s communication of 9/23/25, Applicant, on 12/22/25, amended claims 1, 4, 8, 10, 11, 14, 16, and 19. Claims 1-20 are now pending and have been rejected as indicated below.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
Applicant’s amendments are acknowledged.
The 35 USC 101 rejections of claims 1-20 regarding abstract ideas are maintained in light of Applicant’s amendments and explanations.
Revised 35 USC 103 rejections of claims 1-20 are applied in light of Applicant’s amendments and explanations.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Here, under the broadest reasonable interpretation of the claimed invention, Examiner finds that the Applicant invented a method and system for determining completions of agenda items of a multi-participant communication using a transcription during the multi-participant communication and generating an agenda for a next multi-participant communication including incomplete agenda items. Examiner formulates an abstract idea analysis, following the framework described in the MPEP, as follows:
Step 1: The claims are directed to a statutory category, namely a "method" (claims 1-10) and "system" (claims 11-20).
Step 2A - Prong 1: The claims are found to recite limitations that set forth the abstract idea(s), namely, regarding claim 1:
identifying one or more agenda items of a first agenda associated with a first multi-participant communication…;
generating… a real-time transcription of the first multi-participant communication using audio data of the first multi-participant communication;
determining… that a first agenda item of the one or more agenda items remains incomplete at an end of the first multi-participant communication;
identifying… a person based on a first set of one or more keywords from the real-time transcription of the first multi-participant communication wherein a second set of one or more keywords associated with the person link the person to the first agenda item, internal metadata associated with users of the software platform, and external metadata associated with users of the software platform, wherein the person did not participate in the first multi-participant communication;
generating… for a second multi-participant communication, a second agenda including the first agenda item;
and sending an invite for the second multi-participant communication and the second agenda to the person, wherein the invite includes an action item assigned to the person.
Independent claims 11 and 16 recite substantially similar claim language.
Dependent claims 2-10, 12-15, and 17-20 recite the same or similar abstract idea(s) as independent claims 1, 11, and 16 with merely a further narrowing of the abstract idea(s) to particular data characterization and/or additional data analyses performed as part of the abstract idea.
The limitations in claims 1-20 above fall well within the groupings of subject matter identified by the courts as being abstract concepts; specifically, the claims are found to correspond to the category of:
"Certain methods of organizing human activity - fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions)" as the limitations identified above are directed to determining completions of agenda items of a multi-participant communication using a transcription during the multi-participant communication and generating an agenda for a next multi-participant communication including incomplete agenda items, and thus constitute a method of organizing human activity including at least commercial or business interactions or relations and/or a management of user personal behavior; and/or
"Mental processes - concepts performed in the human mind (including an observation, evaluation, judgment, opinion)" as the limitations identified above include mere data observations, evaluations, judgments, and/or opinions, e.g., including determining completions of agenda items of a multi-participant communication using a transcription during the multi-participant communication and generating an agenda for a next multi-participant communication including incomplete agenda items, which is capable of being performed mentally and/or using pen and paper.
Step 2A - Prong 2: Claims 1-20 are found to be directed to the abstract idea identified above because the claims, as a whole, fail to integrate the claimed judicial exception into a practical application; specifically, the claims recite the additional elements of:
"A method, comprising: identifying one or more agenda items of a first agenda associated with a first multi-participant communication implemented using a software platform, / A system, comprising: a data store configured to store data indicative of agenda items; a communication system configured to implement a first multi-participant communication at a first time and a second multi-participant communication at a second time after the first time; and an agenda intelligence system configured to: / An apparatus, comprising: a memory; and a processor configured to execute instructions stored in the memory to:" (claims 1, 11, and 16), “generating, by a transcription engine of the software platform configured for automatic speech recognition; determining, by an agenda intelligence system of the software platform using a machine learning model to process the real-time transcription,” (claims 1, 11, and 16), however the aforementioned elements merely amount to generic components of a general purpose computer used to "apply" the abstract idea (MPEP 2106.05(f)) and thus fail to integrate the recited abstract idea into a practical application; furthermore, the high-level recitation of receiving data from a generic "software platform" is at most an attempt to limit the abstract idea to a particular field of use (MPEP 2106.05(h), e.g.: "For instance, a data gathering step that is limited to a particular data source (such as the Internet) or a particular type of data (such as power grid data or XML tags) could be considered to be both insignificant extra-solution activity and a field of use limitation. See, e.g., Ultramercial, 772 F.3d at 716, 112 USPQ2d at 1755 (limiting use of abstract idea to the Internet); Electric Power, 830 F.3d at 1354, 119 USPQ2d at 1742 (limiting application of abstract idea to power grid data); Intellectual Ventures I LLC v. Erie Indem. Co., 850 F.3d 1315, 1328-29, 121 USPQ2d 1928, 1939 (Fed. Cir. 2017) (limiting use of abstract idea to use with XML tags).") and/or merely insignificant extra-solution activity (MPEP 2106.05(g)) and thus further fails to integrate the abstract idea into a practical application;
Step 2B: Claims 1-20 do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements as described above with respect to Step 2A Prong 2 merely amount to a general purpose computer that attempts to apply the abstract idea in a technological environment (MPEP 2106.05(f)), including merely limiting the abstract idea to a particular field of use via a "software platform", as explained above, and/or performs insignificant extra-solution activity, e.g. data gathering or output, (MPEP 2106.05(g)), as identified above, which is further found under step 2B to be merely well-understood, routine, and conventional activities as evidenced by MPEP 2106.05(d)(II) (describing conventional activities that include transmitting and receiving data over a network, electronic recordkeeping, storing and retrieving information from memory, electronically scanning or extracting data from a physical document, and a web browser's back and forward button functionality). Therefore, similarly the combination and arrangement of the above identified additional elements when analyzed under Step 2B also fails to necessitate a conclusion that the claims amount to significantly more than the abstract idea directed to determining completions of agenda items of a multi-participant communication using a transcription during the multi-participant communication and generating an agenda for a next multi-participant communication including incomplete agenda items.
Claims 1-20 are accordingly rejected under 35 USC § 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea(s)) without significantly more.
Note: The analysis above applies to all statutory categories of invention. As such, the presentment of any claim otherwise styled as a machine or manufacture, for example, would be subject to the same analysis.
For further authority and guidance, see:
MPEP § 2106
https://www.uspto.gov/patents/laws/examination-policy/subject-matter-eligibility
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication Number 2020/0403817 to Daredia et al. (hereafter referred to as Daredia) in view of U.S. Patent Application Publication Number 2015/0154291 to Shepherd et al. (hereafter referred to as Shepherd) in further view of U.S. Patent Application Publication Number 2013/0254279 to Bentley et al. (hereafter referred to as Bentley) and in even further view of U.S. Patent Application Publication Number 2014/0164510 to Abuelsaad et al. (hereafter referred to as Abuelsaad).
As per claim 1, Daredia teaches:
A method, comprising: identifying one or more agenda items of a first agenda associated with a first multi-participant communication implemented using a software platform (Paragraph Number [0093] teaches the content management system 102 can analyze the meeting agenda 404 to determine whether the meeting is on track based on the percentage of the meeting agenda 404 remaining. For instance, the content management system 102 can assign a planned time for each agenda item based on the total number of items and the amount of time scheduled for the meeting. If the content management system 102 detects that the meeting is likely to go over time, or that discussion of an agenda item has taken longer than its allotted time, the content management system 102 can provide a message to the meeting presenter indicating the time issue. Paragraph Number [0142] teaches processor 702 includes hardware for executing instructions, such as those making up a computer program. As an example, and not by way of limitation, to execute instructions, processor 702 may retrieve (or fetch) the instructions from an internal register, an internal cache, memory 704, or storage device 706 and decode and execute them).
determining, by an agenda intelligence system of the software platform using a machine learning model to process the real-time transcription (Paragraph Number [0024] teaches the content management system can use a training dataset including manually-labeled insight data corresponding to past meetings to train the machine-learning model. The content management system can then input audio data for a meeting into the trained machine-learning model, which outputs insights or suggestions by analyzing the audio data and other information associated with the later meeting. Paragraph Number [0054] teaches once the content management system 102 has meeting data and device input data, the content management system 102 can use the data to generate meeting insights. In particular, the content management system 102 analyzes the meeting data to determine content (e.g., determine what is being said, generated, or presented) for the meeting. For instance, the content management system 102 can utilize natural language processing to generate a transcription for audio data. The content management system 102 can store the transcription in memory and/or with one or more user accounts of one or more users associated with the meeting. Paragraph Number [0055] teaches the content management system 102 can then analyze the transcription to identify information associated with the audio content. For example, the content management system 102 can identify one or more users (e.g., using voice recognition technology) during the meeting and determine what each user says during the meeting. The content management system 102 can also identify a context of the audio data based on what the one or more users discuss, including one or more subject matters being discussed during one or more portions of the meeting. The content management system 102 can also determine times of different items being discussed during the meeting).
internal metadata associated with users of the software platform, and external metadata associated with users of the software platform (Paragraph Number [0056] teaches the content management system 102 can analyze content items associated with a meeting to identify relevant information from the associated content items. To illustrate, the content management system 102 can analyze text or metadata of content items generated and synchronized with the content management system 102 to determine text content relative to audio data for the meeting. The content management system 102 can also use video/image analysis to determine content of materials presented or generated (e.g., on a screen, whiteboard, or writing material) during the meeting. Paragraph Number [0081] teaches the meeting agenda 404 can also include metadata or other data that indicates to the content management system 102 that the meeting agenda 404 corresponds to a scheduled meeting).
Daredia teaches real-time analysis of audio in a meeting to determine if agenda items have been sufficiently covered, but does not explicitly teach the use of Automatic Speech Recognition technology to create a real-time transcript, which is taught by the following citations from Shepherd:
generating, by a transcription engine of the software platform configured for automatic speech recognition, a real-time transcription of the first multi-participant communication using audio data of the first multi-participant communication (Paragraph Number [0069] teaches virtual collaboration application 230 may implement systems and/or methods for live speech-to-text broadcast communication. Such systems and methods may be configured to employ Automatic Speech Recognition (ASR) technology combined with a client-server model and in order to synchronize the converted speech's text transcript for real-time viewing and later audio playback within a scrolling marquee (e.g., "news ticker"). In conjunction with the converted speech's text the audio data of the speech itself is persisted on a backend system, it may provide remote synchronous and asynchronous viewing and playback features for connected clients),
from a real-time transcription of the first multi-participant communication (Paragraph Number [0069] teaches virtual collaboration application 230 may implement systems and/or methods for live speech-to-text broadcast communication. Such systems and methods may be configured to employ Automatic Speech Recognition (ASR) technology combined with a client-server model and in order to synchronize the converted speech's text transcript for real-time viewing and later audio playback within a scrolling marquee (e.g., "news ticker"). In conjunction with the converted speech's text the audio data of the speech itself is persisted on a backend system, it may provide remote synchronous and asynchronous viewing and playback features for connected clients).
Both Daredia and Shepherd are directed to meeting assistance and recording systems. Daredia teaches real-time analysis of audio in a meeting to determine if agenda items have been sufficiently covered. Shepherd improves upon Daredia by teaching the use of Automatic Speech Recognition technology to create a real-time transcript. One of ordinary skill in the art would be motivated to further include the use of Automatic Speech Recognition technology to create a real-time transcript, to efficiently create word-for-word analysis of meetings to better corroborate and correlate meeting data. Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method of real-time analysis of audio in a meeting to determine if agenda items have been sufficiently covered in Daredia to further utilize Automatic Speech Recognition technology to create a real-time transcript as disclosed in Shepherd, since the claimed invention is merely a combination of old elements, and in combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
Daredia teaches utilizing machine learning to determine if a particular agenda item has been skipped when the meeting ends (See Paragraph Numbers [0022]-[0023]) but does not explicitly teach determining if a particular agenda item is in progress and not fully covered in the meeting, which is taught by the following citations from Bentley:
that a first agenda item of the one or more agenda items remains incomplete at an end of the first multi-participant communication (Paragraph Number [0064]-[0065] teaches Tracking Meeting Progress. Tracking the progress of static meetings (i.e., meetings with a fixed agenda) is straightforward. However, tracking the progress of a dynamic meeting is more subtle and harder. Each sub-session may be progressing at a different rate, and the sub-sessions may represent unequal shares of the overall agenda. This may influence an ability to change the order in which topics are discussed. For example, in a conventional static meeting, a participant may be able to know that the meeting is in the middle of item 7, and that therefore the previous agenda items (i.e., items 1-6) are finished and that subsequent items (i.e., items 8-17) are unfinished. In a dynamic meeting, a participant may know that the meeting has finished items 1, 2, and 3, then moved on to 7, finished half of that, and then moved back to 5. In a conventional meeting, a bookmark may represent a single number (i.e., the current agenda topic). In a dynamic meeting, a bookmark may represent a list of topics that have been finished and topics that are unfinished. Paragraph Number [0086] teaches the meeting snapshot features support an ability to peek into an e-conference while the e-conference is in progress, in order to determine status or progress toward agenda items. The meeting history is complete only after the meeting is done).
generating, by the agenda intelligence system, for a second multi-participant communication, a second agenda including the first agenda item; and (Paragraph Number [0064]-[0065] teaches Tracking Meeting Progress. Tracking the progress of static meetings (i.e., meetings with a fixed agenda) is straightforward. However, tracking the progress of a dynamic meeting is more subtle and harder. Each sub-session may be progressing at a different rate, and the sub-sessions may represent unequal shares of the overall agenda. This may influence an ability to change the order in which topics are discussed. For example, in a conventional static meeting, a participant may be able to know that the meeting is in the middle of item 7, and that therefore the previous agenda items (i.e., items 1-6) are finished and that subsequent items (i.e., items 8-17) are unfinished. In a dynamic meeting, a participant may know that the meeting has finished items 1, 2, and 3, then moved on to 7, finished half of that, and then moved back to 5. In a conventional meeting, a bookmark may represent a single number (i.e., the current agenda topic). In a dynamic meeting, a bookmark may represent a list of topics that have been finished and topics that are unfinished. Paragraph Number [0086] teaches the meeting snapshot features support an ability to peek into an e-conference while the e-conference is in progress, in order to determine status or progress toward agenda items. The meeting history is complete only after the meeting is done. (See also Paragraph Numbers [0008], [0033], and [0038])).
Both the combination of Daredia and Shepherd and Bentley are directed to meeting assistance and recording systems. The combination of Daredia and Shepherd teaches real-time analysis of audio in a meeting to determine if agenda items have been sufficiently covered. Bentley improves upon the combination of Daredia and Shepherd by teaching determining if a particular agenda item is in progress and not fully covered in the meeting and identifying additional meeting topics to be covered and placed on the agenda. One of ordinary skill in the art would be motivated to further include determining if a particular agenda item is in progress and not fully covered in the meeting and identifying additional meeting topics to be covered and placed on the agenda, to efficiently track and store information about agenda items that are only partially complete, including length of time covered, so that action items related to those agenda items not fully covered can be properly assigned or the meeting can be redirected towards completion of the incomplete agenda item. Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method of real-time analysis of audio in a meeting to determine if agenda items have been sufficiently covered in the combination of Daredia and Shepherd to further utilize determining if a particular agenda item is in progress and not fully covered in the meeting and identifying additional meeting topics to be covered and placed on the agenda as disclosed in Bentley, since the claimed invention is merely a combination of old elements, and in combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
Daredia teaches utilizing machine learning to determine if a particular agenda item has been skipped when the meeting ends (See Paragraph Numbers [0022]-[0023]) but does not explicitly teach determining if a particular participant is not present in a meeting where a topic is discussed that requires that person’s presence, and subsequently inviting that participant to a future meeting, which is taught by the following citations from Abuelsaad:
identifying, by the agenda intelligence system, a person based on a first set of one or more keywords (Paragraph Number [0025] teaches the meeting management engine (100) may also recognize which of the meeting participants is involved in the meeting. For example, if the meeting is part of a video conference, the meeting management engine (100) can recognize which of the meeting participants is on the line by recognizing phone numbers, through voice recognition, through an auditory device that detects meeting participants calling each other by name, through an introduction given by the meeting participants as they virtually enter the meeting, video image recognition, other mechanisms, or combinations thereof).
wherein a second set of one or more keywords associated with the person link the person to the first agenda item, ... wherein the person did not participate in the first multi-participant communication; (Paragraph Number [0031] teaches if an agenda topic is determined to be unfinished at the conclusion of the meeting, the meeting management engine (100) reschedules the unfinished meeting agenda topics for a later meeting. In the example of FIG. 1, the third agenda topic (112) is classified as unfinished because it was still in progress at the conclusion of the meeting. Thus, the topic is likely to need additional discussion. Also, the fourth agenda topic (114) is also classified as unfinished because this agenda topic was not started or unaddressed during the meeting. The later meeting may be a meeting that was previously scheduled, but will take place after the original meeting. The previously scheduled meeting may include the next meeting in a series of repeating meeting, a related meeting, a follow-up meeting, subsequent meeting that is not the next meeting in a series of repeating meetings, another meeting, or combinations thereof. A later meeting may already have some of the meeting participants desired to be present for unfinished agenda topics scheduled to attend, and the meeting management engine (100) schedules the unfinished agenda topics for that meeting. In such an example, the meeting management engine (100) may cause an update to be sent to the meeting participants already invited to the later meeting and send an invitation to those meeting participants who were not originally invited to the later meeting. (See Paragraph Number [0024] in regard to determining information based on agenda items and metadata associated with participants. 
See Shepherd in regard to utilizing real-time transcription) (Examiner asserts that this section teaches and at least suggests identifying participants who did not participate in a meeting but are related to unfinished agenda topics by inviting those who were not originally invited to a recurring meeting, indicating that they are not present in the first meeting described)).
sending an invite for the second multi-participant communication and the second agenda to the person, wherein the invite includes an action item assigned to the person. (Paragraph Number [0031] teaches if an agenda topic is determined to be unfinished at the conclusion of the meeting, the meeting management engine (100) reschedules the unfinished meeting agenda topics for a later meeting. In such an example, the meeting management engine (100) may cause an update to be sent to the meeting participants already invited to the later meeting and send an invitation to those meeting participants who were not originally invited to the later meeting).
Both the combination of Daredia, Shepherd, and Bentley, and Abuelsaad are directed to meeting assistance and recording systems. The combination of Daredia, Shepherd, and Bentley teaches real-time analysis of audio in a meeting to determine if agenda items have been sufficiently covered. Abuelsaad improves upon the combination of Daredia, Shepherd, and Bentley by teaching determining if a particular participant is not present in a meeting where a topic is discussed that requires that person’s presence, and subsequently inviting that participant to a future meeting. One of ordinary skill in the art would be motivated to further include determining if a particular participant is not present in a meeting where a topic is discussed that requires that person’s presence, and subsequently inviting that participant to a future meeting, to efficiently ensure that topics brought up in meetings are discussed with those who need to be present. Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method of real-time analysis of audio in a meeting to determine if agenda items have been sufficiently covered in the combination of Daredia, Shepherd, and Bentley to further utilize determining if a particular participant is not present in a meeting where a topic is discussed that requires that person’s presence, and subsequently inviting that participant to a future meeting, as disclosed in Abuelsaad, since the claimed invention is merely a combination of old elements, and in combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
As per claim 11, Daredia teaches:
A system, comprising: a data store configured to store data indicative of agenda items; a communication system configured to implement a first multi-participant communication at a first time and a second multi-participant communication at a second time after the first time; and an agenda intelligence system configured to: (Paragraph Number [0142] teaches processor 702 includes hardware for executing instructions, such as those making up a computer program. As an example, and not by way of limitation, to execute instructions, processor 702 may retrieve (or fetch) the instructions from an internal register, an internal cache, memory 704, or storage device 706 and decode and execute them. Paragraph Number [0143] teaches Memory 704 may be used for storing data, metadata, and programs for execution by the processor(s). Memory 704 may be internal or distributed memory. Paragraph Number [0146] teaches communication interface 710 can include hardware, software, or both. In any event, communication interface 710 can provide one or more interfaces for communication (such as, for example, packet-based communication) between computing device 700 and one or more other computing devices or networks. (See also examples of networks in Paragraph Number [0147]).).
The remainder of the claim limitations are substantially similar to the method described in claim 1 and are rejected for the same reasons put forth in regard to claim 1.
As per claim 16, Daredia teaches:
An apparatus, comprising: a memory; and a processor configured to execute instructions stored in the memory to: (Paragraph Number [0142] teaches processor 702 includes hardware for executing instructions, such as those making up a computer program. As an example, and not by way of limitation, to execute instructions, processor 702 may retrieve (or fetch) the instructions from an internal register, an internal cache, memory 704, or storage device 706 and decode and execute them. Paragraph Number [0143] teaches Memory 704 may be used for storing data, metadata, and programs for execution by the processor(s). Memory 704 may be internal or distributed memory. Paragraph Number [0146] teaches communication interface 710 can include hardware, software, or both. In any event, communication interface 710 can provide one or more interfaces for communication (such as, for example, packet-based communication) between computing device 700 and one or more other computing devices or networks. (See also examples of networks in Paragraph Number [0147]).).
The remainder of the claim limitations are substantially similar to the method described in claim 1 and are rejected for the same reasons put forth in regard to claim 1.
As per claims 2, 12, and 17, the combination of Daredia, Shepherd, Bentley, and Abuelsaad teaches each of the limitations of claims 1, 11, and 16 respectively.
In addition, Daredia teaches:
wherein the software platform identifies the person by searching the internal metadata and external metadata associated with users of the software platform based on a third set of one or more keywords identified within a transcription of the first multi-participant communication. (Paragraph Number [0022] teaches after identifying portions of a meeting (and corresponding portions of audio or other data associated with the meeting), the content management system generates meeting insights for a user or multiple users. The generated meeting insights can include content based on the analyzed data associated with the meeting. For example, a meeting insight can include a meeting summary, highlights from the meeting, action items, etc. Paragraph Number [0036] teaches an action item can include a task discussed during a meeting for completion by a user after the meeting is complete. In some embodiments, an action item can be associated with one or more specific users. An action item can also be associated with a date or time, by which completion of the action item is required. Paragraph Number [0041] teaches the content management system 102 can include a machine-learning model 112. The content management system 102 can train the machine-learning model 112 to automatically identify meeting highlights, action items, and other meeting insight information using data from past meetings. Paragraph Number [0056] teaches the content management system 102 can analyze text or metadata of content items generated and synchronized with the content management system 102 to determine text content relative to audio data for the meeting).
As per claims 3, 13, and 18, the combination of Daredia, Shepherd, Bentley, and Abuelsaad teaches each of the limitations of claims 1, 11, and 16 respectively.
In addition, Daredia teaches:
wherein the second agenda includes a second agenda item identified using the software platform during or after the first multi-participant communication (Paragraph Number [0024] teaches the content management system also uses data from past meetings to train a machine-learning model to automatically tag or suggest insights for meetings. In particular, the content management system can use a training dataset including manually-labeled insight data corresponding to past meetings to train the machine-learning model. The content management system can then input audio data for a meeting into the trained machine-learning model, which outputs insights or suggestions by analyzing the audio data and other information associated with the later meeting. Furthermore, the machine-learning model, or another machine-learning model, can output analytics for a meeting (e.g., sentiment scores, attendance). The content management system can also use machine-learning to determine whether to schedule/cancel future meetings based on feedback associated with past meetings indicating an effectiveness of the past meetings).
As per claims 4, 14, and 19, the combination of Daredia, Shepherd, Bentley, and Abuelsaad teaches each of the limitations of claims 1 and 3, 11 and 13, and 16 and 18 respectively.
In addition, Daredia teaches:
wherein the software platform identifies the second agenda item using a real-time transcription of the first multi-participant communication (Paragraph Number [0024] teaches the content management system also uses data from past meetings to train a machine-learning model to automatically tag or suggest insights for meetings. In particular, the content management system can use a training dataset including manually-labeled insight data corresponding to past meetings to train the machine-learning model. The content management system can then input audio data for a meeting into the trained machine-learning model, which outputs insights or suggestions by analyzing the audio data and other information associated with the later meeting. Furthermore, the machine-learning model, or another machine-learning model, can output analytics for a meeting (e.g., sentiment scores, attendance). The content management system can also use machine-learning to determine whether to schedule/cancel future meetings based on feedback associated with past meetings indicating an effectiveness of the past meetings. Paragraph Number [0074] teaches the content management system 102 can provide meeting documentation including presentation materials, documents, or other content items presented or generated during a meeting to the machine-learning model 304. For example, the content management system 102 can use meeting agendas, synchronized notes, video data, or other materials in connection with a meeting to train the machine-learning model 304 to output meeting insights for future meetings. (See Shepherd in regard to utilizing real-time transcription)).
As per claims 5, 15, and 20, the combination of Daredia, Shepherd, Bentley, and Abuelsaad teaches each of the limitations of claims 1 and 3, 11 and 13, and 16 and 18 respectively.
In addition, Daredia teaches:
wherein the software platform identifies the second agenda item based on spoken input from a user of the software platform collected using an agent of the software platform. (Paragraph Number [0024] teaches the content management system also uses data from past meetings to train a machine-learning model to automatically tag or suggest insights for meetings. In particular, the content management system can use a training dataset including manually-labeled insight data corresponding to past meetings to train the machine-learning model. The content management system can then input audio data for a meeting into the trained machine-learning model, which outputs insights or suggestions by analyzing the audio data and other information associated with the later meeting. Furthermore, the machine-learning model, or another machine-learning model, can output analytics for a meeting (e.g., sentiment scores, attendance). The content management system can also use machine-learning to determine whether to schedule/cancel future meetings based on feedback associated with past meetings indicating an effectiveness of the past meetings).
As per claim 6, the combination of Daredia, Shepherd, Bentley, and Abuelsaad teaches each of the limitations of claim 1.
Daredia teaches real-time analysis of audio in a meeting to determine if agenda items have been sufficiently covered, but does not explicitly teach sending an invite and an updated agenda to participants of a rescheduled meeting, which is taught by the following citations from Abuelsaad:
further comprising: sending the invite and the second agenda to one or more participants invited to the second multi-participant communication. (Paragraph Number [0031] teaches if an agenda topic is determined to be unfinished at the conclusion of the meeting, the meeting management engine (100) reschedules the unfinished meeting agenda topics for a later meeting. In such an example, the meeting management engine (100) may cause an update to be sent to the meeting participants already invited to the later meeting and send an invitation to those meeting participants who were not originally invited to the later meeting).
One of ordinary skill in the art would be motivated to combine these references as described in regard to claim 1.
As per claim 7, the combination of Daredia, Shepherd, Bentley, and Abuelsaad teaches each of the limitations of claim 1.
Daredia teaches utilizing machine learning to determine if a particular agenda item has been skipped when the meeting ends (See Paragraph Numbers [0022]-[0023]), but does not explicitly teach determining if a particular agenda item is in progress and not fully covered in the meeting, which is taught by the following citations from Bentley:
wherein the second agenda includes a second agenda item of the one or more agenda items of the first agenda that was not discussed during the first multi-participant communication. (Paragraph Number [0070] teaches changes may also arise as a result of discussions within the sub-session, e.g., it may become apparent the additional topics are needed or some previously-scheduled topics are moot, based upon the outcome of discussions during the sub-session. There may also be topic overflow and/or topic underflow. Overflow is when a topic is running too long (e.g., a 10-minute topic lasts for 20 minutes), and underflow is when a topic is running too short (e.g., a 10-minute topic lasts for 5 minutes) (See also Paragraph Numbers [0064]-[0065])).
One of ordinary skill in the art would be motivated to combine these references as described in regard to claim 1.
As per claim 8, the combination of Daredia, Shepherd, Bentley, and Abuelsaad teaches each of the limitations of claim 1.
Daredia teaches utilizing machine learning to determine if a particular agenda item has been skipped when the meeting ends (See Paragraph Numbers [0022]-[0023]), but does not explicitly teach determining if a particular agenda item is in progress and not fully covered in the meeting, which is taught by the following citations from Bentley:
wherein the second agenda includes a second agenda item of the one or more agenda items of the first agenda, wherein only a portion of the second agenda item was completed during the first multi-participant communication. (Paragraph Numbers [0064]-[0065] teach Tracking Meeting Progress. Tracking the progress of static meetings (i.e., meetings with a fixed agenda) is straightforward. However, tracking the progress of a dynamic meeting is more subtle and harder. Each sub-session may be progressing at a different rate, and the sub-sessions may represent unequal shares of the overall agenda. This may influence an ability to change the order in which topics are discussed. For example, in a conventional static meeting, a participant may be able to know that the meeting is in the middle of item 7, and that therefore the previous agenda items (i.e., items 1-6) are finished and that subsequent items (i.e., items 8-17) are unfinished. In a dynamic meeting, a participant may know that the meeting has finished items 1, 2, and 3, then moved on to 7, finished half of that, and then moved back to 5. In a conventional meeting, a bookmark may represent a single number (i.e., the current agenda topic). In a dynamic meeting, a bookmark may represent a list of topics that have been finished and topics that are unfinished. Paragraph Number [0086] teaches the meeting snapshot features support an ability to peek into an e-conference while the e-conference is in progress, in order to determine status or progress toward agenda items. The meeting history is complete only after the meeting is done. (See also Paragraph Number [0070])).
One of ordinary skill in the art would be motivated to combine these references as described in regard to claim 1.
As per claim 9, the combination of Daredia, Shepherd, Bentley, and Abuelsaad teaches each of the limitations of claim 1.
Daredia teaches utilizing machine learning to determine if a particular agenda item has been skipped when the meeting ends (See Paragraph Numbers [0022]-[0023]), but does not explicitly teach determining if a particular agenda item is in progress and not fully covered in the meeting, which is taught by the following citations from Bentley:
wherein the second agenda includes a second agenda item that was not part of the one or more agenda items of the first agenda, wherein the second agenda item is determined using a transcription of the first multi-participant communication. (Paragraph Number [0070] teaches changes may also arise as a result of discussions within the sub-session, e.g., it may become apparent the additional topics are needed or some previously-scheduled topics are moot, based upon the outcome of discussions during the sub-session. There may also be topic overflow and/or topic underflow. Overflow is when a topic is running too long (e.g., a 10-minute topic lasts for 20 minutes), and underflow is when a topic is running too short (e.g., a 10-minute topic lasts for 5 minutes) (See also Paragraph Numbers [0064]-[0065])).
One of ordinary skill in the art would be motivated to combine these references as described in regard to claim 1.
As per claim 10, the combination of Daredia, Shepherd, Bentley, and Abuelsaad teaches each of the limitations of claim 1.
In addition, Daredia teaches:
wherein sending the second agenda to the person comprises uploading the second agenda to an account associated with the person. (Paragraph Number [0054] teaches once the content management system 102 has meeting data and device input data, the content management system 102 can use the data to generate meeting insights. In particular, the content management system 102 analyzes the meeting data to determine content (e.g., determine what is being said, generated, or presented) for the meeting. For instance, the content management system 102 can utilize natural language processing to generate a transcription for audio data. The content management system 102 can store the transcription in memory and/or with one or more user accounts of one or more users associated with the meeting. Paragraph Number [0077] teaches the content management system 102 can also allow the user to view and edit content items already stored by the content management system 102. For instance, if the user uses another client device to create a content item, the content management system 102 can synchronize the content item to other client devices associated with the user account. Paragraph Number [0110] teaches the content management system 102 also provides suggestions to one or more users (e.g., to the meeting presenter) to send meeting materials (summaries or other content items) to the one or more users or for including users in future meetings. In particular, the content management system 102 can identify users who may be interested in the meeting materials based on attendees/invitees associated with the meeting. The content management system 102 can determine user interest based on identifying that a user during the meeting is discussing a content item with another user during the meeting. The content management system 102 can also identify users who may be interested in meeting materials based on user account info, users in specific departments, subject matter of the meeting materials, participation in previous meetings, or other indicators of a correlation between the meeting subject matter and the users).
Response to Arguments
Applicant’s arguments filed 12/22/2025 have been fully considered but they are not persuasive.
Applicant argues that the claims are eligible under 35 USC 101. (See Applicant’s Remarks, 12/22/2025, pgs. 9-14). Examiner respectfully disagrees. As noted in the 35 USC 101 analysis presented above, the claims recite an abstract concept that is encapsulated by decision making analogous to a method of organizing human activity. Examiner notes that each of the limitations that encapsulates the abstract concepts is recited in the above 35 USC 101 rejection. Additionally, the claims do not recite a practical application of the abstract concepts in that there is no specific use or application of the method steps other than to make conclusory determinations and provide direction for either a person or machine to follow at some future time, or to make calculations that are mathematical operations. The claims do not recite any particular use for these determinations and directions that improves upon the underlying computer technology (in this instance the computer software, processor, and memory). Instead, Examiner asserts that the additional elements in the claim language are used only to implement the abstract concepts utilizing technology. The concepts described in the limitations, when taken both as a whole and individually, are not meaningfully different from those found by the courts to be abstract ideas and are similarly considered to be certain methods of organizing human activity, such as managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions), or to be calculations that are mathematical operations. The steps are then encapsulated into a particular technological environment by executing them on a computer processor and utilizing features such as a computer interface, sending and receiving data over a network, or displaying information via a computerized graphical user interface.
However, sending and receiving of information over a network and execution of algorithms on a computer are utilized only to facilitate the abstract concepts (i.e. selecting data on an interface, publishing/displaying information, etc.). As such, Examiner asserts that the implementation of the abstract concepts recited by the claims utilize computer technology in a way that is considered to be generally linking the use of the judicial exception to a particular technological environment or field of use (See MPEP 2106.05(h)). Accordingly, Examiner does not find that the claims recite a practical application of the abstract concepts recited by the claims.
Applicant argues that the previously cited references do not teach the newly amended portions, including the new limitations recited by the independent claims. (See Applicant’s Remarks, 12/22/2025, pgs. 15-16). Examiner respectfully disagrees. Examiner notes that new citations from the previously cited references have been applied to the newly presented claim limitations as indicated above in the new 35 USC 103 rejection. Examiner has added and emphasized specific portions of the Daredia and Shepherd references to read on the new independent claim language. As such, Applicant’s arguments directed towards the previous rejection are moot. In response to Applicant’s arguments, Examiner directs Applicant to review the new citations and explanations provided in the new 35 USC 103 rejection presented above.
Conclusion
Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MATTHEW H. DIVELBISS whose telephone number is (571) 270-0166. The fax phone number is 571-483-7110. The examiner can normally be reached on M-Th, 7:00 - 5:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jerry O'Connor can be reached on (571) 272-6787.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/M.H.D/Examiner, Art Unit 3624
/Jerry O'Connor/Supervisory Patent Examiner, Group Art Unit 3624