DETAILED ACTION
This communication is in response to the Amendments and Arguments filed on 12/29/2025.
Claims 1-20 are pending and have been examined. This action has been made FINAL.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments and Amendments
Amendments to the claims by the Applicant have been considered and addressed below.
With respect to the Double Patenting and 35 U.S.C. §§ 102 and 103 rejections, the Applicant provides several arguments, to which the Examiner responds below.
Double Patenting rejection(s)
Arguments:
Double Patenting
Claims 1, 2, 9, 10, 17, and 18 are provisionally rejected on the ground of nonstatutory double patenting as allegedly unpatentable over claims 1-2, 8-9, and 15-16 of copending Application No. 18/412,392 in view of Daredia et al. (US 20200403817 A1).
As the scope of the claims is subject to change during prosecution, Applicant requests that the nonstatutory double patenting rejection be held in abeyance pending a final disposition of the claims.
Examiner’s Response to Arguments:
Applicant’s arguments and request regarding the Double Patenting rejection(s) have been considered and acknowledged.
For more details, please refer to updated Double Patenting rejection(s) for claims, below.
35 USC § 102/103 rejection(s)
Arguments: See pages 8-11 of the Remarks filed on 12/29/2025.
Examiner’s Response to Arguments:
Applicant’s arguments with respect to independent claims 1, 9, and 17 under 35 U.S.C. §§ 102 and 103 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Daredia et al. (US 20200403817 A1) further in view of Kumar et al. (US 20190028591 A1).
For more details, please refer to updated 35 U.S.C. § 103 rejections for claims 1-20, below.
Specification
The lengthy specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant’s cooperation is requested in correcting any errors of which applicant may become aware in the specification.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-2, 9-10, and 17-18 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-2, 8-9, and 15-16 of copending Application No. 18/412,392 in view of Daredia et al. (US 20200403817 A1).
This is a provisional nonstatutory double patenting rejection.
The claims of the copending application are similar in scope to those of the instant application. However, the claims of copending Application No. 18/412,392 do not explicitly teach, but Daredia et al. does teach:
train an artificial intelligence (AI) model based on a plurality of dashboards related to a software application that corresponds to a plurality of topics (see ¶ [0018 and 0024]: “[0018] Furthermore, the content management system can use past meeting data for automatically generating meeting insights for future meetings. Specifically, the content management system can use curated meeting insights (e.g., meeting insights that one or more users have generated or verified for accuracy and completion) to train a machine-learning model. The content management system can then use the machine-learning model to automatically output meeting insights (e.g., highlights, summaries, or action items) for future meetings either while the meetings are ongoing or after the meetings are finished. Thus, the content management system can generate and provide meeting insights to users even in the absence of user inputs to client devices.
[0024] In one or more embodiments, as mentioned, the content management system also uses data from past meetings to train a machine-learning model to automatically tag or suggest insights for meetings. In particular, the content management system can use a training dataset including manually-labeled insight data corresponding to past meetings to train the machine-learning model. The content management system can then input audio data for a meeting into the trained machine-learning model, which outputs insights or suggestions by analyzing the audio data and other information associated with the later meeting…”),
U.S. copending Application No. 18/412,392 and Daredia et al. (US 20200403817 A1) are considered to be analogous to the claimed invention because they are in the same field of endeavor of content processing/generation/management. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified U.S. copending Application No. 18/412,392 to incorporate the teachings of Daredia et al. of train[ing] an artificial intelligence (AI) model based on a plurality of dashboards related to a software application that corresponds to a plurality of topics, which provides the benefit of improving the efficiency and productivity of meetings (¶ [0005] of Daredia et al.).
Please see the claim mappings for the individual claims as well as the independent claim mapping (based on each of the limitations) in the tables below.
Table 1: Claim mapping comparing Instant Application and Copending Application.
Instant Application        Copending Application No. 18/412,392
1, 9, 17                   1, 8, 15
2, 10, 18                  2, 9, 16
3, 11, 19                  No equivalent
4, 12, 20                  No equivalent
5, 13                      No equivalent
6, 14                      No equivalent
7, 15                      No equivalent
8, 16                      No equivalent
Table 2: Comparing independent claim mapping (based on each of the limitations)
Instant Application (Claim 1):
1. An apparatus comprising:
a memory; and
a processor communicatively coupled to the memory, the processor configured to:
train an artificial intelligence (AI) model based on a plurality of dashboards related to a software application that corresponds to a plurality of topics,
ingest a call transcript from a previous call with a user,
generate a new topic from the call transcript;
using the AI model, determine that the new topic is different than a topic of content currently displayed by a dashboard of a device, and
using the AI model, generate a new dashboard on the device, wherein the new dashboard displays new content associated with the new topic.

Copending Application No. 18/412,392 (Claim 1):
1. An apparatus comprising:
a memory; and
a processor communicatively coupled to the memory, the processor configured to:
identify a first topic from a call actively in progress,
display a dashboard on a user device on the call, wherein the dashboard comprises content related to the first topic,
receive discussion data from the call,
determine that a focus of the call has shifted from the first topic to a second topic,
execute an artificial intelligence (AI) model on the second topic;
dynamically generate dashboard content based on the execution, and
display the dynamically generated dashboard content to the dashboard on the user device.
Note: Main differences between the instant application and the copending application are underlined.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-5, 9-13, and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Daredia et al. (US 20200403817 A1) in view of Kumar et al. (US 20190028591 A1).
As to independent claim 1, Daredia et al. teaches:
1. An apparatus (see ¶ [0005]: “One or more embodiments disclosed herein provide benefits and/or solve one or more of the foregoing and other problems in the art with systems, methods, and non-transitory computer readable storage media that provide customized meeting insights based on meeting media (e.g., meeting documents, audio data, and video data) and user interactions with client devices…”) comprising:
a memory (see ¶ [0132]: “Embodiments of the present disclosure may comprise or utilize a special purpose or general-purpose computer including computer hardware, such as, for example, one or more processors and system memory, as discussed in greater detail below…”); and
a processor communicatively coupled to the memory (see ¶ [0132] citation as in limitation above and further in the same paragraph: “…In general, a processor (e.g., a microprocessor) receives instructions, from a non-transitory computer-readable medium, (e.g., a memory, etc.), and executes those instructions, thereby performing one or more processes, including one or more of the processes described herein.”), the processor configured to:
train an artificial intelligence (AI) model based on a plurality of dashboards related to a software application that corresponds to a plurality of topics (see ¶ [0018 and 0024]: “[0018] Furthermore, the content management system can use past meeting data for automatically generating meeting insights for future meetings. Specifically, the content management system can use curated meeting insights (e.g., meeting insights that one or more users have generated or verified for accuracy and completion) to train a machine-learning model. The content management system can then use the machine-learning model to automatically output meeting insights (e.g., highlights, summaries, or action items) for future meetings either while the meetings are ongoing or after the meetings are finished. Thus, the content management system can generate and provide meeting insights to users even in the absence of user inputs to client devices.
[0024] In one or more embodiments, as mentioned, the content management system also uses data from past meetings to train a machine-learning model to automatically tag or suggest insights for meetings. In particular, the content management system can use a training dataset including manually-labeled insight data corresponding to past meetings to train the machine-learning model. The content management system can then input audio data for a meeting into the trained machine-learning model, which outputs insights or suggestions by analyzing the audio data and other information associated with the later meeting…”),
ingest a call transcript from a previous call with a user (see ¶ [0022 and 0054-0055]: “[0022] In some embodiments, after identifying portions of a meeting (and corresponding portions of audio or other data associated with the meeting), the content management system generates meeting insights for a user or multiple users… Additionally, the content management system can generate a document, notification, or other electronic message that includes a description of the relevant portion, a transcription of the relevant portion, an audio clip of the relevant portion, or other content related to the relevant portion…
[0054] …For instance, the content management system 102 can utilize natural language processing to generate a transcription for audio data…
[0055] The content management system 102 can then analyze the transcription to identify information associated with the audio content…”),
generate a new topic from the call transcript (see ¶ [0022 and 0054-0055] citations as in limitation above, more specifically (and further) ¶ [0055]: “The content management system 102 can then analyze the transcription to identify information associated with the audio content. For example, the content management system 102 can identify one or more users (e.g., using voice recognition technology) during the meeting and determine what each user says during the meeting. The content management system 102 can also identify a context of the audio data based on what the one or more users discuss, including one or more subject matters being discussed during one or more portions of the meeting. The content management system 102 can also determine times of different items being discussed during the meeting.”
and further ¶ [0037]: “As further mentioned above, the content management system 102 generates meeting insights based on analyzed data associated with a meeting. As used herein, the term “meeting insights” refers to content generated by the content management system 102 based on an analysis of data related to a meeting. Meeting insights can include, for example, a meeting summary, highlights from the meeting (e.g., portions of the meeting marked by users as important), action items resulting from or discussed in the meeting, subsequent meetings scheduled during the meeting, a list of attendees of the meeting, metrics or analytics related to the meeting, or other information from the meeting that is of interest to one or more users. As used herein, the term “summary” refers to a text summary of recognized speech in audio data or a text summary of materials associated with a meeting. A summary can provide an overall description or listing of items and topics discussed during the meeting.”
and ¶ [0089]: “As shown, the transcription region 412c continues to follow along with the audio data received from one or more client devices (e.g., the client device 400). Furthermore, the content management system 102 analyzes the materials associated with the meeting (e.g., the meeting agenda 404 in the document region 412a). As the content management system 102 transcribes the audio data and analyzes the materials associated with the meeting, the content management system 102 can determine when the meeting presenter or another user covers a topic from the meeting agenda 404 and moves to a subsequent topic. As mentioned above, the content management system 102 can highlight a currently or most recently discussed topic within the document region 412a using a highlight box 416 to indicate that the current topic has changed, as in FIGS. 4B and 4C.”);
determine that the new topic is different than a topic (see Fig. 4A-4C (4A: Sales Team Meeting Agenda, 4B: highlight box (414) and 4C: highlight box (416) and popup notification to remind presenter or speaker to cover agenda item (i.e., topic)) and ¶ [0018, 0022, 0024, 0054-0055, 0037, and 0089] citations as in limitations above. More specifically, ¶ [0089]: “…As mentioned above, the content management system 102 can highlight a currently or most recently discussed topic within the document region 412a using a highlight box 416 to indicate that the current topic has changed, as in FIGS. 4B and 4C.”),
execute the AI model based on the new topic (see Fig. 4A-4C (4A: Sales Team Meeting Agenda, 4B: highlight box (414) and 4C: highlight box (416) and popup notification to remind presenter or speaker to cover agenda item (i.e., topic)) and ¶ [0018, 0022, 0024, 0054-0055, 0037, and 0089] citations as in limitations above. More specifically, ¶ [0018]: “Furthermore, the content management system can use past meeting data for automatically generating meeting insights for future meetings. Specifically, the content management system can use curated meeting insights (e.g., meeting insights that one or more users have generated or verified for accuracy and completion) to train a machine-learning model. The content management system can then use the machine-learning model to automatically output meeting insights (e.g., highlights, summaries, or action items) for future meetings either while the meetings are ongoing or after the meetings are finished. Thus, the content management system can generate and provide meeting insights to users even in the absence of user inputs to client devices.”
and further ¶ [0068]: “Furthermore, the machine-learning model 304 can output action items 308 corresponding to one or more users. In particular, the machine-learning model 304 generates an action item to indicate that at least a portion of the audio data includes an indication of an action that one or more users should perform in accordance with a subject matter discussed within the meeting. For example, the machine-learning model 304 can identify phrases, words, or context in the audio data that indicates an operation to be performed and then generate a reminder, notification, or other content item that indicates to a user the operation to be performed.”),
generate a dashboard on the device, (see Fig. 4A-4C (4A: Sales Team Meeting Agenda, 4B: highlight box (414) and 4C: highlight box (416) and popup notification to remind presenter or speaker to cover agenda item (i.e., topic)) and ¶ [0018, 0022, 0024, 0054-0055, 0037, 0068, and 0089] citations as in limitations above. More specifically, Fig. 4C and ¶ [0068]: “…For example, the machine-learning model 304 can identify phrases, words, or context in the audio data that indicates an operation to be performed and then generate a reminder, notification, or other content item that indicates to a user the operation to be performed.” and further ¶ [0061]: “The content management system 102 can alternatively generate a notification for display within a client application or on an operating system of a client device of an identified user. For example, as described in more detail below with respect to FIG. 4C, the content management system 102 can generate notifications for providing meeting moderation to a meeting presenter while the meeting is ongoing…”).
However, Daredia et al. does not explicitly teach, but Kumar et al. does teach:
using the AI model, determine that the new topic is different than a topic of content currently displayed by a dashboard of a device (see Fig. 7B and ¶ [0115-0117]: “[0115] At step 712, if the machine learning controller 220 detects the inactivity period does not meets the inactivity threshold in the AI-assisted user interface, then the conference application device 200 does not make any changes in the AI-assisted conference, at step 714. Alternatively, if the machine learning controller 220 detects the inactivity period meets the inactivity threshold in the AI-assisted user interface, then the method allows the conference context detector 215 to determine whether the topic of discussion is available in the AI-assisted conference, at step 716. [0116] At step 718, if the conference context detector 215 detects that the topic of discussion is not available in the AI-assisted conference, then the method allows the conference context detector 215 to automatically probe any one participant in the AI-assisted user interface to set the topic of discussion in the AI-assisted conference, at step 720. [0117] At step 722, the method includes determining topic content related to topic of discussion provided by any one of the participant in the AI-assisted user interface. In an embodiment, the method allows the recommendation engine 217 to determine the topic content related to topic of discussion provided by any one of the participant in the AI-assisted user interface.”), and
using the AI model, generate a new dashboard on the device, wherein the new dashboard displays new content associated with the new topic (see ¶ [0115 and 0117] citations as in limitation above and further ¶ [0115-0119]: “[0118] Alternatively, if the conference context detector 215 detects that the topic of discussion is available in the AI-assisted conference, then the method directly allows the recommendation engine 217 to determine the topic content related to topic of discussion which is available in the AI-assisted conference, at step 722. [0119] At step 724, the method includes displaying the topic content in the AI-assisted user interface. In an embodiment, the method allows the recommendation engine 217 to cause to display the topic content in the AI-assisted user interface, via the display 290.”)
Daredia et al. and Kumar et al. are considered to be analogous to the claimed invention because they are in the same field of endeavor of content processing/management. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Daredia et al. to incorporate the teachings of Kumar et al. of using the AI model, determine that the new topic is different than a topic of content currently displayed by a dashboard of a device, and using the AI model, generate a new dashboard on the device, wherein the new dashboard displays new content associated with the new topic, which provides the benefit of facilitating communication collaboratively (¶ [0093] of Kumar et al.).
As to independent claim 9, Daredia et al. further teaches:
9. A method (see ¶ [0005]: “One or more embodiments disclosed herein provide benefits and/or solve one or more of the foregoing and other problems in the art with systems, methods, and non-transitory computer readable storage media that provide customized meeting insights based on meeting media (e.g., meeting documents, audio data, and video data) and user interactions with client devices…”) comprising:
[the limitations as taught by Daredia et al. and Kumar et al. in claim 1, above.]
As to independent claim 17, Daredia et al. further teaches:
17. A computer-readable storage medium comprising instructions stored therein (see ¶ [0005]: “One or more embodiments disclosed herein provide benefits and/or solve one or more of the foregoing and other problems in the art with systems, methods, and non-transitory computer readable storage media that provide customized meeting insights based on meeting media (e.g., meeting documents, audio data, and video data) and user interactions with client devices…”) which when executed by a processor cause the processor to perform:
[the limitations as taught by Daredia et al. and Kumar et al. in claim 1, above.]
Regarding claims 2, 10, and 18, Daredia et al. in combination with Kumar et al. teach the limitations as in claims 1, 9, and 17, above.
Daredia et al. further teaches:
2/10/18. The apparatus/method/computer-readable storage medium of claims 1/9/17, wherein the processor is configured to
record audio from the call and convert the audio into the call transcript based on execution of a speech-to-text converter on the audio (see ¶ [0054, 0086, and 0100]: “[0054] …For instance, the content management system 102 can utilize natural language processing to generate a transcription for audio data.
[0086] Furthermore, the client application 410 can include a transcript region 412c that displays a transcript of audio from the meeting. In one or more embodiments, the content management system 102 generates a transcript in real-time while the meeting is ongoing. Specifically, the content management system 102 can use language processing to analyze audio data (e.g., streaming audio data) that the client device 400 or another client device provides to the content management system 102. The transcript region 412c provides a text transcription of the audio data for the content management system 102 to analyze.
[0100] Additionally, the highlights 434 can include important/notable points discussed during the meeting. Specifically, the content management system 102 can analyze the audio data to determine content of the audio data (e.g., text transcription)…”).
Regarding claims 3, 11, and 19, Daredia et al. in combination with Kumar et al. teach the limitations as in claims 1, 9, and 17, above.
Daredia et al. further teaches:
3/11/19. The apparatus/method/computer-readable storage medium of claims 1/9/17, wherein the processor is configured to
identify the plurality of topics from the call transcript and generate a ranking of priorities of the plurality of topics based on the execution of the AI model being executed on the plurality of topics (see Fig. 4A-4C citations as in claims 1,9, and 17 above and further ¶ [0090 and 0106]: “[0090] In one or more embodiments, the content management system 102 tracks the discussed content to determine whether all of the materials are discussed. For example, the content management system 102 can track the topics discussed during the meeting and compare the discussed topics to the meeting materials (e.g., the meeting agenda 404). The content management system 102 can also note the order of discussed topics and determine whether any of the topics are discussed out of order, etc. For instance, if the content management system 102 determines that the meeting presenter has skipped a topic listed in the meeting agenda 404 and moved on to another topic, the content management system 102 can determine that the meeting presenter may have missed the topic.
[0106] When determining relevance of highlights or action items for one or more users, the content management system 102 can assign a confidence to each potential highlight/action item in the meeting data. For example, the content management system 102 can use information about past highlights/action items, including whether users reviewed or completed the highlights/action items, to determine a confidence level for highlights/action items in the present meeting data. If the confidence for a given item meets a threshold, the content management system 102 can include the item in the meeting summary 430. Additionally, the content management system 102 can use confidence levels and past execution/review of items to prioritize similar items within the meeting summary 430. This can be particularly helpful in regularly occurring meetings dealing with review cycles, product launches, or other meetings that regularly include the same or similar action items.”).
Regarding claims 4, 12, and 20, Daredia et al. in combination with Kumar et al. teach the limitations as in claims 3, 11, and 19, above.
Daredia et al. further teaches:
4/12/20. The apparatus/method/computer-readable storage medium of claims 3/11/19, wherein the processor is configured to
arrange the content to be displayed on the dashboard based on the ranking (see Fig. 4A-4C citations as in claims 1, 9, and 17 above and further ¶ [0090 and 0106] citations as in claims 3, 11, and 19, above. More specifically, ¶ [0090]: “…The content management system 102 can also note the order of discussed topics and determine whether any of the topics are discussed out of order, etc. For instance, if the content management system 102 determines that the meeting presenter has skipped a topic listed in the meeting agenda 404 and moved on to another topic, the content management system 102 can determine that the meeting presenter may have missed the topic.
[0106] …If the confidence for a given item meets a threshold, the content management system 102 can include the item in the meeting summary 430. Additionally, the content management system 102 can use confidence levels and past execution/review of items to prioritize similar items within the meeting summary 430…”).
Regarding claims 5 and 13, Daredia et al. in combination with Kumar et al. teach the limitations as in claims 3 and 11, above.
Daredia et al. further teaches:
5/13. The apparatus/method of claims 3/11, wherein the processor is configured to
identify a main topic of interest and a sub-topic of interest from the plurality of topics, and arrange content from the main topic of interest that results in a greater focus on the dashboard than content from the sub-topic of interest (see Fig. 4A-4C citations as in claims 1, 9, and 17 above and further ¶ [0090 and 0106] citations as in claims 3-4, 11-12, and 19-20, above. More specifically, ¶ [0090]: “…The content management system 102 can also note the order of discussed topics and determine whether any of the topics are discussed out of order, etc. For instance, if the content management system 102 determines that the meeting presenter has skipped a topic listed in the meeting agenda 404 and moved on to another topic, the content management system 102 can determine that the meeting presenter may have missed the topic.
[0106] …If the confidence for a given item meets a threshold, the content management system 102 can include the item in the meeting summary 430. Additionally, the content management system 102 can use confidence levels and past execution/review of items to prioritize similar items within the meeting summary 430…”).
Claims 6-7 and 14-15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Daredia et al. (US 20200403817 A1) further in view of Kumar et al. (US 20190028591 A1) as applied to claims 1 and 9 above, and further in view of Liu (US 20200320086 A1).
Regarding claims 6 and 14, Daredia et al. in combination with Kumar et al. teach the limitations as in claims 1 and 9, above.
However, Daredia et al. does not explicitly teach, but Liu does teach:
6/14. The apparatus/method of claims 1/9, wherein the processor is configured to
ingest a browsing history from a user device of the user and identify another topic of interest based on keywords included in the browsing history (see ¶ [0032, 0068, and 0093]: “[0032] In some embodiments, a machine-learning technique can be used to facilitate the content-recommendation process. For each domain (e.g., theme or topic), the system can train a domain-knowledge-based model that can model the hierarchical relationships among words meaningful in the domain and construct domain-knowledge graphs…
[0068] Subsequent to determining the feature tags for content pieces in the content library, the content-recommendation system can retrieve, from the content library, a plurality of content pieces associated with a user (operation 512). More specifically, the received content pieces can be determined based on their feature tags and an attribute tag associated with the user. Specifically, the content-recommendation system may generate an attribute tag for each user based on various historic information associated with the user, including but not limited to: application-usage history, content-browsing history, content-selection history, etc. In one embodiment, the attribute tag of a user may be determined based on the feature tags of one or more content pieces in the user's browsing history. If a content piece in the user's browsing history does not include a feature tag, the system can determine a feature tag for the content piece using a process similar to operations 504-510. If the user has a wide content-browsing range, to prevent the user attribute tag from being too long, the system can only add a smaller portion (e.g., the top-ranked feature words) of the feature tag of a previously viewed content piece to the user's attribute tag.
[0093] In general, the disclosed embodiments provide a solution to the technical problems of efficiently recommending content to a user when the user is interacting with an application running on a computing device. More specifically, the disclosed embodiments can provide a content-recommendation system that includes a content library storing content to-be-recommended to users. The content-recommendation system can train domain-knowledge-based models (e.g., using various machine-learning techniques) to obtain domain knowledge for various domains. The domain knowledge of a particular domain can include hierarchical domain knowledge, domain-knowledge graphs, or both. The hierarchical domain knowledge can specify a number of categories and a number of feature words for each category, and the domain-knowledge graphs can specify a number of feature combination words. By combining the domain-knowledge-based models and keywords extracted from content pieces using standard NLP techniques, the content-recommendation system can generate domain-knowledge-based feature tags for the content pieces. Compared with conventional content-recommendation approaches where content pieces are retrieved simply based on keywords, this domain-knowledge-based approach can recommend content in a more precise manner, because the content feature tags are generated based on domain knowledge and can represent the meaning of the content pieces more accurately. Moreover, when matching content pieces to users, the content-recommendation system takes into consideration both the browsing history of the user as well as the current display environment (e.g., topic or theme of the current user interface, etc.), thus enhancing the user experience.”).
Daredia et al., Kumar et al., and Liu are considered to be analogous to the claimed invention because they are in the same field of endeavor of content processing/management. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Daredia et al. in combination with Kumar et al. to incorporate the teachings of Liu of ingest[ing] a browsing history from a user device of the user and identify[ing] another topic of interest based on keywords included in the browsing history, which provides the benefit of enhancing the user experience ([0093] of Liu).
Regarding claims 7 and 15, Daredia et al. in combination with Kumar et al. and Liu teach the limitations as in claims 6 and 14, above.
Liu further teaches:
7/15. The apparatus/method of claims 6/14, wherein the processor is configured to
generate the new dashboard with content directed to the another topic of interest based on execution of the AI model on the another topic of interest (see ¶ [0032, 0068, and 0093] citations as in claims 6 and 14, above. More specifically: ¶ [0068]: “…Specifically, the content-recommendation system may generate an attribute tag for each user based on various historic information associated with the user, including but not limited to: application-usage history, content-browsing history, content-selection history, etc. In one embodiment, the attribute tag of a user may be determined based on the feature tags of one or more content pieces in the user's browsing history...”),
wherein an AI agent performs an action based on the new dashboard (see ¶ [0032, 0068, and 0093] citations as in claims 6 and 14, above. More specifically: ¶ [0068]: content recommended to the user).
Daredia et al., Kumar et al., and Liu are considered to be analogous to the claimed invention because they are in the same field of endeavor of content processing/management. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Daredia et al. in combination with Kumar et al. to incorporate the teachings of Liu of generat[ing] the new dashboard with content directed to the another topic of interest based on execution of the AI model on the another topic of interest, which provides the benefit of enhancing the user experience ([0093] of Liu).
Claims 8 and 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Daredia et al. (US 20200403817 A1) further in view of Kumar et al. (US 20190028591 A1) as applied to claims 1 and 9 above, and further in view of Sahasi et al. (US 20250133275 A1).
Regarding claims 8 and 16, Daredia et al. in combination with Kumar et al. teach the limitations as in claims 1 and 9, above.
However, Daredia et al. does not explicitly teach, but Sahasi et al. does teach:
8/16. The apparatus/method of claims 1/9, wherein the processor is configured to
generate the dashboard based on execution of the AI model on a dashboard of a different user (see Fig. 8 and ¶ [0092 and 0200]: “[0092] Selection of the selectable UI element 816 can cause the source device that presents the UI 810 to present another UI (not depicted) to search for a media asset to be augmented with directed content. To that end, in some embodiments, the portal subsystem 900 can include a search unit 916. In this disclosure, directed content refers to digital media configured for a particular audience, or a particular outlet channel (such as a website, a streaming service, or a mobile application), or both. Directed content, as further described herein, can include, for example, digital media of various types, such as an advertisement(s); a curated e-mail(s), a survey(s) or other type(s) of questionnaire(s); a motion picture(s), an animation(s), or other types of video segments; a podcast(s); an audio segment(s) of defined durations (e.g., a portion of a speech or tutorial); media combination thereof, and/or the like. In an example, the directed content can be generated, via a machine learning model, based on data extracted from one or more media assets, and/or based on engagement data associated with the one or more media assets. For example, directed content may be generated based on a plurality of similar user profiles (e.g. similar UICs). The plurality of similar user profiles may be determined based on a first interest cloud associated with a first user profile, wherein the plurality of similar user profiles are associated with corresponding interest clouds that are similar to the first interest cloud.
[0200] …As an example, the computing device may generate the directed content based on data indicative of a plurality of similar user profiles. The computing device may determine the plurality of similar user profiles based on a plurality of attributes associated with one or more users. The computing device may generate the directed content, via the machine learning model, based on the plurality of similar profiles, the compilation of segments, and the data associated with the media asset. ”).
Daredia et al., Kumar et al., and Sahasi et al. are considered to be analogous to the claimed invention because they are in the same field of endeavor of content processing/management. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Daredia et al. in combination with Kumar et al. to incorporate the teachings of Sahasi et al. of generat[ing] the dashboard based on execution of the AI model on a dashboard of a different user, which provides the benefit of enhancing user interactions ([0121] of Sahasi et al.).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Keisha Y Castillo-Torres whose telephone number is (571)272-3975. The examiner can normally be reached Monday - Friday, 9:00 am - 4:00 pm (EST).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Pierre-Louis Desir can be reached at (571)272-7799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
Keisha Y. Castillo-Torres
Examiner
Art Unit 2659
/Keisha Y. Castillo-Torres/Examiner, Art Unit 2659
/PIERRE LOUIS DESIR/Supervisory Patent Examiner, Art Unit 2659