Prosecution Insights
Last updated: April 19, 2026
Application No. 18/412,255

DYNAMIC PRESENTATION OF GRAPHICAL USER INTERFACE CONTENT WITH GENERATIVE ARTIFICIAL INTELLIGENCE

Non-Final OA: §101, §102, §103
Filed: Jan 12, 2024
Examiner: PARCHER, DANIEL W
Art Unit: 2174
Tech Center: 2100 — Computer Architecture & Software
Assignee: The Toronto-Dominion Bank
OA Round: 1 (Non-Final)
Grant Probability: 61% (Moderate)
Expected OA Rounds: 1-2
To Grant: 3y 1m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 61% (grants 61% of resolved cases; 160 granted / 264 resolved; +5.6% vs TC avg)
Interview Lift: +59.4% (strong; resolved cases with interview vs. without)
Typical Timeline: 3y 1m average prosecution; 35 currently pending
Career History: 299 total applications across all art units
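The headline figures above are simple ratios. As a rough illustration, here is a minimal sketch: the 160-granted / 264-resolved counts come from the page, but the with/without-interview rates are hypothetical placeholders chosen only to reproduce the displayed +59.4% lift, not sourced figures.

```python
# Sketch of how the examiner stats above might be derived.
# 160 granted / 264 resolved comes from the page; the interview
# split below is a hypothetical placeholder, not a sourced figure.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Lift in percentage points from conducting an examiner interview."""
    return rate_with - rate_without

career = allow_rate(160, 264)
print(f"Career allow rate: {career:.1f}%")  # 60.6%, displayed as 61%

# Hypothetical with/without split that would yield the displayed lift:
lift = interview_lift(99.1, 39.7)
print(f"Interview lift: +{lift:.1f} pts")   # +59.4 pts
```

Note the lift is a difference in percentage points between two allow rates, not a multiplier on the career rate.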

Statute-Specific Performance

§101: 4.8% (-35.2% vs TC avg)
§103: 55.6% (+15.6% vs TC avg)
§102: 17.5% (-22.5% vs TC avg)
§112: 16.9% (-23.1% vs TC avg)
Tech Center averages are estimates • Based on career data from 264 resolved cases
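Since each statute row pairs the examiner's rate with a delta versus the Tech Center average, the implied TC baseline can be recovered as rate minus delta. A small sketch, using the figures from the rows above (the variable names are my own):

```python
# Per-statute rates from the panel above, paired with the reported
# delta versus the Tech Center (TC) average. The implied TC average
# is recovered as: examiner_rate - delta.

stats = {  # statute: (examiner rate %, delta vs TC avg in points)
    "101": (4.8, -35.2),
    "103": (55.6, +15.6),
    "102": (17.5, -22.5),
    "112": (16.9, -23.1),
}

for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta  # implied Tech Center average estimate
    print(f"§{statute}: examiner {rate:.1f}% vs TC avg ~{tc_avg:.1f}%")
```

Notably, all four rows imply the same ~40% TC baseline, which suggests the chart's deltas are measured against a single average estimate rather than per-statute baselines.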

Office Action

§101 §102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 17-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter (see the Manual of Patent Examining Procedure section 2106). Claim 17 recites a “computer-readable storage medium”; however, Applicant’s Specification states at ¶00158 that “Further, modules may be stored on a computer-readable medium, which may be, for instance, a hard disk drive, flash device, random access memory (RAM), tape, or any other such medium used to store data.” It is noted that this can be interpreted to include signals, carrier waves, and other transmission media, which are non-statutory. The Examiner suggests amending the claims to include “non-transitory computer-readable storage medium” to direct the claim to only statutory subject matter. Dependent claims incorporate all of the limitations of their respective independent or intervening claim(s) and are rejected on the same basis.

Prior Art

Listed herein below are the prior art references relied upon in this Office Action:

Daredia et al. (US Patent Application Publication 2020/0403817), referred to as Daredia herein.
Mostafa et al. (US Patent Application Publication 2013/0010049), referred to as Mostafa herein.

Examiner’s Note

Strikethrough notation in the pending claims has been added by the Examiner.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C.
102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 1-5, 8-13, and 16-20 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Daredia.

Regarding claim 1, Daredia discloses an apparatus comprising: a memory; and a processor coupled to the memory, the processor configured to (Daredia, ¶0121, ¶0132 – processor executing instructions stored in hardware memory): establish a communication session between a first user device and a second user device from among the plurality of user devices (Daredia, Figs. 1 and 4B with ¶0029-¶0033, ¶0085 – client devices capture and send video data during a meeting), train an artificial intelligence (AI) model to learn user interface preferences of a plurality of user devices during the communication session (Daredia, Fig. 3 with ¶0060, ¶0064-¶0067, Figs. 4A-4E with ¶0108, ¶0112 – training the machine-learning model based on user preferences regarding the meeting), receive a description associated with the communication session (Daredia, ¶0084 – meeting agenda is analyzed by the content management system. ¶0108-¶0111 – machine learning), generate a plurality of windows of content and display the plurality of windows of content on a user interface of the first user device during the communication session based on execution of the AI model on the description associated with the communication session (Daredia, Figs. 4A-4E with ¶0082-¶0091 – moderation messages are provided in a separate window. Content items associated with the meeting are presented and highlighted based on the meeting agenda), and generate a second plurality of windows of content and display the second plurality of windows of content on a user interface of the second user device based on an execution of the AI model on the description associated with the communication session (Daredia, ¶0083, ¶0085 – content can be synchronized across all clients, or customized for each client. ¶0088, ¶0091 – moderation can be presented only to one user. See also Figs. 5A-5B with ¶0074, ¶0112-¶0118 – meeting insights are based on the agenda and are included in user-specific summaries).

Regarding claim 2, Daredia discloses the elements of claim 1, and further discloses wherein the processor is configured to execute the AI model on content of the communication session and visual content from previous meetings between the first user device and the second user device to train the AI model (Daredia, ¶0018, ¶0024, ¶0041 – learning is based on prior meetings. ¶0065 – the training data set includes scenarios involving specific users and client devices. ¶0110, ¶0113, ¶0115 – each user’s participation in previous meetings. ¶0074 – video data used to train the model).

Regarding claim 3, Daredia discloses the elements of claim 1, and further discloses wherein the processor is configured to receive an audio feed and a video feed from a meeting application on one or more of the first user device and the second user device (Daredia, ¶0017, ¶0044, ¶0122 – audio and video feeds from client meeting applications).

Regarding claim 4, Daredia discloses the elements of claim 1 above, and further discloses wherein the processor is configured to receive a meeting summary of the communication session with a summary description of content to be discussed during the communication session between the first user device and the second user device (Daredia, Fig. 4E with ¶0107 – content management system generates a summary which is modified by the meeting presenter. Modifications to the summary are received from the client application before the summary is presented to other users. See also ¶0070 – manually generated summaries. See also, ¶0084 – meeting agenda is analyzed by the content management system).

Regarding claim 5, Daredia discloses the elements of claim 1 above, and further discloses wherein the processor is configured to generate at least one window of missing content for the second plurality of windows of content that is not included in the plurality of windows of content displayed on the user interface of the first user device (Daredia, Figs. 4A-4E with ¶0082-¶0091 – moderation messages are provided in a separate window. Content items associated with the meeting are presented and highlighted based on the meeting agenda. Content items which have not been covered are shown to the presenter. ¶0083, ¶0085 – content can be synchronized across all clients, or customized for each client. ¶0088, ¶0091 – moderation can be presented only to one user. See also Figs. 5A-5B with ¶0074, ¶0112-¶0118 – meeting insights are based on the agenda and are included in user-specific summaries).

Regarding claim 8, Daredia discloses the elements of claim 1 above, and further discloses wherein the processor is configured to display the second plurality of windows of content on the user interface of the second user device simultaneously while displaying the plurality of windows of content on the user interface of the first user device (Daredia, ¶0083, ¶0085 – content can be synchronized across all clients, or customized for each client. ¶0088, ¶0091 – moderation can be presented only to one user. See also Figs. 5A-5B with ¶0074, ¶0112-¶0118 – meeting insights are based on the agenda and are included in user-specific summaries).
Regarding claim 9, Daredia discloses a method comprising: establishing a communication session between a first user device and a second user device from among the plurality of user devices (Daredia, Figs. 1 and 4B with ¶0029-¶0033, ¶0085 – client devices capture and send video data during a meeting. ¶0121, ¶0132 – processor executing instructions stored in hardware memory); training an artificial intelligence (AI) model to learn user interface preferences of a plurality of user devices during the communication session (Daredia, Fig. 3 with ¶0060, ¶0064-¶0067, Figs. 4A-4E with ¶0108 – training the machine-learning model based on user preferences regarding the meeting); receiving a description associated with the communication session (Daredia, ¶0084 – meeting agenda is analyzed by the content management system. ¶0108-¶0111 – machine learning); generating a plurality of windows of content and displaying the plurality of windows of content on a user interface of the first user device during the communication session based on execution of the AI model on the description associated with the communication session (Daredia, Figs. 4A-4E with ¶0082-¶0091 – moderation messages are provided in a separate window. Content items associated with the meeting are presented and highlighted based on the meeting agenda); and generating a second plurality of windows of content and displaying the second plurality of windows of content on a user interface of the second user device based on an execution of the AI model on the description associated with the communication session (Daredia, ¶0083, ¶0085 – content can be synchronized across all clients, or customized for each client. ¶0088, ¶0091 – moderation can be presented only to one user. See also Figs. 5A-5B with ¶0074, ¶0112-¶0118 – meeting insights are based on the agenda and are included in user-specific summaries).

Regarding claim 10, Daredia discloses the elements of claim 9 above, and further discloses wherein the training comprises executing the AI model on content of the communication session and visual content from previous meetings between the first user device and the second user device (Daredia, ¶0018, ¶0024, ¶0041 – learning is based on prior meetings. ¶0065 – the training data set includes scenarios involving specific users and client devices. ¶0110, ¶0113, ¶0115 – each user’s participation in previous meetings. ¶0074 – video data used to train the model).

Regarding claim 11, Daredia discloses the elements of claim 9 above, and further discloses wherein the establishing the communication session comprises receiving an audio feed and a video feed from a meeting application on one or more of the first user device and the second user device (Daredia, ¶0017, ¶0044, ¶0122 – audio and video feeds from client meeting applications).

Regarding claim 12, Daredia discloses the elements of claim 9 above, and further discloses wherein the receiving the description comprises receiving a meeting summary of the communication session with a summary description of content to be discussed during the communication session between the first user device and the second user device (Daredia, Fig. 4E with ¶0107 – content management system generates a summary which is modified by the meeting presenter. Modifications to the summary are received from the client application before the summary is presented to other users. See also ¶0070 – manually generated summaries. See also, ¶0084 – meeting agenda is analyzed by the content management system).

Regarding claim 13, Daredia discloses the elements of claim 9 above, and further discloses wherein the generating the second plurality of windows of content comprises generating at least one window of missing content for the second plurality of windows of content that is not included in the plurality of windows of content displayed on the user interface of the first user device (Daredia, Figs. 4A-4E with ¶0082-¶0091 – moderation messages are provided in a separate window. Content items associated with the meeting are presented and highlighted based on the meeting agenda. Content items which have not been covered are shown to the presenter. ¶0083, ¶0085 – content can be synchronized across all clients, or customized for each client. ¶0088, ¶0091 – moderation can be presented only to one user. See also Figs. 5A-5B with ¶0074, ¶0112-¶0118 – meeting insights are based on the agenda and are included in user-specific summaries).

Regarding claim 16, Daredia discloses the elements of claim 9 above, and further discloses wherein the displaying the second plurality of windows of content on the user interface of the second user device comprises displaying the second plurality of windows of content on the user interface of the second user device simultaneously while displaying the plurality of windows of content on the user interface of the first user device (Daredia, ¶0083, ¶0085 – content can be synchronized across all clients, or customized for each client. ¶0088, ¶0091 – moderation can be presented only to one user. See also Figs. 5A-5B with ¶0074, ¶0112-¶0118 – meeting insights are based on the agenda and are included in user-specific summaries).
Regarding claim 17, Daredia discloses a computer-readable storage medium comprising instructions stored therein which when executed by a processor cause a computer to perform (Daredia, ¶0121 – processor executing instructions stored in hardware memory): establishing a communication session between a first user device and a second user device from among the plurality of user devices (Daredia, Figs. 1 and 4B with ¶0029-¶0033, ¶0085 – client devices capture and send video data during a meeting); training an artificial intelligence (AI) model to learn user interface preferences of a plurality of user devices during the communication session (Daredia, Fig. 3 with ¶0064-¶0067, Figs. 4A-4E with ¶0108 – training the machine-learning model based on user preferences regarding the meeting); receiving a description associated with the communication session (Daredia, ¶0084 – meeting agenda is analyzed by the content management system. ¶0108-¶0111 – machine learning); generating a plurality of windows of content and displaying the plurality of windows of content on a user interface of the first user device during the communication session based on execution of the AI model on the description associated with the communication session (Daredia, Figs. 4A-4E with ¶0082-¶0091 – moderation messages are provided in a separate window. Content items associated with the meeting are presented and highlighted based on the meeting agenda); and generating a second plurality of windows of content and displaying the second plurality of windows of content on a user interface of the second user device based on an execution of the AI model on the description associated with the communication session (Daredia, ¶0083, ¶0085 – content can be synchronized across all clients, or customized for each client. ¶0088, ¶0091 – moderation can be presented only to one user. See also Figs. 5A-5B with ¶0074, ¶0112-¶0118 – meeting insights are based on the agenda and are included in user-specific summaries).

Regarding claim 18, Daredia discloses the elements of claim 17 above, and further discloses wherein the training comprises the executing of the AI model on content of the communication session and visual content from previous meetings between the first user device and the second user device (Daredia, ¶0018, ¶0024, ¶0041 – learning is based on prior meetings. ¶0065 – the training data set includes scenarios involving specific users and client devices. ¶0110, ¶0113, ¶0115 – each user’s participation in previous meetings. ¶0074 – video data used to train the model).

Regarding claim 19, Daredia discloses the elements of claim 17 above, and further discloses wherein the establishing the communication session comprises receiving an audio feed and a video feed from a meeting application on one or more of the first user device and the second user device (Daredia, ¶0017, ¶0044, ¶0122 – audio and video feeds from client meeting applications).

Regarding claim 20, Daredia discloses the elements of claim 17 above, and further discloses wherein the receiving the description comprises receiving a meeting summary of the communication session with a summary description of content to be discussed during the communication session between the first user device and the second user device (Daredia, Fig. 4E with ¶0107 – content management system generates a summary which is modified by the meeting presenter. Modifications to the summary are received from the client application before the summary is presented to other users. See also ¶0070 – manually generated summaries).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 6-7 and 14-15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Daredia in view of Mostafa.

Regarding claim 6, Daredia discloses the elements of claim 1 above, and further discloses wherein the processor is configured to generate a same plurality of windows of content for the user interface of the first user device and the user interface of the second user device and arrange the second plurality of windows of content in

However, Daredia appears not to expressly disclose the limitations in strikethrough above. However, in the same field of endeavor, Mostafa discloses video conference session management (Mostafa, ¶0035-¶0036), including generate a same plurality of windows of content for the user interface of the first user device and the user interface of the second user device and arrange the second plurality of windows of content in different locations on the user interface of the second user device than the plurality of windows of content on the user interface of the first user device based on different preferences of the second user device (Mostafa, Figs. 9-11 with ¶0053 – tailored window layouts during a video conference. ¶0064-¶0083 – windows of content are positioned and sized according to the attribute description of the communication session. ¶0045-¶0046 – default values can specify rank where no input for attributes for personal preferences are received. ¶0065 – windows can be pinned to a particular rank and location).

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the content locations of Daredia to include different window locations on different client devices based on the teachings of Mostafa. The motivation for doing so would have been to offer client devices additional control over layout, improving user experience during the video conference (Mostafa, ¶0034).

Regarding claim 7, Daredia as modified discloses the elements of claim 6 above, and further discloses wherein the different preferences are determined based on the execution of the AI model on content of the communication session and visual content from previous meetings between the first user device and the second user device (Daredia, ¶0018, ¶0024, ¶0041 – learning is based on prior meetings. ¶0065 – the training data set includes scenarios involving specific users and client devices. ¶0108 – user preferences and historical operations. ¶0074 – video data used to train the model. Mostafa, Figs. 9-11 with ¶0053 – tailored window layouts during a video conference).

Regarding claim 14, Daredia discloses the elements of claim 9 above, and further discloses wherein the generating the second plurality of windows of content comprises generating a same plurality of windows of content for the user interface of the first user device and the user interface of the second user device and arranging the second plurality of windows of content in

However, Daredia appears not to expressly disclose the limitations in strikethrough above. However, in the same field of endeavor, Mostafa discloses video conference session management (Mostafa, ¶0035-¶0036), including generate a same plurality of windows of content for the user interface of the first user device and the user interface of the second user device and arrange the second plurality of windows of content in different locations on the user interface of the second user device than the plurality of windows of content on the user interface of the first user device based on different preferences of the second user device (Mostafa, Figs. 9-11 with ¶0053 – tailored window layouts during a video conference. ¶0064-¶0083 – windows of content are positioned and sized according to the attribute description of the communication session. ¶0045-¶0046 – default values can specify rank where no input for attributes for personal preferences are received. ¶0065 – windows can be pinned to a particular rank and location).

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the content locations of Daredia to include different window locations on different client devices based on the teachings of Mostafa. The motivation for doing so would have been to offer client devices additional control over layout, improving user experience during the video conference (Mostafa, ¶0034).

Regarding claim 15, Daredia as modified discloses the elements of claim 14 above, and further discloses wherein the different preferences are determined based on the execution of the AI model on content of the communication session and visual content from previous meetings between the first user device and the second user device (Daredia, ¶0018, ¶0024, ¶0041 – learning is based on prior meetings. ¶0065 – the training data set includes scenarios involving specific users and client devices. ¶0108 – user preferences and historical operations. ¶0074 – video data used to train the model. Mostafa, Figs.
9-11 with ¶0053 – tailored window layouts during a video conference).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. References are at least relevant as indicated in the corresponding summary.

Callegari et al. (US Patent Application Publication 2024/0205037) – AI-based video conference custom meeting summaries.
Gambhir (US Patent Application Publication 2024/0045581) – AI-based window layout.
Viswanathan Iyer et al. (US Patent Application Publication 2022/0236782) – AI-based video conference layout.
Browne et al. (US Patent Number 9,888,211) – custom window arrangements for meetings.
Gupta et al. (US Patent Application Publication 2022/0207489) – AI-based meeting summaries and task management.
Asthana et al. (US Patent Application Publication 2022/0109585) – machine-learning customized meeting summaries.
Kikim-Gil et al. (US Patent Application Publication 2021/0383127) – machine-learning customized meeting summaries.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DANIEL W PARCHER whose telephone number is (303)297-4281. The examiner can normally be reached Monday - Friday, 9:00am - 5:00pm, Mountain Time.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William Bashore, can be reached at (571)272-4088 (Eastern Time). The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DANIEL W PARCHER/
Primary Examiner, Art Unit 2174

Prosecution Timeline

Jan 12, 2024: Application Filed
Feb 19, 2026: Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596464: ELECTRONIC APPARATUS AND METHOD FOR PROVIDING USER INTERFACE THEREOF
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12591347: USER INTERFACES FOR INDICATING STATUS OF A TRACKED ENTITY
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12591607: AUTOMATED CONTENT CREATION AND CONTENT SERVICES FOR COLLABORATION PLATFORMS
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12578977: OMNI-CHANNEL MICRO FRONTEND CONTROL PLANE
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12541378: SYSTEMS AND METHODS FOR GENERATING AND PROVIDING A DYNAMIC USER INTERFACE
Granted Feb 03, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 61%
With Interview: 99% (+59.4%)
Median Time to Grant: 3y 1m
PTA Risk: Low
Based on 264 resolved cases by this examiner. Grant probability derived from career allow rate.
