Prosecution Insights
Last updated: April 19, 2026
Application No. 18/391,120

COMPUTER-BASED SYSTEMS AND PLATFORMS FOR PARTICIPANT ENGAGEMENT AND CONTINUITY DURING LIVE EVENTS AND METHODS OF USE THEREOF

Status: Non-Final OA (§103)
Filed: Dec 20, 2023
Examiner: DISTEFANO, GREGORY A
Art Unit: 2174
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Capital One Services LLC
OA Round: 1 (Non-Final)
Grant Probability: 69% (Favorable)
Expected OA Rounds: 1-2
Median Time to Grant: 3y 8m
Grant Probability with Interview: 92%

Examiner Intelligence

Career allow rate: 69% (363 granted / 527 resolved), +13.9% vs Tech Center average (above average)
Interview lift: +23.0% for resolved cases with interview (a strong lift)
Typical timeline: 3y 8m average prosecution; 25 applications currently pending
Career history: 552 total applications across all art units
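The headline figures above are simple ratios of the career counts. As a minimal sketch (Python, using only the counts quoted in this report; the variable names are illustrative, not the vendor's actual model), the 69% allow rate and the implied Tech Center baseline can be reproduced:

```python
# Career counts quoted in this report for this examiner.
granted = 363
resolved = 527

# Career allow rate: granted cases as a share of resolved cases.
allow_rate = round(granted / resolved * 100)  # 68.88% rounds to 69

# The report lists the examiner at +13.9% vs the Tech Center average,
# which implies a TC baseline of 69 - 13.9 = 55.1%.
implied_tc_average = round(allow_rate - 13.9, 1)

print(allow_rate, implied_tc_average)  # -> 69 55.1
```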

Statute-Specific Performance

§101: 12.0% (-28.0% vs TC avg)
§103: 58.1% (+18.1% vs TC avg)
§102: 14.7% (-25.3% vs TC avg)
§112: 8.2% (-31.8% vs TC avg)
TC avg = Tech Center average estimate. Based on career data from 527 resolved cases.
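Each "vs TC avg" delta is an offset from the Tech Center average estimate, so subtracting the delta from the examiner's rate recovers that baseline. A quick sketch (Python, values copied from the table above; this reconstructs the arithmetic only, not how the estimate itself was produced):

```python
# (examiner_rate_pct, delta_vs_tc_pct) per statute, from the table above.
stats = {
    "101": (12.0, -28.0),
    "103": (58.1, +18.1),
    "102": (14.7, -25.3),
    "112": (8.2, -31.8),
}

# rate = tc_avg + delta, so the implied TC baseline is rate - delta.
implied_tc = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}

# Every statute resolves to the same 40.0% baseline estimate,
# suggesting a single flat Tech Center figure behind all four deltas.
print(implied_tc)
```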

Office Action

§103
DETAILED ACTION

This action is in response to the application filed 12/20/2023. Claims 1-20 have been submitted for examination.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Nakajima et al. (US 12,182,502), hereinafter Nakajima, in view of Narayanaswami et al. (US 2008/0276159), hereinafter Nara, in view of Nguyen et al. (US 2021/0174787), hereinafter Nguyen.

As per claim 17, Nakajima teaches the following:

A computer-implemented method comprising: receiving, by the at least one processor, an audio data in real-time from an audio device at the live event. As Nakajima teaches in column 10, lines 7-22, at step 3010, a set of audio data is obtained which is associated with a conversation as the conversation occurs.

Generating, by the at least one processor, in real time, a transcription data of the audio data. As Nakajima teaches in column 10, lines 7-22, at step 3-15, the audio data is transcribed into a set of text data while the conversation occurs.

Receiving, by the at least one processor, via the application executing on the at least one second computing device, at least one user-specific input data related to at least one of the event data or the transcription data from the plurality of second users of the plurality of users. As Nakajima teaches in column 7, lines 25-44, images captured during a conversation may be selected for automatic embedding into the transcript.

Wherein the at least one user-specific input data is provided to the application via the user interface of the plurality of second computing devices; wherein the user interface comprises at least one tool for providing the user-specific input data via the application. As Nakajima teaches in column 7, lines 25-44, images captured during a conversation may be selected for automatic embedding into the transcript, where the set of images for selection is interpreted as being a "tool". Nakajima teaches in column 4, lines 56-60, that the embedding into a transcript may occur in real time.

Aggregating, by the at least one processor, the at least one user-specific input data to form an aggregated user input data; generating, by the at least one processor, a combined software container comprising a schema that allows to embed the user-specific input data into the transcription data. As Nakajima teaches in column 10, lines 23-33, at step 3040, information is embedded in the transcript.

Instructing, by the at least one processor, the application to display the combined software container on the user interface of the plurality of second computing devices. Nakajima teaches in column 10, lines 29-33, at step 3040, the transcript along with embedded objects is presented to a group of actual participants.

However, Nakajima does not explicitly teach of receiving event data, ####.

In a similar field of endeavor, Nara teaches of a method of annotating a transcript of a presentation (see abstract). Nara further teaches the following:

Receiving, by at least one processor, event data from a first computing device of a first user of a plurality of users. As Nara teaches in paragraph [0024], and corresponding Fig. 1, presentation 110 may be broadcast. Further see Fig. 3, 350.

Wherein the event data is associated with a live event. As Nara teaches in paragraph [0026], and corresponding Fig. 1, the presentation may be a live event 110. Further see Fig. 3, times of presentation at each device.

Transmitting, by the at least one processor, the transcription data, in real time, to the application executing on the plurality of second computing devices of a plurality of second users of the plurality of users. As Nara teaches in paragraph [0026], and corresponding Fig. 1, a presentation may be streamed to a viewing user device 120. Further see Fig. 3, 155.

It would have been obvious to one of ordinary skill in the art before the effective filing date of applicant's claimed invention to have modified the presentation of Nakajima with the presentation broadcasting of Nara. One of ordinary skill would have been motivated to have made such modification because, as Nara teaches in paragraph [0026], such broadcasting benefits users in allowing the users to not be in a same physical location.

Furthermore, while Nakajima teaches in column 10, lines 29-33, of presenting the live transcript, Nakajima does not explicitly teach of displaying the event data and transcript data at the second computing devices. In a similar field of endeavor, Nguyen teaches of annotating real-time speech-to-text transcription (see abstract). Nguyen further teaches the following:

Instructing, by the at least one processor, the application to display the event data on a user interface of the plurality of second computing devices. As Nguyen shows in Fig. 1, a "productivity application" pane 126 is displayed corresponding to real-time transcription pane 128. Nguyen teaches in paragraph [0025] different examples of productivity applications, including a presentation application, which is interpreted as encompassing event data.

Instructing, by the at least one processor, the application to display the transcription data, in real time, on the user interface of the plurality of second computing devices. As Nguyen shows in Fig. 1, a "productivity application" pane 126 is displayed corresponding to real-time transcription pane 128.

It would have been obvious to one of ordinary skill in the art before the effective filing date of applicant's claimed invention to have modified the presentation of Nakajima with the presentation and transcription panes of Nguyen. One of ordinary skill would have been motivated to have made such modification because, as Nguyen teaches in paragraph [0001], such panes benefit users viewing a presentation who may be unfamiliar with the speech subject matter, have auditory learning issues, have hearing issues, and/or language issues.

Regarding claim 18, modified Nakajima teaches the method of claim 17 as described above. However, Nakajima does not explicitly teach of the event being a slide presentation. Nguyen further teaches the following: the event data comprises at least a presentation comprising a plurality of slides. Nguyen teaches in paragraph [0025] different examples of productivity applications, including a presentation application, and in paragraph [0037] that the presentation may be that of a slide show. It would have been obvious to one of ordinary skill in the art before the effective filing date of applicant's claimed invention to have modified the presentation of Nakajima with the slide show of Nguyen. One of ordinary skill would have been motivated to have made such modification because slide shows provide the well-known benefit of providing visual aids to presentations.

Regarding claim 19, modified Nakajima teaches the method of claim 17 as described above. Nakajima further teaches the following: the user-specific input data comprises at least one of highlighting of at least one portion of the transcription data, highlighting of at least one portion of the event data, a response to at least one polling question or a comment on at least one of the event data or the transcription data. As Nakajima teaches in column 7, lines 10-18, the user may highlight a specific section of the transcript, which is embedded in said transcript.

Regarding claim 20, modified Nakajima teaches the method of claim 17 as described above. However, Nakajima does not explicitly teach of different interfaces. Nguyen further teaches the following: the user interface comprises at least one of an event data interface, a transcription data interface, a presentation interface, a participant engagement interface and an announcements display. As Nguyen shows in Fig. 1, a productivity pane 126 (presentation interface) is shown corresponding to a transcription pane 128 (transcription data interface). It would have been obvious to one of ordinary skill in the art before the effective filing date of applicant's claimed invention to have modified the presentation of Nakajima with the presentation and transcription panes of Nguyen. One of ordinary skill would have been motivated to have made such modification because, as Nguyen teaches in paragraph [0001], such panes benefit users viewing a presentation who may be unfamiliar with the speech subject matter, have auditory learning issues, have hearing issues, and/or language issues.

Allowable Subject Matter

Claims 1-16 are allowed.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

- Toomey et al. (US 6,119,147): real time transcription, see Fig. 4.
- Strader et al. (US 2019/0122766): real time transcription with machine learning note generation.
- Shepherd et al. (US 2015/0149929): meeting interface with real time transcription, see Fig. 4.
- Przekop et al. (US 2003/0078973): embedding links into a transcript sharable to other users and for creating a summary.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to GREGORY A DISTEFANO, whose telephone number is (571) 270-1644. The examiner can normally be reached Monday-Friday, 9 am-5 pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, William Bashore, can be reached at 571-242-4088. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/GREGORY A. DISTEFANO/
Examiner, Art Unit 2174

/WILLIAM L BASHORE/
Supervisory Patent Examiner, Art Unit 2174

Prosecution Timeline

Dec 20, 2023: Application Filed
Feb 20, 2026: Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591356: ELECTRONIC DEVICE FOR PERFORMING SCREEN CAPTURE AND METHOD FOR CAPTURING SCREEN BY ELECTRONIC DEVICE (2y 5m to grant; granted Mar 31, 2026)
Patent 12585867: METHOD, SYSTEM, AND COMPUTING DEVICE FOR FACILITATING PRIVATE DRAFTING (2y 5m to grant; granted Mar 24, 2026)
Patent 12566913: Artificial Intelligence Agents to Automate Multimodal Interface Task Workflows (2y 5m to grant; granted Mar 03, 2026)
Patent 12541285: ELECTRONIC APPARATUS AND METHOD FOR OBTAINING A CAPTURE IMAGE THEREOF (2y 5m to grant; granted Feb 03, 2026)
Patent 12530086: TRACTABLE BODY-BASED AR SYSTEM INPUT (2y 5m to grant; granted Jan 20, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 69%
With Interview: 92% (+23.0%)
Median Time to Grant: 3y 8m
PTA Risk: Low
Based on 527 resolved cases by this examiner. Grant probability derived from career allow rate.
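The 92% "with interview" figure is consistent with adding the examiner's interview lift to the base grant probability, capped at 100%. A minimal sketch of that arithmetic in Python (an illustration of how the quoted numbers relate, not the vendor's actual projection model):

```python
# Figures quoted in the projections above.
base_grant_pct = 69.0    # career allow rate, used as the base grant probability
interview_lift = 23.0    # observed lift in resolved cases with an interview

# Simple additive adjustment, capped so a probability never exceeds 100%.
with_interview = min(base_grant_pct + interview_lift, 100.0)

print(with_interview)  # -> 92.0
```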
