Prosecution Insights
Last updated: April 19, 2026
Application No. 18/224,783

CONFERENCE SYSTEM FOR USE OF MULTIPLE DEVICES

Non-Final OA (§102)
Filed: Jul 21, 2023
Examiner: TIEU, BINH KIEN
Art Unit: 2694
Tech Center: 2600 — Communications
Assignee: Capital One Services LLC
OA Round: 3 (Non-Final)
Grant Probability: 87% (Favorable)
OA Rounds: 3-4
To Grant: 2y 5m
With Interview: 97%

Examiner Intelligence

Career allow rate: 87% — above average (809 granted / 931 resolved; +24.9% vs TC avg)
Interview lift: +9.8% across resolved cases with an interview (moderate, ~+10%)
Typical timeline: 2y 5m average prosecution; 25 currently pending
Career history: 956 total applications across all art units

Statute-Specific Performance

§101: 6.1% (-33.9% vs TC avg)
§103: 43.9% (+3.9% vs TC avg)
§102: 26.5% (-13.5% vs TC avg)
§112: 4.1% (-35.9% vs TC avg)
Tech Center averages are estimates • Based on career data from 931 resolved cases
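The headline figures above follow directly from the raw counts shown on the page. A minimal sanity-check sketch (variable names are illustrative; the implied TC 2600 baseline is an inference from the stated "+24.9% vs TC avg" delta, not a figure reported by the page):

```python
# Reproduce the headline examiner statistics from the raw counts shown above.
# Only the counts (809 granted out of 931 resolved) come from the dashboard.

granted = 809
resolved = 931

# Career allow rate: share of resolved applications that granted.
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.0%}")  # -> 87%

# The page reports this examiner at +24.9% over the Tech Center average,
# which implies a TC 2600 baseline of roughly:
implied_tc_avg = allow_rate - 0.249
print(f"Implied TC 2600 average: {implied_tc_avg:.1%}")  # -> 62.0%
```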

Office Action

§102
DETAILED ACTION

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 02/04/2026 has been entered.

Response to Amendment

The amendment, filed 02/04/2026, was received and entered. As a result, independent claims 1, 10 and 19 were amended with the following features: (a) "parsing, by the conference system, a voice signal of the audio/video input," (supported by, e.g., paragraphs [0023] and [0040] of the as-filed Specification ("Specification")); (b) "detecting, by the conference system, an identifier in the voice signal based on the parsing," (supported by, e.g., Specification, paragraphs [0031] and [0040]); and (c) "determining, by the conference system, the first user is a designated user from the plurality of users based on the detected identifier in the voice signal being associated with the first user." (supported by, e.g., Specification, paragraphs [0031] and [0040]). No new claims were added or deleted. Claims 1-20 are pending in this application at this time. Based on the Applicant's remarks and the features amended into the claims, the Examiner performed updated searches and found a new reference. The new grounds of rejection are as follows.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Tokuchi (US 2022/0224735).

Regarding claim 1, Tokuchi teaches a computer-implemented method (i.e., a method of an online conference) comprising: associating, by a conference system (i.e., server 10, as shown in figure 2), a first device of a plurality of devices participating in a conference session with a first user of a plurality of users participating in the conference session (i.e., users A, B, C, D, E, F, G are in different places, as shown in figures 4 and 5 (para. [0057]-[0058] and [0108]-[0109]); account information of a user who uses an online service is information for logging in and using the online service, wherein the user associated with the account information is permitted to participate in and use the online service (para. [0034]); a list of information, e.g., account information, identifies the user who is logged into the online conference (para. [0114]); and a user who speaks next, or user D (as a first user), is designated by a user (i.e., a second user) who speaks before user D, an authorized person, etc. (para. [0120])); gathering, by the conference system, an audio/video input from a second device of the plurality of devices (i.e., gathering, by the server 10, a sound such as a voice, etc., from the user who speaks before the user D; para. [0121]); parsing, by the conference system, a voice signal of the audio/video input (i.e., a voice signal, such as a name, nickname, etc., received via a microphone and transmitted to the server 10 to identify the user D; para. [0123]); detecting, by the conference system, an identifier in the voice signal based on the parsing (i.e., the server 10 identifies the user D; para. [0124]); determining, by the conference system, the first user is a designated user from the plurality of users based on the detected identifier in the voice signal being associated with the first user (para. [0123] and [0128]); and modifying, by the conference system, a setting of the conference session based on the designated user (i.e., changing the display form of the display region, expanding the display region up to a size corresponding to a point that the designated user is speaking, or enlarging the image or moving image, etc.; para. [0115]-[0117]).

Regarding claim 2, Tokuchi further teaches the limitations of the claim, such as associating multiple of the plurality of devices participating in the conference session with respective users participating in the conference session (i.e., the users A to G are logged into the online conference and were assigned different regions, as shown in figures 7-8; para. [0102]-[0103] and [0110]-[0112]); and the modifying comprises modifying a display method of the video input of the device associated with the designated user (i.e., if the user D is designated as the next speaker, the display region D associated with the user D blinks or is decorated (para. [0118]); expanding the display region up to a size corresponding to a point that the designated user is speaking; or enlarging the image or moving image, etc., as shown in figures 9-15; para. [0115]-[0117]).
Regarding claim 3, Tokuchi further teaches the limitations of the claim, such as identifying a primary user from among the respective users (i.e., identifying the user who is currently speaking as a primary user; para. [0115]); detecting a change of the primary user to the designated user based on the detected identifier in the voice signal being associated with the first user (i.e., detecting a sound or voice signal (i.e., name, nickname, etc.) spoken by the user who speaks before the user who speaks next; para. [0121] and [0123]); and wherein modifying the display method of the video input of the device associated with the designated user comprises setting the video input of the device associated with the designated user as a primary video of the conference session (i.e., if the user D is designated as the next speaker, the display region D associated with the user D blinks or is decorated (para. [0118]); expanding the display region up to a size corresponding to a point that the designated user is speaking; or enlarging the image or moving image, etc., as shown in figures 9-15; para. [0115]-[0117]).

Regarding claim 4, Tokuchi further teaches that multiple user regions are displayed in a variety of grid arrangements, as shown in figures 8-15. Tokuchi further teaches the features of modifying the display method, as discussed above (para. [0115]-[0117]).

Regarding claim 5, Tokuchi further teaches the limitations of the claim, such as modification of the setting by pinning a designated user for a predetermined period of time, in paragraphs [0135]-[0139].

Regarding claim 6, Tokuchi further teaches the limitations of the claim, such as identifying a second user as a designating user associated with the second device that output the voice signal audio/video cue that was a source for detecting the designated user (i.e., the user who previously speaks may be the user who speaks immediately before the user who speaks next; para. [0120]); determining a direction from the designating user to the designated user based on detecting the identifier in the voice signal by detecting the audio/video cue from the audio/video input (i.e., determining a direction from the designating user by a gesture such as pointing, or by a sight line; para. [0121] and [0124]); and wherein modifying the display method of the user region of the designated user in the grid comprises placing the user region of the designated user at a display location relative to the user region of the designating user in a same direction as the determined direction (i.e., if the user D is designated as the next speaker, the display region D associated with the user D blinks or is decorated (para. [0118]); expanding the display region up to a size corresponding to a point that the designated user is speaking; or enlarging the image or moving image, etc., as shown in figures 9-15; para. [0115]-[0117]).

Regarding claim 7, Tokuchi further teaches the limitations of the claim, such as assigning each of multiple participants, associated with their devices, to the corresponding regions displayed in figures 7-15, as discussed above. Tokuchi further teaches that the image or moving image of the designated user is displayed in the expanded or enlarged region to emphasize the designated user among the plurality of the users, as shown in figures 8-15 (paragraphs [0115]-[0118]).

Regarding claims 8 and 9, Tokuchi further teaches the limitations of the claims, such as a gesture such as pointing with a finger or arm, etc., in paragraphs [0121] and [0124].
Regarding claim 10, Tokuchi teaches a conference system (i.e., server 10, as shown in figure 2; para. [0038]), comprising: a memory (i.e., memory 18; para. [0042]), and at least one processor coupled to the memory (i.e., processor 20; para. [0043]) and configured to: associate a first device of a plurality of devices participating in a conference session with a first user of a plurality of users participating in the conference session (i.e., users A, B, C, D, E, F, G are in different places, as shown in figures 4 and 5 (para. [0057]-[0058] and [0108]-[0109]); account information of a user who uses an online service is information for logging in and using the online service, wherein the user associated with the account information is permitted to participate in and use the online service (para. [0034]); a list of information, e.g., account information, identifies the user who is logged into the online conference (para. [0114]); and a user who speaks next, or user D (as a first user), is designated by a user (i.e., a second user) who speaks before user D, an authorized person, etc. (para. [0120])); gather an audio/video input from a second device of the plurality of devices (i.e., gathering, by the server 10, a sound such as a voice, etc., from the user who speaks before the user D; para. [0121]); parse a voice signal of the audio/video input (i.e., a voice signal, such as a name, nickname, etc., received via a microphone and transmitted to the server 10 to identify the user D; para. [0123]); detect an identifier in the voice signal based on the parsing (i.e., the server 10 identifies the user D; para. [0124]); determine the first user is a designated user from the plurality of users based on the detected identifier in the voice signal being associated with the first user (para. [0123] and [0128]); and modify a setting of the conference session based on the designated user (i.e., changing the display form of the display region, expanding the display region up to a size corresponding to a point that the designated user is speaking, or enlarging the image or moving image, etc.; para. [0115]-[0117]).

Regarding claim 11, Tokuchi further teaches the limitations of the claim, such as associating multiple of the plurality of devices participating in the conference session with respective users participating in the conference session (i.e., the users A to G are logged into the online conference and were assigned different associated regions, as shown in figures 7-8; para. [0102]-[0103] and [0110]-[0112]); and modifying a display method of the video input of the device associated with the designated user (i.e., if the user D is designated as the next speaker, the display region D associated with the user D blinks or is decorated (para. [0118]); expanding the display region up to a size corresponding to a point that the designated user is speaking; or enlarging the image or moving image, etc., as shown in figures 9-15; para. [0115]-[0117]).

Regarding claim 12, Tokuchi further teaches the limitations of the claim, such as identifying a primary user from among the respective users (i.e., identifying the user who is currently speaking as a primary user; para. [0115]); detecting a change of the primary user to the designated user based on the detected identifier in the voice signal being associated with the first user (i.e., detecting a sound or voice signal (i.e., name, nickname, etc.) spoken by the user who speaks before the user who speaks next; para. [0121] and [0123]); and wherein modifying the display method of the video input of the device associated with the designated user comprises setting the video input of the device associated with the designated user as a primary video of the conference session (i.e., if the user D is designated as the next speaker, the display region D associated with the user D blinks or is decorated (para. [0118]); expanding the display region up to a size corresponding to a point that the designated user is speaking; or enlarging the image or moving image, etc., as shown in figures 9-15; para. [0115]-[0117]).

Regarding claim 13, Tokuchi further teaches that multiple user regions are displayed in a variety of grid arrangements, as shown in figures 8-15. Tokuchi further teaches the features of modifying the display method, as discussed above (para. [0115]-[0117]).

Regarding claim 14, Tokuchi further teaches the limitations of the claim, such as modification of the setting by pinning a designated user for a predetermined period of time, in paragraphs [0135]-[0139].
Regarding claim 15, Tokuchi further teaches the limitations of the claim, such as identifying a second user as a designating user associated with the second device that output the voice signal audio/video cue that was a source for detecting the designated user (i.e., the user who previously speaks may be the user who speaks immediately before the user who speaks next; para. [0120]); determining a direction from the designating user to the designated user based on detecting the identifier in the voice signal by detecting the audio/video cue from the audio/video input (i.e., determining a direction from the designating user by a gesture such as pointing, or by a sight line; para. [0121]); and wherein modifying the display method of the user region of the designated user in the grid comprises placing the user region of the designated user at a display location relative to the user region of the designating user in a same direction as the determined direction (i.e., if the user D is designated as the next speaker, the display region D associated with the user D blinks or is decorated (para. [0118]); expanding the display region up to a size corresponding to a point that the designated user is speaking; or enlarging the image or moving image, etc., as shown in figures 9-15; para. [0115]-[0117]).

Regarding claim 16, Tokuchi further teaches the limitations of the claim, such as assigning each of multiple participants, associated with their devices, to the corresponding regions displayed in figures 7-15, as discussed above. Tokuchi further teaches that the image or moving image of the designated user is displayed in the blinking, expanded or enlarged, or colored region, etc., to emphasize the designated user among the plurality of the users, as shown in figures 8-15 (paragraphs [0115]-[0118]).

Regarding claims 17 and 18, Tokuchi further teaches the limitations of the claims, such as a gesture such as pointing with a finger or arm, etc., in paragraphs [0121] and [0124].
Regarding claim 19, Tokuchi teaches a computer readable storage device having instructions stored thereon that, when executed by one or more processing devices (i.e., server 10, as shown in figure 2, comprising a memory 18 and processor 20; para. [0038], [0042] and [0043]), cause the one or more processing devices to perform operations comprising: associating, by a conference system (i.e., server 10, as shown in figure 2), a first device of a plurality of devices participating in a conference session with a first user of a plurality of users participating in the conference session (i.e., users A, B, C, D, E, F, G are in different places, as shown in figures 4 and 5 (para. [0057]-[0058] and [0108]-[0109]); account information of a user who uses an online service is information for logging in and using the online service, wherein the user associated with the account information is permitted to participate in and use the online service (para. [0034]); a list of information, e.g., account information, identifies the user who is logged into the online conference (para. [0114]); and a user who speaks next, or user D (as a first user), is designated by a user (i.e., a second user) who speaks before user D, an authorized person, etc. (para. [0120])); gathering, by the conference system, an audio/video input from a second device of the plurality of devices (i.e., gathering, by the server 10, a sound such as a voice, etc., from the user who speaks before the user D; para. [0121]); parsing, by the conference system, a voice signal of the audio/video input (i.e., a voice signal, such as a name, nickname, etc., received via a microphone and transmitted to the server 10 to identify the user D; para. [0123]); detecting, by the conference system, an identifier in the voice signal based on the parsing (i.e., the server 10 identifies the user D; para. [0124]); determining, by the conference system, the first user is a designated user from the plurality of users based on the detected identifier in the voice signal being associated with the first user (para. [0123] and [0128]); and modifying, by the conference system, a setting of the conference session based on the designated user (i.e., changing the display form of the display region, expanding the display region up to a size corresponding to a point that the designated user is speaking, or enlarging the image or moving image, etc.; para. [0115]-[0117]).

Regarding claim 20, Tokuchi further teaches the limitations of the claim, such as associating multiple of the plurality of devices participating in the conference session with respective users participating in the conference session (i.e., the users A to G are logged into the online conference and were assigned different associated regions, as shown in figures 7-8; para. [0102]-[0103] and [0110]-[0112]); and modifying a display method of the video input of the device associated with the designated user (i.e., if the user D is designated as the next speaker, the display region D associated with the user D blinks or is decorated (para. [0118]); expanding the display region up to a size corresponding to a point that the designated user is speaking; or enlarging the image or moving image, etc., as shown in figures 9-15; para. [0115]-[0117]).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BINH TIEU, whose telephone number is (571) 272-7510. The examiner can normally be reached 9-5. The Examiner's fax number is (571) 273-7510 and e-mail address is BINH.TIEU@USPTO.GOV.

Examiner interviews are available via telephone or video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, FAN S. TSANG, can be reached at (571) 272-7547.

Any response to this action should be mailed or hand-carried to: Commissioner of Patents and Trademarks, 401 Dulany Street, Alexandria, VA 22314; or faxed to: (571) 273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. If you have any questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

/Binh Kien Tieu/
Primary Examiner, Art Unit 2694
Date: February 2026
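For readers skimming the claim mapping, the flow recited in amended independent claim 1 can be sketched in code. This is an editorial illustration of the claim language only; the class, method, and field names are hypothetical, and nothing below comes from the application or from Tokuchi:

```python
# Editorial sketch of the steps recited in amended independent claim 1.
# Names are hypothetical; only the step order tracks the claim language.

class ConferenceSystem:
    def __init__(self):
        self.device_to_user = {}      # device id -> user id (associations)
        self.identifier_to_user = {}  # spoken identifier (name/nickname) -> user id

    def associate(self, device_id, user_id):
        # "associating ... a first device ... with a first user"
        self.device_to_user[device_id] = user_id

    def handle_input(self, source_device_id, audio_video_input):
        # "gathering ... an audio/video input from a second device"
        voice_signal = audio_video_input["voice"]

        # "parsing ... a voice signal of the audio/video input"
        tokens = [t.strip(".,!?") for t in voice_signal.lower().split()]

        # "detecting ... an identifier in the voice signal based on the parsing"
        identifier = next((t for t in tokens if t in self.identifier_to_user), None)
        if identifier is None:
            return None  # no designation heard; session unchanged

        # "determining ... the first user is a designated user ... based on the
        # detected identifier ... being associated with the first user"
        designated_user = self.identifier_to_user[identifier]

        # "modifying ... a setting of the conference session based on the
        # designated user" (e.g., promoting that user's video tile)
        return {"primary_video_user": designated_user}


system = ConferenceSystem()
system.associate("device-1", "userD")        # first device <-> first user
system.identifier_to_user["dave"] = "userD"  # spoken identifier for userD

print(system.handle_input("device-2", {"voice": "Over to you, Dave."}))
# -> {'primary_video_user': 'userD'}
```

In this sketch, the spoken identifier ("Dave") heard on a second device designates userD, and the returned setting update promotes that user's video, loosely mirroring the display-region enlargement the Office Action cites in Tokuchi at paragraphs [0115]-[0117].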

Prosecution Timeline

Jul 21, 2023
Application Filed
May 22, 2025
Non-Final Rejection — §102
Aug 25, 2025
Examiner Interview (Telephonic)
Aug 25, 2025
Response Filed
Aug 25, 2025
Examiner Interview Summary
Nov 04, 2025
Final Rejection — §102
Feb 04, 2026
Request for Continued Examination
Feb 13, 2026
Response after Non-Final Action
Feb 21, 2026
Non-Final Rejection — §102
Apr 14, 2026
Applicant Interview (Telephonic)
Apr 14, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603111
AUDIO GUESTBOOK SYSTEMS AND METHODS
2y 5m to grant • Granted Apr 14, 2026
Patent 12598223
Dynamic Teleconference Content Item Distribution to Multiple Devices Associated with a User
2y 5m to grant • Granted Apr 07, 2026
Patent 12592994
REAL-TIME USER SCREENING OF MESSAGES WITHIN A COMMUNICATION PLATFORM
2y 5m to grant • Granted Mar 31, 2026
Patent 12592740
WIRELESS COMMUNICATION DEVICE AND WIRELESS COMMUNICATION METHOD
2y 5m to grant • Granted Mar 31, 2026
Patent 12573198
COMMUNICATION SYSTEM, OUTPUT DEVICE, COMMUNICATION METHOD, OUTPUT METHOD, AND OUTPUT PROGRAM
2y 5m to grant • Granted Mar 10, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 87% (97% with interview, +9.8%)
Median Time to Grant: 2y 5m
PTA Risk: High
Based on 931 resolved cases by this examiner. Grant probability derived from career allow rate.
