Prosecution Insights
Last updated: April 19, 2026
Application No. 18/674,599

CONFERENCE RECORDING METHOD, TERMINAL DEVICE, AND CONFERENCE RECORDING SYSTEM

Status: Non-Final OA (§102)
Filed: May 24, 2024
Examiner: TRAN, QUOC DUC
Art Unit: 2691
Tech Center: 2600 — Communications
Assignee: Huawei Technologies Co., Ltd.
OA Round: 1 (Non-Final)

Grant Probability: 86% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 7m
Grant Probability With Interview: 90%

Examiner Intelligence

Career Allow Rate: 86% (above average; 720 granted / 841 resolved; +23.6% vs TC avg)
Interview Lift: +4.8% (minimal, ~+5%; measured on resolved cases with interview)
Typical Timeline: 2y 7m avg prosecution; 17 currently pending
Career History: 858 total applications across all art units
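The headline figures in this card follow directly from the raw counts shown above. A minimal sketch of the arithmetic (the function name is illustrative, and treating the interview lift as additive is an assumption inferred from the displayed +4.8% and 90% values):

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage."""
    return 100.0 * granted / resolved

# 720 granted out of 841 resolved cases, as shown above
base = allow_rate(720, 841)            # ~85.6%, displayed as 86%

# Assumption: the +4.8% interview lift is applied additively
with_interview = base + 4.8            # ~90.4%, displayed as 90%

print(round(base), round(with_interview))  # 86 90
```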

Statute-Specific Performance

§101: 5.0% (-35.0% vs TC avg)
§103: 43.3% (+3.3% vs TC avg)
§102: 30.5% (-9.5% vs TC avg)
§112: 5.3% (-34.7% vs TC avg)
Tech Center average is an estimate. Based on career data from 841 resolved cases.
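Each "vs TC avg" delta appears to be the examiner's statute-specific rate minus a common Tech Center baseline. All four displayed deltas are consistent with a 40.0% baseline; that figure is inferred here from the deltas themselves, not taken from any published number:

```python
# Inferred TC baseline: every displayed delta equals rate - 40.0
tc_avg = 40.0

examiner = {"101": 5.0, "103": 43.3, "102": 30.5, "112": 5.3}
deltas = {s: round(rate - tc_avg, 1) for s, rate in examiner.items()}
print(deltas)  # {'101': -35.0, '103': 3.3, '102': -9.5, '112': -34.7}
```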

Office Action

§102
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-3, 7-10, 15-17 and 19-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Baba (7,466,334).

Consider claims 1, 15 and 20: Baba teaches a conference recording method, terminal device and non-transitory computer-readable medium for application to a client configured to provide an online conference, the online conference comprises an online audio conference or an online video conference, the method comprising: detecting an online conference trigger operation, the online conference trigger operation being a trigger operation of an online conference end option or a trigger operation of an online conference record viewing option (col. 6 line 58 – col. 7 line 2; "a subscriber 12 can submit a notification request to the notification subsystem 11, requesting to be notified whenever an event of interest to the subscriber occurs. All notification subscriptions are stored in the management subsystem 10 database and are forwarded to the indexing subsystems 8. When an indexing subsystem detects an event of interest 213, it triggers a notification message, which is sent via the notification system to the subscriber"; col. 11 lines 43-59; "Requests 71 made by a subscriber via the notification and navigation subsystem 11 are in one of three forms: query requests 72, navigation requests 73 and notification requests 74. Query requests are requests to retrieve a set of results that match certain conditions submitted by the subscriber. Query requests are performed in an ad-hoc manner. Navigation requests are requests to retrieve a new recording or recording interval, or give a current recording or interval and a temporal offset or some other iteration offset. Navigation requests can be made only after first obtaining a results set from either a query request of a notification request"); and displaying, in response to the online conference trigger operation, an online conference review interface (col. 14 lines 14-21; "FIG. 9 is an illustration of one embodiment of the graphic representation and display of a result sets. The results of user query requests or of a topic notification are displayed as a matrix 170"), comprising timelines of a plurality of participants and a plurality of audio identifiers distributed along the timelines, wherein each of the plurality of audio identifiers identifies a segment of audio data of a participant corresponding to a timeline on which the audio identifier is located, wherein a start position of the audio identifier on the timeline indicates a start time of the segment of audio data of the participant, wherein an end position of the audio identifier on the timeline indicates an end time of the segment of audio data of the participant, and wherein the segment of audio data of the participant is generated by recording a conference speech voice signal of the participant in a time period from the start time to the end time (col. 7 line 10 – col. 8 line 16; "The recording subsystem 7 accepts an input stream 25 from the external audio/video conferencing system 5 and records the input stream by segmenting at designated time intervals obtained from the configuration subsystem 6. The recording subsystem performs encoding of the input stream to produce an output audio/video stream 27 suitable for playback via the playback subsystem 9 and informs the management subsystem 10 of the location of the recording. After each recorded segment is encoded, the segment is processed by the indexing subsystem 8"; col. 11 lines 9-32; "The recording information records 56 contain the recording date and time, the meeting leader for the recording, meeting participant information if available, and system identification denoting which recording module and parameters where used to record the conference call. Each recording has a unique recording ID, which is used to associate or join the record to any other record in the system database. The recording location records 62 contain information of where the recording media is stored on one or more playback servers 9. The relevance interval records 58 contain the interval start time and stop time relative to the recording and a weighting to describe the strength or confidence of the interval. Relevance intervals are further associated with recognized words and tasks").

Consider claims 2 and 16: Baba teaches detecting a first trigger operation corresponding to one of the plurality of audio identifiers; and displaying, in response to the first trigger operation, a keyword of audio data corresponding to the audio identifier (col. 11 lines 9-32; "The recording information records 56 contain the recording date and time, the meeting leader for the recording, meeting participant information if available, and system identification denoting which recording module and parameters where used to record the conference call. Each recording has a unique recording ID, which is used to associate or join the record to any other record in the system database. The recording location records 62 contain information of where the recording media is stored on one or more playback servers 9. The relevance interval records 58 contain the interval start time and stop time relative to the recording and a weighting to describe the strength or confidence of the interval. Relevance intervals are further associated with recognized words and tasks"; col. 13 lines 4-9; "Next, for each associated task we further find 166 from the interval correlation database 167 any other interval that also has a high correlation with the identified task. The results are organized at step 168 and graphically presented to the display system of the user").

Consider claims 3 and 17: Baba teaches detecting a second trigger operation corresponding to the one of the plurality of audio identifiers, and playing, in response to the second trigger operation, the audio data corresponding to the audio identifier (col. 11 lines 9-32; "The recording information records 56 contain the recording date and time, the meeting leader for the recording, meeting participant information if available, and system identification denoting which recording module and parameters where used to record the conference call. Each recording has a unique recording ID, which is used to associate or join the record to any other record in the system database. The recording location records 62 contain information of where the recording media is stored on one or more playback servers 9. The relevance interval records 58 contain the interval start time and stop time relative to the recording and a weighting to describe the strength or confidence of the interval. Relevance intervals are further associated with recognized words and tasks"; col. 13 lines 4-9; "Next, for each associated task we further find 166 from the interval correlation database 167 any other interval that also has a high correlation with the identified task. The results are organized at step 168 and graphically presented to the display system of the user").

Consider claims 7 and 19: Baba teaches wherein, before the detecting the online conference trigger operation, the method further comprises: recording and generating at least one piece of audio data of a participant that uses the client, and recording a start time and an end time of the at least one piece of audio data on a timeline of the participant; and sending the start time and the end time of the at least one piece of audio data on the timeline of the participant and the at least one piece of audio data to a server, wherein the start time and the end time of the at least one piece of audio data on the timeline of the participant and the at least one piece of audio data are used to generate the online conference review interface (col. 5 lines 47-67; "The recording subsystem 7 receives the audio/video stream 203 from the external conferencing system 5 in the format and protocol of the conferencing system. To the conferencing system the recording subsystem appears to be just another participant on the conference call. An audio conferencing system can connect to the recording module by dialing a telephone number of the recording subsystem gateway. A video conferencing system can connect to the recording subsystem by dialing the appropriate ISDN or network based address of the recording module or gateway as required"; col. 11 lines 9-32; "The recording information records 56 contain the recording date and time, the meeting leader for the recording, meeting participant information if available, and system identification denoting which recording module and parameters where used to record the conference call. Each recording has a unique recording ID, which is used to associate or join the record to any other record in the system database. The recording location records 62 contain information of where the recording media is stored on one or more playback servers 9").

Consider claim 8: Baba teaches wherein the start time and the end time of the at least one piece of audio data on the timeline of the participant and the at least one piece of audio data are stored in at least one storage unit, and wherein the at least one storage unit is connected in series by using a time pointer of the participant (col. 11 lines 9-32; "The recording information records 56 contain the recording date and time, the meeting leader for the recording, meeting participant information if available, and system identification denoting which recording module and parameters where used to record the conference call. Each recording has a unique recording ID, which is used to associate or join the record to any other record in the system database. The recording location records 62 contain information of where the recording media is stored on one or more playback servers 9"; col. 12 lines 15-23; "Each match from the SR match module 68 returns the recording ID and the start and end times an interval surrounding the matched word along with a confidence value. Each match from the contextual match module 69 contains the recording ID, topic ID, and the interval IDs previously associated with the topic by the indexing subsystem").

Consider claim 9: Baba teaches detecting a sixth trigger operation corresponding to the one of the plurality of audio identifiers; and displaying, in response to the sixth trigger operation, a thumbnail of video data corresponding to the audio identifier, wherein the video data corresponding to the audio identifier is generated by recording a main interface picture in a time period between a start time and an end time of audio data corresponding to the audio identifier (col. 14 lines 14-34; "Each graphic icon also displays the interval start and stop times 175 and other recording data 176. Each interval has a player control 177, allowing the user to select and play that recording interval easily").

Consider claim 10: Baba teaches comprising: detecting a seventh trigger operation of the thumbnail, and playing, in response to the seventh trigger operation, the video data and the audio data that are corresponding to the audio identifier (col. 14 lines 14-34; "Each graphic icon also displays the interval start and stop times 175 and other recording data 176. Each interval has a player control 177, allowing the user to select and play that recording interval easily"; col. 15 lines 20-25; "subscriber 12 may issue a navigation command 108 directly to the media server 103 for intra-recording navigation including: stop, pause, rewind, fast-forward or to jump to a specific time offset within the audio or video stream currently being played. These commands are only appropriate after an audio or video stream has been retrieved").

Allowable Subject Matter

Claims 4-6, 11-14 and 18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Any response to this action should be mailed to: Mail Stop ____ (explanation, e.g., Amendment or After-final, etc.), Commissioner for Patents, P.O. Box 1450, Alexandria, VA 22313-1450. Facsimile responses should be faxed to: (571) 273-8300. Hand-delivered responses should be brought to: Customer Service Window, Randolph Building, 401 Dulany Street, Alexandria, VA 22314.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to QUOC DUC TRAN whose telephone number is (571) 272-7511. The examiner can normally be reached Monday-Friday 8:30am - 5pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Duc Nguyen, can be reached on (571) 272-7503. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Quoc D Tran/
Primary Examiner, Art Unit 2691
January 29, 2026
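For readers mapping the rejection to the claim language: the claimed review-interface data model is per-participant timelines, audio identifiers carrying start and end times, and (per claim 8) storage units "connected in series by using a time pointer," i.e., effectively a per-participant linked list of recorded segments. A minimal sketch of that structure, assumed for illustration only and not drawn from the application's actual implementation:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class StorageUnit:
    start: float   # start time on the participant's timeline (seconds)
    end: float     # end time of the recorded speech segment (seconds)
    audio: bytes = b""  # the recorded segment itself
    # Claim 8's "time pointer": links storage units in series
    next: Optional["StorageUnit"] = None

@dataclass
class ParticipantTimeline:
    name: str
    head: Optional[StorageUnit] = None

    def append(self, unit: StorageUnit) -> None:
        """Link a new segment at the end of the time-pointer chain."""
        if self.head is None:
            self.head = unit
            return
        cur = self.head
        while cur.next is not None:
            cur = cur.next
        cur.next = unit

    def identifiers(self) -> List[Tuple[float, float]]:
        """(start, end) pairs rendered as audio identifiers on the timeline."""
        out, cur = [], self.head
        while cur is not None:
            out.append((cur.start, cur.end))
            cur = cur.next
        return out

# Hypothetical participant with two recorded speech segments
alice = ParticipantTimeline("Alice")
alice.append(StorageUnit(0.0, 12.5))
alice.append(StorageUnit(30.0, 41.2))
print(alice.identifiers())  # [(0.0, 12.5), (30.0, 41.2)]
```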

Prosecution Timeline

May 24, 2024
Application Filed
Jan 29, 2026
Non-Final Rejection — §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598268
STAGE USER REPLACEMENT TECHNIQUES FOR ONLINE VIDEO CONFERENCES
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12598251
PREVENTING DEEP FAKE VOICEMAIL SCAMS
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12592989
DETECTING A SPOOFED CALL
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12593011
APPARATUS AND METHODS FOR VISUAL SUMMARIZATION OF VIDEOS
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12581033
ENFORCING A LIVENESS REQUIREMENT ON AN ENCRYPTED VIDEOCONFERENCE
Granted Mar 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 86%
With Interview: 90% (+4.8%)
Median Time to Grant: 2y 7m
PTA Risk: Low

Based on 841 resolved cases by this examiner. Grant probability derived from career allow rate.
