Prosecution Insights
Last updated: April 19, 2026
Application No. 18/617,384

REAL-TIME AI-DRIVEN SPEAKING SUGGESTIONS DURING ASYNCHRONOUS VIDEO CAPTURE

Non-Final OA (§101, §102)
Filed: Mar 26, 2024
Examiner: TRAN, TAM T
Art Unit: 2174
Tech Center: 2100 — Computer Architecture & Software
Assignee: Emovid Corporation
OA Round: 1 (Non-Final)

Grant Probability: 80% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 5m
Grant Probability With Interview: 92%

Examiner Intelligence

Career Allow Rate: 80% (318 granted / 397 resolved; +25.1% vs TC avg) — above average
Interview Lift: +11.9% among resolved cases with interview (moderate)
Avg Prosecution: 2y 5m typical timeline; 18 applications currently pending
Career History: 415 total applications across all art units
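The career allow rate above follows directly from the resolved-case counts; a quick sanity check in Python:

```python
# Career allow rate = granted cases / resolved cases, from the figures above.
granted = 318
resolved = 397

allow_rate = granted / resolved
print(f"{allow_rate:.1%}")  # 80.1%, displayed on the card as 80%
```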

Statute-Specific Performance

§101: 10.7% (-29.3% vs TC avg)
§103: 53.0% (+13.0% vs TC avg)
§102: 14.4% (-25.6% vs TC avg)
§112: 9.8% (-30.2% vs TC avg)
Tech Center averages are estimates. Based on career data from 397 resolved cases.

Office Action

§101, §102
DETAILED ACTION

This Office Action is in response to Application 18/617,384, filed on 03/26/2024. In the instant application, claims 1, 9, and 15 are independent claims; claims 1-20 have been examined and are pending. This action is made non-final.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Drawings

The drawings submitted on 03/26/2024 are acceptable.

Claim Objections

Claims 10-14 are objected to because of the following informalities: claims 10-14 are method claims but depend from one or more instances of computer-readable media claim 9. Appropriate corrections are required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 9-14 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claims do not fall within at least one of the four categories of patent-eligible subject matter.

Regarding claim 9: Claim 9 recites "One or more instances of computer-readable media …". The specification fails to define or limit the claimed "one or more instances of computer-readable media." Therefore, it would be reasonable to interpret the claimed "computer-readable storage medium" to comprise a signal or a carrier wave, neither of which falls into one of the four statutory categories of invention.

Regarding claims 10-14: These claims are also rejected under 35 U.S.C. 101 due to their dependency on independent claim 9.
Allowable Subject Matter

Claims 4, 5, 7, 12, 14, 17, 18, and 20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-3, 6, 8-11, 13, 15-16, and 19 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Gustman et al. (US 2025/0063239). For clarity, the examiner cites directly from one of Gustman's provisionals: 63/492,174 (hereinafter Gustman), filed on 03/24/2023.

Regarding claim 1, Gustman teaches a method in a computing system having a display device and a camera positioned with respect to the display device (Gustman: ¶0017; the interactive interview assistant can provide real-time analysis and suggestions during video and audio interviews), the method comprising:

receiving user input specifying a speaking subject (Gustman: ¶0019; the interviewer can press a button to indicate when they are asking a question. As each question and answer is transcribed, the system can use natural-language processing to analyze the content of the question and response, and make suggestions to the interviewer. ¶0024 and Fig. 1; the system can use a generative pre-trained transformer to analyze the transcript of the response to generate follow-up questions.
The transformer can take other reference materials into account when generating questions; examples can be biographical material about the interviewee, resumes, news articles, or other media transcriptions);

calling a recommendation engine with the speaking subject (Gustman: ¶0024 and Fig. 1; a pre-trained transformer is called to analyze a transcript of a response to generate follow-up questions);

receiving a response from the recommendation engine containing a speaking suggestion for the speaking subject (Gustman: ¶0024 and Fig. 1; the pre-trained transformer generates a set of suggested follow-up questions to be addressed by the interviewee);

capturing an audio/video sequence using the camera (Gustman: ¶0017, 0057, 0058; an interactive interview assistant can provide real-time analysis and suggestions during video and audio interviews, which can be done in real time. ¶0018; provides real-time analysis and suggestions during an active interview); and

concurrently with capturing the audio/video sequence, causing the speaking suggestion to be displayed on the display device (Gustman: ¶0019; as each question and answer is transcribed, the system can use natural-language processing to analyze the content of the question and response, and make suggestions to the interviewer. ¶0023; the size and distance between topics (question recommendations) can be presented visually. ¶0058; the system can be configured for a user to be interviewed by themselves. In the absence of a human interviewer, the system can display prompts as text. The video prompts can be based on previous recordings, or could be generated dynamically using video or audio synthesis).

Regarding claim 2, Gustman teaches the method of claim 1. Gustman further teaches: wherein the speaking suggestion is displayed in a portion of the display device nearest the camera (Gustman: ¶0057; the assistant can analyze the video of the subject.
This can extract subject metrics such as tracked body pose and facial expression, and overall image quality metrics such as camera framing, color, and lighting).

Regarding claim 3, Gustman teaches the method of claim 1. Gustman further teaches: causing the displayed speaking suggestion to be scrolled during the capture of the audio/video sequence (Gustman: ¶0058; the system can provide feedback during the recording process, providing teleprompter guidance and feedback on tone and emphasis).

Regarding claim 6, Gustman teaches the method of claim 1. Gustman further teaches: wherein the recommendation engine is a large language model, and wherein the calling comprises: concatenating the user input with predetermined text to obtain a prompt; and submitting the obtained prompt to the large language model (Gustman: ¶0024; the system can use a generative pre-trained transformer to analyze the transcript of the response to generate follow-up questions. The system can take as input the content and transcript and a secondary prompt to generate a given number of follow-up questions).

Regarding claim 8, Gustman teaches the method of claim 1. Gustman further teaches: receiving additional input adjusting the speaking suggestion; and revising the speaking suggestion in accordance with the received additional input, and wherein it is the revised speaking suggestion that is caused to be displayed (Gustman: ¶0016; the interactive interview assistant receives input information and generates question prompts regarding items in/related to the input information. The question prompts are provided, e.g., spoken, displayed, or otherwise conveyed, to a respondent. Responses are captured and subsequent questions are generated based on the responses).

Regarding claims 9-11, these claims are directed to one or more instances of computer-readable media executing the method as claimed in claims 1-3, respectively.
Claims 9-11 are similar in scope to claims 1-3, respectively, and are therefore rejected under similar rationale.

Regarding claim 13, this claim is directed to one or more instances of computer-readable media executing the method as claimed in claim 6. Claim 13 is similar in scope to claim 6 and is therefore rejected under similar rationale.

Regarding claims 15-16, these claims are directed to a computing system executing the method as claimed in claims 1-2, respectively. Claims 15-16 are similar in scope to claims 1-2, respectively, and are therefore rejected under similar rationale.

Regarding claim 19, this claim is directed to a computing system executing the method as claimed in claim 6. Claim 19 is similar in scope to claim 6 and is therefore rejected under similar rationale.

Conclusion

The prior art made of record on form PTO-892 and not relied upon is considered pertinent to applicant's disclosure. Applicant is required under 37 C.F.R. § 1.111(c) to consider these references fully when responding to this action. It is noted that any citation to specific pages, columns, lines, or figures in the prior art references and any interpretation of the references should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. In re Heck, 699 F.2d 1331, 1332-33, 216 USPQ 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 USPQ 275, 277 (CCPA 1968)).

Inquiry

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Tam T. Tran, whose telephone number is (571) 270-5029. The examiner can normally be reached M-F, 7:30 AM - 5:00 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, William L. Bashore, can be reached on 571-272-4088. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TAM T TRAN/
Primary Examiner, Art Unit 2174
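Claim 6, as characterized in the §102 rejection above, recites concatenating user input with predetermined text to obtain a prompt and submitting that prompt to a large language model — the familiar prompt-template pattern. A minimal illustrative sketch; the template wording and the `build_prompt`/`generate_suggestion` helpers are hypothetical, not taken from the application or the Gustman reference:

```python
# Hypothetical sketch of the claimed step: concatenate the user-supplied
# speaking subject with predetermined text, then submit the combined
# prompt to a large language model.
PREDETERMINED_TEXT = (
    "You are a speaking coach. Provide a short speaking suggestion "
    "for the following subject: "
)

def build_prompt(speaking_subject: str) -> str:
    # "Concatenating the user input with predetermined text to obtain a prompt"
    return PREDETERMINED_TEXT + speaking_subject

def generate_suggestion(speaking_subject: str, llm) -> str:
    # `llm` stands in for any completion call; submitting the obtained prompt
    prompt = build_prompt(speaking_subject)
    return llm(prompt)

print(build_prompt("quarterly results"))
```

Whether such a generic template, standing alone, distinguishes over Gustman's "secondary prompt" is exactly the question the ¶0024 citation raises.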

Prosecution Timeline

Mar 26, 2024 — Application Filed
Jan 09, 2026 — Non-Final Rejection (§101, §102)
Apr 13, 2026 — Applicant Interview (Telephonic)
Apr 13, 2026 — Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592007 — LYRICS AND KARAOKE USER INTERFACES, METHODS AND SYSTEMS — granted Mar 31, 2026 (2y 5m to grant)
Patent 12591312 — WEARABLE TERMINAL APPARATUS, PROGRAM, AND IMAGE PROCESSING METHOD — granted Mar 31, 2026 (2y 5m to grant)
Patent 12585419 — AUDIO PROCESSING METHOD AND APPARATUS, AND ELECTRONIC DEVICE — granted Mar 24, 2026 (2y 5m to grant)
Patent 12572272 — METHOD FOR COMPUTER KEY AND POINTER INPUT USING GESTURES — granted Mar 10, 2026 (2y 5m to grant)
Patent 12572260 — PRESENTATION AND CONTROL OF USER INTERACTIONS WITH A USER INTERFACE ELEMENT — granted Mar 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 80%
With Interview: 92% (+11.9%)
Median Time to Grant: 2y 5m
PTA Risk: Low
Based on 397 resolved cases by this examiner. Grant probability derived from career allow rate.
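The with-interview projection is consistent with simply adding the interview lift to the base grant probability; a minimal check, assuming an additive model (an assumption — the tool's actual projection model is not documented here):

```python
# Assumed additive model: base grant probability + interview lift.
base_grant_probability = 80.0  # career allow rate, in percent
interview_lift = 11.9          # percentage points gained with an interview

with_interview = base_grant_probability + interview_lift
print(round(with_interview))  # 92, matching the projection above
```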
