Prosecution Insights
Last updated: April 19, 2026
Application No. 18/585,587

ARTIFICIAL INTELLIGENCE ASSISTANCE FOR AN AUDIO, VIDEO AND CONTROL SYSTEM

Status: Final Rejection — §102
Filed: Feb 23, 2024
Examiner: SAINT-CYR, LEONARD
Art Unit: 2658
Tech Center: 2600 — Communications
Assignee: QSC, LLC
OA Round: 2 (Final)

Grant Probability: 77% (Favorable)
Expected OA Rounds: 3-4
Median Time to Grant: 3y 1m
Grant Probability With Interview: 95%

Examiner Intelligence

Career Allow Rate: 77% — above average (882 granted / 1,144 resolved; +15.1% vs TC avg)
Interview Lift: +18.2% for resolved cases with an interview — a strong lift
Typical Timeline: 3y 1m average prosecution; 32 applications currently pending
Career History: 1,176 total applications across all art units
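The headline figures above are simple ratios over the examiner's resolved cases. A minimal sketch of the arithmetic, assuming only the counts shown (882 grants out of 1,144 resolved) and taking the +18.2-point interview lift as given rather than recomputing it from per-case data, which is not shown on this page:

```python
# Career allow rate from the resolved-case counts shown above.
granted = 882
resolved = 1144
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # → 77.1% (displayed as 77%)

# The dashboard reports an +18.2-point lift for cases that included an
# examiner interview; adding it to the baseline gives the headline figure.
interview_lift = 0.182
with_interview = allow_rate + interview_lift
print(f"With interview: {with_interview:.0%}")  # → 95%
```

By the same logic, the +15.1% "vs TC avg" delta would imply a Tech Center baseline allow rate of roughly 62% (77.1 − 15.1), though that baseline is an inference, not a figure shown here.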

Statute-Specific Performance

§101: 17.8% (-22.2% vs TC avg)
§103: 39.1% (-0.9% vs TC avg)
§102: 28.0% (-12.0% vs TC avg)
§112: 2.2% (-37.8% vs TC avg)

Deltas are measured against a Tech Center average estimate; figures are based on career data from 1,144 resolved cases.
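Each "vs TC avg" delta implies a Tech Center baseline for that statute (examiner rate minus delta). A small sketch recovering those implied baselines; the rates and deltas are taken from the table above, while the recovered baselines are an inference rather than a figure shown on the page:

```python
# (examiner rate %, delta vs Tech Center average %) per statute, from the table
stats = {
    "101": (17.8, -22.2),
    "103": (39.1, -0.9),
    "102": (28.0, -12.0),
    "112": (2.2, -37.8),
}

for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta  # implied Tech Center average for this statute
    print(f"§{statute}: examiner {rate}% vs implied TC avg {tc_avg:.1f}%")
```

Run on these numbers, every implied baseline comes out to 40.0%, which suggests the dashboard compares each statute against a single Tech Center-wide figure.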

Office Action

§102
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed 12/19/25 have been fully considered but they are not persuasive.

Applicant argues that Klein fails to teach implementing an audio, video and control ("AVC") operating system on an AVC processing core communicably coupled to one or more peripheral devices, the AVC processing core being configured to manage and control functionality of audio, video and control features of the peripheral devices (Amendment, pages 8 – 11). The examiner disagrees, since Klein et al. disclose "User devices 102a and 102b through 102n can be client devices on the client-side of operating environment 100, while server 106 can be on the server-side of operating environment 100. Server 106 can comprise server-side software designed to work in conjunction with client-side software on user devices 102a and 102b through 102n so as to implement any combination of the features and functionalities discussed in the present disclosure…user devices 102a through 102n may be the type of computing device described in relation to FIG. 19 herein. By way of example and not limitation, a user device may be embodied as a personal computer (PC), a laptop computer, a mobile phone or mobile device, a smartphone, a tablet computer, a smart watch, a wearable computer, a personal digital assistant (PDA), a music player or an MP3 player, a global positioning system (GPS) or device, a video player, a handheld communications device, a gaming device or system, an entertainment system, a vehicle computer system, an embedded system controller, a camera" (paragraphs 46 – 49).

Applicant argues that Klein fails to teach the taught command set (Amendment, pages 9, 10). The examiner disagrees, since Klein et al. disclose "one or more servers, will roll back conversational state to a past state and let the new incoming request (e.g., the request to set a reminder to take kids to the baseball game) play through it again (e.g., updates to a conversational state as modeled as a graph, and roll back to a previous node in the graph, as described in more detail below)" (paragraph 150).

Applicant's arguments, see pages 6 – 8, filed 12/19/25, with respect to claims 1 – 20 have been fully considered and are persuasive. The rejection of claims 1 – 20 under 35 U.S.C. 101 has been withdrawn. Applicant argues that the claims require a non-conventional arrangement of elements, including: an AVC processing core executing an AVC operating system, integration of an LLM module with that AVC architecture, execution of preconfigured or user-taught command sets to perform system actions, and propagation of taught command sets across multiple AVC processing cores (Amendment, pages 6, 7).

Applicant's arguments, see pages 10, 11, filed 12/19/25, with respect to claims 6, 13, and 19 have been fully considered and are persuasive. The rejection of claims 6, 13, and 19 under 35 U.S.C. 102 has been withdrawn. Applicant argues that Klein does not teach the AVC processing core communicates the taught command set to one or more secondary AVC processing cores, thereby teaching the secondary AVC processing cores the taught command set (Amendment, pages 10, 11).

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1 – 5, 7 – 12, 14 – 18, and 20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Klein et al. (US PAP 2022/0308718).

As per claims 1, 8, and 15, Klein et al. teach a computer-implemented method, comprising: implementing an audio, video and control ("AVC") operating system on an AVC processing core communicably coupled to one or more peripheral devices, the AVC processing core being configured to manage and control functionality of audio, video and control features of the peripheral devices ("By way of example and not limitation, a user device may be embodied as a personal computer (PC), a laptop computer, a mobile phone or mobile device, a smartphone, a tablet computer, a smart watch, a wearable computer, a personal digital assistant (PDA), a music player or an MP3 player, a global positioning system (GPS) or device, a video player, a handheld communications device, a gaming device or system, an entertainment system, a vehicle computer system, an embedded system controller, a camera, a remote control, a bar code scanner, a computerized measuring device, an appliance, a consumer electronic device, a workstation, or any combination of these delineated devices, or any other suitable computer device."; paragraphs 46 – 49); detecting, using a large language model ("LLM") module communicably coupled to the AVC processing core, one or more oral commands issued from a user ("the speech-to-text conversion module 216 breaks down the audio of a speech recording into individual sounds, analyzes each sound, using algorithms (e.g., GMM or HMI) to find the most probable word fit in that language, and transcribes those sounds into text. In some embodiments, the speech-to-text conversion module 216 uses NLP models (e.g., GPT-3, BERT, XLNET, or other NLP model) and/or deep learning neural networks to perform its functionality. NLP is a way for computers to analyze, understand, and derive meaning from human language."; paragraph 67); and performing actions on the peripheral devices or AVC processing core which correspond to the oral commands ("The context understanding module 218 is generally responsible for determining or predicting user intent of a voice utterance issued by a user. 'User intent' as described herein refers to one or more actions or tasks the user is trying to accomplish via the voice utterance…The client action request attribute is indicative of a command to the client application to perform one or more specific actions based on determining the user intent."; paragraphs 68, 94).

As per claims 2, 9, and 16, Klein et al. further disclose the LLM module executes a preconfigured command set to perform actions on the peripheral devices ("a news service that provides the current news the user has requested, and/or a home device activation service that causes one or more home devices (e.g., lights) to be activated in response to a user request."; paragraphs 44 – 47, 91).

As per claims 3, 10, and 17, Klein et al. further disclose the LLM module executes a taught command set to perform actions on the peripheral devices, the taught command set being taught by the user ("The client action request attribute is indicative of a command to the client application to perform one or more specific actions based on determining the user intent. Specifically, the client action request as indicated in the table 404 is to populate the 'meeting attendees' field of instance ID 4. The 'result payload' attribute indicates the specific values that are to be returned to the client application based on the client action request and the determined or predicted user intent."; paragraphs 68, 94; see also paragraph 150).

As per claims 4, 11, and 18, Klein et al. further disclose the taught command set is obtained from the user via a web interface ("Often the content may include static content and dynamic content. When a client application, such as a web browser, requests a website or web application via a URL or search term, the browser typically contacts a web server to request static content or the basic components of a website or web application (e.g., HTML, pages, image files, video files, and the like.)."; paragraphs 46 – 48).

As per claims 5 and 12, Klein et al. further disclose the taught command set is obtained from the user via a listening device ("the voice utterance is a key word or wake word used as authentication or authorization (e.g., key word detection) to trigger a component (e.g., an audio application programming interface (API)) to initiate a recording of audio to listen for or detect audio input."; paragraphs 55 – 57).

As per claims 7, 14, and 20, Klein et al. further disclose the LLM module is accessed from a cloud service, local network service, or on the AVC processing core ("a cloud computing environment"; paragraphs 46, 54, 84).

Allowable Subject Matter

Claims 6, 13, and 19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter: As to claims 6, 13, and 19, the prior art of record does not teach or suggest the AVC processing core communicates the taught command set to one or more secondary AVC processing cores, thereby teaching the secondary AVC processing cores the taught command set.
Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LEONARD SAINT-CYR, whose telephone number is (571) 272-4247. The examiner can normally be reached Monday through Friday. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Richemond Dorvil, can be reached at (571) 272-7602. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/LEONARD SAINT-CYR/
Primary Examiner, Art Unit 2658

Prosecution Timeline

Feb 23, 2024: Application Filed
Sep 19, 2025: Non-Final Rejection — §102
Dec 03, 2025: Interview Requested
Dec 18, 2025: Examiner Interview (Telephonic)
Dec 19, 2025: Response Filed
Jan 14, 2026: Examiner Interview Summary
Mar 05, 2026: Final Rejection — §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603100: SYSTEM AND METHOD FOR OPTIMIZED AUDIO MIXING (granted Apr 14, 2026; 2y 5m to grant)
Patent 12597415: VOICE RECOGNITION GRAMMAR SELECTION BASED ON CONTEXT (granted Apr 07, 2026; 2y 5m to grant)
Patent 12592227: DIALOG UNDERSTANDING DEVICE AND DIALOG UNDERSTANDING METHOD (granted Mar 31, 2026; 2y 5m to grant)
Patent 12591765: SYSTEMS AND METHODS FOR BUILDING A CUSTOMIZED GENERATIVE ARTIFICIAL INTELLIGENT PLATFORM (granted Mar 31, 2026; 2y 5m to grant)
Patent 12585884: DIALOGUE APPARATUS, DIALOGUE METHOD, AND PROGRAM (granted Mar 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 77%
With Interview: 95% (+18.2%)
Median Time to Grant: 3y 1m
PTA Risk: Moderate

Based on 1,144 resolved cases by this examiner. Grant probability is derived from the career allow rate.
