Prosecution Insights
Last updated: April 19, 2026
Application No. 18/730,725

DISPLAY DEVICE

Non-Final OA §103
Filed: Jul 19, 2024
Examiner: SHAH, PARAS D
Art Unit: 2653
Tech Center: 2600 — Communications
Assignee: LG Electronics Inc.
OA Round: 1 (Non-Final)

Grant Probability: 74% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 9m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 74% — above average (474 granted / 645 resolved; +11.5% vs TC avg)
Interview Lift: +31.1% among resolved cases with an interview
Avg Prosecution: 3y 9m (24 currently pending)
Total Applications: 669 across all art units

Statute-Specific Performance

§101: 20.3% (-19.7% vs TC avg)
§103: 44.9% (+4.9% vs TC avg)
§102: 13.8% (-26.2% vs TC avg)
§112: 10.5% (-29.5% vs TC avg)

TC averages are estimates • Based on career data from 645 resolved cases
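As a sanity check on the table, the Tech Center baseline implied by each row can be recovered by subtracting the delta from the examiner's rate. This is a minimal sketch, assuming "vs TC avg" means examiner rate minus TC average (the dashboard does not spell this out):

```python
# Per-statute rates and their "vs TC avg" deltas, copied from the table above.
stats = {
    "101": (20.3, -19.7),
    "103": (44.9, +4.9),
    "102": (13.8, -26.2),
    "112": (10.5, -29.5),
}

# Implied TC baseline for each statute: examiner rate minus delta.
baselines = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
print(baselines)
```

Notably, every row implies the same 40.0% baseline, which would be consistent with the Tech Center average being a single flat estimate rather than a per-statute figure.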

Office Action

DETAILED ACTION

1. This communication is in response to the Application filed on 7/19/2024. Claims 1-15 are pending and have been examined.

Allowable Subject Matter

2. Claims 4-11 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Claim Rejections - 35 USC § 103

3. Claims 1-2, 12 are rejected under 35 U.S.C. 103 as being unpatentable over Srinivasan (IEEE 2011; hereinafter Srinivasan) in view of Trompf (US 5583968; hereinafter TROMPF).

As per claim 1, Srinivasan (Title: USING A REMOTE WIRELESS MICROPHONE FOR SPEECH ENHANCEMENT IN NON-STATIONARY NOISE) discloses “A display device, comprising: a speaker; a wireless communication interface configured to communicate with a peripheral device; a microphone configured to record surrounding sounds; and a controller configured to perform [ user voice recognition ] on data recorded by the microphone, wherein the controller is configured to compensate the user voice recognition using audio data received from the peripheral device (Srinivasan, Title; Fig. 1, [Introduction, para 2-3], use of a remote wireless microphone (RWM) <read on ‘a peripheral device’> placed close to a noise source, which transmits relevant information to the primary device <read on the ‘display device’>, where it is used for noise reduction … The RWM is in this case placed near the TV, and transmits its signal to the computer <read on ‘display device .. speaker’>, where it is combined with the local microphone signal to perform noise reduction <read on to ‘compensate the user voice recognition using audio data received from the peripheral device’>).”

Srinivasan does not explicitly disclose “user voice recognition ..” However, the feature is taught by TROMPF (Title: Noise reduction for speech recognition). In the same field of endeavor, TROMPF teaches: [Abstract] “A neural network for noise reduction for speech recognition in a noisy environment.” Therefore, it would have been obvious to one of ordinary skill in the art at the time before the effective filing date of the claimed invention to incorporate the teachings of TROMPF in a system (as taught by Srinivasan) to provide neural network based speech recognition for a noise-suppressed speech signal for any downstream application.

As per claim 2 (dependent on claim 1), Srinivasan in view of TROMPF further discloses “wherein the audio data includes at least one of recording data recorded by the peripheral device and sound source data being played by the peripheral device (Srinivasan, [Introduction, para 2-3], use of a remote wireless microphone (RWM) <read on ‘the peripheral device’ and the ‘recording data’> placed close to a noise source, which transmits relevant information to the primary device, where it is used for noise reduction).”

As per claim 12 (dependent on claim 1), Srinivasan in view of TROMPF further discloses “wherein the peripheral device comprises at least one of a remote control device, a mobile terminal, and a Bluetooth speaker that transmits a control signal to the display device (Srinivasan, [Introduction, para 2-3], a remote wireless microphone (RWM) .. which transmits relevant information to the primary device <where RWM reads on a key component of many diverse devices including a remote control device, a mobile terminal, …>).”

4. Claims 3, 14 are rejected under 35 U.S.C. 103 as being unpatentable over Srinivasan in view of TROMPF, and further in view of Heo (US 20170154625; hereinafter HEO).

As per claim 3 (dependent on claim 1), Srinivasan in view of TROMPF further discloses “if the display device is not linked with the peripheral device, recognize a user voice from the data recorded by the microphone using first preprocessing data, if the display device is linked with the peripheral device, obtain second preprocessing data based on audio data received from the peripheral device, and recognize the user voice from the data recorded by the microphone using the second preprocessing data (Srinivasan, Fig. 1).” Srinivasan in view of TROMPF does not explicitly disclose the limitations of claim 3. However, the feature is taught by HEO (Title: Video display device and operation method therefor). In the same field of endeavor, HEO teaches: [Abstract] “receiving at least one voice signal for a user voice acquired by at least one peripheral device .. and a voice signal for the user voice acquired by the video display device; comparing the plurality of acquired voice signals with each other; determining a voice signal subjected to voice recognition based on the comparison result; recognizing the user voice based on the determined voice signal ..” and [0176] “The control unit of the video display device may compare qualities of the plurality of acquired voice signals based on at least one of a user voice range level and a noise level of each of the plurality of acquired voice signals <read on linked or not linked, which is subject to BRI>.” Therefore, it would have been obvious to one of ordinary skill in the art at the time before the effective filing date of the claimed invention to incorporate the teachings of HEO in a system (as taught by Srinivasan and TROMPF) to determine which of the acquired voice signals is to be used for voice recognition based on the acquired voice quality (voice and noise levels), which reads on the linked or not-linked condition.

As per claim 14 (dependent on claim 1), Srinivasan in view of TROMPF further discloses “wherein the controller is configured to extract user commands from each of the data recorded by the microphone and the data recorded by the peripheral device, and [ recognize user commands that match at least two of the extracted user commands as the user's voice ] (Srinivasan, Fig. 1, [Introduction, para 2-3], a remote wireless microphone .. the local microphone <read on to receive user voice commands>).” Srinivasan in view of TROMPF does not explicitly disclose “recognize user commands that match at least two of the extracted user commands as the user's voice ..” However, the feature is taught by HEO (Title: Video display device and operation method therefor). In the same field of endeavor, HEO teaches: [0212] “the control unit may search for the voice signal matching the acquired voice signal from at least one of the database stored in the storage unit and the database of the network server connected through the network interface unit, and may acquire the control operation corresponding to the found voice signal” and [Abstract] “determining a voice signal subjected to voice recognition ..” Therefore, it would have been obvious to one of ordinary skill in the art at the time before the effective filing date of the claimed invention to incorporate the teachings of HEO in a system (as taught by Srinivasan and TROMPF) to search different (extracted) voice databases for a match of the user command to determine the user's voice, ensuring more accurate voice command recognition for the target application.

5. Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Srinivasan in view of TROMPF, and further in view of Ganeshkumar (US 20180270565; hereinafter Ganeshkumar).

As per claim 13 (dependent on claim 1), Srinivasan in view of TROMPF further discloses “extract user commands from each of the data recorded by the microphone and the data recorded by the peripheral device, and recognize [ a command with the largest size ] among the extracted user commands as the user voice (HEO, [Abstract], receiving at least one voice signal for a user voice acquired by at least one peripheral device .. and a voice signal for the user voice acquired by the video display device; comparing the plurality of acquired voice signals with each other; determining a voice signal subjected to voice recognition based on the comparison result; [0176], The control unit of the video display device may compare qualities of the plurality of acquired voice signals based on at least one of a user voice range level <read on ‘size’> and a noise level of each of the plurality of acquired voice signals).” Srinivasan in view of TROMPF does not explicitly disclose “a command with the largest size ..” However, the feature is taught by Ganeshkumar (Title: Audio signal processing for noise reduction). In the same field of endeavor, Ganeshkumar teaches: [0005] “compare the primary signal and the secondary signal, and provide a selected signal based upon the primary signal, the secondary signal, and the comparison” and [0006] “to compare the primary signal and the secondary signal by signal energies <read on size which is subject to BRI>.” Therefore, it would have been obvious to one of ordinary skill in the art at the time before the effective filing date of the claimed invention to incorporate the teachings of Ganeshkumar in a system (as taught by Srinivasan and TROMPF) to measure the signal energies of the audio signals captured by the primary microphone and the secondary microphone and select the one with the largest energy/size as the user voice to be recognized by voice recognition for any subsequent applications with enhanced accuracy.

6. Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Srinivasan in view of TROMPF, and further in view of Binder et al. (US 20140222436; hereinafter BINDER).

As per claim 15 (dependent on claim 1), Srinivasan in view of TROMPF further discloses “compensate the user voice recognition using audio data received from the peripheral device if [ a wakeup word is recognized ] (Srinivasan, [Introduction, para 2-3], use of a remote wireless microphone (RWM) <read on ‘the peripheral device’> placed close to a noise source, which transmits relevant information to the primary device <under any preset condition>, where it is used for noise reduction <read on ‘compensate the user voice recognition’>).” Srinivasan in view of TROMPF does not explicitly disclose “a wakeup word is recognized ..” However, the feature is taught by BINDER (Title: Voice trigger for a digital assistant). In the same field of endeavor, BINDER teaches: [0007] “the trigger sound detector recognizes whether a voice input includes a predefined pattern (e.g., a sonic pattern matching the words “Hey, SIRI”) <read on ‘wakeup word’>.” Therefore, it would have been obvious to one of ordinary skill in the art at the time before the effective filing date of the claimed invention to incorporate the teachings of BINDER in a system (as taught by Srinivasan and TROMPF) to detect and recognize a wakeup word and activate the display device for full-capacity functions and operations, including compensating the voice recognition operation by adding the remote microphone data.

Conclusion

7. Any inquiry concerning this communication or earlier communications from the examiner should be directed to FENG-TZER TZENG, whose telephone number is 571-272-4609. The examiner can normally be reached M-F (9:00-5:00). The fax phone number for the organization where this application or proceeding is assigned is 571-273-4609.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Paras Shah (SPE), can be reached at 571-270-1650. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions about access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/FENG-TZER TZENG/
2/5/2026
Primary Examiner, Art Unit 2653

Prosecution Timeline

Jul 19, 2024
Application Filed
Jan 24, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586591
SOUND SIGNAL DECODING METHOD, SOUND SIGNAL DECODER, PROGRAM, AND RECORDING MEDIUM
2y 5m to grant • Granted Mar 24, 2026
Patent 12579367
TWO-TOWER NEURAL NETWORK FOR CONTENT-AUDIENCE RELATIONSHIP PREDICTION
2y 5m to grant • Granted Mar 17, 2026
Patent 12579360
LEARNING SUPPORT APPARATUS FOR CREATING MULTIPLE-CHOICE QUIZ
2y 5m to grant • Granted Mar 17, 2026
Patent 12562173
WEARABLE DEVICE CONTROL BASED ON VOICE COMMAND OF VERIFIED USER
2y 5m to grant • Granted Feb 24, 2026
Patent 12559026
VEHICLE AND CONTROL METHOD THEREOF
2y 5m to grant • Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 74%
With Interview: 99% (+31.1%)
Median Time to Grant: 3y 9m
PTA Risk: Low
Based on 645 resolved cases by this examiner. Grant probability derived from career allow rate.
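The headline projections can be approximately reproduced from the raw career counts. This is a minimal sketch, assuming the grant probability is simply the career allow rate and the with-interview figure is the allow rate plus the interview lift, capped at 99% (the dashboard's actual model is not disclosed):

```python
# Hypothetical reconstruction of the projection figures from career counts.
granted = 474          # applications granted by this examiner
resolved = 645         # total resolved cases
interview_lift = 31.1  # percentage-point lift among interviewed cases

allow_rate = 100.0 * granted / resolved  # about 73.5%; shown as 74% on the card
with_interview = min(allow_rate + interview_lift, 99.0)  # capped at 99%

print(f"Career allow rate: {allow_rate:.1f}%")
print(f"With interview (capped): {with_interview:.0f}%")
```

The uncapped sum (about 104.6%) exceeding 100% suggests the displayed 99% is a ceiling rather than a direct estimate.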
