Prosecution Insights
Last updated: April 19, 2026
Application No. 18/718,637

DISPLAY DEVICE AND CONTROL METHOD THEREFOR

Non-Final OA (§103)
Filed: Jun 11, 2024
Examiner: MOHAMMED, ASSAD
Art Unit: 2691
Tech Center: 2600 — Communications
Assignee: LG Electronics Inc.
OA Round: 1 (Non-Final)
Grant Probability: 73% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 0m
With Interview: 84%

Examiner Intelligence

Career Allow Rate: 73%, above average (430 granted / 587 resolved; +11.3% vs TC avg)
Interview Lift: +11.1% (moderate), measured on resolved cases with an interview
Typical Timeline: 3y 0m average prosecution
Career History: 611 total applications across all art units; 24 currently pending

Statute-Specific Performance

§101: 7.3% (-32.7% vs TC avg)
§103: 67.5% (+27.5% vs TC avg)
§102: 7.8% (-32.2% vs TC avg)
§112: 9.5% (-30.5% vs TC avg)
Compared against the Tech Center average estimate • Based on career data from 587 resolved cases
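The headline figures in this report are simple ratios over the examiner's career counts; as a quick sanity check, here is a minimal sketch of the arithmetic using only the numbers shown above:

```python
# Career allow rate: grants as a share of resolved cases.
granted = 430
resolved = 587
allow_rate_pct = round(100 * granted / resolved)  # -> 73

# With-interview probability: the base rate plus the reported
# interview lift of +11.1 percentage points.
with_interview_pct = round(allow_rate_pct + 11.1)  # -> 84

print(allow_rate_pct, with_interview_pct)
```

The 73% and 84% shown on this page are consistent with these counts once rounded to whole percentage points.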

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

1. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

2. Claims 1 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Oh et al. (US 2017/0103735) in view of Suk et al. (US 2019/0355328) in further view of Alameh et al. (US 2021/0344560).
Regarding claim 1, Oh teaches a display device comprising: a communicator configured to achieve communication with the outside; a microphone configured to receive utterance of a user; a display including a display area to output content; a motor configured to control the display to expose a partial area of the display area to the outside; and a controller configured to control the communicator, the display, and the motor, wherein the controller is configured to: change a mode of the display to one of a hidden view, a first partial view, a second partial view, and a full view (see fig. 1-5, ¶ 0035-0036, 0041-0043, 0050, 0061, 0070, 0093-0094. The display has a flexible screen that can be rolled up or rolled down. As shown in fig. 2 and 4, a motor rolls the screen up or down based on user input. The display device has a user interface unit and also includes a voice sensor in communication with the display device.); wherein the hidden view is a state where the display area is not exposed to the outside, the first partial view is a state where a first area of the display area is exposed to the outside, the second partial view is a state where a second area of the display area is exposed to the outside, and the full view is a state where the display area is maximally exposed to the outside (see fig. 1-5, 10, ¶ 0035-0036, 0041-0043, 0050, 0061, 0070, 0077-0079, 0093-0094. An appropriate size of a screen may be changed depending on the size or quantity of the content or information provided by the display device. By adjusting the size of the display projected (or rolled out) or expanded from the body, the display device can control the size of the screen formed by the display as well. The screen size can be adjusted as presented in fig. 5 and 10. The display device operates in the prescribed mode and can change among the hidden, partial, and full views as it moves through each display mode.).
Oh teaches a display device that can perform different viewing modes for the presented screen, and Oh also includes a voice sensor in the display device. However, Oh does not disclose: recognize the received user's utterance; when the utterance recognition is successful, change the mode of the display to the first partial view or the second partial view, and output a result regarding the utterance recognition in the first partial view or the second partial view; and when the utterance recognition fails, change the mode of the display to the second partial view, and output a virtual keyboard in the second partial view.

Suk teaches: recognize the received user's utterance; when the utterance recognition is successful, change the mode of the display to the first partial view or the second partial view, and output a result regarding the utterance recognition in the first partial view or the second partial view (see fig. 24, ¶ 0358-0359, 0381, 0386-0387. Suk discloses a display device that performs screen adjustments between partial-screen and full-screen views, activated by voice commands that change the screen view.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Oh to incorporate recognized voice commands to operate the display's screen adjustments. The modification provides voice-activated commands to adjust the screen display.

Alameh teaches: when the utterance recognition fails, change the mode of the display to the second partial view, and output a virtual keyboard in the second partial view (see fig. 4, ¶ 0055. This is the operation in which the computing device fails to detect an audio or voice command due to environmental noise; the system then prompts the user to enter text input by displaying a keyboard on the screen. The screen presents a partial view with the keyboard in order to permit viewing of the text.). Alameh does not describe a change of mode to a second partial view as such; however, Alameh changes modes from voice to keyboard and provides the keyboard in a partial view. In the combination of Alameh with Oh and Suk, adjusting the screen while presenting a virtual keyboard after a failed voice command would be a plausible combination of these features. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Oh and Suk to incorporate voice-command failure handling and present a keyboard for text entry. The modification provides voice recognition and, on voice failure, an activated keyboard on the screen.

Regarding claim 15, Oh teaches a method for controlling a display device including a display with a display area exposed to the outside changeable in size, the method comprising: changing a mode of the display to one of a hidden view, a first partial view, a second partial view, and a full view (see fig. 1-5, ¶ 0035-0036, 0041-0043, 0050, 0061, 0070, 0093-0094. The display has a flexible screen that can be rolled up or rolled down. As shown in fig. 2 and 4, a motor rolls the screen up or down based on user input. The display device has a user interface unit and also includes a voice sensor in communication with the display device.); wherein the hidden view is a state where the display area is not exposed to the outside, the first partial view is a state where a first area of the display area is exposed to the outside, the second partial view is a state where a second area of the display area is exposed to the outside, and the full view is a state where the display area is maximally exposed to the outside (see fig. 1-5, 10, ¶ 0035-0036, 0041-0043, 0050, 0061, 0070, 0077-0079, 0093-0094. An appropriate size of a screen may be changed depending on the size or quantity of the content or information provided by the display device. By adjusting the size of the display projected (or rolled out) or expanded from the body, the display device can control the size of the screen formed by the display as well. The screen size can be adjusted as presented in fig. 5 and 10. The display device operates in the prescribed mode and can change among the hidden, partial, and full views as it moves through each display mode.).

Oh teaches a display device that can perform different viewing modes for the presented screen, and Oh also includes a voice sensor in the display device. However, Oh does not disclose: receiving utterance of a user; recognizing the received user's utterance; when the utterance recognition is successful, changing the mode of the display to the first partial view or the second partial view; outputting a result regarding the utterance recognition in the first partial view or the second partial view; when the utterance recognition fails, changing the mode of the display to the second partial view; and outputting a virtual keyboard in the second partial view.

Suk teaches: receiving utterance of a user; recognizing the received user's utterance; when the utterance recognition is successful, changing the mode of the display to the first partial view or the second partial view; outputting a result regarding the utterance recognition in the first partial view or the second partial view (see fig. 24, ¶ 0358-0359, 0381, 0386-0387. Suk discloses a display device that performs screen adjustments between partial-screen and full-screen views, activated by voice commands that change the screen view.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Oh to incorporate recognized voice commands to operate the display's screen adjustments. The modification provides voice-activated commands to adjust the screen display.

Alameh teaches: when the utterance recognition fails, changing the mode of the display to the second partial view; and outputting a virtual keyboard in the second partial view (see fig. 4, ¶ 0055. This is the operation in which the computing device fails to detect an audio or voice command due to environmental noise; the system then prompts the user to enter text input by displaying a keyboard on the screen. The screen presents a partial view with the keyboard in order to permit viewing of the text.). Alameh does not describe a change of mode to a second partial view as such; however, Alameh changes modes from voice to keyboard and provides the keyboard in a partial view. In the combination of Alameh with Oh and Suk, adjusting the screen while presenting a virtual keyboard after a failed voice command would be a plausible combination of these features. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Oh and Suk to incorporate voice-command failure handling and present a keyboard for text entry. The modification provides voice recognition and, on voice failure, an activated keyboard on the screen.

3. Claims 2, 3, 4, 5, 6, 7, 8, 9, 11, 12, 13, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Oh et al. (US 2017/0103735) in view of Suk et al. (US 2019/0355328) in further view of Alameh et al. (US 2021/0344560).
Regarding claim 2, Oh and Alameh do not teach the display device of claim 1, wherein the controller is configured to: when the recognized user's utterance includes a name of a device, transmit a control signal corresponding to the recognized user's utterance to the device; change the mode of the display to the first partial view; and output a message corresponding to the control signal in the first partial view.

Suk teaches wherein the controller is configured to: when the recognized user's utterance includes a name of a device, transmit a control signal corresponding to the recognized user's utterance to the device; change the mode of the display to the first partial view; and output a message corresponding to the control signal in the first partial view (see fig. 24, ¶ 0375-0381. The voice command provides an utterance that includes the name of a device (e.g., "Is there anything fun on TV now?") and causes the display to change mode to a viewing mode based on the message being provided.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Oh and Alameh to incorporate voice commands and to display a corresponding text message on the screen in a partial view. The modification provides voice recognition, recognizing the command and providing the message in a screen mode view.

Regarding claim 3, Oh teaches the display device of claim 1, wherein the controller is configured to: when the recognized user's utterance includes a program title, search for the program title; change the mode of the display to the second partial view; and output search results in the second partial view (see fig. 11, M1-M3, ¶ 0107-0110. The mode changes in correlation with a title, or with a title and information corresponding to the title.).
Regarding claim 4, Oh and Alameh do not teach the display device of claim 3, wherein the controller is configured to: receive a control signal of selecting a first search result among the search results; change the mode of the display to the full view; and output first content corresponding to the first search result in the full view.

Suk teaches wherein the controller is configured to: receive a control signal of selecting a first search result among the search results; change the mode of the display to the full view; and output first content corresponding to the first search result in the full view (see fig. 24, ¶ 0375-0381. The voice command provides an utterance that includes the name of a device (e.g., "Is there anything fun on TV now?") and causes the display to change mode to a viewing mode based on the message being provided.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Oh and Alameh to incorporate voice commands and to display a corresponding text message on the screen in a partial view. The modification provides voice recognition, recognizing the command and providing the message in a screen mode view.

Regarding claim 5, Oh and Alameh do not teach the display device of claim 1, wherein the controller is configured to: when the recognized user's utterance includes weather, change the mode of the display to the first partial view; output information on the weather in the first partial view; change the mode of the display to the second partial view; and output audio content related to the weather in the second partial view.

Suk teaches wherein the controller is configured to: when the recognized user's utterance includes weather, change the mode of the display to the first partial view; output information on the weather in the first partial view; change the mode of the display to the second partial view; and output audio content related to the weather in the second partial view (see fig. 24, ¶ 0375-0381. The voice command provides an utterance that includes the name of a device (e.g., "Is there anything fun on TV now?") and causes the display to change mode to a viewing mode based on the message being provided. Any utterance can present the corresponding visual context on the screen.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Oh and Alameh to incorporate voice commands and to display a corresponding text message on the screen in a partial view. The modification provides voice recognition, recognizing the command and providing the message in a screen mode view.

Regarding claim 6, Oh teaches the display device of claim 1, wherein the controller is configured to: when the recognized user's utterance includes a person's name, change the mode of the display to the first partial view; output information on the person's name in the first partial view; change the mode of the display to the second partial view; and display a content list related to the person's name in the second partial view (see fig. 11, M1-M3, ¶ 0107-0110. The mode changes in correlation with a title, or with a title and information corresponding to the title.).

Regarding claim 7, Oh and Alameh do not teach the display device of claim 1, wherein the controller is configured to: when performing a voice call with a first external terminal, change the mode of the display to the first partial view; and output voice call information in the first partial view.
Suk teaches wherein the controller is configured to: when performing a voice call with a first external terminal, change the mode of the display to the first partial view; and output voice call information in the first partial view (see fig. 24, ¶ 0116-0118, 0122, 0133-0134, 0144. The system changes the viewing mode based on an incoming call; in video call mode, the display is altered to provide visuals, simultaneously displaying the sender and the recipient on the screen during the video call. In call mode, information can be presented as shown in fig. 24.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Oh and Alameh to incorporate a voice call with an on-screen presentation of the call information. The modification provides the message in a screen mode view.

Regarding claim 8, Oh and Alameh do not teach the display device of claim 7, wherein the controller is configured to: when performing a video call with the first external terminal, change the mode of the display to the second partial view; and output video call information in the second partial view.

Suk teaches wherein the controller is configured to: when performing a video call with the first external terminal, change the mode of the display to the second partial view; and output video call information in the second partial view (see fig. 24, ¶ 0116-0118, 0122, 0133-0134, 0144. The system changes the viewing mode based on an incoming call; in video call mode, the display is altered to provide visuals, simultaneously displaying the sender and the recipient on the screen during the video call. In call mode, information can be presented as shown in fig. 24.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Oh and Alameh to incorporate a voice call with an on-screen presentation of the call information. The modification provides the message in a screen mode view.

Regarding claim 9, Oh and Alameh do not teach the display device of claim 8, wherein the controller is configured to: when performing a video conference with a plurality of external terminals, change the mode of the display to the full view; and output video conference information in the full view.

Suk teaches wherein the controller is configured to: when performing a video conference with a plurality of external terminals, change the mode of the display to the full view; and output video conference information in the full view (see fig. 24, ¶ 0116-0118, 0122, 0133-0134, 0144. The system changes the viewing mode based on an incoming call; in video call mode, the display is altered to provide visuals, simultaneously displaying the sender and the recipient on the screen during the video call. In call mode, information can be presented as shown in fig. 24.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Oh and Alameh to incorporate a voice call with an on-screen presentation of the call information. The modification provides the message in a screen mode view.

Regarding claim 11, Oh and Alameh do not teach the display device of claim 1, wherein the controller is configured to: when receiving a first message from a second external terminal, change the mode of the display to the first partial view; output the first message in the first partial view; when receiving a second message from the second external terminal after outputting the first message, change the mode of the display to the second partial view; and output the first message and the second message in the second partial view.
Suk teaches wherein the controller is configured to: when receiving a first message from a second external terminal, change the mode of the display to the first partial view; output the first message in the first partial view; when receiving a second message from the second external terminal after outputting the first message, change the mode of the display to the second partial view; and output the first message and the second message in the second partial view (see fig. 24, ¶ 0375-0381. The voice command provides an utterance that includes the name of a device (e.g., "Is there anything fun on TV now?") and causes the display to change mode to a viewing mode based on the message being provided. Any utterance can present the corresponding visual context on the screen.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Oh and Alameh to incorporate voice commands and to display a corresponding text message on the screen in a partial view. The modification provides voice recognition, recognizing the command and providing the message in a screen mode view.

Regarding claim 12, Oh teaches the display device of claim 11, wherein the controller is configured to, when a first time period elapses after the first message and the second message are output, change the mode of the display to the hidden view (see fig. 10-11, M1-M3, ¶ 0082-0089, 0107-0110. The mode changes in correlation with a title, or with a title and information corresponding to the title. The system has a scheduled time for content viewing and can perform the viewing process based on the scheduled broadcast time.).

Regarding claim 13, Oh and Alameh do not teach the display device of claim 11, wherein the controller is configured to: when a third message is received from a third external terminal after the first message and the second message are output, change the mode of the display to the hidden view; change the mode of the display to the first partial view; and output the third message in the first partial view.

Suk teaches wherein the controller is configured to: when a third message is received from a third external terminal after the first message and the second message are output, change the mode of the display to the hidden view; change the mode of the display to the first partial view; and output the third message in the first partial view (see fig. 24, ¶ 0375-0381. The voice command provides an utterance that includes the name of a device (e.g., "Is there anything fun on TV now?") and causes the display to change mode to a viewing mode based on the message being provided. Any utterance can present the corresponding visual context on the screen.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Oh and Alameh to incorporate voice commands and to display a corresponding text message on the screen in a partial view. The modification provides voice recognition, recognizing the command and providing the message in a screen mode view.

Regarding claim 14, Oh teaches the display device of claim 1, wherein the second area is wider than the first area (see fig. 10, M1-M4, ¶ 0082-0084. The system provides different viewing modes in which one area of the screen is wider than the first area of the screen.).

4. Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Oh et al. (US 2017/0103735) in view of Suk et al. (US 2019/0355328) in further view of Alameh et al. (US 2021/0344560) in further view of Sejpal et al. (US 2022/0270594).
Regarding claim 10, Oh and Alameh do not teach the display device of claim 1, wherein the display device includes a plurality of AI engines, wherein the controller is configured to change the mode of the display to one of the first partial view, the second partial view, and the full view based on an AI engine result regarding the recognized user's utterance.

Suk teaches wherein the controller is configured to change the mode of the display to one of the first partial view, the second partial view, and the full view based on an AI engine result regarding the recognized user's utterance (see fig. 15, 24, 29, ¶ 0315, 0370-0381. The system receives the user's voice command, and the recognized voice command corresponds to an operation; the device then automatically changes the view type for the display screen.). Suk discloses a controller but does not disclose wherein the display device includes a plurality of AI engines. Sejpal discloses AI engines that recognize utterances from a user (see ¶ 0054). Sejpal is combined with Suk to incorporate AI engines that recognize user utterances. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Oh and Alameh to incorporate voice utterances that provide a command to alter the display screen. The modification provides voice recognition, recognizing the command and changing the display view type.

Conclusion

5. Any inquiry concerning this communication or earlier communications from the examiner should be directed to ASSAD MOHAMMED whose telephone number is (571)270-7253. The examiner can normally be reached 9:00AM-5:00PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Duc Nguyen, can be reached at 571-272-7503. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ASSAD MOHAMMED/
Examiner, Art Unit 2691

/DUC NGUYEN/
Supervisory Patent Examiner, Art Unit 2691
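For orientation, the claim 1 control flow at the center of this rejection (the flow the examiner maps onto Oh, Suk, and Alameh) can be sketched as a small state machine. This is an illustrative sketch of the claim language only; the function and flag names are hypothetical and do not come from the application.

```python
from enum import Enum

class ViewMode(Enum):
    HIDDEN = "hidden"           # display area not exposed (rolled in)
    FIRST_PARTIAL = "first"     # first area of the display exposed
    SECOND_PARTIAL = "second"   # second, wider area exposed (per claim 14)
    FULL = "full"               # display area maximally exposed

def on_utterance(recognized: bool, fits_first_view: bool) -> tuple:
    """Mode selection per claim 1: on successful recognition, output the
    result in a partial view; on failure, fall back to the second partial
    view with a virtual keyboard (the behavior the rejection cites Alameh
    for). Both arguments are hypothetical inputs for illustration."""
    if recognized:
        mode = ViewMode.FIRST_PARTIAL if fits_first_view else ViewMode.SECOND_PARTIAL
        return mode, "recognition result"
    return ViewMode.SECOND_PARTIAL, "virtual keyboard"
```

For example, a failed recognition always yields `(ViewMode.SECOND_PARTIAL, "virtual keyboard")` regardless of content size, which is the distinction the Office Action concedes Oh and Suk lack and relies on Alameh to supply.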

Prosecution Timeline

Jun 11, 2024
Application Filed
Jan 20, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604149: ELECTRONIC DEVICE AND METHOD THEREOF FOR OUTPUTTING AUDIO DATA
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12598441: AUDIO SIGNAL PROCESSING METHOD AND AUDIO SIGNAL PROCESSING APPARATUS
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12587801: RE-MIXING A COMPOSITE AUDIO PROGRAM FOR PLAYBACK WITHIN A REAL-WORLD VENUE
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12587774: SYSTEM AND METHOD OF ASSEMBLING A COMPRESSION TRIGGERED HEADSET POWER SAVING SYSTEM FOR AN AUDIO HEADSET
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12581240: Method and System for Determining Audio Channel Role of Sound Box, Electronic Device, and Storage Medium
Granted Mar 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 73%
With Interview: 84% (+11.1%)
Median Time to Grant: 3y 0m
PTA Risk: Low
Based on 587 resolved cases by this examiner. Grant probability derived from career allow rate.
