Prosecution Insights
Last updated: April 19, 2026
Application No. 18/642,118

DISPLAY DEVICE AND OPERATING METHOD THEREFOR

Status: Final Rejection (§103)
Filed: Apr 22, 2024
Examiner: ANWAH, OLISA
Art Unit: 2692
Tech Center: 2600 — Communications
Assignee: Samsung Electronics Co., Ltd.
OA Round: 2 (Final)

Grant Probability: 89% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 1m
Grant Probability With Interview: 93%

Examiner Intelligence

Career Allow Rate: 89% (1036 granted / 1162 resolved), +27.2% vs TC avg (above average)
Interview Lift: +4.2% (minimal lift), based on resolved cases with interview
Avg Prosecution: 2y 1m (fast prosecutor), 38 currently pending
Total Applications: 1200 across all art units (career history)
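
As a quick sanity check, the headline figures above follow directly from the raw counts; the short Python sketch below reproduces them (rounding to the nearest whole percent is assumed to match the dashboard's display).

```python
# Reproduce the examiner's headline figures from the raw counts shown above.
granted, resolved, total = 1036, 1162, 1200

allow_rate = granted / resolved        # career allow rate
pending = total - resolved             # applications still pending

print(f"Career allow rate: {allow_rate:.1%}")   # -> 89.2% (displayed as 89%)
print(f"Currently pending: {pending}")          # -> 38
```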

Statute-Specific Performance

§101: 4.5% (-35.5% vs TC avg)
§103: 42.0% (+2.0% vs TC avg)
§102: 29.1% (-10.9% vs TC avg)
§112: 5.0% (-35.0% vs TC avg)
Tech Center average is an estimate. Based on career data from 1162 resolved cases.
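
The "vs TC avg" deltas above are simply the examiner's per-statute figure minus the Tech Center average. The sketch below backs the implied TC average out of each pair; note that interpreting each percentage as a per-statute rejection rate is an assumption, since the dashboard does not define the metric.

```python
# Back out the implied Tech Center average from each per-statute figure above:
# delta = examiner_rate - tc_average, so tc_average = examiner_rate - delta.
figures = {
    "101": (4.5, -35.5),
    "103": (42.0, 2.0),
    "102": (29.1, -10.9),
    "112": (5.0, -35.0),
}

for statute, (examiner_rate, delta) in figures.items():
    implied_tc_avg = examiner_rate - delta
    print(f"Section {statute}: examiner {examiner_rate}%, "
          f"implied TC average {implied_tc_avg:.1f}%")
```

Every pair comes out to an implied benchmark of 40.0%, consistent with the dashboard applying a single Tech Center average estimate across statutes.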

Office Action

§103
DETAILED ACTION

Claim Rejections - 35 USC § 103

1. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

2. Claims 1, 2, 9, 11, 12, 15 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Kim, KR 101820369 (hereinafter Kim) in view of Young et al, U.S. Patent Application Publication No. 2022/0103607 (hereinafter Young).

Regarding claim 1, Kim discloses a display apparatus (see smartphone) comprising: a display; communication circuitry; memory storing one or more instructions; and at least one processor, including processing circuitry, configured to, individually or collectively, control the display apparatus to: receive a voice call request for a voice call (from Figure 4, see S300), in response to receiving the voice call request, determine whether audio data is being transmitted from the display apparatus to an audio input/output apparatus by the communication circuitry according to a Bluetooth communication protocol (from Figure 4, see S310), and in response to determining that the audio data is being transmitted from the display apparatus to the audio input/output apparatus by the communication circuitry, identify one or more audio input apparatuses other than the audio input/output apparatus, select a first audio input apparatus from among the one or more identified audio input apparatuses (from Figure 4, see S320), and activate the selected first audio input apparatus and receive a speech input for transmitting to a voice call counterpart apparatus via the activated first audio input apparatus and receive counterpart speech input from the voice call counterpart apparatus (from Figure 4, see S330).

Still on the issue of claim 1, Kim does not teach mixing the received counterpart speech input with the audio data being transmitted to the audio input/output apparatus. All the same, Young discloses mixing the received counterpart speech input (from paragraph 0091, see the audio content of the phone call can barge in on the audio stream from the smart TV) with the audio data being transmitted to the audio input/output apparatus (from paragraph 0082, see In some examples, rather than stopping, pausing, or muting the first audio playback PB1, upon a barge-in event, the first audio playback PB1 is mixed with the second audio playback associated with the isochronous data stream of the second source device, e.g., where the user can hear a mixed audio playback which includes both first and second audio playbacks (PB1, and PB2 at the same relative volume). In some example, after mixing the playback audio data of the two isochronous data streams, the devices of system 100 can be configured to alter or adjust the volume associated with the first audio playback PB1 data so that it is louder than the second audio playback PB2 data, such that the perceived volume of the first audio playback PB1 is louder than the perceived volume of the second audio playback PB2).

Therefore, it would have been obvious to one of ordinary skill in the art to modify Kim with mixing the received counterpart speech input with the audio data being transmitted to the audio input/output apparatus as taught by Young. This modification would have improved the system's convenience by allowing the user to listen to a tv program while receiving the audio related to a phone call as suggested by Young.

Regarding claim 2, Kim discloses the display apparatus of claim 1, wherein at least one processor is configured to, individually and/or collectively, control the display apparatus to, in response to determining that the audio data is not being transmitted by the communication circuitry, select the audio input/output apparatus as an audio input apparatus for receiving a speech input for transmitting to the voice call counterpart apparatus (from Figure 4, see s350).

Regarding claim 9, Kim discloses the display apparatus of claim 1, wherein the one or more other audio input apparatuses include at least one of a display apparatus built-in microphone (see Of course, the contents of the call of the user of the smartphone 10 are inputted by his / her microphone and transmitted to the other party by mobile communication), a microphone included in an external apparatus connected to the display apparatus, a microphone included in a remote control apparatus configured to control the display apparatus, and a microphone included in a smart device connected to the display apparatus.

Regarding claim 11, Kim discloses an operating method of a display apparatus (see smartphone), the operating method comprising: receiving a voice call request for a voice call (from Figure 4, see S300); in response to receiving the voice call request, determining whether audio data is being transmitted from the display apparatus to an audio input/output apparatus by communication circuitry according to a Bluetooth communication protocol (from Figure 4, see S310); in response to determining that the audio data is being transmitted from the display apparatus to the audio input/output apparatus by the communication circuitry, identifying one or more audio input apparatuses other than the audio input/output apparatus; selecting a first audio input apparatus from among the one or more identified audio input apparatuses (from Figure 4, see S320); and activating the selected first audio input apparatus, receiving a speech input for transmitting to a voice call counterpart apparatus via the activated first audio input apparatus and receiving counterpart speech input from the voice call counterpart apparatus (from Figure 4, see S330).

Still on the issue of claim 11, Kim does not teach mixing the received counterpart speech input with the audio data being transmitted to the audio input/output apparatus. All the same, Young discloses mixing the received counterpart speech input (from paragraph 0091, see the audio content of the phone call can barge in on the audio stream from the smart TV) with the audio data being transmitted to the audio input/output apparatus (from paragraph 0082, see In some examples, rather than stopping, pausing, or muting the first audio playback PB1, upon a barge-in event, the first audio playback PB1 is mixed with the second audio playback associated with the isochronous data stream of the second source device, e.g., where the user can hear a mixed audio playback which includes both first and second audio playbacks (PB1, and PB2 at the same relative volume). In some example, after mixing the playback audio data of the two isochronous data streams, the devices of system 100 can be configured to alter or adjust the volume associated with the first audio playback PB1 data so that it is louder than the second audio playback PB2 data, such that the perceived volume of the first audio playback PB1 is louder than the perceived volume of the second audio playback PB2).

Therefore, it would have been obvious to one of ordinary skill in the art to modify Kim with mixing the received counterpart speech input with the audio data being transmitted to the audio input/output apparatus as taught by Young. This modification would have improved the system's convenience by allowing the user to listen to a tv program while receiving the audio related to a phone call as suggested by Young.

Claim 12 is rejected for the same reasons as claim 2.

Regarding claim 15, Kim discloses a non-transitory computer-readable recording medium having recorded thereon one or more programs which, when executed by at least one processor of a display apparatus (see smartphone), cause the at least one processor to control the display apparatus to: receive a voice call request for a voice call (from Figure 4, see S300); in response to receiving the voice call request, determine whether audio data is being transmitted from the display apparatus to an audio input/output apparatus by communication circuitry according to a Bluetooth communication protocol (from Figure 4, see S310); and in response to determining that the audio data is being transmitted from the display apparatus to the audio input/output apparatus by the communication circuitry, identify one or more audio input apparatuses other than the audio input/output apparatus (see headset); select a first audio input apparatus (see speaker) from among the one or more identified audio input apparatuses (from Figure 4, see S320); and activate the selected first audio input apparatus, receive speech input for transmitting to a voice call counterpart apparatus via the activated first audio input apparatus and receive counterpart speech input from the voice call counterpart apparatus (from Figure 4, see S330).

Still on the issue of claim 15, Kim does not teach mixing the received counterpart speech input with the audio data being transmitted to the audio input/output apparatus. All the same, Young discloses mixing the received counterpart speech input (from paragraph 0091, see the audio content of the phone call can barge in on the audio stream from the smart TV) with the audio data being transmitted to the audio input/output apparatus (from paragraph 0082, see In some examples, rather than stopping, pausing, or muting the first audio playback PB1, upon a barge-in event, the first audio playback PB1 is mixed with the second audio playback associated with the isochronous data stream of the second source device, e.g., where the user can hear a mixed audio playback which includes both first and second audio playbacks (PB1, and PB2 at the same relative volume). In some example, after mixing the playback audio data of the two isochronous data streams, the devices of system 100 can be configured to alter or adjust the volume associated with the first audio playback PB1 data so that it is louder than the second audio playback PB2 data, such that the perceived volume of the first audio playback PB1 is louder than the perceived volume of the second audio playback PB2).

Therefore, it would have been obvious to one of ordinary skill in the art to modify Kim with mixing the received counterpart speech input with the audio data being transmitted to the audio input/output apparatus as taught by Young. This modification would have improved the system's convenience by allowing the user to listen to a tv program while receiving the audio related to a phone call as suggested by Young.

Regarding claim 16, the combination of Kim and Young teaches the audio data being transmitting from the display apparatus to the audio input/output apparatus by the communication circuitry comprises application audio data generated by an application executed by the display apparatus (from paragraph 0074 of Young, see audio data associated with a television show, movie, audio broadcast, podcast, or other media program with associated audio data).

3. Claims 3, 4, 7, 13 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Kim combined with Young in further view of Maru et al, JP 2006254148 (hereinafter Maru).

Regarding claim 3, the combination of Kim and Young does not teach obtaining an audio input apparatus list in which priorities are assigned to the one or more audio input apparatuses, and select, as the first audio input apparatus, an audio input apparatus with a highest priority in the audio input apparatus list. All the same, Maru discloses obtaining an audio input apparatus list (from Figure 6, see 49) in which priorities are assigned to the one or more audio input apparatuses, and select, as the first audio input apparatus, an audio input apparatus with a highest priority (see highest priority) in the audio input apparatus list. Therefore it would have been obvious to one of ordinary skill in the art to further modify the combination of Kim and Young with the list of Maru. This modification would have improved flexibility by providing more alternatives as suggested by Maru.

Regarding claim 4, the combination of Kim and Young as modified by Maru discloses wherein at least one processor is configured to, individually and/or collectively, control the display apparatus to determine the priorities, based on an apparatus characteristic of each of the one or more other audio input apparatuses included in the audio input apparatus list (from Maru, see the power source of the headset device 4 may not be turned on, and the headset device 4 is stored in the trunk room. If the connection is not established with the device set at the highest priority, the control unit 44 selects the subsequent priority device and establishes the connection with the device. Thereby, in this case, the control unit 44 establishes a connection with the car navigation device 3).

Regarding claim 7, the combination of Kim and Young as modified by Maru discloses controlling the display apparatus to: provide a graphical user interface for setting the priorities of the one or more other audio input apparatuses included in the audio input apparatus list, and determine the priorities, based on input received through the graphical user interface (from Maru, see the priority setting in the hands-free system 1 is accepted by the user selecting a menu by operating the operation unit 48).

Claim 13 is rejected for the same reasons as claim 3. Claim 14 is rejected for the same reasons as claim 4.

4. Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Kim combined with Young in further view of Han et al, U.S. Patent Application Publication No. 2020/0162205 (hereinafter Han).

Regarding claim 10, the combination of Kim and Young does not teach the at least one processor is configured to, individually and/or collectively, control the display apparatus to determine whether the audio data is being transmitted from the display apparatus to the audio input/output apparatus by the communication circuitry by identifying whether the communication interface operates according to a Bluetooth advanced audio distribution (A2DP) profile. All the same, Han discloses the at least one processor is configured to, individually and/or collectively, control the display apparatus to determine whether the audio data is being transmitted from the display apparatus to the audio input/output apparatus by the communication circuitry by identifying whether the communication interface operates according to a Bluetooth advanced audio distribution (A2DP) profile (from paragraph 0068, see may detect the streaming state information from the A2DP through the first communication circuit). Therefore, it would have been obvious to one of ordinary skill in the art to further modify the combination of Kim and Young wherein the at least one processor is configured to, individually and/or collectively, control the display apparatus to determine whether the audio data is being transmitted from the display apparatus to the audio input/output apparatus by the communication circuitry by identifying whether the communication interface operates according to a Bluetooth advanced audio distribution (A2DP) profile as taught by Han. This modification would have improved flexibility by providing different Bluetooth protocols as suggested by Han.

5. Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Kim combined with Young in further view of Shin et al, U.S. Patent Application Publication No. 2016/0360332 (hereinafter Shin).

Regarding claim 8, the combination of Kim and Young does not teach wherein at least one processor is configured to, individually and/or collectively, control the display apparatus to: identify one or more other audio input apparatuses registered in the display apparatus, test performance of each of the one or more identified audio input apparatuses in real time, and based on a result of the test, select the first audio input apparatus. All the same, Shin discloses wherein at least one processor is configured to, individually and/or collectively, control the display apparatus to: identify one or more other audio input apparatuses registered in the display apparatus (from Figure 6, see 603), test performance of each of the one or more identified audio input apparatuses in real time (from Figure 6, see 605), and based on a result of the test, select the first audio input apparatus (from Figure 6, see 609). Therefore, it would have been obvious to one of ordinary skill in the art to further modify the combination of Kim and Young wherein the at least one processor is configured to, individually and/or collectively, control the display apparatus to: identify one or more other audio input apparatuses registered in the display apparatus, test performance of each of the one or more identified audio input apparatuses in real time, and based on a result of the test, select the first audio input apparatus as taught by Shin. This modification would have improved flexibility by providing more alternatives as suggested by Shin.

Allowable Subject Matter

6. Claims 5 and 6 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Response to Arguments

7. Applicant's arguments have been considered but are deemed to be moot in view of the new grounds of rejection.

Conclusion

8. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

9. Any inquiry concerning this communication or earlier communications from the examiner should be directed to OLISA ANWAH whose telephone number is 571-272-7533. The examiner can normally be reached Monday to Friday from 8.30 AM to 6 PM. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Carolyn Edwards can be reached on 571-270-7136. The fax phone numbers for the organization where this application or proceeding is assigned are 571-273-8300 for regular communications and 571-273-8300 for After Final communications. Any inquiry of a general nature or relating to the status of this application or proceeding should be directed to the receptionist whose telephone number is 571-272-2600.

Olisa Anwah
Patent Examiner
March 6, 2026
/OLISA ANWAH/
Primary Examiner, Art Unit 2692
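
For readers mapping the rejection back to the claim language, the sketch below paraphrases the call-routing flow as the Office Action characterizes claims 1-3 and 10 (Kim's Figure 4 flow, Young's mixing, Maru's priority list, Han's A2DP check). It is a minimal illustration only: every class, function, and field name in it is hypothetical and comes from neither the application nor the cited references.

```python
# Hypothetical sketch of the claimed call-routing flow as characterized in the
# rejection above. Names and data structures are illustrative assumptions, not
# code from the application or from Kim, Young, Maru, or Han.
from dataclasses import dataclass
from typing import List


@dataclass
class AudioInputDevice:
    name: str
    priority: int  # lower value = higher priority (assumed convention for Maru's list)


@dataclass
class DisplayApparatus:
    # True when the communication circuitry is streaming audio to an external
    # audio input/output apparatus over the Bluetooth A2DP profile (claim 10 / Han).
    a2dp_streaming: bool
    bt_audio_io_device: str
    other_inputs: List[AudioInputDevice]

    def select_call_microphone(self) -> str:
        """Pick the microphone to use when a voice call request arrives (claims 1-3)."""
        if self.a2dp_streaming:
            # Audio is already going to the Bluetooth I/O apparatus, so choose a
            # different input device, highest priority first.
            candidates = sorted(self.other_inputs, key=lambda d: d.priority)
            if not candidates:
                raise RuntimeError("no alternative audio input apparatus available")
            return candidates[0].name
        # No ongoing stream: the Bluetooth I/O apparatus itself serves as the
        # call microphone (claim 2).
        return self.bt_audio_io_device

    def mix_call_audio(self, counterpart_speech: bytes, app_audio: bytes) -> bytes:
        """Mix counterpart speech into the ongoing stream (Young, paras. 0082/0091)."""
        # Placeholder "mix": a real implementation would sum PCM samples and
        # rebalance volume so the call is louder than the programme audio.
        return counterpart_speech + app_audio


if __name__ == "__main__":
    tv = DisplayApparatus(
        a2dp_streaming=True,
        bt_audio_io_device="Bluetooth headset",
        other_inputs=[
            AudioInputDevice("remote-control microphone", priority=1),
            AudioInputDevice("built-in microphone", priority=2),
        ],
    )
    print(tv.select_call_microphone())  # -> "remote-control microphone"
```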

Prosecution Timeline

Apr 22, 2024: Application Filed
Oct 29, 2025: Non-Final Rejection — §103
Dec 22, 2025: Applicant Interview (Telephonic)
Dec 22, 2025: Examiner Interview Summary
Feb 25, 2026: Response Filed
Mar 06, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604130
HEARING DEVICE WITH A BLEEDING CIRCUIT FOR DELIVERING MESSAGES TO A CHARGING DEVICE
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12598710
Terminal Device
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12597251
VIDEO FRAMING BASED ON TRACKED CHARACTERISTICS OF MEETING PARTICIPANTS
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12596515
FIRST DEVICE, COMMUNICATION SERVER, SECOND DEVICE AND METHODS IN A COMMUNICATIONS NETWORK
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12598437
EARPHONES AND EARPHONE SYSTEM
Granted Apr 07, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 89%
With Interview: 93% (+4.2%)
Median Time to Grant: 2y 1m
PTA Risk: Moderate
Based on 1162 resolved cases by this examiner. Grant probability derived from career allow rate.
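
The "With Interview" projection above is just the base grant probability plus the interview lift, treated as additive percentage points (an assumption that matches how the dashboard presents the numbers); a minimal check:

```python
# Combine the career allow rate with the interview lift to reproduce the
# "With Interview" projection shown above.
base_grant_probability = 1036 / 1162   # career allow rate, about 89%
interview_lift = 0.042                 # +4.2 percentage-point lift with an interview

with_interview = base_grant_probability + interview_lift
print(f"Base: {base_grant_probability:.0%}, with interview: {with_interview:.0%}")
# -> Base: 89%, with interview: 93%
```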
