Prosecution Insights
Last updated: April 19, 2026
Application No. 18/592,886

DISPLAY DEVICE AND OPERATION METHOD THEREOF

Non-Final OA: §102, §103
Filed
Mar 01, 2024
Examiner
SAUNDERS JR, JOSEPH
Art Unit
2692
Tech Center
2600 — Communications
Assignee
Samsung Electronics Co., Ltd.
OA Round
1 (Non-Final)
73%
Grant Probability
Favorable
1-2
OA Rounds
2y 9m
To Grant
93%
With Interview

Examiner Intelligence

Grants 73% — above average
73%
Career Allow Rate
538 granted / 740 resolved
+10.7% vs TC avg
Strong +21% interview lift
+20.6%
Interview Lift
across resolved cases with interview
Typical timeline
2y 9m
Avg Prosecution
27 currently pending
Career history
767
Total Applications
across all art units
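The headline figures in this panel can be reproduced from the raw counts it reports. A minimal sketch follows; the additive combination of allow rate and interview lift is an assumption about how the page arrives at its 93% "With Interview" figure.

```python
# Raw counts reported on this page for the examiner.
granted, resolved = 538, 740

career_allow_rate = granted / resolved      # 0.727... -> displayed as 73%
interview_lift = 0.206                      # +20.6 percentage points (displayed)

# Assumed: the "With Interview" figure is allow rate + lift.
with_interview = career_allow_rate + interview_lift

print(f"Career allow rate: {career_allow_rate:.0%}")   # 73%
print(f"With interview:    {with_interview:.0%}")      # 93%
```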

Statute-Specific Performance

§101
5.1%
-34.9% vs TC avg
§103
40.0%
+0.0% vs TC avg
§102
29.6%
-10.4% vs TC avg
§112
14.6%
-25.4% vs TC avg
Black line = Tech Center average estimate • Based on career data from 740 resolved cases
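A quick consistency check on the chart above: subtracting each displayed delta from the examiner's rate recovers the implied Tech Center average, and all four statutes point at the same baseline. The figures are copied from the chart; the subtraction is the only logic.

```python
# (examiner rejection rate %, delta vs. TC average) per statute, as charted.
stats = {
    "101": (5.1, -34.9),
    "103": (40.0, 0.0),
    "102": (29.6, -10.4),
    "112": (14.6, -25.4),
}

for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta  # implied Tech Center average
    print(f"§{statute}: implied TC average = {tc_avg:.1f}%")  # 40.0% for each
```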

Office Action

§102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This Office action is based on the communications filed March 1, 2024. Claims 1 – 15 are currently pending and considered below.

Specification

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on March 5, 2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claim(s) 1, 7, 10, and 15 is/are rejected under 35 U.S.C. 102(a)(1) and 35 U.S.C. 102(a)(2) as being anticipated by Fornshell et al. (US 2020/0382569 A1), hereinafter Fornshell.

Claim 1: Fornshell discloses a display apparatus comprising: a display (see at least, “The output device interface 706 may enable, for example, the display of images generated by electronic system 700. Output devices that may be used with the output device interface 706 may include, for example, printers and display devices, such as a liquid crystal display (LCD), a light emitting diode (LED) display, an organic light emitting diode (OLED) display, a flexible display, a flat panel display, a solid-state display, a projector, or any other device for outputting information. One or more implementations may include devices that function as both input and output devices, such as a touchscreen,” Fornshell [0080], “Upon discovering the proximate wireless audio output device 104B (302), the electronic device 102A may display an indication that the proximate wireless audio output device 104B is available for concurrent streaming (304). For example, the electronic device 102A may display a pop-up user-interface element (e.g., a pop-up sheet) and/or a notification indicating that the wireless audio output device 104B is available for concurrent audio streaming. In one or more implementations, the electronic device 102B may provide an audio notification, a haptic notification, and/or another form of notification, in lieu of and/or in addition to displaying the indication,” Fornshell [0041], Fornshell FIG. 7); a communication interface (see at least, “Finally, as shown in FIG.
7, the bus 708 also couples the electronic system 700 to one or more networks and/or to one or more network nodes, through the one or more network interface(s) 716,” Fornshell [0081], “Responsive to displaying the indication (304), the electronic device 102A may receive a request, such as from a user, to initiate concurrent audio streaming with the wireless audio output device 104B and, e.g., the wireless audio output device 104A that is currently connected to the electronic device 102A (306). The electronic device 102A may temporarily pair with the wireless audio output device 104B to generate a link key (308). The pairing may be and/or may include, for example, a Bluetooth pairing mechanism, such as secure simple pairing, or generally any form of pairing. For example, user input with respect to the wireless audio output device 104B, and/or a case associated with the wireless audio output device 104B, such as pressing a button, may be included in the pairing. In one or more implementations, the pairing may also involve the exchange of additional information, such as communication addresses including Bluetooth classic addresses,” Fornshell [0042], Fornshell FIG. 7); a memory storing one or more instructions; and a processor configured to execute the one or more instructions stored in the memory to (see at least, “The bus 708 collectively represents all system, peripheral, and chipset buses that communicatively connect the numerous internal devices of the electronic system 700. In one or more implementations, the bus 708 communicatively connects the one or more processing unit(s) 712 with the ROM 710, the system memory 704, and the permanent storage device 702. From these various memory units, the one or more processing unit(s) 712 retrieves instructions to execute and data to process in order to execute the processes of the subject disclosure. 
The one or more processing unit(s) 712 can be a single processor or a multi-core processor in different implementations,” Fornshell [0077], Fornshell FIG. 7) identify that a first audio input/output device and a second audio input/output device, through which audio data is to be output by using a Bluetooth communication protocol, are connected (see at least, “The electronic device 102A may then automatically, and without user input, establish a connection with the proximate wireless audio output device 104B using the link key, and/or based at least in part on the link key (310). In one or more implementations, when the wireless audio output devices 104A-B are connected to the electronic device 102A via Bluetooth connections, the Bluetooth controller of the electronic device 102A may maintain a separate connection for each of the wireless audio output devices 104A-B. Accordingly, packets can be individually transmitted to the wireless audio output devices 104A-B, and one or more link parameters of the connections can be individually managed and/or maintained,” Fornshell [0043], “The subject technology allows an electronic device, such as the electronic device 102A to concurrently connect to multiple of the wireless audio output devices 104A-B, and concurrently stream audio to each of the connected wireless audio output devices 104A-B, such as to provide a shared listening session,” Fornshell [0026]), and based on identifying that different Bluetooth communication profiles are used in Bluetooth communication for outputting the audio data to the first audio input/output device and to the second audio input/output device (see at least, “Furthermore, in the example of Bluetooth connections, the electronic device 102A may utilize separate and independent Bluetooth profiles for the connection with each of the wireless audio output devices 104A-B, such as A2DP, HFP, AVCRP, and the like. 
In the example of concurrent streaming audio, the electronic device 102A may utilize a first A2DP profile for the connection with the wireless audio output device 104A and a second, separate, A2DP profile for the connection with the wireless audio output device 104B,” Fornshell [0044]), control the communication interface to delay output of the audio data to one of the first audio input/output device and the second audio input/output device for a preset period of time by using a synchronization buffer, in order to synchronize the audio data output to the first audio input/output device with the audio data output to the second audio input/output device (see at least, “The electronic device 102A may then determine if the same audio content is being streamed to both of the wireless audio output devices 104A-B. If the same audio content is being streamed to both of the wireless audio output devices 104A-B, the electronic device 102A may synchronize one or more audio output synchronization parameters between the wireless audio output devices 104A-B (314). For example, the electronic device 102A may synchronize jitter buffer depth across the wireless audio output devices 104A-B, such as by setting the jitter buffer depth on both wireless audio output devices 104A-B to match the largest jitter buffer depth between the audio output devices 104A-B. The electronic device 102A may also synchronize audio-video synchronization parameters, e.g., an audio delay or shift parameter, such as in the instance that the concurrently streamed audio corresponds to a video being presented on the electronic device 102A,” Fornshell [0046]).
Claim 7: Fornshell discloses the display apparatus of claim 1, wherein the processor is further configured to execute the one or more instructions to control the display to display first content, and obtain the audio data corresponding to the first content (see at least, “A user streaming audio from an electronic device, such as a mobile phone or a television, to a wireless audio output device, such as their wireless headset, may wish to concurrently stream audio to another proximate wireless audio output device, such as a wireless headset/earbuds of a friend, family member, companion, etc. For example, a user streaming, e.g., music or a podcast on a mobile device may wish to concurrently stream the music or podcast to the wireless audio output device of another user for a shared listening experience. Similarly, a user streaming audio corresponding to a movie or other video content may wish to concurrently stream the audio or a variation thereof (e.g., a director's commentary) to the wireless audio output device of another user for a shared content viewing experience,” Fornshell [0012]).

Claim 10 is directed to an operation method of a display apparatus, the operation method comprising the steps substantially similar to those performed by the display apparatus of claim 1, and therefore is rejected for the same reasons (see also at least, “Those of skill in the art would appreciate that the various illustrative blocks, modules, elements, components, methods, and algorithms described herein may be implemented as electronic hardware, computer software, or combinations of both. To illustrate this interchangeability of hardware and software, various illustrative blocks, modules, elements, components, methods, and algorithms have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system.
Skilled artisans may implement the described functionality in varying ways for each particular application,” Fornshell [0093], Fornshell FIG. 3, “1. A method comprising:” Fornshell claim 1).

Claim 15 is directed to a non-transitory computer-readable recording medium having recorded thereon one or more programs executable by a processor of a display apparatus to implement an operation method of the display apparatus, wherein the operation method comprises the steps substantially similar to those performed by the display apparatus of claim 1, and therefore is rejected for the same reasons (see also at least, “Implementations within the scope of the present disclosure can be partially or entirely realized using a tangible computer-readable storage medium (or multiple tangible computer-readable storage media of one or more types) encoding one or more instructions. The tangible computer-readable storage medium also can be non-transitory in nature,” Fornshell [0088]).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 2 – 5, 8, 9, and 11 – 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Fornshell in view of Jang et al. (KR 20210103114 A), with citations provided from corresponding English translation, hereinafter Jang.
Claim 2: Fornshell discloses the display apparatus of claim 1, but does not disclose wherein the processor is further configured to execute the one or more instructions to control the display to display a first image of first content through a first window and display a second image of the first content through a second window, in a multi-window including the first window and the second window, and obtain the audio data by mixing first audio data corresponding to the first image and second audio data corresponding to the second image. However, Jang discloses in regards to audio processing “As shown in FIG. 1 , most electronic devices such as TVs or AV devices today can receive heterogeneous source signals including two or more audio or video signals. In addition, with the development of signal processing technology, the number of users who want to output two or more images on one screen and listen to two or more types of sounds at the same time is increasing,” Jang [0044]. Jang further discloses wherein the processor is further configured to execute the one or more instructions to control the display to display a first image of first content through a first window and display a second image of the first content through a second window, in a multi-window including the first window and the second window (see at least, Jang FIG. 
1), and obtain the audio data by mixing first audio data corresponding to the first image and second audio data corresponding to the second image (see at least, “For example, when receiving a broadcast signal for a sports channel and watching a sports video 111 on a TV while watching a sports game while playing a sports relay video 112 of a famous YouTuber in the corner of the screen, the user It is desired that the sound of the sports image 111 and the sound of the sports relay image 112 be continuously output through the same audio device 100,” Jang [0045], “When the sports video 111 and the sports relay video 112 are simultaneously output from the TV, the audio signal output speed needs to be synchronized with the audio signal input speed of the sports video 111 corresponding to the main audio signal,” Jang [0046], “In addition, when the TV additionally receives the audio signal of the sports relay image 112 corresponding to the sub audio signal in addition to the main audio signal, the sub audio signal is mixed with the main audio signal and outputted by mixing the sub audio signal and the main audio signal must be synchronized,” Jang [0047]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the aforementioned features of Jang in the invention of Fornshell, thereby providing the advantage of a shared content viewing experience (see at least, Fornshell [0012]) for “users who want to output two or more images on one screen and listen to two or more types of sounds at the same time,” Jang [0044].
Claim 3: Fornshell and Jang disclose the display apparatus of claim 2, wherein the processor is further configured to execute the one or more instructions to control the communication interface to process the audio data according to an advanced audio distribution profile (A2DP) Bluetooth profile in order to output the audio data to the first audio input/output device, and control the communication interface to process mixing data of the audio data and voice call sound according to a hands-free profile (HFP) Bluetooth profile in order to output the voice call sound to the second audio input/output device along with the audio data (see at least, “Furthermore, in the example of Bluetooth connections, the electronic device 102A may utilize separate and independent Bluetooth profiles for the connection with each of the wireless audio output devices 104A-B, such as A2DP, HFP, AVCRP, and the like,” Fornshell [0044], “In addition, when the TV additionally receives the audio signal of the sports relay image 112 corresponding to the sub audio signal in addition to the main audio signal, the sub audio signal is mixed with the main audio signal and outputted by mixing the sub audio signal and the main audio signal must be synchronized,” Jang [0047]). 
Claim 4: Fornshell and Jang disclose the display apparatus of claim 3, wherein the processor is further configured to execute the one or more instructions to control the communication interface to delay the mixing data processed according to the HFP Bluetooth profile for a preset period of time by using the synchronization buffer in order to synchronize the audio data processed according to the A2DP Bluetooth profile with the mixing data processed according to the HFP Bluetooth profile (see at least, “Furthermore, in the example of Bluetooth connections, the electronic device 102A may utilize separate and independent Bluetooth profiles for the connection with each of the wireless audio output devices 104A-B, such as A2DP, HFP, AVCRP, and the like,” Fornshell [0044], “The electronic device 102A may then determine if the same audio content is being streamed to both of the wireless audio output devices 104A-B. If the same audio content is being streamed to both of the wireless audio output devices 104A-B, the electronic device 102A may synchronize one or more audio output synchronization parameters between the wireless audio output devices 104A-B (314). For example, the electronic device 102A may synchronize jitter buffer depth across the wireless audio output devices 104A-B, such as by setting the jitter buffer depth on both wireless audio output devices 104A-B to match the largest jitter buffer depth between the audio output devices 104A-B. The electronic device 102A may also synchronize audio-video synchronization parameters, e.g., an audio delay or shift parameter, such as in the instance that the concurrently streamed audio corresponds to a video being presented on the electronic device 102A,” Fornshell [0046]). 
Claim 5: Fornshell and Jang disclose the display apparatus of claim 4, wherein a preset size of the synchronization buffer corresponds to a size of a buffer provided in the first audio input/output device (see at least, “For example, the electronic device 102A may synchronize jitter buffer depth across the wireless audio output devices 104A-B, such as by setting the jitter buffer depth on both wireless audio output devices 104A-B to match the largest jitter buffer depth between the audio output devices 104A-B,” Fornshell [0046]). Claim 8: Fornshell discloses the display apparatus of claim 7, wherein the processor is further configured to execute the one or more instructions to control the communication interface to process the audio data according to an advanced audio distribution profile (A2DP) Bluetooth profile in order to output the audio data to the first audio input/output device, and control the communication interface to process voice call sound according to a hands-free profile (HFP) Bluetooth profile in order to output the voice call sound to the second audio input/output device along with the audio data (see at least, “Furthermore, in the example of Bluetooth connections, the electronic device 102A may utilize separate and independent Bluetooth profiles for the connection with each of the wireless audio output devices 104A-B, such as A2DP, HFP, AVCRP, and the like,” Fornshell [0044]). Fornshell does not disclose to control the communication interface to process mixing data of the audio data and voice call sound. However, Jang discloses in regards to audio processing “As shown in FIG. 1 , most electronic devices such as TVs or AV devices today can receive heterogeneous source signals including two or more audio or video signals. In addition, with the development of signal processing technology, the number of users who want to output two or more images on one screen and listen to two or more types of sounds at the same time is increasing,” Jang [0044]. 
Jang further discloses wherein the processor is further configured to execute the one or more instructions to control the communication interface to process mixing data of the audio data and voice call sound (see at least, “For example, when receiving a broadcast signal for a sports channel and watching a sports video 111 on a TV while watching a sports game while playing a sports relay video 112 of a famous YouTuber in the corner of the screen, the user It is desired that the sound of the sports image 111 and the sound of the sports relay image 112 be continuously output through the same audio device 100,” Jang [0045], “When the sports video 111 and the sports relay video 112 are simultaneously output from the TV, the audio signal output speed needs to be synchronized with the audio signal input speed of the sports video 111 corresponding to the main audio signal,” Jang [0046], “In addition, when the TV additionally receives the audio signal of the sports relay image 112 corresponding to the sub audio signal in addition to the main audio signal, the sub audio signal is mixed with the main audio signal and outputted by mixing the sub audio signal and the main audio signal must be synchronized,” Jang [0047]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the aforementioned features of Jang in the invention of Fornshell, thereby providing the advantage of a shared content viewing experience (see at least, Fornshell [0012]) for “users who want to output two or more images on one screen and listen to two or more types of sounds at the same time,” Jang [0044].
Claim 9: Fornshell and Jang disclose the display apparatus of claim 8, wherein the processor is further configured to execute the one or more instructions to control the communication interface to delay the mixing data processed according to the HFP Bluetooth profile for a preset period of time by using the synchronization buffer in order to synchronize the audio data processed according to the A2DP Bluetooth profile with the mixing data processed according to the HFP Bluetooth profile (see at least, “Furthermore, in the example of Bluetooth connections, the electronic device 102A may utilize separate and independent Bluetooth profiles for the connection with each of the wireless audio output devices 104A-B, such as A2DP, HFP, AVCRP, and the like,” Fornshell [0044], “The electronic device 102A may then determine if the same audio content is being streamed to both of the wireless audio output devices 104A-B. If the same audio content is being streamed to both of the wireless audio output devices 104A-B, the electronic device 102A may synchronize one or more audio output synchronization parameters between the wireless audio output devices 104A-B (314). For example, the electronic device 102A may synchronize jitter buffer depth across the wireless audio output devices 104A-B, such as by setting the jitter buffer depth on both wireless audio output devices 104A-B to match the largest jitter buffer depth between the audio output devices 104A-B. The electronic device 102A may also synchronize audio-video synchronization parameters, e.g., an audio delay or shift parameter, such as in the instance that the concurrently streamed audio corresponds to a video being presented on the electronic device 102A,” Fornshell [0046]).

Claims 11 – 14 are directed to an operation method of a display apparatus, the operation method comprising the steps substantially similar to those performed by the display apparatus of claims 2 – 5, and therefore are rejected for the same reasons.
Allowable Subject Matter

Claim 6 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Girardier et al. (US 2018/0027368 A1) directed to “a system comprising a device DEV for controlling wireless speakers according to an embodiment of the invention, as well as a set of wireless speakers SPK1, SPK2 and SPKN each comprising a respective buffer memory BUF1, BUF2 and BUFN. The device DEV comprises a wireless communication circuit BC (such as a Bluetooth circuit) enabling communication with the wireless speakers SPK1, SPK2 and SPKN and in particular enabling identifying them. The device DEV comprises an access circuit DBC for accessing a database DB comprising information (such as latencies LAT1, LAT2 and LATN) about different types of wireless speakers, information associated with the identifiers ID1, ID2 and IDN of these different types of wireless speakers. The device DEV comprises a circuit SEC for separating a main audio stream into as many separated audio streams as the control device has received (via its wireless communication circuit) wireless speaker identifiers. The device DEV comprises an allocation circuit AC for allocating each separated audio stream to a respective wireless speaker. The device DEV comprises a synchronization circuit SYC for synchronizing the separated audio streams based on characteristics of the wireless speakers for which the wireless communication circuit has received an identifier. The device DEV comprises a harmonization circuit HAC for harmonizing the separated audio streams based on characteristics of the wireless speakers for which the wireless communication circuit has received an identifier,” [0067].
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSEPH SAUNDERS whose telephone number is (571)270-1063. The examiner can normally be reached Monday-Thursday, 9:00 a.m. - 4 p.m., EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Carolyn R Edwards, can be reached at (571)270-7136. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JOSEPH SAUNDERS JR/
Primary Examiner, Art Unit 2692

/CAROLYN R EDWARDS/
Supervisory Patent Examiner, Art Unit 2692
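The mechanism the rejection maps onto claim 1, delaying one stream by a preset period through a synchronization buffer so that two Bluetooth sinks render in step, can be sketched as below. All names are hypothetical; this illustrates the claimed technique, not code from either reference.

```python
from collections import deque

class SyncBuffer:
    """Hold audio frames for a preset delay before releasing them.

    Hypothetical sketch of the claimed synchronization buffer: the faster
    path (e.g. the A2DP stream) is routed through the buffer so its frames
    reach the sink at the same wall-clock instant as the slower path's.
    """

    def __init__(self, delay_ms: int):
        self.delay_ms = delay_ms
        self._queue = deque()  # (release_time_ms, frame)

    def push(self, frame: bytes, now_ms: int) -> None:
        # Stamp each frame with the time at which it may be released.
        self._queue.append((now_ms + self.delay_ms, frame))

    def pop_ready(self, now_ms: int) -> list:
        """Release every frame whose preset delay has elapsed."""
        ready = []
        while self._queue and self._queue[0][0] <= now_ms:
            ready.append(self._queue.popleft()[1])
        return ready

# Delay the faster stream by 150 ms to line it up with the slower one.
buf = SyncBuffer(delay_ms=150)
buf.push(b"frame-0", now_ms=0)
print(buf.pop_ready(now_ms=100))  # [] -- still held
print(buf.pop_ready(now_ms=150))  # [b'frame-0'] -- released on schedule
```

The 150 ms figure is illustrative only; the claim leaves the "preset period of time" unspecified, and Fornshell's jitter-buffer-depth matching suggests the delay would be derived from the sink devices' buffer depths rather than hard-coded.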

Prosecution Timeline

Mar 01, 2024
Application Filed
Jan 24, 2026
Non-Final Rejection — §102, §103
Apr 07, 2026
Applicant Interview (Telephonic)
Apr 07, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596883
Audio Analysis for Text Generation
2y 5m to grant Granted Apr 07, 2026
Patent 12598420
AUDIO DEVICE WITH ELECTROSTATIC DISCHARGE PROTECTION
2y 5m to grant Granted Apr 07, 2026
Patent 12593190
User Experience Localizing Binaural Sound During a Telephone Call
2y 5m to grant Granted Mar 31, 2026
Patent 12585425
Light-function audio parameters
2y 5m to grant Granted Mar 24, 2026
Patent 12585422
DATA PROCESSING METHOD OF PROCESSING MULTITRACK AUDIO DATA AND DATA PROCESSING APPARATUS
2y 5m to grant Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
73%
Grant Probability
93%
With Interview (+20.6%)
2y 9m
Median Time to Grant
Low
PTA Risk
Based on 740 resolved cases by this examiner. Grant probability derived from career allow rate.
