DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on September 10, 2025, has been considered by the examiner.
Response to Arguments
Applicant’s arguments with respect to claim(s) 1-18 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1–5, 8–16, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Jantunen (US 20140302773) in view of Tritschler (US 9820036).
Claim 1
Jantunen discloses an electronic device (701 in fig. 7) comprising:
a display (707 in fig. 7);
a first microphone (711 in fig. 7; para. 101);
a wireless communication circuit (717 in fig. 7; para. 88, “wireless links may also be implemented”; also see para. 102);
memory (751 in fig. 7) storing one or more computer programs (para. 4); and
one or more processors (703 in fig. 7) communicatively coupled to the display and the memory (the memory and processor are communicatively coupled to each other in fig. 7 via the ASIC backplane),
wherein the one or more computer programs (para. 4) include computer-executable instructions that, when executed by the one or more processors individually or collectively, cause the electronic device to:
detect activation of an extended display configuration function (para. 66; figs. 3a-f),
establish a wireless connection with an external device (i.e. 403a in fig. 4a) using the wireless communication circuit (fig. 4a; para. 74 “The device 401 detects the presence of the neighboring devices 403 via an ad-hoc network 405 and communicates with them via short broadcast messages.”),
transmit, via the wireless connection, a request for outputting sound (para. 32 discloses transmission of messages that result in additional broadcasts; para. 42 details that the exchanged signals may be audio signals; para. 62 also details a master device submitting a request to slave devices to obtain video data including audio files) to the external device (para. 36 details ultrasonic audio signal outputting),
control the first microphone to receive a first sound signal (para. 36 details microphones to receive the ultrasonic audio),
detect the first sound signal using the first microphone (para. 36 details microphones to receive the ultrasonic audio),
recognize a position of the external device, based on the first sound signal (para. 36 details use of emitted audio to determine proximity and orientation amongst the devices; para. 40 also details "audio-based proximity"; para. 56 provides additional details; para. 76 also includes relevant discussion),
produce a first screen to be displayed on the display and a second screen to be displayed on a display of the external device, based on the recognized position of the external device (fig. 4g; para. 80),
display the produced first screen on the display of the electronic device (screen displayed on 401 in fig. 4g), and
transmit the second screen to the external device using the wireless communication circuit (step 315 in fig. 3b details the transmission of respective portions of the media files to the devices in the subgroup; para. 68; para. 30; para. 31 discloses that the communication may be wireless).
Jantunen does not expressly disclose multiple microphones or the sweeping of the microphones to detect sounds.
Tritschler discloses performing beamforming using a first microphone (242a in fig. 6a);
a second microphone (242n in fig. 6a) disposed to be spaced apart from the first microphone (fig. 3a discloses that the two microphones, 242, are spaced apart);
control the first microphone to form and sweep a first receiving beam for receiving a first sound signal and the second microphone to form and sweep a second receiving beam for receiving a second sound signal (fig. 4 details sweeping through receiving angles for the microphones; also see figs. 5a-b; col. 4, line 65 – col. 5, line 1; col. 5, lines 25-29 detail unique weights applied to each microphone to achieve different receiving angles).
Tritschler and Jantunen are analogous art because they are from the same field of endeavor, namely audio-based position determination.
At the time of filing, it would have been obvious to one of ordinary skill in the art to perform the audio-specific orientation and positioning taught by Jantunen with the beamforming structure and specifics taught by Tritschler. The motivation for doing so would have been to determine the position of an emitted sound more effectively and with higher resolution, even in challenging acoustic environments (Tritschler; col. 8, lines 35-39).
Claim 2
Tritschler and Jantunen further disclose the electronic device of claim 1 (see above), wherein the first microphone and the second microphone are configured to detect a predetermined sound (Jantunen discloses specific strength levels of emitted ultrasonic signals in para. 36; these strength levels would necessarily have had to be predetermined).
Claim 3
Tritschler and Jantunen further disclose the electronic device of claim 1 (see above), wherein the first microphone and the second microphone are configured to detect a sound in an inaudible range (Jantunen discloses inaudible ultrasonic signals; para. 36).
Claim 4
Tritschler and Jantunen further disclose the electronic device of claim 1 (see above), wherein the one or more computer programs further include computer-executable instructions that, when executed by the one or more processors individually or collectively, cause the electronic device to recognize the position of the external device, based on at least one of a time at which the first sound signal and the second sound signal are detected, or volume thereof (Jantunen; para. 36 details both time-of-flight as well as propagation difference detection).
Claim 5
Tritschler and Jantunen further disclose the electronic device of claim 1 (see above), wherein the one or more computer programs further include computer-executable instructions that, when executed by the one or more processors individually or collectively, cause the electronic device to determine the position of the external device to be a left side or right side (Jantunen; figs. 4a-g; para. 55).
Claim 8
Tritschler and Jantunen further disclose the electronic device of claim 1 (see above).
Jantunen further teaches wherein the first screen and the second screen are connected to each other (fig. 1; displays in UE101a-n are connected to each other wirelessly).
Claim 9
Tritschler and Jantunen further disclose the electronic device of claim 1 (see above), wherein the one or more computer programs further include computer-executable instructions that, when executed by the one or more processors individually or collectively, cause the electronic device to:
recognize the position of the external device (Jantunen; para. 36), based on a direction (130 in fig. 1) in which the first receiving beam and the second receiving beam are disposed through sweeping (Tritschler discloses determining the directionality of a source based on the magnitude detected during sweeping; col. 5, lines 63-67).
Claim 10
Jantunen in view of Tritschler teaches the electronic device of claim 1 (see above), wherein the one or more computer programs further include computer-executable instructions that, when executed by the one or more processors individually or collectively, cause the electronic device to output a notification through the display in case that the position of the external device is not recognized for a predetermined time (Jantunen; para. 65 discusses display of a static or moving icon upon determination that one or more of the devices is no longer available).
Claim 11
Tritschler and Jantunen further disclose the electronic device of claim 1 (see above).
Jantunen further teaches wherein the wireless communication circuit is configured to establish a wireless connection with the external device using Wi-Fi direct (para. 48).
Claim 12
Jantunen discloses a method performed by an electronic device (701 in fig. 7) for configuring an extended display of the electronic device, the method comprising:
detecting activation of an extended display configuration function (para. 66; figs. 3a-f),
establishing a wireless connection with an external device (i.e. 403a in fig. 4a; para. 74 “The device 401 detects the presence of the neighboring devices 403 via an ad-hoc network 405 and communicates with them via short broadcast messages.”),
transmitting, via the wireless connection, a request for outputting sound (para. 32 discloses transmission of messages that result in additional broadcasts; para. 42 details that the exchanged signals may be audio signals; para. 62 also details a master device submitting a request to slave devices to obtain video data including audio files) to the external device (para. 36 details ultrasonic audio signal outputting),
controlling the first microphone to receive a first sound signal (para. 36 details microphones to receive the ultrasonic audio),
detecting the first sound signal using the first microphone (para. 36 details microphones to receive the ultrasonic audio),
recognizing a position of the external device, based on the first sound signal (para. 36 details use of emitted audio to determine proximity and orientation amongst the devices; para. 40 also details "audio-based proximity"; para. 56 provides additional details; para. 76 also includes relevant discussion),
producing a first screen to be displayed on the display and a second screen to be displayed on a display of the external device, based on the recognized position of the external device (fig. 4g; para. 80),
displaying the produced first screen on the display of the electronic device (screen displayed on 401 in fig. 4g), and
transmitting the second screen to the external device using the wireless communication circuit (step 315 in fig. 3b details the transmission of respective portions of the media files to the devices in the subgroup; para. 68; para. 30; para. 31 discloses that the communication may be wireless).
Jantunen does not expressly disclose multiple microphones or the sweeping of the microphones to detect sounds.
Tritschler discloses performing beamforming using a first microphone (242a in fig. 6a) and a second microphone (242n in fig. 6a);
controlling the first microphone to form and sweep a first receiving beam for receiving a first sound signal and the second microphone to form and sweep a second receiving beam for receiving a second sound signal (fig. 4 details sweeping through receiving angles for the microphones; also see figs. 5a-b; col. 4, line 65 – col. 5, line 1; col. 5, lines 25-29 detail unique weights applied to each microphone to achieve different receiving angles).
Tritschler and Jantunen are analogous art because they are from the same field of endeavor, namely audio-based position determination.
At the time of filing, it would have been obvious to one of ordinary skill in the art to perform the audio-specific orientation and positioning taught by Jantunen with the beamforming structure and specifics taught by Tritschler. The motivation for doing so would have been to determine the position of an emitted sound more effectively and with higher resolution, even in challenging acoustic environments (Tritschler; col. 8, lines 35-39).
Claim 13
Tritschler and Jantunen further disclose the method of claim 12 (see above), wherein the detecting of a predetermined sound comprises detecting a sound in an inaudible range (Jantunen discloses using inaudible ultrasonic signals in para. 36).
Claim 14
Tritschler and Jantunen further disclose the method of claim 12 (see above), wherein the recognizing of the position of the external device is based on at least one of a time at which the first sound signal and the second sound signal are detected, or volume thereof (Jantunen; para. 36 details both time-of-flight as well as propagation difference detection).
Claim 15
Tritschler and Jantunen further disclose the method of claim 12 (see above), wherein the first microphone and the second microphone are configured to detect a predetermined sound (Jantunen discloses specific strength levels of emitted ultrasonic signals in para. 36; these strength levels would necessarily have had to be predetermined).
Claim 16
Tritschler and Jantunen further disclose the method of claim 12 (see above), further comprising determining the position of the external device to be a left side or right side (Jantunen; figs. 4a-g; para. 55).
Claim 18
Tritschler and Jantunen further disclose the method of claim 12 (see above), further comprising recognizing the position of the external device (Jantunen; para. 36), based on a direction (130 in fig. 1) in which the first receiving beam and the second receiving beam are disposed through sweeping (Tritschler discloses determining the directionality of a source based on the magnitude detected during sweeping; col. 5, lines 63-67).
Claims 6–7 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Jantunen (US 20140302773) in view of Tritschler (US 9820036) and in further view of Wirasinghe (US 20190174193).
Claim 6
Tritschler and Jantunen further disclose the electronic device of claim 1 (see above).
While Jantunen discloses a termination module to end multi-device presentation when devices leave (see para. 65), there is no express discussion of causing the electronic device to terminate an operation of detecting the sound using the first microphone and the second microphone in case that the recognition of the position of the external device is completed.
However, Wirasinghe (Figs. 2A-B) teaches the basic concept of turning a display microphone off when it does not receive audio (par. 0069). In the combined invention, the microphones of Jantunen in view of Tritschler would be deactivated when detection of the position of the external device is finished.
Before the effective filing date of the invention, it would have been obvious to one with ordinary skill in the art to modify Jantunen in view of Tritschler with the above features of Wirasinghe. Wirasinghe suggests that deactivating a microphone prevents the microphone from receiving unnecessary audio signals when not needed (par. 0069).
Claim 7
Jantunen in view of Tritschler and in further view of Wirasinghe teaches the electronic device of claim 6.
The combined invention further teaches wherein the one or more computer programs further include computer-executable instructions that, when executed by the one or more processors individually or collectively, cause the electronic device to: reactivate the first microphone and the second microphone (e.g., after the microphones of Jantunen/Tritschler are turned off, as taught by Wirasinghe, they are turned on again when needed to detect the position of the extended display); and detect a sound output from the external device in case that movement of the external device is identified after recognizing the position of the external device (Jantunen discloses detection of change in device positioning and realigning the displays; para. 64).
Claim 17
Tritschler and Jantunen further disclose the method of claim 12 (see above).
While Jantunen discloses a termination module to end multi-device presentation when devices leave (see para. 65), there is no express discussion of causing the electronic device to terminate an operation of detecting the sound using the first microphone and the second microphone in case that the recognition of the position of the external device is completed.
However, Wirasinghe (Figs. 2A-B) teaches the basic concept of turning a display microphone off when it does not receive audio (par. 0069). In the combined invention, the microphones of Jantunen in view of Tritschler would be deactivated when detection of the position of the external device is finished.
Before the effective filing date of the invention, it would have been obvious to one with ordinary skill in the art to modify Jantunen in view of Tritschler with the above features of Wirasinghe. Wirasinghe suggests that deactivating a microphone prevents the microphone from receiving unnecessary audio signals when not needed (par. 0069).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
De Prycker (US 2009/0184837) discloses audio-based positioning determination.
Chatlani (US 11159878) details mobile device beamforming.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to William L Boddie whose telephone number is (571)272-0666. The examiner can normally be reached 8 - 4:15 M-F.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview ReQuest (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alford Kindred can be reached on 571-272-4037. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional Questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/WILLIAM BODDIE/Supervisory Patent Examiner, Art Unit 2625