Prosecution Insights
Last updated: April 19, 2026
Application No. 18/988,003

SYSTEMS AND METHODS FOR REMOTE INTERACTION BETWEEN ELECTRONIC DEVICES

Non-Final Office Action (§102, §103, §112)

Filed: Dec 19, 2024
Examiner: LEE, GENE W
Art Unit: 2624
Tech Center: 2600 — Communications
Assignee: Apple Inc.
OA Round: 1 (Non-Final)

Grant Probability: 74% (Favorable)
Predicted OA Rounds: 1-2
Predicted Time to Grant: 2y 4m
Grant Probability With Interview: 84%

Examiner Intelligence

Career Allow Rate: 74% (above average; 479 granted / 652 resolved; +11.5% vs Tech Center avg)
Interview Lift: +10.7% (moderate lift; resolved cases with vs. without interview)
Avg Prosecution: 2y 4m (typical timeline)
Total Applications: 670 across all art units (18 currently pending)
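As a quick sanity check on the examiner statistics above, the career allow rate follows directly from the granted/resolved counts, and the with-interview prediction implies the uplift applied to this application's 74% baseline. This is a minimal sketch of that arithmetic; the variable names are illustrative, not taken from any analytics tool.

```python
# Career allow rate from the raw counts shown above: 479 granted of 652 resolved.
granted, resolved = 479, 652
allow_rate = granted / resolved * 100  # ~73.5%, displayed as 74%

# Uplift implied by the application-level predictions:
# 84% with an interview vs. the 74% baseline grant probability.
baseline, with_interview = 74.0, 84.0
uplift_points = with_interview - baseline  # 10.0 percentage points

print(f"career allow rate: {allow_rate:.1f}%")
print(f"implied interview uplift: {uplift_points:.1f} points")
```

Note that the 10-point application-level uplift is close to, but not the same statistic as, the examiner's career-level +10.7% interview lift, which is computed across that examiner's resolved cases.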

Statute-Specific Performance

§101: 1.7% (-38.3% vs TC avg)
§103: 46.1% (+6.1% vs TC avg)
§102: 25.7% (-14.3% vs TC avg)
§112: 21.9% (-18.1% vs TC avg)

Tech Center averages are estimates; based on career data from 652 resolved cases.
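Each per-statute delta is simply the examiner's rate minus the estimated Tech Center average, so the implied baseline can be recovered from each rate/delta pair. A short sketch of that arithmetic, with the values transcribed from the list above (the dictionary layout is illustrative, not a tool's data format):

```python
# (examiner rate %, delta vs. Tech Center avg %) per statute, from the list above.
rates = {
    "101": (1.7, -38.3),
    "103": (46.1, +6.1),
    "102": (25.7, -14.3),
    "112": (21.9, -18.1),
}

# delta = examiner_rate - tc_avg, so tc_avg = examiner_rate - delta.
implied_tc_avg = {s: rate - delta for s, (rate, delta) in rates.items()}
for statute, avg in implied_tc_avg.items():
    print(f"§{statute}: implied TC avg ~ {avg:.1f}%")
```

Every pair implies the same ~40% baseline, which suggests the chart behind this section used a single estimated Tech Center average rather than per-statute averages.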

Office Action

Statutes cited: §102, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claims 1-12 are objected to because of the following informalities. Regarding claim 1, the phrase “at first electronic device” is grammatically awkward. It is recommended that the article “a” be added so that the phrase is amended to “at a first electronic device”. Claims 2-12 depend from claim 1 and share the objection. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

Claims 5-10 and 12 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Regarding claim 5, it is not clear whether any or all of the method steps are to be performed at the first electronic device or elsewhere. Claim 6 depends from claim 5 and shares the rejection.

Regarding claim 7, it is not clear whether any or all of the method steps are to be performed at the first electronic device or elsewhere.

Regarding claim 8, it is not clear whether any or all of the method steps are to be performed at the first electronic device or elsewhere. Claims 9-10 depend from claim 8 and share the rejection.

Regarding claim 9, it is not clear whether any or all of the method steps are to be performed at the first electronic device or elsewhere.

Regarding claim 12, it is not clear whether any or all of the method steps are to be performed at the first electronic device or elsewhere.
Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Note that citations to figures and elements should be understood to also implicitly refer to any pertinent explanatory text in the reference.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-4, 8, 10-11, and 13-14 are rejected, claims 1-2 and 13-14 in the alternative, under 35 U.S.C. 102(a)(1) as being anticipated by US 2012/0038679 A1 (Yun).

Regarding claim 1, Yun teaches a method comprising: at first electronic device (Figs. 1, 3 at 100) in communication with a display generation component (Fig. 3 at 151), one or more input devices ([79], [86], Fig. 3 at 130), and a second electronic device (Fig. 3 at 200): while in a remote interaction mode with the second electronic device, transmitting, to the second electronic device, data associated with displaying an image and respective data corresponding to the image, wherein the image is associated with operation of the first electronic device ([116]-[117]); after transmitting, to the second electronic device, the data associated with displaying the image and the respective data corresponding to the image, receiving, from the second electronic device, data associated with an input associated with the image, wherein the input associated with the image is received via the second electronic device while the image is being displayed via the second electronic device ([117]-[119]); and in response to receiving the data associated with the input: in accordance with a determination that the respective data is first data, performing a first operation at the first electronic device corresponding to the image in accordance with the first data ([117]-[119] OR Fig. 19 at S196 & S198); and in accordance with a determination that the respective data is second data, performing a second operation, different from the first operation, at the first electronic device corresponding to the image in accordance with the second data ([117]-[119] OR Fig. 21 at S216 & S218).

Regarding claim 2, Yun teaches wherein: in response to receiving the data associated with the input: in accordance with the determination that the respective data is the first data, the first data includes one or more first instructions that cause the first electronic device to perform the first operation corresponding to the image and the first data ([117]-[119] OR Fig. 19 at S196 & S198); and in accordance with the determination that the respective data is the second data, the second data includes one or more second instructions that cause the first electronic device to perform the second operation corresponding to the image and the second data ([117]-[119] OR Fig. 21 at S216 & S218).

Regarding claim 3, Yun teaches wherein the first electronic device includes a hardware input device ([79]; Fig. 3 at 130), and the data associated with displaying the image includes data associated with displaying a representation of the hardware input device with the image ([117]-[119]), the method further comprising: while in the remote interaction mode and after transmitting, to the second electronic device, the data associated with displaying the image and the respective data corresponding to the image, receiving, from the second electronic device, data associated with a second input directed to the representation of the hardware input device, wherein the second input is received via the second electronic device while the representation of the hardware input device is being displayed via the second electronic device with the image ([117]-[119]); and in response to receiving the data associated with the second input: in accordance with a determination that the data associated with the second input indicates that the second input corresponds to a first interaction with the hardware input device, performing a third operation at the first electronic device corresponding to the first interaction with the hardware input device ([117]-[119]).

Regarding claim 4, Yun teaches in response to receiving the data associated with the second input: in accordance with a determination that the data associated with the second input corresponds to a second interaction with the hardware input device, performing a fourth operation, different from the third operation, at the first electronic device corresponding to the second interaction with the hardware input device ([117]-[119]).
Regarding claim 8, Yun teaches while in the remote interaction mode with the second electronic device and while the second electronic device is operating in a respective control mode, receiving, from the second electronic device, data associated with a second input corresponding to selection of a user interface element of the image using the respective control mode, wherein the second input is received via the second electronic device while the user interface element is being displayed via the second electronic device in the image; and in response to receiving the data associated with the second input: performing a third operation corresponding to the selection of the user interface element using the respective control mode ([117]-[122]).

Regarding claim 10, Yun teaches wherein: the second electronic device is in communication with a switch input device, the second input is associated with the switch input device, and performing the third operation corresponding to the selection of the user interface element using the respective control mode includes performing a selection of the user interface element ([117]-[119]).

Regarding claim 11, Yun teaches wherein the second electronic device is in communication with a hardware input device, and the input associated with the image is detected via the hardware input device ([118]).

Regarding claim 13, Yun teaches a first electronic device (Figs. 1, 3 at 100), comprising: one or more processors ([95], [104], [105]); memory ([95], [104], [105]); and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions ([95], [104], [105]) for: while in a remote interaction mode with a second electronic device (Fig. 3 at 200), transmitting, to the second electronic device, data associated with displaying an image and respective data corresponding to the image, wherein the image is associated with operation of the first electronic device ([116]-[117]); after transmitting, to the second electronic device, the data associated with displaying the image and the respective data corresponding to the image, receiving, from the second electronic device, data associated with an input associated with the image, wherein the input associated with the image is received via the second electronic device while the image is being displayed via the second electronic device ([117]-[119]); and in response to receiving the data associated with the input: in accordance with a determination that the respective data is first data, performing a first operation at the first electronic device corresponding to the image in accordance with the first data ([117]-[119]); and in accordance with a determination that the respective data is second data, performing a second operation, different from the first operation, at the first electronic device corresponding to the image in accordance with the second data ([117]-[119]).

Regarding claim 14, Yun teaches a non-transitory computer readable storage medium storing one or more programs ([95], [104], [105]), the one or more programs comprising instructions, which when executed by one or more processors of a first electronic device ([95], [104], [105]), cause the first electronic device (Figs. 1, 3 at 100) to: while in a remote interaction mode with a second electronic device (Fig. 3 at 200), transmitting, to the second electronic device, data associated with displaying an image and respective data corresponding to the image, wherein the image is associated with operation of the first electronic device ([116]-[117]); after transmitting, to the second electronic device, the data associated with displaying the image and the respective data corresponding to the image, receiving, from the second electronic device, data associated with an input associated with the image, wherein the input associated with the image is received via the second electronic device while the image is being displayed via the second electronic device ([117]-[119]); and in response to receiving the data associated with the input: in accordance with a determination that the respective data is first data, performing a first operation at the first electronic device corresponding to the image in accordance with the first data ([117]-[119]); and in accordance with a determination that the respective data is second data, performing a second operation, different from the first operation, at the first electronic device corresponding to the image in accordance with the second data ([117]-[119]).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Note that citations to figures and elements should be understood to also implicitly refer to any pertinent explanatory text in the reference.

Claims 1-2 and 13-14 are rejected in the alternative under 35 U.S.C. 103 as being unpatentable over US 2021/0084136 A1 (Zhao) in view of US 2012/0038679 A1 (Yun).
Regarding claim 1, Zhao teaches a method comprising: at first electronic device (element 120) in communication with a display generation component ([91]), and a second electronic device (element 110): while in a remote interaction mode with the second electronic device, transmitting, to the second electronic device, data associated with displaying an image and respective data corresponding to the image, wherein the image is associated with operation of the first electronic device ([55]-[56]; Fig. 5 at Display Data); after transmitting, to the second electronic device, the data associated with displaying the image and the respective data corresponding to the image, receiving, from the second electronic device, data associated with an input associated with the image, wherein the input associated with the image is received via the second electronic device while the image is being displayed via the second electronic device ([60]-[69], [91]; Fig. 5 at conversion information and control instruction); and in response to receiving the data associated with the input: in accordance with a determination that the respective data is first data, performing a first operation at the first electronic device corresponding to the image in accordance with the first data ([61]-[69]); and in accordance with a determination that the respective data is second data, performing a second operation, different from the first operation, at the first electronic device corresponding to the image in accordance with the second data ([61]-[69]).

Zhao does not expressly teach that the first electronic device is in communication with one or more input devices. Yun teaches that a first electronic device is in communication with one or more input devices ([79]; Fig. 3 at 130). The suggestion to modify the teaching of Zhao by the teaching of Yun is present as both teach a handheld electronic device in communication with an external electronic device. 
The motivation is to provide the user with a means of providing input to the device. The combination would have been unsurprising and had a reasonable expectation of success because both Zhao and Yun teach a handheld electronic device. Thus, before the effective filing date of the current application, the combination of Zhao and Yun would have rendered obvious, to one of ordinary skill in the art, that the first electronic device is in communication with one or more input devices.

Regarding claim 2, Zhao further teaches wherein: in response to receiving the data associated with the input: in accordance with the determination that the respective data is the first data, the first data includes one or more first instructions that cause the first electronic device to perform the first operation corresponding to the image and the first data ([61]-[69]); and in accordance with the determination that the respective data is the second data, the second data includes one or more second instructions that cause the first electronic device to perform the second operation corresponding to the image and the second data ([61]-[69]).

Regarding claim 13, Zhao teaches a first electronic device (element 120), performing steps comprising: while in a remote interaction mode with a second electronic device, transmitting, to the second electronic device, data associated with displaying an image and respective data corresponding to the image, wherein the image is associated with operation of the first electronic device ([55]-[56]; Fig. 5 at Display Data); after transmitting, to the second electronic device, the data associated with displaying the image and the respective data corresponding to the image, receiving, from the second electronic device, data associated with an input associated with the image, wherein the input associated with the image is received via the second electronic device while the image is being displayed via the second electronic device ([60]-[69], [91]; Fig. 5 at conversion information and control instruction); and in response to receiving the data associated with the input: in accordance with a determination that the respective data is first data, performing a first operation at the first electronic device corresponding to the image in accordance with the first data ([61]-[69]); and in accordance with a determination that the respective data is second data, performing a second operation, different from the first operation, at the first electronic device corresponding to the image in accordance with the second data ([61]-[69]).

Zhao does not expressly teach that the first electronic device comprises one or more processors; memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for performing the steps. Yun teaches that an electronic device may comprise one or more processors; memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for performing steps ([95], [104], [105]). The suggestion to modify the teaching of Zhao by the teaching of Yun is present as both teach mobile terminals. The motivation is to implement the mobile terminal. The combination would have been unsurprising and had a reasonable expectation of success because both Zhao and Yun teach a handheld electronic device. Thus, before the effective filing date of the current application, the combination of Zhao and Yun would have rendered obvious, to one of ordinary skill in the art, that the first electronic device comprises one or more processors; memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for performing the steps. 
Regarding claim 14, Zhao teaches a first electronic device (element 120) performing steps comprising: while in a remote interaction mode with a second electronic device, transmitting, to the second electronic device, data associated with displaying an image and respective data corresponding to the image, wherein the image is associated with operation of the first electronic device ([55]-[56]; Fig. 5 at Display Data); after transmitting, to the second electronic device, the data associated with displaying the image and the respective data corresponding to the image, receiving, from the second electronic device, data associated with an input associated with the image, wherein the input associated with the image is received via the second electronic device while the image is being displayed via the second electronic device ([60]-[69], [91]; Fig. 5 at conversion information and control instruction); and in response to receiving the data associated with the input: in accordance with a determination that the respective data is first data, performing a first operation at the first electronic device corresponding to the image in accordance with the first data ([61]-[69]); and in accordance with a determination that the respective data is second data, performing a second operation, different from the first operation, at the first electronic device corresponding to the image in accordance with the second data ([61]-[69]).

Zhao does not expressly teach a non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by one or more processors of the first electronic device, cause the first electronic device to perform the steps. 
Yun teaches a non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by one or more processors of the first electronic device, cause the first electronic device to perform steps ([95], [104], [105]). The suggestion to modify the teaching of Zhao by the teaching of Yun is present as both teach mobile terminals. The motivation is to implement the mobile terminal. The combination would have been unsurprising and had a reasonable expectation of success because both Zhao and Yun teach a handheld electronic device. Thus, before the effective filing date of the current application, the combination of Zhao and Yun would have rendered obvious, to one of ordinary skill in the art, a non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by one or more processors of the first electronic device, cause the first electronic device to perform the steps.

Claims 5, 6 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over US 2021/0084136 A1 (Zhao) in view of US 2012/0038679 A1 (Yun) as applied to claim 1 above, and further in view of US 2014/0019513 A1 (Han).

Regarding claim 5, Zhao and Yun both teach displaying, via the display generation component, a first user interface object corresponding to establishment of a connection (Zhao Fig. 5 at 121, Fig. 8; Yun Fig. 3). Zhao and Yun do not expressly teach while not operating in the remote interaction mode with the second electronic device, receiving, from the second electronic device, an indication of a request to initialize the remote interaction mode with the second electronic device; and in response to receiving the indication: initializing the remote interaction mode with the second electronic device, including displaying, via the display generation component, a first user interface object corresponding to the indication. 
Zhao does teach that the second electronic device establishes a peer to peer (P2P) connection with the first electronic device ([47]). Han teaches that a device may establish a P2P connection with another device via a request (Abstract). The suggestion to modify the teaching of Zhao by the teaching of Han is present as Zhao teaches establishing a P2P connection while Han teaches that establishing a P2P connection includes a request. The motivation is to implement the P2P connection. The combination would have been unsurprising and had a reasonable expectation of success because Zhao teaches establishing a P2P connection while Han teaches that establishing a P2P connection includes a request. Thus, before the effective filing date of the current application, the combination of Zhao, Yun, and Han would have rendered obvious, to one of ordinary skill in the art, while not operating in the remote interaction mode with the second electronic device, receiving, from the second electronic device, an indication of a request to initialize the remote interaction mode with the second electronic device; and in response to receiving the indication: initializing the remote interaction mode with the second electronic device, including displaying, via the display generation component, a first user interface object corresponding to the indication.

Regarding claim 6, Zhao further teaches wherein: while the first electronic device operates in the remote interaction mode with the second electronic device, the first electronic device displays a second visual indication indicating that the first electronic device is in the remote interaction mode with the second electronic device ([67]-[69]; Fig. 5 at 122).

Regarding claim 12, Zhao teaches initiating the remote interaction mode with the second electronic device ([47]). 
Zhao and Yun do not expressly teach while not operating in the remote interaction mode with the second electronic device, receiving, from the second electronic device, an indication of a request to initiate the remote interaction mode with the second electronic device; and in response to receiving the indication: initiating the remote interaction mode with the second electronic device. Zhao does teach that the second electronic device establishes a peer to peer (P2P) connection with the first electronic device ([47]). Han teaches that a device may establish a P2P connection with another device via a request (Abstract). The suggestion to modify the teaching of Zhao by the teaching of Han is present as Zhao teaches establishing a P2P connection while Han teaches that establishing a P2P connection includes a request. The motivation is to implement the P2P connection. The combination would have been unsurprising and had a reasonable expectation of success because Zhao teaches establishing a P2P connection while Han teaches that establishing a P2P connection includes a request. Thus, before the effective filing date of the current application, the combination of Zhao, Yun, and Han would have rendered obvious, to one of ordinary skill in the art, while not operating in the remote interaction mode with the second electronic device, receiving, from the second electronic device, an indication of a request to initiate the remote interaction mode with the second electronic device; and in response to receiving the indication: initiating the remote interaction mode with the second electronic device.

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over US 2012/0038679 A1 (Yun) as applied to claim 1 above, and further in view of US 2015/0126174 A1 (Gray).

Regarding claim 7, Yun teaches wherein the image includes a user interface region (Fig. 3 at 400, 430, 500), the method further comprising: while in the remote interaction mode and after transmitting, to the second electronic device, the data associated with displaying the image and the respective data corresponding to the image, receiving, from the second electronic device, data associated with a second input corresponding to a request to input into the user interface region ([117]-[119]), wherein the second input is received via the second electronic device while the user interface region is being displayed via the second electronic device in the image; in response to receiving the data associated with the second input: displaying, via the display generation component, a keyboard for inputting into the user interface region; while displaying the keyboard, receiving, from the second electronic device, data associated with selection of one or more keys of the keyboard ([117]-[119]); and in response to receiving the data associated with the selection of the one or more keys: displaying, via the display generation component, one or more outputs corresponding to the one or more keys in the user interface region ([117]-[119]). 
Yun does not expressly teach wherein the image includes a user interface region, the method further comprising: while in the remote interaction mode and after transmitting, to the second electronic device, the data associated with displaying the image and the respective data corresponding to the image, receiving, from the second electronic device, data associated with a second input corresponding to a request to input text into the user interface region, wherein the second input is received via the second electronic device while the user interface region is being displayed via the second electronic device in the image; in response to receiving the data associated with the second input: displaying, via the display generation component, a keyboard for inputting text into the user interface region; while displaying the keyboard, receiving, from the second electronic device, data associated with selection of one or more keys of the keyboard; and in response to receiving the data associated with the selection of the one or more keys: displaying, via the display generation component, one or more characters corresponding to the one or more keys in the user interface region.

Gray teaches that the keyboard may comprise hardware keys for inputting text characters to be displayed via a display generation component (Fig. 1a at 10). The suggestion to modify the teaching of Yun by the teaching of Gray is present as both teach mobile devices with a display and hardware keys. The motivation is to provide options for user input. The combination would have been unsurprising and had a reasonable expectation of success because both Yun and Gray teach mobile devices with a display and hardware keys. 
Thus, before the effective filing date of the current application, the combination of Yun and Gray would have rendered obvious, to one of ordinary skill in the art, wherein the image includes a user interface region, the method further comprising: while in the remote interaction mode and after transmitting, to the second electronic device, the data associated with displaying the image and the respective data corresponding to the image, receiving, from the second electronic device, data associated with a second input corresponding to a request to input text into the user interface region, wherein the second input is received via the second electronic device while the user interface region is being displayed via the second electronic device in the image; in response to receiving the data associated with the second input: displaying, via the display generation component, a keyboard for inputting text into the user interface region; while displaying the keyboard, receiving, from the second electronic device, data associated with selection of one or more keys of the keyboard; and in response to receiving the data associated with the selection of the one or more keys: displaying, via the display generation component, one or more characters corresponding to the one or more keys in the user interface region.

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over US 2012/0038679 A1 (Yun) as applied to claim 8 above, and further in view of US 2014/0189510 A1 (Ozcan). 
Regarding claim 9, Yun does not expressly teach wherein: the respective control mode corresponds to an audio output mode, the second input corresponds to a secondary selection of the user interface element using the respective control mode, performing the third operation includes presenting an audio output corresponding to the user interface element without performing a fourth operation corresponding to selection of the user interface element, and the method further comprises: receiving, from the second electronic device, data associated with a third input corresponding to primary selection of the user interface element, wherein the third input is received via the second electronic device while the user interface element is being displayed via the second electronic device in the image; and in response to receiving the data associated with the third input, performing the fourth operation.

Ozcan teaches wherein: the respective control mode corresponds to an audio output mode ([44]), the second input corresponds to a secondary selection of the user interface element using the respective control mode, performing the third operation includes presenting an audio output corresponding to the user interface element without performing a fourth operation corresponding to selection of the user interface element ([44]), and the method further comprises: receiving, from the second electronic device, data associated with a third input corresponding to primary selection of the user interface element, wherein the third input is received via the second electronic device while the user interface element is being displayed via the second electronic device in the image; and in response to receiving the data associated with the third input, performing the fourth operation ([44]).

The suggestion to modify the teaching of Yun with the teaching of Ozcan is present, as both references teach computing devices with user input (Yun Fig. 3; Ozcan [2], [44]). The motivation is to provide greater functionality to a user.
The combination would have been predictable and would have had a reasonable expectation of success because both Yun and Ozcan teach computing devices with user input. Note that Yun provides the teachings related to the two-device interconnection, while Ozcan provides the additional teaching related to audio output and dual operation from interacting with the user interface element.

Thus, before the effective filing date of the claimed invention, the combination of Yun and Ozcan would have rendered obvious, to one of ordinary skill in the art, wherein: the respective control mode corresponds to an audio output mode, the second input corresponds to a secondary selection of the user interface element using the respective control mode, performing the third operation includes presenting an audio output corresponding to the user interface element without performing a fourth operation corresponding to selection of the user interface element, and the method further comprises: receiving, from the second electronic device, data associated with a third input corresponding to primary selection of the user interface element, wherein the third input is received via the second electronic device while the user interface element is being displayed via the second electronic device in the image; and in response to receiving the data associated with the third input, performing the fourth operation.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to GENE W LEE, whose telephone number is (571) 270-7148. The examiner can normally be reached M-F 9:45 am-6:15 pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Lun-Yi Lao, can be reached at 571-272-7671. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/Gene W Lee/
Primary Examiner, Art Unit 2621

Prosecution Timeline

Dec 19, 2024
Application Filed
Jan 09, 2026
Non-Final Rejection — §102, §103, §112
Mar 30, 2026
Interview Requested
Apr 14, 2026
Applicant Interview (Telephonic)
Apr 15, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586503
INTERPOLATION AMPLIFIER AND SOURCE DRIVER COMPRISING THE SAME
2y 5m to grant · Granted Mar 24, 2026
Patent 12579958
DEVICE AND METHOD FOR TRANSITION BETWEEN LUMINANCE LEVELS
2y 5m to grant · Granted Mar 17, 2026
Patent 12573331
DISPLAY DEVICE AND DRIVING METHOD THEREOF
2y 5m to grant · Granted Mar 10, 2026
Patent 12567352
METHOD, APPARATUS, DEVICE, AND STORAGE MEDIUM FOR COMPENSATING DISPLAY PANEL
2y 5m to grant · Granted Mar 03, 2026
Patent 12567384
CIRCUIT DEVICE AND DISPLAY SYSTEM
2y 5m to grant · Granted Mar 03, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
74%
Grant Probability
84%
With Interview (+10.7%)
2y 4m
Median Time to Grant
Low
PTA Risk
Based on 652 resolved cases by this examiner. Grant probability derived from career allow rate.
