Prosecution Insights
Last updated: April 19, 2026
Application No. 19/169,118

Devices, Methods, and Graphical User Interfaces for Interactions Between Computer Systems

Non-Final Office Action: §101, §102, §103
Filed: Apr 03, 2025
Examiner: YANG, NAN-YING
Art Unit: 2629
Tech Center: 2600 (Communications)
Assignee: Apple Inc.
OA Round: 1 (Non-Final)
Grant Probability: 77% (Favorable)
Estimated OA Rounds: 1-2
Estimated Time to Grant: 2y 1m
Grant Probability with Interview: 86%

Examiner Intelligence

Career Allow Rate: 77% (629 granted / 815 resolved), +15.2% vs TC avg (above average)
Interview Lift: +8.9% for resolved cases with interview (moderate lift)
Avg Prosecution: 2y 1m (fast prosecutor)
Career History: 831 total applications across all art units; 16 currently pending
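The headline figures above are simple ratios over the examiner's resolved cases. A minimal sketch of the arithmetic (the function name is illustrative, not from any real analytics API, and the interview-subgroup rates are placeholders since the dashboard does not report the underlying counts):

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Allowance rate as a percentage of resolved applications."""
    return 100.0 * granted / resolved

# Career allow rate from the counts reported above: 629 granted of 815 resolved.
career = allow_rate(629, 815)
print(f"Career allow rate: {career:.1f}%")  # 77.2%

# Interview lift is the allow-rate difference between resolved cases with and
# without an examiner interview. The two subgroup rates below are assumed for
# illustration only; the dashboard reports the resulting lift as +8.9%.
lift = 86.0 - 77.1
print(f"Interview lift: {lift:+.1f} points")
```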

Statute-Specific Performance

§101: 1.5% (-38.5% vs TC avg)
§102: 10.3% (-29.7% vs TC avg)
§103: 74.1% (+34.1% vs TC avg)
§112: 7.7% (-32.3% vs TC avg)

Tech Center averages are estimates. Based on career data from 815 resolved cases.
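If each delta is the examiner's rate minus the Tech Center average (the natural reading of "+34.1% vs TC avg"), the implied TC average can be recovered by subtraction. A small sketch of that check (the dictionary layout is illustrative):

```python
# (examiner rate %, delta vs Tech Center average %) from the panel above
stats = {
    "§101": (1.5, -38.5),
    "§102": (10.3, -29.7),
    "§103": (74.1, +34.1),
    "§112": (7.7, -32.3),
}

for statute, (rate, delta) in stats.items():
    # delta = rate - TC average, so TC average = rate - delta
    tc_avg = rate - delta
    print(f"{statute}: examiner {rate:.1f}% vs implied TC average {tc_avg:.1f}%")
```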

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim 30 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. Claim 30 recites "A computer readable storage medium storing one or more programs." The broadest reasonable interpretation of a claim drawn to a computer readable storage medium (also called machine readable storage medium and other such variations) typically covers forms of non-transitory tangible media and transitory propagating signals per se in view of the ordinary and customary meaning of computer readable storage media, particularly when the specification does not clearly exclude the transitory propagating signals. See MPEP 2111.01.

The USPTO recognizes that applicants may have claims directed to computer readable media that cover signals per se, which the USPTO must reject under 35 U.S.C. § 101 as covering both non-statutory subject matter and statutory subject matter. In an effort to assist the patent community in overcoming a rejection or potential rejection under 35 U.S.C. § 101 in this situation, the USPTO suggests the following approach. A claim drawn to such a computer readable medium that covers both transitory and non-transitory embodiments may be amended to narrow the claim to cover only statutory embodiments to avoid a rejection under 35 U.S.C. § 101 by adding the limitation "non-transitory" to the claim. Cf. Animals - Patentability, 1077 Off. Gaz. Pat. Office 24 (April 21, 1987) (suggesting that applicants add the limitation "non-human" to a claim covering a multi-cellular organism to avoid a rejection under 35 U.S.C. § 101). Such an amendment would typically not raise the issue of new matter, even when the specification is silent, because the broadest reasonable interpretation relies on the ordinary and customary meaning that includes signals per se. The limited situations in which such an amendment could raise issues of new matter occur, for example, when the specification does not support a non-transitory embodiment because a signal per se is the only viable embodiment such that the amended claim is impermissibly broadened beyond the supporting disclosure. See, e.g., Gentry Gallery, Inc. v. Berkline Corp., 134 F.3d 1473 (Fed. Cir. 1998).

Claim Interpretation under 35 USC § 112(f) or 35 USC 112 (pre-AIA) sixth paragraph

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. - An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

Use of the word "means" (or "step for") in a claim with functional language creates a rebuttable presumption that the claim element is to be treated in accordance with 35 U.S.C. 112(f) (pre-AIA 35 U.S.C. 112, sixth paragraph). The presumption that 35 U.S.C. 112(f) (pre-AIA 35 U.S.C. 112, sixth paragraph) is invoked is rebutted when the function is recited with sufficient structure, material, or acts within the claim itself to entirely perform the recited function. Absence of the word "means" (or "step for") in a claim creates a rebuttable presumption that the claim element is not to be treated in accordance with 35 U.S.C. 112(f) (pre-AIA 35 U.S.C. 112, sixth paragraph). The presumption that 35 U.S.C. 112(f) (pre-AIA 35 U.S.C. 112, sixth paragraph) is not invoked is rebutted when the claim element recites function but fails to recite sufficiently definite structure, material or acts to perform that function.

This application includes one or more claim limitations that do not use the word "means," but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: a display generation component; one or more input devices in claims 1 and 29-30. Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

A review of the specification shows that the following appears to be the corresponding structure described in the specification for the 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph limitation. In the corresponding PGPub US 2025/0348266 of the application: paragraph 303 recites "the computer system includes a display generation component (e.g., a heads-up display, a head-mounted display (HMD), a display, a touchscreen, a projector, a tablet, a smartphone, and other displays)"; and paragraph 303 recites "one or more input devices (e.g., one or more optical sensors, eye-tracking devices, touch-screens, keyboards, mouses, and/or other input devices)".

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. For more information, see MPEP § 2173 et seq. and Supplementary Examination Guidelines for Determining Compliance With 35 U.S.C. 112 and for Treatment of Related Issues in Patent Applications, 76 FR 7162, 7167 (Feb. 9, 2011).

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless - (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 1-5, 12, 14-15, 18-21, 26 and 29-30 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Fleizach et al. (U.S. Pub. No. 2017/0357401, hereinafter "Fleizach").

As to claim 1, Fleizach discloses a method [abstract], comprising: at a first computer system [figures 6A-6BD, Tablet 502 with a display screen (display generation component) and one or more input devices (touch screen and/or 690)] with a display generation component and one or more input devices, where the first computer system is in communication with a second computer system [figures 6A-6BD, tablet 502 is in communication with smartphone 501 (second computer system)]: displaying, via the display generation component of the first computer system, a plurality of notifications [figures 6A-6BD, a plurality of notifications 631, 634, 635, 636, 638 via the display generation component of tablet 502], wherein: a first notification of the plurality of notifications is generated by a first application installed on the first computer system [figure 6O, a first notification 631 is generated by a first application 621C (695) installed on tablet 502], and a second notification of the plurality of notifications is generated by a second application installed on the second computer system [figure 6AE, a second notification 638 of the plurality of notifications is generated by phone app (a second application) installed on smartphone 501]; while displaying the second notification, detecting, via the one or more input devices, an input selecting the second notification of the plurality of notifications [figure 6AK, while displaying the second notification 638, detecting via touch input device 690, an input to select answer 638C of the second notification 638]; and in response to detecting the input selecting the second notification of the plurality of notifications: displaying, via the display generation component of the first computer system, a user interface that is generated by the second computer system [figure 6AL, display via display screen of tablet 502, a user interface 639 (phone interface) that is generated by smartphone 501].

As to claim 2, Fleizach discloses the method of claim 1, wherein the user interface that is generated by the second computer system includes a representation of a user interface of the second application that includes content related to the second notification [figures 6AK-6AL, the user interface 639 generated by 501 includes a representation of a user interface of phone app that includes content related to second notification 638].

As to claim 3, Fleizach discloses the method of claim 1, wherein the user interface that is generated by the second computer system is displayed, via the display generation component of the first computer system, in a representation that has a shape that corresponds to a respective shape of the second computer system [figure 6AL, the user interface 639 that is generated by 501 is displayed, via the display generation component of 502, in a representation that has a shape that corresponds to respective shape of 501].

As to claim 4, Fleizach discloses the method of claim 3, wherein the representation in which the user interface is displayed in has a size that corresponds to a respective size of the second computer system [figure 6AL, the representation in which the user interface 639 is displayed in has a size that corresponds to a respective size of 501].
As to claim 5, Fleizach discloses the method of claim 1, further comprising: detecting, via the one or more input devices, a second input directed to the user interface of the second application that is generated by the second computer system and is displayed via the display generation component of the first computer system [figure 6AL, detecting, via input device, a second input 639B directed to the user interface 639 of phone app that is generated by 501 and is displayed via the display generation component of 502]; and in response to detecting the second input, displaying additional content of the second application [figure 6AL, in response to detecting the Keypad input, displaying additional content of the Keypad, paragraph 244, "The call affordances include a keypad affordance 639B for displaying a keypad."].

As to claim 12, Fleizach discloses the method of claim 1, wherein displaying, via the display generation component of the first computer system, the plurality of notifications includes: in accordance with a determination that notifications of the plurality of notifications are generated by one or more applications installed on the second computer system, displaying respective visual indications that the notifications corresponds to notifications that are generated by a device different from the first computer system, wherein the second notification includes a visual indication that the second notification corresponds to a notification that is generated by the second computer system [figures 6AL-AK, in accordance with a determination that notifications of a plurality of notifications are generated by phone app installed on 501, displaying visual indications 638-639 corresponds to notification that are generated by a device different from 502].
As to claim 14, Fleizach discloses the method of claim 1, wherein: the user interface generated by the second computer system is a second user interface [figure 6AL, user interface 639 generated by 501 is a second user interface], and the method includes: detecting, via the one or more input devices of the first computer system, a first input [figure 6AK, detect a first input on 690]; and in response to detecting the first input: displaying, via the display generation component of the first computer system, a respective user interface generated by the second computer system, wherein the respective user interface includes a representation of a respective application installed on the second computer system [figures 6AL-AK, display a respective user interface generated by 501 via display of 502, including a representation of a respective application installed on 501].

As to claim 15, Fleizach discloses the method of claim 14, wherein: the first input corresponds to a request to launch the respective application from the second computer system [figure 6AL, first input of 690 corresponds to a request to launch respective application (Keypad) from 501].

As to claim 18, Fleizach discloses the method of claim 1, wherein a session is established between the first computer system and the second computer system [figure 6K, a session is established between 501 and 502], and the method includes: detecting an event that occurs with respect to the second computer system [figure 6L, detect a control event that occurs with respect to 501]; and in response to the event that occurs with respect to the second computer system: in accordance with a determination that the second computer system is used in a first manner, pausing the session on the first computer system [figure 6L, in accordance with a determination that 501 is used in a first manner (controlling tablet), pause the session on 502].
As to claim 19, Fleizach discloses the method of claim 18, including: in response to the event that occurs with respect to the second computer system: in accordance with a determination that the second computer system is used in a second manner different from the first manner, maintaining the session on the first computer system active [figures 6T-6U, in accordance with a determination that 501 is used in a second manner (return from being alternative control) different from first manner, maintain the session on 502].

As to claim 20, Fleizach discloses the method of claim 1, including: while a session is established between the first computer system and the second computer system, displaying, via a display generation component of the second computer system, a visual indication of the session between the first computer system and the second computer system [figure 6T, while a session is established between 501 and 502, a visual indication 642, 632E of the session between 501 and 502].

As to claim 21, Fleizach discloses the method of claim 20, including: detecting, via the one or more input devices, a selection input directed to the visual indication of the session between the first computer system and the second computer system [figure 6T, detect a selection input 691 directed to 632E between 501 and 502]; and in response to detecting the selection input directed to the visual indication, initiating a process for terminating the session between the first computer system and the second computer system [figure 6U, in response to detecting the selection input Return, terminate the session of alternative control between 501 and 502].
As to claim 26, Fleizach discloses the method of claim 1, wherein: while a configuration of a sharing mode between the first computer system and the second computer system is being performed, an authentication prompt is displayed via one or more display generation components of the second computer system, wherein a valid authentication is required in order for the first computer system to access information from the second computer system [figure 6K, a sharing mode between 501 and 502 is performed, an authentication prompt 641 is displayed in order for 501 to access information from 502].

As to claim 29, Fleizach discloses a first computer system [figures 6A-6BD, Tablet 502 with a display screen (display generation component) and one or more input devices (touch screen and/or 690)] that is in communication with a second computer system [figures 6A-6BD, tablet 502 is in communication with smartphone 501 (second computer system)], the first computer system comprising: a display generation component [figures 6A-6BD, display screen of tablet 502]; one or more input devices [figures 6A-6BD, touch screen and/or 690]; one or more processors [figure 1A, processor(s) 122]; and memory [paragraph 38, various software programs and/or sets of instructions stored in memory 102 to perform various functions for device 100 and to process data] storing one or more programs, wherein the one or more programs are configured to be executed by the one or more processors, the one or more programs including instructions for: displaying, via the display generation component of the first computer system, a plurality of notifications [figures 6A-6BD, a plurality of notifications 631, 634, 635, 636, 638 via the display generation component (display screen) of tablet 502], wherein: a first notification of the plurality of notifications is generated by a first application installed on the first computer system [figure 6O, a first notification 631 is generated by a first application 621C (695) installed on tablet 502], and a second notification of the plurality of notifications is generated by a second application installed on the second computer system [figure 6AE, a second notification 638 of the plurality of notifications is generated by phone app (a second application) installed on smartphone 501]; while displaying the second notification, detecting, via the one or more input devices, an input selecting the second notification of the plurality of notifications [figure 6AK, while displaying the second notification 638, detecting via touch input device 690, an input to select answer 638C of the second notification 638]; and in response to detecting the input selecting the second notification of the plurality of notifications: displaying, via the display generation component of the first computer system, a user interface that is generated by the second computer system [figure 6AL, display via display screen of tablet 502, a user interface 639 (phone interface) that is generated by smartphone 501].
As to claim 30, Fleizach discloses a computer readable storage medium [paragraph 34, memory 102 (which optionally includes one or more computer readable storage mediums)] storing one or more programs, the one or more programs comprising instructions that, when executed by a first computer system that is in communication with a second computer system [figures 6A-6BD, tablet 502 is in communication with smartphone 501 (second computer system)], wherein the first computer system includes and/or is in communication with a display generation component, and one or more input devices [figures 6A-6BD, Tablet 502 with a display screen (display generation component) and one or more input devices (touch screen and/or 690)], cause the first computer system to: display, via the display generation component of the first computer system, a plurality of notifications [figures 6A-6BD, a plurality of notifications 631, 634, 635, 636, 638 via the display generation component (display screen) of tablet 502], wherein: a first notification of the plurality of notifications is generated by a first application installed on the first computer system [figure 6O, a first notification 631 is generated by a first application 621C (695) installed on tablet 502], and a second notification of the plurality of notifications is generated by a second application installed on the second computer system [figure 6AE, a second notification 638 of the plurality of notifications is generated by phone app (a second application) installed on smartphone 501]; while displaying the second notification, detect, via the one or more input devices, an input selecting the second notification of the plurality of notifications [figure 6AK, while displaying the second notification 638, detecting via touch input device 690, an input to select answer 638C of the second notification 638]; and in response to detecting the input selecting the second notification of the plurality of notifications: display, via the display generation component of the first computer system, a user interface that is generated by the second computer system [figure 6AL, display via display screen of tablet 502, a user interface 639 (phone interface) that is generated by smartphone 501].

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claim(s) 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Fleizach in view of Ananthakrishnan et al. (U.S. Pub. No. 2012/0278727, hereinafter "Ananthakrishnan").

As to claim 13, Fleizach discloses the method of claim 1. Fleizach does not expressly disclose wherein the user interface generated by the second computer system is a second user interface, and the method includes: displaying, via the display generation component of the first computer system, a first user interface that is generated by the first computer system; detecting, via the input devices of the first computer system, an input requesting to drag and drop content between the first user interface generated by the first computer system and the second user interface generated by the second computer system; and in response to detecting the input: in accordance with a determination that the input requests to drag and drop a first content item from the first user interface to the second user interface, copying the first content item displayed in the first user interface generated by the first computer system to the second user interface generated by the second computer system; and in accordance with a determination that the input requests to drag and drop a second content item from the second user interface to the first user interface, copying the second content item displayed in the second user interface generated by the second computer system to the first user interface generated by the first computer system.
Ananthakrishnan teaches wherein a user interface generated by a second computer system is a second user interface, and the method includes: displaying, via the display generation component of the first computer system, a first user interface that is generated by the first computer system [figure 1, display a first user interface via the display]; detecting, via the input devices of the first computer system, an input requesting to drag and drop content between the first user interface generated by the first computer system and the second user interface generated by the second computer system [figure 1, detect via the touch screen of 100 a drag and drop request between first user interface generated by]; and in response to detecting the input: in accordance with a determination that the input requests to drag and drop a first content item from the first user interface to the second user interface, copying the first content item displayed in the first user interface generated by the first computer system to the second user interface generated by the second computer system [figure 1, in accordance with drag and drop input request, copy 101 content item displayed in 100 to 201 content item displayed in 200]; and in accordance with a determination that the input requests to drag and drop a second content item from the second user interface to the first user interface, copying the second content item displayed in the second user interface generated by the second computer system to the first user interface generated by the first computer system [figure 1, in accordance with drag and drop input request, copy 203 content item displayed in 200 to 103 content item displayed in 100]. 
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to have modified the method of Fleizach to display, via the display generation component of the first computer system, a first user interface that is generated by the first computer system; detect, via the input devices of the first computer system, an input requesting to drag and drop content between the first user interface generated by the first computer system and the second user interface generated by the second computer system; and in response to detecting the input: in accordance with a determination that the input requests to drag and drop a first content item from the first user interface to the second user interface, copying the first content item displayed in the first user interface generated by the first computer system to the second user interface generated by the second computer system; and in accordance with a determination that the input requests to drag and drop a second content item from the second user interface to the first user interface, copying the second content item displayed in the second user interface generated by the second computer system to the first user interface generated by the first computer system, as taught by Ananthakrishnan, in order to allow cross-platform operations to be performed in a user-friendly manner that is consistent with the touch screen look-and-feel interfaces of the devices being used (Ananthakrishnan, paragraph 15).

Allowable Subject Matter

Claims 6-11, 16-17, 22-25 and 27-28 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter: None of the prior art, made of record, singularly or in combination, teaches or fairly suggests the features presented in the combination limitations of dependent claims 6, 10, 16-17, 22, 25 and 27-28 such as “the user interface generated by the second computer system is a first user interface, and the method includes: while displaying, via the displaying generation component of the first computer system, the first user interface generated by the second computer system, detecting, via the one or more input devices, a third input; and in response to detecting the third input: is playing, via the display generation component of the first computer system, a second user interface generated by the second computer system, wherein the second user interface generated by the second computer system is different from the first user interface generated by the second computer system”, recited by claim 6; “displaying, via the display generation component of the first computer system, a user interface object for opening a user interface generated by a remote device, wherein the user interface object represents the second computer system and the user interface object, when selected, displays a respective user interface generated by the second computer system”, recited by claim 10; “the first input corresponds to a selection of a search result displayed via the display generation component of the first computer system”, recited by claim 16; “the second computer system is locked while the user interface generated by the second computer system is displayed via the display generation component of the first computer system; and the method includes: detecting, via the one or more input devices, a fourth input; and in response to detecting the fourth input: while the second computer system is locked, displaying, via the display generation component of the first computer system, content in a fourth 
user interface generated by the second computer system”, recited by claim 17; “a respective session is established between the first computer system and the second computer system; the respective session is active on the first computer system and is paused on the second computer system; an input requesting to activate the respective session is detected on the second computer system; in response to detection of the input requesting to activate the respective session on the second computer system: in accordance with a determination that the respective session had a first state when the input requesting to active the respective session on the second computer system was detected, wherein, in the first state, first content is displayed via the display generation component of the first computer system in a first respective user interface generated by the second computer system: the first content in the first respective user interface is displayed via the display generated component of the second computer system; and in accordance with a determination that the respective session had a second state when the input requesting to active the respective session on the second computer system was detected, wherein, in the second state, second content is displayed via the display generation component of the first computer system in a second respective user interface generated by the second computer system: the second content in the second respective user interface is displayed via the display generated component of the second computer system”, recited by claim 22; “detecting a request to enable a sharing mode between the first computer system and the second computer system; and in response to detecting the request to enable the sharing mode: in accordance with a determination that the sharing mode has been authorized, enabling the sharing mode without requesting authentication; and in accordance with a determination that the sharing mode has not been authorized, displaying an 
authentication request via one or more display generation components of the second computer system and/or via the one or more display generation components of the first computer system”, recited by claim 25; “detecting, via one or more input devices of the second computer system, a request to display a respective user interface on a respective display of the second computer system; in response to detecting the request to display the respective user interface: in accordance with a determination that authentication is required according to a setting, prompting a user to provide authentication on the first computer system and/or the second computer system; and in accordance with a determination that authentication is not required according to the setting, forgoing prompting the user to provide authentication”, recited by claim 27; and “displaying, via the display generation component of the first computer system, the user interface that is generated by the second computer system, includes: in accordance with a determination that the second computer system has a first physical shape, displaying the user interface in a first respective representation that has a shape that corresponds to the first physical shape of the second computer system; and in accordance with a determination that the second computer system has a second physical shape, displaying the user interface in a second respective representation that has a shape that corresponds to the second physical shape of the second computer system”, recited by claim 28.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. U.S. Pub. No. 2012/0185790 (Bae et al.) is considered pertinent art, as seen in figure 5C. U.S. Pub. No. 2016/0239547 (Lim et al.) is also considered pertinent art, as seen in figure 1A.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NAN-YING YANG, whose telephone number is (571) 272-2211. The examiner can normally be reached Monday-Friday, 8am-5pm, EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, BENJAMIN LEE, can be reached at (571) 272-2963. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/NAN-YING YANG/
Primary Examiner, Art Unit 2629

Prosecution Timeline

Apr 03, 2025: Application Filed
Jan 31, 2026: Non-Final Rejection under §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602138: TOUCH PANEL (granted Apr 14, 2026; 2y 5m to grant)
Patent 12593510: DISPLAY PANEL AND DISPLAY DEVICE (granted Mar 31, 2026; 2y 5m to grant)
Patent 12592193: DISPLAY PANEL AND CONTROL METHOD THEREFOR, AND DISPLAY DEVICE (granted Mar 31, 2026; 2y 5m to grant)
Patent 12583316: User Interface for a Vehicle and a Vehicle (granted Mar 24, 2026; 2y 5m to grant)
Patent 12583319: SYSTEM AND METHOD FOR DIMMING DISPLAYS IN AUTOMOTIVE VEHICLE (granted Mar 24, 2026; 2y 5m to grant)
Based on this examiner's 5 most recent grants in similar technology.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 77%
With Interview: 86% (+8.9%)
Median Time to Grant: 2y 1m
PTA Risk: Low

Based on 815 resolved cases by this examiner. Grant probability derived from career allow rate.
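The "with interview" figure is consistent with simply adding the interview lift to the base grant probability (77% + 8.9% ≈ 86%). A minimal sketch of that arithmetic, assuming an additive model (the dashboard's actual methodology is not disclosed, and the function name is hypothetical):

```python
def adjusted_grant_probability(base_rate: float, interview_lift: float) -> float:
    """Combine a base grant probability with an interview lift, both in
    percentage points, capping the result at 100%."""
    return min(base_rate + interview_lift, 100.0)

# Figures shown on this dashboard: 77% career allow rate, +8.9% interview lift.
with_interview = adjusted_grant_probability(77.0, 8.9)
print(round(with_interview))  # rounds to the 86% displayed above
```

Note the cap at 100%: without it, a high base rate plus a lift could produce an impossible probability.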
