Prosecution Insights
Last updated: April 19, 2026
Application No. 18/473,195

METHODS FOR INTERACTING WITH USER INTERFACES BASED ON ATTENTION

Status: Non-Final OA (§102, §103)
Filed: Sep 22, 2023
Examiner: BELOUSOV, ANDREY
Art Unit: 2172
Tech Center: 2100 — Computer Architecture & Software
Assignee: Apple Inc.
OA Round: 3 (Non-Final)

Grant Probability: 69% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 5m
Grant Probability With Interview: 96%

Examiner Intelligence

Career Allow Rate: 69%, above average (411 granted / 594 resolved; +14.2% vs TC avg)
Interview Lift: +26.6%, a strong effect, measured across resolved cases with an interview
Typical Timeline: 3y 5m average prosecution; 33 applications currently pending
Career History: 627 total applications across all art units
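For readers who want to check the arithmetic, here is a minimal TypeScript sketch of how the headline figures appear to be derived. The additive-lift formula is an assumption inferred from the displayed numbers (69% + 26.6 points ≈ 96%), not a documented methodology:

```ts
// Sketch: reproducing the dashboard's headline examiner stats.
// Assumption: "interview lift" is additive percentage points on top of
// the career allow rate; this matches the displayed 96% but is inferred.

const granted = 411;
const resolved = 594;

const allowRatePct = (granted / resolved) * 100; // ≈ 69.2, shown as 69%

const interviewLiftPts = 26.6;
const withInterviewPct = allowRatePct + interviewLiftPts; // ≈ 95.8, shown as 96%

console.log(allowRatePct.toFixed(1));    // "69.2"
console.log(withInterviewPct.toFixed(1)); // "95.8"
```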

Statute-Specific Performance

§101: 2.8% (-37.2% vs TC avg)
§103: 53.9% (+13.9% vs TC avg)
§102: 31.4% (-8.6% vs TC avg)
§112: 8.7% (-31.3% vs TC avg)

Tech Center averages are estimates. Based on career data from 594 resolved cases.
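Assuming "vs TC avg" is a plain percentage-point difference, the implied Tech Center averages can be recovered from the deltas, as in this sketch:

```ts
// Sketch: back out the implied TC average from each examiner rate and its
// delta. Assumes delta = examinerRate - tcAvg, in percentage points.
const statutes = [
  { statute: "§101", rate: 2.8, delta: -37.2 },
  { statute: "§103", rate: 53.9, delta: 13.9 },
  { statute: "§102", rate: 31.4, delta: -8.6 },
  { statute: "§112", rate: 8.7, delta: -31.3 },
];

for (const { statute, rate, delta } of statutes) {
  const tcAvg = rate - delta; // e.g. §101: 2.8 - (-37.2) = 40.0
  console.log(`${statute}: examiner ${rate}%, implied TC avg ${tcAvg.toFixed(1)}%`);
}
```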

Office Action

Grounds of rejection: §102, §103
DETAILED ACTION

This action is responsive to the filing of 2/24/26. Claims 1-25 are pending and have been considered below.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Allowable Subject Matter

Claims 7-11 and 19-22 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is the examiner's statement of reasons for allowance: the prior art of record fails to disclose displaying the first object at a first distance before the first portion of the user comes within a threshold distance of the object, and then, based on the object being a first type, displaying it at a greater distance, in combination with the other limitations recited within the claimed context. The claims present a combination of limitations that differs from the cited art, and there is no reasonable combination of references that would teach it.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: "A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention."

Claims 1-3, 5-6, 12-15, 17-18, and 23-24 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Hauenstein (10,318,034).

Claims 1, 23-24: Hauenstein discloses a method comprising: at a computer system (Fig. 1A, portable multifunction device 100) in communication with a display generation component (Fig. 1A: 112 touch-sensitive display) and one or more input devices (Fig. 1A: 116, 160 input controllers / input devices): while displaying, via the display generation component, a user interface that includes a first object (Fig. 7A: 704, application launch icon, button, etc.), detecting, via the one or more input devices (Fig. 1A, proximity / intensity sensor of the touch-screen input device), a first portion of a user (Fig. 7A: 203, finger / stylus; 28:29-33, a user is enabled to select one or more of the graphics by making a gesture on the graphics, for example, with one or more fingers 202 (not drawn to scale in the figure) or one or more styluses 203) of the computer system within a threshold distance (Fig. 7B: 203 within distance 514, less than the threshold hover distance) of a location (Fig. 7B: 504 (x, y, z) position) corresponding to the first object; and in response to detecting the first portion of the user of the computer system within the threshold distance of the location corresponding to the first object:

[The following are contingent limitations (the rest of claim 1), as they are based on the determination of object type, first or second, as claimed below. However, these two contingencies do not preclude other types of objects. Therefore, the BRI of this method (this does not apply to the system and medium claims, 23-24) does not encompass the following contingencies (for first type / second type). See MPEP 2111.04 II and Ex parte Schulhauser, Appeal 2013-007847 (PTAB April 28, 2016).]
In accordance with a determination that the first object is a first type (Fig. 7A: 704, application launch icon, button, etc.) of object, displaying, via the display generation component, the first object with a visual indication of an interaction between the first portion of the user and the first object (Fig. 7C-7E, the object gets larger), wherein the visual indication of the interaction between the first portion of the user and the first object has a first visual appearance (Fig. 7C-7E, enlargement) and changes in visual appearance based on a change in position of the first portion of the user in a first direction (Fig. 7C-7E, downward, hover distance) relative to a location corresponding to the first object (Fig. 7C-7E, the object gets larger the closer the tip gets to the object); and in accordance with a determination that the first object is a second type of object (Fig. 7AL: text object, "birth" word), different from the first type of object, displaying the first object with the visual indication of an interaction between the first portion of the user (Fig. 7AL: 738 cursor indicator is displayed within the text object; 50:29-30, an indicator (e.g., cursor 738) is displayed within selectable text 732 at position 701) and the first object, wherein the visual indication of the interaction between the first portion of the user and the first object has a second visual appearance (50:32-35, position 701 is offset from the lateral position, (x, y) position 504, of finger 734, and the amount of offset is optionally determined based on the hover distance (e.g., represented by distance 514)), different from the first visual appearance, and changes in visual appearance based on a change in position of the first portion of the user in the first direction (downward, i.e., hover distance, Fig. 7AM-7AN) relative to the location corresponding to the first object (Fig. 7AL-7AM: the cursor indicator is repositioned within the text object in response to the vertical user input, which changes the offset, thereby moving position 701 where cursor 738 is displayed).

Claim 2: Hauenstein discloses the method of claim 1, wherein displaying the first object with the visual indication of the interaction between the first portion of the user and the first object includes displaying, via the display generation component, the first object with a virtual highlighting effect (Fig. 7C-7E, the object is enlarged).

Claim 3: Hauenstein discloses the method of claim 1, wherein displaying the first object with the visual indication of the interaction between the first portion of the user and the first object includes displaying, via the display generation component, the first object with a virtual glow effect (65:24-48, highlighting of application launch icon 1210).

Claim 5: Hauenstein discloses the method of claim 1, wherein: displaying the first object with the visual indication of the interaction between the first portion of the user and the first object with the first visual appearance includes displaying, via the display generation component, the first object with the visual indication that has a first visual prominence (Fig. 7C-7E, the object is enlarged); and displaying the first object with the visual indication of the interaction between the first portion of the user and the first object with the second visual appearance includes displaying the first object with the visual indication that has a second visual prominence, less than the first visual prominence (Fig. 7AL: 738 cursor indicator is displayed within the text object / 736 magnifying loupe; whether one is less or more prominent is purely subjective).
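[Editor's note: the following is an illustrative TypeScript sketch, not part of the Office action, modeling the claim 1 interaction as the examiner maps it onto Hauenstein: an indication appears once a finger comes within a hover threshold, and its appearance depends on the object's type and the hover distance. All identifiers are hypothetical and appear in neither the claims nor the reference.]

```ts
// Toy model of the type-dependent hover indication described above.
type ObjectKind = "activatable" | "text"; // "first type" / "second type"

interface Indication {
  scale: number;        // first appearance: enlargement (Hauenstein Fig. 7C-7E)
  cursorOffset: number; // second appearance: offset cursor (Fig. 7AL, 50:32-35)
}

const HOVER_THRESHOLD = 40; // threshold hover distance, arbitrary units

function hoverIndication(kind: ObjectKind, hoverDistance: number): Indication | null {
  // No indication of interaction until the finger is within the threshold.
  if (hoverDistance > HOVER_THRESHOLD) return null;

  if (kind === "activatable") {
    // First type: the object grows as the finger descends toward it.
    const proximity = 1 - hoverDistance / HOVER_THRESHOLD; // 0 at threshold, 1 at contact
    return { scale: 1 + 0.5 * proximity, cursorOffset: 0 };
  }
  // Second type: a cursor whose offset from the finger's (x, y) position
  // varies with hover distance, so vertical movement changes the appearance.
  return { scale: 1, cursorOffset: 0.25 * hoverDistance };
}
```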
Claim 12: Hauenstein discloses the method of claim 1, further comprising: while displaying the first object with the visual indication of the interaction between the first portion of the user and the first object in response to detecting the first portion of the user within the threshold distance of the location corresponding to the first object, detecting, via the one or more input devices, movement of the first portion of the user relative to the first object; and in response to detecting the movement of the first portion of the user relative to the first object: changing, via the display generation component, a visual appearance of the visual indication that is displayed with the first object in accordance with the movement of the first portion of the user relative to the first object (Fig. 7C-7E, the object gets larger the closer the tip gets to the object).

Claim 15: Hauenstein discloses the method of claim 12, wherein changing the appearance of the visual indication that is displayed with the first object includes: in accordance with a determination that the first portion of the user is a first distance from the location corresponding to the first object, displaying the visual indication at a first size; and in accordance with a determination that the first portion of the user is a second distance, smaller than the first distance, from the location corresponding to the first object, displaying the visual indication at a second size, smaller than the first size (81:8-11, before the finger makes initial contact with the button on the touch screen, the button shrinks with decreasing hover distance).

Claim 17: Hauenstein discloses the method of claim 1, further comprising: while displaying the first object with the visual indication of the interaction between the first portion of the user and the first object, wherein the visual indication has the first visual appearance, in accordance with the determination that the first object is the first type of object in response to detecting the first portion of the user within the threshold distance of the location corresponding to the first object, detecting, via the one or more input devices, an input provided by the first portion of the user corresponding to selection of the first object; and in response to detecting the input provided by the first portion of the user: performing an operation corresponding to selection of the first object in accordance with the input; and outputting audio indicating selection of the first object (91:53-55, selection of the first user interface object is indicated by visual, audio, and/or haptic feedback; 92:40, start of a move operation).
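[Editor's note: another illustrative sketch, not part of the Office action. Claim 15 recites the opposite monotonicity from the claim 1 enlargement: the indication gets smaller as the finger gets closer, matching Hauenstein 81:8-11. The linear interpolation is an assumption; the reference does not specify a curve.]

```ts
// Indication size as a function of hover distance: shrinks on approach.
const SIZE_AT_THRESHOLD = 1.5; // size when the finger enters the hover range
const SIZE_AT_CONTACT = 1.0;   // size just before initial contact

function indicationSize(hoverDistance: number, threshold: number): number {
  const t = Math.min(Math.max(hoverDistance / threshold, 0), 1);
  // Smaller hover distance -> smaller size (claim 15; Hauenstein 81:8-11).
  return SIZE_AT_CONTACT + (SIZE_AT_THRESHOLD - SIZE_AT_CONTACT) * t;
}
```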
Claim 18: Hauenstein discloses the method of claim 17, wherein outputting the audio indicating the selection of the first object includes outputting the audio with a respective audio characteristic having a first value, the method further comprising: while displaying the first object with the visual indication of the interaction between the first portion of the user and the first object, wherein the visual indication has the second visual appearance, in accordance with the determination that the first object is the second type of object in response to detecting the first portion of the user within the threshold distance of the location corresponding to the first object, detecting, via the one or more input devices, an input provided by the first portion of the user corresponding to selection of the first object; and in response to detecting the input provided by the first portion of the user: outputting audio indicating contact (79:48-51, different types of sounds are played to indicate whether the user interface object is in the hover state or the contact state) with the first object with the respective audio characteristic having a second value, different from the first value.

Claim 25: Hauenstein discloses the method of claim 1, wherein: displaying the first object with the visual indication of the interaction comprises displaying the visual indication of the interaction over the first object (Fig. 7C-7E, the object enlargement is displayed right over the object); and displaying the second object with the visual indication of the interaction comprises displaying the visual indication of the interaction over the first object (Fig. 7AL, the cursor and loupe are over the text object).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: "A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made."

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Hauenstein in view of McVeigh (2023/0385532).

Claim 4: Hauenstein discloses the method of claim 1. However, Hauenstein does not explicitly disclose wherein: the first type of object is a selectable object; and the second type of object is a non-selectable object. McVeigh discloses a similar method for interactive electronic documents, including: the first type of object is a selectable object; and the second type of object is a non-selectable object (par. 74, a visual element that can be shown in a UI as a presentation element that is either selectable or non-selectable). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to combine Hauenstein with McVeigh so as to utilize both selectable and non-selectable GUI elements. One would have been motivated to combine the teachings so as to handle and treat UI elements differently, by different highlighting means, depending on whether the user is able to select the element or not.
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Hauenstein in view of Aoki (2011/0029185).

Claim 6: Hauenstein discloses the method of claim 1. However, Hauenstein does not explicitly disclose wherein the first portion of the user is a first finger of a hand of the user, and displaying the first object with the visual indication of the interaction between the first portion of the user and the first object in response to detecting the first portion of the user within the threshold distance of the location corresponding to the first object is in accordance with a determination that the first portion of the user is the first finger of the hand of the user, the method further comprising: while displaying the first object with the visual indication of the interaction between the first portion of the user and the first object in response to detecting the first portion of the user within the threshold distance of the location corresponding to the first object, detecting, via the one or more input devices, a second portion of the user within the threshold distance of the location corresponding to the first object; and in response to detecting the second portion of the user within the threshold distance of the location corresponding to the first object: in accordance with the determination that the second portion of the user is a second finger, other than the first finger, of a hand of the user, forgoing displaying a visual indication of an interaction between the second portion of the user and the first object.

Aoki discloses a similar interface for touch inputs, including: wherein the first portion of the user is a first finger of a hand of the user, and displaying the first object with the visual indication of the interaction between the first portion of the user and the first object in response to detecting the first portion of the user within the threshold distance of the location corresponding to the first object is in accordance with a determination that the first portion of the user is the first finger of the hand of the user, the method further comprising: while displaying the first object with the visual indication of the interaction between the first portion of the user and the first object in response to detecting the first portion of the user within the threshold distance of the location corresponding to the first object, detecting, via the one or more input devices, a second portion of the user within the threshold distance of the location corresponding to the first object; and in response to detecting the second portion of the user within the threshold distance of the location corresponding to the first object: in accordance with the determination that the second portion of the user is a second finger, other than the first finger, of a hand of the user, forgoing displaying a visual indication of an interaction between the second portion of the user and the first object (par. 464-465, the specified position indication image is displayed in highlight rather than another finger; thereby, the user can recognize easily from the display window the finger which should be used for the manipulation. Moreover, the correspondence relation between a finger presently displayed and an actual finger can be easily recognized as long as the user understands at least how to determine a manipulation target finger. Therefore, the display of a position indication image suitable for the remote position indication can be attained).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to combine Hauenstein with Aoki so as to not highlight secondary fingers. One would have been motivated to combine the teachings so that the user can recognize easily from the display window the finger which should be used for the manipulation (Aoki, par. 465).

Claims 13-14 are rejected under 35 U.S.C. 103 as being unpatentable over Hauenstein in view of Lee (2014/0184759).

Claim 13: Hauenstein discloses the method of claim 12. However, Hauenstein does not explicitly disclose wherein changing the visual appearance of the visual indication that is displayed with the first object includes changing a virtual lighting effect of the visual indication. Lee discloses a similar method for displaying objects, including: wherein changing the visual appearance of the visual indication that is displayed with the first object includes changing a virtual lighting effect of the visual indication (par. 66, if it is determined that an object is nearby ... at least one of brightness or saturation is further increased). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to combine Hauenstein with Lee so as to emphasize objects dynamically in response to change.

Claim 14: Hauenstein discloses the method of claim 12, wherein changing the appearance of the visual indication that is displayed with the first object includes: in accordance with a determination that the first portion of the user is a first distance from the location corresponding to the first object, displaying the visual indication with a first amount; and in accordance with a determination that the first portion of the user is a second distance, smaller than the first distance, from the location corresponding to the first object, displaying the visual indication with a second amount, greater than the first amount (Fig. 7C-7E, the object gets larger the closer the tip gets to the object). However, Hauenstein does not explicitly disclose: displaying the visual indication with a first amount of brightness; and in accordance with a determination that the first portion of the user is a second distance, smaller than the first distance, from the location corresponding to the first object, displaying the visual indication with a second amount of brightness, greater than the first amount of brightness (Lee, par. 66, if it is determined that an object is nearby ... at least one of brightness or saturation is further increased).

Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Hauenstein in view of Ellsworth (2014/0267046).

Claim 16: Hauenstein discloses the method of claim 1. However, Hauenstein does not explicitly disclose wherein: the user interface includes a virtual keyboard; and the first object corresponds to a first key of a plurality of keys of the virtual keyboard. Ellsworth discloses a similar UI element highlighting method, including: wherein the user interface includes a virtual keyboard; and the first object corresponds to a first key of a plurality of keys of the virtual keyboard (par. 48, hovering the pointing device 560 over the user input entry point 580 may highlight a key on the virtual keyboard such as Q 550 on the virtual keyboard 530).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to combine Hauenstein with Ellsworth so as to provide the user with visual feedback to guide the user to the keys that the user may want to depress (Ellsworth, Abstract).

Response to Arguments

Applicant's arguments filed 2/24/26 have been fully considered but they are not persuasive. Applicant argues that the cited portions of Hauenstein provide different directions of inputs (one provides a vertical movement within the hover range, while the other is agnostic to hover distance). The Examiner respectfully disagrees. The vertical position affects both cases, in different ways, in Hauenstein: (Fig. 7AL-7AM) the cursor indicator is repositioned within the text object in response to the vertical user input, which changes the offset, thereby moving position 701 where cursor 738 is displayed.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Kaptelinin (2023/0130520), touch highlighting.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDREY BELOUSOV, whose telephone number is (571) 270-1695 and whose email is Andrew.belousov@uspto.gov. The examiner can normally be reached Monday-Friday, EST. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Adam Queler, can be reached at 571-272-4140. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from Patent Center and the Private Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from Patent Center or Private PAIR. Status information for unpublished applications is available through Patent Center and Private PAIR for authorized users only. Should you have questions about access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.

/Andrey Belousov/
Primary Examiner, Art Unit 2145
3/19/26

Prosecution Timeline

Sep 22, 2023: Application Filed
Jun 25, 2025: Non-Final Rejection — §102, §103
Sep 16, 2025: Applicant Interview (Telephonic)
Sep 16, 2025: Examiner Interview Summary
Sep 29, 2025: Response Filed
Nov 21, 2025: Final Rejection — §102, §103
Feb 24, 2026: Request for Continued Examination
Mar 08, 2026: Response after Non-Final Action
Mar 19, 2026: Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602533: CONTENT GENERATION WITH INTEGRATED AUTOFORMATTING IN WORD PROCESSORS THAT DEPLOY LARGE LANGUAGE MODELS (granted Apr 14, 2026; 2y 5m to grant)
Patent 12585372: GRAPHICAL USER INTERFACE SYSTEM GUIDE MODULE (granted Mar 24, 2026; 2y 5m to grant)
Patent 12586829: SYSTEMS AND METHODS FOR GENERATING ROLL MAP AND MANUFACTURING BATTERY USING ROLL MAP (granted Mar 24, 2026; 2y 5m to grant)
Patent 12564733: METHODS FOR OPTIMIZING TREATMENT TIME AND PLAN QUALITY FOR RADIOTHERAPY (granted Mar 03, 2026; 2y 5m to grant)
Patent 12536210: AUTOMATED CONTENT CREATION AND CONTENT SERVICES FOR COLLABORATION PLATFORMS (granted Jan 27, 2026; 2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 69%
With Interview: 96% (+26.6%)
Median Time to Grant: 3y 5m
PTA Risk: High

Based on 594 resolved cases by this examiner. Grant probability is derived from the career allow rate.
