Prosecution Insights
Last updated: April 19, 2026
Application No. 18/562,652

Gaze Activation of Display Interface

Non-Final OA, §103
Filed: Nov 20, 2023
Examiner: LU, WILLIAM
Art Unit: 2624
Tech Center: 2600 — Communications
Assignee: Apple Inc.
OA Round: 3 (Non-Final)

Grant Probability: 71% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 8m
Grant Probability With Interview: 78%

Examiner Intelligence

Career Allow Rate: 71% (above average): 425 granted / 595 resolved, +9.4% vs TC avg
Interview Lift: +6.5% (moderate): allowance rate for resolved cases with an interview vs without
Typical Timeline: 2y 8m average prosecution; 31 applications currently pending
Career History: 626 total applications across all art units

Statute-Specific Performance

§101:  5.2% (-34.8% vs TC avg)
§103: 68.4% (+28.4% vs TC avg)
§102:  9.8% (-30.2% vs TC avg)
§112: 11.4% (-28.6% vs TC avg)

Tech Center averages are estimates. Based on career data from 595 resolved cases.
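The headline figures above are internally consistent. A quick check (assuming each "vs TC avg" delta is a simple difference between the examiner's rate and an estimated Tech Center average, which is our reading of the report, not something it states) reproduces them:

```python
# Consistency check of the examiner statistics reported above.
# Assumes each "vs TC avg" delta is (examiner rate - estimated TC average).

granted, resolved = 425, 595
career_allow = granted / resolved
print(f"Career allow rate: {career_allow:.1%}")  # 71.4%, reported as 71%

# Implied Tech Center averages, back-computed from the reported deltas
implied_tc = {
    "overall": 71.0 - 9.4,   # 61.6%
    "§101": 5.2 + 34.8,      # 40.0%
    "§103": 68.4 - 28.4,     # 40.0%
    "§102": 9.8 + 30.2,      # 40.0%
    "§112": 11.4 + 28.6,     # 40.0%
}
print(implied_tc)
```

Notably, every statute-specific delta backs out to the same 40.0% Tech Center average, suggesting the report uses a single TC-wide baseline estimate rather than per-statute baselines.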

Office Action

§103

DETAILED ACTION

Claims 77-85 and 89-99, filed January 12, 2026, are pending in the current action.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on January 12, 2026 has been entered.

Response to Arguments

Applicant's arguments with respect to claims 77-99 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 77, 78, 82, 89, 92, 93, and 95-99 are rejected under 35 U.S.C. 103 as being unpatentable over Denker (US 2013/0311925) in view of Rohrbacher (US 2020/0319705), further in view of Duchastel (US 2020/0401686).

Consider claim 77, where Denker teaches a method comprising: at a device comprising a sensor, a display, one or more processors, and a memory: (See Denker ¶9, where a computing system may include a display; a sensor subsystem to obtain the passive interaction data; one or more processors; and one or more machine-readable media having stored therein a plurality of instructions that when executed by the processor cause the computing system to perform any of the foregoing methods.)

obtaining a first user input corresponding to a first user focus location; and (See Denker Fig. 6 and ¶52, 63, where the method 400 accesses the semantic visualization model 130 to obtain information about the semantic meaning of the user interface element currently displayed at the on-screen location of the user's gaze, where the user's visual attention is directed at area 614. Thus, a first user's gaze is directed to a first user focus location 614.)

determining that the first user focus location corresponds to the first target location; (See Denker Fig. 6 and ¶52, 63, where, for example, the user may be reading and/or preparing a document 612, e.g., "Project Design.doc." Thus, the first user focus location 614 corresponds to a first location, document 612.)

obtaining a second user input corresponding to a second user focus location; (See Denker Fig. 6 and ¶52, 63, where the user's gaze shifts to area 618 in response to the notification 616. Thus, obtaining a second user's gaze corresponding to a second user focus location 618.)

and includes determining that the second user focus location corresponds to the second target location, displaying a first user interface. (See Denker Fig. 6 and ¶63, where, based on the duration of the user's attention to the notification 616 and/or other factors (e.g., the relevance of the notification to the user's current interaction context, as may be determined from the contextual user model 180), the adaptive presentation module 260 relocates the notification 616 to the area 614 and changes the interaction mode of the notification 616 to an interactive control 620, which reveals the sender of the message, the subject, and a selectable button to allow the user to immediately reply to the message, if desired. Thus, on the determination that the second user focus 618 corresponds to notification 616 at a location different from the document 612, displaying (the first user interface) interactive control 620.)

Denker teaches determining a first user location and a second user location; however, Denker does not explicitly teach determining a first target location within a field of view and a second target location different from the first target location within the field of view; after determining the first target location and the second target location; and after determining that the first user focus location corresponds to the first target location.

However, in an analogous field of endeavor, Rohrbacher teaches determining a first target location within a field of view and a second target location different from the first target location within the field of view; after determining the first target location and the second target location; (See Rohrbacher Figs. 6A, 6B and ¶38, where there are a plurality of menu items located in the field of view of the user so that the user is provided a human-machine interface to interact with the machine.) and after determining that the first user focus location corresponds to the first target location. (See Rohrbacher Figs. 5, 6A, 6B and ¶27-31, 69-74, where certain menu elements require two steps: the first step is an eye-gaze input on the menu item 301 (first target location), which then activates visibility of menu item 320 (second target location) to be selected in order to confirm the first action.)

Therefore, it would have been obvious to one of ordinary skill in the art that the menu items of Denker could further require two steps in order to activate, as taught by Rohrbacher. One of ordinary skill in the art would have been motivated to perform the modification for the benefit of using known methods to require further confirmation from the user to reduce false positives.

Denker teaches a first target and a second target; however, Denker does not explicitly teach determining, in a field of view of the display, a first target location within the field of view corresponding to a first portion of the field of view and a second target location corresponding to a second portion of the field of view different from the first portion, wherein the first portion and the second portion are fixed relative to the field of view.
However, in an analogous field of endeavor, Duchastel teaches determining, in a field of view of the display, a first target location within the field of view corresponding to a first portion of the field of view and a second target location corresponding to a second portion of the field of view different from the first portion, wherein the first portion and the second portion are fixed relative to the field of view. (See Duchastel Figs. 4A-D and ¶46-51, where a static layout of objects may be rendered within the field of view of the user in order to capture a gaze in a sequence of directions.)

Therefore, it would have been obvious to one of ordinary skill in the art to modify the field of view of Denker to have objects fixed relative to the field of view, as taught by Duchastel. One of ordinary skill in the art would have been motivated to perform the modification for the benefit of implementing known applications where the gaze sequence and position are both important to the operation.

Consider claim 78, where Denker in view of Rohrbacher in view of Duchastel teaches the method of claim 77, wherein the first user input comprises a gaze input. (See Denker Fig. 6 and ¶52, 63, where the method 400 accesses the semantic visualization model 130 to obtain information about the semantic meaning of the user interface element currently displayed at the on-screen location of the user's gaze, where the user's visual attention is directed at area 614. Thus, a first user's gaze is directed to a first user focus location 614.)

Consider claim 82, where Denker in view of Rohrbacher in view of Duchastel teaches the method of claim 77, wherein the second user input comprises a gaze input. (See Denker Fig. 6 and ¶52, 63, where the method 400 accesses the semantic visualization model 130 to obtain information about the semantic meaning of the user interface element currently displayed at the on-screen location of the user's gaze, where the user's visual attention is directed at area 614. Thus, obtaining a second user's gaze corresponding to a second user focus location 618.)

Consider claim 89, where Denker in view of Rohrbacher in view of Duchastel teaches the method of claim 77, further comprising, after determining the first target location and the second target location, displaying a first visual indicator proximate the first target location before obtaining the first user input. (See Rohrbacher Figs. 5, 6A, 6B and ¶27-31, 69-74, where certain menu elements require two steps: the first step is an eye-gaze input on the menu item 301 (first target location), which then activates visibility of menu item 320 (second target location) to be selected in order to confirm the first action.) Therefore, it would have been obvious to one of ordinary skill in the art that the menu items of Denker could further require two steps in order to activate, as taught by Rohrbacher. One of ordinary skill in the art would have been motivated to perform the modification for the benefit of using known methods to require further confirmation from the user to reduce false positives.

Consider claim 92, where Denker in view of Rohrbacher in view of Duchastel teaches the method of claim 89, wherein ceasing to display the first visual indicator is performed after the first user interface has been displayed a threshold number of times. (See Denker ¶88, where, after the notification is displayed, it may be ignored after closing it, thus ceasing to display a notification after it has been displayed once.)
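The claim 92 limitation just mapped (ceasing to display the visual indicator once the first user interface has been displayed a threshold number of times) reduces to a simple display counter. The sketch below is purely illustrative; the class and method names are hypothetical and come from neither the application nor the cited art:

```python
class IndicatorPolicy:
    """Show a visual indicator only until the UI has appeared `threshold` times."""

    def __init__(self, threshold: int):
        self.threshold = threshold
        self.display_count = 0

    def on_ui_displayed(self) -> None:
        # Called each time the first user interface is actually displayed
        self.display_count += 1

    def should_show_indicator(self) -> bool:
        # Once the user has seen the UI enough times, the hint is withdrawn
        return self.display_count < self.threshold


policy = IndicatorPolicy(threshold=3)
print(policy.should_show_indicator())  # True
for _ in range(3):
    policy.on_ui_displayed()
print(policy.should_show_indicator())  # False
```

Denker's cited behavior (a notification dismissed after a single display) corresponds to the degenerate case `threshold=1`.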
Consider claim 93, where Denker in view of Rohrbacher in view of Duchastel teaches the method of claim 89, wherein ceasing to display the first visual indicator is performed in response to detecting the first user input directed to the first visual indicator. (See Denker ¶41, where common interfaces include controls, e.g., "OK", "Cancel", "Quit", "Close", > (play button), >>, etc. Thus, a user interaction directed to closing the application and its associated affordances would cease display of the affordance.)

Consider claim 95, where Denker in view of Rohrbacher in view of Duchastel teaches a device comprising: one or more processors; a non-transitory memory; a display; an input device; and one or more programs stored in the non-transitory memory, which, when executed by the one or more processors, (See Denker ¶9, where a computing system may include a display; a sensor subsystem to obtain the passive interaction data; one or more processors; and one or more machine-readable media having stored therein a plurality of instructions that when executed by the processor cause the computing system to perform any of the foregoing methods.) cause the device to:

obtain a first user input corresponding to a first user focus location; (See Denker Fig. 6 and ¶52, 63, where the method 400 accesses the semantic visualization model 130 to obtain information about the semantic meaning of the user interface element currently displayed at the on-screen location of the user's gaze, where the user's visual attention is directed at area 614. Thus, a first user's gaze is directed to a first user focus location 614.)

determine that the first user focus location corresponds to the first target location; (See Denker Fig. 6 and ¶52, 63, where the user's gaze shifts to area 618 in response to the notification 616. Thus, obtaining a second user's gaze corresponding to a second user focus location 618.)

obtain a second user input corresponding to a second user focus location; (See Denker Fig. 6 and ¶52, 63, where the user's gaze shifts to area 618 in response to the notification 616. Thus, obtaining a second user's gaze corresponding to a second user focus location 618.)

and includes determining that the second user focus location corresponds to the second target location different from the first target location within the field of view, display a first user interface. (See Denker Fig. 6 and ¶63, where, based on the duration of the user's attention to the notification 616 and/or other factors (e.g., the relevance of the notification to the user's current interaction context, as may be determined from the contextual user model 180), the adaptive presentation module 260 relocates the notification 616 to the area 614 and changes the interaction mode of the notification 616 to an interactive control 620, which reveals the sender of the message, the subject, and a selectable button to allow the user to immediately reply to the message, if desired. Thus, on the determination that the second user focus 618 corresponds to notification 616 at a location different from the document 612, displaying (the first user interface) interactive control 620.)

Denker teaches determining a first user location and a second user location; however, Denker does not explicitly teach determining a first target location within a field of view and a second target location different from the first target location within the field of view; after determining the first target location and the second target location; and after determining that the first user focus location corresponds to the first target location.

However, in an analogous field of endeavor, Rohrbacher teaches determining a first target location within a field of view and a second target location different from the first target location within the field of view; after determining the first target location and the second target location; (See Rohrbacher Figs. 6A, 6B and ¶38, where there are a plurality of menu items located in the field of view of the user so that the user is provided a human-machine interface to interact with the machine.) and after determining that the first user focus location corresponds to the first target location. (See Rohrbacher Figs. 5, 6A, 6B and ¶27-31, 69-74, where certain menu elements require two steps: the first step is an eye-gaze input on the menu item 301 (first target location), which then activates visibility of menu item 320 (second target location) to be selected in order to confirm the first action.)

Therefore, it would have been obvious to one of ordinary skill in the art that the menu items of Denker could further require two steps in order to activate, as taught by Rohrbacher. One of ordinary skill in the art would have been motivated to perform the modification for the benefit of using known methods to require further confirmation from the user to reduce false positives.

Denker teaches a first target and a second target; however, Denker does not explicitly teach determining, in a field of view of the display, a first target location within the field of view corresponding to a first portion of the field of view and a second target location corresponding to a second portion of the field of view different from the first portion, wherein the first portion and the second portion are fixed relative to the field of view.

However, in an analogous field of endeavor, Duchastel teaches determining, in a field of view of the display, a first target location within the field of view corresponding to a first portion of the field of view and a second target location corresponding to a second portion of the field of view different from the first portion, wherein the first portion and the second portion are fixed relative to the field of view. (See Duchastel Figs. 4A-D and ¶46-51, where a static layout of objects may be rendered within the field of view of the user in order to capture a gaze in a sequence of directions.) Therefore, it would have been obvious to one of ordinary skill in the art to modify the field of view of Denker to have objects fixed relative to the field of view, as taught by Duchastel. One of ordinary skill in the art would have been motivated to perform the modification for the benefit of implementing known applications where the gaze sequence and position are both important to the operation.

Consider claim 96, where Denker in view of Rohrbacher in view of Duchastel teaches a non-transitory memory storing one or more programs, (See Denker ¶9, where a computing system may include a display; a sensor subsystem to obtain the passive interaction data; one or more processors; and one or more machine-readable media having stored therein a plurality of instructions that when executed by the processor cause the computing system to perform any of the foregoing methods.) which, when executed by one or more processors of a device, cause the device to:

obtain a first user input corresponding to a first user focus location; and (See Denker Fig. 6 and ¶52, 63, where the method 400 accesses the semantic visualization model 130 to obtain information about the semantic meaning of the user interface element currently displayed at the on-screen location of the user's gaze, where the user's visual attention is directed at area 614. Thus, a first user's gaze is directed to a first user focus location 614.)

determine that the first user focus location corresponds to a first target location within a field of view; (See Denker Fig. 6 and ¶52, 63, where the user's gaze shifts to area 618 in response to the notification 616. Thus, obtaining a second user's gaze corresponding to a second user focus location 618.)

obtain a second user input corresponding to a second user focus location; (See Denker Fig. 6 and ¶52, 63, where the user's gaze shifts to area 618 in response to the notification 616. Thus, obtaining a second user's gaze corresponding to a second user focus location 618.)

and includes determining that the second user focus location corresponds to a second target location different from the first target location within the field of view, display a first user interface. (See Denker Fig. 6 and ¶63, where, based on the duration of the user's attention to the notification 616 and/or other factors (e.g., the relevance of the notification to the user's current interaction context, as may be determined from the contextual user model 180), the adaptive presentation module 260 relocates the notification 616 to the area 614 and changes the interaction mode of the notification 616 to an interactive control 620, which reveals the sender of the message, the subject, and a selectable button to allow the user to immediately reply to the message, if desired. Thus, on the determination that the second user focus 618 corresponds to notification 616 at a location different from the document 612, displaying (the first user interface) interactive control 620.)

Denker teaches determining a first user location and a second user location; however, Denker does not explicitly teach determining a first target location within a field of view and a second target location different from the first target location within the field of view; after determining the first target location and the second target location; and after determining that the first user focus location corresponds to the first target location.

However, in an analogous field of endeavor, Rohrbacher teaches determining a first target location within a field of view and a second target location different from the first target location within the field of view; (See Rohrbacher Figs. 6A, 6B and ¶38, where there are a plurality of menu items located in the field of view of the user so that the user is provided a human-machine interface to interact with the machine.) after determining the first target location and the second target location; and after determining that the first user focus location corresponds to the first target location. (See Rohrbacher Figs. 5, 6A, 6B and ¶27-31, 69-74, where certain menu elements require two steps: the first step is an eye-gaze input on the menu item 301 (first target location), which then activates visibility of menu item 320 (second target location) to be selected in order to confirm the first action.)

Therefore, it would have been obvious to one of ordinary skill in the art that the menu items of Denker could further require two steps in order to activate, as taught by Rohrbacher. One of ordinary skill in the art would have been motivated to perform the modification for the benefit of using known methods to require further confirmation from the user to reduce false positives.
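The independent-claim flow mapped above (two target locations fixed relative to the field of view; the first user interface is displayed only after the user's focus lands on the first target and then on the second) can be sketched as a small state machine. This is an illustrative reading of the claim language only; the names and geometry are hypothetical and not taken from the application or the cited references:

```python
from dataclasses import dataclass


@dataclass
class Target:
    # Rectangular region fixed relative to the display's field of view
    x: float
    y: float
    w: float
    h: float

    def contains(self, fx: float, fy: float) -> bool:
        return self.x <= fx <= self.x + self.w and self.y <= fy <= self.y + self.h


class TwoStepGazeActivation:
    """Focus must land on the first target, then the second, to display the UI."""

    def __init__(self, first: Target, second: Target):
        self.first, self.second = first, second
        self.first_hit = False

    def on_focus(self, fx: float, fy: float) -> bool:
        """Feed a user focus location; returns True when the UI should display."""
        if not self.first_hit:
            self.first_hit = self.first.contains(fx, fy)
            return False
        return self.second.contains(fx, fy)


# Hypothetical usage: two fixed regions in a normalized field of view
fsm = TwoStepGazeActivation(Target(0.0, 0.0, 0.2, 0.2), Target(0.8, 0.8, 0.2, 0.2))
fsm.on_focus(0.1, 0.1)         # focus on first target -> armed, UI not yet shown
print(fsm.on_focus(0.9, 0.9))  # focus then on second target -> True, display UI
```

The ordering constraint is the point of the Rohrbacher combination: focusing on the second target alone never triggers the interface, which is the false-positive protection the rejection's rationale relies on.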
Consider claim 97, where Denker in view of Rohrbacher in view of Duchastel teaches the method of claim 77, further comprising, after determining that the first user focus location corresponds to the first target location, displaying a second visual indicator proximate the second target location before obtaining the second user input. (See Rohrbacher Figs. 5, 6A, 6B and ¶27-31, 69-74, where certain menu elements require two steps: the first step is an eye-gaze input on the menu item 301 (first target location), which then activates visibility of menu item 320 (second target location) to be selected in order to confirm the first action.) Therefore, it would have been obvious to one of ordinary skill in the art that the menu items of Denker could further require two steps in order to activate, as taught by Rohrbacher. One of ordinary skill in the art would have been motivated to perform the modification for the benefit of using known methods to require further confirmation from the user to reduce false positives.

Consider claim 98, where Denker teaches the method of claim 97, further comprising ceasing to display the second visual indicator after a condition is satisfied. (See Denker ¶41, where common interfaces include controls, e.g., "OK", "Cancel", "Quit", "Close", > (play button), >>, etc. It would be obvious to one of ordinary skill in the art that the selection of "close" will cease the display of the menu item.)

Consider claim 99, where Denker in view of Rohrbacher in view of Duchastel teaches the method of claim 97; however, they do not explicitly teach wherein ceasing to display the second visual indicator is performed in response to displaying the second visual indicator for a threshold duration. (See Rohrbacher ¶106-107, where menu items may include specific timeouts. Typical timeouts can be dimensioned in the order of one second, two seconds, etc., to give the user a chance to abort the second user action 202/the confirmation process.)

Therefore, it would have been obvious to one of ordinary skill in the art that the menu items of Denker could further require two steps in order to activate, as taught by Rohrbacher. One of ordinary skill in the art would have been motivated to perform the modification for the benefit of using known methods to require further confirmation from the user to reduce false positives.

Claims 79-81, 83-85, 90, 91, and 94 are rejected under 35 U.S.C. 103 as being unpatentable over Denker in view of Rohrbacher in view of Duchastel as applied to claim 1 above, and further in view of Ambrus et al. (US 2015/0212576).

Consider claim 79, where Denker in view of Rohrbacher in view of Duchastel teaches the method of claim 77; however, Denker does not explicitly teach wherein the first user input comprises a head pose input. However, in an analogous field of endeavor, Ambrus teaches a head pose input. (See Ambrus ¶20-24, where, in one example, the particular head movement may comprise the end user moving their head's orientation away from the direction of the object at a speed that is greater than a threshold speed while maintaining their gaze upon the object and then returning their head's orientation back to the direction of the object while maintaining their gaze upon the object.) It would have been obvious that the gaze input used by Denker could comprise a head pose input, as taught by Ambrus. One of ordinary skill in the art would have been motivated to perform the modification for the benefit of using other known methods of gaze input on an object to yield similar results.

Consider claim 80, where Denker in view of Rohrbacher in view of Duchastel teaches the method of claim 77, and suggests, but does not explicitly teach, wherein determining that the first user focus location corresponds to the first target location includes determining that the first user focus location satisfies a proximity criterion relative to the first target location.
(See Denker ¶63, where, at the outset, the user's visual attention is directed at area 614. For example, the user may be reading and/or preparing a document 612, e.g., "Project Design.doc." The adaptive presentation module 260 may be aware that the subject of the notification 616 relates to the work that the user is currently doing in the document 612, based on the contextual user model 180.) However, Denker does not explicitly teach that determining that the first user focus location corresponds to the first target location includes determining that the first user focus location satisfies a proximity criterion relative to the first target location. However, in an analogous field of endeavor, Ambrus teaches a proximity criterion. (See Ambrus ¶64-65, where selection of an object is by a gaze over a selectable object, where a viewing angle less than a threshold angle (e.g., less than two degrees) or a viewing area less than a threshold area (e.g., less than one square inch) may require a longer fixation time than that used for a larger selectable object.) Therefore, it would have been obvious to one of ordinary skill in the art that the contextual user model of Denker would recognize when the user's gaze 614 falls over the document 612 (proximity criterion) for a predetermined period of time (threshold duration) in order to establish the appropriate context. One of ordinary skill in the art would have been motivated to perform the modification for the benefit of using known methods in the art to yield the intended results.

Consider claim 81, where Denker in view of Rohrbacher in view of Duchastel in view of Ambrus teaches the method of claim 80, wherein determining that the first user focus location corresponds to the first target location includes determining that the first user focus location satisfies the proximity criterion for a duration. (See Denker ¶63, where, at the outset, the user's visual attention is directed at area 614. For example, the user may be reading and/or preparing a document 612, e.g., "Project Design.doc." The adaptive presentation module 260 may be aware that the subject of the notification 616 relates to the work that the user is currently doing in the document 612, based on the contextual user model 180.) However, Denker does not explicitly teach wherein determining that the first user focus location corresponds to the first target location includes determining that the first user focus location satisfies the proximity criterion for a threshold duration. However, in an analogous field of endeavor, Ambrus teaches a threshold duration. (See Ambrus ¶64-65, 80, where selection of an object is by a gaze over a selectable object, where a viewing angle less than a threshold angle (e.g., less than two degrees) or a viewing area less than a threshold area (e.g., less than one square inch) may require a longer fixation time than that used for a larger selectable object. In one example, the first period of time for a virtual object occupying a viewing angle less than two degrees may be set to three seconds, while the first period of time for a virtual object occupying a viewing angle greater than or equal to two degrees may be set to two seconds.) Therefore, it would have been obvious to one of ordinary skill in the art that the contextual user model of Denker would recognize when the user's gaze 614 falls over the document 612 (proximity criterion) for a predetermined period of time (threshold duration) in order to establish the appropriate context. One of ordinary skill in the art would have been motivated to perform the modification for the benefit of using known methods in the art to yield the intended results.

Consider claim 83, where Denker in view of Rohrbacher in view of Duchastel teaches the method of claim 77; however, Denker does not explicitly teach wherein the second user input comprises a head pose input. However, in an analogous field of endeavor, Ambrus teaches a head pose input. (See Ambrus ¶20-24, where, in one example, the particular head movement may comprise the end user moving their head's orientation away from the direction of the object at a speed that is greater than a threshold speed while maintaining their gaze upon the object and then returning their head's orientation back to the direction of the object while maintaining their gaze upon the object.) It would have been obvious that the gaze input used by Denker could comprise a head pose input, as taught by Ambrus. One of ordinary skill in the art would have been motivated to perform the modification for the benefit of using other known methods of gaze input on an object to yield similar results.

Consider claim 84, where Denker in view of Rohrbacher in view of Duchastel teaches the method of claim 77, wherein determining that the second user focus location corresponds to the second target location includes determining that the second user focus location satisfies a proximity criterion relative to the second target location. (See Denker ¶63, where the user's gaze shifts to area 618 in response to the notification 616. Based on the duration of the user's attention to the notification 616 and/or other factors (e.g., the relevance of the notification to the user's current interaction context, as may be determined from the contextual user model 180), the adaptive presentation module 260 relocates the notification 616 to the area 614 and changes the interaction mode of the notification 616 to an interactive control 620.) However, Denker does not explicitly teach that determining that the second user focus location corresponds to the second target location includes determining that the second user focus location satisfies a proximity criterion relative to the second target location. However, in an analogous field of endeavor, Ambrus teaches a proximity criterion. (See Ambrus ¶64-65, where selection of an object is by a gaze over a selectable object, where a viewing angle less than a threshold angle (e.g., less than two degrees) or a viewing area less than a threshold area (e.g., less than one square inch) may require a longer fixation time than that used for a larger selectable object.) Therefore, it would have been obvious to one of ordinary skill in the art that the contextual user model of Denker would recognize when the user's gaze 614 falls over the document 612 (proximity criterion) for a predetermined period of time (threshold duration) in order to establish the appropriate context. One of ordinary skill in the art would have been motivated to perform the modification for the benefit of using known methods in the art to yield the intended results.

Consider claim 85, where Denker in view of Rohrbacher in view of Duchastel in view of Ambrus teaches the method of claim 84, wherein determining that the second user focus location corresponds to the second target location includes determining that the second user focus location satisfies the proximity criterion for a threshold duration. (See Ambrus ¶20-24, where, in one example, the particular head movement may comprise the end user moving their head's orientation away from the direction of the object at a speed that is greater than a threshold speed while maintaining their gaze upon the object and then returning their head's orientation back to the direction of the object while maintaining their gaze upon the object.) It would have been obvious that the gaze input used by Denker could comprise a head pose input, as taught by Ambrus. One of ordinary skill in the art would have been motivated to perform the modification for the benefit of using other known methods of gaze input on an object to yield similar results.
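The proximity-criterion and threshold-duration limitations of claims 80-85, as read on Ambrus's viewing-angle and fixation-time thresholds, amount to a dwell-time selector. The sketch below is a hedged illustration only: the two-degree angle and two-second dwell echo the numbers Ambrus's examples mention, while the class, method names, and sample rate are hypothetical:

```python
import math


def angular_distance(gaze_deg, target_deg):
    """Angular separation (degrees) between gaze and target directions."""
    return math.hypot(gaze_deg[0] - target_deg[0], gaze_deg[1] - target_deg[1])


class DwellSelector:
    """Select a target once gaze stays within `max_angle_deg` for `dwell_s` seconds."""

    def __init__(self, target_deg, max_angle_deg=2.0, dwell_s=2.0):
        self.target = target_deg
        self.max_angle = max_angle_deg
        self.dwell = dwell_s
        self.held = 0.0

    def update(self, gaze_deg, dt):
        """Feed one gaze sample and elapsed time; returns True on selection."""
        if angular_distance(gaze_deg, self.target) <= self.max_angle:
            self.held += dt   # proximity criterion satisfied: accumulate dwell time
        else:
            self.held = 0.0   # gaze left the zone: reset the timer
        return self.held >= self.dwell


# Illustrative: 60 Hz gaze samples fixed near the target select after ~2 s
sel = DwellSelector(target_deg=(0.0, 0.0))
selected = False
for _ in range(130):          # 130 frames at ~16.7 ms each, just over 2 s
    selected = sel.update((0.5, 0.5), dt=1 / 60)
print(selected)  # True
```

Per Ambrus, the dwell threshold could itself be scaled by target size, e.g. a longer `dwell_s` when the target subtends less than two degrees.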
Consider claim 90, where Denker in view of Rohrbacher in view of Duchastel teaches the method of claim 89; however, they do not explicitly teach ceasing to display the first visual indicator after a condition is satisfied. In an analogous field of endeavor, Ambrus teaches that ceasing to display the first visual indicator is performed after a condition is satisfied. (See Ambrus ¶80, where in response to determining that the user's point of gaze has exited the target zone, the virtual affordance program 12 may cause the HMD device 18 to cease displaying the virtual affordance.) Therefore, it would have been obvious for one of ordinary skill in the art to establish conditional interactions around affordances such that they serve their intended interactions. For example, Denker ¶41 provides the example of a "close" button. One of ordinary skill in the art would appreciate that pressing the "close" button will yield the predictable result of closing the application and ceasing to display the affordances associated with the application.

Consider claim 91, where Denker in view of Rohrbacher in view of Duchastel teaches the method of claim 89; however, they do not explicitly teach ceasing to display the first visual indicator in response to displaying the first user interface for a threshold duration. In an analogous field of endeavor, Ambrus teaches that ceasing to display the first visual indicator is performed in response to displaying the first user interface for a threshold duration. (See Ambrus ¶71-73, where the virtual affordance program 12 may not display the virtual affordance at the landing location until the user's hand has moved less than a predetermined amount over a predetermined time period. In one example, the movement of the user's hand may be tracked to determine if the user's hand has moved less than 5 centimeters over the previous 0.5 seconds.)
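The hand-stability condition quoted from Ambrus ¶71-73 is essentially a sliding-window displacement check. A minimal sketch, where the class name, 3-D position format, and sampling details are assumptions (only the 5 cm / 0.5 s figures come from the quoted example):

```python
from collections import deque

class HandStabilityTracker:
    """Tracks recent hand positions; 'stable' means every position within the
    window stays within max_move_m (e.g., 5 cm) of the oldest sample in the
    window (e.g., the previous 0.5 s)."""

    def __init__(self, window_s=0.5, max_move_m=0.05):
        self.window_s = window_s
        self.max_move_m = max_move_m
        self.samples = deque()  # (timestamp_s, (x, y, z)) in meters

    def add_sample(self, t, pos):
        self.samples.append((t, pos))
        # Drop samples that have aged out of the window.
        while self.samples and t - self.samples[0][0] > self.window_s:
            self.samples.popleft()

    def is_stable(self):
        if len(self.samples) < 2:
            return False
        positions = [p for _, p in self.samples]
        ox, oy, oz = positions[0]
        # Maximum displacement from the oldest in-window sample must stay small.
        return all(((x - ox) ** 2 + (y - oy) ** 2 + (z - oz) ** 2) ** 0.5
                   <= self.max_move_m
                   for x, y, z in positions)
```

Under this scheme the affordance would only be displayed once `is_stable()` returns true, matching the "moved less than a predetermined amount over a predetermined time period" condition.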
Therefore, it would have been obvious for one of ordinary skill in the art to modify the interface of Denker with an affordance indicating the user's selection as taught by Ambrus. One of ordinary skill in the art would have been motivated to perform the modification for the benefit of using known methods to enhance the user experience by presenting a cursor and quickly identifying what the user is about to select.

Consider claim 94, where Denker in view of Rohrbacher in view of Duchastel teaches the method of claim 77, wherein a "dismiss" button of the first user interface is presented in response to detecting a third user input directed to the first user interface. (See Denker Fig. 6 and ¶63, where the first user interface 620 includes a dismiss button.) Denker does not explicitly teach changing a visual property of the first user interface. In an analogous field of endeavor, Ambrus teaches changing a visual property. (See Ambrus ¶80, where when the user no longer desires to interact with the virtual targets, the virtual affordance program may cause the virtual affordance to cease displaying.) Therefore, it would have been obvious to one of ordinary skill in the art that Denker's presentation of a "dismiss" button would indicate that the user no longer wishes to interact with the user interface element 620, and to cease the display of the user interface element as taught by Ambrus. One of ordinary skill in the art would have been motivated to perform the modification for the advantage of using known affordances (see Denker ¶41, where common interfaces include controls, e.g., "OK", "Cancel", "Quit", "Close", > (play button), >>, etc.) to yield their intended predictable results.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to WILLIAM LU, whose telephone number is (571) 270-1809. The examiner can normally be reached 10am-6:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Matthew Eason, can be reached at 571-270-7230. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

WILLIAM LU
Primary Examiner
Art Unit 2624

/WILLIAM LU/Primary Examiner, Art Unit 2624

Prosecution Timeline

Nov 20, 2023: Application Filed
Feb 21, 2025: Non-Final Rejection — §103
May 15, 2025: Examiner Interview Summary
May 15, 2025: Applicant Interview (Telephonic)
May 16, 2025: Response Filed
Oct 08, 2025: Final Rejection — §103
Dec 04, 2025: Examiner Interview Summary
Dec 04, 2025: Applicant Interview (Telephonic)
Jan 12, 2026: Request for Continued Examination
Jan 28, 2026: Response after Non-Final Action
Feb 20, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592191
PIXEL DRIVING CIRCUIT AND DRIVING METHOD THEREFOR, AND DISPLAY PANEL AND DISPLAY APPARATUS
2y 5m to grant Granted Mar 31, 2026
Patent 12591307
APPARATUS AND METHOD FOR DETERMINING AN INTENT OF A USER
2y 5m to grant Granted Mar 31, 2026
Patent 12585054
SUNROOF SYSTEM FOR PERFORMING PASSIVE RADIATIVE COOLING
2y 5m to grant Granted Mar 24, 2026
Patent 12566328
OPTICAL SCANNING DEVICE AND IMAGE FORMING APPARATUS
2y 5m to grant Granted Mar 03, 2026
Patent 12566502
Methods and Systems for Controlling and Interacting with Objects Based on Non-Sensory Information Rendering
2y 5m to grant Granted Mar 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 71%
With Interview: 78% (+6.5%)
Median Time to Grant: 2y 8m
PTA Risk: High
Based on 595 resolved cases by this examiner. Grant probability derived from career allow rate.
