Prosecution Insights
Last updated: April 19, 2026
Application No. 18/929,813

Combined Gaze-Based and Scanning-Based Control of an Apparatus

Status: Final Rejection (§103)
Filed: Oct 29, 2024
Examiner: OKEBATO, SAHLU
Art Unit: 2625
Tech Center: 2600 — Communications
Assignee: Tobii Dynavox AB
OA Round: 2 (Final)
Grant Probability: 76% (Favorable)
Expected OA Rounds: 3-4
Median Time to Grant: 2y 10m
With Interview: 94%

Examiner Intelligence

Career Allow Rate: 76% (above average; 509 granted / 668 resolved; +14.2% vs TC avg)
Interview Lift: +18.0% across resolved cases with an interview (a strong lift)
Typical Timeline: 2y 10m average prosecution; 38 applications currently pending
Career History: 706 total applications across all art units
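
The headline numbers follow from simple arithmetic on the counts above: the career allow rate is the grant count over resolved cases, and the with-interview figure matches that base rate plus the stated 18-point lift. The sketch below reproduces that arithmetic; how the underlying analytics actually segment interviewed cases is an assumption, not something the dashboard documents.

```python
# Reproduce the dashboard figures from the stated counts (segmentation methodology assumed).
granted, resolved = 509, 668
allow_rate = granted / resolved                  # ~0.762, displayed as 76%
interview_lift = 0.18                            # stated +18.0 percentage-point lift
with_interview = allow_rate + interview_lift     # ~0.942, displayed as 94%
print(f"base {allow_rate:.0%}, with interview {with_interview:.0%}")
```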

Statute-Specific Performance

§101: 1.1% (-38.9% vs TC avg)
§103: 63.7% (+23.7% vs TC avg)
§102: 19.2% (-20.8% vs TC avg)
§112: 12.6% (-27.4% vs TC avg)
TC averages are estimates. Based on career data from 668 resolved cases.
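
Each row pairs the examiner's rate with its delta versus the Tech Center average, so the implied TC baseline can be backed out by subtraction, as in the sketch below (variable names are illustrative; the dashboard's exact methodology is an assumption). Notably, all four deltas are consistent with a single flat 40% baseline, which suggests the chart's one "Tech Center average estimate" line is a single reference value rather than a per-statute average.

```python
# Back out the implied Tech Center baseline from each rate and its stated delta.
examiner_rate = {"101": 1.1, "103": 63.7, "102": 19.2, "112": 12.6}       # % as shown above
delta_vs_tc   = {"101": -38.9, "103": +23.7, "102": -20.8, "112": -27.4}  # percentage points
tc_baseline = {s: round(examiner_rate[s] - delta_vs_tc[s], 1) for s in examiner_rate}
print(tc_baseline)  # {'101': 40.0, '103': 40.0, '102': 40.0, '112': 40.0}
```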

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-10, 12, 16-18, and 21-26 are rejected under 35 U.S.C. 103 as being unpatentable over Krasadakis, US PGPUB 20170212583, in view of Godby et al., US PGPUB 20170189824, hereinafter referenced as Godby, and Aleem et al., US PGPUB 20200142479, hereinafter referenced as Aleem.

As to claim 1, Krasadakis discloses a method comprising: presenting a graphical representation of a plurality of zones via a display of an apparatus (e.g., device 302, with UI windows 308, 310, 312, 314, 316, 318, 320, and 322, fig. 3); performing gaze-based control, wherein performing gaze-based control comprises: receiving, from an eye-tracking device, an input from a user representing a point of a gaze of the user (FIG. 3 illustrates a block diagram 300 where a device 302 utilizes a camera 304 to track a user's gaze 306 by monitoring the eyes 103 of the user 102); identifying a gaze target location on the display (for example, the user 102 has a gaze 306 fixed upon UI window 310 for a predetermined period, depicted as input from a user, see fig. 3); and detecting a confirmation signal based on information provided by the apparatus by detecting a contact-required input provided by a physical input device ([0029] The touch device 128 may include a touchpad, track pad, touch screen, other touch-capturing device capable of translating physical touches into interactions with software being presented on, through, or by the presentation components 110); and selecting, from the plurality of zones presented via the display of the apparatus, a zone based on the identified gaze target location in response to detecting the confirmation signal (UI window 310 is selected to get a daily financial report as shown in fig. 3).

Krasadakis does not specifically disclose, in response to selecting the zone, performing scanning-based control within the selected zone by scanning the selected zone in an x-direction and a y-direction.
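
For orientation, the control flow recited in claim 1 and mapped above (present a plurality of zones, identify the gaze target, require a contact-based confirmation, select the gazed-at zone, then scan within that zone in the x- and y-directions) can be sketched as below. Everything in the sketch is a hypothetical illustration for readability; it is not the applicant's implementation and does not come from Krasadakis, Godby, or Aleem. A switch press stands in for the contact-required confirmation, and a second press (modeled as accept_index) stands in for accepting an option during the x/y scan.

```python
# Hypothetical illustration of the claim 1 flow; not the applicant's or any cited reference's code.
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Zone:
    name: str
    x: float            # zone center on the display, x
    y: float            # zone center on the display, y
    items: List[str]    # options laid out inside the zone, row-major
    cols: int           # number of option columns within the zone

def select_zone_by_gaze(zones: List[Zone],
                        gaze_point: Tuple[float, float],
                        confirmed: bool) -> Optional[Zone]:
    """Gaze-based control: map the gaze point to the nearest zone, but select it
    only if a contact-required confirmation signal (e.g., a switch press) was detected."""
    if not confirmed:
        return None
    gx, gy = gaze_point
    return min(zones, key=lambda z: (z.x - gx) ** 2 + (z.y - gy) ** 2)

def scan_zone(zone: Zone, accept_index: int) -> Optional[str]:
    """Scanning-based control within the selected zone: step through the zone's options
    in the y-direction (rows) and x-direction (columns); the option whose scan step
    matches the user's accept press is chosen. Auditory feedback could be emitted at
    each highlighted step to guide the user."""
    step = 0
    rows = (len(zone.items) + zone.cols - 1) // zone.cols
    for row in range(rows):              # y-direction
        for col in range(zone.cols):     # x-direction
            idx = row * zone.cols + col
            if idx >= len(zone.items):
                break
            if step == accept_index:
                return zone.items[idx]
            step += 1
    return None

zones = [Zone("messages", 0, 0, ["yes", "no", "help", "more"], cols=2),
         Zone("settings", 1, 0, ["volume", "speed"], cols=2)]
picked = select_zone_by_gaze(zones, gaze_point=(0.1, 0.0), confirmed=True)
if picked is not None:
    print(scan_zone(picked, accept_index=2))  # prints "help"
```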
However, in the same endeavor, Aleem discloses, in response to selecting the zone, performing scanning-based control within the selected zone by scanning the selected zone in an x-direction and a y-direction ([0062] The arrangement and extents of the M illumination areas may be selected such that the M illumination areas all overlap in a region of interest of eye 200, which would correspond to M sub-scans of that region of the eye). Therefore, it would have been obvious to one of ordinary skill in the art to modify the disclosure of Krasadakis to further include Aleem's scanning method in order to improve the eye tracking function.

The combination of Krasadakis and Aleem does not specifically disclose outputting auditory feedback to guide the user in an interaction process of the user with the apparatus. However, in the same endeavor, Godby discloses outputting auditory feedback to guide the user in an interaction process of the user with the apparatus ([0082] Execution of the training mode on the wearable device 260 can cause the wearable device 260 to sequentially output audio instructions to the user to perform one or more actions, either independently in accordance with the training mode). Therefore, it would have been obvious to one of ordinary skill in the art to modify the disclosure of Krasadakis to further include Godby's audio command generation method in order to enhance users' experience.

As to claim 9, Krasadakis discloses a system comprising: at least one processor configured to: present a graphical representation of a plurality of zones via a display of an apparatus (e.g., device 302, with UI windows 308, 310, 312, 314, 316, 318, 320, and 322, fig. 3); perform gaze-based control, wherein when performing gaze-based control, the at least one processor is configured to: receive, from an eye-tracking device, an input from a user representing a point of a gaze of the user (FIG. 3 illustrates a block diagram 300 where a device 302 utilizes a camera 304 to track a user's gaze 306 by monitoring the eyes 103 of the user 102); identify a gaze target location on the display, wherein, when identifying the gaze target location, the at least one processor is configured to: output auditory feedback to guide the user in an interaction process of the user with the apparatus (for example, the user 102 has a gaze 306 fixed upon UI window 310 for a predetermined period, depicted as input from a user, see fig. 3); and detect a confirmation signal based on information provided by the apparatus (as shown in fig. 3, window 310 is focused when the user gazes in its direction), wherein detecting a contact-required input provided by a physical input device ([0029] The touch device 128 may include a touchpad, track pad, touch screen, other touch-capturing device capable of translating physical touches into interactions with software being presented on, through, or by the presentation components 110); and select, from the plurality of zones presented via the display of the apparatus, a zone based on the identified gaze target location (UI window 310 is selected to get a daily financial report as shown in fig. 3).

Krasadakis does not specifically disclose, in response to selecting the zone, performing scanning-based control within the selected zone by scanning the selected zone in an x-direction and a y-direction.
However, in the same endeavor, Aleem discloses, in response to selecting the zone, performing scanning-based control within the selected zone by scanning the selected zone in an x-direction and a y-direction ([0062] The arrangement and extents of the M illumination areas may be selected such that the M illumination areas all overlap in a region of interest of eye 200, which would correspond to M sub-scans of that region of the eye). Therefore, it would have been obvious to one of ordinary skill in the art to modify the disclosure of Krasadakis to further include Aleem's scanning method in order to improve the eye tracking function.

The combination of Krasadakis and Aleem does not specifically disclose outputting auditory feedback to guide the user in an interaction process of the user with the apparatus. However, in the same endeavor, Godby discloses outputting auditory feedback to guide the user in an interaction process of the user with the apparatus ([0082] Execution of the training mode on the wearable device 260 can cause the wearable device 260 to sequentially output audio instructions to the user to perform one or more actions, either independently in accordance with the training mode). Therefore, it would have been obvious to one of ordinary skill in the art to modify the disclosure of Krasadakis to further include Godby's audio command generation method in order to enhance users' experience.

As to claim 17, Krasadakis discloses a non-transitory computer-readable storage medium comprising computer-readable instructions stored thereon which, when executed by at least one processor, cause the at least one processor to: present a graphical representation of a plurality of zones via a display of an apparatus (e.g., device 302, with UI windows 308, 310, 312, 314, 316, 318, 320, and 322, fig. 3); perform gaze-based control, wherein when performing gaze-based control, the instructions cause the at least one processor to: receive, from an eye-tracking device, an input from a user representing a point of a gaze of the user (for example, the user 102 has a gaze 306 fixed upon UI window 310 for a predetermined period, depicted as input from a user, see fig. 3); identify a gaze target location on the display, wherein when identifying the gaze target location, the instructions further cause the at least one processor to: output auditory feedback to guide the user in an interaction process of the user with the apparatus; and detect a confirmation signal based on information provided by the apparatus, wherein detecting a contact-required input provided by a physical input device ([0029] The touch device 128 may include a touchpad, track pad, touch screen, other touch-capturing device capable of translating physical touches into interactions with software being presented on, through, or by the presentation components 110); select, from the plurality of zones presented via the display of the apparatus, a zone based on the identified gaze target location, wherein the zone is selected only in response to detecting the confirmation signal (UI window 310 is selected to get a daily financial report as shown in fig. 3, and window 310 is focused when the user gazes in its direction).

Krasadakis does not specifically disclose, in response to selecting the zone, performing scanning-based control within the selected zone by scanning the selected zone in an x-direction and a y-direction.
However, in the same endeavor, Aleem discloses, in response to selecting the zone, performing scanning-based control within the selected zone by scanning the selected zone in an x-direction and a y-direction ([0062] The arrangement and extents of the M illumination areas may be selected such that the M illumination areas all overlap in a region of interest of eye 200, which would correspond to M sub-scans of that region of the eye). Therefore, it would have been obvious to one of ordinary skill in the art to modify the disclosure of Krasadakis to further include Aleem's scanning method in order to improve the eye tracking function.

The combination of Krasadakis and Aleem does not specifically disclose outputting auditory feedback to guide the user in an interaction process of the user with the apparatus. However, in the same endeavor, Godby discloses outputting auditory feedback to guide the user in an interaction process of the user with the apparatus ([0082] Execution of the training mode on the wearable device 260 can cause the wearable device 260 to sequentially output audio instructions to the user to perform one or more actions, either independently in accordance with the training mode). Therefore, it would have been obvious to one of ordinary skill in the art to modify the disclosure of Krasadakis to further include Godby's audio command generation method in order to enhance users' experience.

As to claim 2, the combination of Krasadakis, Aleem and Godby discloses the method of claim 1. The combination further discloses performing the scanning-based control prior to receiving the input representing the point of the gaze of the user, wherein the input representing the point of the gaze of the user is received during the scanning-based control (Krasadakis, [0062] for example, a URL/hyperlink that receives a gaze for a particular content item may be utilized to trigger specialized prompts (micro-animated, inline panels, etc.) that provide the user with the ability to view more information regarding the content item that was or is still being gazed).

As to claim 3, the combination of Krasadakis, Aleem and Godby discloses the method of claim 1. The combination further discloses detecting a gaze target fixating at the gaze target location for a predetermined length of time (Krasadakis, [0062] In other examples, a web link (or uniform resource locator (URL) or similar resources) may be selected and/or visited upon receiving a gaze, which may include additional logic, such as at least x milliseconds of duration and/or combined with a voice command).

As to claim 4, the combination of Krasadakis, Aleem and Godby discloses the method of claim 1. The combination further discloses detecting a blink based on information provided by the eye-tracking device (Krasadakis, [0056] For digital video examples, some or all of the frames of the gaze monitoring may be utilized. Blinking may be ignored based upon any type of suitable threshold of blink time occurring at an interval).

As to claim 5, the combination of Krasadakis, Aleem and Godby discloses the method of claim 1. The combination further discloses detecting a saccade based on information provided by the eye-tracking device (Krasadakis, [0043] in various examples, eye movement of a user may be tracked, along with a variety of related data, such as gaze duration, speed of eye movement, how quickly attention is shifted, and so on).

As to claim 6, the combination of Krasadakis, Aleem and Godby discloses the method of claim 1. The combination further discloses detecting an audio input provided by an audio sensing device (Krasadakis, for example, a picture may be enlarged if the user gazes upon it, a video may begin playing if it attracts a user's gaze, an audio file may begin playing upon viewing, an animation may start upon viewing, or the like).

As to claim 7, the combination of Krasadakis, Aleem and Godby discloses the method of claim 1. The combination further discloses that the zone is selected only in response to detecting the confirmation signal (Krasadakis, [0057] the user 102 has a gaze 306 fixed upon UI window 310, depicted as containing a report. Continuing with this example, the gaze 306 causes the UI window to come into focus. By contrast, the other UI windows 308, 312, 314, 316, 318, 320, and 322 are not in focus (i.e., focus is suspended)).

As to claim 8, the combination of Krasadakis, Aleem and Godby discloses the method of claim 1. The combination further discloses that the apparatus is one of a computer, a tablet computer, and a desktop computer (Krasadakis, [0026] the client computing device 100 may also include less portable devices such as desktop personal computers, kiosks, tabletop devices).

As to claim 10, the combination of Krasadakis, Aleem and Godby discloses the system of claim 9. The combination further discloses that the at least one processor performs the scanning-based control prior to receiving an input representing the point of the gaze of the user, and wherein the input representing the point of the gaze of the user is received during the scanning-based control (Krasadakis, [0062] for example, a URL/hyperlink that receives a gaze for a particular content item may be utilized to trigger specialized prompts (micro-animated, inline panels, etc.) that provide the user with the ability to view more information regarding the content item that was or is still being gazed).

Claim 11: Cancelled.

As to claim 12, the combination of Krasadakis, Aleem and Godby discloses the system of claim 9. The combination further discloses that the at least one processor is further configured to detect a blink based on information provided by the eye-tracking device (Krasadakis, [0056] For digital video examples, some or all of the frames of the gaze monitoring may be utilized. Blinking may be ignored based upon any type of suitable threshold of blink time occurring at an interval).

Claims 13-15: Cancelled.

As to claim 16, the combination of Krasadakis, Aleem and Godby discloses the system of claim 9. The combination further discloses that the apparatus is one of a computer, a tablet computer, and a desktop computer (Krasadakis, [0026] the client computing device 100 may also include less portable devices such as desktop personal computers, kiosks, tabletop devices).

As to claim 18, the combination of Krasadakis, Aleem and Godby discloses the non-transitory computer storage medium of claim 17. The combination further discloses that the instructions cause the at least one processor to perform the scanning-based control prior to receiving the input representing the point of the gaze of the user, wherein the input representing the point of the gaze of the user is received during the scanning-based control (Krasadakis, [0062] for example, a URL/hyperlink that receives a gaze for a particular content item may be utilized to trigger specialized prompts (micro-animated, inline panels, etc.) that provide the user with the ability to view more information regarding the content item that was or is still being gazed).

Claims 19-20: Cancelled.

As to claim 21, the combination of Krasadakis, Aleem and Godby discloses the method of claim 1. The combination further discloses that the scanning-based control is performed only within the selected zone and not in other zones of the plurality of zones (Aleem, e.g., facet 136a, fig. 4D).

As to claim 22, the combination of Krasadakis, Aleem and Godby discloses the method of claim 1. The combination further discloses that each zone of the plurality of zones comprises a zone option selection area, and wherein selecting the zone based on the identified gaze target location comprises identifying which zone option selection area corresponds with the received point of the gaze of the user (Aleem, [0074] Each portion of scan space 300 from which a facet, e.g., 136a, 136b, 136c, 136d, receives infrared light signals may be referred to as a scan subspace).

As to claim 23, the combination of Krasadakis, Aleem and Godby discloses the method of claim 22. The combination further discloses that the zone option selection area corresponds with the received point of the gaze of the user when the input from the user represents that the user's gaze is fixated on the zone option selection area for at least a predetermined length of time (Krasadakis, [0062] In other examples, a web link (or uniform resource locator (URL) or similar resources) may be selected and/or visited upon receiving a gaze, which may include additional logic, such as at least x milliseconds of duration and/or combined with a voice command).

As to claim 24, the combination of Krasadakis, Aleem and Godby discloses the method of claim 22. The combination further discloses that performing scanning-based control within the selected zone comprises scanning additional areas of the selected zone beyond the zone option selection area of the selected zone (Krasadakis, [0062] The content of UI windows 308, 310, 312, 314, 316, 318, 320, and 322 may change or rotate based upon any number of factors).

As to claim 25, the combination of Krasadakis, Aleem and Godby discloses the method of claim 24. The combination further discloses that the zone option selection area for each zone is positioned in a center of the zone, and wherein the additional areas of the selected zone beyond the zone option selection area comprise areas that are spaced apart from the zone option selection area in both the x-direction and the y-direction (Aleem, [0101] As the glints are detected during a scan period, it is possible to predict the trajectory of the gaze of the eye based on changes in the position of the glints. Based on this prediction, it is possible to see which exit pupils the eye would be aligned with for the next frame of display).

As to claim 26, the combination of Krasadakis, Aleem and Godby discloses the method of claim 24. The combination further discloses that the additional areas of the selected zone beyond the zone option selection area comprise areas that are spaced apart from the zone option selection area both positively and negatively in the x-direction and the y-direction (Aleem, [0101] As the glints are detected during a scan period, it is possible to predict the trajectory of the gaze of the eye based on changes in the position of the glints. Based on this prediction, it is possible to see which exit pupils the eye would be aligned with for the next frame of display).
Response to Arguments

Applicant's arguments with respect to claims 1-18 and 21-26 have been considered but are moot because the new ground of rejection does not rely on the combined reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Conness et al., US PGPUB 20140176813, discloses methods and systems for adjusting audio output based on eye tracking input. In some embodiments, a memory stores data defining a boundary based on a coordinate system. The boundary corresponds to a display element of displayed content. An input receives data indicating coordinates of a gaze point location of a user viewing the displayed content. A processor compares the received coordinates of the gaze point location to the boundary corresponding to the display element to determine whether the gaze point location is inside the boundary corresponding to the display element. In response to determining that the gaze point location is inside the boundary corresponding to the display element, the processor adjusts an audio setting of the displayed content.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SAHLU OKEBATO, whose telephone number is (571) 270-3375. The examiner can normally be reached Mon - Fri, 8:00 - 5:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, WILLIAM BODDIE, can be reached at 571-272-0666. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SAHLU OKEBATO/
Primary Examiner, Art Unit 2625
11/6/2025
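
The reply-window arithmetic stated in the conclusion (a three-month shortened statutory period, extendable under 37 CFR 1.136(a) but never beyond six months from the mailing date, with a special rule when the advisory action mails late) can be checked with a short script. The helper below is only a sketch keyed to this action's Nov 6, 2025 mailing date; it assumes python-dateutil is available and is not docketing advice.

```python
# Sketch of the reply deadlines described above; assumes python-dateutil is installed.
from datetime import date
from dateutil.relativedelta import relativedelta

def reply_deadlines(mailing_date: date) -> dict:
    """Three-month shortened statutory period; absolute six-month statutory cap."""
    return {
        "shortened_statutory_period": mailing_date + relativedelta(months=3),
        "absolute_deadline": mailing_date + relativedelta(months=6),
        # If a first reply is filed within two months and the advisory action mails after
        # the three-month date, the shortened period instead runs to the advisory action's
        # mailing date, and extension fees are calculated from that date.
    }

print(reply_deadlines(date(2025, 11, 6)))
# {'shortened_statutory_period': datetime.date(2026, 2, 6), 'absolute_deadline': datetime.date(2026, 5, 6)}
```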

Prosecution Timeline

Oct 29, 2024: Application Filed
Jun 10, 2025: Non-Final Rejection — §103
Aug 26, 2025: Interview Requested
Sep 03, 2025: Applicant Interview (Telephonic)
Sep 03, 2025: Examiner Interview Summary
Sep 11, 2025: Response Filed
Nov 06, 2025: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594450
MOTOR FUNCTION REHABILITATION SYSTEM AND METHOD
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12596511
INFORMATION PROCESSING METHOD, INFORMATION PROCESSING DEVICE, AND NON-TRANSITORY COMPUTER READABLE RECORDING MEDIUM
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12585162
DISPLAYING IMAGES ON TOTAL INTERNAL REFLECTIVE DISPLAYS
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12586547
COMPENSATION DEVICE AND METHOD FOR DISPLAY APPARATUS, DISPLAY APPARATUS, AND COMPUTER STORAGE MEDIUM
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12582002
LEFT AND RIGHT PROJECTORS FOR DISPLAY DEVICE
Granted Mar 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner, based on the five most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 76%
With Interview: 94% (+18.0%)
Median Time to Grant: 2y 10m
PTA Risk: Moderate
Based on 668 resolved cases by this examiner. Grant probability derived from career allow rate.
