Prosecution Insights
Last updated: April 19, 2026
Application No. 18/707,234

Methods of Input and Interaction With an Augmentative and Alternative Communications (AAC) Device

Non-Final OA — §102, §Other
Filed
May 03, 2024
Examiner
SILVERMAN, SETH ADAM
Art Unit
2172
Tech Center
2100 — Computer Architecture & Software
Assignee
Tobii Dynavox AB
OA Round
1 (Non-Final)
Grant Probability: 73% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 4m
With Interview: 88%

Examiner Intelligence

Career Allow Rate: 73% — above average (327 granted / 449 resolved; +17.8% vs TC avg)
Interview Lift: +14.8% (moderate), based on resolved cases with an interview
Typical Timeline: 2y 4m average prosecution; 47 applications currently pending
Career History: 496 total applications across all art units
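The headline projections follow from simple arithmetic on the counts shown above. A minimal sketch in Python; the rounding behavior is an assumption about how the dashboard rounds, not taken from the tool:

```python
# Reproducing the dashboard's headline numbers from the counts on this page.
# Assumption: the tool rounds percentages to the nearest whole number.

granted, resolved = 327, 449        # examiner's career totals (from this page)
interview_lift_pct = 14.8           # stated lift for cases with an interview

allow_rate_pct = granted / resolved * 100
print(round(allow_rate_pct))                       # "Grant Probability" (73)
print(round(allow_rate_pct + interview_lift_pct))  # "With Interview" (88)
```

This also explains the apparent mismatch between "Moderate +15% lift" and "+14.8% Interview Lift" elsewhere on the page: both describe the same 14.8-point delta at different rounding precision.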

Statute-Specific Performance

§101: 8.9% (-31.1% vs TC avg)
§103: 58.5% (+18.5% vs TC avg)
§102: 20.1% (-19.9% vs TC avg)
§112: 9.4% (-30.6% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 449 resolved cases

Office Action

§102, §Other
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 6/27/2024 was filed before the first office action. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejection Notes

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claim Rejections - 35 U.S.C. § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-4, 15-18, and 29-32 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Lacey et al. (US 2019/0362557 A1, published 11/28/2019).

Claim 1: Lacey teaches a method of input for using an augmented or alternative communications (AAC) system including an eye tracking device (the wearable system 200 can include an outward-facing imaging system 464 (shown in FIG. 4) which observes the world in the environment around the user; the wearable system 200 can also include an inward-facing imaging system 462 (shown in FIG. 4) which can track the eye movements of the user; the inward-facing imaging system may track either one eye's movements or both eyes' movements [Lacey, 0098]), the method comprising: receiving, by the eye tracking device, at least one gaze input from a user of the eye tracking device (wearable system 200 can determine the gaze direction based on the inward-facing imaging system 462 and can cast a cone 2806 or ray in the gaze direction [Lacey, 0325]); displaying, via a user interface, at least one selected text portion in response to receiving the at least one gaze input, the at least one selected text portion forming a message (the wearable system can select one or more words that intercept with the user's direction of gaze [Lacey, 0325, FIG. 28B]; Examiner's Note: as illustrated), wherein the user interface comprises a message window, and wherein the message is displayed in the message window (a message with a header 2802 and a body 2804 [Lacey, 0322, FIG. 28B]; Examiner's Note: as illustrated); generating, by at least one processor, at least one suggested text portion, the at least one suggested text portion being adaptive to a content of the message (in the examples of FIGS. 28D and 28E, the system may automatically present the user with an array of suggested alternatives such as alternatives 2810a and 2810b upon a selection of the word 2808; the suggested alternatives may be generated by the ASR engine or other language processing engines in the system [Lacey, 0331, FIG. 28D]; Examiner's Note: as illustrated); displaying, via the user interface, the at least one suggested text portion in the message window of the user interface (present the user with an array of suggested alternatives such as alternatives 2810a and 2810b upon a selection of the word 2808 [Lacey, 0331, FIG. 28D]); receiving, by the eye tracking device, at least one second gaze input from the user (FIG. 28E illustrates how the system may enable the user to select a desired alternative word, such as "corner," with eye gaze; the wearable system may use similar techniques as those described with reference to FIG. 28C to select the alternative word [Lacey, 0332]); and modifying, by the at least one processor, the message to add the at least one suggested text portion to the message based on receiving the at least one second gaze input from the user (the user's gaze 2812 has been focused upon a particular alternative, such as alternative 2810a or "corner", for at least a threshold time; after determining that the user's gaze 2812 was focused on an alternative for the threshold time, the system may revise the text (the message) by replacing the originally selected word with the selected alternative word 2814, as shown in FIG. 28F [Lacey, 0332]).

Claims 15 and 29, sharing similar deficiencies with claim 1, are likewise rejected.

Claim 2: Lacey teaches the method of claim 1. Lacey further teaches wherein, when receiving the at least one second gaze input, the method further comprises: determining a location of the at least one suggested text portion in the message window; determining a location of the at least one second gaze input (the system may track the user's eyes using inward-facing imaging system 462 to determine that the user's gaze 2812 has been focused upon a particular alternative, such as alternative 2810a or "corner", for at least a threshold time [Lacey, 0332]); determining whether the location of the at least one suggested text portion in the message window is the same as the location of the at least one second gaze input (after determining that the user's gaze 2812 was focused on an alternative for the threshold time, the system may revise the text (the message) by replacing the originally selected word with the selected alternative word 2814, as shown in FIG. 28F; in certain implementations, where the wearable system uses cone casting to select a word, the wearable system can dynamically adjust the size of the cone based on the density of the text [Lacey, 0332]); and verifying the at least one second gaze input is a confirmatory input based on determining the location of the at least one suggested text portion in the message window is the same as the location of the gaze input (FIG. 28E illustrates how the system may enable the user to select a desired alternative word, such as "corner," with eye gaze; the wearable system may use similar techniques as those described with reference to FIG. 28C to select the alternative word; for example, the system may track the user's eyes using inward-facing imaging system 462 to determine that the user's gaze 2812 has been focused upon a particular alternative for at least a threshold time; after determining that the user's gaze 2812 was focused on an alternative for the threshold time, the system may revise the text by replacing the originally selected word with the selected alternative word 2814, as shown in FIG. 28F; in certain implementations, where the wearable system uses cone casting to select a word, the wearable system can dynamically adjust the size of the cone based on the density of the text; for example, the wearable system may present a cone with a bigger aperture (and thus with a bigger surface area at the end away from the user) to select an alternative word for editing as shown in FIG. 28E because there are few available options, but may present the cone with a smaller aperture to select the word 2808 in FIG. 28C because the word 2808 is surrounded by other words and a smaller cone can reduce the error rate of accidentally selecting another word [Lacey, 0332]).

Claims 16 and 30, sharing similar deficiencies with claim 2, are likewise rejected.

Claim 3: Lacey teaches the method of claim 2. Lacey further teaches the method further comprising: determining whether a length of time of the at least one second gaze input satisfies a threshold value, wherein the threshold value is based on a length of the at least one suggested text portion (the user's gaze 2812 has been focused upon a particular alternative, such as alternative 2810a or "corner", for at least a threshold time [Lacey, 0332]).

Claims 17 and 31, sharing similar deficiencies with claim 3, are likewise rejected.

Claim 4: Lacey teaches the method of claim 1. Lacey further teaches wherein receiving the at least one second gaze input further comprises: displaying a visual indication, indicating that the at least one suggested text portion will be added to the message ([Lacey, FIG. 28E]; Examiner's Note: item 2812).

Claims 18 and 32, sharing similar deficiencies with claim 4, are likewise rejected.

Additional References

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. The following references concern controlling the input of text through eye gaze tracking:

Moulder et al. (US 2013/0138421 A1, published 5/30/2013)
Kristensson et al. (US 2016/0062458 A1, published 3/3/2016)
Powderly et al. (US 2018/0307303 A1, published 10/25/2018)

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SETH A SILVERMAN, whose telephone number is (571) 272-9783. The examiner can normally be reached Mon-Thur, 8 AM-4 PM MST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Adam Queler, can be reached at (571) 272-4140. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Seth A Silverman/
Primary Examiner, Art Unit 2172
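The dwell-based confirmation the examiner maps to claims 2-3 (gaze held on a suggestion's on-screen location for at least a threshold time before the word is adopted) can be sketched as follows. This is an illustrative reconstruction, not code from the application or from Lacey; the names Suggestion, gaze_hits, and confirm_suggestion are hypothetical:

```python
# Illustrative sketch of dwell-based gaze confirmation: a suggested word is
# adopted only if gaze samples stay inside its screen region long enough.
# All names and parameters here are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class Suggestion:
    text: str
    region: tuple  # (x, y, w, h) of the suggestion in the message window

def gaze_hits(region, gaze):
    """True if a (gx, gy) gaze point falls inside the rectangular region."""
    x, y, w, h = region
    gx, gy = gaze
    return x <= gx <= x + w and y <= gy <= y + h

def confirm_suggestion(suggestion, gaze_samples, sample_dt, dwell_threshold):
    """Return True once consecutive in-region gaze samples accumulate at
    least dwell_threshold seconds of dwell (counted in whole samples)."""
    needed = int(round(dwell_threshold / sample_dt))  # samples required
    streak = 0
    for gaze in gaze_samples:
        if gaze_hits(suggestion.region, gaze):
            streak += 1
            if streak >= needed:
                return True
        else:
            streak = 0  # gaze left the region; restart the dwell count
    return False
```

Claim 3's limitation that the threshold depends on the length of the suggested text could be modeled by scaling dwell_threshold with len(suggestion.text); likewise, the cone-casting aperture adjustment Lacey describes would correspond to growing or shrinking the hit region based on how densely the suggestions are packed.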

Prosecution Timeline

May 03, 2024
Application Filed
Feb 13, 2026
Non-Final Rejection — §102, §Other (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12587581
SYSTEMS, METHODS, AND MEDIA FOR CAUSING AN ACTION TO BE PERFORMED ON A USER DEVICE
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12579201
INFORMATION PROCESSING SYSTEM
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12578200
NAVIGATIONAL USER INTERFACES
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12572269
PERFORMING A CONTROL OPERATION BASED ON MULTIPLE TOUCH POINTS
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12572261
SPATIAL NAVIGATION AND CREATION INTERFACE
Granted Mar 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 73%
With Interview: 88% (+14.8%)
Median Time to Grant: 2y 4m
PTA Risk: Low
Based on 449 resolved cases by this examiner. Grant probability derived from career allow rate.
