Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claim 1 is objected to because of the following informalities: the claim recites the term “on-screen keyboard” multiple times without using the proper antecedent “the on-screen keyboard” after its initial introduction. Appropriate correction is required.
Claim 1 is objected to because of the following informalities: the claim includes a reference number in the last line, “device (1)”, which is not correctly mapped to the corresponding element in the figures. Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim 13 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim is drawn to a “computer-readable storage medium”. The specification discloses various types of system memory 504 ([0054] PG Pub), which might constitute “computer-readable storage medium”. However, the broadest reasonable interpretation of the claim in light of the specification concludes that the claim as a whole covers a transitory signal, which does not fall within the definition of a process, machine, manufacture or composition of matter (In re Nuijten). Therefore, claim 13 does not fall within a statutory category. Examiner suggests amending the claims to recite "non-transitory computer-readable storage medium" or equivalent, thereby excluding a "signal", "carrier wave", or "transmission medium" which are deemed to be non-statutory. Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-15 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Regarding claims 1, 3, 5, 7-9 and 14, the phrase "in particular" renders the claim indefinite because it is unclear whether the limitation(s) following the phrase are part of the claimed invention. See MPEP § 2173.05(d). By virtue of their dependency on claims 1 and 14 respectively, claims 2, 4, 6, 10-13 and 15 are also rejected.
Regarding claims 4 and 9, the phrase "preferably" renders the claim indefinite because it is unclear whether the limitation(s) following the phrase are part of the claimed invention. See MPEP § 2173.05(d). Appropriate correction is required.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-2, 7-8, 11-13 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Rudchenko et al., US 2019/0034057 A1 (hereinafter “Rudchenko”).
Regarding claim 1, Rudchenko discloses a method (FIG. 2, method 200 at [0033]) for generating an optimized on-screen keyboard (FIGS. 4A-4C and virtual keyboard 404A, 404B at [0056]-[0060]) for a device (20) (FIGS. 4A-4C, 5, 6, and 8; device 400, device 500, device 600, tablet device 800 at [0056]-[0060], [0067]-[0072], [0074] and [0084]), in particular a device for a supported communication (device using eye gaze considered a communication aid at [0002]), wherein the device is formed to display an on-screen keyboard (1) (FIGS. 4A-4C and virtual keyboard 404A, 404B at [0056]-[0060], displayed within a user interface 414A at [0056]), which can be operated by a user by means of eye control (FIG. 2 at [0036] describing eye gaze input by a user therein), wherein the method has the following steps:
a) displaying an on-screen keyboard (1) (FIGS. 4A-4C and virtual keyboard 404A, 404B at [0056]-[0060] illustrating a QWERTY keyboard in area 408A), which has a plurality of buttons (FIGS. 4A-4B, the QWERTY keyboard provided has multiple keys at [0056]-[0060]), on the device (20) (FIGS. 4A-4C, 5, 6, and 8; device 400, device 500, device 600, tablet device 800 at [0056]-[0060], [0067]-[0072], [0074] and [0084]);
b) receiving at least one user input with regard to the on-screen keyboard (1) (FIG. 2 and [0036]-[0038] receiving gaze input at steps 206-208), in particular focusing on a button by means of eye control ([0007] generally and FIGS. 2 and 4A-4B describing [0034]-[0038] and [0056]-[0060]);
c) determining context information, at least based on the user input ([0007] generally and FIGS. 2 and 4A-4B describing [0034]-[0038] and [0056]-[0060], given an input is a Q the display dwell times for specific keys are adjusted, the contextual data being considered based on the input and the running program);
d) generating a modified on-screen keyboard based on the context information (FIGS. 4A-4B and [0007] describing various modifications to the keys, and [0056]-[0060] describing modified screen for input after Q has been input), wherein the modified on-screen keyboard includes at least one modified information and/or action element, which is arranged in a focus area of the user (FIGS. 4A-4B and [0007] describing various modifications to the keys including flashing, change colors, and animations/visual cues, in addition to change in the underlying dwell time, and [0056]-[0060] describing modified screen for input after Q has been input, further illustrating suggested word area 406A and 406B); and
e) displaying the modified on-screen keyboard on the device (1) (FIGS. 4A-B illustrating change in display based on original input of Q at [0056]-[0060], with modifications as described therein and at [0007]).
Regarding claim 2, Rudchenko discloses the method according to claim 1 (see above), characterized in that
step c) comprises an evaluation of an input prefix, which is specified by a sequence of
previous user inputs and the user input (FIGS. 4A-4B illustrating sequence of letters “He” at 402A, which is evaluated to produce suggested words 406A as disclosed at [0056]-[0062]),
and step d) comprises the following steps:
generating at least one word proposal based on the input prefix (FIGS. 4A-4B suggested words 406A as disclosed at [0056]-[0062]);
assigning the at least one word proposal to a button of the on-screen keyboard (FIGS. 4A-4B suggested words 406A are buttons produced accordingly as disclosed at [0056]-[0062]), wherein the button displays a following letter with regard to the input prefix and the word proposal (FIGS. 4A-4B suggested words 406A are buttons produced accordingly as disclosed at [0056]-[0062]),
wherein in step d), the at least one word proposal is displayed within the assigned button (FIGS. 4A-4B suggested words 406A are buttons produced and showing the words as disclosed at [0056]-[0062]).
Regarding claim 7, Rudchenko discloses the method according to claim 1 (see above), characterized in that
step c) further comprises the following steps:
detecting environmental data (Rudchenko at FIGS. 2-3 and [0052]-[0056] using GPS environment data), in particular voice (see below, condition satisfied, but also audio sensor capable of detecting voice at [0078]), image (see below, condition satisfied) and/or position data (Rudchenko at [0052] GPS data), from at least one sensor (Rudchenko at FIG. 1 and input devices 512 at [0028]);
assigning the environmental data to at least one environment (Rudchenko at [0052] GPS data), in particular discussion context (Rudchenko at [0052]-[0054]), discussion partner (Rudchenko at [0052] and intended recipient) and/or location (Rudchenko location at [0052]),
and step d) comprises the modification of at least one button based on the environmental data (Rudchenko at [0052]-[0054], dwell times for selection button modified accordingly and dynamic suggestions accordingly at FIGS. 4A-4B and [0056]-[0062]).
Regarding claim 8, Rudchenko discloses the method according to claim 1 (see above), characterized in that
at least one language model (Rudchenko at FIG. 5 and [0067] language processor NLP 513), in particular a large language model (machine learning language models at [0006] and [0043] with regard to FIG. 2), is used for assigning the environmental data to at least one environment (in combination with a semantic determination engine using the environment data at [0052]); and/or generating word proposals, in particular based on the environment ([0052] describing contextual proposals based on environment).
Regarding claim 11, Rudchenko discloses the method according to claim 1 (see above), further comprising the following steps:
detecting a speed, with which a gaze position of the user moves over the on-screen keyboard (Rudchenko at [0024] describing read speed and eye movement over the keyboard);
modifying a trigger threshold of at least one button of the modified on-screen keyboard based on the speed (Rudchenko at [0024] adjusting dwell time accordingly), wherein the trigger threshold specifies a time, during which the button has to at least be focused on in order to be triggered (Rudchenko at [0024] adjusting dwell time dynamically changes accordingly).
Regarding claim 12, Rudchenko discloses the method according to claim 1 (see above),
wherein the detection of the at least one user input comprises a detection of a movement path over the on-screen keyboard (Rudchenko at FIGS. 2-3 and [0038] and [0046]-[0050] path determination),
and the determination of context information comprises a continuous determination of a current input prefix and corresponding word proposals during the input of the movement path (Rudchenko at FIGS. 2-3 and 4A-4B with [0038]-[0040] and [0046]-[0048] describing proposal based on context and the eye movement and proposed words at FIGS. 4A-4B and [0056]-[0062]),
and wherein the word proposals are provided via the modified on-screen keyboard during the input of the movement path (Rudchenko at FIGS. 4A-4B and [0056]-[0062]).
Regarding claim 13, Rudchenko discloses a computer-readable storage medium (Rudchenko at [0010]), which includes instructions, which prompt at least one processor (Rudchenko at FIG. 3, processor 300 at [0034], and FIG. 5 processing unit 502 at [0067]-[0069]), to implement the method according to claim 1 (see above) when the instructions are executed by means of the at least one processor (Rudchenko FIG. 5 and [0067]-[0069] describing execution of programs by the processor 502).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 3, 5-6 and 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Rudchenko in view of Zhou et al., US 2024/0134505 A1 (hereinafter “Zhou”).
Regarding claim 3, Rudchenko discloses the method according to claim 1 (see above).
However, Rudchenko does not explicitly disclose that the method is characterized in that
step c) comprises the following steps:
c1) evaluating a root word based on a sequence of previous user inputs;
c2) generating at least one word proposal, which specifies an inflection of the root word, in particular with regard to person, mode and/or gender;
c3) assigning the at least one word proposal to a button of the on-screen keyboard, wherein the button displays a last or penultimate letter of the word proposal,
wherein in step d), the at least one word proposal is thereby illustrated within the assigned button.
In the same field of endeavor, Zhou discloses the method is characterized in that step c) comprises the following steps:
c1) evaluating a root word based on a sequence of previous user inputs (FIGS. 2-3B and 6 at [0032]-[0039] and [0045]-[0046] determining the root/start of a word and prediction presented accordingly);
c2) generating at least one word proposal (FIGS. 3A-3B, 4 and 6 with 303, 405 and 603 at [0035]-[0039] and [0041]-[0047] with proposed words therein), which specifies an inflection of the root word (FIG. 6 at [0045]-[0047]), in particular with regard to person ([0055] describing name/person recognition to determine context), mode (see above, condition satisfied) and/or gender (see above, condition satisfied);
c3) assigning the at least one word proposal to a button of the on-screen keyboard, wherein the button displays a last or penultimate letter of the word proposal (FIGS. 3A-6 illustrating buttons at 303, 405 and 603 and as disclosed at [0035]-[0039] and [0041]-[0047]),
wherein in step d), the at least one word proposal is thereby illustrated within the assigned button (FIGS. 3A-6 illustrating buttons at 303, 405 and 603 and as disclosed at [0035]-[0039] and [0041]-[0047]).
Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to modify the gaze input display tablet of Rudchenko to incorporate the root determination and word suggestion as disclosed by Zhou because the references are within the same field of endeavor, namely, eye gaze input determination for typing on a virtual keyboard on a display device. The motivation to combine these references would have been to reduce the need to type out words (see Zhou at [0047]). Therefore, a person of ordinary skill in the art would have been motivated to combine the prior art to achieve the claimed invention and there would have been a reasonable expectation of success.
Regarding claim 5, Rudchenko discloses the method according to claim 1 (see above), characterized in that
a plurality of word proposals is generated (Rudchenko at FIGS. 4A-4B and [0056]-[0062]).
However, Rudchenko does not explicitly disclose that the method further comprises a determination of a priority of the respective word proposal, wherein the priority is determined in particular as a function of a usage frequency in a language and/or discussion situation; and
that word proposal, which has the highest priority among all word proposals, which are assigned to the button, is displayed in a respective button, in which a word proposal is displayed.
In the same field of endeavor, Zhou discloses the method further comprises a determination of a priority of the respective word proposal (Zhou FIGS. 6-7D and [0046]-[0048] and [0052]-[0054] describing probability determination of a word), wherein the priority is determined in particular as a function of a usage frequency in a language (Zhou FIGS. 6-7D and [0046]-[0048] and [0052]-[0054] language model determination) and/or discussion situation (Zhou FIGS. 6-7D and [0046]-[0048] and [0052]-[0054] contextual determination therein); and
that word proposal, which has the highest priority among all word proposals, which are assigned to the button (Zhou FIGS. 6-7D and [0046]-[0048] and [0052]-[0054], proposed words as displayed therein), is displayed in a respective button, in which a word proposal is
displayed (Zhou FIGS. 6-7D and [0046]-[0048] and [0052]-[0054] selectable buttons for high probability words provided therein).
Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to modify the gaze input display tablet of Rudchenko to incorporate the prioritized and high probability suggested words as disclosed by Zhou because the references are within the same field of endeavor, namely, eye gaze input determination for typing on a virtual keyboard on a display device. The motivation to combine these references would have been to reduce the need to type out words and improve input efficiency (see Zhou at [0056]). Therefore, a person of ordinary skill in the art would have been motivated to combine the prior art to achieve the claimed invention and there would have been a reasonable expectation of success.
Regarding claim 6, Rudchenko discloses the method according to claim 1 (see above), characterized in that the user input comprises the focusing on a delete button (backspace button for deletion at [0007], [0023] and [0033]).
However, Rudchenko does not explicitly disclose the method further comprises the following steps:
determining a word, which will at least be partly deleted when continuing to focus on the delete button; and
displaying the word within the delete button.
In the same field of endeavor, Zhou discloses the method further comprises the following steps:
determining a word, which will at least be partly deleted when continuing to focus on the delete button (see FIGS. 7A-7D and [0048]-[0056] with focus and selection of a word 705 for editing or deleting having a box surrounding it); and
displaying the word within the delete button (see FIGS. 7A-7D and [0048]-[0056] with selection of a word 705 and will be highlighted accordingly for deletion and/or editing).
Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to modify the gaze input display tablet of Rudchenko to incorporate the display and selection of a word for deletion as disclosed by Zhou because the references are within the same field of endeavor, namely, eye gaze input determination for typing on a virtual keyboard on a display device. The motivation to combine these references would have been to reduce the need to type out words and improve input efficiency (see Zhou at [0056]). Therefore, a person of ordinary skill in the art would have been motivated to combine the prior art to achieve the claimed invention and there would have been a reasonable expectation of success.
Regarding claim 14, Rudchenko discloses a device (20), in particular for a supported communication, which has the following:
a tablet computer (21) (Rudchenko at FIG. 1, tablet 104 and [0028]-[0030] and [0074]; FIG. 7 tablet computing device 706 and [0082]; FIG. 8 tablet 800 at [0084]), which is formed to display an on-screen keyboard (1) (Rudchenko at FIGS. 4A-4C and virtual keyboard 404A, 404B at [0056]-[0060]) to carry out the method according to claim 1 (see above).
However, although Rudchenko discloses various input devices for determination of device input (Rudchenko at [0084]), Rudchenko does not explicitly disclose an eye tracking camera (22), which is formed to detect a gaze position (11) with respect to the on-screen keyboard (1);
wherein the tablet computer (21) is further formed to receive the gaze position (11) from the eye tracking camera (22).
In the same field of endeavor, Zhou discloses an eye tracking camera (22) ([0004] describing the eye-gaze sensor being a camera; FIG. 1 and camera 18 at [0021]-[0025] and [0028]), which is formed to detect a gaze position (11) with respect to the on-screen keyboard (1) ([0004] describing the eye-gaze sensor being a camera; FIG. 1 and camera 18 at [0021]-[0025] and [0028]);
wherein the tablet computer (21) (Zhou at [0021]) is further formed to receive the gaze position (11) from the eye tracking camera (22) (FIG. 1 and camera 18 at [0021]-[0025] and [0028], including depth camera 21).
Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to modify the eye gaze input display device of Rudchenko to incorporate the eye-gaze cameras incorporated into a tablet as disclosed by Zhou because the references are within the same field of endeavor, namely, eye-gaze input methods and systems for input via a virtual keyboard. The motivation to combine these references would have been to improve the visible field of the user’s environment for improved input via the cameras (see Zhou at least at [0023]-[0026] and [0056]). Therefore, a person of ordinary skill in the art would have been motivated to combine the prior art to achieve the claimed invention and there would have been a reasonable expectation of success.
Regarding claim 15, Rudchenko in view of Zhou discloses the device (20) according to claim 14 (see above), characterized in that
the device (20) further has at least one of the following sensors: sound sensor (see below, condition satisfied, also Rudchenko at [0074]), image sensor (see below, condition satisfied, also Rudchenko at [0078]), GPS position sensor (Rudchenko at [0052]).
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Rudchenko in view of Zhou, further in view of Griffin et al., US 2013/0187868 A1 (hereinafter “Griffin”).
Regarding claim 4, Rudchenko discloses the method according to claim 1 (see above), characterized in that the method further comprises the following steps:
detecting that, by means of eye control, the user focusses on a button, in which a word proposal is displayed (Rudchenko at FIGS. 4A-4B at [0056]-[0062]).
However, Rudchenko does not explicitly disclose providing a completion button (12) in the modified on-screen keyboard, preferably in a lower area thereof,
wherein the completion button (12) displays the word proposal and is formed to accept the word proposal upon selection.
In the same field of endeavor, Zhou discloses providing a completion button (12) in the modified on-screen keyboard (Zhou at FIGS. 7A-7D and [0048]-[0056]),
wherein the completion button (12) displays the word proposal and is formed to accept the word proposal upon selection (Zhou at FIGS. 7A-7D and [0048]-[0056]).
Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to modify the gaze input display tablet of Rudchenko to incorporate the completion button displaying the word proposal as disclosed by Zhou because the references are within the same field of endeavor, namely, eye gaze input determination for typing on a virtual keyboard on a display device. The motivation to combine these references would have been to confirm the intended input by the user (see Zhou at [0054]-[0057]). Therefore, a person of ordinary skill in the art would have been motivated to combine the prior art to achieve the claimed invention and there would have been a reasonable expectation of success.
However, Rudchenko in view of Zhou does not explicitly disclose the placement of suggested words is preferably in a lower area thereof.
In the same field of endeavor, Griffin discloses preferably in a lower area thereof (see Griffin at FIG. 3 and [0029]-[0031], describing selectable predictive text bar 350 being placed below the keyboard therein; being a selectable button at [0042] and [0044]).
Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to modify the display input device and method of Rudchenko in view of Zhou to incorporate the suggested text placement as disclosed by Griffin because the references are within the same field of endeavor, namely, display input methods using a virtual keyboard. The motivation to combine these references would have been to improve efficiency of typing and input on a display device (see Griffin at least at [0009]-[0011]). Therefore, a person of ordinary skill in the art would have been motivated to combine the prior art to achieve the claimed invention and there would have been a reasonable expectation of success.
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Rudchenko in view of Griffin et al., US 2013/0187868 A1 (hereinafter “Griffin”).
Regarding claim 9, Rudchenko discloses the method according to claim 1 (see above),
wherein step c) comprises a detection of a dwell time on a first button (Rudchenko and FIG. 2 at 204-208 and [0034]-[0038]),
and the method further comprises the following steps:
determining whether the dwell time exceeds a preselection threshold value (Rudchenko and FIG. 2 at 204-208 and [0034]-[0038] gaze determination based on initial gaze time);
if the dwell time exceeds the preselection threshold value, generating the modified on-screen keyboard in such a way that the latter includes a second button (13) (Rudchenko at FIGS. 4A-4B illustrating 406A at [0056]-[0060], suggested words section with separate buttons);
wherein the second button (13) is configured to trigger an action, which is associated with the first button, in particular the input of a letter, which is displayed on the first button (Rudchenko at FIGS. 4A-4B illustrating 406A at [0056]-[0062], suggested words section with separate buttons triggered based on the first button; for example, selecting first button Q provides suggested words associated with the first button Q), wherein the second button (13) is larger than, preferably at least twice as large as, the first button (Rudchenko at FIGS. 4A-4B and 406A and 406B; for example, the button for “herein” is at least double the length and size of the letter P, and generally the entire area for suggested words exceeds any one button at [0056]-[0062]).
However, Rudchenko does not explicitly disclose wherein the second button (13) is preferably arranged in a lower area of the modified on-screen keyboard.
In the same field of endeavor, Griffin discloses wherein the second button (13) is preferably arranged in a lower area of the modified on-screen keyboard (see Griffin at FIG. 3 and [0029]-[0031], describing selectable predictive text bar 350 being placed below the keyboard therein; being a selectable button at [0042] and [0044]).
Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to modify the eye-gaze predictable input keyboard device and method of Rudchenko to incorporate the selectable bar placement as disclosed by Griffin because the references are within the same field of endeavor, namely, predictive text input devices and methods for displays. The motivation to combine these references would have been to improve efficiency of typing and input on a display device (see Griffin at least at [0009]-[0011]). Therefore, a person of ordinary skill in the art would have been motivated to combine the prior art to achieve the claimed invention and there would have been a reasonable expectation of success.
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Rudchenko in view of Lacey et al., US 2021/0056764 A1 (hereinafter “Lacey”).
Regarding claim 10, Rudchenko discloses the method according to claim 1 (see above).
However, Rudchenko does not explicitly disclose wherein one or several buttons of the on-screen keyboard and of the modified on-screen keyboard are triggered by focusing on a respective area, which is larger than a visible boundary of the button and surrounds the visible boundary.
In the same field of endeavor, Lacey discloses wherein one or several buttons of the on-screen keyboard and of the modified on-screen keyboard are triggered by focusing on a respective area (FIG. 34 and gaze of 3420 at [0342]), which is larger than a visible boundary of the button and surrounds the visible boundary (FIG. 34 and gaze of 3420 at [0342], gaze area is larger than the key boundaries).
Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to modify the eye gaze display input device of Rudchenko to incorporate the virtual keyboard gaze location and area as disclosed by Lacey because the references are within the same field of endeavor, namely, virtual display input devices and methods particularly for a keyboard. The motivation to combine these references would have been to improve identification of the particular key intended by user based on the size of keys (see Lacey at least at [0342]). Therefore, a person of ordinary skill in the art would have been motivated to combine the prior art to achieve the claimed invention and there would have been a reasonable expectation of success.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Badman et al., US 2025/0013316 A1;
Zhao et al., US 2024/0256031 A1;
Morris et al., US 2017/0293402 A1;
Paek et al., US 2016/0070441 A1.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SARVESH J. NADKARNI whose telephone number is (571)270-7562. The examiner can normally be reached 8AM-5PM M-F.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Benjamin C. Lee can be reached at (571) 272-1963. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SARVESH J NADKARNI/Examiner, Art Unit 2621