DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
The amendment filed on 11/26/2025 has been entered. Claims 1, 6, 11, and 18 have been amended. Claims 21-23 have been added. Claims 1-23 are pending.
Response to Arguments
Applicant’s arguments with respect to claim(s) 1-23 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claim 20 is rejected under 35 U.S.C. 102(a)(1) as being anticipated by Mckenzie et al. (US Pub. No. 2017/0322623).
Regarding claim 20, Mckenzie discloses:
An apparatus for entering text by movement of fingers of a user having an eye, (at least fig. 5A-B and paragraph 28. Describes that initialization in this manner may establish a connection between the letter H displayed on the virtual keyboard 450 (in this example, the first letter of the text to be entered into the virtual text entry box 480), and a corresponding relative position of the user's finger on the touch surface 108 of the controller 102) comprising:
A sensor in communication with at least one finger of the fingers which detects movement of the at least one finger and produces a finger signal, (at least fig. 5A-E and paragraphs 22, 28. Describes a touch sensor such as is included in a touch sensitive surface of a handheld electronic device, or smartphone, and other such sensors and/or different combination(s) of sensors. Para. 28 describes: once the user has established this connection, the user may maintain the touch on the touch surface 108 of the controller 102, and then drag or swipe along the touch surface 108 of the controller to enter subsequent characters);
A computer in communication with the sensor which receives the finger signal and associates proposed text with the finger signal, (at least fig. 5A-E and paragraphs 17, 28. Describes that the handheld electronic device 102 is just one example of an external computing device which may be operably coupled with the HMD 100. Para. 28 describes: initialization in this manner may establish a connection between the letter H displayed on the virtual keyboard 450 (in this example, the first letter of the text to be entered into the virtual text entry box 480), and a corresponding relative position of the user's finger on the touch surface 108 of the controller 102); and
A display of a virtual reality headset in communication with the computer upon which the proposed text is displayed, the computer selecting desired text from the proposed text on the display based on at least one additional finger signal from the sensor, (at least fig. 5A-E and paragraph 34. Describes that the system may anticipate, or predict, the user's intended entry based on, for example, characters already entered, word(s) and/or phrase(s) preceding the current entry, a particular user's usage history, and other such factors, and may display a list 460 including one or more recommended or suggested entries, as shown in FIG. 5E. In this situation, the user may, instead of completing all of the steps discussed above with respect to FIGS. 5E-5F to complete entry of the word “hello,” re-direct the finger touch and drag, or swipe, along the touch surface 108 of the controller 102 to the desired entry in the list 460 to complete entry of the desired word in the text entry box 480).
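For illustration of the mechanism relied upon above, the prediction-and-swipe-selection behavior Mckenzie describes at paragraph 34 may be paraphrased as a short Python sketch. Neither reference contains source code; the vocabulary, function names, and ranking below are hypothetical and serve only to restate the cited disclosure.

    # Hypothetical paraphrase of Mckenzie para. 34; all names and data invented.
    from typing import List

    VOCABULARY = {"hello": 120, "help": 85, "helmet": 10, "hero": 42}  # word -> usage count

    def suggest(prefix: str, limit: int = 3) -> List[str]:
        """Predict intended entries from characters already entered and
        usage history, as in the displayed list 460."""
        candidates = [w for w in VOCABULARY if w.startswith(prefix)]
        return sorted(candidates, key=lambda w: -VOCABULARY[w])[:limit]

    def swipe_to_list(prefix: str, index: int) -> str:
        """An additional finger signal (a swipe re-directed to the list)
        completes entry of the chosen word in the text entry box."""
        return suggest(prefix)[index]

    print(suggest("he"))           # e.g. ['hello', 'help', 'hero']
    print(swipe_to_list("he", 0))  # completes 'hello'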
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-6, 11-14, 18-19, and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Mckenzie et al. (US Pub. No. 2017/0322623) in view of Iwata (US Pub. No. 2010/0231505).
Regarding claim 1, Mckenzie discloses:
An apparatus for entering text by movement of fingers of a user having an eye, (at least fig. 5A-B and paragraph 28. Describes that initialization in this manner may establish a connection between the letter H displayed on the virtual keyboard 450 (in this example, the first letter of the text to be entered into the virtual text entry box 480), and a corresponding relative position of the user's finger on the touch surface 108 of the controller 102) comprising:
A display in communication with the computer upon which the proposed text is displayed, (at least fig. 5A-B and paragraphs 16, 17. Describes that the handheld electronic device 102 is just one example of an external computing device which may be operably coupled with the HMD 100. Para. 16 describes: a virtual keyboard displayed in the virtual environment by the HMD, to facilitate text entry in the virtual environment); and
An eye tracker in communication with the computer, the computer selecting desired text from the proposed text on the display based on the eye tracker identifying where the eye of the user gazes, (at least fig. 6A-F and paragraphs 16, 41. Describes that the gaze tracking device may be included in a head mounted display (HMD) device worn by the user. Para. 41 describes: in this situation, the user may, instead of completing all of the steps discussed above with respect to FIGS. 6D-6E to complete entry of the word “hello,” re-direct his eye gaze to the desired entry in the list 460 to complete entry of the desired word in the text entry box 480).
Mckenzie does not disclose:
A sensor in communication with at least one finger of the fingers which detects solely movement of the at least one finger independent of location of the one finger and produces a finger signal;
A computer in communication with the sensor which receives the finger signal and associates proposed text with the finger signal
Iwata teaches:
A sensor in communication with at least one finger of the fingers which detects solely movement of the at least one finger independent of location of the one finger and produces a finger signal, (at least fig. 1-2 and paragraph 19. Describes the tactile sensors (5) detecting the pressure on the finger tips when the finger tips are pressed against an arbitrary object, such as a desk, and the information processing device inputting the key corresponding to the finger tip pressed against the arbitrary object. Furthermore, the same operations as ordinary click and drag operations using a mouse are achieved by a user wearing a motion detection sensor (4) and a tactile sensor (5) on each of the finger tips of his/her hands, the sensors transmitting motion information about the finger tips to an information processing device);
A computer in communication with the sensor which receives the finger signal and associates proposed text with the finger signal, (at least fig. 1-2 and paragraphs 12, 19. Describes that when a finger tip is pressed against a desk or the like, a tactile sensor (5) composed of a conductive rubber, a piezoelectric element or the like detects the pressure on the finger tip to determine which finger is pressed and transmits a signal to the personal computer (PC) or the like. Para. 12 describes: for example, the user can create a text longer than e-mails or the like created on conventional cellular phones at any time anywhere).
The two references are analogous art because they are both from the same field of endeavor, namely input devices.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the finger-mounted input sensors taught by Iwata into the virtual reality system disclosed by Mckenzie. The motivation to combine Iwata is to enhance the input mechanism by allowing key input without actual operation of a keyboard and by allowing click and drag operations via the fingertips of the hands.
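The proposed combination may likewise be restated as a hedged sketch (all identifiers below are invented; neither Mckenzie nor Iwata discloses source code): Iwata's finger-mounted sensing supplies the finger signal and key identity independent of hand location, while Mckenzie's gaze tracking selects the desired text from the proposed list.

    # Hypothetical sketch of the Mckenzie/Iwata combination; names invented.
    from dataclasses import dataclass

    FINGER_TO_KEY = {"left_index": "f", "left_middle": "d", "right_index": "j"}

    @dataclass
    class FingerTap:
        finger: str  # which fingertip's tactile sensor fired (cf. Iwata para. 19)

    @dataclass
    class Gaze:
        target: int  # index of the gazed-at suggestion (cf. Mckenzie para. 41)

    class CombinedTextEntry:
        def __init__(self) -> None:
            self.buffer = ""

        def on_finger_signal(self, tap: FingerTap) -> None:
            # Key input without an actual keyboard: the identity of the
            # pressed finger alone selects the key, independent of location.
            self.buffer += FINGER_TO_KEY.get(tap.finger, "")

        def on_gaze(self, gaze: Gaze, suggestions: list) -> str:
            # Gaze-directed selection of the desired text from the proposed list.
            return suggestions[gaze.target]

    entry = CombinedTextEntry()
    entry.on_finger_signal(FingerTap("left_index"))         # buffer is now "f"
    print(entry.on_gaze(Gaze(target=0), ["fell", "felt"]))  # prints "fell"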
Regarding claim 11, Mckenzie discloses:
A method for entering text by movement of fingers of a user having an eye, (at least fig. 5A-B and paragraph 28. Describes that initialization in this manner may establish a connection between the letter H displayed on the virtual keyboard 450 (in this example, the first letter of the text to be entered into the virtual text entry box 480), and a corresponding relative position of the user's finger on the touch surface 108 of the controller 102) comprising the steps of:
Moving at least one finger of the fingers, (at least fig. 5A-E and paragraphs 22, 28. Describes a touch sensor such as is included in a touch sensitive surface of a handheld electronic device, or smartphone, and other such sensors and/or different combination(s) of sensors. Para. 28 describes: once the user has established this connection, the user may maintain the touch on the touch surface 108 of the controller 102, and then drag or swipe along the touch surface 108 of the controller to enter subsequent characters);
Displaying on the display the proposed text, (at least fig. 5A-B and paragraphs 16, 17. Describes that the handheld electronic device 102 is just one example of an external computing device which may be operably coupled with the HMD 100. Para. 16 describes: a virtual keyboard displayed in the virtual environment by the HMD, to facilitate text entry in the virtual environment);
Identifying with an eye tracker where the user gazes onto the proposed text displayed on a display, (at least fig. 6A-F and paragraphs 16, 41. Describes that the gaze tracking device may be included in a head mounted display (HMD) device worn by the user. Para. 41 describes: in this situation, the user may, instead of completing all of the steps discussed above with respect to FIGS. 6D-6E to complete entry of the word “hello,” re-direct his eye gaze to the desired entry in the list 460 to complete entry of the desired word in the text entry box 480); and
Selecting desired text from the proposed text by the computer based on the eye tracker identifying where the eye of the user gazes, (at least fig. 6A-F and paragraph 41. Describes that in this situation, the user may, instead of completing all of the steps discussed above with respect to FIGS. 6D-6E to complete entry of the word “hello,” re-direct his eye gaze to the desired entry in the list 460 to complete entry of the desired word in the text entry box 480).
Mckenzie does not disclose:
Causing a sensor in communication with the at least one finger which detects solely movement of the at least one finger independent of location of the one finger to produce a finger signal;
Receiving the finger signal at a computer, associating by the computer proposed text with the finger signal;
Iwata teaches:
Causing a sensor in communication with the at least one finger which detects solely movement of the at least one finger independent of location of the one finger to produce a finger signal, (at least fig. 1-2 and paragraph 19. Describes the tactile sensors (5) detecting the pressure on the finger tips when the finger tips are pressed against an arbitrary object, such as a desk, and the information processing device inputting the key corresponding to the finger tip pressed against the arbitrary object. Furthermore, the same operations as ordinary click and drag operations using a mouse are achieved by a user wearing a motion detection sensor (4) and a tactile sensor (5) on each of the finger tips of his/her hands, the sensors transmitting motion information about the finger tips to an information processing device);
A computer in communication with the sensor which receives the finger signal and associates proposed text with the finger signal, (at least fig. 1-2 and paragraphs 12, 19. Describes that when a finger tip is pressed against a desk or the like, a tactile sensor (5) composed of a conductive rubber, a piezoelectric element or the like detects the pressure on the finger tip to determine which finger is pressed and transmits a signal to the personal computer (PC) or the like. Para. 12 describes: for example, the user can create a text longer than e-mails or the like created on conventional cellular phones at any time anywhere).
Regarding the rejection of claim 11, refer to the motivation of claim 1.
Regarding claim 18, Mckenzie discloses:
A non-transitory readable storage medium which includes a computer program stored on the storage medium for entering text by movement of fingers of a user having an eye, having the computer-generated (at least fig. 5A-B and paragraphs 28, 62. Describes that initialization in this manner may establish a connection between the letter H displayed on the virtual keyboard 450 (in this example, the first letter of the text to be entered into the virtual text entry box 480), and a corresponding relative position of the user's finger on the touch surface 108 of the controller 102. Para. 62 describes: the computer program product may also contain instructions that, when executed, perform one or more methods, such as those described above. The information carrier is a computer- or machine-readable medium, such as the memory 1004, the storage device 1006, or memory on processor 1002) steps of:
Displaying on a display of a virtual reality headset the proposed text, (at least fig. 5A-B and paragraphs 16, 17. Describes that the handheld electronic device 102 is just one example of an external computing device which may be operably coupled with the HMD 100. Para. 16 describes: a virtual keyboard displayed in the virtual environment by the HMD, to facilitate text entry in the virtual environment);
Identifying from an eye tracker where the user gazes onto the proposed text displayed on a display, (at least fig. 6A-F and paragraphs 16, 41. Describes that the gaze tracking device may be included in a head mounted display (HMD) device worn by the user. Para. 41 describes: in this situation, the user may, instead of completing all of the steps discussed above with respect to FIGS. 6D-6E to complete entry of the word “hello,” re-direct his eye gaze to the desired entry in the list 460 to complete entry of the desired word in the text entry box 480); and
Selecting desired text from the proposed text based on the eye tracker identifying where the eye of the user gazes, (at least fig. 6A-F and paragraph 41. Describes that in this situation, the user may, instead of completing all of the steps discussed above with respect to FIGS. 6D-6E to complete entry of the word “hello,” re-direct his eye gaze to the desired entry in the list 460 to complete entry of the desired word in the text entry box 480).
Mckenzie does not disclose:
Associating proposed text from a finger signal obtained from a sensor in communication with at least one finger of the fingers moving which detects solely movement of the at least one finger independent of location of the one finger;
Iwata teaches:
Associating proposed text from a finger signal obtained from a sensor in communication with at least one finger of the fingers moving which detects solely movement of the at least one finger independent of location of the one finger, (at least fig. 1-2 and paragraph 19. Describes the tactile sensors (5) detecting the pressure on the finger tips when the finger tips are pressed against an arbitrary object, such as a desk, and the information processing device inputting the key corresponding to the finger tip pressed against the arbitrary object. Furthermore, the same operations as ordinary click and drag operations using a mouse are achieved by a user wearing a motion detection sensor (4) and a tactile sensor (5) on each of the finger tips of his/her hands, the sensors transmitting motion information about the finger tips to an information processing device).
Regarding the rejection of claim 18, refer to the motivation of claim 1.
Regarding claim 2, Mckenzie discloses:
A virtual reality headset having the display which displays a virtual reality and the proposed text in the virtual reality, (at least fig. 5A-B and paragraphs 16, 17. Describes that the handheld electronic device 102 is just one example of an external computing device which may be operably coupled with the HMD 100. Para. 16 describes: a virtual keyboard displayed in the virtual environment by the HMD, to facilitate text entry in the virtual environment).
Regarding claim 3, Mckenzie does not disclose:
Wherein the computer receives a second finger signal from the sensor which causes the computer to select the desired text.
Iwata teaches:
Wherein the computer receives a second finger signal from the sensor which causes the computer to select the desired text, (at least fig. 1-2 and paragraphs 7, 19. Describes that a home position for an index finger is set on the virtual keyboard (2) shown on the display (3) as with an ordinary keyboard, a cursor corresponding to the index finger is displayed at the home position, and cursors corresponding to the remaining fingers are displayed adjacent thereto so that the user can feel as if the user actually puts his/her finger tips on the virtual keyboard (2) shown on the display (3). Para. 19 describes: the tactile sensors (5) detecting the pressure on the finger tips when the finger tips are pressed against an arbitrary object, such as a desk, and the information processing device inputting the key corresponding to the finger tip pressed against the arbitrary object).
Regarding the rejection of claim 3, refer to the motivation of claim 1.
Regarding claim 4, Mckenzie discloses:
Wherein the computer selecting the desired text based on either direct gaze pointing with dwell, eye switches, discrete gaze gestures, or continuous gaze, (at least fig. 6A-F, 7A-E and paragraphs 16, 41, 50. Describes that the gaze tracking device may be included in a head mounted display (HMD) device worn by the user. Para. 41 describes: in this situation, the user may, instead of completing all of the steps discussed above with respect to FIGS. 6D-6E to complete entry of the word “hello,” re-direct his eye gaze to the desired entry in the list 460 to complete entry of the desired word in the text entry box 480. Para. 50 describes: continuous eye swipes along the virtual keyboard 450 to enter subsequent character(s) while the touch is maintained on the touch surface 108 of the controller 102).
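Of the recited selection modes, direct gaze pointing with dwell admits a compact paraphrase. The following sketch is hypothetical (the dwell threshold and frame period are invented, not taken from Mckenzie) and shows only the general dwell principle:

    # Minimal dwell-time gaze selection sketch; thresholds are assumptions.
    def select_by_dwell(gaze_samples, dwell_s=0.8, frame_s=0.02):
        """Return the gazed-at item once gaze rests on it for dwell_s seconds;
        gaze_samples yields one item id per eye-tracker frame."""
        needed = int(dwell_s / frame_s)
        current, count = None, 0
        for item in gaze_samples:
            if item == current:
                count += 1
                if count >= needed:
                    return item  # dwell completed: item is selected
            else:
                current, count = item, 1
        return None  # gaze never dwelled long enough on any item

    # 40 consecutive frames (0.8 s) on suggestion 2 triggers its selection.
    print(select_by_dwell([1] * 5 + [2] * 40))  # prints 2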
Regarding claim 5, Mckenzie discloses:
Wherein the computer tracks and visualizes a physical keyboard in the virtual reality to facilitate keyboard text entry in virtual reality, (at least fig. 2A-B and paragraphs 34, 53. Describes that a camera 180 may capture still and/or moving images that may be used to help track a physical position of the user and/or the handheld electronic device 102. The captured images may also be displayed to the user on the display 140 in a pass through mode. Para. 53 describes: gaze directed swipes on a virtual keyboard displayed by the system in the virtual environment may be detected and tracked, and translated into a corresponding text input, either alone or together with user input(s) received by the controller).
Regarding claim 6, Mckenzie discloses:
Wherein the computer displays the proposed text either a Lexical Layout, a WordCloud Layout, a Division Layout, or a Pentagon Layout, (at least fig. 5E and paragraph 3. Describes that the list 460 of suggested entries may be displayed to the user as the user implements the entries on the virtual keyboard 450 as described above, allowing the user to swipe over to one of the suggested entries displayed in the list 460 to select it).
Mckenzie does not explicitly disclose:
Wherein the computer displays the proposed text either a Lexical Layout, a WordCloud Layout, a Division Layout, or a Pentagon Layout.
Iwata teaches:
Wherein the computer displays the proposed text either a Lexical Layout, a WordCloud Layout, a Division Layout, or a Pentagon Layout, (at least fig. 5 and paragraph 7. Describes that the tactile sensor (5) detects the pressure on the finger tip pressed against the arbitrary object, such as a desk, to allow key input to word processing application software (1) or the like in the information processing device).
Iwata does not explicitly teach:
Wherein the computer displays the proposed text either a Lexical Layout, a WordCloud Layout, a Division Layout, or a Pentagon Layout.
However, it has been held that a recitation with respect to the manner in which a claimed apparatus is intended to be employed does not differentiate the claimed apparatus from a prior art apparatus satisfying the claimed structural limitations. Ex parte Masham, 2 USPQ2d 1647 (1987).
Regarding claim 12, Mckenzie discloses:
A virtual reality headset having the display and including the step of displaying a virtual reality on the display and the proposed text in the virtual reality, (at least fig. 5A-B and paragraphs 16, 17. Describes that the handheld electronic device 102 is just one example of an external computing device which may be operably coupled with the HMD 100. Para. 16 describes: a virtual keyboard displayed in the virtual environment by the HMD, to facilitate text entry in the virtual environment).
Regarding claim 13, Mckenzie discloses:
The step of the computer receiving a second finger signal from the sensor which causes the computer to select the desired text, (at least fig. 5A-E and paragraph 34. Describes that the system may anticipate, or predict, the user's intended entry based on, for example, characters already entered, word(s) and/or phrase(s) preceding the current entry, a particular user's usage history, and other such factors, and may display a list 460 including one or more recommended or suggested entries, as shown in FIG. 5E. In this situation, the user may, instead of completing all of the steps discussed above with respect to FIGS. 5E-5F to complete entry of the word “hello,” re-direct the finger touch and drag, or swipe, along the touch surface 108 of the controller 102 to the desired entry in the list 460 to complete entry of the desired word in the text entry box 480).
Regarding claim 14, Mckenzie discloses:
The steps of the computer tracking and visualizing a physical keyboard in the virtual reality to facilitate keyboard text entry in virtual reality, (at least fig. 2A-B and paragraphs 34, 53. Describes that a camera 180 may capture still and/or moving images that may be used to help track a physical position of the user and/or the handheld electronic device 102. The captured images may also be displayed to the user on the display 140 in a pass through mode. Para. 53 describes: gaze directed swipes on a virtual keyboard displayed by the system in the virtual environment may be detected and tracked, and translated into a corresponding text input, either alone or together with user input(s) received by the controller).
Regarding claim 19, Mckenzie does not disclose:
Also having the computer-generated step of selecting the desired text from a second finger signal from the sensor.
Iwata teaches:
The computer-generated step of selecting the desired text from a second finger signal from the sensor, (at least fig. 1-2 and paragraphs 7, 19. Describes that a home position for an index finger is set on the virtual keyboard (2) shown on the display (3) as with an ordinary keyboard, a cursor corresponding to the index finger is displayed at the home position, and cursors corresponding to the remaining fingers are displayed adjacent thereto so that the user can feel as if the user actually puts his/her finger tips on the virtual keyboard (2) shown on the display (3). Para. 19 describes: the tactile sensors (5) detecting the pressure on the finger tips when the finger tips are pressed against an arbitrary object, such as a desk, and the information processing device inputting the key corresponding to the finger tip pressed against the arbitrary object).
Regarding the rejection of claim 19, refer to the motivation of claim 1.
Regarding claim 23, Mckenzie discloses:
Where there is no controller, (at least fig. 1 and paragraph 15. Describes that a user wearing, for example, a head mounted display (HMD) device may explore the virtual environment and interact with the virtual environment through various different types of inputs including, for example, manipulation of one or more electronic devices separate from the HMD and/or manipulation of the HMD itself, and/or eye directional gaze, and/or head movement and/or hand/arm gestures).
Allowable Subject Matter
Claims 21-22 are allowed.
Claims 7-10 and 15-17 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. US Pub. No. 2015/0253862 relates to a display unit in the form of a head mounted display, the display unit including a transparent display means; and a control module configured to display an execution screen of a currently executed application in a transparent display area corresponding to the palm of one hand of a user, upon recognition of the user's gaze at the user's hand; to execute a specific function corresponding to a first gesture of associating the execution screen of the application with a specific finger of the user's hand and to display a first item indicating execution of the specific function in a transparent display area corresponding to the specific finger, upon recognition of the first gesture; and to apply a result of execution of the specific function to the application and to display a result of application of the result of execution of the specific function to the application in the transparent display area corresponding to the palm, upon recognition of a second gesture of associating the first item with the application.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to IFEDAYO B ILUYOMADE whose telephone number is (571)270-7118. The examiner can normally be reached Monday-Friday.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Matthew Eason, can be reached at 571-270-7230. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/IFEDAYO B ILUYOMADE/
Primary Examiner, Art Unit 2624
03/03/2026