DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/16/2025 has been entered.
Response to Arguments
Applicant's arguments filed 12/16/2025 have been fully considered.
Regarding claim 42, the applicant argues that the cited art fails to teach or suggest the amended claim limitation “wherein the working area is configured to move within the virtual environment such that a representation of a physical object located within the working area remains within the working area responsive to physical movement of one or both of the user and the physical object from a primary position in a physical space to a secondary, different position in the physical space”. The arguments have been fully considered. The argument regarding “such that a representation of a physical object located within the working area remains within the working area responsive to physical movement of one or both of the user and the physical object from a primary position in a physical space to a secondary, different position in the physical space” is persuasive. Therefore, the 35 U.S.C. 103 rejection has been withdrawn. However, upon further consideration, new grounds of rejection are made based on newly applied art. The argument regarding “wherein the working area is configured to move within the virtual environment” is not persuasive. The examiner cannot concur with the applicant for the following reasons:
Stafford discloses “wherein the working area is configured to move within the virtual environment”. For example, in Fig. 2A and paragraph [0035], Stafford teaches that a gaze detection system determines an approximate location; Stafford further teaches that the cursor within the virtual environment is moved from a first location to a second location;
[image: media_image1.png]
Stafford furthermore teaches that the cursor is in the working area. In Fig. 3 and paragraph [0051], Stafford teaches the estimation of the distance between the POG and the current position of the cursor.
Marggraff discloses “wherein the working area is configured to move within the virtual environment”. For example, in Fig. 29 and paragraph [0423], Marggraff teaches determining relative positions during the typing process;
[image: media_image2.png]
In paragraphs [0424-0425], Marggraff teaches minimizing eye movements; Marggraff further teaches that physical and projected keyboards are optimized in a second position for use in enhanced modes.
Claims 80 and 81 are not allowable for similar reasons as discussed above.
All dependent claims are not allowable for similar reasons as discussed above.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 42-51, 53-68 and 71-82 are rejected under 35 U.S.C. 103 as being unpatentable over Stafford (US 20120272179 A1) in view of Marggraff (US 20180011533 A9), and further in view of Kawano (US 20230222742 A1).
Regarding claim 42 (Currently Amended), Stafford discloses a method for integrated gaze interaction with a virtual environment ([0055]: an eye tracker is a device for measuring eye position and eye movement; Fig. 11; [0085]: an algorithm for interfacing a user with a computer program executing in a computing device; identify the POG of the user; the user has initiated a physical action to move the cursor; the cursor is moved to a region proximate to the POG), the method comprising:
receiving a gaze activation input from a user to activate a gaze tracker (Fig. 2A; [0035]: a gaze detection system determines gaze detection based on image analysis of images taken by a camera; [0043]: the user must look for at least one second to a window of the screen before gaze detection is engaged; [0047]: the trigger to determine if gaze-assistance is used is whether the POG is in the same window as the current mouse cursor; [0048]: the start of mouse motion is the physical user action that triggers gaze-assistance; [0049]: the gaze assistance only begins when the mouse is moved at least a threshold mouse distance; Fig. 4B; [0058]: the cameras track the user features such as eyes 416 and 418, face, nose 422, mouth 420, and torso 414);
defining a first position in the virtual environment based on a gaze tracker user input as determined by the gaze tracker (Fig. 2A; [0035]: gaze detection system determines an approximate location; the cursor of the mouse is moved from a first location to a second location;
[image: media_image1.png]
; the cursor of the mouse is in the virtual environment; [0053]: the POG defines a first point on the display, and the cursor defines a second point on the display; Fig. 6; [0068]: when the user performs a two-finger tap on touchpad 612, while directing the user's gaze 608 to POG 620, the mouse cursor jumps to position 618);
defining a working area within the virtual environment adjacent the first position, wherein the working area is defined to comprise only a portion of the virtual environment (Fig. 2A; [0036]: the system relies solely on the mouse movement to find the final destination for the cursor; [0037]: when the cursor of a mouse is far away from the POG, the cursor moves fast, but when the cursor of a mouse starts approaching the destination, the cursor slows down; the cursor of a mouse is in the virtual environment; Fig. 2B; [0046]: define a circle 254 around the POG;
[image: media_image3.png]
; [0047]: the mouse cursor is moving within the same window; [0049]: a web page; users move the mouse; a user may be reading a web page while holding the mouse, and the user may move the mouse several millimeters due to a small hand motion; Fig. 6; [0068]: window 616 is selected because mouse cursor is now inside window 616);
defining a second position inside the working area, wherein the second position is different than the first position and is defined based on a position of a first body member relative to a coordinate system (Stafford; Fig. 2A; [0035]: gaze detection system determines an approximate location; the cursor is moved from a first location to a second location;
[image: media_image1.png]
; [0053]: the POG defines a first point on the display, and the cursor defines a second point on the display; [0063]: when the head is moving, head tracking is used to determine the target area; once head motion substantially comes to a stop, gaze tracking is utilized to fine tune the location of the target area; Fig. 8; [0075]), wherein the coordinate system and the working area have substantially the same shape (Stafford; Fig. 2A; [0035]: gaze detection system determines an approximate location; the cursor is moved from a first location to a second location;
[image: media_image1.png]
; both are square in shape as illustrated in Fig. 2A), and wherein the second position has the same relative position in the working area as the position of the first body member in the coordinate system (Stafford; Fig. 2A; [0035]: gaze detection system determines an approximate location; the cursor is moved from a first location to a second location;
[image: media_image1.png]
; the same relative position is shown in Fig. 2A; Fig. 3; [0051]: the estimation of the distance between the POG and the current position of the cursor); and
wherein the working area is configured to move within the virtual environment (Stafford; Fig. 2A; [0035]: gaze detection system determines an approximate location; the cursor within the virtual environment is moved from a first location to a second location;
[image: media_image1.png]
; the cursor is in the working area; Fig. 3; [0051]: the estimation of the distance between the POG and the current position of the cursor).
Stafford fails to explicitly disclose:
wherein the virtual environment is an artificial environment provided by a computing device and experienced by a user through sensory stimuli in which the user's actions determine, at least in part, what happens within the virtual environment;
operating the virtual environment at the second position within the working area, by a first user input from at least one input device different from the gaze tracker; and
such that a representation of a physical object located within the working area remains within the working area responsive to physical movement of one or both of the user and the physical object from a primary position in a physical space to a secondary, different position in the physical space.
In the same field of endeavor, Marggraff teaches:
wherein the virtual environment is an artificial environment provided by a computing device and experienced by a user through sensory stimuli in which the user's actions determine, at least in part, what happens within the virtual environment ([0215]: any combination of real-world objects and virtual objects on one or more displays; [0325]: visualize and control 3-dimensional virtual objects within a 3-dimensional space; [0333]: a pointer is transported from a location on a virtual reality headset to the display on a smart phone; Fig. 27; [0412]: a selectable area may apply only to one region of a virtual display; [0413]: guide a user's eye toward a selectable region; the viewable region 613 is moved within the virtual display area 610; Fig. 28; [0421]: the device user may not be able to see the position of the keyboard 652 and hands 653a, 653b; Fig. 29; [0422]: the display 660 is a part of the headwear 650, e.g., augmented reality, virtual reality or heads up displays; [0423]: Characters and words are appended at a location 665 relative to a body of text 664);
wherein the coordinate system and the working area have substantially the same shape, and wherein the second position has the same relative position in the working area as the position of the first body member in the coordinate system (Fig. 28; [0421]: a keyboard 652;
[image: media_image4.png]
Fig. 29; [0422]: a projection of the keyboard 662;
[image: media_image5.png]
);
operating the virtual environment at the second position within the working area, by a first user input from at least one input device different from the gaze tracker ([0003]: Human-Machine Interfaces is HMI; [0316]: a pointer is transported upon initiating a movement of a pointer in the general direction toward the target location being viewed by the device wearer; [0318]: upon transporting to a focal region, another HMI device is seamlessly used to control and activate selections; the devices include touch pads, computer mice, and joy sticks; the devices offer greater precision compared with eye signals; make selection over spatial ranges down to the visual acuity of the device wearer; Fig. 20A; [0328]: upon reaching a threshold distance within a continuous movement, the cursor 511a is made to move essentially instantaneously or “jump” 515 to the region of the gaze 513; [0329]: until the user directs the pointer to arrive at the desired location, and then acts by clicking or activating the pointing device; Fig. 29; [0423]: determine relative positions during the typing process;
[image: media_image2.png]
; [0424-0425]: minimize eye movements; physical and projected keyboards are optimized in second position for use in enhanced modes);
wherein the working area is configured to move within the virtual environment (Fig. 29; [0423]: determine relative positions during the typing process;
[image: media_image2.png]
; [0424-0425]: minimize eye movements; physical and projected keyboards are optimized in second position for use in enhanced modes).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Stafford to include wherein the virtual environment is an artificial environment provided by a computing device and experienced by a user through sensory stimuli in which the user's actions determine, at least in part, what happens within the virtual environment; wherein the coordinate system and the working area have substantially the same shape, and wherein the second position has the same relative position in the working area as the position of the first body member in the coordinate system; operating the virtual environment at the second position within the working area, by a first user input from at least one input device different from the gaze tracker; wherein the working area is configured to move within the virtual environment as taught by Marggraff. The motivation for doing so would have been to interact with and control the virtual environment; to distinguish between movements of the eye; to perform an activation saccade; to transport a pointer upon initiating a movement of a pointer in the general direction toward the target location being viewed by the device wearer; to control and activate selections using another HMI device as taught by Marggraff in paragraphs [0005], [0242-0243], [0316], and [0318].
Stafford in view of Marggraff fails to explicitly disclose:
such that a representation of a physical object located within the working area remains within the working area responsive to physical movement of one or both of the user and the physical object from a primary position in a physical space to a secondary, different position in the physical space.
In the same field of endeavor, Kawano teaches:
such that a representation of a physical object located within the working area remains within the working area responsive to physical movement of one or both of the user and the physical object from a primary position in a physical space to a secondary, different position in the physical space (or is optional; Fig. 7; [0097]: determine whether or not each of recognition objects B1 to B4 has moved between before and after the activation of the AR glasses 30;
[image: media_image6.png]
; Fig. 9; [0103]: a movement of the hand of the user of the AR glasses 30; Fig. 9; [0104-0105]: the hand H_Px of the user is located at a position where the virtual operation object OBx can be operated; the table, the device, and an operative object are in the working area responsive to the hand movement;
[image: media_image7.png]
; Fig. 12; [0133-0134]: the virtual operation object OB.sub.Y is moved until the virtual operation object OB.sub.Y reaches the projection position PJB of the superimposition target object;
[image: media_image8.png]
).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Stafford in view of Marggraff to include such that a representation of a physical object located within the working area remains within the working area responsive to physical movement of one or both of the user and the physical object from a primary position in a physical space to a secondary, different position in the physical space as taught by Kawano. The motivation for doing so would have been to detect a movement of a hand of the user in a state where the operation object is displayed; to move the virtual operation object OB.sub.Y until the virtual operation object OB.sub.Y reaches the projection position PJB of the superimposition target object; to improve the processing efficiency of the AR glasses 30 as taught by Kawano in Fig. 12, and paragraphs [0006], [0133-0134], and [0213].
Regarding claim 43 (Previously Presented), Stafford in view of Marggraff and Kawano discloses the method according to claim 42, wherein defining the first position deactivates the gaze tracker (Stafford; [0047]: deactivate gaze-assistance and gaze tracker when the mouse is moving within the same window; [0049]: a user may be reading a web page while holding the mouse, and the user may move the mouse several millimeters due to a small hand motion; when a user reads a web page, deactivate the gaze assistance).
Regarding claim 44 (Previously Presented), Stafford in view of Marggraff and Kawano discloses the method according to claim 42, wherein the working area comprises a first working area (same as rejected in claim 42), the method further comprising:
receiving a second gaze activation input from the user to activate the gaze tracker (Stafford; Fig. 2A; [0035]: A gaze detection system determines gaze detection based on image analysis of images taken by camera; [0043]: the user must look for at least one second to a window of the screen before gaze detection is engaged; [0047]: the trigger to determine if gaze-assistance is used is whether the POG is in the same window as the current mouse cursor; [0048]: the start of mouse motion is the physical user action that triggers gaze-assistance; [0049]: gaze assistance only begins when the mouse is moved at least a threshold mouse distance; Fig. 4B; [0058]: the cameras track user features such as eyes 416 and 418, face, nose 422, mouth 420, and torso 414);
defining a second position in the virtual environment based on a second gaze tracker user input (Stafford; Fig. 2A; [0035]: gaze detection system determines an approximate location; the cursor is moved from a first location to a second location;
[image: media_image1.png]
; [0053]: the POG defines a first point on the display, and the cursor defines a second point on the display);
defining the second working area adjacent the second position as only a part of the virtual environment (Stafford; Fig. 2A; [0036]: the system relies solely on the mouse movement to find the final destination for the cursor; [0037]: when the cursor is far away from the POG, the cursor moves fast, but when the cursor starts approaching the destination, the cursor slows down; Fig. 2B; [0046]: define a circle 254 around the POG;
[image: media_image3.png]
; [0047]: the mouse is moving within the same window; [0049]: a web page);
returning to the first working area (Stafford; Fig. 2B; [0046]: define a circle 254 around the POG;
[image: media_image3.png]
; [0050]: the cursor moves quickly through the desktop between the original position and the destination; [0062]: jump the position of the cursor to the target area; [0065]: cause a jump to the bottom of the page; Fig. 6; [0068]: directing the user's gaze 608 to POG 620, the mouse cursor jumps to position 618, which is proximate to POG 620 or exactly at POG 620); and
Stafford in view of Marggraff and Kawano further discloses:
returning to the first working area (Marggraff; [0316]: transport of the pointer to a new location);
operating the virtual environment within the first working area only, by the first user input from the at least one input device different from the gaze tracker; or
operating the virtual environment within the second working area only, by the first user input from the at least one input device different from the gaze tracker (or is optional; Marggraff; [0003]: Human-Machine Interfaces is HMI; [0316]: a pointer is transported upon initiating a movement of a pointer in the general direction toward the target location being viewed by the device wearer; [0318]: upon transporting to a focal region, another HMI device may seamlessly be used to control and activate selections; the devices include touch pads, computer mice, and joy sticks; the devices offer greater precision compared with eye signals; make selection over spatial ranges down to the visual acuity of the device wearer).
The same motivation as in claim 42 is applied here.
Regarding claim 45 (Previously Presented), Stafford in view of Marggraff and Kawano discloses the method according to claim 42, further comprising:
receiving a second gaze activation input from the user to activate the gaze tracker (Stafford; Fig. 2A; [0035]: A gaze detection system determines gaze detection based on image analysis of images taken by camera; [0043]: the user must look for at least one second to a window of the screen before gaze detection is engaged; [0047]: the trigger to determine if gaze-assistance is used is whether the POG is in the same window as the current mouse cursor; [0048]: the start of mouse motion is the physical user action that triggers gaze-assistance; [0049]: gaze assistance only begins when the mouse is moved at least a threshold mouse distance; Fig. 4B; [0058]: the cameras track user features such as eyes 416 and 418, face, nose 422, mouth 420, and torso 414);
returning to the working area (Stafford; Fig. 2B; [0046]: define a circle 254 around the POG;
[image: media_image3.png]
; [0050]: the cursor moves quickly through the desktop between the original position and the destination; [0062]: jump the position of the cursor to the target area; [0065]: cause a jump to the bottom of the page; Fig. 6; [0068]: directing the user's gaze 608 to POG 620, the mouse cursor jumps to position 618, which is proximate to POG 620 or exactly at POG 620); and
Stafford in view of Marggraff and Kawano further discloses:
receiving an interruption input from the user to deactivate the gaze tracker (Marggraff; [0283]: the presence of mutual gaze is identified using the scene camera of the device wearer; [0331]: if the pointer is in the vicinity of eye gaze, i.e., within a pre-defined separation between pointer and gaze locations, then transport is automatically disabled);
operating the virtual environment within the working area only, by the first user input from the at least one input device different from the gaze tracker (Marggraff; [0003]: Human-Machine Interfaces is HMI; [0316]: a pointer is transported upon initiating a movement of a pointer in the general direction toward the target location being viewed by the device wearer; [0318]: upon transporting to a focal region, another HMI device may seamlessly be used to control and activate selections; the devices include touch pads, computer mice, and joy sticks; the devices offer greater precision compared with eye signals; make selection over spatial ranges down to the visual acuity of the device wearer).
The same motivation as in claim 42 is applied here.
Regarding claim 46 (Previously Presented), Stafford in view of Marggraff and Kawano discloses the method according to claim 42, wherein the gaze activation input is received from the at least one input device (Stafford; Fig. 2A; [0035]: a gaze detection system determines gaze detection based on image analysis of images taken by a camera; the camera is an input device; [0043]: the user must look for at least one second to a window of the screen before gaze detection is engaged; [0047]: the mouse is an input device; the trigger to determine if gaze-assistance is used is whether the POG is in the same window as the current mouse cursor; [0048]: the mouse is an input device; the start of mouse motion is the physical user action that triggers gaze-assistance; [0049]: gaze assistance only begins when the mouse is moved at least a threshold mouse distance; Fig. 4B; [0058]: the cameras track user features such as eyes 416 and 418, face, nose 422, mouth 420, and torso 414).
Regarding claim 47 (Previously Presented), Stafford in view of Marggraff and Kawano discloses the method according to claim 42, wherein a cursor of the virtual environment is moved to within the working area when the first position has been defined (Stafford; Fig. 2A; [0036]: the system relies solely on the mouse movement to find the final destination for the cursor; [0037]: when the cursor is far away from the POG, the cursor moves fast, but when the cursor starts approaching the destination, the cursor slows down; Fig. 2B; [0046]: define a circle 254 around the POG;
[image: media_image3.png]
; [0047]: the mouse is moving within the same window; [0049]: a web page).
Regarding claim 48 (Previously Presented), Stafford in view of Marggraff and Kawano discloses the method according to claim 42, wherein operating the virtual environment comprises at least one of:
moving a cursor within the working area (Stafford; [0047]: the mouse is moving within the same window; the mouse moves the cursor within the same window without using camera; [0049]: users move the mouse; a user may be reading a web page while holding the mouse, and the user may move the mouse several millimeters due to a small hand motion; moving cursor is one of);
scrolling an application window or slide;
zooming an application window or slide;
swiping from a first window or first slide to a second window or second slide (or is optional; Stafford; Fig. 8; [0075]: move window 810 from left to right; cause window 810 to end in position 812 in display 806);
activating or deactivating checkboxes;
selecting radio buttons;
navigating and selecting from dropdown lists;
navigating and activating and deactivating items from list boxes;
clicking a button in the virtual environment or icon (or is optional; Stafford; Fig. 9; [0078]: icon; display the contents of folder);
clicking a menu button or icon (or is optional; Stafford; Fig. 9; [0078]: icon; display the contents of folder);
activating and deactivating toggles;
manipulating text fields;
manipulating windows, fields, and message boxes;
manipulating sliders/track bar and carousels; and/or
activating and deactivating tool tips (or is optional).
Regarding claim 49 (Previously Presented), Stafford in view of Marggraff and Kawano discloses the method according to claim 42, wherein at least one movement by a body member is registered by a camera for operating the virtual environment, wherein the body member comprises at least one of:
an eyelid;
a hand;
an arm; and
a leg of the user (one of; Stafford; [0034]: the motion of an eye relative to the head; Fig. 6; [0068]: the user performs a two-finger tap on touchpad 612; Fig. 8; [0075]: raises hand 818, and moves 816 hand 818 from left to right; the move 804 of the window from left to right).
Regarding claim 50 (Previously Presented), Stafford in view of Marggraff and Kawano discloses the method according to claim 42, wherein different relative movements or different relative positions between a first finger and a second finger of a hand and/or a first finger and a palm of a hand are registered by a device (Stafford; Fig. 6; [0067]: a two finger tap on the touchpad triggers the mouse cursor to move based on the gaze of the user; Fig. 6; [0068]: the user performs a two-finger tap on touchpad 612;
[image: media_image9.png]
).
Stafford fails to explicitly disclose that the device is a wearable device worn by the hand for operating the virtual environment.
In the same field of endeavor, Marggraff teaches that the device is a wearable device worn by the hand for operating the virtual environment ([0168]: an off-display target may be carried about by the user, e.g., glove, ring; [0219]: fingers following a moving object; [0220]: the tracking of a finger pointing at an object; [0225]: a specified finger or pointing device; [0243]: wearable devices include gloves, rings, bracelets, necklaces, headwear, underclothing, and the like).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Stafford to include that the device is a wearable device worn by the hand for operating the virtual environment as taught by Marggraff. The motivation for doing so would have been to interact with and control the virtual environment; to distinguish between movements of the eye; to perform an activation saccade as taught by Marggraff in paragraphs [0005] and [0242-0243].
Regarding claim 51 (Previously Presented), Stafford in view of Marggraff and Kawano discloses the method according to claim 50, wherein operating the virtual environment comprises the first finger touching different areas of the second finger or the palm (Stafford; Fig. 6; [0067]: a two finger tap on the touchpad triggers the mouse cursor to move based on the gaze of the user; Fig. 6; [0068]: the user performs a two-finger tap on touchpad 612;
[image: media_image9.png]
).
Regarding claim 53 (Previously Presented), Stafford in view of Marggraff and Kawano discloses the method according to claim 42,
Stafford fails to explicitly disclose: wherein the coordinate system is defined by a tracking device, or a first part of a wearable device, or a second body part of the user as seen by a camera.
In the same field of endeavor, Marggraff teaches wherein the coordinate system is defined by a tracking device, or a first part of a wearable device, or a second body part of the user as seen by a camera ([0408]: head movements, gaze direction of the device wearer, and eye-signal commands; [0410]: the head moves; [0411]: there is head movement that is associated with eye movement; [0545]: the coordinates of the displayed pattern (x, y) may be expressed as (x,y)=f(t); the target is followed using smooth pursuit eye movements, a series of short tracking saccadic eye movements; [0547]: acquired eye position coordinates).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Stafford to include wherein the coordinate system is defined by a tracking device, or a first part of a wearable device, or a second body part of the user as seen by a camera as taught by Marggraff. The motivation for doing so would have been to interact with and control the virtual environment; to distinguish between movements of the eye; to perform an activation saccade as taught by Marggraff in paragraphs [0005] and [0242-0243].
Regarding claim 54 (Previously Presented), Stafford in view of Marggraff and Kawano discloses the method according to claim 42, wherein the first body member and/or a second body member is selected from one of:
a finger (one of: Stafford; Fig. 6; [0067-0068]);
a hand (one of: Stafford; Fig. 8; [0075]);
a palm;
an arm;
a toe;
a foot;
a leg;
a tongue;
a mouth;
an eye;
a torso; and
a head (one of: Stafford; [0063]: when the head is moving, head tracking is used to determine the target area; once head motion substantially comes to a stop, gaze tracking is utilized to fine tune the location of the target area).
Regarding claim 55 (Previously Presented), Stafford in view of Marggraff and Kawano discloses the method according to claim 42,
Stafford fails to explicitly disclose: wherein the first body member and/or a second body member is/are wearing a wearable device configured to determine the position of the first body member and/or the second body member and/or a position of the first body member relative to the second body member.
In the same field of endeavor, Marggraff teaches wherein the first body member and/or a second body member is/are wearing a wearable device configured to determine the position of the first body member and/or the second body member and/or a position of the first body member relative to the second body member ([0243]: a variety of coupled wearable devices include gloves, rings, bracelets, necklaces, headwear, underclothing, and the like; Fig. 21; [0421]: a view of headwear 650 includes a keyboard-viewing camera 651 oriented; a keyboard 652 and the hands 653a, 653b of the device wearer are within the field-of-view 654 of the keyboard camera 651; [0306]: determine a gaze location from eye tracking based on the sizes of the pupil entrance and fovea).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Stafford to include wherein the first body member and/or a second body member is/are wearing a wearable device configured to determine the position of the first body member and/or the second body member and/or a position of the first body member relative to the second body member as taught by Marggraff. The motivation for doing so would have been to interact with and control the virtual environment; to distinguish between movements of the eye; to perform an activation saccade as taught by Marggraff in paragraphs [0005] and [0242-0243].
Regarding claim 56 (Previously Presented), Stafford in view of Marggraff and Kawano discloses the method according to claim 42, further comprising repeating:
receiving the gaze activation input from the user to activate the gaze tracker (Stafford; Fig. 2A; [0035]: A gaze detection system determines gaze detection based on image analysis of images taken by camera; [0043]: the user must look for at least one second to a window of the screen before gaze detection is engaged; [0047]: the trigger to determine if gaze-assistance is used is whether the POG is in the same window as the current mouse cursor; [0048]: the start of mouse motion is the physical user action that triggers gaze-assistance; [0049]: gaze assistance only begins when the mouse is moved at least a threshold mouse distance; Fig. 4B; [0058]: the cameras track user features such as eyes 416 and 418, face, nose 422, mouth 420, and torso 414); and
defining the first position in the virtual environment based on the gaze tracker user input as determined by the gaze tracker to further define a first working area (Stafford; Fig. 2A; [0036]: the system relies solely on the mouse movement to find the final destination for the cursor; [0037]: when the cursor is far away from the POG, the cursor moves fast, but when the cursor starts approaching the destination, the cursor slows down; Fig. 2B; [0046]: define a circle 254 around the POG;
[image: media_image3.png]
; [0047]: the mouse is moving within the same window; [0049]: a web page),
Stafford in view of Marggraff and Kawano further discloses wherein the first user input operates the virtual environment based on the first working area and the working area (Marggraff; [0003]: Human-Machine Interfaces is HMI; [0316]: a pointer is transported upon initiating a movement of a pointer in the general direction toward the target location being viewed by the device wearer; [0318]: upon transporting to a focal region, another HMI device may seamlessly be used to control and activate selections; the devices include touch pads, computer mice, and joy sticks; the devices offer greater precision compared with eye signals; make selection over spatial ranges down to the visual acuity of the device wearer).
The same motivation as in claim 42 is applied here.
Regarding claim 57 (Previously Presented), Stafford in view of Marggraff and Kawano discloses the method according to claim 42, wherein the method further comprises identifying a virtual item within the working area (Stafford; Fig. 7; [0072]: the user's POG 716b is on the character 712 controlled by user 702; Fig. 9; [0078]), wherein:
Stafford in view of Marggraff and Kawano further discloses:
the virtual item is connected to a real item (Marggraff; Fig. 28; [0421]: a keyboard 652 and the hands 653a, 653b of the device wearer; Fig. 29; [0423]);
an activity is connected to the real item (Marggraff; Fig. 29; [0423]: typing by a device user); and
the first user input controls the activity (Marggraff; Fig. 29; [0423]: typing by a device user; the user may glance downward to the projected image of the keyboard 662 and hands 663a, 663b to determine relative positions during the typing process).
The same motivation as in claim 42 is applied here.
Regarding claim 58 (Previously Presented), Stafford in view of Marggraff and Kawano discloses the method according to claim 57, wherein the real item is a certain distance from the user, and wherein the virtual item is positioned with a focal length from the user corresponding to, or substantially equal to, the certain distance (Marggraff; Fig. 28; [0421]: a keyboard 652 and the hands 653a, 653b of the device wearer; Fig. 29; [0423]: determine relative positions during the typing process; [0424]: Semi-transparent projections may even be positioned within regions where typing or other viewing activities are occurring).
Regarding claim 59 (Previously Presented), Stafford in view of Marggraff and Kawano discloses the method according to claim 42, wherein operating the virtual environment within the working area only is terminated by a deactivation input received from the user (Marggraff; [0331]: if the pointer is in the vicinity of eye gaze, i.e., within a pre-defined separation between pointer and gaze locations, then transport is generally not advantageous and may automatically be disabled).
The same motivation as in claim 42 is applied here.
Regarding claim 60 (Previously Presented), Stafford in view of Marggraff and Kawano discloses the method according to claim 42, wherein the gaze activation input is touching or positioning a body member on a trackpad configured to activate the gaze tracker (Stafford; Fig. 6; [0067]: a two finger tap on the touchpad triggers the mouse cursor to move based on the gaze of the user; Fig. 6; [0068]: the user performs a two-finger tap on touchpad 612;
[image: media_image9.png]
).
Regarding claim 61 (Previously Presented), Stafford in view of Marggraff and Kawano discloses the method according to claim 49, wherein the gaze activation input is a body member moving into a field-of-view of the camera, or into a certain first volume or first area of the field of view, and wherein the moving body member comprises one of:
a finger (one of: Fig. 6; [0068]: the user performs a two-finger tap on touchpad 612;
[image: media_image9.png]
);
a hand (one of: Stafford; Fig. 8; [0075]: to perform the move, the user looks at window 810, raises hand 818, and moves 816 hand 818 from left to right, indicating that user 814 wishes to move window 810 from left to right);
an arm;
a toe;
a foot; and
a leg.
Regarding claim 62 (Previously Presented), Stafford in view of Marggraff and Kawano discloses the method according to claim 50, wherein a first relative movement or a first relative position of the different relative movements is the gaze activation input (Stafford; Fig. 6; [0067]: a two finger tap on the touchpad triggers the mouse cursor to move based on the gaze of the user; Fig. 6; [0068]: the user performs a two-finger tap on touchpad 612;
[image: media_image9.png]
; Fig. 8; [0075]).
Regarding claim 63 (Previously Presented), Stafford in view of Marggraff and Kawano discloses the method according to claim 62, wherein the first relative movement or position of the different relative movements or positions is moving or positioning one finger of the hand within a second volume or second area (Stafford; Fig. 6; [0067]: a two finger tap on the touchpad triggers the mouse cursor to move based on the gaze of the user; Fig. 6; [0068]: the user performs a two-finger tap on touchpad 612;
[image: media_image9.png]
; Fig. 8; [0075]).
Stafford in view of Marggraff and Kawano further discloses one finger of the hand wearing the wearable device (Marggraff; [0168]: an off-display target may be carried about by the user, e.g., glove, ring; [0219]: fingers following a moving object; [0220]: the tracking of a finger pointing at an object; [0225]: a specified finger or pointing device; [0243]: wearable devices include gloves, rings, bracelets, necklaces, headwear, underclothing, and the like).
The same motivation as in claim 42 is applied here.
Regarding claim 64 (Previously Presented), Stafford in view of Marggraff and Kawano discloses the method according to claim 50, wherein the gaze activation input is the first finger touching the second finger or the palm at a certain first position (Stafford; Fig. 5; [0065]: the user slides the finger down across the touchscreen; a lot of finger scrolling;
[image: media_image10.png]
; fingers touch a finger or a position of a palm; Fig. 6; [0067]: a two finger tap on the touchpad triggers the mouse cursor to move based on the gaze of the user; Fig. 6; [0068]: the user performs a two-finger tap on touchpad 612;
[image: media_image9.png]
; [0069]: utilize different inputs, e.g., a double click on a mouse, two consecutive taps on the touchpad, a key pressed on the keyboard, etc., including fingers touching a position of a palm; Fig. 8; [0075]).
Regarding claim 65 (Previously Presented), Stafford in view of Marggraff and Kawano discloses the method according to claim 50, wherein the gaze activation input is:
moving one finger or the first finger of the hand wearing the wearable device within the second volume or second area; or
touching the certain first position twice within a first time period (or is optional; Stafford; Fig. 6; [0067]: a two finger tap on the touchpad triggers the mouse cursor to move based on the gaze of the user; Fig. 6; [0068]: the user performs a two-finger tap on touchpad 612;
[image: media_image9.png]
; [0069]: two consecutive taps on the touchpad; Fig. 8; [0075]).
Regarding claim 66 (Previously Presented), Stafford in view of Marggraff and Kawano discloses the method according to claim 42, wherein the gaze activation input is a second signal from an electromyography or a neural and/or muscle activity tracker, and wherein the second signal is a nerve signal or a muscle signal for moving a certain body member (or is optional; Stafford; Fig. 6; [0067]: a two finger tap on the touchpad triggers the mouse cursor to move based on the gaze of the user; Fig. 6; [0068]: the user performs a two-finger tap on touchpad 612;
[image: media_image9.png]
; [0069]: two consecutive taps on the touchpad; Fig. 8; [0075]).
Stafford in view of Marggraff and Kawano further discloses wherein the second signal is a nerve signal or a muscle signal for moving a certain body member (Marggraff; [0438]: facial muscle movement to indicate the selection of a character or symbol; [0449]: eye muscles; [0546]: the optic nerve; send motor signals to the muscles that move the eye; the time for muscles to develop force and generate eye movement).
The same motivation as in claim 42 is applied here.
Regarding claim 67 (Previously Presented), Stafford in view of Marggraff and Kawano discloses the method according to claim 65, wherein the first time period is less than 2 seconds, or less than 1 second, or between 0.2 seconds and 1 second (Stafford; [0043]: one second is less than 2 seconds).
Regarding claim 68 (Previously Presented), Stafford in view of Marggraff and Kawano discloses the method according to claim 42, wherein the gaze activation input comprises one of:
activating a first button of the at least one input device;
touching the at least one input device (or is optional; Stafford; Fig. 6; [0067]: a two finger tap on the touchpad triggers the mouse cursor to move based on the gaze of the user; Fig. 6; [0068]: the user performs a two-finger tap on touchpad 612;
[image: media_image9.png]
; [0069]: two consecutive taps on the touchpad);
performing an activating movement by a body member wearing a wearable input device, wherein the body member comprises at least one of an arm, a hand, a leg, a foot, and a head; and
performing an activating movement in front of a first camera (Stafford; Fig. 8; [0075]).
Regarding claim 71 (Previously Presented), Stafford in view of Marggraff and Kawano discloses the method according to claim 59, wherein the deactivation input comprises at least one of:
deactivating a first button;
activating the first button a second time;
activating a second button;
un-touching the at least one input device;
touching the at least one input device a second time (Stafford; [0048]: pressing a keyboard, pushing a button, touching a screen, speaking, snapping the fingers, clapping, etc.);
performing a deactivating movement or deactivating position by a body member wearing a wearable device or a wearable input device (Stafford; [0049]: users move the mouse without the intent to move the mouse cursor on the screen. For example, a user may be reading a web page while holding the mouse, and the user may move the mouse several millimeters due to a small hand motion), wherein the body member comprises at least one of:
an arm;
a hand;
a leg;
a foot; and
a head; and
performing a deactivating movement in front of a first camera (Stafford; [0063]: head tracking; Fig. 6; [0068]; Fig. 8; [0075]).
Regarding claim 72 (Previously Presented), Stafford in view of Marggraff and Kawano discloses the method according to claim 42, wherein the at least one input device comprises one of:
a mouse (Stafford; [0029]: moving a mouse; [0030]: the mouse);
a trackpad;
a touchscreen (Stafford; [0033]: touchscreen);
a trackball (Marggraff; [0316]: trackball);
a thumb stick;
a hand tracker (Stafford; Fig. 8; [0074]: GUI uses gaze and gestures as inputs);
a head tracker (Stafford; [0058]: head and face tracking);
a body tracker (Stafford; [0058]);
a trackpoint;
a body member tracker;
a console controller;
a wand controller;
a cross reality (XR) controller (Marggraff; [0248]: an augmented reality (AR) or mixed reality (MR) device); and
a virtual reality (VR) controller (Marggraff; [0248]: an augmented reality (AR) or mixed reality (MR) device).
Regarding claim 73 (Previously Presented), Stafford in view of Marggraff and Kawano discloses the method according to claim 42, wherein the virtual environment is displayed on a display comprising one of:
an electronic visual display comprising one of:
an XR head-mounted display or glasses (Marggraff; [0248]: an augmented reality (AR) or mixed reality (MR) device; [0506]: HMD);
augmented reality glasses (Marggraff; Fig. 18A; [0367]);
augmented reality goggles;
augmented reality contact lenses; and
a head-mountable see-through display;
a see-through electronic visual display;
a user interface of an electronic processing device;
a user interface of a specific application of an electronic processing device; and
a 3D visual display.
Regarding claim 74 (Previously Presented), Stafford in view of Marggraff and Kawano discloses the method according to claim 42, wherein the working area is visualized to the user in the virtual environment when the working area is defined (Stafford; Fig. 2A; [0036]: the system relies solely on the mouse movement to find the final destination for the cursor; [0037]: when the cursor is far away from the POG, the cursor moves fast, but when the cursor starts approaching the destination, the cursor slows down; Fig. 2B; [0046]: define a circle 254 around the POG;
[image: media_image3.png]
; [0047]: the mouse is moving within the same window; [0049]: a web page; users move the mouse; a user may be reading a web page while holding the mouse, and the user may move the mouse several millimeters due to a small hand motion).
Regarding claim 75 (Previously Presented), Stafford in view of Marggraff and Kawano discloses the method according to claim 42, wherein the first position is determined by further calculating a second distance between the gaze tracker and eyes of the user (Stafford; Fig. 4A; [0056]: the corneal reflection of light directed towards the user and distance are analyzed; the reflection is then analyzed to determine the POG of user 402;
[image: media_image11.png]
; [0057]).
Regarding claim 76 (Previously Presented), Stafford in view of Marggraff and Kawano discloses the method according to claim 42, wherein the first position is defined at an instant when:
the gaze activation input is received; or
when the gaze activation input has been received and at least one eye of the user is open (Stafford; Fig. 4B; [0058]: tracking of user features such as eyes; only one eye is visible to the camera; [0059]: determine where on the display a user's gaze is focused).
Regarding claim 77 (Previously Presented), Stafford in view of Marggraff and Kawano discloses the method according to claim 42, wherein a size or a diameter of the working area is adjustable (Stafford; Fig. 2A; [0036]: the system relies solely on the mouse movement to find the final destination for the cursor; [0037]: when the cursor is far away from the POG, the cursor moves fast, but when the cursor starts approaching the destination, the cursor slows down; Fig. 2B; [0046]: define a circle 254 around the POG;
[image: media_image3.png]
; [0047]: the mouse is moving within the same window; [0049]: a web page; users move the mouse; a user may be reading a web page while holding the mouse, and the user may move the mouse several millimeters due to a small hand motion; a web page is adjustable).
Regarding claim 78 (Previously Presented), Stafford in view of Marggraff and Kawano discloses the method according to claim 42, wherein a size or a diameter of the working area is adjustable by the user (Stafford; Fig. 2A; [0036]: the system relies solely on the mouse movement to find the final destination for the cursor; [0037]: when the cursor is far away from the POG, the cursor moves fast, but when the cursor starts approaching the destination, the cursor slows down; Fig. 2B; [0046]: define a circle 254 around the POG;
[image: media_image3.png]
; [0047]: the mouse is moving within the same window; [0049]: a web page; users move the mouse; a user may be reading a web page while holding the mouse, and the user may move the mouse several millimeters due to a small hand motion; a web page is adjustable).
Regarding claim 79 (Previously Presented), Stafford in view of Marggraff and Kawano discloses the method according to claim 42, wherein operating the virtual environment comprises at least one of:
selecting an application or element within the working area (Stafford; [0049]: a user selects a web page; a user may be reading a web page while holding the mouse, and the user may move the mouse several millimeters due to a small hand motion);
activating an application or element within the working area (Stafford; [0040]: move to the field at the bottom of the page; travel from the top field to the bottom field);
deactivating an application or element within the working area; and
controlling an application or element within the working area (Stafford; [0040]: typing in a word processor; the user presses the Tab key).
Regarding claim 80 (Currently Amended), Stafford discloses a data processing system (Fig. 2A; [0035]: move the mouse cursor from position 204 in display 110c to position 208 in display 110a; [0036]: the cursor is currently in the desktop; [0055]: an eye tracker is a device for measuring eye position and eye movement; Fig. 11; [0085]: an algorithm for interfacing a user with a computer program executing in a computing device; identify the POG of the user; the user has initiated a physical action to move the cursor; the cursor is moved to a region proximate to the POG) comprising:
an electronic visual display configured to provide a visualization of a virtual environment (Fig. 2A; [0035]: move the mouse cursor from position 204 in display 110c to position 208 in display 110a; [0036]: the cursor is currently in the desktop);
a gaze tracker ([0034]: eye-gaze tracking; Fig. 2A; [0035]: a gaze detection system executing on computer 108 determines the POG of the user; camera 214; [0048]: trigger gaze-assistance; [0049]: gaze-assistance);
an input device ([0029]: the mouse 104; Fig. 8; [0075]); and
processing circuitry configured to ([0009]: a processor; the processor executes a computer program that provides the GUI; Fig. 10; [0079]: a processor):
The remaining claim limitations are similar to the claim limitations recited in claim 42. Therefore, the same rationale used to reject claim 42 is also used to reject claim 80.
Regarding claim 81 (Currently Amended), Stafford discloses a non-transitory computer readable medium comprising a computer program stored therein, the computer program comprising instructions which, when executed by processing circuitry of a computer connected to a gaze tracker, causes the computer to ([0009]: a processor; the processor executes a computer program that provides the GUI; Fig. 2A; [0035]: move the mouse cursor from position 204 in display 110c to position 208 in display 110a; [0036]: the cursor is currently in the desktop; [0055]: an eye tracker is a device for measuring eye position and eye movement; Fig. 10; [0079]: a processor; Computing device 1012 includes a processor 1032, which is coupled to memory 1034, to permanent storage device 1058; Fig. 11; [0085]: an algorithm for interfacing a user with a computer program executing in a computing device; identify the POG of the user; the user has initiated a physical action to move the cursor; the cursor is moved to a region proximate to the POG):
The remaining claim limitations are similar to the claim limitations recited in claim 42. Therefore, the same rationale used to reject claim 42 is also used to reject the remaining claim limitations.
Regarding claim 82 (Previously Presented), Stafford in view of Marggraff and Kawano discloses the computer program according to claim 81, wherein the computer is further connected to:
a tracking device (Stafford; Fig. 2A; [0035]: camera 214; Fig. 4A; [0056]);
a camera associated with operating the virtual environment (Stafford; Fig. 2A; [0035]: camera 214; Fig. 4A; [0056]); and/or
a wearable device (Marggraff; Fig. 18A; [0267]: a headset; [0506]: HMD); and
wherein the tracking device comprises a trackpad (Stafford; Fig. 2B; [0046]: define a circle 254 around the POG;
[image: media_image3.png]
; Fig. 6; [0067]: a two finger tap on the touchpad triggers the mouse cursor to move based on the gaze of the user; Fig. 6; [0068]: the user performs a two-finger tap on touchpad 612;
[image: media_image9.png]
; [0069]: two consecutive taps on the touchpad), and
wherein the wearable device comprises a dataglove (Marggraff; [0168]: an off-display target may be carried about by the user, e.g., glove, and ring; [0243]: a variety of coupled wearable devices include gloves, rings, bracelets, necklaces, headwear, underclothing, and the like).
Claims 69-70 are rejected under 35 U.S.C. 103 as being unpatentable over Stafford (US 20120272179 A1) in view of Marggraff (US 20180011533 A9), in view of Kawano (US 20230222742 A1), and further in view of George-Svahn (US 20140247232 A1).
Regarding claim 69 (Previously Presented), Stafford in view of Marggraff and Kawano discloses the method according to claim 42, wherein defining the first position and/or defining the working area is performed if a duration of touching the at least one input device is a number (Stafford; Fig. 2A; [0036]: the system relies solely on the mouse movement to find the final destination for the cursor; [0037]: when the cursor is far away from the POG, the cursor moves fast, but when the cursor starts approaching the destination, the cursor slows down; Fig. 2B; [0046]: define a circle 254 around the POG;
[image: media_image3.png]
; [0047]: the mouse is moving within the same window; [0049]: a web page; users move the mouse; a user may be reading a web page while holding the mouse, and the user may move the mouse several millimeters due to a small hand motion; Fig. 6; [0067]: a two finger tap on the touchpad triggers the mouse cursor to move based on the gaze of the user; Fig. 6; [0068]: the user performs a two-finger tap on touchpad 612;
[image: media_image9.png]
; [0069]: two consecutive taps on the touchpad).
Stafford in view of Marggraff and Kawano fails to explicitly disclose that a duration of touching the at least one input device is longer than 75 ms.
In the same field of endeavor, George-Svahn teaches that a duration of touching the at least one input device is longer than 75 ms ([0090]: touch the touchpad for the predetermined period of time, such as 200 ms).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Stafford in view of Marggraff and Kawano to include a duration of touching the at least one input device is longer than 75 ms as taught by George-Svahn. The motivation for doing so would have been to move the visual indicator 502 to the gaze target at block 614; to move the visual indication to an icon by touching the touchpad for the predetermined period of time, such as 200 ms as taught by George-Svahn in paragraphs [0088] and [0090].
Regarding claim 70 (Previously Presented), Stafford in view of Marggraff and Kawano discloses the method according to claim 42, wherein a previous working area has already been defined, and wherein the method further comprises maintaining the previous working area when a duration of touching the at least one input device is a number (Stafford; Fig. 2A; [0036]: the system relies solely on the mouse movement to find the final destination for the cursor; [0037]: when the cursor is far away from the POG, the cursor moves fast, but when the cursor starts approaching the destination, the cursor slows down; Fig. 2B; [0046]: define a circle 254 around the POG;
[image: media_image3.png]
; [0047]: the mouse is moving within the same window; [0049]: a web page; users move the mouse; a user may be reading a web page while holding the mouse, and the user may move the mouse several millimeters due to a small hand motion; Fig. 6; [0067]: a two finger tap on the touchpad triggers the mouse cursor to move based on the gaze of the user; Fig. 6; [0068]: the user performs a two-finger tap on touchpad 612;
[image: media_image9.png]
; [0069]: two consecutive taps on the touchpad):
Stafford in view of Marggraff and Kawano fails to explicitly disclose that a duration of touching the at least one input device is:
less than 75 ms; or
between 35 ms and 75 ms; or
less than 100 ms; or
between 35 ms and 100 ms; or
less than 250 ms; or
between 35 ms and 250 ms.
In the same field of endeavor, George-Svahn teaches that a duration of touching the at least one input device is:
less than 75 ms; or
between 35 ms and 75 ms; or
less than 100 ms; or
between 35 ms and 100 ms; or
less than 250 ms; or
between 35 ms and 250 ms ([0090]: touch the touchpad for the predetermined period of time, such as 200 ms).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Stafford in view of Marggraff and Kawano to include a duration of touching the at least one input device is: less than 75 ms; or between 35 ms and 75 ms; or less than 100 ms; or between 35 ms and 100 ms; or less than 250 ms; or between 35 ms and 250 ms as taught by George-Svahn. The motivation for doing so would have been to move the visual indicator 502 to the gaze target at block 614; to move the visual indication to an icon by touching the touchpad for the predetermined period of time, such as 200 ms as taught by George-Svahn in paragraphs [0088] and [0090].
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Hai Tao Sun whose telephone number is (571)272-5630. The examiner can normally be reached 9:00AM-6:00PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel Hajnik, can be reached at 571-272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HAI TAO SUN/Primary Examiner, Art Unit 2616