NON-FINAL REJECTION, FIRST DETAILED ACTION
Status of Prosecution
The present application 18/110,323, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 137-180 are pending and are all rejected. Claims 137, 179 and 180 are independent. Claims 1-136 and 181-329 are cancelled by preliminary amendment.
Claims Status
Claim 180 is rejected under 35 U.S.C. § 101 as encompassing a transitory signal per se, which is ineligible subject matter.
Claims 177 and 178 are rejected under 35 U.S.C. § 112(b) as being indefinite.
Claims 137-143, 145, 147-150, 152, 155-157, 160, 162-165, 172, 175 and 179-180 are rejected under 35 U.S.C. § 103 as being unpatentable over Lemelson et al. (“Lemelson”), United States Patent Application Publication 2003/0020755, published on Jan. 30, 2003, in view of Lopez et al. (“Lopez”), United States Patent Application Publication 2016/0274762, published in 2016.
Claims 146, 177 and 178 are rejected under 35 U.S.C. § 103 as being unpatentable over Lemelson in view of Lopez in further view of Gu et al. (“Gu”), United States Patent Application Publication 2018/0136716 published on May 17, 2018.
Claim 151 is rejected under 35 U.S.C. § 103 as being unpatentable over Lemelson in view of Lopez in further view of Kuehne et al. (“Kuehne”), United States Patent Application Publication 2018/0136716 published on May 17, 2018.
Claim 153 is rejected under 35 U.S.C. § 103 as being unpatentable over Lemelson in view of Lopez in further view of Shinohara et al. (“Shinohara”), United States Patent Application Publication 2018/0325483 published on Nov. 15, 2018.
Claims 144, 154, 158-159, 161, 166-171 and 173-174 are rejected under 35 U.S.C. § 103 as being unpatentable over Lemelson in view of Lopez in further view of George-Svahn et al. (“George-Svahn”), United States Patent Application Publication 2017/0235360 published on Aug. 17, 2017.
Claim 176 is rejected under 35 U.S.C. § 103 as being unpatentable over Lemelson in view of Lopez in further view of Clements, United States Patent 10,802,582, issued on Oct. 13, 2020.
Claim Rejections – 35 USC § 101
35 U.S.C. § 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim 180 is rejected under 35 U.S.C. § 101 as encompassing a transitory signal per se, which is ineligible subject matter. Specifically, independent Claim 180 recites in part, “A computer readable storage medium storing one or more programs.” Under the broadest reasonable interpretation, the recited “computer readable storage medium” reads on signals per se.
The Specification discusses this term and its variations in the following passages (emphasis added):
[0006] The above deficiencies and other problems associated with user interfaces for computer systems are reduced or eliminated by the disclosed systems. In some embodiments, the computer system is a desktop computer with an associated display. In some embodiments, the computer system is a portable device (e.g., a notebook computer, tablet computer, or handheld device). In some embodiments, the computer system is a personal electronic device (e.g., a wearable electronic device, such as a watch, or a head-mounted device). In some embodiments, the computer system has a touchpad. In some embodiments, the computer system has one or more cameras. In some embodiments, the computer system has a touch-sensitive display (also known as a "touch screen" or "touch-screen display"). In some embodiments, the computer system has one or more eye-tracking components. In some embodiments, the computer system has one or more hand-tracking components. In some embodiments, the computer system has one or more output devices in addition to the display generation component, the output devices including one or more tactile output generators and/or one or more audio output devices. In some embodiments, the computer system has a graphical user interface (GUI), one or more processors, memory and one or more modules, programs or sets of instructions stored in the memory for performing multiple functions. In some embodiments, the user interacts with the GUI through a stylus and/or finger contacts and gestures on the touch-sensitive surface, movement of the user's eyes and hand in space relative to the GUI (and/or computer system) or the user's body as captured by cameras and other movement sensors, and/or voice inputs as captured by one or more audio input devices. In some embodiments, the functions performed through the interactions optionally include image editing, drawing, presenting, word processing, spreadsheet making, game playing, telephoning, video conferencing, e-mailing, instant messaging, workout support, digital photographing, digital videoing, web browsing, digital music playing, note taking, and/or digital video playing. Executable instructions for performing these functions are, optionally, included in a transitory and/or non-transitory computer readable storage medium or other computer program product configured for execution by one or more processors.
[0071] The memory 220 includes high-speed random-access memory, such as dynamic random-access memory (DRAM), static random-access memory (SRAM), double-data-rate random-access memory (DDR RAM), or other random-access solid-state memory devices. In some embodiments, the memory 220 includes non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid-state storage devices. The memory 220 optionally includes one or more storage devices remotely located from the one or more processing units 202. The memory 220 comprises a non-transitory computer readable storage medium. In some embodiments, the memory 220 or the non-transitory computer readable storage medium of the memory 220 stores the following programs, modules and data structures, or a subset thereof including an optional operating system 230 and an XR experience module 240.
[00396] Figure 8 is a flow diagram of an exemplary method 8000 for displaying a plurality of affordances for accessing system functions of a first computer system, in response to detecting a first gaze input directed to a first user interface object, and in accordance with a determination that the first gaze input satisfies attention criteria with respect to the first user interface object, in accordance with some embodiments. In some embodiments, the method 8000 is performed at a computer system (e.g., computer system 101 in Figure 1) (which is sometimes referred to as "the first computer system") that is in communication with a display generation component (e.g., display generation component 120 in Figures 1, 3, and 4) (e.g., a heads-up display, a display, a touchscreen, or a projector) and one or more input devices (e.g., a touch screen, a camera, and/or a microphone). In some embodiments, the computer system optionally includes one or more cameras (e.g., a camera (e.g., color sensors, infrared sensors, and/or other depth-sensing cameras) that points towards the user (e.g., to detect the gaze of the user) and/or a camera that points forward (e.g., to facilitate displaying elements of the physical environment captured by the camera)). In some embodiments, the method 8000 is governed by instructions that are stored in a non-transitory (or transitory) computer-readable storage medium and that are executed by one or more processors of a computer system, such as the one or more processors 202 of computer system 101 (e.g., control 110 in Figure 1A). Some operations in method 8000 are, optionally, combined and/or the order of some operations is, optionally, changed.
Specifically, the Examiner draws the Applicant’s attention to the language emphasized in the quoted portions above. Under the broadest reasonable interpretation, these portions encompass program code transmitted via optical, electromagnetic, or infrared means, which reads on a signal per se.
The United States Patent and Trademark Office (USPTO) is obliged to give claims their broadest reasonable interpretation consistent with the specification during proceedings before the USPTO. See In re Zletz, 893 F.2d 319 (Fed. Cir. 1989) (during patent examination the pending claims must be interpreted as broadly as their terms reasonably allow). The broadest reasonable interpretation of a claim drawn to a computer readable medium (also called a machine readable medium and other such variations) typically covers forms of non-transitory tangible media and transitory propagating signals per se in view of the ordinary and customary meaning of computer readable media, particularly when the specification is silent. See MPEP 2111.01. When the broadest reasonable interpretation of a claim covers a signal per se, the claim must be rejected under 35 U.S.C. § 101 as covering non-statutory subject matter. See In re Nuijten, 500 F.3d 1346, 1356-57 (Fed. Cir. 2007) (transitory embodiments are not directed to statutory subject matter) and Interim Examination Instructions for Evaluating Subject Matter Eligibility Under 35 U.S.C. § 101, Aug. 24, 2009, p. 2.
The USPTO recognizes that applicants may have claims directed to computer readable media that cover signals per se, which the USPTO must reject under 35 U.S.C. § 101 as covering both non-statutory subject matter and statutory subject matter. In an effort to assist the patent community in overcoming a rejection or potential rejection under 35 U.S.C. § 101 in this situation, the USPTO suggests the following approach. A claim drawn to such a computer readable medium that covers both transitory and non-transitory embodiments may be amended to narrow the claim to cover only statutory embodiments to avoid a rejection under 35 U.S.C. § 101 by adding the limitation “non-transitory” to the claim. Cf. Animals - Patentability, 1077 Off. Gaz. Pat. Office 24 (April 21, 1987) (suggesting that applicants add the limitation “non-human” to a claim covering a multicellular organism to avoid a rejection under 35 U.S.C. § 101). Such an amendment would typically not raise the issue of new matter, even when the specification is silent, because the broadest reasonable interpretation relies on the ordinary and customary meaning that includes signals per se. The limited situations in which such an amendment could raise issues of new matter occur, for example, when the specification does not support a non-transitory embodiment because a signal per se is the only viable embodiment such that the amended claim is impermissibly broadened beyond the supporting disclosure. See, e.g., Gentry Gallery, Inc. v. Berkline Corp., 134 F.3d 1473 (Fed. Cir. 1998).
In view of the Applicant’s specification (as cited above) and the guidance provided above, a “computer readable storage medium” under the broadest reasonable interpretation includes signals per se and therefore constitutes non-statutory subject matter. The Examiner recommends that the Applicant amend the rejected claim to recite a “non-transitory computer readable storage medium.”
Claim Rejections -- 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
Claims 177 and 178 are rejected under 35 U.S.C. § 112(b) as being indefinite.
The term “first level of proximity” in claim 177 is a relative term which renders the claim indefinite. The term is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. For purposes of examination, the claim is construed similarly to claim 146. Claim 178 is similarly rejected because it inherits the deficiencies of parent claim 177. Clarification and correction are required.
Claim Rejections -- 35 USC § 103
The following is a quotation of 35 U.S.C. § 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
A.
Claims 137-143, 145, 147-150, 152, 155-157, 160, 162-165, 172, 175 and 179-180 are rejected under 35 U.S.C. § 103 as being unpatentable over Lemelson et al. (“Lemelson”), United States Patent Application Publication 2003/0020755, published on Jan. 30, 2003, in view of Lopez et al. (“Lopez”), United States Patent Application Publication 2016/0274762, published in 2016.
As to Claim 137, Lemelson teaches: A method, including:
at a first computer system that is in communication with a first display generation component and one or more input devices (Lemelson: Fig. 3A, a system that has a display system [52] and one or more sensors and input devices [15, 53]):
while a first view of an environment is visible via the first display generation component, detecting, via the one or more input devices, a first user input, including detecting a first gaze input that is directed to a first position in the environment (Lemelson: Fig. 6A, a display screen of a display area (i.e. environment); par. 0117, gaze detection of an obtained coordinate is made and it is determined whether the coordinate is in a menu pop-up area (i.e. a first position) at step [248]); and
in response to detecting the first user input including detecting the first gaze input:
in accordance with a determination that the first position in the environment has a first spatial relationship to a viewport through which the environment is visible, displaying a first user interface object in the first view of the environment, wherein the first user interface object includes one or more affordances for accessing a first set of functions of the first computer system (Lemelson: par. 0117, a menu is displayed (i.e. a first user interface object)).
Lemelson may not explicitly teach: the environment is three-dimensional and
wherein the first user interface object is displayed at a second position in the environment that has a second spatial relationship, different from the first spatial relationship, to the viewport through which the environment is visible
in accordance with a determination that the first position in the three-dimensional environment does not have the first spatial relationship to the viewport through which the three-dimensional environment is visible, forgoing displaying the first user interface object in the first view of the three-dimensional environment.
Lopez teaches in general concepts related to an augmented reality environment that allows a user to look at devices and utilize gaze information to control them (Lopez: Abstract). Specifically, Lopez teaches that once a user’s gaze is detected upon a certain device, a control user interface may be displayed next to it (i.e. in a second spatial relationship) (Lopez: par. 0049, Fig. 8, the UI may be presented as an overlay next to or on top of the physical device). Once the user’s gaze has moved away from the device, the control user interface is removed from the display (Lopez: par. 0059).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the Lemelson disclosures and teachings by implementing the system in an augmented reality (i.e. three-dimensional) environment with the offset menu and removal of the menu as taught and suggested by Lopez. Such a person would have been motivated to do so, with a reasonable expectation of success, to allow for the ease of control of items and preservation of display real estate in an augmented reality setting (Lopez: par. 0021).
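By way of illustration only, the gaze-triggered display and dismissal behavior attributed to the Lemelson-Lopez combination could be sketched as follows (a minimal Python sketch; all names, coordinates, and values are hypothetical and drawn from neither reference):

from dataclasses import dataclass

@dataclass
class Rect:
    x: float
    y: float
    w: float
    h: float

    def contains(self, px: float, py: float) -> bool:
        return self.x <= px <= self.x + self.w and self.y <= py <= self.y + self.h

# Hypothetical trigger region near the top edge of the viewport (the claimed
# "first spatial relationship") and an offset at which the menu is drawn
# (the claimed "second spatial relationship").
TRIGGER_REGION = Rect(0.0, 0.0, 1.0, 0.1)   # normalized viewport coordinates
MENU_OFFSET = (0.0, 0.15)                   # menu appears below the gaze point

def update_menu(gaze_x: float, gaze_y: float):
    """One gaze sample: decide whether the menu is shown and where."""
    if TRIGGER_REGION.contains(gaze_x, gaze_y):
        # Gaze has the first spatial relationship: display the menu at an offset.
        return True, (gaze_x + MENU_OFFSET[0], gaze_y + MENU_OFFSET[1])
    # Gaze elsewhere: forgo displaying (or remove) the menu, per Lopez par. 0059.
    return False, None

print(update_menu(0.5, 0.05))  # gaze in trigger region: menu shown at offset
print(update_menu(0.5, 0.50))  # gaze away: menu not displayed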
As to Claim 138, Lemelson and Lopez teach the limitations of claim 137.
Lemelson and Lopez as combined further teach: while the first user interface object is not visible in a currently displayed view of the three-dimensional environment, detecting a first change of a viewpoint of a user from a first viewpoint associated with the first view of the three-dimensional environment to a second viewpoint associated with a second view of the three-dimensional environment (Lemelson: par. 0108, scrolling through the document up and down would change the viewpoint from a first to a second one); and
in response to detecting the first change in the viewpoint of the user, updating the currently displayed view of the three-dimensional environment in accordance with the first change in the viewpoint of the user, to display the second view of the three-dimensional environment;
while the second view of the three-dimensional environment is visible via the first display generation component, detecting, via the one or more input devices, a second user input, including detecting a second gaze input that is directed to a third position, different from the first position, in the three-dimensional environment; and
in response to detecting the second user input including detecting the second gaze input:
in accordance with a determination that the third position in the three-dimensional environment has the first spatial relationship to the viewport through which the three-dimensional environment is visible, displaying the first user interface object in the second view of the three-dimensional environment, at a fourth position in the three-dimensional environment that has the second spatial relationship to the second view of the three-dimensional environment; and
in accordance with a determination that the third position in the three-dimensional environment does not have the first spatial relationship to the viewport through which the three-dimensional environment is visible, forgoing displaying the first user interface object in the second view of the three-dimensional environment (Lemelson: pars. 0108-109, 114, the new view would have the menu selection region [240] at the same area as noted in Figs. 6A-C and 7A-C).
As to Claim 139, Lemelson and Lopez teach the limitations of claim 137.
Lemelson further teaches: in response to detecting the first user input including detecting the first gaze input:
in accordance with a determination that the first position in the three-dimensional environment does not have the first spatial relationship to the viewport through which the three-dimensional environment is visible and that a second user interface object, different from the first user interface object, occupies the first position in the three-dimensional environment, performing a respective operation that corresponds to the second user interface object (Lemelson: par. 0108, there are several other regions, different from the menu pop-up region [240], that may be activated to perform other functions such as scrolling).
As to Claim 140, Lemelson and Lopez teach the limitations of claim 138.
Lemelson further teaches: while the first view of the three-dimensional environment is visible and the first user interface object is not displayed in the first view of the three-dimensional environment, detecting a third user input that includes a third gaze input that is directed to a fifth position in the three-dimensional environment;
in response to detecting the third user input that includes the third gaze input:
in accordance with a determination that the fifth position in the three-dimensional environment is within a first region that includes a respective position having the first spatial relationship to the viewport through which the three-dimensional environment is visible, displaying a third user interface object at the respective position in the three-dimensional environment (Lemelson: par. 0113, as an example, a user may direct their gaze to hypertext on the page (i.e. a third gaze input directed to a fifth position in a first region), causing it to be highlighted (i.e. displaying a third user interface object)); and
in accordance with a determination that the fifth position in the three-dimensional environment is not within the first region that includes the respective position having the first spatial relationship to the viewport through which the three-dimensional environment is visible, forgoing displaying the third user interface object at the respective position in the three-dimensional environment (Examiner asserts that per the disclosure, when the gaze is not on the first region, the hypertext is not highlighted).
As to Claim 141, Lemelson and Lopez teach the limitations of claim 140.
Lemelson further teaches: the first region includes a first subregion including the respective position that has the first spatial relationship to the viewport through which the three-dimensional environment is visible and a second subregion that does not include the respective position (Examiner asserts that the highlighting effect constitutes a second subregion that does not include the respective position, per a z-order).
As to Claim 142, Lemelson and Lopez teach the limitations of claim 141.
Lemelson further teaches: wherein displaying the first user interface object at the second position in response to detecting the first user input including the first gaze input is further in accordance with a determination that the first gaze input is maintained within the first subregion for at least a first threshold amount of time (Lemelson: par. 0114, a dwell time on the pop-up menu region will result in the menu being selected and displayed).
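For illustration, the dwell-time gating Lemelson describes (par. 0114) might be modeled as in the following hypothetical sketch (the threshold value and all names are assumptions, not taken from the reference):

import time

DWELL_THRESHOLD_S = 0.5  # hypothetical "first threshold amount of time"

class DwellDetector:
    """Tracks how long the gaze has remained inside one subregion."""

    def __init__(self) -> None:
        self.entered_at = None  # time the gaze entered the subregion, if inside

    def sample(self, gaze_in_subregion: bool, now: float) -> bool:
        """Return True once the gaze has dwelled past the threshold."""
        if not gaze_in_subregion:
            self.entered_at = None  # gaze left: reset the dwell clock
            return False
        if self.entered_at is None:
            self.entered_at = now   # gaze just entered the subregion
        return (now - self.entered_at) >= DWELL_THRESHOLD_S

detector = DwellDetector()
t0 = time.monotonic()
print(detector.sample(True, t0))        # False: gaze just arrived
print(detector.sample(True, t0 + 0.6))  # True: dwell met, display the menu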
As to Claim 143, Lemelson and Lopez teach the limitations of claim 142.
Lopez as combined further teaches: while the first user interface object is not visible in the first view of the three-dimensional environment, detecting, via the one or more input devices, a fourth user input, including detecting a fourth gaze input that is directed to the first subregion and that has not been maintained within the first subregion for at least the first threshold amount of time; and
in response to detecting the fourth user input including the fourth gaze input:
in accordance with a determination that a respective gesture meeting first criteria has been detected while the fourth gaze input is maintained in the first subregion, displaying the first user interface object at the second position in the three-dimensional environment (Lopez: par. 0060, a user may use hand gestures in combination with the presented user interface items).
As to Claim 145, Lemelson and Lopez teach the limitations of claim 137.
Lemelson and Lopez further teach: while displaying the first user interface object, in the first view of the three-dimensional environment, at the second position in the three-dimensional environment that has the second spatial relationship to the first view of the three-dimensional environment, detecting that user attention is no longer directed to the first object position in the three-dimensional environment; and
in response to detecting that the user attention is no longer directed to the first object position in the three-dimensional environment, ceasing to display the first user interface object in the first view of the three-dimensional environment (Examiner asserts that the combination would allow for the removal of the user interface object, even in the second position, as contemplated by Lopez’s combination thereof. Operatively, the interface object would cease regardless of the positioning).
As to Claim 147, Lemelson and Lopez teach the limitations of claim 137.
Lemelson further teaches: while displaying the first user interface object in the first view of the three-dimensional environment, detecting a fourth user input including detecting gaze input directed to a respective affordance of the one or more affordances for accessing the first set of functions of the first computer system in conjunction with detecting a first speech input from a user (Lemelson: par. 0114, a speech command in conjunction with the gaze); and
in response to detecting the fourth user input, performing a respective operation corresponding to the respective affordance in accordance with the first speech input (Lemelson: par. 0114, the option may be selected via speech command).
As to Claim 148, Lemelson and Lopez teach the limitations of claim 147.
Lemelson further teaches: wherein performing the respective operation corresponding to the respective affordance in accordance with the first speech input includes:
in accordance with a determination that the respective affordance is an affordance for accessing a virtual assistant function of the first computer system, performing an operation corresponding to instructions contained in the first speech input (Lemelson: par. 0117, the speech recognition means is a selection protocol that functions as a virtual assistant).
As to Claim 149, Lemelson and Lopez teach the limitations of claim 147.
Lemelson further teaches: wherein performing the respective operation corresponding to the respective affordance in accordance with the first speech input includes:
in accordance with a determination that the respective affordance is an affordance for accessing a text entry function of the first computer system that accepts text input, providing text converted from the first speech input as input to the text entry function (Lemelson: par. 0117, the speech recognition is a selection protocol that may be used to understand the text equivalent command).
As to Claim 150, Lemelson and Lopez teach the limitations of claim 137.
Lemelson further teaches: while displaying the first view of the three-dimensional environment via the first display generation component, determining a current spatial relationship between the first display generation component and a user (Lemelson: Fig. 4B, par. 0089, the distance and position relationship of different potential users from the device is determined); and
adjusting criteria for determining whether the respective position has the first spatial relationship to the viewport through which the three-dimensional environment is visible in accordance with the current spatial relationship between the first display generation component and the user (Lemelson: par. 0089, the optimal and closest user is determined (i.e. adjusted criteria)).
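As a purely illustrative reading of this limitation, adjusting the gaze-activation criteria according to the user's distance from the display could take the following hypothetical form (names and values are assumptions, not taken from Lemelson):

def trigger_region_height(user_distance_m: float) -> float:
    """Hypothetical criteria adjustment: the farther the user is from the
    display, the larger the gaze trigger region, absorbing the lower
    angular precision of gaze tracking at a distance."""
    base_height = 0.10  # normalized height at the reference distance
    reference_m = 0.5   # assumed reference viewing distance in meters
    scale = max(1.0, user_distance_m / reference_m)
    return min(base_height * scale, 0.30)  # cap so the region stays peripheral

for distance in (0.5, 1.0, 2.0):
    print(distance, trigger_region_height(distance))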
As to Claim 152, Lemelson and Lopez teach the limitations of claim 150.
Lemelson further teaches: wherein displaying the first user interface object at the second position that has the second spatial relationship to the viewport through which the three-dimensional environment is visible includes:
adjusting criteria for establishing the second spatial relationship between the first user interface object and the viewport through which the three-dimensional environment is visible in accordance with the current spatial relationship between the first display generation component and the user (Lemelson: par. 0089, the optimal and closest user is determined (i.e. adjusted criteria). Examiner asserts that the same adjusted criteria may be applied in the second spatial relationship).
As to Claim 155, Lemelson and Lopez teach the limitations of claim 140.
Lemelson and Lopez as combined further teach: while displaying the third user interface object at the respective position in the three-dimensional environment that has the first spatial relationship to the viewport through which the three-dimensional environment is visible, detecting, via the one or more input devices, a second change of the viewpoint of the user from the first viewpoint to a third viewpoint (Examiner asserts that the scrolling of the document will change the viewpoints); and
in response to detecting the second change in the viewpoint of the user, displaying the viewport through which the three-dimensional environment is visible and displaying the third user interface object at an updated position in the view of the three-dimensional environment that has the first spatial relationship to the viewport through which the three-dimensional environment is visible (Examiner asserts the change in viewpoint would update the view accordingly).
As to Claim 156, Lemelson and Lopez teach the limitations of claim 155.
Lemelson further teaches: wherein the third user interface object is translucent and has an appearance that is based on at least a portion of the three-dimensional environment over which the third user interface object is displayed (Lemelson: par. 0113, the hypertext region may become highlighted (i.e. translucent) and is based on the underlying text).
As to Claim 157, Lemelson and Lopez teach the limitations of claim 155.
Lemelson further teaches: while the three-dimensional environment is visible through the viewport, displaying the third user interface object with a first appearance at a first indicator position in the three-dimensional environment, wherein the first appearance of the third user interface object at the first indicator position is based at least in part on a characteristic of the three-dimensional environment at the first indicator position in the viewport through which the three-dimensional environment is visible (Lemelson: par. 0113, the hypertext link may remain selected even if the document is scrolled); and
in response to detecting a movement of the viewpoint of the user from the first viewpoint to the third viewpoint in the three-dimensional environment, displaying the third user interface object with a respective appearance at a respective indicator position in the three-dimensional environment that has the first spatial relationship to the viewport through which the three-dimensional environment is visible, wherein the respective appearance of the third user interface object at the respective indicator position is based at least in part on a characteristic of the three-dimensional environment at the respective indicator position (Examiner asserts the scrolling is a movement of the viewpoint that will change the indicator of the position (the text of the document) accordingly and the appearance is maintained).
As to Claim 160, Lemelson and Lopez teach the limitations of claim 155.
Lemelson and Lopez as combined further teach: in response to detecting the first user input that includes the first gaze input:
in accordance with a determination that the first position in the three-dimensional environment has the first spatial relationship to the viewport through which the three-dimensional environment is visible:
displaying an indication of the first user interface object before displaying the first user interface object at the second position (Lopez: par. 0061, upon detection of the gaze on a button, the button may be highlighted (i.e. an indication)); and
after displaying the indication of the first user interface object:
in accordance with a determination that criteria for displaying the first user interface object is met by the first user input, replacing the indication of the first user interface object with the first user interface object (Lopez: par. 0061, in response to a determination that the button is activated, it will be selected and activated and shown as such); and
in accordance with a determination that criteria for displaying the first user interface object is not met by the first user input and that the first gaze input has moved away from the first position that has the first spatial relationship with the viewport through which the three-dimensional environment is visible, ceasing to display the indication of the first user interface object and forgoing displaying the first user interface object at the second position in the three-dimensional environment (Examiner asserts that the teaching of Lopez of gaze detection of the user turning away attention from the first position will result in the view being reset or changing accordingly to focus on other elements).
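The indication-then-display behavior mapped above can be illustrated with a small hypothetical state machine (the names and structure are assumptions for illustration only, not the claimed implementation):

from enum import Enum, auto

class MenuState(Enum):
    HIDDEN = auto()
    INDICATION = auto()  # precursor hint shown (e.g., a highlight)
    DISPLAYED = auto()   # full user interface object shown

def advance(state: MenuState, gaze_in_region: bool, criteria_met: bool) -> MenuState:
    """Hypothetical progression: indication first, then the full object."""
    if state is MenuState.HIDDEN:
        return MenuState.INDICATION if gaze_in_region else MenuState.HIDDEN
    if state is MenuState.INDICATION:
        if criteria_met:
            return MenuState.DISPLAYED  # replace the indication with the object
        if not gaze_in_region:
            return MenuState.HIDDEN     # gaze moved away: cease the indication
        return MenuState.INDICATION
    return MenuState.DISPLAYED

s = MenuState.HIDDEN
s = advance(s, True, False)  # INDICATION: gaze entered the region
s = advance(s, True, True)   # DISPLAYED: display criteria met
print(s)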
As to Claim 162, Lemelson and Lopez teach the limitations of claim 137.
Lemelson and Lopez further teach: while displaying the first user interface object including the one or more affordances for accessing the first set of functions of the first computer system, detecting a fifth user input including detecting gaze input directed to a respective affordance of the one or more affordances; and
in response to detecting the fifth user input:
in accordance with a determination that the respective affordance is a first affordance corresponding to a first function of the first computer system and that the fifth user input includes a gesture input that meets gesture criteria, performing the first function (Lopez: par. 0060, a hand gesture (i.e. fifth user input) may be associated with a particular menu option);
and in accordance with a determination that the respective affordance is the first affordance corresponding to the first function of the first computer system and that the fifth user input does not include a gesture input that meets the gesture criteria, forgoing performing the first function (Examiner asserts that the triggering condition of the gesture input is needed for the first function to be presented, as taught and disclosed by Lopez); and
in accordance with a determination that the respective affordance is a second affordance corresponding to a second function of the first computer system and that the fifth user input does not include a gesture input that meets the gesture criteria, performing the second function (Examiner asserts that any of the already discussed functions that do not require a gesture input, and simply the gaze input would satisfy the “second function”).
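An illustrative sketch of the gesture-gated dispatch described above (the affordance names and the mapping are hypothetical, introduced only for illustration):

from typing import Optional

# Hypothetical set of affordances that require a qualifying gesture in
# addition to gaze; all other affordances activate on gaze alone.
REQUIRES_GESTURE = {"first_affordance"}

def activate(affordance: str, gesture_meets_criteria: bool) -> Optional[str]:
    """Perform the affordance's function, or forgo it if gating fails."""
    if affordance in REQUIRES_GESTURE and not gesture_meets_criteria:
        return None  # gesture criteria not met: forgo performing the function
    return f"performed function of {affordance}"

print(activate("first_affordance", False))   # None: performance forgone
print(activate("first_affordance", True))    # performed with gesture
print(activate("second_affordance", False))  # performed on gaze alone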
As to Claim 163, Lemelson and Lopez teach the limitations of claim 137.
Lemelson and Lopez as combined further teach: while displaying the first user interface object including the one or more affordances for accessing the first set of functions of the first computer system, detecting a change in pose of a first portion of the user; and
in response to detecting the change in pose of the first portion of the user (As per the combination, if the user changes her gaze, and thus a change in pose):
in accordance with a determination that the change in pose of the first portion of the user results in a first type of pose, changing an appearance of a respective affordance of the one or more affordances (The result is the removal of the affordance, a change in appearance); and
in accordance with a determination that the change in pose of the first portion of the user does not result in the first type of pose, forgoing changing the appearance of the respective affordance (if there is no change, then the affordance remains).
As to Claim 164, Lemelson and Lopez teach the limitations of claim 163.
Lemelson and Lopez as combined further teach: in response to detecting the change in pose of the first portion of the user:
in accordance with a determination that the change in pose of the first portion of the user results in the first type of pose, forgoing changing an appearance of at least one affordance of the one or more affordances different from the respective affordance (if there is no change, then the affordance remains as noted earlier).
As to Claim 165, Lemelson and Lopez teach the limitations of claim 137.
Lopez further teaches: while displaying the first user interface object including the one or more affordances for accessing the first set of functions of the first computer system, detecting, via the one or more input devices, a sixth user input including gaze input directed to a respective affordance of the one or more affordances; and in response to detecting the sixth user input directed to the respective affordance, displaying additional content associated with the respective affordance (Lopez: par. 0061, the additional settings may be scrolled through based on the gaze input being directed to the top or bottom (i.e. sixth user input)).
As to Claim 172, Lemelson and Lopez teach the limitations of claim 137.
Lopez as combined further teaches: while displaying, via the first display generation component, the first user interface object, detecting, via the one or more input devices, a thirteenth user input that activates a fifth affordance of the one or more of affordances for accessing the first set of functions of the first computer system; and
in response to detecting the thirteenth user input that activates the fifth affordance:
performing a respective operation that corresponds to activation of the fifth affordance (Lopez: par. 0060, a hand gesture (i.e. thirteenth user input) may be associated with a particular menu option).
As to Claim 175, Lemelson and Lopez teach the limitations of claim 138.
Lopez further teaches: wherein displaying the first user interface object in the first view of the three-dimensional environment includes displaying the first user interface object at a first simulated distance from the first viewpoint of the user (Lopez: Fig. 7, par. 0053, the UI for the thermostat is shown at a simulated distance from the user), wherein the first simulated distance is less than respective simulated distances of one or more other user interface objects displayed in the first view of the three-dimensional environment from the first viewpoint of the user (Lopez: par. 0053, the larger UI may be shown when selected, which Examiner asserts may be a design choice of being simulated as closer).
As to Claim 179, it is rejected for similar reasons as claim 137. Lemelson further teaches a first display generation component (Lemelson: Fig. 3C, display driver [68]), input devices (Lemelson: Fig. 3C, other input devices [53]), processors (Lemelson: Fig. 3C, microprocessor [72]) and memory storing programs (Lemelson: Fig. 3C, memory [14]).
As to Claim 180, it is rejected for similar reasons as claims 137 and 179.
B.
Claims 146, 177 and 178 are rejected under 35 U.S.C. § 103 as being unpatentable over Lemelson et al. (“Lemelson”), United States Patent Application Publication 2003/0020755, published on Jan. 30, 2003, in view of Lopez et al. (“Lopez”), United States Patent Application Publication 2016/0274762, published in 2016, in further view of Gu et al. (“Gu”), United States Patent Application Publication 2018/0136716, published on May 17, 2018.
As to Claim 146, Lemelson and Lopez teach the limitations of claim 145.
Lemelson and Lopez may not explicitly teach: the determination that the first position in the three-dimensional environment has the first spatial relationship to the viewport through which the three-dimensional environment is visible includes a determination that the first position is within a first response region of a first size; and
detecting that the user attention is no longer directed to the first position in the three-dimensional environment includes detecting that the user attention has moved from within the first response region to outside of a second response region of a second size that is different from the first size.
Gu teaches in general concepts related to a vision-based control apparatus utilizing a rear-view mirror of a vehicle (Gu: Title). Specifically, Gu teaches that selectable objects are displayed in the rear-view mirror of the vehicle (Gu: par. 0008). If the detected user gaze position is within a threshold distance of a selectable object in the displayed viewport area, then the object is selected (Gu: par. 0023). Lopez teaches the removal of the activated user interface object when the gaze is no longer directed at it.
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the Lemelson-Lopez disclosures and teachings by utilizing the threshold distance for activation as taught by Gu. Such a person would have been motivated to do so with a reasonable expectation of success to allow for a margin of error in the precision of interacting with objects.
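For illustration, the two differently sized response regions recited in claim 146, together with Gu's threshold-distance selection, resemble a gaze hysteresis scheme such as the following hypothetical sketch (all radii and names are assumptions):

from dataclasses import dataclass

@dataclass
class Circle:
    cx: float
    cy: float
    r: float

    def contains(self, x: float, y: float) -> bool:
        return (x - self.cx) ** 2 + (y - self.cy) ** 2 <= self.r ** 2

# The region that activates the object is smaller than the region the gaze
# must leave before the object is dismissed, giving a margin of error.
ACTIVATE = Circle(0.5, 0.5, 0.10)  # first response region, first size
DISMISS = Circle(0.5, 0.5, 0.18)   # second response region, larger second size

def step(gaze_x: float, gaze_y: float, visible: bool) -> bool:
    """Return whether the user interface object is visible after this sample."""
    if not visible:
        return ACTIVATE.contains(gaze_x, gaze_y)
    return DISMISS.contains(gaze_x, gaze_y)  # visible until gaze exits larger region

print(step(0.55, 0.5, False))  # True: inside activation region, display
print(step(0.62, 0.5, True))   # True: outside activation, inside dismissal, keep
print(step(0.75, 0.5, True))   # False: outside second region, remove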
As to Claim 177, it is rejected for similar reasons as claim 146.
As to Claim 178, it is rejected for similar reasons as claim 146.
C.
Claim 151 is rejected under 35 U.S.C. § 103 as being unpatentable over Lemelson et al. (“Lemelson”), United States Patent Application Publication 2003/0020755, published on Jan. 30, 2003, in view of Lopez et al. (“Lopez”), United States Patent Application Publication 2016/0274762, published in 2016, in further view of Kuehne et al. (“Kuehne”), United States Patent Application Publication 2018/0136716, published on May 17, 2018.
As to Claim 151, Lemelson and Lopez teach the limitations of claim 150.
Lemelson and Lopez may not explicitly teach: in accordance with a determination that the current spatial relationship between the first display generation component and the user no longer meets alignment criteria, displaying a second visual indication that the current spatial relationship between the first display generation component and the user no longer meets the alignment criteria.
Kuehne teaches in general concepts related to how the position and head alignment of a user wearing virtual reality glasses are detected and used in a virtual environment (Kuehne: Abstract). Specifically, Kuehne teaches that a warning (i.e. a visual indication) is provided when the user is about to move out of a specific region for the use and detection of the virtual environment device (Kuehne: par. 0010, when the user is about to move out of the pre-specified distance, a visual indication is given).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the Lemelson-Lopez disclosures and teachings to provide for the warning as taught by Kuehne. Such a person would have been motivated to do so with a reasonable expectation of success to allow for a reliable manner of warning the user of the loss of being able to be detected in the specific region (Kuehne: par. 0010).
D.
Claim 153 is rejected under 35 U.S.C. § 103 as being unpatentable over Lemelson et al. (“Lemelson”), United States Patent Application Publication 2003/0020755, published on Jan. 30, 2003, in view of Lopez et al. (“Lopez”), United States Patent Application Publication 2016/0274762, published in 2016, in further view of Shinohara et al. (“Shinohara”), United States Patent Application Publication 2018/0325483, published on Nov. 15, 2018.
As to Claim 153, Lemelson and Lopez teach the limitations of claim 150.
Lemelson and Lopez may not explicitly teach: displaying one or more user interface objects in the first view of the three-dimensional environment, wherein the one or more user interface objects are different from the first user interface object, wherein respective positions of the one or more user interface objects in the first view of the three-dimensional environment do not change in accordance with a change to the current spatial relationship between the first display generation component and the user.
Shinohara teaches in general concepts related to displaying tomographic images in an overlaid fashion (Shinohara: Abstract). Specifically, Shinohara teaches that elements of a view may be fixed; for example, the title remains fixed even if gaze-directed scrolling takes place (Shinohara: par. 0113).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the Lemelson-Lopez disclosures and teachings to provide for fixed user interface elements as taught by Shinohara. Such a person would have been motivated to do so with a reasonable expectation of success to allow for certain information to remain fixed in the virtual environment that would need to be referred to as other information is dynamically changed.
E.
Claims 144, 154, 158-159, 161, 166-171 and 173-174 are rejected under 35 U.S.C. § 103 as being unpatentable over Lemelson et al. (“Lemelson”), United States Patent Application Publication 2003/0020755, published on Jan. 30, 2003, in view of Lopez et al. (“Lopez”), United States Patent Application Publication 2016/0274762, published in 2016, in further view of George-Svahn et al. (“George-Svahn”), United States Patent Application Publication 2017/0235360, published on Aug. 17, 2017.
As to Claim 144, Lemelson and Lopez teach the limitations of claim 137.
Lemelson and Lopez may not explicitly teach: wherein the first user interface object includes a respective system user interface for accessing one or more system functions of the first computer system.
George-Svahn teaches in general concepts related to gaze-based input to interact with a graphical user interface (George-Svahn: Abstract). Specifically, George-Svahn teaches that the volume of a media (i.e. a system function) being played on the system may be controlled via gaze-detection (George-Svahn: par. 0052, a volume slider control may be controlled via a joint gaze and touch gesture).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the Lemelson-Lopez disclosures and teachings by allowing for the system functions to be controlled via the menu interface object as taught by George-Svahn. Such a person would have been motivated to do so with a reasonable expectation of success to allow for ease of control of the system.
As to Claim 154, Lemelson and Lopez teach the limitations of claim 137.
Lemelson and Lopez may not explicitly teach: at a first time, the one or more affordances for accessing the first set of functions of the first computer system include a first affordance for adjusting an audio level of the first computer system; and
at a second time, different from the first time, the one or more affordances for accessing the first set of functions of the first computer system include a second affordance for adjusting an audio level of a first type of audio provided by the first computer system and a third affordance for adjusting an audio level of a second type of audio provided by the first computer system, wherein the second affordance and the third affordance are different from the first affordance.
George-Svahn teaches in general concepts related to gaze-based input to interact with a graphical user interface (George-Svahn: Abstract). Specifically, George-Svahn teaches that the volume of a media being played on the system may be controlled via gaze-detection (George-Svahn: par. 0052, a volume slider control may be controlled via a joint gaze and touch gesture). Examiner notes that the disclosure is not limited to only one slider for one audio source.
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the Lemelson-Lopez disclosures and teachings by allowing for the volume of several sources to be controlled via the menu interface with multiple scrolling objects as taught and suggested by George-Svahn. Such a person would have been motivated to do so with a reasonable expectation of success to allow for ease of control of the system.
As to Claim 158, Lemelson, Lopez and George-Svahn teach the limitations of claim 155.
George-Svahn as combined further teaches: wherein displaying the first user interface object in response to detecting the first user input including the first gaze input, includes displaying an animated transition of the one or more affordances for accessing the first set of functions of the first computer system emerging from the third user interface object in a first direction (George-Svahn: par. 0201, animation may be used as a transition when the object is gazed upon. Examiner asserts that the manner of the appearance of the emerging is a design choice).
As to Claim 159, Lemelson, Lopez and George-Svahn teach the limitations of claim 155.
George-Svahn further teaches: wherein displaying the first user interface object in response to detecting the first user input including the first gaze input, includes displaying an animated transition of the one or more affordances for accessing the first set of functions of the first computer system gradually appearing (George-Svahn: par. 0201, animation may be used as a transition when the object is gazed upon).
As to Claim 161, Lemelson and Lopez teach the limitations of claim 137.
Lemelson and Lopez may not explicitly teach: wherein the first position in the three-dimensional environment is in a periphery region of the viewport through which the three-dimensional environment is visible.
George-Svahn teaches in general concepts related to gaze-based input to interact with a graphical user interface (George-Svahn: Abstract). Specifically, George-Svahn teaches that a menu may be displayed in response to a combination of a gaze on an edge area and a slide input (George-Svahn: par. 0051, the edge of an information area is gazed upon (i.e. a periphery region)).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the Lemelson-Lopez disclosures and teachings by allowing the first position to be in the periphery region as taught and suggested by George-Svahn. Such a person would have been motivated to do so with a reasonable expectation of success to allow for ease of control of the system with an expected region of the view to be used for the activation of a menu system.
As to Claim 166, Lemelson and Lopez teach the limitations of claim 137.
Lemelson and Lopez may not explicitly teach: while displaying the first user interface object, detecting, via the one or more input devices, a seventh user input that activates a first affordance of the one or more affordances for accessing the first set of functions of the first computer system; and
in response to detecting the seventh user input that activates the first affordance, displaying a first system user interface for a first system function of the first computer system in the three-dimensional environment.
George-Svahn teaches in general concepts related to gaze-based input to interact with a graphical user interface (George-Svahn: Abstract). Specifically, George-Svahn teaches that the volume of a media (i.e. a system function) being played on the system may be controlled via gaze-detection (George-Svahn: par. 0052, a volume slider control may be controlled via a joint gaze and touch gesture).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the Lemelson-Lopez disclosures and teachings by allowing for the system functions to be controlled via the menu interface object as taught by George-Svahn. Such a person would have been motivated to do so with a reasonable expectation of success to allow for ease of control of the system.
As to Claim 167, Lemelson, Lopez and George-Svahn teach the limitations of claim 166.
Lemelson, Lopez and George-Svahn as combined further teach: while displaying the first user interface object and the first system user interface, detecting, via the one or more input devices, an eighth user input that activates a second affordance, different from the first affordance, of the one or more of affordances for accessing the first set of functions of the first computer system; and
in response to detecting the eighth user input that activates the second affordance:
displaying a second system user interface, different from the first system user interface, for a second system function of the first computer system; and ceasing to display the first system user interface.
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have further modified the Lemelson-Lopez-George-Svahn disclosures and teachings by allowing for multiple affordances as taught and suggested by George-Svahn for controlling the functions (e.g., George-Svahn: par. 0053, check boxes). Such a person would have been motivated to do so with a reasonable expectation of success to allow for ease of control of the system.
As to Claim 168, Lemelson, Lopez and George-Svahn teach the limitations of claim 166.
Lopez further teaches: while displaying the first system user interface and the first user interface object, detecting, via the one or more input devices, a ninth user input that includes a gaze input directed to the first user interface object; and
in response to detecting the ninth user input, changing one or more visual properties of the first system user interface to reduce visual prominence of the first system user interface relative to the first user interface object (Lopez: par. 0061, the button may be highlighted. Examiner asserts that shading of the highlighting may be interpreted to be of reduced visual prominence broadly).
As to Claim 169, Lemelson, Lopez and George-Svahn teach the limitations of claim 166.
Lopez further teaches: while displaying the first system user interface and the first user interface object, detecting, via the one or more input devices, a tenth user input that includes gaze input directed to the first system user interface; and
in response to detecting the tenth user input, changing one or more visual properties of the first user interface object to reduce visual prominence of the first user interface object relative to the first system user interface (Lopez: par. 0061, the button may be highlighted. Examiner asserts that shading of the highlighting may be interpreted to be of reduced visual prominence broadly).
As to Claim 170, Lemelson, Lopez and George-Svahn teach the limitations of claim 166.
Lemelson, Lopez and George-Svahn further teach: while displaying, via the first display generation component, an application launching user interface in the three-dimensional environment and the first user interface object, detecting, via the one or more input devices, an eleventh user input that activates a third affordance of the one or more of affordances for accessing the first set of functions of the first computer system; and
in response to detecting the eleventh user input that activates the respective affordance:
displaying a third system user interface for a third system function of the first computer system that corresponds to the third affordance; and
ceasing to display the application launching user interface.
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have further modified the Lemelson-Lopez-George-Svahn disclosures and teachings by allowing for multiple affordances as taught and suggested by George-Svahn for controlling the functions (e.g., George-Svahn: par. 0053, check boxes). Such a person would have been motivated to do so with a reasonable expectation of success to allow for ease of control of the system.
As to Claim 171, Lemelson, Lopez and George-Svahn teach the limitations of claim 166.
Lemelson, Lopez and George-Svahn as combined further teach: while displaying, via the first display generation component, an application user interface in the three-dimensional environment and the first user interface object, detecting, via the one or more input devices, a twelfth user input that activates a fourth affordance of the one or more of affordances for accessing the first set of functions of the first computer system; and
in response to detecting the twelfth user input that activates the respective affordance:
displaying a fourth system user interface for a fourth system function of the first computer system that corresponds to the fourth affordance, concurrently with the application user interface.
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have further modified the Lemelson-Lopez-George-Svahn disclosures and teachings by allowing for multiple affordances as taught and suggested by George-Svahn for controlling the functions (e.g., George-Svahn: par. 0053, check boxes). Such a person would have been motivated to do so with a reasonable expectation of success to allow for ease of control of the system.
As to Claim 173, Lemelson, Lopez and George-Svahn teach the limitations of claim 172.
Lopez further teaches: wherein performing the respective operation includes displaying one or more controls for adjusting one or more settings of the first computer system, wherein the one or more controls are displayed overlaying at least a portion of the first user interface object (Lopez: par. 0049, Fig. 8, the UI may be presented as an overlay next to or on top of the physical device.)
As to Claim 174, Lemelson, Lopez and George-Svahn teach the limitations of claim 173.
George-Svahn further teaches: wherein detecting the thirteenth user input that activates the fifth affordance of the one or more of affordances for accessing the first set of functions of the first computer system includes detecting a pinch and release gesture that is directed to the fifth affordance (George-Svahn: par. 0049, the pinching of two of his or her fingers can allow for the zooming of the object part, in combination with a drag), and the method includes:
while displaying the one or more controls for adjusting one or more settings of the first computer system, detecting a pinch and drag gesture that is directed to a first control of the one or more controls; and
in response to detecting the pinch and drag gesture that is directed to the first control of the one or more controls, adjusting a first setting that corresponds to the first control in accordance with one or more characteristics of the pinch and drag gesture (George-Svahn: par. 0049, the pinching of two of his or her fingers can allow for the zooming of the object part, in combination with a drag).
F.
Claim 176 is rejected under 35 U.S.C. § 103 as being unpatentable over Lemelson et al. (“Lemelson”), United States Patent Application Publication 2003/0020755, published on Jan. 30, 2003, in view of Lopez et al. (“Lopez”), United States Patent Application Publication 2016/0274762, published in 2016, in further view of Clements, United States Patent 10,802,582, issued on Oct. 13, 2020.
As to Claim 176, Lemelson and Lopez teach the limitations of claim 137.
Lemelson and Lopez may not explicitly teach: displaying a plurality of system status indicators that include information about a status of the first computer system, concurrently with displaying the first user interface object.
Clements teaches in general eye tracking devices as part of an augmented reality headset (Clements: Abstract). Specifically, Clements teaches that the user is able to direct her eye gaze to a position to trigger display of a window including system information (Clements: col. 7, lines 39-44, the icon is activated by eye gaze and shows an icon window including information about the device).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the Lemelson-Lopez disclosures and teachings by allowing for the displayed information to be about multiple system indicators as taught by Clements. Such a person would have been motivated to do so with a reasonable expectation of success to reduce the cognitive burden on the user.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Ballard et al., US Patent Application Publication 2015/0156803 (June 4, 2015) (describing gaze-based selection of menu and sub menus);
Ziraknejad et al., US Patent Application Publication 2020/0038120 (Feb. 6, 2020) (describing a gaze-based surgical system).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAMES T TSAI whose telephone number is (571)270-3916. The examiner can normally be reached M-F 8-5 Eastern.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Viker Lamardo can be reached on 571-270-5871. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JAMES T TSAI/Primary Examiner, Art Unit 2174