DETAILED ACTION
Status
This Office Action is responsive to the claims filed on 09/18/2023. Claims 1-17 are pending and have been examined.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim 16 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim does not fall within at least one of the four categories of patent eligible subject matter.
Claim 16 describes a “computer-readable storage medium”. Further, Applicant's specification, at [0006], describes “Executable instructions for performing these functions are, optionally, included in a transitory and/or non-transitory computer readable storage medium”. Thus, Applicant’s claimed “computer-readable storage medium” is considered to include data signals per se, which are nonstatutory as data signals per se do not fall into one of the four statutory categories of invention.
As an additional note, a non-transitory computer readable medium having executable programming instructions stored thereon is considered statutory, as non-transitory computer readable medium excludes data signals.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-9 and 11-17 are rejected under 35 U.S.C. 103 as being unpatentable over Faulkner (US 20210097776 A1), in view of Hurwitz (US 20230046333 A1).
Regarding Claim 1, Faulkner discloses a method, comprising:
at a computer system that includes or is in communication with a display generation component and one or more input devices ([0042] “In some embodiments, as shown in FIG. 1, the CGR experience is provided to the user via an operating environment 100 that includes a computer system 101. The computer system 101 includes… a display generation component 120”):
detecting a first input on a rotatable input mechanism of an input device of the one or more input devices ([0058] “In some embodiments, the one or more I/O devices 206 include at least one of a keyboard, a mouse, a touchpad, a joystick, one or more microphones, one or more speakers, one or more image sensors, one or more displays, and/or the like.” Examiner notes “a joystick” or “a mouse” with scroll wheel teaches the claimed rotatable input mechanism):
in response to detecting the first input on the input mechanism:
in accordance with a determination that the first input is a first type of input ([0216] “Specifically, the computer system, in response to detecting a first user input (e.g., input by hand 7200 in FIG. 7G) of the sequence of two or more user inputs, and in accordance with a determination that the first user input meets first criteria…”):
changing an immersion level associated with display of an extended reality (XR) environment generated by the display generation component to a first immersion level ([0216] “the computer system successively increases (10006) a quantity of virtual elements displayed in the three-dimensional scene in accordance with the consecutive inputs of sequence of two or more user inputs (e.g., successively increasing the level of immersiveness of the three-dimensional scene by replacing additional class(es) of physical elements in the three-dimensional scene in response to consecutive user inputs of the same type or consecutive user inputs in a sequence of related inputs).”) in which display of the XR environment concurrently includes virtual content from an application and a passthrough portion of a physical environment of the computer system ([0216] “Specifically, the computer system, in response to detecting a first user input (e.g., input by hand 7200 in FIG. 7G) of the sequence of two or more user inputs, and in accordance with a determination that the first user input meets first criteria (e.g., criteria for detecting a gesture to increase the level of emersion of the computer-generated experience), displays the three-dimensional scene with at least a first subset of the first set of one or more physical elements (e.g., some, not all, of the first set of one or more physical elements become obscured or blocked by newly added virtual elements) and a second quantity of virtual elements (e.g., virtual object 7402 in FIGS. 7H and 7I).”); and
in accordance with a determination that the first input is a second type of input ([0221] “the computer system detects an input that meets second criteria…”):
performing an operation different from changing the immersion level associated with display of the XR environment ([0221] “In response to detecting the input that meets the second criteria (e.g., a long press input that is maintained for at least a respective time threshold) that are distinct from the first criteria, the computer system displays a plurality of selectable options for changing the view into the first virtual environment (e.g., including menu options for changing the virtual environment represented in the virtual window (e.g., by changing the location, time of day, lighting, weather condition, zoom level, viewing perspective, season, date, etc.)).”).
Faulkner does not expressly disclose detecting the first input on the rotatable input mechanism. Instead, Faulkner’s examples of the first input are hand gestures.
However, in the same field of endeavor, Hurwitz discloses detecting a first input on a rotatable input mechanism of an input device of the one or more input devices, and in response to detecting the first input on the rotatable input mechanism, perform various functions based on the different types of input ([0036] “An implementation of such a peripheral interface for AR or VR glasses according to one example would enable user 130 to scroll in the glasses UI by dragging a finger along display 114 or rotating crown 122, to switch between menus in the glasses UI or upon display 114 by swiping across display 114, to select an option on the menu by tapping on display 114, to go back or cancel by pressing on crown 122, and to perform a quick access task by pressing button 126 with greater or less force, or for a different amount of time, than would be used to toggle smartwatch 110 back to the watch interface.” [0002] “A crown here refers, for example, to any one or any combination of a rotary mechanical input”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Faulkner to detect a first input on a rotatable input mechanism and perform various functions in response to detecting the first input on the rotatable input mechanism (e.g., crown 122 as disclosed in Hurwitz). Doing so would allow the system to “receive inputs of rotation, pressure, other inputs, or any combination thereof” and thereby facilitate user input (see Hurwitz [0028]).
Regarding Claim 2, Faulkner-Hurwitz discloses the method of claim 1, further including, in response to a second input of the first type of input, changing the immersion level associated with display of the XR environment generated by the display generation component to a second immersion level in which display of the XR environment concurrently includes different virtual content or virtual content displayed with a different level of fidelity from the virtual content displayed when the first immersion level is associated with display of the XR environment (Faulkner [0217] “In some embodiments, displaying the second quantity of virtual elements in response to detecting the first user input of the sequence of two or more user inputs includes displaying a first animated transition that gradually replaces (e.g., replaces display of objects that would be visible via pass through video or obscures objects that would be directly visible through a transparent or partially transparent display) an increasing amount of the first class of physical elements in the three-dimensional scene with virtual elements (e.g., new virtual elements and/or expansion of existing virtual elements). Displaying the third quantity of virtual elements in response to detecting the second user input of the sequence of two or more user inputs includes displaying a second animated transition that gradually replaces an increasing amount of the second class of physical elements in the three-dimensional scene with virtual elements (e.g., new virtual elements and/or expansion of existing virtual elements) while the first class of physical elements are displayed in place of existing virtual elements (e.g., the second quantity of virtual elements) in the three-dimensional scene.”).
Regarding Claim 3, Faulkner-Hurwitz discloses the method of claim 1, wherein the second type of input includes a press input (Hurwitz [0028] “According to various examples, crown 122 may be able to receive inputs of rotation, pressure, other inputs, or any combination thereof.”), the method further comprising: detecting a third input provided to the rotatable input mechanism; and in response to the rotatable input mechanism detecting the third input as a press input, performing a respective operation selected from a group consisting of: dismissing an active application; dismissing a virtual object displayed via the display generation component; displaying an application manager user interface; enabling an accessibility mode; and redisplaying in the XR environment a plurality of previously displayed user interface elements (Hurwitz [0036] “An implementation of such a peripheral interface for AR or VR glasses according to one example would enable user 130 to… go back or cancel by pressing on crown 122” Faulkner [0221] “In response to detecting the input that meets the second criteria (e.g., a long press input that is maintained for at least a respective time threshold) that are distinct from the first criteria, the computer system displays a plurality of selectable options for changing the view into the first virtual environment (e.g., including menu options for changing the virtual environment represented in the virtual window (e.g., by changing the location, time of day, lighting, weather condition, zoom level, viewing perspective, season, date, etc.)).”).
Regarding Claim 4, Faulkner-Hurwitz discloses the method of claim 1, wherein changing the immersion level associated with display of the XR environment is based on detecting rotational inputs to the rotatable input mechanism (Faulkner [0190] “in accordance with a determination that the first hand movement meets first gesture criteria (e.g., the first hand movement is a pinch and drag gesture (e.g., movement of the pinching fingers is resulted from the whole hand moving laterally), or a swipe gesture (e.g., a micro-swipe gesture by a finger across the surface of another finger or a controller)), the computer system performs a first operation” and Hurwitz [0036] “…by dragging a finger along display 114 or rotating crown 122” demonstrate that rotating crown 122 or a dragging gesture can be used to activate the same operation).
Regarding Claim 5, Faulkner-Hurwitz discloses the method of claim 4, wherein changing the immersion level associated with display of the XR environment based on detecting the rotational inputs includes: in accordance with a determination that the first input is a rotational input in a first direction, increasing the immersion level; and in accordance with a determination that the first input is a rotational input in a second direction different from the first direction, decreasing the immersion level (Faulkner [0149] “In some embodiments, the gesture inputs for increasing or decreasing the level of immersiveness of the three-dimensional environment are vertical swipe gestures that are of opposite directions (e.g., upward for increasing immersiveness/quantity of virtual elements, and downward for decreasing immersiveness/quantity of virtual elements).”).
Regarding Claim 6, Faulkner-Hurwitz discloses the method of claim 1, wherein the first type of input includes a rotational input of the rotatable input mechanism, and the second type of input includes a press input of the rotatable input mechanism (Hurwitz [0028] “According to various examples, crown 122 may be able to receive inputs of rotation, pressure, other inputs, or any combination thereof.”).
Regarding Claim 7, Faulkner-Hurwitz discloses the method of claim 6, including, in response to detecting the first input: in accordance with a determination that the first input is the second type of input and comprises a first number of press inputs, performing a first operation, and in accordance with a determination that the first input is the second type of input and comprises a second number of press inputs different from the first number, performing a second operation different from the first operation (Faulkner [0243] “in accordance with a determination that the user input on the first physical surface of the first physical object meets sixth criteria (e.g., a first set of criteria among respective sets of criteria for detecting… a tap input, …or a double tap input, etc.), the computer system performs a first operation corresponding to the first physical object. In accordance with a determination that the user input on the first physical surface of the first physical object meets sixth criteria (e.g., a second set of criteria among the respective sets of criteria for detecting …a tap input, …or a double tap input, etc.), the computer system performs a second operation corresponding to the first physical object, that is distinct from the first operation.”).
Regarding Claim 8, Faulkner-Hurwitz discloses the method of claim 7, including: detecting the first number of press inputs directed to the rotatable input mechanism; and in response to detecting the first number of press inputs directed to the rotatable input mechanism, dismissing an active application by causing the active application to run in a background and/or displaying, via the display generation component, a home menu user interface (Hurwitz [0036] “An implementation of such a peripheral interface for AR or VR glasses according to one example would enable user 130 to …go back or cancel by pressing on crown 122”).
Regarding Claim 9, Faulkner-Hurwitz discloses the method of claim 8, including: detecting the second number of press inputs directed to the rotatable input mechanism; and in response to detecting the second number of press inputs directed to the rotatable input mechanism, displaying an application manager user interface (Hurwitz [0036] “An implementation of such a peripheral interface for AR or VR glasses according to one example would enable user 130 to …perform a quick access task by pressing button 126 with greater or less force” Faulkner [0221] “In response to detecting the input that meets the second criteria (e.g., a long press input that is maintained for at least a respective time threshold) that are distinct from the first criteria, the computer system displays a plurality of selectable options for changing the view into the first virtual environment (e.g., including menu options for changing the virtual environment represented in the virtual window (e.g., by changing the location, time of day, lighting, weather condition, zoom level, viewing perspective, season, date, etc.)).”).
Regarding Claim 11, Faulkner-Hurwitz discloses the method of claim 8, including: detecting a fourth number of press inputs directed to the rotatable input mechanism; and in response to detecting the fourth number of press inputs directed to the rotatable input mechanism, dismissing a virtual object by displaying a respective passthrough portion of the physical environment of the computer system (Faulkner [0242] “the computer system detects a user input that meets fifth criteria that correspond to a request for dismissing the first user interface (e.g., criteria for detecting a swipe input while a gaze input is focused on the first user interface).” Hurwitz [0036] “Pushing or pulling on crown 122 may also be inputs to which functions of the peripheral device 132 are assigned when smartwatch 110 is in the peripheral control mode.”).
Regarding Claim 12, Faulkner-Hurwitz discloses the method of claim 1, including, in response to detecting the first input: in accordance with a determination that the first input is the second type of input and has a duration meeting first criteria, performing a first operation, and in accordance with a determination that the first input is the second type of input and has a duration meeting second criteria different from the first criteria, performing a second operation different from the first operation (Faulkner [0184] “In some embodiments, the amount of movement performed by the moving finger (e.g., thumb) and or other movement metrics associated with the movement of the finger (e.g., speed, initial speed, ending speed, duration, direction, movement pattern, etc.) is used to quantitatively affect the operation that is triggered by the finger input.”).
Regarding Claim 13, Faulkner-Hurwitz discloses the method of claim 1, including, in accordance with the determination that the first input is the second type of input, displaying in the XR environment a home menu user interface (Faulkner [0109] “In some embodiments, one or more of user interface objects 7208, 7210, and 7212 are application launch icons (e.g., for performing an operation to launch a corresponding application, and an operation to display a quick action menu corresponding to a respective application, etc.).”).
Regarding Claim 14, Faulkner-Hurwitz discloses the method of claim 1, wherein the method includes: detecting a fourth input of the second type of input in conjunction with detecting a fifth input on a second input device; and in response to detecting the fourth input of the second type of input in conjunction with the fifth input on the second input device, performing one or more third operations (Faulkner [0242] “the computer system detects a user input that meets fifth criteria that correspond to a request for dismissing the first user interface (e.g., criteria for detecting a swipe input while a gaze input is focused on the first user interface). In response to detecting the user input that meets the fifth criteria, the computer system ceases to display the first user interface (e.g., without replacing the first user interface with the second user interface).”).
Regarding Claim 15, Faulkner-Hurwitz discloses the method of claim 14, wherein a respective third operation of the one or more third operations is selected from a group consisting of: taking a screenshot, powering off the computer system, restarting the computer system, and entering a hardware reset mode of the computer system (Faulkner [0242] “the computer system detects a user input that meets fifth criteria that correspond to a request for dismissing the first user interface (e.g., criteria for detecting a swipe input while a gaze input is focused on the first user interface). In response to detecting the user input that meets the fifth criteria, the computer system ceases to display the first user interface (e.g., without replacing the first user interface with the second user interface).”).
Regarding Claim 16, it recites limitations similar to those of claim 1, but in the form of a storage medium. The rationale applied in the rejection of claim 1 is therefore applied to reject claim 16.
Regarding Claim 17, it recites limitations similar to those of claim 1, but in the form of a system. The rationale applied in the rejection of claim 1 is therefore applied to reject claim 17.
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Faulkner (US 20210097776 A1), in view of Hurwitz (US 20230046333 A1), further in view of TPGi, “Windows Accessibility Features for Persons with Dexterity Disabilities”.
Regarding Claim 10, Faulkner-Hurwitz discloses the method of claim 8, including: detecting a third number of press inputs directed to the rotatable input mechanism (Hurwitz [0036] “Pushing or pulling on crown 122 may also be inputs to which functions of the peripheral device 132 are assigned when smartwatch 110 is in the peripheral control mode.”). Faulkner-Hurwitz does not expressly disclose, in response to detecting the third number of press inputs directed to the rotatable input mechanism, performing or enabling an accessibility mode operation. However, in the same field of endeavor, TPGi discloses performing or enabling an accessibility mode operation in response to a predetermined number of press inputs (see the fourth paragraph, “Sticky Keys allows keyboard shortcuts to be executed one key at a time… To enable it, Shift has to be pressed five times.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Faulkner-Hurwitz to enable an accessibility mode operation in response to detecting five press inputs directed to the crown 122. Doing so would make the computing system of Faulkner-Hurwitz easier for persons with dexterity impairments to operate.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHONG WU whose telephone number is (571)270-5207. The examiner can normally be reached MON-FRI: 9AM-5PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Xiao Wu can be reached at 571-272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CHONG WU/Primary Examiner, Art Unit 2613