DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 08/04/2025 was filed after the filing date of the application on 06/02/2025. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim 27 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.
Claim 27 is drawn to a computer readable medium having stored thereon a computer program, where the computer readable medium, as defined in Paragraph 06 of the specification, can be a signal, carrier wave, or paper; the claim therefore fails to fall within a statutory category of invention.
A claim directed to a computer readable medium having stored thereon a computer program, where the computer readable medium as defined in the specification can be a signal or carrier wave or paper, covers a signal or carrier wave or paper, which are non-statutory, as noted infra.
A claim directed to a computer program itself or signal or carrier wave is non-statutory because it is not:
A process occurring as a result of executing the program, or
A machine programmed to operate in accordance with the program, or
A manufacture structurally and functionally interconnected with the program in a manner which enables the program to act as a computer component and realize its functionality, or
A composition of matter.
A claim directed to a paper having thereon a computer program is non-statutory because it covers printed matter, which is non-statutory. It is not until the program is converted into an electronic form to be read and executed by the processor that it becomes functional descriptive material. There is no functional relationship between the paper and the computer program (see In re Gulack, 217 USPQ 401; In re Lowry, 32 F.3d 1579, 32 USPQ2d 1031 (Fed. Cir. 1994)). The program as disclosed is merely printed on the paper; hence, the program is merely non-functional descriptive material, and therefore the claimed paper with a computer program printed on it is non-statutory. See Ex parte S, 25 JPOS 904; Ex parte Glenn, 155 USPQ 42; In re Lockert, 65 F.2d 159, 17 USPQ 515.
See MPEP § 2106.01. Data structures not claimed as embodied in computer readable media are descriptive material per se and are not statutory because they are not capable of causing functional change in the computer. See, e.g., Warmerdam, 33 F.3d at 1361, 31 USPQ2d at 1760 (claim to a data structure per se held nonstatutory). Such claimed data structures do not define any structural and functional interrelationships between the data structure and other claimed aspects of the invention, which permit the data structure's functionality to be realized. In contrast, a claimed computer readable medium encoded with a data structure defines structural and functional interrelationships between the data structure and the computer software and hardware components which permit the data structure's functionality to be realized, and is thus statutory. Similarly, computer programs claimed as computer listings per se, i.e., the descriptions or expressions of the programs are not physical “things.” They are neither computer components nor statutory processes, as they are not “acts” being performed. Such claimed computer programs do not define any structural and functional interrelationships between the computer program and other claimed elements of a computer, which permit the computer program's functionality to be realized.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-27 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Krivoruchko et al. (Provisional Application No. 63/506,124, as published in US Pub. No. 2024/0103803).
Regarding claim 1, Krivoruchko discloses:
A method, (at least refer to fig. 1A and paragraph 47. Describes the systems, methods, and GUIs described herein improve user interface interactions with virtual/augmented reality environments in multiple ways) comprising:
At a computer system that is in communication with one or more display generation components and one or more input devices, (at least refer to fig. 3 and paragraph 52. Describes the computer system 101 includes a controller 110 (e.g., processors of a portable electronic device or a remote server), a display generation component 120 (e.g., a head-mounted device (HMD), a display, a projector, a touch-screen, etc.), one or more input devices 125 (e.g., an eye tracking device 130, a hand tracking device 140, other input devices 150)):
Displaying, via the one or more display generation components, a first view of a three-dimensional environment, wherein the first view of the three-dimensional environment includes first application content that corresponds to a first application, (at least refer to fig. 3-4, 19F and paragraphs 62-63, 611. Describes the view of the three-dimensional environment is typically visible to the user via one or more display generation components (e.g., a display or a pair of display modules that provide stereoscopic content to different eyes of the same user) through a virtual viewport that has a viewport boundary that defines an extent of the three-dimensional environment that is visible to the user via the one or more display generation components. Para. 63, describes: the background content is included in a background over which the virtual content is displayed (e.g., background content in the representation of the physical environment). In some embodiments, the background content includes user interfaces (e.g., user interfaces generated by the computer system corresponding to applications). Para. 611, describes: displaying the attention indicator at the first location of the user interface includes displaying the attention indicator at a location of content of a first application (2006a), such as the indicator for attention 1916 for user interface 1912);
While displaying the first application content that corresponds to the first application in the first view of the three-dimensional environment, detecting a first change in position of attention of a user relative to the first application content, (at least refer to fig. 1I, 1O and paragraph 72. Describes gaze and/or attention information is, optionally, combined with hand tracking information to determine interactions between the user and one or more user interfaces based on direct and/or indirect inputs such as air gestures or inputs that use one or more hardware input devices such as one or more buttons); and
In response to detecting the first change in position of the attention of the user relative to the first application content, (at least refer to fig. 7K-L, 8G and paragraph 265. Describes the first gaze target associated with the first selectable user interface object is displayed when the attention of the user is detected as being directed to the first edge region of the scrollable region (836a), such as displaying target 704e′):
In accordance with a determination that the attention of the user has moved closer to a first portion of a first boundary that confines the first application content in two or more dimensions than to a second portion of the first boundary that is adjacent to the first portion of the first boundary, visually emphasizing the first portion of the first boundary relative to the second portion of the first boundary, (at least refer to fig. 11A and paragraph 353. Describes computer system 101 detects attention 1132a directed to object 1130a, and displays an attention indicator 1134a at the location of attention 1132a within object 1130a. As shown in FIG. 11A, the attention indicator 1134a optionally has a circular shape, and is optionally a visually emphasized portion of object 1130a that visually emphasizes the location of attention 1132a with respect to portions of object 1130a that do not have attention. Further, the attention indicator 1134a is optionally masked and/or cutoff by the boundaries of object 1130a to which attention 1132a is directed, such that the attention indicator 1134a is not displayed outside of such boundaries); and
In accordance with a determination that the attention of the user has moved closer to the second portion of the first boundary than the first portion of the first boundary, visually emphasizing the second portion of the first boundary relative to the first portion of the first boundary, (at least refer to fig. 11B and paragraph 362. Describes computer system 101 has moved attention indicator 1134a downward and leftward correspondingly, and has switched from masking the right side of attention indicator 1134a to masking the bottom side of attention indicator 1134a, because attention indicator 1134a is colliding with the bottom side of object 1130a rather than the right side of object 1130a. Similarly, attention 1132c (e.g., gaze-based) has moved downward and leftward in object 1130c, as indicated by the arrow 1138c).
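For illustration of the boundary-emphasis behavior recited in claim 1 and mapped above, the following minimal sketch shows one way the limitation could be modeled: the portion of the content boundary nearer to the user's attention position is emphasized relative to the adjacent portion. All identifiers (BoundaryPortion, update_boundary_emphasis) and the coordinate values are hypothetical; they are not taken from the claims of record or from Krivoruchko.

```python
# Illustrative sketch only: emphasize whichever adjacent boundary portion
# the user's attention (e.g., a gaze point) is currently closest to.
# All names and values are hypothetical and not drawn from the record.
from dataclasses import dataclass
import math

@dataclass
class BoundaryPortion:
    name: str                    # e.g., "top edge", "right edge"
    center: tuple                # (x, y) representative point of this portion
    emphasized: bool = False

def distance(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def update_boundary_emphasis(attention_xy, first: BoundaryPortion, second: BoundaryPortion):
    """Visually emphasize the boundary portion nearer to the attention position."""
    if distance(attention_xy, first.center) < distance(attention_xy, second.center):
        first.emphasized, second.emphasized = True, False
    else:
        first.emphasized, second.emphasized = False, True
    return first, second

# Usage: attention moves toward the right edge, so the right edge is emphasized.
top = BoundaryPortion("top edge", (0.5, 1.0))
right = BoundaryPortion("right edge", (1.0, 0.5))
update_boundary_emphasis((0.9, 0.6), top, right)
assert right.emphasized and not top.emphasized
```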
Regarding claim 26, Krivoruchko discloses:
A computer system that is in communication with one or more display generation components and one or more input devices, (at least refer to fig. 3 and paragraph 52. Describes the computer system 101 includes a controller 110 (e.g., processors of a portable electronic device or a remote server), a display generation component 120 (e.g., a head-mounted device (HMD), a display, a projector, a touch-screen, etc.), one or more input devices 125 (e.g., an eye tracking device 130, a hand tracking device 140, other input devices 150)), the computer system comprising:
One or more processors, (at least refer to fig. 3 and paragraph 52. Describes the computer system 101 includes a controller 110 (e.g., processors of a portable electronic device or a remote server)); and
Memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for, (at least refer to fig. 2-3 and paragraph 142. Describes the memory 220 includes high-speed random-access memory, such as dynamic random-access memory (DRAM), static random-access memory (SRAM), double-data-rate random-access memory (DDR RAM), or other random-access solid-state memory devices. The memory 220 or the non-transitory computer readable storage medium of the memory 220 stores the following programs, modules and data structures, or a subset thereof including an optional operating system 230 and an XR experience module 240):
Displaying, via the one or more display generation components, a first view of a three-dimensional environment, wherein the first view of the three-dimensional environment includes first application content that corresponds to a first application, (at least refer to fig. 3-4, 19F and paragraphs 62-63, 611. Describes the view of the three-dimensional environment is typically visible to the user via one or more display generation components (e.g., a display or a pair of display modules that provide stereoscopic content to different eyes of the same user) through a virtual viewport that has a viewport boundary that defines an extent of the three-dimensional environment that is visible to the user via the one or more display generation components. Para. 63, describes: the background content is included in a background over which the virtual content is displayed (e.g., background content in the representation of the physical environment). In some embodiments, the background content includes user interfaces (e.g., user interfaces generated by the computer system corresponding to applications). Para. 611, describes: displaying the attention indicator at the first location of the user interface includes displaying the attention indicator at a location of content of a first application (2006a), such as the indicator for attention 1916 for user interface 1912);
While displaying the first application content that corresponds to the first application in the first view of the three-dimensional environment, detecting a first change in position of attention of a user relative to the first application content, (at least refer to fig. 1I, 1O and paragraph 72. Describes gaze and/or attention information is, optionally, combined with hand tracking information to determine interactions between the user and one or more user interfaces based on direct and/or indirect inputs such as air gestures or inputs that use one or more hardware input devices such as one or more buttons); and
In response to detecting the first change in position of the attention of the user relative to the first application content, (at least refer to fig. 7K-L, 8G and paragraph 265. Describes the first gaze target associated with the first selectable user interface object is displayed when the attention of the user is detected as being directed to the first edge region of the scrollable region (836a), such as displaying target 704e′):
In accordance with a determination that the attention of the user has moved closer to a first portion of a first boundary that confines the first application content in two or more dimensions than to a second portion of the first boundary that is adjacent to the first portion of the first boundary, visually emphasizing the first portion of the first boundary relative to the second portion of the first boundary, (at least refer to fig. 11A and paragraph 353. Describes computer system 101 detects attention 1132a directed to object 1130a, and displays an attention indicator 1134a at the location of attention 1132a within object 1130a. As shown in FIG. 11A, the attention indicator 1134a optionally has a circular shape, and is optionally a visually emphasized portion of object 1130a that visually emphasizes the location of attention 1132a with respect to portions of object 1130a that do not have attention. Further, the attention indicator 1134a is optionally masked and/or cutoff by the boundaries of object 1130a to which attention 1132a is directed, such that the attention indicator 1134a is not displayed outside of such boundaries); and
In accordance with a determination that the attention of the user has moved closer to the second portion of the first boundary than the first portion of the first boundary, visually emphasizing the second portion of the first boundary relative to the first portion of the first boundary, (at least refer to fig. 11B and paragraph 362. Describes computer system 101 has moved attention indicator 1134a downward and leftward correspondingly, and has switched from masking the right side of attention indicator 1134a to masking the bottom side of attention indicator 1134a, because attention indicator 1134a is colliding with the bottom side of object 1130a rather than the right side of object 1130a. Similarly, attention 1132c (e.g., gaze-based) has moved downward and leftward in object 1130c, as indicated by the arrow 1138c).
Regarding claim 27, Krivoruchko discloses:
A computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more display generation components and one or more input devices, the one or more programs including instructions for, (at least refer to fig. 2-3 and paragraph 52. Describes the computer system 101 includes a controller 110 (e.g., processors of a portable electronic device or a remote server), a display generation component 120 (e.g., a head-mounted device (HMD), a display, a projector, a touch-screen, etc.), one or more input devices 125 (e.g., an eye tracking device 130, a hand tracking device 140, other input devices 150). Para. 142, describes: the memory 220 or the non-transitory computer readable storage medium of the memory 220 stores the following programs, modules and data structures, or a subset thereof including an optional operating system 230 and an XR experience module 240):
Displaying, via the one or more display generation components, a first view of a three-dimensional environment, wherein the first view of the three-dimensional environment includes first application content that corresponds to a first application, (at least refer to fig. 3-4, 19F and paragraphs 62-63, 611. Describes the view of the three-dimensional environment is typically visible to the user via one or more display generation components (e.g., a display or a pair of display modules that provide stereoscopic content to different eyes of the same user) through a virtual viewport that has a viewport boundary that defines an extent of the three-dimensional environment that is visible to the user via the one or more display generation components. Para. 63, describes: the background content is included in a background over which the virtual content is displayed (e.g., background content in the representation of the physical environment). In some embodiments, the background content includes user interfaces (e.g., user interfaces generated by the computer system corresponding to applications). Para. 611, describes: displaying the attention indicator at the first location of the user interface includes displaying the attention indicator at a location of content of a first application (2006a), such as the indicator for attention 1916 for user interface 1912);
While displaying the first application content that corresponds to the first application in the first view of the three-dimensional environment, detecting a first change in position of attention of a user relative to the first application content, (at least refer to fig. 1I, 1O and paragraph 72. Describes gaze and/or attention information is, optionally, combined with hand tracking information to determine interactions between the user and one or more user interfaces based on direct and/or indirect inputs such as air gestures or inputs that use one or more hardware input devices such as one or more buttons); and
In response to detecting the first change in position of the attention of the user relative to the first application content, (at least refer to fig. 7K-L, 8G and paragraph 265. Describes the first gaze target associated with the first selectable user interface object is displayed when the attention of the user is detected as being directed to the first edge region of the scrollable region (836a), such as displaying target 704e′):
In accordance with a determination that the attention of the user has moved closer to a first portion of a first boundary that confines the first application content in two or more dimensions than to a second portion of the first boundary that is adjacent to the first portion of the first boundary, visually emphasizing the first portion of the first boundary relative to the second portion of the first boundary, (at least refer to fig. 11A and paragraph 353. Describes computer system 101 detects attention 1132a directed to object 1130a, and displays an attention indicator 1134a at the location of attention 1132a within object 1130a. As shown in FIG. 11A, the attention indicator 1134a optionally has a circular shape, and is optionally a visually emphasized portion of object 1130a that visually emphasizes the location of attention 1132a with respect to portions of object 1130a that do not have attention. Further, the attention indicator 1134a is optionally masked and/or cutoff by the boundaries of object 1130a to which attention 1132a is directed, such that the attention indicator 1134a is not displayed outside of such boundaries); and
In accordance with a determination that the attention of the user has moved closer to the second portion of the first boundary than the first portion of the first boundary, visually emphasizing the second portion of the first boundary relative to the first portion of the first boundary, (at least refer to fig. 11B and paragraph 362. Describes computer system 101 has moved attention indicator 1134a downward and leftward correspondingly, and has switched from masking the right side of attention indicator 1134a to masking the bottom side of attention indicator 1134a, because attention indicator 1134a is colliding with the bottom side of object 1130a rather than the right side of object 1130a. Similarly, attention 1132c (e.g., gaze-based) has moved downward and leftward in object 1130c, as indicated by the arrow 1138c).
Regarding claim 2, Krivoruchko discloses:
Wherein detecting the first change in position of the attention of the user relative to the first application content includes detecting that the attention of the user has moved to a respective location in the three-dimensional environment that meets first criteria, wherein the first criteria require that the respective location is within a first threshold range of the first boundary in order for the first criteria to be met, (at least refer to fig. 13A-C, 14D and paragraphs 462, 471. Describes in response to detecting the input directed to the first region of the user interface (1402c), in accordance with a determination that the first region of the user interface includes at least two selectable objects that meet first criteria, such as the region 1328b including objects 1330f-i in FIGS. 13A and 13A1 (such as a distance criterion that is satisfied when a distance between the at least two selectable objects displayed in the first region of the user interface is below a threshold distance (e.g., 0.5 cm, 1 cm, 1.5 cm, 2 cm, 5 cm, 20 cm, 80 cm, 1 m, 3 m, or another threshold distance)). Para. 471, describes: in response to detecting the second input directed to the second region of the user interface (1414b), in accordance with a determination that the second region includes a set of two or more selectable objects that meet the first criteria (e.g., such as described with reference to step(s) 1402), such as attention 1332c directed to object 1328c of FIG. 13B, the computer system displays (1414c), via the display generation component, an enlarged view of the second region of the user interface (optionally including characteristics of display of the enlarged view of the first region of the user interface described in reference to step(s) 1402, but applied to the enlarged view of the second region of the user interface) without displaying the enlarged view of the first region of the user interface).
Regarding claim 3, Krivoruchko discloses:
Wherein the first portion of the first boundary is a portion of a respective two-dimensional surface, and the second portion of the first boundary is a different portion of the respective two-dimensional surface, (at least refer to fig. 13A and paragraph 443. Describes three-dimensional environment 1304 also includes virtual content, such as virtual content 1326a. Virtual content 1326a is optionally one or more of a user interface of an application (e.g., messaging user interface, or content browsing user interface), a two-dimensional object (e.g., a shape, or a representation of a photograph). Virtual content 1326a is a user interface that includes different regions 1328a-d that include various selectable objects 1330a-m).
Regarding claim 4, Krivoruchko discloses:
In response to detecting the first change in position of the attention of the user relative to the first application content: in accordance with a determination that the attention of the user has moved closer to a respective portion of the first boundary than to one or more other portions of the first boundary, visually emphasizing the respective portion of the first boundary relative to the one or more other portions of the first boundary, (at least refer to fig. 11B and paragraphs 362, 367. Describes computer system 101 has moved attention indicator 1134a downward and leftward correspondingly, and has switched from masking the right side of attention indicator 1134a to masking the bottom side of attention indicator 1134a, because attention indicator 1134a is colliding with the bottom side of object 1130a rather than the right side of object 1130a. Similarly, attention 1132c (e.g., gaze-based) has moved downward and leftward in object 1130c, as indicated by the arrow 1138c. Para. 367, describes: the computer system 101 increases a visual prominence of hand-based attention indicator as distance between the hand of the user and a respective object decreases (e.g., the hand of the user moves closer to the respective object)).
Regarding claim 5, Krivoruchko discloses:
In response to detecting the first change in position of the attention of the user relative to the first application content: displaying one or more portions of the first boundary that extend in a first dimension and a second dimension without displaying one or more portions of the first boundary that extend in a third dimension that is different from the first dimension and from the second dimension, (at least refer to fig. 13A and paragraph 456. Describes in response to either of the above inputs, computer system 101 displays an enlarged view 1327a of region 1328c, as shown in FIG. 13C (e.g., without performing an operation associated with one of objects 1330j-l, even if attention 1332c was directed to one of those objects when the selection input was detected). Enlarged view 1327a is optionally an in-line enlarged view of region 1328c (e.g., as if displaying region 1328c through a magnifying glass, without displaying a new user interface object in three-dimensional environment 1304)).
Regarding claim 6, Krivoruchko discloses:
While visually emphasizing a respective portion of the first boundary, detecting a second change in position of the attention of the user relative to the first application content, (at least refer to fig. 7A-C and paragraph 335. Describes while the attention of the user is directed toward the first selectable user interface object and while the third selectable user interface object is displayed with a first visual appearance, the computer system detects (1016b) that the attention of the user of the computer system has changed to being directed toward the third selectable user interface object); and
In response to detecting the second change in position of the attention of the user relative to the first application content, (at least refer to fig. 7A-C and paragraph 336. Describes in response to detecting that the attention of the user is directed toward the third selectable user interface object, the computer system displays (1016c) the third selectable user interface object with a second visual appearance, different from the first visual appearance):
In accordance with a determination that the attention of the user has moved closer to a third portion of the first boundary than to the respective portion of the first boundary, (at least refer to fig. 7A-C and paragraph 336. Describes in accordance with a determination that attention of the user directed toward the first selectable user interface object has changed away (e.g., for the period of time greater than the time threshold described in step(s) 1002) and is now directed toward the third selectable user interface object, the computer system changes the visual appearance of the first selectable user interface object):
Ceasing to visually emphasize the respective portion of the first boundary, and visually emphasizing the third portion of the first boundary relative to the respective portion of the first boundary, (at least refer to fig. 7A-C and paragraph 336. Describes in response to detecting that the attention of the user is directed toward the third selectable user interface object, the computer system displays (1016c) the third selectable user interface object with a second visual appearance, different from the first visual appearance. For example, the second visual appearance optionally includes a second size larger, expanded (e.g., consistent, but not limited to the increased size of the first selectable user interface object described in method 800) than the first size of the first visual appearance. In some embodiments, in accordance with a determination that attention of the user directed toward the first selectable user interface object has changed away (e.g., for the period of time greater than the time threshold described in step(s) 1002) and is now directed toward the third selectable user interface object, the computer system changes the visual appearance of the first selectable user interface object (e.g., from a larger, expanded size to a smaller compact size as described in method 800)).
Regarding claim 7, Krivoruchko discloses:
Wherein detecting the first change in position of the attention of the user relative to the first application content includes detecting a change in position of a gaze of the user relative to the first application content, (at least refer to fig. 7 and paragraph 336. Describes performing the first operation is confirmed at the second selectable user interface object in accordance with the attention of the user directed towards the second selectable user interface object or a region of the second selectable user interface object for a period of time greater than the second time threshold (e.g., duration of gaze in the direction of the second selectable user interface object is beyond the second time threshold). In some embodiments, the second selectable user interface object corresponds to the first gaze target described with reference to method 800).
Regarding claim 8, Krivoruchko discloses:
Wherein detecting the first change in position of the attention of the user relative to the first application content includes detecting movement of a viewpoint of the user relative to the first application content, (at least refer to fig. 7 and paragraph 246. Describes while the attention of the user is not directed toward the first selectable user interface object or in a region of the first selectable user interface object, the computer system displays the first selectable user interface object at the first distance without the simulated shadow. Displaying the first selectable user interface object as closer to the viewpoint of the user when the attention of the user is directed toward the first selectable user interface object allows the computer system to convey to the user that attention of the user is directed toward the first selectable user interface object, thereby reducing errors in the interaction between the user and the computer system).
Regarding claim 9, Krivoruchko discloses:
Wherein the first portion of the first boundary is a portion of a first surface of the first boundary, and the second portion of the first boundary is a portion of a second surface of the first boundary, (at least refer to fig. 11C and paragraphs 350, 383. Describes three-dimensional environment 1104 also includes virtual content, such as virtual content 1126a and 1128a. Virtual content 1126a and 1128a are optionally one or more of a user interface of an application (e.g., messaging user interface, or content browsing user interface). Para. 383, describes: For example, the computer system optionally displays a first attention indicator (e.g., hand-based) for a first object with a first size, where a hand of a user is directed to the first object and is a first distance from the first object. The computer system optionally displays the first (e.g., same) attention indicator (e.g., hand-based) for a second object (e.g., different from the first object) with a second size (e.g., different from the first size), where the hand of the user is directed to the second object and is also the first distance from the second object).
Regarding claim 10, Krivoruchko discloses:
Wherein visually emphasizing a respective portion of the first boundary includes displaying a region of the respective portion that is closer to an edge of the first boundary with greater visual emphasis than a region of the respective portion that is further from the edge of the first boundary, (at least refer to fig. 11A and paragraphs 360, 418. Describes an attention indicator (e.g., hand-based), when displayed, is optionally brighter, less transparent, and/or less blurry within an inner area of a respective object, but fades, is more transparent, is blurrier, or changes in other visual characteristics toward the edges of the attention indicator (hand-based) and/or outside the inner area of the respective object. As shown in FIG. 11A, the inner area of the non-selectable object 1130i is less transparent compared to the edges of the non-selectable object 1130i when attention indicator 1134i (hand-based) is displayed. Para. 418, describes: the visual indication has a gradual fall-off near one or more edges of the visual indication to, for example, indicate a distance from a center point of the attention of the user. Displaying the visual indication with a visual appearance that changes based on a distance from a location of the attention of the user clearly indicates the center or central portion of the attention of the user).
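As an illustrative aside on the claim 10 limitation mapped above (greater visual emphasis nearer an edge of the boundary, with a gradual fall-off), the sketch below models emphasis as a weight that decays with distance from the edge. The function name emphasis_at and the falloff constant are hypothetical and are not drawn from the claims or from the reference.

```python
# Illustrative sketch only (hypothetical helper): an emphasis weight that
# falls off with distance from the boundary edge, so regions nearer the
# edge are rendered with greater visual emphasis than regions farther away.
def emphasis_at(distance_from_edge: float, falloff: float = 0.1) -> float:
    """Return an emphasis weight in [0, 1]; 1.0 at the edge, decaying with distance."""
    if distance_from_edge <= 0.0:
        return 1.0
    return max(0.0, 1.0 - distance_from_edge / falloff)

# Example: a point 0.02 units from the edge is emphasized more than one 0.08 units away.
assert emphasis_at(0.02) > emphasis_at(0.08)
```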
Regarding claim 11, Krivoruchko discloses:
In response to detecting the first change in position of the attention of the user relative to the first application content, displaying at least one of the first portion of the first boundary and the second portion of the first boundary without displaying one or more additional portions of the first boundary that are different from the first portion and from the second portion, (at least refer to fig. 23A-B and paragraphs 703, 713. Describes three-dimensional environment 2302 also includes virtual objects, such as virtual objects 2304, 2306, and 2308. Virtual objects 2304, 2306, and 2308 are optionally one or more of a user interface of an application (e.g., scheduling user interface, browser user interface, or alarm user interface), a three-dimensional object (e.g., virtual clock, virtual ball, or virtual car), or any other element displayed by computer system 101 that is not included in the physical environment of computer system 101. Para. 713, describes: in response to detecting attention 2316 directed to selectable object 2306a, computer system 101 updates or expands virtual object 2306 (e.g., browser user interface) to include value selection user interface object 2306c. Value selection user interface objects 2304c and 2306c were not included in virtual objects 2304 and 2306, respectively, before attention was directed to respective selectable objects 2304b and 2306a); and
While displaying the at least one of the first portion of the first boundary and the second portion of the first boundary without displaying the one or more additional portions of the first boundary, detecting a user input corresponding to a request to resize the first application content, (at least refer to fig. 23A-D and paragraph 717. Describes computer system 101 detects attention 2314 shift from selectable component element 2304g to selectable component element 2304f. In response, as shown in FIG. 23D, computer system 101 displays selectable component element 2304f at a size and/or font style to emphasize the selectable component element 2304f relative to other selectable component elements of the value selection user interface object 2304c, such as selectable component elements 2304e and 2304g); and
In response to detecting the user input corresponding to the request to resize the first application content, displaying the one or more additional portions of the first boundary, (at least refer to fig. 23A-E and paragraph 717. Describes computer system 101 navigates through the set of value options for selectable component element 2304f such that selection region 2304d includes the “58” value option as shown in FIG. 23E in response to detecting that the attention of the user was directed to a location corresponding to the “58” value option as shown in the previous figure).
Regarding claim 12, Krivoruchko discloses:
In response to detecting the first change in position of the attention of the user relative to the first application content, displaying a first extent of the first boundary, (at least refer to fig. 23A-B and paragraph 713. Describes in response to detecting attention 2316 directed to selectable object 2306a, computer system 101 updates or expands virtual object 2306 (e.g., browser user interface) to include value selection user interface object 2306c. Value selection user interface objects 2304c and 2306c were not included in virtual objects 2304 and 2306, respectively, before attention was directed to respective selectable objects 2304b and 2306a); and
In response to detecting the user input corresponding to the request to resize the first application content, displaying a second extent of the first boundary, wherein the second extent is greater than the first extent, (at least refer to fig. 23A-E and paragraph 717. Describes computer system 101 detects attention 2314 shift from selectable component element 2304g to selectable component element 2304f. In response, as shown in FIG. 23D, computer system 101 displays selectable component element 2304f at a size and/or font style to emphasize the selectable component element 2304f relative to other selectable component elements of the value selection user interface object 2304c, such as selectable component elements 2304e and 2304g. The computer system 101 optionally navigates or scrolls through the set of value options for selectable component element 2304f in a downwards direction to reveal value options from the top region of selectable component element 2304f that were not previously displayed (e.g., “57” and “56” value options)).
Regarding claim 13, Krivoruchko discloses:
Displaying, via the one or more display generation components, a second view of a three-dimensional environment, wherein the second view of the three-dimensional environment includes second application content that corresponds to a second application, (at least refer to fig. 3-4, 19C and paragraphs 62-63, 611. Describes the view of the three-dimensional environment is typically visible to the user via one or more display generation components (e.g., a display or a pair of display modules that provide stereoscopic content to different eyes of the same user) through a virtual viewport that has a viewport boundary that defines an extent of the three-dimensional environment that is visible to the user via the one or more display generation components. Para. 63, describes: the background content is included in a background over which the virtual content is displayed (e.g., background content in the representation of the physical environment). In some embodiments, the background content includes user interfaces (e.g., user interfaces generated by the computer system corresponding to applications). Para. 611, describes: displaying the attention indicator at the second location of the user interface includes displaying the attention indicator at a location of content of a second application, different from the first application (2006b), such as the indicator for attention 1918 in FIGS. 19C and 19C1 for user interface 1910);
While displaying the second application content that corresponds to the second application in the second view of the three-dimensional environment, detecting, via the one or more input devices, that the attention of the user has moved relative to the second application content, (at least refer to fig. 1I, 1O, 19F and paragraphs 72, 611. Describes gaze and/or attention information is, optionally, combined with hand tracking information to determine interactions between the user and one or more user interfaces based on direct and/or indirect inputs such as air gestures or inputs that use one or more hardware input devices such as one or more buttons. Para. 611, describes: Displaying an attention indicator on a variety of applications when appropriate indicates that the computer system will respond to gaze-based input for that particular application and ensures consistent presentation of the attention indicator, thereby providing improved visual feedback to the user (e.g., indicating a particular application with which the user is interacting), which enhances the operability of the computer system); and
In response to detecting that the attention of the user has moved relative to the second application content, (at least refer to fig. 7 and paragraph 258. Describes in response to detecting that the attention of the user is directed toward the second selectable user interface object, the computer system displays (826d) the first selectable user interface object with a second visual appearance, different from the first visual appearance, such as a changed visual appearance of object 704d):
In accordance with a determination that the second application content does not extend to a second boundary that confines the second application content in two or more dimensions and that the attention of the user has moved closer to a first portion of the second boundary than to a second portion of the second boundary that is adjacent to the first portion of the second boundary, visually emphasizing the first portion of the second boundary relative to the second portion of the second boundary, (at least refer to fig. 11B and paragraphs 362, 391. Describes computer system 101 has moved attention indicator 1134a downward and leftward correspondingly, and has switched from masking the right side of attention indicator 1134a to masking the bottom side of attention indicator 1134a, because attention indicator 1134a is colliding with the bottom side of object 1130a rather than the right side of object 1130a. Similarly, attention 1132c (e.g., gaze-based) has moved downward and leftward in object 1130c, as indicated by the arrow 1138c. Para. 391, describes: the visual indication does not extend over the second position when or while the visual indication in the area of the selectable object that emphasizes the first position in the area of the selectable object is displayed and does not extend over the first position when or while the visual indication is changed in appearance so as to emphasize the second position); and
In accordance with a determination that the second application content extends to the second boundary, forgoing visually emphasizing the first portion of the second boundary relative to the second portion of the second boundary, (at least refer to fig. 19B-C and paragraph 592. Describes computer system 101 detects attention 1916 shift from selectable object 1908b to selectable object 1908c. In response, computer system 101 ceases display of the attention indicator in object 1908b and also ceases display of the border of object 1908b, and displays the border or boundary for object 1908c (e.g., similar to as described for object 1908b) and displays an attention indicator at the location in object 1908c at the location of attention 1916).
Regarding claim 14, Krivoruchko discloses:
Including displaying one or more application management controls corresponding to the first application at respective positions relative to the first application content based on the first boundary, (at least refer to fig. 25A and paragraph 816. Describes the user interface is an application window associated with an application running on the computer system that is displaying the first object. For example, the first object is a user interface element or object that is associated with or corresponds to content displayed in the application window, such as a selectable option or button, a text-input field, a scroll bar, an image, a hyperlink, and/or a video clip).
Regarding claim 15, Krivoruchko discloses:
Wherein the one or more application management controls include one or more auxiliary user interface elements displayed outside of the first boundary, (at least refer to fig. 7A and paragraph 210. Describes virtual object 704 includes a plurality of selectable virtual objects (e.g., affordances, buttons, toggles, icons, or photos). Virtual object 706 is optionally a menu user interface that includes a plurality of selectable virtual objects. Virtual object 708 is optionally a music player user interface that includes one or more selectable virtual objects to initiate playback of a first media item (“Soul Music Mix”)).
Regarding claim 16, Krivoruchko discloses:
Wherein the one or more application management controls include a resize affordance, (at least refer to fig. 11C and paragraph 380. Describes computer system 101 increases the visual prominence of attention indicator 1134b, such as by increasing its size and/or brightness, to indicate progress towards selecting object 1130b via direct interaction (e.g., corresponding to hand 1136a moving to a position that corresponds to touching or pressing object 1130b for selection)).
Regarding claim 17, Krivoruchko discloses:
Wherein: detecting the first change in position of the attention of the user relative to the first application content includes detecting a change in position of the attention of the user to a respective location in the three-dimensional environment that is outside of a first region corresponding to the resize affordance, (at least refer to fig. 11A and paragraph 353. Describes computer system 101 detects attention 1132a directed to object 1130a, and displays an attention indicator 1134a at the location of attention 1132a within object 1130a. As shown in FIG. 11A, the attention indicator 1134a optionally has a circular shape, and is optionally a visually emphasized portion of object 1130a that visually emphasizes the location of attention 1132a with respect to portions of object 1130a that do not have attention. Further, the attention indicator 1134a is optionally masked and/or cutoff by the boundaries of object 1130a to which attention 1132a is directed, such that the attention indicator 1134a is not displayed outside of such boundaries); and
the resize affordance is displayed in response to detecting the first change in position of the attention of the user to the respective location in the three-dimensional environment, (at least refer to fig. 11C and paragraph 380. Describes computer system 101 increases the visual prominence of attention indicator 1134b, such as by increasing its size and/or brightness, to indicate progress towards selecting object 1130b via direct interaction (e.g., corresponding to hand 1136a moving to a position that corresponds to touching or pressing object 1130b for selection)).
Regarding claim 18, Krivoruchko discloses:
Wherein: in accordance with a determination that the respective location in the three-dimensional environment is a first location in the three-dimensional environment, the resize affordance is displayed with a first spatial relationship relative to the first boundary, (at least refer to fig. 11C and paragraph 380. Describes computer system 101 increases the visual prominence of attention indicator 1134b, such as by increasing its size and/or brightness, to indicate progress towards selecting object 1130b via direct interaction (e.g., corresponding to hand 1136a moving to a position that corresponds to touching or pressing object 1130b for selection)); and
In accordance with a determination that the respective location in the three-dimensional environment is a second location in the three-dimensional environment that is different from the first location, the resize affordance is displayed with a second spatial relationship relative to the first boundary that is different from the first spatial relationship, (at least refer to fig. 11B and paragraph 362. Describes computer system 101 has moved attention indicator 1134a downward and leftward correspondingly, and has switched from masking the right side of attention indicator 1134a to masking the bottom side of attention indicator 1134a, because attention indicator 1134a is colliding with the bottom side of object 1130a rather than the right side of object 1130a. Similarly, attention 1132c (e.g., gaze-based) has moved downward and leftward in object 1130c, as indicated by the arrow 1138c).
Regarding claim 19, Krivoruchko discloses:
Wherein the resize affordance displayed in response to detecting the first change in position of the attention of the user to the respective location in the three-dimensional environment is displayed with a first appearance, (at least refer to fig. 11C and paragraph 380. Describes computer system 101 increases the visual prominence of attention indicator 1134b, such as by increasing its size and/or brightness, to indicate progress towards selecting object 1130b via direct interaction (e.g., corresponding to hand 1136a moving to a position that corresponds to touching or pressing object 1130b for selection)) and the method includes:
While displaying the resize affordance with the first appearance, detecting, via the one or more input devices, a change in position of the attention of the user relative to the first application content to the first region corresponding to the resize affordance, (at least refer to fig. 11A-C and paragraphs 353, 381. Describes computer system 101 detects attention 1132a directed to object 1130a, and displays an attention indicator 1134a at the location of attention 1132a within object 1130a. The attention indicator 1134a optionally has a circular shape, and is optionally a visually emphasized portion of object 1130a that visually emphasizes the location of attention 1132a with respect to portions of object 1130a that do not have attention. Further, the attention indicator 1134a is optionally masked and/or cutoff by the boundaries of object 1130a to which attention 1132a is directed, such that the attention indicator 1134a is not displayed outside of such boundaries. Para. 381, describes: direct interaction between a hand of the user and a respective object causes selection of the respective object. From FIG. 11B to FIG. 11C, attention 1132g (hand-based) is directed to the key 1130f and the hand 1136f is in direct interaction with the key 1130f (e.g., air tapping the key 1130f)); and
In response to detecting the change in position of the attention of the user to the first region corresponding to the resize affordance, displaying, via the one or more display generation components, the resize affordance with a second appearance that is different from the first appearance, (at least refer to fig. 11B-C and paragraphs 362, 381. Describes computer system 101 has moved attention indicator 1134a downward and leftward correspondingly, and has switched from masking the right side of attention indicator 1134a to masking the bottom side of attention indicator 1134a, because attention indicator 1134a is colliding with the bottom side of object 1130a rather than the right side of object 1130a. Similarly, attention 1132c (e.g., gaze-based) has moved downward and leftward in object 1130c, as indicated by the arrow 1138c. Para. 381, describes: direct interaction between a hand of the user and a respective object causes selection of the respective object. From FIG. 11B to FIG. 11C, attention 1132g (hand-based) is directed to the key 1130f and the hand 1136f is in direct interaction with the key 1130f (e.g., air tapping the key 1130f)).
Regarding claim 20, Krivoruchko discloses:
While displaying the resize affordance, detecting, via the one or more input devices, a change in position of the attention of the user outside of a second region corresponding to the resize affordance, (at least refer to fig. 11C and paragraph 380. Describes computer system 101 increases the visual prominence of attention indicator 1134b, such as by increasing its size and/or brightness, to indicate progress towards selecting object 1130b via direct interaction (e.g., corresponding to hand 1136a moving to a position that corresponds to touching or pressing object 1130b for selection)); and
In response to detecting the change in position of the attention of the user outside of the second region corresponding to the resize affordance, ceasing to display the resize affordance, (at least refer to fig. 11A-B and paragraph 365. Describes the hand 1136i is no longer directed towards (e.g., no longer in direct interaction with) the non-selectable object 1130i. Thus, the computer system 101 ceases display of the attention indicator 1134i (hand-based) corresponding to the hand 1136i in FIG. 11B. Instead, the computer system 101 is detecting attention 1132j (e.g., gaze-based) directed towards the non-selectable object 1130i).
Regarding claim 21, Krivoruchko discloses:
Wherein the one or more application management controls include a move affordance, (at least refer to fig. 25A and paragraph 816. Describes the first object is a user interface element or object that is associated with or corresponds to content displayed in the application window, such as a selectable option or button, a text-input field, a scroll bar, an image, a hyperlink, and/or a video clip).
Regarding claim 22, Krivoruchko discloses:
Wherein the one or more application management controls have a three-dimensional appearance that includes a non-zero length, non-zero width, and non-zero depth, (at least refer to fig. 25A and paragraphs 770, 772. Describes three-dimensional environment 2502 also includes a plurality of objects, such as user interface object 2507 (“Window 1”) and virtual keyboard 2515. Para. 772, describes: the virtual keyboard 2515 includes a plurality of virtual keys that are selectable to input content (e.g., letters, numbers, punctuation marks, and/or images) corresponding to the virtual keys in text-entry region 2520 (e.g., a word processing or note taking application with which the virtual keyboard 2515 is associated)).
Regarding claim 23, Krivoruchko discloses:
Wherein displaying a respective control of the one or more application management controls includes: in accordance with a determination that the first boundary has a first volumetric shape, displaying the respective control of the one or more application management controls with a first shape, (at least refer to fig. 25A and paragraph 770. Describes the user interface object 2507 includes input field 2506-1 that is selectable to initiate a process for inputting text into the input field 2506-1 for navigating to another website); and
In accordance with a determination that the first boundary has a second volumetric shape that is different from the first volumetric shape, displaying the respective control of the one or more application management controls with a second shape that is different from the first shape, (at least refer to fig. 25A and paragraph 772. Describes the virtual keyboard 2515 includes a plurality of virtual keys that are selectable to input content (e.g., letters, numbers, punctuation marks, and/or images) corresponding to the virtual keys in text-entry region 2520 (e.g., a word processing or note taking application with which the virtual keyboard 2515 is associated)).
Regarding claim 24, Krivoruchko discloses:
Wherein displaying the first view of the three-dimensional environment includes displaying, via the one or more display generation components, third application content concurrently with the first application content with an overlap between a portion of a third boundary that confines the third application content in two or more dimensions and a portion of the first boundary, (at least refer to fig. 19A and paragraph 587. Describes virtual content 1906, 1908, 1910 and 1912 are optionally one or more of a user interface of an application (e.g., messaging user interface, or content browsing user interface), a two-dimensional object (e.g., a shape, or a representation of a photograph), a three-dimensional object (e.g., virtual clock, virtual ball, or virtual car), or any other element displayed by computer system 101 that is not included in the physical environment of computer system 101) and the method includes:
While displaying the third application content concurrently with the first application content, detecting, via the one or more input devices, that the attention of the user is directed toward the overlap between the portion of the first boundary of the first application content and the portion of the third boundary of the third application content, (at least refer to fig. 1I, 1O, 19A and paragraphs 72, 587. Describes gaze and/or attention information is, optionally, combined with hand tracking information to determine interactions between the user and one or more user interfaces based on direct and/or indirect inputs such as air gestures or inputs that use one or more hardware input devices such as one or more buttons. Para. 587, describes: virtual content 1906 is a content playback user interface that includes one or more controls for controlling playback of content (e.g., music, movies, videos and/or podcasts) at computer system 101, including playback control 1906a that is selectable to pause the playback of the content); and
In response to detecting that the attention of the user is directed toward the overlap between the portion of the first boundary of the first application content and the portion of the third boundary of the third application content, (at least refer to fig. 19A and paragraph 589. Describes that, in response to detecting attention of the user directed to a selectable object in three-dimensional environment 1902, computer system 101 displays a visual indication of such attention in three-dimensional environment 1902):
In accordance with a determination that the first application content has higher priority than the third application content, visually emphasizing the portion of the first boundary of the first application content relative to the portion of the third boundary of the third application content, (at least refer to fig. 19E and paragraphs 607, 611. Describes that the user interface is a user interface described with reference to method 800. In some embodiments, the attention indicator is a user interface object or element or visual indication that indicates the first location (or portion) of the user interface to which the attention of the user is directed. For example, the attention indicator is optionally displayed having a first visual appearance (e.g., first degree of coloring, first shape, first size, or first degree of transparency) at the first location in the user interface. Para. 611 describes that displaying the attention indicator at the first location of the user interface includes displaying the attention indicator at a location of content of a first application (2006a), such as the indicator for attention 1916 for user interface 1912); and
In accordance with a determination that the third application content has higher priority than the first application content, visually emphasizing the portion of the third boundary of the third application content relative to the portion of the first boundary of the first application content, (at least refer to fig. 19E and paragraphs 607, 587. Describes that, for example, when the computer system detects attention directed to an attention responsive user interface object such as a selectable user interface object, the computer system displays the attention indicator having a second visual appearance different from the first visual appearance. Para. 587 describes that virtual content 1908 is an object that includes selectable objects A, B, C and D that are selectable to perform respective operations. Virtual content 1910 is a news interface that includes selectable objects, including selectable object 1910a, that are selectable to cause display of corresponding news articles).
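For illustration only, a minimal Python sketch, using hypothetical names and a hypothetical integer priority that are not drawn from Krivoruchko or the claims, of the comparison recited in claim 24: whichever of the two overlapping application contents has the higher priority has its boundary portion visually emphasized relative to the other.

from dataclasses import dataclass

@dataclass
class AppContent:
    # Hypothetical application content with a display priority.
    name: str
    priority: int  # higher value means higher priority

def boundary_to_emphasize(first: AppContent, third: AppContent) -> str:
    # On detecting attention directed toward the overlap of the two
    # boundaries, emphasize the boundary portion of the higher-priority
    # content relative to the boundary portion of the other content.
    return first.name if first.priority > third.priority else third.name

first_content = AppContent(name="Window 1", priority=2)
third_content = AppContent(name="Content player", priority=1)
print(boundary_to_emphasize(first_content, third_content))  # Window 1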
Regarding claim 25, Krivoruchko discloses:
While displaying one or more portions of the first boundary: in accordance with a determination that the first application is associated with a first boundary appearance setting, displaying the one or more portions of the first boundary with a first appearance, (at least refer to fig. 11A and paragraph 353. Describes that computer system 101 detects attention 1132a directed to object 1130a, and displays an attention indicator 1134a at the location of attention 1132a within object 1130a. As shown in FIG. 11A, the attention indicator 1134a optionally has a circular shape, and is optionally a visually emphasized portion of object 1130a that visually emphasizes the location of attention 1132a with respect to portions of object 1130a that do not have attention. Further, the attention indicator 1134a is optionally masked and/or cut off by the boundaries of object 1130a to which attention 1132a is directed, such that the attention indicator 1134a is not displayed outside of such boundaries); and
In accordance with a determination that the first application is associated with a second boundary appearance setting that is different from the first boundary appearance setting, displaying the one or more portions of the first boundary with a second appearance that is different from the first appearance, (at least refer to fig. 11B and paragraph 362. Describes that computer system 101 has moved attention indicator 1134a downward and leftward correspondingly, and has switched from masking the right side of attention indicator 1134a to masking the bottom side of attention indicator 1134a, because attention indicator 1134a is colliding with the bottom side of object 1130a rather than the right side of object 1130a. Similarly, attention 1132c (e.g., gaze-based) has moved downward and leftward in object 1130c, as indicated by the arrow 1138c).
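For illustration only, a minimal Python sketch, with hypothetical geometry types that are not drawn from Krivoruchko or the claims, of the masking behavior described in the cited passages: an attention indicator is clipped to the boundary of the object to which attention is directed so that it is not drawn outside that boundary.

from dataclasses import dataclass

@dataclass
class Rect:
    # Hypothetical axis-aligned boundary of a displayed object.
    x: float
    y: float
    width: float
    height: float

@dataclass
class Circle:
    # Hypothetical circular attention indicator centered at the gaze location.
    cx: float
    cy: float
    radius: float

def clip_indicator_to_boundary(boundary: Rect, indicator: Circle) -> Rect:
    # Intersect the indicator's bounding box with the object's boundary so
    # the indicator is masked (cut off) at whichever side it collides with.
    left = max(boundary.x, indicator.cx - indicator.radius)
    top = max(boundary.y, indicator.cy - indicator.radius)
    right = min(boundary.x + boundary.width, indicator.cx + indicator.radius)
    bottom = min(boundary.y + boundary.height, indicator.cy + indicator.radius)
    return Rect(left, top, max(0.0, right - left), max(0.0, bottom - top))

# Example: a gaze location near the bottom edge clips the indicator there.
print(clip_indicator_to_boundary(Rect(0, 0, 100, 60), Circle(cx=50, cy=58, radius=10)))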
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. 20190339058 relates to a method performed at an electronic device with a touch-sensitive display and one or more cameras. The method includes displaying, on the touch-sensitive display, a first user interface of an application. The first user interface includes a representation of a field of view of at least one of the one or more cameras. The representation of the field of view is updated over time based on changes to current visual data detected by at least one of the one or more cameras. The field of view includes a physical object in a three-dimensional space. A representation of a measurement of the physical object is superimposed on an image of the physical object in the representation of the field of view. The method includes, while displaying the first user interface, detecting a first touch input on the touch-sensitive display on the representation of the measurement. The method further includes, in response to detecting the first touch input on the touch-sensitive display on the representation of the measurement, initiating a process for sharing information about the measurement.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to IFEDAYO B ILUYOMADE, whose telephone number is (571) 270-7118. The examiner can normally be reached Monday-Friday.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matthew Eason, can be reached at 571-270-7230. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/IFEDAYO B ILUYOMADE/
Primary Examiner, Art Unit 2624
01/22/2026