DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
Applicant’s amendment filed on August 4, 2025 has been entered. Claims 1-23 and 25-26 are currently pending. Claim 26 is new. Applicant’s arguments are addressed herein below.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-23 and 25 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-23 and 25 of U.S. Patent No. 12,118,200. Although the claims at issue are not identical, they are not patentably distinct from each other, as shown in the claim comparison below and explained thereafter:
Instant Application
1. A method comprising: at an electronic device having a processor:
receiving data corresponding to user activity in a 3D coordinate system corresponding to a 3D environment in which graphical elements are positioned;
generating sample locations in the 3D coordinate system based on the data corresponding to the user activity;
identifying a subset of the graphical elements for evaluation, the subset identified based on the sample locations;
associating a graphical element of the identified subset with the user activity based on the evaluation of subset; and
interpreting the user activity based on associating the graphical element with the user activity.
2. The method of claim 1, wherein associating the graphical element with the user activity comprises: for each of the identified graphical elements, determining a point on the respective graphical element based on the user activity; and prioritizing the identified graphical elements based on the point computed for each graphical element.
3. The method of claim 2, wherein determining the point on each of the graphical elements comprises: determining a closest opaque point to a sample location associated with the user activity; and determining a distance of the closest opaque point of each of the graphical elements to the sample location associated with the user activity.
4. The method of claim 2, wherein associating the graphical element with the user activity is based on determining that a closest opaque point within the graphical element is within an angular distance threshold of a sample location associated with the user activity.
5. The method of claim 2, wherein the graphical element to associate with the user activity is selected based on: determining that closest opaque points within multiple graphical elements are within an angular distance threshold of a sample location associated with the user activity; and selecting the graphical element from the multiple graphical elements based on a policy that ranks graphical elements based on element type, layers, geometry, or hysteresis logic.
6. The method of claim 1, wherein the identified graphical elements comprise: 3D virtual objects; and 2D elements defined by one or more applications.
7. The method of claim 1, wherein identifying the subset of graphical elements comprises: receiving data corresponding to positioning of graphical elements within the 3D coordinate system, the data corresponding to the positioning of the graphical elements based at least in part on data provided by an application; and identifying the subset of graphical elements by identifying intersections of the plurality of gaze sample locations with the graphical elements positioned within the 3D coordinate system.
8. The method of claim 7, wherein the graphical elements occupy a two-dimensional (2D) region and the method further comprises, based on associating the graphical elements with the user activity, identifying a point within the 2D region to the application such that the application can recognize an action to associate with the graphical element using a 2D app action recognition process.
9. The method of claim 7, wherein the data provided by the application comprises a layered tree structure defining the positional and containment relationships of the graphical elements relative to one another on a two-dimensional (2D) coordinate system.
10. The method of claim 7, wherein the data provided by the application identifies external effects for some of the graphical elements, wherein an external effect specifies that an operating system (OS) process is to provide responses to a specified user activity relative to a specified graphical element outside of an application process.
11. The method of claim 1, wherein the data corresponding to the user activity is a gaze direction within the 3D coordinate system, the gaze direction determined based on sensor data.
12. The method of claim 1, wherein the data corresponding to the user activity is a synthesized direction within the 3D coordinate system, the direction determined based on: determining a hand position of a hand in the 3D coordinate system based on sensor data; determining an intersection position of the hand with at least one graphical element based on the hand position; and determining the direction based on the intersection and a viewpoint position.
13. The method of claim 1, wherein the sample locations are generated by generating a pattern of rays around a gaze direction or a synthesized direction corresponding to user activity.
14. The method of claim 13, wherein the pattern of rays has between 2 and 100 rays.
15. The method of claim 13, wherein the pattern of rays has between 5 and 35 rays.
16. The method of claim 13, wherein the pattern of rays comprises an outer set of rays forming a shape.
17. The method of claim 16, wherein the shape is rotated relative to a horizon or a horizontal.
18. The method of claim 1, wherein the electronic device provides views of a 3D environment including the graphical elements, wherein at least some of the graphical elements are 2D user interface elements provided by one or more applications, wherein the input support process recognizes the user activity in the 3D coordinate system and provides data to the one or more applications to recognize 2D user interface input.
19. A system comprising: memory; and one or more processors coupled to the memory, wherein the memory comprises program instructions that, when executed by the one or more processors, cause the system to perform operations comprising:
receiving data corresponding to user activity in a 3D coordinate system corresponding to a 3D environment in which graphical elements are positioned;
generating sample locations in the 3D coordinate system based on the data corresponding to the user activity;
identifying a subset of the graphical elements for evaluation, the subset identified based on the sample locations;
associating a graphical element of the identified subset with the user activity based on the evaluation of subset; and
interpreting the user activity based on associating the graphical element with the user activity.
20. The system of claim 19, wherein associating the graphical element with the user activity comprises: for each of the identified graphical elements, determining a point on the respective graphical element based on the user activity; and prioritizing the identified graphical elements based on the point computed for each graphical element.
21. The system of claim 20, wherein determining the point on each of the graphical elements comprises: determining a closest opaque point to a sample location associated with the user activity; and determining a distance of the closest opaque point of each of the graphical elements to the sample location associated with the user activity.
22. The system of claim 20, wherein associating the graphical element with the user activity is based on determining that a closest opaque point within the graphical element is within an angular distance threshold of a sample location associated with the user activity.
23. The system of claim 20, wherein the graphical element to associate with the user activity is selected based on: determining that closest opaque points within multiple graphical elements are within an angular distance threshold of a sample location associated with the user activity; and selecting the graphical element from the multiple graphical elements based on a policy that ranks graphical elements based on element type, layers, geometry, or hysteresis logic.
25. A non-transitory computer-readable storage medium, storing program instructions computer-executable on a computer to perform operations comprising:
receiving data corresponding to user activity in a 3D coordinate system corresponding to a 3D environment in which graphical elements are positioned;
generating sample locations in the 3D coordinate system based on the data corresponding to the user activity;
identifying a subset of the graphical elements for evaluation, the subset identified based on the sample locations;
associating a graphical element of the identified subset with the user activity based on the evaluation of subset; and
interpreting the user activity based on associating the graphical element with the user activity.
U.S. Patent No. 12,118,200
1. A method comprising: at an electronic device having a processor:
receiving, at an input support process, data corresponding to user activity in a 3D coordinate system corresponding to a 3D environment in which a plurality of user interface (UI) targets of a user interface are positioned;
generating, at the input support process, a plurality of sample locations in the 3D coordinate system based on the data corresponding to the user activity; identifying, at the input support process, a subset of the plurality of UI targets within the 3D coordinate system for evaluation using a UI target selection criterion, the subset identified based on the plurality of sample locations;
selecting, at the input support process, a UI target of the identified subset to associate with the user activity based on the evaluation of subset of the plurality of UI targets using the UI target selection criterion; and
interpreting, at the input support process, the user activity as input associated with the selected UI target based on the selected UI target being associated with the user activity.
2. The method of claim 1, wherein selecting the UI target to associate with the user activity comprises: for each UI target of the identified subset, determining a point on the respective UI target based on the user activity; and prioritizing the UI targets of the subset based on the point computed for each respective UI target.
3. The method of claim 2, wherein determining the point on each of the respective UI targets comprises: determining a closest opaque point to a sample location associated with the user activity; and determining a distance of the closest opaque point of each of the respective UI targets to the sample location associated with the user activity.
4. The method of claim 2, wherein selecting the UI target to associate with the user activity is based on determining that a closest opaque point within the UI target is within an angular distance threshold of a sample location associated with the user activity.
5. The method of claim 2, wherein the UI target to associate with the user activity is selected based on: determining that closest opaque points within multiple UI targets are within an angular distance threshold of a sample location associated with the user activity; and selecting the UI target from the multiple UI targets based on a policy that ranks UI targets based on element type, UI layers, UI geometry, or hysteresis logic.
6. The method of claim 1, wherein the UI targets of the subset comprise: 3D virtual objects; and 2D elements defined by one or more applications.
7. The method of claim 1, wherein identifying the subset comprises: receiving, at the input support process, data corresponding to positioning of UI elements of an application within the 3D coordinate system, the data corresponding to the positioning of the UI element based at least in part on data provided by the application; and identifying the subset by identifying intersections of the plurality of gaze sample locations with the UI elements of the application positioned within the 3D coordinate system.
8. The method of claim 7, wherein the UI elements of the application occupy a two-dimensional (2D) region and the method further comprises, based on selecting the UI target to associate with the user activity, identifying a point within the 2D region to the application such that the application can recognize an action to associate with the UI element using a 2D app action recognition process.
9. The method of claim 7, wherein the data provided by the application comprises a layered tree structure defining the positional and containment relationships of the UI elements relative to one another on a two-dimensional (2D) coordinate system.
10. The method of claim 7, wherein the data provided by the application identifies external effects for some of the UI elements, wherein an external effect specifies that an operating system (OS) process is to provide responses to a specified user activity relative to a specified UI element outside of an application process.
11. The method of claim 1, wherein the data corresponding to the user activity is a gaze direction within the 3D coordinate system, the gaze direction determined based on sensor data.
12. The method of claim 1, wherein the data corresponding to the user activity is a synthesized direction within the 3D coordinate system, the direction determined based on: determining a hand position of a hand in the 3D coordinate system based on sensor data; determining an intersection position of the hand with at least one UI element based on the hand position; and determining the direction based on the intersection and a viewpoint position.
13. The method of claim 1, wherein the plurality of sample locations are generated by generating a pattern of rays around a gaze direction or a synthesized direction corresponding to user activity.
14. The method of claim 13, wherein the pattern of rays has between 2 and 100 rays.
15. The method of claim 13, wherein the pattern of rays has between 5 and 35 rays.
16. The method of claim 13, wherein the pattern of rays comprises an outer set of rays forming a shape.
17. The method of claim 16, wherein the shape is rotated relative to a horizon or a horizontal.
18. The method of claim 1, wherein the electronic device provides views of a 3D environment including the UI targets, wherein at least some of the UI targets are 2D user interface elements provided by one or more applications, wherein the input support process recognizes the user activity in the 3D coordinate system and provides data to the one or more applications to recognize 2D user interface input.
19. A system comprising: memory; and one or more processors coupled to the memory, wherein the memory comprises program instructions that, when executed by the one or more processors, cause the system to perform operations comprising:
receiving, at an input support process, data corresponding to user activity in a 3D coordinate system corresponding to a 3D environment in which a plurality of user interface (UI) targets of a user interface are positioned;
generating, at the input support process, a plurality of sample locations in the 3D coordinate system based on the data corresponding to the user activity; identifying, at the input support process, a subset of the plurality of (UI) targets within the 3D coordinate system for evaluation using a UI target selection criterion, the subset identified based on the plurality of sample locations;
selecting, at the input support process, a UI target of the identified subset to associate with the user activity based on the evaluation of subset of the plurality of UI targets using the UI target selection criterion; and
interpreting, at the input support process, the user activity as input associated with the selected UI target based on the selected UI target being associated with the user activity.
20. The system of claim 19, wherein selecting the UI target to associate with the user activity comprises: for each UI target of the identified subset, determining a point on the respective UI target based on the user activity; and prioritizing the UI targets of the subset based on the point computed for each respective UI target.
21. The system of claim 20, wherein determining the point on each of the respective UI targets comprises: determining a closest opaque point to a sample location associated with the user activity; and determining a distance of the closest opaque point of each of the respective to the sample location associated with the user activity.
22. The system of claim 20, wherein selecting the UI target to associate with the user activity is based on determining that a closest opaque point within the UI target is within an angular distance threshold of a sample location associated with the user activity.
23. The system of claim 20, wherein the UI target to associate with the user activity is selected based on: determining that closest opaque points within multiple UI targets are within an angular distance threshold of a sample location associated with the user activity; and selecting the UI target from the multiple UI targets based on a policy that ranks UI targets based on element type, UI layers, UI geometry, or hysteresis logic.
25. A non-transitory computer-readable storage medium, storing program instructions computer-executable on a computer to perform operations comprising:
receiving, at an input support process, data corresponding to user activity in a 3D coordinate system corresponding to a 3D environment in which a plurality of user interface (UI) targets of a user interface are positioned;
generating, at the input support process, a plurality of sample locations in the 3D coordinate system based on the data corresponding to the user activity; identifying, at the input support process, a subset of the plurality of UI targets within the 3D coordinate system for evaluation using a UI target selection criterion, the subset identified based on the plurality of sample locations;
selecting, at the input support process, a UI target of the identified subset to associate with the user activity based on the evaluation of subset of the plurality of UI targets using the UI target selection criterion; and
interpreting, at the input support process, the user activity as input associated with the selected UI target based on the selected UI target being associated with the user activity.
The instant Application claim is broader in every aspect than the patent claim and is therefore an obvious variant thereof. Although the conflicting claims are not identical, they are not patentably distinct from each other because the instant Application claim is generic to all that is recited in the above patent claim. The more specific claim anticipates the broader claim (see In re Goodman, 29 USPQ2d 2010); see also Eli Lilly and Co. v. Barr Laboratories Inc., 58 USPQ2d 189, and Miller v. Eagle Mfg. Co., 151 U.S. 186 (1894). Therefore, the instant claim is anticipated by the above patent claim.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 6, 11, 18-19 and 25 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Alcaide et al. (US 20200337653).
As to claim 1, Alcaide teaches a method comprising: at an electronic device (Fig. 1 (104): neural recording headset) having a processor (Fig. 1 (120), [0038], [0068]):
receiving data corresponding to user activity in a 3D coordinate system corresponding to a 3D environment in which graphical elements are positioned ([0040]: eye-tracker 102 can be used to determine where a user is looking in their visual field in a three-dimensional space);
generating sample locations in the 3D coordinate system based on the data corresponding to the user activity ([0050]: UI/UX can be designed and manipulated, [0055]: generation and updating the UI/UX, [0062]: a set of tags or symbols 279 shown in an example UI/UX 271. All the tags 279 in the UI/UX 271 can be visible, one or more of the tags 279 can be made to transiently change in visual appearance to indicate their usability for selection. The change in appearance can be a change in any suitable property of the tags, e.g., color, shape, size, location, depth in 3D environment, mobility, etc., [0070]: user can focus their gaze on a tag-group containing the target tag (e.g., the letter Q) as indicated in FIG. 3A. As shown in FIG. 3B the tag-group indicated by the highlighted circle in FIG. 3A can be magnified following the analysis of oculomotor signals indicating that the user focused on that specific tag-group, changing the UI/UX 371 to that shown in FIG. 3B);
identifying a subset of the graphical elements for evaluation, the subset identified based on the sample locations ([0035]: identifying a user's point of focus can be implemented through manipulation of the UI/UX, [0040]: eye-tracker 102 can be used to determine where a user is looking in their visual field in a three dimensional space. Eye-tracker 102 can be used to determine which subspaces in their visual field each of their eyes is “pointing to” (i.e., where in the visual space user's attention is focused) and to reveal significant information about user's intent. By simultaneously tracking the movement trajectories of both eyes with respect to each other the eye-tracker 102 can also register the depth of focus of the user, thus enabling pointing control in a three dimensional space, [0050], [0070], [0075]: can determine the user's intent (e.g., identify the target tag of interest to the user), can implement the selection of the determined target tag which can result in one or more actions associated with the target tag selection);
associating a graphical element of the identified subset with the user activity based on the evaluation of subset ([0050]: tags can be visual icons that change their appearance in specific manner to catch the attention of a user and to indicate their usability to control the UI/UX. Tags or control items can be associated with actions. Based on this determination, the UI/UX can implement the one or more specific actions associated with the tag flash); and
interpreting the user activity based on associating the graphical element with the user activity ([0044]: interpreting signals associated with user activity to determine an action intended by the user).
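For clarity of the record, the following is a minimal, hypothetical sketch (in Python; it is not drawn from Alcaide or from Applicant's disclosure, and the element geometry, distances, and names are illustrative assumptions) of the kind of pipeline recited in claim 1: sample locations are generated around a gaze direction, a subset of nearby graphical elements is identified from those samples, and the element closest to the samples is associated with the user activity and used to interpret it.

```python
# Hypothetical sketch of the claim 1 pipeline (illustrative only).
import math
from dataclasses import dataclass

@dataclass
class Element:
    name: str
    center: tuple   # (x, y, z) position in the 3D coordinate system
    radius: float   # simplified bounding-sphere radius

def generate_sample_locations(gaze_origin, gaze_dir, count=8, spread=0.02):
    """Offset points around the gaze ray; stands in for the claimed sample locations."""
    samples = []
    for i in range(count):
        angle = 2 * math.pi * i / count
        offset = (spread * math.cos(angle), spread * math.sin(angle), 0.0)
        samples.append(tuple(g + d + o for g, d, o in zip(gaze_origin, gaze_dir, offset)))
    return samples

def identify_subset(elements, samples, max_distance=0.5):
    """Keep only elements near at least one sample location."""
    return [e for e in elements
            if any(math.dist(s, e.center) - e.radius <= max_distance for s in samples)]

def associate_element(subset, samples):
    """Pick the element whose surface lies closest to any sample location."""
    best, best_d = None, float("inf")
    for e in subset:
        d = min(math.dist(s, e.center) - e.radius for s in samples)
        if d < best_d:
            best, best_d = e, d
    return best

elements = [Element("button", (0.0, 0.0, 1.0), 0.1), Element("slider", (0.4, 0.0, 1.0), 0.1)]
samples = generate_sample_locations((0.0, 0.0, 0.0), (0.0, 0.0, 1.0))
target = associate_element(identify_subset(elements, samples), samples)
print(f"user activity interpreted as input directed to: {target.name}")
```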
As to claim 6, Alcaide teaches the method of claim 1, wherein the identified graphical elements comprise: 3D virtual objects ([0040]: three-dimensional space); and 2D elements defined by one or more applications ([0040]: two or three-dimensional space, [0050]: UI/UX can include a sequence of visually stimulating two dimensional images, presented via a display, [0062]).
As to claim 11, Alcaide teaches the method of claim 1, wherein the data corresponding to the user activity is a gaze direction within the 3D coordinate system, the gaze direction determined based on sensor data ([0039]: eye-tracker 102 (and peripheral sensors 108), [0046]).
As to claim 18, Alcaide teaches the method of claim 1, wherein the electronic device provides views of a 3D environment including the graphical elements, wherein at least some of the graphical elements are 2D user interface elements provided by one or more applications, wherein the input support process recognizes the user activity in the 3D coordinate system and provides data to the one or more applications to recognize 2D user interface input ([0065]: application of one or more scaling functions, [0113]).
As to claim 19, it recites a system comprising memory and one or more processors coupled to the memory, wherein the memory comprises program instructions that, when executed by the one or more processors, cause the system to perform the operations of claim 1. Please see claim 1 for the detailed analysis.
As to claim 25, it recites a non-transitory computer-readable storage medium storing program instructions to perform the functions of claim 1. Please see claim 1 for the detailed analysis.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 7-10 and 12-17 are rejected under 35 U.S.C. 103 as being unpatentable over Alcaide et al. (US 20200337653) in view of Gene et al. (US 20130154913).
As to claim 7, Alcaide teaches the method of claim 1, wherein identifying the subset of graphical elements comprises: receiving data corresponding to positioning of graphical elements within the 3D coordinate system, the data corresponding to the positioning of the graphical elements based at least in part on data provided by an application (Figs. 3A-3B, [0040], [0050]).
Alcaide does not expressly teach identifying the subset of graphical elements by identifying intersections of the plurality of gaze sample locations with the graphical elements positioned within the 3D coordinate system.
Gene teaches identifying the subset of graphical elements by identifying intersections of the plurality of gaze sample locations with the graphical elements positioned within the 3D coordinate system ([0095] – [0096]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Alcaide’s electronic device by incorporating Gene’s idea of identifying intersections of the plurality of gaze sample locations with the graphical elements in order to interpret the user’s activity more accurately.
As to claim 8, Alcaide (as modified by Gene) teaches the method of claim 7, wherein the graphical elements occupy a two-dimensional (2D) region and the method further comprises, based on associating the graphical elements with the user activity, identifying a point within the 2D region to the application such that the application can recognize an action to associate with the graphical element using a 2D app action recognition process (Gene: [0083], [0086] – [0089]).
As to claim 9, Alcaide (as modified by Gene) teaches the method of claim 7, wherein the data provided by the application comprises a layered tree structure defining the positional and containment relationships of the graphical elements relative to one another on a two-dimensional (2D) coordinate system (Gene: [0083]).
As to claim 10, Alcaide (as modified by Gene) teaches the method of claim 7, wherein the data provided by the application identifies external effects for some of the graphical elements, wherein an external effect specifies that an operating system (OS) process is to provide responses to a specified user activity relative to a specified graphical element outside of an application process (Gene: [0086] – [0089]).
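As an illustration of the kind of data structure recited in claim 9, the sketch below (hypothetical Python, assuming axis-aligned rectangular frames and integer layer indices; it is not taken from Gene or from Applicant's disclosure) shows a layered tree in which each node's 2D frame and children express positional and containment relationships.

```python
# Hypothetical sketch of a layered UI tree of the kind recited in claim 9.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class UINode:
    name: str
    frame: tuple          # (x, y, width, height) in the 2D coordinate system
    layer: int = 0        # drawing layer; higher layers sit in front
    children: List["UINode"] = field(default_factory=list)

    def contains(self, px: float, py: float) -> bool:
        x, y, w, h = self.frame
        return x <= px <= x + w and y <= py <= y + h

    def hit_test(self, px: float, py: float) -> Optional["UINode"]:
        """Return the front-most descendant containing the 2D point."""
        if not self.contains(px, py):
            return None
        hits = [h for h in (c.hit_test(px, py) for c in self.children) if h is not None]
        return max(hits, key=lambda n: n.layer, default=self)

window = UINode("window", (0, 0, 800, 600), layer=0, children=[
    UINode("panel", (100, 100, 300, 200), layer=1, children=[
        UINode("ok_button", (120, 120, 80, 40), layer=2),
    ]),
])
print(window.hit_test(150, 130).name)  # -> ok_button
```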
As to claim 12, Alcaide (as modified by Gene) teaches the method of claim 1, wherein the data corresponding to the user activity is a synthesized direction within the 3D coordinate system, the direction determined based on: determining a hand position of a hand in the 3D coordinate system based on sensor data (Gene: [0086]: enables a CAD designer to: 1. View their 3D CAD software objects on a real 3D display 2. Use natural gaze & hands gestures and actions to interact directly with their 3D CAD objects (resize, rotate, move, stretch, poke, etc.), [0097]); determining an intersection position of the hand with at least one graphical element based on the hand position (Gene: Figs. 7, 11, [0086], [0097], [0104], [0142]); and determining the direction based on the intersection and a viewpoint position (Gene: Figs. 5, 7, 11, [0089], [0104], [0142]).
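To illustrate the "synthesized direction" of claim 12, the following hypothetical sketch (not Gene's implementation; it assumes the intersected element lies on a plane of constant depth, and the coordinate values are illustrative) derives a direction from a viewpoint position through a point representing the hand's intersection with an element.

```python
# Hypothetical sketch of a synthesized direction per claim 12 (illustrative only).
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def synthesized_direction(viewpoint, hand_position, element_plane_z):
    """Project the hand onto an element plane (a stand-in for the hand/element
    intersection) and return the direction from the viewpoint through that point."""
    intersection = (hand_position[0], hand_position[1], element_plane_z)
    direction = tuple(i - v for i, v in zip(intersection, viewpoint))
    return intersection, normalize(direction)

point, direction = synthesized_direction(
    viewpoint=(0.0, 1.6, 0.0),        # approximate eye position
    hand_position=(0.2, 1.2, 0.8),    # tracked hand position from sensor data
    element_plane_z=1.0,              # assumed depth of the UI element
)
print(point, direction)
```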
As to claim 13, Alcaide (as modified by Gene) teaches the method of claim 1, wherein the sample locations are generated by generating a pattern of rays around a gaze direction or a synthesized direction corresponding to user activity (Gene: Fig. 5).
As to claim 14, Alcaide (as modified by Gene) teaches the method of claim 13, wherein the pattern of rays has between 2 and 100 rays (Gene: Fig. 5).
As to claim 15, Alcaide (as modified by Gene) teaches the method of claim 13, wherein the pattern of rays has between 5 and 35 rays (Gene: Fig. 5; it would have been an obvious matter of design choice to select the number of rays).
As to claim 16, Alcaide (as modified by Gene) teaches the method of claim 13, wherein the pattern of rays comprises an outer set of rays forming a shape (Gene: [0060]: regular pattern).
As to claim 17, Alcaide (as modified by Gene) teaches the method of claim 16, wherein the shape is rotated relative to a horizon or a horizontal (Gene: [0086]).
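For claims 13-17, the following hypothetical sketch (assuming the gaze or synthesized direction lies along the +z axis; the cone angle, rotation, and ray count are illustrative assumptions, not taken from Gene) generates a pattern of rays whose outer set forms a ring rotated relative to the horizontal, with a ray count falling within the claimed 2-100 and 5-35 ranges.

```python
# Hypothetical sketch of a ray pattern around a gaze direction (claims 13-17).
import math

def ray_pattern(num_outer=12, cone_angle_deg=2.0, rotation_deg=15.0):
    """Central ray along +z plus num_outer rays on a small cone around it; the
    ring of outer rays is rotated by rotation_deg relative to the horizontal."""
    cone = math.radians(cone_angle_deg)
    rotation = math.radians(rotation_deg)
    rays = [(0.0, 0.0, 1.0)]  # the gaze (or synthesized) direction itself
    for i in range(num_outer):
        theta = 2 * math.pi * i / num_outer + rotation
        rays.append((math.sin(cone) * math.cos(theta),
                     math.sin(cone) * math.sin(theta),
                     math.cos(cone)))
    return rays

pattern = ray_pattern()
print(len(pattern), "rays")  # 13 rays: within both claimed ranges
```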
Claim 26 is rejected under 35 U.S.C. 103 as being unpatentable over Alcaide et al. (US 20200337653) in view of Yang et al. (US 20190258320).
As to claim 26, Alcaide teaches the method of claim 1, including generating the sample locations ([0050]: UI/UX can be designed and manipulated, [0055]: generation and updating the UI/UX, [0062], [0070]).
Alcaide does not expressly teach based on the user activity, determining a direction in the 3D coordinate system or a position in the 3D environment; and generating the sample locations based on the determined direction or the determined position.
Yang teaches based on the user activity, determining a direction in the 3D coordinate system or a position in the 3D environment (Figs. 7-10, [0163] – [0164]: virtual modalities 724, 926, and 1008 can include a variety of information including directions); and generating the sample locations based on the determined direction or the determined position (Figs. 7-10, [0167]: virtual modalities 724, 926, and 1008 can be generated using a series of unique real world markers; the markers can be three-dimensional objects. The software can be programmed to recognize a real-world object or other item. The software then superimposes interactive virtual items 716, 718, 720, 916, 920, 924, 1010, 1014, or 1016 in place of the real world object, [0194]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Alcaide’s electronic device by incorporating Yang’s idea of generating the sample locations based on the determined direction in order to provide the user with more flexibility.
Allowable Subject Matter
Claims 2-5 and 20-23 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Response to Arguments
Applicant's arguments filed August 4, 2025 have been fully considered but they are not persuasive.
On the second page of the remarks, Applicant asserts that the cited references do not teach "generating sample locations in the 3D coordinate system based on the data corresponding to the user activity". The Examiner respectfully disagrees with this assertion.
Alcaide teaches generating sample locations in the 3D coordinate system based on the data corresponding to the user activity ([0050]: UI/UX can be designed and manipulated, [0055]: generation and updating the UI/UX, [0062]: a set of tags or symbols 279 shown in an example UI/UX 271. All the tags 279 in the UI/UX 271 can be visible, one or more of the tags 279 can be made to transiently change in visual appearance to indicate their usability for selection. The change in appearance can be a change in any suitable property of the tags, e.g., color, shape, size, location, depth in 3D environment, mobility, etc., [0070]: user can focus their gaze on a tag-group containing the target tag (e.g., the letter Q) as indicated in FIG. 3A. As shown in FIG. 3B the tag-group indicated by the highlighted circle in FIG. 3A can be magnified following the analysis of oculomotor signals indicating that the user focused on that specific tag-group, changing the UI/UX 371 to that shown in FIG. 3B).
Applicant also states that the cited references do not teach “identifying a subset of the graphical elements for evaluation, the subset identified based on the sample locations". The Examiner respectfully disagrees with this statement.
Alcaide teaches identifying a subset of the graphical elements for evaluation, the subset identified based on the sample locations ([0035]: identifying a user's point of focus can be implemented through manipulation of the UI/UX, [0040]: eye-tracker 102 can be used to determine where a user is looking in their visual field in a three dimensional space. Eye-tracker 102 can be used to determine which subspaces in their visual field each of their eyes is “pointing to” (i.e., where in the visual space user's attention is focused) and to reveal significant information about user's intent. By simultaneously tracking the movement trajectories of both eyes with respect to each other the eye-tracker 102 can also register the depth of focus of the user, thus enabling pointing control in a three dimensional space, [0050], [0070], [0075]: can determine the user's intent (e.g., identify the target tag of interest to the user), can implement the selection of the determined target tag which can result in one or more actions associated with the target tag selection).
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AFROZA Y CHOWDHURY whose telephone number is (571)270-1543. The examiner can normally be reached M-F 9am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Nitin Patel can be reached at (571)272-7677. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AFROZA CHOWDHURY/Primary Examiner, Art Unit 2628