Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments filed January 12, 2026 have been fully considered but they are not persuasive.
At pages 8 and 9 of the Remarks, Applicant alleges that the references do not disclose “identifying data for the application (i.e., which data to provide to the application) based on (1) ‘data corresponding to the user activity’ and (2) ‘data corresponding to the positioning of the UI element of the application.’” Examiner respectfully disagrees with this narrow interpretation of the cited references and of the claim limitations in question. As discussed below, Yan discloses identifying, at the input support process (FIG. 1A, input framework 112 at [0038]; FIG. 5A input framework 600 at [0048]), data for the application based on the data corresponding to the user activity ([0031]-[0034], the interaction and activity of the user’s hand with the 3D sculpture assets would require data of the UI elements (3D sculpture assets) and the user hand activity being sent by the 3D modeling application to the input framework based on the level of fidelity; further at FIGS. 5A-7, with input framework 600 determining which level of fidelity to apply when using the application at [0048]-[0065], and application capabilities determining which of the algorithms 612-1 through 612-p are used to process inputs at input framework 600 based on the fidelity level determined at [0050]-[0051] and FIG. 5A) and the data corresponding to the positioning of the UI elements of the application ([0031]-[0034] describing viewable 3D sculpture assets of the 3D modeling application in view of FIG. 3C and [0044] with gestures in correspondence with UI element 192, and FIG. 5B with hand gesture recognition 760, hand positioning at 768, and hand interaction application 776 at [0052]-[0060], including virtual object manipulation (i.e., grab and drag)). As such, Examiner respectfully submits that these elements are clearly taught by the references and are properly addressed below. Therefore, the claims stand rejected.
Furthermore, Examiner notes that the claim objection presented in the previous Office Action was not appropriately addressed by either argument or claim amendment; it is, therefore, sustained and included in this action. Appropriate correction is required.
Claim Objections
Claim 1 is objected to because of the following informalities: the claim recites the term “data corresponding to the positioning of user interface (UI) elements” at lines 5-6 and then refers to it a second time as “data corresponding to the positioning of the UI element” at lines 6-7, i.e., using the singular form “element”. This appears to be a typographical error. Appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-5, 7-8, 11-13, 16, and 18-25 are rejected under 35 U.S.C. 103 as being unpatentable over Yan et al., US 2023/0281938 A1 (hereinafter “Yan”) in view of Wallen et al., US 2022/0253125 A1 (hereinafter “Wallen”).
Regarding claim 1, Yan discloses a method (FIGS. 6A-7, generally [0067]-[0100]) comprising:
at an electronic device ([0008]-[0009] computing device; FIGS. 1A-4B, computing system 130 of artificial reality system 100 at [0037]-[0047] including head mounted display 102, wearable device 104, controller device 106, eyewear device 110 used in conjunction with the computing system 130) having a processor ([0008]-[0009] “one or more processors for performing any of the methods described herein”; FIGS. 6A-6C, the methods performed by the computing system 130 having one or more processors at [0067]):
executing an application as an application process inside an operating system (OS) of the electronic device ([0007]-[0010] “application executing on an operating system” applications given access to operating system level framework to identify input capabilities; FIGS. 1A-4B, applications 138-1, 138-2 at [0038]-[0047]; FIG. 5A applications 602-1 through 602-n at [0050]-[0051]; alternatively, artificial reality engine 234 at FIG. 8A and [0114]-[0117]);
receiving, at an input support process (FIG. 1A, input framework 112 at [0038]; FIG. 5A input framework 600 at [0048]), data corresponding to positioning of user interface (UI) elements ([0031]-[0034] describing viewable 3D sculpture assets of the 3D modeling application) of the application within a 3D coordinate system ([0031]-[0034] describing viewable 3D sculpture assets of the 3D modeling application, further FIGS. 3A-3D and [0042]-[0047]);
receiving, at the input support process (FIG. 1A, input framework 112 at [0038]; FIG. 5A input framework 600 at [0048]), data corresponding to user activity in the 3D coordinate system (FIG. 8A and [0114]-[0117]; further at FIGS. 5A-6C and [0048]-[0067], the input framework 112 receives the data for the user’s activity in the coordinate system and determines whether a particular situation would allow for the data to be used by the application, see, e.g., [0031]-[0034]);
identifying, at the input support process (FIG. 1A, input framework 112 at [0038]; FIG. 5A input framework 600 at [0048]), data for the application based on the data corresponding to the user activity and the data corresponding to the positioning of the UI elements of the application ([0031]-[0034], the interaction and activity of the user’s hand with the 3D sculpture assets would require data of the UI elements (3D sculpture assets) and the user hand activity being sent by the 3D modeling application to the input framework based on the level of fidelity; further at FIGS. 5A-7, with input framework 600 determining which level of fidelity to apply when using the application at [0048]-[0065], and application capabilities determining which of the algorithms 612-1 through 612-p are used to process inputs at input framework 600 based on the fidelity level determined at [0050]-[0051] and FIG. 5A; for additional context, [0031]-[0034] describing viewable 3D sculpture assets of the 3D modeling application in view of FIG. 3C and [0044] with gestures in correspondence with UI element 192, and FIG. 5B with hand gesture recognition 760, hand positioning at 768, and hand interaction application 776 at [0052]-[0060], including virtual object manipulation (i.e., grab and drag)); and
providing the data for the application from the input support process to the application (FIGS. 5A-7 and [0050]-[0060] using the algorithms 612-1 through 612-p along with capability providers 608-1 through 608-m to provide the appropriate sensor data (e.g., camera, wrist sensor, wrist IMU etc.) to the application), wherein the application process recognizes input to the application based on the data for the application (FIGS. 5A-7 and [0050]-[0060] using the algorithms 612-1 through 612-p along with capability providers 608-1 through 608-m to provide the appropriate sensor data (e.g., camera, wrist sensor, wrist IMU etc.) to the application as input to the application; further see FIGS. 3A-4B for various examples of inputs received (e.g., sensor data) and translated by the application at [0043]-[0047]).
However, although Yan discloses positioning data determination of both real and virtual objects (Yan at [0031]-[0034], FIGS. 3A-4B), Yan does not explicitly disclose that the data corresponding to the positioning of the UI element is based at least in part on data provided by the application.
In the same field of endeavor, Wallen clearly discloses a virtual/augmented reality environment (VR environment 125) running on a virtual reality operating system (VROS) (FIG. 4 and [0034]) with applications such as virtual displays 140a-d having various UI elements in the form of application icons 120, wherein the data corresponding to the positioning of the UI element is based at least in part on data provided by the application (FIGS. 3-7 and [0033]-[0046], each virtual display 140 separately determining placement of application icons 121, and placement of personal UI 115 based on the type of application (i.e., personal UI) in communication with the VR environment; arrangement, positioning, sizing, and removal of virtual displays 140 along with application icons 120 and applications 121a-c are at the user's discretion at [0045] and are communicated with and adjusted in the VR environment for proper display to the user 102; furthermore, the displays 140 are snapped into predetermined positions).
Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to modify the virtual/augmented reality input system of Yan to incorporate the arrangement of applications and displayable environments as disclosed by Wallen because the references are within the same field of endeavor, namely, augmented and virtual reality environments and input determinations made therein. The motivation to combine these references would have been to improve productivity while maintaining key functionality (see Wallen at least at [0003]). Therefore, a person of ordinary skill in the art would have been motivated to combine the prior art to achieve the claimed invention and there would have been a reasonable expectation of success.
Regarding claim 2, Yan in view of Wallen discloses the method of claim 1 (see above), further comprising displaying a view of an extended reality (XR) environment corresponding to the 3D coordinate system (Wallen, FIG. 3, [0033] and the pass-through view of the real-world environment through VR display device 135 with personal UI 115, with a couch, for example, as a mixed reality object 150 at [0038]), wherein the UI elements of the application are displayed in the view of the XR environment (Wallen at FIGS. 3-8B, [0033]-[0046] describing virtual displays 140a-d and application icons 120 as displayed), wherein the XR environment comprises UI elements from multiple application processes corresponding to multiple applications (Wallen, FIGS. 3-8B and [0033]-[0046] with each virtual display 140a-d as a different UI element, alternatively each icon 120 being a UI element and/or each application 121a as disclosed therein), wherein the input support process identifies data for each of the multiple applications (Yan at FIGS. 5A-7 and [0050]-[0060] using the algorithms 612-1 through 612-p along with capability providers 608-1 through 608-m to provide the appropriate sensor data (e.g., camera, wrist sensor, wrist IMU, etc.) to the application as input to the application; further see FIGS. 3A-4B for various examples of inputs received (e.g., sensor data) and translated by the application at [0043]-[0047]; in view of Wallen at FIGS. 3-8B and [0033]-[0047] with separately operating apps 121a-c within various virtual displays 140a-c, operating the separate applications side-by-side on the virtual displays).
Regarding claim 3, Yan in view of Wallen discloses the method of claim 1 (see above), wherein the OS comprises an OS process configured to perform the input support process outside of the application process (see Yan at FIGS. 1A-6C with applications 138-1 and 138-2 and 602-1 through 602-n; these applications are separate from the input framework 112, 600 as disclosed at [0038]-[0050], further at [0007]-[0008] and [0175]-[0183] and [0197]).
Regarding claim 4, Yan in view of Wallen discloses the method of claim 3 (see above), wherein the OS process further comprises a simulation process configured to perform a simulation of a 3D environment based on a physical environment associated with the 3D coordinate system (Wallen at [0024]-[0032] and FIG. 1 generating a 3D representation of the physical environment), wherein the simulation process positions the UI elements of the application within the 3D coordinate system based on data provided by the application (Wallen at [0024]-[0032] and FIG. 1 generating a 3D representation of the physical environment with images based on the viewpoint of the user’s eyes, further at FIGS. 3-10).
Regarding claim 5, Yan in view of Wallen discloses the method of claim 4 (see above), wherein the simulation process positions the UI elements by:
positioning one or more components within the 3D coordinate system (Wallen at [0024]-[0032] and FIG. 1 generating a 3D representation of the physical environment and placing objects accordingly); and
positioning the UI elements of the application on the one or more components, wherein the positioning of the UI elements of the application on the one or more components is defined based on the data provided by the application (Wallen at [0024]-[0032] and FIG. 1 generating a 3D representation of the physical environment and placing objects accordingly), wherein the application is unaware of the positioning of the one or more components within the 3D coordinate system (Wallen at FIGS. 3-7 and [0033]-[0046], each virtual display 140 separately determining placement of application icons 121, and placement of personal UI 115 based on the type of application (i.e., personal UI) in communication with the VR environment; arrangement, positioning, sizing, and removal of virtual displays 140 along with application icons 120 and applications 121a-c are at the user's discretion at [0045] and are communicated with and adjusted in the VR environment for proper display to the user 102; furthermore, the displays 140 are snapped into predetermined positions).
Regarding claim 7, Yan in view of Wallen discloses the method of claim 3 (see above), wherein the data provided by the application identifies external effects for some of the UI elements (Yan at FIGS. 5A-7 and [0045]-[0065], determination of various inputs available based on external factors and limitations), wherein an external effect specifies that the OS process is to provide responses to a specified user activity relative to a specified UI element outside of the application process (Yan at FIGS. 5A-7 and [0045]-[0065], determination of various inputs available based on external factors and limitations with notification to the user and the application accordingly).
Regarding claim 8, Yan in view of Wallen discloses the method of claim 1 (see above), wherein the data provided by the application is provided to the OS process via an inter-process communication link (Wallen at FIG. 11, describing linking 1150 at least at [0054]; such links are known in the art and would be understood where the application and the OS process are executed at different locations through a network 1110, as contemplated by Wallen and as known to one of ordinary skill in the art, in view of Yan at FIGS. 1A-6C and the input capabilities and fidelity levels 136 at [0038]).
Regarding claim 11, Yan in view of Wallen discloses the method of claim 1 (see above), wherein the data corresponding to the user activity comprises hands data and gaze data (Yan disclosing hand tracking at FIGS. 5A-7 and [0048]-[0065], and Wallen at [0024]-[0030], eye/gaze tracking).
Regarding claim 12, Yan in view of Wallen discloses the method of claim 1 (see above), wherein the data corresponding to the user activity comprises controller data and gaze data (Yan at FIGS. 4A-5C and controller 106 at [0046]-[0049] and [0052]-[0059]; and Wallen at [0024]-[0030], eye/gaze tracking).
Regarding claim 13, Yan in view of Wallen discloses the method of claim 1 (see above), wherein identifying the data for the application comprises identifying interaction event data by identifying only certain types of activity within the user activity to be included (Yan at FIGS. 6A-7 describing determination of which data input devices and their respective data may be used for the application in accordance with allowable fidelity levels at [0067]-[0087], meaning only certain modalities are supported and other input data are ignored).
Regarding claim 16, Yan in view of Wallen discloses the method of claim 1 (see above), wherein identifying the data for the application comprises identifying only certain attributes of the data corresponding to the user activity for inclusion in the data for the application (Yan at FIGS. 6A-7 describing determination of which data input devices and their respective data may be used for the application in accordance with allowable fidelity levels at [0067]-[0087], meaning only certain modalities are supported and other input data are ignored even if the user attempts to provide such input – the user may receive a notification that the input is not available at [0004]; various examples provided at [0175]-[0201]).
Regarding claim 18, Yan in view of Wallen discloses the method of claim 1 (see above) wherein the data for the application comprises:
an interaction pose comprising position and orientation data for an interaction point within the UI elements of the application (Yan at FIGS. 5A-7 and [0051]-[0058] describing point and click interaction, with hand position and controller orientation determinations therein as the input, in view of Wallen describing UI elements at FIGS. 4-7 such as app icons 120 and various open applications 121a-d at [0034]-[0046]).
Regarding claim 19, Yan in view of Wallen discloses the method of claim 1 (see above), wherein the data for the application comprises:
a manipulator pose comprising position and orientation data corresponding to a hand within the 3D coordinate system (Yan at FIGS. 5A-7 and [0052]-[0072] and [0082]-[0087] describing grabbing and dragging virtual objects using the hand interaction application 776 determining orientation and position of the hand therein).
Regarding claim 20, Yan in view of Wallen discloses the method of claim 1 (see above), wherein the data for the application comprises: an interaction state comprising data identifying a type of interaction (Wallen at FIGS. 4-7 describing various types of user input interactions (and their respective interpretations) such as selecting, dragging, resizing, removing/closing a window, and rearranging the various virtual displays 140b and the applications 121b as disclosed at [0034]-[0046]).
Regarding claim 21, Yan in view of Wallen discloses the method of claim 1 (see above), wherein the data for the application comprises:
an interaction pose comprising position and orientation data for an interaction point within the UI elements of the application (Yan at FIGS. 5A-7 and [0051]-[0058] describing point and click interaction, with hand position and controller orientation determinations therein as the input, in view of Wallen describing UI elements at FIGS. 4-7 such as app icons 120 and various open applications 121a-d at [0034]-[0046]);
a manipulator pose comprising position and orientation data corresponding to a hand within the 3D coordinate system (Yan at FIGS. 5A-7 and [0052]-[0072] and [0082]-[0087] describing grabbing and dragging virtual objects using the hand interaction application 776 determining orientation and position of the hand therein); and
an interaction state comprising data identifying a type of interaction (Wallen at FIGS. 4-7 describing various types of user input interactions (and their respective interpretations) such as selecting, dragging, resizing, removing/closing a window, and rearranging the various virtual displays 140b and the applications 121b as disclosed at [0034]-[0046]).
Regarding claim 22, Yan in view of Wallen discloses the method of claim 1 (see above), wherein the data for the application identifies a UI element (application icon 120 and/or first application 121a) being interacted with during an interaction event (Wallen, FIGS. 4-7 and [0034]-[0046] describing pose change of the user 102 and interaction of the user with the VR environment 125 received as an input, including selection of an application associated with the change).
Regarding claim 23, Yan in view of Wallen discloses the method of claim 1 (see above), wherein the data for the application excludes data associated with applications other than the application (Wallen at FIG. 9, [0048]-[0050] privacy settings when using a shared environment with a second user, data of the shared application does not include data from the unshared applications based on privacy settings of either user 102a or 102b).
Regarding claim 24, it is similar in scope to claim 1 above, the only difference being that claim 24 is directed to a non-transitory computer-readable storage medium (Yan at [0009] and [0206]) and one or more processors coupled to the non-transitory computer-readable storage medium (Yan at [0009] and [0206]), wherein the non-transitory computer-readable storage medium comprises program instructions (Yan at [0009] and [0206]) that, when executed on the one or more processors, cause the system to perform operations similar to those of claim 1 (see above). Therefore, claim 24 is analyzed and rejected in the same manner as claim 1.
Regarding claim 25, it is similar in scope to claim 1 above, the only difference being that claim 25 is directed to a non-transitory computer-readable storage medium (Yan at [0009] and [0206]) storing program instructions executable via one or more processors (Yan at [0009] and [0206]) to perform the operations and steps of claim 1. Therefore, claim 25 is analyzed and rejected in the same manner as claim 1.
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Yan in view of Wallen as applied to claim 3 above, and further in view of Arnold, US 2014/0173435 A1 (hereinafter “Arnold”).
Regarding claim 6, Yan in view of Wallen discloses the method of claim 3 (see above).
However, Yan in view of Wallen does not explicitly disclose wherein the data provided by the application comprises a layered tree structure defining the positional and containment relationships of the UI elements relative to one another on a two-dimensional (2D) coordinate system.
In the same field of endeavor, Arnold discloses wherein the data provided by the application comprises a layered tree structure (Arnold at FIGS. 2A-3F and the method of FIG. 4, with 250 illustrating structural layer 280 with child nodes 285 as described at [0014]-[0018]) defining the positional and containment relationships of the UI elements relative to one another on a two-dimensional (2D) coordinate system (Arnold at FIGS. 2A-3F and the method of FIG. 4, and [0015]-[0018] and [0025]-[0035] describing relational orientation of objects/elements on a 2D coordinate structure of the display, including position and relational data of elements therein).
Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to modify the augmented reality interface elements of Yan in view of Wallen to incorporate the layered tree structure of the various programs and content interface elements as disclosed by Arnold because the references are within the same field of endeavor, namely, graphical interface elements and their presentation to a user. The motivation to combine these references would have been to reduce the latency of output based on input (see Arnold at [0033]). Therefore, a person of ordinary skill in the art would have been motivated to combine the prior art to achieve the claimed invention and there would have been a reasonable expectation of success.
Claims 9 and 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Yan in view of Wallen as applied to claim 1 above, and further in view of Lacey et al., US 2019/0362557 A1 (hereinafter “Lacey”).
Regarding claim 9, Yan in view of Wallen discloses the method of claim 1 (see above).
However, Yan in view of Wallen does not explicitly disclose wherein the data corresponding to the user activity comprises gaze data comprising a stream of gaze vectors corresponding to gaze directions over time during use of the electronic device.
In the same field of endeavor, Lacey discloses wherein the data corresponding to the user activity comprises gaze data (FIGS. 36(i)-36(iii) at [0345], gaze data determining focus; FIGS. 59A-59B and [0440]-[0444]) comprising a stream of gaze vectors corresponding to gaze directions over time during use of the electronic device (FIGS. 36(i)-36(iii) at [0345], FIGS. 59A-59B and [0440]-[0444], dwell determination based on a time threshold of eye gaze, requiring determination of convergence of gaze vectors at [0374]; the determination of input vectors is generally described at [0084], [0162], and [0364]).
Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to modify the augmented reality interface elements of Yan in view of Wallen to incorporate eye convergence vectors of the augmented reality display system as disclosed by Lacey because the references are within the same field of endeavor, namely, graphical interface elements and their presentation to a user for selection through user input. The motivation to combine these references would have been to improve confidence and accuracy of the various inputs provided by the user when interacting with the object and user interface elements (see Lacey at [0084]). Therefore, a person of ordinary skill in the art would have been motivated to combine the prior art to achieve the claimed invention and there would have been a reasonable expectation of success.
Regarding claim 14, Yan in view of Wallen discloses the method of claim 1 (see above).
However, Yan in view of Wallen does not explicitly disclose wherein activity of the user activity that is determined to correspond to unintentional events rather than intentional input is excluded from the data for the application.
In the same field of endeavor, Lacey discloses wherein activity of the user activity that is determined to correspond to unintentional events rather than intentional input is excluded from the data for the application ([0006]-[0007] describing the determined intentional inputs and excluding the data from sensors that diverge from the intended input (e.g., unintentional), known as transmodal input fusion, as further described at [0080]-[0090] and FIGS. 39A-39D and [0373]-[0379] and [0409] and [0414], further at FIG. 56 and [0433] and FIGS. 58A-59B and [0436]-[0445]).
Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to modify the augmented reality interface elements of Yan in view of Wallen to incorporate intended actions over unintended actions as disclosed by Lacey because the references are within the same field of endeavor, namely, graphical interface elements and their presentation to a user for selection through user input. The motivation to combine these references would have been to improve confidence and accuracy and reduce the error rate of the various inputs provided by the user when interacting with the object and user interface elements (see Lacey at [0080] and [0084]). Therefore, a person of ordinary skill in the art would have been motivated to combine the prior art to achieve the claimed invention and there would have been a reasonable expectation of success.
Regarding claim 15, Yan in view of Wallen discloses the method of claim 1 (see above).
However, Yan in view of Wallen does not explicitly disclose wherein passive gaze-only activity of the user activity is excluded from the data for the application.
In the same field of endeavor, Lacey discloses wherein passive gaze-only activity of the user activity is excluded from the data for the application (FIGS. 52-56 and [0426]-[0433] describing filtering out eye gaze input and using the other inputs, such as hand gesture and head pose, further example at least at FIGS 48A-49 and [0405]-[0412] when eye gaze diverges and is removed).
Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to modify the augmented reality interface elements of Yan in view of Wallen to incorporate removing eye gaze information when the data diverges as disclosed by Lacey because the references are within the same field of endeavor, namely, graphical interface elements and their presentation to a user for selection through a variety of user inputs. The motivation to combine these references would have been to improve confidence and accuracy and reduce the error rate of the various inputs provided by the user when interacting with the object and user interface elements (see Lacey at [0080] and [0084]). Therefore, a person of ordinary skill in the art would have been motivated to combine the prior art to achieve the claimed invention and there would have been a reasonable expectation of success.
Claims 10 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Yan in view of Wallen as applied to claim 1 above, and further in view of Pollefeys et al., US 2020/0302634 A1 (hereinafter “Pollefeys”).
Regarding claim 10, Yan in view of Wallen discloses the method of claim 1 (see above).
However, Yan in view of Wallen does not explicitly disclose wherein the data corresponding to the user activity comprises hands data comprising a hand pose skeleton of multiple joints for each of multiple instants in time during use of the electronic device.
In the same field of endeavor, Pollefeys discloses wherein the data corresponding to the user activity comprises hands data comprising a hand pose skeleton of multiple joints for each of multiple instants in time during use of the electronic device (FIGS. 4A-4D and 5-6 and [0033]-[0042] describing using joints and skeletal representation of the hand to form articulated object poses 52 and movement of such over time at least at [0030] and [0053]-[0058]).
Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to modify the augmented reality interface elements of Yan in view of Wallen to incorporate the hand pose determination using skeletal joints of a hand in input frames as disclosed by Pollefeys because the references are within the same field of endeavor, namely, interaction with user interface elements with gestures and various inputs. The motivation to combine these references would have been to improve the prediction accuracy of estimating both articulated object and target object poses (see Pollefeys at [0058]). Therefore, a person of ordinary skill in the art would have been motivated to combine the prior art to achieve the claimed invention and there would have been a reasonable expectation of success.
Regarding claim 17, Yan in view of Wallen discloses the method of claim 1 (see above). However, Yan in view of Wallen does not explicitly disclose wherein:
the data corresponding to the user activity includes hands data representing the positions of multiple joints of a hand; and
the data for the application includes a single hand pose that is provided instead of the hands data.
In the same field of endeavor, Pollefeys discloses wherein:
the data corresponding to the user activity includes hands data representing the positions of multiple joints of a hand (FIGS. 4A-4D and 5-6 and [0033]-[0042] describing using joints and skeletal representation of the hand to form articulated object poses 52 and movement of such over time at least at [0030] and [0053]-[0058]); and
the data for the application includes a single hand pose that is provided instead of the hands data ([0017] and FIGS. 4A-4D and 5 describing single hand motions therein at [0032]-[0038]).
Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to modify the augmented reality interface elements of Yan in view of Wallen to incorporate the hand pose determination using skeletal joints of a hand in input frames as disclosed by Pollefeys because the references are within the same field of endeavor, namely, interaction with user interface elements with gestures and various inputs. The motivation to combine these references would have been to improve the prediction accuracy of estimating both articulated object and target object poses (see Pollefeys at [0058]). Therefore, a person of ordinary skill in the art would have been motivated to combine the prior art to achieve the claimed invention and there would have been a reasonable expectation of success.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Gum et al., US 2024/0385693 A1: FIG. 8 and [0102] limited interaction data received by the app that is only related to the app;
Shutzberg et al., US 2024/0385692 A1: FIGS. 6A-6B and [0067]-[0068] limited user activity information provided to the app to protect privacy.
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SARVESH J. NADKARNI whose telephone number is (571)270-7562. The examiner can normally be reached 8AM-5PM M-F.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Benjamin C. Lee can be reached at (571)272-2963. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SARVESH J NADKARNI/ Examiner, Art Unit 2629
/BENJAMIN C LEE/Supervisory Patent Examiner, Art Unit 2629