DETAILED ACTION
In the Response dated 12/8/2025, Applicant amended claims 1-16 and argued against all rejections previously set forth in the Office action dated 9/8/2025.
Response to Arguments
Applicant’s arguments have been considered but are moot in view of the new ground(s) of rejection.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 5, 7-9, 11-13, 15, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Moll et al., Pub. No. US 2023/0384928 A1 (hereinafter “Moll”), in view of Yun, Pub. No. US 2021/0271081 A1 (hereinafter “Yun”).
With regard to claim 1:
Moll discloses a glasses device, comprising: a frame structure configured to fix the glasses device to a head of a user (see fig. 1 for the frame structure and the glasses, paragraph 26: “FIG. 1 is a perspective view of an AR system composed of a head-worn device (e.g., glasses 100 of FIG. 1), in accordance with some examples. The glasses 100 can include a frame 102 made from any suitable material such as plastic or metal, including any suitable shape memory alloy. In one or more examples, the frame 102 includes a first or left optical element holder 104 (e.g., a display or lens holder) and a second or right optical element holder 106 connected by a bridge 112. A first or left optical element 108 and a second or right optical element 110 can be provided within respective left optical element holder 104 and right optical element holder 106. The right optical element 110 and the left optical element 108 can be a lens, a display, a display assembly, or a combination of the foregoing. Any suitable display assembly can be provided in the glasses 100.”); at least one at least partially transparent vision element, which can be positioned in front of at least one eye of the user and through which the user can look (a head-worn device may be implemented with a transparent or semi-transparent display through which the user can view the surrounding environment; paragraph 2: “A head-worn device may be implemented with a transparent or semi-transparent display through which a user of the head-worn device can view the surrounding environment. Such devices enable a user to see through the transparent or semi-transparent display to view the surrounding environment, and to also see objects (e.g., virtual objects such as a rendering of a 2D or 3D graphic model, images, video, text, and so forth) that are generated for display to appear as a part of, and/or overlaid upon, the surrounding environment. This is typically referred to as “augmented reality” or “AR.” A head-worn device may additionally completely occlude a user's visual field and display a virtual environment through which a user may move or be moved. This is typically referred to as “virtual reality” or “VR.” As used herein, the term AR refers to either or both augmented reality and virtual reality as traditionally understood, unless the context indicates otherwise.”); and at least one sensor arrangement arranged on the frame structure (see fig. 1 for camera sensors, paragraphs 30 and 31: “The glasses 100 include a first or left camera 114 and a second or right camera 116. Although two cameras are depicted, other examples contemplate the use of a single or additional (i.e., more than two) cameras. In one or more examples, the glasses 100 include any number of input sensors or other input/output devices in addition to the left camera 114 and the right camera 116. Such sensors or input/output devices can additionally include biometric sensors, location sensors, motion sensors, and so forth. 
In some examples, the left camera 114 and the right camera 116 provide video frame data for use by the glasses 100 to extract 3D information from a real-world scene.”), wherein the sensor arrangement is configured to capture sensor data while the user wears the glasses device, wherein the sensor data at least partially represent at least one subarea of a body of the user during the wearing of the glasses device (system captures video frame data of detectable portions of the user's body paragraph 48: “During the gesture-based keyboard process 400, in operation 430, one or more cameras 420 of the AR system generate real-world scene video frame data 432 of a real-world scene from a perspective of a user of the AR system. The one or more cameras 420 communicate the real-world scene video frame data 432 to a tracking service 424. Included in the real-world scene video frame data 432 are tracking video frame data of detectable portions of the user's body including portions of the user's upper body, arms, hands, and fingers. The tracking video frame data includes video frame data of movement of portions of the user's upper body, arms, and hands as the user makes a gesture or moves their hands and fingers to interact with the virtual keyboard user interface 450; video frame data of locations of the user's arms and hands in space as the user makes the gesture or moves their hands and fingers to interact with the virtual keyboard user interface 450; and video frame data of positions in which the user holds their upper body, arms, hands, and fingers as the user makes the gesture or moves their hands and fingers to interact with the virtual keyboard user interface 450.”), and wherein a pose determination of a pose of the user can be carried out based on the sensor data (gesture recognition service 422 determines a detected gesture/pose, paragraph 50: “In operation 438, the gesture recognition service 422 receives the current tracking data 464 from the tracking service 424 and generates current detected gesture data 436 based on the current tracking data 464. In some examples, the gesture recognition service 422 generates one or more current skeletal models of the user's upper body, arms, hands, and fingers based on landmark data of landmarks included in the current tracking data 464. The gesture recognition service 422 compares the one or more current skeletal models to previously generated gesture skeletal models. The gesture recognition service 422 determines a detected gesture on a basis of the comparison of the one or more current skeletal models with the gesture skeletal models and generates the current detected gesture data 436 based on the detected gesture. In additional examples, the gesture recognition service 422 generates the one or more current skeletal models based on the landmark data. The gesture recognition service 422 determines the detected gesture on a basis of categorizing the current skeletal models using artificial intelligence methodologies and a gesture model previously generated using machine learning methodologies. The gesture recognition service 422 generates the current detected gesture data 436 based on the detected gesture.”).
Moll does not disclose the aspect wherein the at least one sensor arrangement is oriented with at least one effective range configured to capture sensor data representing subareas of the body of the user in both frontal and lateral regions relative to the head of the user.
However Yun discloses the aspect wherein the at least one sensor arrangement is oriented with at least one effective range configured to capture sensor data representing subareas of the body of the user in both frontal and lateral regions relative to the head of the user (paragraph 56: “In one embodiment, the band-type flexible display 212 may include a transparent lens and a display panel disposed in a user eye direction of the smart glass. The head engaging band 214 may be formed in the form of a head band capable of mounting the camera-based mixed reality glass device 110 on a user's head. In one embodiment, the head engaging band 214 may place a plurality of cameras 216 along the head of the user on the outer surface. The plurality of cameras 216 may be disposed in front, rear, and rear sides of the head engaging band and photograph 360-degree image of the user as a peripheral image. For example, the head engaging band 214 may arrange the plurality of cameras 216 in the direction opposite to the user's eyes, the user's ear direction, and the user's backward direction. In one embodiment, the head engaging band 214 may fold a lens cover 20 mounted with a front camera in front of the user's eyes to the body portion of the head engaging band 214. More specifically, the head engaging band 214 may open and close the lens cover 20 mounted with the front camera to the center of the head engaging band 214 about hinges 10 at both sides thereof. When the lens cover 20 is open, the user may view a periphery visually through the transparent lens 30, and when the lens cover 20 is closed, the user may view the mixed reality image in which the virtual image is overlaid on the peripheral image through the band-type flexible display 212 disposed in the user's eye direction of the lens cover 20.”).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to apply Yun's camera arrangement to Moll so that the sensors can capture image data from around the user, providing the user with more context of the surroundings and a more immersive experience.
With regard to claim 2:
Moll and Yun disclose the glasses device according to claim 1, wherein the subarea of the body of the user includes one or more from the following list: upper body, shoulder, arm, upper arm, forearm, hand, torso, leg, thigh, lower leg, foot (Moll, paragraph 48: “During the gesture-based keyboard process 400, in operation 430, one or more cameras 420 of the AR system generate real-world scene video frame data 432 of a real-world scene from a perspective of a user of the AR system. The one or more cameras 420 communicate the real-world scene video frame data 432 to a tracking service 424. Included in the real-world scene video frame data 432 are tracking video frame data of detectable portions of the user's body including portions of the user's upper body, arms, hands, and fingers. The tracking video frame data includes video frame data of movement of portions of the user's upper body, arms, and hands as the user makes a gesture or moves their hands and fingers to interact with the virtual keyboard user interface 450; video frame data of locations of the user's arms and hands in space as the user makes the gesture or moves their hands and fingers to interact with the virtual keyboard user interface 450; and video frame data of positions in which the user holds their upper body, arms, hands, and fingers as the user makes the gesture or moves their hands and fingers to interact with the virtual keyboard user interface 450.”).
With regard to claims 3 and 13:
Moll and Yun disclose the glasses device according to claim 1, wherein the sensor arrangement is at least partially oriented in a viewing direction of the glasses device defined by the vision element and/or at an angle to the viewing direction (Moll, see fig. 1 for camera sensors pointing forward, paragraph 30 and 31: “The glasses 100 include a first or left camera 114 and a second or right camera 116. Although two cameras are depicted, other examples contemplate the use of a single or additional (i.e., more than two) cameras. In one or more examples, the glasses 100 include any number of input sensors or other input/output devices in addition to the left camera 114 and the right camera 116. Such sensors or input/output devices can additionally include biometric sensors, location sensors, motion sensors, and so forth. In some examples, the left camera 114 and the right camera 116 provide video frame data for use by the glasses 100 to extract 3D information from a real-world scene.”).
With regard to claim 5:
Moll and Yun disclose the glasses device according to claim 1, wherein the at least one sensor arrangement is at least partially integrated into a support body of the frame structure and/or of the temple element (Moll, see fig. 1 for camera sensors, paragraph 30 and 31: “The glasses 100 include a first or left camera 114 and a second or right camera 116. Although two cameras are depicted, other examples contemplate the use of a single or additional (i.e., more than two) cameras. In one or more examples, the glasses 100 include any number of input sensors or other input/output devices in addition to the left camera 114 and the right camera 116. Such sensors or input/output devices can additionally include biometric sensors, location sensors, motion sensors, and so forth. In some examples, the left camera 114 and the right camera 116 provide video frame data for use by the glasses 100 to extract 3D information from a real-world scene.”).
With regard to claim 7:
Moll and Yun disclose the glasses device according to claim 1, wherein the sensor arrangement includes one or more from the following list: camera sensor, stereo camera sensor, ultrasonic sensor, LiDAR sensor, edge emitter laser assembly, laser feedback interferometry sensor (Moll, see fig. 1 for camera sensors, paragraph 30 and 31: “The glasses 100 include a first or left camera 114 and a second or right camera 116. Although two cameras are depicted, other examples contemplate the use of a single or additional (i.e., more than two) cameras. In one or more examples, the glasses 100 include any number of input sensors or other input/output devices in addition to the left camera 114 and the right camera 116. Such sensors or input/output devices can additionally include biometric sensors, location sensors, motion sensors, and so forth. In some examples, the left camera 114 and the right camera 116 provide video frame data for use by the glasses 100 to extract 3D information from a real-world scene.”).
With regard to claim 8:
Moll and Yun disclose the glasses device according to claim 1, wherein the glasses device is configured as one from the following list: glasses, reading glasses, sunglasses, safety glasses, ski glasses, swim goggles, diving goggles (Moll, see fig. 1 for glasses, paragraph 30 and 31: “The glasses 100 include a first or left camera 114 and a second or right camera 116. Although two cameras are depicted, other examples contemplate the use of a single or additional (i.e., more than two) cameras. In one or more examples, the glasses 100 include any number of input sensors or other input/output devices in addition to the left camera 114 and the right camera 116. Such sensors or input/output devices can additionally include biometric sensors, location sensors, motion sensors, and so forth. In some examples, the left camera 114 and the right camera 116 provide video frame data for use by the glasses 100 to extract 3D information from a real-world scene.”).
With regard to claim 9:
Moll and Yun disclose the glasses device according to claim 1, further comprising: a computing unit configured to ascertain the pose determination of the pose of the user (Moll, paragraph 50: “In operation 438, the gesture recognition service 422 receives the current tracking data 464 from the tracking service 424 and generates current detected gesture data 436 based on the current tracking data 464. In some examples, the gesture recognition service 422 generates one or more current skeletal models of the user's upper body, arms, hands, and fingers based on landmark data of landmarks included in the current tracking data 464. The gesture recognition service 422 compares the one or more current skeletal models to previously generated gesture skeletal models. The gesture recognition service 422 determines a detected gesture on a basis of the comparison of the one or more current skeletal models with the gesture skeletal models and generates the current detected gesture data 436 based on the detected gesture. In additional examples, the gesture recognition service 422 generates the one or more current skeletal models based on the landmark data. The gesture recognition service 422 determines the detected gesture on a basis of categorizing the current skeletal models using artificial intelligence methodologies and a gesture model previously generated using machine learning methodologies. The gesture recognition service 422 generates the current detected gesture data 436 based on the detected gesture.”). based on the sensor data of the sensor arrangement (see fig. 1 for the sensors arrangement, paragraph 48: “During the gesture-based keyboard process 400, in operation 430, one or more cameras 420 of the AR system generate real-world scene video frame data 432 of a real-world scene from a perspective of a user of the AR system. The one or more cameras 420 communicate the real-world scene video frame data 432 to a tracking service 424. Included in the real-world scene video frame data 432 are tracking video frame data of detectable portions of the user's body including portions of the user's upper body, arms, hands, and fingers. The tracking video frame data includes video frame data of movement of portions of the user's upper body, arms, and hands as the user makes a gesture or moves their hands and fingers to interact with the virtual keyboard user interface 450; video frame data of locations of the user's arms and hands in space as the user makes the gesture or moves their hands and fingers to interact with the virtual keyboard user interface 450; and video frame data of positions in which the user holds their upper body, arms, hands, and fingers as the user makes the gesture or moves their hands and fingers to interact with the virtual keyboard user interface 450.”).
With regard to claim 11:
Moll and Yun disclose the glasses device according to claim 9, wherein a trained artificial intelligence is formed on the computing unit, and wherein the trained artificial intelligence is configured to ascertain the pose of the user based on the sensor data (Moll, paragraph 58: “In operation 414, the gesture text entry application 426 generates entered text data 456 based on the collected continuous motion gesture data of the continuous motion 454. In some examples, the gesture text entry application 426 maps the collected continuous motion gesture data to text data using artificial intelligence methodologies and a continuous motion gesture model previously generated using machine learning methodologies. The gesture text entry application 426 generates the current detected gesture data 436 based on the mapped text data.”).
With regard to claim 12:
Moll discloses a method for determining a pose of a user (gesture recognition service 422 determines a detected gesture/pose, paragraph 50: “In operation 438, the gesture recognition service 422 receives the current tracking data 464 from the tracking service 424 and generates current detected gesture data 436 based on the current tracking data 464. In some examples, the gesture recognition service 422 generates one or more current skeletal models of the user's upper body, arms, hands, and fingers based on landmark data of landmarks included in the current tracking data 464. The gesture recognition service 422 compares the one or more current skeletal models to previously generated gesture skeletal models. The gesture recognition service 422 determines a detected gesture on a basis of the comparison of the one or more current skeletal models with the gesture skeletal models and generates the current detected gesture data 436 based on the detected gesture. In additional examples, the gesture recognition service 422 generates the one or more current skeletal models based on the landmark data. The gesture recognition service 422 determines the detected gesture on a basis of categorizing the current skeletal models using artificial intelligence methodologies and a gesture model previously generated using machine learning methodologies. The gesture recognition service 422 generates the current detected gesture data 436 based on the detected gesture.”), comprising the following steps: receiving sensor data (system captures video frame data of detectable portions of the user's body paragraph 48: “During the gesture-based keyboard process 400, in operation 430, one or more cameras 420 of the AR system generate real-world scene video frame data 432 of a real-world scene from a perspective of a user of the AR system. The one or more cameras 420 communicate the real-world scene video frame data 432 to a tracking service 424. Included in the real-world scene video frame data 432 are tracking video frame data of detectable portions of the user's body including portions of the user's upper body, arms, hands, and fingers. The tracking video frame data includes video frame data of movement of portions of the user's upper body, arms, and hands as the user makes a gesture or moves their hands and fingers to interact with the virtual keyboard user interface 450; video frame data of locations of the user's arms and hands in space as the user makes the gesture or moves their hands and fingers to interact with the virtual keyboard user interface 450; and video frame data of positions in which the user holds their upper body, arms, hands, and fingers as the user makes the gesture or moves their hands and fingers to interact with the virtual keyboard user interface 450.”), of at least one sensor arrangement of a glasses device (see fig. 1 for the frame structure and the glasses, paragraph 26: “FIG. 1 is a perspective view of an AR system composed of a head-worn device (e.g., glasses 100 of FIG. 1), in accordance with some examples. The glasses 100 can include a frame 102 made from any suitable material such as plastic or metal, including any suitable shape memory alloy. In one or more examples, the frame 102 includes a first or left optical element holder 104 (e.g., a display or lens holder) and a second or right optical element holder 106 connected by a bridge 112. 
A first or left optical element 108 and a second or right optical element 110 can be provided within respective left optical element holder 104 and right optical element holder 106. The right optical element 110 and the left optical element 108 can be a lens, a display, a display assembly, or a combination of the foregoing. Any suitable display assembly can be provided in the glasses 100.”); wherein the sensor data at least partially represent at least one subarea of a body of a user of the glasses device (system captures video frame data of detectable portions of the user's body paragraph 48: “During the gesture-based keyboard process 400, in operation 430, one or more cameras 420 of the AR system generate real-world scene video frame data 432 of a real-world scene from a perspective of a user of the AR system. The one or more cameras 420 communicate the real-world scene video frame data 432 to a tracking service 424. Included in the real-world scene video frame data 432 are tracking video frame data of detectable portions of the user's body including portions of the user's upper body, arms, hands, and fingers. The tracking video frame data includes video frame data of movement of portions of the user's upper body, arms, and hands as the user makes a gesture or moves their hands and fingers to interact with the virtual keyboard user interface 450; video frame data of locations of the user's arms and hands in space as the user makes the gesture or moves their hands and fingers to interact with the virtual keyboard user interface 450; and video frame data of positions in which the user holds their upper body, arms, hands, and fingers as the user makes the gesture or moves their hands and fingers to interact with the virtual keyboard user interface 450.”), and wherein the sensor data were captured while the user wore the glasses device (paragraph 37: “In use, a user of the glasses 100 will be presented with information, content and various user interfaces on the near eye displays. As described in more detail herein, the user can then interact with the glasses 100 using a touchpad 126 and/or the buttons 128, voice inputs or touch inputs on an associated device (e.g., client device 826 illustrated in FIG. 8), and/or hand movements, locations, and positions detected by the glasses 100.”) performing a pose determination based on the sensor data and determining pose model information regarding a pose of the user of the glasses device (gesture recognition service 422 determines a detected gesture/pose, paragraph 50: “In operation 438, the gesture recognition service 422 receives the current tracking data 464 from the tracking service 424 and generates current detected gesture data 436 based on the current tracking data 464. In some examples, the gesture recognition service 422 generates one or more current skeletal models of the user's upper body, arms, hands, and fingers based on landmark data of landmarks included in the current tracking data 464. The gesture recognition service 422 compares the one or more current skeletal models to previously generated gesture skeletal models. The gesture recognition service 422 determines a detected gesture on a basis of the comparison of the one or more current skeletal models with the gesture skeletal models and generates the current detected gesture data 436 based on the detected gesture. In additional examples, the gesture recognition service 422 generates the one or more current skeletal models based on the landmark data. 
The gesture recognition service 422 determines the detected gesture on a basis of categorizing the current skeletal models using artificial intelligence methodologies and a gesture model previously generated using machine learning methodologies. The gesture recognition service 422 generates the current detected gesture data 436 based on the detected gesture.”). and providing the pose model information (The gesture recognition service 422 determines the detected gesture on a basis of categorizing the current skeletal models, paragraph 50: “In operation 438, the gesture recognition service 422 receives the current tracking data 464 from the tracking service 424 and generates current detected gesture data 436 based on the current tracking data 464. In some examples, the gesture recognition service 422 generates one or more current skeletal models of the user's upper body, arms, hands, and fingers based on landmark data of landmarks included in the current tracking data 464. The gesture recognition service 422 compares the one or more current skeletal models to previously generated gesture skeletal models. The gesture recognition service 422 determines a detected gesture on a basis of the comparison of the one or more current skeletal models with the gesture skeletal models and generates the current detected gesture data 436 based on the detected gesture. In additional examples, the gesture recognition service 422 generates the one or more current skeletal models based on the landmark data. The gesture recognition service 422 determines the detected gesture on a basis of categorizing the current skeletal models using artificial intelligence methodologies and a gesture model previously generated using machine learning methodologies. The gesture recognition service 422 generates the current detected gesture data 436 based on the detected gesture.”).
Moll does not disclose the aspect wherein the at least one sensor arrangement is oriented with at least one effective range configured to capture sensor data representing subareas of the body of the user in both frontal and lateral regions relative to the head of the user.
However Yun discloses the aspect wherein the at least one sensor arrangement is oriented with at least one effective range configured to capture sensor data representing subareas of the body of the user in both frontal and lateral regions relative to the head of the user (paragraph 56: “In one embodiment, the band-type flexible display 212 may include a transparent lens and a display panel disposed in a user eye direction of the smart glass. The head engaging band 214 may be formed in the form of a head band capable of mounting the camera-based mixed reality glass device 110 on a user's head. In one embodiment, the head engaging band 214 may place a plurality of cameras 216 along the head of the user on the outer surface. The plurality of cameras 216 may be disposed in front, rear, and rear sides of the head engaging band and photograph 360-degree image of the user as a peripheral image. For example, the head engaging band 214 may arrange the plurality of cameras 216 in the direction opposite to the user's eyes, the user's ear direction, and the user's backward direction. In one embodiment, the head engaging band 214 may fold a lens cover 20 mounted with a front camera in front of the user's eyes to the body portion of the head engaging band 214. More specifically, the head engaging band 214 may open and close the lens cover 20 mounted with the front camera to the center of the head engaging band 214 about hinges 10 at both sides thereof. When the lens cover 20 is open, the user may view a periphery visually through the transparent lens 30, and when the lens cover 20 is closed, the user may view the mixed reality image in which the virtual image is overlaid on the peripheral image through the band-type flexible display 212 disposed in the user's eye direction of the lens cover 20.”).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to apply Yun's camera arrangement to Moll so that the sensors can capture image data from around the user, providing the user with more context of the surroundings and a more immersive experience.
With regard to claim 15: Moll and Yun disclose the method according to claim 12, wherein the pose model information includes information regarding one or a plurality of the poses from the following list: finger pose, hand pose, forearm pose, arm pose, shoulder pose, upper body pose, lower body pose, overall body pose, movement poses comprising sitting pose, standing pose, lying pose, walking pose, running pose, jumping pose, dancing pose, arm movement pose, hand movement pose (Moll, The gesture recognition service 422 generates one or more current skeletal models of the user's upper body, arms, hands, and fingers, paragraph 50: “In operation 438, the gesture recognition service 422 receives the current tracking data 464 from the tracking service 424 and generates current detected gesture data 436 based on the current tracking data 464. In some examples, the gesture recognition service 422 generates one or more current skeletal models of the user's upper body, arms, hands, and fingers based on landmark data of landmarks included in the current tracking data 464. The gesture recognition service 422 compares the one or more current skeletal models to previously generated gesture skeletal models. The gesture recognition service 422 determines a detected gesture on a basis of the comparison of the one or more current skeletal models with the gesture skeletal models and generates the current detected gesture data 436 based on the detected gesture. In additional examples, the gesture recognition service 422 generates the one or more current skeletal models based on the landmark data. The gesture recognition service 422 determines the detected gesture on a basis of categorizing the current skeletal models using artificial intelligence methodologies and a gesture model previously generated using machine learning methodologies. The gesture recognition service 422 generates the current detected gesture data 436 based on the detected gesture.”).
Claim 16 is rejected for the same reason as claim 12.
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Moll in view of Yun, and further in view of Raffle et al., Pat. No. US 9,171,198 B1 (hereinafter “Raffle”).
With regard to claim 4:
Moll and Yun disclose the glasses device according to claim 1, wherein the frame structure includes at least one temple element for fixing the glasses device to an ear of the user (Moll, see fig. 1 for the temple elements of the glasses).
Moll and Yun do not disclose the aspect wherein the at least one sensor arrangement is arranged at least partially on the temple element.
However, Raffle discloses the aspect wherein the frame structure includes at least one temple element for fixing the glasses device to an ear of the user, and wherein the at least one sensor arrangement is arranged at least partially on the temple element (see video camera 120, paragraph 19: “The HMD 102 can include an on-board computing system 118, a video camera 120, a sensor 122, and a finger-operable touch pad 124. The on-board computing system 118 is shown to be positioned on the extending side arm 114 of the HMD 102. The on-board computing system 118 can be provided on other parts of the HMD 102 or can be positioned remote from the HMD 102. For example, the on-board computing system 118 could be wire- or wirelessly-connected to the HMD 102. The on-board computing system 118 can include a processor and memory, for example. The on-board computing system 118 can be configured to receive and analyze data from the video camera 120 and the finger-operable touch pad 124 (and possibly from other sensory devices, user interfaces, or both) and generate images for output by the lens elements 110 and 112. The on-board computing system can take the form of the computing system 300, which is discussed below in connection with FIG. 3.”). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to apply Raffle to Moll so that the sensors are arranged at least partially on the temple element, where they do not obstruct the user’s view and can place sensing elements closer to either or both sides of the user’s head.
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Moll in view of Yun, and further in view of Philips, Pub. No. US 2022/0225859 A1 (hereinafter “Philips”).
With regard to claim 6:
Moll and Yun do not disclose the aspect wherein the sensor arrangement further includes at least one deflection device, and wherein an orientation of the sensor arrangement can be varied via the deflection device.
However, Philips discloses the aspect wherein the sensor arrangement further includes at least one deflection device, and wherein an orientation of the sensor arrangement can be varied via the deflection device (the wire deflector changes the orientation of the camera sensor, paragraph 65: “A block diagram is shown in FIG. 10, including an endoscope 1012 and a controller 1010. The connection between them may be wired (in which case they each have an electrical connector) or wireless (in which case they each include a wireless transceiver). The endoscope 1012 includes a camera 1030 and, in an embodiment, an orientation sensor 1056 at the distal end of the endoscope 1012. The orientation sensor 1056 may be an inertial measurement unit (IMU), accelerometer, gyroscope, or other suitable sensor. The endoscope 1012 also includes a light source 1062 and a wire deflector 1058 that is coupled to the controller 1010 and receives instructions that cause wire deflection in the endoscope 1012 to change the orientation of the distal end, and camera, of the endoscope 1012.”). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to apply Philips to Moll and Yun so that the system is more flexible, allowing the sensor to be adjusted to different orientations.
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Moll in view of Yun, and further in view of Li, CN 103654774 A (hereinafter “Li”).
With regard to claim 10:
Moll and Yun do not disclose the glasses device according to claim 1, further comprising: a transmitting/receiving unit configured to transmit the sensor data of the sensor arrangement to an external computing unit for a pose determination.
However, Li discloses the aspect of a transmitting/receiving unit configured to transmit the sensor data of the sensor arrangement to an external computing unit for a pose determination (paragraph 62: “In one variant embodiment, the processing unit 14 using the Bluetooth module 17 the multi-channel sEMG signal and front arm motion sensing signal transmitted to an external device such as PC machine, and for human body pose and hand gesture motion identification and modeling on the external device, preferably visual modeling, as gesture which can be identified for gesture control.”). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to apply Li to Moll and Yun so that the system can use an external device for the pose determination, where the external device may have more computational power, provide better results, and conserve resources of the local device.
Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Moll in view of Yun, and further in view of Petersen, Pub. No. WO 2022/223192 A1 (hereinafter “Petersen”).
With regard to claim 14:
Moll and Yun do not disclose the method according to claim 12, wherein the received sensor data include lidar data of at least one laser feedback interferometry sensor, and wherein performing of the pose determination includes: performing a laser feedback interferometry analysis and determining distance values and/or velocity values of at least subareas of the subareas of the body of the user represented by the sensor data.
However, Petersen discloses the aspect wherein the received sensor data include lidar data of at least one laser feedback interferometry sensor, and wherein performing of the pose determination includes: performing a laser feedback interferometry analysis and determining distance values and/or velocity values of at least subareas of the subareas of the body of the user represented by the sensor data (“The method thus allows eye gestures to be recognized in a particularly simple and efficient manner with a particularly high level of user comfort. The special way of recognizing the eye gestures by means of laser feedback interferometry offers the advantage of a particularly high temporal sampling rate, so that the eye gestures can be recognized with a particularly high temporal resolution. In addition, the method offers the advantage that simple and inexpensive components that have a low energy requirement can be used. It is also advantageous that no moving components, such as scanning devices, are required, which means that flexible and robust application options are available.”). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to apply Petersen to Moll and Yun so that the system can use laser feedback interferometry to determine user poses with greater precision and higher temporal resolution.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DI XIAO, whose telephone number is (571) 270-1758. The examiner can normally be reached 9 AM-5 PM EST, Monday through Friday.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Stephen Hong, can be reached at (571) 272-4124. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DI XIAO/Primary Examiner, Art Unit 2178