Prosecution Insights
Last updated: April 19, 2026
Application No. 19/079,902

WEARABLE DEVICE FOR GUIDING USER'S POSTURE AND METHOD THEREOF

Non-Final OA: §102, §103
Filed: Mar 14, 2025
Examiner: BOLOTIN, DMITRIY
Art Unit: 2623
Tech Center: 2600 — Communications
Assignee: Samsung Electronics Co., Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 81% (Favorable)
OA Rounds: 1-2
To Grant: 2y 4m
With Interview: 94%

Examiner Intelligence

Career Allow Rate: 81% (901 granted / 1116 resolved; +18.7% vs TC avg), above average
Interview Lift: +12.8% (moderate), measured across resolved cases with interview
Avg Prosecution: 2y 4m (typical timeline)
Career History: 1137 total applications across all art units; 21 currently pending
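
As a sanity check, the headline figures compose from the raw counts in a straightforward way. A minimal sketch, assuming the allow rate is simply grants over resolved cases and that the "+18.7% vs TC avg" delta is in percentage points (both are assumptions; the vendor's exact methodology is not stated):

```python
# Minimal sketch of how the headline examiner numbers appear to be derived.
# Assumption: allow rate = grants / resolved; "vs TC avg" read as points.
granted, resolved = 901, 1116

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")            # 80.7%, shown as 81%

implied_tc_avg = allow_rate - 0.187                      # from "+18.7% vs TC avg"
print(f"Implied TC 2600 average: {implied_tc_avg:.1%}")  # ~62.0%
```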

Statute-Specific Performance

§101: 3.2% (-36.8% vs TC avg)
§102: 26.2% (-13.8% vs TC avg)
§103: 43.1% (+3.1% vs TC avg)
§112: 16.5% (-23.5% vs TC avg)
Deltas are relative to the Tech Center average estimate; based on career data from 1116 resolved cases.
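
Since the chart itself did not survive extraction, the comparison it drew can be reconstructed from the deltas. A small sketch, assuming each delta is the examiner's rejection rate minus the Tech Center average, in percentage points (an assumption about how the chart was built):

```python
# Hypothetical reconstruction of the statute chart: the Tech Center
# average is back-derived as (examiner rate - delta), an assumption.
examiner_rate = {"§101": 3.2, "§102": 26.2, "§103": 43.1, "§112": 16.5}
delta_vs_tc = {"§101": -36.8, "§102": -13.8, "§103": 3.1, "§112": -23.5}

for statute, rate in examiner_rate.items():
    tc_avg = rate - delta_vs_tc[statute]
    print(f"{statute}: examiner {rate:4.1f}% vs TC avg {tc_avg:4.1f}% "
          f"({delta_vs_tc[statute]:+.1f} pts)")
```

Notably, the implied baseline works out to 40.0% for every statute, consistent with the "Tech Center average estimate" being a single figure rather than a per-statute one.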

Office Action

DETAILED ACTION

It would be of great assistance to the Office if all incoming papers pertaining to a filed application carried the following items:
1. Application number (checked for accuracy, including series code and serial no.).
2. Group art unit number (copied from most recent Office communication).
3. Filing date.
4. Name of the examiner who prepared the most recent Office action.
5. Title of invention.
6. Confirmation number (see MPEP § 503).

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-3, 8, and 12-16 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Jones et al. (US 2021/0327142).

As to claim 1, Jones discloses a wearable device comprising: a sensor (the HR system 700 includes one or more sensors in a sensing block 740 [0080]); a camera (sensors used by some embodiments of HR systems include, but are not limited to, a camera that captures images [0023]); a display (the display technology used by an HR system embodiment may include any method of projecting an image to an eye [0024]; 750 of fig. 7); memory comprising one or more storage media (memory 730 of fig. 7) storing instructions (instructions 732 of fig. 7); and at least one processor (processor 710 of fig. 7) comprising processing circuitry [0085], wherein the instructions (instructions 732 of fig. 7), when executed by the at least one processor (processor 710 of fig. 7) individually or collectively, cause the wearable device to: obtain, in a state that the wearable device is worn by a user, an image using the camera (sensors used by some embodiments of HR systems include, but are not limited to, a camera that captures images [0023], wherein the method may be performed by a hybrid-reality (HR) system and may utilize a head-mounted display (HMD) worn by a user [0092]; a real-world object in a field of view of the user may be selected and a virtual object of the one or more virtual objects may be rendered at a position in the first image based on a real-world position of the selected real-world object [0093]); in the state, identify a posture of the user based on data of the sensor (the method starts 801 by establishing 803 a first situation of a body part of the user in three-dimensional (3D) space, for example using depth sensors in combination with object recognition [0092], wherein the body part situation may be any body part of the user in any situation, including, but not limited to, a head position, a head orientation, a hand position, a hand orientation, a foot position, or a foot orientation [0092]); based on identifying the posture of the user, which is a first preset posture, obtain a first visual object corresponding to an external object using the camera, and display a second visual object for guiding changing of the posture within a field-of-view (FoV) (the flowchart 800 continues by delivering a stimulus 820 to prompt the user to move the body part from the first situation to the second situation; in some embodiments, a first image of one or more virtual objects is rendered 824 based on at least the second situation and the first image displayed 826 to the user to deliver the stimulus [0093]; in such embodiments, a real-world object in a field of view of the user may be selected and a virtual object of the one or more virtual objects may be rendered at a position in the first image based on a real-world position of the selected real-world object [0093]); and based on identifying that the posture of the user is changed from the first preset posture to a second preset posture (comparing change in user situation at 840 of fig. 8 to identify whether the situation is changed [0092], [0098]), display the obtained first visual object in an area including a center of the FoV (in such embodiments, a real-world object in a field of view of the user may be selected and a virtual object of the one or more virtual objects may be rendered at a position in the first image based on a real-world position of the selected real-world object [0093]).

As to claim 2 (dependent on 1), Jones discloses the wearable device, wherein the instructions (instructions 732 of fig. 7), when executed by the at least one processor (processor 710 of fig. 7) individually or collectively, cause the wearable device to: display, in at least a portion of the FoV, a third visual object for indicating a parameter associated with the data of the sensor (displaying a virtual object with changing appearance [0052], [0095]).

As to claim 3 (dependent on 2), Jones discloses the wearable device, wherein the instructions (instructions 732 of fig. 7), when executed by the at least one processor (processor 710 of fig. 7) individually or collectively, cause the wearable device to: identify, based on obtaining the data of the sensor, which is an accelerometer, the posture of the user (the object's position and/or orientation in three-dimensional space is detected using an accelerometer [0023], [0051]).

As to claim 8 (dependent on 1), Jones discloses the wearable device, wherein the instructions (instructions 732 of fig. 7), when executed by the at least one processor (processor 710 of fig. 7) individually or collectively, cause the wearable device to: display a fourth visual object, which is a virtual object corresponding to the posture of the user (displaying a virtual object based on the second situation [0094]).

As to claim 12 (dependent on 1), Jones discloses the wearable device, further comprising: a haptic sensor (the HR system 700 includes one or more sensors in a sensing block 740 [0080], including haptic transducers [0097]), wherein the instructions (instructions 732 of fig. 7), when executed by the at least one processor (processor 710 of fig. 7) individually or collectively, cause the wearable device to: guide, based on identifying the preset posture, changing of the posture using the haptic sensor (the stimulus may be delivered 820 as a haptic stimulus using a haptic transducer [0097]).

As to claim 13 (dependent on 1), Jones discloses the wearable device, wherein the instructions (instructions 732 of fig. 7), when executed by the at least one processor (processor 710 of fig. 7) individually or collectively, cause the wearable device to: adjust, based on adjusting an alpha value of the first visual object, a transparency of the first visual object (adjusting transparency of the object [0093]).
As to claim 14, Jones discloses a method of a wearable device, the method comprising: obtaining, by the wearable device, in a state that the wearable device is worn by a user, an image using a camera (sensors used by some embodiments of HR systems include, but are not limited to, a camera that captures images [0023], wherein the method may be performed by a hybrid-reality (HR) system and may utilize a head-mounted display (HMD) worn by a user [0092]; a real-world object in a field of view of the user may be selected and a virtual object of the one or more virtual objects may be rendered at a position in the first image based on a real-world position of the selected real-world object [0093]); in the state, identifying, by the wearable device, a posture of the user based on data of a sensor (the method starts 801 by establishing 803 a first situation of a body part of the user in three-dimensional (3D) space, for example using depth sensors in combination with object recognition [0092], wherein the body part situation may be any body part of the user in any situation, including, but not limited to, a head position, a head orientation, a hand position, a hand orientation, a foot position, or a foot orientation [0092]); based on identifying the posture of the user, which is a first preset posture, obtaining, by the wearable device, a first visual object corresponding to an external object using the camera, and displaying, by the wearable device, a second visual object for guiding changing of the posture within a field-of-view (FoV) (the flowchart 800 continues by delivering a stimulus 820 to prompt the user to move the body part from the first situation to the second situation; in some embodiments, a first image of one or more virtual objects is rendered 824 based on at least the second situation and the first image displayed 826 to the user to deliver the stimulus [0093]; in such embodiments, a real-world object in a field of view of the user may be selected and a virtual object of the one or more virtual objects may be rendered at a position in the first image based on a real-world position of the selected real-world object [0093]); and based on identifying that the posture of the user is changed from the first preset posture to a second preset posture (comparing change in user situation at 840 of fig. 8 to identify whether the situation is changed [0092], [0098]), displaying, by the wearable device, the obtained first visual object in an area including a center of the FoV (in such embodiments, a real-world object in a field of view of the user may be selected and a virtual object of the one or more virtual objects may be rendered at a position in the first image based on a real-world position of the selected real-world object [0093]).

As to claim 15 (dependent on 14), Jones discloses the method, further comprising: displaying, in at least a portion of the FoV, a third visual object for indicating a parameter associated with the data of the sensor (displaying a virtual object with changing appearance [0052], [0095]).

As to claim 16 (dependent on 15), Jones discloses the method, further comprising: identifying, based on obtaining the data of the sensor (the HR system 700 includes one or more sensors in a sensing block 740 [0080]), which is an accelerometer, the posture of the user (the object's position and/or orientation in three-dimensional space is detected using an accelerometer [0023], [0051]).
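Read as an algorithm, independent claims 1 and 14 recite the same two-stage loop: detect the first preset posture, show corrective guidance, then, once the posture changes to the second preset posture, center the captured object in the field of view. A minimal sketch of that control flow; the classifier, the 30°/10° pitch thresholds, and all strings are invented stand-ins, not anything from the application or from Jones:

```python
"""Minimal sketch of the posture-guidance loop recited in claims 1 and 14.
All helpers, thresholds, and strings are invented stand-ins, not APIs or
values from the application or from Jones (US 2021/0327142)."""
from enum import Enum, auto

class Posture(Enum):
    FIRST_PRESET = auto()   # posture to be corrected (e.g., slouching)
    SECOND_PRESET = auto()  # corrected posture
    OTHER = auto()

def classify_posture(pitch_deg: float) -> Posture:
    # Toy classifier over a single accelerometer-derived value (assumption).
    if pitch_deg > 30.0:
        return Posture.FIRST_PRESET
    if pitch_deg < 10.0:
        return Posture.SECOND_PRESET
    return Posture.OTHER

def guide_posture(pitch_samples: list) -> list:
    """Replay sensor samples and emit the display actions the claims recite."""
    actions, prev, first_visual = [], None, None
    for pitch in pitch_samples:
        posture = classify_posture(pitch)
        if posture is Posture.FIRST_PRESET:
            first_visual = "captured external object"          # first visual object
            actions.append("display guidance overlay in FoV")  # second visual object
        elif (prev is Posture.FIRST_PRESET
              and posture is Posture.SECOND_PRESET
              and first_visual is not None):
            # posture changed from the first preset to the second preset
            actions.append(f"display '{first_visual}' at FoV center")
        prev = posture
    return actions

print(guide_posture([35.0, 5.0, 5.0]))
# ['display guidance overlay in FoV', "display 'captured external object' at FoV center"]
```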
Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 4 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Jones in view of Hatfield et al. (US 2023/0229010).

As to claim 4 (dependent on 3) and claim 17 (dependent on 16), Jones discloses the wearable device and the method, including the instructions (732 of fig. 7) executed by the at least one processor (710 of fig. 7), but fails to disclose that the instructions cause the wearable device to: identify, based on the accelerometer, an angle between a first axis associated with a body part of the user, and a second axis rotatable based on the first axis according to changing of the posture of the user, and identify, based on identifying the angle, the posture of the user. In the same field of endeavor, Hatfield discloses a head-mounted device (TITLE) comprising a processor (150 of fig. 9) configured to: identify, based on the accelerometer (posture detection comprises an accelerometer [0078]), an angle between a first axis associated with a body part of the user, and a second axis rotatable based on the first axis according to changing of the posture of the user (identify angle 40 between head and torso [0051], [0058]), and identify, based on identifying the angle, the posture of the user (identify posture based on the angle [0058]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Jones and Hatfield such that the posture was identified based on the detected angle as disclosed by Hatfield, with motivation to prompt movement of the user to promote the user's health (Hatfield, [0037]).
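The Hatfield limitation is geometrically simple: one axis fixed to a body part, a second axis that rotates with posture, and the angle between them classifying the posture. A minimal sketch under assumed inputs (the axis vectors and the 20°/60° thresholds are invented for illustration, not Hatfield's implementation), which also shows the dual-threshold pattern the examiner indicated as allowable in claims 5 and 18:

```python
# Minimal sketch of the angle-between-axes test mapped to Hatfield
# (claims 4/17) and the dual-threshold test of claims 5/18. The axis
# vectors and the 20/60-degree thresholds are invented for illustration.
import math

def axis_angle_deg(a, b):
    """Angle between two 3D axes, e.g., a torso axis and a head axis."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: math.sqrt(sum(x * x for x in v))
    cos = max(-1.0, min(1.0, dot / (norm(a) * norm(b))))
    return math.degrees(math.acos(cos))

def is_first_preset(angle, lo=20.0, hi=60.0):
    """Claims 5/18 pattern: first preset posture iff lo < angle < hi."""
    return lo < angle < hi

torso = (0.0, 0.0, 1.0)  # first axis, associated with a body part
head = (0.0, math.sin(math.radians(35)), math.cos(math.radians(35)))  # second axis
angle = axis_angle_deg(torso, head)
print(f"head-torso angle: {angle:.1f} deg; first preset: {is_first_preset(angle)}")
# head-torso angle: 35.0 deg; first preset: True
```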
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Jones in view of Brems et al. (US 2021/0034222).

As to claim 11 (dependent on 1), Jones discloses the wearable device, further comprising: communication circuitry (I/O block 720 of fig. 7), and the instructions (732 of fig. 7) executed by the at least one processor (710 of fig. 7), but does not explicitly disclose that the instructions cause the wearable device to: in a state that a communication link of the wearable device and the external object is established through the communication circuitry, receive information to display a screen of the external object from the external object, and display, based on the information, the screen in the FoV. In the same field of endeavor, Brems discloses a wearable device (102 of fig. 1) in communication with an external device (second device [0082]), wherein, in a state that a communication link of the wearable device and the external object is established through the communication circuitry, the wearable device receives information to display a screen of the external object from the external object (obtaining, by the first device, content displayed by a second device, 1004 of fig. 10 [0081-0082]), and displays, based on the information, the screen in the FoV (displaying, by the first device, the content with the virtual content, 1006 of fig. 10 [0082]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Jones and Brems, such that the screen of the external object was displayed as disclosed by Brems, with motivation to enhance user experiences in a wide range of contexts (Brems [0003]).
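The claim-11 combination is essentially a screen-sharing handoff: an established link, a content payload from the external device, and a compositing step in the FoV. A minimal sketch with an in-memory buffer standing in for the link; the length-prefixed framing and every name here are assumptions for illustration, not Brems' protocol:

```python
# Minimal sketch of the claim-11 flow mapped to Brems: over an
# established link, receive another device's screen content, then
# composite it into the headset FoV. The framing (a 4-byte big-endian
# length prefix) and all names are invented stand-ins, not Brems' API.
import io
import struct

def send_screen(buf: io.BytesIO, frame: bytes) -> None:
    buf.write(struct.pack("!I", len(frame)) + frame)  # external-device side

def receive_screen(buf: io.BytesIO) -> bytes:
    (length,) = struct.unpack("!I", buf.read(4))      # wearable side
    return buf.read(length)

link = io.BytesIO()  # stand-in for an established communication link
send_screen(link, b"\x89PNG...fake-screen-frame")
link.seek(0)
frame = receive_screen(link)
print(f"compositing {len(frame)}-byte external screen into the FoV")
```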
Allowable Subject Matter

Claims 5-7, 9, 10, and 17-20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter:

As to claim 5 (dependent on 4), the prior art of record fails to disclose, alone or in combination, instructions that cause the wearable device to: identify, based on identifying the angle, which is greater than a first threshold, and less than a second threshold bigger than the first threshold, the first preset posture. (Emphasis added.)

As to claim 18 (dependent on 17), the prior art of record fails to disclose, alone or in combination, the method further comprising: identifying, based on identifying the angle, which is greater than a first threshold, and less than a second threshold bigger than the first threshold, the first preset posture. (Emphasis added.)

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DMITRIY BOLOTIN, whose telephone number is (571) 270-5873. The examiner can normally be reached M-F, 9AM-5PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chanh Nguyen, can be reached at (571) 272-7772. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DMITRIY BOLOTIN/
Primary Examiner, Art Unit 2623

Prosecution Timeline

Mar 14, 2025
Application Filed
Feb 21, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603040: Pixel Circuit and Display Device Including the Same (2y 5m to grant; granted Apr 14, 2026)
Patent 12591404: HUMAN-COMPUTER INTERFACE-DRIVEN DEMAND-AWARE SCREEN SHARING (2y 5m to grant; granted Mar 31, 2026)
Patent 12592167: CURVATURE VARIABLE DEVICE (2y 5m to grant; granted Mar 31, 2026)
Patent 12592190: PIXEL CIRCUIT, DISPLAY PANEL COMPRISING THE PIXEL CIRCUIT, AND DISPLAY DEVICE (2y 5m to grant; granted Mar 31, 2026)
Patent 12585357: ELECTRONIC DEVICE AND METHOD OF DRIVING THE SAME (2y 5m to grant; granted Mar 24, 2026)
Study what changed to get past this examiner, based on the 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 81%
With Interview: 94% (+12.8%)
Median Time to Grant: 2y 4m
PTA Risk: Low
Based on 1116 resolved cases by this examiner. Grant probability derived from career allow rate.
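
The projection card appears to compose directly from the career statistics above. A minimal sketch of that assumed derivation (the additive interview-lift model is an inference from the displayed numbers, not a documented formula):

```python
# Assumed derivation of the projection card from the examiner statistics
# above; the additive interview adjustment and the capping are guesses.
base = 0.81              # career allow rate, used as grant probability
interview_lift = 0.128   # +12.8 percentage points

with_interview = min(base + interview_lift, 1.0)
print(f"Grant probability: {base:.0%}")          # 81%
print(f"With interview:   {with_interview:.0%}") # 94%
```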
