Prosecution Insights
Last updated: April 19, 2026
Application No. 18/619,078

DISPLAY DEVICE, WEARABLE ELECTRONIC DEVICE, AND OPERATING METHOD OF ELECTRONIC DEVICE

Status: Final Rejection (§103)
Filed: Mar 27, 2024
Examiner: SNYDER, ADAM J
Art Unit: 2623
Tech Center: 2600 — Communications
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
OA Round: 2 (Final)

Grant Probability: 69% (Favorable)
Expected OA Rounds: 3-4
To Grant: 2y 7m
With Interview: 88%

Examiner Intelligence

Career Allow Rate: 69% (622 granted / 896 resolved; +7.4% vs TC avg, above average)
Interview Lift: +18.8% for resolved cases with an interview (a strong lift)
Typical Timeline: 2y 7m average prosecution; 30 applications currently pending
Career History: 926 total applications across all art units
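
The percentages above follow from simple arithmetic on the career counts. Below is a minimal sketch of the likely derivation, assuming the interview lift is a percentage-point difference in allow rate that the dashboard adds directly to the baseline; the function names are illustrative, not the analytics vendor's actual API.

```python
# Sketch of the arithmetic behind the headline figures, under the
# assumption that "With Interview" is just baseline + lift. Names
# here are hypothetical, not the vendor's API.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a fraction of resolved cases."""
    return granted / resolved

def with_interview(base: float, lift: float) -> float:
    """Grant probability with the interview lift applied, capped at 1.0."""
    return min(base + lift, 1.0)

base = allow_rate(622, 896)           # 0.694... -> displayed as 69%
lifted = with_interview(base, 0.188)  # 0.882... -> displayed as 88%
print(f"allow rate: {base:.1%}, with interview: {lifted:.1%}")
```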

Statute-Specific Performance

§101: 0.5% (-39.5% vs TC avg)
§103: 59.3% (+19.3% vs TC avg)
§102: 26.6% (-13.4% vs TC avg)
§112: 6.8% (-33.2% vs TC avg)

Based on career data from 896 resolved cases; Tech Center averages are estimates.
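
Each "vs TC avg" delta is the examiner's share minus a Tech Center average. Notably, all four displayed deltas are consistent with a single average estimate of 40% per statute; a quick check follows, with the caveat that the 40% value is back-computed from this page, not a published USPTO figure.

```python
# Reproduce the displayed deltas. All four are consistent with one
# Tech Center average line at 40% (e.g. 59.3 - 40.0 = +19.3); that
# 40% is inferred from this page, not a published USPTO statistic.

examiner_share = {"§101": 0.5, "§103": 59.3, "§102": 26.6, "§112": 6.8}
TC_AVG = 40.0  # percent, back-computed estimate

for statute, share in examiner_share.items():
    print(f"{statute}: {share:.1f}% ({share - TC_AVG:+.1f}% vs TC avg)")
```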

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

The amendment filed on 09/11/2025 has been considered by the Examiner.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-19 are rejected under 35 U.S.C. 103 as being unpatentable over Jerauld (US 2014/0118225 A1) in view of Ross et al. (US 2018/0157333 A1).

Claim 1: Jerauld (Fig. 1A-21) discloses a display device (10; Fig. 1A; which discloses a head mounted display device), comprising: a sensing unit (113; Fig. 6A; Paragraph [0110]) configured to obtain movement information about at least one body part of a target user (706 or 704; Fig. 7A; Paragraph [0105]) responding to an extended reality (XR) image (Fig. 15A-17F) output by a display panel (120; Fig. 6A); and a processor (210; Fig. 6A) operably connected to the sensing unit (113; Fig. 6A), wherein the processor (210; Fig. 6A) is configured to (Fig. 10A) generate the XR image (Fig. 15A-17F) from movements of the target user (706 or 704; Fig. 7A; Paragraph [0105]) on a basis of the movement information (Paragraph [0105]; which discloses that the display provides feedback of emotional state based on the subject's gesture), calculate an emotional empathy degree and a physical empathy degree (1016; Fig. 10A) for another user (706 or 704; Fig. 7A; Paragraph [0105]) on a basis of the movement information of the target user (704; Fig. 7A) and movement information of the another user (706; Fig. 7A), and generate a final empathy degree (1018; Fig. 10A) of the target user (706 or 704; Fig. 7A; Paragraph [0105]) for the other user (706 or 704; Fig. 7A; Paragraph [0105]) on a basis of the emotional empathy degree and the physical empathy degree (1016; Fig. 10A).

Jerauld does not expressly disclose calculating on a basis of similarity between the movement information of the target user and the movement information of the another user. Ross (Fig. 1-6) discloses calculating (Paragraph [0075]; which discloses a matching of gestures between users such that the detected similarities include shape, movement, location, etc.) on a basis of similarity between the movement information (116; Fig. 1; 326; Fig. 3) of the target user (106; Fig. 1) and the movement information (112; Fig. 1; 324; Fig. 3) of the another user (302; Fig. 3). Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify Jerauld's head mounted display device by determining a similarity of movement between users, as taught by Ross, so as to provide a way for users to share particular portions of information with certain users in a same VR space while maintaining data and user privacy from other users in the VR space (Paragraph [0016]).

Claim 12: Jerauld (Fig. 1A-21) discloses an electronic device (10; Fig. 1A) comprising: a memory (214 and 212; Fig. 6A) for storing at least one instruction (Paragraph [0157]); and at least one processor (210; Fig. 6A) operably connected to the memory (214 and 212; Fig. 6A), wherein the at least one processor (210; Fig. 6A) is configured to: execute the at least one instruction (Fig. 10A), so as to receive first movement information (1012; Fig. 10A) for at least one body part (Paragraph [0111]) of a first user (704; Fig. 7A) responding to an extended reality (XR) image (Fig. 15A-17F) or second movement information (1012; Fig. 10A) for at least one body part (Paragraph [0111]) of a second user (706; Fig. 7A) responding to the XR image (Fig. 15A-17F), obtain first feature information (1015; Fig. 10A) comprising gaze information (Paragraph [0083]), face information (Paragraph [0105]), and position information (Paragraph [0046]) of the first user (704; Fig. 7A) from the first movement information (1012; Fig. 10A), obtain second feature information (1012; Fig. 10A) comprising gaze information (Paragraph [0083]), face information (Paragraph [0105]), and position information (Paragraph [0046]) of the second user (702; Fig. 7A) from the second movement information (1012; Fig. 10A), obtain weights for pieces (1018; Fig. 10A) of the feature information by using a neural network model (870; Fig. 8), and generate a final empathy degree of the first user (1020; Fig. 10A) for the second user (702; Fig. 7A) on a basis of the first feature information (1012; Fig. 10A), the second feature information (1012; Fig. 10A), and the weights (1018; Fig. 10A).

Jerauld does not expressly disclose generating on a basis of similarity between the first feature information and the second feature information. Ross (Fig. 1-6) discloses generating (Paragraph [0075]; which discloses a matching of gestures between users such that the detected similarities include shape, movement, location, etc.) on a basis of similarity between the first feature information (116; Fig. 1; 326; Fig. 3) and the second feature information (112; Fig. 1; 324; Fig. 3). Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify Jerauld's head mounted display device by determining a similarity of movement between users, as taught by Ross, so as to provide a way for users to share particular portions of information with certain users in a same VR space while maintaining data and user privacy from other users in the VR space (Paragraph [0016]).

Claim 19: Jerauld (Fig. 1A-21) discloses a wearable electronic device (10; Fig. 1A), comprising: a display panel (120; Fig. 6A) for outputting an extended reality (XR) image (Fig. 15A-17F) to users (702 and 704; Fig. 7A); a sensing unit (113; Fig. 6A) operably connected to the display panel (120; Fig. 6A), where the sensing unit (113; Fig. 6A) is configured to obtain first movement information (1012; Fig. 10A) about at least one body part (Paragraph [0111]) of a first user (702; Fig. 7A; Paragraph [0105]) responding to the XR image (Fig. 15A-17F); a communication unit (346; Fig. 6B; Paragraph [0093]) operably connected to the sensing unit (113; Fig. 6A), wherein the communication unit (346; Fig. 6B; Paragraph [0093]) is configured to receive second movement information (Fig. 19) about at least one body part (Paragraph [0111]) of a second user (704; Fig. 7A) responding to the XR image (Fig. 15A-17F); and a processor (210; Fig. 6A) operably connected to the communication unit (346; Fig. 6B; Paragraph [0093]), wherein the processor (210; Fig. 6A) is configured to calculate an emotional empathy degree and a physical empathy degree (1016; Fig. 10A) of the first user for the second user (702 and 704; Fig. 7A; Paragraph [0105]) on a basis of the first movement information (113; Fig. 6A; 702; Fig. 7A) and the second movement information (Fig. 19; 704; Fig. 7A), and generate a final empathy degree (1018; Fig. 10A) of the first user for the second user (706 and 704; Fig. 7A; Paragraph [0105]) on a basis of the emotional empathy degree and the physical empathy degree (1016; Fig. 10A).

Jerauld does not expressly disclose being configured to calculate on a basis of similarity between the first movement information and the second movement information. Ross (Fig. 1-6) discloses being configured to calculate (Paragraph [0075]; which discloses a matching of gestures between users such that the detected similarities include shape, movement, location, etc.) on a basis of similarity between the first movement information (116; Fig. 1; 326; Fig. 3) and the second movement information (112; Fig. 1; 324; Fig. 3). Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify Jerauld's head mounted display device by determining a similarity of movement between users, as taught by Ross, so as to provide a way for users to share particular portions of information with certain users in a same VR space while maintaining data and user privacy from other users in the VR space (Paragraph [0016]).

Claims 2 and 13: Jerauld (Fig. 1A-21) discloses wherein the processor (210; Fig. 6A) is configured to obtain gaze information (134; Fig. 6A; Paragraph [0083]) of each of the target user (704; Fig. 7A) and the other user (702; Fig. 7A) from the movement information of each of the target user (704; Fig. 7A; Paragraph [0105]) and the other user (702; Fig. 7A), generate a gaze consistency degree (1018; Fig. 10A) between the target user (704; Fig. 7A) and the other user (702; Fig. 7A) by using the gaze information (134; Fig. 6A; Paragraph [0083]), obtain face information (Paragraph [0105]; which discloses "expressions on the user's face are detectable by the device 2") of each of the target user (704; Fig. 7A) and the other user (702; Fig. 7A) from the movement information (1015; Fig. 10A) of each of the target user (704; Fig. 7A) and the other user (702; Fig. 7A), generate a facial expression similarity degree (1018; Fig. 10A) between the target user (704; Fig. 7A) and the other user (702; Fig. 7A) by using the face information (Paragraph [0105]; which discloses "expressions on the user's face are detectable by the device 2"), and generate the emotional empathy degree (1016; Fig. 10A) on the basis of at least one of (1010; Fig. 10A) the gaze consistency degree (134; Fig. 6A; Paragraph [0083]) and the facial expression similarity degree (Paragraph [0105]; which discloses "expressions on the user's face are detectable by the device 2").

Claims 3 and 14: Jerauld (Fig. 1A-21) discloses wherein the gaze information (134; Fig. 6A; Paragraph [0083]) comprises at least one of gaze direction information (Fig. 2A and 2B), pupil information (162l and 162r; Fig. 2A and 2B), and eye movement information (Paragraph [0054]; which discloses that the eye is rotated), and wherein the processor (210; Fig. 6A) is configured to generate the gaze consistency degree (1018; Fig. 10A) on the basis of at least one of a similarity degree in gaze directions (Fig. 2A and 2B) between the target user (704; Fig. 7A) and the other user (702; Fig. 7A), a difference in pupil sizes (162l and 162r; Fig. 2A and 2B) between the target user (704; Fig. 7A) and the other user (702; Fig. 7A), or a difference in eye movement speeds (Paragraph [0054]; which discloses that the eye is rotated) between the target user (704; Fig. 7A) and the other user (702; Fig. 7A).

Claim 4: Jerauld (Fig. 1A-21) discloses wherein the higher the similarity degree in the gaze directions is (Fig. 2A and 2B), the higher the gaze consistency degree is (Paragraph [0050]), and the smaller the difference in the pupil sizes (162l and 162r; Fig. 6A and 6B) and the difference in the eye movement speeds are (Fig. 6A and 6B), the higher the gaze consistency degree is (Paragraph [0050]).

Claims 5 and 15: Jerauld (Fig. 1A-21) discloses wherein the face information (Fig. 17A-17F) comprises movement information of facial muscles (Paragraph [0145]) corresponding to at least one of a plurality of parts of a face (Fig. 17A-17F; wherein the figures show movement of at least one of a plurality of parts of a face), and wherein the processor (210; Fig. 6A) is configured to generate the facial expression similarity degree (1018; Fig. 10A) on the basis of the movement information of facial muscles (Fig. 17A-17F).

Claims 6 and 16: Jerauld (Fig. 1A-21) discloses wherein the processor (210; Fig. 6A) is configured to obtain position information (144; Fig. 6A; Paragraph [0046]) between the target user (704; Fig. 7A) and the other user (702; Fig. 7A) from the movement information (144; Fig. 6A; Paragraph [0046]) of each of the target user (704; Fig. 7A) and the other user (702; Fig. 7A), generate a physical proximity degree (1015; Fig. 10A) and a movement similarity degree (1015; Fig. 10A) between the target user (704; Fig. 7A) and the other user (702; Fig. 7A) on the basis of the position information (144; Fig. 6A; Paragraph [0046]), and generate the physical empathy degree (1016; Fig. 10A) on the basis of at least one of the physical proximity degree (1015; Fig. 10A) or the movement similarity degree (1015; Fig. 10A).

Claims 7 and 17: Jerauld (Fig. 1A-21) discloses wherein the position information comprises head position information (Paragraph [0066]; which discloses "The inertial sensors are for sensing position, orientation, and sudden accelerations of head mounted display device 2. From these movements, head position may also be determined") of the target user (704; Fig. 7A) and wrist position information of the users (Paragraph [0111]; which discloses "matching an image data to image models of a wearer's hand or finger during a gesture may be used rather than skeletal tracking for recognizing gestures."), and wherein the processor (210; Fig. 6A) is configured to generate the physical proximity degree (850a and 854; Fig. 8) on the basis of at least one of a distance between a head position (Paragraph [0066]) of the target user (704; Fig. 7A) and a head position (Paragraph [0066]) of the other user (706; Fig. 7A) or distances between wrist positions (Paragraph [0111]) of the target user (704; Fig. 7A) and wrist positions (Paragraph [0111]) of the other user (706; Fig. 7A).

Claim 8: Jerauld (Fig. 1A-21) discloses wherein the closer the distance between the head position (Paragraph [0066]) of the target user (704; Fig. 7A) and the head position (Paragraph [0066]) of the other user (706; Fig. 7A) is, the higher the physical proximity degree is (Fig. 7A; wherein the figure shows users near each other), and the closer the distances between the wrist positions (Paragraph [0111]) of the target user (704; Fig. 7A) and the wrist positions (Paragraph [0111]) of the other user (706; Fig. 7A) are, the higher the physical proximity degree is (Fig. 7A; wherein the figure shows users near each other).

Claim 9: Jerauld (Fig. 1A-21) discloses wherein the position information comprises head position information (Paragraph [0066]; which discloses "The inertial sensors are for sensing position, orientation, and sudden accelerations of head mounted display device 2. From these movements, head position may also be determined") of the users (704 and 702; Fig. 7A) and wrist position information (Paragraph [0111]; which discloses "matching an image data to image models of a wearer's hand or finger during a gesture may be used rather than skeletal tracking for recognizing gestures.") of the users (704 and 702; Fig. 7A), and wherein the processor (210; Fig. 6A) is configured to generate the movement similarity degree (850a and 854l; Fig. 8) on the basis of a difference between a first center position relative to the head position (Paragraph [0066]; 704; Fig. 7A) and both wrist positions (Paragraph [0111]) of the target user (704; Fig. 7A) and a second center position relative to the head position (Paragraph [0066]; 706; Fig. 7A) and both wrist positions (Paragraph [0111]) of the other user (706; Fig. 7A). Ross (Fig. 1-6) discloses wherein the position information (Paragraph [0075]; which discloses a matching of gestures between users such that the detected similarities include shape, movement, location, etc.) comprises information regarding both wrist positions (116 and 112; Fig. 1) of the users (106 and 104; Fig. 1). Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify Jerauld's head mounted display device by determining a similarity of movement between users, as taught by Ross, so as to provide a way for users to share particular portions of information with certain users in a same VR space while maintaining data and user privacy from other users in the VR space (Paragraph [0016]).

Claim 10: Jerauld (Fig. 1A-21) discloses wherein the position information comprises hand position information (Paragraph [0111]; which discloses "matching an image data to image models of a wearer's hand or finger during a gesture may be used rather than skeletal tracking for recognizing gestures.") of the users (704 and 706; Fig. 7A), and wherein the processor (210; Fig. 6A) is configured to generate the movement similarity degree (806; Fig. 8) on the basis of at least one of differences in positions of fingers (Paragraph [0111]) between the target user (704; Fig. 7A) and the other user (706; Fig. 7A) and differences in angles of finger joints (Paragraph [0111]) between the target user (704; Fig. 7A) and the other user (706; Fig. 7A).

Claim 11: Jerauld (Fig. 1A-21) discloses wherein the display device (Fig. 8) comprises a neural network model (870; Fig. 8) trained on the basis of a sample empathy degree (Fig. 14) responded by the target user (704; Fig. 7A) for the other user (702; Fig. 7A) and sample feature information comprising gaze information (134; Fig. 6A), face movement information (113; Fig. 6A), and position information (144; Fig. 6A), which are obtained from training movement information (Fig. 14) and used to generate the final empathy degree (1016; Fig. 10A), and wherein the processor (210; Fig. 6A) is configured to generate the final empathy degree (1016; Fig. 10A) on the basis of weights (1018; Fig. 10A) of the neural network model (870; Fig. 8).

Claim 18: Jerauld (Fig. 1A-21) discloses wherein the position information comprises head position information (Paragraph [0066]; which discloses "The inertial sensors are for sensing position, orientation, and sudden accelerations of head mounted display device 2. From these movements, head position may also be determined"), wrist position information (Paragraph [0111]; which discloses "matching an image data to image models of a wearer's hand or finger during a gesture may be used rather than skeletal tracking for recognizing gestures."), and hand position information (Paragraph [0111]; which discloses "matching an image data to image models of a wearer's hand or finger during a gesture may be used rather than skeletal tracking for recognizing gestures.") of the users (704 and 702; Fig. 7A), and wherein the processor (210; Fig. 6A) is configured to generate the movement similarity degree (850a and 854l; Fig. 8) on the basis of a difference between a first center position relative to the head position (Paragraph [0066]; 704; Fig. 7A) and both wrist positions (Paragraph [0111]) of the first user (704; Fig. 7A) and a second center position relative to the head position (Paragraph [0066]; 702; Fig. 7A) and both wrist positions (Paragraph [0111]) of the second user (702; Fig. 7A), and a difference in angles of finger joints (Paragraph [0111]) between the first user (704; Fig. 7A) and the second user (702; Fig. 7A).

Response to Arguments

Applicant's arguments with respect to claims 1-19 have been considered but are moot in view of the new ground(s) of rejection. In view of the arguments, the references Jerauld (US 2014/0118225 A1) and Ross et al. (US 2018/0157333 A1) have been used for the new ground of rejection. Claims 1, 12, and 19 are rejected in view of the newly discovered reference to Ross et al. (US 2018/0157333 A1).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ADAM J SNYDER, whose telephone number is (571) 270-3460. The examiner can normally be reached Monday-Friday, 8am-4:30pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chanh D Nguyen, can be reached at (571) 272-7772. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Adam J Snyder/
Primary Examiner, Art Unit 2623
11/22/2025
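For orientation on the technology at issue: the rejected independent claims recite a concrete pipeline in which two users' movement features (gaze, face, position) are compared for similarity, combined into an emotional and a physical empathy degree, and then merged into a final empathy degree, with weights that claim 12 obtains from a neural network model. Below is a minimal sketch of that pipeline, assuming cosine similarity as the comparison and a fixed weight vector in place of the claimed neural-network weights; it illustrates the claim language only, not the application's actual algorithm.

```python
import math

# Illustrative sketch of the computation recited in the claims:
# per-user feature vectors -> pairwise similarity -> emotional and
# physical empathy degrees -> weighted final empathy degree.
# Cosine similarity and the fixed weights are assumptions; claim 12
# obtains the weights from a neural network model instead.

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def final_empathy(target, other, weights):
    # Emotional empathy from gaze consistency and facial-expression
    # similarity (claims 2-5); physical empathy from position/movement
    # similarity (claims 6-10). Both reduced to vector similarity here.
    emotional = (weights["gaze"] * cosine(target["gaze"], other["gaze"])
                 + weights["face"] * cosine(target["face"], other["face"]))
    physical = weights["position"] * cosine(target["position"], other["position"])
    return emotional + physical

target = {"gaze": [0.9, 0.1], "face": [0.2, 0.8], "position": [1.0, 2.0]}
other = {"gaze": [0.8, 0.2], "face": [0.3, 0.7], "position": [1.1, 2.1]}
print(final_empathy(target, other, {"gaze": 0.4, "face": 0.3, "position": 0.3}))
```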

Prosecution Timeline

Mar 27, 2024: Application Filed
Jun 09, 2025: Non-Final Rejection (§103)
Sep 11, 2025: Response Filed
Nov 22, 2025: Final Rejection (§103), the current action

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602108: SYSTEMS AND METHODS OF MINIMIZING AND MAXIMIZING DISPLAY OF THREE-DIMENSIONAL OBJECTS (granted Apr 14, 2026; 2y 5m to grant)
Patent 12603042: SHIFT REGISTER UNIT, GATE DRIVING CIRCUIT AND DISPLAY PANEL WITH PULL-UP VOLTAGE STABILIZING CIRCUITS (granted Apr 14, 2026; 2y 5m to grant)
Patent 12602759: VERIFICATION OF CRITICAL DISPLAY FRAME PORTIONS FOR MULTIPLE DISPLAYS IN A VIRTUAL MACHINE ENVIRONMENT (granted Apr 14, 2026; 2y 5m to grant)
Patent 12597400: DISPLAY PANEL AND DISPLAY DEVICE (granted Apr 07, 2026; 2y 5m to grant)
Patent 12586546: DISPLAY PANEL INCLUDING PRE-CHARGING CONTROL MODULE AND DISPLAY DEVICE (granted Mar 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 69%
With Interview: 88% (+18.8%)
Median Time to Grant: 2y 7m
PTA Risk: Moderate

Based on 896 resolved cases by this examiner; grant probability is derived from the career allow rate.
