Prosecution Insights
Last updated: April 19, 2026
Application No. 18/327,570

DISPLAY DEVICE AND OPERATION METHOD THEREOF

Final Rejection (§103)
Filed: Jun 01, 2023
Examiner: BOYD, ALEXANDER L
Art Unit: 2424
Tech Center: 2400 — Computer Networks
Assignee: Samsung Electronics Co., Ltd.
OA Round: 4 (Final)
Grant Probability: 74% (Favorable)
OA Rounds: 5-6
To Grant: 2y 5m
With Interview: 99%

Examiner Intelligence

Grants 74% — above average
Career Allow Rate: 74% (222 granted / 299 resolved; +16.2% vs TC avg)
Interview Lift: +24.4% (resolved cases with interview)
Typical Timeline: 2y 5m avg prosecution (35 currently pending)
Career History: 334 total applications across all art units
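The allow-rate and lift figures above reduce to simple ratios over the examiner's resolved cases. A minimal sketch of that arithmetic in Python (the granted/resolved counts come from the panel above; the per-group rates passed to `interview_lift` would be hypothetical, since the page reports only the +24.4% delta):

```python
# Career allow rate: grants as a share of resolved cases
# (counts taken from the examiner panel above).
granted = 222
resolved = 299
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.0%}")  # Career allow rate: 74%


def interview_lift(rate_with: float, rate_without: float) -> float:
    """Percentage-point lift in allow rate from holding an interview.

    The per-group rates are not shown on the page, so any inputs
    here are hypothetical; the page reports only the +24.4% result.
    """
    return rate_with - rate_without
```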

Statute-Specific Performance

§101: 4.8% (-35.2% vs TC avg)
§103: 53.9% (+13.9% vs TC avg)
§102: 15.1% (-24.9% vs TC avg)
§112: 18.5% (-21.5% vs TC avg)
Tech Center averages are estimates • Based on career data from 299 resolved cases
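The per-statute deltas above are simply the examiner's rate minus the Tech Center average, so the implied TC averages can be recovered from the figures shown. A small sketch of that subtraction (variable names are mine; the dashboard does not expose this computation):

```python
# Examiner's per-statute rate and the reported delta vs the Tech
# Center average, both in percentage points (figures from the table).
stats = {
    "§101": (4.8, -35.2),
    "§103": (53.9, +13.9),
    "§102": (15.1, -24.9),
    "§112": (18.5, -21.5),
}

# The implied TC average is the examiner's rate minus the delta.
tc_avg = {statute: round(rate - delta, 1)
          for statute, (rate, delta) in stats.items()}
print(tc_avg)  # every implied TC average works out to 40.0
```

Running the same subtraction across all four rows is a quick consistency check on the chart's numbers.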

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Status

Claims 1-16 are pending in this Office Action. Claims 1-2, 4, and 12-14 are amended.

Response to Amendment

The Amendment filed 7/25/2025 has been entered. The 35 U.S.C. 112(b) rejections previously set forth in the Non-Final Office Action mailed 5/2/2025 are withdrawn based on Applicant's amendments.

Response to Arguments

Applicant's arguments with respect to claims 1 and 12 have been fully considered, but are not persuasive. Applicant argues that Anderson fails to disclose or suggest "adjust the reproduction of the video content such that the movement of the user and the movement shown in the video content being reproduced are synchronized with each other". The examiner respectfully disagrees. Anderson teaches at par. 82-83, "plays the target movement in the video window 456" and "The movement training system 200 pauses the video and profile representation at each keyframe and presents a 'Hold This Pose' message in the dialog box 452. The system remains paused until the user 350 holds the pose in a stable position for a duration of time, such as one second, or until a particular amount of time has elapsed, such as five seconds". This demonstrates adjusting the video reproduction by pausing the video content until the user's movement is synchronized with the movement made in the video.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 5-8, and 10-16 are rejected under 35 U.S.C. 103 as being unpatentable over Anderson et al. (US 2015/0099252).

Regarding claims 1 and 12, Anderson teaches: A display device and an operation method of a display device comprising: a display [display device 110 (Fig. 1)]; a detector comprising at least one sensor [a motion tracking module 230 with a motion sensor (par. 26 and 40, Fig. 2)]; and at least one processor, comprising processing circuitry, individually and/or collectively configured [CPU 102 includes one or more processing cores (par. 26, Fig. 1)] to: identify a plurality of different movements included in the video content [different movements included in video may be annotated and stored in database 242 (par. 41, Fig. 2 and 4B)]; control reproduction of the video content including a plurality of frames on the display [CPU 102 provides display processor 112 with data and/or instructions defining the desired output images including video frames, including playing the target movement in the video window 456 (par. 32, 34, and Fig. 4D and 4E)]; detect a gesture corresponding to movement of a user based on a result of detection by the at least one sensor [using the motion tracking module and motion sensor to detect and capture a posture movement or a dynamic movement of the user (par. 26, 64, 68, 71, and 101, Fig. 4B)]; identify at least one frame including a movement corresponding to the detected gesture from among the plurality of different movements included in the video content [search the movement database 242 for movements that best match the movement of the user 350 and displaying at least one frame including the movements (par. 64 and 71-72, Fig. 4B)]; and adjust the reproduction of the video content such that the movement of the user and the movement shown in the video content being reproduced are synchronized with each other [Playing the target movement in the video window 456. The movement training system 200 pauses the video and profile representation at each keyframe and presents a "Hold This Pose" message in the dialog box 452. The system remains paused until the user 350 holds the pose in a stable position for a duration of time, such as one second, or until a particular amount of time has elapsed, such as five seconds (par. 79 and 81-83, Fig. 4D and 4E)].

Anderson does not explicitly disclose: the detection of the posture movement or the dynamic movement of the user, used for querying the movement database, is while the video content is being reproduced. However, in another embodiment Anderson teaches: detection of a posture or pose corresponding to movement of the user is while the video content is being reproduced [playing the target movement in the video window 456 and detecting the user movement to perform the pose (par. 82-83, Fig. 4E)]. This is in accordance with page 12 of the specification disclosing "the terms 'posture', 'gesture', 'motion', 'pose', and/or 'movement' are collectively referred to as 'gesture'." It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the embodiments of Anderson such that detecting the gesture corresponding to the movement of the user is while the video content is being reproduced. The motivation for doing so would have been to allow the user to quickly navigate to a different movement (Anderson - par. 10). Therefore, it would have been obvious to combine the embodiments of Anderson to obtain the invention as specified in the instant claim.

Regarding claims 2 and 13, Anderson teaches the display device of claim 1; Anderson further teaches: the at least one processor is individually and/or collectively configured to adjust a reproduction speed of the video content to synchronize the movement of the user and the movement shown in the video content, and wherein the adjusting the reproduction speed of the video content includes at least one of decreasing the reproduction speed of the video content or increasing the reproduction speed of the video content [Playing the target movement in the video window 456. The movement training system 200 pauses the video and profile representation at each keyframe and presents a "Hold This Pose" message in the dialog box 452. The system remains paused until the user 350 holds the pose in a stable position for a duration of time, such as one second, or until a particular amount of time has elapsed, such as five seconds (par. 79 and 81-83, Fig. 4D and 4E). Pausing demonstrates reducing the reproduction speed to zero.].

Regarding claim 3, Anderson teaches the display device of claim 1; Anderson further teaches: the at least one processor is configured to pause the reproduction of the video content such that the at least one frame corresponding to the detected gesture among the plurality of frames included in the video content is displayed on the display [pausing the video (par. 83-84, Fig. 4B and 4E)].
Regarding claim 5, Anderson teaches the display device of claim 1; Anderson further teaches: the at least one processor is configured to move a reproduction point of the video content such that the at least one frame corresponding to the detected gesture among the plurality of frames included in the video content is displayed on the display [navigate to a particular keyframe (par. 97, Fig. 4G)].

Regarding claim 6, Anderson teaches the display device of claim 1; Anderson further teaches: the at least one processor is configured to obtain information about reproduction time periods of the identified plurality of movements, and control, based on the information about the reproduction time periods, the at least one frame corresponding to the detected gesture among the plurality of frames included in the video content, to be displayed on the display [receiving keyframe data associated with movements and their timing on a timeline of the video (par. 55-56, Fig. 4A) and navigating to a time of a particular keyframe (par. 97, Fig. 4G)].

Regarding claim 7, Anderson teaches the display device of claim 1; Anderson further teaches: the at least one processor is configured to analyze the video content to identify the plurality of different movements included in the video content, and perform control such that tagged video content is generated at least by inserting at least one tag corresponding to each of the identified plurality of movements into the video content [determining different keyframe movements on the video and adding an annotation for each keyframe movement (par. 56, Fig. 4A)].

Regarding claim 8, Anderson teaches the display device of claim 7; Anderson further teaches: the at least one processor is configured to control, based on the at least one tag, the at least one frame corresponding to the detected gesture among the plurality of frames included in the video content, to be displayed on the display [the search results including the at least one frame including the movements are displayed on the display. The user can select one of the movements and see a video of the movement displayed (par. 72 and 79, Fig. 4B and 4D). The video includes an annotation that can be viewed by the user (par. 56 and 95, Fig. 4A and 4G)].

Regarding claim 10, Anderson teaches the display device of claim 1; Anderson further teaches: the at least one processor is configured to obtain an image corresponding to the detected gesture and control the obtained image to be displayed to be superimposed on a reproduction screen of the video content [obtaining images corresponding to the user movement (par. 72, Fig. 4B). Overlaying an image (par. 54 and 93, Fig. 4A and 4G)].

Regarding claim 11, Anderson teaches the display device of claim 1; Anderson further teaches: the at least one processor is configured to control guide information about the detected gesture, to be displayed on a reproduction screen of the video content [providing guidance information to the user regarding the user movement (par. 76, 81, 83, 85, and 87, Fig. 4E and 4F)].

Regarding claim 14, Anderson teaches the method of claim 12; Anderson further teaches: the adjusting of the reproduction comprises performing at least one of: moving a reproduction point of the video content, and pausing the reproduction of the video content, such that the at least one frame corresponding to the detected gesture among the plurality of frames included in the video content is displayed on the display [navigate to a particular keyframe (par. 97, Fig. 4G); pausing the video (par. 83-84, Fig. 4B and 4E)].
Regarding claim 15, Anderson teaches the method of claim 12; Anderson further teaches: analyzing the video content to identify the plurality of different movements included in the video content and obtaining information about reproduction time periods of the identified plurality of movements, wherein the adjusting of the reproduction comprises, based on the information about the reproduction time periods, displaying, on the display, the at least one frame showing a movement corresponding to the detected gesture among the plurality of movements [determining different keyframe movements on the video associated with movements and their timing on a timeline of the video (par. 55-56, Fig. 4A) and navigating to a time of a particular keyframe (par. 97, Fig. 4G)].

Regarding claim 16, Anderson teaches the display device of claim 1; Anderson further teaches: the at least one sensor includes at least one of an image sensor, a motion sensor, and an infrared sensor [still or video cameras, motion sensors (par. 26)].

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Anderson et al. (US 2015/0099252) in view of Kwon et al. (WO 2016/204334).

Regarding claim 4, Anderson teaches the display device of claim 1; Anderson does not explicitly disclose: the at least one processor is configured to, based on identifying that the movement of the user and the movement in the at least one frame including the movement corresponding to the detected gesture are synchronized while the video content is being reproduced, maintain the reproduction speed of the video content.

Kwon teaches: the at least one processor is configured to, based on identifying that the movement of the user and the movement in the at least one frame including the movement corresponding to the detected gesture are synchronized while the video content is being reproduced, maintain the reproduction speed of the video content [displaying interactive content, such as an exercise program, and synchronizing movement of the exercise program with the movement of the user by increasing or decreasing the speed as necessary, or if the user's movement speed and the speed of the content are already synchronized, the exercise program continues without adjustment (page 6 and 8-9)]. It would have been obvious to one of ordinary skill in the art, having the teachings of Anderson and Kwon before the effective filing date of the claimed invention, to modify the display device of Anderson by incorporating maintaining the reproduction speed of the content when the movement of the user is synchronized with the video content as disclosed by Kwon. The motivation for doing so would have been to adaptively accelerate, decelerate, or maintain the pace of the content according to the user's physical fitness (Kwon – page 9). Therefore, it would have been obvious to combine the teachings of Anderson and Kwon to obtain the invention as specified in the instant claim.

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Anderson et al. (US 2015/0099252) in view of Asikainen et al. (US 2021/0008413).

Regarding claim 9, Anderson teaches the display device of claim 1; Anderson does not explicitly disclose: the at least one processor is configured to input the result of the detection by the detector into a neural network and obtain information about the gesture of the user, the information being output as a result of computation through the neural network.

Asikainen teaches: the at least one processor is configured to input the result of the detection by the detector into a neural network and obtain information about the gesture of the user, the information being output as a result of computation through the neural network [detecting a pose and inputting the pose data into a neural network to output classification information about the movement (par. 78 and 82, Fig. 1B and 2-4)]. It would have been obvious to one of ordinary skill in the art, having the teachings of Anderson and Asikainen before the effective filing date of the claimed invention, to modify the display device of Anderson by incorporating the teaching of Asikainen such that the at least one processor is configured to input the result of the detection by the detector into a neural network and obtain information about the gesture of the user, the information being output as a result of computation through the neural network. The motivation for doing so would have been to obtain a classification of the movement (Asikainen – par. 82) and provide feedback and recommendations relating to the user movements. Therefore, it would have been obvious to combine the teachings of Anderson and Asikainen to obtain the invention as specified in the instant claim.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Alexander Boyd whose telephone number is (571)270-0676. The examiner can normally be reached Monday - Friday 9am-5pm PST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Benjamin Bruckart, can be reached at 571-272-3982. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ALEXANDER BOYD/
Examiner, Art Unit 2424

/BENJAMIN R BRUCKART/
Supervisory Patent Examiner, Art Unit 2424

Prosecution Timeline

Jun 01, 2023: Application Filed
Aug 01, 2024: Non-Final Rejection — §103
Oct 25, 2024: Applicant Interview (Telephonic)
Oct 25, 2024: Examiner Interview Summary
Nov 05, 2024: Response Filed
Jan 22, 2025: Final Rejection — §103
Mar 28, 2025: Request for Continued Examination
Apr 04, 2025: Response after Non-Final Action
Apr 28, 2025: Non-Final Rejection — §103
Jul 25, 2025: Response Filed
Oct 03, 2025: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12587698: OPTIMIZATION OF ENCODING PROFILES FOR MEDIA STREAMING (granted Mar 24, 2026; 2y 5m to grant)
Patent 12581167: DYNAMIC CONTENT SELECTION MENU (granted Mar 17, 2026; 2y 5m to grant)
Patent 12549798: SMART TV REMOTE-CONTROL SYSTEM OR METHOD WITH NON-STANDARD RC COMMAND TRANSLATION CAPABILITY (granted Feb 10, 2026; 2y 5m to grant)
Patent 12506889: CODEC MANAGEMENT AT AN INFORMATION HANDLING SYSTEM (granted Dec 23, 2025; 2y 5m to grant)
Patent 12489938: VIDEO TRANSMISSION APPARATUS, COMPUTER-READABLE STORAGE MEDIUM, VIDEO TRANSMISSION METHOD, AND SYSTEM (granted Dec 02, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 74%
With Interview: 99% (+24.4%)
Median Time to Grant: 2y 5m
PTA Risk: High
Based on 299 resolved cases by this examiner. Grant probability derived from career allow rate.
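One plausible reading of how the headline numbers combine, assuming the interview lift is added in percentage points and the result is capped near 99% (the dashboard's actual model is not disclosed; `projected_grant_probability` and the cap value are my own names and assumptions):

```python
def projected_grant_probability(base: float, lift: float = 0.0,
                                cap: float = 0.99) -> float:
    """Base allow-rate probability plus an additive interview lift,
    capped below 1.0. One plausible model, not the dashboard's
    disclosed formula."""
    return min(base + lift, cap)


base = 222 / 299  # career allow rate, ~74%
with_interview = projected_grant_probability(base, 0.244)
print(f"{base:.0%} -> {with_interview:.0%} with interview")  # 74% -> 99% with interview
```

Under these assumptions the base rate and the +24.4% lift reproduce the 99% with-interview figure shown above.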
