Prosecution Insights
Last updated: April 18, 2026
Application No. 18/292,761

VIDEO EDITING DEVICE AND OPERATION METHOD OF VIDEO EDITING DEVICE

Non-Final OA (§103, §112)
Filed
Jan 15, 2025
Examiner
ADAMS, EILEEN M
Art Unit
2481
Tech Center
2400 — Computer Networks
Assignee
LG Electronics Inc.
OA Round
1 (Non-Final)
Grant Probability: 86% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 3m
With Interview: 90%

Examiner Intelligence

Career Allow Rate: 86% (above average; 1247 granted / 1446 resolved; +28.2% vs TC avg)
Interview Lift: +4.0% (minimal; among resolved cases with interview)
Avg Prosecution: 2y 3m (typical timeline; 33 applications currently pending)
Total Applications: 1479 (across all art units)
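The headline figures in this card are internally consistent; a quick sanity check in plain arithmetic (variable names are illustrative, the numbers come from the card above):

```python
granted, resolved = 1247, 1446       # examiner's career totals
allow_rate = granted / resolved      # career allow rate
with_interview = 0.86 + 0.04         # allow rate plus the stated +4.0% interview lift

assert round(allow_rate * 100) == 86     # matches the displayed 86%
assert round(with_interview * 100) == 90 # matches the 90% "With Interview" figure
```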

Statute-Specific Performance

§101: 4.7% (-35.3% vs TC avg)
§103: 60.6% (+20.6% vs TC avg)
§102: 10.6% (-29.4% vs TC avg)
§112: 8.2% (-31.8% vs TC avg)
Tech Center averages are estimates, based on career data from 1446 resolved cases.
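The report does not define the statute-specific metric, but the deltas can be cross-checked: subtracting each reported delta from the examiner's rate backs out the implied Tech Center baseline, which comes out to the same 40.0 for all four statutes. A minimal sketch (dict keys and names are illustrative, not from the report):

```python
# (examiner rate %, delta vs TC avg %) per statute, from the figures above
stats = {
    "101": (4.7, -35.3),
    "103": (60.6, +20.6),
    "102": (10.6, -29.4),
    "112": (8.2, -31.8),
}
# implied TC baseline = examiner rate minus the reported delta
tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
assert all(v == 40.0 for v in tc_avg.values())  # one consistent baseline
```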

Office Action

§103 §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

CLAIM INTERPRETATION: 35 U.S.C. § 112(f)

MPEP 2181(I) provides that a claim limitation will be presumed to invoke 35 U.S.C. 112(f) if it meets the following 3-prong analysis: (1) the claim limitation uses the phrase "means" or "step" or a term used as a substitute for "means" that is a generic placeholder; (2) the phrase "means" or "step" or the substitute term is modified by functional language, typically linked by the transition word "for" or another linking word; and (3) the phrase "means" or "step" or the substitute term is not modified by sufficient structure or material for performing the claimed function.

Claims 1-14 disclose limitations which are presumed to invoke 35 U.S.C. 112(f), as said limitations meet said 3-prong analysis. Regarding Claims 1-14: "a memory configured to store video information" is considered to read on Fig. 1, unit 170; "a transceiver configured to communicate" is considered to read on Fig. 1, unit 110; and "a controller configured to receive…analyze" is considered to read on Fig. 1, unit 180.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION. The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

Claims 4, 11, 16, and 20 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA applications, the applicant) regards as the invention.

Claim 4 recites "analyze the first set of analyzed features and the set of analyzed features," whereby it is unclear how to construe a distinction between what is to be analyzed. Appropriate clarification is required.
Claim 11 terminates with "display a manual-edit interface to the user with respect to the edited target video when receiving," whereby it is unclear what relates to "receiving." Appropriate clarification is required.

Claim 16 recites "the target video to be more similar," whereby it is unclear how to measure or quantify "more similar." Appropriate clarification is required.

Claim 20 recites "each of the first video timeline," yet Claim 20 depends from Claim 15, whereby "a first video timeline" was not previously defined. Claim 20 will be construed to depend from Claim 19 and is allowable on this basis. Appropriate correction is required.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.

Claims 1, 13-15, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over HSIEH et al. (US Pub. No. 2019/0080718) in view of KWON et al. (US Pub. No. 2014/0195916).
As per Claim 1, HSIEH discloses a video editing device, comprising (Figs. 1-7 [Abstract]): a memory configured to store video information (Figs. 1-7, unit 120 [0026, 0028]); a transceiver configured to communicate with an external device (Figs. 1-7, unit 720 communication with unit 710 via network [0064-0065]); and a controller configured to (Figs. 1-7, unit 110 [0026-0028]): receive a user input (Figs. 1-7, unit 121 target selection module [0031]) and a reference video (Figs. 1-7, S205 receive original video [0030-0031]), analyze features of the reference video to generate a first set of analyzed features (Figs. 1-7, S210 patterns, contours, colors [0030-0032]), edit the video based on the first set of analyzed features from the reference video and the user input to generate an edited video (Figs. 1-7, processed video edited; user input dictates analysis boundaries [0029-0032] [0034]; via editing system 700 [0064, 0067]), and output the edited video (Figs. 1-7, S225 output stored in module 127 [0049]).

HSIEH does not disclose, but KWON discloses, a target video (Figs. 1-4 [0099]); analyzing features of the target video to generate a second set of analyzed features (Figs. 1-4, analyze a plurality of attributes of the target video [0099]); and editing the target video (Figs. 1-4, edit the target video [0099]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include a target video, analyzing features of the target video to generate a second set of analyzed features, and editing the target video, as taught by KWON, into the system of HSIEH because of the benefit taught by KWON of incorporating video-template-based editing as a more effective video editing and management system, whereby HSIEH is limited to straightforward user-input-type video editing and would benefit from the complex video feature editing and analysis to expand and improve upon the user experience.
As per Claim 13, HSIEH discloses the video editing device of claim 1, wherein the controller is further configured to receive feedback on the video from the user (Figs. 1-7, user input feedback and selections [0031-0032]). HSIEH does not disclose, but KWON discloses, the edited target video (see the analysis for Claim 1).

As per Claim 14, HSIEH discloses the video editing device of claim 1, wherein the controller is further configured to save the edited video into memory (Figs. 1-7, S225 stored in module 127 [0049]). HSIEH does not disclose, but KWON discloses, the edited target video (see the analysis for Claim 1).

As per Claim 15, HSIEH discloses a method of controlling a video editing device, the method comprising (Figs. 1-7 [Abstract]): receiving, via a processor in the video editing device (Fig. 1, unit 110), a user input and a reference video (see the analysis for Claim 1); analyzing, via the processor, features of the reference video to generate a first set of analyzed features (see the analysis for Claim 1); editing, via the processor, the video based on the first set of analyzed features from the reference video and the user input to generate an edited video (see the analysis for Claim 1); and outputting, via the processor, the edited video (see the analysis for Claim 1). HSIEH does not disclose, but KWON discloses, a target video; analyzing features of the target video to generate a second set of analyzed features; and editing the target video (see the analysis for Claim 1).

As per Claim 17, HSIEH discloses the method of claim 15, wherein the editing includes replacing a portion of the video with a portion of the reference video (Figs. 1-7, processed video edited; user input dictates analysis boundaries [0029-0032] [0034]; via editing system 700 [0064, 0067]). HSIEH does not disclose, but KWON discloses, a target video (see the analysis for Claim 1).

Claims 2-3, 16, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over HSIEH et al. (US Pub. No. 2019/0080718) in view of KWON et al. (US Pub. No. 2014/0195916), as applied to Claims 1, 13-15, and 17, and further in view of SWAZEY et al. (US Pub. No. 2009/0154816).

As per Claim 2, HSIEH discloses the video editing device of claim 1, wherein each of the first set of analyzed features (see the analysis for Claim 1); HSIEH does not disclose, but KWON discloses, the second set of analyzed features (see the analysis for Claim 1); and HSIEH and KWON do not disclose, but SWAZEY discloses, include at least one of a scene change effect (Figs. 17-18, included in edit [0036-0037] [0040] [0049] [0091]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate at least one of a scene change effect, as taught by SWAZEY, into the system of HSIEH and KWON because of the benefit taught by SWAZEY of expanding upon video types of feature analysis, whereby HSIEH and KWON are both drawn to video editing through feature analysis and would benefit from the additional ability to detect scene events. (Claimed in the alternative, "at least one of": a scene change length, a color filter, an angle of view, a composition, a video quality, a video background music (BGM), a detected person, a detected object, an object classification, a detected background, a detected place, an in-video text, or a subject motion.)
As per Claim 3, HSIEH discloses the video editing device of claim 1, wherein the controller is further configured to (see the analysis for Claim 1); HSIEH does not disclose, but KWON discloses, edit the target video (see the analysis for Claim 1); and HSIEH and KWON do not disclose, but SWAZEY discloses, edit the video based on at least one of a scene change effect (Figs. 17-18 [0036-0037] [0040] [0049] [0091]). (The motivation applied in Claim 2 applies equally to Claim 3.) (Claimed in the alternative, "at least one of": a scene change length, a color style, a motion blur, a resolution, a noise, a background music (BGM), an angle of view, a composition, or a dynamic motion based on the first set of analyzed features of the reference video.)

As per Claim 16, HSIEH discloses the method of claim 15, wherein the editing includes: changing a scene within the video to be more similar to a scene within the reference video to generate the edited video (Figs. 1-7, processed video edited [0029-0032] [0034]; via editing system 700 [0064, 0067]). HSIEH does not disclose, but KWON discloses, a target video and the edited target video (see the analysis for Claim 1). HSIEH and KWON do not disclose, but SWAZEY discloses, wherein the changing the scene within the video is based on at least one of a scene change effect (Figs. 17-18 [0036-0037] [0040] [0049] [0091]). (The motivation applied in Claim 2 applies equally to Claim 16.) (Claimed in the alternative, "at least one of": a scene change length, a color filter, an angle of view, a composition, a video quality, a video background music (BGM), a detected person, a detected object, an object classification, a detected background, a detected place, an in-video text, and a subject motion included in the first set of analyzed features from the reference video.)
As per Claim 18, HSIEH discloses the method of claim 15, wherein each of the first set of analyzed features (see the analysis for Claim 1); HSIEH does not disclose, but KWON discloses, the second set of analyzed features (see the analysis for Claim 1); and HSIEH and KWON do not disclose, but SWAZEY discloses, include at least one of a scene change effect (Figs. 17-18, included in edit [0036-0037] [0040] [0049] [0091]). (The motivation applied in Claim 2 applies equally to Claim 18.) (Claimed in the alternative, "at least one of": a scene change length, a color filter, an angle of view, a composition, a video quality, a video background music (BGM), a detected person, a detected object, an object classification, a detected background, a detected place, an in-video text, or a subject motion.)

Claims 4 and 7-9 are rejected under 35 U.S.C. 103 as being unpatentable over HSIEH et al. (US Pub. No. 2019/0080718) in view of KWON et al. (US Pub. No. 2014/0195916), as applied to Claims 1, 13-15, and 17, and further in view of TURKELSON et al. (US Pub. No. 2020/0210768).

As per Claim 4, HSIEH discloses the video editing device of claim 1, wherein the controller, based on the first set of analyzed features of the reference video (see the analysis for Claim 1); HSIEH does not disclose, but KWON discloses, edit the target video (see the analysis for Claim 1); and HSIEH and KWON do not disclose, but TURKELSON discloses, analyze the first set of analyzed features and the set of analyzed features based on a deep learning network (Figs. 1-6, subsystem 114 [0076]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include analyzing the first set of analyzed features and the set of analyzed features based on a deep learning network, as taught by TURKELSON, into the system of HSIEH and KWON because of the benefit taught by TURKELSON of incorporating intelligent feature analysis, which increases accuracy and reduces processing times, and which would benefit both the manual systems of HSIEH and KWON.

As per Claim 7, HSIEH discloses the video editing device of claim 1, wherein the controller is further configured to (see the analysis for Claim 1); HSIEH and KWON do not disclose, but TURKELSON discloses, output an auto-edit interface to the user (Figs. 1-2, UI made available [0025, 0031]; editing features [0038]; auto-features [0079, 0081-0082]). (The motivation applied in Claim 4 applies equally to Claim 7.)

As per Claim 8, HSIEH discloses the video editing device of claim 7, wherein the controller is further configured to (see the analysis for Claim 1) edit the video according to an editing request of the user (Figs. 1-7, processed video edited; user input dictates analysis boundaries [0029-0032] [0034]; via editing system 700 [0064, 0067]), and save the edited video according to a save request of the user with respect to the edited video (Figs. 1-7, save feature [0013] [0064, 0067]; S225 stored in module 127 [0049]). HSIEH does not disclose, but KWON discloses, edit the target video (see the analysis for Claim 1).

As per Claim 9, HSIEH discloses the video editing device of claim 8, wherein the controller is further configured to transmit the saved target video to the external device (Figs. 1-7 [0049]; unit 720 communication with unit 710 via network [0064-0065]).
Allowable Subject Matter

Claims 5-6, 10-12, and 19-20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, and but for the outstanding rejections under 35 U.S.C. 112(b). Claims 5-6, 10-12, and 19-20 are allowed, but for the outstanding rejections under 35 U.S.C. 112(b).

The following is an examiner's statement of reasons for allowance:

As per Claim 5, the prior art of record, either alone or in reasonable combination, fails to teach or suggest "The video editing device of claim 4, wherein the deep learning network includes a video analysis network and a video editing network, and wherein the controller is further configured to analyze the features of the reference video and the features of the target video based on the video analysis network, and edit the target video based on the video editing network to generate the edited target video." These limitations, in combination with the other limitations of the independent claim, are thus deemed allowable.

As per Claim 6, the prior art of record, either alone or in reasonable combination, fails to teach or suggest "The video editing device of claim 5, wherein the deep learning network is stored in an external server, wherein the video editing device is connected to the deep learning network through the transceiver, and wherein the controller is further configured to transmit the reference video and the target video to the deep learning network and receive the edited target video from the deep learning network." These limitations, in combination with the other limitations of the independent claim, are thus deemed allowable.
As per Claim 10, the prior art of record, either alone or in reasonable combination, fails to teach or suggest "The video editing device of claim 8, wherein the controller is further configured to save the edited target video to a preset path uniform resource locator (URL)." These limitations, in combination with the other limitations of the independent claim, are thus deemed allowable.

As per Claim 11, the prior art of record, either alone or in reasonable combination, fails to teach or suggest "The video editing device of claim 1, wherein the controller is further configured to: in response to receiving a signal for additional refinement from the user after the edited target video has been output, display a manual-edit interface to the user with respect to the edited target video when receiving." These limitations, in combination with the other limitations of the independent claim, are thus deemed allowable.

As per Claim 12, the prior art of record, either alone or in reasonable combination, fails to teach or suggest "The video editing device of claim 11, wherein the manual-edit interface including at least one of a video clipping option, a background music (BGM) insertion option, a color change option, a mosaic processing option, or a caption addition option." These limitations, in combination with the other limitations of the independent claim, are thus deemed allowable.

As per Claim 19, the prior art of record, either alone or in reasonable combination, fails to teach or suggest "The method of claim 15, further comprising: displaying an auto-edit interface including a first video timeline based on the first set of analyzed features from the reference video, a second video timeline based on the second set of analyzed features from the target video, and a third video timeline based on features of the edited video." These limitations, in combination with the other limitations of the independent claim, are thus deemed allowable.
As per Claim 20, the prior art of record, either alone or in reasonable combination, fails to teach or suggest "The method of claim 15, wherein each of the first video timeline, the second video timeline and the third video timeline includes a plurality of divided areas corresponding to a plurality of categories, and wherein the plurality of categories include at least one of a scene, a scene change, faces, objects, places, activity level, and background music (BGM)." These limitations, in combination with the other limitations of the independent claim, are thus deemed allowable.

The closest prior art of record for Claims 5-6, 10-12, and 19-20, HSIEH et al. (US Pub. No. 2019/0080718), does not teach all the elements in combination with the other limitations of the independent claim. HSIEH only discloses a video editing device comprising a transceiver that communicates with an external device, a user input, a reference video, and analyzing features of the reference video to generate a first set of analyzed features. The prior art also discloses editing the video based on the first set of analyzed features from the reference video and the user input to generate an edited video, and outputting the edited video.

Any comments considered necessary by applicant must be submitted no later than the payment of the issue fee and, to avoid processing delays, should preferably accompany the issue fee. Such submissions should be clearly labeled "Comments on Statement of Reasons for Allowance."

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to EILEEN M ADAMS, whose telephone number is 571-270-3688. The examiner can normally be reached Monday-Friday from 8:30am-5:00pm EST. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, William Vaughn, can be reached at (571) 272-3922. The fax phone number for the organization where this application or proceeding is assigned is 571-270-4688.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have any questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/EILEEN M ADAMS/
Primary Examiner, Art Unit 2481

Prosecution Timeline

Jan 15, 2025
Application Filed
Feb 25, 2026
Examiner Interview Summary
Feb 25, 2026
Examiner Interview (Telephonic)
Apr 07, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12593013
DRIVING VIDEO RECORDING METHOD OF VEHICLE AND RECORDING DEVICE FOR THE SAME
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12581181
CASED GOODS INSPECTION AND METHOD THEREFOR
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12581207
ARRANGEMENT DETERMINATION APPARATUS, SYSTEM, ARRANGEMENT DETERMINATION METHOD, AND RECORDING MEDIUM
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12574480
SURGICAL OPERATION ROOM SYSTEM, IMAGE RECORDING METHOD, PROGRAM, AND MEDICAL INFORMATION PROCESSING SYSTEM
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12568269
Music Service with Motion Video
Granted Mar 03, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 86%
With Interview: 90% (+4.0%)
Median Time to Grant: 2y 3m
PTA Risk: Low
Based on 1446 resolved cases by this examiner. Grant probability derived from career allow rate.
