Prosecution Insights
Last updated: April 19, 2026
Application No. 18/848,481

VIDEO EDITING METHOD AND APPARATUS, DEVICE, AND STORAGE MEDIUM

Status: Non-Final OA, §103
Filed: Sep 18, 2024
Examiner: NGUYEN, HAU H
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: Lemon Inc.
OA Round: 1 (Non-Final)

Grant Probability: 90% (Favorable)
OA Rounds: 1-2
To Grant: 2y 9m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 90%, above average (807 granted / 892 resolved; +28.5% vs TC avg)
Interview Lift: +8.9% (moderate), comparing allow rates with vs. without an interview across resolved cases (see the sketch below)
Avg Prosecution: 2y 9m typical timeline; 22 applications currently pending
Total Applications: 914 across all art units (career history)
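
A minimal sketch of how these headline figures relate, assuming the allow rate is simply granted divided by resolved and the interview lift is additive in percentage points (both are assumptions; the page does not publish its formulas):

```python
# Sketch relating the Examiner Intelligence headline numbers.
# Assumptions: allow rate = granted / resolved, and the interview
# lift is additive in percentage points. Neither formula is documented.
granted, resolved = 807, 892

allow_rate = 100 * granted / resolved            # ~90.5%, displayed as 90%
with_interview = min(allow_rate + 8.9, 100.0)    # +8.9 pp interview lift

print(f"Career allow rate: {allow_rate:.1f}%")      # 90.5%
print(f"With interview:    {with_interview:.1f}%")  # 99.4%, displayed as 99%
```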

Statute-Specific Performance

§101: 5.5% (-34.5% vs TC avg)
§103: 58.0% (+18.0% vs TC avg)
§102: 19.2% (-20.8% vs TC avg)
§112: 3.8% (-36.2% vs TC avg)

Baseline is a Tech Center average estimate. Based on career data from 892 resolved cases. (The implied baselines are sketched below.)
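
The "vs TC avg" deltas imply the baseline each rate is measured against. A small sketch, assuming each delta is a plain percentage-point difference (an assumption; the comparison method is not defined on the page):

```python
# Implied Tech Center baselines from the statute-specific rates and
# their "vs TC avg" deltas, assuming plain percentage-point differences.
rates  = {"§101": 5.5, "§103": 58.0, "§102": 19.2, "§112": 3.8}
deltas = {"§101": -34.5, "§103": 18.0, "§102": -20.8, "§112": -36.2}

for statute, rate in rates.items():
    baseline = rate - deltas[statute]  # implied TC average estimate
    print(f"{statute}: examiner {rate:4.1f}% vs TC avg {baseline:.1f}%")
```

Under that assumption, all four implied baselines come out to 40.0%, suggesting the deltas share a common reference point.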

Office Action

Non-Final Rejection under 35 U.S.C. §103 (full text)

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 09/25/2024 was filed after the mailing date of the application. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-2, 4-6, 12-14, 16-18 and 20-21 are rejected under 35 U.S.C. 103 as being unpatentable over Khatib et al. (U.S. Patent App. Pub. No. 2014/0033042, hereinafter "Khatib").

As per claim 1, as shown in Figs. 1 and 2, Khatib teaches a video editing method, comprising: determining a single-frame processing policy (as best understood, since the claim does not specify the scope of the single-frame processing policy, it is broadly interpreted as the current state of the video stream recited at ¶ [26]) and a video post-processing policy (again, it is not clear what video post-processing policy is claimed here; this is further addressed below) corresponding to a video editing option selected by a user (¶ [29], such as target resolution); performing a single-frame processing on video frames of a target video input by the user based on the single-frame processing policy (¶ [24], performing editing on individual frames), and caching a single-frame processing result to a single-frame processing list (Fig. 5, ¶ [41-42], caching individual frames); and forming a video editing sequence of the target video based on the video post-processing policy in combination with the single-frame processing list, and storing the video editing sequence (Fig. 5, forming video bucket 506, which includes a sequence of rendered frames, ¶ [43-44], and storing it in the render cache, ¶ [41-42]).

As addressed, Khatib does not expressly teach determining a video post-processing policy. However, at paragraph [29] a determination is made as to whether frame rate or resolution is taken into consideration, and at paragraph [48] Khatib teaches that bandwidth is also considered for the editing task. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to treat these teachings of Khatib as a post-processing policy so as not to exhaust the processing power at the server (¶ [48]).

As per claim 2, Khatib impliedly teaches locating a video frame editing result corresponding to a target video frame based on the video editing sequence, and presenting an edited video, the target video frame being any video frame selected by the user from the target video within an editing interface (¶ [41], referring to Fig. 5: "Timeline 502 is sliced into interesting regions based on the locations of edit/effect events, indicated S1, S2, . . . Sn (504)". See ¶ [26-27] for presenting the edited frame in the user's editing interface 216, Fig. 2).

As per claim 4, Khatib impliedly teaches wherein performing the single-frame processing on video frames of the target video input by the user based on the single-frame processing policy and caching the single-frame processing result to the single-frame processing list comprises: receiving the target video input by the user, decoding a video frame of the target video in real time, and obtaining a decoded video frame which has been decoded (¶ [11]); in response to that the decoded video frame does not meet a preset single-frame processing condition, returning to obtain a decoded video frame until all decoded video frames are obtained (as best understood by the examiner, this means obtaining all decoded frames; it is not clear where the frames are returned to, and the preset single-frame processing condition is not defined here, so it is given the broadest interpretation); in response to that the decoded video frame meets the preset single-frame processing condition, inputting the decoded video frame to a single-frame processing model corresponding to the video editing option, and caching the single-frame processing result output by the single-frame processing model to the single-frame processing list (as addressed in claim 1, referring to Fig. 5); and in the event that the single-frame processing list does not meet a video post-processing condition, returning to obtain a decoded video frame until all decoded video frames are obtained (also as best understood, since the video post-processing condition is not defined; the obtaining is interpreted as the streaming back to the client disclosed at ¶ [41]).

As per claim 5, Khatib further teaches wherein the preset single-frame processing condition comprises: a duration of an interval between an obtaining time point for obtaining the decoded video frame and a single-frame executing time point of a previous video frame of a video frame corresponding to the obtaining time point reaches a set duration; or the decoded video frame meets a set frame format (¶ [24], "…it may be one or more individual frames represented in a compressed format").

As per claim 6, Khatib further teaches wherein forming the video editing sequence of the target video based on the video post-processing policy in combination with the single-frame processing list and storing the video editing sequence comprises: sequentially determining a current video frame to be edited based on frame sequence numbers of the target video (Fig. 5, numeral 508); in the event of determining that the single-frame processing list currently meets a video post-processing condition (undefined), determining a video frame editing result of the current video frame to be edited based on the single-frame processing list (as addressed in claim 1, editing individual frames); and returning to select a new video frame to be edited for processing, and in the event that a post-processing ending condition (also undefined) is met, forming the video editing sequence of the target video based on video frame editing results of video frames to be edited corresponding to all selected frame numbers in the frame sequence numbers of the target video and storing the video editing sequence (as best understood, see ¶ [41], i.e., rendering frame by frame and transmitting the video stream to the client: "When streaming back to the client, the real-time rendering and streaming engine is able to retrieve any portion of the requested playback that is present in the render cache, down to the level of an individual frame. Thus, if any still valid frame is present in the render cache, the streaming engine will retrieve and stream it, thereby ensuring that available resources are applied exclusively to render frames that have not yet been rendered". The frame sequence number is described at ¶ [43]).

Claim 12, which is similar in scope to claim 1 as addressed above, including the processor and storage shown in Fig. 1, is thus rejected under the same rationale. Claims 13 (similar in scope to claim 12), 14 and 18 (similar in scope to claim 2), 16 and 20 (similar in scope to claim 4), and 17 and 21 (similar in scope to claim 5) are likewise rejected under the same rationale.

Claims 3, 8-10, 15 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Khatib (U.S. Patent App. Pub. No. 2014/0033042) in view of Matsuda et al. (U.S. Patent App. Pub. No. 2015/0019972, hereinafter "Matsuda").

As per claim 3, Khatib does not expressly teach wherein locating the video frame editing result corresponding to the target video frame based on the video editing sequence and presenting the edited video comprises: monitoring a drag and drop operation of the user against a target video progress bar, and determining a video timestamp corresponding to an end of the drag and drop operation; determining a video frame corresponding to the video timestamp as the target video frame and accessing the video editing sequence; and deserializing the video editing sequence to obtain a video frame editing result in the video editing sequence, locating the target video editing result of the target video frame, and playing the edited video with a time node corresponding to the target video frame as a start playing node.

However, Matsuda teaches a very similar method of video editing for individual frames (¶ [9]) and further discloses the above features, i.e.: monitoring a drag and drop operation of the user against a target video progress bar (¶ [17], "Some embodiments allow the user to drag a media clip from the clip browser of the media-editing application or another media clip in the timeline into the border between the two adjacent media clips"), and determining a video timestamp corresponding to an end of the drag and drop operation (see above); determining a video frame corresponding to the video timestamp as the target video frame and accessing the video editing sequence (Fig. 8, ¶ [172-173], "When the user clicks on the space before the first media clip of the sequence of the media clips being edited in the timeline 715, the timeline 715 brings back the primary playhead 725 to the beginning (i.e., the in-point) of the first media clip"); and deserializing the video editing sequence to obtain a video frame editing result in the video editing sequence, locating the target video editing result of the target video frame, and playing the edited video with a time node corresponding to the target video frame as a start playing node (¶ [31], "In some embodiments, the media-editing application allows the user to perform numeric editing to precisely specify the duration of a clip, accurately specify the starting and/or ending point of a clip in the timeline, specify the location of a playhead, etc."). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the method taught by Matsuda into the method taught by Khatib as addressed above, the advantage of which is to customize the edited video to the user's needs.

As per claim 8, Khatib does not explicitly teach wherein the post-processing end condition comprises: video frame editing results being determined for all video frames with selected frame sequence numbers in the target video, wherein a video frame with a selected frame sequence number in the target video is a video frame to be edited. However, Matsuda does teach this feature (see ¶ [31], quoted above; ¶ [285-286] discloses that a video frame with a selected frame sequence number in the target video is a video frame to be edited, i.e., trimming frames with sequence numbers). Thus, claim 8 would have been obvious over the combined references for the reason above.

As per claim 9, as addressed above, the combined Khatib-Matsuda also teaches wherein forming the video editing sequence of the target video based on the video frame editing results of the video frames to be edited corresponding to all selected frame numbers in the frame sequence numbers of the target video and storing the video editing sequence comprises: arranging the video frame editing results of the video frames to be edited corresponding to the selected frame sequence numbers based on a serialization rule corresponding to the video editing option (Matsuda, ¶ [31], recited above), and obtaining the video editing sequence and storing the video editing sequence in a nonvolatile manner (Matsuda, ¶ [450-451], referring to Fig. 45). Thus, claim 9 would have been obvious over the combined references for the reason above.

As per claim 10, the combined Khatib-Matsuda impliedly teaches wherein the video editing option comprises a video intelligent cutting (Matsuda, ¶ [24-25], such as trimming) and a video screen freezing (as best understood, previewing a frame, taught by Matsuda, ¶ [374], referring to Fig. 35). Thus, claim 10 would have been obvious over the combined references for the reason above.

Claims 15 and 19, which are similar in scope to claim 3 as addressed above, are thus rejected under the same rationale.

Allowable Subject Matter

Claim 7 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is an examiner's statement of reasons for allowable subject matter: the prior art, taken singly or in combination, does not teach or suggest a video editing method, among other things, comprising: determining that the single-frame processing list currently meets a video post-processing condition in the event that the single-frame processing list has cached all associated single-frame processing results required by the current video frame to be edited; obtaining, from the single-frame processing list, all associated single-frame processing results required by the current video frame to be edited; and performing a video frame editing on the current video frame to be edited based on a video editing algorithm corresponding to the video editing option in combination with all associated single-frame processing results required by the current video frame to be edited.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Hau H. Nguyen, whose telephone number is 571-272-7787. The examiner can normally be reached MON-FRI, 8:30-5:30. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Tammy Goddard, can be reached at (571) 272-7773. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

/HAU H NGUYEN/
Primary Examiner, Art Unit 2611
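
For readers parsing the claim language, here is a minimal Python sketch of the data flow recited in claim 1: pick the two policies from the selected editing option, process each frame and cache the result to a single-frame processing list, then post-process that list into a stored editing sequence. All names and the toy "grayscale" option are invented for illustration; this is not the applicant's or Khatib's actual implementation.

```python
# Hypothetical illustration of the data flow recited in claim 1.
# All names are invented; the toy "grayscale" option stands in for
# whatever per-frame model an editing option would map to.

def edit_video(frames, editing_option):
    # Step 1: determine the single-frame and post-processing policies
    # corresponding to the user's selected editing option.
    single_frame_policy, post_policy = POLICIES[editing_option]

    # Step 2: perform single-frame processing on each frame and cache
    # each result to a single-frame processing list.
    single_frame_list = [single_frame_policy(f) for f in frames]

    # Step 3: combine the cached per-frame results under the
    # post-processing policy to form and store the editing sequence.
    return post_policy(single_frame_list)

POLICIES = {
    "grayscale": (
        lambda frame: [sum(px) // 3 for px in frame],  # per-frame average
        lambda results: list(enumerate(results)),      # order by frame number
    ),
}

if __name__ == "__main__":
    frames = [[(255, 0, 0), (0, 255, 0)], [(0, 0, 255), (9, 9, 9)]]
    print(edit_video(frames, "grayscale"))
    # [(0, [85, 85]), (1, [85, 9])]
```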

Prosecution Timeline

Sep 18, 2024: Application Filed
Apr 02, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597194: METHOD FOR OBTAINING IMAGE RELATED TO VIRTUAL REALITY CONTENT AND ELECTRONIC DEVICE SUPPORTING THE SAME
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12591435: DEVICE LINK MANAGEMENT
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12586288: DEVICE AND METHOD FOR GENERATING DYNAMIC TEXTURE MAP FOR 3 DIMENSIONAL DIGITAL HUMAN
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12573135: GENERATION OF A DENSE POINT CLOUD OF A PHYSICAL OBJECT
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12573141: METHOD AND DEVICE FOR LEARNING 3D MODEL RECONSTRUCTION
Granted Mar 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 90%
With Interview: 99% (+8.9%)
Median Time to Grant: 2y 9m
PTA Risk: Low

Based on 892 resolved cases by this examiner. Grant probability is derived from the career allow rate. (A calendar projection is sketched below.)
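
To turn the median into a calendar estimate, a small sketch, assuming the 2y 9m median runs from the filing date (an assumption; the page does not say where the clock starts):

```python
# Hypothetical calendar projection: filing date plus the 2y 9m median.
# Assumes the median is measured from filing, which is not stated.
from datetime import date

filed = date(2024, 9, 18)
years, months = 2, 9

total_months = filed.month - 1 + months
projected = filed.replace(
    year=filed.year + years + total_months // 12,
    month=total_months % 12 + 1,
)
print(projected)  # 2027-06-18
```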
