Prosecution Insights
Last updated: April 19, 2026
Application No. 19/054,412

Automatic Non-Linear Editing Style Transfer

Non-Final OA (§103, §112)
Filed: Feb 14, 2025
Examiner: YANG, NIEN
Art Unit: 2484
Tech Center: 2400 — Computer Networks
Assignee: Google LLC
OA Round: 1 (Non-Final)
Grant Probability: 72% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 9m
With Interview: 99%

Examiner Intelligence

Grants 72% of cases — above average.

Career Allow Rate: 72% (287 granted / 399 resolved), +13.9% vs TC avg
Interview Lift: +28.7% among resolved cases with an examiner interview
Typical Timeline: 2y 9m average prosecution, 30 applications currently pending
Career History: 429 total applications across all art units
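For readers who want to reproduce the headline numbers, here is a minimal sketch of the arithmetic implied by the counts above. The way the 99% "With Interview" figure is computed is not stated by the dashboard; treating it as base rate plus lift, capped, is an assumption:

```python
# Reproduce the examiner statistics shown above from the raw counts.
granted, resolved = 287, 399

allow_rate = granted / resolved                # 0.7193... -> shown as 72%
print(f"Career allow rate: {allow_rate:.1%}")  # 71.9%

# The dashboard reports a +28.7-point interview lift. Assuming (not stated
# by the source) that "With Interview" is simply the base rate plus the
# lift, capped at 99%:
with_interview = min(allow_rate + 0.287, 0.99)
print(f"With interview: {with_interview:.0%}")  # 99%
```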

Statute-Specific Performance

§101: 5.6% (-34.4% vs TC avg)
§103: 73.6% (+33.6% vs TC avg)
§102: 6.5% (-33.5% vs TC avg)
§112: 7.8% (-32.2% vs TC avg)

Tech Center averages are estimates, based on career data from 399 resolved cases.
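The "vs TC avg" deltas let you back out the Tech Center baselines the dashboard is comparing against. A small sketch, assuming the delta is a simple difference between the examiner's rate and the TC average:

```python
# Back out the Tech Center baseline implied by each "vs TC avg" delta,
# assuming (not stated by the source) that delta = examiner_rate - tc_avg.
stats = {
    "§101": (5.6, -34.4),
    "§103": (73.6, +33.6),
    "§102": (6.5, -33.5),
    "§112": (7.8, -32.2),
}
for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta
    print(f"{statute}: examiner {rate:.1f}% vs TC avg {tc_avg:.1f}%")
# Every statute backs out to a 40.0% TC average, which suggests the
# dashboard applies a single Tech Center-wide baseline rather than
# per-statute averages.
```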

Office Action

§103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Preliminary Remarks

This is a reply to the application filed on 02/14/2025, in which claims 1-20 remain pending, with claims 1, 11, and 20 being independent claims. When making claim amendments, the applicant is encouraged to consider the references in their entireties, including those portions that have not been cited by the examiner and their equivalents, as they may most broadly and appropriately apply to any particular anticipated claim amendments.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on August 07, 2025 is in compliance with the provisions of 37 CFR 1.97 and is being considered by the Examiner.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

Claims 1-10 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for pre-AIA the inventor(s), at the time the application was filed, had possession of the claimed invention.

Claim 1 recites the phrase “decompose video data into a plurality of layers”. This limitation was not described in the specification. Claims 2-10 depend on claim 1, so the 35 U.S.C. 112(a) rejection applies to them as well.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Rav-Acha et al. (US 20180204597 A1, hereinafter “Rav-Acha”) in view of Eronen et al. (US 20150194185 A1, hereinafter “Eronen”).
Regarding claim 1, Rav-Acha discloses a computing system configured to decompose video data into a plurality of layers (see Rav-Acha, paragraph [0093]: “Decomposition & Story-Telling: This component is responsible for breaking and composing the image again based on the story to be told”), the computing system comprising: one or more processors (see Rav-Acha, paragraph [0164]: “at least one processor associated with a computer”); and one or more non-transitory computer-readable media that store instructions that (see Rav-Acha, paragraph [0163]: “the method described in the present disclosure can be stored as instructions in a non-transitory computer readable medium”), when executed by the one or more processors, cause the computing system to perform operations (see Rav-Acha, paragraph [0164]: “a computer processor may receive instructions and data from a read-only memory or a random access memory or both. At least one of aforementioned steps is performed by at least one processor associated with a computer. The essential elements of a computer are a processor for executing instructions and one or more memories for storing instructions and data”), the operations comprising: accessing, by the computing system, video data comprising a plurality of respective shots (see Rav-Acha, paragraph [0101]: “FIG. 11 is a flowchart illustrating two flows 1110 and 1120 in accordance with some embodiments relating to the generation of variant video productions. In both flows, the user uploads an already edited video, also denoted herein as video production (for example, a video that was edited manually). Said video is being analyzed, to automatically detect the time borders of each of the original shots”); and providing, by the computing system, the updated output video via an interactive graphical user interface (see Rav-Acha, paragraph [0110]: “FIG. 13 is a diagram illustrating a user interface (UI) in accordance with embodiments according to the present invention. An example UI 1300 for manipulating shots in the storyboard may present the video cuts (shot 1 to shot 4) after being extracted, as well as audio segments 1330, transitions 1350, effects 1340 and texts 1360. In addition, video cuts originated from videos may be represented differently from video cuts originated from photos. The order of the shots can be changed by the user, via grabbing 1310 a shot from one place to another in the storyboard. Each shot can be removed, for example, by clicking on a button 1320. Similarly, the same is applicable for the audio segments 1330, the transitions 1350, and the texts 1360. Transitions and Text can also be replaced with new transitions or Text or be manipulated in various ways. The effects 1340 can also be grabbed and applied to any of the video cuts (shots)”).

Rav-Acha thus discloses all the claimed limitations except: accessing, by the computing system, a stored editing template; processing, by the computing system, the video data based on the editing template; and automatically transferring the editing style of the respective shots in a source video associated with the editing template to generate an updated output video.

Eronen, from the same or a similar field of endeavor, discloses accessing, by the computing system (see Eronen, paragraph [0059]: “One or more of the computers disclosed in FIG. 1”), a stored editing template (see Eronen, paragraph [0159]: “selecting a template”); processing, by the computing system, the video data based on the editing template (see Eronen, paragraph [0159]: “The system creates (312) a template for the new automatic video remix. The template includes information for creating the automatic video remix”); and automatically transferring the editing style of the respective shots in a source video associated with the editing template to generate an updated output video (see Eronen, paragraph [0160]: “the automatic video remix may be produced from the template and the selection sequence for source video segments which gives the best value for the goodness criterion. A goodness criterion may measure, for example, how well the attributes of the selected source video segment matches the selected editing parameters. This may be an average of the percentage of match between the shot attributes and the editing parameters”).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Eronen with those of Rav-Acha. The motivation for doing so would be to give the system the capabilities of the automatic video media remixing system and method disclosed in Eronen: to comprise one or more computers; to access and select a template; to create a template for the new automatic video remix, wherein the template includes information for creating the automatic video remix; and to produce the automatic video remix from the template and the selection sequence for source video segments which gives the best value for the goodness criterion, wherein the goodness criterion may measure how well the attributes of the selected source video segment match the selected editing parameters. This yields accessing, by the computing system, a stored editing template; processing the video data based on the editing template; and automatically transferring the editing style of the respective shots in a source video associated with the editing template to generate an updated output video, which enables the automated transfer of editing style from source content to other sets and pieces of target content so that an amateur user can perform complex editing.
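As an aside for readers outside patent practice: Eronen's quoted "goodness criterion" is essentially an average attribute-match score between a candidate shot and the template's editing parameters. A minimal sketch of that idea follows; the attribute names and the exact-match scoring are illustrative assumptions, not Eronen's actual implementation:

```python
# Illustrative sketch of the quoted "goodness criterion": the average
# percentage of match between a candidate shot's attributes and the
# editing parameters in the template. Attribute names are hypothetical.
def goodness(shot_attrs: dict, template_params: dict) -> float:
    """Average per-attribute match (1.0 = exact) over the template's keys."""
    matches = [
        1.0 if shot_attrs.get(key) == wanted else 0.0
        for key, wanted in template_params.items()
    ]
    return sum(matches) / len(matches) if matches else 0.0

template = {"camera_motion": "pan", "color_style": "warm", "num_faces": 2}
candidates = [
    {"camera_motion": "pan", "color_style": "warm", "num_faces": 1},
    {"camera_motion": "static", "color_style": "warm", "num_faces": 2},
]
# Pick the source segment that best matches the template's parameters.
best = max(candidates, key=lambda shot: goodness(shot, template))
```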
Regarding claim 2, the combined teachings of Rav-Acha and Eronen as discussed above also disclose the computing system of claim 1, wherein the editing style comprises at least one of: (i) framing, (ii) camera motion, (iii) focus, (iv) zoom, (v) transition, (vi) playback speed, (vii) color, (viii) lighting, (ix) audio, or (x) text (see Rav-Acha, paragraph [0112]: “the system can automatically select the best shots based on a selection score. This score may be based on an analysis of the video content, to extract people, objects, actions, salient measures, camera motions, image quality measures, etc.”. Note to the Applicants: the USPTO considers the Applicant’s “one of” language to be anticipated by any reference containing one of the subsequent corresponding elements). The motivation for combining the references has been discussed in claim 1 above.

Regarding claim 3, the combined teachings of Rav-Acha and Eronen as discussed above also disclose the computing system of claim 1, wherein the video data comprises at least one of: (i) user generated content or (ii) third party content (see Eronen, paragraph [0060]: “A video remix can be created according to the preferences of a user. The source content refers to all types of media that is captured by users, wherein the source content may involve any associated context data”). The motivation for combining the references has been discussed in claim 1 above.

Regarding claim 4, the combined teachings of Rav-Acha and Eronen as discussed above also disclose the computing system of claim 1, wherein the updated output video comprises at least one of: (i) textual data, (ii) audio data, (iii) graphical data, (iv) graphics data, (v) image data, (vi) video data, or (vii) multimedia data (see Rav-Acha, paragraph [0092]: “External sources of information and attached meta-data data can also be integrated, e.g. choosing music according to location and meta-data (i.e., we know that user is traveling based on his location), adding production effects of hearts to couples on valentine's day, knowing when it's a person's birthday, and the like”). The motivation for combining the references has been discussed in claim 1 above.

Regarding claim 5, the combined teachings of Rav-Acha and Eronen as discussed above also disclose the computing system of claim 1, wherein the video data comprises at least one of (i) raw content, (ii) newly captured content, (iii) preprocessed content, (iv) partially edited content, (v) curated content, or (vi) user generated content (see Rav-Acha, paragraph [0100]: “automatic generation of a variant video production from an edited video production and a dedicated user interface supporting a semi-automated generation of such a variant video production”). The motivation for combining the references has been discussed in claim 1 above.

Regarding claim 6, the combined teachings of Rav-Acha and Eronen as discussed above also disclose the computing system of claim 1, wherein the editing template comprises global editing style and local editing style attributes (see Rav-Acha, paragraph [0127]: “the user can view the text as part of the storytelling UI, and modify it (e.g., change the text or change its attributes such as color, duration). In the video generation stage of the re-editing, the modified text is recomposed again on top of the footage”). The motivation for combining the references has been discussed in claim 1 above.

Regarding claim 7, the combined teachings of Rav-Acha and Eronen as discussed above also disclose the computing system of claim 6, wherein the local editing style attributes comprise attributes associated with particular positions, segments, or periods of time within source content (see Eronen, paragraph [0160]: “automatic video remix may be produced from the template and the selection sequence for source video segments which gives the best value for the goodness criterion. A goodness criterion may measure, for example, how well the attributes of the selected source video segment matches the selected editing parameters”). The motivation for combining the references has been discussed in claim 1 above.

Regarding claim 8, the combined teachings of Rav-Acha and Eronen as discussed above also disclose the computing system of claim 1, the operations comprising: generating the stored editing template by: determining one or more shot boundaries in a source video based on analyzing the source video (see Eronen, paragraph [0109]: “the music is analysed to determine the times of beats and downbeats using methods known in the art. The locations of shot boundaries with respect to each musical measure are collected to be used as a model for switching patterns for the director of the movie”); analyzing identified content in each of one or more shots in the source video based on performing object detection on the respective shots (see Eronen, paragraph [0082]: “the number of people appearing in the shot; If the people are facing the camera, known techniques of face detection/recognition can be used to detect the faces in the video frame and count their number”); determining an editing style for each of the one or more shots in the source video based at least in part on measuring motion across frames within the respective shots (see Eronen, paragraph [0094]: “applying known methods for object segmentation on the video frames, obtaining the largest/most dominant shape, and then comparing the detected shape against a set of template shapes. … a direction of a dominant movement in the shot (X, Y or Z-axis). According to an embodiment, this can be obtained as directions of video motion vectors for aesthetic source videos and as a combination of motion vector directions and device gyroscope data for user-provided source videos”); generating an editing template based on the editing style (see Eronen, paragraph [0081]: “user may still have a specific style that he/she follows when creating his/her videos, which style could be analysed and modelled by the system. If popular enough, such a style may be allowed to be used by other users in creation of their automatic video remixes”); and storing the editing template (see Eronen, paragraph [0145]: “The system creates (312) a template for the new automatic video remix”). The motivation for combining the references has been discussed in claim 1 above.

Regarding claim 9, the combined teachings of Rav-Acha and Eronen as discussed above also disclose the computing system of claim 8, wherein the determining of the one or more shot boundaries is based at least in part on detecting a change in frame colors across frames of the source video (see Eronen, paragraph [0082]: “the color style; According to an embodiment, this can be obtained by histogramming the colors from the video frames. In particular, it may be determined whether the color style is black and white, certain tint, few colors vs. multiple colors in shot, use of particularly bright colors, or particularly dim”). The motivation for combining the references has been discussed in claim 1 above.

Regarding claim 10, the combined teachings of Rav-Acha and Eronen as discussed above also disclose the computing system of claim 8, wherein the determining of the one or more shot boundaries is based at least in part on analyzing key point matching across frames of the source video (see Eronen, paragraph [0061]: “The service creates an automatic cut of the video clips of the users. The service may analyze the sensory data to determine which are interesting points at each point in time during the event, and then make switches between different source media in the final cut”). The motivation for combining the references has been discussed in claim 1 above.

Claim 11 is rejected for the same reasons as discussed in claim 1 above.
Claim 12 is rejected for the same reasons as discussed in claim 2 above.
Claim 13 is rejected for the same reasons as discussed in claim 3 above.
Claim 14 is rejected for the same reasons as discussed in claim 4 above.
Claim 15 is rejected for the same reasons as discussed in claim 5 above.
Claim 16 is rejected for the same reasons as discussed in claim 6 above.
Claim 17 is rejected for the same reasons as discussed in claim 7 above.
Claim 18 is rejected for the same reasons as discussed in claim 8 above.
Claim 19 is rejected for the same reasons as discussed in claim 9 above.
Claim 20 is rejected for the same reasons as discussed in claim 1 above.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NIENRU YANG, whose telephone number is (571) 272-4212. The examiner can normally be reached Monday-Friday, 10 AM-6 PM EST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, the applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, THAI TRAN, can be reached at 571-272-7382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/NIENRU YANG/
Examiner, Art Unit 2484

/THAI Q TRAN/
Supervisory Patent Examiner, Art Unit 2484
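The techniques recited in claims 8-10 (detecting shot boundaries from changes in frame colors, histogramming colors across frames) are standard video-analysis operations. A minimal sketch of histogram-based shot-boundary detection, assuming OpenCV (`cv2`) is installed; the 0.6 correlation threshold is an illustrative assumption:

```python
# Minimal sketch of the shot-boundary detection discussed for claims 8-10:
# flag a boundary when the color histogram changes sharply between frames.
import cv2

def shot_boundaries(path: str, threshold: float = 0.6) -> list[int]:
    cap = cv2.VideoCapture(path)
    boundaries, prev_hist, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # 3D BGR histogram, normalized so comparison is scale-invariant.
        hist = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8],
                            [0, 256, 0, 256, 0, 256])
        cv2.normalize(hist, hist)
        if prev_hist is not None:
            # Low correlation between consecutive histograms => likely cut.
            if cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL) < threshold:
                boundaries.append(idx)
        prev_hist, idx = hist, idx + 1
    cap.release()
    return boundaries
```

Claim 10's keypoint-matching variant would replace the histogram comparison with feature matching between consecutive frames (e.g., ORB descriptors); the surrounding loop is unchanged.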

Prosecution Timeline

Feb 14, 2025: Application Filed
Feb 13, 2026: Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this examiner involving similar technology:

Patent 12604024: REPRODUCTION DEVICE, REPRODUCTION METHOD, AND RECORDING MEDIUM (granted Apr 14, 2026; 2y 5m to grant)
Patent 12592259: SYSTEMS AND METHODS TO EDIT VIDEOS TO REMOVE AND/OR CONCEAL AUDIBLE COMMANDS (granted Mar 31, 2026; 2y 5m to grant)
Patent 12586609: USING AUDIO ANCHOR POINTS TO SYNCHRONIZE RECORDINGS (granted Mar 24, 2026; 2y 5m to grant)
Patent 12581030: REPRODUCTION DEVICE, REPRODUCTION METHOD, AND RECORDING MEDIUM (granted Mar 17, 2026; 2y 5m to grant)
Patent 12556720: LEARNED VIDEO COMPRESSION AND CONNECTORS FOR MULTIPLE MACHINE TASKS (granted Feb 17, 2026; 2y 5m to grant)

Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 72%
With Interview: 99% (+28.7%)
Median Time to Grant: 2y 9m
PTA Risk: Low

Based on 399 resolved cases by this examiner; grant probability derived from career allow rate.
