Prosecution Insights
Last updated: April 19, 2026
Application No. 18/509,420

GENERATING PERSONALIZED VIDEOS WITH CUSTOMIZED TEXT MESSAGES

Non-Final OA: §102, §103
Filed: Nov 15, 2023
Examiner: LI, RUIPING
Art Unit: 2676
Tech Center: 2600 (Communications)
Assignee: Snap Inc.
OA Round: 5 (Non-Final)
Grant Probability: 77% (Favorable)
Expected OA Rounds: 5-6
Time to Grant: 2y 10m
With Interview: 95%

Examiner Intelligence

Grants 77% of cases (above average)
Career Allow Rate: 77% (722 granted / 933 resolved; +15.4% vs TC avg)
Interview Lift: +18.0% (allow rate for resolved cases with vs. without an interview)
Typical timeline: 2y 10m avg prosecution; 40 applications currently pending
Career history: 973 total applications across all art units
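
The headline percentages reduce to simple arithmetic on the examiner's career record. Below is a minimal Python sketch of that arithmetic, assuming the tool computes the allow rate as grants over resolved cases and treats the interview lift as a straight additive percentage-point adjustment (both are assumptions; the dashboard's actual model is not shown):

    # Illustrative only: reproduces the dashboard's headline figures
    # under an assumed additive-interview-lift model.
    granted = 722
    resolved = 933

    allow_rate = granted / resolved        # 0.7738..., shown as "77%"
    interview_lift = 0.18                  # the "+18.0%" lift shown above

    with_interview = min(allow_rate + interview_lift, 1.0)

    print(f"Career allow rate: {allow_rate:.1%}")      # 77.4%
    print(f"With interview:    {with_interview:.1%}")  # 95.4%, shown as "95%"

The output matches the 77% and 95% figures reported above, so the simple additive model is at least consistent with what the dashboard displays.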

Statute-Specific Performance

§101: 13.0% (-27.0% vs TC avg)
§103: 41.2% (+1.2% vs TC avg)
§102: 25.9% (-14.1% vs TC avg)
§112: 13.7% (-26.3% vs TC avg)
Deltas are measured against the Tech Center average estimate • Based on career data from 933 resolved cases
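
Each delta is simply the examiner's per-statute rate minus the Tech Center average. As a quick consistency check, the implied TC averages can be back-solved from the figures shown (these are derived numbers, not data from the tool):

    # delta = examiner_rate - tc_avg, so tc_avg = examiner_rate - delta
    rates  = {"101": 13.0, "103": 41.2, "102": 25.9, "112": 13.7}
    deltas = {"101": -27.0, "103": 1.2, "102": -14.1, "112": -26.3}

    for statute, rate in rates.items():
        tc_avg = rate - deltas[statute]
        print(f"§{statute}: implied TC average ~{tc_avg:.1f}%")

Every statute back-solves to the same ~40.0% baseline, which suggests the "Tech Center average estimate" here is a single flat figure rather than a per-statute average.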

Office Action

Grounds: §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

2. A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 10/16/2025 has been entered.

3. In the applicant's submission, claims 1, 11, and 20 were amended. Accordingly, claims 1-20 are pending and being examined. Claims 1, 11, and 20 are in independent form.

Claim Rejections - 35 USC § 102

4. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

5. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

6. Claims 1-6, 8-16, and 18-20 are rejected under 35 U.S.C. 102(a)(1)/102(a)(2) as being anticipated by Taylor et al. (US 2018/0226101, hereinafter "Taylor").

Regarding claim 1, Taylor discloses a method for generating personalized videos (the system and method for creating a video (multimedia files) and animating text along with the video in real time; see figs. 1, 2A, and para. 52), the method comprising:

receiving, by a computing device, a video template including a sequence of frame images, a default text (wherein the text font is a default font chosen by the system; see para. 97: "At step 420 in the method 400, the platform measures the width and height of the text in a font chosen by the user or a default font chosen by the system."), and at least one parameter for animation of the default text across the sequence of frame images (see para. 55: "to receive user inputs 140, which can include, for example, input media files (e.g., music file, video file without captions, etc.), user commands (e.g., indication of timing locations for inserting lyrics or captions), and text to be inserted into the media file." Also see 210-240 of fig. 2A and paras. 74-76. Also see 211 and 221 of fig. 2B, wherein 211 is "input text" including a series of text strings (text 1, text 2, ..., text 5) entered by the user for creating multimedia files; see para. 82);

providing, by the computing device (see the "additional actions applied to highlighted word(s)" in fig. 2C; the additional actions include changing the color, effect, and style of the selected words; see para. 85), a user with a first option to replace the default text with an input text (see para. 61: "The [input] text size and other style elements can change accordingly, depending on this number of fingers. For example, a user can change the font size using two fingers to tap the screen, or change the style elements using three fingers." Also see para. 79: "any desired effects, chosen fonts, text and video options can also be provided for the user. Similarly, the user can also send any desired effects, chosen fonts, text and video options to the server to render the multimedia file.");

in response to receiving the input text, generating, by the computing device and based on the input text and the at least one parameter for animation, a configuration file (see the created multimedia file shown by 271 in fig. 2B) including a plurality of parameters for rendering an output video, the plurality of parameters including: a text style for the input text for a frame in the sequence of frame images; a predefined position for the input text in the frame; and a number of text lines for the frame (wherein the multimedia file (i.e., the video) 271 of fig. 2B is created by inserting each of the texts into the respective specified frame in the video, such as "TEXT 1" into frame 1 from t=:00 to t=:05, "TEXT 2" into frame 2 from t=:05 to t=:10, ..., and "TEXT 5" into frame 5 from t=:17.5 to t=:20; wherein each of the inserted texts has a different style and/or different colors (e.g., "highlighted words" 232 in fig. 2C); and wherein each of the inserted texts has the predetermined number of text lines for the frame as shown by fig. 2C);

splitting, by the computing device, based on the configuration file and a length of the input text, the input text into the predetermined number of text lines (wherein the "input text" 211 is separated into five text groups (TEXT 1, TEXT 2, ..., TEXT 5) for five frames as shown in fig. 2B; the text for frame 1 (i.e., "TEXT 1") is in turn separated into two text lines as shown in fig. 5B; see fig. 5A, fig. 5B, and paras. 100-103);

rendering, by the computing device and based on the configuration file, an output frame of the output video, the output frame including the frame in the sequence of frame images and a layer, the layer including the input text stylized based on the text style and placed at the predefined position and in the number of text lines (see 271 of fig. 2B, and para. 84: "the audio file is layered with the video files, the picture files, and the text strings so as to render the multimedia file, which can also be made available for downloading." Also see para. 71: "The processor 130 renders a multimedia file [into which a plurality of texts are inserted in the specified video frames] by stitching together frames using Ffmpeg and Node Canvas or other suitable technology. The rendering is based on the media file(s), the timestamps, and the sequence of text strings." Also see para. 61: "The text size and other style elements can change accordingly." As shown in fig. 5B, the output frame includes the video/photo 580 and the stylized text 575 having two text lines.);

playing back, by the computing device, the output video; while playing back, providing, by the computing device, a user with a second option to change the at least one parameter for animation (see para. 31: "Then, the user chooses the speed of the audio playback when recording. The speed can be either 25%, 50%, or 75% of the original speed. This selection of speed can take about 5 seconds. In next step, the user presses a record button, provided by the server through the user interface."); and

upon receiving an indication that the user has changed the at least one parameter for animation, dynamically changing, in the output video, by the computing device, the position of the input text in the output frame according to the at least one parameter for animation (see para. 32: "When finished recording, the song plays back automatically and the user can customize the theme, font and effects. This step can take about 30 seconds. At this point, the lyric video is completed and can be saved to the computer or shared to the web once finished rendering. This step can take about 1 minute to about 3 minutes.").
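
Editorial note: the claim 1 language quoted above describes a concrete per-frame data structure (text style, predefined position, number of text lines) plus a length-based line splitter. A minimal Python sketch of that claimed structure follows, assuming a greedy, character-balanced word wrap; all names and values are illustrative, and this reflects the claim as drafted, not Taylor's or the applicant's actual code:

    from dataclasses import dataclass

    @dataclass
    class FrameTextConfig:
        """Per-frame rendering parameters as recited in claim 1."""
        text_style: str            # e.g., a font/color/effect identifier
        position: tuple[int, int]  # predefined (x, y) position in the frame
        num_lines: int             # predetermined number of text lines

    def split_into_lines(input_text: str, num_lines: int) -> list[str]:
        """Greedily split the input text into the configured number of
        lines, balancing roughly by character length."""
        words = input_text.split()
        target = max(1, len(input_text) // num_lines)
        lines, current = [], ""
        for word in words:
            candidate = f"{current} {word}".strip()
            if len(candidate) > target and current and len(lines) < num_lines - 1:
                lines.append(current)
                current = word
            else:
                current = candidate
        lines.append(current)
        return lines

    config = FrameTextConfig(text_style="bold-yellow", position=(40, 600), num_lines=2)
    print(split_into_lines("happy birthday to the best friend ever", config.num_lines))
    # -> ['happy birthday to', 'the best friend ever']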
Regarding claims 2 and 12, Taylor discloses, further comprising, prior to the rendering the layer: splitting, by the computing device, the input text into a predetermined number of lines; generating, by the computing device and based on the input text, glyphs according to the text style; selecting, by the computing device and based on bounds of the frame in the sequence of frame images, a global scale for the glyphs; and pre-rendering, by the computing device, the layer including the glyphs resized according to the global scale (see fig. 2B and para. 82: "a series of text strings (text 1, text 2, . . . , text 5) are entered by the user. At step 221, a series of media files, including video files (video 1 and video 2) and picture file (picture 1), are entered by the user. Each video file is associated with one or more specific text strings. For example, as shown in FIG. 2B, text 1 is associated with video 1. Text 2, text 3, and text 4 are associated with video 2. Text 5 is associated with picture 1." See fig. 2C and para. 86: "The user can then change the color, effect, and/or style of the selected text.").

Regarding claims 3 and 13, Taylor discloses, wherein the pre-rendered layer includes one of the following: outlines for the glyphs and shadows for the glyphs (see "highlighted word(s)" in fig. 2C).

Regarding claims 4 and 14, Taylor discloses, wherein the input text in the layer is resized based on a global scale, the global scale being selected to fit the input text across frame images in the sequence of frame images when the input text is animated according to the at least one parameter for animation (see fig. 4 and para. 97: "a method 400 of automatic resizing of text in videos. At step 410 in the method 400, a line of text is provided by the user to a platform. At step 420 in the method 400, the platform measures the width and height of the text in a font chosen by the user or a default font chosen by the system. At step 430 in the method 400, a scale ratio number is compared to the maximum width and height using the measurements acquired at step 420. This comparison can be calculated by the respective video measurements minus the padding given to each video. This calculation allows each line of text to automatically resize to the width of the window.").

Regarding claims 5 and 15, Taylor discloses, wherein the at least one parameter for animation includes a change in positions of the input text across frame images of the sequence of frame images (see fig. 4 and para. 97, quoted above in the rejection of claims 4 and 14).

Regarding claims 6 and 16, Taylor discloses, wherein the at least one parameter for animation includes a change in a font size of the input text across frame images of the sequence of frame images (see fig. 4 and para. 97, quoted above).

Regarding claims 8 and 18, Taylor discloses, further comprising: providing, by the computing device, an option enabling a user to change the at least one parameter for animation; and upon receiving an indication that the user has changed the at least one parameter for animation, dynamically changing, in the output video, by the computing device, the text style for the input text according to the changed at least one parameter for animation (see fig. 4 and para. 97, quoted above).

Regarding claims 9 and 19, Taylor discloses, wherein the option to change the at least one parameter for animation includes adding at least one visual effect to be applied to the input text (see "highlighted word(s)" and "text effect" in fig. 2C, and para. 85).

Regarding claim 10, Taylor discloses the method of claim 8, where the option to change the at least one parameter for animation includes selecting at least one text parameter for the input text from a previously created list of text parameters (see "highlighted word(s)" and "text effect" in fig. 2C, and para. 85).
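
Editorial note: the fig. 4 / para. 97 passage that the rejections of claims 4-8 rely on amounts to a fit-to-width rule: measure the line of text at a base font size, compare it to the frame width minus padding, and scale accordingly. A sketch of that calculation, using a crude per-character width estimate since Taylor does not specify a text-measurement API (char_aspect and the sample values are assumptions):

    def fit_font_size(text: str, frame_width: int, padding: int = 20,
                      base_size: int = 24, char_aspect: float = 0.6) -> float:
        """Scale a base font size so one line of text fills the frame
        width minus padding (the "scale ratio" comparison of para. 97).
        char_aspect approximates glyph width as a fraction of font size."""
        measured_width = len(text) * base_size * char_aspect  # width at base size
        available = frame_width - 2 * padding
        scale_ratio = available / measured_width
        return base_size * scale_ratio

    # Example: one line in a 1080 px wide frame with 20 px padding per side
    print(f"{fit_font_size('HAPPY BIRTHDAY', 1080):.1f} px")  # ~123.8 px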
Regarding claims 11 and 20, each of which is an inherent variation of claim 1, each is interpreted and rejected for the reasons set forth in the rejection of claim 1.

Claim Rejections - 35 USC § 103

7. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

8. Claims 7 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Taylor et al. (US 2018/0226101, hereinafter "Taylor").

Regarding claims 7 and 17, Taylor does not explicitly disclose "the at least one parameter for animation includes a change in an orientation of the input text across frame images of the sequence of frame images" as recited in the claim. However, Taylor, para. 97, teaches a method of automatic resizing of text in videos (quoted in full above in the rejection of claims 4 and 14), in which "a line of text is provided by the user to a platform" and "the platform measures the width and height of the text in a font chosen by the user or a default font chosen by the system." It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that these teachings of Taylor, namely "a line of text [is] provided by the user" and "the width and height of the text in a font chosen by the user", determine "an orientation of the input text across frame images of the sequence of frame images" as recited in the claim. Therefore, the claim is unpatentable over Taylor.

Response to Arguments

9. Applicant's arguments filed on 10/16/2025 have been fully considered but they are not persuasive. On page 10 of applicant's response, applicant argues:

"However, the media file in Taylor does not include any pre-authored default text. Neither does Taylor disclose that the media file includes any animation parameters associated with the default text and defined prior to receiving user input. ... However, in Taylor, the user is not provided with an option to replace the default text in the media file with the input text because no default text is present in the media file."

The examiner respectfully disagrees with the arguments. As explained in the rejections of the claims, in paragraph [0097] Taylor clearly states: "[a]t step 420 in the method 400, the platform measures the width and height of the text in a font chosen by the user or a default font chosen by the system." This means that the text font to be inserted into a video is "a default font chosen by the system."

Further, in paragraph [0085] and the "additional actions applied to highlighted word(s)" in fig. 2C, Taylor discloses the additional actions including changing the "color", "effect", and "style" (i.e., font) of the selected words. Specifically, in paragraph [0061], Taylor clearly states: "The [input] text size and other style elements can change accordingly, depending on this number of fingers. For example, a user can change the font size using two fingers to tap the screen, or change the style elements using three fingers."

For at least the reasons set forth above, the applicant's arguments are unpersuasive, and the examiner maintains the rejections.

Conclusion

10. Any inquiry concerning this communication or earlier communications from the examiner should be directed to RUIPING LI, whose telephone number is (571) 270-3376. The examiner can normally be reached 8:30am-5:30pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, HENOK SHIFERAW, can be reached at (571) 272-4637. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. See https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/RUIPING LI/
Primary Examiner, Ph.D., Art Unit 2676

Prosecution Timeline

Nov 15, 2023
Application Filed
Nov 20, 2024
Non-Final Rejection — §102, §103
Feb 04, 2025
Response Filed
Mar 04, 2025
Final Rejection — §102, §103
Apr 16, 2025
Request for Continued Examination
Apr 21, 2025
Response after Non-Final Action
Jun 25, 2025
Non-Final Rejection — §102, §103
Sep 24, 2025
Response Filed
Oct 02, 2025
Final Rejection — §102, §103
Oct 16, 2025
Request for Continued Examination
Oct 23, 2025
Response after Non-Final Action
Jan 16, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602754
DYNAMIC IMAGING AND MOTION ARTIFACT REDUCTION THROUGH DEEP LEARNING
2y 5m to grant • Granted Apr 14, 2026
Patent 12597183
METHOD AND APPARATUS FOR PERFORMING PRIVACY MASKING BY REFLECTING CHARACTERISTIC INFORMATION OF OBJECTS
2y 5m to grant • Granted Apr 07, 2026
Patent 12597289
IMAGE ACCUMULATION APPARATUS, METHOD, AND NON-TRANSITORY COMPUTER-READABLE MEDIUM
2y 5m to grant • Granted Apr 07, 2026
Patent 12586408
METHOD AND APPARATUS FOR CANCELLING ANONYMIZATION FOR AN AREA INCLUDING A TARGET
2y 5m to grant • Granted Mar 24, 2026
Patent 12573239
SYSTEM AND METHOD FOR LIVENESS VERIFICATION
2y 5m to grant • Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 77%
With Interview: 95% (+18.0% lift)
Median Time to Grant: 2y 10m
PTA Risk: High
Based on 933 resolved cases by this examiner. Grant probability is derived from the career allow rate.
