Prosecution Insights
Last updated: April 19, 2026
Application No. 19/019,401

Video Editing System

Non-Final OA §102, §103
Filed
Jan 13, 2025
Examiner
ZHAO, DAQUAN
Art Unit
2484
Tech Center
2400 — Computer Networks
Assignee
Pixxy Video Solutions Inc.
OA Round
1 (Non-Final)
77%
Grant Probability
Favorable
1-2
OA Rounds
2y 9m
To Grant
92%
With Interview

Examiner Intelligence

Grants 77% — above average
77%
Career Allow Rate
791 granted / 1029 resolved
+18.9% vs TC avg
+14.8%
Interview Lift
Moderate (~+15%) lift, based on resolved cases with interview
Typical timeline
2y 9m
Avg Prosecution
24 currently pending
Career history
1053
Total Applications
across all art units
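The headline figures above can be reproduced from the stated counts. The sketch below is our own arithmetic check (not part of the report); it assumes, per the footnote in the projections section, that the displayed grant probability is simply the rounded career allow rate, and that the "with interview" figure is that rate plus the interview lift.

```python
# Reproducing the dashboard's headline figures from the stated counts.
# Assumption (ours): grant probability = career allow rate, and the
# "with interview" figure = that rate plus the +14.8% interview lift.

granted, resolved = 791, 1029           # career totals shown on the tile
allow_rate = 100 * granted / resolved   # career allow rate, in percent

print(round(allow_rate, 1))             # 76.9 -> displayed as 77%
print(round(allow_rate) + 14.8)         # 91.8 -> displayed as 92%
```

The stated 77% and 92% tiles are consistent with this reading of the data.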

Statute-Specific Performance

§101: 11.0% (-29.0% vs TC avg)
§103: 44.9% (+4.9% vs TC avg)
§102: 20.3% (-19.7% vs TC avg)
§112: 14.0% (-26.0% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 1029 resolved cases
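Each statute tile reports a rate together with its delta against the Tech Center average, so the implied TC baseline can be back-computed as rate minus delta. The short check below (our own arithmetic, not part of the report) shows that all four tiles imply the same 40.0% Tech Center average, which matches the single "black line" estimate in the chart.

```python
# Back-computing the implied Tech Center average from each statute tile:
# implied TC average = displayed rate - displayed delta.

tiles = {
    "101": (11.0, -29.0),
    "103": (44.9, +4.9),
    "102": (20.3, -19.7),
    "112": (14.0, -26.0),
}

for statute, (rate, delta) in tiles.items():
    implied_avg = round(rate - delta, 1)
    print(f"§{statute}: implied TC avg = {implied_avg}%")  # 40.0% for each
```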

Office Action

§102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-2 and 7 are rejected under 35 U.S.C. 102(a)(1) as being described by Griffin (US 2007/0201815).

For claim 1, Griffin teaches a method of caching a multi-source video to expedite video editing (e.g. abstract), the method comprising:

receiving a first multi-source video comprising a plurality of portions of video, each portion of video associated with one of a plurality of video streams, each video stream associated with one of a plurality of cameras (e.g. figures 1-2, cameras 12, 14, 16 and 18; figure 3, paragraph 36: the video streams 102, 104, 106, 108 may correspond to the separate video streams recorded by the video cameras 12, 14, 16, 18 shown in FIGS. 1 and 2);

extracting one of a plurality of source identifiers from each of the plurality of portions of video, each source identifier associated with one of the plurality of video streams (e.g. figure 4, paragraph 22: synchronizing time stamp values associated with a plurality of individually recorded video streams; or paragraph 43: video segments from the different video streams 102, 104, 106, 108 having overlapping time stamps, indicating that they were recorded at substantially the same time, are recordings of the same event or play);

automatically retrieving the plurality of portions of video in the first multi-source video based on the embedded source identifiers (e.g. paragraph 43: Thus, time stamp information may be used to identify video files from different video streams storing video segments recording the same event or play. Once video files containing common video segments have been identified, they may be aligned or grouped together, so that a video editor may select a particular video segment from one of the video streams that best illustrates an aspect of the play that the video editor wishes to highlight);

receiving editing commands from a user, each editing command configured to modify at least one portion of video from the first multi-source video (e.g. paragraphs 34-35: a user may append additional information to the various files further identifying the video segments stored in each file. For example, for a video recording of a football game in which each play is stored in a separate video file, additional data appended to each video file storing a video segment corresponding to a particular play might include information indicating which team had possession of the football during the play, the down and distance to go for a first down, the location of the line of scrimmage, and so forth. All such information stored with the various video files will help a video editor locate desired plays and retrieve and manipulate corresponding video clips for preparing a customized video presentation or program);

modifying at least one portion of video from the first multi-source video based on the editing commands (e.g. paragraphs 34-35: a user may append additional information to the various files further identifying the video segments stored in each file); and

generating a second multi-source video based on the first multi-source video and the at least one modified portion of video (e.g. paragraph 35: All such information stored with the various video files will help a video editor locate desired plays and retrieve and manipulate corresponding video clips for preparing a customized video presentation or program).

For claim 7, Griffin teaches causing the multi-source video to pre-cache the selected portions of video based on the embedded source identifiers (e.g. paragraph 32: The video editing system 20 includes a video processor 22, a large digital video data storage device 24 for storing both raw and edited digital video, and a user interface 26 allowing a user to interact with and control the video editing system 20), comprising: providing a video editing program (e.g. paragraph 32: video editing system 20); importing the multi-source video with embedded source identifiers into the editing program (e.g. paragraph 43, as quoted above); reading the plurality of source identifiers embedded in the first multi-source video with the editing program (e.g. paragraph 43, as quoted above); and automatically retrieving, using the editing program, the portions of video from each of the plurality of video streams based on the embedded source identifiers (e.g. paragraph 43, as quoted above).

For claim 2, Griffin teaches each of the plurality of source identifiers is embedded in at least one pixel in the first multi-source video (e.g. figure 4, paragraph 45: The time stamps may be synchronized by previewing video segments stored in video files from the different video streams and visually identifying video segments corresponding to the same event or play. FIG. 4 shows an interface page 200 that may be used to synchronize the time stamp values associated with video segments from a plurality of video streams.).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 10-11 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Baker et al. (US 2003/0193616) in view of Griffin (US 2007/0201815).

For claim 10, Baker et al. teach a system for generating a multi-source video to expedite video editing, the system comprising: a video switch (e.g. first figure: camera switch 22) configured to: a) receive a plurality of video streams, each video stream associated with one of a plurality of cameras (e.g. first figure: Cam 1, Cam 2, Cam 3, etc.); and b) select at least one portion of video from each of the plurality of video streams; a source identifier mapping module configured to generate a plurality of source identifiers, each source identifier associated with one of the plurality of video streams (e.g. paragraph 8: The camera switch 22 selects one of the source identified video signals); and a video watermarking module (e.g. first figure, paragraph 8: "watermark encoder") configured to embed, for each selected portion of video, one of the plurality of source identifiers in the portion of video, wherein the source identifier embedded in each selected portion of video corresponds to the video stream from which the portion of video was received (e.g. abstract or paragraph 4: "…The video signal from each video source has a source ID embedded in it, either in the form of vertical interval time code user bits or in the form of a watermark in the active video portion of the video signal, or both…").

Baker et al. do not further disclose: b) generating a multi-source video comprising the selected portions of video with embedded source identifiers; and a video editing processor comprising memory configured to a) pre-cache the selected portions of video in memory based on the multi-source video and embedded source identifiers, b) receive editing commands from a user, each editing command configured to modify at least one portion of video from the first multi-source video based on the editing commands, and c) generate a second multi-source video based on the first multi-source video and the at least one modified portion of video.

Griffin teaches: b) generating a multi-source video comprising the selected portions of video with embedded source identifiers (e.g. figure 2, paragraph 32: "The video processor 22 is adapted to receive raw unedited video streams from the various cameras 12, 14, 16, 18 used to record an event such as a football game. Preferably, the video streams received from the cameras are received in the form of a plurality of discrete digital video files. Preferably each digital video file contains a discrete video segment of the recorded event, such as a video recording of an individual play executed during the football game." Paragraph 34: "For example, each video stream may be identified by the name of the opponents playing in the recorded game, the date the video was recorded, and the viewing angle represented by the particular video stream. For example, the video stream received from camera 12 might be identified as 'Wolverines v. Trojans Jan. 1, 2007 Wide Angle.'"); and a video editing processor comprising memory configured to a) pre-cache the selected portions of video in memory based on the multi-source video and embedded source identifiers (e.g. paragraph 32: "The video editing system 20 includes a video processor 22, a large digital video data storage device 24 for storing both raw and edited digital video"), b) receive editing commands from a user, each editing command configured to modify at least one portion of video from the first multi-source video based on the editing commands (e.g. paragraphs 34-35, as quoted for claim 1 above), and c) generate a second multi-source video based on the first multi-source video and the at least one modified portion of video (e.g. paragraph 43, as quoted above).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Griffin into the teaching of Baker et al. to store a plurality of video segments received from multiple sources (e.g. Griffin, paragraph 15) and to allow quick and easy access to, and control of playback of, video files (e.g. Griffin, paragraph 11).

For claim 11, Griffin teaches each of the plurality of source identifiers is embedded in at least one pixel in the first multi-source video (e.g. figure 4, paragraph 45, as quoted for claim 2 above). The motivation for combining the references has been discussed in claim 10 above.

For claim 16, Griffin teaches causing the multi-source video to pre-cache the selected portions of video based on the embedded source identifiers (e.g. paragraph 32, as quoted above), comprising: providing a video editing program (e.g. paragraph 32: video editing system 20); importing the multi-source video with embedded source identifiers into the editing program; reading the plurality of source identifiers embedded in the first multi-source video with the editing program; and automatically retrieving, using the editing program, the portions of video from each of the plurality of video streams based on the embedded source identifiers (each e.g. paragraph 43, as quoted above). The motivation for combining the references has been discussed in claim 10 above.

Claims 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Baker et al. and Griffin, as applied to claims 10-11 and 16 above, and further in view of Suh (US 2005/0053353).

For claim 17, Baker et al. and Griffin do not further disclose removing the plurality of source identifiers embedded in the second multi-source video. Suh teaches removing the plurality of source identifiers embedded in the second multi-source video (e.g. paragraph 37: the time stamp is removed from the transport stream; or paragraph 42: the upload control part 230 removes the time stamp from the stream). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Suh into the teaching of Baker et al. and Griffin to remove the time stamp after reading or decoding a video stream on the basis of the time stamp (e.g. Suh, paragraph 42), thereby improving storage efficiency.

For claim 18, Baker et al. do not further disclose generating a map consisting of a plurality of source identifiers and a plurality of sources of video, wherein the map associates each of the plurality of source identifiers with one of the plurality of sources of video. Griffin teaches this limitation (e.g. paragraph 34: For example, each video stream may be identified by the name of the opponents playing in the recorded game, the date the video was recorded, and the viewing angle represented by the particular video stream. For example, the video stream received from camera 12 might be identified as "Wolverines v. Trojans Jan. 1, 2007 Wide Angle." The video stream received from camera 14 might be identified as "Wolverines v. Trojans Jan. 1, 2007 Tight Angle." The video stream from camera 16 might be identified as "Wolverines v. Trojans Jan. 1, 2007 Reverse Angle." Finally, the video stream from camera 18 might be identified as "Wolverines v. Trojans Jan. 1, 2007 End Zone."). The motivation for combining the references has been discussed in claim 10 above.

Claims 8-9 are rejected under 35 U.S.C. 103 as being unpatentable over Griffin, as applied to claims 1-2 and 7 above, and further in view of Suh (US 2005/0053353).

For claim 8, Griffin does not further disclose removing the plurality of source identifiers embedded in the second multi-source video. Suh teaches this limitation (e.g. paragraphs 37 and 42, as quoted for claim 17 above), and the same rationale for combining the references applies.

For claim 9, Griffin teaches generating a map consisting of a plurality of source identifiers and a plurality of sources of video, wherein the map associates each of the plurality of source identifiers with one of the plurality of sources of video (e.g. paragraph 34, as quoted for claim 18 above).

Allowable Subject Matter

Claims 3-6 and 12-15 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAQUAN ZHAO, whose telephone number is (571) 270-1119. The examiner can normally be reached M-Thur, 7:00 am-5:00 pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Thai Tran, can be reached at 571-272-7382. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

Email: daquan.zhao1@uspto.gov
Phone: (571) 270-1119

/DAQUAN ZHAO/
Primary Examiner, Art Unit 2484

Prosecution Timeline

Jan 13, 2025
Application Filed
Feb 05, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597257
MONITORING SYSTEM AND METHOD FOR RECOGNIZING THE ACTIVITY OF DETERMINED PERSONS
2y 5m to grant Granted Apr 07, 2026
Patent 12593108
SYSTEMS AND METHODS FOR AUTOMATED SPEECH-TO-TEXT CAPTIONING
2y 5m to grant Granted Mar 31, 2026
Patent 12587609
ELECTRONIC DEVICE AND CONTROL METHOD FOR CONTROLLING SPEED OF WORKOUT VIDEO
2y 5m to grant Granted Mar 24, 2026
Patent 12587721
VIDEO PROCESSING METHOD, APPARATUS AND SYSTEM
2y 5m to grant Granted Mar 24, 2026
Patent 12586610
METHOD, APPARATUS, DEVICE, STORAGE MEDIUM AND PROGRAM PRODUCT FOR VIDEO GENERATION
2y 5m to grant Granted Mar 24, 2026
Study what these applicants changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
77%
Grant Probability
92%
With Interview (+14.8%)
2y 9m
Median Time to Grant
Low
PTA Risk
Based on 1029 resolved cases by this examiner. Grant probability derived from career allow rate.
