Prosecution Insights
Last updated: April 19, 2026
Application No. 18/314,112

Scene removal, cinematic video editing, and image grid operations in a video editing application

Status: Non-Final OA (§103)
Filed: May 08, 2023
Examiner: YANG, NIENRU
Art Unit: 2484
Tech Center: 2400 — Computer Networks
Assignee: Apple Inc.
OA Round: 4 (Non-Final)
Grant Probability: 72% (Favorable)
Expected OA Rounds: 4-5
Time to Grant: 2y 9m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 72% (287 granted / 399 resolved; +13.9% vs TC avg; above average)
Interview Lift: +28.7% (strong; allowance rate among resolved cases with an interview vs. without)
Avg Prosecution: 2y 9m (typical timeline)
Currently Pending: 30
Total Applications: 429 (across all art units)

Statute-Specific Performance

§101: 5.6% (-34.4% vs TC avg)
§102: 6.5% (-33.5% vs TC avg)
§103: 73.6% (+33.6% vs TC avg)
§112: 7.8% (-32.2% vs TC avg)
Tech Center averages are estimates. Based on career data from 399 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/21/2025 has been entered.

Preliminary Remarks

This is a reply to the amendments filed on 11/21/2025, in which claims 1, 9, and 17 are amended. Claims 1-23 remain pending in the present application, with claims 1, 9, and 17 being independent claims. When making claim amendments, the applicant is encouraged to consider the references in their entireties, including those portions that have not been cited by the examiner and their equivalents, as they may most broadly and appropriately apply to any particular anticipated claim amendments.

Response to Arguments

Applicant's amendments filed on 11/21/2025 with respect to claims 1, 9, and 17 have been considered but are moot in view of the new ground(s) of rejection.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-22 are rejected under 35 U.S.C. 103 as being unpatentable over Takao et al. (US 20220400208 A1, hereinafter referred to as "Takao") in view of Thurston et al. (US 11425283 B1, hereinafter referred to as "Thurston").

Regarding claim 1, Takao discloses a non-transitory computer readable medium comprising one or more sequences of instructions which, when executed by one or more processors, cause the one or more processors to perform operations (see Takao, paragraph [0007]: "a non-transitory computer-readable medium storing a program executable by a computer") comprising:

receiving a first video clip comprising a first set of frames (see Takao, paragraph [0064]: "The communication unit 222 can transmit images shot by the image capturing unit 211 (including live images), images recorded in the recording medium 228, and the like, and can also receive image data and various other types of information from external devices"; and paragraph [0313]: "a live view image shot by the image capture apparatus can be received by the control apparatus through wired or wireless communication and displayed"), wherein the first video clip is generated by concurrently recording a first entity and a second entity using an image capture device (see Takao, FIG. 24 and paragraph [0183]: "the system control unit 218 controls the image processing unit 214 to generate display images and sequentially write the images into the video memory region of the system memory 219 while the image capturing unit 211 continually shoots a moving image. As a result, the live view image is displayed in the EVF 217 or the display unit 108. The processing from step S702 onward is executed in parallel with the live view display");

wherein the first entity is (a) displayed within each of the first set of frames and (b) is in-focus in each of the first set of frames (see Takao, FIGS. 24A to 24C and paragraph [0269]: "the focus lens position of the imaging optical system that forms the other image may be adjusted so that the degree of focus is equivalent to the one of the left image and the right image in the live view image that is perceived to be in focus to a higher degree"), and wherein the second entity is (a) displayed within each of the first set of frames and (b) is out-of-focus in each of the first set of frames (see Takao, FIGS. 24A to 24C and paragraph [0269]: "the focus lens position of the imaging optical system that forms the other image may be adjusted so that the degree of focus is equivalent to the one of the left image and the right image in the live view image that is perceived to be in focus to a higher degree");

receiving user input requesting a second video clip where the second entity is in-focus (see Takao, FIGS. 24A to 24C and paragraph [0279]: "the user specifies a position to be brought into focus, the system control unit 218 changes the image so that the specified position is brought into focus");

responsive to receiving the user input, generating a second video clip comprising a second set of frames with: the second entity being displayed in-focus within each of the second set of frames (see Takao, paragraph [0277]: "FIGS. 24A to 24C are diagrams schematically illustrating a change in an in-focus subject through refocusing processing. FIG. 24A illustrates a state in which a subject 2403 is in focus but subjects 2402 and 2404 are out of focus"); and the first entity being displayed out-of-focus within each of the second set of frames (see Takao, paragraph [0278]: "If the image illustrated in FIG. 24A (a still image or a single frame of a moving image) is recorded in a refocusable format, the image can be changed such that the subject 2402 or 2404 is in focus. FIGS. 24B and 24C illustrate images changed so that the subjects 2402 and 2404 are in focus, respectively").

Regarding claim 1, Takao discloses all the claimed limitations with the exception of storing the second video clip. Thurston, from the same or similar field of endeavor, discloses storing the second video clip (see Thurston, col. 17, lines 47-49: "The sequence of images may form a timed sequence of images such as a video sequence. Rendered images can be stored in computer-readable memory"). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the teachings of Thurston with the teachings of Takao. The motivation for doing so would be to ensure the system has the ability to use the system and method disclosed in Thurston to form a timed sequence of images and to store the rendered images in computer-readable memory, thus storing the second video clip, in order to store data to a data repository so that the data can later be retrieved for quicker searching and retrieval.

Regarding claim 2, the combined teachings of Takao and Thurston as discussed above also disclose the non-transitory computer readable medium as recited in claim 1, wherein the operations further comprise: determining, for each of the first set of frames, a set of image portions corresponding to the second entity (see Thurston, col. 19, lines 47-52: "virtual object 190 is in softer focus (e.g., to de-emphasize its importance in the mind of a viewer), while physical objects 160 and 170 are strongly out of focus (e.g., to make them appear as part of the foreground)"); and sharpening the first set of image portions corresponding to the second entity to render the second entity in-focus (see Thurston, col. 19, lines 59-61: "if calibration image 220 is placed within focal plane 320 b, then the focus parameters and virtual focus model may be adjusted until calibration image 220 appears in sharp focus"). The motivation for combining the references has been discussed in claim 1 above.

Regarding claim 3, the combined teachings of Takao and Thurston as discussed above also disclose the non-transitory computer readable medium as recited in claim 1, wherein the operations further comprise: determining, for each of the first set of frames, a set of image portions corresponding to the first entity (see Thurston, col. 12, lines 52-54: "An image dataset might be provided to an animator that is a deep image of that scene, rendered into a deep image"); and softening the first set of image portions corresponding to the first entity to render the first entity out-of-focus (see Thurston, col. 12, lines 54-57: "It may be desirable to defocus objects within the scene to draw attention to different objects in the scene in a way that emulates the depth of field effect of a physical camera"). The motivation for combining the references has been discussed in claim 1 above.

Regarding claim 4, the combined teachings of Takao and Thurston as discussed above also disclose the non-transitory computer readable medium as recited in claim 1, wherein storing the second set of frames as the second video clip comprises storing metadata in association with the second video clip, the metadata comprising information corresponding to: (a) the first and second entities, and (b) which of the first and second entities is in-focus (see Thurston, col. 13, lines 61-67: "The precursor image for scene 106 may therefore be provided by a renderer and/or may be processed using a compositor. The precursor image may also be associated with precursor metadata that may include computer-generated imagery metadata, such as the scene description for scene 106 that is used in rendering and outputting scene 106 on display wall 102 using pixels 104"). The motivation for combining the references has been discussed in claim 1 above.

Regarding claim 5, the combined teachings of Takao and Thurston as discussed above also disclose the non-transitory computer readable medium as recited in claim 1, wherein the operations further comprise: receiving metadata associated with the first video clip, the metadata comprising information corresponding to the second entity, wherein the second set of frames is generated using the metadata (see Thurston, col. 10, lines 31-39: "The precursor image may be a single image displayed on the display wall or may be a sequence of images, such as frames of a video or animation. The precursor image may include precursor metadata for computer generated imagery and/or pixel display data for pixels of the display wall. In this regard, the precursor metadata may include output pixels, data, color, intensity, and the like for outputting the image on the display wall"). The motivation for combining the references has been discussed in claim 1 above.

Regarding claim 6, the combined teachings of Takao and Thurston as discussed above also disclose the non-transitory computer readable medium as recited in claim 5, wherein the metadata is recorded in a stream concurrently with recording of the first video clip, and wherein the stream is stored separately from the first video clip (see Thurston, col. 27, lines 19-24: "During or following the capture of a live action scene, live action capture system 1202 might output live action footage to a live action footage storage 1220. A live action processing system 1222 might process live action footage to generate data about that live action footage and store that data into a live action metadata storage 1224"). The motivation for combining the references has been discussed in claim 1 above.

Regarding claim 7, the combined teachings of Takao and Thurston as discussed above also disclose the non-transitory computer readable medium as recited in claim 1, wherein a third entity visible in the first set of frames is out-of-focus, and wherein the operations further comprise: generating a third set of frames with the third entity being in-focus and the first and second entities being out-of-focus (see Thurston, col. 15, lines 12-16: "a camera operator or camera operation algorithm may adjust the focus of the camera continuously through a rack focus range, recording image capture of the stage environment as the focus is adjusted"); and storing the third set of frames as a third video clip (see Thurston, col. 17, lines 47-49: "The sequence of images may form a timed sequence of images such as a video sequence. Rendered images can be stored in computer-readable memory"). The motivation for combining the references has been discussed in claim 1 above.

Regarding claim 8, the combined teachings of Takao and Thurston as discussed above also disclose the non-transitory computer readable medium as recited in claim 7, wherein storing the third set of frames as the third video clip comprises storing metadata in association with the third video clip, the metadata comprising information corresponding to: (a) the first, second, and third entities, and (b) which of the first, second, and third entities is in-focus (see Thurston, col. 18, lines 14-18: "The calibration image may be stored in a memory and retrieved for image generation and display, or may be calculated at the time of display, based on real or desired focal parameters, or other considerations"). The motivation for combining the references has been discussed in claim 1 above.

Claim 9 is rejected for the same reasons as discussed in claim 1 above. In addition, the combined teachings of Takao and Thurston as discussed above also disclose a system comprising: one or more processors (see Takao, paragraph [0310]: "multiple processors"); and a non-transitory computer readable medium comprising one or more sequences of instructions which, when executed by the one or more processors, cause the one or more processors to perform operations (see Takao, paragraph [0007]: "a non-transitory computer-readable medium storing a program executable by a computer").

Claim 10 is rejected for the same reasons as discussed in claim 2 above. Claim 11 is rejected for the same reasons as discussed in claim 3 above. Claim 12 is rejected for the same reasons as discussed in claim 4 above. Claim 13 is rejected for the same reasons as discussed in claim 5 above. Claim 14 is rejected for the same reasons as discussed in claim 6 above. Claim 15 is rejected for the same reasons as discussed in claim 7 above. Claim 16 is rejected for the same reasons as discussed in claim 8 above. Claim 17 is rejected for the same reasons as discussed in claim 1 above. Claim 18 is rejected for the same reasons as discussed in claim 2 above. Claim 19 is rejected for the same reasons as discussed in claim 3 above. Claim 20 is rejected for the same reasons as discussed in claim 4 above.

Regarding claim 21, the combined teachings of Takao and Thurston as discussed above also disclose the non-transitory computer readable medium as recited in claim 1, wherein the second entity is a particular person (see Takao, 2402, 2403, and 2404 in FIGS. 24A to 24C). The motivation for combining the references has been discussed in claim 1 above.

Regarding claim 22, the combined teachings of Takao and Thurston as discussed above also disclose the non-transitory computer readable medium as recited in claim 1, wherein the second entity is a particular object (see Takao, 2404 or 2402 in FIGS. 24A to 24C). The motivation for combining the references has been discussed in claim 1 above.

Claim 23 is rejected under 35 U.S.C. 103 as being unpatentable over Takao and Thurston as applied to claim 1, in view of Hu et al. (US 20220360736 A1, hereinafter referred to as "Hu").

Regarding claim 23, the combined teachings of Takao and Thurston as discussed above disclose all the claimed limitations with the exception of the non-transitory computer readable medium as recited in claim 1, wherein generating a second set of frames with the second entity being in-focus and the first entity being out-of-focus comprises maintaining a focus on the second entity while a position of the second entity moves relative to the second set of frames.

Hu, from the same or similar field of endeavor, discloses the non-transitory computer readable medium as recited in claim 1, wherein generating a second set of frames with the second entity being in-focus and the first entity being out-of-focus comprises maintaining a focus on the second entity while a position of the second entity moves relative to the second set of frames (see Hu, paragraph [0089]: "When shooting with the camera, a preview picture includes the second video data obtained after frame interpolation and the OSD data (e.g., a focus frame, and an icon). The focus frame changes in the video window in response to a user's focusing operation, or the focus frame changes according to a position of a moving object obtained by the electronic device through object detection. The smoothness of the second video data obtained after frame interpolation is higher than that of the first video data, and display of the focus frame in the video window will not affect the display effect of the second video data"). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the teachings of Hu with the teachings of Takao and Thurston. The motivation for doing so would be to ensure the system has the ability to use the system and method for frame interpolation applied to an electronic device comprising a camera disclosed in Hu to change the focus frame, such as the moving ball shown in FIG. 10 of Hu, in response to a user's focusing operation, and to display the focus frame in the video window without affecting the display effect of the second video data, thus maintaining a focus on the second entity, in order to adjust the focus object in the displayed video so that the user can adjust the focus for a specific object within the set of frames.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NIENRU YANG, whose telephone number is (571) 272-4212. The examiner can normally be reached Monday-Friday, 10 AM-6 PM EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, THAI TRAN, can be reached at 571-272-7382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/NIENRU YANG/
Examiner, Art Unit 2484

/THAI Q TRAN/
Supervisory Patent Examiner, Art Unit 2484

Prosecution Timeline

May 08, 2023
Application Filed
Aug 19, 2024
Non-Final Rejection — §103
Nov 22, 2024
Examiner Interview Summary
Nov 22, 2024
Applicant Interview (Telephonic)
Nov 26, 2024
Response Filed
Apr 21, 2025
Non-Final Rejection — §103
May 22, 2025
Examiner Interview Summary
May 22, 2025
Applicant Interview (Telephonic)
Jul 25, 2025
Response Filed
Aug 18, 2025
Final Rejection — §103
Nov 05, 2025
Examiner Interview Summary
Nov 05, 2025
Applicant Interview (Telephonic)
Nov 21, 2025
Request for Continued Examination
Dec 06, 2025
Response after Non-Final Action
Jan 03, 2026
Non-Final Rejection — §103
Apr 09, 2026
Applicant Interview (Telephonic)
Apr 09, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604024
REPRODUCTION DEVICE, REPRODUCTION METHOD, AND RECORDING MEDIUM
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12592259
SYSTEMS AND METHODS TO EDIT VIDEOS TO REMOVE AND/OR CONCEAL AUDIBLE COMMANDS
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12586609
USING AUDIO ANCHOR POINTS TO SYNCHRONIZE RECORDINGS
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12581030
REPRODUCTION DEVICE, REPRODUCTION METHOD, AND RECORDING MEDIUM
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12556720
LEARNED VIDEO COMPRESSION AND CONNECTORS FOR MULTIPLE MACHINE TASKS
Granted Feb 17, 2026 (2y 5m to grant)
Based on this examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 4-5
Grant Probability: 72%
With Interview: 99% (+28.7%)
Median Time to Grant: 2y 9m
PTA Risk: High
Based on 399 resolved cases by this examiner. Grant probability derived from career allow rate.
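The derived figures above are simple ratios over the examiner's career data. A minimal sketch of the arithmetic, assuming the dashboard computes the allow rate as granted over resolved and the interview lift as the difference in percentage points between the with-interview and without-interview subgroup rates (only the 287 granted / 399 resolved totals appear in this report; the subgroup rates in the example call are hypothetical):

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases, rounded to 0.1."""
    return round(100 * granted / resolved, 1)

def interview_lift(with_rate: float, without_rate: float) -> float:
    """Lift in percentage points: subgroup allow rate with an examiner
    interview minus the rate without one."""
    return round(with_rate - without_rate, 1)

career = allow_rate(287, 399)              # 71.9, displayed as 72%
lift = interview_lift(99.0, 70.3)          # hypothetical subgroup rates -> 28.7
```

Note that the 99% with-interview figure is a subgroup rate, not the career rate plus the lift; the without-interview baseline (hypothetical 70.3% above) sits below the 72% overall average.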
