Prosecution Insights
Last updated: April 19, 2026
Application No. 18/877,915

METHOD, APPARATUS, DEVICE AND STORAGE MEDIUM FOR RECOMMENDING A MULTIMEDIA EDITING RESOURCE

Non-Final OA, §103 rejection

Filed: Dec 20, 2024
Examiner: ZHAO, DAQUAN
Art Unit: 2484
Tech Center: 2400 — Computer Networks
Assignee: BEIJING ZITIAO NETWORK TECHNOLOGY CO., LTD.
OA Round: 1 (Non-Final)

Grant Probability: 77% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 9m
Grant Probability With Interview: 92%

Examiner Intelligence

Career Allow Rate: 77% — above average (791 granted / 1029 resolved; +18.9% vs TC avg)
Interview Lift: +14.8% among resolved cases with interview (moderate, roughly +15%)
Typical Timeline: 2y 9m average prosecution; 24 applications currently pending
Career History: 1053 total applications across all art units

Statute-Specific Performance

§101: 11.0% (-29.0% vs TC avg)
§103: 44.9% (+4.9% vs TC avg)
§102: 20.3% (-19.7% vs TC avg)
§112: 14.0% (-26.0% vs TC avg)
Tech Center averages are estimates. Based on career data from 1029 resolved cases.
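The per-statute deltas appear to be measured against a single Tech Center baseline. As a quick, hypothetical sanity check (the dictionary names and the rate-minus-delta relationship are assumptions for illustration, not anything the report states), the implied baseline can be recovered from each row:

```python
# Implied Tech Center average per statute: examiner rate minus reported delta.
# Values are copied from the Statute-Specific Performance rows above.
examiner_rate = {"101": 11.0, "103": 44.9, "102": 20.3, "112": 14.0}
delta_vs_tc = {"101": -29.0, "103": 4.9, "102": -19.7, "112": -26.0}

implied_tc_avg = {s: round(examiner_rate[s] - delta_vs_tc[s], 1)
                  for s in examiner_rate}
print(implied_tc_avg)  # every statute resolves to the same 40.0 baseline
```

Every row works out to 40.0%, consistent with a single Tech Center average estimate underlying all four deltas.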

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 U.S.C. § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5, 7, 9-14, 16-19 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Greene et al. (US 9,639,634) in view of Buyuklu et al. (US 2017/0243611).

For claim 1, Greene et al. teach a method for recommending a multimedia editing resource, comprising:

in response to a recommendation trigger operation in a first resource recommendation scenario, determining a recommendation analysis object corresponding to the first resource recommendation scenario (e.g., abstract: matching component to identify other videos that include one or more tagged elements, and a recommendation component to recommend the other videos; column 2, lines 36-55: "For example, a video can include various metadata tags that identify objects, things, and/or people appearing in the video. These tagged objects can be employed to identify and recommend other videos related to the tagged object… For example, when a user watches a video that includes a tagged person named John Smith, other videos in which John Smith appears can be identified for recommending to the user");

determining at least one recommendation tag by analyzing multimedia resources corresponding to the recommendation analysis object (column 2, lines 36-55, as quoted above);

obtaining a multimedia editing resource that matches the at least one recommendation tag (e.g., abstract and column 2, lines 36-55, as quoted above); and

displaying the multimedia editing resource (e.g., figure 8; column 16, lines 38-65: "For example, as a video is playing elements that are tagged in the video that are displayed at the current segment of the video being played are identified. At 806, other videos that include one or more of the tagged elements included in the subset are identified (e.g., via identification component 106). At 808, the other videos are ranked based on number of matching elements included in the subset and the other videos, respectively (e.g., via ranking component 402). At 810, the subset of the other videos are recommended for viewing during playback of the video at or near the current point in the video based on the ranking (e.g., via recommendation component 110).").

Greene et al. do not further disclose: wherein the multimedia editing resource is configured to edit initial multimedia resources to obtain target multimedia resources, and the target multimedia resources are presented with an editing effect obtained by applying the multimedia editing resource to the initial multimedia resources.

Buyuklu et al. teach: wherein the multimedia editing resource is configured to edit initial multimedia resources to obtain target multimedia resources (e.g., paragraph 82: defining a plurality of effect descriptors, wherein the plurality of effect descriptors describe at least one video editing effect applied to a plurality of videos by a plurality of users), and the target multimedia resources are presented with an editing effect obtained by applying the multimedia editing resource to the initial multimedia resources (e.g., figures 12A-12C, paragraph 114: a user can select an effect, such as "Fairy", "Heat", or "Snow").

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Buyuklu et al. into the teaching of Greene et al. to provide an efficient, easy-to-use system for video editing (e.g., paragraph 6, Buyuklu et al.).

Claims 9-10 are rejected for the same reasons as discussed for claim 1 above, wherein figure 3 of Greene et al. shows processor 112 and memory 114.

For claims 2, 11 and 16, Greene et al. teach: in response to the recommendation trigger operation in the first resource recommendation scenario, determining the recommendation analysis object corresponding to the first resource recommendation scenario comprises: in response to an editing trigger operation for the initial multimedia resources in a multimedia resource editing scenario, determining a target multimedia resource collection corresponding to a current user as a recommendation analysis object corresponding to the multimedia resource editing scenario, wherein the target multimedia resource collection includes at least one of a local album, an editing record of multimedia resources, a collection record of multimedia editing resources, or a usage record of multimedia editing resources (column 2, lines 36-55, as quoted above).

For claims 3, 12 and 17, Greene et al. teach: determining the at least one recommendation tag by analyzing the multimedia resources corresponding to the recommendation analysis object comprises: analyzing a first multimedia resource in the multimedia resources corresponding to the recommendation analysis object, and determining the recommendation tag corresponding to the first multimedia resource; and determining at least one recommendation tag corresponding to a current user based on the recommendation tag corresponding to the first multimedia resource (column 2, lines 36-55, as quoted above).

For claims 4, 13 and 18, Greene et al. teach: each of the at least one recommendation tag has a weight value, the weight value is configured to represent an interest of a current user in a multimedia editing resource with a corresponding recommendation tag, and obtaining the multimedia editing resource that matches the at least one recommendation tag comprises: obtaining the multimedia editing resource that matches the at least one recommendation tag based on the at least one recommendation tag and weight values corresponding to respective recommendation tags (e.g., column 12, line 55 - column 13, line 9: "FIG. 4 presents a diagram of another example system 400 for identifying media items for recommending to a user based on relatedness of tagged elements in the media items… Ranking component 402 is configured to rank or score media items included in a set of media items identified/generated by matching component 108 and/or social component 202 to reflect an inferred degree of interest the user to which the media items will be recommended to has in viewing the media items. Filter component 404 is configured to then filter a set of ranked media items, based on the ranking, to generate a subset of the media items for recommending to a user. For example, filter component 404 can generate a subset of the media items which are associated with a ranking above a threshold value.").

For claims 5, 14 and 19, Greene et al. teach: determining a proportion of quantity of multimedia resources with a first recommendation tag in the at least one recommendation tag corresponding to the current user, in the multimedia resources corresponding to the recommendation analysis object; and determining a weight value corresponding to the first recommendation tag based on the proportion of quantity corresponding to the first recommendation tag and a predetermined initial weight value of the first recommendation tag (e.g., column 13, lines 9-34: "For example, in addition to ranking media items based on a general number of shared tagged elements/objects with the evaluated media item, ranking component 402 can rank the other media items based on inclusion of a number of priority or star tagged elements. A tagged element can be considered priority or starred based on relevance of the tagged item to the user as determined based on preferences of the user.").

For claims 7 and 21, Greene et al. teach: determining, based on a collection weight value of a target multimedia resource collection to which multimedia resources with a third recommendation tag in the at least one recommendation tag belong, a weight value corresponding to the third recommendation tag (e.g., column 12, line 55 - column 13, line 9, as quoted above), wherein the target multimedia resource collection includes at least one of a local album, an editing record of multimedia resources, a collection record of multimedia editing resources or a usage record of multimedia editing resources, and each of a collection weight value of the editing record of multimedia resources, a collection weight value of the collection record of multimedia editing resources and a collection weight value of the usage record of multimedia editing resources is greater than a collection weight value of the local album (column 2, lines 36-55, as quoted above).

Allowable Subject Matter

Claims 6, 15 and 20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAQUAN ZHAO, whose telephone number is (571) 270-1119. The examiner can normally be reached M-Thur, 7:00 am - 5:00 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Thai Tran, can be reached on 571-272-7382.

The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

Email: daquan.zhao1@uspto.gov
Phone: (571) 270-1119

/DAQUAN ZHAO/
Primary Examiner, Art Unit 2484

Prosecution Timeline

Dec 20, 2024 — Application Filed
Feb 24, 2026 — Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597257
MONITORING SYSTEM AND METHOD FOR RECOGNIZING THE ACTIVITY OF DETERMINED PERSONS
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12593108
SYSTEMS AND METHODS FOR AUTOMATED SPEECH-TO-TEXT CAPTIONING
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12587609
ELECTRONIC DEVICE AND CONTROL METHOD FOR CONTROLLING SPEED OF WORKOUT VIDEO
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12587721
VIDEO PROCESSING METHOD, APPARATUS AND SYSTEM
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12586610
METHOD, APPARATUS, DEVICE, STORAGE MEDIUM AND PROGRAM PRODUCT FOR VIDEO GENERATION
Granted Mar 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 77%
With Interview: 92% (+14.8%)
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 1029 resolved cases by this examiner. Grant probability derived from career allow rate.
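The headline projections can be reproduced from the examiner's career figures quoted earlier (791 granted of 1029 resolved, +14.8-point interview lift). A minimal sketch, assuming (the report does not say so explicitly) that "Grant Probability" is simply the rounded career allow rate and "With Interview" adds the lift on top of it:

```python
# Reproduce the dashboard's headline projections from raw career data.
granted = 791          # career grants for this examiner
resolved = 1029        # career resolved cases
interview_lift = 14.8  # percentage-point lift with an interview

allow_rate = 100 * granted / resolved      # about 76.9%
grant_probability = round(allow_rate)      # headline "Grant Probability"
with_interview = round(allow_rate + interview_lift)

print(f"{grant_probability}% base, {with_interview}% with interview")
```

This prints 77% and 92%, matching the panel, which suggests the "With Interview" figure is a simple additive adjustment rather than a separately modeled probability.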
