DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Preliminary Remarks
This is a reply to the application filed on 12/20/2024, in which claims 16-18 and 20 are preliminarily cancelled; claim 19 is preliminarily amended; and claims 21-24 are newly added. Claims 1-15, 19, and 21-24 remain pending in the present application, with claims 1, 6, and 19 being independent claims.
When making claim amendments, the applicant is encouraged to consider the references in their entireties, including those portions that have not been cited by the examiner and their equivalents as they may most broadly and appropriately apply to any particular anticipated claim amendments.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on December 28, 2024, is in compliance with the provisions of 37 CFR 1.97 and is being considered by the Examiner.
Abstract of the Disclosure
The abstract contains more than 150 words.
See 37 CFR 1.72(b) and MPEP § 608.01(b). The abstract is a brief narrative of the disclosure as a whole, as concise as the disclosure permits, in a single paragraph preferably not exceeding 150 words. Appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 6-7, 19, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over He (CN 111105819 B, hereinafter referred to as “He”) in view of MacDonald (US 20180330756 A1, hereinafter referred to as “MacDonald”).
Regarding claim 1, He discloses a method for pushing a video template, wherein the method is applied to a client, and the method comprises:
obtaining and presenting a candidate template description information set, wherein the candidate template description information set comprises at least one piece of candidate template description information (see He, page 5: “selecting a target clipping template which meets a preset recommendation condition from the candidate clipping template set according to the template use characteristics; and displaying the target clipping template on a clipping template recommendation interface”);
the candidate template description information is configured to describe a common feature of at least one candidate video editing template, and the candidate template description information is screened into the candidate template description information set based on heat of the at least one candidate video editing template (see He, page 6: “the total use frequency of each historical editing template is calculated by calculating the frequency of different operation types of behaviors of the target user account on the historical editing template, so that the template characteristic of the historical editing template with the highest total frequency is used as the preference characteristic of the template, and the preference characteristic of the user using the editing template can be effectively determined according to the operation behaviors of the user”);
wherein the candidate video editing template comprises instruction information for a video editing operation, the candidate video editing template is configured to instruct to edit initial image material according to the video editing operation to obtain a target video, wherein the heat of the candidate video editing template comprises first heat and second heat, the first heat is configured to represent a preference level of a video creation user for the candidate video editing template, and the second heat is configured to represent a preference level of a video viewing user for the target video (see He, page 8: “The target clip template may be multiple, the target clip template may be displayed in the clip template recommendation interface according to a certain sequence, for example, according to the popularity value of the target clip template, or the target clip template may be displayed in the clip template recommendation interface according to the matching degree with the self-characteristics of the target user account or the preference of using the clip template”); and
in response to a user operation for selecting target template description information from the candidate template description information set (see He, page 8: “the target clipping template is recommended to the target user according to the template use characteristics of the target user account, the video clipping template which meets the habit of the user can be recommended according to the use styles of different users, the video clipping requirement of the user can be accurately identified, personalized video clipping template recommendation is carried out in a self-adaptive mode, the user is assisted to quickly obtain the video clipping template which meets the requirement”).
Regarding claim 1, He discloses all the claimed limitations, with the exception of obtaining a target video editing template that matches the target template description information and presenting the target video editing template.
MacDonald from the same or similar fields of endeavor discloses obtaining a target video editing template that matches the target template description information and presenting the target video editing template (see MacDonald, paragraph [0078]: “create custom videos based video editing templates and shared using computer hardware such as mobile devices, tablets, laptops, game consoles, augmented reality headsets, desktop computers, TV sets, cloud servers and the internet”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the teachings of MacDonald with the teachings of He. The motivation for doing so would be to enable the system to use the system and method for creating and automating new video works disclosed in MacDonald to create custom videos based on video editing templates, and to share them using computer hardware such as mobile devices, tablets, laptops, game consoles, augmented reality headsets, desktop computers, TV sets, cloud servers, and the internet, thus obtaining a target video editing template that matches the target template description information and presenting the target video editing template, in order to push a video template in a creation-oriented content ecological scenario so that the reliability of the mode for pushing a video template is ensured.
Regarding claim 2, the combination teachings of He and MacDonald as discussed above also disclose the method according to claim 1, wherein the first heat is determined based on first usage heat, second usage heat, and first interaction heat, wherein, the first usage heat is configured to represent a preference level of the video creation user for using at least part of material on the candidate video editing template for video creation (see He, page 8: “the template usage characteristics include at least a user representation; the obtaining of the template usage characteristics of the target user account includes: acquiring user attribute information and historical behavior data of the target user account; wherein the user attribute information is used for describing user attributes of the target user account; the historical behavior data comprises data resulting from the target user account performing different operation type behaviors on a historical clip template during a historical statistics period; and determining the user portrait according to the user attribute information and the historical behavior data”), and
the second usage heat is configured to represent a preference level of the video creation user for using the candidate video editing template for video creation (see He, page 8: “The preset recommendation condition may be based on the characteristics of the target user and the preference of using the clip template”); and
the first interaction heat is configured to represent a preference level of the video creation user for performing an interaction operation on the candidate video editing template (see He, page 11: “the method for implementing recommendation of a clipping template of the present application through interaction between the client 102 and the server 104 includes: s7001, the client 102 sends a template recommendation request; s7002, responding to the template recommendation request, and acquiring the template use characteristics of the target user account by the server 104; s7003, the server 104 selects a target clipping template which meets the preset recommendation condition from the candidate clipping template set according to the template use characteristics; s7004, the server 104 sends the target clip template to the client 102; s7005, the client 102 displays the target clipping template on the clipping template recommendation interface”).
The motivation for combining the references has been discussed in claim 1 above.
Regarding claim 6, the combination teachings of He and MacDonald as discussed above also disclose a method for pushing a video template, wherein the method is applied to a server (see He, page 10: “sending a template recommendation request to a server, triggering the server to acquire the template use characteristics of the target user account”), the method comprising:
obtaining heat of at least one candidate video editing template (see He, page 8: “according to the popularity value of the target clip template, or the target clip template may be displayed in the clip template recommendation interface according to the matching degree with the self-characteristics of the target user account or the preference of using the clip template”),
wherein the candidate video editing template comprises instruction information for a video editing operation, the candidate video editing template is configured to instruct to edit initial image material according to the video editing operation to obtain a target video, wherein the heat of the candidate video editing template comprises first heat and second heat, the first heat is configured to represent a preference level of a video creation user for the candidate video editing template, and the second heat is configured to represent a preference level of a video viewing user for the target video (see He, page 8: “The target clip template may be multiple, the target clip template may be displayed in the clip template recommendation interface according to a certain sequence, for example, according to the popularity value of the target clip template, or the target clip template may be displayed in the clip template recommendation interface according to the matching degree with the self-characteristics of the target user account or the preference of using the clip template”);
generating a candidate template description information set based on the heat of the at least one candidate video editing template, wherein the candidate template description information set comprises at least one piece of candidate template description information, the candidate template description information is configured to describe a common feature of the at least one candidate video editing template, and pushing the candidate template description information set to a client to present the candidate template description information set (see He, page 6: “the total use frequency of each historical editing template is calculated by calculating the frequency of different operation types of behaviors of the target user account on the historical editing template, so that the template characteristic of the historical editing template with the highest total frequency is used as the preference characteristic of the template, and the preference characteristic of the user using the editing template can be effectively determined according to the operation behaviors of the user”); and
receiving target template description information selected from the candidate template description information set and sent by the client (see He, page 8: “the target clipping template is recommended to the target user according to the template use characteristics of the target user account, the video clipping template which meets the habit of the user can be recommended according to the use styles of different users, the video clipping requirement of the user can be accurately identified, personalized video clipping template recommendation is carried out in a self-adaptive mode, the user is assisted to quickly obtain the video clipping template which meets the requirement”), obtaining a target video editing template that matches the target template description information, and pushing the target video editing template to the client to present the target video editing template (see MacDonald, paragraph [0078]: “create custom videos based video editing templates and shared using computer hardware such as mobile devices, tablets, laptops, game consoles, augmented reality headsets, desktop computers, TV sets, cloud servers and the internet”).
The motivation for combining the references has been discussed in claim 1 above.
Claim 7 is rejected for the same reasons as discussed in claim 2 above.
Claim 19 is rejected for the same reasons as discussed in claim 1 above. In addition, the combination teachings of He and MacDonald as discussed above also disclose a device comprising: a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor performing the claimed method when executing the computer program (see He, page 14: "comprising a processor and a memory, the memory storing a computer program which when executed by the processor performs the steps").
Claim 21 is rejected for the same reasons as discussed in claim 2 above.
Claims 3-5, 8-9, 12-15, and 22-24 are rejected under 35 U.S.C. 103 as being unpatentable over He and MacDonald as applied to claim 1, and further in view of Zhong (CN 113111222 A, hereinafter referred to as “Zhong”).
Regarding claim 3, the combination teachings of He and MacDonald as discussed above also disclose the method according to claim 1, wherein the second heat is determined based on first viewing heat, second viewing heat, and second interaction heat, wherein, the first viewing heat is configured to represent a preference level of the video viewing user for viewing an associated video comprising at least part of material in the target video (see He, page 9: “the total use frequency of each historical editing template is calculated by calculating the frequency of different operation types of behaviors of the target user account on the historical editing template, so that the template characteristic of the historical editing template with the highest total frequency is used as the preference characteristic of the template, and the preference characteristic of the user using the editing template can be effectively determined according to the operation behaviors of the user”).
Regarding claim 3, the combination teachings of He and MacDonald as discussed above disclose all the claimed limitations with the exceptions of the second viewing heat is configured to represent a preference level of the video viewing user for viewing the target video; and the second interaction heat is configured to represent a preference of the video viewing user for performing an interaction operation on the target video.
Zhong from the same or similar fields of endeavor discloses the second viewing heat is configured to represent a preference level of the video viewing user for viewing the target video (see Zhong, page 5: “the user feedback information may include interaction behavior information such as a predicted browsing or viewing duration, a click depth, a comment, an attention, a collection, and a sharing of the short video produced by using the candidate template. For example, the user feedback information predicted by the prediction model may include interactive behavior information such as the total time for which the short video produced by using the candidate template is browsed or watched, the click rate, the comment rate, the concern rate, the collection rate, and the sharing frequency”); and
the second interaction heat is configured to represent a preference of the video viewing user for performing an interaction operation on the target video (see Zhong, page 5: “after a plurality of candidate templates are obtained by synthesis, for each candidate template, a prediction model may be used to predict user feedback information corresponding to the candidate template”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the teachings of Zhong with the teachings of He and MacDonald. The motivation for doing so would be to enable the system to use the method and device for generating a short video template disclosed in Zhong to: acquire a plurality of existing templates from a short video template library; screen a plurality of candidate templates according to the user feedback information of each candidate template, wherein, when a candidate template is screened, it may be determined whether the user feedback information of the candidate template meets a set recall index, the candidate template may be used as a target template reserved by screening when its user feedback information meets the set recall index, and the candidate template may be deleted in response to its user feedback information not meeting the set recall index; monitor user interaction behaviors for the plurality of existing templates to obtain first actual feedback information of each existing template; monitor user interaction behaviors on the target template to obtain second actual feedback information, wherein the prediction model can be adjusted when there is a large difference between the effect index of the first actual feedback information and that of the second actual feedback information; display the target template in response to the query category corresponding to the classified query operation matching the category of the target template; and add the target template reserved by screening to the short video template library, thus obtaining and presenting an updated candidate template description information set, wherein the updated candidate template description information set comprises updated candidate template description information and the updated candidate template description information is screened into
the updated candidate template description information set based on updated heat of the at least one candidate video editing template; obtaining a plurality of recall video editing templates corresponding to recall reporting description information; pushing the candidate template description information set to the client to present the candidate template description information set; obtaining a consumption posterior index indicating usage of the video viewing user for the candidate video editing template that matches the candidate template description information during an observation window period; adjusting the heat of the candidate video editing template based on the consumption posterior index, and updating the candidate template description information set based on the adjusted heat; and performing deduplication on the plurality of candidate video editing templates and the plurality of recall video editing templates to obtain deduplicated templates, in order to accurately push the candidate template description information based on usage of the templates so that the reliability of pushing a video template can be improved.
Regarding claim 4, the combination teachings of He, MacDonald, and Zhong as discussed above also disclose the method according to claim 1, wherein the heat of the candidate video editing template further comprises:
search heat configured to represent a preference level of the video creation user for searching for the candidate video editing template (see Zhong, page 9: “when a user searches the short video template, the target template matched with the search keyword is preferentially displayed, and the utilization rate of the target template can be improved on the basis of meeting the query requirement of the user”); and/or,
recommendation heat configured to represent a preference level for recommending the candidate video editing template to the video creation user (see He, page 12: “A request receiving module 410, configured to receive a template recommendation request; the template recommendation request is generated by a client according to recommendation triggering operation of a target user account on the clipping template. A request response module 420, configured to, in response to the template recommendation request, obtain a template usage characteristic of the target user account”); and/or,
click heat configured to represent a preference level of the video creation user for clicking on the candidate video editing template (see Zhong, page 5: “according to the user feedback information, the target template with higher click rate, higher comment rate, higher attention rate, higher collection rate or more sharing times can be reserved, so that the reserved target template can better meet the actual requirements of the user, and the utilization rate of the target template can be improved”).
The motivation for combining the references has been discussed in claim 3 above.
Regarding claim 5, the combination teachings of He, MacDonald, and Zhong as discussed above also disclose the method according to claim 1, further comprising:
obtaining and presenting an updated candidate template description information set, wherein the updated candidate template description information set comprises updated candidate template description information, and the updated candidate template description information is screened into the updated candidate template description information set based on updated heat of the at least one candidate video editing template (see Zhong, page 5: “after the user feedback information corresponding to each candidate template is determined, a plurality of candidate templates may be screened according to the user feedback information of each candidate template, and the candidate templates remaining after screening are used as target templates, so that the target templates may be added to the short video template library. For example, according to the user feedback information, the target template with higher click rate, higher comment rate, higher attention rate, higher collection rate or more sharing times can be reserved, so that the reserved target template can better meet the actual requirements of the user, and the utilization rate of the target template can be improved”).
The motivation for combining the references has been discussed in claim 3 above.
Claim 8 is rejected for the same reasons as discussed in claim 3 above.
Claim 9 is rejected for the same reasons as discussed in claim 4 above.
Regarding claim 12, the combination teachings of He, MacDonald, and Zhong as discussed above also disclose the method according to claim 6, further comprising:
obtaining a plurality of recall video editing templates corresponding to recall reporting description information (see Zhong, page 6: “when the candidate template is screened, it may be determined whether the user feedback information of the candidate template meets a set recall index, and when the user feedback information of the candidate template meets the set recall index, the candidate template may be used as a target template retained by screening, and the target template may be added to the short video template library”); and
performing deduplication on the plurality of candidate video editing templates and the plurality of recall video editing templates to obtain deduplicated templates (see Zhong, page 6: “in response to that the user feedback information of the candidate template does not meet the set recall index, the candidate template is deleted”).
The motivation for combining the references has been discussed in claim 3 above.
Regarding claim 13, the combination teachings of He, MacDonald, and Zhong as discussed above also disclose the method according to claim 12, wherein obtaining the plurality of recall video editing templates corresponding to the recall reporting description information comprises:
determining a topic type of the recall reporting description information (see Zhong, page 9: “determine description information of the target template, where the description information of the target template is used by the client to monitor a keyword query operation triggered by a user”);
determining a corresponding template recall mode based on the topic type of the recall reporting description information (see Zhong, page 9: “display the target template in response to matching between a keyword queried by the keyword query operation and the description of the target template; wherein the description of the target template is a description of an included element”); and
recalling the plurality of recall video editing templates from a preset template library based on the template recall mode (see Zhong, page 9: “the description information of each existing template is determinable in the short video template library, for example, when a designer designs an existing template, the description information may be added to the existing template”).
The motivation for combining the references has been discussed in claim 3 above.
Regarding claim 14, the combination teachings of He, MacDonald, and Zhong as discussed above also disclose the method according to claim 12, wherein performing deduplication on the plurality of candidate video editing templates and the plurality of recall video editing templates comprises:
extracting material information from each of the candidate video editing templates and each of the recall video editing templates, and obtaining, based on the material information, content features respectively corresponding to each of the candidate video editing templates and each of the recall video editing templates (see Zhong, page 6: “perform feature extraction on each input element by using a feature extraction layer of the prediction model, so as to obtain features of the candidate template”);
determining, based on the content features, first similarities between each two templates of the plurality of candidate video editing templates and the plurality of recall video editing templates (see Zhong, page 6: “when the candidate template is screened, it may be determined whether the user feedback information of the candidate template meets a set recall index, and when the user feedback information of the candidate template meets the set recall index, the candidate template may be used as a target template retained by screening, and the target template may be added to the short video template library”);
extracting segment information from each of the candidate video editing templates and each of the recall video editing templates, and obtaining, based on the segment information, structural features respectively corresponding to each of the candidate video editing templates and each of the recall video editing templates (see Zhong, page 6: “when a candidate template is screened according to user feedback information, for each candidate template, it may be determined whether a probability and/or a number of times corresponding to the user feedback information of the candidate template in each dimension meet a recall index of the corresponding dimension, when there are probabilities and/or numbers corresponding to user feedback information of at least a set number of dimensions that meet the recall index of the corresponding dimension, the candidate template may be used as a target template retained by screening”);
determining, based on the structural features, second similarities between each two templates of the plurality of candidate video editing templates and the plurality of recall video editing templates (see Zhong, page 6: “when the candidate templates are screened according to the user feedback information, for each candidate template, it may be determined whether the probability and/or the number of times corresponding to the user feedback information of the candidate template meets the corresponding recall index, and in the case that the probability and/or the number of times corresponding to the user feedback information meet the corresponding recall index, the candidate template may be used as a target template to be retained in the screening, and the target template is added to the short video template library, and in the case that the probability and/or the number of times corresponding to the user feedback information do not meet the corresponding recall index, the candidate template may be deleted”); and
performing deduplication on the plurality of candidate video editing templates and the plurality of recall video editing templates based on the first similarities and the second similarities (see Zhong, page 6: “in response to that the user feedback information of the candidate template does not meet the set recall index, the candidate template is deleted”).
The motivation for combining the references has been discussed in claim 3 above.
Regarding claim 15, the combination teachings of He, MacDonald, and Zhong as discussed above also disclose the method according to claim 6, wherein the method further comprises:
after pushing the candidate template description information set to the client to present the candidate template description information set (see Zhong, page 8: “after the server adds the target template retained by screening to the short video template library, when the client monitors that the user uses the short video template in the short video template to make the short video”);
obtaining a consumption posterior index indicating usage of the video viewing user for the candidate video editing template that matches the candidate template description information during an observation window period (see Zhong, page 8: “display the target template in response to the query category corresponding to the classified query operation matching the category of the target template. That is to say, after the server adds the target template retained by screening to the short video template library, when the client monitors that the user uses the short video template in the short video template to make the short video, the server can preferentially display the automatically synthesized target template, and the utilization rate of the target template is improved”); and
adjusting the heat of the candidate video editing template based on the consumption posterior index, and updating the candidate template description information set based on the adjusted heat (see Zhong, page 8: “the difference between the effect index of the first actual feedback information and the effect index of the user feedback information can be calculated, when the difference is large, the model parameters of the prediction model can be adjusted, the prediction model continues to be trained, and when the difference is minimized, the training process of the model can be ended”).
The motivation for combining the references has been discussed in claim 3 above.
Claim 22 is rejected for the same reasons as discussed in claim 3 above.
Claim 23 is rejected for the same reasons as discussed in claim 4 above.
Claim 24 is rejected for the same reasons as discussed in claim 5 above.
Claims 10-11 are rejected under 35 U.S.C. 103 as being unpatentable over He and MacDonald as applied to claim 1, and further in view of Rong et al. (CN 114970562 A, hereinafter referred to as “Rong”).
Regarding claim 10, the combined teachings of He and MacDonald as discussed above disclose all the claimed limitations with the exception of the method according to claim 6, further comprising: calculating a similarity between a plurality of pieces of candidate template description information, and comparing the similarity with a predetermined similarity threshold; and if the similarity is greater than the predetermined similarity threshold, merging the plurality of pieces of candidate template description information, and taking candidate template description information corresponding to a candidate video editing template with a maximum heat value as the merged candidate template description information, based on heat of at least one candidate video editing template corresponding to each piece of the candidate template description information.
Rong, from the same or a similar field of endeavor, discloses the method according to claim 6, further comprising:
calculating a similarity between a plurality of pieces of candidate template description information, and comparing the similarity with a predetermined similarity threshold (see Rong, page 9: “performing matching calculation on the preprocessed event to be processed and a preset event template to obtain a first matching degree”); and
if the similarity is greater than the predetermined similarity threshold, merging the plurality of pieces of candidate template description information, and taking candidate template description information corresponding to a candidate video editing template with a maximum heat value as the merged candidate template description information, based on heat of at least one candidate video editing template corresponding to each piece of the candidate template description information (see Rong, pages 5-6: “calculating the first matching degree and the second matching degree according to a preset rule to respectively obtain a first result and a second result, and taking a preset event template corresponding to the larger one of the first result and the second result as a target event template to carry out semantic understanding on the event to be processed. a preset event template corresponding to the greater of the first matching degree and the second matching degree is taken as a target event template … according to different weight values pre-distributed by the similar event and the event to be processed, the first matching degree and the second matching degree are respectively weighted and calculated, and then the first result and the second result can be obtained. Illustratively, the weight value of the similar event is 0.8, and the weight value of the event to be processed is 0.2. The above weight value distribution may be set”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the teachings of Rong with the teachings of He and MacDonald. The motivation for doing so would be to ensure that the system has the ability to use the semantic understanding method disclosed in Rong to perform matching calculation on the preprocessed event to be processed and a preset event template to obtain a first matching degree; to calculate the first matching degree and the second matching degree according to a preset rule to respectively obtain a first result and a second result; and to take a preset event template corresponding to the larger of the first result and the second result as a target event template to carry out semantic understanding on the event to be processed, where the preset event template corresponding to the greater of the first matching degree and the second matching degree is taken as the target event template; thus calculating a similarity between a plurality of pieces of candidate template description information and comparing the similarity with a predetermined similarity threshold, and, if the similarity is greater than the predetermined similarity threshold, merging the plurality of pieces of candidate template description information and taking candidate template description information corresponding to a candidate video editing template with a maximum heat value as the merged candidate template description information, based on heat of at least one candidate video editing template corresponding to each piece of the candidate template description information, in order to ensure that the process of calculating candidate template description information is more accurate, thereby avoiding adversely affecting the user experience by pushing the same or similar candidate template description information.
Regarding claim 11, the combined teachings of He, MacDonald, and Rong as discussed above also disclose the method according to claim 10, wherein calculating the similarity between the plurality of pieces of candidate template description information comprises:
calculating a template similarity between candidate video editing templates corresponding to individual pieces of the candidate template description information (see Rong, page 5: “the preprocessed event to be processed is matched with a preset event template prestored in a database to obtain a first matching degree, and the preset event template judges whether a target event template matched with the event to be processed exists in the database according to the first matching degree, so that the user intention of the event to be processed is judged according to the target event template obtained through matching”);
calculating a semantic similarity between individual pieces of the candidate template description information (see Rong, page 5: “inputting the event to be processed into the pre-trained semantic enhancement model, so as to obtain a similar event expanded based on the event to be processed”);
calculating a word similarity between triggering words corresponding to individual pieces of candidate template description information (see Rong, page 5: “a plurality of preset event templates containing word dictionaries and word attributes are arranged, the word dictionaries and the word attributes in each preset event template are arranged according to specific rules, and each preset event template is unique. When the event to be processed is the weather of Shenzhen today, the literal data are firstly split into three keywords of Shenzhen today, Shenzhen and weather, then the three split keywords are respectively matched with word dictionaries in preset time templates in a database, a target event template with similarity values exceeding a preset threshold value is selected”); and
calculating the similarity between the plurality of pieces of candidate template description information based on the template similarity, the semantic similarity, and the word similarity (see Rong, page 7: “All the above optional technical solutions may be combined arbitrarily to form optional embodiments of the present application, and are not described herein again. In particular implementation, the present application is not limited by the execution sequence of the described steps, and some steps may be performed in other sequences or simultaneously without conflict”).
The motivation for combining the references has been discussed in claim 10 above.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NIENRU YANG whose telephone number is (571)272-4212. The examiner can normally be reached Monday-Friday 10AM-6PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, THAI TRAN can be reached at 571-272-7382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
NIENRU YANG
Examiner
Art Unit 2484
/NIENRU YANG/Examiner, Art Unit 2484
/THAI Q TRAN/Supervisory Patent Examiner, Art Unit 2484