Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 6-10, 13-17, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Yuan (U.S. 2019/0207985) in view of Oh et al. (U.S. 2023/0297832, hereinafter “Oh”).
Regarding Claim 1, Yuan teaches a content recommendation method, performed by a computer device (fig. 3; ¶ [0024] and [0028]—a system and method recommends content to a user), the method comprising:
acquiring positive sample content and negative sample content corresponding to a sample account (fig. 4-1; ¶ [0038]—positive samples and negative samples are obtained for model training); and
training a first recall model based on a matching relationship between the positive sample content and the negative sample content to obtain a second recall model, wherein the second recall model is configured to recommend content to an account (fig. 3; ¶ [0037] – [0038]—a recall model is trained based on the positive and negative samples to obtain a trained {second} recall model).
Yuan does not specifically teach extending the positive sample content via recall extension to obtain extended sample content; and training based on the extended sample content. However, Oh teaches extending positive sample content via recall extension to obtain extended sample content; and training a first recall model based on a matching relationship between the positive sample content and the extended sample content to obtain a second recall model, wherein the second recall model is configured to recommend content to an account (Oh, ¶ [0079] and [0084] – [0085]—an augmentation module extends the training data samples by increasing the amount of training data to train a content recommendation model).
All of the claimed elements were known in Yuan and Oh and could have been combined by known methods with no change in their respective functions. It therefore would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the extending the positive sample content of Oh with the positive sample content and negative sample content of Yuan to yield the predictable result of extending the positive sample content via recall extension to obtain extended sample content; and training a first recall model based on a matching relationship between the positive sample content, the extended sample content, and the negative sample content to obtain a second recall model, wherein the second recall model is configured to recommend content to an account. One would be motivated to make this combination for the purpose of solving the problem of a deterioration in performance of a neural network model when the amount of training data is insufficient (Oh, ¶ [0085]).
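For purposes of illustration only, a minimal sketch of the combined training scheme described above (positive and extended samples treated as matches, negative samples as non-matches) is given below. All names, the embedding representation, and the update rule are hypothetical and are not drawn from Yuan or Oh:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def train_recall_model(account_vec, positives, extended, negatives,
                       lr=0.1, epochs=20):
    # Hypothetical "first recall model": an account embedding that is updated
    # so it matches the positive and extended sample content and mismatches
    # the negative sample content; the updated state plays the role of the
    # trained "second recall model".
    acc = list(account_vec)
    for _ in range(epochs):
        for content in positives + extended:  # extended samples act as positives
            acc = [a + lr * c for a, c in zip(acc, content)]
        for content in negatives:
            acc = [a - lr * c for a, c in zip(acc, content)]
    return acc

def recommend(trained_acc, candidate_pool, k=1):
    # The trained model recommends the best-matching to-be-recommended content.
    return sorted(candidate_pool, key=lambda c: -dot(trained_acc, c))[:k]
```

For example, with one positive sample `[1, 0]`, one extended sample `[0.9, 0.1]`, and one negative sample `[0, 1]`, the trained model ranks content resembling the positive sample first.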
Regarding Claim 8, Yuan teaches a computer device, comprising a processor and a memory, the memory storing at least one segment of program that, when loaded and executed by the processor, causes the computer device to implement a content recommendation method (figs. 1 and 4; ¶ [0053], [0120], and [0156] – [0158]) including:
acquiring positive sample content and negative sample content corresponding to a sample account (fig. 4-1; ¶ [0038]—positive samples and negative samples are obtained for model training); and
training a first recall model based on a matching relationship between the positive sample content and the negative sample content to obtain a second recall model, wherein the second recall model is configured to recommend content to an account (fig. 3; ¶ [0037] – [0038]—a recall model is trained based on the positive and negative samples to obtain a trained {second} recall model).
Yuan does not specifically teach extending the positive sample content via recall extension to obtain extended sample content; and training based on the extended sample content. However, Oh teaches extending positive sample content via recall extension to obtain extended sample content; and training a first recall model based on a matching relationship between the positive sample content and the extended sample content to obtain a second recall model, wherein the second recall model is configured to recommend content to an account (Oh, ¶ [0079] and [0084] – [0085]—an augmentation module extends the training data samples by increasing the amount of training data to train a content recommendation model).
All of the claimed elements were known in Yuan and Oh and could have been combined by known methods with no change in their respective functions. It therefore would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the extending the positive sample content of Oh with the positive sample content and negative sample content of Yuan to yield the predictable result of extending the positive sample content via recall extension to obtain extended sample content; and training a first recall model based on a matching relationship between the positive sample content, the extended sample content, and the negative sample content to obtain a second recall model, wherein the second recall model is configured to recommend content to an account. One would be motivated to make this combination for the purpose of solving the problem of a deterioration in performance of a neural network model when the amount of training data is insufficient (Oh, ¶ [0085]).
Regarding Claim 15, Yuan teaches a non-transitory computer-readable storage medium, storing at least one segment of program that, when loaded and executed by a processor of a computer device, causes the computer device to implement a content recommendation method (figs. 1 and 4; ¶ [0053], [0120], and [0156] – [0158]) including:
acquiring positive sample content and negative sample content corresponding to a sample account (fig. 4-1; ¶ [0038]—positive samples and negative samples are obtained for model training); and
training a first recall model based on a matching relationship between the positive sample content and the negative sample content to obtain a second recall model, wherein the second recall model is configured to recommend content to an account (fig. 3; ¶ [0037] – [0038]—a recall model is trained based on the positive and negative samples to obtain a trained {second} recall model).
Yuan does not specifically teach extending the positive sample content via recall extension to obtain extended sample content; and training based on the extended sample content. However, Oh teaches extending positive sample content via recall extension to obtain extended sample content; and training a first recall model based on a matching relationship between the positive sample content and the extended sample content to obtain a second recall model, wherein the second recall model is configured to recommend content to an account (Oh, ¶ [0079] and [0084] – [0085]—an augmentation module extends the training data samples by increasing the amount of training data to train a content recommendation model).
All of the claimed elements were known in Yuan and Oh and could have been combined by known methods with no change in their respective functions. It therefore would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the extending the positive sample content of Oh with the positive sample content and negative sample content of Yuan to yield the predictable result of extending the positive sample content via recall extension to obtain extended sample content; and training a first recall model based on a matching relationship between the positive sample content, the extended sample content, and the negative sample content to obtain a second recall model, wherein the second recall model is configured to recommend content to an account. One would be motivated to make this combination for the purpose of solving the problem of a deterioration in performance of a neural network model when the amount of training data is insufficient (Oh, ¶ [0085]).
Regarding Claims 2, 9, and 16, Yuan/Oh teaches wherein the second recall model is configured to recommend content to an account by: performing recommendation degree analysis on the account and to-be-recommended content through the second recall model to obtain recommended content in the to-be-recommended content; and sending the recommended content to the account (Yuan, ¶ [0061] and [0068]—the second recall model is used to provide recommendations to a user; ranking the recommendations constitutes a recommendation degree analysis. Also Oh, ¶ [0075] and [0103]).
Regarding Claims 3, 10, and 17, Yuan/Oh teaches wherein the extending the positive sample content via recall extension to obtain extended sample content comprises:
determining a content publishing account of the positive sample content (Oh, ¶ [0056] – [0057]—a graph represents content accessed by a user, i.e., it determines a user account);
acquiring a first content set published by the content publishing account within a historical time period; and obtaining the extended sample content based on the first content set (Oh, ¶ [0080] – [0086]—graphs are obtained for a plurality of sessions of user access. Each session indicates a history of content accessed by the user in a time period, and the accesses from multiple sessions are used to extend the sample content from the individual graph of the first user session {the first content set}).
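As an illustrative sketch only (the catalog, field names, and time window below are hypothetical and are not taken from Oh or the claims), the mapped limitation of determining the content publishing account and gathering its historically published first content set might look like:

```python
from datetime import datetime, timedelta

# Hypothetical records: each content item knows its publisher and publish time.
CATALOG = {
    "video_a": {"publisher": "creator_1", "published": datetime(2024, 1, 10)},
    "video_b": {"publisher": "creator_1", "published": datetime(2024, 1, 20)},
    "video_c": {"publisher": "creator_2", "published": datetime(2024, 1, 15)},
}

def extend_positive_samples(positive_ids, now, window_days=30):
    # For each positive sample, determine its content publishing account, then
    # take that account's other content published within the historical time
    # period (the "first content set") as extended sample content.
    extended = set()
    cutoff = now - timedelta(days=window_days)
    for pid in positive_ids:
        publisher = CATALOG[pid]["publisher"]  # content publishing account
        for cid, meta in CATALOG.items():
            if (cid not in positive_ids
                    and meta["publisher"] == publisher
                    and meta["published"] >= cutoff):
                extended.add(cid)
    return extended
```

Here, a positive interaction with `video_a` extends the sample set with `video_b`, published by the same account within the window, but not with `video_c` from a different publisher.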
Regarding Claims 6, 13, and 20, Yuan/Oh teaches wherein the positive sample content corresponding to the sample account is acquired by: acquiring a historical interaction event of the sample account with historical recommended content within a historical time period; and identifying historical recommended content corresponding to a positive interactive relationship from the historical interaction event as the positive sample content (Yuan, ¶ [0038]—the positive sample content is acquired from a history action of the user, i.e., identifying historical recommended content with which the user has interacted positively. Also Oh, ¶ [0060], [0073], and [0080]—positive sample content is acquired from a session {a historical time period} and is content that the user accessed during that time period).
Regarding Claims 7 and 14, Yuan/Oh teaches wherein the negative sample content corresponding to the sample account is acquired by: randomly sampling a content pool to obtain the negative sample content; or acquiring historical recommended content corresponding to a negative interactive relationship from the historical interaction event as the negative sample content (Yuan, ¶ [0038]—negative samples are acquired from a history action of the user).
Claims 5, 12, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Yuan in view of Oh, as applied to claims 1, 8, and 15, above, and further in view of Fei et al. (U.S. 2023/0410155, hereinafter “Fei”).
Regarding Claims 5, 12, and 19, Yuan/Oh does not specifically teach wherein the training a first recall model based on a matching relationship between the positive sample content, the extended sample content, and the negative sample content to obtain a second recall model comprises:
training the first recall model based on the matching relationship between the positive sample content, the extended sample content, and the negative sample content to obtain an account sub-model and a content sub-model, the account sub-model being configured to analyze account information, and the content sub-model being configured to analyze content data.
However, Fei teaches wherein training a first recall model based on the matching relationship between positive sample content and negative sample content (¶ [0045]) comprises training the first recall model to obtain an account sub-model and a content sub-model, the account sub-model being configured to analyze account information, and the content sub-model being configured to analyze content data (fig. 1; ¶ [0039] – [0045] and [0059]—recall model 100 comprises user-content model 105 {an account sub-model} and content-style model 110 {a content sub-model}. The models are jointly trained, as further described in ¶ [0050] and [0061]).
All of the claimed elements were known in Yuan/Oh and Fei and could have been combined by known methods with no change in their respective functions. It therefore would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the sub-models of Fei with the positive sample content, the extended sample content, and the negative sample content of Yuan/Oh to yield the predictable result of wherein the training a first recall model based on a matching relationship between the positive sample content, the extended sample content, and the negative sample content to obtain a second recall model comprises: training the first recall model based on the matching relationship between the positive sample content, the extended sample content, and the negative sample content to obtain an account sub-model and a content sub-model, the account sub-model being configured to analyze account information, and the content sub-model being configured to analyze content data. One would be motivated to make this combination for the purpose of preventing user dissatisfaction by avoiding delays in receiving content (Fei, ¶ [0002]).
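For illustration only (function names and the fixed weights below are hypothetical and are not drawn from Fei), the claimed arrangement of an account sub-model and a content sub-model within one recall model can be sketched as two towers whose outputs are compared by a matching score:

```python
def account_submodel(account_features, w=0.5):
    # Account sub-model: analyzes account information into an embedding.
    return [w * f for f in account_features]

def content_submodel(content_features, w=0.5):
    # Content sub-model: analyzes content data into an embedding.
    return [w * f for f in content_features]

def match_score(account_features, content_features):
    # The recall model compares the two sub-models' outputs; in joint
    # training, both towers' weights would be updated against this score.
    a = account_submodel(account_features)
    c = content_submodel(content_features)
    return sum(x * y for x, y in zip(a, c))
```

Aligned account and content feature vectors therefore score higher than orthogonal ones, which is what a trained two-tower recall model exploits at recommendation time.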
Allowable Subject Matter
Claims 4, 11, and 18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
None of the prior art of record teaches:
“wherein the extending the positive sample content via recall extension to obtain extended sample content comprises:
“determining an associated account associated with the sample account;
“acquiring a second content set consumed by the associated account within a historical time period; and
“obtaining the extended sample content based on the second content set”
as recited by the present claims. Oh extends the positive sample content only in consideration of the sample account; it does not determine an associated account associated with the sample account and acquire a second content set consumed by the associated account within a historical time period. None of the prior art of record teaches these limitations in the context of the present claims.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure. This art includes Ouaftouh, Sara, Ahmed Zellou, and Ali Idri (“Social recommendation: A user profile clustering‐based approach,” Concurrency and Computation: Practice and Experience 31.20 (2019): e5330), which teaches a system that recommends content by clustering users that share similar characteristics, but does not extend a set of samples for training.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HAL W SCHNEE whose telephone number is (571) 270-1918. The examiner can normally be reached M-F 7:30 a.m. - 6:00 p.m.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michael Huntley, can be reached at 303-297-4307. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HAL SCHNEE/Primary Examiner, Art Unit 2129