Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claims 5, 7, and 16 are objected to because of the following informalities:
In Claim 5, line 15, “the user” should read “the first user” in order to avoid potential 35 U.S.C. 112 issues. The same objection is made for Claim 16 at line 18.
In Claim 7, line 3, “the first set of features” should read “the set of features” in order to avoid potential 35 U.S.C. 112 issues.
Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1:
All claims are directed to either a method, a system, or a non-transitory machine-readable storage medium and thus satisfy Step 1 as falling into one of the statutory categories.
Step 2A, Prong One:
Independent Claim 1 recites (the same analysis applies to similar independent Claims 12 and 17):
creating a set of features comprising a first prediction and an interaction feature, wherein (i) the first prediction represents a degree of similarity between a first embedding output by a first sub-model of a machine learning model and a second embedding output by a second sub-model of the machine learning model, and (ii) the interaction feature indicates a strength of association between a first user represented by the first embedding and a second user associated with a content item represented by the second embedding;
This limitation, under its broadest reasonable interpretation, covers concepts that can be performed in the human mind and therefore falls under the “Mental Processes” grouping of abstract ideas. That is, the human mind is capable of creating sets of features based on similarity between embeddings and strength of association between users using observation and evaluation (the machine learning model and its sub-models are considered as merely a tool used to perform the abstract idea - see MPEP 2106.05(f)).
Step 2A, Prong Two:
Claim 1 recites the additional elements of (the same analysis applies to similar independent Claims 12 and 17):
inputting the set of features to a third sub-model of the machine learning model;
This limitation is considered as adding insignificant extra-solution activity (inputting data) to the judicial exception - see MPEP 2106.05(g); and the machine learning model and its sub-models are considered as merely a tool used to perform the abstract idea - see MPEP 2106.05(f).
receiving a second prediction, the second prediction output by the third sub-model, wherein the second prediction represents a likelihood that the first user is interested in the content item;
This limitation is considered as adding insignificant extra-solution activity (receiving data) to the judicial exception - see MPEP 2106.05(g); and the machine learning model and its sub-models are considered as merely a tool used to perform the abstract idea - see MPEP 2106.05(f).
and providing the second prediction to a recommendation system.
This limitation is considered as adding insignificant extra-solution activity (providing/outputting data) to the judicial exception - see MPEP 2106.05(g). The additional elements of the “processors” as recited in independent Claims 12 and 17 are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer component. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are therefore directed to an abstract idea.
Step 2B:
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements are considered as adding insignificant extra-solution activity (inputting, receiving, and outputting data) to the judicial exception and are therefore considered as appending well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception - see MPEP 2106.05(d); and the machine learning model and its sub-models are considered as merely a tool used to perform the abstract idea - see MPEP 2106.05(f). The additional elements of the “processors” as recited in independent Claims 12 and 17 amount to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claims are therefore not patent eligible.
Dependent Claims 2, 13, and 18 are considered as using the models as a tool to perform the abstract idea, which includes training the models - see MPEP 2106.05(f).
Dependent Claims 3, 14 and 19 are considered as adding well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception - see MPEP 2106.05(d).
Dependent Claims 4, 15, and 20 are considered as adding well-understood, routine, conventional activities previously known to the industry (adding users or features), specified at a high level of generality, to the judicial exception - see MPEP 2106.05(d).
Regarding Claim 5 (and similar Claim 16):
Step 2A, Prong One:
after a second set of content items features representing the content item are available: creating a second set of features comprising a third prediction and the interaction feature; wherein the third prediction represents a degree of similarity between the first embedding and a third embedding generated by the second sub-model of the machine learning model based on the second set of content items representing the content item;
This limitation, under its broadest reasonable interpretation, covers concepts that can be performed in the human mind and therefore falls under the “Mental Processes” grouping of abstract ideas. That is, the human mind is capable of creating sets of features based on similarity between embeddings and sets of content items using observation and evaluation (the machine learning model and its sub-models are considered as merely a tool used to perform the abstract idea - see MPEP 2106.05(f)).
Step 2A, Prong Two:
The additional elements recited are considered as follows:
the set of features is a first set of features;
This limitation is considered as adding insignificant extra-solution activity to the judicial exception - see MPEP 2106.05(g).
the second embedding representing the content item is generated by the second sub-model based on a first set of content item features representing the content item;
This limitation is considered as using the models as a tool to perform the abstract idea - see MPEP 2106.05(f).
inputting the second set of features to the third sub-model to obtain a fourth prediction output by the third sub-model;
This limitation is considered as adding insignificant extra-solution activity (inputting data) to the judicial exception - see MPEP 2106.05(g); and the machine learning model and its sub-models are considered as merely a tool used to perform the abstract idea - see MPEP 2106.05(f).
wherein the fourth prediction represents a likelihood that the user is interested in the content item;
This limitation is considered as adding insignificant extra-solution activity to the judicial exception - see MPEP 2106.05(g).
and providing the fourth prediction to the recommendation system.
This limitation is considered as adding insignificant extra-solution activity (providing/outputting data) to the judicial exception - see MPEP 2106.05(g). The additional element of the “processors” as recited in dependent Claim 16 is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using a generic computer component. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claims are therefore directed to an abstract idea.
Step 2B:
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements are considered as adding insignificant extra-solution activity to the judicial exception and are therefore considered as appending well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception - see MPEP 2106.05(d); and the machine learning model and its sub-models are considered as merely a tool used to perform the abstract idea - see MPEP 2106.05(f). The additional element of the “processors” as recited in dependent Claim 16 amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claims are therefore not patent eligible.
Dependent Claims 6-7 are also considered as falling under the “Mental Processes” grouping of abstract ideas. That is, the human mind is capable of performing dot products and concatenating features using pen and paper.
The limitations of dependent Claims 8-9 are considered as adding well-understood, routine, conventional activities previously known to the industry (recommending an invite to attend an online event and sharing a content item) to the judicial exception - see MPEP 2106.05(d).
The limitations of dependent Claims 10-11 are considered as using the models as a tool to perform the abstract idea - see MPEP 2106.05(f).
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-6 and 9-20 are rejected under 35 U.S.C. 102(a)(1) and 102(a)(2) as being anticipated by Lee, US 2022/0358366 A1.
Regarding Claim 1, Lee teaches:
A method comprising: creating a set of features comprising a first prediction and an interaction feature (paragraph 39: “The relationship and interaction branch 310 may analyze gathered feature information to generate interaction and relationship information that may be associated with the user and the content item. In some examples, the generated interaction and relationship information may be utilized to generate more accurate predictions of associations between the user and the content item”),
wherein (i) the first prediction represents a degree of similarity between a first embedding output by a first sub-model of a machine learning model and a second embedding output by a second sub-model of the machine learning model (paragraph 38: “in some examples, the network 300 may utilize input functions 301-304 to gather feature information (e.g., sparse features, dense features) related to a user and a content item, and may implement a user tower 305 and a content item tower 306 to analyze the gathered feature information. In particular, the network 300 may generate a user embedding 307 and a content item embedding 308 and may determine utilize a dot-product function 309 to determine a dot-product of the user embedding 307 and the content item embedding 308 to indicate an association between the user and the content item”. The dot product representing the degree of similarity),
and (ii) the interaction feature indicates a strength of association between a first user represented by the first embedding and a second user associated with a content item represented by the second embedding (paragraph 64: “the client device 630B may be utilized by a second user to provide feedback (e.g., likes, comments) that may be utilized to generate feature-related information for a content item distributed by the service provider”. And, paragraph 39: “the network 300 may also include a “relationship and interaction” branch 310. The relationship and interaction branch 310 may analyze gathered feature information to generate interaction and relationship information that may be associated with the user and the content item. In some examples, the generated interaction and relationship information may be utilized to generate more accurate predictions of associations between the user and the content item”. See also Tang, US 20190188561 A1, for example paragraph 18 that discusses associations between users);
inputting the set of features to a third sub-model of the machine learning model (paragraph 38: “the network 300 may utilize input functions 301-304 to gather feature information (e.g., sparse features, dense features) related to a user and a content item, and may implement a user tower 305 and a content item tower 306 to analyze the gathered feature information”);
receiving a second prediction, the second prediction output by the third sub-model, wherein the second prediction represents a likelihood that the first user is interested in the content item (paragraph 33: “to generate a prediction for a first object and a second object, the network 200 may utilize a dot product function 209. In particular, in some examples, the dot product function 209 may be used to estimate a dot product of the user embedding 207 and the content item embedding 208, which may indicate (i.e., predict) an association between the user and the content item. In one example, the dot product of the user embedding 207 and the content item embedding 208 may indicate a likelihood that a user may select a content item”);
and providing the second prediction to a recommendation system (paragraph 29: “the network 200 may be implemented to aid in recommendation of the content item for the user”).
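For illustration only, the claim 1 pipeline as mapped above — a dot-product similarity prediction bundled with an interaction feature into a set of features, which a further sub-model maps to a likelihood — can be sketched in a few lines of Python. Every value and function below (the embeddings, the interaction value, and the fixed linear-plus-sigmoid combiner standing in for the third sub-model) is a hypothetical stand-in, not taken from Lee or from the application:

```python
import math

def dot(u, v):
    # First prediction: dot product of the two embeddings, read as a
    # degree of similarity (cf. the cited Lee, paragraph 38).
    return sum(a * b for a, b in zip(u, v))

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical embeddings output by the first and second sub-models.
user_embedding = [0.2, 0.5, -0.1]
item_embedding = [0.4, 0.3, 0.6]

# Hypothetical interaction feature: a stand-in for the strength of
# association between the first user and the content item's second user.
interaction_feature = 0.8

# The claimed "set of features": the first prediction plus the interaction feature.
features = [dot(user_embedding, item_embedding), interaction_feature]

# Stand-in for the third sub-model: a fixed linear layer plus sigmoid,
# yielding the second prediction, a likelihood of interest in (0, 1).
weights, bias = [1.0, 1.0], 0.0
second_prediction = sigmoid(sum(w * f for w, f in zip(weights, features)) + bias)
```

Any learned combiner that maps the feature set to a probability would fill the same role as the fixed layer used here.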
Regarding Claim 2, Lee further teaches:
The method of claim 1, wherein the first sub-model, the second sub-model, and the third sub-model are jointly trained (paragraph 31: “the network 200 may implement a first multi-layer neural network (NN) tower 205 associated with a user (a “user tower”) and a second multi-layer neural network (NN) tower 206 associated with a content item (a “content item tower”). In some examples, the user tower 205 and content item tower 206 may be implemented to utilize one or more layers to analyze the gathered feature information and to determine one or more relationships and interactions between the user and the content item”. And, paragraph 42: “implementation of the network 400 may include a plurality of stages, including generating a network structure for network 400, implementing a training stage for the network 400, and implementing an inference stage for the network”. That is, all stages or sub-models of the network are trained together, i.e., jointly).
Regarding Claim 3, Lee further teaches:
The method of claim 1, wherein the second user associated with the content item is a creator of the content item (paragraph 16: “To select a content item of interest, a content distributor may analyze and rank a library of content items based on various aspects”. The content distributor is regarded as the content item creator).
Regarding Claim 4, Lee further teaches:
The method of claim 1, wherein the interaction feature is a first interaction feature; and wherein creating the set of features comprises: including a second interaction feature in the set of features; wherein the second interaction feature represents a number of users associated with the first user that are also associated with the content item (paragraph 99: “the systems and methods described herein may utilize to capture representative feature information that may provide enhanced representations of relationships and interactions between a first object and a second object. The systems and methods may also improve representations provided by a first embedding for a first object (e.g., a user) and a second embedding for a second object (e.g., a content item), and effectively capture one or more associated relationships and interactions. These one or more relationships and interactions may be indicative of how likely the first object will be “interested” (i.e., affiliated) in the second object”. And, paragraph 106: “a user may specify different privacy settings for different types of entities, such as individual users, friends-of-friends, followers, user groups, or corporate entities”).
Regarding Claim 5, Lee further teaches:
The method of claim 1, wherein: the set of features is a first set of features (paragraph 38: “in some examples, the network 300 may utilize input functions 301-304 to gather feature information”);
the second embedding representing the content item is generated by the second sub-model based on a first set of content item features representing the content item (paragraph 32: “and the second multi-layer neural network (NN) tower 206 associated with a content item may be used to generate a second embedding 208 relating to a content item”);
the method further comprises: after a second set of content items features representing the content item are available: creating a second set of features comprising a third prediction and the interaction feature (paragraph 39: “The relationship and interaction branch 310 may analyze gathered feature information to generate interaction and relationship information that may be associated with the user and the content item. In some examples, the generated interaction and relationship information may be utilized to generate more accurate predictions of associations between the user and the content item”);
wherein the third prediction represents a degree of similarity between the first embedding and a third embedding generated by the second sub-model of the machine learning model based on the second set of content items representing the content item (paragraph 38: “in some examples, the network 300 may utilize input functions 301-304 to gather feature information (e.g., sparse features, dense features) related to a user and a content item, and may implement a user tower 305 and a content item tower 306 to analyze the gathered feature information. In particular, the network 300 may generate a user embedding 307 and a content item embedding 308 and may determine utilize a dot-product function 309 to determine a dot-product of the user embedding 307 and the content item embedding 308 to indicate an association between the user and the content item”. The dot product representing the degree of similarity);
inputting the second set of features to the third sub-model to obtain a fourth prediction output by the third sub-model; wherein the fourth prediction represents a likelihood that the user is interested in the content item (paragraph 38: “the network 300 may utilize input functions 301-304 to gather feature information (e.g., sparse features, dense features) related to a user and a content item, and may implement a user tower 305 and a content item tower 306 to analyze the gathered feature information”. And, paragraph 33: “to generate a prediction for a first object and a second object, the network 200 may utilize a dot product function 209. In particular, in some examples, the dot product function 209 may be used to estimate a dot product of the user embedding 207 and the content item embedding 208, which may indicate (i.e., predict) an association between the user and the content item. In one example, the dot product of the user embedding 207 and the content item embedding 208 may indicate a likelihood that a user may select a content item”);
and providing the fourth prediction to the recommendation system (paragraph 29: “the network 200 may be implemented to aid in recommendation of the content item for the user”).
Regarding Claim 6, Lee further teaches:
The method of claim 1, wherein the first prediction comprises a dot product of the first embedding and the second embedding (paragraph 33: “in some examples, the dot product function 209 may be used to estimate a dot product of the user embedding 207 and the content item embedding”).
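As context for reading a dot product as a “degree of similarity,” note that on length-normalized embeddings the dot product equals cosine similarity. A minimal sketch with hypothetical vectors (not taken from Lee or the application):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cosine(u, v):
    # Dot product divided by the vector norms; on unit-normalized
    # embeddings this is identical to the raw dot product.
    return dot(u, v) / (math.sqrt(dot(u, u)) * math.sqrt(dot(v, v)))

parallel = cosine([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])  # same direction: similarity 1
orthogonal = cosine([1.0, 0.0], [0.0, 1.0])          # unrelated direction: similarity 0
```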
Regarding Claim 9, Lee further teaches:
The method of claim 1, wherein: the second user associated with the content item has access to the content item in a content management system (paragraph 63: “the external system 620 may be utilized by a service provider distributing content (e.g., a social media application provider) to store any information relating to one or more users of content items and a library of one or more content items”. The library representing the content management system);
and the recommendation system recommends to the second user associated with the content item to share the content item with the first user (paragraph 106: “any images shared by the first user are visible only to the first user's friends on the online social network”).
Regarding Claim 10, Lee further teaches:
The method of claim 1, wherein the first embedding is generated by the first sub-model prior to receiving a request to create the content item (paragraph 32: “In some examples, the first multi-layer neural network (NN) tower 205 associated with a user may be used to generate a first embedding 207 associated with the user (the “user embedding”), and the second multi-layer neural network (NN) tower 206 associated with a content item may be used to generate a second embedding 208 relating to a content item”. And, paragraph 33: “in some examples, the dot product function 209 may be used to estimate a dot product of the user embedding 207 and the content item embedding 208, which may indicate (i.e., predict) an association between the user and the content item”. That is, the embeddings are first generated and then used to create/predict the content item).
Regarding Claim 11, Lee further teaches:
The method of claim 1, wherein the creating the set of features, inputting the set of features to the third sub-model, receiving the second prediction output by third sub-model, and providing the second prediction to the recommendation system are performed in response to the recommendation system receiving a request to make a candidate invitee recommendation to the second user associated with the content item (paragraph 72: “implement 616 an inference stage of the neural network to generate a prediction and a prediction loss; and provide 617 a recommendation. In some examples, the instructions 613-617 may be utilized to enable a neural network to recommend a content item (i.e., a first object) to a user”. The content item can be associated with a candidate invitee recommendation);
and wherein the first embedding is generated by the first sub-model prior to the recommendation system receiving the request (paragraph 32: “In some examples, the first multi-layer neural network (NN) tower 205 associated with a user may be used to generate a first embedding 207 associated with the user (the “user embedding”), and the second multi-layer neural network (NN) tower 206 associated with a content item may be used to generate a second embedding 208 relating to a content item”. And, paragraph 33: “in some examples, the dot product function 209 may be used to estimate a dot product of the user embedding 207 and the content item embedding 208, which may indicate (i.e., predict) an association between the user and the content item”. That is, the embeddings are first generated and then used to create/predict the content item or request).
Claims 12-16 are similar to Claims 1-5 and are rejected under the same rationale as stated above for those claims.
Claims 17-20 are similar to Claims 1-4 and are rejected under the same rationale as stated above for those claims.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Lee, US 2022/0358366 A1, in view of Fan, US 2023/0401464 A1.
Regarding Claim 7, Lee teaches the limitations of the claim as previously pointed out. Lee may not have taught all of the following; however, Fan, in a similar field of endeavor, shows:
The method of claim 1, wherein including the first prediction and the interaction feature in the set of features is based on concatenating the first prediction and the interaction feature to form a set of fusion features; and wherein the first set of features comprises the set of fusion features (paragraph 69: “after encoding, all of the features for the user are concatenated and all of the features for the episode are concatenated”. See also Volkovs, US 20220058489 A1, for example paragraphs 41-42). (Emphasis added).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to use the teachings of Fan with that of Lee for concatenating features.
The ordinary artisan would have been motivated to modify Lee in the manner set forth above for the purposes of having a pre-trained recommender model that is trained using contrastive learning with feature-level augmentation [Fan: paragraph 95].
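For illustration only, the concatenation cited from Fan amounts to placing feature vectors end to end to form a single fused feature set. A minimal sketch, with all values hypothetical and not taken from Fan or Lee:

```python
# Hypothetical feature values; names are illustrative only.
first_prediction = [0.17]          # scalar similarity prediction, kept as a 1-vector
interaction_features = [0.8, 0.3]  # hypothetical interaction features

# Concatenation: the fused feature set is simply the features end to end.
fusion_features = first_prediction + interaction_features
```

The fused set preserves every input feature and its order; downstream sub-models then consume it as one vector.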
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Lee, US 2022/0358366 A1, in view of Tang, US 2019/0188561 A1.
Regarding Claim 8, Lee teaches the limitations of the claim as previously pointed out. Lee may not have taught all of the following; however, Tang, in a similar field of endeavor, shows:
The method of claim 1, wherein: the content item is an online event hosted by an online social network (Abstract: “An online system distributes content items describing events to one or more users of the online system”. And, paragraph 13: “the online system 130 is a social networking system”);
the first user is a member of the online social network; the second user associated with the content item is a member of the online social network and has indicated to the online social network an intent to attend the online event; and the recommendation system recommends to the second user associated with the content item to invite the first user to attend the online event (paragraph 56: “The online system determines 550, for each of the plurality of users, a measure of a likelihood of the user being interested in the event, or a measure of likelihood of the user attending the event. In various embodiments, the determined likelihood is based on a measure of distance between the vector representation of a user and the vector representation of the event. The online system identifies 560 a subset of the plurality of users that are likely to attend the event. In an embodiment, the subset of the plurality of user is identified by, for example, comparing the determined 550 measure of likelihood of the user attending the event with a threshold value and selecting all users that have more than the threshold likelihood of attending the event”).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to use the teachings of Tang with that of Lee for recommending users to attend an online event.
The ordinary artisan would have been motivated to modify Lee in the manner set forth above for the purposes of determining a likelihood of attendance of an event by a user [Tang: Abstract].
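For illustration only, the Tang passage cited above — a likelihood based on a distance between user and event vectors, with users above a threshold likelihood selected — can be sketched as follows. The vectors, the distance-to-likelihood mapping, and the threshold are all hypothetical choices, not taken from Tang:

```python
import math

# Hypothetical vector representations of an event and of candidate users.
event = [0.0, 1.0]
users = {"u1": [0.1, 0.9], "u2": [2.0, -1.0], "u3": [0.0, 1.2]}

def likelihood(u, e):
    # Map Euclidean distance into a (0, 1] likelihood: closer means likelier.
    return 1.0 / (1.0 + math.dist(u, e))

# Select the subset of users whose likelihood exceeds a threshold,
# mirroring the cited steps 550-560 of Tang.
threshold = 0.5
likely_attendees = [uid for uid, vec in users.items()
                    if likelihood(vec, event) >= threshold]
```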
Examiner’s Note:
The Examiner cites particular pages, sections, columns, line numbers, and/or paragraphs in the references as applied to the claims above for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. In preparing responses, the applicant is respectfully requested to consider each reference in its entirety as potentially teaching all or part of the claimed invention, to consider the context of the cited passages, and to consider the additional related prior art made of record, which is considered pertinent to the applicant's disclosure and further shows the general state of the art. The Examiner's interpretations in parentheses are provided with the cited references to assist the applicant in better understanding how the examiner interprets the prior art to read on the claims. Such comments are entirely consistent with the intent and spirit of compact prosecution.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See PTO-892 for the relevant prior art where for example Lineberger, US 20160191450 A1, teaches recommending appropriate data content for a social media user.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVE MISIR, whose telephone number is (571) 272-5243. The examiner can normally be reached Monday through Thursday, 8:00 am-5:00 pm, and some hours on Friday.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Abdullah Al Kawsar, can be reached at 571-270-3169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DAVE MISIR/Primary Examiner, Art Unit 2127