Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This action is in response to the arguments filed on 09/02/2025. Claims 1-20 are pending in the application and have been considered below.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Regarding Claim 1:
For Step 1, the claim is directed to a method and therefore recites a statutory category of invention.
For Step 2, Prong 1:
The claim recites the limitation of “generating the target collaborative embedding; and
the training set comprising the target collaborative embedding and the training item,
the training item being a training input for a given training iteration and the target collaborative embedding being a training target for the given training iteration;
during the given training iteration.” The generating limitation, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. That is, nothing in the claim precludes the generating step from practically being performed in the human mind. This limitation is a mental process.
MPEP 2106.04(a)(2)(III)(C) “A Claim That Requires a Computer May Still Recite a Mental Process”
The claim recites the limitation of “determining, by the server, a penalty score for the given training iteration by comparing the predicted collaborative embedding output by the first MLA and the target collaborative embedding output by the second MLA, the penalty score being indicative of a similarity between the predicted collaborative embedding and the target collaborative embedding of the training item.” The determining limitation, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. That is, nothing in the claim precludes the determining step from practically being performed in the human mind. This limitation is a mental process.
MPEP 2106.04(a)(2)(III)(C) “A Claim That Requires a Computer May Still Recite a Mental Process”
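For illustration only, and not as a characterization of the applicant's disclosed implementation, a penalty score indicative of a similarity between two embeddings of the kind recited could be evaluated as a simple mean squared difference. All vector values below are hypothetical:

```python
# Illustrative sketch only: a penalty score computed as the mean squared
# difference between a predicted embedding and a target embedding, so that
# a smaller penalty indicates greater similarity. Values are hypothetical.

def penalty_score(predicted, target):
    """Mean squared difference between two equal-length embedding vectors."""
    assert len(predicted) == len(target)
    return sum((p - t) ** 2 for p, t in zip(predicted, target)) / len(predicted)

predicted_embedding = [0.2, 0.4, 0.1]
target_embedding = [0.0, 0.5, 0.1]

score = penalty_score(predicted_embedding, target_embedding)
```

A smaller score indicates greater similarity; for small vectors such a computation could in principle be carried out mentally or with pen and paper, consistent with the mental-process characterization above.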
For Step 2, Prong 2, the claim recites additional elements: a content recommendation system, a server, “inputting, by the server, the training item into the first MLA, the first MLA being configured to generate a predicted collaborative embedding for the training item,” and “adjusting, by the server, the first MLA using the penalty score so as to increase the similarity between the predicted collaborative embedding and the target collaborative embedding of the training item.”
The “outputting, by a second MLA executed by the server, a target collaborative embedding for the training item based on previous user interactions between the users of the content recommendation system and the training item” step is a form of insignificant extra-solution activity. See MPEP 2106.05(g).
The content recommendation system and server are generic computer components used to apply an abstract idea under MPEP 2106.05(f).
The “inputting, by the server, the training item into the first MLA, the first MLA being configured to output a predicted collaborative embedding for the training item” step is a form of insignificant extra-solution activity. See MPEP 2106.05(g).
The “outputting, by the first MLA, the predicted collaborative embedding for the training
item” step is a form of insignificant extra-solution activity. See MPEP 2106.05(g).
The “adjusting, by the server, the first MLA using the penalty score so as to increase the similarity between the predicted collaborative embedding and the target collaborative embedding of the training item” step amounts to mere instructions to apply the abstract idea using generic computer components. See MPEP 2106.05(f).
Step 2B
The additional elements of the content recommendation system, the server, and “adjusting, by the server, the first MLA using the penalty score so as to increase the similarity between the predicted collaborative embedding and the target collaborative embedding of the training item” do not amount to significantly more for the reasons set forth in Step 2A above.
Additionally, under the Subject Matter Eligibility (SME) guidance, a conclusion that an additional element is insignificant extra-solution activity in Step 2A should be reevaluated in Step 2B.
Here, the “inputting (i.e., sending or transmitting), by the server, the training item into the first MLA, the first MLA being configured to generate a predicted collaborative embedding for the training item” step was considered to be extra-solution activity in Step 2A, and thus it is reevaluated in Step 2B to determine if it is more than what is well-understood, routine, conventional activity in the field. The addition of insignificant extra-solution activity does not amount to an inventive concept, particularly when the activity is well-understood or conventional (MPEP 2106.05(d)). This appears to be well-understood, routine, conventional activity as evidenced by MPEP 2106.05(d)(II)(i):
“i. Receiving or transmitting data over a network, e.g., using the Internet to gather data.”
Here, the “outputting, by a second MLA executed by the server, a target collaborative embedding for the training item based on previous user interactions between the users of the content recommendation system and the training item” step was considered to be extra-solution activity in Step 2A, and thus it is reevaluated in Step 2B to determine if it is more than what is well-understood, routine, conventional activity in the field. The addition of insignificant extra-solution activity does not amount to an inventive concept, particularly when the activity is well-understood or conventional (MPEP 2106.05(d)). This appears to be well-understood, routine, conventional activity as evidenced by OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1092-93 (Fed. Cir. 2015).
Here, the “outputting, by the first MLA, the predicted collaborative embedding for the training item” step was considered to be extra-solution activity in Step 2A, and thus it is reevaluated in Step 2B to determine if it is more than what is well-understood, routine, conventional activity in the field. The addition of insignificant extra-solution activity does not amount to an inventive concept, particularly when the activity is well-understood or conventional (MPEP 2106.05(d)). This appears to be well-understood, routine, conventional activity as evidenced by OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1092-93 (Fed. Cir. 2015).
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of the content recommendation system, the server, and “adjusting, by the server, the first MLA using the penalty score so as to increase the similarity between the predicted collaborative embedding and the target collaborative embedding of the training item” amount to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.
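For illustration only, the recited sequence of outputting a predicted embedding, determining a penalty score, and adjusting the MLA so as to increase similarity corresponds, in conventional terms, to a gradient-descent training iteration. The sketch below assumes a trivial linear model with hypothetical values; it is not the applicant's claimed MLA:

```python
# Illustrative sketch: one training iteration in which a trivial "MLA"
# (a single weight vector applied elementwise) is adjusted by gradient
# descent on a squared-error penalty, increasing the similarity between
# its predicted embedding and the target embedding. Values hypothetical.

def train_iteration(weights, item_features, target, lr=0.1):
    predicted = [w * x for w, x in zip(weights, item_features)]
    penalty = sum((p - t) ** 2 for p, t in zip(predicted, target))
    # Gradient of the penalty with respect to each weight.
    grads = [2 * (p - t) * x for p, t, x in zip(predicted, target, item_features)]
    new_weights = [w - lr * g for w, g in zip(weights, grads)]
    return new_weights, penalty

weights = [0.0, 0.0]
features = [1.0, 2.0]   # hypothetical training-item features
target = [0.5, 1.0]     # hypothetical target collaborative embedding
for _ in range(50):
    weights, penalty = train_iteration(weights, features, target)
```

Repeating the iteration drives the penalty toward zero, i.e., the predicted embedding toward the target embedding.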
Regarding Claim 2:
Claim 2, which incorporates the rejection of claim 1, recites an additional element:
“inputting, by the server, raw textual data of the training item.”
The “inputting, by the server, raw textual data of the training item” step is a form of insignificant extra-solution activity. See MPEP 2106.05(g).
The server is a generic computer component used to apply an abstract idea under MPEP 2106.05(f).
Step 2B
The additional element of the server does not amount to significantly more for the reasons set forth in Step 2A above.
Additionally, under the Subject Matter Eligibility (SME) guidance, a conclusion that an additional element is insignificant extra-solution activity in Step 2A should be reevaluated in Step 2B.
Here, the “inputting (i.e., sending or transmitting), by the server, raw textual data of the training item” step was considered to be extra-solution activity in Step 2A, and thus it is reevaluated in Step 2B to determine if it is more than what is well-understood, routine, conventional activity in the field. The addition of insignificant extra-solution activity does not amount to an inventive concept, particularly when the activity is well-understood or conventional (MPEP 2106.05(d)). This appears to be well-understood, routine, conventional activity as evidenced by MPEP 2106.05(d)(II)(i):
“i. Receiving or transmitting data over a network, e.g., using the Internet to gather data.”
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of a server to perform the claim steps amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.
Regarding Claim 3:
Claim 3, which incorporates the rejection of claim 2, recites further limitations such as “determining, by the server, the raw textual data based on content of the training item” that are part of the abstract idea.
MPEP 2106.04(a)(2)(III)(C) “A Claim That Requires a Computer May Still Recite a Mental Process.”
There are no additional elements recited in this claim that amount to an integration of the judicial exception into a practical application or significantly more than the judicial exception. Therefore, the claim is not eligible.
Regarding Claim 4:
Claim 4, which incorporates the rejection of claim 3, recites an additional element:
“a Singular Value Decomposition (SVD) based MLA.”
The “Singular Value Decomposition (SVD) based MLA” is a generic computer component that amounts to mere instructions to apply the abstract idea. See MPEP 2106.05(f).
The additional element of the Singular Value Decomposition (SVD) based MLA does not amount to significantly more for the reasons set forth in Step 2A above.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of “Singular Value Decomposition (SVD) based MLA” to perform the claim steps amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.
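For context only, an SVD-based MLA of the kind recited conventionally factorizes a user-item interaction matrix to obtain low-dimensional embeddings. The sketch below extracts the leading singular pair of a hypothetical interaction matrix by power iteration; the data and rank-1 setup are illustrative assumptions, not the claimed implementation:

```python
# Illustrative sketch: deriving rank-1 user and item embeddings from a tiny
# user-item interaction matrix via power iteration, which converges to the
# leading singular pair of the matrix. Interaction counts are hypothetical.

interactions = [  # rows: users, columns: items
    [4.0, 0.0, 2.0],
    [4.0, 0.0, 2.0],
    [0.0, 1.0, 0.0],
]

def matvec(m, v):
    return [sum(row[j] * v[j] for j in range(len(v))) for row in m]

def transpose(m):
    return [list(col) for col in zip(*m)]

def norm(v):
    return sum(x * x for x in v) ** 0.5

# Power iteration on A^T A yields the leading right singular vector
# (the item embedding direction); A applied to it gives the user direction.
item_vec = [1.0, 1.0, 1.0]
for _ in range(100):
    w = matvec(transpose(interactions), matvec(interactions, item_vec))
    n = norm(w)
    item_vec = [x / n for x in w]

user_vec = matvec(interactions, item_vec)
sigma = norm(user_vec)                      # leading singular value
user_vec = [x / sigma for x in user_vec]    # leading left singular vector
```

Here the first two (identical) users share one embedding direction, and the items they interacted with dominate the item direction, which is the conventional collaborative-filtering behavior of a truncated SVD.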
Regarding Claim 5:
Claim 5, which incorporates the rejection of claim 2, recites further limitations such as “determining, by the server, a plurality of potential recommendation items to be provided to the given user, the plurality of potential recommendation items including:
(i) a set of items associated with previous user interactions between the users and the respective items from the set of items, and
(ii) at least one other item, the at least one other item including the digital item,” “generating, by the server, a parameter for the digital item as a product of (i) the predicted collaborative embedding of the digital item and (ii) the other user embedding, the parameter being an input into a fourth MLA configured to rank the plurality of potential recommendation items,” and “generating, by the server, a second other parameter for the given item from the set of items as a product of (i) the respective collaborative embedding, and (ii) the user collaborative embedding, the second other parameter being an input into the fourth MLA configured to rank the plurality of potential recommendation items” that are part of the abstract idea.
MPEP 2106.04(a)(2)(III)(C) “A Claim That Requires a Computer May Still Recite a Mental Process.”
The claim recites additional elements: “acquiring, by the server, an indication of a request for content recommendation from a given user of the content recommendation system,” “acquiring, by the server, a collaborative embedding for the given item from the set of items, the collaborative embedding having been determined by the second MLA based on the previous user interactions between the user and the items from the set of items, the previous user interactions between the users and the given item having been sufficient for determining the collaborative embedding for the given item by the second MLA,” “acquiring, by the server, a predicted collaborative embedding for the digital item,” “acquiring, by the server, a user collaborative embedding for the given user, the user collaborative embedding having been determined by the second MLA based on the previous user interactions between the user and the items from the set of items,” and “acquiring, by the server, another user embedding for the given user, the other user embedding having been determined by a third MLA based on the predicted collaborative embedding for the digital item and user interactions between the given user and items of the content recommendation system.”
The “acquiring (i.e., receiving), by the server, an indication of a request for content recommendation from a given user of the content recommendation system” step is a form of insignificant extra-solution activity. See MPEP 2106.05(g).
The “acquiring (i.e., receiving), by the server, a collaborative embedding for the given item from the set of items, the collaborative embedding having been determined by the second MLA based on the previous user interactions between the user and the items from the set of items, the previous user interactions between the users and the given item having been sufficient for determining the collaborative embedding for the given item by the second MLA” step is a form of insignificant extra-solution activity. See MPEP 2106.05(g).
The “acquiring (i.e., receiving), by the server, a predicted collaborative embedding for the digital item” step is a form of insignificant extra-solution activity. See MPEP 2106.05(g).
The “acquiring (i.e., receiving), by the server, a user collaborative embedding for the given user, the user collaborative embedding having been determined by the second MLA based on the previous user interactions between the user and the items from the set of items” step is a form of insignificant extra-solution activity. See MPEP 2106.05(g).
The “acquiring (i.e., receiving), by the server, another user embedding for the given user, the other user embedding having been determined by a third MLA based on the predicted collaborative embedding for the digital item and user interactions between the given user and items of the content recommendation system” step is a form of insignificant extra-solution activity. See MPEP 2106.05(g).
Under the Subject Matter Eligibility (SME) guidance, a conclusion that an additional element is insignificant extra-solution activity in Step 2A should be reevaluated in Step 2B.
Here, the “acquiring (i.e., receiving), by the server, an indication of a request for content recommendation from a given user of the content recommendation system” step was considered to be extra-solution activity in Step 2A, and thus it is reevaluated in Step 2B to determine if it is more than what is well-understood, routine, conventional activity in the field. The addition of insignificant extra-solution activity does not amount to an inventive concept, particularly when the activity is well-understood or conventional (MPEP 2106.05(d)). This appears to be well-understood, routine, conventional activity as evidenced by MPEP 2106.05(d)(II)(i):
“i. Receiving or transmitting data over a network, e.g., using the Internet to gather data.”
Here, the “acquiring (i.e., receiving), by the server, a collaborative embedding for the given item from the set of items, the collaborative embedding having been determined by the second MLA based on the previous user interactions between the user and the items from the set of items, the previous user interactions between the users and the given item having been sufficient for determining the collaborative embedding for the given item by the second MLA” step was considered to be extra-solution activity in Step 2A, and thus it is reevaluated in Step 2B to determine if it is more than what is well-understood, routine, conventional activity in the field. The addition of insignificant extra-solution activity does not amount to an inventive concept, particularly when the activity is well-understood or conventional (MPEP 2106.05(d)). This appears to be well-understood, routine, conventional activity as evidenced by MPEP 2106.05(d)(II)(i):
“i. Receiving or transmitting data over a network, e.g., using the Internet to gather data.”
Here, the “acquiring (i.e., receiving), by the server, a predicted collaborative embedding for the digital item” step was considered to be extra-solution activity in Step 2A, and thus it is reevaluated in Step 2B to determine if it is more than what is well-understood, routine, conventional activity in the field. The addition of insignificant extra-solution activity does not amount to an inventive concept, particularly when the activity is well-understood or conventional (MPEP 2106.05(d)). This appears to be well-understood, routine, conventional activity as evidenced by MPEP 2106.05(d)(II)(i):
“i. Receiving or transmitting data over a network, e.g., using the Internet to gather data.”
Here, the “acquiring (i.e., receiving), by the server, a user collaborative embedding for the given user, the user collaborative embedding having been determined by the second MLA based on the previous user interactions between the user and the items from the set of items” step was considered to be extra-solution activity in Step 2A, and thus it is reevaluated in Step 2B to determine if it is more than what is well-understood, routine, conventional activity in the field. The addition of insignificant extra-solution activity does not amount to an inventive concept, particularly when the activity is well-understood or conventional (MPEP 2106.05(d)). This appears to be well-understood, routine, conventional activity as evidenced by MPEP 2106.05(d)(II)(i):
“i. Receiving or transmitting data over a network, e.g., using the Internet to gather data.”
Here, the “acquiring (i.e., receiving), by the server, another user embedding for the given user, the other user embedding having been determined by a third MLA based on the predicted collaborative embedding for the digital item and user interactions between the given user and items of the content recommendation system” step was considered to be extra-solution activity in Step 2A, and thus it is reevaluated in Step 2B to determine if it is more than what is well-understood, routine, conventional activity in the field. The addition of insignificant extra-solution activity does not amount to an inventive concept, particularly when the activity is well-understood or conventional (MPEP 2106.05(d)). This appears to be well-understood, routine, conventional activity as evidenced by MPEP 2106.05(d)(II)(i):
“i. Receiving or transmitting data over a network, e.g., using the Internet to gather data.”
There are no additional elements recited in this claim that amount to an integration of the judicial exception into a practical application or significantly more than the judicial exception. Therefore, the claim is not eligible.
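For illustration only, the recited “parameter ... as a product of (i) the predicted collaborative embedding ... and (ii) the other user embedding” corresponds, in conventional terms, to an inner product used as a ranking feature. The following sketch uses hypothetical embedding values and item names; it is not the applicant's implementation:

```python
# Illustrative sketch: a ranking "parameter" formed as the inner product of
# an item embedding and a user embedding, then used to order candidate
# items. All embedding values and item names are hypothetical.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

user_embedding = [0.5, 1.0]
candidates = {
    "item_a": [1.0, 0.0],
    "item_b": [0.0, 1.0],
    "item_c": [0.5, 0.5],
}

# One parameter per candidate item, as a product of the two embeddings.
parameters = {item: dot(emb, user_embedding) for item, emb in candidates.items()}

# A downstream ranker could simply order candidates by this parameter.
ranked = sorted(parameters, key=parameters.get, reverse=True)
```

In practice such parameters would be one of several inputs to a ranking model rather than the sole ordering criterion.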
Regarding Claim 6:
Claim 6, which incorporates the rejection of claim 1, recites further limitations such as “the collaborative embedding has been determined by the second MLA in an off-line mode, prior to receipt of the indication of the request for content recommendation” that are part of the abstract idea.
MPEP 2106.04(a)(2)(III)(C) “A Claim That Requires a Computer May Still Recite a Mental Process.”
There are no additional elements recited in this claim that amount to an integration of the judicial exception into a practical application or significantly more than the judicial exception. Therefore, the claim is not eligible.
Regarding Claim 7:
Claim 7, which incorporates the rejection of claim 1, recites an additional element:
“the fourth MLA is a decision-tree based MLA.”
The “fourth MLA is a decision-tree based MLA” limitation recites a generic computer component that amounts to mere instructions to apply the abstract idea. See MPEP 2106.05(f).
The additional element that the fourth MLA is a decision-tree based MLA does not amount to significantly more for the reasons set forth in Step 2A above.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of “fourth MLA is a decision-tree based MLA” to perform the claim steps amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.
Regarding Claim 8:
Claim 8, which incorporates the rejection of claim 1, recites further limitations such as “generating, by the server, another parameter for the given item as a product of
(i) the other predicted collaborative embedding of the given item and (ii) the
other user embedding, the other parameter being an input into the fourth MLA
configured to rank the plurality of potential recommendation items” that are part of the abstract idea.
The claim recites an additional element: “acquiring (i.e., receiving), by the server, another predicted collaborative embedding for a given item for the set of items, the other predicted collaborative embedding having been generated by the first MLA based on content data associated with the given item.”
The “acquiring (i.e., receiving), by the server, another predicted collaborative embedding for a given item for the set of items, the other predicted collaborative embedding having been generated by the first MLA based on content data associated with the given item” step is a form of insignificant extra-solution activity. See MPEP 2106.05(g).
Here, the “acquiring (i.e., receiving), by the server, another predicted collaborative embedding for a given item for the set of items, the other predicted collaborative embedding having been generated by the first MLA based on content data associated with the given item” step was considered to be extra-solution activity in Step 2A, and thus it is reevaluated in Step 2B to determine if it is more than what is well-understood, routine, conventional activity in the field. The addition of insignificant extra-solution activity does not amount to an inventive concept, particularly when the activity is well-understood or conventional (MPEP 2106.05(d)). This appears to be well-understood, routine, conventional activity as evidenced by MPEP 2106.05(d)(II)(i):
“i. Receiving or transmitting data over a network, e.g., using the Internet to gather data.”
There are no additional elements recited in this claim that amount to an integration of the judicial exception into a practical application or significantly more than the judicial exception. Therefore, the claim is not eligible.
Regarding Claim 9:
Claim 9, which incorporates the rejection of claim 1, recites an additional element:
“the second MLA is trained on a plurality of training sets.”
The recited “second MLA is trained on a plurality of training sets” is a generic training recitation that amounts to mere instructions to apply an abstract idea using a generic computer component under MPEP 2106.05(f).
The additional element of the “second MLA is trained on a plurality of training sets” does not amount to significantly more for the reasons set forth in Step 2A above.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of “second MLA is trained on a plurality of training sets” to perform the claim steps amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.
Regarding Claim 10:
Claim 10, which incorporates the rejection of claim 1, recites further limitations such as “the training item is used in a second plurality of training sets, and wherein the plurality of training sets is larger than the second plurality of training sets” that are part of the abstract idea.
There are no additional elements recited in this claim that amount to an integration of the judicial exception into a practical application or significantly more than the judicial exception. Therefore, the claim is not eligible.
Regarding Claim 11:
For Step 1, the claim is directed to a system and therefore recites a statutory category of invention.
For Step 2, Prong 1:
The claim recites the limitation of “generating the target collaborative embedding; and
the training set comprising the target collaborative embedding and the training item,
the training item being a training input for a given training iteration and the target collaborative embedding being a training target for the given training iteration; and
during the given training iteration.” The generating limitation, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. That is, nothing in the claim precludes the generating step from practically being performed in the human mind. This limitation is a mental process.
MPEP 2106.04(a)(2)(III)(C) “A Claim That Requires a Computer May Still Recite a Mental Process”
The claim recites the limitation of “determine a penalty score for the given training iteration by comparing the predicted collaborative embedding output by the MLA and the target collaborative embedding output by the second MLA, the penalty score being indicative of a similarity between the predicted collaborative embedding and the target collaborative embedding of the training item.” The determining limitation, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. That is, nothing in the claim precludes the determining step from practically being performed in the human mind. This limitation is a mental process.
MPEP 2106.04(a)(2)(III)(C) “A Claim That Requires a Computer May Still Recite a Mental Process.”
For Step 2, Prong 2, the claim recites additional elements: a content recommendation system, a server, “input the training item into the first MLA, the first MLA being configured to output a predicted collaborative embedding for the training item,” and “adjust the first MLA using the penalty score so as to increase the similarity between the predicted collaborative embedding and the target collaborative embedding of the training item.”
The content recommendation system and server are generic computer components used to apply an abstract idea under MPEP 2106.05(f).
The “output, by a second MLA executed by the server, a target collaborative embedding for the training item based on previous user interactions between the users of the content recommendation system and the training item” step is a form of insignificant extra-solution activity. See MPEP 2106.05(g).
The “input the training item into the first MLA, the first MLA being configured to generate a predicted collaborative embedding for the training item” step is a form of insignificant extra-solution activity. See MPEP 2106.05(g).
The “output, by the first MLA, the predicted collaborative embedding for the training
item” step is a form of insignificant extra-solution activity. See MPEP 2106.05(g).
The “adjust the first MLA using the penalty score so as to increase the similarity between the predicted collaborative embedding and the target collaborative embedding of the training item” step amounts to mere instructions to apply the abstract idea using generic computer components. See MPEP 2106.05(f).
Step 2B
The additional elements of the content recommendation system, the server, and “adjust the first MLA using the penalty score so as to increase the similarity between the predicted collaborative embedding and the target collaborative embedding of the training item” do not amount to significantly more for the reasons set forth in Step 2A above.
Additionally, under the Subject Matter Eligibility (SME) guidance, a conclusion that an additional element is insignificant extra-solution activity in Step 2A should be reevaluated in Step 2B.
Here, the “input (i.e., sending or transmitting) the training item into the first MLA, the first MLA being configured to generate a predicted collaborative embedding for the training item” step was considered to be extra-solution activity in Step 2A, and thus it is reevaluated in Step 2B to determine if it is more than what is well-understood, routine, conventional activity in the field. The addition of insignificant extra-solution activity does not amount to an inventive concept, particularly when the activity is well-understood or conventional (MPEP 2106.05(d)). This appears to be well-understood, routine, conventional activity as evidenced by MPEP 2106.05(d)(II)(i):
“i. Receiving or transmitting data over a network, e.g., using the Internet to gather data.”
Here, the “output, by a second MLA executed by the server, a target collaborative embedding for the training item based on previous user interactions between the users of the content recommendation system and the training item” step was considered to be extra-solution activity in Step 2A, and thus it is reevaluated in Step 2B to determine if it is more than what is well-understood, routine, conventional activity in the field. The addition of insignificant extra-solution activity does not amount to an inventive concept, particularly when the activity is well-understood or conventional (MPEP 2106.05(d)). This appears to be well-understood, routine, conventional activity as evidenced by OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1092-93 (Fed. Cir. 2015).
Here, the “output, by the first MLA, the predicted collaborative embedding for the training item” step was considered to be extra-solution activity in Step 2A, and thus it is reevaluated in Step 2B to determine if it is more than what is well-understood, routine, conventional activity in the field. The addition of insignificant extra-solution activity does not amount to an inventive concept, particularly when the activity is well-understood or conventional (MPEP 2106.05(d)). This appears to be well-understood, routine, conventional activity as evidenced by OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1092-93 (Fed. Cir. 2015).
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of the content recommendation system, the server, and “adjust the first MLA using the penalty score so as to increase the similarity between the predicted collaborative embedding and the target collaborative embedding of the training item” amount to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.
Regarding Claim 12:
Claim 12, which incorporates the rejection of claim 11, recites an additional element:
“input raw textual data of the training item.”
The “input raw textual data of the training item” step is a form of insignificant extra-solution activity. See MPEP 2106.05(g).
The “server” is a generic computer component used to apply an abstract idea under MPEP 2106.05(f).
Step 2B
The additional element of the server does not amount to significantly more for the reasons set forth in Step 2A above.
Additionally, under the subject matter eligibility (SME) guidance, a conclusion that an additional element is insignificant extra-solution activity in Step 2A should be reevaluated in Step 2B.
Here the “input (i.e., sending or transmitting) raw textual data of the training item” step was considered to be extra-solution activity in Step 2A, and thus it is reevaluated in Step 2B to determine if it is more than what is well-understood, routine, conventional activity in the field. The addition of insignificant extra-solution activity does not amount to an inventive concept, particularly when the activity is well-understood or conventional (MPEP 2106.05(d)). This appears to be well-understood, routine, conventional as evidenced by MPEP 2106.05(d)(II)(i):
“i. Receiving or transmitting data over a network, e.g., using the Internet to gather data.”
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of a server to perform the claim steps amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.
Regarding Claim 13:
Claim 13, which incorporates the rejection of claim 12, recites further limitations such as “determine the raw textual data based on content of the training item” that are part of the abstract idea.
MPEP 2106.04(a)(2)(III)(C) “A Claim That Requires a Computer May Still Recite a Mental Process.”
There are no additional elements recited in this claim that amount to an integration of the judicial exception into a practical application or significantly more than the judicial exception. Therefore, the claim is not eligible.
Regarding Claim 14:
Claim 14, which incorporates the rejection of claim 11, recites an additional element:
“a Singular Value Decomposition (SVD) based MLA.”
The “Singular Value Decomposition (SVD) based MLA” is a generic computer component that amounts to mere instructions to apply the abstract idea. See MPEP 2106.05(f).
The additional element of the “Singular Value Decomposition (SVD) based MLA” does not amount to significantly more for the reasons set forth in Step 2A above.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of “Singular Value Decomposition (SVD) based MLA” to perform the claim steps amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.
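For clarity of the record, applying SVD to a user-item interaction matrix is a conventional way of producing collaborative embeddings on a generic computer. The following is a minimal illustrative sketch only (hypothetical data, dimensionality, and variable names; not the applicant's or any cited reference's implementation):

```python
import numpy as np

# Hypothetical user-item interaction matrix (6 users x 5 items, binary clicks).
rng = np.random.default_rng(1)
interactions = rng.integers(0, 2, size=(6, 5)).astype(float)

# Truncated SVD: keep the top-k singular directions as embedding dimensions.
U, s, Vt = np.linalg.svd(interactions, full_matrices=False)
k = 3
user_embeddings = U[:, :k] * s[:k]   # one k-dimensional vector per user
item_embeddings = Vt[:k, :].T        # one k-dimensional vector per item
```

Each row of `item_embeddings` is a generic k-dimensional collaborative embedding of the kind the claim recites, obtained with an off-the-shelf linear-algebra routine.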
Regarding Claim 15:
Claim 15, which incorporates the rejection of claim 11, recites further limitations such as “determine, by the server, a plurality of potential recommendation items to be provided to the given user, the plurality of potential recommendation items including:
(i) a set of items associated with previous user interactions between the users and the respective items from the set of items, and
(ii) at least one other item, the at least one other item including the digital item” and
generate, by the server, a parameter for the digital item as a product of (i) the predicted collaborative embedding of the digital item and (ii) the other user embedding, the parameter being an input into a third MLA configured to rank the plurality of potential recommendation items; and
generate, by the server, a second other parameter for the given item from the set of items as a product of (i) the respective collaborative embedding, and (ii) the user collaborative embedding, the second other parameter being an input into the third MLA configured to rank the plurality of potential recommendations items” that are part of the abstract idea.
MPEP 2106.04(a)(2)(III)(C) “A Claim That Requires a Computer May Still Recite a Mental Process.”
The claim recites additional elements: “acquire an indication of a request for content recommendation from a given user of the content recommendation system,” “acquire a collaborative embedding for the given user, the user collaborative embedding having been determined by the second MLA based on the previous user interactions between the user and the items from the set of items, the previous user interactions between the users and the given item having been sufficient for determining the collaborative embedding for the given item by the second MLA,” “acquire a predicted collaborative embedding for the digital item,” “acquire another collaborative embedding for the given user, the user collaborative embedding having been determined by the other MLA based on the previous user interactions between the user and the items from the set of items,” and “acquire an other user embedding for the given user, the other user embedding having been determined by a second other MLA based on the predicted collaborative embedding for the digital item and user interactions between the given user and items of the recommendation system.”
The “acquire (i.e., receiving) an indication of a request for content recommendation from a given user of the content recommendation system” step is a form of insignificant extra-solution activity. See MPEP 2106.05(g).
The “acquire a collaborative embedding for the given user, the user collaborative embedding having been determined by the second MLA based on the previous user interactions between the user and the items from the set of items, the previous user interactions between the users and the given item having been sufficient for determining the collaborative embedding for the given item by the second MLA” step is a form of insignificant extra-solution activity. See MPEP 2106.05(g).
The “acquire (i.e., receiving) a predicted collaborative embedding for the digital item” step is a form of insignificant extra-solution activity. See MPEP 2106.05(g).
The “acquire (i.e., receiving) a user collaborative embedding for the given user, the user collaborative embedding having been determined by the second MLA based on the previous user interactions between the user and the items from the set of items” step is a form of insignificant extra-solution activity. See MPEP 2106.05(g).
The “acquire (i.e., receiving) an other user embedding for the given user, the other user embedding having been determined by a second MLA based on the predicted collaborative embedding for the digital item and user interactions between the given user and items of the recommendation system” step is a form of insignificant extra-solution activity. See MPEP 2106.05(g).
Under the subject matter eligibility (SME) guidance, a conclusion that an additional element is insignificant extra-solution activity in Step 2A should be reevaluated in Step 2B.
Here the “acquire (i.e., receiving) an indication of a request for content recommendation from a given user of the content recommendation system” step was considered to be extra-solution activity in Step 2A, and thus it is reevaluated in Step 2B to determine if it is more than what is well-understood, routine, conventional activity in the field. The addition of insignificant extra-solution activity does not amount to an inventive concept, particularly when the activity is well-understood or conventional (MPEP 2106.05(d)). This appears to be well-understood, routine, conventional as evidenced by MPEP 2106.05(d)(II)(i):
“i. Receiving or transmitting data over a network, e.g., using the Internet to gather data.”
Here the “acquire (i.e., receiving) a collaborative embedding for the given user, the user collaborative embedding having been determined by the second MLA based on the previous user interactions between the user and the items from the set of items, the previous user interactions between the users and the given item having been sufficient for determining the collaborative embedding for the given item by the second MLA” step was considered to be extra-solution activity in Step 2A, and thus it is reevaluated in Step 2B to determine if it is more than what is well-understood, routine, conventional activity in the field. The addition of insignificant extra-solution activity does not amount to an inventive concept, particularly when the activity is well-understood or conventional (MPEP 2106.05(d)). This appears to be well-understood, routine, conventional as evidenced by MPEP 2106.05(d)(II)(i):
“i. Receiving or transmitting data over a network, e.g., using the Internet to gather data.”
Here the “acquire (i.e., receiving) a predicted collaborative embedding for the digital item” step was considered to be extra-solution activity in Step 2A, and thus it is reevaluated in Step 2B to determine if it is more than what is well-understood, routine, conventional activity in the field. The addition of insignificant extra-solution activity does not amount to an inventive concept, particularly when the activity is well-understood or conventional (MPEP 2106.05(d)). This appears to be well-understood, routine, conventional as evidenced by MPEP 2106.05(d)(II)(i):
“i. Receiving or transmitting data over a network, e.g., using the Internet to gather data.”
Here the “acquire (i.e., receiving) a user collaborative embedding for the given user, the user collaborative embedding having been determined by the second MLA based on the previous user interactions between the user and the items from the set of items” step was considered to be extra-solution activity in Step 2A, and thus it is reevaluated in Step 2B to determine if it is more than what is well-understood, routine, conventional activity in the field. The addition of insignificant extra-solution activity does not amount to an inventive concept, particularly when the activity is well-understood or conventional (MPEP 2106.05(d)). This appears to be well-understood, routine, conventional as evidenced by MPEP 2106.05(d)(II)(i):
“i. Receiving or transmitting data over a network, e.g., using the Internet to gather data.”
Here the “acquire (i.e., receiving) an other user embedding for the given user, the other user embedding having been determined by a second MLA based on the predicted collaborative embedding for the digital item and user interactions between the given user and items of the recommendation system” step was considered to be extra-solution activity in Step 2A, and thus it is reevaluated in Step 2B to determine if it is more than what is well-understood, routine, conventional activity in the field. The addition of insignificant extra-solution activity does not amount to an inventive concept, particularly when the activity is well-understood or conventional (MPEP 2106.05(d)). This appears to be well-understood, routine, conventional as evidenced by MPEP 2106.05(d)(II)(i):
“i. Receiving or transmitting data over a network, e.g., using the Internet to gather data.”
There are no additional elements recited in this claim that amount to an integration of the judicial exception into a practical application or significantly more than the judicial exception. Therefore, the claim is not eligible.
Regarding Claim 16:
Claim 16, which incorporates the rejection of claim 15, recites further limitations such as “the collaborative embedding has been determined by the second MLA in an off-line mode, prior to receipt of the indication of the request for content recommendation” that are part of the abstract idea.
MPEP 2106.04(a)(2)(III)(C) “A Claim That Requires a Computer May Still Recite a Mental Process.”
There are no additional elements recited in this claim that amount to an integration of the judicial exception into a practical application or significantly more than the judicial exception. Therefore, the claim is not eligible.
Regarding Claim 17:
Claim 17, which incorporates the rejection of claim 15, recites an additional element:
“the fourth MLA is a decision-tree based MLA.”
The “fourth MLA is a decision-tree based MLA” is a generic computer component that amounts to mere instructions to apply the abstract idea. See MPEP 2106.05(f).
The additional element of “the fourth MLA is a decision-tree based MLA” does not amount to significantly more for the reasons set forth in Step 2A above.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of “the fourth MLA is a decision-tree based MLA” to perform the claim steps amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.
Regarding Claim 18:
Claim 18, which incorporates the rejection of claim 15, recites further limitations such as “generate another parameter for the given item as a product of (i) the other predicted collaborative embedding of the given item and (ii) the other user embedding, the other parameter being an input into the fourth MLA configured to rank the plurality of potential recommendation items” that are part of the abstract idea.
The claim recites an additional element: “acquire (i.e., receiving) an other predicted collaborative embedding for a given item for the set of items, the other predicted collaborative embedding having been generated by the first MLA based on content data associated with the given item.”
The “acquire (i.e., receiving) an other predicted collaborative embedding for a given item for the set of items, the other predicted collaborative embedding having been generated by the first MLA based on content data associated with the given item” step is a form of insignificant extra-solution activity. See MPEP 2106.05(g).
Here the “acquire (i.e., receiving) an other predicted collaborative embedding for a given item for the set of items, the other predicted collaborative embedding having been generated by the first MLA based on content data associated with the given item” step was considered to be extra-solution activity in Step 2A, and thus it is reevaluated in Step 2B to determine if it is more than what is well-understood, routine, conventional activity in the field. The addition of insignificant extra-solution activity does not amount to an inventive concept, particularly when the activity is well-understood or conventional (MPEP 2106.05(d)). This appears to be well-understood, routine, conventional as evidenced by MPEP 2106.05(d)(II)(i):
“i. Receiving or transmitting data over a network, e.g., using the Internet to gather data.”
There are no additional elements recited in this claim that amount to an integration of the judicial exception into a practical application or significantly more than the judicial exception. Therefore, the claim is not eligible.
Regarding Claim 19:
Claim 19, which incorporates the rejection of claim 11, recites an additional element:
“the second MLA is trained on a plurality of training sets.”
The recited “second MLA is trained on a plurality of training sets” is a generic training recitation that may amount to a generic computer component to apply an abstract idea under MPEP 2106.05(f).
The additional element of “second MLA is trained on a plurality of training sets” does not amount to significantly more for the reasons set forth in Step 2A above.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of “second MLA is trained on a plurality of training sets” to perform the claim steps amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.
Regarding Claim 20:
Claim 20, which incorporates the rejection of claim 19, recites further limitations such as “the training item is used in a second plurality of training sets, and wherein the plurality of training sets is larger than the second plurality of training sets” that are part of the abstract idea.
There are no additional elements recited in this claim that amount to an integration of the judicial exception into a practical application or significantly more than the judicial exception. Therefore, the claim is not eligible.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 7, 9-11, 17 and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over ZHIVOTVOREV et al. (US 2019/0164069 A1, hereinafter referred to as ZHIVOTVOREV), in view of GULIN (US 2019/0164084 A1, hereinafter referred to as GULIN).
As to claim 1, ZHIVOTVOREV teaches a method for training a first Machine Learning Algorithm (MLA) to generate a predicted collaborative embedding for a digital item, the digital item being a potential recommendation item of a content recommendation system (paragraphs [0091]-[0100], potentially recommendable items; [0147]-[0148], “To that end, the first MLA 116 is trained to predict user-non-specific popularity scores for given items during a training phase. How the first MLA 116 is trained to predict user-non-specific popularity scores of given items during the training phase thereof will now be described”; [0191], train the first MLA 116), the content recommendation system configured to recommend items to users of the content recommendation system, the content recommendation system being hosted by a server, the method executable by the server, the method comprising:
generating, by the server, a training set for a training item (paragraphs [0230]-[0232], generate a respective training set for each one of the set of items), the generating including:
outputting, by a second MLA executed by the server, a target collaborative embedding for the training item based on previous user interactions between the users of the content recommendation system and the training item (paragraphs [0132]-[0134], To that end, the server 112 may employ the second MLA 118 for generating (interpreted by Examiner as “outputting”) a user-specific popularity score for at least some items of the pool of potentially recommendable items stored in the recommendable item database 124; [0230]-[0232], generate (interpreted by Examiner as “output”) a respective training set for each one of the set of items),
the previous user interactions between the users and the training item being sufficient for generating the target collaborative embedding (paragraphs [0016]-[0017], generating, by the server, a set of user-specific recommendation items by selecting from the pool of items user-specific recommendation items to be presented to the user based on the respective user-specific popularity scores; [0028], receiving, by the server, an indication of previous user interactions associated with each one of the set of items on the landing page. The method also comprises generating, by the server, a feature vector for each one of the set of items from the landing page based on information associated with the visual characteristics of a respective one of the set of items on the landing page; [0125], enough previous interactions with the recommendation service to enable the server 112 to collect enough indications of previous user interactions associated with the given user and, thus, to enable the server 112 to generate user-specific content recommendations for the given user. The indications of previous user interactions associated with the given user are stored in the user interaction database 126 and can be retrieved by the server 112 for further processing; [0230]-[0233], “generate a respective training set for each one of the set of items based on the respective feature vector and the respective user interactions of the respective one of the set of items on the respective landing page”; [0282], based on the previous user interactions of the user 102. The server 112 may employ the second MLA 118 for generating a user-specific popularity score for the at least some items of the pool of potentially recommendable items stored in the recommendable item database 124.)
However, ZHIVOTVOREV fails to explicitly teach:
the training set comprising the target collaborative embedding and the training item, the training item being a training input for a given training iteration and the target collaborative embedding being a training target for the given training iteration; and
during the given training iteration:
inputting, by the server, the training item into the first MLA, the first MLA being configured to output a predicted collaborative embedding for the training item;
outputting, by the first MLA, the predicted collaborative embedding for the training item;
determining, by the server, a penalty score for the given training iteration by comparing the predicted collaborative embedding output by the first MLA and the target collaborative embedding generated by the second MLA,
the penalty score being indicative of a similarity between the predicted collaborative embedding and the target collaborative embedding of the training item; and
adjusting, by the server, the first MLA using the penalty score so as to increase the similarity between the predicted collaborative embedding and the target collaborative embedding of the training item.
GULIN, in combination with ZHIVOTVOREV, teaches:
the training set comprising the target collaborative embedding and the training item, the training item being a training input for a given training iteration and the target collaborative embedding being a training target for the given training iteration (paragraphs [0081], each training object of the set of training objects containing an indication of a document and a target associated with the document; organizing the set of training objects into an ordered list of training objects, the ordered list of training objects is organized such that for each given training object in the ordered list of training objects there is at least one of: (i) a preceding training object that occurs before the given training object and (ii) a subsequent training object that occurs after the given training object; descending the set of training objects through the decision tree so that each one of the set of training objects gets categorized, by the decision tree model at the given iteration of training, into a given child node of the at least one node of the given level of the decision tree; generating the prediction quality parameter for the given level of a decision tree by: generating, for a given training object that has been categorized into the given child node, a prediction quality parameter, the generating being executed based on targets of only those training objects that occur before the given training object in the ordered list of training objects; and [0095], the prediction quality parameter being for evaluating prediction quality of the decision tree prediction model at a given iteration of training of the decision tree, the given iteration of training of the decision tree having at least one previous iteration of training of a previous decision tree, the decision tree and the previous decision tree forming an ensemble of trees generated using a decision tree boosting technique.);
during the given training iteration:
inputting, by the server, the training item into the first MLA, the first MLA being configured to output a predicted collaborative embedding for the training item (paragraphs [0035], providing training data for training at least one mathematical model, wherein the training data is based on past flight information of a plurality of passengers, and the training data comprises a first set of vectors and an associated target variable for each passenger in the plurality of passengers; training at least one mathematical model (interpreted by Examiner as a model to include a first MLA) with the training data; and providing a second set of vectors relating to past flight information of the passenger as inputs to the trained at least one mathematical model (interpreted by Examiner by Examiner as a model to include a first MLA) and calculating an output of the trained at least one mathematical model (interpreted by Examiner as a model to include a first MLA) based on the inputs, wherein the output represents a prediction of future flight activities of the passenger);
outputting, by the first MLA, the predicted collaborative embedding for the training item (paragraphs [0035], providing a second set of vectors relating to past flight information of the passenger as inputs to the trained at least one mathematical model (interpreted by Examiner as a model to include a first MLA) and calculating an output of the trained at least one mathematical model (interpreted by Examiner as a model to include a first MLA) based on the inputs, wherein the output represents a prediction of future flight activities of the passenger);
determining, by the server, a penalty score for the given training iteration by comparing the predicted collaborative embedding output by the first MLA and the target collaborative embedding output by the second MLA,
the penalty score being indicative of a similarity between the predicted collaborative embedding and the target collaborative embedding of the training item (paragraphs [0027], the MLA calculates a metric (i.e., a “score”, interpreted by Examiner as a “penalty score”), which denotes how close (Examiner interprets “how close” to include “comparing”) the current iteration of the model, which includes the given tree (or the given level of the given tree) and preceding trees, has gotten in its prediction to the correct answers (targets). The score of the model is calculated based on the predictions made and actual target values (correct answers) of the training objects used for training; [0045], When the MLA calculates a prediction score for a given training object in a leaf based on targets of all the training objects in the leaf, the MLA kind of “peeks” into the target of the given training object and targets of the neighboring training objects in the leaf (which can be thought of as “looking ahead”). That can cause the overfitting to appear comparatively earlier in the training process); and
adjusting, by the server, the first MLA using the penalty score so as to increase the similarity between the predicted collaborative embedding and the target collaborative embedding of the training item (paragraphs [0027], the MLA calculates a metric (i.e., a “score”), which denotes how close the current iteration of the model, which includes the given tree (or the given level of the given tree) and preceding trees, has gotten in its prediction to the correct answers (targets). The score of the model is calculated based on the predictions made and actual target values (correct answers) of the training objects used for training; wherein Examiner interprets the given level of the given tree to include the “adjusting”);
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of ZHIVOTVOREV to add a penalty score, as taught by GULIN above. The modification would have been obvious because one of ordinary skill would be motivated to provide a powerful and effective mechanism to improve usage of computing processing power and to deliver to an end user more relevant predictions, as suggested by GULIN ([0113]).
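For context on the mapped limitations, the claimed training iteration (input a training item to the first MLA, output a predicted embedding, compare it to the second MLA's target embedding via a penalty score, and adjust the first MLA to increase similarity) follows a conventional supervised training pattern. The following is a minimal illustrative sketch only, with hypothetical names, random data, a linear model standing in for the first MLA, and squared distance as the penalty; it is not the applicant's or the cited references' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
dim_in, dim_emb = 8, 4
W = rng.normal(size=(dim_emb, dim_in)) * 0.1  # "first MLA" as a linear model

training_item = rng.normal(size=dim_in)       # training input
target_embedding = rng.normal(size=dim_emb)   # target from the "second MLA"

def penalty(pred, target):
    # Penalty score: squared distance (lower means more similar).
    return float(np.sum((pred - target) ** 2))

lr = 0.01
scores = []
for _ in range(500):
    pred = W @ training_item                  # output predicted embedding
    scores.append(penalty(pred, target_embedding))
    # Gradient of the penalty with respect to W.
    grad = 2.0 * np.outer(pred - target_embedding, training_item)
    W -= lr * grad                            # adjust so similarity increases
```

Each pass through the loop performs the four recited steps of a training iteration; repeated adjustment drives the predicted embedding toward the target embedding.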
As to claim 7, which incorporates the rejection of claim 5, ZHIVOTVOREV teaches wherein the fourth MLA is a decision-tree based MLA (paragraph [0015] Decision tree based MLAs).
As to claim 9, which incorporates the rejection of claim 5, ZHIVOTVOREV teaches wherein the second MLA is trained on a plurality of training sets (paragraphs [0028], training, by the server, an MLA based on the plurality of training sets to predict a user-non-specific popularity score of a given item. The user-non-specific popularity score is independent from any given user; [0134], How the second MLA 118 of the server 112 is trained and configured to generate the user-specific popularity scores and how the second MLA 118 is configured to select the set of user-specific recommendation items from the pool of potentially recommendable items to be presented to the user 102 is disclosed in a patent application Ser. No. 15/607,555, filed on May 29, 2017; [0191], training sets for training a machine learning algorithm (MLA) implemented by the system of FIG. 1).
As to claim 10, which incorporates the rejection of claim 9, ZHIVOTVOREV teaches wherein the training item is used in a second plurality of training sets, and wherein the plurality of training sets is larger than the second plurality of training sets (paragraphs [0181]-[0191], training sets 361, 362, 363, 364, 365, 366 and 367; wherein Examiner interprets training sets 361, 363, 364, 365, 366 and 367 to be larger than training set 362 (i.e., the second plurality of training sets)).
As to claim 11, ZHIVOTVOREV teaches a server for training a first Machine Learning Algorithm (MLA) to output a predicted collaborative embedding for a digital item, the digital item being a potential recommendation item of a content recommendation system, the content recommendation system configured to recommend items to users of the content recommendation system, the content recommendation system being hosted by the server, the server being configured to:
generate a training set for a training item (paragraphs [0230]-[0232], generate a respective training set for each one of the set of items), wherein to generate, the server is configured to:
output, by a second MLA (paragraph [0132], second MLA; [0230]-[0232], generate (interpreted by Examiner as “output”)), a target collaborative embedding for the training item based on previous user interactions between the users of the content recommendation system and the training item, the previous user interactions between the users and the training item being sufficient for generating the target collaborative embedding (paragraphs [0016]-[0017], generating, by the server, a set of user-specific recommendation items by selecting from the pool of items user-specific recommendation items to be presented to the user based on the respective user-specific popularity scores; [0028], receiving, by the server, an indication of previous user interactions associated with each one of the set of items on the landing page. The method also comprises generating, by the server, a feature vector for each one of the set of items from the landing page based on information associated with the visual characteristics of a respective one of the set of items on the landing page.).
However, ZHIVOTVOREV fails to explicitly teach:
the training set comprising the target collaborative embedding and the training item, the training item being a training input for a given training iteration and the target collaborative embedding being a training target for the given training iteration;
during the given training iteration:
input the training item into the first MLA (paragraph [0147]-[0148], first MLA), the first MLA being configured to generate a predicted collaborative embedding for the training item;
determine a penalty score for the given training iteration by comparing the predicted collaborative embedding generated by the first MLA and the target collaborative embedding generated by the second MLA, the penalty score being indicative of a similarity between the predicted collaborative embedding and the target collaborative embedding of the training item; and
adjust the first MLA using the penalty score so as to increase the similarity between the predicted collaborative embedding and the target collaborative embedding of the training item.
GULIN, in combination with ZHIVOTVOREV, teaches:
the training set comprising the target collaborative embedding and the training item, the training item being a training input for a given training iteration and the target collaborative embedding being a training target for the given training iteration; (paragraphs [0081], each training object of the set of training object containing an indication of a document and a target associated with the document; organizing the set of training objects into an ordered list of training objects, the ordered list of training objects is organized such that for each given training object in the ordered list of training objects there is at least one of: (i) a preceding training object that occurs before the given training object and (ii) a subsequent training object that occurs after the given training object; descending the set of training objects through the decision tree so that each one of the set of training objects gets categorized, by the decision tree model at the given iteration of training, into a given child node of the at least one node of the given level of the decision tree; generating the prediction quality parameter for the given level of a decision tree by: generating, for a given training object that has been categorized into the given child node, a prediction quality parameter, the generating being executed based on targets of only those training objects that occur before the given training object in the ordered list of training objects; and [0095]); and
during the given training iteration:
input the training item into the first MLA, the first MLA being configured to generate a predicted collaborative embedding for the training item (paragraphs [0035], providing training data for training at least one mathematical model (interpreted by Examiner as a model to include a first MLA), wherein the training data is based on past flight information of a plurality of passengers, and the training data comprises a first set of vectors and an associated target variable for each passenger in the plurality of passengers; training at least one mathematical model (interpreted as a model to include a first MLA) with the training data; and providing a second set of vectors relating to past flight information of the passenger as inputs to the trained at least one mathematical model (interpreted as a model to include a first MLA) and calculating an output of the trained at least one mathematical model (interpreted as a model to include a first MLA) based on the inputs, wherein the output represents a prediction of future flight activities of the passenger);
output, by the first MLA, the predicted collaborative embedding for the training item (paragraphs [0035], providing training data for training at least one mathematical model (interpreted as a model to include a first MLA), wherein the training data is based on past flight information of a plurality of passengers, and the training data comprises a first set of vectors and an associated target variable for each passenger in the plurality of passengers; training at least one mathematical model with the training data; and providing a second set of vectors relating to past flight information of the passenger as inputs to the trained at least one mathematical model and calculating an output of the trained at least one mathematical model based on the inputs, wherein the output represents a prediction of future flight activities of the passenger);
determine a penalty score for the given training iteration by comparing the predicted collaborative embedding generated by the first MLA and the target collaborative embedding generated by the second MLA, the penalty score being indicative of a similarity between the predicted collaborative embedding and the target collaborative embedding of the training item (paragraphs [0027], the MLA calculates a metric (i.e. a "score", interpreted by Examiner as a “penalty score”), which denotes how close (Examiner interprets “how close” to include “comparing”) the current iteration of the model, which includes the given tree (or the given level of the given tree) and preceding trees, has gotten in its prediction to the correct answers (targets). The score of the model is calculated based on the predictions made and actual target values (correct answers) of the training objects used for training; [0045], when the MLA calculates a prediction score for a given training object in a leaf based on targets of all the training objects in the leaf, the MLA kind of "peeks" into the target of the given training object and targets of the neighboring training objects in the leaf (which can be thought of as "looking ahead"). That can cause the overfitting to appear comparatively earlier in the training process); and
adjust the first MLA using the penalty score so as to increase the similarity between the predicted collaborative embedding and the target collaborative embedding of the training item (paragraphs [0027], the MLA calculates a metric (i.e. a "score"), which denotes how close the current iteration of the model, which includes the given tree (or the given level of the given tree) and preceding trees, has gotten in its prediction to the correct answers (targets). The score of the model is calculated based on the predictions made and actual target values (correct answers) of the training objects used for training; wherein Examiner interprets the given level of the given tree to include the “adjusting”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of ZHIVOTVOREV to add a penalty score, as taught by GULIN above. The modification would have been obvious because one of ordinary skill would be motivated to provide a powerful and effective mechanism to improve usage of computing processing power and to deliver to an end user more relevant predictions, as suggested by GULIN ([0113]).
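For illustration only, the training loop recited in the limitations mapped above (input a training item into a first MLA, compare its predicted collaborative embedding against the second MLA's target collaborative embedding to obtain a penalty score, and adjust the first MLA to increase similarity) can be sketched as follows. The linear model, the cosine-distance penalty, and the numerical-gradient update are illustrative assumptions and are not drawn from ZHIVOTVOREV or GULIN.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "first MLA": a linear map from item features to an embedding.
W = rng.normal(size=(4, 8))

def predict_embedding(features):
    return W @ features

def penalty(predicted, target):
    # Penalty score indicative of similarity: 1 - cosine similarity.
    cos = predicted @ target / (np.linalg.norm(predicted) * np.linalg.norm(target))
    return 1.0 - cos

item = rng.normal(size=8)     # training input (the training item)
target = rng.normal(size=4)   # target collaborative embedding ("second MLA" output)

initial_penalty = penalty(predict_embedding(item), target)

lr, eps = 0.1, 1e-6
for _ in range(200):          # training iterations
    grad = np.zeros_like(W)
    for i in range(W.shape[0]):          # numerical gradient of the penalty
        for j in range(W.shape[1]):
            W[i, j] += eps
            up = penalty(predict_embedding(item), target)
            W[i, j] -= 2 * eps
            down = penalty(predict_embedding(item), target)
            W[i, j] += eps
            grad[i, j] = (up - down) / (2 * eps)
    W -= lr * grad            # adjust the first MLA using the penalty score

final_penalty = penalty(predict_embedding(item), target)
```

After the loop, the penalty (dissimilarity) between the predicted and target embeddings has decreased, i.e. their similarity has increased, matching the "adjust ... so as to increase the similarity" limitation.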
As to claim 17, which incorporates the rejection of claim 15, ZHIVOTVOREV teaches wherein the third MLA is a decision-tree based MLA (paragraph [0015] Decision tree based MLAs).
As to claim 19, which incorporates the rejection of claim 11, ZHIVOTVOREV teaches wherein the other MLA is trained on a plurality of training sets (paragraphs [0028], training, by the server, an MLA based on the plurality of training sets to predict a user-non-specific popularity score of a given item. The user-non-specific popularity score is independent from any given user; [0191], training sets for training a machine learning algorithm (MLA) implemented by the system of FIG. 1).
As to claim 20, which incorporates the rejection of claim 19, ZHIVOTVOREV teaches
wherein the training item is used in a second plurality of training sets, and wherein the plurality of training sets is larger than the second plurality of training sets (paragraphs [0181]-[0191], training sets 361, 362, 363, 364, 365, 366 and 367; wherein Examiner interprets training sets 361, 363, 364, 365, 366 and 367 to be larger than training set 362 (i.e. second plurality of training sets)).
Claims 2-3 and 12-13 are rejected under 35 U.S.C. 103 as being unpatentable over ZHIVOTVOREV et al. (US 2019/0164069 A1, hereinafter referred to as ZHIVOTVOREV), in view of GULIN (US 2019/0164084 A1, hereinafter referred to as GULIN), and further in view of Li et al. (US 2011/0055699 A1, hereinafter referred to as Li).
As to claim 2, which incorporates the rejection of claim 1, ZHIVOTVOREV and GULIN fail to explicitly teach wherein the inputting the training item comprises inputting, by the server, raw textual data of the training item.
However, Li, in combination with ZHIVOTVOREV and GULIN teaches wherein the inputting the training item comprises inputting, by the server, raw textual data of the training item (paragraph [0084], raw textual data are analyzed offline to train a topic classifier on problem topic taxonomy 638).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination system of ZHIVOTVOREV and GULIN to add raw textual data, as taught by Li above. The modification would have been obvious because one of ordinary skill would be motivated to provide a powerful and effective mechanism to analyze the raw textual data, wherein the solution mining and building engine 600 performs several steps, including text parsing, keyword tagging, and data labeling, as suggested by Li ([0084]).
As to claim 3, which incorporates the rejection of claim 2, ZHIVOTVOREV and GULIN fail to explicitly teach wherein the method further comprises determining, by the server, the raw textual data based on content of the training item.
However, Li, in combination with ZHIVOTVOREV and GULIN teaches wherein the method further comprises determining, by the server, the raw textual data based on content of the training item (paragraph [0084], raw textual data are analyzed offline to train a topic classifier on problem topic taxonomy 638).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination system of ZHIVOTVOREV and GULIN to add raw textual data, as taught by Li above. The modification would have been obvious because one of ordinary skill would be motivated to provide a powerful and effective mechanism to analyze the raw textual data, wherein the solution mining and building engine 600 performs several steps, including text parsing, keyword tagging, and data labeling, as suggested by Li ([0084]).
As to claim 12, which incorporates the rejection of claim 11, ZHIVOTVOREV and GULIN fail to explicitly teach wherein the input the training item comprises the server configured to input raw textual data of the training item.
However, Li, in combination with ZHIVOTVOREV and GULIN, teaches wherein the input the training item comprises the server configured to input raw textual data of the training item (paragraph [0084], raw textual data are analyzed offline to train a topic classifier on problem topic taxonomy 638).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination system of ZHIVOTVOREV and GULIN to add raw textual data, as taught by Li above. The modification would have been obvious because one of ordinary skill would be motivated to provide a powerful and effective mechanism to analyze the raw textual data, wherein the solution mining and building engine 600 performs several steps, including text parsing, keyword tagging, and data labeling, as suggested by Li ([0084]).
As to claim 13, which incorporates the rejection of claim 12, ZHIVOTVOREV and GULIN fail to explicitly teach the server being configured to determine the raw textual data based on content of the training item.
However, Li, in combination with ZHIVOTVOREV and GULIN, teaches the server being configured to determine the raw textual data based on content of the training item (paragraph [0084], raw textual data are analyzed offline to train a topic classifier on problem topic taxonomy 638).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination system of ZHIVOTVOREV and GULIN to add raw textual data, as taught by Li above. The modification would have been obvious because one of ordinary skill would be motivated to provide a powerful and effective mechanism to analyze the raw textual data, wherein the solution mining and building engine 600 performs several steps, including text parsing, keyword tagging, and data labeling, as suggested by Li ([0084]).
Claims 4-5, 8, 14-15 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over ZHIVOTVOREV et al. (US 2019/0164069 A1, hereinafter referred to as ZHIVOTVOREV), in view of GULIN (US 2019/0164084 A1, hereinafter referred to as GULIN), and further in view of LIFAR et al. (US 2018/0075137 A1, hereinafter referred to as LIFAR).
As to claim 4, which incorporates the rejection of claim 1, ZHIVOTVOREV and GULIN fail to explicitly teach wherein the second MLA is a Singular Value Decomposition (SVD) based MLA.
However, LIFAR, in combination with ZHIVOTVOREV and GULIN, teaches wherein the second MLA is a Singular Value Decomposition (SVD) based MLA (paragraphs [0013]-[0014] and [0040], Singular Value Decomposition (SVD)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination system of ZHIVOTVOREV and GULIN to add a Singular Value Decomposition (SVD), as taught by LIFAR above. The modification would have been obvious because one of ordinary skill would be motivated to apply the SVD algorithm so that a typical matrix of relevancy scores (that are based on user/digital-item interactions) can be split into a user matrix and a digital item matrix, as suggested by LIFAR ([0014]).
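For illustration only, the motivation quoted from LIFAR ([0014]) — that SVD splits a matrix of relevancy scores into a user matrix and a digital item matrix whose product approximates the original — can be sketched as follows. The toy matrix and the chosen rank are illustrative assumptions, not data from any cited reference.

```python
import numpy as np

# Hypothetical matrix of relevancy scores (3 users x 4 digital items),
# based on user/digital-item interactions; values chosen for illustration.
R = np.array([[5.0, 3.0, 0.0, 1.0],
              [4.0, 0.0, 0.0, 1.0],
              [1.0, 1.0, 0.0, 5.0]])

# SVD factors the relevancy matrix: R = U @ diag(s) @ Vt.
U, s, Vt = np.linalg.svd(R, full_matrices=False)

k = 2                              # rank of the low-rank approximation
user_matrix = U[:, :k] * s[:k]     # one row per user (user matrix)
item_matrix = Vt[:k, :]            # one column per item (digital item matrix)

# The product of the two factor matrices approximates the relevancy scores.
R_approx = user_matrix @ item_matrix
```

Each row of `user_matrix` and each column of `item_matrix` can be read as a collaborative embedding of a user or a digital item, respectively.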
As to claim 5, which incorporates the rejection of claim 1, ZHIVOTVOREV and GULIN fail to explicitly teach wherein the method further comprises:
acquiring, by the server, an indication of a request for content recommendation from a given user of the content recommendation system;
determining, by the server, a plurality of potential recommendation items to be provided to the given user, the plurality of potential recommendation items including:
a set of items associated with previous user interactions between the users and the respective items from the set of items, and
at least one other item, the at least one other item including the digital item;
acquiring, by the server, a collaborative embedding for a given item from the set of items,
the collaborative embedding having been determined by the second MLA based on the previous user interactions between the users and the given item from the set of items,
the previous user interactions between the users and the given item having been sufficient for determining the collaborative embedding for the given item by the second MLA;
acquiring, by the server, a predicted collaborative embedding for the digital item;
acquiring, by the server, a user collaborative embedding for the given user, the user collaborative embedding having been determined by the second MLA based on the previous user interactions between the user and the items from the set of items;
acquiring, by the server, another user embedding for the given user, the other user embedding having been determined by a third MLA based on the predicted collaborative embedding for the digital item and user interactions between the given user and items of the content recommendation system;
generating, by the server, a parameter for the digital item as a product of (i) the predicted collaborative embedding of the digital item and (ii) the other user embedding, the parameter being an input into a fourth MLA configured to rank the plurality of potential recommendation items; and generating, by the server, a second other parameter for the given item from the set of items as a product of (i) the respective collaborative embedding, and (ii) the user collaborative embedding, the second other parameter being an input into the fourth MLA configured to rank the plurality of potential recommendations items.
However, LIFAR, in combination with ZHIVOTVOREV and GULIN teaches wherein the method further comprises:
acquiring, by the server, an indication of a request for content recommendation from a given user of the content recommendation system (paragraphs [0030], acquiring an indication of a plurality of user-item interactions, each user-item interaction being associated with a user and a digital item; based on the plurality of user-item interactions, generating a matrix of user-item relevance scores; [0042]);
determining, by the server, a plurality of potential recommendation items to be provided to the given user, the plurality of potential recommendation items including:
a set of items associated with previous user interactions between the users and the respective items from the set of items, and
at least one other item, the at least one other item including the digital item (paragraph [0010] the trained machine learning algorithm of the recommendation system selects a number of potential recommended items from a number of potential sources for the recommended items (for a particular user) [0024]-[0025]);
acquiring, by the server, a collaborative embedding for a given item from the set of items,
the collaborative embedding having been determined by the second MLA based on the previous user interactions between the users and the given item from the set of items,
the previous user interactions between the users and the given item having been sufficient for determining the collaborative embedding for the given item by the second MLA (paragraphs [0024]-[0025], a server (such as a recommendation system server) first acquires logs of users' interactions with a plurality of digital items. In some embodiments, the plurality of digital items can be text based, such as articles, books, other texts, and the like. The logs contain indications of user interactions with the plurality of digital items (or to be precise, an indication of certain users with the respective interacted digital items, as well as the nature of their interaction));
acquiring, by the server, a predicted collaborative embedding for the digital item (paragraphs [0024]-[0027], a server (such as a recommendation system server) first acquires logs of users' interactions with a plurality of digital items. In some embodiments, the plurality of digital items can be text based, such as articles, books, other texts, and the like. The logs contain indications of user interactions with the plurality of digital items (or to be precise, an indication of certain users with the respective interacted digital items, as well as the nature of their interaction));
acquiring, by the server, a user collaborative embedding for the given user, the user collaborative embedding having been determined by the second MLA based on the previous user interactions between the user and the items from the set of items (paragraphs [0024]-[0027], a server (such as a recommendation system server) first acquires logs of users' interactions with a plurality of digital items. In some embodiments, the plurality of digital items can be text based, such as articles, books, other texts, and the like. The logs contain indications of user interactions with the plurality of digital items (or to be precise, an indication of certain users with the respective interacted digital items, as well as the nature of their interaction; [0030], acquiring an indication of a plurality of user-item interactions, each user-item interaction being associated with a user and a digital item; based on the plurality of user-item interactions, generating a matrix of user-item relevance scores; [0042])
acquiring, by the server, another user embedding for the given user, the other user embedding having been determined by a third MLA based on the predicted collaborative embedding for the digital item and user interactions between the given user and items of the content recommendation system (paragraphs [0024]-[0027], a server (such as a recommendation system server) first acquires logs of users' interactions with a plurality of digital items. In some embodiments, the plurality of digital items can be text based, such as articles, books, other texts, and the like. The logs contain indications of user interactions with the plurality of digital items (or to be precise, an indication of certain users with the respective interacted digital items, as well as the nature of their interaction; [0030], acquiring an indication of a plurality of user-item interactions, each user-item interaction being associated with a user and a digital item; based on the plurality of user-item interactions, generating a matrix of user-item relevance scores; [0042])
generating, by the server, a parameter for the digital item as a product of (i) the predicted collaborative embedding of the digital item and (ii) the other user embedding, the parameter being an input into a fourth MLA configured to rank the plurality of potential recommendation items (paragraphs [0019] and [0024]-[0027], the product of the user feature and item feature matrixes approximating the user rating matrix); and
generating, by the server, a second other parameter for the given item from the set of items as a product of (i) the respective collaborative embedding, and (ii) the user collaborative embedding, the second other parameter being an input into the fourth MLA configured to rank the plurality of potential recommendations items (paragraphs [0004], ranking algorithm and [0010], second server).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination system of ZHIVOTVOREV and GULIN to add a Singular Value Decomposition (SVD), as taught by LIFAR above. The modification would have been obvious because one of ordinary skill would be motivated to apply the SVD algorithm so that a typical matrix of relevancy scores (that are based on user/digital-item interactions) can be split into a user matrix and a digital item matrix, as suggested by LIFAR ([0014]).
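For illustration only, the "parameter" limitations mapped above — a parameter generated as a product of an item's collaborative embedding and a user embedding, the parameter then serving as a ranking input — can be sketched as follows. The embeddings, item names, and the dot-product reading of "product" are illustrative assumptions, not details drawn from the cited references.

```python
import numpy as np

# Hypothetical user embedding and item collaborative embeddings.
user_embedding = np.array([0.2, 0.9, -0.1])
item_embeddings = {
    "item_a": np.array([0.1, 0.8, 0.0]),
    "item_b": np.array([-0.5, 0.1, 0.9]),
}

# Parameter for each item: the product (here, dot product) of the item's
# collaborative embedding and the user embedding.
parameters = {name: float(emb @ user_embedding)
              for name, emb in item_embeddings.items()}

# The parameters serve as inputs for ranking the potential recommendation items.
ranked = sorted(parameters, key=parameters.get, reverse=True)
```

In a fuller system, such parameters would be one of several inputs into a separate ranking MLA rather than the final ordering themselves.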
As to claim 8, which incorporates the rejection of claim 1, ZHIVOTVOREV and GULIN fail to explicitly teach wherein the method further comprises:
acquiring, by the server, another predicted collaborative embedding for a given item for the set of items, the other predicted collaborative embedding having been generated by the MLA based on content data associated with the given item;
generating, by the server, another parameter for the given item as a product of (i) the other predicted collaborative embedding of the given item and (ii) the other user embedding, the other parameter being an input into the third MLA configured to rank the plurality of potential recommendation items.
However, LIFAR, in combination with ZHIVOTVOREV and GULIN teaches wherein the method further comprises:
acquiring, by the server, another predicted collaborative embedding for a given item for the set of items, the other predicted collaborative embedding having been generated by the MLA based on content data associated with the given item (paragraphs [0024]-[0027], a server (such as a recommendation system server) first acquires logs of users' interactions with a plurality of digital items. In some embodiments, the plurality of digital items can be text based, such as articles, books, other texts, and the like. The logs contain indications of user interactions with the plurality of digital items (or to be precise, an indication of certain users with the respective interacted digital items, as well as the nature of their interaction; [0030], acquiring an indication of a plurality of user-item interactions, each user-item interaction being associated with a user and a digital item; based on the plurality of user-item interactions, generating a matrix of user-item relevance scores; [0042]);
generating, by the server, another parameter for the given item as a product of (i) the other predicted collaborative embedding of the given item and (ii) the other user embedding, the other parameter being an input into the third MLA configured to rank the plurality of potential recommendation items (paragraphs [0004], ranking algorithm, [0019] and [0024]-[0027], the product of the user feature and item feature matrixes approximating the user rating matrix).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination system of ZHIVOTVOREV and GULIN to add a Singular Value Decomposition (SVD), as taught by LIFAR above. The modification would have been obvious because one of ordinary skill would be motivated to apply the SVD algorithm so that a typical matrix of relevancy scores (that are based on user/digital-item interactions) can be split into a user matrix and a digital item matrix, as suggested by LIFAR ([0014]).
As to claim 14, which incorporates the rejection of claim 11, ZHIVOTVOREV and GULIN fail to explicitly teach wherein the second MLA is a Singular Value Decomposition (SVD) based MLA.
However, LIFAR, in combination with ZHIVOTVOREV and GULIN teaches wherein the second MLA is a Singular Value Decomposition (SVD) based MLA (paragraphs [0013]-[0014] and [0040], Singular Value Decomposition (SVD)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination system of ZHIVOTVOREV and GULIN to add a Singular Value Decomposition (SVD), as taught by LIFAR above. The modification would have been obvious because one of ordinary skill would be motivated to apply the SVD algorithm so that a typical matrix of relevancy scores (that are based on user/digital-item interactions) can be split into a user matrix and a digital item matrix, as suggested by LIFAR ([0014]).
As to claim 15, which incorporates the rejection of claim 11, ZHIVOTVOREV and GULIN fail to explicitly teach the server being further configured to:
acquire an indication of a request for content recommendation from a given user of the content recommendation system;
determine a plurality of potential recommendation items to be provided to the given user, the plurality of potential recommendation items including:
(iii) a set of items associated with previous user interactions between the users and the respective items from the set of items, and
(iv) at least one other item, the at least one other item including the digital item;
acquire a collaborative embedding for a given item from the set of items,
the collaborative embedding having been determined by the second MLA based on the previous user interactions between the users and the given item from the set of items,
the previous user interactions between the users and the given item having been sufficient for determining the collaborative embedding for the given item by the second MLA; acquire a predicted collaborative embedding for the digital item;
acquire a user collaborative embedding for the given user, the user collaborative embedding having been determined by the second MLA based on the previous user interactions between the user and the items from the set of items;
acquire another user embedding for the given user, the other user embedding having been determined by a third MLA based on the predicted collaborative embedding for the digital item and user interactions between the given user and items of the content recommendation system;
generate a parameter for the digital item as a product of (i) the predicted collaborative embedding of the digital item and (ii) the other user embedding, the parameter being an input into a fourth MLA configured to rank the plurality of potential recommendation items; and
generate another parameter for the given item from the set of items as a product of (i) the respective collaborative embedding, and (ii) the user collaborative embedding, the other parameter being an input into the fourth MLA configured to rank the plurality of potential recommendations items.
However, LIFAR, in combination with ZHIVOTVOREV and GULIN teaches:
the server being further configured to:
acquire an indication of a request for content recommendation from a given user of the content recommendation system (paragraphs [0030], acquiring an indication of a plurality of user-item interactions, each user-item interaction being associated with a user and a digital item; based on the plurality of user-item interactions, generating a matrix of user-item relevance scores; [0042]);
determine a plurality of potential recommendation items to be provided to the given user, the plurality of potential recommendation items including:
(iii) a set of items associated with previous user interactions between the users and the respective items from the set of items, and
(iv) at least one other item, the at least one other item including the digital item;
acquire a collaborative embedding for a given item from the set of items (paragraph [0010], the trained machine learning algorithm of the recommendation system selects a number of potential recommended items from a number of potential sources for the recommended items (for a particular user); [0024]-[0025]);
the collaborative embedding having been determined by the second MLA based on the previous user interactions between the users and the given item from the set of items, the previous user interactions between the users and the given item having been sufficient for determining the collaborative embedding for the given item by the second MLA (paragraphs [0024]-[0025], a server (such as a recommendation system server) first acquires logs of users' interactions with a plurality of digital items. In some embodiments, the plurality of digital items can be text based, such as articles, books, other texts, and the like. The logs contain indications of user interactions with the plurality of digital items (or to be precise, an indication of certain users with the respective interacted digital items, as well as the nature of their interaction));
acquire a predicted collaborative embedding for the digital item (XXXXXX);
acquire a user collaborative embedding for the given user, the user collaborative embedding having been determined by the second MLA based on the previous user interactions between the user and the items from the set of items (paragraphs [0024]-[0027], a server (such as a recommendation system server) first acquires logs of users' interactions with a plurality of digital items. In some embodiments, the plurality of digital items can be text based, such as articles, books, other texts, and the like. The logs contain indications of user interactions with the plurality of digital items (or to be precise, an indication of certain users with the respective interacted digital items, as well as the nature of their interaction; [0030], acquiring an indication of a plurality of user-item interactions, each user-item interaction being associated with a user and a digital item; based on the plurality of user-item interactions, generating a matrix of user-item relevance scores; [0042]);
acquire another user embedding for the given user, the other user embedding having been determined by a third MLA based on the predicted collaborative embedding for the digital item and user interactions between the given user and items of the recommendation system (paragraphs [0024]-[0027], a server (such as a recommendation system server) first acquires logs of users' interactions with a plurality of digital items. In some embodiments, the plurality of digital items can be text based, such as articles, books, other texts, and the like. The logs contain indications of user interactions with the plurality of digital items (or to be precise, an indication of certain users with the respective interacted digital items, as well as the nature of their interaction); [0030], acquiring an indication of a plurality of user-item interactions, each user-item interaction being associated with a user and a digital item; based on the plurality of user-item interactions, generating a matrix of user-item relevance scores; [0042]);
generate a parameter for the digital item as a product of (i) the predicted collaborative embedding of the digital item and (ii) the other user embedding, the parameter being an input into a fourth MLA configured to rank the plurality of potential recommendation items (paragraphs [0019] and [0024]-[0027], the product of the user feature and item feature matrixes approximating the user rating matrix); and
generate another parameter for the given item from the set of items as a product of (i) the respective collaborative embedding, and (ii) the user collaborative embedding, the other parameter being an input into the third MLA configured to rank the plurality of potential recommendation items (paragraphs [0004], ranking algorithm and [0010], second server).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination system of ZHIVOTVOREV and GULIN to add a Singular Value Decomposition (SVD), as taught by LIFAR above. The modification would have been obvious because one of ordinary skill would be motivated to apply the SVD algorithm so that a typical matrix of relevancy scores (based on user digital-item interactions) can be split into a user matrix and a digital item matrix, as suggested by LIFAR ([0014]).
As to claim 18, which incorporates the rejection of claim 15, ZHIVOTVOREV and GULIN fail to explicitly teach wherein the server is further configured to:
acquire another predicted collaborative embedding for a given item for the set of items, the other predicted collaborative embedding having been generated by the first MLA based on content data associated with the given item; and
generate a second other parameter for the given item as a product of (i) the other predicted collaborative embedding of the given item and (ii) the other user embedding, the second other parameter being an input into the third MLA configured to rank the plurality of potential recommendation items.
However, LIFAR, in combination with ZHIVOTVOREV and GULIN teaches wherein the server is further configured to:
acquire another predicted collaborative embedding for a given item for the set of items, the other predicted collaborative embedding having been generated by the first MLA based on content data associated with the given item (paragraphs [0024]-[0027], a server (such as a recommendation system server) first acquires logs of users' interactions with a plurality of digital items. In some embodiments, the plurality of digital items can be text based, such as articles, books, other texts, and the like. The logs contain indications of user interactions with the plurality of digital items (or to be precise, an indication of certain users with the respective interacted digital items, as well as the nature of their interaction)); and
generate a second other parameter for the given item as a product of (i) the other predicted collaborative embedding of the given item and (ii) the other user embedding, the second other parameter being an input into the fourth MLA configured to rank the plurality of potential recommendation items (paragraphs [0004], ranking algorithm; [0019] and [0024]-[0027], the product of the user feature and item feature matrixes approximating the user rating matrix).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination system of ZHIVOTVOREV and GULIN to add a Singular Value Decomposition (SVD), as taught by LIFAR above. The modification would have been obvious because one of ordinary skill would be motivated to apply the SVD algorithm so that a typical matrix of relevancy scores (based on user digital-item interactions) can be split into a user matrix and a digital item matrix, as suggested by LIFAR ([0014]).
Claims 6 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over ZHIVOTVOREV et al. (US 2019/0164069 A1, hereinafter referred to as ZHIVOTVOREV), in view of GULIN (US 2019/0164084 A1, hereinafter referred to as GULIN), and further in view of TIKHONOV (US 2018/0011937 A1, hereinafter referred to as TIKHONOV).
As to claim 6, which incorporates the rejection of claim 5, ZHIVOTVOREV and GULIN fail to explicitly teach wherein the collaborative embedding has been determined by the second MLA in an off-line mode, prior to receipt of the indication of the request for content recommendation.
However, TIKHONOV, in combination with ZHIVOTVOREV and GULIN, teaches wherein the collaborative embedding has been determined by the second MLA in an off-line mode, prior to receipt of the indication of the request for content recommendation (paragraphs [0011]-[0012] and [0417], “offline”-mode).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination system of ZHIVOTVOREV and GULIN to add an off-line mode, as taught by TIKHONOV above. The modification would have been obvious because one of ordinary skill would be motivated to use an off-line mode, since a technical effect arises from an ability to pre-qualify network resources as sources of recommended content items when entering a new territory, as suggested by TIKHONOV ([0417]).
As to claim 16, which incorporates the rejection of claim 15, ZHIVOTVOREV and GULIN fail to explicitly teach wherein the collaborative embedding has been determined by the second MLA in an off-line mode, prior to receipt of the indication of the request for content recommendation.
However, TIKHONOV, in combination with ZHIVOTVOREV and GULIN, teaches wherein the collaborative embedding has been determined by the second MLA in an off-line mode, prior to receipt of the indication of the request for content recommendation (paragraphs [0011]-[0012] and [0417], “offline”-mode).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination system of ZHIVOTVOREV and GULIN to add an off-line mode, as taught by TIKHONOV above. The modification would have been obvious because one of ordinary skill would be motivated to use an off-line mode, since a technical effect arises from an ability to pre-qualify network resources as sources of recommended content items when entering a new territory, as suggested by TIKHONOV ([0417]).
Response to Applicant’s arguments
Applicant's arguments filed on 09/02/2025 with respect to claims 1-20 have been considered but are not persuasive as to the 101 rejections.
Claim Rejections - 35 USC § 101
Argument (pages 1-2)
Applicant appears to assert that the letter reminds examiners that the limitation "training the neural network in a first stage using the first training set" does not recite a judicial exception because it "does not set forth or describe any mathematical relationships,
calculations, formulas, or equations using words or mathematical symbols."
Like the example in the letter from Deputy Commissioner Kim, the claims of the present application do not set forth any mathematical relationships, calculations, formulas, or equations using words or mathematical symbols, and are also patent-eligible.
Additionally, even if the claims do recite an abstract idea, which Applicant does not
concede, any abstract idea recited within the claims is integrated into a practical application. As explained in paragraph [0011] of the filed specification, typically "user interaction data is somewhat "sparse" to the extent where it is difficult to properly estimate the relevance of some digital content to given users since these users have not interacted with some digital content and the recommendation system does not have much information to draw from in order to determine whether some digital content would be appreciated by given users if it is recommended thereto."
In order to resolve this issue, as described in paragraph [0013] of the filed specification, the claims describe a method that uses "Transfer Leaming (TL) techniques in order to perform such estimation of the item-specific embedding of collaborative type, when collaborative type data for a given item is too sparse and/or insufficient for generating such embedding via matrix factorization models."
Using the claimed features, predictions can be made as to whether a user would appreciate a digital content item, even in instances where there is not sufficient information about interactions with the digital content item to make a prediction using a traditional method. This is a practical application of any abstract idea.
For at least these reasons, all claims are patent-eligible. Withdrawal of the rejection is
respectfully requested.
Examiner response:
Examiner respectfully disagrees. The analysis of the training in Example 39 is not analogous to the analysis of the training steps in the instant claims. This is because, in Example 39, the training limitations were not required to be analyzed under Step 2A Prong 2 or Step 2B, as it was determined in Step 2A Prong 1 that the claim did not recite any judicial exception. In the instant claims, on the other hand, as analyzed in the rejection, the claims do recite a judicial exception, and the training limitations have been evaluated in Step 2A Prong 2 and Step 2B.
MPEP 2106.04(a)(2)(III)(C) “A Claim That Requires a Computer May Still Recite a Mental Process”
Examiner is interpreting the limitations as abstract ideas implemented on a generic computer. (Step 2A Prong 1).
The claim recites the limitation of “generating,” which is an observation or evaluation based on “the target collaborative embedding; and the training set comprising the target collaborative embedding and the training item, the training item being a training input for a given training iteration and the target collaborative embedding being a training target for the given training iteration.” This type of observation or evaluation is an act that can be practically performed in the human mind, similar to the mental thought processes that occur when a person creates a feature vector for each item. Such mental observations or evaluations fall within the “mental processes” grouping of abstract ideas set forth in the 2019 PEG. 2019 PEG Section I, 84 Fed. Reg. at 52.
MPEP 2106.04(a)(2)(III)(C) “A Claim That Requires a Computer May Still Recite a Mental Process”
The claim recites the limitation of “determining,” which is an observation or evaluation based on “a penalty score for the given training iteration by comparing the predicted collaborative embedding output by the first MLA and the target collaborative embedding output by the second MLA, the penalty score being indicative of a similarity between the predicted collaborative embedding and the target collaborative embedding of the training item.” This type of observation or evaluation is an act that can be practically performed in the human mind, similar to the mental thought processes that occur when a person calculates a score by comparing a prediction to a target. Such mental observations or evaluations fall within the “mental processes” grouping of abstract ideas set forth in the 2019 PEG. 2019 PEG Section I, 84 Fed. Reg. at 52.
MPEP 2106.04(a)(2)(III)(C) “A Claim That Requires a Computer May Still Recite a Mental Process”
The newly added claim features do not improve the functionality of a computer or any technology.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements to perform the claim steps amount to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.
The additional elements of claims 1 and 11 do not amount to significantly more than the judicial exception and do not improve any computer functionality or technology. Therefore, the additional elements do no more than generally link the abstract idea to a generic computing environment.
For at least these reasons, Examiner respectfully maintains the rejections of all claims.
No further arguments are presented for the dependent claims. Examiner respectfully maintains the rejection under 35 U.S.C. § 101 of amended independent claims 1 and 11 and their dependent claims.
Claim Rejections - 35 USC § 103
Argument (pages 2-4)
Applicant appears to assert that Zhivotovorev describes a system with "a first machine learned algorithm (MLA) 116" and "a second MLA 118". Zhivotovorev, [0082]. But in the system of Zhivotovorev, neither MLA is trained by comparing the output of the first MLA and the second MLA. The first MLA is trained by comparing a predicted "user-non-specific popularity score of a given item" to actual "user interactions with these items on their respective landing pages." Zhivotovorev, [0235]. The first MLA is not trained by comparing the output of the first MLA to the output of another MLA. The training of the second MLA of Zhivotovorev is described in U.S. Patent Application Publication No. 2018/0075137 A1, and also does not involve comparing the output of an MLA to the output of another MLA.
Gulin also does not describe training an MLA by comparing the output of one MLA to the output of another MLA. Gulin describes training objects that include labels "indicative of how relevant the document is to the search query." Gulin, [0222]. The labels are not the output of an MLA.
For at least these reasons, claims 1 and 11 are patentable over Gulin and Zhivotovorev.
Claims 7, 9, and 10 depend, either directly or indirectly, from claim 1. Claims 17, 19, and 20 depend, either directly or indirectly, from claim 11. By virtue of their dependence on an allowable independent claim and further in view of the various additional features recited therein, claims 7, 9, 10, 17, 19, and 20 are also patentable over Gulin and Zhivotovorev. Withdrawal of the rejection is respectfully requested.
Claims 2-3 and 12-13 stand rejected under 35 U.S.C. § 103 as being unpatentable over Zhivotovorev, Gulin, and U.S. Patent Application Publication No. 2011/0055699 ("Li"). Claims 2 and 3 depend, either directly or indirectly, from independent claim 1. Claims 12 and 13 depend, either directly or indirectly, from independent claim 11. Li is only cited to show features in dependent claims 2, 3, 12, and 13, and does not teach a modification to Zhivotovorev and Gulin that would overcome the deficiencies cited above. By virtue of their dependence on an allowable independent claim, and further in view of the various additional features recited therein, claims 2, 3, 12, and 13 are patentable over Zhivotovorev, Gulin, and Li. Withdrawal of the rejection is respectfully requested.
Claims 4-5, 8, 14-15 and 18 stand rejected under 35 U.S.C. § 103 as being unpatentable over Zhivotovorev, Gulin, and U.S. Patent Application Publication No. 2018/0075137 ("Lifar").
Claims 4, 5, and 8 depend, either directly or indirectly, from independent claim 1. Claims 14, 15, and 18 depend, either directly or indirectly, from independent claim 11. Lifar is only cited to show features in dependent claims 4, 5, 8, 14, 15, and 18, and does not teach a modification to Zhivotovorev and Gulin that would overcome the deficiencies cited above. By virtue of their dependence on an allowable independent claim, and further in view of the various additional features recited therein, claims 4, 5, 8, 14, 15, and 18 are patentable over Zhivotovorev, Gulin, and Lifar. Withdrawal of the rejection is respectfully requested.
Claims 6 and 16 stand rejected under 35 U.S.C. § 103 as being unpatentable over
Zhivotovorev, Gulin, and U.S. Patent Application Publication No. 2018/0011937 ("Tikhonov").
Claim 6 depends, indirectly, from independent claim 1. Claim 16 depends, indirectly, from independent claim 11. Tikhonov is only cited to show features in dependent claims 6 and 16, and does not teach a modification to Zhivotovorev and Gulin that would overcome the deficiencies cited above. By virtue of their dependence on an allowable independent claim, and further in view of the various additional features recited therein, claims 6 and 16 are patentable over Zhivotovorev, Gulin, and Tikhonov. Withdrawal of the rejection is respectfully requested.
Examiner response:
Examiner respectfully disagrees. Gulin teaches that “the MLA calculates a metric (i.e. a ‘score’ (interpreted by Examiner as a ‘penalty score’)), which denotes how close (interpreted by Examiner to include ‘comparing’) the current iteration of the model, which includes the given tree (or the given level of the given tree) (interpreted by Examiner to include a first MLA) and preceding trees (interpreted by Examiner to include a second MLA), has gotten in its prediction to the correct answers (targets). The score of the model is calculated based on the predictions made and actual target values (correct answers) of the training objects used for training” ([0027]).
No further arguments are presented for the dependent claims. Examiner respectfully maintains the rejection under 35 U.S.C. § 103 of amended independent claims 1 and 11 and their dependent claims.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ABABACAR SECK whose telephone number is (571) 270-7146. The examiner can normally be reached Monday-Friday 8:00 A.M.-6:00 P.M.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Viker Lamardo, can be reached at 571-270-5871. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ABABACAR SECK/Examiner, Art Unit 2147
/VIKER A LAMARDO/Supervisory Patent Examiner, Art Unit 2147