DETAILED ACTION
This Office Action is in response to communications filed on December 23, 2025 for Application No. 18/020,910, in which claims 1-3, 5-12, 27-28, 30-31, and 33-37 are presented for examination. The amendments filed on December 23, 2025 have been entered: claims 1, 3, 5-9, 11, 27, 28, and 31 are amended; claims 35-37 are newly added; and claims 4, 13, and 32 are canceled. Claims 14-26 and 29 were previously canceled.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-3, 5-12, 27-28, 30-31, and 33-37 are rejected under 35 U.S.C. 101 because the claimed invention is directed to abstract ideas without significantly more.
Regarding Claim 1:
Step 1: Claim 1 is a process claim. Therefore, claims 1-3 and 5-12 are directed to a statutory category of eligible subject matter.
Step 2A Prong 1: If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the "Mental Processes" grouping of abstract ideas. Here, steps of the claimed method are mental processes. Specifically, the claim recites
“A method for . . . the method comprising: determining an implicit feature based on the first user data, the first resource data, the second user data and the second resource data” (mental process – amounts to exercising judgement to evaluate observed data to form an opinion on an implicit feature, which may be aided by pen and paper);
“wherein the determining the implicit feature comprises, in a case of determining that the target domain and the source domain have an overlapping user according to the first user data and the second user data: extracting a first implicit user feature from the first user data using a collaborative filtering manner” (mental process – amounts to exercising judgement to evaluate observed data to determine whether there are overlapping users, and in the event that there are, forming an opinion on an implicit feature using a filtering mechanism, which may be aided by pen and paper);
“extracting a second implicit user feature from the second user data of the overlapping user using the collaborative filtering manner” (mental process – amounts to observing data and forming an opinion on an implicit feature using a filtering mechanism, which may be aided by pen and paper);
“concatenating the first implicit user feature and the second implicit user feature in a first concatenating manner to obtain a concatenating user feature” (mental process – amounts to forming an opinion that two observed datapoints should be considered a continuous whole, which may be aided by pen and paper);
“determining the implicit feature based on the concatenating user feature” (mental process – amounts to forming an opinion based on a determination about observed data); and
“recommend a resource to a user of the target domain” (mental process – amounts to exercising judgement to evaluate observed data to form an opinion on a recommendation).
Step 2A Prong 2: This judicial exception is not integrated into a practical application.
The claim recites the additional elements:
“acquiring first user data and first resource data of a target domain, and acquiring second user data and second resource data of a source domain” (acquiring data amounts to insignificant extra-solution activity that is incidental to the claimed subject matter);
“implemented by an electronic device and . . . training a ranking model . . . and training the ranking model based on the implicit feature, wherein the ranking model is configured to” (amounts to mere instructions to apply the judicial exception on generic and unspecialized computer components, which do not impose any meaningful limits on practicing the abstract idea); and
“wherein the implicit feature comprises a feature vector without specific physical meaning” (amounts to merely reciting a particular technological environment or field of use, which does not impose any meaningful limits on practicing the abstract idea).
Step 2B: The claim does not include additional elements considered individually and in combination that are sufficient to amount to significantly more than the judicial exception.
The claim recites the additional elements:
“acquiring first user data and first resource data of a target domain, and acquiring second user data and second resource data of a source domain” (acquiring data, such as through a network, see buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014), or by accessing information in memory, see Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93, is well-understood, routine, and conventional; it is recited here with a high level of generality and remains insignificant extra-solution activity even upon reconsideration);
“implemented by an electronic device and . . . training a ranking model . . . and training the ranking model based on the implicit feature, wherein the ranking model is configured to” (mere instructions to apply the exception using generic computer components does not provide an inventive concept); and
“wherein the implicit feature comprises a feature vector without specific physical meaning” (merely reciting a particular technological environment or field of use does not provide an inventive concept).
For the reasons above, Claim 1 is rejected as being directed to an abstract idea without significantly more. This rejection applies equally to dependent claims 2-3 and 5-12. The additional limitations of the dependent claims are addressed below.
Regarding Claim 2:
Step 2A Prong 1: See the rejection of Claim 1 above, which Claim 2 depends on. Here, the claim recites additional elements that are mental processes. Specifically, the claim recites
“determining an explicit feature based on the first user data and the first resource data” (mental process – amounts to exercising judgement to evaluate observed data to form an opinion on an explicit feature, which may be aided by pen and paper).
Step 2A Prong 2: This judicial exception is not integrated into a practical application.
The claim recites the additional elements:
“and training the ranking model based on the explicit feature and the implicit feature” (amounts to mere instructions to apply the judicial exception on generic and unspecialized computer components, which do not impose any meaningful limits on practicing the abstract idea).
Step 2B: The claim does not include additional elements considered individually and in combination that are sufficient to amount to significantly more than the judicial exception.
The claim recites the additional elements:
“and training the ranking model based on the explicit feature and the implicit feature” (mere instructions to apply the exception using generic computer components does not provide an inventive concept).
Accordingly, Claim 2 is rejected as being directed to an abstract idea without significantly more.
Regarding Claim 3:
Step 2A Prong 1: See the rejection of Claim 2 above, which Claim 3 depends on. Here, the claim recites additional elements that are mental processes. Specifically, the claim recites
“wherein the determining an explicit feature based on the first user data and the first resource data comprises: acquiring a first explicit user feature from the first user data of each of a plurality of target domains using a same feature encoding manner, and acquiring a first explicit resource feature from the first resource data of each of the plurality of target domains using a same feature encoding manner, wherein formats of the first explicit user features of the plurality of target domains are identical to each other, and formats of the first explicit resource features of the plurality of target domains are identical to each other” (mental process – amounts to exercising judgement to evaluate observed datasets to form opinions on features, in the form of encodings of specific and consistent formats, which may be aided by pen and paper) and
“concatenating, for each of the plurality of target domains, the first explicit user feature and the first explicit resource feature in a second concatenating manner, to obtain the explicit feature” (mental process – amounts to forming an opinion that two observed datapoints should be considered a continuous whole, which may be aided by pen and paper).
Step 2A Prong 2 & Step 2B: There are no elements left for consideration of implementation within a practical application or for consideration of significantly more.
Accordingly, Claim 3 is rejected as being directed to an abstract idea without significantly more.
Regarding Claim 5:
Step 2A Prong 1: See the rejection of Claim 1 above, which Claim 5 depends on. Here, the claim recites additional elements that are mental processes. Specifically, the claim recites
“wherein the concatenating the first implicit user feature and the second implicit user feature comprises: determining a weight corresponding to the second implicit user feature based on a number of the second user data of the overlapping user and a number of the first user data” (mental process – amounts to exercising judgement to evaluate observed data to form an opinion on a weight variable, which may be aided by pen and paper) and
“obtaining the concatenating user feature based on the first implicit user feature, the second implicit user feature and the weight” (mental process – amounts to forming an opinion that two observed datapoints should be considered a continuous whole, with reference to a known variable, which may be aided by pen and paper).
Step 2A Prong 2 & Step 2B: There are no elements left for consideration of implementation within a practical application or for consideration of significantly more.
Accordingly, Claim 5 is rejected as being directed to an abstract idea without significantly more.
Regarding Claim 6:
Step 2A Prong 1: See the rejection of Claim 1 above, which Claim 6 depends on. Here, the claim recites additional elements that are mental processes. Specifically, the claim recites
“wherein the determining an implicit feature comprises, in a case of determining that the target domain and the source domain have an overlapping resource according to the first resource data and the second resource data: extracting a first implicit resource feature from the first resource data using a collaborative filtering manner” (mental process – amounts to exercising judgement to evaluate observed data to determine whether there are overlapping resources, and in the event that there are, forming an opinion on an implicit feature using a filtering mechanism, which may be aided by pen and paper);
“extracting a second implicit resource feature from the second resource data of the overlapping resource using the collaborative filtering manner” (mental process – amounts to observing data and forming an opinion on an implicit feature using a filtering mechanism, which may be aided by pen and paper);
“concatenating the first implicit resource feature and the second implicit resource feature in a first concatenating manner to obtain a concatenating resource feature” (mental process – amounts to forming an opinion that two observed datapoints should be considered a continuous whole, which may be aided by pen and paper); and
“determining the implicit feature based on the concatenating resource feature” (mental process – amounts to forming an opinion based on a determination about observed data).
Step 2A Prong 2 & Step 2B: There are no elements left for consideration of implementation within a practical application or for consideration of significantly more.
Accordingly, Claim 6 is rejected as being directed to an abstract idea without significantly more.
Regarding Claim 7:
Step 2A Prong 1: See the rejection of Claim 6 above, which Claim 7 depends on. Here, the claim recites additional elements that are mental processes. Specifically, the claim recites
“wherein the concatenating the first implicit resource feature and the second implicit resource feature comprises: determining a weight corresponding to the second implicit resource feature based on a number of the second resource data of the overlapping resource and a number of the first resource data” (mental process – amounts to exercising judgement to evaluate observed data to form an opinion on a weight variable, which may be aided by pen and paper) and
“obtaining the concatenating resource feature based on the first implicit resource feature, the second implicit resource feature and the weight” (mental process – amounts to forming an opinion that two observed datapoints should be considered a continuous whole, with reference to a known variable, which may be aided by pen and paper).
Step 2A Prong 2 & Step 2B: There are no elements left for consideration of implementation within a practical application or for consideration of significantly more.
Accordingly, Claim 7 is rejected as being directed to an abstract idea without significantly more.
Regarding Claim 8:
Step 2A Prong 1: See the rejection of Claim 1 above, which Claim 8 depends on. Here, the claim recites additional elements that are mental processes. Specifically, the claim recites
“wherein the determining an implicit feature comprises, in a case of determining that the target domain and the source domain have an overlapping user according to the first user data and the second user data: extracting a first joint implicit feature from the first user data and the first resource data” (mental process – amounts to exercising judgement to evaluate observed data to determine whether there are overlapping users, and in the event that there are, forming an opinion on an implicit feature, which may be aided by pen and paper);
“extracting a second joint implicit feature . . . based on the first resource data and second user data of the overlapping user” (mental process – amounts to observing data and forming an opinion on an implicit feature using a filtering mechanism, which may be aided by pen and paper); and
“determining the implicit feature based on the first joint implicit feature and the second joint implicit feature” (mental process – amounts to forming an opinion based on a determination about observed data).
Step 2A Prong 2: This judicial exception is not integrated into a practical application.
The claim recites the additional element:
“using a graph neural network . . . using the graph neural network” (amounts to mere instructions to apply the judicial exception on generic and unspecialized computer components, which do not impose any meaningful limits on practicing the abstract idea).
Step 2B: The claim does not include additional elements considered individually and in combination that are sufficient to amount to significantly more than the judicial exception.
The claim recites the additional element:
“using a graph neural network . . . using the graph neural network” (mere instructions to apply the exception using generic computer components does not provide an inventive concept).
Accordingly, Claim 8 is rejected as being directed to an abstract idea without significantly more.
Regarding Claim 9:
Step 2A Prong 1: See the rejection of Claim 1 above, which Claim 9 depends on. Here, the claim recites additional elements that are mental processes. Specifically, the claim recites
“wherein the determining an implicit feature comprises, in a case of determining that the target domain and the source domain have an overlapping resource according to the first resource data and the second resource data: extracting a first joint implicit feature from the first user data and the first resource data . . . ,” (mental process – amounts to exercising judgement to evaluate observed data to determine whether there are overlapping resources, and in the event that there are, forming an opinion on an implicit feature, which may be aided by pen and paper);
“extracting a second joint implicit feature . . . based on the first user data and second resource data of the overlapping resource” (mental process – amounts to observing data and forming an opinion on an implicit feature using a filtering mechanism, which may be aided by pen and paper); and
“determining the implicit feature based on the first joint implicit feature and the second joint implicit feature” (mental process – amounts to forming an opinion based on a determination about observed data).
Step 2A Prong 2: This judicial exception is not integrated into a practical application.
The claim recites the additional element:
“using a graph neural network . . . using the graph neural network” (amounts to mere instructions to apply the judicial exception on generic and unspecialized computer components, which do not impose any meaningful limits on practicing the abstract idea).
Step 2B: The claim does not include additional elements considered individually and in combination that are sufficient to amount to significantly more than the judicial exception.
The claim recites the additional element:
“using a graph neural network . . . using the graph neural network” (mere instructions to apply the exception using generic computer components does not provide an inventive concept).
Accordingly, Claim 9 is rejected as being directed to an abstract idea without significantly more.
Regarding Claim 10:
Step 2A Prong 1: See the rejection of Claim 1 above, which Claim 10 depends on. Here, the claim recites additional elements that are mental processes. Specifically, the claim recites
“determining the implicit feature based on the first user data and the first resource data, in a case of determining that the target domain and the source domain have no overlapping user according to the first user data and the second user data and that the target domain and the source domain have no overlapping resource according to the first resource data and the second resource data” (mental process – amounts to exercising judgement to evaluate observed data to determine whether there are overlapping users or overlapping resources, and in the event that there are not, forming an opinion on an implicit feature, which may be aided by pen and paper).
Step 2A Prong 2 & Step 2B: There are no elements left for consideration of implementation within a practical application or for consideration of significantly more.
Accordingly, Claim 10 is rejected as being directed to an abstract idea without significantly more.
Regarding Claim 11:
Step 2A Prong 1: See the rejection of Claim 2 above, which Claim 11 depends on. Here, the claim recites additional elements that are mental processes. Specifically, the claim recites
“concatenating the explicit feature and the implicit feature in a second concatenating manner to obtain a first concatenating feature, and acquiring a sample label corresponding to the first concatenating feature” (mental process – amounts to exercising judgement to evaluate observed data to form an opinion that they should be considered a cohesive whole and determining an appropriate label for the whole, which may be aided by pen and paper; notably, the acquiring was not considered to be transmitting data because the features were generated and, as a result, were not previously labeled).
Step 2A Prong 2: This judicial exception is not integrated into a practical application.
The claim recites the additional elements:
“wherein the training the ranking model based on the explicit feature and the implicit feature comprises: . . . training the ranking model based on the first concatenating feature and the corresponding sample label” (amounts to mere instructions to apply the judicial exception on generic and unspecialized computer components, which do not impose any meaningful limits on practicing the abstract idea).
Step 2B: The claim does not include additional elements considered individually and in combination that are sufficient to amount to significantly more than the judicial exception.
The claim recites the additional elements:
“wherein the training the ranking model based on the explicit feature and the implicit feature comprises: . . . training the ranking model based on the first concatenating feature and the corresponding sample label” (mere instructions to apply the exception using generic computer components does not provide an inventive concept).
Accordingly, Claim 11 is rejected as being directed to an abstract idea without significantly more.
Regarding Claim 12:
Step 2A Prong 1: See the rejection of Claim 1 above, which Claim 12 depends on. Here, the claim recites additional elements that are mental processes. Specifically, the claim recites
“obtaining an implicit feature based on the user data and the resource data” (mental process – amounts to exercising judgement to evaluate observed data to form an opinion on an implicit feature, which may be aided by pen and paper) and
“and determining the resource to be recommended matched with the user to be recommended from the resource data” (mental process – amounts to forming a recommendation opinion based on observed data).
Step 2A Prong 2: This judicial exception is not integrated into a practical application.
The claim recites the additional elements:
“acquiring user data of one or more users to be recommended and resource data of one or more resources to be recommended of the target domain” (acquiring data amounts to insignificant extra-solution activity that is incidental to the claimed subject matter) and
“and inputting the implicit feature into the ranking model . . . according to a ranking result of the ranking model” (amounts to mere instructions to apply the judicial exception on generic and unspecialized computer components, which do not impose any meaningful limits on practicing the abstract idea).
Step 2B: The claim does not include additional elements considered individually and in combination that are sufficient to amount to significantly more than the judicial exception.
The claim recites the additional elements:
“acquiring user data of one or more users to be recommended and resource data of one or more resources to be recommended of the target domain” (acquiring data, such as through a network, see buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014), or by accessing information in memory, see Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93, is well-understood, routine, and conventional; it is recited here with a high level of generality and remains insignificant extra-solution activity even upon reconsideration) and
“and inputting the implicit feature into the ranking model . . . according to a ranking result of the ranking model” (mere instructions to apply the exception using generic computer components does not provide an inventive concept).
Accordingly, Claim 12 is rejected as being directed to an abstract idea without significantly more.
Regarding Claim 27:
Step 1: Claim 27 is a machine claim. Therefore, claims 27, 30-31, and 33 are directed to a statutory category of eligible subject matter.
Step 2A Prong 1: If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the "Mental Processes" grouping of abstract ideas. Here, the claim recites limitations that are substantially the same as the limitations of Claim 1. As a result, and as elaborated above, these limitations are abstract ideas because they are mental processes.
Step 2A Prong 2: This judicial exception is not integrated into a practical application.
The claim recites the additional elements:
“An electronic device, comprising: at least one processor; and a memory communicatively connected with the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions, when executed by the at least one processor, cause the at least one processor to at least . . . train the ranking model based on the implicit feature, wherein the ranking model is configured to” (amounts to mere instructions to apply the judicial exception on generic and unspecialized computer components, which do not impose any meaningful limits on practicing the abstract idea);
“acquire first user data and first resource data of a target domain, and acquire second user data and second resource data of a source domain” (acquiring data amounts to insignificant extra-solution activity that is incidental to the claimed subject matter); and
“wherein the implicit feature comprises a feature vector without specific physical meaning” (amounts to merely reciting a particular technological environment or field of use, which does not impose any meaningful limits on practicing the abstract idea).
Step 2B: The claim does not include additional elements considered individually and in combination that are sufficient to amount to significantly more than the judicial exception.
The claim recites the additional elements:
“An electronic device, comprising: at least one processor; and a memory communicatively connected with the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions, when executed by the at least one processor, cause the at least one processor to at least . . . train the ranking model based on the implicit feature, wherein the ranking model is configured to” (mere instructions to apply the exception using generic computer components does not provide an inventive concept);
“acquire first user data and first resource data of a target domain, and acquire second user data and second resource data of a source domain” (acquiring data, such as through a network, see buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014), or by accessing information in memory, see Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93, is well-understood, routine, and conventional; it is recited here with a high level of generality and remains insignificant extra-solution activity even upon reconsideration); and
“wherein the implicit feature comprises a feature vector without specific physical meaning” (merely reciting a particular technological environment or field of use does not provide an inventive concept).
For the reasons above, Claim 27 is rejected as being directed to an abstract idea without significantly more. This rejection applies equally to dependent claims 30-31 and 33. The additional limitations of the dependent claims are addressed below.
Regarding Claim 28:
Step 1: Claim 28, directed to a non-transitory computer-readable storage medium, is a manufacture claim. Therefore, claims 28 and 34-37 are directed to a statutory category of eligible subject matter.
Step 2A Prong 1: If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the "Mental Processes" grouping of abstract ideas. Here, the claim recites limitations that are substantially the same as the limitations of Claim 1. As a result, and as elaborated above, these limitations are abstract ideas because they are mental processes.
Step 2A Prong 2: This judicial exception is not integrated into a practical application.
The claim recites the additional elements:
“A non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are configured to cause a computer system to at least . . . train the ranking model based on the implicit feature, wherein the ranking model is configured to” (amounts to mere instructions to apply the judicial exception on generic and unspecialized computer components, which do not impose any meaningful limits on practicing the abstract idea);
“acquire first user data and first resource data of a target domain, and acquire second user data and second resource data of a source domain” (acquiring data amounts to insignificant extra-solution activity that is incidental to the claimed subject matter); and
“wherein the implicit feature comprises a feature vector without specific physical meaning” (amounts to merely reciting a particular technological environment or field of use, which does not impose any meaningful limits on practicing the abstract idea).
Step 2B: The claim does not include additional elements considered individually and in combination that are sufficient to amount to significantly more than the judicial exception.
The claim recites the additional elements:
“A non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are configured to cause a computer system to at least . . . train the ranking model based on the implicit feature, wherein the ranking model is configured to” (mere instructions to apply the exception using generic computer components does not provide an inventive concept);
“acquire first user data and first resource data of a target domain, and acquire second user data and second resource data of a source domain” (acquiring data, such as through a network, see buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014), or by accessing information in memory, see Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93, is well-understood, routine, and conventional; it is recited here with a high level of generality and remains insignificant extra-solution activity even upon reconsideration); and
“wherein the implicit feature comprises a feature vector without specific physical meaning” (merely reciting a particular technological environment or field of use does not provide an inventive concept).
For the reasons above, Claim 28 is rejected as being directed to an abstract idea without significantly more. This rejection applies equally to dependent claims 34-37. The additional limitations of the dependent claims are addressed below.
Regarding Claim 30, the claim recites additional limitations that are substantially the same as the additional limitations of Claim 2, in the form of a machine. The claim is likewise directed to performing mental processes without integration into a practical application or significantly more.
Accordingly, Claim 30 is rejected under the same rationale.
Regarding Claim 31, the claim recites additional limitations that are substantially the same as the additional limitations of Claim 3, in the form of a machine. The claim is likewise directed to performing mental processes without integration into a practical application or significantly more.
Accordingly, Claim 31 is rejected under the same rationale.
Regarding Claim 33:
Step 2A Prong 1: See the rejection of Claim 27 above, which Claim 33 depends on. Here, the claim recites additional elements that are mental processes. Specifically, the claim recites
“obtain an implicit feature based on the user data and the resource data” (mental process – amounts to exercising judgement to evaluate observed data to form an opinion on an implicit feature that is contained in the observed data, which may be aided by pen and paper) and
“determine the resource to be recommended matched with the user to be recommended from the resource data” (mental process – amounts to forming a recommendation opinion based on observed data).
Step 2A Prong 2: This judicial exception is not integrated into a practical application.
The claim recites the additional element:
“An electronic device, comprising: at least one processor; and a memory communicatively connected with the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the instructions, when executed by the at least one processor, cause the at least one processor to at least: . . . and input the implicit feature into a ranking model, and . . . according to a ranking result of the ranking model . . . wherein the ranking model is obtained by training using the electronic device of claim 27” (amounts to mere instructions to apply the judicial exception on generic and unspecialized computer components, which do not impose any meaningful limits on practicing the abstract idea, see the rejection of Claim 27 above for details) and
“acquire user data of one or more users to be recommended and resource data of one or more resources to be recommended of a target domain” (acquiring data amounts to insignificant extra-solution activity that is incidental to the claimed subject matter).
Step 2B: The claim does not include additional elements considered individually and in combination that are sufficient to amount to significantly more than the judicial exception.
The claim recites the additional element:
“An electronic device, comprising: at least one processor; and a memory communicatively connected with the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the instructions, when executed by the at least one processor, cause the at least one processor to at least: . . . and input the implicit feature into a ranking model, and . . . according to a ranking result of the ranking model . . . wherein the ranking model is obtained by training using the electronic device of claim 27” (mere instructions to apply the exception using generic computer components does not provide an inventive concept) and
“acquire user data of one or more users to be recommended and resource data of one or more resources to be recommended of a target domain” (acquiring data, such as through a network, see buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014), or by accessing information in memory, see Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93, is well-understood, routine, and conventional; which is recited here with a high level of generality, and remains insignificant extra-solution activity even upon reconsideration).
Accordingly, Claim 33 is rejected as being directed to an abstract idea without significantly more.
Regarding Claim 34:
Step 2A Prong 1: See the rejection of Claim 28 above, which Claim 34 depends on. Here, the claim recites additional elements that are mental processes. Specifically, the claim recites
“obtain an implicit feature based on the user data and the resource data” (mental process – amounts to exercising judgement to evaluate observed data to form an opinion on an implicit feature that is contained in the observed data, which may be aided by pen and paper) and
“and determine the resource to be recommended matched with the user to be recommended from the resource data” (mental process – amounts to forming an opinion on a recommendation based on observed data).
Step 2A Prong 2: This judicial exception is not integrated into a practical application.
The claim recites the additional element:
“A non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are configured to cause a computer system to at least: . . . and input the implicit feature into a ranking model . . . according to a ranking result of the ranking model . . . wherein the ranking model is obtained by training using the non-transitory computer-readable storage medium of claim 28” (amounts to mere instructions to apply the judicial exception on generic and unspecialized computer components, which do not impose any meaningful limits on practicing the abstract idea, see the rejection of Claim 28 above for details) and
“acquire user data of one or more users to be recommended and resource data of one or more resources to be recommended of a target domain” (acquiring data amounts to insignificant extra-solution activity that is incidental to the claimed subject matter).
Step 2B: The claim does not include additional elements considered individually and in combination that are sufficient to amount to significantly more than the judicial exception.
The claim recites the additional element:
“A non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are configured to cause a computer system to at least: . . . and input the implicit feature into a ranking model . . . according to a ranking result of the ranking model . . . wherein the ranking model is obtained by training using the non-transitory computer-readable storage medium of claim 28” (mere instructions to apply the exception using generic computer components does not provide an inventive concept) and
“acquire user data of one or more users to be recommended and resource data of one or more resources to be recommended of a target domain” (acquiring data, such as through a network, see buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014), or by accessing information in memory, see Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93, is well-understood, routine, and conventional; which is recited here with a high level of generality, and remains insignificant extra-solution activity even upon reconsideration).
Accordingly, Claim 34 is rejected as being directed to an abstract idea without significantly more.
Regarding Claim 35, the claim recites additional limitations that are substantially the same as the additional limitations of Claim 2, in the form of a machine. The claim is also directed to performing mental processes without integration into a practical application or significantly more.
Accordingly, Claim 35 is rejected under the same rationale.
Regarding Claim 36, the claim recites additional limitations that are substantially the same as the additional limitations of Claim 3, in the form of a machine. The claim is also directed to performing mental processes without integration into a practical application or significantly more.
Accordingly, Claim 36 is rejected under the same rationale.
Regarding Claim 37, the claim recites additional limitations that are substantially the same as the additional limitations of Claim 5, in the form of a machine. The claim is also directed to performing mental processes without integration into a practical application or significantly more.
Accordingly, Claim 37 is rejected under the same rationale.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 2, 6, 10, 12, 27-28, 30, and 33-35 are rejected under 35 U.S.C. 103 as being unpatentable over Kumar et al. (hereinafter Kumar) (US Patent Publication No. 2020/0184537 A1) in view of Sun et al. (hereinafter Sun) (“Parallel Split-Join Networks for Shared Account Cross-domain Sequential Recommendations”).
Regarding Claim 1, Kumar teaches a method for training a ranking model, the method implemented by an electronic device and comprising (Abstract, “Aspects of the present disclosure involve . . . methods . . . includes a plurality of models for obtaining a recommendation score”; Para. [0030], “Predictions may be made and the model trained”; Para. [0035], “Each model may be personalized to provide a recommended list of charitable causes with a recommendation score provided for each charitable cause. The higher the score, the increase better chance of a correct charitable cause prediction”; Para. [0036], “The factors may vary from model to model, as such, these factors may be calibrated across each of the other models in order to achieve a recommendation score that is consistent across the models . . . Note that the use of such technique may include the use of ranking where the charitable causes are ranked and presented to the user 302 in a ranked order”; see also Fig. 3; Fig. 9; Para. [0062]-[0064], “FIG. 9 is a block diagram of a networked system 900 for implementing the processes described herein . . . system 900 may include or implement a plurality of devices, computers, servers . . . The merchant device 902, primary user device 932, and the third-party service provider computer 912 may each include one or more processors, memories, and other appropriate components for executing computer-executable instructions such as program code and/or data. The computer-executable instructions may be stored on one or more computer readable mediums or computer readable devices”, where the method, “processes described herein”, is “implement[ed]” by an electronic device, “system 900 may include or implement a plurality of devices, computers, servers”):
acquiring first user data and first resource data of a target domain, and acquiring second user data and second resource data of a source domain (Para. [0030], “Predictions may be made and the model trained by accessing the details of the user 302i which may be housed in a data warehouse 418. The data warehouse can include various repositories with details about the user 302i including but not limited to purchases made by the user, donations”, where the “donations” to charities (first resource data) “made by the user” (first user data) are of a target domain of charitable donations and the “purchases” (second resource data) “made by the user” (second user data) are of the source domain of merchant transactions);
determining an implicit feature based on the first user data, the first resource data, the second user data and the second resource data (Fig. 7; Para. [0052], “FIGS. 7 illustrates matrix diagram 700 of item-to-item collaborative filtering used to obtain a recommendation score. In this item-to-item collaborative approach, a cross domain approach is considered where, for example, a merchant and a charity are considered. Therefore, in this ongoing charity example, the recommendation score would be based on a prediction where purchases and contributions are correlated. Thus, prediction may be made that users who purchased with merchant X also donate to charity Y”, where the “matrix diagram 700” represents the implicit feature of “correlate[ion]” between “purchases and contributions”; see also Para. [0041], “The first model, which includes the use of cross-collaborative filtering 502 is a model designed to consider not only a user and his/her transactions but consider transactions across domains. For example, transactional information about a merchant and a charity are considered . . . To illustrate this, consider a user who ordered online pet food at merchant X, then given that the user purchased pet food at merchant X, there is a likelihood that the user will donate to the Friends of Animals charity”, where the implicit feature is determined based on the user and resource data for both the target and source domains),
wherein the implicit feature comprises a feature vector without specific physical meaning (Fig. 7, where, as discussed above, the “matrix diagram 700” is a representation of the implicit feature of “correlate[ion]” between “purchases and contributions”, and where, as depicted in Fig. 7 and generally understood by one of ordinary skill in the art, the matrix “700” comprises vectors, which are feature vectors when representing the implicit feature, see Para. [0052], “FIGS. 7 illustrates matrix diagram 700 of item-to-item collaborative filtering used to obtain a recommendation score. In this item-to-item collaborative approach, a cross domain approach is considered where, for example, a merchant and a charity are considered. Therefore, in this ongoing charity example, the recommendation score would be based on a prediction where purchases and contributions are correlated. Thus, prediction may be made that users who purchased with merchant X also donate to charity Y”; see also Fig. 7, where the feature vectors can have a meaning of “charity 702” or “704 merchant” associated with purchases or contributions of users “1” through “m”, which is within the broadest reasonable meaning of without a specific physical meaning because: 1. Records of purchases represent actions without a specific tangible form or physical movement, see Para. [0041], “consider a user who ordered online pet food at merchant X, then given that the user purchased pet food at merchant X, there is a likelihood that the user will donate to the Friends of Animals charity”, where the purchase can be virtual, as opposed to physical, and 2. charities and merchants may represent entities with no physical location or multiple physical locations, such as the “Red Cross”, so merely associating a user with a contribution to an organization does not represent a specific physical meaning, see Para. [0017], “a charity to contribute to, the YMCA, United Way, Red Cross, etc”) and
wherein the determining the implicit feature comprises (Fig. 7; Para. [0052], “FIGS. 7 illustrates matrix diagram 700 of item-to-item collaborative filtering used to obtain a recommendation score. In this item-to-item collaborative approach, a cross domain approach is considered where, for example, a merchant and a charity are considered. Therefore, in this ongoing charity example, the recommendation score would be based on a prediction where purchases and contributions are correlated. Thus, prediction may be made that users who purchased with merchant X also donate to charity Y”, where the “matrix diagram 700” represents the implicit feature of “correlate[ion]” between “purchases and contributions”; see also Para. [0041], “The first model, which includes the use of cross-collaborative filtering 502 is a model designed to consider not only a user and his/her transactions but consider transactions across domains. For example, transactional information about a merchant and a charity are considered . . . To illustrate this, consider a user who ordered online pet food at merchant X, then given that the user purchased pet food at merchant X, there is a likelihood that the user will donate to the Friends of Animals charity”, where the implicit feature is determined),
in a case of determining that the target domain and the source domain have an overlapping user according to the first user data and the second user data (Fig. 7, where references “706a” through “706c” correspond with cases where “charity 702” donations (target domain) and “merchant 704” transactions (source domain) had overlapping users, see generally Para. [0041], “one focus of the cross-collaborative filtering model 502 may include making a recommendation based in part on an association or prediction regarding people who made a purchase with a particular merchant and also donated to a particular charity”):
extracting a first implicit user feature from the first user data using a collaborative filtering manner (Fig. 7, where column “i” entries labeled “R”, such as in row “1”, are first implicit user features from purchase data “1” through “m” (first user data when under column “i”); where the features are implicit because the values are calculated instead of plainly expressed and their significance is derived from an association, see Para. [0053], “thus, the weighted sum of the similarity score for a user and charity may be calculated based on [equation] where R denotes the number of transactions of a customer with a merchant or charity and the sum is over the top N user merchants.sub.j. Note that the top N user merchants for each user may be selected based on the number of transactions a user may have with each merchant, wherein the execution time of the algorithm may be decrease (significantly) if the computation is restricted to the top N merchants when calculating the weighted score for each charity”; where “computed and used for determining a recommendation score” is within the broadest reasonable interpretation of extracting, see Para. [0053], “For example, as illustrated in matrix diagram 700, both charity 702 and merchant 704, share some similarities 706 in rows 1, m−2 and m. From these similarities, a similarity value may be computed and used for determining a recommendation score”; and where “matrix diagram 700” uses “collaborative filtering”, see Para. [0052], “FIGS. 7 illustrates matrix diagram 700 of item-to-item collaborative filtering”),
extracting a second implicit user feature from the second user data of the overlapping user using the collaborative filtering manner (Fig. 7, where column “j 704” entries labeled “R”, such as in row “1”, are second implicit user features from purchase data “1” through “m” (second user data when under column “j 704”); where the features are implicit because the values are calculated instead of plainly expressed and their significance is derived from an association, see Para. [0053], “thus, the weighted sum of the similarity score for a user and charity may be calculated based on [equation] where R denotes the number of transactions of a customer with a merchant or charity and the sum is over the top N user merchants.sub.j. Note that the top N user merchants for each user may be selected based on the number of transactions a user may have with each merchant, wherein the execution time of the algorithm may be decrease (significantly) if the computation is restricted to the top N merchants when calculating the weighted score for each charity”; where “computed and used for determining a recommendation score” is within the broadest reasonable interpretation of extracting, see Para. [0053], “For example, as illustrated in matrix diagram 700, both charity 702 and merchant 704, share some similarities 706 in rows 1, m−2 and m. From these similarities, a similarity value may be computed and used for determining a recommendation score”; and where “matrix diagram 700” uses “collaborative filtering” to determine overlapping users, emphasized by bold boxes, see Fig. 7; Para. [0052], “FIGS. 7 illustrates matrix diagram 700 of item-to-item collaborative filtering”),
. . . [associating] the first implicit user feature and the second implicit user feature . . . to obtain a[n associated] . . . user feature, and determining the implicit feature based on the [associated] . . . user feature (Fig. 7, where the labeled “R” entry implicit features form the “similarities” between the first and second user data, which determine the correlation, and which as discussed above is the implicit feature, see Para. [0053], “For example, as illustrated in matrix diagram 700, both charity 702 and merchant 704, share some similarities 706 in rows 1, m−2 and m. From these similarities, a similarity value may be computed and used for determining a recommendation score”; Para. [0052], “the recommendation score would be based on a prediction where purchases and contributions are correlated”); and
training the ranking model based on the implicit feature (Fig. 7; Para. [0053], “In one embodiment, a similarity matrix or matrix diagram 700 may be created based on, for example, co-purchases or other similarity such that [equation] is used as for model training”),
wherein the ranking model is configured to recommend a resource to a user of the target domain (Para. [0035], “Each model may be personalized to provide a recommended list of charitable causes with a recommendation score provided for each charitable cause. The higher the score, the increase better chance of a correct charitable cause prediction”; Para. [0036], “The factors may vary from model to model, as such, these factors may be calibrated across each of the other models in order to achieve a recommendation score that is consistent across the models . . . Note that the use of such technique may include the use of ranking where the charitable causes are ranked and presented to the user 302 in a ranked order”; see also Fig. 3).
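For illustration only, and not as part of the claim mapping: the weighted-sum scoring described in Kumar’s Para. [0053] (a similarity score between a charity column and each of a user’s top-N merchant columns, weighted by the transaction counts R) can be sketched as follows. The cosine similarity choice, the toy matrix values, and all function and variable names are assumptions for illustration and are not drawn from Kumar.

```python
# Sketch of item-to-item collaborative filtering scoring in the shape of
# Kumar's Fig. 7: rows are users 1..m, columns are merchants; R_charity is
# the charity column i. All names and values are illustrative assumptions.
import numpy as np

def cosine_similarity(a, b):
    # Similarity between two item columns; 0.0 when either column is empty.
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b) / denom if denom else 0.0

def charity_score(R_merchants, R_charity, user, top_n=3):
    """Weighted sum of item-to-item similarities between the charity column
    and the user's top-N merchant columns, weighted by transaction counts R."""
    counts = R_merchants[user]              # user's transactions per merchant
    top = np.argsort(counts)[::-1][:top_n]  # restrict the sum to top-N merchants
    return sum(cosine_similarity(R_merchants[:, j], R_charity) * counts[j]
               for j in top)

R_merchants = np.array([[3, 0, 1],
                        [0, 2, 0],
                        [4, 1, 2]], dtype=float)
R_charity = np.array([1.0, 0.0, 2.0])
score = charity_score(R_merchants, R_charity, user=0)
```

Restricting the sum to the top-N merchant columns mirrors the execution-time consideration Kumar notes in Para. [0053].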
Kumar does not explicitly disclose . . . concatenating . . . in a first concatenating manner . . . concatenating . . . the concatenating . . . .
However, Sun teaches . . . concatenating [features across domains] . . . in a first concatenating manner [to obtain a concatenating feature, where] . . . the concatenating . . . [is used for cross domain recommendation] (Pg. 9, Col. 2, Para. 1, “The hybrid recommendation decoder integrates hybrid information from both domains A and B to evaluate the recommendation probabilities of the candidate items. Specifically, it first gets the hybrid representation by concatenating the representation hB from domain B and the transformed representation h(A→B) from domain A to domain B”).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the determining an implicit feature based on first user data, first resource data, second user data, and second resource data, comprising: determining a user overlap between the first and second user data, extracting a first implicit feature from first user data and a second implicit feature from second user data with an overlapping user using collaborative filtering, and determining the implicit feature based on the associated feature of the first and second features of Kumar with the concatenating features across domains in a first concatenating manner to obtain a concatenating feature, where the concatenating is used for cross domain recommendation of Sun in order to effectuate the combination of information across domains, which will allow downstream similarity analysis to include complementary information (Sun, Pg. 2, Col. 2, Para. 2, “Finally, the hybrid recommendation decoder module estimates the recommendation scores for each item based on the information from both domains, i.e., the in-domain representations from the target domain and the cross-domain representations from the complementary domain”, which utilizes the above concatenating approach to effectuate this analysis).
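For illustration only, and not as part of the claim mapping: the concatenation Sun describes at Pg. 9, Col. 2 (a hybrid representation formed from the in-domain representation hB and the transformed cross-domain representation h(A→B)) can be sketched as follows. The vector dimensions and the linear transform standing in for Sun’s learned domain transfer are assumptions for illustration.

```python
# Sketch of Sun's hybrid representation: concatenate the in-domain
# representation h_B with a cross-domain representation transformed from
# domain A to domain B. Dimensions and the transform are assumptions.
import numpy as np

rng = np.random.default_rng(0)

h_A = rng.standard_normal(4)        # representation learned in domain A
h_B = rng.standard_normal(4)        # representation learned in domain B
W_AB = rng.standard_normal((4, 4))  # stand-in for the learned A-to-B transform

h_A_to_B = W_AB @ h_A                        # transformed representation h_(A->B)
h_hybrid = np.concatenate([h_B, h_A_to_B])   # the "first concatenating manner"
```

The concatenated feature would then feed a downstream recommendation decoder or ranking model, as in the combination rationale above.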
Regarding Claim 2, Kumar in view of Sun teach the method according to claim 1, further comprising:
determining an explicit feature based on the first user data and the first resource data (Kumar, Fig. 7, where the matrix entries represent the correlation implicit feature, see Kumar, Para. [0053], “For example, as illustrated in matrix diagram 700, both charity 702 and merchant 704, share some similarities 706 in rows 1, m−2 and m. From these similarities, a similarity value may be computed and used for determining a recommendation score”, whereas the “rows” correspond with user “co-purchases” (first user data) that include donations to the “charity” (first resource data), see Kumar, Para. [0053], “Considering matrix diagram 700 or similarity matrix, a first approach is to consider a charity (i) and a merchant (j) and identify those instances where a similarity exists between the two. As illustrated in the similarity matrix, a charity 702 and merchant, 704 are both examined to determine what similarities exist between the two. For example, as illustrated in matrix diagram 700, both charity 702 and merchant 704, share some similarities 706 in rows 1, m−2 and m. From these similarities, a similarity value may be computed and used for determining a recommendation score. In one embodiment, a similarity matrix or matrix diagram 700 may be created based on, for example, co-purchases or other similarity”, where the explicit features of “charity” associated with each user’s “co-purchases” must be determined to be used to construct the matrix); and
training the ranking model based on the explicit feature and the implicit feature (Kumar, Fig. 7; Kumar, Para. [0053], “In one embodiment, a similarity matrix or matrix diagram 700 may be created based on, for example, co-purchases or other similarity such that [equation] is used as for model training”, where, as discussed above, “matrix diagram 700” includes both the implicit and explicit features).
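For illustration only, and not as part of the claim mapping: training a ranking model on explicit and implicit features together, as mapped above, can be sketched as follows. The logistic-regression learner, the synthetic data, and all names are assumptions; Kumar’s Para. [0053] states only that the similarity matrix “is used as for model training”.

```python
# Sketch of ranking-model training over combined explicit and implicit
# features. The learner, shapes, and synthetic labels are assumptions.
import numpy as np

rng = np.random.default_rng(1)

explicit = rng.integers(0, 5, size=(100, 3)).astype(float)  # e.g., raw counts
implicit = rng.standard_normal((100, 4))                    # e.g., feature vectors
X = np.concatenate([explicit, implicit], axis=1)            # combined training input
y = (X @ rng.standard_normal(7) > 0).astype(float)          # toy relevance labels

w = np.zeros(7)
for _ in range(200):                            # plain gradient descent on
    p = 1.0 / (1.0 + np.exp(-(X @ w)))          # logistic loss
    w -= 0.1 * X.T @ (p - y) / len(y)

scores = X @ w                                  # per-item recommendation scores
ranking = np.argsort(scores)[::-1]              # items ranked by score, descending
```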
Regarding Claim 6, Kumar in view of Sun teach the method according to claim 1, wherein the determining an implicit feature comprises (Kumar, Fig. 7; Kumar, Para. [0052], “FIGS. 7 illustrates matrix diagram 700 of item-to-item collaborative filtering used to obtain a recommendation score. In this item-to-item collaborative approach, a cross domain approach is considered where, for example, a merchant and a charity are considered. Therefore, in this ongoing charity example, the recommendation score would be based on a prediction where purchases and contributions are correlated. Thus, prediction may be made that users who purchased with merchant X also donate to charity Y”, where the “matrix diagram 700” represents the implicit feature of “correlate[ion]” between “purchases and contributions”; see also Kumar, Para. [0041], “The first model, which includes the use of cross-collaborative filtering 502 is a model designed to consider not only a user and his/her transactions but consider transactions across domains. For example, transactional information about a merchant and a charity are considered . . . To illustrate this, consider a user who ordered online pet food at merchant X, then given that the user purchased pet food at merchant X, there is a likelihood that the user will donate to the Friends of Animals charity”, where the implicit feature is determined),
in a case of determining that the target domain and the source domain have an overlapping resource according to the first resource data and the second resource data (Kumar, Fig. 7, where references “706a” through “706c” correspond with cases where “charity 702” donations (target domain) and “merchant 704” transactions (source domain) had overlap, where, in view of Sun, the overlap is between resources, see Sun, Pg. 2, Col. 1, Para. 1, “For example, as illustrated in Figure 1, videos like “Mickey Mouse” in the video domain might help to predict the next item “School House Fun” in the educational domain because they reflect the same interest in the Disney cartoon character “Mickey Mouse” presumably by a child in this family”, where the “video domain” and “educational domain” overlap with “Mickey Mouse” resources; see also Sun, Pg. 3, Fig. 1):
extracting a first implicit resource feature from the first resource data using a collaborative filtering manner (Kumar, Fig. 7, where column “i” (first resource data) entries labeled “R”, such as in row “1”, are first implicit resource features from purchase data “1” through “m”; where the features are implicit because the values are calculated instead of plainly expressed and their significance is derived from an association, see Kumar, Para. [0053], “thus, the weighted sum of the similarity score for a user and charity may be calculated based on [equation] where R denotes the number of transactions of a customer with a merchant or charity and the sum is over the top N user merchants.sub.j. Note that the top N user merchants for each user may be selected based on the number of transactions a user may have with each merchant, wherein the execution time of the algorithm may be decrease (significantly) if the computation is restricted to the top N merchants when calculating the weighted score for each charity”; where “computed and used for determining a recommendation score” is within the broadest reasonable interpretation of extracting, see Kumar, Para. [0053], “For example, as illustrated in matrix diagram 700, both charity 702 and merchant 704, share some similarities 706 in rows 1, m−2 and m. From these similarities, a similarity value may be computed and used for determining a recommendation score”; and where “matrix diagram 700” uses “collaborative filtering”, see Kumar, Para. [0052], “FIGS. 7 illustrates matrix diagram 700 of item-to-item collaborative filtering”);
extracting a second implicit resource feature from the second resource data of the overlapping resource using the collaborative filtering manner (Kumar, Fig. 7, where column “j 704” (second resource data) entries labeled “R”, such as in row “1”, are second implicit resource features from purchase data “1” through “m”; where the features are implicit because the values are calculated instead of plainly expressed and their significance is derived from an association, see Kumar, Para. [0053], “thus, the weighted sum of the similarity score for a user and charity may be calculated based on [equation] where R denotes the number of transactions of a customer with a merchant or charity and the sum is over the top N user merchants.sub.j. Note that the top N user merchants for each user may be selected based on the number of transactions a user may have with each merchant, wherein the execution time of the algorithm may be decrease (significantly) if the computation is restricted to the top N merchants when calculating the weighted score for each charity”; where “computed and used for determining a recommendation score” is within the broadest reasonable interpretation of extracting, see Kumar, Para. [0053], “For example, as illustrated in matrix diagram 700, both charity 702 and merchant 704, share some similarities 706 in rows 1, m−2 and m. From these similarities, a similarity value may be computed and used for determining a recommendation score”; and where “matrix diagram 700” uses “collaborative filtering”, see Kumar, Para. [0052], “FIGS. 7 illustrates matrix diagram 700 of item-to-item collaborative filtering”; and, in view of Sun, the overlap is between both users and resources, see Sun, Pg. 2, Col. 1, Para. 1, “For example, as illustrated in Figure 1, videos like “Mickey Mouse” in the video domain might help to predict the next item “School House Fun” in the educational domain because they reflect the same interest in the Disney cartoon character “Mickey Mouse” presumably by a child in this family”, where the “video domain” and “educational domain” overlap with “Mickey Mouse” resources; see also Sun, Pg. 3, Fig. 1);
concatenating the first implicit resource feature and the second implicit resource feature in a first concatenating manner to obtain a concatenating resource feature; and determining the implicit feature based on the concatenating resource feature (Kumar, Fig. 7, where the labeled “R” entry implicit features form the “similarities” between the first and second resource data, which determine the correlation, and which, as discussed above, is the implicit feature, see Kumar, Para. [0053], “For example, as illustrated in matrix diagram 700, both charity 702 and merchant 704, share some similarities 706 in rows 1, m−2 and m. From these similarities, a similarity value may be computed and used for determining a recommendation score”; Kumar, Para. [0052], “the recommendation score would be based on a prediction where purchases and contributions are correlated”; where, in view of Sun, the overlap is between both users and resources, see Sun, Pg. 2, Col. 1, Para. 1, “For example, as illustrated in Figure 1, videos like “Mickey Mouse” in the video domain might help to predict the next item “School House Fun” in the educational domain because they reflect the same interest in the Disney cartoon character “Mickey Mouse” presumably by a child in this family”, where the “video domain” and “educational domain” overlap with “Mickey Mouse” resources, see also Sun, Pg. 3, Fig. 1; and where, in view of Sun, the similarities are formed by concatenating the features in a first concatenating manner to obtain the concatenated similarities for use in determining the implicit feature, see Sun, Pg. 9, Col. 2, Para. 1, “The hybrid recommendation decoder integrates hybrid information from both domains A and B to evaluate the recommendation probabilities of the candidate items. Specifically, it first gets the hybrid representation by concatenating the representation hB from domain B and the transformed representation h(A→B) from domain A to domain B”).
The reasons of obviousness for the combination of Kumar with Sun are discussed above in regard to Claim 1 and remain applicable here.
Additionally, before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to combine the determining an implicit feature based on first user data, first resource data, second user data, and second resource data, comprising: determining a user overlap between first and second resource data, extracting a first implicit feature from first user data and a second implicit feature from second user data with an overlapping user using collaborative filtering, and determining the implicit feature based on the concatenating feature of the first and second features concatenated in a first concatenating manner of Kumar in view of Sun with the cross domain recommendation in the case of determining that the first resource data of a target domain and the second resource data of a source domain have an overlapping resource in further view of Sun in order to increase the likelihood that a user will be receptive to a target resource recommendation (Sun, Pg. 2, Col. 1, Para. 1, “User behavior in one domain may be helpful for improving recommendations in another domain [76, 66, 33] because user behavior in different domains may reflect similar user interests. For example, . . . [resources with overlap of] “Mickey Mouse””; Kumar, Para. [0041], “To illustrate this, consider a user who ordered online pet food at merchant X, then given that the user purchased pet food at merchant X, there is a likelihood that the user will donate to the Friends of Animals charity”, where the likelihood of donation would increase if the “purchase” overlapped with “the target resource”, such as if “merchant X” was also a charitable organization, instead of merely associated with “pets”, such as “Friends of Animals charity”).
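For illustration of the concatenation operation Sun describes (combining the domain-B representation hB with the transformed representation h(A→B) to form the hybrid representation), the following Python sketch is provided; the vector values and dimensionality are hypothetical assumptions, not data from Sun:

```python
# Illustrative sketch only: the values below are hypothetical 4-dimensional
# representations standing in for Sun's domain-B representation h_B and the
# transformed representation h_(A->B) from domain A to domain B.
h_B = [0.2, 0.5, 0.1, 0.7]        # target-domain (domain B) representation
h_A_to_B = [0.9, 0.3, 0.4, 0.6]   # transformed source-domain representation

# The "hybrid representation" is obtained by literal concatenation, so the
# result carries information from both domains.
h_hybrid = h_B + h_A_to_B

print(len(h_hybrid))  # 8
```

The concatenated vector retains both domains' information side by side, which is the sense in which the hybrid decoder "integrates hybrid information from both domains A and B."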
Regarding Claim 10, Kumar in view of Sun teach the method according to claim 1, further comprising:
determining the implicit feature based on the first user data and the first resource data (Kumar, Fig. 3A, Kumar, Para. [0023], “The use case considered in FIG. 3A may use a collaborative filter type approach and use case where a user 302c may be presented with a recommendation based on the donations the user 302c made and/or other users 302a, b made. To illustrate this approach, consider user 302a. This user 302a has a donation history and as illustrated, user 302a has a donation history with donations made to charitable causes 304-310. Turning to user 302b, this user 302b also has a donation history but in this example, donations are focused on a single charitable cause 308. Now using a similar approach, user 302c can be characterized as previously donating with donation history support primary focused on two charitable causes 306, 310. In one embodiment, a charitable cause recommendation may be provided to a user 302c using a collaborative filtering approach. In this scenario, an observation, analysis, or correlation can be made such that a similarity is identified between user 302a and user 302b. This correlation or similarity 312 can be identified based in part on the observation that user 302c, like user 302a, donated to charitable causes 306 and 308. Therefore, based on this assessment and similarity 312, two new recommendations 314 may be made and surfaced to user 302c”, where the “correlation” implicit feature is based on the “donation” target domain of first user data and first resource data and the “collaborative filtering approach” requires a mathematical representation that is within the broadest reasonable interpretation of a model in order to generate the “charitable cause recommendation”),
in a case of determining that the target domain and the source domain have no overlapping user according to the first user data and the second user data and that the target domain and the source domain have no overlapping resource according to the first resource data and the second resource data (Kumar, Para. [0035], “A first technique can include the making a prediction based on the information available. The use of this technique may be appropriate when not all the desired data is available to compute the desired charitable cause for all users at all time. For example . . . purchase history may not be available or may be limited. In such instances, from the personalization models available, those with inadequate details may be eliminated or not considered when predicting the list of charitable causes”, where, when “purchase history” (source domain) is unavailable, there can be no overlapping user or resource because there is no second user data or second resource data, and, as a result, the donation-only model discussed above would be used; notably, this feature will be used in “re-train[ing]”, see Kumar, Abstract, “the system is introduced that can re-train the recommendation model based on a feedback received in response to a recommendation made using on the recommendation score obtained”).
Regarding Claim 12, Kumar in view of Sun teach the method according to claim 1, further comprising:
acquiring user data of one or more users to be recommended and resource data of one or more resources to be recommended of the target domain (Kumar, Fig. 8; Kumar, Para. [0055] – [0056], “FIG. 8, an example process 800 for obtaining a recommendation implemented by a system and method . . . Process 800 may begin with operation 802, where a system receives or determines that user information is available for processing”, “user information” is data on users of the target group to receive a “recommendation”; Kumar, Para. [0059], “Next, at operation 810, a determination is made as to the one or more causes with the higher recommendation scores that may be used here and presented to the user at operation 812”, where the “causes” are resource data of the target group, which must be acquired for a “score” to be “determine[d]”);
obtaining an implicit feature based on the user data and the resource data (Kumar, Fig. 8; Kumar, Para. [0057], “determining recommendation score(s) for charitable causes that may be presented to a user for donation at operation 808 . . . based in part on an association or prediction regarding people who made a purchase with a particular merchant and also donated to a particular charity. In other words, the system is designed to show that users who make purchases at a particular merchant are also likely to make a donation to a specific cause”, where the implicit feature is the correlation between the “purchase” contained in the “user information” and the “particular charity” from the resource data); and
inputting the implicit feature into the ranking model (Kumar, Fig. 8; Kumar, Para. [0057], “Once the user information is available for transacting, process 800 can continue to operation 806 where based in part on the characterization or user information retrieved, a determination is made regarding which model(s) to use for obtaining and presenting a recommendation to the user. As indicated above an in conjunction with FIG. 5, a random walk, cluster, and cross-domain collaborative model may be used. Additionally, the model(s) used may be further used in conjunction with an ensemble model for determining recommendation score(s) for charitable causes that may be presented to a user for donation at operation 808. Recall that in the cross-domain filtering model, a recommendation may be made based in part on an association or prediction regarding people who made a purchase with a particular merchant and also donated to a particular charity”; see also Kumar, Fig. 7; Kumar, Para. [0052], “To illustrate how a cross-domain based recommendation score may be obtained, FIG. 7 is presented to provide some insight. In particular, FIGS. 7 illustrates matrix diagram 700 of item-to-item collaborative filtering used to obtain a recommendation score”),
and determining the resource to be recommended matched with the user to be recommended from the resource data according to a ranking result of the ranking model (Kumar, Fig. 8; Kumar, Para. [0059], “Next, at operation 810, a determination is made as to the one or more causes with the higher recommendation scores that may be used here and presented to the user at operation 812”, where, as discussed above, the recommendation can be according to a ranking result of the models, see Kumar, Para. [0035], “Each model may be personalized to provide a recommended list of charitable causes with a recommendation score provided for each charitable cause. The higher the score, the increase better chance of a correct charitable cause prediction”; Kumar, Para. [0036], “The factors may vary from model to model, as such, these factors may be calibrated across each of the other models in order to achieve a recommendation score that is consistent across the models . . . Note that the use of such technique may include the use of ranking where the charitable causes are ranked and presented to the user 302 in a ranked order”; see also Kumar, Fig. 3).
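To illustrate the "ranked order" presentation Kumar's Para. [0036] describes (charitable causes ranked by recommendation score and presented to the user), a minimal Python sketch follows; the cause names and score values are hypothetical assumptions:

```python
# Illustrative sketch only: hypothetical per-cause recommendation scores
# standing in for the model outputs Kumar describes. "The higher the score,
# the ... better chance of a correct charitable cause prediction."
scores = {
    "Cause A": 0.82,
    "Cause B": 0.64,
    "Cause C": 0.91,
}

# Rank causes by descending recommendation score (the ranking result).
ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# The highest-scoring cause is the resource matched with the user.
top_cause = ranked[0][0]
print(top_cause)  # Cause C
```

The matching step then simply selects from the top of this ordering when presenting recommendations to the user.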
Regarding Claim 27, Kumar teaches an electronic device, comprising: at least one processor; and a memory communicatively connected with the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions, when executed by the at least one processor, cause the at least one processor to at least: . . . (Fig. 9; Para. [0062]-[0064], “FIG. 9 is a block diagram of a networked system 900 for implementing the processes described herein . . . system 900 may include or implement a plurality of devices, computers, servers . . . The merchant device 902, primary user device 932, and the third-party service provider computer 912 may each include one or more processors, memories, and other appropriate components for executing computer-executable instructions such as program code and/or data. The computer-executable instructions may be stored on one or more computer readable mediums or computer readable devices”).
The additional elements of the claim are substantially the same as the limitations of Claim 1; therefore, it is rejected under the same rationale.
Regarding Claim 28, Kumar teaches a non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are configured to cause a computer system to at least: . . . (Fig. 9; Para. [0062]-[0064], “FIG. 9 is a block diagram of a networked system 900 for implementing the processes described herein . . . system 900 may include or implement a plurality of . . . computers . . . computer 912 may each include one or more processors, memories, and other appropriate components for executing computer-executable instructions such as program code and/or data. The computer-executable instructions may be stored on one or more computer readable mediums or computer readable devices”, where “computer readable devices” are non-transitory; see also Fig. 8; Para. [0055], “FIG. 8 illustrates a flow diagram illustrating operations for obtaining a recommendation score using cross-domain collaborative filtering is presented. According to some embodiments, process 800 may include one or more of operations 802-814, which may be implemented, at least in part, in the form of executable code stored on a non-transitory, tangible, machine readable media”).
The additional elements of the claim are substantially the same as the limitations of Claim 1; therefore, it is rejected under the same rationale.
Regarding Claim 30, the additional elements of the dependent claim are substantially the same as the limitations of Claim 2; therefore, it is rejected under the same rationale.
Regarding Claim 33, Kumar teaches an electronic device, comprising: at least one processor; and a memory communicatively connected with the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the instructions, when executed by the at least one processor, cause the at least one processor to at least (Fig. 9; Para. [0062]-[0064], “FIG. 9 is a block diagram of a networked system 900 for implementing the processes described herein . . . system 900 may include or implement a plurality of devices, computers, servers . . . The merchant device 902, primary user device 932, and the third-party service provider computer 912 may each include one or more processors, memories, and other appropriate components for executing computer-executable instructions such as program code and/or data. The computer-executable instructions may be stored on one or more computer readable mediums or computer readable devices”):
. . . wherein the ranking model is obtained by training using the electronic device of claim 27 (see the rejection of Claim 27 above for details).
The additional elements of the dependent claim are substantially the same as the limitations of Claim 12; therefore, it is rejected under the same rationale.
Regarding Claim 34, Kumar teaches a non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are configured to cause a computer system to at least (Fig. 9; Para. [0062]-[0064], “FIG. 9 is a block diagram of a networked system 900 for implementing the processes described herein . . . system 900 may include or implement a plurality of . . . computers . . . computer 912 may each include one or more processors, memories, and other appropriate components for executing computer-executable instructions such as program code and/or data. The computer-executable instructions may be stored on one or more computer readable mediums or computer readable devices”, where “computer readable devices” are non-transitory; see also Fig. 8; Para. [0055], “FIG. 8 illustrates a flow diagram illustrating operations for obtaining a recommendation score using cross-domain collaborative filtering is presented. According to some embodiments, process 800 may include one or more of operations 802-814, which may be implemented, at least in part, in the form of executable code stored on a non-transitory, tangible, machine readable media”):
. . . wherein the ranking model is obtained by training using the non-transitory computer-readable storage medium of claim 28 (see the rejection of Claim 28 above for details).
The additional elements of the dependent claim are substantially the same as the limitations of Claim 12; therefore, it is rejected under the same rationale.
Regarding Claim 35, the additional elements of the dependent claim are substantially the same as the limitations of Claim 2; therefore, it is rejected under the same rationale.
Claims 3, 31, and 36 are rejected under 35 U.S.C. 103 as being unpatentable over Kumar in view of Sun, Zhu et al. (hereinafter Zhu) (“Cross-Domain Recommendation: Challenges, Progress, and Prospects”), and He et al. (hereinafter He) (“Neural Collaborative Filtering”).
Regarding Claim 3, Kumar in view of Sun teach the method according to claim 2, wherein the determining an explicit feature based on the first user data and the first resource data comprises (Kumar, Fig. 7, where the matrix entries represent the correlation implicit feature, see Kumar, Para. [0053], “For example, as illustrated in matrix diagram 700, both charity 702 and merchant 704, share some similarities 706 in rows 1, m−2 and m. From these similarities, a similarity value may be computed and used for determining a recommendation score”, whereas the “rows” correspond with user “co-purchases” (first user data) that include donations to the “charity” (first resource data), see Kumar, Para. [0053], “Considering matrix diagram 700 or similarity matrix, a first approach is to consider a charity (i) and a merchant (j) and identify those instances where a similarity exists between the two. As illustrated in the similarity matrix, a charity 702 and merchant, 704 are both examined to determine what similarities exist between the two. For example, as illustrated in matrix diagram 700, both charity 702 and merchant 704, share some similarities 706 in rows 1, m−2 and m. From these similarities, a similarity value may be computed and used for determining a recommendation score. In one embodiment, a similarity matrix or matrix diagram 700 may be created based on, for example, co-purchases or other similarity”, where the explicit features of “charity” associated with each user’s “co-purchases” must be determined to be used to construct the matrix):
acquiring a first explicit user feature from the first user data of each of a plurality of [entries in a target domain] . . . (Kumar, Para. [0030], “Predictions may be made and the model trained by accessing the details of the user 302i which may be housed in a data warehouse 418. The data warehouse can include various repositories with details about the user 302i including but not limited to purchases made by the user, donations”, where the explicit user feature of “purchas[ing]” “user” is acquired for each of a plurality of entries of the first user data, see Kumar, Fig. 7, where the rows are a plurality of entries),
and acquiring a first explicit resource feature from the first resource data of each of the plurality of [entries in the target domain] . . . (Kumar, Para. [0030], “Predictions may be made and the model trained by accessing the details of the user 302i which may be housed in a data warehouse 418. The data warehouse can include various repositories with details about the user 302i including but not limited to purchases made by the user, donations”, where the explicit resource feature of “donations” is acquired, where the pluralization of “donations” demonstrates it is for a plurality of entries),
. . . the first explicit user features of the plurality of [entries in the target domain] . . . , and . . . the first explicit resource features of the plurality of [entries in the target domain] . . . (Kumar, Para. [0030], “Predictions may be made and the model trained by accessing the details of the user 302i which may be housed in a data warehouse 418. The data warehouse can include various repositories with details about the user 302i including but not limited to purchases made by the user, donations”, each discussed in detail in the preceding two parentheticals); and
[associating], for each of the plurality of [entries in the target domain], the first explicit user feature and the first explicit resource feature . . . , to obtain the explicit feature (Kumar, Fig. 7, where the “rows” correspond with the explicit user feature of “co-purchases” from first user data that is associated with the explicit resource feature of donations to the “charity” from the first resource data, see Kumar, Para. [0053], “Considering matrix diagram 700 or similarity matrix, a first approach is to consider a charity (i) and a merchant (j) and identify those instances where a similarity exists between the two. As illustrated in the similarity matrix, a charity 702 and merchant, 704 are both examined to determine what similarities exist between the two. For example, as illustrated in matrix diagram 700, both charity 702 and merchant 704, share some similarities 706 in rows 1, m−2 and m. From these similarities, a similarity value may be computed and used for determining a recommendation score. In one embodiment, a similarity matrix or matrix diagram 700 may be created based on, for example, co-purchases or other similarity”).
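To illustrate the row-wise association in the style of Kumar's matrix diagram 700 (where rows sharing entries for both charity (i) and merchant (j) constitute "similarities 706"), a minimal Python sketch follows; the row values are hypothetical assumptions, not data from Kumar:

```python
# Illustrative sketch only: a hypothetical user-by-entity table in the style
# of Kumar's matrix diagram 700. Each row is a user; the two fields record
# whether that user interacted with charity (i) and merchant (j).
rows = [
    {"charity_i": 1, "merchant_j": 1},  # activity with both (a shared entry)
    {"charity_i": 0, "merchant_j": 1},
    {"charity_i": 1, "merchant_j": 0},
    {"charity_i": 1, "merchant_j": 1},  # analogous to row m-2 in Fig. 7
    {"charity_i": 1, "merchant_j": 1},  # analogous to row m in Fig. 7
]

# A similarity exists in rows where the user interacted with both entities,
# mirroring the shared entries identified between charity 702 and merchant 704.
similar_rows = [idx for idx, r in enumerate(rows) if r["charity_i"] and r["merchant_j"]]
print(similar_rows)  # [0, 3, 4]
```

From rows like these, a similarity value can be computed and used for determining a recommendation score, as Para. [0053] describes.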
Kumar in view of Sun do not explicitly disclose . . . target domains using a same feature encoding manner . . . target domains using a same feature encoding manner . . . wherein formats of . . . target domains are identical to each other . . . formats of . . . target domains are identical to each other . . . concatenating . . . target domains . . . in a second concatenating manner.
However, Zhu teaches . . . [generation of recommendations for a plurality of] target domains . . . target domains . . . target domains . . . target domains . . . target domains . . . (Pg. 4, Col. 1, Para. 2, “a multi-target CDR scenario, the researchers aim to improve the recommendation accuracy in multiple domains simultaneously. The core idea of multitarget CDR is to leverage more auxiliary information from more domains to achieve a further improvement of recommendation performance”, where “multi-target” “recommendations” for “multiple domains” is a plurality of target domains).
Before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to combine the acquiring of first explicit user features and first explicit resource features for a plurality of entries in a target domain to generate recommendations for the target domain of Kumar in view of Sun with the generation of recommendations for a plurality of target domains of Zhu in order to utilize source data of broad applicability to improve the recommendations for a plurality of target domains instead of only one (Zhu, Pg. 2, Col. 1-2, Para. 2-1, “The above conventional CDR approaches are single-target approaches that can only leverage the auxiliary information from a richer domain to help a sparser domain. However, each of the domains may be relatively richer in certain types of information (e.g., ratings, reviews, user profiles, item details, and tags); if such information can be leveraged well, it is possible to improve the recommendation performance in all domains simultaneously rather than in a single target domain only. To this end . . . multi-target CDR . . . have been proposed recently to improve the recommendation performance in dual/multiple domains”).
Additionally, He teaches . . . [acquiring a first explicit user feature] using a same feature encoding manner . . . [acquiring a first explicit resource feature] using a same feature encoding manner (Pg. 175, Col. 2, Para. 1, “we use only the identity of a user and an item as the input feature, transforming it to a binarized sparse vector with one-hot encoding”, where the “identity of a user” is a first explicit user feature and “an item” is a first explicit resource feature, which are both “encod[ed]” in a same “one-hot” feature encoding manner)
. . . wherein formats of [the first explicit user features] . . . are identical to each other . . . formats of [the first explicit resource features] . . . are identical to each other (Pg. 175, Col. 1, Fig. 2; Pg. 175, Col. 1-2, “we adopt a multi-layer representation to model a user–item interaction yui as shown in Figure 2 . . . The bottom input layer consists of two feature vectors vUu and vIi that describe user u and item i, respectively . . . where P ∈ RM×K and Q ∈ RN×K, denoting the latent factor matrix for users and items, respectively”, where the user encodings have identical formats of “binarized sparse vector with one-hot encoding” length “M” and the item encodings have identical formats of “binarized sparse vector with one-hot encoding” length “N”)
. . . concatenating [the first explicit user features with the first explicit resource features] . . . in a second concatenating manner (Pg. 176, Col. 2, Para. 2, “Since NCF adopts two pathways to model users and items, it is intuitive to combine the features of two pathways by concatenating them”, where this “concatenating” is a second manner because it is a distinct operation from that taught by the previously established prior art; see also Pg. 175, Col. 1, Fig. 2).
Before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to combine the determining an explicit feature based on first user data and first resource data, comprising: acquiring a first explicit user feature from the first user data for each of a plurality of target domains and a first explicit resource feature from the first resource data for each of a plurality of target domains and associating, for each of the plurality of target domains, the first explicit user feature and the first explicit resource feature to obtain the explicit feature of Kumar in view of Sun and Zhu with the acquiring first explicit user features and first explicit resource features using a same feature encoding manner, where the formats of the first explicit user features are identical and the formats of the first explicit resource features are identical, and where the first explicit user features are concatenated with the first explicit resource features in a second concatenating manner of He in order to model user-resource relationships in a simple and intuitive manner (He, Pg. 181, Col. 2, Para. 3, “We devised a general framework . . . that model user–item interactions in different ways. Our framework is simple and generic”; where using encodings of identical formats for each feature type allows for “easily adjusted” components, see He, Pg. 175, Col. 2, Para. 1, “we use only the identity of a user and an item as the input feature, transforming it to a binarized sparse vector with one-hot encoding. Note that with such a generic feature representation for inputs, our method can be easily adjusted to address the cold-start problem by using content features to represent users and items”; and where “concatenating” features is an “intuitive” approach, see He, Pg. 176, Col. 2, Para. 2, “Since NCF adopts two pathways to model users and items, it is intuitive to combine the features of two pathways by concatenating them”).
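For illustration of He's input representation (user and item identities one-hot encoded into binarized sparse vectors of fixed lengths M and N, then combined by concatenating the two pathways), a minimal Python sketch follows; the values of M, N, and the indices are assumptions for illustration only:

```python
# Illustrative sketch only: M and N are hypothetical user/item vocabulary
# sizes standing in for He's latent-factor dimensions of the input layer.
M, N = 5, 4  # number of users, number of items (assumed)

def one_hot(index, length):
    """Binarized sparse vector with a single 1 at `index` (one-hot encoding)."""
    v = [0] * length
    v[index] = 1
    return v

# Every user encoding has the identical format (length M), and every item
# encoding has the identical format (length N) -- the same encoding manner.
v_user = one_hot(2, M)  # identity of user u as the input feature
v_item = one_hot(1, N)  # identity of item i as the input feature

# Combining the features of the two pathways by concatenating them.
combined = v_user + v_item
print(combined)  # [0, 0, 1, 0, 0, 0, 1, 0, 0]
```

Because every vector of a given type shares one format, the encoding step is uniform across all users and across all items, which is what makes the concatenation well defined.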
Regarding Claim 31, the additional elements of the dependent claim are substantially the same as the limitations of Claim 3, therefore it is rejected under the same rationale.
Regarding Claim 36, the additional elements of the dependent claim are substantially the same as the limitations of Claim 3, therefore it is rejected under the same rationale.
Claims 5, 7, and 37 are rejected under 35 U.S.C. 103 as being unpatentable over Kumar in view of Sun and Xu et al. (hereinafter Xu) (“Deep Feature Aggregation Framework Driven by Graph Convolutional Network for Scene Classification in Remote Sensing”).
Regarding Claim 5, Kumar in view of Sun teach the method according to claim 1, wherein the concatenating the first implicit user feature and the second implicit user feature comprises (Kumar, Fig. 7, where the labeled “R” entry implicit features form the “similarities” between the first and second user data, which determine the correlation, and which as discussed above is the implicit feature, see Kumar, Para. [0053], “For example, as illustrated in matrix diagram 700, both charity 702 and merchant 704, share some similarities 706 in rows 1, m−2 and m. From these similarities, a similarity value may be computed and used for determining a recommendation score”; Kumar, Para. [0052], “the recommendation score would be based on a prediction where purchases and contributions are correlated”; and where, in view of Sun, the similarities are formed by concatenating the features to obtain the concatenated similarities for use in determining the implicit feature, see Sun, Pg. 9, Col. 2, Para. 1, “The hybrid recommendation decoder integrates hybrid information from both domains A and B to evaluate the recommendation probabilities of the candidate items. Specifically, it first gets the hybrid representation by concatenating the representation hB from domain B and the transformed representation h(A→B) from domain A to domain B”):
determining a weight corresponding to the second implicit user feature based on a number of the second user data of the overlapping user and a number of the first user data (Kumar, Para. [0053], “the weighted sum of the similarity score for a user and charity may be calculated based on Pu,i = ∑(Sij * Rij) / ∑Rij where R denotes the number of transactions of a customer with a merchant or charity”, where “Rij” is a weight; which corresponds to the second implicit user feature, Kumar, Fig. 7, where column “j 704” (second resource data) entries labeled “R”, such as row “1”, are second implicit resource features from purchase data “1” through “m” and are the user attribute corresponding to the overlapping users “u” in the “weighted sum” equation; which is based on a number of the second user data, i.e., the number “R” associated with the entries labeled “R” under column “j 704” in this instance, as indicated by “Rij”; and which is based on a number of the first user data, i.e., the number “R” associated with the entries labeled “R” under column “i” in this instance, as also indicated by “Rij”; see also Kumar, Fig. 7); and
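For illustration of the cited weighted-sum calculation, the following Python sketch computes a score of the form Pu,i = ∑(Sij * Rij) / ∑Rij; the sample similarity values and transaction counts are hypothetical assumptions, not data from Kumar:

```python
# Illustrative sketch only: hypothetical inputs to Kumar's weighted sum of
# the similarity score, P(u,i) = sum(S_ij * R_ij) / sum(R_ij), where R
# denotes the number of transactions of a customer with a merchant or charity.
S = [0.8, 0.5, 0.9]  # similarity values S_ij (assumed)
R = [3, 1, 2]        # transaction counts R_ij, acting as weights (assumed)

# Entries backed by more transactions contribute more to the score.
P_ui = sum(s * r for s, r in zip(S, R)) / sum(R)
print(round(P_ui, 4))  # 0.7833
```

The transaction counts Rij thus weight each similarity value, so an entry with more transactions pulls the score toward its similarity value.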
obtaining the concatenating user feature based on the first implicit user feature, the second implicit user feature . . . (Kumar, Fig. 7, where the labeled “R” entry implicit features form the “similarities” between the first and second user data, which determine the correlation that as discussed above is the implicit feature, see Kumar, Para. [0053], “For example, as illustrated in matrix diagram 700, both charity 702 and merchant 704, share some similarities 706 in rows 1, m−2 and m. From these similarities, a similarity value may be computed and used for determining a recommendation score”; Kumar, Para. [0052], “the recommendation score would be based on a prediction where purchases and contributions are correlated”; and where, in view of Sun, the similarities are formed by concatenating the features in a concatenating manner to obtain the concatenated similarities for use in determining the implicit feature, see Sun, Pg. 9, Col. 2, Para. 1, “The hybrid recommendation decoder integrates hybrid information from both domains A and B to evaluate the recommendation probabilities of the candidate items. Specifically, it first gets the hybrid representation by concatenating the representation hB from domain B and the transformed representation h(A→B) from domain A to domain B”).
The reasons of obviousness are discussed above in regard to Claim 1 and remain applicable here.
Kumar in view of Sun do not explicitly disclose . . . and the weight (where the weight is used to determine the weighted sum of the similarity score instead of the concatenating user feature).
However, Xu teaches . . . [obtaining a concatenating feature based on a first feature, a second feature,] and the weight (Pg. 5752, Col. 1, Abstract, “a weighted concatenation method is adopted to integrate multiple features (i.e., multilayer convolutional features and fully connected features) by introducing three weighting coefficients”, where multiple features require there be a first and second feature; see also Pg. 5756, Col. 1, Para. 6-7, “To balance the importance of three layers, two weight coefficients, α and β, are introduced for fusion . . . the weighted concatenation method is adopted to integrate FC features . . . and multilayer convolutional features”).
Before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to combine concatenating of the first implicit user feature and the second implicit user to obtain a concatenating feature, comprising: determining a weight corresponding to the second implicit user feature based on a number of the second user data of the overlapping user and a number of the first user data and then concatenating the first and second implicit user features of Kumar in view of Sun with the obtaining a concatenating feature based on a first feature, a second feature, and a weight of Xu in order to balance importance during concatenation (Xu, Pg. 5751, Col. 1, Abstract, “weighted concatenation method is adopted to integrate multiple features (i.e., multilayer convolutional features and fully connected features) by introducing three weighting coefficients”; Xu, Pg. 5756, Col. 1, Para. 6, “To balance the importance of three layers, two weight coefficients, α and β, are introduced for fusion, and the multilayer convolutional features”, where the weights of Kumar similarly balance importance by assigning more weight to implicit features with more transactions, see Kumar, Para. [0053], “the weighted sum of the similarity score for a user and charity may be calculated based on
P_u,i = ∑(S_ij * R_ij) / ∑ R_ij
where R denotes the number of transactions of a customer with a merchant or charity”), which contributes to improved performance (Xu, Pg. 5751, Col. 1, Abstract, “Experimental results performed . . . demonstrate that the proposed DFAGCN framework obtains more competitive performance than some state-of-the-art methods of scene classification in terms of OAs”).
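For clarity of the record, the transaction-weighted sum of similarity scores quoted above from Kumar, Para. [0053], may be sketched as follows; the sketch is illustrative only, and the function and variable names are hypothetical rather than drawn from the reference:

```python
def weighted_similarity_score(similarities, transaction_counts):
    """Compute P_u,i = sum(S_ij * R_ij) / sum(R_ij).

    similarities: the S_ij similarity values between item i and each item j.
    transaction_counts: R_ij, the number of transactions backing each S_ij,
    which serves as the weight.
    """
    numerator = sum(s * r for s, r in zip(similarities, transaction_counts))
    denominator = sum(transaction_counts)
    return numerator / denominator if denominator else 0.0

# Similarity values backed by more transactions carry more weight:
score = weighted_similarity_score([0.9, 0.2], [10, 1])
```

Consistent with Kumar, a similarity supported by ten transactions dominates one supported by a single transaction in the resulting score.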
Regarding Claim 7, Kumar in view of Sun and Xu teach the method according to claim 6, wherein the concatenating the first implicit resource feature and the second implicit resource feature comprises (Kumar, Fig. 7, where the labeled “R” entry implicit features form the “similarities” between the first and second user data, and, in view of Sun, the overlap is between both users and resources, see Sun, Pg. 2, Col. 1, Para. 1, “For example, as illustrated in Figure 1, videos like “Mickey Mouse” in the video domain might help to predict the next item “School House Fun” in the educational domain because they reflect the same interest in the Disney cartoon character “Mickey Mouse” presumably by a child in this family”, where the “video domain” and “educational domain” overlap with “Mickey Mouse” resources; see also Sun, Pg. 3, Fig. 1, which determine the correlation, and which as discussed above is the implicit feature, see Kumar, Para. [0053], “For example, as illustrated in matrix diagram 700, both charity 702 and merchant 704, share some similarities 706 in rows 1, m−2 and m. From these similarities, a similarity value may be computed and used for determining a recommendation score”; Kumar, Para. [0052], “the recommendation score would be based on a prediction where purchases and contributions are correlated”; and where, in view of Sun, the similarities are formed by concatenating the features to obtain the concatenated similarities for use in determining the implicit feature, see Sun, Pg. 9, Col. 2, Para. 1, “The hybrid recommendation decoder integrates hybrid information from both domains A and B to evaluate the recommendation probabilities of the candidate items. Specifically, it first gets the hybrid representation by concatenating the representation hB from domain B and the transformed representation h(A→B) from domain A to domain B”):
determining a weight corresponding to the second implicit resource feature based on a number of the second resource data of the overlapping resource and a number of the first resource data (Kumar, Para. [0053], “the weighted sum of the similarity score for a user and charity may be calculated based on
P_u,i = ∑(S_ij * R_ij) / ∑ R_ij
where R denotes the number of transactions of a customer with a merchant or charity”, where “Rij” is a weight; which corresponds to the second implicit resource feature, Kumar, Fig. 7, where column “j 704” (second resource data) entries labeled “R”, such as row “1” are second implicit resource features from purchase data “1” through “m”; which is based on a number of the second resource data, the number “R” associated with the entries labeled “R” under column “j 704” in this instance, as indicated by “Rij”; which is also based on a number of the first resource data, the number “R” associated with the entries labeled “R” under column “i” in this instance, as also indicated by “Rij”; see also Kumar, Fig. 7; where, in view of Sun, the overlap is between both users and resources, see Sun, Pg. 2, Col. 1, Para. 1, “For example, as illustrated in Figure 1, videos like “Mickey Mouse” in the video domain might help to predict the next item “School House Fun” in the educational domain because they reflect the same interest in the Disney cartoon character “Mickey Mouse” presumably by a child in this family”, where the “video domain” and “educational domain” overlap with “Mickey Mouse” resources; see also Sun, Pg. 3, Fig. 1); and
obtaining the concatenating resource feature based on the first implicit resource feature, the second implicit resource feature and the weight (Kumar, Fig. 7, where the labeled “R” entry implicit features form the “similarities” between the first and second user data, which determine the correlation that as discussed above is the implicit feature, see Kumar, Para. [0053], “For example, as illustrated in matrix diagram 700, both charity 702 and merchant 704, share some similarities 706 in rows 1, m−2 and m. From these similarities, a similarity value may be computed and used for determining a recommendation score”; Kumar, Para. [0052], “the recommendation score would be based on a prediction where purchases and contributions are correlated”; and where, in view of Sun, the similarities are formed by concatenating the features in a concatenating manner to obtain the concatenated similarities for use in determining the implicit feature, see Sun, Pg. 9, Col. 2, Para. 1, “The hybrid recommendation decoder integrates hybrid information from both domains A and B to evaluate the recommendation probabilities of the candidate items. Specifically, it first gets the hybrid representation by concatenating the representation hB from domain B and the transformed representation h(A→B) from domain A to domain B”; and where, in view of Xu, the determined weight is used for the concatenating, see Xu, Pg. 5752, Col. 1, Abstract, “a weighted concatenation method is adopted to integrate multiple features (i.e., multilayer convolutional features and fully connected features) by introducing three weighting coefficients”, where multiple features requires a first and second feature; see also Xu, Pg. 5756, Col. 1, Para. 6-7, “To balance the importance of three layers, two weight coefficients, α and β, are introduced for fusion . . . the weighted concatenation method is adopted to integrate FC features . . . and multilayer convolutional features”).
The reasons of obviousness are discussed above in regard to Claims 1 and 6, for the combination with Sun, and Claim 5, for the combination with Xu, and remain applicable here.
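For clarity of the record, the weighted concatenation described by Xu (weighting coefficients introduced to balance the importance of features prior to concatenation) may be sketched as follows; the sketch is illustrative only, and the function and parameter names are hypothetical rather than drawn from the reference:

```python
import numpy as np

def weighted_concatenate(first_feature, second_feature, weight):
    """Scale the second feature by its weighting coefficient, then
    concatenate it with the first feature into a single fused vector."""
    return np.concatenate([np.asarray(first_feature),
                           weight * np.asarray(second_feature)])

# The weight rebalances the contribution of the second feature:
fused = weighted_concatenate([1.0, 2.0], [3.0, 4.0], weight=0.5)
# fused is the 4-dimensional vector [1.0, 2.0, 1.5, 2.0]
```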
Regarding Claim 37, the additional elements of the dependent claim are substantially the same as the limitations of Claim 5, therefore it is rejected under the same rationale.
Claims 8-9 are rejected under 35 U.S.C. 103 as being unpatentable over Kumar in view of Sun and Fan et al. (hereinafter Fan) (“Graph Neural Networks for Social Recommendation”).
Regarding Claim 8, Kumar in view of Sun teach the method according to claim 1, wherein the determining an implicit feature comprises (Kumar, Fig. 7; Kumar, Para. [0052], “FIGS. 7 illustrates matrix diagram 700 of item-to-item collaborative filtering used to obtain a recommendation score. In this item-to-item collaborative approach, a cross domain approach is considered where, for example, a merchant and a charity are considered. Therefore, in this ongoing charity example, the recommendation score would be based on a prediction where purchases and contributions are correlated. Thus, prediction may be made that users who purchased with merchant X also donate to charity Y”, where the “matrix diagram 700” represents the implicit feature of “correlate[ion]” between “purchases and contributions”; see also Kumar, Para. [0041], “The first model, which includes the use of cross-collaborative filtering 502 is a model designed to consider not only a user and his/her transactions but consider transactions across domains. For example, transactional information about a merchant and a charity are considered . . . To illustrate this, consider a user who ordered online pet food at merchant X, then given that the user purchased pet food at merchant X, there is a likelihood that the user will donate to the Friends of Animals charity”, where the implicit feature is determined),
in a case of determining that the target domain and the source domain have an overlapping user according to the first user data and the second user data (Kumar, Fig. 7, where references “706a” through “706c” correspond with cases where “charity 702” donations (target domain) and “merchant 704” transactions (source domain) had overlapping users, see generally Kumar, Para. [0041], “one focus of the cross-collaborative filtering model 502 may include making a recommendation based in part on an association or prediction regarding people who made a purchase with a particular merchant and also donated to a particular charity”):
extracting a first joint implicit feature from the first user data and the first resource data . . . (Kumar, Fig. 7, where column “i” entries labeled “R”, such as row “1” are first joint implicit features from purchase data “1” through “m” (first user data when under column “i”) and “charity” donations (first resource data), where “computed and used for determining a recommendation score” is within the broadest reasonable interpretation of extracting, see Kumar, Para. [0053], “For example, as illustrated in matrix diagram 700, both charity 702 and merchant 704, share some similarities 706 in rows 1, m−2 and m. From these similarities, a similarity value may be computed and used for determining a recommendation score”),
extracting a second joint implicit feature . . . based on the first resource data and second user data of the overlapping user (Kumar, Fig. 7, where column “j 704” entries labeled “R”, such as row “1” are second joint implicit features from purchase data “1” through “m” (second user data when under column “j 704”) based on “charity” donations (first resource data) because there must be a donation co-purchase, where “computed and used for determining a recommendation score” is within the broadest reasonable interpretation of extracting, see Kumar, Para. [0053], “For example, as illustrated in matrix diagram 700, both charity 702 and merchant 704, share some similarities 706 in rows 1, m−2 and m. From these similarities, a similarity value may be computed and used for determining a recommendation score”); and
determining the implicit feature based on the first joint implicit feature and the second joint implicit feature (Kumar, Fig. 7, where the labeled “R” entry joint implicit features form the “similarities” which determine the correlation, which as discussed above is the implicit feature, see Kumar, Para. [0053], “For example, as illustrated in matrix diagram 700, both charity 702 and merchant 704, share some similarities 706 in rows 1, m−2 and m. From these similarities, a similarity value may be computed and used for determining a recommendation score”; Kumar, Para. [0052], “the recommendation score would be based on a prediction where purchases and contributions are correlated”).
Kumar in view of Sun do not explicitly disclose . . . using a graph neural network . . . using the graph neural network . . . .
However, Fan teaches . . . [extracting a first joint implicit feature] using a graph neural network . . . [extracting a second joint implicit feature] using the graph neural network [for use in recommendations] . . . (Pg. 417, Col. 1, Abstract, “in this paper, we present a novel graph neural network framework (GraphRec) for social recommendations. In particular, we provide a principled approach to jointly capture interactions and opinions in the user-item graph”, where “interactions” are within the broadest reasonable interpretation of features and the pluralization of “interactions” requires a first and second).
Before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to combine the determining an implicit feature based on first user data, first resource data, second user data, and second resource data, comprising: determining a user overlap between first and second resource data, extracting a first joint implicit feature from first user data and first resource data, extracting a second joint implicit feature based on first resource data and second user data, and determining the implicit feature based on the joint features of Kumar in view of Sun with the extracting a first and second joint implicit feature using a graph neural network for use in recommendations of Fan in order to determine recommendations using a method with demonstrated power and potential (Fan, Pg. 417, Col. 1, Abstract, “In recent years, Graph Neural Networks (GNNs), which can naturally integrate node information and topological structure, have been demonstrated to be powerful in learning on graph data. These advantages of GNNs provide great potential to advance social recommendation since data in social recommender systems can be represented as user-user social graph and user-item graph”, where the data of Kumar can be represented in graph form, see Kumar, Fig. 6).
Regarding Claim 9, Kumar in view of Sun and Fan teach the method according to claim 1, wherein the determining an implicit feature comprises (Kumar, Fig. 7; Kumar, Para. [0052], “FIGS. 7 illustrates matrix diagram 700 of item-to-item collaborative filtering used to obtain a recommendation score. In this item-to-item collaborative approach, a cross domain approach is considered where, for example, a merchant and a charity are considered. Therefore, in this ongoing charity example, the recommendation score would be based on a prediction where purchases and contributions are correlated. Thus, prediction may be made that users who purchased with merchant X also donate to charity Y”, where the “matrix diagram 700” represents the implicit feature of “correlate[ion]” between “purchases and contributions”; see also Kumar, Para. [0041], “The first model, which includes the use of cross-collaborative filtering 502 is a model designed to consider not only a user and his/her transactions but consider transactions across domains. For example, transactional information about a merchant and a charity are considered . . . To illustrate this, consider a user who ordered online pet food at merchant X, then given that the user purchased pet food at merchant X, there is a likelihood that the user will donate to the Friends of Animals charity”, where the implicit feature is determined),
in a case of determining that the target domain and the source domain have an overlapping resource according to the first resource data and the second resource data (Kumar, Fig. 7, where references “706a” through “706c” correspond with cases where “charity 702” donations (target domain) and “merchant 704” transactions (source domain) had overlap, where, in view of Sun, the overlap is between resources, see Sun, Pg. 2, Col. 1, Para. 1, “For example, as illustrated in Figure 1, videos like “Mickey Mouse” in the video domain might help to predict the next item “School House Fun” in the educational domain because they reflect the same interest in the Disney cartoon character “Mickey Mouse” presumably by a child in this family”, where the “video domain” and “educational domain” overlap with “Mickey Mouse” resources; see also Sun, Pg. 3, Fig. 1):
extracting a first joint implicit feature from the first user data and the first resource data using a graph neural network (Kumar, Fig. 7, where column “i” entries labeled “R”, such as row “1” are first joint implicit features from purchase data “1” through “m” (first user data when under column “i”) and “charity” donations (first resource data), where “computed and used for determining a recommendation score” is within the broadest reasonable interpretation of extracting, see Kumar, Para. [0053], “For example, as illustrated in matrix diagram 700, both charity 702 and merchant 704, share some similarities 706 in rows 1, m−2 and m. From these similarities, a similarity value may be computed and used for determining a recommendation score”, where, in view of Fan, a “graph neural network” is used, see Fan, Pg. 417, Col. 1, Abstract, “in this paper, we present a novel graph neural network framework (GraphRec) for social recommendations. In particular, we provide a principled approach to jointly capture interactions and opinions in the user-item graph”);
extracting a second joint implicit feature using the graph neural network based on the first user data and second resource data of the overlapping resource (Kumar, Fig. 7, where column “j 704” entries labeled “R”, such as row “1” are second joint implicit features from purchase data “1” through “m” for “merchant” transactions (second resource data) and based on first user data because there must be an overlapping co-donation, Kumar, Para. [0053], “For example, as illustrated in matrix diagram 700, both charity 702 and merchant 704, share some similarities 706 in rows 1, m−2 and m. From these similarities, a similarity value may be computed and used for determining a recommendation score”, where, in view of Fan, a “graph neural network” is used, see Fan, Pg. 417, Col. 1, Abstract, “in this paper, we present a novel graph neural network framework (GraphRec) for social recommendations. In particular, we provide a principled approach to jointly capture interactions and opinions in the user-item graph”); and
determining the implicit feature based on the first joint implicit feature and the second joint implicit feature (Kumar, Fig. 7, where the labeled “R” joint implicit features form the “similarities” which determine the correlation, which as discussed above is the implicit feature, see Kumar, Para. [0053], “For example, as illustrated in matrix diagram 700, both charity 702 and merchant 704, share some similarities 706 in rows 1, m−2 and m. From these similarities, a similarity value may be computed and used for determining a recommendation score”; Kumar, Para. [0052], “the recommendation score would be based on a prediction where purchases and contributions are correlated”, where in view of Sun, “similarities 706” must satisfy the additional requirement of overlapping resources, in addition to the previous requirement of overlapping users, see Sun, Pg. 2, Col. 1, Para. 1, “For example, as illustrated in Figure 1, videos like “Mickey Mouse” in the video domain might help to predict the next item “School House Fun” in the educational domain because they reflect the same interest in the Disney cartoon character “Mickey Mouse” presumably by a child in this family”, where the “video domain” and “educational domain” overlap with “Mickey Mouse” resources; see also Sun, Pg. 3, Fig. 1).
The reasons of obviousness are discussed above in regard to Claim 8, for the combination with Fan, and Claim 6, for the combination with Sun, and remain applicable here.
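For clarity of the record, the kind of user-item graph aggregation performed by a graph neural network such as Fan's GraphRec may be sketched as follows; this single mean-aggregation step is a deliberate simplification of GraphRec (which additionally captures opinions and attention weights), and all names and values are hypothetical:

```python
import numpy as np

def aggregate_user_feature(item_embeddings, interacted_item_ids, weight_matrix):
    """One message-passing step: average the embeddings of the items a user
    interacted with, then apply a learned linear transform and a ReLU."""
    neighborhood = item_embeddings[interacted_item_ids].mean(axis=0)
    return np.maximum(weight_matrix @ neighborhood, 0.0)

rng = np.random.default_rng(0)
items = rng.normal(size=(5, 4))   # 5 items with 4-dimensional embeddings
W = rng.normal(size=(4, 4))       # learned transform (random stand-in here)
user_feature = aggregate_user_feature(items, [0, 2, 3], W)
```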
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Kumar in view of Sun and Chen et al. (hereinafter Chen) (“Enhancing Explicit and Implicit Feature Interactions via Information Sharing for Parallel Deep CTR Models”).
Regarding Claim 11, Kumar in view of Sun teach the method according to claim 2, wherein the training the ranking model based on the explicit feature and the implicit feature comprises: . . . ; and training the ranking model . . . (Kumar, Fig. 7; Kumar, Para. [0053], “In one embodiment, a similarity matrix or matrix diagram 700 may be created based on, for example, co-purchases or other similarity such that [equation] is used as for model training”, where, as discussed above, “matrix diagram 700” includes both the implicit and explicit features).
Kumar does not explicitly disclose . . . concatenating the explicit feature and the implicit feature in a second concatenating manner to obtain a first concatenating feature, and acquiring a sample label corresponding to the first concatenating feature . . . based on the first concatenating feature and the corresponding sample label.
However, Chen teaches . . . concatenating the explicit feature and the implicit feature in a second concatenating manner to obtain a first concatenating feature (Pg. 4, Col. 1-2, Para. 3-1, “The existing parallel deep CTR models learn explicit and implicit feature interactions separately . . . To overcome this limitation, we introduce a dense fusion strategy, which is implemented by our proposed bridge module, to capture the layer-wise interactive signals between two parallel networks . . . that takes two vector as input. . . Concatenation concatenates the input vectors”, where “explicit” and “implicit” “input” “vectors” are “concatenate[ed]”, which is in a second manner because it is a distinct operation from the manner previously disclosed in the prior art; see also Pg. 5, Col. 1, Para. 1, “Deep CTR models with parallel structure exploit explicit and implicit features simultaneously based on the shared embeddings. Explicit feature interactions are usually modeled with pre-defined interaction functions for efficiently exploring bounded-degree interaction (e.g., cross network in DCN), while implicit feature interactions are mostly learned via fully connected layers”),
and acquiring a sample label corresponding to the first concatenating feature . . . (Pg. 5, Col. 1, Para. 4, “Assume the outputs of the 𝐿-th cross layer, deep layer and bridge module are x𝐿, h𝐿 and f𝐿, the result of EDCN is represented as: 𝑦ˆ = Sigmoid(wT[x𝐿, h𝐿, f𝐿] + b) . . . where 𝑦𝑖 and 𝑦ˆ𝑖 are the ground truth label and estimated value of the 𝑖-th instance, respectively”, where “x𝐿” corresponds to explicit features and “h𝐿” correspond to implicit features, see Pg. 3, Col. 2, Para. 4, “cross network and deep network for explicit and implicit feature interaction modeling. Specifically, the cross layers and deep layers in these two networks are respectively denoted as: x𝑙 . . . h𝑙”, which are concatenated as “[x𝐿, h𝐿, f𝐿]” and correspond to the label “𝑦𝑖”)
[training] . . . based on the first concatenating feature and the corresponding sample label (Pg. 5, Col. 1, Para. 4, “Assume the outputs of the 𝐿-th cross layer, deep layer and bridge module are x𝐿, h𝐿 and f𝐿, the result of EDCN is represented as: 𝑦ˆ = Sigmoid(wT[x𝐿, h𝐿, f𝐿] + b) . . . The loss function is the widely-used LogLoss with a regularization term . . . where 𝑦𝑖 and 𝑦ˆ𝑖 are the ground truth label and estimated value of the 𝑖-th instance, respectively. 𝑁 is the total number of training instances”, where the “training” is based on the “loss” between the label and the “estimated value” using the concatenated features).
Before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to combine the training of a ranking model based on an explicit feature and an implicit feature of Kumar in view of Sun with the concatenation of an explicit feature and an implicit feature as a concatenated feature, acquiring a label corresponding to the concatenated feature, and the training of a model using the label and the concatenated feature of Chen in order to gain stronger interactive signals between explicit and implicit features, which improves model performance (Chen, Pg. 4, Col. 1, Para. 3, “The existing parallel deep CTR models learn explicit and implicit feature interactions separately via two parallel sub-networks respectively . . . networks are performed separately and independently, meaning no information fusion until the last layer, which is called late fusion. The late fusion strategy fails to capture the correlation between two parallel networks in the intermediate layers, weakening the interactive signals between explicit and implicit feature interaction. Moreover, the lengthy updating progress in each sub-networks may lead to skewed gradients during the backward propagation [9], thus hindering the learning procedure of both networks. To overcome this limitation, we introduce a dense fusion”; Chen, Pg. 9, Col. 1, “We conduct extensive experiments on two benchmark real-world datasets and an industrial dataset to demonstrate the effectiveness and compatibility of EDCN. Besides, a one-month online A/B test in the Huawei advertising platform shows that two modules improve the base model by 7.30% and 4.85% in terms of CTR and eCPM”).
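For clarity of the record, the prediction head quoted above from Chen, 𝑦ˆ = Sigmoid(wT[x𝐿, h𝐿, f𝐿] + b), trained against the ground truth label with LogLoss, may be sketched as follows; the sketch is illustrative only, and the parameter values are hypothetical stand-ins:

```python
import numpy as np

def predict(x_explicit, h_implicit, f_bridge, w, b):
    """Concatenate the explicit, implicit, and bridge outputs, then apply
    a sigmoid over the linear combination: Sigmoid(w^T [x, h, f] + b)."""
    concatenated = np.concatenate([x_explicit, h_implicit, f_bridge])
    return 1.0 / (1.0 + np.exp(-(w @ concatenated + b)))

def log_loss(label, y_hat, eps=1e-12):
    """LogLoss between the ground truth label and the estimated value."""
    y_hat = np.clip(y_hat, eps, 1 - eps)
    return -(label * np.log(y_hat) + (1 - label) * np.log(1 - y_hat))

x = np.array([0.2, 0.4])   # explicit-interaction output (x_L)
h = np.array([0.1])        # implicit-interaction output (h_L)
f = np.array([0.3])        # bridge-module output (f_L)
w = np.zeros(4); b = 0.0   # untrained parameters for illustration
y_hat = predict(x, h, f, w, b)   # sigmoid(0) = 0.5 with zero weights
loss = log_loss(1.0, y_hat)
```

Training then minimizes this loss (plus a regularization term, per Chen) over all instances.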
Response to Arguments
Applicant's arguments filed on December 23, 2025 have been fully considered. Each argument is addressed in detail below.
I. Applicant argues the objections to the claims should be withdrawn (Applicant’s Remarks, 12/23/2025, Pg. 11, Section “Claim Objections”).
Applicant’s amendments have overcome each and every objection to the claims, as previously set forth in the October 2nd, 2025 Office Action. As a result, these objections have been withdrawn.
II. Applicant argues the rejections to the claims, under 35 USC § 112, should be withdrawn (Applicant’s Remarks, 12/23/2025, Pg. 11, Section “Rejection Under 35 U.S.C. § 112”).
Applicant’s amendments have overcome each and every rejection to the claims, under 35 USC § 112, as previously set forth in the October 2nd, 2025 Office Action. As a result, these rejections have been withdrawn.
III. Applicant argues the rejections to the claims, under 35 USC § 101, should be withdrawn (Applicant’s Remarks, 12/23/2025, Pg. 11-12, Section “Rejection Under 35 U.S.C. § 101”).
Specifically, Applicant argues the subject matter of claim 1 constitutes a practical application that is significantly more than an abstract idea because it provides a technical processing flow solution to address challenges in cross-domain machine learning through a series of specific technical steps.
As per the Applicant, the subject matter is of a collaborative filtering method where “a first implicit user feature and a second implicit user feature [are extracted] from the target domain and the source domain respectively” (Pg. 11-12, Para. 7-1) and the specific technical steps include “Firstly, this method can convert sparse data in the target domain into implicit features with intensive information, which is a data augmentation technique that overcomes sparsity. Secondly, auxiliary features can be extracted from the source domain in a similar way, but avoiding direct introduction that may cause inconsistent sample distribution between the source and target domains, leading to the phenomenon of negative transfer” (Pg. 12, Para. 1) (internal quotation marks omitted).
Additionally, as also per the Applicant, these specific steps provide for practical, concrete, and beneficial technical effects in the field of computer technology. Specifically, “[b]y concatenating the above two types of features, the introduction of source domain features (second implicit user feature) effectively supplements the information content of the originally sparse target domain training data, directly expands the representation space of the target domain samples, and/or alleviates data sparsity. On the other hand, this fusion at the implicit feature level, rather than merging at the raw data level, allows source domain information to be utilized without distorting the distribution of target domain data, helping to avoid risk of negative transfer” (Pg. 12, Para. 2) (internal quotation marks omitted).
According to MPEP 2106.04(d)(1), “A claim reciting a judicial exception is not directed to the judicial exception if it also recites additional elements demonstrating that the claim as a whole integrates the exception into a practical application. One way to demonstrate such integration is when the claimed invention improves the functioning of a computer or improves another technology or technical field . . . The specification need not explicitly set forth the improvement, but it must describe the invention such that the improvement would be apparent to one of ordinary skill in the art. Conversely, if the specification explicitly sets forth an improvement but in a conclusory manner (i.e., a bare assertion of an improvement without the detail necessary to be apparent to a person of ordinary skill in the art), the examiner should not determine the claim improves technology. Second, if the specification sets forth an improvement in technology, the claim must be evaluated to ensure that the claim itself reflects the disclosed improvement”.
According to MPEP 2106.05(f), “Another consideration when determining whether a claim integrates a judicial exception into a practical application in Step 2A Prong Two . . . [is that] A claim having broad applicability across many fields of endeavor may not provide meaningful limitations that integrate a judicial exception into a practical application”.
Here, the asserted technical improvements of alleviating data sparsity and avoiding the risk of negative transfer are recited in the specification, but only in a conclusory manner (see, for example, Applicant’s Spec. Para. [0039] and [0012], where the asserted benefits are concluded to result from the claimed subject matter, but without sufficient detail). As a result, the specification is not determined to disclose an improvement (see MPEP 2106.04(d)(1)). Additionally, or alternatively, the extracting and concatenating of features from a source dataset for use in a target dataset have broad applicability across many fields of endeavor. For example, these approaches are broadly used for preprocessing across a wide variety of data analytics and machine learning applications. As a result, this broad applicability across many fields of endeavor prevents the claimed subject matter from constituting integration into a practical application (MPEP 2106.05(f)). Finally, it is worth pointing out that the limitations recited by the Applicant, such as the extracting and concatenating, are themselves mental processes. As a result, they definitionally cannot constitute an additional element (see MPEP 2106.04(d)(1)).
As a result, the arguments are not persuasive.
IV. Applicant argues the rejections to the claims, under 35 USC § 102 and 35 USC § 103, should be withdrawn (Applicant’s Remarks, 12/23/2025, Pg. 12-20, Sections “Rejection Under 35 U.S.C. § 102” and “Rejection Under 35 U.S.C. § 103”).
In response to Applicant’s amendments, the previously communicated rejections under 35 U.S.C. § 102 and 35 U.S.C. § 103 have been withdrawn. However, Applicant’s arguments are not persuasive in light of the new grounds for rejection, under 35 U.S.C. § 103, discussed in detail above. The new grounds of rejection rely on new combinations of the existing prior art of record to teach the new combinations of elements in the amended claims, which were not presented in these arrangements in any of the previously presented claims. As a result, Applicant’s arguments against the previously communicated rejections under 35 U.S.C. § 102 and 35 U.S.C. § 103 are rendered moot.
However, for clarity of the record and in the interest of compact prosecution, arguments still relevant to the new grounds of rejection are discussed below. Relevant MPEP excerpts are reproduced here:
According to MPEP 2111, “During patent examination, the pending claims must be given their broadest reasonable interpretation consistent with the specification” (internal quotation marks omitted) (see also Phillips v. AWH Corp., 415 F.3d 1303, 1316, 75 USPQ2d 1321, 1329 (Fed. Cir. 2005)).
Additionally, according to MPEP 2145, “Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims” (see also In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993)).
Applicant argues that Kumar fails to disclose “A method for training a ranking model, the method . . . comprising . . . determining an implicit feature based on the first user data, the first resource data, the second user data and the second resource data, wherein the implicit feature comprises a feature vector without specific physical meaning and wherein the determining the implicit feature comprises, in a case of determining that the target domain and the source domain have an overlapping user according to the first user data and the second user data: extracting a first implicit user feature from the first user data using a collaborative filtering manner, extracting a second implicit user feature from the second user data of the overlapping user using the collaborative filtering manner, concatenating the first implicit user feature and the second implicit user feature in a first concatenating manner to obtain a concatenating user feature, and determining the implicit feature based on the concatenating user feature; and training the ranking model based on the implicit feature, wherein the ranking model is configured to recommend a resource to a user of the target domain” (Claim 1; see also Claims 27-28).
First, Applicant argues the disclosure of Kumar is insufficient to teach the subject matter of Claim 1 because “Kumar’s scheme is explicitly similarity calculation, rather than implicit feature extraction” (Pg. 13, Para. 3). However, as discussed above and reiterated with further explanation here, the act of determining the “matrix diagram 700” of “correlate[ion]” between “purchases and contributions” is within the broadest reasonable interpretation of “determining an implicit feature based on the first user data, the first resource data, the second user data and the second resource data” (Claim 1, ln. 5-6) because the correlations are implicitly present when analyzing the data, but not explicitly contained in attributes of the data (see Kumar, Fig. 7; Para. [0052], “FIGS. 7 illustrates matrix diagram 700 of item-to-item collaborative filtering used to obtain a recommendation score. In this item-to-item collaborative approach, a cross domain approach is considered where, for example, a merchant and a charity are considered. Therefore, in this ongoing charity example, the recommendation score would be based on a prediction where purchases and contributions are correlated. Thus, prediction may be made that users who purchased with merchant X also donate to charity Y”, where the “matrix diagram 700” represents the implicit feature of “correlate[ion]” between “purchases and contributions”; see also Para. [0041], “The first model, which includes the use of cross-collaborative filtering 502 is a model designed to consider not only a user and his/her transactions but consider transactions across domains. For example, transactional information about a merchant and a charity are considered . . . 
To illustrate this, consider a user who ordered online pet food at merchant X, then given that the user purchased pet food at merchant X, there is a likelihood that the user will donate to the Friends of Animals charity”, where the implicit feature is determined based on the user and resource data for both the target and source domains). As a result, the argument is not persuasive (MPEP 2111).
Additionally, Applicant argues that the implicit feature of Claim 1 “is subsequently used as an input to the ranking model to enrich and enhance the training samples of the ranking model, and does not directly represent any interpretable similarity or interaction count” (Pg. 13, Para. 3). However, these limitations are not positively recited in claim 1 and, even if recited in the specification, are not read into the claim. As a result, the argument is not persuasive (MPEP 2145).
Similarly, Applicant argues that “Kumar disclos[es] a scheme of determining an interpretable similarity score based on explicit statistics” and, therefore, does not teach the implicit feature vector of the claims (Pg. 13, Para. 3). However, as above, prohibitions against using explicit statistical measures to determine the implicit feature or prohibitions against the implicit feature comprising similarity scores are not positively recited in claim 1 and, even if recited in the specification, are not read into the claim. As a result, the argument is not persuasive (MPEP 2145).
Next, Applicant argues the matrix disclosed by Kumar “denotes the number of transactions of a customer with a merchant or charity, which has a clear physical meaning” (Pg. 13, Para. 3). However, as discussed above, the disclosure of Kumar is within the broadest reasonable interpretation of an “implicit feature compris[ing] a feature vector without specific physical meaning” (Claim 1, ln. 6-7) (Fig. 7, where, as discussed above, the “matrix diagram 700” is a representation of the implicit feature of “correlate[ion]” between “purchases and contributions”, and where, as depicted in Fig. 7 and generally understood by one of ordinary skill in the art, the matrix “700” comprises vectors, which are feature vectors when representing the implicit feature, see Para. [0052], “FIGS. 7 illustrates matrix diagram 700 of item-to-item collaborative filtering used to obtain a recommendation score. In this item-to-item collaborative approach, a cross domain approach is considered where, for example, a merchant and a charity are considered. Therefore, in this ongoing charity example, the recommendation score would be based on a prediction where purchases and contributions are correlated. Thus, prediction may be made that users who purchased with merchant X also donate to charity Y”; see also Fig. 7, where the feature vectors can have a meaning of “charity 702” or “704 merchant” associated with purchases or contributions of users “1” through “m”, which is within the broadest reasonable meaning of without a specific physical meaning because: 1. Records of purchases represent actions without a specific tangible form or physical movement, see Para. [0041], “consider a user who ordered online pet food at merchant X, then given that the user purchased pet food at merchant X, there is a likelihood that the user will donate to the Friends of Animals charity”, where the purchase can be virtual, as opposed to physical, and 2. 
charities and merchants may represent entities with no physical location or multiple physical locations, such as the “Red Cross”, so merely associating a user with a contribution to an organization does not represent a specific physical meaning, see Para. [0017], “a charity to contribute to, the YMCA, United Way, Red Cross, etc”). As a result, the argument is not persuasive (MPEP 2111).
Second, Applicant argues that Kumar fails to disclose “specific steps for extracting implicit features of users in the source domain and target domain separately” because it does not disclose “a learning process for two different domains and the same batch of users, producing two feature vectors” (Pg. 14, Para. 2). However, these limitations are not positively recited in claim 1 and, even if recited in the specification, are not read into the claim. As a result, the argument is not persuasive (MPEP 2145).
Third, Applicant argues that “the cited portions of Kumar do not disclose use of collaborative filtering for feature extractors” because “Kumar's collaborative filtering is not a specific tool for feature vector extraction . . . [whose] implementation is based on co-occurrence statistics for similarity calculation, which belongs to a similarity measurement method in traditional collaborative filtering, rather than using collaborative filtering to generate embedding vectors” (Pg. 14, Para. 3) (internal quotation marks omitted). However, prohibitions against using co-occurrence statistics for similarity calculation or requirements for the use of a specific tool for feature extraction or generation of embedding vectors are not positively recited in claim 1 and, even if recited in the specification, are not read into the claim. As a result, the argument is not persuasive (MPEP 2145).
Fourth, Applicant argues “there is a fundamental difference in the processing logic of "overlapping users" between the cited portions of Kumar and claim 1” because “the logic of claim 1 is to first determine whether there are overlapping users, and if so, trigger a specific feature extraction and fusion process”, whereas “Kumar's logic is that the entire model is naturally built on the premise of overlapping users, because the similarity calculation itself requires users to have behavior in both of the two domains. It does not have an independent step of identifying overlapping users, nor does it have different processing branches based on whether they overlap or not. The bold boxes in the matrix are only used to visually emphasize certain high similarity pairs, and do not represent an independent feature extraction process” (Pg. 14-15, Para. 4-1). However, conditional branching processing logic is not positively recited in claim 1 and, even if recited in the specification, is not read into the claim. As a result, the argument is not persuasive (MPEP 2145). Instead, the claim recites “in a case of determining that the target domain and the source domain have an overlapping user according to the first user data and the second user data: . . . [steps of the method occur]” (Claim 1, ln. 8-15). This limitation requires that the steps of the method occur in the specific case outlined above, with no limitations in this claim positively requiring different cases with alternative steps. As a result, even if Applicant’s characterization of Kumar, as logic based on “the premise of overlapping users” where “the similarity calculation itself requires users to have behavior in both of the two domains” and without an “independent step of identifying overlapping users”, were accurate, it would still be within the broadest reasonable interpretation of the limitations because the steps occur in the specific case, as evidenced by the bold boxes in the matrix. 
As a result, the argument is not persuasive (MPEP 2111).
Additionally, Applicant argues that the cited excerpts of Zhu, He, Sun, Xu, Fan, and Chen, as relied upon in the October 2nd, 2025 Office Action, fail to overcome the above-asserted defects in the teachings of Kumar (Pg. 15-20, Para. 3-3). However, as discussed in detail above, the arguments in favor of the asserted defects in the teachings of Kumar are not persuasive. Additionally, none of Zhu, He, Sun, Xu, Fan, or Chen is relied upon to overcome the above-asserted defects in either the October 2nd, 2025 Office Action or this Office Action.
As a result, the arguments are not persuasive.
Conclusion
Applicant's amendments necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MATTHEW BRYCE GOLAN whose telephone number is (571)272-5159. The examiner can normally be reached Monday through Friday, 8:00 AM to 5:00 PM ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alexey Shmatov can be reached at (571) 270-3428. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MATTHEW BRYCE GOLAN/Examiner, Art Unit 2123
/ALEXEY SHMATOV/Supervisory Patent Examiner, Art Unit 2123