DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
The present application, 17894155, filed 08/23/2022, claims foreign priority to TW111125512, filed 07/07/2022.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 03/16/2026 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Objections
Claim 7 is objected to under 37 CFR 1.71(a), which requires “full, clear, concise, and exact terms” as to enable any person skilled in the art or science to which the invention or discovery appertains, or with which it is most nearly connected, to make and use the same. The following should be corrected.
A. In claim 7 line 4, “categorical of neural network computation” should read “categorical neural network computation” instead for better clarity.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 8-21 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 8 recites “wherein the categorical feature vectors are added to a first matrix” in lines 3-4. This limitation is unclear because it merely states a function (that the categorical feature vectors are somehow added to a first matrix) that is not performed by any structure recited in the claim. It is unclear whether the recited function follows from the structure recited in the claim, i.e., the first memory, the first interaction computation circuit, the second memory, and the second interaction computation circuit, so it is unclear whether the function requires some other structure or is simply a result of operating the device in a certain manner. Further, the specification is silent as to which structure of the device performs the claimed function. Further clarification is required. See MPEP 2173.05(g) for more information. Claims 9-14 inherit the same deficiency as claim 8 by reason of dependence.
Claim 15 recites “wherein a plurality of categorical feature vectors are added to the first matrix” in lines 3-4. This limitation is unclear because it merely states a function (that a plurality of categorical feature vectors are somehow added to a first matrix) that is not performed by any structure recited in the claim. It is unclear whether the recited function follows from the structure recited in the claim, i.e., the memory and the processor, so it is unclear whether the function requires some other structure or is simply a result of operating the device in a certain manner. Further, the specification is silent as to which structure of the device performs the claimed function. Further clarification is required. See MPEP 2173.05(g) for more information. Claims 16-21 inherit the same deficiency as claim 15 by reason of dependence.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-21 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Under Step 1, claims 1-7 recite a series of steps and are, therefore, directed to a process. Claims 8-21 recite a device and are, therefore, directed to a machine.
Under Step 2A prong 1, claim 1 recites
A total interaction method, configured to compute an interaction relationship between a plurality of features in a recommendation system, comprising:
adding a plurality of categorical feature vectors to a first matrix, wherein each of the categorical feature vectors comprises a plurality of latent features;
performing one of categorical feature interaction computation and latent feature interaction computation on the first matrix to generate a second matrix;
transposing the second matrix to generate a transposed matrix; and
performing the other one of the categorical feature interaction computation and the latent feature interaction computation on the transposed matrix to generate a total interaction result.
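For purposes of illustration only, the four recited steps can be sketched in the following Python outline; the function names, and the identity stand-ins passed for the two interaction computations, are assumptions for illustration and are not part of the claim or the record:

```python
import numpy as np

def total_interaction(categorical_vectors, interaction_a, interaction_b):
    """Sketch of the four recited steps. interaction_a / interaction_b stand
    in for the categorical and latent feature interaction computations, in
    either order (the claim leaves the order open)."""
    # Step 1: add the k categorical feature vectors (each with d latent
    # features) to a first matrix of shape (k, d).
    first_matrix = np.stack(categorical_vectors)
    # Step 2: perform one of the two interaction computations.
    second_matrix = interaction_a(first_matrix)
    # Step 3: transpose the second matrix.
    transposed = second_matrix.T
    # Step 4: perform the other interaction computation on the transpose.
    return interaction_b(transposed)

# Identity stand-ins make the data flow visible: two vectors of four
# latent features in, a 4 x 2 total interaction result out.
result = total_interaction([np.arange(4.0), np.ones(4)],
                           interaction_a=lambda m: m,
                           interaction_b=lambda m: m)
```

As the identity stand-ins show, each step is ordinary matrix manipulation that could equally be carried out with pen and paper.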
The above limitations of adding a plurality of categorical feature vectors, performing one of categorical feature interaction computation and latent feature interaction computation on the first matrix to generate a second matrix, transposing the second matrix to generate a transposed matrix, and performing the other one of the categorical feature interaction computation and the latent feature interaction computation on the transposed matrix to generate a total interaction result amount to processing mathematical relationships/calculations and fall within the “Mathematical Concepts” and “Mental Processes” groupings of abstract ideas. The steps of “adding”, “performing”, “transposing” and “performing” constitute a process that, under its broadest reasonable interpretation, covers performance of the limitations in the mind. That is, nothing in the claim precludes the steps from practically being performed in the human mind. For example, the claim encompasses manually forming a first matrix using the latent features of each of a plurality of categorical feature vectors as rows or columns of the first matrix as shown in Figs. 5-6, performing one of categorical feature interaction computation and latent feature interaction computation to generate a second matrix, transposing the second matrix, and performing the other one of the categorical feature interaction computation and the latent feature interaction computation on the transposed matrix to generate a total interaction result using pen and paper. See also MPEP 2106.04(a)(2) I.A, which identifies “organizing information and manipulating information through mathematical correlations” as an example of the mathematical relationships abstract idea. Accordingly, the claim recites an abstract idea.
Under step 2A prong 2 and step 2B, the claim does not recite any additional elements. Accordingly, the claim is not integrated into a practical application and does not amount to significantly more than the abstract idea.
Under step 2A prong 1, claims 2-7 recite the same abstract idea as claim 1 by reason of dependence. Further, claims 2-7 recite further details of the abstract idea of performing the categorical feature interaction computation and the latent feature interaction computation. More specifically, claim 2 recites “wherein the categorical feature interaction computation comprises a plurality of iterations, and an ith iteration of the iterations comprises: performing neural network computation on an ith latent feature of each of the categorical feature vectors to generate an ith column element of the second matrix, wherein each of the categorical feature vectors comprises d latent features, d is an integer, and i is an integer greater than 0 and less than or equal to d”; claim 3 recites “wherein the neural network computation comprises multilayer perceptron computation or convolutional neural network computation”; claim 4 recites “wherein the latent feature interaction computation comprises a plurality of iterations, and an ith iteration of the iterations comprises: performing neural network computation on an ith column element of the transposed matrix to generate an ith column element of the total interaction result, wherein a number of columns of the transposed matrix is hc, hc is an integer, and i is an integer greater than 0 and less than or equal to hc”; claim 5 recites “wherein the neural network computation comprises multilayer perceptron computation or convolutional neural network computation”; claim 6 recites “wherein the latent feature interaction computation comprises a plurality of iterations, and an ith iteration of the iterations comprises: performing neural network computation on all latent features of an ith categorical feature vector in the categorical feature vectors to generate an ith column element of the second matrix, wherein a number of the categorical feature vectors is k, k is an integer, and i is an integer greater than 0 and less than or equal to k”; and claim 7 recites “wherein the categorical feature interaction computation comprises a plurality of iterations, and an ith iteration of the iterations comprises: performing categorical of neural network computation on an ith column element of the transposed matrix to generate an ith column element of the total interaction result, wherein a number of columns of the transposed matrix is hc, hc is an integer, and i is an integer greater than 0 and less than or equal to hc”. These limitations fall within the “Mathematical Concepts” and/or “Mental Processes” groupings of abstract ideas. Accordingly, the claims recite an abstract idea.
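For context only, the column-wise iterations recited in claims 2-7 can be sketched as follows; the one-layer perceptron, its weights, and the function name are assumptions standing in for the recited multilayer perceptron or convolutional neural network computation, and are not part of the record:

```python
import numpy as np

def columnwise_interaction(matrix, weights, bias):
    """For each column i of the input, apply a neural network computation
    (here a one-layer perceptron as a stand-in) to the ith column to
    produce the ith column element of the output."""
    columns = []
    for i in range(matrix.shape[1]):
        col = matrix[:, i]          # ith latent feature of each vector
        columns.append(np.tanh(weights @ col + bias))
    return np.stack(columns, axis=1)

# Each of the 4 columns of a 3 x 4 input is mapped independently.
out = columnwise_interaction(np.ones((3, 4)), np.eye(3), np.zeros(3))
```

Run once over the first matrix and once over the transpose, an iteration of this kind mixes features along each of the two matrix dimensions in turn.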
Under step 2A prong 2 and step 2B, claims 2-7 do not recite any additional elements. Accordingly, the claims are not integrated into a practical application and do not amount to significantly more than the abstract idea.
Under Step 2A prong 1, claim 8 recites
A total interaction device, configured to compute an interaction relationship between a plurality of features in a recommendation system, comprising:
a first memory, configured to store a plurality of categorical feature vectors, wherein the categorical feature vectors are added to a first matrix, and each of the categorical feature vectors comprises a plurality of latent features;
a first interaction computation circuit, coupled to the first memory, and configured to perform one of categorical feature interaction computation and latent feature interaction computation on the first matrix to generate a second matrix;
a second memory, coupled to the first interaction computation circuit to receive the second matrix, and configured to transpose the second matrix to generate a transposed matrix; and
a second interaction computation circuit, coupled to the second memory to receive the transposed matrix, and configured to perform the other one of the categorical feature interaction computation and the latent feature interaction computation on the transposed matrix to generate a total interaction result.
The above limitations of adding a plurality of categorical feature vectors, performing one of categorical feature interaction computation and latent feature interaction computation on the first matrix to generate a second matrix, transposing the second matrix to generate a transposed matrix, and performing the other one of the categorical feature interaction computation and the latent feature interaction computation on the transposed matrix to generate a total interaction result amount to processing mathematical relationships/calculations and fall within the “Mathematical Concepts” and “Mental Processes” groupings of abstract ideas. The steps of “add”, “perform”, “transpose” and “perform” constitute a process that, under its broadest reasonable interpretation, covers performance of the limitations in the mind. That is, other than reciting “a first and second computation circuit” and a “memory”, nothing in the claim precludes the steps from practically being performed in the human mind. For example, but for the “a first and second computation circuit” and “memory” language, the claim encompasses manually forming a first matrix using the latent features of each of a plurality of categorical feature vectors as rows or columns of the first matrix as shown in Figs. 5-6, performing one of categorical feature interaction computation and latent feature interaction computation to generate a second matrix, transposing the second matrix, and performing the other one of the categorical feature interaction computation and the latent feature interaction computation on the transposed matrix to generate a total interaction result using pen and paper. See also MPEP 2106.04(a)(2) I.A, which identifies “organizing information and manipulating information through mathematical correlations” as an example of the mathematical relationships abstract idea. Accordingly, the claim recites an abstract idea.
Under step 2A prong 2, the claim recites the following additional elements: a first memory, configured to store a plurality of categorical feature vectors; a first interaction computation circuit, coupled to the first memory; a second memory, coupled to the first interaction computation circuit to receive the second matrix; and a second interaction computation circuit, coupled to the second memory to receive the transposed matrix. However, the additional elements of “a first memory”, “a second memory”, “a first interaction computation circuit” and “a second interaction computation circuit” are recited at a high level of generality (i.e., as generic computer memory for storing data and as generic computation circuits for performing mathematical computations) such that they amount to no more than mere instructions to implement the abstract idea using generic computer components, or merely recite the words “apply it” (or an equivalent) in conjunction with the judicial exception. Use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data), or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation), does not integrate a judicial exception into a practical application or provide significantly more. See MPEP 2106.05(f)(2). The additional elements of “store a plurality of categorical feature vectors”, “receive the second matrix” and “receive the transposed matrix” merely add insignificant extra-solution activity. The additional elements do not, individually or in combination, integrate the exception into a practical application. Accordingly, the claim is not integrated into a practical application.
Under step 2B, claim 8 does not include additional elements that, individually or in combination, are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of “a first memory”, “a second memory”, “a first interaction computation circuit” and “a second interaction computation circuit” are recited at a high level of generality (i.e., as generic computer memory for storing data and as generic computation circuits for performing mathematical computations) such that they amount to no more than mere instructions to implement the abstract idea using generic computer components, or merely recite the words “apply it” (or an equivalent) in conjunction with the judicial exception. Use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data), or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation), does not integrate a judicial exception into a practical application or provide significantly more. See MPEP 2106.05(f)(2). The additional elements of “store a plurality of categorical feature vectors”, “receive the second matrix” and “receive the transposed matrix” merely add insignificant extra-solution activity. See MPEP 2106.05(d)(II), which states that the courts have recognized computer functions such as “Receiving or transmitting data over a network” and “Storing and retrieving information in memory” as well-understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity. The claim does not recite additional elements that alone or in combination amount to an inventive concept.
Accordingly, the claim does not amount to significantly more than the abstract idea.
Under step 2A prong 1, claims 9-14 recite the same abstract idea as claim 8 by reason of dependence. Further, claims 9-14 recite substantially the same limitations (abstract idea) as claims 2-7 respectively. Claims 2-7 analysis applies equally to claims 9-14 respectively.
Under Step 2A prong 1, claim 15 recites
A total interaction device, configured to compute an interaction relationship between a plurality of features in a recommendation system, comprising:
a memory, configured to provide a first matrix, wherein a plurality of categorical feature vectors are added to the first matrix, and each of the categorical feature vectors comprises a plurality of latent features; and
a processor, coupled to the memory, wherein the processor performs one of categorical feature interaction computation and latent feature interaction computation on the first matrix to generate a second matrix, the processor transposes the second matrix to generate a transposed matrix, and the processor performs the other one of the categorical feature interaction computation and the latent feature interaction computation on the transposed matrix to generate a total interaction result.
The above limitations of adding a plurality of categorical feature vectors, performing one of categorical feature interaction computation and latent feature interaction computation on the first matrix to generate a second matrix, transposing the second matrix to generate a transposed matrix, and performing the other one of the categorical feature interaction computation and the latent feature interaction computation on the transposed matrix to generate a total interaction result amount to processing mathematical relationships/calculations and fall within the “Mathematical Concepts” and “Mental Processes” groupings of abstract ideas. The steps of “add”, “perform”, “transpose” and “perform” constitute a process that, under its broadest reasonable interpretation, covers performance of the limitations in the mind. That is, other than reciting “a processor”, nothing in the claim precludes the steps from practically being performed in the human mind. For example, but for the “a processor” language, the claim encompasses manually forming a first matrix using the latent features of each of a plurality of categorical feature vectors as rows or columns of the first matrix as shown in Figs. 5-6, performing one of categorical feature interaction computation and latent feature interaction computation to generate a second matrix, transposing the second matrix, and performing the other one of the categorical feature interaction computation and the latent feature interaction computation on the transposed matrix to generate a total interaction result using pen and paper. See also MPEP 2106.04(a)(2) I.A, which identifies “organizing information and manipulating information through mathematical correlations” as an example of the mathematical relationships abstract idea. Accordingly, the claim recites an abstract idea.
Under step 2A prong 2, the claim recites the following additional elements: a memory, configured to provide a first matrix; and a processor, coupled to the memory. However, the additional elements of “a memory” and “a processor” are recited at a high level of generality (i.e., as a generic computer memory for storing data and as a generic processor for performing mathematical computations) such that they amount to no more than mere instructions to implement the abstract idea using generic computer components, or merely recite the words “apply it” (or an equivalent) in conjunction with the judicial exception. Use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data), or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation), does not integrate a judicial exception into a practical application or provide significantly more. See MPEP 2106.05(f)(2). The additional element of “provide a first matrix” merely adds insignificant extra-solution activity. The additional elements do not, individually or in combination, integrate the exception into a practical application. Accordingly, the claim is not integrated into a practical application.
Under step 2B, claim 15 does not include additional elements that, individually or in combination, are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of “a memory” and “a processor” are recited at a high level of generality (i.e., as a generic computer memory for storing data and as a generic processor for performing mathematical computations) such that they amount to no more than mere instructions to implement the abstract idea using generic computer components, or merely recite the words “apply it” (or an equivalent) in conjunction with the judicial exception. Use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data), or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation), does not integrate a judicial exception into a practical application or provide significantly more. See MPEP 2106.05(f)(2). The additional element of “provide a first matrix” merely adds insignificant extra-solution activity. See MPEP 2106.05(d)(II), which states that the courts have recognized computer functions such as “Receiving or transmitting data over a network” and “Storing and retrieving information in memory” as well-understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity. The claim does not recite additional elements that alone or in combination amount to an inventive concept. Accordingly, the claim does not amount to significantly more than the abstract idea.
Under step 2A prong 1, claims 16-21 recite the same abstract idea as claim 15 by reason of dependence. Further, claims 16-21 recite substantially the same limitations (abstract idea) as claims 2-7 respectively. Claims 2-7 analysis applies equally to claims 16-21 respectively.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-7 and 15-21 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Xu et al. (US 20250023762 A1), hereinafter Xu.
Regarding claim 15, Xu teaches a total interaction device, configured to compute an interaction relationship between a plurality of features in a recommendation system, comprising:
a memory, configured to provide a first matrix, wherein a plurality of categorical feature vectors are added to the first matrix, and each of the categorical feature vectors comprises a plurality of latent features (Xu Figs. 19-20 and paragraphs [0231, 0233] “the communication apparatus 1900 may further include a memory 1920, configured to store instructions executed by the processor 1910, store input data required by the processor 1910 to run instructions, or store data generated after the processor 1910 runs instructions”; Figs. 4 and 7A-7C and paragraphs [0151, 0159] first matrix - input high-dimensional signal vector is N x dk; plurality of categorical feature vectors – signal vectors; plurality of latent features – signals in each signal vector); and
a processor, coupled to the memory, wherein the processor performs one of categorical feature interaction computation and latent feature interaction computation on the first matrix to generate a second matrix (Xu Figs. 19-20 and paragraphs [0231, 0233] “The processor 1910 may implement, by using the instructions stored in the memory 1920, the method shown in the foregoing method embodiments”; Figs. 7A-8 and paragraphs [0151 and 0156-0160] “In the first matrix, an element in the first row and the first column is U, and the element is for obtaining a feature of a first high-dimensional signal vector; an element in the first row and the second column is V, and the element is for obtaining features of the first high-dimensional signal vector and a second high-dimensional signal vector; and an element in the first row and the third column is V, the element is for obtaining features of the first high-dimensional signal vector and a third high-dimensional signal vector, and so on … It is assumed that first output of the second interaction module is h'1 =f(h1 W), where f may be summation, and represents that h1 is multiplied by the first matrix, h1 represents first output of the first interaction module, and W represents a weight”; Figs. 5A-5D and paragraphs [0126-0129] categorical feature interaction computation – computation performed by one of the first interaction module and the second interaction module; latent feature interaction computation – computation performed by the other one of the first interaction module and the second interaction module; second matrix – output of the one of the first interaction module and second interaction module), the processor transposes the second matrix to generate a transposed matrix (Xu Figs. 7C and 8 and paragraph [0160] “a matrix transpose operation may be added after output of the first interaction module. In other words, the matrix transpose operation may be performed on the output of the first interaction module”), and
the processor performs the other one of the categorical feature interaction computation and the latent feature interaction computation on the transposed matrix to generate a total interaction result (Xu Figs. 5A-5D, 7A-7C and 8 and paragraphs [0151 and 0156-0160] “the matrix transpose operation may be performed on the output of the first interaction module. This is because the output of the first interaction module is (dkxN)-dimensional, and the second interaction module performs an operation on a dk dimension … The feature between the signal vectors can be obtained by using the first matrix, and the feature of each signal vector can be obtained by using the second matrix”; paragraphs [0126-0129]; total interaction result - feature between the signal vectors and the feature of each signal vector; paragraph [0117]).
Regarding claim 16, Xu teaches all the limitations of claim 15 as stated above. Further, Xu teaches wherein the categorical feature interaction computation performed by the processor comprises a plurality of iterations, and an ith iteration of the iterations comprises: performing neural network computation on an ith latent feature of each of the categorical feature vectors to generate an ith column element of the second matrix, wherein each of the categorical feature vectors comprises d latent features, d is an integer, and i is an integer greater than 0 and less than or equal to d (Xu Figs. 7A-7C and 8 and paragraphs [0009, 0125, 0151-0160] “layers of iterations may be performed at the interaction layer. In other words, L interaction module groups may be connected to each other, so that the L layers of iterations are performed at the interaction layer”; paragraphs [0151-0160] “In the first matrix, an element in the first row and the first column is U, and the element is for obtaining a feature of a first high-dimensional signal vector; an element in the first row and the second column is V, and the element is for obtaining features of the first high-dimensional signal vector and a second high-dimensional signal vector; and an element in the first row and the third column is V, the element is for obtaining features of the first high-dimensional signal vector and a third high-dimensional signal vector, and so on … the first row of a matrix output by the first interaction module is h1=f (S1U, S2V, S3V, S4V ... ), where f may be summation, and represents that S is multiplied by the first matrix … It is assumed that first output of the second interaction module is h'1 =f(h1 W), where f may be summation, and represents that h1 is multiplied by the first matrix, h1 represents first output of the first interaction module, and W represents a weight”).
Regarding claim 17, Xu teaches all the limitations of claim 16 as stated above. Further, Xu teaches wherein the neural network computation comprises multilayer perceptron computation or convolutional neural network computation (Xu paragraphs [0154 and 0157] “the first matrix may be in a form of a fully connected neural network. output of the first interaction module is hs=f(SWs+b). Ws and b are trained parameter … It may be understood that the second matrix may also be expanded to a form of a fully connected neural network, for example, h'=f(h W +b ), where W and b are trained parameters”; multilayer perceptron computation - hs=f(SWs+b) or h'=f(hW +b)).
Regarding claim 18, Xu teaches all the limitations of claim 15 as stated above. Further, Xu teaches wherein the latent feature interaction computation performed by the processor comprises a plurality of iterations, and an ith iteration of the iterations comprises: performing neural network computation on an ith column element of the transposed matrix to generate an ith column element of the total interaction result, wherein a number of columns of the transposed matrix is hc, hc is an integer, and i is an integer greater than 0 and less than or equal to hc (Xu Figs. 7A-7C and 8 and paragraphs [0009, 0125, 0151-0160] “In the first matrix, an element in the first row and the first column is U, and the element is for obtaining a feature of a first high-dimensional signal vector; an element in the first row and the second column is V, and the element is for obtaining features of the first high-dimensional signal vector and a second high-dimensional signal vector; and an element in the first row and the third column is V, the element is for obtaining features of the first high-dimensional signal vector and a third high-dimensional signal vector, and so on … the first row of a matrix output by the first interaction module is h1=f (S1U, S2V, S3V, S4V ... ), where f may be summation, and represents that S is multiplied by the first matrix … It is assumed that first output of the second interaction module is h'1 =f(h1 W), where f may be summation, and represents that h1 is multiplied by the first matrix, h1 represents first output of the first interaction module, and W represents a weight”).
Regarding claim 19, Xu teaches all the limitations of claim 18 as stated above. Further, Xu teaches wherein the neural network computation comprises multilayer perceptron computation or convolutional neural network computation (Xu paragraphs [0154 and 0157] “the first matrix may be in a form of a fully connected neural network. output of the first interaction module is hs=f(SWs+b). Ws and b are trained parameter … It may be understood that the second matrix may also be expanded to a form of a fully connected neural network, for example, h'=f(h W +b ), where W and b are trained parameters”; multilayer perceptron computation - hs=f(SWs+b) or h'=f(hW +b)).
Regarding claim 20, Xu teaches all the limitations of claim 15 as stated above. Further, Xu teaches wherein the latent feature interaction computation performed by the processor comprises a plurality of iterations, and an ith iteration of the iterations comprises: performing neural network computation on all latent features of an ith categorical feature vector in the categorical feature vectors to generate an ith column element of the second matrix, wherein a number of the categorical feature vectors is k, k is an integer, and i is an integer greater than 0 and less than or equal to k (Xu Figs. 7A-7C and 8 and paragraphs [0009, 0125, 0151-0160] “In the first matrix, an element in the first row and the first column is U, and the element is for obtaining a feature of a first high-dimensional signal vector; an element in the first row and the second column is V, and the element is for obtaining features of the first high-dimensional signal vector and a second high-dimensional signal vector; and an element in the first row and the third column is V, the element is for obtaining features of the first high-dimensional signal vector and a third high-dimensional signal vector, and so on … the first row of a matrix output by the first interaction module is h1=f (S1U, S2V, S3V, S4V ... ), where f may be summation, and represents that S is multiplied by the first matrix … It is assumed that first output of the second interaction module is h'1 =f(h1 W), where f may be summation, and represents that h1 is multiplied by the first matrix, h1 represents first output of the first interaction module, and W represents a weight”).
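For illustration only (not part of the record), the iteration recited in claim 20 — applying neural network computation to all latent features of the ith categorical feature vector to produce the ith column of the second matrix — can be sketched as follows. The counts, weight matrix, and tanh placeholder network are assumptions for the sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
k, d = 4, 3                           # hypothetical: k vectors, d latent features
X = rng.standard_normal((k, d))       # row i: ith categorical feature vector
W = rng.standard_normal((d, d))       # stand-in trained weight

def nn(vec):
    # Placeholder neural network computation over all latent features
    return np.tanh(W @ vec)

# ith iteration (1 <= i <= k) generates the ith column of the second matrix
second = np.empty((d, k))
for i in range(k):
    second[:, i] = nn(X[i, :])
```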
Regarding claim 21, Xu teaches all the limitations of claim 15 as stated above. Further, Xu teaches wherein the categorical feature interaction computation performed by the processor comprises a plurality of iterations, and an ith iteration of the iterations comprises: performing categorical neural network computation on an ith column element of the transposed matrix to generate an ith column element of the total interaction result, wherein a number of columns of the transposed matrix is hc, hc is an integer, and i is an integer greater than 0 and less than or equal to hc (Xu Figs. 7A-7C and 8 and paragraphs [0009, 0125, 0151-0160] “In the first matrix, an element in the first row and the first column is U, and the element is for obtaining a feature of a first high-dimensional signal vector; an element in the first row and the second column is V, and the element is for obtaining features of the first high-dimensional signal vector and a second high-dimensional signal vector; and an element in the first row and the third column is V, the element is for obtaining features of the first high-dimensional signal vector and a third high-dimensional signal vector, and so on … the first row of a matrix output by the first interaction module is h1=f (S1U, S2V, S3V, S4V ... ), where f may be summation, and represents that S is multiplied by the first matrix … It is assumed that first output of the second interaction module is h'1 =f(h1 W), where f may be summation, and represents that h1 is multiplied by the first matrix, h1 represents first output of the first interaction module, and W represents a weight”).
Regarding claims 1-7, they are directed to a method practiced by the device of claims 15-21, respectively. All steps performed by the method of claims 1-7 would be practiced by the device of claims 15-21, respectively. Accordingly, the analysis of claims 15-21 applies equally to claims 1-7, respectively.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 8-14 are rejected under 35 U.S.C. 103 as being unpatentable over Xu in view of Yagain (US 20110264723 A1).
Regarding claim 8, Xu teaches a total interaction device, configured to compute an interaction relationship between a plurality of features in a recommendation system, comprising:
a first memory, configured to store a plurality of categorical feature vectors, wherein the categorical feature vectors are added to a first matrix, and each of the categorical feature vectors comprises a plurality of latent features (Xu Figs. 19-20 and paragraphs [0231, 0233] “the communication apparatus 1900 may further include a memory 1920, configured to store instructions executed by the processor 1910, store input data required by the processor 1910 to run instructions, or store data generated after the processor 1910 runs instructions”; Figs. 4 and 7A-7C and paragraphs [0151, 0159] first matrix - input high-dimensional signal vector is N x dk; plurality of categorical feature vectors – signal vectors; plurality of latent features – signals in each signal vector);
a first interaction computation circuit, coupled to the first memory, and configured to perform one of categorical feature interaction computation and latent feature interaction computation on the first matrix to generate a second matrix (Xu Figs. 19-20 and paragraphs [0231, 0233] “The processor 1910 may implement, by using the instructions stored in the memory 1920, the method shown in the foregoing method embodiments”; Figs. 7A-8 and paragraphs [0151 and 0156-0160] “In the first matrix, an element in the first row and the first column is U, and the element is for obtaining a feature of a first high-dimensional signal vector; an element in the first row and the second column is V, and the element is for obtaining features of the first high-dimensional signal vector and a second high-dimensional signal vector; and an element in the first row and the third column is V, the element is for obtaining features of the first high-dimensional signal vector and a third high-dimensional signal vector, and so on … It is assumed that first output of the second interaction module is h'1 =f(h1 W), where f may be summation, and represents that h1 is multiplied by the first matrix, h1 represents first output of the first interaction module, and W represents a weight; Figs. 5A-5D and paragraphs [0126-0129] first interaction computation circuit – one of the first interaction module and the second interaction module; categorical feature interaction computation – one of the computation performed by the first interaction module and the second interaction module; latent feature interaction computation - computation performed by other one of the computation performed by the first interaction module and the second interaction module; second matrix – output of the one of the first interaction module and second interaction module);
(Xu Figs. 7C and 8 and paragraph [0160] “a matrix transpose operation may be added after output of the first interaction module. In other words, the matrix transpose operation may be performed on the output of the first interaction module”); and
a second interaction computation circuit, (Xu Figs. 5A-5D, 7A-7C and 8 and paragraphs [0151 and 0156-0160] “the matrix transpose operation may be performed on the output of the first interaction module. This is because the output of the first interaction module is (dkxN)-dimensional, and the second interaction module performs an operation on a dk dimension … The feature between the signal vectors can be obtained by using the first matrix, and the feature of each signal vector can be obtained by using the second matrix”; paragraphs [0126-0129]; second interaction computation circuit – the other one of the first interaction module and the second interaction module; total interaction result - feature between the signal vectors and the feature of each signal vector; paragraph [0117]).
Xu does not explicitly teach a second memory, coupled to the first interaction computation circuit to receive the second matrix, and configured to transpose the second matrix to generate a transposed matrix; and a second interaction computation circuit, coupled to the second memory.
However, on the same field of endeavor, Yagain discloses a memory coupled in between a first and second computation circuit and configured to receive a matrix from the first computation circuit and configured to transpose the matrix to generate a transposed matrix that is provided to the second computation circuit (Yagain Figs. 1 and 4 and paragraphs [0023] “FIG. 1 illustrates a block diagram of a device 100 for successively transposing a two dimensional (2D) structure, according to an exemplary embodiment. The device 100 includes data storage elements 102, write control logic 104, and read control logic 106. The data storage elements 102 may be memory elements or registers”; second memory – device 100; paragraph [0040] “It will be appreciated that, the NxM matrix transpose circuit 404 is the exemplary device 100 of FIG. 1”).
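For illustration only (not part of the record), Yagain's memory-based transposition — writing a matrix into data storage elements under write control logic and reading it back out under read control logic — can be modeled as writing in row-major address order and reading in column-major address order. The flat-memory model below is an assumption for the sketch, not Yagain's circuit:

```python
# Toy model of memory-based transposition: write an N x M matrix into a
# flat memory row-major, then read column-major to obtain the M x N transpose.
def transpose_via_memory(matrix):
    n = len(matrix)
    m = len(matrix[0])
    mem = [None] * (n * m)
    # write control: row-major addresses
    for r in range(n):
        for c in range(m):
            mem[r * m + c] = matrix[r][c]
    # read control: column-major addresses yield the transpose
    return [[mem[r * m + c] for r in range(n)] for c in range(m)]
```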
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Xu in view of Yagain and configure the device of Xu to include a second memory between the first and second interaction modules for receiving the output of the first or second interaction module, transposing the output, and providing the transposed output to the second or first interaction module, respectively. As discussed, Xu already discloses transposing the output of the first or second interaction module and inputting the transposed output to the second or first interaction module, respectively; therefore, it would be obvious to include a second memory between the interaction modules because a memory device is commonly used to perform matrix transpositions (Yagain paragraph [0006]).
Therefore, the combination of Xu as modified in view of Yagain teaches a second memory, coupled to the first interaction computation circuit to receive the second matrix, and configured to transpose the second matrix to generate a transposed matrix; and a second interaction computation circuit, coupled to the second memory.
Regarding claim 9, Xu as modified in view of Yagain teaches all the limitations of claim 8 as stated above. Further, Xu as modified in view of Yagain teaches wherein the categorical feature interaction computation performed by the first interaction computation circuit comprises a plurality of iterations, and an ith iteration of the iterations comprises: performing neural network computation on an ith latent feature of each of the categorical feature vectors to generate an ith column element of the second matrix, wherein each of the categorical feature vectors comprises d latent features, d is an integer, and i is an integer greater than 0 and less than or equal to d (Xu Figs. 7A-7C and 8 and paragraphs [0009, 0125, 0151-0160] “In the first matrix, an element in the first row and the first column is U, and the element is for obtaining a feature of a first high-dimensional signal vector; an element in the first row and the second column is V, and the element is for obtaining features of the first high-dimensional signal vector and a second high-dimensional signal vector; and an element in the first row and the third column is V, the element is for obtaining features of the first high-dimensional signal vector and a third high-dimensional signal vector, and so on … the first row of a matrix output by the first interaction module is h1=f (S1U, S2V, S3V, S4V ... ), where f may be summation, and represents that S is multiplied by the first matrix … It is assumed that first output of the second interaction module is h'1 =f(h1 W), where f may be summation, and represents that h1 is multiplied by the first matrix, h1 represents first output of the first interaction module, and W represents a weight”).
Regarding claim 10, Xu as modified in view of Yagain teaches all the limitations of claim 9 as stated above. Further, Xu as modified in view of Yagain teaches wherein the neural network computation comprises multilayer perceptron computation or convolutional neural network computation (Xu paragraphs [0154 and 0157] “the first matrix may be in a form of a fully connected neural network. output of the first interaction module is hs=f(SWs+b). Ws and b are trained parameter … It may be understood that the second matrix may also be expanded to a form of a fully connected neural network, for example, h'=f(h W +b ), where W and b are trained parameters”; multilayer perceptron computation - hs=f(SWs+b) or h'=f(hW +b)).
Regarding claim 11, Xu as modified in view of Yagain teaches all the limitations of claim 8 as stated above. Further, Xu as modified in view of Yagain teaches wherein the latent feature interaction computation performed by the second interaction computation circuit comprises a plurality of iterations, and an ith iteration of the iterations comprises: performing neural network computation on an ith column element of the transposed matrix to generate an ith column element of the total interaction result, wherein a number of columns of the transposed matrix is hc, hc is an integer, and i is an integer greater than 0 and less than or equal to hc (Xu Figs. 7A-7C and 8 and paragraphs [0009, 0125, 0151-0160] “In the first matrix, an element in the first row and the first column is U, and the element is for obtaining a feature of a first high-dimensional signal vector; an element in the first row and the second column is V, and the element is for obtaining features of the first high-dimensional signal vector and a second high-dimensional signal vector; and an element in the first row and the third column is V, the element is for obtaining features of the first high-dimensional signal vector and a third high-dimensional signal vector, and so on … the first row of a matrix output by the first interaction module is h1=f (S1U, S2V, S3V, S4V ... ), where f may be summation, and represents that S is multiplied by the first matrix … It is assumed that first output of the second interaction module is h'1 =f(h1 W), where f may be summation, and represents that h1 is multiplied by the first matrix, h1 represents first output of the first interaction module, and W represents a weight”).
Regarding claim 12, Xu as modified in view of Yagain teaches all the limitations of claim 11 as stated above. Further, Xu as modified in view of Yagain teaches wherein the neural network computation comprises multilayer perceptron computation or convolutional neural network computation (Xu paragraphs [0154 and 0157] “the first matrix may be in a form of a fully connected neural network. output of the first interaction module is hs=f(SWs+b). Ws and b are trained parameter … It may be understood that the second matrix may also be expanded to a form of a fully connected neural network, for example, h'=f(h W +b ), where W and b are trained parameters”; multilayer perceptron computation - hs=f(SWs+b) or h'=f(hW +b)).
Regarding claim 13, Xu as modified in view of Yagain teaches all the limitations of claim 8 as stated above. Further, Xu as modified in view of Yagain teaches wherein the latent feature interaction computation performed by the first interaction computation circuit comprises a plurality of iterations, and an ith iteration of the iterations comprises: performing neural network computation on all latent features of an ith categorical feature vector in the categorical feature vectors to generate an ith column element of the second matrix, wherein a number of the categorical feature vectors is k, k is an integer, and i is an integer greater than 0 and less than or equal to k (Xu Figs. 7A-7C and 8 and paragraphs [0009, 0125, 0151-0160] “In the first matrix, an element in the first row and the first column is U, and the element is for obtaining a feature of a first high-dimensional signal vector; an element in the first row and the second column is V, and the element is for obtaining features of the first high-dimensional signal vector and a second high-dimensional signal vector; and an element in the first row and the third column is V, the element is for obtaining features of the first high-dimensional signal vector and a third high-dimensional signal vector, and so on … the first row of a matrix output by the first interaction module is h1=f (S1U, S2V, S3V, S4V ... ), where f may be summation, and represents that S is multiplied by the first matrix … It is assumed that first output of the second interaction module is h'1 =f(h1 W), where f may be summation, and represents that h1 is multiplied by the first matrix, h1 represents first output of the first interaction module, and W represents a weight”).
Regarding claim 14, Xu as modified in view of Yagain teaches all the limitations of claim 8 as stated above. Further, Xu as modified in view of Yagain teaches wherein the categorical feature interaction computation performed by the second interaction computation circuit comprises a plurality of iterations, and an ith iteration of the iterations comprises: performing categorical neural network computation on an ith column element of the transposed matrix to generate an ith column element of the total interaction result, wherein a number of columns of the transposed matrix is hc, hc is an integer, and i is an integer greater than 0 and less than or equal to hc (Xu Figs. 7A-7C and 8 and paragraphs [0009, 0125, 0151-0160] “In the first matrix, an element in the first row and the first column is U, and the element is for obtaining a feature of a first high-dimensional signal vector; an element in the first row and the second column is V, and the element is for obtaining features of the first high-dimensional signal vector and a second high-dimensional signal vector; and an element in the first row and the third column is V, the element is for obtaining features of the first high-dimensional signal vector and a third high-dimensional signal vector, and so on … the first row of a matrix output by the first interaction module is h1=f (S1U, S2V, S3V, S4V ... ), where f may be summation, and represents that S is multiplied by the first matrix … It is assumed that first output of the second interaction module is h'1 =f(h1 W), where f may be summation, and represents that h1 is multiplied by the first matrix, h1 represents first output of the first interaction module, and W represents a weight”).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Carlo Waje whose telephone number is (571)272-5767. The examiner can normally be reached 9:00-6:00 M-F.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, James Trujillo can be reached at (571) 272-3677. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Carlo Waje/Examiner, Art Unit 2151 (571)272-5767