Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy of priority Application No. CN202111184748.8, filed on 10/12/2021, has been filed.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-18 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claims 1, 7, and 13 recite “performing first mapping processing on the plurality of encoding features to obtain a plurality of first recommendation scores in one-to-one correspondence with a plurality of recommendation dimensions” and “performing second mapping processing on the plurality of encoding features in each of the plurality of recommendation dimensions to obtain a mapping feature of the recommendation dimension”. It is not clear what the difference is between the second mapping processing and the first mapping processing. In the first mapping, the encoding features are mapped to a plurality of first recommendation scores, and these recommendation scores are in one-to-one correspondence with a plurality of recommendation dimensions. This claim limitation defines a relationship between the encoding features, the first recommendation scores, and the recommendation dimensions through a mapping functionality. The second mapping processing appears redundant because it likewise associates the encoding features with the plurality of recommendation dimensions. The claims fail to distinctly identify the difference between the first mapping processing and the second mapping processing. The claims also fail to distinctly point out what constitutes a “mapping feature”.
The examiner interprets the first mapping processing to be performed on a set of encoding features to determine the relationships between historical features and users’ interactions. The recommendation scores represent the predicted likelihood of the user interacting with the feature. The second mapping processing is interpreted as finding additional new features that the user may be interested in and determining the recommendation dimensions for those additional features. Thus, the mapping feature of the second mapping processing consists of additional features, with recommendation dimensions, that the user may want to interact with.
In addition, “mapping processing” is not explicitly defined in the claims, and the scope of the term remains very broad. Mapping processing is interpreted by the Examiner to mean processing the input data to identify relationships between data.
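For illustration only, the Examiner’s interpretation of the two mapping processings can be sketched as follows; every array shape, weight matrix, and variable name below is a hypothetical assumption and is not a limitation read from the claims:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 4 encoding features and 3 recommendation
# dimensions (e.g., click-through, like, share).
encodings = rng.standard_normal(4)

# First mapping processing: encoding features -> one first
# recommendation score per recommendation dimension
# (one-to-one correspondence).
W_first = rng.standard_normal((3, 4))
first_scores = W_first @ encodings           # shape (3,)

# Second mapping processing: encoding features -> a mapping FEATURE
# (a vector, not a scalar score) for each recommendation dimension;
# this is one way the two mappings could differ.
W_second = rng.standard_normal((3, 5, 4))
mapping_features = W_second @ encodings      # shape (3, 5)

assert first_scores.shape == (3,)
assert mapping_features.shape == (3, 5)
```

Under this reading, the first mapping yields scalar scores and the second yields per-dimension feature vectors; that distinction is consistent with the interpretation above but is not compelled by the claim language.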
Claims 2-6, 8-12, and 14-18 are rejected on the same basis as claims 1, 7, and 13 because they depend from independent claims 1, 7, and 13.
Claims 4, 10, and 16 recite “performing horizontal splicing processing on the first recommendation scores of the plurality of recommendation dimensions to obtain a tiling vector”, “performing third full connection processing on the plurality of encoding features to obtain a third hidden layer feature”, and “performing sixth mapping processing on the third hidden layer feature to obtain the mapping feature with the same dimension as the tiling vector”. Horizontal splicing processing is not clearly defined in the claims. From claim 1, the first recommendation scores may already be a vector, and it is not clear what constitutes a tiling vector. Claims 4, 10, and 16 do not disclose what the first recommendation scores are processed with to obtain a tiling vector. Horizontal splicing processing is interpreted to mean a process that combines data. A tiling vector is interpreted to consist of weights that define the importance of the target objects.
Third full connection processing is not clearly defined. It is assumed there are a first and a second full connection processing, but there is no such recitation. Similarly, the third hidden layer feature suggests there should be a first and a second hidden layer feature, but these are not disclosed in the claims. Without disclosure of the first and second processing steps, the claims are vague as to what constitutes the third full connection processing that generates the third hidden layer feature. Third full connection processing is interpreted as generating a dense vector representing the target object.
In claim 1, a first and a second mapping processing are disclosed, and claim 4 discloses a sixth mapping processing. It is unclear what constitutes the sixth mapping processing, how the sixth mapping processing is defined, and what the difference is between the sixth mapping processing and the second mapping processing of claim 1, on which claim 4 depends. Sixth mapping processing is interpreted to mean defining relationships between two sets of data. In a recommender system, the system may process user and item data. The mapping processing may consist of mapping user-to-item, item-to-item, or user-to-user correlations.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-18 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Regarding Claim 1:
Subject Matter Eligibility Analysis Step 1:
Claim 1 recites “A method for information recommendation, executed by an electronic device and comprising” and is thus a process, one of the four statutory categories of patentable subject matter.
Subject Matter Eligibility Analysis Step 2A Prong 1:
“performing encoding processing on a plurality of reference features to obtain an encoding feature of each reference feature, the reference features comprising at least one of the following: an object feature of a target object or an information feature of to-be-recommended information” (a mental process that can be performed in the human mind with the aid of pen and paper, i.e. evaluation; See Specification par. 57, Object feature can be a user’s age, which can be represented as a numerical value. Converting a user’s age to binary is a form of encoding and can be performed in the human mind with the aid of pen and paper.)
“performing first mapping processing on the plurality of encoding features to obtain a plurality of first recommendation scores in one-to-one correspondence with a plurality of recommendation dimensions, the first recommendation scores representing recommendation scores of the target object for the to-be-recommended information in the corresponding recommendation dimensions” (a mental process that can be performed in the human mind with the aid of pen and paper, i.e. evaluation; See Specification par. 19 & 116, Predicting recommendation scores based on features (user’s age) and recommendation dimensions (click-through-rate).)
“performing second mapping processing on the plurality of encoding features in each of the plurality of recommendation dimensions to obtain a mapping feature of the recommendation dimension” (a mental process that can be performed in the human mind with the aid of pen and paper, i.e. evaluation; See Specification par. 19, Generating correlations between recommendation scores and recommendation dimensions.)
“performing fusion processing on the first recommendation scores of the plurality of recommendation dimensions based on the mapping feature of each of the plurality of recommendation dimensions to obtain a fusion feature of the recommendation dimension, and performing recommendation score prediction processing on the to-be-recommended information based on the fusion feature to obtain a second recommendation score of the target object for the to-be-recommended information” (a mental process that can be performed in the human mind with the aid of pen and paper, i.e. evaluation, See Specification par. 19, Combining feature vectors to obtain a fusion feature and predicting a recommendation score based on fusion feature.)
“executing a recommendation operation of the to-be-recommended information corresponding to the target object based on the second recommendation score of the to-be-recommended information” (a mental process that can be performed in the human mind with the aid of pen and paper, i.e. evaluation; See Specification par. 61, Generating a recommended item based on score exceeding a score threshold.)
Claim 1 therefore recites an abstract idea.
Subject Matter Eligibility Analysis Step 2A Prong 2:
“... executed by an electronic device ...” (mere instructions to apply the exception using a generic computer component - see MPEP 2106.05(f))
“executing a recommendation operation” (mere instructions to apply the exception using a generic computer component - see MPEP 2106.05(f))
The additional elements identified above, alone or in combination, do not integrate the judicial exception into a practical application, as they amount to insignificant extra-solution activity combined with generic computer functions implemented on generic computer elements at a high level of generality to perform the abstract idea identified above. Therefore, Claim 1 is directed to the abstract idea.
Subject Matter Eligibility Analysis Step 2B:
“... executed by an electronic device ...” (mere instructions to apply the exception using a generic computer component - see MPEP 2106.05(f))
“executing a recommendation operation” (mere instructions to apply the exception using a generic computer component - see MPEP 2106.05(f))
The additional elements identified above, alone or in combination, do not recite significantly more than the abstract idea itself, as they amount to insignificant extra-solution activity combined with generic computer functions implemented on generic computer elements at a high level of generality to perform the abstract idea identified above. Therefore, Claim 1 is subject-matter ineligible.
Regarding Claim 7:
The claim recites a system that performs the method as described in claim 1. Therefore, claim 7 is rejected for the same reasons as disclosed for claim 1. The limitations for additional elements of claim 7 are analyzed below.
Subject Matter Eligibility Analysis Step 2A Prong 1:
Please see the Step 2A Prong 1 analysis of claim 1.
Subject Matter Eligibility Analysis Step 2A Prong 2 & 2B:
“An electronic device, comprising: a memory, configured to store a computer executable instruction; and a processor, configured to implement, when executing the computer executable instruction stored in the memory, a method for information recommendation including” (mere instructions to apply the exception using a generic computer component - see MPEP 2106.05(f))
Regarding Claim 13:
The claim recites an article of manufacture that performs the method as described in claim 1. Therefore, claim 13 is rejected for the same reasons as disclosed for claim 1. The limitations for additional elements of claim 13 are analyzed below.
Subject Matter Eligibility Analysis Step 2A Prong 1:
Please see the Step 2A Prong 1 analysis of claim 1.
Subject Matter Eligibility Analysis Step 2A Prong 2 & 2B:
“A non-transitory computer readable storage medium, storing a computer executable instruction that, when executed by a processor of an electronic device, causes the electronic device to implement a method for information recommendation including” (mere instructions to apply the exception using a generic computer component - see MPEP 2106.05(f))
Regarding Claims 2, 8, and 14:
Subject Matter Eligibility Analysis Step 2A Prong 1:
“wherein the reference features comprise at least one of a continuous feature or a discrete feature, and the performing encoding processing on a plurality of reference features to obtain an encoding feature of each reference feature comprises” (a mental process that can be performed in the human mind with the aid of pen and paper, i.e. evaluation; See Specification par. 57, Object feature can be a user’s age, which can be represented as a numerical value. Converting a user’s age to binary is a form of encoding and can be performed in the human mind with the aid of pen and paper.)
“performing, when the reference features are the continuous features, discretization processing on the continuous features to obtain discrete features of the continuous features, and performing encoding processing on the discrete features of the continuous features to obtain encoding features of the continuous features” (a mental process that can be performed in the human mind with the aid of pen and paper, i.e. evaluation; See Specification par. 79)
“performing, when the reference features are the discrete features, encoding processing on the discrete features to obtain encoding features of the discrete features” (a mental process that can be performed in the human mind with the aid of pen and paper, i.e. evaluation; See Specification par. 79)
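The encoding path recited in these claims can be illustrated with a minimal sketch; the bucket boundaries, vocabulary size, and function name are hypothetical assumptions, not disclosures from the claims or specification:

```python
import numpy as np

def encode_reference_feature(value, bins=None, vocab_size=None):
    """Hypothetical encoder: a continuous feature is discretized into
    a bucket first; a discrete feature is one-hot encoded directly."""
    if bins is not None:                       # continuous feature path
        value = int(np.digitize(value, bins))  # discretization processing
        vocab_size = len(bins) + 1
    one_hot = np.zeros(vocab_size)             # encoding processing
    one_hot[value] = 1.0
    return one_hot

# Continuous feature (e.g., a user's age) -> bucket index -> one-hot.
age_encoding = encode_reference_feature(34.0, bins=[18, 30, 45, 60])
# Discrete feature (e.g., a category id) -> one-hot.
cat_encoding = encode_reference_feature(2, vocab_size=5)

assert age_encoding.argmax() == 2
assert cat_encoding.tolist() == [0.0, 0.0, 1.0, 0.0, 0.0]
```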
Subject Matter Eligibility Analysis Step 2A Prong 2 & 2B: None
Regarding Claims 3, 9, and 15:
Subject Matter Eligibility Analysis Step 2A Prong 1:
“performing feature crossing processing on the plurality of encoding features to obtain at least one crossing feature” (a mathematical calculation; See Specification par. 84)
“predicting fitting of the to-be-recommended information in each recommendation dimension based on the plurality of encoding features to obtain a fitting feature corresponding to each recommendation dimension” (a mental process that can be performed in the human mind with the aid of pen and paper, i.e. evaluation)
“performing splicing processing on the crossing feature and the fitting feature of each recommendation dimension to obtain a splicing feature corresponding to the recommendation dimension” (a mathematical calculation, See Specification par. 95)
“performing third mapping processing on the splicing feature of each recommendation dimension to obtain the first recommendation score of the to-be-recommended information corresponding to the recommendation dimension” (a mental process that can be performed in the human mind with the aid of pen and paper, i.e. evaluation)
Subject Matter Eligibility Analysis Step 2A Prong 2 & 2B: None
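The sequence of steps recited in claims 3, 9, and 15 can be sketched as follows; a DCN-style cross layer is assumed here as only one possible reading of “feature crossing processing”, and all shapes and weights are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
x0 = rng.standard_normal(4)            # concatenated encoding features

# Feature crossing processing (DCN-style cross layer assumed):
# explicit second-order feature crosses.
W, b = rng.standard_normal((4, 4)), rng.standard_normal(4)
crossing = x0 * (W @ x0 + b) + x0

# Fitting feature for one recommendation dimension (a hypothetical
# deep-network output stands in for the fitting prediction).
fitting = rng.standard_normal(8)

# Splicing processing: concatenate the crossing and fitting features.
splicing = np.concatenate([crossing, fitting])

# Third mapping processing: project the splicing feature to the
# first recommendation score for this recommendation dimension.
w_map = rng.standard_normal(splicing.shape[0])
first_score = float(w_map @ splicing)

assert splicing.shape == (12,)
```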
Regarding Claims 4, 10, and 16:
Subject Matter Eligibility Analysis Step 2A Prong 1:
“performing horizontal splicing processing on the first recommendation scores of the plurality of recommendation dimensions to obtain a tiling vector” (a mathematical calculation, See Specification par. 102, Joining one or more features from a vector of features)
“performing sixth mapping processing on the third hidden layer feature to obtain the mapping feature with the same dimension as the tiling vector” (a mathematical calculation, See par. 102 in Specification, Applying an activation function to obtain hidden layer feature.)
Subject Matter Eligibility Analysis Step 2A Prong 2 & 2B:
“performing third full connection processing on the plurality of encoding features to obtain a third hidden layer feature” (mere instructions to apply the exception using a generic computer component - see MPEP 2106.05(f), See par. 92 in Specification, using expert networks to process the features.)
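The relationship among the tiling vector, the third hidden layer feature, and the sixth mapping processing, as interpreted above, can be sketched as follows; all dimensions and layer choices are hypothetical assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical first recommendation scores for 3 recommendation
# dimensions, one scalar score per dimension.
scores_per_dim = [np.array([0.7]), np.array([0.2]), np.array([0.5])]

# Horizontal splicing processing: join the per-dimension scores
# side by side into a single flat "tiling vector".
tiling_vector = np.concatenate(scores_per_dim)      # shape (3,)

# Third full connection processing on the encoding features (a
# single dense layer stands in for the unrecited first and second
# full connection processings).
encodings = rng.standard_normal(6)
W_fc = rng.standard_normal((8, 6))
third_hidden = np.maximum(W_fc @ encodings, 0.0)    # ReLU hidden feature

# Sixth mapping processing: project the hidden feature to the SAME
# dimension as the tiling vector, as the claims require.
W_map = rng.standard_normal((3, 8))
mapping_feature = W_map @ third_hidden

assert mapping_feature.shape == tiling_vector.shape == (3,)
```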
Regarding Claims 5, 11, and 17:
Subject Matter Eligibility Analysis Step 2A Prong 1:
“performing element-wise product calculation on the score matrix and the mapping matrix to obtain the fusion feature” (a mathematical calculation)
Subject Matter Eligibility Analysis Step 2A Prong 2 & 2B:
“obtaining a score matrix composed of the first recommendation score of the recommendation dimension and obtaining a mapping matrix composed of the mapping feature corresponding to the recommendation dimension” (This step is directed to data gathering, which is understood to be insignificant extra solution activity (2106.05(g) in step 2A prong 2) and well understood, routine and conventional activity of transmitting and receiving data as identified by the court (2106.05(d) in step 2B))
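The element-wise product recited in claims 5, 11, and 17 can be illustrated directly; the 2x2 matrices below are hypothetical values. Note that an element-wise (Hadamard) product differs from the dot product of embedding matrices described for Koh in the § 103 rejection:

```python
import numpy as np

# Hypothetical score matrix (first recommendation scores per
# recommendation dimension) and mapping matrix (mapping features
# per recommendation dimension).
score_matrix = np.array([[0.9, 0.1],
                         [0.4, 0.6]])
mapping_matrix = np.array([[0.5, 2.0],
                           [1.0, 0.5]])

# Element-wise (Hadamard) product, per the claim language.
fusion_feature = score_matrix * mapping_matrix

assert np.allclose(fusion_feature, [[0.45, 0.2], [0.4, 0.3]])
```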
Regarding Claims 6, 12, and 18:
Subject Matter Eligibility Analysis Step 2A Prong 1:
“performing seventh mapping processing on the fusion feature to obtain a mapping feature corresponding to the fusion feature” (a mental process that can be performed in the human mind with the aid of pen and paper, i.e. evaluation)
“performing recommendation score prediction processing on the to-be-recommended information based on the mapping feature corresponding to the fusion feature to obtain the second recommendation score of the target object for the to-be-recommended information” (a mental process that can be performed in the human mind with the aid of pen and paper, i.e. evaluation)
Subject Matter Eligibility Analysis Step 2A Prong 2 & 2B: None
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 4-8, 10-14, and 16-18 are rejected under 35 U.S.C. 103 as being unpatentable over Koh (US20230030341A1) in view of Yang (US20210406761A1).
Regarding claim 1, Koh teaches:
“A method for information recommendation, executed by an electronic device and comprising” (abstract, A system that generates digital content recommendations.)
“performing encoding processing on a plurality of reference features to obtain an encoding feature of each reference feature, the reference features comprising at least one of the following: an object feature of a target object or an information feature of to-be-recommended information” ([0097-0100, 0103], The fragment machine learning model generates an encoding of each historical sequence. A historical sequence represents the different versions of a content fragment of the digital communications that were sent to recipients. The content fragment may consist of text or images. The model is trained to determine the likelihood that the fragment variants will be liked by the recipients. Thus, the content fragment represents an information feature of the to-be-recommended information.)
“performing first mapping processing on the plurality of encoding features to obtain a plurality of first recommendation scores ...” ([0094, 0103-0113, Figure 5], The fragment machine learning model generates a feature vector from the encodings of the historical sequence. The model may modify the generated feature vector by encoding additional information using a graphical model. The graphical model includes examination nodes to model viewing behavior and reward nodes to model interaction behaviors (recommendation dimensions) of the content fragments. The graphical model maps the dependencies between the feature vector, examination nodes, and reward nodes. The graphical model generates a predicted content fragment interaction metric (recommendation score) reflecting the likelihood of actions for a digital communication.)
“performing second mapping processing on the plurality of ...” ([0126-0127, Figure 6], The image content model determines digital content variants based on the determined descriptors, and the model additionally determines predicted digital content performance metrics (recommendation dimensions) for the digital content variants. Each digital content variant is mapped to a predicted click-through rate metric. A data structure contains the mapping between performance metrics and digital contents.)
“performing fusion processing on the first recommendation scores of the plurality of recommendation dimensions based on the mapping feature of each of the plurality of recommendation dimensions to obtain a fusion feature of the recommendation dimension, and performing recommendation score prediction processing on the to-be-recommended information based on the fusion feature to obtain a second recommendation score of the target object for the to-be-recommended information” ([0128-0133, Figure 7], A matrix of historical performance metrics (first recommendation scores) is generated that includes previous digital communications, templates of the previous digital communications, and the performance metrics. The matrix is an aggregation of all historical performance metrics for previous digital communication. Embedded matrices U and V are generated based on matrix A, where matrix A consists of historical performance metrics. Embedded matrices U and V (fusion feature) are generated using non-negative matrix factorization (fusion processing). A predicted multivariate performance metric (second recommendation score) is computed from the dot product of the embedding matrices.)
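Koh’s factorization step, as characterized above, can be sketched as follows; the tiny metric matrix A, the chosen rank, and the multiplicative-update loop are illustrative assumptions rather than details drawn from Koh:

```python
import numpy as np

# Hypothetical matrix A of historical performance metrics (rows:
# previous digital communications; columns: performance metrics).
A = np.array([[0.8, 0.1],
              [0.3, 0.6],
              [0.5, 0.4]])

# Non-negative matrix factorization A ~= U @ V via multiplicative
# updates (a minimal stand-in for Koh's factorization).
rng = np.random.default_rng(3)
k = 2
U = rng.random((A.shape[0], k)) + 0.1
V = rng.random((k, A.shape[1])) + 0.1
for _ in range(500):
    U *= (A @ V.T) / (U @ V @ V.T + 1e-9)
    V *= (U.T @ A) / (U.T @ U @ V + 1e-9)

# Predicted multivariate performance metric (second recommendation
# score) for one candidate: dot product of the embedding matrices.
predicted = U[0] @ V

assert predicted.shape == (2,)
assert np.all(U >= 0) and np.all(V >= 0)
```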
“executing a recommendation operation of the to-be-recommended information corresponding to the target object based on the second recommendation score of the to-be-recommended information” ([0136], Multivariate testing recommendations are generated based on the predicted multivariate performance metric.)
Koh does not explicitly disclose an implementation of “performing first mapping processing on the plurality of encoding features to obtain a plurality of first recommendation scores in one-to-one correspondence with a plurality of recommendation dimensions” and “performing second mapping processing on the plurality of encoding features in each of the plurality of recommendation dimensions to obtain a mapping feature of the recommendation dimension”. However, Yang discloses in the same field of endeavor:
“performing first mapping processing on the plurality of encoding features to obtain a plurality of first recommendation scores in one-to-one correspondence with a plurality of recommendation dimensions, the first recommendation scores representing recommendation scores of the target object for the to-be-recommended information in the corresponding recommendation dimensions” ([0016-0019, 0022, 0026, 0042, Figure 4], The system calculates a user-co-cluster affinity score using the user representations. A co-cluster is defined by user-item interaction (recommendation dimensions). The system can relate users to co-clusters (one-to-one correspondence). The user representation may be user embeddings. The affinity score represents strong or weak information to recommend a user based on the user-interaction parameter.)
“performing second mapping processing on the plurality of encoding features in each of the plurality of recommendation dimensions to obtain a mapping feature of the recommendation dimension” ([0015, 0019, 0022, 0025, 0042, Figure 4], The system computes item-co-cluster affinity score of how close an item is to a co-cluster. Here, a first mapping is performed to relate user with co-clusters representing user interactions. A second mapping is performed to relate items with co-clusters. A min-sum pooling operation is performed on both item-co-cluster affinity score and user-co-cluster affinity score to generate a final preference score.)
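Yang’s combination of the two affinity mappings, as characterized above, can be sketched with hypothetical values; the number of co-clusters and the affinity scores are assumptions:

```python
import numpy as np

# Hypothetical affinity scores over 4 co-clusters; each co-cluster
# pairs a group of users with a group of items.
user_cocluster_affinity = np.array([0.9, 0.2, 0.6, 0.1])
item_cocluster_affinity = np.array([0.5, 0.8, 0.7, 0.3])

# First mapping: user -> co-cluster affinities; second mapping:
# item -> co-cluster affinities.  Min-sum pooling combines them:
# per co-cluster take the weaker (min) affinity, then sum.
preference_score = np.minimum(user_cocluster_affinity,
                              item_cocluster_affinity).sum()

assert abs(preference_score - 1.4) < 1e-9
```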
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of “performing first mapping processing on the plurality of encoding features to obtain a plurality of first recommendation scores in one-to-one correspondence with a plurality of recommendation dimensions” and “performing second mapping processing on the plurality of encoding features in each of the plurality of recommendation dimensions to obtain a mapping feature of the recommendation dimension” from Yang into the teaching of Koh. Doing so can improve the performance of a recommendation system by implementing a learning process of the fine-grained co-cluster structure of items and users based on behavior data to present better recommendations to users (Yang, abstract).
Regarding claim 7:
Claim 7 recites a system that performs the same process as described in claim 1. Therefore, claim 7 is rejected for the same reasons mentioned for claim 1. The additional elements of claim 7 are addressed below by Koh:
“An electronic device, comprising: a memory, configured to store a computer executable instruction; and a processor, configured to implement, when executing the computer executable instruction stored in the memory, a method for information recommendation including” ([0213], A computer consisting of a processor and memory can be implemented to perform the recommendation process.)
Regarding claim 13:
Claim 13 recites an article of manufacture that performs the same process as described in claim 1. Therefore, claim 13 is rejected for the same reasons mentioned for claim 1. The additional elements of claim 13 are addressed below by Koh:
“A non-transitory computer readable storage medium, storing a computer executable instruction that, when executed by a processor of an electronic device, causes the electronic device to implement a method for information recommendation including” ([0213], A computer consisting of a processor and memory can be implemented to perform the recommendation process.)
Regarding claims 2, 8, and 14, Koh teaches:
“performing, when the reference features are the continuous features, discretization processing on the continuous features to obtain discrete features of the continuous features, and performing encoding processing on the discrete features of the continuous features to obtain encoding features of the continuous features” ([0039, 0116-0118, 0121], An image content model determines descriptors of the selected digital content item by utilizing an encoding model. A digital content item may be a video (continuous features). Descriptors may include topic tags, object tags, and scene tags (discrete features) of the digital content item. The image content model links the descriptors to one or more entities and identifies nearest neighbors of the linked entities within an entity embedding space.)
“performing, when the reference features are the discrete features, encoding processing on the discrete features to obtain encoding features of the discrete features” ([0116-0118, 0121], An image content model determines descriptors of the selected digital content image (discrete features) by utilizing an encoding model. Descriptors may include topic tags, object tags, and scene tags. The image content model links the descriptors to one or more entities and identifies nearest neighbors of the linked entities within an entity embedding space.)
Regarding claims 4, 10, and 16, Koh in view of Yang teaches:
“performing horizontal splicing processing on the first recommendation scores of the plurality of recommendation dimensions to obtain a tiling vector” ([Yang, 0030-0032], The user representation is constructed using Formula 2. The numerator of Formula 2 is computed to define how important an item is for a co-cluster. The SoftMax function computes weights (tiling vector), and the user embeddings are a weighted average of the item embeddings.)
“performing third full connection processing on the plurality of encoding features to obtain a third hidden layer feature” ([Yang, 0025], The model converts a set of items into item embeddings, which may be a dense vector with a particular dimensionality.)
“performing sixth mapping processing on the third hidden layer feature to obtain the mapping feature with the same dimension as the tiling vector” ([Yang, 0032], A linear projection (sixth mapping processing) is performed on the computed weights to align the feature spaces of users and items when generating the user representations (mapping feature). This generates a mapping between items and users.)
Regarding claims 5, 11, and 17, Koh teaches:
“obtaining a score matrix composed of the first recommendation score of the recommendation dimension and obtaining a mapping matrix composed of the mapping feature corresponding to the recommendation dimension” ([0129-0134, Figure 7], The multivariate testing results prediction model generates a matrix of historical performance metrics (score matrix) and embedding matrices (mapping matrix) based on the matrix A. The matrix consists of predicted performance metrics such as click-throughs and conversion.)
“performing element-wise product calculation on the score matrix and the mapping matrix to obtain the fusion feature” ([0132], To determine a predicted multivariate performance metric for the candidate digital communication, the multivariate testing results prediction model computes a dot product of the embedding matrix U1 and the embedding matrix V.)
Regarding claims 6, 12, and 18, Koh teaches:
“performing seventh mapping processing on the fusion feature to obtain a mapping feature corresponding to the fusion feature” ([0134-0135], The multivariate testing results prediction model generates one or more multivariate testing recommendations based on the predicted multivariate performance metrics.)
“performing recommendation score prediction processing on the to-be-recommended information based on the mapping feature corresponding to the fusion feature to obtain the second recommendation score of the target object for the to-be-recommended information” ([0062, 0136, 0212], The multivariate testing results prediction model identifies (performing recommendation score prediction processing) candidate digital communications in a top percentage of the predicted multivariate performance metrics by ranking the recommendations. Candidate digital communications with predicted multivariate performance metrics above a threshold are selected as recommendations. The recommendations and performance metrics are provided for display.)
Claims 3, 9, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Koh (US20230030341A1) in view of Yang (US20210406761A1) and Wang “DCN V2: Improved Deep & Cross Network and Practical Lessons for Web-scale Learning to Rank Systems”.
Regarding claims 3, 9, and 15, Koh in view of Yang teaches:
“performing feature ...” ([0118-0121], The determined descriptors are linked to entities or concepts within the knowledge graph.)
“performing third mapping processing on the ...” ([0094, 0103-0113, Figure 5], The fragment machine learning model generates a feature vector from the encodings of the historical sequence. The model may modify the generated feature vector by encoding additional information using a graphical model. The graphical model includes examination nodes to model viewing behavior and reward nodes to model interaction behaviors (recommendation dimensions) of the content fragments. The graphical model maps the dependencies between the feature vector, examination nodes, and reward nodes. The graphical model generates a predicted content fragment interaction metric (recommendation score) reflecting the likelihood of actions for a digital communication.)
Koh in view of Yang does not explicitly disclose an implementation of “feature crossing”, “predicting fitting of the to-be-recommended information in each recommendation dimension based on the plurality of encoding features to obtain a fitting feature corresponding to each recommendation dimension” and “performing splicing processing on the crossing feature and the fitting feature of each recommendation dimension to obtain a splicing feature corresponding to the recommendation dimension”. However, Wang discloses in the same field of endeavor:
“performing feature crossing processing on the plurality of encoding features to obtain at least one crossing feature” ([pg. 3, Section 3.2, par. 1-3, pg. 3, Figure 1(b)], The cross network receives the embedded vectors to be processed by the cross layers to obtain a crossing feature vector.)
“predicting fitting of the to-be-recommended information in each recommendation dimension based on the plurality of encoding features to obtain a fitting feature corresponding to each recommendation dimension” ([pg. 3, Section 3.3, par. 1; pg. 7, Section 7.1, par. 1-3; pg. 3, Figure 1(b)], A deep network is implemented to process the embedded feature vectors. The Criteo dataset is used for click-through-rate prediction.)
“performing splicing processing on the crossing feature and the fitting feature of each recommendation dimension to obtain a splicing feature corresponding to the recommendation dimension” ([pg. 3, Section 3.4, par. 3-4], The outputs of the cross and deep networks are concatenated (splicing processing) to create the final output layer.)
“performing third mapping processing on the splicing feature of each recommendation dimension to obtain the first recommendation score of the to-be-recommended information corresponding to the recommendation dimension” ([pg. 3, Section 3.4, par. 3-4], The prediction (recommendation dimension) is computed along with a Log Loss value (recommendation score).)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of “feature crossing”, “predicting fitting of the to-be-recommended information in each recommendation dimension based on the plurality of encoding features to obtain a fitting feature corresponding to each recommendation dimension” and “performing splicing processing on the crossing feature and the fitting feature of each recommendation dimension to obtain a splicing feature corresponding to the recommendation dimension” from Wang into the teaching of Koh in view of Yang. Doing so can improve recommender systems by implementing effective feature crosses to learn feature interactions (Wang, Abstract).
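For clarity of the mapping above, the parallel cross-and-deep structure that Wang describes (Section 3.2-3.4) can be sketched as follows. This is a minimal illustrative NumPy sketch only: the dimensions, layer counts, and random weights are assumptions for demonstration and are not drawn from Wang or from the claims.

```python
import numpy as np

rng = np.random.default_rng(0)

def cross_layer(x0, xl, W, b):
    # DCN V2 cross layer (feature crossing): x_{l+1} = x0 * (W @ xl + b) + xl
    return x0 * (W @ xl + b) + xl

def deep_layer(x, W, b):
    # Standard feed-forward ReLU layer (the "fitting" branch)
    return np.maximum(0.0, W @ x + b)

d = 8                          # illustrative embedding ("encoding feature") size
x0 = rng.standard_normal(d)    # embedded input feature vector

# Cross network: stacked explicit feature-crossing layers
xc = x0
for _ in range(2):
    W = rng.standard_normal((d, d)) * 0.1
    xc = cross_layer(x0, xc, W, np.zeros(d))

# Deep network: stacked implicit-interaction layers
xd = x0
for _ in range(2):
    W = rng.standard_normal((d, d)) * 0.1
    xd = deep_layer(xd, W, np.zeros(d))

# Concatenate ("splicing processing") the two branch outputs,
# then map the spliced feature to a single score via a sigmoid
spliced = np.concatenate([xc, xd])                 # shape (2d,)
w_out = rng.standard_normal(2 * d) * 0.1
score = 1.0 / (1.0 + np.exp(-(w_out @ spliced)))   # scalar in (0, 1)
```

The concatenation step corresponds to Wang's parallel ("stacked in parallel") structure, in which the cross-network and deep-network outputs are joined before the final output layer.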
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GARY MAC whose telephone number is (703)756-1517. The examiner can normally be reached Monday - Friday 8:00 AM - 5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Abdullah Kawsar can be reached at (571) 270-3169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/GARY MAC/Examiner, Art Unit 2127
/ABDULLAH AL KAWSAR/Supervisory Patent Examiner, Art Unit 2127