DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment (Submitted on 12/4/2025)

The applicant has amended claims 1, 6, 9 and 15: claim 1 and claim 6 are amended with features from claims 2 and 7 respectively, claim 9 is amended with features of claim 10, and claim 15 is amended with features of claim 16. Claims 2, 7, 10 and 16 are CANCELED. The applicant has provided the following arguments, and the examiner has provided a response to each argument.

In regard to 101 rejections:

- On Pages 11-12, the applicant argues that amended claim 1 should be eligible under Steps 2A and 2B because the added specific limitations provide improvements beyond what is conventional and routine, as the claim recites a specific process carried out to train a tag recommendation model. As described in the present application, the claimed process of amended independent claim 1 provides the following beneficial effects: "ERNIE is used to represent the training materials semantically, which can make the representations of features of the training materials more accurate. By training the double-layer neural network structure, the coverage of the materials is increased, thereby improving the accuracy of the obtained interest tags" and "By splicing the training behavior vectors and the training service vectors in some embodiments of the disclosure, final training semantic vectors of fixed or reasonable length are obtained, which is beneficial to improve the generalization capability of the neural network model." The applicant argues that the claim language of amended independent claim 1 therefore demonstrates that the presently claimed subject matter is directed to a technical problem that is unique to training a tag recommendation model, and that the specific steps of the claimed process solve such unique technical problems and are not well understood, routine, or conventional.

Examiner's Response: Training a tag recommendation model is not inherently unique or unconventional. The key is to demonstrate novelty in the technical problem solving (how the model performs). That is to say, simply stating that the ML performs a task faster, more efficiently or more accurately is not sufficient to overcome the 101 rejections. The applicant needs to demonstrate that the claim has captured the result and its overall technical impact on the recommendation system. The applicant needs to be more specific about the overall improvement to computer technology, so that a POSITA can understand the overall computer improvement for practical applications. As such, the claim merely suggests an improvement to the model itself, which is a generic computer function. The examiner submits the same arguments with respect to claims 9 and 15.
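For illustration only — the following sketch is not drawn from the application or the cited references, and all names, dimensions, and pooling choices are hypothetical — the averaging-and-splicing operation described in the quoted passage might be sketched as: each behavior training material is a variable-length sequence of token embeddings pooled to a fixed-size vector, the pooled behavior vectors are averaged, and the result is fused (spliced) with a fixed-length service vector so that the final training semantic vector has a fixed length.

    # Illustrative sketch only; names and dimensions are hypothetical and do not
    # reflect the applicant's actual implementation.
    import numpy as np

    def pool_behavior_material(token_embeddings):
        # token_embeddings: (num_tokens, dim) array; num_tokens varies per material,
        # so the raw behavior representation has a different length each time.
        return token_embeddings.mean(axis=0)  # fixed-size summary of one material

    def build_training_semantic_vector(behavior_materials, service_vector):
        # Average the pooled behavior vectors, then fuse the result with the
        # fixed-length service vector by concatenation, giving a fixed-length output.
        averaged = np.mean([pool_behavior_material(m) for m in behavior_materials], axis=0)
        return np.concatenate([averaged, service_vector])

    # Hypothetical example: two behavior materials with 5 and 9 tokens respectively,
    # 8-dimensional embeddings, and a 4-dimensional service vector.
    materials = [np.random.rand(5, 8), np.random.rand(9, 8)]
    service = np.random.rand(4)
    print(build_training_semantic_vector(materials, service).shape)  # (12,)

The pooling and concatenation choices above are assumptions; the only point illustrated is that the output length is independent of how many behavior materials were collected or how long each one is.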
- On Page 12, the applicant argues that amended claim 6 should be eligible under Steps 2A and 2B because the added specific limitations provide improvements beyond what is conventional and routine, as claim 6 provides a process in which "Through the method for obtaining a tag provided by the disclosure, the users interest tags can be accurately obtained, so that relevant materials can be recommended accurately" and in which "the tag vectors are parsed by using sigmoid as the activation function and that the interest tags corresponding to the features are obtained from the features in the tag vectors, and the interest tags of the user are determined from the obtained interest tags." The applicant argues that claim 6 demonstrates that the presently claimed subject matter is directed to a technical problem that is unique to training a tag recommendation model, and that the specific steps of the claimed process solve such unique technical problems and are not well understood, routine, or conventional.

Examiner's Response: As noted before, training a tag recommendation model is not inherently unique or unconventional. In regard to the applicant's argument specific to parsing of tag vectors, the parsing process is conventional, routine and well known to a POSITA, as it involves tokenization, which is well understood in NLP applications; the examiner submits that this is again merely an improvement to the model itself. That is to say, simply stating that the ML performs a task faster, more efficiently or more accurately is not sufficient to overcome the 101 rejections. The applicant needs to demonstrate that the claim has captured the result and its overall technical impact on the recommendation system. The applicant needs to be more specific about the overall improvement to computer technology, so that any POSITA can understand the overall computer improvement for practical applications. Thus, claim 6 merely suggests an improvement to the model itself, which is a generic computer function.

In conclusion, the amendments to claims 1, 6, 9 and 15 do not overcome the 101 rejections, and the examiner reaffirms the 101 rejections of claims 1, 3-6, 8-9, 11-15 and 17-20. Claims 2, 7, 10 and 16 have been CANCELED by the applicant.

In regard to 103 rejections:

- On Pages 16-17, the applicant summarizes amended claim 1 as six limitations designated as features AJ, BJ, CJ, DJ, EJ, and FJ. The examiner notes that limitations [Feature EJ] and [Feature FJ], with respect to vector lengths and averaging, are drawn from claim 2 (and correspondingly claims 10 and 16). The examiner submits that the primary reference "Weiss" teaches the "fully connected neural network" as an "autoencoder" in claims 4, 6, 12 and 18.

Applicant Argument #1 (Page 17):
- The applicant argues, with reference to features A and B, that Weiss fails to explicitly disclose "in response to receiving an instruction," and that Wang pertains to knowledge data collection and not to training materials for tag recommendation.

Examiner's Response: The examiner "totally and respectfully disagrees" with the argument. The examiner notes that the applicant refers to WANG's teachings at [0084], [0085], [0007], [0141], [0142] and [0124] for the arguments.
The examiner first interprets that training materials for tag recommendation include a data set of resources and user profiles with tags, used to build relationships between the tags and the resources. Reference WANG discloses in [0093]: "In the natural language processing system shown in FIG. 1, the user equipment may receive an instruction of the user. For example, the user equipment may receive a piece of text entered by the user, and then initiate a request to the data processing device, so that the data processing device executes a natural language processing application (for example, text classification, text inference, named entity recognition, or translation) on the piece of text obtained by the user equipment, to obtain a processing result (for example, a classification result, an inference result, a named entity recognition result, or a translation result) of a corresponding natural language processing application for the piece of text. For example, the user equipment may receive a piece of Chinese text entered by the user, and then initiate a request to the data processing device, so that the data processing device performs entity classification on the piece of Chinese text, to obtain an entity classification result for the piece of Chinese text. For example, the user equipment may receive a piece of Chinese text entered by the user, and then initiate a request to the data processing device, so that the data processing device translates the piece of Chinese text into English, to obtain an English translation for the piece of Chinese text", and further discloses in [0018]: "In a target processing model training process, the parameters of the original processing model and/or the parameters of the original fusion model are adjusted based on the first knowledge data and the training text, to obtain the target processing model and/or the target fusion model. This improves a capability of understanding natural language by the target processing model and/or the target fusion model, and improves accuracy of the processing result of the target processing model".

Applicant Argument #2 (Page 18):
- The applicant argues, with respect to feature C, that CAO does not disclose the semantic vector training and that CAO's vectors are independent and are not integrated with any semantic vector. Further, the applicant argues that, in contrast, amended claim 1 specifies aggregating social network information into a pre-existing "training semantic vector" derived from a previous step, and that this step is entirely absent in CAO. The applicant further argues that CAO performs "feature creation," not the "aggregating into" recited in amended claim 1: as shown in paragraph [0085] of CAO, the system "creates an overall feature matrix" by combining path counts from meta-paths and uses it as the training feature vector. The applicant refers to [0084] and [0085] for the arguments.

Examiner's Response: The examiner "totally and respectfully disagrees". As shown in paragraph [0085] of CAO, the system "creates an overall feature matrix" by combining path counts from meta-paths and uses it as the training feature vector. The reference CAO teaches in [0092]: "As used herein, a meta-path corresponds to a type of path within the network schema, containing a certain sequence of link types. For example, in FIG. 7, a meta-path denotes a composite relationship from tweets to venues. The semantic meaning of this meta-path is that the tweet and the venue share common words via tips.
The link type "contain.sup.−1" represents the inverted relation of "contain". The tweet and venues connected through the meta-path can be regarded as being more likely to be linked than those without such correlations". Further, CAO teaches in [0093]: "Different meta-paths usually represent different relationships among linked nodes with different semantic meanings. For example, the meta-path denotes that the tweet was posted by a Twitter user who is a mayor of the venue in Foursquare, while the meta-path indicates the tweet was posted by a Twitter user whose friend checks in at the venue. In this way, relationships between tweets and venues can be described by different meta-paths with different semantics. Thus, four types of meta-paths as shown in FIG. 7 are extracted and summarized in FIG. 8." Further, the examiner submits that CAO teaches in [0015]: "In some implementations, identifying for the new social message corresponding meta-paths to the particular venue includes: obtaining a social graph as a social network schema based on types of entities and relationships extracted from a collection of messages and the collection of venues, wherein each type of entities is represented as a type of node in the social network schema and the relationships between the entities are represented as different types of links; and based on the social graph, content of the new social message and/or a user writing the new social message and/or social friends of the user, identifying for the new social message corresponding meta-paths connecting the new social message to the particular venue, wherein each of the corresponding meta-paths represents a type of path within the social network, containing a certain sequence of link types".

Applicant Argument #3 (Page 18):
- The applicant argues, with respect to feature C, that CAO does not teach the "aggregation into a training semantic vector" because it does not teach the key input of a "training semantic vector" as defined in claim 1; rather, the reference directly "computes features" based on the raw "social message" and "meta-paths", and the feature generation process is performed independently and does not take any form of "training semantic vector" as input. The applicant references CAO's [0084] and [0085] for the arguments, and argues that none of the references makes up for this deficiency.

Examiner's Response: The examiner "totally and respectfully disagrees" with the argument. CAO teaches in [0084]: "Using the plurality of social message and venue pairs, the server 104 computes features based on meta-paths and geo-coordinate information", and in [0085] teaches: "the path counts for different meta-paths are combined to create an overall feature matrix and the overall feature matrix is represented as the training feature vector." The reference further teaches in [0085]: "first encoding the respective training social message in the pair as a label". The reference CAO teaches in [0092]: "As used herein, a meta-path corresponds to a type of path within the network schema, containing a certain sequence of link types. For example, in FIG. 7, a meta-path denotes a composite relationship from tweets to venues. The semantic meaning of this meta-path is that the tweet and the venue share common words via tips. The link type "contain.sup.−1" represents the inverted relation of "contain". The tweet and venues connected through the meta-path can be regarded as being more likely to be linked than those without such correlations".
Further, CAO teaches in [0093]: "Different meta-paths usually represent different relationships among linked nodes with different semantic meanings. For example, the meta-path denotes that the tweet was posted by a Twitter user who is a mayor of the venue in Foursquare, while the meta-path indicates the tweet was posted by a Twitter user whose friend checks in at the venue. In this way, relationships between tweets and venues can be described by different meta-paths with different semantics. Thus, four types of meta-paths as shown in FIG. 7 are extracted and summarized in FIG. 8." Further, the examiner submits that CAO teaches in [0015]: "In some implementations, identifying for the new social message corresponding meta-paths to the particular venue includes: obtaining a social graph as a social network schema based on types of entities and relationships extracted from a collection of messages and the collection of venues, wherein each type of entities is represented as a type of node in the social network schema and the relationships between the entities are represented as different types of links; and based on the social graph, content of the new social message and/or a user writing the new social message and/or social friends of the user, identifying for the new social message corresponding meta-paths connecting the new social message to the particular venue, wherein each of the corresponding meta-paths represents a type of path within the social network, containing a certain sequence of link types".

Applicant Argument #4 (Page 19):
- The applicant argues that extracting semantic entities and relationships essentially involves extracting and constructing a new set of features from social data, which fundamentally differs in technical means and concept from the "aggregation" operation recited in amended claim 1, which involves injecting or fusing social network information into an existing semantic vector carrier. Therefore, the applicant argues, feature C cannot be reached by combining Weiss, Wang, and CAO.

Examiner's Response: The examiner "totally and respectfully disagrees" with the combination argument. First, the examiner interprets that fusing the social network involves integrating user attributes and social relationships as a vector representation. Weiss teaches in [0007]: "the present invention includes systems, methods, circuits, and associated computer executable code for deep learning based natural language understanding, wherein: (1) a word tokenization and spelling correction model/machine may generate corrected word sets outputs based on respective character strings inputs; and/or (2) a word semantics derivation model/machine may generate semantically tagged sentences outputs based on respective word sets inputs". The reference CAO teaches in [0092]: "As used herein, a meta-path corresponds to a type of path within the network schema, containing a certain sequence of link types. For example, in FIG. 7, a meta-path denotes a composite relationship from tweets to venues. The semantic meaning of this meta-path is that the tweet and the venue share common words via tips. The link type "contain.sup.−1" represents the inverted relation of "contain". The tweet and venues connected through the meta-path can be regarded as being more likely to be linked than those without such correlations".
It may be known to a POSITA that a meta-path providing a composite relationship can represent social network information in a semantic vector carrier, where meta-paths capture semantic relationships. Further, WANG discloses in [0007]: "using the target text vector and the target knowledge vector based on a target fusion model, to obtain a fused target text vector and a fused target knowledge vector; and processing the fused target text vector and/or the fused target knowledge vector based on a target processing model, to obtain a processing result corresponding to a target task", and teaches in [0008]: "uses the obtained fused target text vector and/or the obtained fused target knowledge vector as input data for the target processing model".

Applicant Argument #5 (Page 20):
- The applicant argues, with respect to feature D, that reference YE teaches a 2-layer NN for service evaluation based on quality data, not interest tags. The applicant refers to [0118] and [0027]-[0029] of YE. Further, the applicant argues that YE addresses the completely different problem of monitoring communication networks and does not teach the "tag recommendation model" of the claim, in which the process examines aspects of tags to help understand user interests and develops algorithms based on the user's tagging behavior.

Examiner's Response: The examiner "totally and respectfully disagrees" with the argument. The examiner first submits that it is perhaps known to a POSITA that attention mechanisms in NLP significantly enhance the ability to simulate biological behavior by allowing models to focus on relevant parts of the input data. They enable dynamic weighting of input tokens based on their relevance to the current task, overcoming the limitations of fixed-size context vectors. This capability mirrors human cognitive processes, as it allows models to prioritize the information that is most informative for the current output step, improving performance in tasks such as translation and summarization. The examiner submits that WANG teaches in [0115]: "The attention mechanism simulates an internal process of biological observation behavior, and is a mechanism that aligns internal experience with external feeling to increase observation precision of some regions. The mechanism can quickly select high-value information from a large amount of information by using limited attention resources. The attention mechanism is widely used in natural language processing tasks, especially machine translation, because the attention mechanism can quickly extract an important feature of sparse data. A self-attention mechanism (self-attention mechanism) is an improvement of the attention mechanism. The self-attention mechanism reduces dependence on external information and is better at capturing an internal correlation of data or features."

Applicant Argument #6 (Page 21):
- The applicant argues that reference YE, within the context of the double-layer neural network, does not disclose a tag recommendation model because it is used for service quality evaluation, and argues that it addresses a completely different problem. The applicant makes reference to YE's teachings in [0118], [0027]-[0029] and [0049]-[0053] in support of the arguments.

Examiner's Response: The examiner "totally and respectfully disagrees" with the argument.
Reference YE discloses in [0027]: "determining the tag based on the quality monitoring", further discloses in [0028]: "determining an evaluation indicator of service quality based on the service type to which the service quality evaluation model is applicable", and further discloses in [0029]: "calculating a value of the evaluation indicator using the quality monitoring data, and determining the value of the evaluation indicator as a tag", with further teaching in [0053]: "a training module, configured to train a deep neural network model using the training set to obtain a service quality evaluation model". In this context, the determination of the tag based on quality monitoring relates to a recommendation model, with the evaluation indicator serving as the "interest tag".

Applicant Argument #7 (Page 21):
- The applicant argues, with respect to features E and F, that reference Weiss does not disclose "representing the behavior training materials as training behavior vectors of different lengths". The specific context of the argument is the term "vector", with specific references to [0074]-[0083]. Specifically, the argument is that Weiss is limited to character sequences and structural annotations, with no mention of vector representations.

Examiner's Response: The examiner totally "disagrees" with the argument. First, the examiner interprets that training behavior vectors of different lengths involve handling sequences that vary in length due to variability in the number of tokens or characters in the input data. Reference Weiss teaches in [0039]: "According to some embodiments of the present invention, a neural network based system for spell correction and tokenization of natural language, may comprise: (1) An artificial neural network architecture, to generate variable length 'character level output streams' for system fed variable length 'character level input streams'" and "An unsupervised training mechanism for adjusting the neural network to learn correct variable length 'character level output streams', wherein correct variable length 'character level output streams' needs to be identical to respective original variable length 'character level input streams' prior to their random character level modifications". Further, reference Weiss discloses in [0047]: "According to some embodiments, an unsupervised, or weakly supervised, learning process, executed by a word tokenization and spelling correction model of a system for Deep Learning may include: (1) receiving a string of one or more characters; (2) encoding and indexing the characters as a multi-value index; (3) embedding each character as a numbers vector; (4) entering a matrix of one or more character number vectors, as input".

Applicant Argument #8 (Page 21):
- The applicant argues that YE's "tag" and the "interest tag" in amended claim 1 are different in nature and function, suggesting that the "tag" in YE is an "evaluation indicator value" calculated from service quality monitoring data (as described in paragraph [0029]), serving as a continuous, quantitative regression target, whereas the "interest tag" in amended claim 1 is a discrete, categorical recommendation target. Therefore, the applicant argues, feature D cannot be reached by combining Weiss, WANG, CAO and YE.

Examiner's Response: The examiner "totally and respectfully disagrees" with the argument. The examiner first interprets that an "interest tag" is a tool to capture the user's needs and interests, helping build deeper relationships and personal interactions by categorizing the tags.
Reference YE discloses in [0094]: "The embodiments of the present disclosure provide a training method for service quality evaluation models. The method may be applicable to the network frame illustrated in FIG. 1. The network frame may include service nodes, monitoring nodes, and a model training node. The service node may be a node in a CDN service system that provides services to users", and further discloses in [0110]: "The tag may intuitively reflect the service quality, and for different service types, the indicators used to evaluate service quality may be different. According to the service type to which the service quality evaluation model is applicable, a corresponding evaluation indicator may be used to determine the tag. The step of determining the tag based on the quality monitoring data may include: determining an evaluation indicator of service quality based on the service type to which the service quality evaluation model is applicable".

Applicant Argument #9 (Page 22):
- The applicant argues that reference WANG does not disclose "representing the service training materials as fixed-length training service vectors". The applicant argues that attributes may be of fixed length, but that WANG does not teach representing "service training materials" as vectors. The specific context of the argument is the term "vector", with specific references to [0141] and [0145]. Once again, the argument is that WANG focuses on knowledge representation and not on fusing the behavior vectors.

Examiner's Response: The examiner totally "disagrees" with the argument. The examiner refers to the same counter-argument as before in regard to knowledge representation. As is perhaps known to a POSITA, the examiner first interprets that behavior service training provides support for users to improve their behavior, and that representing behavior training in a fixed length involves converting complex data into a sequence of numbers, i.e., a feature vector. WANG discloses in [0135]: "the LM may also be understood as a probability model used to calculate a probability of a sentence. In other words, the language model is a probability distribution of a natural language text sequence, and the probability distribution represents a possibility of existence of text with a specific sequence and a specific length. In short, the language model predicts a next word based on a context. Because there is no need to manually tag a corpus, the language model can learn rich semantic knowledge from an unlimited large-scale corpus". WANG discloses in [0122]: "By using NLP and components of the NLP, a very large amount of text data can be managed or a lot of automated tasks can be performed, and various problems can be resolved, such as automatic summarization (automatic summarization), machine translation (machine translation, MT), named entity recognition (named entity recognition, NER), relation extraction (relation extraction, RE), information extraction (information extraction, IE), sentiment analysis, speech recognition (speech recognition), question answering (question answering), and topic segmentation". A sentiment analysis is a "behavior analysis".
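As a purely illustrative aside — not drawn from WANG, YE, or the claims, with all names and dimensions below hypothetical — the interpretation above (converting variable-length material into a fixed-length feature vector) might be sketched as mean-pooling token embeddings into a vector whose size does not depend on the input length.

    # Illustrative sketch only; the embedding table is a hypothetical stand-in
    # for a learned encoder and is not taken from any cited reference.
    import numpy as np

    EMBED_DIM = 8
    rng = np.random.default_rng(0)
    embedding_table = {}  # lazily assigned token embeddings

    def embed(token):
        # Assign each distinct token a fixed random embedding.
        if token not in embedding_table:
            embedding_table[token] = rng.normal(size=EMBED_DIM)
        return embedding_table[token]

    def fixed_length_vector(tokens):
        # Variable-length token sequence -> fixed-length (EMBED_DIM,) feature
        # vector by averaging the token embeddings.
        return np.mean([embed(t) for t in tokens], axis=0)

    short_material = "user clicked sports article".split()
    long_material = "user searched tennis rackets then watched a tennis match replay".split()
    print(fixed_length_vector(short_material).shape)  # (8,)
    print(fixed_length_vector(long_material).shape)   # (8,) -- same length either way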
Applicant Argument #10 (Page 22):
- The applicant argues that reference WANG does not disclose "averaging" or a "fusing operation" and therefore does not disclose "obtaining the training semantic vectors by averaging the training behavior vectors and fusing the training behavior vectors that are averaged with the training service vectors" as required by amended claim 1.

Examiner's Response: The examiner totally "disagrees" with the argument. The reference WANG teaches in [0008]: "According to the solution provided in this application, the target fusion model fuses the target text vector corresponding to the to-be-processed text and the target knowledge vector corresponding to the target knowledge data", discloses in [0050]: "process the training text to obtain a first text vector; fuse the first text vector and the first knowledge vector based on an original fusion model" and "adjust parameters of the original fusion model based on the first task result and the second task result, to obtain the target fusion model", discloses in [0112]: "A recurrent neural network (recurrent neural network, RNN) is used to process sequence data", discloses in [0210]: "optionally, if the first knowledge data includes structured knowledge, the first knowledge data may be encoded by using an existing knowledge encoding method (for example, translating embedding, TransE), and obtained encoded information is the first knowledge vector", and further discloses in [0298]: "The knowledge aggregator #5 may have a complex multilayer network structure, for example, a multilayer self-attention mechanism network structure, a multilayer perceptron network structure, or a recurrent neural network structure, and may simply weight and average the encoded text sequence and the encoded knowledge sequence".

Applicant Argument #11 (Page 23):
- The applicant argues, with reference to claim 6, that the prior art references do not teach parsing of the tag vectors and hence the references cannot be combined.

Examiner's Response: The examiner "totally and respectfully" disagrees with the argument. The applicant's argument is with respect to amended claim 6. The parsing is taught by reference "Kasai", which is added in the new grounds of rejection over Weiss, WANG, CAO and Kasai.

In conclusion, the examiner's rebuttal of the applicant's arguments for claim 1 applies equally to claims 9 and 15, and the examiner reaffirms the 103 rejections of claims 1, 3-6, 8-9, 11-15 and 17-20 and moves the application to FINAL REJECTION under 103. Claims 2, 7, 10 and 16 have been CANCELED by the applicant.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1, 3-6, 8-9, 11-15 and 17-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: According to the first part of the analysis, in the instant case, claims 1 and 6 are each directed to a method, claim 9 is directed to an electronic device comprising a processor and a memory, and claim 15 is directed to a storage medium storing contents for execution.
Thus, each of the claims falls within one of the four statutory categories (i.e., process, machine, manufacture, or composition of matter).

In regard to claim 1 (Currently Amended):

Step 2A Prong 1: "obtaining training semantic vectors that comprise the interest tags by representing features of the training materials using a semantic enhanced representation frame;" is a mental step of vector data representation. "obtaining training encoding vectors by aggregating social networks into the training semantic vectors;" is a mental step of vector data aggregation. "representing the behavior training materials as training behavior vectors of different lengths" is a mental step of vector data representation. "and representing the service training materials as fixed-length training service vectors in the semantic enhanced representation frame" is a mental step of vector data representation. "and obtain the training semantic vectors by averaging the training behavior vectors and fusing the training behavior vectors that are averaged with the training service vectors" is a mental step of vector data representation.

Additional Elements, Step 2A Prong 2: "A method for training a tag recommendation model, comprising:" recited in the preamble does not integrate the judicial exception into a practical application. These additional elements are merely directed to using a computer as a tool to perform an abstract idea. See MPEP 2106.05(h). "collecting training materials that comprise interest tags" does not integrate the judicial exception into a practical application. These additional elements are merely directed to using a computer as a tool to perform an abstract idea. See MPEP 2106.05(h). "and obtaining the tag recommendation model by training a double-layer neural network structure using the training encoding vectors as inputs and the interest tags as outputs" does not integrate the judicial exception into a practical application. These additional elements are merely directed to using a computer as a tool to perform an abstract idea. See MPEP 2106.05(h). "wherein the training materials comprise behavior training materials and service training materials" does not integrate the judicial exception into a practical application. These additional elements are merely directed to using a computer as a tool to perform an abstract idea. See MPEP 2106.05(h).

Step 2B: "A method for training a tag recommendation model, comprising:" recited in the preamble is directed to a generic computer function and does not amount to significantly more than the judicial exception in the claim. See MPEP 2106.05(h). "collecting training materials that comprise interest tags" is directed to a generic computer function and does not amount to significantly more than the judicial exception in the claim. These additional elements are merely directed to using a computer as a tool to perform an abstract idea. See MPEP 2106.05(h). "obtaining training encoding vectors by aggregating social networks into the training semantic vectors;" is directed to a generic computer function and does not amount to significantly more than the judicial exception in the claim. See MPEP 2106.05(h). "and obtaining the tag recommendation model by training a double-layer neural network structure using the training encoding vectors as inputs and the interest tags as outputs" is directed to a generic computer function and does not amount to significantly more than the judicial exception in the claim. See MPEP 2106.05(h).
"wherein the training materials comprise behavior training materials and service training materials" does not amount to significantly more than the judicial exception in the claim. See MPEP 2106.05(h).

In regard to claim 3 (Original):

Step 2A Prong 1: "wherein obtaining the training encoding vectors by aggregating the social networks into the training semantic vectors, comprises:" is a mental step of data aggregation. "determining intimacy values between any two of the social networks;" is a mental step of data comparison. "the intimacy values as values of elements in a matrix," is a mental step of data comparison. "and generating an adjacency matrix based on the values of the elements; in response to that a sum of weights of elements in each row of the adjacency matrix is one, assigning weights to the elements, wherein a weight assigned to each of elements arranged diagonally in the adjacency matrix is greater than weights assigned to other elements;" is a mental step of matrix data representation. "and obtaining a training semantic vector corresponding to each element in the adjacency matrix, and obtaining the training encoding vectors by calculating a product of the training semantic vector and a value of each element after assigning by a graph convolutional network." is a mental step of matrix data representation.

Step 2A Prong 2: no additional elements.

Step 2B: no additional elements.

In regard to claim 4 (Original):

Step 2A Prong 2: "wherein obtaining the tag recommendation model by training the double-layer neural network structure using the training encoding vectors as the inputs and the interest tags as the outputs, comprises:" does not integrate the judicial exception into a practical application. These additional elements are merely directed to using a computer as a tool to perform an abstract idea. See MPEP 2106.05(h). "obtaining new training encoding vectors by inputting the training encoding vectors into a forward network;" does not integrate the judicial exception into a practical application. These additional elements are merely directed to using a computer as a tool to perform an abstract idea. See MPEP 2106.05(h). "obtaining training tag vectors by inputting the new training encoding vectors into a fully-connected network" does not integrate the judicial exception into a practical application. These additional elements are merely directed to using a computer as a tool to perform an abstract idea. See MPEP 2106.05(h). "and obtaining the tag recommendation model by determining the training tag vectors as independent variables, and outputs as the interest tags" does not integrate the judicial exception into a practical application. These additional elements are merely directed to using a computer as a tool to perform an abstract idea. See MPEP 2106.05(h).

Step 2B: "wherein obtaining the tag recommendation model by training the double-layer neural network structure using the training encoding vectors as the inputs and the interest tags as the outputs, comprises:" is directed to a generic computer function and does not amount to significantly more than the judicial exception in the claim. See MPEP 2106.05(h). "obtaining new training encoding vectors by inputting the training encoding vectors into a forward network;" is directed to a generic computer function and does not amount to significantly more than the judicial exception in the claim. See MPEP 2106.05(h).
"obtaining training tag vectors by inputting the new training encoding vectors into a fully-connected network" is directed to a generic computer function and does not amount to significantly more than the judicial exception in the claim. See MPEP 2106.05(h). "and obtaining the tag recommendation model by determining the training tag vectors as independent variables, and outputs as the interest tags" is directed to a generic computer function and does not amount to significantly more than the judicial exception in the claim. See MPEP 2106.05(h).

In regard to claim 5 (Original):

Step 2A Prong 1: "and determining first interest tags corresponding to the interest tags in the training tag vectors, calculating a ratio of the first interest tags to the interest tags," is a mental step of data identification.

Additional Elements, Step 2A Prong 2: "wherein obtaining the tag recommendation model by determining the training tag vectors as the independent variables, and the outputs as the interest tags, comprises:" does not integrate the judicial exception into a practical application. These additional elements are merely directed to using a computer as a tool to perform an abstract idea. See MPEP 2106.05(h). "obtaining interest tags in the training tag vectors by parsing the training tag vectors by an activation function;" does not integrate the judicial exception into a practical application. These additional elements are merely directed to using a computer as a tool to perform an abstract idea. See MPEP 2106.05(h). "determining a probability threshold value of the tag recommendation model, and obtaining the tag recommendation model whose output tag probability value is greater than or equal to the probability threshold value" does not integrate the judicial exception into a practical application. These additional elements are merely directed to using a computer as a tool to perform an abstract idea. See MPEP 2106.05(h).

Step 2B: "wherein obtaining the tag recommendation model by determining the training tag vectors as the independent variables, and the outputs as the interest tags, comprises:" is directed to a generic computer function and does not amount to significantly more than the judicial exception in the claim. See MPEP 2106.05(h). "obtaining interest tags in the training tag vectors by parsing the training tag vectors by an activation function;" is directed to a generic computer function and does not amount to significantly more than the judicial exception in the claim. See MPEP 2106.05(h). "determining a probability threshold value of the tag recommendation model, and obtaining the tag recommendation model whose output tag probability value is greater than or equal to the probability threshold value" is directed to a generic computer function and does not amount to significantly more than the judicial exception in the claim. See MPEP 2106.05(h).

In regard to claim 6 (Currently Amended):

Step 2A Prong 1: "obtaining training semantic vectors that comprise the interest tags by representing features of the training materials using a semantic enhanced representation frame;" is a mental step of vector data representation. "obtaining training encoding vectors by aggregating social networks into the training semantic vectors;" is a mental step of vector data aggregation.
"and obtaining the training semantic vectors that comprise the interest tags by representing the features of the training materials using the semantic enhanced representation frame" is a mental step of data identification. "representing the behavior training materials as training behavior vectors of different lengths" is a mental step of vector data representation. "and representing the service training materials as fixed-length training service vectors in the semantic enhanced representation frame" is a mental step of vector data representation. "and obtain the training semantic vectors by averaging the training behavior vectors and fusing the training behavior vectors that are averaged with the training service vectors" is a mental step of vector data representation.

Step 2A Prong 2: "A method for obtaining a tag, comprising: obtaining corresponding materials in response to receiving an instruction for obtaining an interest tag;" does not integrate the judicial exception into a practical application. These additional elements are merely directed to using a computer as a tool to perform an abstract idea. See MPEP 2106.05(h). "and obtaining the interest tags by inputting the encoding vectors into a pre-trained tag recommendation model." does not integrate the judicial exception into a practical application. These additional elements are merely directed to using a computer as a tool to perform an abstract idea. See MPEP 2106.05(h). "wherein the training materials comprise behavior training materials and service training materials" does not integrate the judicial exception into a practical application. These additional elements are merely directed to using a computer as a tool to perform an abstract idea. See MPEP 2106.05(h).

Step 2B: "A method for obtaining a tag, comprising: obtaining corresponding materials in response to receiving an instruction for obtaining an interest tag;" is directed to a generic computer function and does not amount to significantly more than the judicial exception in the claim. See MPEP 2106.05(h). "and obtaining the interest tags by inputting the encoding vectors into a pre-trained tag recommendation model." is directed to a generic computer function and does not amount to significantly more than the judicial exception in the claim. See MPEP 2106.05(h). "wherein the training materials comprise behavior training materials and service training materials" does not amount to significantly more than the judicial exception in the claim. These additional elements are merely directed to using a computer as a tool to perform an abstract idea. See MPEP 2106.05(h).

In regard to claim 8 (Currently Amended):

Step 2A Prong 2: "wherein parsing the tag vectors, and outputting the interest tags based on the probability threshold value of the tag recommendation model, comprises:" does not integrate the judicial exception into a practical application. These additional elements are merely directed to using a computer as a tool to perform an abstract idea. See MPEP 2106.05(h). "obtaining a plurality of tags by parsing the tag vectors based on an activation function in the tag recommendation model" does not integrate the judicial exception into a practical application. These additional elements are merely directed to using a computer as a tool to perform an abstract idea. See MPEP 2106.05(h).
"and determining tags whose occurrence probability is greater than or equal to the probability threshold value in the plurality of tags as the interest tags" does not integrate the judicial exception into a practical application. These additional elements are merely directed to using a computer as a tool to perform an abstract idea. See MPEP 2106.05(h).

Step 2B: "wherein parsing the tag vectors, and outputting the interest tags based on the probability threshold value of the tag recommendation model, comprises:" is directed to a generic computer function and does not amount to significantly more than the judicial exception in the claim. See MPEP 2106.05(h). "obtaining a plurality of tags by parsing the tag vectors based on an activation function in the tag recommendation model" is directed to a generic computer function and does not amount to significantly more than the judicial exception in the claim. See MPEP 2106.05(h). "and determining tags whose occurrence probability is greater than or equal to the probability threshold value in the plurality of tags as the interest tags" is directed to a generic computer function and does not amount to significantly more than the judicial exception in the claim. See MPEP 2106.05(h).

In regard to claim 9 (Currently Amended):

Step 2A Prong 1: "obtaining training semantic vectors that comprise the interest tags by representing features of the training materials using a semantic enhanced representation frame;" is a mental step of vector data representation. "obtaining training encoding vectors by aggregating social networks into the training semantic vectors;" is a mental step of vector data aggregation. "and obtaining the training semantic vectors that comprise the interest tags by representing the features of the training materials using the semantic enhanced representation frame" is a mental step of data identification. "representing the behavior training materials as training behavior vectors of different lengths" is a mental step of vector data representation. "and representing the service training materials as fixed-length training service vectors in the semantic enhanced representation frame" is a mental step of vector data representation. "and obtain the training semantic vectors by averaging the training behavior vectors and fusing the training behavior vectors that are averaged with the training service vectors" is a mental step of vector data representation.

Additional Elements, Step 2A Prong 2: "An electronic device, comprising: a processor; and a memory communicatively coupled to the processor; wherein the memory is configured to store instructions executable by the processor, and the processor is configured to:" does not integrate the judicial exception into a practical application. These additional elements are merely directed to using a computer as a tool to perform an abstract idea. See MPEP 2106.05(h). "collecting training materials that comprise interest tags" does not integrate the judicial exception into a practical application. These additional elements are merely directed to using a computer as a tool to perform an abstract idea. See MPEP 2106.05(h). "and obtaining the tag recommendation model by training a double-layer neural network structure using the training encoding vectors as inputs and the interest tags as outputs" does not integrate the judicial exception into a practical application. These additional elements are merely directed to using a computer as a tool to perform an abstract idea. See MPEP 2106.05(h).
"wherein the training materials comprise behavior training materials and service training materials" does not integrate the judicial exception into a practical application. These additional elements are merely directed to using a computer as a tool to perform an abstract idea. See MPEP 2106.05(h).

Step 2B: "An electronic device, comprising: a processor; and a memory communicatively coupled to the processor; wherein the memory is configured to store instructions executable by the processor, and the processor is configured to:" is directed to a generic computer function and does not amount to significantly more than the judicial exception in the claim. See MPEP 2106.05(h). "collecting training materials that comprise interest tags" is directed to a generic computer function and does not amount to significantly more than the judicial exception in the claim. These additional elements are merely directed to using a computer as a tool to perform an abstract idea. See MPEP 2106.05(h). "and obtaining the tag recommendation model by training a double-layer neural network structure using the training encoding vectors as inputs and the interest tags as outputs" is directed to a generic computer function and does not amount to significantly more than the judicial exception in the claim. These additional elements are merely directed to using a computer as a tool to perform an abstract idea. See MPEP 2106.05(h). "wherein the training materials comprise behavior training materials and service training materials" is directed to a generic computer function and does not amount to significantly more than the judicial exception in the claim. These additional elements are merely directed to using a computer as a tool to perform an abstract idea. See MPEP 2106.05(h).

In regard to claim 11 (Original):

Step 2A Prong 2: "wherein, the processor is further configured to: determine intimacy values between any two of the social networks" does not integrate the judicial exception into a practical application. These additional elements are merely directed to using a computer as a tool to perform an abstract idea. See MPEP 2106.05(h).

Step 2B: "wherein, the processor is further configured to: determine intimacy values between any two of the social networks" is directed to a generic computer function and does not amount to significantly more than the judicial exception in the claim. See MPEP 2106.05(h).