DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Status

Claims 1-7 are currently pending and under examination herein. Claims 1-7 are rejected.

Priority

The instant application is a U.S. national stage application under 35 U.S.C. § 371 of PCT Application No. PCT/CN2021/135524, filed December 3, 2021, which claims priority to Chinese Application No. 202011564310.8, filed December 25, 2020. At this point in the examination, the effective filing date of the claims is December 25, 2020. Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Information Disclosure Statement

The Information Disclosure Statement filed 07/12/2022 is in compliance with the provisions of 37 CFR 1.97 and has therefore been considered, in part. Signed copies of the IDS documents are included in this Office Action. It is noted that certain references have not been considered and are lined through, as they do not comply with the requirements set forth in 37 CFR 1.97. The instant citations lack (a) a concise explanation of the relevance, as it is presently understood by the individual designated in 37 CFR 1.56(c) most knowledgeable about the content of the information, unless a complete translation is provided; and/or (b) a written English language translation of a non-English language document, or portion thereof, if it is within the possession, custody or control of, or is readily available to, any individual designated in 37 CFR 1.56(c).

Drawings

The drawings filed on 07/12/2022 are accepted.

Abstract

Applicant is reminded of the proper content of an abstract of the disclosure. A patent abstract is a concise statement of the technical disclosure of the patent and should include that which is new in the art to which the invention pertains.
The abstract should not refer to purported merits or speculative applications of the invention and should not compare the invention with the prior art. If the patent is of a basic nature, the entire technical disclosure may be new in the art, and the abstract should be directed to the entire disclosure. If the patent is in the nature of an improvement in an old apparatus, process, product, or composition, the abstract should include the technical disclosure of the improvement. The abstract should also mention by way of example any preferred modifications or alternatives. Where applicable, the abstract should include the following: (1) if a machine or apparatus, its organization and operation; (2) if an article, its method of making; (3) if a chemical compound, its identity and use; (4) if a mixture, its ingredients; (5) if a process, the steps. Extensive mechanical and design details of an apparatus should not be included in the abstract. The abstract should be in narrative form and generally limited to a single paragraph within the range of 50 to 150 words in length. The abstract exceeds 150 words. See MPEP § 608.01(b) for guidelines for the preparation of patent abstracts.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1, 2, 4-5 and 7 are rejected under 35 U.S.C. 112(b) or 35 U.S.C.
112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

There is insufficient antecedent basis for the following limitations in claim 1:

Limitation "the molecule" in lines 6, 14, 15 and 24 of page 9. The rejection might be overcome by, for example, amending the claim to replace "each molecule" in line 3 with "a molecule".

Limitation "the representation" in lines 13 and 14 of page 9. The rejection might be overcome by, for example, amending the claim to replace "the representation" with "a representation".

Limitation "the specific process" in line 17 of page 9. The rejection might be overcome by, for example, amending the claim to introduce clear antecedent basis for "specific process".

Limitation "the feature representation" in lines 21-22 of page 9. The rejection might be overcome by, for example, amending the claim to introduce clear antecedent basis for "feature representation" or by replacing "the feature representation" with "the structure-aware feature representation".

Limitation "the fingerprint similarity between molecules" in line 7 of page 10. The rejection might be overcome by, for example, amending the claim to introduce clear antecedent basis for "fingerprint similarity between molecules".

Limitation "the feature of each neighbor node" in lines 2-3 of page 10. The rejection might be overcome by, for example, amending the claim to introduce clear antecedent basis for "feature of each neighbor node".

For compact examination, it is assumed that the preceding suggested amendments will be implemented.
Because dependent claims 4-5 and 7 incorporate the unsupported limitations of independent claim 1 and do not include further limitations that remedy the issue, they are likewise rejected under 35 U.S.C. 112(b).

Claim 2 recites the limitation "the partial molecular structures" in line 21 of page 10. There is insufficient antecedent basis for this limitation in the claim. The rejection might be overcome by, for example, amending the claim to introduce clear antecedent basis for "partial molecular structures". For compact examination, it is assumed that the preceding suggested amendment will be implemented.

Claim 7 recites the limitation "the molecular attributes" in lines 21-22 of page 11. There is insufficient antecedent basis for this limitation in the claim. The rejection might be overcome by, for example, amending the claim to introduce clear antecedent basis for "molecular attributes" or by replacing "the molecular attributes" with "the downstream molecular attributes". For compact examination, it is assumed that the preceding suggested amendment will be implemented.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Subject matter eligibility evaluation in accordance with MPEP 2106:

Eligibility Step 1: Claims 1-7 are directed to a method (process). [Step 1: YES]

Eligibility Step 2A: First it is determined in Prong One whether a claim recites a judicial exception, and if so, it is then determined in Prong Two whether the recited judicial exception is integrated into a practical application of that exception.
Eligibility Step 2A Prong One: In determining whether a claim is directed to a judicial exception, examination is performed that analyzes whether the claim recites a judicial exception, i.e., whether a law of nature, natural phenomenon, or abstract idea is set forth or described in the claim. Independent claim 1 recites the following steps, which fall within the mental processes and/or mathematical concepts groupings of abstract ideas:

(1) obtaining a molecular fingerprint representation of each molecule, and calculating a similarity between each two molecular fingerprints (i.e., mental processes – can be done with pen and paper; and mathematical concepts – "calculating");

(2) collecting a full amount of chemical functional group information, and matching a corresponding functional group for each atom in the molecule (i.e., mental processes: collecting and matching – can be done with pen and paper); wherein, when an atom belongs to a plurality of functional groups, a functional group containing a larger number of atoms is preferentially matched as the functional group corresponding to the atom (i.e., mental processes: collecting and matching – can be done with pen and paper);

(3) using a heterogeneous graph to model a molecular graph, wherein the heterogeneous graph is a graph containing different types of nodes and edges, different atoms correspond to different node types, and different bonds correspond to different edge types (i.e., mental processes – can be done with pen and paper);

(4) mapping the molecule to a feature space through an aggregation function to obtain a structure-aware feature representation (i.e., mathematical concepts – "aggregation function").
(4) transferring information by the relational graph convolutional network (RGCN) in the structure-aware molecular encoder through calculating and aggregating information for different types of edges, and integrating the information aggregated by different edges for different types of nodes (i.e., mathematical concepts – "aggregating", "calculating");

(4) after obtaining the feature representation of each atom and the functional group that the atom belongs to, then aggregating the features of the nodes and the functional groups to obtain the structure-aware feature representation of the molecule (i.e., mathematical concepts – "aggregating");

(4) wherein, a formula for the information transfer of the relational graph convolutional network (RGCN) is as follows:

h_i^(l+1) = σ( Σ_{r∈R} Σ_{j∈N_i^r} (1/c_{i,r}) W_r^(l) h_j^(l) + W_0^(l) h_i^(l) )

wherein, R is the set of all edge types, N_i^r is the set of all neighbor nodes which are adjacent to the node i and are of edge type r, c_{i,r} is a parameter that can be learned, W_r^(l) is a weight matrix of the current layer l (i.e., mathematical concepts – "formula", "weight matrix"), and h_i^(l) is the feature vector of the current node i at the current layer l (i.e., mathematical concepts – "vector"); the feature of each neighbor node is multiplied by a weight corresponding to the edge type, then multiplied by a learnable parameter and summed, and finally, the information transferred by a self-loop edge is added and the result is passed through the activation function σ (i.e., mathematical concepts – multiplication, summation, "self-loop edge").

Dependent claims 3-6 further recite the following steps, which fall within the mental processes and/or mathematical concepts groupings of abstract ideas, as noted below.
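For illustration only, the RGCN information-transfer step recited in claim 1 above can be sketched in code. This is a minimal sketch with scalar node features and hypothetical toy values, not the applicant's implementation:

```python
# Minimal sketch of the claimed RGCN information transfer: for each edge type r,
# a neighbor's feature is multiplied by the weight W[r] for that edge type and
# scaled by a learnable parameter c[(i, r)]; contributions are summed over
# neighbors and edge types, a self-loop term W0 * h[i] is added, and the result
# is passed through an activation function. Scalar features are an assumption
# made for readability; in practice these are vectors and weight matrices.

def rgcn_layer(h, neighbors, W, W0, c, act=lambda x: max(0.0, x)):
    """h: {node: feature}; neighbors: {node: [(neighbor, edge_type), ...]};
    W: {edge_type: weight}; W0: self-loop weight; c: {(node, edge_type): scale}."""
    out = {}
    for i, hi in h.items():
        msg = sum((1.0 / c[(i, r)]) * W[r] * h[j]
                  for j, r in neighbors.get(i, []))
        out[i] = act(msg + W0 * hi)  # add self-loop edge, then activation
    return out
```

The output of one layer serves as the input of the next layer, consistent with the claim language.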
Dependent claim 3 further recites: The molecular graph representation learning method based on contrastive learning according to claim 2, wherein in step (1), the similarity between two molecular fingerprints is calculated using a Tanimoto coefficient, and the formula is as follows:

T = c / (a + b - c)

(i.e., mathematical concepts – calculation, "coefficient", "formula"); wherein, the partial molecular structures of 166 molecules are pre-specified by MACCs fingerprints; when any molecular structure is contained, the corresponding position is recorded as 1, otherwise, it is recorded as 0 (i.e., mental processes: recording – can be done with pen and paper); a and b respectively represent the number of 1s displayed in the A and B molecules, and c represents the number of 1s displayed in both the A and B molecules (i.e., mental processes: matching/correlation – can be done with pen and paper).

Dependent claim 4 further recites: The molecular graph representation learning method based on contrastive learning according to claim 1, wherein, in step (5), when selecting the positive and negative samples, one molecule whose similarity with a target molecule is greater than a certain threshold is selected as the positive sample, and K molecules whose similarities are each less than a certain threshold are selected as the negative samples (i.e., mental processes: selecting samples); a feature representation corresponding to the target molecule is denoted as q, a feature representation of the positive sample is denoted as k0, and the feature representations of the K negative samples are denoted as k1, ..., kK (i.e., mental processes: denoting based on feature representation).

Dependent claim 5 further recites: The molecular graph representation learning method based on contrastive learning according to claim 4, wherein, after obtaining the feature representations of each target molecule and the positive and negative samples thereof, a loss is calculated by using a loss function, and the parameters of the
structure-aware molecular encoder are updated through a back-propagation algorithm, which causes the structure-aware molecular encoder to recognize the target molecule and the positive samples as similar instances and to distinguish the target molecule and the positive samples from dissimilar samples (i.e., mathematical concepts – calculating loss using a loss function).

Dependent claim 6 further recites: the loss function is InfoNCE, and the formula is as follows:

L = -log( exp(q·k0/τ) / Σ_{i=0}^{K} exp(q·ki/τ) )

wherein, τ is a hyperparameter; the loss function causes the model to identify the target molecule q and positive sample k0 as similar instances, and to distinguish q from dissimilar instances k1, ..., kK (i.e., mathematical concepts – calculating loss using a loss function).

Therefore, claims 1 and 3-6 recite an abstract idea. [Step 2A Prong One: YES]

Eligibility Step 2A Prong Two: In determining whether a claim is directed to a judicial exception, further examination is performed that analyzes whether the claim recites additional elements that, when examined as a whole, integrate the judicial exception(s) into a practical application (MPEP 2106.04(d)). A claim that integrates a judicial exception into a practical application will apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception. The claimed additional elements are analyzed to determine if the abstract idea is integrated into a practical application (MPEP 2106.04(d)(I); MPEP 2106.05(a-h)). If the claim contains no additional elements beyond the abstract idea, the claim fails to integrate the abstract idea into a practical application (MPEP 2106.04(d)(III)). The judicial exceptions identified in Eligibility Step 2A Prong One are not integrated into a practical application for the reasons noted below. Dependent claims 3-6 do not recite any elements in addition to the judicial exception, and thus are part of the judicial exception.
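For illustration only, the Tanimoto coefficient recited in claim 3 above can be worked through in code. The 4-bit fingerprint vectors used here are hypothetical toy values, not data from the application:

```python
# Tanimoto coefficient over fingerprint bit vectors, as recited in claim 3:
# a = number of 1-bits in fingerprint A, b = number of 1-bits in B, and
# c = number of 1-bits set in both; the similarity is T = c / (a + b - c).

def tanimoto(fp_a, fp_b):
    a = sum(fp_a)                               # bits set in A
    b = sum(fp_b)                               # bits set in B
    c = sum(x & y for x, y in zip(fp_a, fp_b))  # bits set in both
    return c / (a + b - c)
```

For example, with the toy vectors [1, 1, 0, 1] and [1, 0, 0, 1], a = 3, b = 2, c = 2, giving T = 2/3; identical fingerprints give T = 1.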
The additional elements in independent claim 1 include:

(4) constructing a structure-aware molecular encoder, using a relational graph convolutional network (RGCN) in the structure-aware molecular encoder to encode the representation of each atom in the molecule and the representation of the functional group to which the atom belongs; (4) wherein the specific process is as follows: taking the heterogeneous graph with initialized node features and functional group features as an input of the structure-aware molecular encoder; (4) which is used as an output of the layer and an input of a next layer; (6) obtaining the structure-aware molecular encoder by using the contrastive learning method for training on a large-sample molecular dataset, and applying the structure-aware molecular encoder to a prediction task of downstream molecular attributes.

The additional elements in dependent claim 2 include: a Simplified Molecular Input Line Entry System (SMILES) representation of each molecule is transformed into the molecular fingerprint through Rdkit; and the molecular fingerprint is selected from one of Morgan fingerprints, Molecular ACCess System (MACCs) fingerprint and topology fingerprint.

The additional elements in dependent claim 7 include: training the structure-aware molecular encoder on the large-sample molecular data set through the contrastive learning method described in step (5); then inputting molecular data in a small-sample data set into the structure-aware molecular encoder, then using a linear classifier to classify the molecular representations output by the encoder, and predicting the molecular attributes.

The additional elements identified above are insignificant extra-solution activities that are part of the data gathering process used in the recited judicial exceptions (see MPEP 2106.05(g)).
When all limitations in claims 1-7 are considered as a whole, the claims are deemed not to recite any additional elements that would integrate a judicial exception into a practical application, and therefore claims 1-7 are directed to an abstract idea (MPEP 2106.04(d)). [Step 2A Prong Two: NO]

Eligibility Step 2B: Because the claims recite an abstract idea and do not integrate that abstract idea into a practical application, the claims are examined for a specific inventive concept. The judicial exception alone cannot provide that inventive concept or practical application (MPEP 2106.05). Identifying whether the additional elements beyond the abstract idea amount to such an inventive concept requires considering the additional elements individually and in combination to determine if they amount to significantly more than the judicial exception (MPEP 2106.05). The claims do not include any additional elements that are sufficient to amount to significantly more than the judicial exception(s) for the reasons noted below. Dependent claims 3-6 do not recite any elements in addition to the judicial exception(s). The additional elements recited in independent claim 1 and in dependent claims 2 and 7 are identified above and carried over from Step 2A Prong Two, along with their conclusions, for analysis at Step 2B.
Any additional element or combination of elements that was considered to be insignificant extra-solution activity at Step 2A Prong Two has been re-evaluated at Step 2B, because if such re-evaluation finds that an element is unconventional or otherwise more than what is well-understood, routine, conventional activity in the field, this finding may indicate that the additional element is no longer insignificant. Here, all additional elements and combinations of elements are no more than what is well-understood, routine, conventional activity in the field, or simply append well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception, per MPEP 2106.05(d).

The additional element of constructing a structure-aware molecular encoder, using a relational graph convolutional network (RGCN) in the structure-aware molecular encoder to encode the representation of each atom in the molecule and the representation of the functional group to which the atom belongs (claim 1) is conventional. Conventionality is shown by Zhou et al. ("Graph Neural Networks: A Review of Methods and Applications," arXiv:1812.08434 (v4)). In this review paper, Zhou et al. show that R-GCN can be used with different graph types such as heterogeneous graphs (Fig. 2a, pg. 5) and that a "Structure-Aware Convolutional Network" has been proposed (col. 1, para. 7, lns. 1-2, pg. 8). The review also shows that, in chemistry and biology, applying GNNs to molecular graphs can yield better fingerprints, and that molecules can naturally be seen as graphs, with atoms being the nodes and chemical bonds being the edges (col. 2, para. 1, pg. 13). A review from 2018 indicates that this approach is widely used in the field.
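For illustration only, the conventional heterogeneous-graph modeling of a molecule referenced above can be sketched as follows. The atom and bond labels are hypothetical toy values, not taken from the cited references:

```python
# Toy sketch of modeling a molecule as a heterogeneous graph: different atoms
# correspond to different node types (element symbols) and different bonds
# correspond to different edge types (bond orders), as recited in claim 1.

def hetero_graph(atoms, bonds):
    """atoms: {index: element symbol}; bonds: iterable of (i, j, bond_type)."""
    node_types = dict(atoms)    # node index -> node type (element)
    edges_by_type = {}          # edge type -> list of (i, j) node pairs
    for i, j, bond_type in bonds:
        edges_by_type.setdefault(bond_type, []).append((i, j))
    return node_types, edges_by_type
```

Grouping edges by type in this way is what allows a relational model such as R-GCN to apply a separate weight per edge type.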
The additional elements of taking the heterogeneous graph with initialized node features and functional group features as an input of the structure-aware molecular encoder, which is used as an output of the layer and an input of a next layer (claim 1), and of inputting molecular data in a small-sample data set into the structure-aware molecular encoder, then using a linear classifier to classify the molecular representations output by the encoder, and predicting the molecular attributes (claim 7), merely invoke a computer tool and do not improve the technology of a generic computer (see MPEP 2106.05(a)).

The additional elements of a Simplified Molecular Input Line Entry System (SMILES) representation of each molecule being transformed into the molecular fingerprint through Rdkit, and of the molecular fingerprint being selected from one of Morgan fingerprints, Molecular ACCess System (MACCs) fingerprint and topology fingerprint (claim 2), are conventional. Conventionality is shown by Landrum, G. (2013), RDKit documentation (Release 2013.09.1), because Landrum shows that SMILES is first utilized as input (1.2 functionality overview; ln. 1, pg. 2) and then used for fingerprints: "Fingerprinting: Daylight-like, atom pairs, topological torsions, Morgan algorithm, 'MACCS keys', etc." (1.2 functionality overview; ln. 12, pg. 2), demonstrating a workflow that has been standard in practice since at least 2013.

The additional element of applying the structure-aware molecular encoder to a prediction task of downstream molecular attributes (claim 1) is conventional.
Conventionality is shown in the instant application specification: "In order to improve the expression ability of the molecular fingerprints, some studies have introduced the graph neural network, which takes Simplified Molecular Input Line Entry System (SMILES) representations of the molecules as an input, learns the representations of the molecules in a low-dimensional vector space, and applies them to downstream tasks such as a property prediction" (para. 6, lns. 29-33, pgs. 1-2), demonstrating that previous studies have shown that this approach is routinely employed.

The additional elements of training the structure-aware molecular encoder on the large-sample molecular data set through the contrastive learning method (claim 7) and obtaining the structure-aware molecular encoder by using the contrastive learning method for training on a large-sample molecular dataset (claim 1) are conventional. Conventionality is shown in the instant application specification: "Meanwhile, due to the extremely large molecular space, the generalization ability of the model is generally poor. To improve the generalization ability of the neural networks, some work tries to build pre-trained models on the graph representations of the molecules" (para. 1, lns. 5-7, pg. 2), demonstrating that previous studies have shown that this approach is commonly used.

Therefore, when taken alone, the additional elements in independent claim 1 and dependent claims 2 and 7 do not amount to significantly more than the above-identified judicial exception(s). Even when evaluated as a combination, the additional elements fail to transform the exception(s) into a patent-eligible application of those exceptions. Thus, claims 1-7 are deemed not to contribute an inventive concept, i.e., not to amount to significantly more than the judicial exception(s) (MPEP 2106.05(II)).
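For illustration only, the contrastive-learning training discussed above conventionally uses a loss of the InfoNCE family, as recited in claim 6. The following is a minimal sketch with hypothetical toy vectors and temperature, not the applicant's implementation:

```python
import math

# Standard InfoNCE loss: the target representation q should score high against
# the positive sample k0 and low against the K negative samples k1..kK, i.e.
# L = -log( exp(q.k0 / tau) / sum_{i=0..K} exp(q.ki / tau) ).

def info_nce(q, k_pos, k_negs, tau=0.07):
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    logits = [dot(q, k_pos) / tau] + [dot(q, k) / tau for k in k_negs]
    m = max(logits)  # subtract the max for numerical stability
    log_z = m + math.log(sum(math.exp(x - m) for x in logits))
    return log_z - logits[0]  # negative log-softmax of the positive logit
```

Minimizing this loss pulls q toward the positive sample and pushes it away from the negatives, which is the behavior the claim attributes to the loss function.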
[Step 2B: NO]

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary.
Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claim 1 is rejected under 35 U.S.C. 103 as being unpatentable over Schlichtkrull et al. ("Modeling Relational Data with Graph Convolutional Networks," ESWC 2018), in view of Sun et al. ("InfoGraph: Unsupervised and Semi-Supervised Graph-Level Representation Learning via Mutual Information Maximization," arXiv:1908.01000v3 [cs.LG], 17 Jan 2020), Rogers and Hahn (Journal of Chemical Information and Modeling, vol. 50, no. 5, 28 Apr. 2010, pp. 742–754), and Zhou et al. ("Graph Neural Networks: A Review of Methods and Applications," arXiv:1812.08434 (v4)), and in further view of Shang et al. ("Edge Attention-based Multi-Relational Graph Convolutional Networks," arXiv:1802.04944v1 [stat.ML], 14 Feb 2018). The italicized text corresponds to the instant claim limitations.

Schlichtkrull et al. discloses an R-GCN model that is primarily motivated as an extension of GCNs that operate on local graph neighborhoods to large-scale relational data (para. 5, lns. 1-4, pg. 594); the model updates node representations by aggregating information from neighboring nodes across different types of relations. Schlichtkrull et al. further discloses that the hidden representation of node i is computed as eq. 2 (see below), which aggregates information from neighbors across relations. The model also defines a separate weight matrix for each relation (eq. 3, pg. 596), meaning that each relation contributes in a different way to the node representation.
(eq. 2, pg. 595; taking the heterogeneous graph with initialized node features and functional group features as an input of the structure-aware molecular encoder; transferring information by the relational graph convolutional network (RGCN) in the structure-aware molecular encoder through calculating and aggregating information for different types of edges, and integrating the information aggregated by different edges for different types of nodes; after obtaining the feature representation of each atom and the functional group that the atom belongs to, then aggregating the features of the nodes and the functional groups to obtain the structure-aware feature representation of the molecule; wherein, a formula for the information transfer of the relational graph convolutional network (RGCN) is as follows: wherein, R is the set of all edge types, N_i^r is the set of all neighbor nodes which are adjacent to the node i and are of edge type r, c_{i,r} is a parameter that can be learned, W_r^(l) is a weight matrix of the current layer l, h_i^(l) is the feature vector of the current node i at the current layer l; the feature of each neighbor node is multiplied by a weight corresponding to the edge type, then multiplied by a learnable parameter and summed, and finally, the information transferred by a self-loop edge is added and the result is passed through the activation function, which is used as an output of the layer and an input of a next layer).

Schlichtkrull et al. is silent regarding the limitations in steps 1-3 and 5 of claim 1, and further regarding constructing a structure-aware molecular encoder, using a relational graph convolutional network (RGCN) in the structure-aware molecular encoder to encode the representation of each atom in the molecule and the representation of the functional group to which the atom belongs, and mapping the molecule to a feature space through an aggregation function to obtain a structure-aware feature representation, of step 4 in claim 1.
However, these limitations were known in the art at the time of the effective filing date of the invention, as taught by Sun et al., Rogers and Hahn, Shang et al., and Zhou et al.

Sun et al. disclose a graph representation learning method based on contrastive learning (see "input graphs" and "Discriminator (+) (-)", Fig. 2, pg. 7), as the main idea of contrastive learning is to learn discriminative representations by contrasting positive and negative examples. Sun et al. further disclose that the graph representation learning method based on contrastive learning is used for molecular property prediction (para. 5, lns. 5-7, pg. 2; a molecular graph representation learning method based on contrastive learning).

Rogers and Hahn disclose representing molecules as fingerprints and calculating similarity between two molecular fingerprints, mentioning the Tanimoto method (Table 1, pg. 752 and col. 2, para. 5, ln. 6, pg. 751; (1) obtaining a molecular fingerprint representation of each molecule, and calculating a similarity between each two molecular fingerprints). Rogers and Hahn disclose that each atom belongs to many overlapping environments built from neighboring atoms over many iterations, the larger of which include more atoms, corresponding to choosing the largest functional group (Fig. 1, pg. 743; col. 1, para. 1, pg. 743; (2) collecting a full amount of chemical functional group information, and matching a corresponding functional group for each atom in the molecule; wherein, when an atom belongs to a plurality of functional groups, a functional group containing a larger number of atoms is preferentially matched as the functional group corresponding to the atom).

Shang et al. discloses that molecules are modeled as graphs where atoms and bonds define the structure (Fig. 1, pg. 2; col. 1, para. 1, lns. 8-12, pg. 1), and that edges are not uniform, but instead represent different relationships (col. 1, para. 1, lns. 6-8, pg.
1), where bond types are explicitly categorized and atom identity is incorporated into the representation (Table 1, pg. 2; using a heterogeneous graph to model a molecular graph, wherein the heterogeneous graph is a graph containing different types of nodes and edges, different atoms correspond to different node types, and different bonds correspond to different edge types).

Zhou et al. ("Graph Neural Networks: A Review of Methods and Applications," arXiv:1812.08434 (v4)) show in their review that R-GCN can be used with different graph types such as heterogeneous graphs (Fig. 2a, pg. 5) and that a "Structure-Aware Convolutional Network" has been proposed (col. 1, para. 7, lns. 1-2, pg. 8). The review also shows that, in chemistry and biology, applying GNNs to molecular graphs can yield better fingerprints, and that molecules can naturally be seen as graphs, with atoms being the nodes and chemical bonds being the edges (col. 2, para. 1, pg. 13; constructing a structure-aware molecular encoder, using a relational graph convolutional network (RGCN) in the structure-aware molecular encoder to encode the representation of each atom in the molecule and the representation of the functional group to which the atom belongs, and mapping the molecule to a feature space through an aggregation function to obtain a structure-aware feature representation).

Sun et al. disclose a graph representation learning method based on contrastive learning (see "input graphs" and "Discriminator (+) (-)", Fig. 2, pg. 7), and Rogers and Hahn disclose representing molecules as fingerprints and calculating similarity between two molecular fingerprints, mentioning the Tanimoto method (Table 1, pg. 752 and col. 2, para. 5, ln. 6, pg. 751; (5) according to the fingerprint similarity between molecules, selecting positive and negative samples, and carrying out a comparative learning in the feature space). Zhou et al.
discloses obtaining the structure-aware molecular encoder and applying the structure-aware molecular encoder to a prediction task of downstream molecular attributes (see above, and Sun et al. disclose a graph representation learning method based on contrastive learning (see “input graphs” and “Discriminator (+) (-)” Fig.2 , pg.7 ), furthermore Schlichtkrull et al. discloses a model that is primarily motivated as an extension of GCNs that operate on local graph neighborhoods to large-scale relational data ( para.5 , lns.1 -4, pg.594 ; ( 6) obtaining the structure-aware molecular encoder by using the contrastive learning method for training on a large-sample molecular dataset, and applying the structure-aware molecular encoder to a prediction task of downstream molecular attributes ). It would have been obvious to one of the ordinary skill in the art at the time the invention was made to modify the R- GCN of Schlichtkrull et al ., with the contrastive learning of Sun et al., the fingerprint similarity of Rogers and Hahn ., the heterogeneous graph/structure awarness of Shang et al . and constructing a structure-aware molecular encoder, using a relational graph convolutional network ( RGCN ) of Zhou et al. , because combing these methods can lead to great improvements as mentioned by Schlichtkrull et al. that “our method achieves significant improvement” ( para.2 , lns . 6- 7,pg.594 ), and Sun et al. “ InfoGraph is superior to state-of-the-art baselines ” (abstract, pg.1 ) as examples. A person of ordinary skill in the art would therefore be motivated to combine these techniques because it shows improvements. One would have had a reasonable expectation of success for making this combination because are in the same field, and putting these approaches together would lead to a superior method. Claims 2 is rejected under 35 U.S.C. 103 as being unpatentable over Schlichtkrull et al. 
(Modeling Relational Data with Graph Convolutional Networks, ESWC 2018), in view of Sun et al. ("InfoGraph: Unsupervised and Semi-Supervised Graph-Level Representation Learning via Mutual Information Maximization." arXiv:1908.01000v3 [cs.LG] 17 Jan 2020), Rogers and Hahn (Journal of Chemical Information and Modeling, vol. 50, no. 5, 28 Apr. 2010, pp. 742-754), Zhou et al. ("Graph Neural Networks: A Review of Methods and Applications." arXiv:1812.08434 (v4)), and Shang et al. ("Edge Attention-based Multi-Relational Graph Convolutional Networks." arXiv:1802.04944v1 [stat.ML] 14 Feb 2018) as applied to claim 1 above, and in further view of Landrum, G. (2013), RDKit Documentation (Release 2013.09.1). The italicized text corresponds to the instant claim limitations. The limitations of claim 1 have been taught by Schlichtkrull et al., Sun et al., Rogers and Hahn, Zhou et al., and Shang et al. above. Schlichtkrull et al., Sun et al., Rogers and Hahn, Zhou et al., and Shang et al. are silent to the Simplified Molecular Input Line Entry System (SMILES) representation of each molecule being transformed into the molecular fingerprint through RDKit, and to the molecular fingerprint being selected from one of Morgan fingerprints, Molecular ACCess System (MACCS) fingerprint, and topology fingerprint, as recited in claim 2. However, these limitations were known in the art before the effective filing date of the claimed invention, as taught by Landrum, G. Regarding claim 2, Landrum, G. teaches that SMILES is first utilized as input (1.2 Functionality overview; ln. 1, pg. 2) and then used for fingerprints: "Fingerprinting: Daylight-like, atom pairs, topological torsions, Morgan algorithm, 'MACCS keys', etc." (1.2 Functionality overview; ln. 12, pg. 2; the molecular graph representation learning method based on contrastive learning according to claim 1, wherein in step (1), a Simplified Molecular Input Line Entry System (SMILES) representation of each molecule is transformed into the molecular fingerprint through Rdkit; the molecular fingerprint is selected from one of Morgan fingerprints, Molecular ACCess System (MACCs) fingerprint and topology fingerprint). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the R-GCN of Schlichtkrull et al. with the contrastive learning of Sun et al., the fingerprint similarity of Rogers and Hahn, the heterogeneous graph/structure awareness of Shang et al., and the construction of a structure-aware molecular encoder using a relational graph convolutional network (RGCN) of Zhou et al., with the SMILES transformation and fingerprints of Landrum, G., because they provide established ways to represent molecular structures for machine learning. A person of ordinary skill in the art would therefore have been motivated to combine these techniques because they address complementary aspects of molecular representation. One would have had a reasonable expectation of success in making this combination because these approaches are compatible and belong to the same field. Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Schlichtkrull et al. (Modeling Relational Data with Graph Convolutional Networks, ESWC 2018), in view of Sun et al. ("InfoGraph: Unsupervised and Semi-Supervised Graph-Level Representation Learning via Mutual Information Maximization." arXiv:1908.01000v3 [cs.LG] 17 Jan 2020), Rogers and Hahn (Journal of Chemical Information and Modeling, vol. 50, no. 5, 28 Apr. 2010, pp. 742-754), Zhou et al. ("Graph Neural Networks: A Review of Methods and Applications." arXiv:1812.08434 (v4)), and Shang et al. ("Edge Attention-based Multi-Relational Graph Convolutional Networks".
arXiv:1802.04944v1 [stat.ML] 14 Feb 2018), and Landrum, G. (2013), RDKit Documentation (Release 2013.09.1), as applied to claim 2 above, and in further view of Mellor et al. ("Molecular Fingerprint-Derived Similarity Measures for Toxicological Read-Across: Recommendations for Optimal Use." Regulatory Toxicology and Pharmacology, vol. 101, 1 Feb. 2019, pp. 121-134). The limitations of claims 1-2 have been taught by Schlichtkrull et al., Sun et al., Rogers and Hahn, Zhou et al., Shang et al., and Landrum, G. above. Schlichtkrull et al., Sun et al., Rogers and Hahn, Zhou et al., Shang et al., and Landrum, G. are silent to the similarity between two molecular fingerprints being calculated using a Tanimoto coefficient, wherein the partial molecular structures of 166 molecules are pre-specified by MACCs fingerprints; when any molecular structure is contained, the corresponding position is recorded as 1, otherwise, it is recorded as 0; a and b respectively represent the number of 1 displayed in the A and B molecules, and c represents the number of 1 displayed in both the A and B molecules, as recited in claim 3. However, these limitations were known in the art before the effective filing date of the claimed invention, as taught by Mellor et al. Regarding claim 3, Mellor et al. teach that the similarity between two molecular fingerprints is calculated using a Tanimoto coefficient (Table 1, pg. 122), that MACCS typically contains 166 structural features (col. 1, para. 2, lns. 6-7, pg. 123), that a fingerprint is typically a binary vector with bits set to 1 or 0 depending on the presence or absence of a structural feature (col. 1, para. 1, lns. 1-3, pg. 123), and that the formula for the Tanimoto coefficient is T = c / (a + b - c) (the molecular graph representation learning method based on contrastive learning according to claim 2, wherein in step (1), the similarity between two molecular fingerprints is calculated using a Tanimoto coefficient, and the formula is as follows: T = c / (a + b - c); wherein, the partial molecular structures of 166 molecules are pre-specified by MACCs fingerprints; when any molecular structure is contained, the corresponding position is recorded as 1, otherwise, it is recorded as 0; a and b respectively represent the number of 1 displayed in the A and B molecules, and c represents the number of 1 displayed in both the A and B molecules). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the R-GCN of Schlichtkrull et al. with the contrastive learning of Sun et al., the fingerprint similarity of Rogers and Hahn, the heterogeneous graph/structure awareness of Shang et al., and the construction of a structure-aware molecular encoder using a relational graph convolutional network (RGCN) of Zhou et al., with the methods of Landrum, G., and the Tanimoto coefficient of Mellor et al., because fingerprint similarity alone has limits. A person of ordinary skill in the art would therefore have been motivated to combine these techniques to obtain improved molecular representations, and Mellor et al. state that using Tanimoto similarity values computed from generic molecular fingerprints has been empirically successful (col. 2, para. 2, lns. 1-3, pg. 132). One would have had a reasonable expectation of success in making this combination because it would provide successful results.
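For illustration only, the Tanimoto calculation discussed above can be sketched as follows; this is not part of the claim language or the cited references, and the toy 8-bit vectors below merely stand in for 166-bit MACCS fingerprints:

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto coefficient between two binary fingerprints.

    a = number of bits set to 1 in fingerprint A,
    b = number of bits set to 1 in fingerprint B,
    c = number of bits set to 1 in both; T = c / (a + b - c).
    """
    a = sum(fp_a)
    b = sum(fp_b)
    c = sum(1 for x, y in zip(fp_a, fp_b) if x and y)
    return c / (a + b - c)

# Toy 8-bit fingerprints standing in for 166-bit MACCS keys.
A = [1, 1, 0, 1, 0, 0, 1, 0]  # a = 4
B = [1, 0, 0, 1, 1, 0, 1, 0]  # b = 4, c = 3 (bits 0, 3, 6 shared)
print(tanimoto(A, B))  # 3 / (4 + 4 - 3) = 0.6
```

Identical fingerprints give a coefficient of 1.0; fingerprints with no shared bits give 0.0.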
Claims 4-7 are rejected under 35 U.S.C. 103 as being unpatentable over Schlichtkrull et al. (Modeling Relational Data with Graph Convolutional Networks, ESWC 2018), in view of Sun et al. ("InfoGraph: Unsupervised and Semi-Supervised Graph-Level Representation Learning via Mutual Information Maximization." arXiv:1908.01000v3 [cs.LG] 17 Jan 2020), Rogers and Hahn (Journal of Chemical Information and Modeling, vol. 50, no. 5, 28 Apr. 2010, pp. 742-754), Zhou et al. ("Graph Neural Networks: A Review of Methods and Applications." arXiv:1812.08434 (v4)), and Shang et al. ("Edge Attention-based Multi-Relational Graph Convolutional Networks." arXiv:1802.04944v1 [stat.ML] 14 Feb 2018), and in further view of He et al. ("Momentum Contrast for Unsupervised Visual Representation Learning." 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020). The italicized text corresponds to the instant claim limitations. The limitations of claim 1 have been taught by Schlichtkrull et al., Sun et al., Rogers and Hahn, Zhou et al., and Shang et al. above. Schlichtkrull et al., Sun et al., Rogers and Hahn, Zhou et al., and Shang et al. are silent to the limitations of claims 4-7. However, these limitations were known in the art before the effective filing date of the claimed invention, as taught by He et al. Regarding claim 4, He et al. teach momentum contrast for unsupervised visual representation learning, where a contrastive loss is a function whose value is low when q is similar to its positive key k+ and dissimilar to all other keys (considered negative keys for q) (Fig. 1, pg. 9726 and col. 2, paras. 1-2, lns. 1-10, pg. 9727; when selecting the positive and negative samples, one molecule of which similarity with a target molecule is greater than a certain threshold is selected as the positive sample, K molecules of which each similarity is less than a certain threshold are selected as the negative samples; a feature representation corresponding to the target molecule is denoted as q, a feature representation of the positive sample is denoted as k0, and the feature representations of K negative samples are denoted as k1, ..., kK). Regarding claim 5, He et al. teach that the contrastive loss serves as an unsupervised objective function for training the encoder networks that represent the queries and keys, which are optimized through gradient-based learning to make the query similar to its positive key and dissimilar to the negative keys (Fig. 1, pg. 9726 and col. 2, para. 4, lns. 1-3, pg. 9727; after obtaining the feature representations of each target molecule and the positive and negative samples thereof, a loss is calculated by using a loss function, and the parameters of the structure-aware molecular encoder are updated through a back-propagation algorithm, which causes the structure-aware molecular encoder to recognize the target molecule and the positive samples as similar instances and distinguish the target molecule and the positive samples from dissimilar samples). Regarding claim 6, He et al. teach the InfoNCE loss, defined as Lq = -log( exp(q·k+/τ) / Σ(i=0..K) exp(q·ki/τ) ), where the sum is over one positive and K negative samples (eq. 1, pg. 9727 and col. 2, para. 2, lns. 7-13; the molecular graph representation learning method based on contrast learning according to claim 5, wherein the loss function is InfoNCE; wherein, T is a hyperparameter, and the loss function causes the model to identify the target molecule q and positive sample k0 as similar instances, and to distinguish q from dissimilar instances k1, ..., kK). Regarding claim 7, He et al. teach encoding input data into representations that can be used for downstream tasks using an encoder (Fig. 1, pg. 9726; abstract, lns. 1-10, pg. 9726), followed by training a linear classifier on the frozen representations to perform prediction (col. 1, para. 2, lns. 4-5, pg. 9730; training the structure-aware molecular encoder on the large-sample molecular data set through the contrastive learning method described in step (5); then inputting molecular data in a small-sample data set into the structure-aware molecular encoder, and then using a linear classifier to classify the molecular representations output by the encoder, and predicting the molecular attributes). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the R-GCN of Schlichtkrull et al. with the contrastive learning of Sun et al., the fingerprint similarity of Rogers and Hahn, the heterogeneous graph/structure awareness of Shang et al., and the construction of a structure-aware molecular encoder using a relational graph convolutional network (RGCN) of Zhou et al., with the InfoNCE loss and linear classification of He et al., because He et al. show that their learning method can outperform its supervised counterpart (abstract, pg. 1). A person of ordinary skill in the art would therefore have been motivated to combine these techniques to obtain improved results.
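For illustration only, the InfoNCE objective taught by He et al. (eq. 1) can be sketched numerically as follows; the vectors, temperature value, and function name below are hypothetical examples, not drawn from the cited reference:

```python
import math

def info_nce(q, k_pos, k_negs, tau=0.07):
    """InfoNCE loss: -log( exp(q·k+/tau) / sum_i exp(q·k_i/tau) ),
    where the sum runs over the one positive key and the K negative keys."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    logits = [dot(q, k_pos) / tau] + [dot(q, k) / tau for k in k_negs]
    m = max(logits)  # subtract the max logit for numerical stability
    return -(logits[0] - m - math.log(sum(math.exp(l - m) for l in logits)))

q = [0.6, 0.8]                       # query representation (unit vector)
k_negs = [[-0.8, 0.6], [0.0, -1.0]]  # dissimilar (negative) keys
low = info_nce(q, [0.6, 0.8], k_negs)    # positive key matches q
high = info_nce(q, [-0.8, 0.6], k_negs)  # positive key does not match q
print(low < high)  # True: the loss is low when q is similar to its positive key
```

As the comparison shows, the loss decreases as q aligns with its positive key and increases when q is no more similar to the positive key than to the negatives.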
One would have had a reasonable expectation of success in making this combination because the methods are in the same field and lead to improvements. Conclusion No claims are allowed. Inquiries Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDRIELE EICHNER, whose telephone number is (571) 272-9956. The examiner can normally be reached M-F, 9-5 ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Karlheinz Skowronek, can be reached at (571) 272-9047. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /A.S.E./ Examiner, Art Unit 1687 /Karlheinz R. Skowronek/ Supervisory Patent Examiner, Art Unit 1687