Prosecution Insights
Last updated: April 19, 2026
Application No. 18/062,561

METHOD AND SYSTEM FOR GENERATING TASK-RELEVANT STRUCTURAL EMBEDDINGS FROM MOLECULAR GRAPHS

Non-Final OA: §101, §103
Filed: Dec 06, 2022
Examiner: HAN, KYU HYUNG
Art Unit: 2123
Tech Center: 2100 — Computer Architecture & Software
Assignee: Huawei Cloud Computing Technologies Co. Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 43% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 4y 6m
Grant Probability With Interview: 85%

Examiner Intelligence

Career Allow Rate: 43% (3 granted / 7 resolved; -12.1% vs TC avg)
Interview Lift: +41.7% among resolved cases with interview
Typical Timeline: 4y 6m average prosecution; 30 applications currently pending
Career History: 37 total applications across all art units

Statute-Specific Performance

§101: 38.4% (-1.6% vs TC avg)
§103: 50.9% (+10.9% vs TC avg)
§102: 4.2% (-35.8% vs TC avg)
§112: 6.6% (-33.4% vs TC avg)

Tech Center averages are estimates. Based on career data from 7 resolved cases.
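The per-statute figures above are internally consistent: subtracting each displayed delta from its overcome rate recovers the same implied Tech Center baseline for every statute. A quick check in plain Python (the subtraction is the only assumption; none of these variable names come from the report):

```python
# Overcome rates and deltas vs the Tech Center average, as displayed above.
rates  = {"101": 38.4, "103": 50.9, "102": 4.2, "112": 6.6}
deltas = {"101": -1.6, "103": 10.9, "102": -35.8, "112": -33.4}

# Implied TC 2100 baseline per statute: rate minus delta.
tc_avg = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(tc_avg)  # every statute implies the same 40.0% baseline
```

All four statutes reduce to a 40.0% baseline, which suggests the dashboard derives the deltas from a single Tech Center estimate rather than per-statute averages.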

Office Action

Rejections: §101, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claims 18-20 are objected to because of the following informalities: in claims 18-20, line 1, “The device of …” should read “The computer-readable medium of …”.

Claim Rejections – 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 16-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. Claim 16 recites a “computer-readable medium”. However, the phrase “computer-readable medium” is broad enough to include both “non-transitory” and “transitory” media. The specification does not explicitly limit the claim to a non-transitory computer-readable medium (see specification, paragraph 0041), and extrinsic evidence suggests that a computer-readable medium covers a signal per se. When the broadest reasonable interpretation of a claim covers a signal per se, the claim must be rejected under 35 U.S.C. 101 as covering non-statutory subject matter. See In re Nuijten, 500 F.3d 1346, 1356-57 (Fed. Cir. 2007) (transitory embodiments are not directed to statutory subject matter). Therefore, claim 16 and its dependent claims are non-statutory. The Applicant is advised to amend the claim to recite a non-transitory computer-readable medium. Claims 17-20 are rejected for the same reason set forth in the rejection of claim 16.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: Claims 1-7 are method claims. Claims 8-20 are machine/system/product claims.
Therefore, claims 1-20 are directed to either a process, machine, manufacture or composition of matter (pending resolution of the signal per se issue stated above). With respect to claim 1: Step 2A – Prong 1: … generating, … a set of task-relevant feature vectors from the input data, the set of task-relevant feature vectors representing local physical features of the molecular graph; (mental process – a person can manually generate a set of task-relevant feature vectors from the input data, the set of task-relevant feature vectors representing local physical features of the molecular graph, with the assistance of a pen/paper.) generating, … a set of task-relevant structural embeddings from the input data, the set of task-relevant structural embeddings representing connectivity among the set of vertices and task-relevant features of the set of vertices, the task-relevant features being relevant for classifying the candidate molecule; (mental process – a person can manually generate a set of task-relevant structural embeddings from the input data, the set of task-relevant structural embeddings representing connectivity among the set of vertices and task-relevant features of the set of vertices, the task-relevant features being relevant for classifying the candidate molecule, with the assistance of a pen/paper.) combining each task-relevant feature vector in the set of task-relevant feature vectors with a respective task-relevant structural embedding in the set of task-relevant structural embeddings, to obtain a set of combined vectors; (mental process – a person can manually combine each task-relevant feature vector in the set of task-relevant feature vectors with a respective task-relevant structural embedding in the set of task-relevant structural embeddings, to obtain a set of combined vectors, with the assistance of a pen/paper.) 
and generating, … a predicted class label for the input data from the set of combined vectors, the predicted class label representing a classification of the candidate molecule. (mental process – a person can manually generate a predicted class label for the input data from the set of combined vectors, the predicted class label representing a classification of the candidate molecule, with the assistance of a pen/paper.) Step 2A – Prong 2: This judicial exception is not integrated into a practical application. A method for classifying a candidate molecule, the method comprising: obtaining input data representing a molecular graph defined by a set of vertices and a set of edges, the molecular graph being a representation of the candidate molecule; (Adding insignificant extra-solution activity to the judicial exception - see MPEP 2106.05(g)). … using a physical model, … (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.05(f) – Examiner’s note: High level recitation of using a physical model to generate a set of task-relevant feature vectors.); … using a trained embedding generator, … (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.05(f) – Examiner’s note: High level recitation of using a trained embedding generator to generate a set of task-relevant structural embeddings.); … … using a trained classifier, … (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.05(f) – Examiner’s note: High level recitation of using a trained classifier to generate a predicted 
class label for the input data from the set of combined vectors.); Step 2B: The claim does not include additional elements considered individually and in combination that are sufficient to amount to significantly more than the judicial exception. A method for classifying a candidate molecule, the method comprising: obtaining input data representing a molecular graph defined by a set of vertices and a set of edges, the molecular graph being a representation of the candidate molecule; (MPEP 2106.05(d)(II) indicate that merely “Receiving or transmitting data over a network, e.g., using the Internet to gather data” is a well‐understood, routine, conventional function when it is claimed in a merely generic manner (as it is in the present claim – the input data representing a molecular graph is merely received). Thereby, a conclusion that the claimed distribute step is well-understood, routine, conventional activity is supported under Berkheimer.) … using a physical model, … (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.05(f) – Examiner’s note: High level recitation of using a physical model to generate a set of task-relevant feature vectors.); … using a trained embedding generator, … (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.05(f) – Examiner’s note: High level recitation of using a trained embedding generator to generate a set of task-relevant structural embeddings.); … … using a trained classifier, … (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 
2106.05(f) – Examiner’s note: High level recitation of using a trained classifier to generate a predicted class label for the input data from the set of combined vectors.); With respect to claim 2: Step 2A – Prong 1: The method of claim 1, wherein the embedding generator comprises a geometrical embedder based on good edit similarity and a gated recurrent unit (GRU) module, the geometrical embedder generating a set of geometrical embeddings representing the connectivity among the set of vertices, the GRU module further generating the set of task-relevant structural embeddings from the set of geometrical embeddings and task-relevant features. (mental process – a person can recognize that the embedding generator comprises a geometrical embedder based on good edit similarity and a gated recurrent unit (GRU) module, the geometrical embedder generating a set of geometrical embeddings representing the connectivity among the set of vertices, the GRU module further generating the set of task-relevant structural embeddings from the set of geometrical embeddings and task-relevant features.) With respect to claim 3: Step 2A – Prong 1: The method of claim 2, wherein the geometrical embedder is trained to generate the set of geometrical embeddings using a hierarchy of margins to encode local connections with respect to each vertex in the set of vertices. (mental process – a person can recognize that the geometrical embedder is trained to generate the set of geometrical embeddings using a hierarchy of margins to encode local connections with respect to each vertex in the set of vertices.) 
With respect to claim 4: Step 2A – Prong 1: The method of claim 2, wherein training the geometrical embedder and the GRU module comprises generating, using a decoder neural network, a reconstructed adjacency matrix of the molecular graph from the set of task-relevant structural embeddings, computing a molecular structure reconstruction loss between the reconstructed adjacency matrix and an actual adjacency matrix of the molecular graph, and backpropagating the molecular structure reconstruction loss to update weights of the GRU module and the geometrical embedder. (mental process – a person can recognize that the training the geometrical embedder and the GRU module comprises generating, using a decoder neural network, a reconstructed adjacency matrix of the molecular graph from the set of task-relevant structural embeddings, computing a molecular structure reconstruction loss between the reconstructed adjacency matrix and an actual adjacency matrix of the molecular graph, and backpropagating the molecular structure reconstruction loss to update weights of the GRU module and the geometrical embedder.) With respect to claim 5: Step 2A – Prong 1: The method of claim 4, wherein the molecular structure reconstruction loss is used as a regularization term for training of the classifier. (mental process – a person can recognize that the molecular structure reconstruction loss is used as a regularization term for training of the classifier.) With respect to claim 6: Step 2A – Prong 1: The method of claim 1, wherein combining each task-relevant feature vector in the set of task-relevant feature vectors with the respective task-relevant structural embedding in the set of task-relevant structural embeddings comprises concatenating each task-relevant feature vector in the set of task-relevant feature vectors with the respective task-relevant structural embedding in the set of task- relevant structural embeddings. 
(mental process – a person can recognize that combining each task-relevant feature vector in the set of task-relevant feature vectors with the respective task-relevant structural embedding in the set of task-relevant structural embeddings comprises concatenating each task-relevant feature vector in the set of task-relevant feature vectors with the respective task-relevant structural embedding in the set of task- relevant structural embeddings.) With respect to claim 7: Step 2A – Prong 1: The method of claim 1, wherein the physical model is a molecular docking model. (mental process – a person can recognize that the physical model is a molecular docking model.) Claim 8 is substantially similar to claim 1, but has the following additional elements: With respect to claim 8: Step 2A – Prong 2: This judicial exception is not integrated into a practical application. A device for classifying a candidate molecule, comprising: a processing unit configured to execute instructions to cause the device to: (mere instructions to apply the exception using a generic computer component – processing unit applies exception.) Claims 9-14 are rejected on the same grounds under 35 U.S.C. 101 as claims 2-7 as they are substantially similar, respectively. Mutatis mutandis. With respect to claim 15: Step 2A – Prong 1: The device of claim 8, wherein the physical model, the trained embedding generator and the trained classifier are part of a molecule classification module executed by the processing unit. (mental process – a person can recognize that the physical model, the trained embedding generator and the trained classifier are part of a molecule classification module executed by the processing unit.) Claim 16 is substantially similar to claim 1, but has the following additional elements: With respect to claim 16: Step 2A – Prong 2: This judicial exception is not integrated into a practical application. 
A computer-readable medium having instructions encoded thereon, wherein the instructions, when executed by a processing unit of a device, cause the device to: (mere instructions to apply the exception using a generic computer component – computer-readable medium having instructions and processing unit apply exception.) Claims 17-20 are rejected on the same grounds under 35 U.S.C. 101 as claims 2-5 as they are substantially similar, respectively. Mutatis mutandis.

Claim Rejections – 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Rong et al. (US 20210043283 A1), hereinafter Rong, in view of Pabrinkis et al. (US 20220188652 A1), hereinafter Pabrinkis, further in view of Park et al. (“Enhancing the interpretability of transcription factor binding site prediction using attention mechanism”), hereinafter Park.
Regarding independent claim 1, Rong teaches: A method for classifying a candidate molecule, the method comprising: obtaining input data representing a molecular graph defined by a set of vertices and a set of edges, the molecular graph being a representation of the candidate molecule; (Rong [¶ 0051]: “the heterogeneous graph that needs to be classified and predicted is obtained by using a heterogeneous graph identification operation requested by the user terminal, thereby identifying properties of a corresponding chemical substance, a protein substance or the like” Rong teaches that the heterogeneous graph represents the molecular structure of a chemical compound. Rong [¶ 0072]: “The nodes mentioned are the nodes on the topology structure, and may also be referred to as “vertices”. The relationships mentioned may be alternatively embodied by edges that connect the nodes.” Rong teaches that the node feature vectors and the relationship feature vectors of the heterogeneous graph are analogous to the vertices and edges, respectively, of a graph.) generating, … a set of task-relevant feature vectors from the input data, the set of task-relevant feature vectors representing local physical features of the molecular graph; (Rong [¶ 0072]: “The generated feature information describes the stored nodes on the one hand, and describes the relationships between the nodes on the other hand.” Rong teaches that the feature vectors representing local physical features of the molecular graph are generated. Rong [¶ 0078]: “Through the foregoing definition and description, the characterization of the given heterogeneous graph is implemented, the feature information including the node feature vectors and the relationship feature vectors is obtained, and the network topology is implemented for the heterogeneous information network” Rong teaches that these feature vectors are obtained by mathematical analysis/processing of a neural network/model.) 
… … and generating, using a trained classifier, a predicted class label …, the predicted class label representing a classification of the candidate molecule. (Rong [¶ 0212]: “the classification module 1070 is configured to: perform a tag prediction on the graph representation vector by using a trained and optimized loss function to obtain the classification prediction result of the heterogeneous graph relative to given tag data, the classification prediction result being indicated as a tag predicted by the heterogeneous graph.” Rong teaches that the classification of the heterogeneous graph is predicted using the representation/feature vectors. This is done using a trained and optimized loss function in a classification module, analogous to the trained classifier.) Rong does not explicitly teach: … using a physical model, … generating, using a trained embedding generator, a set of task-relevant structural embeddings from the input data, the set of task-relevant structural embeddings representing connectivity among the set of vertices and task-relevant features of the set of vertices, the task-relevant features being relevant for classifying the candidate molecule; combining each task-relevant feature vector in the set of task-relevant feature vectors with a respective task-relevant structural embedding in the set of task-relevant structural embeddings, to obtain a set of combined vectors; … for the input data from the set of combined vectors … However, Pabrinkis teaches: … using a physical model, … (Pabrinkis [¶ 0062]: “vector representations of proteins, molecules, interactions, and other information may be represented as vectors 216, which may either be extracted from the knowledge graph 215 or may be created directly from data received from the data extraction engine” Pabrinkis teaches that the vector representations of the molecules may be extracted from input data in the knowledge graph. 
Pabrinkis [¶ 0101]: “various cheminformatics libraries may be used as a learned force-field for docking simulations, which perform gradient descent of the ligand atomic coordinates with respect to the binding affinity 1806 and pose score 1805 (the model outputs)” Pabrinkis teaches a docking simulation, which models the chemical/biological molecule, analogous to the physical model.) generating, … a set of task-relevant structural embeddings from the input data, the set of task-relevant structural embeddings representing connectivity among the set of vertices and task-relevant features of the set of vertices, the task-relevant features being relevant for classifying the candidate molecule; (Pabrinkis [¶ 0127]: “Once processing by the MPNN is complete, its results are sent or concatenation 1350 with the results from a second machine learning algorithm, in this case an encoding-only transformer” Pabrinkis teaches that the input data is not only processed by a message passing neural network, which strengthens key node relationships, but it is also transformed by an encoding-only transformer, creating encodings/embeddings.) combining each task-relevant feature vector in the set of task-relevant feature vectors with a respective task-relevant structural embedding in the set of task-relevant structural embeddings, to obtain a set of combined vectors; (Pabrinkis [¶ 0007]: “generate a concatenated vector comprising the one or more small molecule vector inputs and a vector representation of the one or more large molecules” Pabrinkis teaches that the input data vector is combined with the vector representation of the larger structure of the molecule, or the encoding/embedding as described above, to form a concatenated vector.) 
… for the input data from the set of combined vectors … (Pabrinkis [¶ 0007]: “output the concatenated vector to the latent space, … the compressed latent space comprises a candidate set of latent examples … reconstruct the candidate set of latent examples to arrive at a candidate set of molecules that match the desired molecular properties” Pabrinkis teaches that the concatenated vectors are used to generate a set of candidate molecules.) Rong and Pabrinkis are in the same field of endeavor as the present invention, as the references are directed to analysis of molecular compounds. It would have been obvious, before the effective filing date of the claimed invention, to a person of ordinary skill in the art, to combine generating features from a molecular graph to classify the molecule as taught in Rong with generating further embeddings from the input data and combining to form a concatenated vector as taught in Pabrinkis. Pabrinkis provides this additional functionality. As such, it would have been obvious to one of ordinary skill in the art to modify the teachings of Rong to include teachings of Pabrinkis because the combination would allow for two different representations of the input data to be processed together to form a prediction/classification of the candidate molecule. This has the potential benefit of improving the accuracy of the prediction/classification, as additional various features of the input molecule can be analyzed. Rong and Pabrinkis do not explicitly teach: … using a trained embedding generator, … However, Park teaches: … using a trained embedding generator, … (Park [Page 1, Paragraph 2]: “KEGRU, based on bidirectional Gated Recurrent Unit (GRU) network, was developed to improve the performance of the TF binding site prediction” Park teaches that the history of their development of TBiNet included KEGRU, which is based on a gated recurrent network. 
The GRU is analogous to the embedding generator as that particular unit embeds the physical properties of transcription factors to create a prediction.) Park is in the same field of endeavor as the present invention, as it is directed to analysis of molecular compounds using edit distances and a gated recurrent unit (GRU). It would have been obvious, before the effective filing date of the claimed invention, to a person of ordinary skill in the art, to combine generating features from a molecular graph to classify the molecule as taught in Rong as modified by Pabrinkis with finding the edit distance between organic elements as taught in Park. Park provides this additional functionality. As such, it would have been obvious to one of ordinary skill in the art to modify the teachings of Rong as modified by Pabrinkis to include teachings of Park because the combination would allow for the classification to include controls regarding the quality of how similar/dissimilar any two subcompound/elements are in the molecular graph. This has the potential benefit of generating more accurate embeddings of the physical structure of the molecular graph so that the classification can be more accurate. Regarding dependent claim 2, Park teaches: The method of claim 1, wherein the embedding generator comprises a geometrical embedder based on good edit similarity and a gated recurrent unit (GRU) module, the geometrical embedder generating a set of geometrical embeddings representing the connectivity among the set of vertices, the GRU module further generating the set of task-relevant structural embeddings from the set of geometrical embeddings and task-relevant features. (Park [Page 8, Figure 5]: Park teaches a dendrogram of clustered motifs by edit distance. The motifs are analogous to the physical elements of the molecular graph. 
Park teaches that there are thresholds of the edit distance, represented as red dotted lines in the figure, which is a control of the quality of the edit distance. Park [Page 1, Paragraph 2]: “KEGRU, based on bidirectional Gated Recurrent Unit (GRU) network, was developed to improve the performance of the TF binding site prediction” Park teaches that the history of their development of TBiNet included KEGRU, which is based on a gated recurrent network.) The reasons to combine are substantially similar to those of claim 1. Regarding dependent claim 3, Park teaches: The method of claim 2, wherein the geometrical embedder is trained to generate the set of geometrical embeddings using a hierarchy of margins to encode local connections with respect to each vertex in the set of vertices. (Park [Page 8, Paragraph 2]: “Edit distance measures the similarity between DNA sequences. We apply edit distance to the consensus sequences of motifs which are obtained from TOMTOM.” Park teaches that the edit distances measure the similarity between DNA sequences. This is analogous to the hierarchy of margins in the present invention because although the present invention analyzes one molecule, the relational values between organic subcompounds/elements are calculated similarly.) The reasons to combine are substantially similar to those of claim 1. Regarding dependent claim 4, Rong teaches: The method of claim 2, wherein training the geometrical embedder and the GRU module comprises generating, using a decoder neural network, a reconstructed adjacency matrix of the molecular graph from the set of task-relevant structural embeddings, computing a molecular structure reconstruction loss between the reconstructed adjacency matrix and an actual adjacency matrix of the molecular graph, and backpropagating the molecular structure reconstruction loss to update weights of the GRU module and the geometrical embedder. 
(Rong [¶ 0075]: “The adjacency matrix A records all the edge information of an edge type in the heterogeneous graph G, where K represents the type of an edge, for example, different chemical bonds in a compound, N represents the quantity of the nodes, and each matrix N×N represents an adjacency matrix including an edge of a type.” Rong teaches that the adjacency matrix contains all the information of an edge type in the graph. Rong [¶ 0183]: “a key node V′ may be sampled according to the centrality of each node, and then an adjacency matrix A′ is reconstructed according to the key node V′.” Rong teaches that a reconstruction of the adjacency matrix can be created from a sampled key node. Rong [¶ 0222-0224]: “In another example embodiment … loss function being trained and optimized by using a heterogeneous graph of a given chemical substance or protein substance and tag data labeled by the heterogeneous graph” Rong teaches that the loss function may be trained by the physical characteristics of the graph (represented in the adjacency matrix) and its reconstruction.) The reasons to combine are substantially similar to those of claim 1.

Regarding dependent claim 5, Rong teaches: The method of claim 4, wherein the molecular structure reconstruction loss is used as a regularization term for training of the classifier. (Rong [equation below ¶ 0154; equation image not reproduced here]: Rong teaches that the p_i are values resulting from a sigmoid function, which is used to aid in regularization.) The reasons to combine are substantially similar to those of claim 1.
Regarding dependent claim 6, Rong teaches: The method of claim 1, wherein combining each task-relevant feature vector in the set of task-relevant feature vectors with the respective task-relevant structural embedding in the set of task-relevant structural embeddings comprises concatenating each task-relevant feature vector in the set of task-relevant feature vectors with the respective task-relevant structural embedding in the set of task- relevant structural embeddings. (Rong [¶ 0108-0109]: “Generate feature vectors corresponding to the key nodes for the graph topology information according to the heterogeneous graph and the feature information. For each key node in the graph topology information, the corresponding feature vectors may be generated for the graph topology information according to the heterogeneous graph and the feature information obtained by defining the heterogeneous graph. The generated feature vectors are used for numerically characterizing the corresponding key nodes” Rong teaches that the key nodes of the graph topology information have corresponding feature information, which shows that the generation of the feature vectors reflects a combination of the two as both information is contained in a feature vector.) The reasons to combine are substantially similar to those of claim 1. Regarding dependent claim 7, Rong teaches: The method of claim 1, wherein the physical model is a molecular docking model. (Rong [¶ 0078]: “Through the foregoing definition and description, the characterization of the given heterogeneous graph is implemented, the feature information including the node feature vectors and the relationship feature vectors is obtained, and the network topology is implemented for the heterogeneous information network” Rong teaches that these feature vectors are obtained by mathematical analysis/processing of a neural network/model, which is a molecular docking model.) The reasons to combine are substantially similar to those of claim 1. 
Independent claim 8 is rejected on the same grounds under 35 U.S.C. 103 as claim 1 as claim 8 is substantially similar to claim 1, but has the following additional elements: Rong teaches: A device for classifying a candidate molecule, comprising: a processing unit configured to execute instructions to cause the device to: (Rong [¶ 0019]: “the trained embedding generator and the trained classifier may be part of a molecule classification module executed by the processing unit” Rong teaches that a processing unit may be configured to execute the instructions for generating embeddings and classifying.) The reasons to combine are substantially similar to those of claim 1. Claims 9-14 are rejected on the same grounds under 35 U.S.C. 103 as claims 2-7 as they are substantially similar, respectively. Mutatis mutandis. Regarding dependent claim 15, Rong teaches: The device of claim 8, wherein the physical model, the trained embedding generator and the trained classifier are part of a molecule classification module executed by the processing unit. (Rong [¶ 0019]: “the trained embedding generator and the trained classifier may be part of a molecule classification module executed by the processing unit” Rong teaches that a processing unit may be configured to execute the instructions for generating embeddings and classifying, and can be part of the set of instructions comprising a module.) The reasons to combine are substantially similar to those of claim 1. Independent claim 16 is rejected on the same grounds under 35 U.S.C. 
103 as claim 1 as claim 16 is substantially similar to claim 1, but has the following additional elements: Rong teaches: A computer-readable medium having instructions encoded thereon, wherein the instructions, when executed by a processing unit of a device, cause the device to: (Rong [¶ 0019]: “the trained embedding generator and the trained classifier may be part of a molecule classification module executed by the processing unit” Rong teaches that a processing unit may be configured to execute the instructions for generating embeddings and classifying.) The reasons to combine are substantially similar to those of claim 1. Claims 17-20 are rejected on the same grounds under 35 U.S.C. 103 as claims 2-5 as they are substantially similar, respectively. Mutatis mutandis.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KYU HYUNG HAN, whose telephone number is (703) 756-5529. The examiner can normally be reached M-F, 9-5. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alexey Shmatov, can be reached at (571) 270-3428. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Kyu Hyung Han/ Examiner Art Unit 2123 /ALEXEY SHMATOV/Supervisory Patent Examiner, Art Unit 2123
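As a reading aid for the claim-1 pipeline discussed throughout the rejection (feature vectors from a physical model, structural embeddings from an embedding generator, per-vertex concatenation, then classification), the recited steps can be sketched in plain Python. Every function below is a hypothetical toy stand-in; the application's actual physical model, embedding generator, and classifier are not disclosed in this excerpt.

```python
# Hypothetical sketch of the claim-1 pipeline; all components are toy stand-ins.

def physical_features(adj):
    # Stand-in for the "physical model": vertex degree as a local feature.
    return [[sum(row)] for row in adj]

def structural_embeddings(adj):
    # Stand-in for the "trained embedding generator": count of 2-step walks
    # from each vertex, a crude proxy for connectivity among the vertices.
    n = len(adj)
    return [[sum(adj[i][k] * adj[k][j] for k in range(n) for j in range(n))]
            for i in range(n)]

def classify(combined):
    # Stand-in for the "trained classifier": threshold on a summed score.
    return "active" if sum(map(sum, combined)) > 4 else "inactive"

adj = [[0, 1, 0],   # molecular graph for a 3-vertex chain,
       [1, 0, 1],   # given as an adjacency matrix (vertices + edges)
       [0, 1, 0]]

feats = physical_features(adj)
embeds = structural_embeddings(adj)
# Per claim 6, "combining" is concatenation of each feature vector with its
# respective structural embedding.
combined = [f + e for f, e in zip(feats, embeds)]
label = classify(combined)
print(combined, label)
```

The sketch mirrors why the examiner maps each step to a mental process under Step 2A Prong 1: at this level of generality, each operation is simple enough to perform by hand.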

Prosecution Timeline

Dec 06, 2022: Application Filed
Mar 12, 2026: Non-Final Rejection under §101 and §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585928: HARDWARE ARCHITECTURE FOR INTRODUCING ACTIVATION SPARSITY IN NEURAL NETWORK (granted Mar 24, 2026; 2y 5m to grant)
Patent 12387101: SYSTEMS AND METHODS FOR PRUNING BINARY NEURAL NETWORKS GUIDED BY WEIGHT FLIPPING FREQUENCY (granted Aug 12, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 2 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 43%
With Interview: 85% (+41.7%)
Median Time to Grant: 4y 6m
PTA Risk: Low
Based on 7 resolved cases by this examiner. Grant probability derived from career allow rate.
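The headline figures reconcile as follows, assuming the interview lift is additive in percentage points (a reading of the displayed numbers, not a documented formula):

```python
# Reconciling the projection figures from the examiner's career data.
granted, resolved = 3, 7
career_allow_rate = granted / resolved   # ≈ 0.429, displayed as 43%
interview_lift = 0.417                   # +41.7 percentage points
with_interview = career_allow_rate + interview_lift
print(round(career_allow_rate * 100), round(with_interview * 100))  # prints 43 85
```

Both displayed probabilities round out of the same 3-of-7 base rate, so they carry the wide uncertainty of a seven-case sample.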
