Prosecution Insights
Last updated: April 19, 2026
Application No. 17/518,953

METHOD AND APPARATUS FOR DETECTING MOLECULE BINDING SITE, ELECTRONIC DEVICE, AND STORAGE MEDIUM

Non-Final OA (§101, §103)
Filed: Nov 04, 2021
Examiner: ANDERSON-FEARS, KEENAN NEIL
Art Unit: 1687
Tech Center: 1600 — Biotechnology & Organic Chemistry
Assignee: Tencent Technology (Shenzhen) Company Limited
OA Round: 2 (Non-Final)
Grant Probability: 6% (At Risk)
OA Rounds: 2-3
To Grant: 5y 1m
With Interview: 56%

Examiner Intelligence

Career Allow Rate: 6% (1 granted / 16 resolved; -53.7% vs TC avg)
Interview Lift: +50.0% (resolved cases with vs. without interview)
Avg Prosecution: 5y 1m; 45 currently pending
Total Applications: 61, across all art units

Statute-Specific Performance

§101: 32.6% (-7.4% vs TC avg)
§103: 33.2% (-6.8% vs TC avg)
§102: 12.7% (-27.3% vs TC avg)
§112: 15.2% (-24.8% vs TC avg)
Tech Center averages are estimates. Based on career data from 16 resolved cases.
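The headline examiner figures above are simple ratios. A minimal sketch of how they relate, assuming the interview lift is a percentage-point difference (the variable names are illustrative; only the granted/resolved counts and the with-interview rate come from the report):

```python
# Illustrative arithmetic behind the examiner metrics reported above.
granted, resolved = 1, 16
career_allow_rate = granted / resolved  # 0.0625, shown as ~6%

# The report lists 56% allowance with an interview and a +50.0% lift;
# read as percentage points, the implied rate without an interview is:
rate_with_interview = 0.56
interview_lift = 0.50
rate_without_interview = rate_with_interview - interview_lift  # ~0.06
```

Note that the implied without-interview rate (~6%) is consistent with the career allow rate, which supports the percentage-point reading.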

Office Action

DETAILED ACTION

Applicant's response, filed 8/13/2025, has been fully considered. The following rejections and/or objections are either reiterated or newly applied. They constitute the complete set presently being applied to the instant application.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Status

Claims 1-20 are pending.

Drawings

Response to Amendment

In view of applicant's amendments to the drawings, the previous objections to the drawings are withdrawn.

Claim Rejections - 35 USC § 101

Response to Amendment

In view of applicant's amendments to the claims, the previous rejection under 35 U.S.C. 101 has been reviewed, updated, and provided below.

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to abstract ideas without significantly more. The claims recite a method of detecting a molecular binding site through the use of neural networks to map potential binding sites and determine binding probabilities. The judicial exception is not integrated into a practical application because, while claims 1-20 attempt to integrate the exception into a practical application, the additional elements are either generically recited computer elements that do not add a meaningful limitation to the abstract idea or insignificant extra-solution activities that simply implement the abstract idea on a computer.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the computer elements only store and retrieve information in memory and perform basic calculations that are well-understood, routine and conventional computer functions, as recognized by the decisions cited in MPEP § 2106.05(d).

Framework with which to Analyze Subject Matter Eligibility:

Step 1: Are the claims directed to a category of statutory subject matter (a process, machine, manufacture, or composition of matter)? [see MPEP § 2106.03]

The claims are directed to statutory subject matter, specifically methods (claims 1-20).

Step 2A Prong One: Do the claims recite a judicially recognized exception, i.e., an abstract idea, a law of nature, or a natural phenomenon? [see MPEP § 2106.04(a)]

The claims herein recite abstract ideas, specifically mental processes and mathematical concepts. With respect to the Step 2A Prong One evaluation, the instant claims are found herein to recite abstract ideas that fall into the groupings of mental processes and mathematical concepts.

Claim 1: Determining a first target point, extracting a rotation-invariant location feature, and invoking a site detection model to perform prediction processing are mental processes. The prediction probability indicating a possibility of each site being a binding site is a mathematical concept. Determining a binding site based on the prediction probability of each site is both a mental process and a mathematical concept.

Claim 2: Constructing a global location feature and constructing the 3D coordinates of the first target point are mathematical concepts.

Claim 5: Invoking the site detection model to perform prediction processing is a mental process. Performing feature extraction on the graph data, fusing the global biological feature, and probability fitting on the fused feature are all both mental processes and mathematical concepts.
Claim 6: Mapping the location feature is a mental process. Performing dimension reduction on the first feature is a mathematical concept.

Claim 7: Performing feature extraction on an edge convolutional feature and performing dimension reduction on the third feature using the pooling layer are both mathematical concepts. Concatenating the graph data and mapping the second feature using the MLP are mental processes.

Claim 8: Constructing a cluster map for each edge of the convolutional layer is both a mental process and mathematical concept. Mapping the cluster map using an MLP is a mental process. Performing dimension reduction on the intermediate feature using the pooling feature is a mathematical concept.

Claim 9: Mapping the fused feature using the MLP is a mental process.

Claim 10: Determining a site with a highest prediction probability and greater than a probability threshold is both a mental process and mathematical concept.

Claim 11: Determining a first target point and a second target point, and determining a binding site from the site in the target molecule based on the prediction probability, are both mental processes. Extracting a rotation-invariant location feature in the 3D coordinates is a mathematical concept. Invoking a site detection model to perform prediction processing on the extracted rotation-invariant location feature is both a mental process and mathematical concept.

Claim 12: Constructing a global location feature of the 3D coordinates and constructing the 3D coordinates of the first target point are both mental processes.

Claim 15: The processor being configured to invoke the site detection model to perform prediction processing on the extracted rotation-invariant location feature is a mental process. Performing feature extraction on the graph data, fusing the global biological feature and edge convolutional feature, and probability fitting on the fused feature are all both mental processes and mathematical concepts.
Claim 16: Mapping the location feature of each site using the MLP is a mental process. Performing dimension reduction on the first feature of each site using the pooling layer is a mathematical concept.

Claim 17: Performing feature extraction on an edge convolutional feature and performing dimension reduction on the third feature using the pooling layer are both mathematical concepts. Concatenating the graph data and mapping the second feature using the MLP are mental processes.

Claim 18: Constructing a cluster map for each edge of the convolutional layer is both a mental process and mathematical concept. Mapping the cluster map using an MLP is a mental process. Performing dimension reduction on the intermediate feature using the pooling feature is a mathematical concept.

Claim 19: Mapping the fused feature using the MLP is a mental process.

Claim 20: Determining a first target point and a second target point, and determining a binding site from the site in the target molecule based on the prediction probability are both mental processes. Extracting a rotation-invariant location feature in the 3D coordinates is a mathematical concept. Invoking a site detection model to perform prediction processing on the extracted rotation-invariant location feature is both a mental process and mathematical concept.

Step 2A Prong Two: If the claims recite a judicial exception under prong one, then is the judicial exception integrated into a practical application? [see MPEP § 2106.04(d) and MPEP § 2106.05(a)-(c) & (e)-(h)]

Because the claims do recite judicial exceptions, direction under Step 2A Prong Two provides that the claims must be examined further to determine whether they integrate the abstract ideas into a practical application. The following claims recite the following additional elements in the form of non-abstract elements:

Claim 1: Obtaining 3D coordinates of at least one site is mere data gathering.
Claim 2: Obtaining the location feature of each site is mere data gathering.

Claim 3: The global location feature comprising at least one feature of those specified within the group is merely selecting a particular data source.

Claim 4: The local location feature between the site and neighborhood point comprising at least one feature of those specified within the group is merely selecting a particular data source.

Claim 5: The site detection model being a graph convolutional network is well-understood, routine and conventional within the art. Inputting the location feature into the input layer of the GCN, inputting the graph data into the edge convolutional layer of the GCN and inputting the fused feature into the output layer of the GCN are all mere data gathering.

Claim 6: Inputting the location feature into an MLP input layer and inputting the first feature into a pooling layer of the input layer are both mere data gathering.

Claim 7: Inputting the second feature into an MLP and inputting the third feature into a pooling layer are both mere data gathering.

Claim 8: Inputting the cluster map into an MLP and inputting the intermediate feature into a pooling layer are both mere data gathering.

Claim 9: Inputting the fused feature into an MLP in the output layer is mere data gathering.

Claim 11: Memory, computer instructions and a processor are all generic, nonspecific computer elements. Obtaining 3D coordinates of at least one site in a target molecule is mere data gathering.

Claim 12: A processor is a generic, nonspecific computer element. Obtaining the location feature of each site is mere data gathering.

Claim 13: The global location feature comprising at least one feature of those specified within the group is merely selecting a particular data source.

Claim 14: The local location feature between the site and neighborhood point comprising at least one feature of those specified within the group is merely selecting a particular data source.
Claim 15: The site detection model being a graph convolutional network is well-understood, routine and conventional within the art. A processor is a generic, nonspecific computer element. Inputting the location feature into the input layer of the GCN, inputting the graph data into the edge convolutional layer of the GCN and inputting the fused feature into the output layer of the GCN are all mere data gathering.

Claim 16: A processor is a generic, nonspecific computer element. Inputting the location feature into an MLP input layer and inputting the first feature into a pooling layer of the input layer are both mere data gathering.

Claim 17: A processor is a generic, nonspecific computer element. Inputting the second feature into an MLP and inputting the third feature into a pooling layer are both mere data gathering.

Claim 18: A processor is a generic, nonspecific computer element. Inputting the cluster map into an MLP and inputting the intermediate feature into a pooling layer are both mere data gathering.

Claim 19: A processor is a generic, nonspecific computer element. Inputting the fused feature into an MLP in the output layer is mere data gathering.

Claim 20: A non-transitory storage medium, computer readable instructions and a processor are generic, nonspecific computer elements. Obtaining 3D coordinates of at least one site in a target molecule is mere data gathering.

Step 2B: If the claims do not integrate the judicial exception, do the claims provide an inventive concept? [see MPEP § 2106.05]

Because the additional claim elements do not integrate the abstract idea into a practical application, the claims are further examined under Step 2B, which evaluates whether the additional elements, individually and in combination, amount to significantly more than the judicial exception itself by providing an inventive concept.
The claims do not recite additional elements that are sufficient to amount to significantly more than the judicial exception because the claims recite additional elements that are generic, conventional, nonspecific, or insignificant extra-solution activity. These additional elements include:

The additional elements of a non-transitory storage medium, computer readable instructions, memory and a processor are all generic, nonspecific computer elements that are well understood and conventional within the art [see MPEP §§ 2106.05(d), 2106.05(f) and 2106.05(g)]. Therefore, taken both individually and as a whole, the additional elements do not amount to significantly more than the judicial exception by providing an inventive concept.

The additional element of the site detection model being a graph convolutional network is well-understood, routine and conventional within the art (Zhang et al. 2019) [see MPEP §§ 2106.05(d), 2106.05(f) and 2106.05(g)]. Therefore, taken both individually and as a whole, the additional elements do not amount to significantly more than the judicial exception by providing an inventive concept.

The additional elements of obtaining 3D coordinates of at least one site, obtaining the location feature of each site, inputting the location feature into the input layer of the GCN, inputting the graph data into the edge convolutional layer of the GCN, inputting the fused feature into the output layer of the GCN, inputting the location feature into an MLP input layer, inputting the first feature into a pooling layer of the input layer, inputting the cluster map into an MLP, inputting the intermediate feature into a pooling layer, and inputting the fused feature into an MLP in the output layer are all insignificant extra-solution activities, specifically mere data gathering [see MPEP § 2106.05(g)].
Therefore, taken both individually and as a whole, the additional elements do not amount to significantly more than the judicial exception by providing an inventive concept.

The additional elements of the global location feature comprising at least one feature of those specified within the group, and the local location feature between the site and neighborhood point comprising at least one feature of those specified within the group, are both insignificant extra-solution activities, specifically merely selecting a particular data source [see MPEP § 2106.05(g)]. Therefore, taken both individually and as a whole, the additional elements do not amount to significantly more than the judicial exception by providing an inventive concept.

Therefore, claims 1-20, when the limitations are considered individually and as a whole, are rejected under 35 USC § 101 as being directed to non-statutory subject matter.

Response to Arguments

Applicant's arguments, see Remarks page 2, filed 8/13/2025, with respect to claims 1-20 have been fully considered and are not persuasive.

Applicant asserts on page 14 of the Remarks that under Step 2A Prong 1 claim 1 is not directed to mental processes but rather is similar to example 38, and is not an abstract idea because it involves large data processing and a trained site detection model. However, the BRI (broadest reasonable interpretation) of claim 1 does not include large data processing but rather is merely a set of points, the first being a center point of a sphere, the second a radius from that point, and a second target point being an intersection between said radius and an outer surface of the spherical space. Additionally, the site detection model is not described within the claims as a trained model, nor is it invoked except to perform the prediction processing to obtain a probability.
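The geometry under discussion (a spherical space centered on each site, a first target point that is the center point of the sites inside that space, and a second target point where the forward extension of the origin-to-site vector meets the sphere's outer surface) can be sketched concretely. This is a minimal reading of the claim language as paraphrased in this action; the function name and example inputs are illustrative, not from the application:

```python
import numpy as np

def target_points(sites, idx, radius):
    """Sketch of the two target points described for claim 1.

    First target point: center point (centroid) of all sites inside the
    spherical space centered on site `idx`, using the target length as
    the radius. Second target point: intersection of the forward
    extension of the origin->site vector with the sphere's outer surface.
    """
    s = sites[idx]
    inside = sites[np.linalg.norm(sites - s, axis=1) <= radius]
    first = inside.mean(axis=0)
    second = s + radius * (s / np.linalg.norm(s))
    return first, second

# Toy data: two nearby sites and one distant site.
sites = np.array([[1.0, 0.0, 0.0], [2.0, 0.0, 0.0], [10.0, 10.0, 10.0]])
first, second = target_points(sites, idx=0, radius=2.0)
# first is the centroid of the two sites inside the sphere;
# second lies one radius beyond the site along its own direction.
```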
Applicant's assertion that it is similar to example 38 of the Subject Matter Eligibility Examples is also misguided: while that claim did not recite mental processes because its steps could not be performed in a human mind, this was because the information involved was radically different. The instant application uses rotation-invariant location features, compared to the audio circuit signals of example 38. Rotation-invariant features are merely transforms of existing features of geometric space, whereas audio circuit signals are not interpretable by human minds except as melodies.

Applicant asserts on page 15 of the Remarks that the claims are directed to a practical application, namely an improved accuracy and stability of binding detection, which applicant characterizes as an improvement to technology. However, MPEP 2106.05(a) notes that the judicial exception alone cannot provide the improvement. An increase in the accuracy/stability of prediction is an improvement to the mental process of predicting, not an improvement to the additional elements of the claims such as the computer technology or signal processing.

Additionally, applicant asserts that the ordered combination of elements results in an improvement under Step 2B; however, the criterion for Step 2B is the conventionality of the additional elements, both individually and as a whole (which is established above). An assertion that the ordered combination of all elements provides an improvement under Step 2B would necessitate that the combination of elements be unconventional; as described above, however, the additional elements are conventional both individually and as a whole.

Claim Rejections - 35 USC § 103

Response to Amendment

In view of applicant's amendments to the claims, the previous rejections under 35 U.S.C. 103 have been reviewed and amended accordingly.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 10-11 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Jimenez et al. (Bioinformatics (2017) 3036-3042; previously cited), Le Guilloux et al. (BMC Bioinformatics (2009) 1-11; newly cited), and Venkatraman et al. (BMC Bioinformatics (2009) 1-21; previously cited).
Claim 1 is directed to a method for detecting a molecule binding site via obtaining 3D coordinates of the site and from that determining a first and second target point from which rotationally-invariant features might be extracted and used to predict the probability of a binding site, and from said probability determining if it is a binding site.

Claim 11 is directed to a device for detecting a molecule binding site via obtaining 3D coordinates of the site and from that determining a first and second target point from which rotationally-invariant features might be extracted and used to predict the probability of a binding site, and from said probability determining if it is a binding site.

Claim 20 is directed to a non-transitory storage medium for detecting a molecule binding site via obtaining 3D coordinates of the site and from that determining a first and second target point from which rotationally-invariant features might be extracted and used to predict the probability of a binding site, and from said probability determining if it is a binding site.

Jimenez et al. teaches on page 3037, column 2, paragraph 2: "We treat protein structures from a computer vision perspective, as if they were 3D images. Coordinates of this 3D image are defined to span the bounding box of the protein plus a buffer of 8 Å to account for pockets located close to its edges. The 3D image is then discretized into a grid of 1 × 1 × 1 Å³ sized voxels", which reads on obtaining three-dimensional (3D) coordinates of at least one site in a to-be-detected target molecule, the target molecule being a chemical molecule with a to-be-detected binding site, the 3D coordinates being defined in a 3D coordinate system.

Jimenez et al. further teaches on page 3038, column 1, paragraph 2: "Subgrids of 16 × 16 × 16 voxels out of these arrays are then sampled, defining smaller protein areas with local properties. We use the fact that for all proteins in the database we know the location of its corresponding binding site to label each of the subgrids as positive, if its geometric center is closer than 4 Å to the pocket geometric center and negative otherwise". Jimenez et al. does not teach the extraction of a rotation-invariant location feature nor the prediction and determination of a binding site.

Le Guilloux et al. teaches on page 3, column 1, paragraph 4: "Briefly, an alpha sphere is a sphere that contacts four atoms on its boundary and contains no internal atom. By definition the four atoms are at an equal distance (sphere radius) to the alpha sphere centre. Alpha sphere radii reflect the local curvature defined by the four atoms… Thus, it is possible to filter the ensemble of alpha spheres defined from the atoms of a protein according to some minimal and maximal radii values in order to address pocket detection", for which the alpha sphere radii would be the second point, which in view of Jimenez et al. reads on determining a first target point and a second target point, the first target point being a center point of all sites within a spherical space, the spherical space being a spherical space with the each of the at least one site as a center of a sphere and a target length as a radius, and the second target point being an intersection between a forward extension line of a vector, starting from an origin of the 3D coordinate system and pointing to the each of the at least one site, and an outer surface of the spherical space.

Venkatraman et al. teaches on page 3, column 1, paragraph 1: "More recently, several publications have featured the use of spherical harmonics and its extension, the 3D Zernike descriptors (3DZDs), which have been successfully applied to comparing shapes of proteins and ligands. HEX, for example, uses spherical polar basis functions to model surface shapes.
It also avoids the use of expensive grid-based calculations employed in FFT based methods and instead uses the expansion coefficients of spherical harmonics to calculate correlations of the ligand and receptor surface overlaps. Spherical harmonics, however, are not rotationally invariant and make use of Wigner matrices to identify rotationally invariant regions. In contrast, 3DZD corrects this drawback while providing a more compact shape definition. Our previous studies have shown that the rotation invariant descriptor effectively captures protein surface shape similarity on both global and local levels", which reads on extracting a rotation-invariant location feature in the 3D coordinates of the each of the at least one site based on the 3D coordinates of the each of the at least one site, 3D coordinates of the first target point, and 3D coordinates of the second target point, the rotation-invariant location feature being used for indicating location information of the each of the at least one site in the target molecule.

Venkatraman et al. further teaches on page one in the abstract: "We present a novel protein docking algorithm based on the use of 3D Zernike descriptors as regional features of molecular shape. The key motivation of using these descriptors is their invariance to transformation, in addition to a compact representation of local surface shape characteristics. Docking decoys are generated using geometric hashing, which are then ranked by a scoring function that incorporates a buried surface area and a novel geometric complementarity term based on normals associated with the 3D Zernike shape description", which reads on invoking a site detection model to perform prediction processing on the extracted rotation-invariant location feature, to obtain a prediction probability of the each of the at least one site, the prediction probability indicating a possibility of the each of the at least one site being a binding site; and determining a binding site from the at least one site in the target molecule based on the prediction probability of the each of the at least one site.

It would have been obvious at the time of the effective filing date to a person of ordinary skill in the art to take the teachings of Jimenez et al. for using 3D coordinates and geometric data, along with the teachings of Le Guilloux et al. for the use of spheres in binding pocket geometry, as this would merely be a substitution of one known method for another, specifically exchanging the use of raw geometric data for specified clusters that reduce the computation necessary, as Le Guilloux et al. teaches on page 3, column 1, paragraph 2: "It has several inherent advantages such as computational efficiency". Using these two, it would then be obvious to a person skilled in the art to combine them with the teachings of Venkatraman et al. for finding rotationally-invariant features and predicting protein binding sites based on said geometry, specifically as all papers are building models for examining protein binding and both Le Guilloux et al. and Venkatraman et al. are calculating rotationally invariant features (properties of a shape, object, or data representation that remain unchanged even if the object is rotated).
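The parenthetical definition of rotational invariance can be demonstrated in a few lines: pairwise distances between points, the kind of quantity underlying the distance and cosine features recited in the claims, are unchanged by any rotation. This is a toy sketch of the general property, not code from any cited reference:

```python
import numpy as np

def pairwise_distances(points):
    """A rotation-invariant description of a point set: all pairwise distances."""
    diff = points[:, None, :] - points[None, :, :]
    return np.linalg.norm(diff, axis=-1)

rng = np.random.default_rng(0)
pts = rng.normal(size=(5, 3))

# Build a random 3D orthogonal transform via QR decomposition.
q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
rotated = pts @ q.T

# The raw coordinates change, but the pairwise distances do not.
assert not np.allclose(pts, rotated)
assert np.allclose(pairwise_distances(pts), pairwise_distances(rotated))
```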
One would have had a reasonable expectation of success given that all papers focus on the same subject and the overall methods are similar with slight adjustments for the inclusion of data types or information. Therefore, it would be obvious to one with ordinary skill in the art to incorporate the teachings of each and to be successful.

Claim 10 is directed to the method of claim 1 but further specifies that the site with the highest probability and greater than a probability threshold is determined as the binding site. It would be obvious that the site with the greatest probability of being the binding site would be determined the binding site.

Claims 2-4 and 12-14 are rejected under 35 U.S.C. 103 as being unpatentable over Jimenez et al. (Bioinformatics (2017) 3036-3042; previously cited), Le Guilloux et al. (BMC Bioinformatics (2009) 1-11; newly cited), and Venkatraman et al. (BMC Bioinformatics (2009) 1-21; previously cited) as applied to claims 1 and 11 above, and further in view of Krivak et al. (Journal of Cheminformatics (2018) 1-12; previously cited).

Claim 2 is directed to the method of claim 1 but further specifies that a global location feature be constructed based upon the 3D coordinates, followed by the construction of the 3D coordinates of the first target point, from which the location feature can be derived. Claim 12 is directed to the device of claim 11 but further specifies that a global location feature be constructed based upon the 3D coordinates, followed by the construction of the 3D coordinates of the first target point, from which the location feature can be derived.

Jimenez et al., Le Guilloux et al., and Venkatraman et al. teach the method of claims 1 and 11 as described above. Venkatraman et al.
further teaches on page 3, column 1, paragraph 1: "Our previous studies have shown that the rotation invariant descriptor effectively captures protein surface shape similarity on both global and local levels", reading on the construction of a global location feature of the each of the at least one site based on the 3D coordinates of the each of the at least one site, the 3D coordinates of the first target point, and the 3D coordinates of the second target point.

Jimenez et al. and Venkatraman et al. do not teach the construction of a 3D coordinate from the first target point based upon at least one neighborhood point. Krivak et al. teaches on page 9, column 1, paragraph 4: "To generate predictions for a given protein using a pre-trained classification model P2Rank follows these instructions: 1) Generate a set of regularly spaced points lying on a protein's Solvent Accessible Surface (SAS points). Positions of the points are calculated by a fast numerical algorithm implemented in CDK library. 2) Calculate feature descriptors of SAS points based on their local chemical neighborhood", reading on constructing, based on the 3D coordinates of the each of the at least one site, the 3D coordinates of the first target point, the 3D coordinates of the second target point, and 3D coordinates of at least one neighborhood point of the site, at least one local location feature between the site and the at least one neighborhood point.

It would have been obvious at the time of the effective filing date to a person of ordinary skill in the art to take the teachings of Jimenez et al., Le Guilloux et al., and Venkatraman et al. for the method of claims 1 and 11, and combine them with the teachings of Krivak et al. for calculating feature descriptors based upon local chemical neighborhoods, specifically as Le Guilloux et al. is teaching the use of alpha spheres based on portions of the molecule for predicting a binding pocket and Krivak et al.
is teaching the use of local feature descriptors, or in this case the alpha sphere feature descriptors, particularly as the latter shows their increased performance when using said method (Abstract: Results). One would have had a reasonable expectation of success given that all papers focus on the same subject and the overall methods are highly similar with only slight adjustments to the algorithms. Therefore, it would be obvious to one with ordinary skill in the art to incorporate the teachings of each and to be successful.

Claim 3 is directed to the method of claim 2, and thus claim 1, but further specifies that the global location feature comprise one of the specified measurements given. Claim 13 is directed to the device of claim 12, and thus claim 11, but further specifies that the global location feature comprise one of the specified measurements given.

Jimenez et al., Le Guilloux et al., and Venkatraman et al. teach the method of claim 1 as described above. Venkatraman et al. further teaches on page 15, column 1, paragraph 2: "For each ligand point located within this bound, compare the labels (3DZD, normals, torsion angles, and point distances) of the points (receptor vs ligand) and those of the corresponding reference frames", reading on wherein the global location feature comprises at least one of a magnitude of the each of the at least one site, a distance between the each of the at least one site and the first target point, a distance between the first target point and the second target point, a cosine value of a first angle, or a cosine value of a second angle, the first angle being an angle formed between a first line segment and a second line segment, the second angle being an angle formed between the second line segment and a third line segment, the first line segment being a line segment formed between the each of the at least one site and the first target point, the second line segment being a line segment formed between the first target point and the
second target point, and the third line segment being a line segment formed between the each of the at least one site and the second target point. Claim 4 is directed to the method of claim 2 and therefore claim 1 but further specifies that for the neighborhood point, the local location feature between the site and neighborhood point comprise one of the specified measurements given. Claim 14 is directed to the device of claim 12 and therefore claim 11 but further specifies that for the neighborhood point, the local location feature between the site and neighborhood point comprise one of the specified measurements given. Jimenez et al., Le Guilloux et al., and Venkatraman et al. teach the method of claim 1 as described above. Venkatraman et al. further teaches on page 15, column 1, paragraph 2 “For each ligand point located within this bound, compare the labels (3DZD, normals, torsion angles, and point distances) of the points (receptor vs ligand) and those of the corresponding reference frames”. Krivak et al. teaches on page 9, column 1, paragraph 4 “To generate predictions for a given protein using a pre-trained classification model P2Rank follows these instructions: 1) Generate a set of regularly spaced points lying on a protein’s Solvent Accessible Surface (SAS points). Positions of the points are calculated by a fast numerical algorithm implemented in CDK library. 2) Calculate feature descriptors of SAS points based on their local chemical neighborhood”. It would have been obvious at the time of invention that the use of a neighborhood point or chemical neighborhood would, if Venkatraman et al. be applied such as previously described, that there be an associated measurement corresponding to “normals, torsion angles, and point distances” of said chemical neighborhood or neighborhood point. Claims 5 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Jimenez et al. (Bioinformatics (2017) 3036-3042; previously cited), Le Guilloux et al. 
(BMC Bioinformatics (2009) 1-11; newly cited), and Venkatraman et al. (BMC Bioinformatics (2009) 1-21; previously cited) as applied to claims 1 and 11 above, and further in view of Fout et al. (Advances in neural information processing systems (2017) 1-10; previously cited) and Kulmanov et al. (Bioinformatics (2018) 660-668; previously cited).

Claim 5 is directed to the method of claim 1, but further specifies that the model is a GCN, which inputs the location feature into an input layer, passes the resulting graph data to the edge convolutional layer, fuses the global feature with the graph data and the convolutional feature to obtain a fused feature, and inputs the fused feature into the output layer to obtain a probability. Claim 15 is directed to the device of claim 11, but further specifies the same GCN structure and operations.

Jimenez et al., Le Guilloux et al., and Venkatraman et al. teach the method of claim 1 as described above.

Fout et al. teaches in the abstract, “By performing convolution over a local neighborhood of a node of interest, we are able to stack multiple layers of convolution and learn effective latent representations that integrate information across the graph that represent the three-dimensional structure of a protein of interest. An architecture that combines the learned features across pairs of proteins is then used to classify pairs of amino acid residues as part of an interface or not”; on page 1, paragraph 2, “we propose a graph convolution approach that allows us to tackle the challenging problem of predicting protein interfaces”; and on page 3, paragraph 3, “Multiple layers of these graph convolution operators can be used, and this will have the effect of learning features that characterize the graph at increasing levels of abstraction, and will also allow information to propagate through the graph, thereby integrating information across regions of increasing size. Furthermore, these operators are rotation-invariant if the features have this property”. Finally, figure 2 provides a description of the GCN, with convolutional layers, input layers, and output layers that provide a classification. These read on the use of a GCN; the structure of the GCN with an input layer, at least one edge convolutional layer, and an output layer; inputting the location feature of the each of the at least one site into the input layer of the GCN and outputting graph data of the each of the at least one site by using the input layer; and inputting the graph data of the each of the at least one site into the at least one edge convolutional layer of the GCN and performing feature extraction on the graph data of the each of the at least one site by using the at least one edge convolutional layer, to obtain a global biological feature of the each of the at least one site.

Kulmanov et al. teaches on page 662, column 1, paragraph 5, “We combined the knowledge graph embeddings for the nodes with the output of the max-pooling layer of length 832 as a combined feature vector”, reading on fusing the global biological feature, the graph data of the each of the at least one site, and an edge convolutional feature outputted by the at least one edge convolutional layer, to obtain a fused feature.
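The fusion step mapped above (combining a global feature, per-site graph data, and an edge-convolutional feature into a single vector) can be illustrated with a minimal sketch. All names, shapes, and stand-in values here are hypothetical; this is not the applicant's implementation or that of any cited reference:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sites, hid_dim = 8, 32  # hypothetical site count and feature width

# Per-site graph data as produced by an input layer (stand-in values).
graph_data = rng.standard_normal((n_sites, hid_dim))

# Edge-convolutional feature extracted from the graph data (stand-in values).
edge_conv_feature = rng.standard_normal((n_sites, hid_dim))

# A global biological feature summarizing the whole graph, broadcast per site.
global_feature = np.broadcast_to(graph_data.mean(axis=0), (n_sites, hid_dim))

# Fusing by concatenation, in the spirit of Kulmanov et al.'s
# "combined feature vector" of graph embeddings and pooled outputs.
fused_feature = np.concatenate(
    [global_feature, graph_data, edge_conv_feature], axis=1
)
print(fused_feature.shape)  # (8, 96)
```

Concatenation is only one plausible fusion; Kulmanov et al. describes combining embeddings with a max-pooling output, and other fusions (summation, gating) would read similarly.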
It is obvious that a classification layer would read on an output layer that provides a probability of whether or not a site is a binding site (i.e., classification).

It would have been obvious at the effective filing date to modify the teachings of Jimenez et al., Le Guilloux et al., and Venkatraman et al. for the methods of claims 1 and 11 with the teachings of Fout et al. for the use and structure of the GCN along with the input data, and the teachings of Kulmanov et al. for the fusing/combining of features, specifically biological and graphical, to obtain features which are then used as input to the model. More specifically, Fout et al. shows that they can “stack multiple layers of convolution and learn effective latent representations that integrate information across the graph that represent the three dimensional structure of a protein of interest” (abstract), where the previous references are already using convolutional networks, and Fout et al. was able to show that “several graph convolution operators yielded accuracy that is better than the state-of-the-art SVM method in this task” (abstract). Additionally, Kulmanov et al. allows for the combining of various features and was able to show improved predictions of protein function. One would have had a reasonable expectation of success given that all papers focus on the same subject and the overall methods are highly similar, with only slight adjustments to the algorithms. Therefore, it would have been obvious to one of ordinary skill in the art to incorporate the teachings of each and to be successful.

Claims 6-7, 9 and 16-17, 19 are rejected under 35 U.S.C. 103 as being unpatentable over Jimenez et al. (Bioinformatics (2017) 3036-3042; previously cited), Le Guilloux et al. (BMC Bioinformatics (2009) 1-11; newly cited), Venkatraman et al. (BMC Bioinformatics (2009) 1-21; previously cited), Fout et al. (Advances in neural information processing systems (2017) 1-10; previously cited), and Kulmanov et al. (Bioinformatics (2018) 660-668; previously cited) as applied to claims 1, 5, 11, and 15 above, and further in view of Haberal et al. (Fourth International Conference on Mathematics and Computers in Sciences and in Industry (2017) 1-5; newly cited).

Claim 6 is directed to the method of claim 5, and thus claim 1, but further specifies inputting the location feature into the multilayer perceptron, performing dimension reduction, and outputting into a pooling layer. Claim 16 is directed to the device of claim 15, and thus claim 11, but further specifies inputting the location feature into the multilayer perceptron, performing dimension reduction, and outputting into a pooling layer.

Jimenez et al., Le Guilloux et al., Venkatraman et al., Fout et al., and Kulmanov et al. teach the methods of claims 5 and 15 as described above.

Haberal et al. teaches on page 3, paragraph 2, “In this paper, we have implemented a CNN for prediction of protein metal binding site using the Keras framework. Our DeepMBS model (Figure 3) is composed of five stages. These stages are: convolution, pooling followed by dropout, and multi-layer perceptron layers. Softmax function, in the end, maps the output from the model into prediction. The convolution stages consist of 4 layers. Both of the pooling and multilayer perceptron stages consist of 2 layers. There are two classes that predict the state of metal bonding as the output of the model”, and on page 2, column 2, paragraph 2, “The most advantageous aspect of the deep learning approach is the convergence of the preprocessing, dimensionality reduction and classification stages in a single model. The output of each layer is used as input to the next layer”, reading on wherein inputting the location feature of the each of the at least one site into the input layer of the GCN, and outputting graph data of the each of the at least one site by using the input layer, comprises: inputting the location feature of the each of the at least one site into a multilayer perceptron (MLP) of the input layer, and mapping the location feature of the each of the at least one site by using the MLP, to obtain a first feature of the each of the at least one site, a dimension quantity of the first feature being greater than a dimension quantity of the location feature; and inputting the first feature of the each of the at least one site into a pooling layer of the input layer, and performing dimension reduction on the first feature of the each of the at least one site by using the pooling layer, to obtain the graph data of the each of the at least one site.

It would have been obvious at the effective filing date to combine the teachings of Jimenez et al., Le Guilloux et al., Venkatraman et al., Fout et al., and Kulmanov et al. for the method and device of claims 5 and 15, respectively, with the teachings of Haberal et al. for the use of a multilayer perceptron to output a prediction probability. As Fout et al. teaches the use of a neural network, a multilayer perceptron would be a mere substitution of known elements which, in view of the statement in the abstract of Haberal et al. that “results show that a better performance can be achieved with deep learning approach compared with previous studies on the same dataset”, would produce a known, improved outcome. One would have had a reasonable expectation of success given that Haberal et al. predicts binding sites and uses models similar to those of the previous citations. Therefore, it would have been obvious to one of ordinary skill in the art to incorporate the teachings of each and to be successful.
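The input-layer arrangement recited in claims 6 and 16 (an MLP mapping the location feature to a higher-dimensional first feature, then a pooling layer reducing dimension to yield graph data) can be sketched as follows. The weights, dimensions, and max-pool window are hypothetical, not taken from the application or the cited references:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sites, loc_dim, first_dim, window = 8, 10, 64, 4  # hypothetical dimensions

location_feature = rng.standard_normal((n_sites, loc_dim))

# MLP of the input layer: one dense layer with ReLU (stand-in weights).
W = rng.standard_normal((loc_dim, first_dim))
first_feature = np.maximum(location_feature @ W, 0.0)
# Per the claim, the first feature has more dimensions than the location feature.
assert first_feature.shape[1] > location_feature.shape[1]

# Pooling layer: max over non-overlapping windows reduces the dimensionality,
# yielding the per-site graph data.
graph_data = first_feature.reshape(n_sites, first_dim // window, window).max(axis=2)
print(graph_data.shape)  # (8, 16)
```

Max pooling is used here as one common dimension-reduction choice; the claim language does not limit the pooling operation to any particular kind.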
Claim 7 is directed to the method of claim 5, and thus claim 1, but further details the ordering of feature information from one layer to another. Claim 17 is directed to the device of claim 15, and thus claim 11, but further details the ordering of feature information from one layer to another.

Jimenez et al., Le Guilloux et al., Venkatraman et al., Fout et al., and Kulmanov et al. teach the methods of claims 5 and 15 as described above.

Haberal et al. teaches on page 3, paragraph 2, “In this paper, we have implemented a CNN for prediction of protein metal binding site using the Keras framework. Our DeepMBS model (Figure 3) is composed of five stages. These stages are: convolution, pooling followed by dropout, and multi-layer perceptron layers. Softmax function, in the end, maps the output from the model into prediction. The convolution stages consist of 4 layers. Both of the pooling and multilayer perceptron stages consist of 2 layers. There are two classes that predict the state of metal bonding as the output of the model”, and on page 2, column 2, paragraph 2, “The most advantageous aspect of the deep learning approach is the convergence of the preprocessing, dimensionality reduction and classification stages in a single model. The output of each layer is used as input to the next layer”. While Haberal et al. does not explicitly teach each step, figure 3 provides the flow logic for each of the five stages and includes each of those specified within the claim.
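The staged flow quoted from Haberal et al. (convolution, pooling with dropout, a multilayer-perceptron layer, and a softmax mapping to a two-class prediction) can be sketched at inference time, where dropout is inactive. Filter values, sizes, and weights are hypothetical stand-ins, not the DeepMBS model itself:

```python
import numpy as np

rng = np.random.default_rng(2)

x = rng.standard_normal(32)        # hypothetical 1-D input feature vector
kernel = rng.standard_normal(3)    # hypothetical convolution filter

# Stage 1: convolution followed by a ReLU nonlinearity.
conv = np.maximum(np.convolve(x, kernel, mode="valid"), 0.0)

# Stage 2: max pooling over windows of 2 (dropout is skipped at inference).
pooled = conv[: conv.size // 2 * 2].reshape(-1, 2).max(axis=1)

# Stage 3: a multilayer-perceptron layer mapping to two output classes.
W = rng.standard_normal((pooled.size, 2))
logits = pooled @ W

# Stage 4: softmax maps the model output into a two-class prediction,
# i.e. a probability for each of the two classes.
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print(probs.shape)  # (2,)
```

The softmax output sums to one, which is what lets a classification layer double as the claimed probability-producing output layer.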
While the ordering might be dissimilar, it would be obvious to a person skilled in the art to modify the teachings to optimize the accuracy and decrease the computation of the model, thereby reading on performing, for each edge convolutional layer in the at least one edge convolutional layer, feature extraction on an edge convolutional feature outputted by a previous edge convolutional layer, to obtain an extracted edge convolutional feature, and inputting the extracted edge convolutional feature into a next edge convolutional layer; concatenating the graph data of the each of the at least one site and at least one edge convolutional feature outputted by the at least one edge convolutional layer, to obtain a second feature; inputting the second feature into a multilayer perceptron (MLP), and mapping the second feature by using the MLP, to obtain a third feature; and inputting the third feature into a pooling layer, and performing dimension reduction on the third feature by using the pooling layer, to obtain the global biological feature.

Claim 9 is directed to the method of claim 5, and thus claim 1, but further specifies the inputting of the fused feature into a multilayer perceptron in the output layer to obtain a prediction probability. Claim 19 is directed to the device of claim 15, and thus claim 11, but further specifies the inputting of the fused feature into a multilayer perceptron in the output layer to obtain a prediction probability.

Jimenez et al., Le Guilloux et al., Venkatraman et al., Fout et al., and Kulmanov et al. teach the methods of claims 5 and 15 as described above.

Haberal et al. teaches on page 3, column 1, paragraph 2, “In this paper, we have implemented a CNN for prediction of protein metal binding site using the Keras framework. Our DeepMBS model (Figure 3) is composed of five stages. These stages are: convolution, pooling followed by dropout, and multi-layer perceptron layers. Softmax function, in the end, maps the output from the model into prediction”, which reads on wherein inputting the fused feature into the output layer of the GCN, and performing, by using the output layer, probability fitting on the fused feature, to obtain the prediction probability comprises: inputting the fused feature into a multilayer perceptron (MLP) in the output layer, and mapping the fused feature by using the MLP, to obtain the prediction probability.

Claims 8 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Jimenez et al. (Bioinformatics (2017) 3036-3042; previously cited), Le Guilloux et al. (BMC Bioinformatics (2009) 1-11; newly cited), Venkatraman et al. (BMC Bioinformatics (2009) 1-21; previously cited), Fout et al. (Advances in neural information processing systems (2017) 1-10; previously cited), Kulmanov et al. (Bioinformatics (2018) 660-668; previously cited), and Haberal et al. (Fourth International Conference on Mathematics and Computers in Sciences and in Industry (2017) 1-5; newly cited) as applied to claims 1, 5-7, 9, 11, 15-17, and 19 above, and further in view of Pan et al. (Bioinformatics (2018) 3427-3436; newly cited).

Claim 8 is directed to the method of claim 5, and thus claim 1, but further specifies constructing a map for features that will be passed from layer to layer and used in a pooling layer for dimension reduction. Claim 18 is directed to the device of claim 15, and thus claim 11, but further specifies constructing a map for features that will be passed from layer to layer and used in a pooling layer for dimension reduction.

Jimenez et al., Le Guilloux et al., Venkatraman et al., Fout et al., and Kulmanov et al. teach the methods of claims 5 and 15 as described above.

Haberal et al. teaches on page 3, paragraph 2, “In this paper, we have implemented a CNN for prediction of protein metal binding site using the Keras framework. Our DeepMBS model (Figure 3) is composed of five stages. These stages are: convolution, pooling followed by dropout, and multi-layer perceptron layers. Softmax function, in the end, maps the output from the model into prediction. The convolution stages consist of 4 layers. Both of the pooling and multilayer perceptron stages consist of 2 layers. There are two classes that predict the state of metal bonding as the output of the model”, and on page 2, column 2, paragraph 2, “The most advantageous aspect of the deep learning approach is the convergence of the preprocessing, dimensionality reduction and classification stages in a single model. The output of each layer is used as input to the next layer”.

Pan et al. teaches on page 3429, column 2, paragraph 2, “The Convolutional Neural Network (CNN) consists of convolution, max-pool and fully connected layers. In this study, CNN captures non-linear features. The convolution outputs the pointwise product between input one-hot matrix and filters, followed by a rectified linear ReLU that sparsifies the outputs of the convolution and keep only positive matches. Finally, a max pooling operation is applied to reduce the dimensionality by selecting the maximum value over a window. Where M is the input one-hot matrix of sequence s, Fk is the coefficient of motif detector k, m is the kernel size and the outputs xik from the convolution operation are the feature maps, i is index of nucleotides in a sequence, l is the index corresponding to A, C, G, U in matrix.”, which, in view of the teachings from Haberal et al., reads on constructing a cluster map for the each edge convolutional layer in the at least one edge convolutional layer based on the edge convolutional feature outputted by the previous edge convolutional layer; inputting the cluster map into an MLP of the edge convolutional layer, and mapping the cluster map by using the MLP, to obtain an intermediate feature of the cluster map; and inputting the intermediate feature into a pooling layer in the edge convolutional layer, performing dimension reduction on the intermediate feature by using the pooling layer, and inputting the dimension-reduced intermediate feature into the next edge convolutional layer.

It would have been obvious at the effective filing date to modify the teachings of Jimenez et al., Le Guilloux et al., Venkatraman et al., Fout et al., and Kulmanov et al. for the method of claim 5 with the teachings of Haberal et al. for the use of a multilayer perceptron to output a prediction probability, and the teachings of Pan et al. for incorporating feature maps into the CNN, as Pan et al. teaches in the abstract, “iDeepE demonstrates a better performance over state-of-the-art methods…We also find that the local CNN runs 1.8 times faster than the global CNN with comparable performance when using GPUs”. One would have had a reasonable expectation of success given that this would be a known substitution with an improved outcome. Therefore, it would have been obvious to one of ordinary skill in the art to incorporate the teachings of each and to be successful.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KEENAN NEIL ANDERSON-FEARS, whose telephone number is (571) 272-0108. The examiner can normally be reached M-Th and alternate F, 8-5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Karlheinz Skowronek, can be reached at 571-272-9047. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/K.N.A./
Examiner, Art Unit 1687

/OLIVIA M. WISE/
Supervisory Patent Examiner, Art Unit 1685

Prosecution Timeline

Nov 04, 2021: Application Filed
May 06, 2025: Non-Final Rejection (§101, §103)
Aug 13, 2025: Response Filed
Nov 21, 2025: Non-Final Rejection (§101, §103)
Mar 18, 2026: Interview Requested
Apr 09, 2026: Examiner Interview Summary
Apr 09, 2026: Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592298
Hardware Execution and Acceleration of Artificial Intelligence-Based Base Caller
2y 5m to grant; granted Mar 31, 2026
Study what changed to get past this examiner. Based on the 1 most recent grant.


Prosecution Projections

2-3
Expected OA Rounds
6%
Grant Probability
56%
With Interview (+50.0%)
5y 1m
Median Time to Grant
Moderate
PTA Risk
Based on 16 resolved cases by this examiner. Grant probability derived from career allow rate.
