Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Detailed Action
Claims 1, 2, 5, 6, and 8-14 are pending.
Request for Continued Examination
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 3/11/2026 has been entered.
Response to Amendment
This action is in response to the RCE filed on 3/11/2026. The amendment has been entered. Claims 1 and 6 have been amended, and claims 3, 4, and 7 have been canceled. Claims 1, 2, 5, 6, and 8-14 are pending, with claims 1 and 6 being independent in the instant application.
Response to Arguments
Applicant's Arguments/Remarks filed on 3/11/2026, at pages 9-14, regarding the 35 U.S.C. 103 rejections have been fully considered but are found unpersuasive in view of the amended claims, for the reasons set forth below.
Applicant stated in the Arguments/Remarks at page 13: “Bombarelli does not disclose using the explanatory values, which are derived from the intermediate layer of the autoencoder, to train the prediction model which is used to predict physical properties”.
Examiner respectfully disagrees. Bombarelli disclosed in page 268 heading ‘Abstract’: “A deep neural network was trained on hundreds of thousands of existing chemical structures to construct three coupled functions: an encoder, a decoder, and a predictor. The encoder converts the discrete representation of a molecule into a real-valued continuous vector, and the decoder converts these continuous vectors back to discrete molecular representations. The predictor estimates chemical properties from the latent continuous vector representation of the molecule. Continuous representations of molecules allow us to automatically generate novel chemical structures by performing simple operations in the latent space, such as decoding random vectors, perturbing known chemical structures, or interpolating between molecules.” The disclosed “latent space” corresponds to the claimed “autoencoder having a plurality of intermediate layers”. Moreover, the figure in the Abstract of Bombarelli’s disclosure is similar to Fig. 7 of the present application’s drawings.
Therefore, a new ground of rejection is necessitated by Applicant's claim amendments, and the previous rejections under 35 U.S.C. 103 are revised in this Office action (see the analysis below under Claim Rejections - 35 USC § 103).
Examiner Notes
Examiner cites particular columns, paragraphs, figures, and line numbers in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passages as taught by the prior art or discussed by the examiner. The entire reference is considered to provide disclosure relating to the claimed invention. The claims, and only the claims, form the metes and bounds of the invention. Office personnel are to give the claims their broadest reasonable interpretation in light of the supporting disclosure. Unclaimed limitations appearing in the specification are not read into the claims. The prior art is referenced using terminology familiar to one of ordinary skill in the art; such an approach is broad in concept and can be either explicit or implicit in meaning. Examiner's notes are provided with the cited references to assist the applicant in better understanding how the examiner interprets the applied prior art. Such comments are entirely consistent with the intent and spirit of compact prosecution.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.
Claims 1 and 2 are rejected under 35 U.S.C. 103 as being unpatentable over YUTA (Pub. No. US 2010/0145896 A1, IDS provided on 12/21/2021) in view of the NPL “Automatic Chemical Design Using a Data-Driven Continuous Representation of Molecules” by Rafael Gómez-Bombarelli et al. (hereinafter Bombarelli; the copy provided with the IDS of 12/21/2021 is the 2017 version; however, the Examiner relies on the 2018 version, which is attached to the current Office action).
Regarding Claim 1, YUTA teaches a compound property prediction device for predicting a compound property using a case-by-case compound database storing a plurality of case databases, the case database including a plurality of records recording structural information about compound structures in association with compound properties about properties of compounds, (YUTA disclosed in page 2 para [0020-0021]: “an object of the invention is to provide a compound property prediction apparatus and method that can achieve high prediction rates by generating a prediction model accurately reflecting information concerning each particular compound whose properties are to be predicted; … there is provided a compound property prediction apparatus including: a training sample library in which a parameter value relating to a chemical structure and a value for a prediction item are preregistered for each individual one of a plurality of training samples;”).
YUTA teaches the device comprising: a processor; and a memory, coupled to the processor, storing instructions that when executed by the processor, (YUTA disclosed in page 2 para [0021]: “According to a first aspect of the invention, … there is provided a compound property prediction apparatus including: a training sample library in which a parameter value relating to a chemical structure and a value for a prediction item are preregistered for each individual one of a plurality of training samples;” The disclosure “compound property prediction apparatus including: a training sample library” corresponds to the claim limitation “a compound property prediction device”. Any person of ordinary skill in the art would understand that such a device or apparatus comprises a processor and a memory coupled to the processor, the memory storing instructions that are executed by the processor. The disclosure “training sample library” corresponds to a memory or database; it is obvious that the compound property prediction apparatus includes a memory and a processor to perform the claimed invention).
YUTA teaches configures the processor to: receive a designation of at least one case database stored in the case-by-case database, (YUTA disclosed in page 8 para [0102]: “In step S803, the parameter values of the unknown sample are retrieved, for example, from the internal memory, and in step S804, the ID number of one particular training sample Y1 and its parameter values are retrieved from the training sample library 16;” The disclosure “the parameter values of the unknown sample are retrieved, from the internal memory; the ID number of one particular training sample Y1 and its parameter values are retrieved from the training sample library” corresponds to the claim limitation “receive a designation of at least one case database stored in the case-by-case database”).
However, YUTA doesn’t explicitly teach the limitations “generate an autoencoder having a plurality of intermediate layers for converting structural information corresponding to the received case database to multi-variables, predict compound properties using the multi-variables converted by the autoencoder, receive structural information about structures of compounds having properties to be predicted, input the structural information about the structures of the compounds having the properties to be predicted to the autoencoder and convert the structural information to multi-variables in the plurality of intermediate layers and use the multi-variables generated in the plurality of intermediate layers as explanatory variables, input the explanatory variables to the prediction model and predict properties that are the objective variables, and set compound properties corresponding to training data as objective variables and train the prediction model using the explanatory variables and the objective variables.”
Bombarelli teaches generate an autoencoder having a plurality of intermediate layers for converting structural information corresponding to the received case database to multi-variables, (Bombarelli disclosed in page 268 heading ‘Abstract’: “A deep neural network was trained on hundreds of thousands of existing chemical structures to construct three coupled functions: an encoder, a decoder, and a predictor. The encoder converts the discrete representation of a molecule into a real-valued continuous vector, and the decoder converts these continuous vectors back to discrete molecular representations. The predictor estimates chemical properties from the latent continuous vector representation of the molecule. Continuous representations of molecules allow us to automatically generate novel chemical structures by performing simple operations in the latent space, such as decoding random vectors, perturbing known chemical structures, or interpolating between molecules.” In page 269 left col. (last para): “We apply such generative models to chemical design, using a pair of deep networks trained as an autoencoder to convert molecules represented as SMILES strings into a continuous vector representation. In principle, this method of converting from a molecular representation to a continuous vector representation could be applied to any molecular representation, … We chose to use SMILES representation because it can be readily converted into a molecule. … We trained the autoencoder jointly on a property prediction task; we added a multilayer perceptron that predicts property values from the continuous representation generated by the encoder …”.
The disclosure above “latent space” corresponds to the claim aspect “an autoencoder having a plurality of intermediate layers”. Moreover, the figure in the Abstract of Bombarelli’s disclosure is similar to Fig. 7 of the present application’s drawings).
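Examiner's note (illustrative only): to assist the applicant, the following minimal sketch, authored by the Examiner and not taken from YUTA or Bombarelli, shows one conventional way an encoder having a plurality of intermediate layers converts an encoded structural representation into a latent “multi-variable” vector, with a mirror-image decoder mapping the latent vector back. All layer sizes are assumptions for illustration; the 196-dimensional latent size follows the ZINC figure quoted from Bombarelli below.

```python
# Hypothetical sketch (not from the cited references): an autoencoder whose
# encoder passes structural input through a plurality of intermediate layers
# to produce a latent "multi-variable" vector, and whose decoder maps the
# latent vector back toward the original representation.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, input_dim=120 * 35, latent_dim=196):
        super().__init__()
        # A plurality of intermediate layers between input and latent space.
        self.layers = nn.Sequential(
            nn.Linear(input_dim, 512), nn.ReLU(),
            nn.Linear(512, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),  # continuous latent representation
        )

    def forward(self, x):
        return self.layers(x)

class Decoder(nn.Module):
    def __init__(self, latent_dim=196, output_dim=120 * 35):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 512), nn.ReLU(),
            nn.Linear(512, output_dim),  # reconstruct the structural encoding
        )

    def forward(self, z):
        return self.layers(z)
```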
Bombarelli teaches predict compound properties using the multi-variables converted by the autoencoder, (Bombarelli disclosed in page 272-273 heading ‘Property prediction of molecules’ (right col.): “we extended the purely generative model to also predict property values from the latent representation. We trained a multi-layer perceptron jointly with the autoencoder to predict properties from the latent representation of each molecule. … With joint training for property prediction, the distribution of molecules in the latent space is organized by property values. Figure 3 shows the mapping of true property values to the latent space representation of molecules, … The latent space generated by autoencoders jointly trained with the property prediction task shows in the distribution of molecules a gradient by property values;” Further, Fig. 1 (a) in page 269 disclosed “A diagram of the proposed autoencoder for molecular design, including the joint property prediction model. Starting from a discrete molecular representation, such as a SMILES string, the encoder network converts each molecule into a vector in the latent space, which is effectively a continuous molecular representation.”
It has been disclosed in page 268 under heading ‘Abstract’: “We report a method to convert discrete representations of molecules to and from a multidimensional continuous representation. This model allows us to generate new molecules for efficient exploration and optimization through open-ended spaces of chemical compounds.” Therefore, it is understood that chemical compound properties are being predicted in Bombarelli’s disclosure).
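Examiner's note (illustrative only): a minimal, hypothetical sketch of the jointly trained predictor described in the passage quoted above, i.e., a multilayer perceptron estimating a property value from the latent vector, trained together with the autoencoder under a combined reconstruction-plus-prediction loss. The architecture and the loss weighting are the Examiner's assumptions, not details taken from Bombarelli.

```python
# Hypothetical sketch: an MLP property predictor on the latent vector, with a
# joint loss combining reconstruction and property prediction.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PropertyPredictor(nn.Module):
    def __init__(self, latent_dim=196, n_properties=1):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(),
            nn.Linear(64, n_properties),
        )

    def forward(self, z):
        return self.mlp(z)

def joint_loss(x, x_recon, y_true, y_pred, weight=1.0):
    # The reconstruction term keeps the latent space faithful to the input
    # structures; the property term organizes the latent space by property values.
    return F.mse_loss(x_recon, x) + weight * F.mse_loss(y_pred, y_true)
```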
Bombarelli teaches receive structural information about structures of compounds having properties to be predicted, (Bombarelli disclosed in page 270 (right col.): “To enable molecular design, the chemical structures encoded in the continuous representation of the autoencoder need to be correlated with the target properties that we are seeking to optimize. Therefore, we added a model to the autoencoder that predicts the properties from the latent space representation. This autoencoder was then trained jointly on the reconstruction task and a property prediction task; an additional multi-layer perceptron (MLP) was used to predict the property from the latent vector of the encoded molecule.” It has been disclosed in page 268 under heading ‘Abstract’: “We report a method to convert discrete representations of molecules to and from a multidimensional continuous representation. This model allows us to generate new molecules for efficient exploration and optimization through open-ended spaces of chemical compounds.”).
Bombarelli teaches input the structural information about the structures of the compounds having the properties to be predicted to the autoencoder and convert the structural information to multi-variables in the plurality of intermediate layers and use the multi-variables generated in the plurality of intermediate layers as explanatory variables, (Examiner construes the claim element “explanatory variables”, according to its conventional meaning in the art, as “numerical or categorical descriptors that define molecular features”.
Bombarelli disclosed in page 273 heading ‘Optimization of Molecules via Properties’: “We next optimized molecules in the latent space from the autoencoder which was jointly trained for property prediction. In order to create a smoother landscape to perform optimizations, we used a Gaussian process model to model the property predictor model. Gaussian processes can be used to predict any smooth continuous function and are extremely lightweight, requiring only a few minutes to train on a dataset of a few thousand molecules. The Gaussian process was trained to predict target properties for molecules given the latent space representation of the molecules as an input.” This disclosure teaches the limitation “input the structural information about the structures of the compounds having the properties to be predicted to the autoencoder”.
In page 270 (at right col.): “To enable molecular design, the chemical structures encoded in the continuous representation of the autoencoder need to be correlated with the target properties that we are seeking to optimize. Therefore, we added a model to the autoencoder that predicts the properties from the latent space representation. This autoencoder was then trained jointly on the reconstruction task and a property prediction task; an additional multilayer perceptron (MLP) was used to predict the property from the latent vector of the encoded molecule. To propose promising new candidate molecules, we can start from the latent vector of an encoded molecule and then move in the direction most likely to improve the desired attribute. The resulting new candidate vectors can then be decoded into corresponding molecules (Figure 1b).”
The disclosure above “we added a model to the autoencoder that predicts the properties from the latent space representation, this autoencoder was then trained jointly on the reconstruction task and a property prediction task; an additional multilayer perceptron (MLP) was used to predict the property from the latent vector of the encoded molecule” teaches the claim limitation “convert the structural information to multi-variables in the plurality of intermediate layers and use the multi-variables generated in the plurality of intermediate layers as explanatory variables” (since “latent space” corresponds to the claim aspect “an autoencoder having a plurality of intermediate layers”)).
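Examiner's note (illustrative only): the Gaussian-process step quoted above can be sketched as follows, with latent vectors serving as explanatory variables and the target property as the objective variable. The data are random stand-ins and the kernel choice is the Examiner's assumption.

```python
# Hypothetical sketch: a Gaussian process trained to predict a target property
# from latent-space representations (random stand-in data; assumed kernel).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
Z_train = rng.normal(size=(500, 196))   # latent vectors (explanatory variables)
y_train = rng.normal(size=500)          # property values (objective variable)

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(Z_train, y_train)
y_mean, y_std = gp.predict(rng.normal(size=(5, 196)), return_std=True)
```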
Bombarelli teaches input the explanatory variables to the prediction model and predict properties that are the objective variables, (Bombarelli disclosed in page 272 heading ‘Property Prediction of Molecules’: “we extended the purely generative model to also predict property values from the latent representation. We trained a multilayer perceptron jointly with the autoencoder to predict properties from the latent representation of each molecule. With joint training for property prediction, the distribution of molecules in the latent space is organized by property values. Figure 3 shows the mapping of property values to the latent space representation of molecules, compressed into two dimensions using PCA. The latent space generated by autoencoders jointly trained with the property prediction task shows in the distribution of molecules a gradient by property values; molecules with high values are located in one region, and molecules with low values are in another. … Our VAE model shows that property prediction performance for electronic properties (i.e., orbital energies) are similar to graph convolutions for some properties; ...”.
The disclosure “We trained a multilayer perceptron jointly with the autoencoder to predict properties from the latent representation of each molecule” corresponds to the claim limitation “input the explanatory variables to the prediction model.” Further, the disclosures “Figure 3 shows the mapping of property values to the latent space representation of molecules; our VAE model shows that property prediction performance for electronic properties (i.e., orbital energies) are similar to graph convolutions for some properties” correspond to the claim limitation “predict properties that are the objective variables”).
and Bombarelli teaches set compound properties corresponding to training data as objective variables and train the prediction model using the explanatory variables and the objective variables. (Bombarelli disclosed in page 274 heading ‘Optimization of molecules via properties’ (4th para): “Figure 4b) shows the path of one optimization from the starting molecule to the final molecule in the two-dimensional PCA representation, the final molecule ending up in the region of high objective value.” It has been disclosed in page 273 Figure 4: “Optimization results for the jointly trained autoencoder using 5× QED−SAS as the objective function. Part (a) shows a violin plot which compares the distribution of sampled molecules from normal random sampling, SMILES optimization via a common chemical transformation with a genetic algorithm, and from optimization on the trained gaussian process model with varying levels of accuracy/training points. Part (b) shows the starting and ending points of several optimization runs on a PCA plot of latent space colored by the objective function.”
It has been disclosed in page 269 left col. (last para): “We apply such generative models to chemical design, using a pair of deep networks trained as an autoencoder to convert molecules represented as SMILES strings into a continuous vector representation. In principle, this method of converting from a molecular representation to a continuous vector representation could be applied to any molecular representation, including chemical fingerprints, …”).
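Examiner's note (illustrative only): the claimed training flow, in which database property values are set as objective variables and a prediction model is trained on latent explanatory variables, can be sketched as below. The encode() helper is a hypothetical stand-in for a trained encoder, and the record values are fabricated placeholders.

```python
# Hypothetical end-to-end sketch: structural information -> latent
# multi-variables (explanatory variables); database property values ->
# objective variables; fit a prediction model on the pair.
import numpy as np
from sklearn.neural_network import MLPRegressor

def encode(structures):
    # Stand-in for the trained encoder of the autoencoder: one 196-dimensional
    # latent vector per compound (random values, not a real encoder).
    rng = np.random.default_rng(0)
    return rng.normal(size=(len(structures), 196))

# Records of the designated case database: (structural information, property).
records = [(f"structure_{i}", 0.1 * i) for i in range(100)]
X = encode([s for s, _ in records])      # explanatory variables
y = np.array([p for _, p in records])    # objective variables

model = MLPRegressor(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
model.fit(X, y)                          # train the prediction model
```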
YUTA and Bombarelli are analogous art because both relate to compound property prediction apparatus. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of YUTA and Bombarelli before him or her, to modify the storing of compound structures in association with compound properties of YUTA to include the predicting and inputting of structural information related to compound properties of Bombarelli. The suggestion/motivation for doing so is provided by Bombarelli: “We propose a new family of methods for exploring chemical space based on continuous encodings of molecules. These methods eliminate the need to hand-build libraries of compounds and allow a new type of directed gradient-based search through chemical space. We observed high fidelity in reconstruction, the ability to capture characteristic features of a molecular training set into the generative model, good predictive power when training jointly an autoencoder and a predictor, and the ability to perform model-based optimization of molecules in the smoothed latent space.” (Bombarelli, page 12, heading ‘Conclusion’).
Regarding claim 2, YUTA and Bombarelli teach the compound property prediction device according to claim 1; however, YUTA doesn’t explicitly teach the limitation “the autoencoder is a model having a property of enabling the structural information to be restored from the multi-variables after converting the structural information to the multi-variables”.
wherein Bombarelli teaches the autoencoder is a model having a property of enabling the structural information to be restored from the multi-variables after converting the structural information to the multi-variables. (Bombarelli disclosed in page 270 (at right col.): “To enable molecular design, the chemical structures encoded in the continuous representation of the autoencoder need to be correlated with the target properties that we are seeking to optimize. Therefore, we added a model to the autoencoder that predicts the properties from the latent space representation. This autoencoder was then trained jointly on the reconstruction task and a property prediction task; an additional multi-layer perceptron (MLP) was used to predict the property from the latent vector of the encoded molecule. To propose promising new candidate molecules, we can start from the latent vector of an encoded molecule … The resulting new candidate vectors can then be decoded into corresponding molecules. (Figure 1b)”).
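Examiner's note (illustrative only): using the hypothetical Encoder and Decoder sketched above for claim 1, the restorability recited in claim 2 amounts to a round trip through the latent multi-variables:

```python
# Hypothetical round trip: structural encoding -> latent multi-variables ->
# restored structural encoding (reuses the Encoder/Decoder sketched above).
import torch

enc, dec = Encoder(), Decoder()
x = torch.rand(1, 120 * 35)        # stand-in for an encoded structure
z = enc(x)                         # convert to latent multi-variables
x_restored = dec(z)                # restore from the multi-variables
print(z.shape, x_restored.shape)   # torch.Size([1, 196]) torch.Size([1, 4200])
```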
YUTA and Bombarelli are analogous art because both relate to compound property prediction apparatus. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of YUTA and Bombarelli before him or her, to modify the storing of compound structures in association with compound properties of YUTA to include the predicting and inputting of structural information related to compound properties of Bombarelli. The suggestion/motivation for doing so is provided by Bombarelli: “We propose a new family of methods for exploring chemical space based on continuous encodings of molecules. These methods eliminate the need to hand-build libraries of compounds and allow a new type of directed gradient-based search through chemical space. We observed high fidelity in reconstruction, the ability to capture characteristic features of a molecular training set into the generative model, good predictive power when training jointly an autoencoder and a predictor, and the ability to perform model-based optimization of molecules in the smoothed latent space.” (Bombarelli, page 12, heading ‘Conclusion’).
Claims 5, 6, 8-12, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over YUTA and Bombarelli, and further in view of the research paper “PubChem Substance and Compound databases” by Sunghwan Kim et al. (hereinafter Kim; published online September 2015).
Regarding claim 5, YUTA and Bombarelli teach the compound property prediction device according to claim 1; however, YUTA and Bombarelli do not explicitly teach the claim limitation “the processor is configured to search the case database with a keyword”.
wherein Kim teaches the processor is configured to search the case database with a keyword. (Kim disclosed in page D1212 heading ‘Summary’ (left col.): “In the present paper, we described the PubChem Substance and Compound databases. … In addition to text-based search through Entrez, PubChem also enables users to perform various nontextual searches (such as identity search, molecular formula search, substructure/superstructure search, 2-D and 3-D similarity searches) using the Chemical Structure Search tool.”).
YUTA, Bombarelli and Kim are analogous art because all relate to databases storing structural information about material structures in association with material properties. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of YUTA, Bombarelli and Kim before him or her, to modify the database storing material structures in association with material properties of YUTA and Bombarelli to include searching the database using a keyword, or performing a text-based search of the database, as taught by Kim. The suggestion/motivation for doing so is provided by Kim: “In the present paper, we described the PubChem Substance and Compound databases. In addition to text-based search through Entrez, PubChem also enables users to perform various nontextual searches (such as identity search, molecular formula search, substructure/superstructure search, 2-D and 3-D similarity searches) using the Chemical Structure Search tool. PubChem is committed to continue serving as a key chemical information resource not only to the biomedical research community but also to the scientific community as a whole.” (Kim, page D1212, heading ‘SUMMARY’, left col.).
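Examiner's note (illustrative only): a keyword search over case databases of the kind recited in claim 5 can be sketched as below; the database names and text are fabricated placeholders, not content from Kim.

```python
# Hypothetical sketch: keyword search over a case-by-case compound database,
# where each case database stores associated text information.
case_by_case_db = {
    "polymer_cases": {"text": "polymer glass transition temperatures", "records": []},
    "drug_cases": {"text": "drug-like molecules with logP and QED values", "records": []},
}

def search_case_databases(keyword):
    keyword = keyword.lower()
    return [name for name, db in case_by_case_db.items()
            if keyword in db["text"].lower()]

print(search_case_databases("logP"))   # ['drug_cases']
```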
Regarding claim 6, YUTA teaches a compound property prediction method, executing: a first step of preparing a first database including a plurality of records recording structural information about compound structures; (YUTA disclosed in page 2 para [0020-0021]: “an object of the invention is to provide a compound property prediction apparatus and method that can achieve high prediction rates by generating a prediction model accurately reflecting information concerning each particular compound whose properties are to be predicted; … there is provided a compound property prediction apparatus including: a training sample library in which a parameter value relating to a chemical structure and a value for a prediction item are preregistered for each individual one of a plurality of training samples;”).
However, YUTA doesn’t explicitly teach the limitations “a second step of extracting structural information from the first database prepared in the first step; a third step of training an autoencoder, having a plurality of intermediate layers, for converting structural information to multi-variables using the structural information extracted in the second step; a fourth step of preparing a second database including a plurality of records recording structural information about compound structures in association with compound properties about properties of compounds; a fifth step of extracting structural information from the second database prepared in the fourth step; a sixth step of converting the structural information extracted in the fifth step to multi-variables using the autoencoder; a seventh step of obtaining explanatory variables on the basis of the multi-variables converted generated in the plurality of intermediate layers of the autoencoder in the sixth step and obtaining objective variables on the basis of compound properties extracted from the second database; and setting compound properties corresponding to training data as objective variables and train a prediction model using the explanatory variables and the objective variables; and inputting the explanatory variables to the prediction model and predict properties that are the objective variables.”
Bombarelli teaches a second step of extracting structural information from the first database prepared in the first step; (Bombarelli disclosed in page 272 heading ‘Property prediction of molecules’: “we extended the purely generative model to also predict property values from the latent representation. We trained a multi-layer perceptron jointly with the autoencoder to predict properties from the latent representation of each molecule. … With joint training for property prediction, the distribution of molecules in the latent space is organized by property values. Figure 3 shows the mapping of true property values to the latent space representation of molecules, … The latent space generated by autoencoders jointly trained with the property prediction task shows in the distribution of molecules a gradient by property values;”).
Bombarelli teaches a third step of training an autoencoder, having a plurality of intermediate layers, for converting structural information to multi-variables using the structural information extracted in the second step; (Bombarelli disclosed in page 268 heading ‘Abstract’: “A deep neural network was trained on hundreds of thousands of existing chemical structures to construct three coupled functions: an encoder, a decoder, and a predictor. The encoder converts the discrete representation of a molecule into a real-valued continuous vector, and the decoder converts these continuous vectors back to discrete molecular representations. The predictor estimates chemical properties from the latent continuous vector representation of the molecule. Continuous representations of molecules allow us to automatically generate novel chemical structures by performing simple operations in the latent space, such as decoding random vectors, perturbing known chemical structures, or interpolating between molecules.” In page 269 left col. (last para): “We apply such generative models to chemical design, using a pair of deep networks trained as an autoencoder to convert molecules represented as SMILES strings into a continuous vector representation. In principle, this method of converting from a molecular representation to a continuous vector representation could be applied to any molecular representation, … We chose to use SMILES representation because it can be readily converted into a molecule. … We trained the autoencoder jointly on a property prediction task; we added a multilayer perceptron that predicts property values from the continuous representation generated by the encoder …”.
The disclosure above “latent space” corresponds to the claim aspect “an autoencoder having a plurality of intermediate layers”. Moreover, the figure in the Abstract of Bombarelli’s disclosure is similar to Fig. 7 of the present application’s drawings).
Bombarelli teaches a fourth step of preparing a second database including a plurality of records recording structural information about compound structures in association with compound properties about properties of compounds; (Bombarelli disclosed in page 270-271: “Two autoencoder system were trained; one with 108,000 molecules from the QM9 dataset of molecules with fewer than 9 heavy atoms and another with 250,000 drug-like commercially available molecules extracted at random from the ZINC database. … The latent space representations for the QM9 and ZINC datasets had 156 dimensions and 196 dimensions respectively.”).
Bombarelli teaches a fifth step of extracting structural information from the second database prepared in the fourth step; (Bombarelli disclosed in page 272: “Table 1 compares the distribution of chemical properties in the training sets … with molecules decoded from sampling random points in the latent space of an VAE trained only for the reconstruction task. We compare the water-octanol partition coefficient (logP), the synthetic accessibility score (SAS), the natural-product score (NP) and drug-likeness (QED). … The molecules generated using the VAE show chemical properties that are more similar to the original dataset than the set of molecules generated by the genetic algorithm. The two rightmost columns in Table 1 report the fraction of molecules that belong to the 17 million drug-like compounds from which the training set was selected and how often they can be found in a library of existing organic compounds. … In the case of the QM9 dataset, since the combinatorial space is smaller, the training set has more coverage and the VAE generates essentially the same population statistics as the training data.”).
Bombarelli teaches a sixth step of converting the structural information extracted in the fifth step to multi-variables using the autoencoder; (Bombarelli disclosed in page 270 (at right col.): “To enable molecular design, the chemical structures encoded in the continuous representation of the autoencoder need to be correlated with the target properties that we are seeking to optimize. Therefore, we added a model to the autoencoder that predicts the properties from the latent space representation. This autoencoder was then trained jointly on the reconstruction task and a property prediction task; an additional multilayer perceptron (MLP) was used to predict the property from the latent vector of the encoded molecule. To propose promising new candidate molecules, we can start from the latent vector of an encoded molecule and then move in the direction most likely to improve the desired attribute. The resulting new candidate vectors can then be decoded into corresponding molecules (Figure 1b).”).
Bombarelli teaches a seventh step of obtaining explanatory variables on the basis of the multi-variables converted generated in the plurality of intermediate layers of the autoencoder in the sixth step and obtaining objective variables on the basis of compound properties extracted from the second database; (Bombarelli disclosed in page 268 heading ‘Abstract’: “A deep neural network was trained on hundreds of thousands of existing chemical structures to construct three coupled functions: an encoder, a decoder, and a predictor. The encoder converts the discrete representation of a molecule into a real-valued continuous vector, and the decoder converts these continuous vectors back to discrete molecular representations. The predictor estimates chemical properties from the latent continuous vector representation of the molecule. Continuous representations of molecules allow us to automatically generate novel chemical structures by performing simple operations in the latent space, such as decoding random vectors, perturbing known chemical structures, or interpolating between molecules.” In page 270 (right col.): “To enable molecular design, the chemical structures encoded in the continuous representation of the autoencoder need to be correlated with the target properties that we are seeking to optimize. Therefore, we added a model to the autoencoder that predicts the properties from the latent space representation. This autoencoder was then trained jointly on the reconstruction task and a property prediction task; an additional multilayer perceptron (MLP) was used to predict the property from the latent vector of the encoded molecule. To propose promising new candidate molecules, we can start from the latent vector of an encoded molecule and then move in the direction most likely to improve the desired attribute. The resulting new candidate vectors can then be decoded into corresponding molecules (Figure 1b).”
Further, in page 272-273 heading ‘Property Prediction of Molecules’: “we extended the purely generative model to also predict property values from the latent representation. We trained a multilayer perceptron jointly with the autoencoder to predict properties from the latent representation of each molecule. With joint training for property prediction, the distribution of molecules in the latent space is organized by property values. Figure 3 shows the mapping of property values to the latent space representation of molecules, compressed into two dimensions using PCA. The latent space generated by autoencoders jointly trained with the property prediction task shows in the distribution of molecules a gradient by property values; molecules with high values are located in one region, and molecules with low values are in another. … Our VAE model shows that property prediction performance for electronic properties (i.e., orbital energies) are similar to graph convolutions for some properties; ...”).
and Bombarelli teaches setting compound properties corresponding to training data as objective variables and train a prediction model using the explanatory variables and the objective variables; (Bombarelli disclosed in page 274 heading ‘Optimization of molecules via properties’ (4th para): “Figure 4b) shows the path of one optimization from the starting molecule to the final molecule in the two-dimensional PCA representation, the final molecule ending up in the region of high objective value.” It has been disclosed in page 273 Figure 4: “Optimization results for the jointly trained autoencoder using 5× QED−SAS as the objective function. Part (a) shows a violin plot which compares the distribution of sampled molecules from normal random sampling, SMILES optimization via a common chemical transformation with a genetic algorithm, and from optimization on the trained gaussian process model with varying levels of accuracy/training points. Part (b) shows the starting and ending points of several optimization runs on a PCA plot of latent space colored by the objective function.”
It has been disclosed in page 269 left col. (last para): “We apply such generative models to chemical design, using a pair of deep networks trained as an autoencoder to convert molecules represented as SMILES strings into a continuous vector representation. In principle, this method of converting from a molecular representation to a continuous vector representation could be applied to any molecular representation, including chemical fingerprints, …”).
and Bombarelli teaches inputting the explanatory variables to the prediction model and predict properties that are the objective variables. (Bombarelli disclosed in page 272 heading ‘Property Prediction of Molecules’: “we extended the purely generative model to also predict property values from the latent representation. We trained a multilayer perceptron jointly with the autoencoder to predict properties from the latent representation of each molecule. With joint training for property prediction, the distribution of molecules in the latent space is organized by property values. Figure 3 shows the mapping of property values to the latent space representation of molecules, compressed into two dimensions using PCA. The latent space generated by autoencoders jointly trained with the property prediction task shows in the distribution of molecules a gradient by property values; molecules with high values are located in one region, and molecules with low values are in another. … Our VAE model shows that property prediction performance for electronic properties (i.e., orbital energies) are similar to graph convolutions for some properties; ...”.
The disclosure “We trained a multilayer perceptron jointly with the autoencoder to predict properties from the latent representation of each molecule” corresponds to the claim limitation “inputting the explanatory variables to the prediction model.” Further, the disclosures “Figure 3 shows the mapping of property values to the latent space representation of molecules; our VAE model shows that property prediction performance for electronic properties (i.e., orbital energies) are similar to graph convolutions for some properties” correspond to the claim limitation “predict properties that are the objective variables”).
YUTA and Bombarelli are analogous art because both relate to compound property prediction apparatus. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of YUTA and Bombarelli before him or her, to modify the storing of compound structures in association with compound properties of YUTA to include the predicting and inputting of structural information related to compound properties of Bombarelli. The suggestion/motivation for doing so is provided by Bombarelli: “We propose a new family of methods for exploring chemical space based on continuous encodings of molecules. These methods eliminate the need to hand-build libraries of compounds and allow a new type of directed gradient-based search through chemical space. We observed high fidelity in reconstruction, the ability to capture characteristic features of a molecular training set into the generative model, good predictive power when training jointly an autoencoder and a predictor, and the ability to perform model-based optimization of molecules in the smoothed latent space.” (Bombarelli, page 12, heading ‘Conclusion’).
However, YUTA and Bombarelli do not explicitly teach the limitation “at least one case database is selected as the first database from a case-by-case database storing a plurality of case databases;”
wherein Kim teaches at least one case database is selected as the first database from a case-by-case database storing a plurality of case databases; (Kim disclosed in page D1206 heading ‘Web interfaces for textual search’ (right col.): “Entrez is the search and retrieval system used for PubChem’s three primary databases and other major NCBI databases … By default, if a specific database is not selected in the search menu, Entrez searches all Entrez databases available and lists the number of records in each database that are returned for this ‘global query’. Simply by selecting one of the three PubChem database from the global query result page, one can see the query result specific to that database. If an Entrez search returns multiple records, they are displayed in a document summary (DocSum) report.”).
YUTA, Bombarelli and Kim are analogous art because all relate to databases storing structural information about material structures in association with material properties. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of YUTA, Bombarelli and Kim before him or her, to modify the database storing material structures in association with material properties of YUTA and Bombarelli to include searching the database using a keyword, or performing a text-based search of the database, as taught by Kim. The suggestion/motivation for doing so is provided by Kim: “In the present paper, we described the PubChem Substance and Compound databases. In addition to text-based search through Entrez, PubChem also enables users to perform various nontextual searches (such as identity search, molecular formula search, substructure/superstructure search, 2-D and 3-D similarity searches) using the Chemical Structure Search tool. PubChem is committed to continue serving as a key chemical information resource not only to the biomedical research community but also to the scientific community as a whole.” (Kim, page D1212, heading ‘SUMMARY’, left col.).
Regarding claim 8, YUTA, Bombarelli and Kim teach the compound property prediction method according to claim 6, however, YUTA and Bombarelli do not explicitly teach the limitations “in the case-by-case compound database, text information is stored in association with the case database, and in the first step, a user searches the text information and selects at least one case database”.
wherein Kim teaches in the case-by-case compound database, text information is stored in association with the case database, and in the first step, a user searches the text information and selects at least one case database. (Kim disclosed in page D1206 heading ‘Web interfaces for textual search’ (right col.): “Entrez is the search and retrieval system used for PubChem’s three primary databases and other major NCBI databases … One can search the PubChem databases through Entrez by initiating a search from the PubChem home page …, which also provides launch points to various PubChem services, tools, help documents and more … Entrez searches all Entrez databases available and lists the number of records in each database that are returned for this ‘global query’. Simply by selecting one of the three PubChem database from the global query result page, one can see the query result specific to that database.” It has been disclosed in page D1205 heading ‘Data Organization’ (left col. 2nd para): “PubChem extracts unique chemical structures from the Substance database through a process called ‘standardization’ … and stores them in the Compound database … This allows substance records from different data sources about the same molecule to be aggregated through a common ‘compound’ record in the Compound database.”).
YUTA, Bombarelli and Kim are analogous art because all relate to databases storing structural information about material structures in association with material properties. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of YUTA, Bombarelli and Kim before him or her, to modify the database storing material structures in association with material properties of YUTA and Bombarelli to include searching the database using a keyword, or performing a text-based search of the database, as taught by Kim. The suggestion/motivation for doing so is provided by Kim: “In the present paper, we described the PubChem Substance and Compound databases. In addition to text-based search through Entrez, PubChem also enables users to perform various nontextual searches (such as identity search, molecular formula search, substructure/superstructure search, 2-D and 3-D similarity searches) using the Chemical Structure Search tool. PubChem is committed to continue serving as a key chemical information resource not only to the biomedical research community but also to the scientific community as a whole.” (Kim, page D1212, heading ‘SUMMARY’, left col.).
Regarding claim 9, YUTA, Bombarelli and Kim teach the compound property prediction method according to claim 6, wherein YUTA teaches in the first step, the case database includes a plurality of records recording structural information about compound structures in association with compound properties about properties of compound, (YUTA disclosed in page 2 para [0020-0021]: “an object of the invention is to provide a compound property prediction apparatus and method that can achieve high prediction rates by generating a prediction model accurately reflecting information concerning each particular compound whose properties are to be predicted; … there is provided a compound property prediction apparatus including: a training sample library in which a parameter value relating to a chemical structure and a value for a prediction item are preregistered for each individual one of a plurality of training samples;”).
However, YUTA and Bombarelli do not explicitly teach the limitation “at least one case database is selected from the case-by-case compound database as the second database”.
wherein Kim teaches in the fourth step, at least one case database is selected from the case-by-case compound database as the second database. (Kim disclosed in page D1206-D1207 heading ‘Web interfaces for textual search’: “Entrez is the search and retrieval system used for PubChem’s three primary databases … If an Entrez search returns multiple records, they are displayed in a document summary (DocSum) report. Figure 3 shows an example of the DocSum page from an Entrez search in the PubChem Compound database. The DocSum page for a search in the Substance database is very similar to Figure 3 in layout and format. ... The DocSum page contains controls to change the display type, to sort the results by various means, or to export the page to a file or printer. In addition, the icons and links on the right column of the DocSum page allow users to perform further analysis on the query result, to download the corresponding records, to refine or modify the search, to obtain associated records in other databases and so on.”).
YUTA, Bombarelli and Kim are analogous art because all relate to databases storing structural information about material structures in association with material properties. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of YUTA, Bombarelli and Kim before him or her, to modify the database storing material structures in association with material properties of YUTA and Bombarelli to include searching the database using a keyword, or performing a text-based search of the database, as taught by Kim. The suggestion/motivation for doing so is provided by Kim: “In the present paper, we described the PubChem Substance and Compound databases. In addition to text-based search through Entrez, PubChem also enables users to perform various nontextual searches (such as identity search, molecular formula search, substructure/superstructure search, 2-D and 3-D similarity searches) using the Chemical Structure Search tool. PubChem is committed to continue serving as a key chemical information resource not only to the biomedical research community but also to the scientific community as a whole.” (Kim, page D1212, heading ‘SUMMARY’, left col.).
Regarding claim 10, YUTA, Bombarelli and Kim teach the compound property prediction method according to claim 9, however, YUTA doesn’t explicitly teach the limitation “the compound properties included in the records of the first database and the compound properties included in the records of the second database are compound properties having different definitions”.
wherein Bombarelli teaches the compound properties included in the records of the first database and the compound properties included in the records of the second database are compound properties having different definitions. (Bombarelli disclosed in page 270-271: “Two autoencoder system were trained; one with 108,000 molecules from the QM9 dataset of molecules with fewer than 9 heavy atoms and another with 250,000 drug-like commercially available molecules extracted at random from the ZINC database. … The latent space representations for the QM9 and ZINC datasets had 156 dimensions and 196 dimensions respectively.” The disclosures “ZINC database” and “QM9 dataset” correspond to the claimed first and second databases, respectively. The QM9 dataset records material properties of 108,000 molecules with fewer than 9 heavy atoms, while the ZINC database records properties having different definitions, e.g., for 250,000 drug-like commercially available molecules extracted at random from the ZINC database).
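Examiner's note (illustrative only): the QM9/ZINC contrast mapped above can be sketched as two record stores whose property fields are defined differently; the field names and values are fabricated placeholders.

```python
# Hypothetical sketch: first and second databases whose records associate
# structures with differently defined compound properties.
first_db = [   # e.g., electronic properties (cf. orbital energies for QM9)
    {"structure": "C", "homo_eV": -10.6},
]
second_db = [  # e.g., drug-likeness scores (cf. drug-like ZINC molecules)
    {"structure": "CCO", "qed": 0.41},
]
# Because the property fields differ in definition, separate objective
# variables are extracted from each database.
first_y = [r["homo_eV"] for r in first_db]
second_y = [r["qed"] for r in second_db]
```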
YUTA and Bombarelli are analogous art because both relate to compound property prediction apparatus. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of YUTA and Bombarelli before him or her, to modify the storing of compound structures in association with compound properties of YUTA to include the predicting and inputting of structural information related to compound properties of Bombarelli. The suggestion/motivation for doing so is provided by Bombarelli: “We propose a new family of methods for exploring chemical space based on continuous encodings of molecules. These methods eliminate the need to hand-build libraries of compounds and allow a new type of directed gradient-based search through chemical space. We observed high fidelity in reconstruction, the ability to capture characteristic features of a molecular training set into the generative model, good predictive power when training jointly an autoencoder and a predictor, and the ability to perform model-based optimization of molecules in the smoothed latent space.” (Bombarelli, page 12, heading ‘Conclusion’).
Regarding claim 11, YUTA, Bombarelli and Kim teach the compound property prediction method according to claim 6. Claim 11 incorporates the rejection of claim 2 because it recites substantially similar claim language; therefore, claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over YUTA, Bombarelli and Kim, as discussed above, for a substantially similar rationale.
Regarding claim 12, YUTA, Bombarelli and Kim teach the compound property prediction method according to claim 6, however, YUTA doesn’t explicitly teach the limitation “a ninth step of preparing structural information about compound structures having properties to be predicted;”
further Bombarelli teaches a ninth step of preparing structural information about compound structures having properties to be predicted; (Bombarelli disclosed in page 272-273: “we extended the purely generative model to also predict property values from the latent representation. We trained a multi-layer perceptron jointly with the autoencoder to predict properties from the latent representation of each molecule. … With joint training for property prediction, the distribution of molecules in the latent space is organized by property values. Figure 3 shows the mapping of true property values to the latent space representation of molecules, … The latent space generated by autoencoders jointly trained with the property prediction task shows in the distribution of molecules a gradient by property values;”).
Bombarelli teaches a tenth step of converting the structural information prepared in the ninth step to multi-variables using the autoencoder; (Bombarelli disclosed in page 270 (right col.): “To enable molecular design, the chemical structures encoded in the continuous representation of the autoencoder need to be correlated with the target properties that we are seeking to optimize. Therefore, we added a model to the autoencoder that predicts the properties from the latent space representation. This autoencoder was then trained jointly on the reconstruction task and a property prediction task; an additional multilayer perceptron (MLP) was used to predict the property from the latent vector of the encoded molecule. To propose promising new candidate molecules, we can start from the latent vector of an encoded molecule and then move in the direction most likely to improve the desired attribute. The resulting new candidate vectors can then be decoded into corresponding molecules (Figure 1b).”).
Bombarelli teaches an eleventh step of obtaining explanatory variables on the basis of the multi-variables converted in the tenth step; (Bombarelli disclosed in page 272-273 heading ‘Property Prediction of Molecules’: “we extended the purely generative model to also predict property values from the latent representation. We trained a multilayer perceptron jointly with the autoencoder to predict properties from the latent representation of each molecule. With joint training for property prediction, the distribution of molecules in the latent space is organized by property values. Figure 3 shows the mapping of property values to the latent space representation of molecules, compressed into two dimensions using PCA. The latent space generated by autoencoders jointly trained with the property prediction task shows in the distribution of molecules a gradient by property values; molecules with high values are located in one region, and molecules with low values are in another. … Our VAE model shows that property prediction performance for electronic properties (i.e., orbital energies) are similar to graph convolutions for some properties; ...”).
and Bombarelli teaches a twelfth step of assuming compound properties that are the objective variables by applying the explanatory variables obtained in the eleventh step to the prediction model. (Bombarelli disclosed in page 272-273 heading ‘Property Prediction of Molecules’: “we extended the purely generative model to also predict property values from the latent representation. We trained a multilayer perceptron jointly with the autoencoder to predict properties from the latent representation of each molecule. With joint training for property prediction, the distribution of molecules in the latent space is organized by property values. Figure 3 shows the mapping of property values to the latent space representation of molecules, compressed into two dimensions using PCA. The latent space generated by autoencoders jointly trained with the property prediction task shows in the distribution of molecules a gradient by property values; molecules with high values are located in one region, and molecules with low values are in another. … Our VAE model shows that property prediction performance for electronic properties (i.e., orbital energies) are similar to graph convolutions for some properties; ...”.
The disclosure “We trained a multilayer perceptron jointly with the autoencoder to predict properties from the latent representation of each molecule” corresponds to claim limitation “explanatory variables obtained in the eleventh step to the prediction model.” Further, the disclosures “Figure 3 shows the mapping of property values to the latent space representation of molecules; our VAE model shows that property prediction performance for electronic properties (i.e., orbital energies) are similar to graph convolutions for some properties” correspond to claim limitation “compound properties that are the objective variables by applying the explanatory variables”).
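Examiner further illustrates, continuing the hypothetical sketch above, how the claimed ninth through twelfth steps could map onto such a model; the random tensor stands in for prepared structural information, and the final line illustrates the latent-space move toward improved property values described at page 270 of Bombarelli.

```python
# Hypothetical walk-through of the claimed steps (illustrative only).
x = torch.rand(1, INPUT_DIM)           # ninth step: prepared structural information
model = AutoencoderWithPredictor()
_, z, _ = model(x)                     # tenth step: multi-variables via the autoencoder
z = z.detach().requires_grad_(True)    # eleventh step: latent explanatory variables
prop = model.predictor(z)              # twelfth step: assumed compound property
prop.backward()                        # gradient of predicted property w.r.t. z
z_improved = z + 0.1 * z.grad          # move latent vector toward a better property
```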
YUTA and Bombarelli are analogous art because both are related to compound property prediction apparatuses. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of YUTA and Bombarelli before him or her, to modify the storing of compound structures related to compound properties of YUTA to include the predicting and inputting of structural information related to compound properties of Bombarelli. The suggestion/motivation for doing so would have been obvious from Bombarelli because “We propose a new family of methods for exploring chemical space based on continuous encodings of molecules. These methods eliminate the need to hand-build libraries of compounds and allow a new type of directed gradient-based search through chemical space. We observed high fidelity in reconstruction, the ability to capture characteristic features of a molecular training set into the generative model, good predictive power when training jointly an autoencoder and a predictor, and the ability to perform model-based optimization of molecules in the smoothed latent space.” (Bombarelli disclosed in page 12 heading ‘Conclusion’).
Regarding claim 14, YUTA, Bombarelli and Kim teach the compound property prediction method according to claim 6; however, YUTA does not explicitly teach the limitation “both of the first database and the second database include the plurality of records recording the structural information about the compound structures in association with the compound properties about the properties of the compound, and record data having different definitions or types with respect to the compound properties”.
wherein Bombarelli teaches both of the first database and the second database include the plurality of records recording the structural information about the compound structures in association with the compound properties about the properties of the compound, and record data having different definitions or types with respect to the compound properties. (Bombarelli disclosed in page 270-271: “Two autoencoder systems were trained; one with 108,000 molecules from the QM9 dataset of molecules with fewer than 9 heavy atoms and another with 250,000 drug-like commercially available molecules extracted at random from the ZINC database. … The latent space representations for the QM9 and ZINC datasets had 156 dimensions and 196 dimensions respectively.” The disclosures “ZINC database” and “QM9 dataset” correspond to the claim elements “first database” and “second database”, respectively. The QM9 dataset records material properties of 108,000 molecules with fewer than 9 heavy atoms, whereas the ZINC database records data having different definitions or types (e.g., 250,000 drug-like commercially available molecules extracted at random)).
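Examiner offers a minimal sketch of how two such databases could record structural information in association with compound properties under different definitions or types, consistent with the QM9/ZINC contrast above; all field names and values below are hypothetical and illustrative only.

```python
# Hypothetical record layouts; values are illustrative, not taken from the datasets.
qm9_record = {                 # "first database": quantum-chemical property definitions
    "smiles": "CCO",           # structural information
    "homo_eV": -7.1,           # orbital energy (float, eV)
    "lumo_eV": 0.9,
}
zinc_record = {                # "second database": drug-likeness property definitions
    "smiles": "CCO",           # same structure, differently defined properties
    "logP": -0.3,              # water-octanol partition coefficient (float)
    "drug_like": True,         # boolean type, unlike the floats recorded in QM9
}
```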
YUTA and Bombarelli are analogous art because both are related to compound property prediction apparatuses. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of YUTA and Bombarelli before him or her, to modify the storing of compound structures related to compound properties of YUTA to include the predicting and inputting of structural information related to compound properties of Bombarelli. The suggestion/motivation for doing so would have been obvious from Bombarelli because “We propose a new family of methods for exploring chemical space based on continuous encodings of molecules. These methods eliminate the need to hand-build libraries of compounds and allow a new type of directed gradient-based search through chemical space. We observed high fidelity in reconstruction, the ability to capture characteristic features of a molecular training set into the generative model, good predictive power when training jointly an autoencoder and a predictor, and the ability to perform model-based optimization of molecules in the smoothed latent space.” (Bombarelli disclosed in page 12 heading ‘Conclusion’).
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over YUTA, Bombarelli and Kim, and further in view of the conference paper “Replacement AutoEncoder: A Privacy-Preserving Algorithm for Sensory Data Analysis” by Richard G. Clegg et al. (hereinafter Clegg, published in 2018).
Regarding claim 13, YUTA, Bombarelli and Kim teach the compound property prediction method according to claim 6; however, YUTA, Bombarelli and Kim do not explicitly teach the limitation “at least one of the autoencoder and the prediction model is stored in a storage device and reused.”
wherein Clegg teaches at least one of the autoencoder and the prediction model is stored in a storage device and reused. (Examiner notes that the claim language includes two optional embodiments, a first embodiment “the autoencoder” or a second embodiment “the prediction model”. Because the claim limitation recites the term “at least one of”, only one of the two embodiments needs to be taught by the reference.
Clegg disclosed in page 175 section VIII (right col.): “We have introduced Replacement AutoEncoder: a feature learning algorithm which learns how to transform discriminative features that correspond to sensitive inferences, … Another direction for continuing this research could be using Long Short Term Memory (LSTMs) networks which are capable of learning long-term dependencies in data to capture the discriminative features of time-series.” On pages 168-169, section IV.A: “The performance of machine learning models is heavily dependent on the type of data representation and the robustness of extracted features on which they are applied. … the hidden layers of a multilayer neural network (starting with the raw input) can potentially learn more abstract features at higher layers of representations and reuse a subset of these features for each particular task.” Therefore, the disclosures teach the limitation “autoencoder is stored in a storage device and reused”).
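Examiner notes that the storing and reuse of a trained autoencoder, as claimed, may be illustrated with standard PyTorch persistence calls, assuming the hypothetical AutoencoderWithPredictor sketch given earlier; the file name is likewise hypothetical.

```python
import torch

model = AutoencoderWithPredictor()
# ... after training ...
torch.save(model.state_dict(), "autoencoder.pt")       # store in a storage device

reused = AutoencoderWithPredictor()
reused.load_state_dict(torch.load("autoencoder.pt"))   # reload the stored model
reused.eval()                                          # reuse for further inference
```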
YUTA, Bombarelli, Kim and Clegg are analogous art because all are related to machine learning methods that improve prediction on a training sample set. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of YUTA, Bombarelli, Kim and Clegg before him or her, to modify the storing of material structures related to material properties of YUTA and Bombarelli to include storing a machine learning model (e.g., an autoencoder) in a storage device for reuse or recovery in further use, as taught by Clegg. The suggestion/motivation for doing so would have been obvious from Clegg because “Autoencoders are neural networks trained to reconstruct their original input, which can be considered as a form of feature extraction algorithm. We consider existing approaches for preserving inference privacy in time-series data analysis, we propose a feature-based replacement method to eliminate sensitive information in time-series by transforming them to non-sensitive data (see Figure 1). By applying our method, data utility will be unaffected for specific applications and cloud apps can accurately infer the desired information.” (Clegg disclosed in page 165-166 section I).
Conclusion
8. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. The article “Machine Learning and Statistical Analysis for Materials Science: Stability and Transferability of Fingerprint Descriptors and Chemical Insights” by Praveen Pankajakshan et al. presents a recipe of statistical analyses and machine learning methodology developed to help a material scientist start from a raw dataset, build predictive models, and uncover the governing mechanism. The entire recipe is applied to a published database containing 298 alloys, and the proposed machine learning method based on BOPGD provides significant advantages over conventionally used methods of descriptor selection such as artificial neural networks and LASSO-based methods. The article also describes a modification of the d-band model that includes the chemical effect of the work function and shows that the resulting predictive model gives the binding energy of CO to the catalyst fairly accurately. Because the scheme is particularly efficient in reducing a large set of descriptors to a minimal one, it is expected to be a versatile tool for obtaining chemical insights into complex phenomena and for developing predictive models for the design of materials.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NUPUR DEBNATH whose telephone number is (571)272-8161. The examiner can normally be reached M-F 8:00 am -4:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Renee D Chavez can be reached on (571)270-1104. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/NUPUR DEBNATH/Examiner, Art Unit 2186
/RENEE D CHAVEZ/Supervisory Patent Examiner, Art Unit 2186