DETAILED ACTION
Applicant’s response, filed 11/26/2025, has been fully considered. Rejections and/or objections not reiterated from previous Office Actions are hereby withdrawn. The following rejections and/or objections are either reiterated or newly applied. They constitute the complete set presently being applied to the instant application.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Claim Status
Claims 1, 3-8, and 10-11 are pending.
Claims 2, 9, and 12 are canceled.
Claims 1, 3-8, and 10-11 are rejected.
Priority
Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119 (a)-(d) to Republic of Korea App. No. 10-2021-0007219, filed 01/19/2021. Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Information Disclosure Statement
The information disclosure statement (IDS) filed on 12/15/2021 is in compliance with the provisions of 37 CFR 1.97 and has therefore been considered. A signed copy of the IDS document is included with this Office Action.
Drawings
The Drawings submitted 12/15/2021 are accepted.
Claim Interpretation
The outstanding objections from the previous Office Action are maintained.
35 U.S.C. 112(f)
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “a system”, “a virtual genomic variant data generation unit”, “a virtual variant learning unit”, “an actual variant learning unit”, “a weight extraction unit”, and “a pathogenicity determination unit” in independent claim 1, and “an evolutionary conservation data generation unit” and “a virtual pathogenic variant determination unit” in claim 3.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
The three-prong test for “a system”: (A) “system” is a substitute for “means” that is a generic placeholder; (B) “system” is modified by the functional language “configured to”; (C) “system” is not modified by sufficient structure for performing the step of generating the evolutionary conservation data including an evolutionary conservation feature by using multiple sequence alignment (MSA) from target protein sequence data and a plurality of similar pieces of protein sequence data.
The specification discloses a system [Fig. 1, 0010, and 0020], but does not disclose adequate structure to perform the claimed function. See below regarding issues under 112(a) and 112(b) arising from this claim interpretation.
The three-prong test for “a virtual genomic variant data generation unit”: (A) “unit” is a substitute for “means” that is a generic placeholder; (B) “unit” is modified by the functional language “for”; (C) “unit” is modified only by “virtual genomic variant data generation”, which does not provide sufficient structure for performing the step of generating the virtual genomic variant data from the evolutionary conservation data.
The three-prong test for “a virtual variant learning unit”: (A) “unit” is a substitute for “means” that is a generic placeholder; (B) “unit” is modified by the functional language “for”; (C) “unit” is modified only by “virtual variant learning”, which does not provide sufficient structure for performing the step of training an artificial neural network model using the virtual genomic variant data.
The three-prong test for “an actual variant learning unit”: (A) “unit” is a substitute for “means” that is a generic placeholder; (B) “unit” is modified by the functional language “for”; (C) “unit” is modified only by “actual variant learning”, which does not provide sufficient structure for performing the step of training an artificial neural network model using the actual genomic variant data.
The three-prong test for “a weight extraction unit”: (A) “unit” is a substitute for “means” that is a generic placeholder; (B) “unit” is modified by the functional language “for”; (C) “unit” is modified only by “weight extraction”, which does not provide sufficient structure for performing the step of obtaining weight values of a hidden layer of the artificial neural network model trained by the virtual variant learning unit.
The three-prong test for “a virtual pathogenic variant determination unit”: (A) “unit” is a substitute for “means” that is a generic placeholder; (B) “unit” is modified by the functional language “for”; (C) “unit” is modified only by “virtual pathogenic variant determination”, which does not provide sufficient structure for performing the step of generating each of virtual pathogenic genomic variant data and virtual non-pathogenic genomic variant data according to preset criteria from the evolutionary conservation feature.
The three-prong test for “a pathogenicity determination unit”: (A) “unit” is a substitute for “means” that is a generic placeholder; (B) “unit” is modified by the functional language “for”; (C) “unit” is modified only by “pathogenicity determination”, which does not provide sufficient structure for performing the step of determining pathogenicity of a target genomic variant using an artificial neural network model trained by the actual variant learning unit. The specification discloses the instant units but does not disclose adequate structure to perform the claimed functions. See below regarding issues under 112(a) and 112(b) arising from this claim interpretation.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Response to Applicant Arguments
Applicant submits the claims make clear how virtual genomic variant data is generated, how this data is consumed by the neural network, how hidden-layer weight values are extracted, and how pathogenicity is finally determined [p. 6, par. 3].
It is respectfully found not persuasive. The specification does disclose examples, but it does not provide a specific algorithm, structure, or step-by-step instructions on how to perform the data processing pipeline.
Claim Rejections - 35 USC § 112
35 U.S.C. 112(a)
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
Claim(s) 1, 3-8, and 10-11 is/are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
The outstanding rejections from the previous Office Action are maintained.
Claims 1 and 3, and those dependent therefrom, are rejected because, as outlined above under 35 U.S.C. 112(f), the disclosure does not contain adequate structure for “a virtual genomic variant data generation unit”, “a virtual variant learning unit”, “an actual variant learning unit”, “a weight extraction unit”, and “a pathogenicity determination unit” in claim 1, and “an evolutionary conservation data generation unit” and “a virtual pathogenic variant determination unit” in claim 3, to perform the claimed functions. The specification as published does mention these units, but without any structure. Therefore, there is insufficient disclosure of the necessary structure, steps explained in prose, or any mathematical expression necessary to carry out the above-recited functions.
With respect to the above limitations, adequate written description for specific programming to carry out said functions in computer-related inventions requires disclosure of the algorithm by which to perform said function. Without the algorithm disclosed, it is unclear as to the exact structure that performs said function (see Finisar Corp. v. DirecTV Group Inc., 86 USPQ2d 1609, 1623 (Fed. Cir. 2008); Halliburton Energy Services v. M-I LLC 514 F.3d 1244, 1256 n.7 (Fed. Cir. 2008)). This raises issues under 112(a) because without the respective algorithms disclosed, one is not apprised of the inventor or joint inventor having possession of the claimed invention.
35 USC § 112(b)
The outstanding rejection has been withdrawn in view of the claim amendments.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 3-8, and 10-11 are rejected under 35 U.S.C. 101 because the claimed invention is not directed to a process, machine, manufacture, or composition of matter.
MPEP 2106 organizes judicial exception analysis into Steps 1, 2A (Prongs One and Two), and 2B, as follows. MPEP 2106 and the following USPTO website provide further explanation and case law citations: uspto.gov/patent/laws-and-regulations/examination-policy/examination-guidance-and-training-materials.
Framework with which to Evaluate Subject Matter Eligibility:
Step 1: Are the claims directed to a process, machine, manufacture, or composition of matter;
Step 2A, Prong One: Do the claims recite a judicially recognized exception, i.e. a law of nature, a natural phenomenon, or an abstract idea;
Step 2A, Prong Two: If the claims recite a judicial exception under Prong One, then is the judicial exception integrated into a practical application (Prong Two); and
Step 2B: If the claims do not integrate the judicial exception, do the claims provide an inventive concept.
Framework Analysis as Pertains to the Instant Claims:
Step 1
The claim(s) does/do not fall within at least one of the four categories of patent eligible subject matter because they only recite a system with units. The plain meaning of the claims includes a system with a two-step process of training an artificial neural network model and learning actual genomic variant data using units, which are products that do not have a physical or tangible form, such as information (often referred to as "data per se") and a computer program per se (often referred to as "software per se") when claimed as a product without any structural recitations (MPEP 2106.03). The specification does not modify the plain meaning of the claims; therefore, they are directed to “software per se”.
Response to Applicant Arguments
Applicant submits claim 1 is directed to a "system," which is unquestionably a "machine" [p. 9, par. 4].
It is respectfully found not persuasive. There can be no system without a computer or processor. The instant claims are software alone and therefore are not patent eligible, as a machine is a "concrete thing, consisting of parts, or of certain devices and combination of devices." Digitech, 758 F.3d at 1348-49, 111 USPQ2d at 1719 (quoting Burr v. Duryee, 68 U.S. 531, 570, 17 L. Ed. 650, 657 (1863)); MPEP 2106.03. Non-limiting examples of claims that are not directed to any of the statutory categories include products that do not have a physical or tangible form, such as information (often referred to as "data per se") or a computer program per se (often referred to as "software per se") when claimed as a product without any structural recitations. MPEP 2106.03.
Applicant submits that the claims are not directed to an abstract idea but rather to a specific technological improvement in the field of machine learning applied to genomic variant analysis [p. 9, par. 5].
It is respectfully found not persuasive. As there are no judicial exceptions or additional elements, there can be no technological improvement.
Applicant submits the claimed system is therefore not directed to mere mathematical analysis or generic data processing [p. 9, par. 6].
It is respectfully found not persuasive. As no claims are patent eligible, there cannot be a technical improvement.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
For the following rejections, instantly claimed elements which are considered to be equivalent to the prior art teachings are described in bold for all claims, and underlined text indicates newly recited portions necessitated by claim amendment.
A. Claims 1, 3-5, 7-8, and 10-11 are rejected under 35 U.S.C. 103 as being unpatentable over Shamsi et al. (Shamsi, Zahra, Matthew Chan, and Diwakar Shukla. “TLmutation: Predicting the Effects of Mutations Using Transfer Learning.” The Journal of Physical Chemistry B 124.19 (2020): 3845–3854. Web, cited on IDS dated 12/15/2021) in view of Hopf et al. (Hopf, Thomas A., et al. "Mutation effects predicted from sequence co-variation." Nature Biotechnology 35.2 (2017): 128-135, cited on IDS dated 12/15/2021).
The instant rejection is newly stated and is necessitated by claim amendment.
Claim 1 is directed to a system for predicting pathogenicity of a genomic variant, comprising:
Shamsi discloses TLmutation: predicting the effects of mutations using transfer learning [title]. Shamsi further discloses conducting multiple experiments using the proposed transfer learning algorithms to evaluate the practical efficiency in predicting the effects of mutations of multiple proteins with different types of training and test data sets [p. 3846, col. 2, par. 1].
a virtual genomic variant data generation unit configured to generate virtual genomic variant data from evolutionary conservation data, wherein the evolutionary conservation data is generated from multiple sequence alignment of target protein sequence data and a plurality of similar pieces of protein sequence data;
Shamsi discloses that in this work, we aimed to extract new predictive models of the biological function of proteins from predictive models trained on the evolutionary data and extend it to a new protein via unsupervised transfer learning [p. 3852, col. 1, par. 3]. Shamsi further discloses that mutagenesis data sets provide an opportunity to utilize this data-rich regime and investigate the transferability of mutational effects among homologous proteins [p. 3850, col. 2, par. 2 and fig. 5], which reads on multiple sequence alignment of a target protein. Shamsi does not explicitly disclose multiple sequence alignment.
However, Hopf discloses mutation effects predicted from sequence co-variation [title]. Hopf further discloses EVmutation, an unsupervised statistical method for predicting the effects of mutations that explicitly captures residue dependencies between positions [abstract]. Hopf also discloses that for each analyzed protein (target sequence), multiple sequence alignments of the corresponding protein family were obtained by the default five search iterations of the profile HMM homology search tool jackhmmer [p. 136, col. 1, par. 1]. As TLmutation is a modification of EVmutation, they use the same data generation processes.
a virtual variant learning unit configured to train an artificial neural network model using the virtual genomic variant data; a weight extraction unit configured to extract weight values of a hidden layer of the artificial neural network model trained by the virtual variant learning unit;
Shamsi discloses an algorithm, TLmutation, which is an adaptation of the successful variant effect predictor EVmutation, that utilizes deep mutational data sets to enhance predictions of variant effects in proteins [p. 3846, col. 1, par. 4]. Shamsi further discloses implementing an algorithm, TLmutation, which transfers knowledge from a model, trained on natural sequences and deep mutational data, to a new protein function for the same protein [p. 3846, col. 1, par. 4]. Shamsi also discloses that the algorithm transfers knowledge from a well-trained MRF model of one task to other similar tasks when either limited or no training data is available [p. 3847, col. 2, par. 5]. Shamsi further discloses using the same learned weights w to switch the potentials in M base target and obtain Mnew target while using the same value of θ,base, c, target for potentials that remain active in the target MRF [p. 3847, col. 2, par. 5]. As ANNs are able to reuse learned features and parameters from one task to improve performance on new, related tasks, saving time and data, knowledge can be shared across different models or layers within the same network.
an actual variant learning unit configured to train the artificial neural network model using actual genomic variant data, wherein the actual variant learning unit is configured to apply the weight values extracted by the weight extraction unit to the artificial neural network model when learning the actual genomic variant data; and
Shamsi discloses implementing an algorithm, TLmutation, which transfers knowledge from a model, trained on natural sequences and deep mutational data, to a new protein function for the same protein [p. 3846, col. 1, par. 4]. Shamsi also discloses that the algorithm transfers knowledge from a well-trained MRF model of one task to other similar tasks when either limited or no training data is available [p. 3847, col. 2, par. 5]. Shamsi further discloses using the same learned weights w to switch the potentials in M base target and obtain Mnew target while using the same value of θ,base, c, target for potentials that remain active in the target MRF [p. 3847, col. 2, par. 5]. As ANNs are able to reuse learned features and parameters from one task to improve performance on new, related tasks, saving time and data, knowledge can be shared across different models or layers within the same network.
a pathogenicity determination unit configured to evaluate the pathogenicity of a genomic variant based on the artificial neural network model trained by the actual variant learning unit, producing a score indicating the likelihood of pathogenicity.
Shamsi discloses that in this study, ρ assesses the association between two ranked variables, the predicted effect of a point mutation and the experimental effect [p. 3847, col. 1, par. 2], which reads on pathogenicity. With a score being interpreted as a measure of potential pathogenicity, particularly by predicting how mutations alter protein function, the Spearman’s rank correlation assessment can be used as a score of pathogenicity.
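For illustration, the weight-transfer scheme recited in claim 1 and mapped above can be sketched as a small script. This is a hypothetical toy example (invented data, layer sizes, and training routine), not the applicant’s or Shamsi’s implementation:

```python
import numpy as np

# Toy sketch of the claimed scheme: train on virtual variant data, extract
# the hidden-layer weights, reuse them when training on actual variant
# data, then score a target variant. All names and sizes are hypothetical.
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train(X, y, W1, W2, lr=0.1, epochs=200):
    """One-hidden-layer network trained with plain gradient descent."""
    for _ in range(epochs):
        h = sigmoid(X @ W1)             # hidden-layer activations
        p = sigmoid(h @ W2)             # predicted pathogenicity in (0, 1)
        dp = (p - y) * p * (1 - p)      # output-layer error signal
        dh = (dp @ W2.T) * h * (1 - h)  # back-propagated hidden-layer error
        W2 -= lr * (h.T @ dp)
        W1 -= lr * (X.T @ dh)
    return W1, W2

n_features, n_hidden = 8, 4
X_virtual = rng.random((32, n_features))
y_virtual = rng.integers(0, 2, (32, 1)).astype(float)
X_actual = rng.random((16, n_features))
y_actual = rng.integers(0, 2, (16, 1)).astype(float)

# "virtual variant learning unit": train on virtual genomic variant data
W1 = rng.normal(0, 0.1, (n_features, n_hidden))
W2 = rng.normal(0, 0.1, (n_hidden, 1))
W1, W2 = train(X_virtual, y_virtual, W1, W2)

# "weight extraction unit": obtain the hidden-layer weight values
hidden_weights = W1.copy()

# "actual variant learning unit": apply the extracted weights as the
# initialization, then train on actual genomic variant data
W1b = hidden_weights.copy()
W2b = rng.normal(0, 0.1, (n_hidden, 1))
W1b, W2b = train(X_actual, y_actual, W1b, W2b)

# "pathogenicity determination unit": score a target genomic variant
score = float(sigmoid(sigmoid(X_actual[:1] @ W1b) @ W2b)[0, 0])
print(score)
```

The transfer step is simply the reuse of `hidden_weights` as the initial hidden-layer parameters for the second training pass, mirroring the claimed application of extracted weight values when learning the actual genomic variant data.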
Claim 3 is directed to the system of claim 1, wherein the virtual genomic variant data generation unit comprises: an evolutionary conservation data generation unit configured to generate the evolutionary conservation data including an evolutionary conservation feature by using the multiple sequence alignment from the target protein sequence data and the plurality of similar pieces of protein sequence data; and a virtual pathogenic variant determination unit configured to generate virtual pathogenic genomic variant data and virtual non-pathogenic genomic variant data according to preset criteria from the evolutionary conservation feature.
Shamsi is silent on evolutionary conservation.
However, Hopf discloses analysis of structural features where evolutionary couplings calculated from multiple sequence alignments were compared to experimental protein 3D structures from the PDB to assess if the identified epistatic constraints correspond to structural contacts [p. 138, col. 2, par. 5]. Hopf further discloses that structures and mappings to the target sequence were obtained using jackhmmer-based searches against the PDB (one search iteration), and residue pair distances were calculated for up to ten of the most significant hits with a normalized bit-score of at least 0.5 bits/residue to the target sequence [p. 138, col. 2, par. 5]. Hopf also discloses that alignments for protein sequences with disease and/or neutral variants were generated by identifying Pfam domains in the respective sequence using hmmscan from HMMER [p. 138, col. 1, par. 1].
Claim 4 is directed to the system of claim 3, wherein the evolutionary conservation feature is a frequency of amino acids found in a corresponding residue.
Hopf further discloses that structures and mappings to the target sequence were obtained using jackhmmer-based searches against the PDB (one search iteration), and residue pair distances were calculated for up to ten of the most significant hits with a normalized bit-score of at least 0.5 bits/residue to the target sequence [p. 138, col. 2, par. 5].
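For illustration, the conservation feature recited in claim 4 — the frequency of amino acids found at a given residue — can be computed directly from the columns of an MSA. The alignment below is a hypothetical toy example, not data from the application or the cited references:

```python
from collections import Counter

# Hypothetical toy alignment: four aligned five-residue sequences.
msa = [
    "MKTLV",
    "MKSLV",
    "MRTLV",
    "MKTLI",
]

def column_frequencies(alignment, position):
    """Fraction of sequences carrying each amino acid at one MSA column."""
    column = [seq[position] for seq in alignment]
    counts = Counter(column)
    return {aa: n / len(column) for aa, n in counts.items()}

freqs = column_frequencies(msa, 1)  # residue position 2 (0-based index 1)
print(freqs)  # column holds K, K, R, K -> {'K': 0.75, 'R': 0.25}
```

A highly conserved residue yields a frequency distribution concentrated on one amino acid, which is the sense in which per-column frequency serves as an evolutionary conservation feature.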
Claim 5 is directed to the system of claim 3, wherein the multiple sequence alignment is performed by a BLAST algorithm or an HHBLits algorithm.
Shamsi does not explicitly disclose multiple sequence alignment.
However, Hopf discloses that for each analyzed protein (target sequence), multiple sequence alignments of the corresponding protein family were obtained by the default five search iterations of the profile HMM homology search tool jackhmmer [p. 136, col. 1, par. 1]. As TLmutation is a modification of EVmutation, they use the same data generation processes.
Claim 7 is directed to the system of claim 1, wherein the actual genomic variant data comprises actual pathogenic genomic variant data and actual non-pathogenic genomic variant data.
Shamsi is silent on the actual genomic variant data comprising actual pathogenic genomic variant data and actual non-pathogenic genomic variant data.
However, Hopf discloses that alignments for protein sequences with disease and/or neutral variants were generated by identifying Pfam domains in the respective sequence using hmmscan from HMMER [p. 138, col. 1, par. 1], covering both test and training data.
Claim 8 is directed to the system of claim 1, wherein the knowledge transfer comprises transfer learning and the system further performs multi-task learning.
Shamsi discloses a transfer learning algorithm for MRF models [p. 3846, fig. 1]. Shamsi further discloses that in each source and target domain, the new task is a subset of the base task [p. 3846, fig. 1] and knowledge is then transferred from the source domain, where training data is available, to the target domain [p. 3846, fig. 1], which reads on multi-task learning.
Claim 10 is directed to the system of claim 8, wherein in the multi-task learning, the respective weight values extracted from the virtual variant learning unit and the actual variant learning unit are alternately applied to a hidden layer of an artificial neural network model.
Shamsi discloses an algorithm, TLmutation, which is an adaptation of the successful variant effect predictor EVmutation, that utilizes deep mutational data sets to enhance predictions of variant effects in proteins [p. 3846, col. 1, par. 4]. Shamsi further discloses implementing an algorithm, TLmutation, which transfers knowledge from a model, trained on natural sequences and deep mutational data, to a new protein function for the same protein [p. 3846, col. 1, par. 4]. Shamsi also discloses that the algorithm transfers knowledge from a well-trained MRF model of one task to other similar tasks when either limited or no training data is available [p. 3847, col. 2, par. 5]. Shamsi further discloses using the same learned weights w to switch the potentials in M base target and obtain Mnew target while using the same value of θ,base, c, target for potentials that remain active in the target MRF [p. 3847, col. 2, par. 5]. As ANNs are able to reuse learned features and parameters from one task to improve performance on new, related tasks, saving time and data, knowledge can be shared across different models or layers within the same network.
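For illustration, the alternating application recited in claim 10 can be sketched as a simple round-robin schedule over two sets of extracted weight values. The weight vectors below are hypothetical placeholders, not values from the application or the cited references:

```python
# Hypothetical hidden-layer weight values extracted from the virtual-variant
# model and the actual-variant model, respectively.
virtual_weights = [0.2, -0.1, 0.4]
actual_weights = [0.3, 0.0, 0.1]

# Alternate which set of weights is applied to the hidden layer per round:
# even rounds use the virtual-variant weights, odd rounds the actual-variant
# weights, as in the alternating multi-task scheme of claim 10.
schedule = []
for round_idx in range(4):
    source = "virtual" if round_idx % 2 == 0 else "actual"
    weights = virtual_weights if source == "virtual" else actual_weights
    schedule.append((source, weights))

print([s for s, _ in schedule])  # ['virtual', 'actual', 'virtual', 'actual']
```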
Claim 11 is directed to the system of claim 1, wherein the hidden layer is an initial layer of the artificial neural network model.
Although Shamsi and Hopf do not explicitly disclose hidden layers of an ANN, knowledge transfer algorithms require hidden layers for learning and transferring feature data.
In regards to claim(s) 1, 3-8, and 10-11, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Shamsi with Hopf, as they both are directed to pathogenicity of mutations. The motivation would have been to include the evolutionary conservation data of Hopf in the transfer model of Shamsi so as to include non-pathogenic data as well as pathogenic data in the model as a design choice, a finding that one could have combined the elements as claimed by known methods, and that in combination, each element merely would have performed the same function as it did separately. Additionally, substituting the HMM sequence alignment of Hopf for the sequence alignment of Shamsi would provide the same data, a finding that one of ordinary skill in the art could have substituted one known element for another, and the results of the substitution would have been predictable.
Conclusion
No claims are allowed.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Inquiries
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Dawn Bickham, whose telephone number is (703)756-1817. The examiner can normally be reached Monday - Friday, 8-4.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Olivia Wise, can be reached at (571)272-2249. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/D.M.B./Examiner, Art Unit 1685 /Soren Harward/Primary Examiner, TC 1600