Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
1. This Final Office action is in reply to Applicant's amendment filed on 25 November 2025.
2. Claims 1-5, 7-9, 14, and 20 have been amended.
3. Claims 1-20 are currently pending and have been examined.
Response to Amendment
In the previous Office action, Claims 1-20 were rejected under 35 U.S.C. § 101 because the claimed invention is directed to non-statutory subject matter (an abstract idea). Applicant has not amended Claims 1-20 in a manner that overcomes this rejection, and the rejection is maintained.
In the previous Office action, Claims 1-20 were rejected under 35 U.S.C. § 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which Applicant regards as the invention. Applicant has provided clarification and amended Claims 1-20 to remove the previously identified indefinite phrase limitations. Although the phrase “is correct” remains broad, it is given its broadest reasonable interpretation in view of the prior art cited below, and the § 112 rejection is withdrawn.
Response to Arguments
Applicant’s arguments filed 25 November 2025 have been fully considered but are not persuasive. In the remarks regarding the 35 U.S.C. § 101 rejection of Claims 1-20, Applicant argues that: (1) the claims are not directed to an abstract idea, and even if they were, they amount to significantly more than the abstract idea. The Examiner respectfully disagrees. Consistent with the two-part subject matter eligibility framework of Alice Corp. Pty. Ltd. v. CLS Bank International et al. (Alice), the 2019 Revised Patent Subject Matter Eligibility Guidance (2019 PEG), the October 2019 Update: Subject Matter Eligibility (“October 2019 Update”), and the July 2024 Guidance Update on Patent Subject Matter Eligibility, Including on Artificial Intelligence, the Examiner details the maintained rejection under 35 U.S.C. § 101 below with further explanation. Applicant further cites the Federal Circuit's Enfish decision for the proposition that “[s]oftware can make non-abstract improvements to computer technology, just as hardware improvements can,” and offers a brief comparison to Ex parte Desjardins, Appeal 2024-000567 (see Remarks/Arguments pages). The Examiner respectfully disagrees. First, the Enfish decision involved subject matter different from Applicant's broadly recited claim limitations. Second, the Desjardins decision has not been incorporated into the Manual of Patent Examining Procedure (MPEP) for analysis of non-statutory claim language and will not be addressed for the instant application at this time. The broadly recited claims remain non-statutory under the rejection below, with additional clarification from the Examiner's analysis, because they recite:
Step 2A, Prong One: (a) Mathematical concepts – mathematical relationships, mathematical formulas or equations, mathematical calculations. The steps of “calculating…determining…thresholds,” “generating actual output data,” “determining…characteristics…meet one or more machine learning modeling thresholds,” and “calculating a…first value that indicates a degree to which the…model is correct” are mathematical calculations. The “first value that indicates the degree to which the respective machine learning model is correct given the respective actual output data and the expected output data” describes the calculation of a model performance metric (e.g., accuracy, precision, recall). The calculation of a metric, which involves comparing outputs and deriving a value, is a fundamental mathematical concept/computation. The “second value that indicates the degree to which the respective machine learning model is correct given the testing data from the input data and the expected output data” is a restatement of the first, specifying the use of testing data, and also describes a mathematical concept/computation. The mere use of testing data, a common practice described by IBM, does not turn the calculation into a practical application. As to “selecting a machine learning model from the two or more machine learning models using the second parameter that meets the one or more machine learning modeling thresholds,” the process of “selecting” a model based on quantitative “values” and “thresholds” is a decision-making step that can be performed by a human or a basic computer program. This is an application of a fundamental economic or business practice (model selection is a well-known process in the field, as noted by IBM) and also involves the application of the prior mathematical calculations. This element, in isolation, is an abstract idea.
"providing, to a system, the selected machine learning model to cause the system to generate a recommendation using the selected machine learning model". This element describes using the selected model in a downstream application (generating a recommendation). However, the act of "providing" the model and having a "system" use it does not, without more, specify how the abstract idea is integrated into a concrete, technological process in a way that is not itself abstract. The 'recommendation' itself is likely a result or output of the model, not a physical process. In combination, the elements describe a mental process or mathematical method for comparing models using calculated metrics and selecting the "best" one, which is an abstract idea.
(b) Mental processes – concepts performed in the human mind (including an observation, evaluation, judgment, opinion). The steps of evaluating data, determining parameter types, and selecting a model, while implemented on a computer, are fundamentally analytical processes that can be performed in the human mind. The claim focuses on data analysis, threshold evaluation, and model selection—all considered abstract, "mental" steps, even if performed on a computer.
See MPEP § 2106.04(a) II C. Hence, the claims are ineligible under Step 2A Prong One. Furthermore, the dependent claims are merely directed to the particulars of the abstract idea and likewise do not add significantly more to the above-identified judicial exception. The limitations of the claims do not transform the abstract idea that they recite into patent-eligible subject matter because the claims simply instruct the practitioner to implement the abstract idea using generally-recited computer components. In summary, as indicated below through Steps 1-2B, the recitation of a computer (one or more processors) to perform the claim limitations amounts to no more than mere instructions to apply the exception using a generic computer component. Even when considered in combination, these additional elements represent mere instructions to implement an abstract idea or other exception on a computer and insignificant extra-solution activity, which do not provide an inventive concept.
Prong Two: Claims 1-20: With regard to this step of the analysis (as explained in MPEP § 2106.04(d)), the judicial exception is not integrated into a practical application. The claims contain computer components/elements (“one or more computers; storage devices; non-transitory computer readable medium; one/two or more machine learning model(s),” etc.) (e.g., see Applicant's published Specification ¶¶ 2-14, 27-39) that are recited at a high level of generality and are merely invoked as a tool to perform the abstract idea. Simply implementing an abstract idea on a computer is not a practical application of the abstract idea.
Step 2B: As explained in MPEP § 2106.05, Claims 1-20 do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements, when considered both individually and as an ordered combination, do not amount to significantly more than the abstract idea nor recite additional elements that integrate the judicial exception into a practical application. The additional elements of “one or more computers; storage devices; non-transitory computer readable medium; one/two or more machine learning model(s),” etc. are generically recited computer-related elements that amount to a mere instruction to “apply it” (the abstract idea) on the computer-related elements (see MPEP § 2106.05(f) – Mere Instructions to Apply an Exception). These additional elements in the claims are recited at a high level of generality and merely limit the field of use of the judicial exception (see MPEP § 2106.05(h) – Field of Use and Technological Environment). There is no indication that the combination of elements improves the function of a computer or improves any other technology. Furthermore, the dependent claims are merely directed to the particulars of the abstract idea and likewise do not add significantly more to the above-identified judicial exception. The limitations of the claims do not transform the abstract idea that they recite into patent-eligible subject matter because the claims simply instruct the practitioner to implement the abstract idea using generally-recited computer components, and furthermore do not amount to an improvement to a computer or any other technology, and thus are ineligible. For at least these reasons, the rejection is maintained.
Applicant submits that: (2) Molero (US 2023/0377747) does not teach or suggest, in representative and amended broadly recited Claim 1: “Molero does not disclose the claimed first value or second value; selecting the machine learning model using the first and second values” (see Remarks/Arguments pages 12-14) (emphasis added). With regard to argument (2), the Examiner respectfully disagrees. First, these specific limitations are not recited as such in the broadly recited claim limitations. Apparently, Applicant is arguing that Molero does not teach a “…first value…second value…” within the context of the following broadly recited claim limitations, with the Examiner's citations and clarifications, recited in Claim 1 as: determining (to determine a proposed treatment for a subject with a condition), by the one or more computers and for each of the two or more machine learning models/plurality of parameters (A model may infer missing subject information and/or may use a framework (e.g., a Bayesian framework) to estimate parameter dependency based on population-level data. In some instances, Markov Chain Monte Carlo simulations can be used to estimate posterior distributions of population parameters and subject-specific parameters, and a covariate model can identify systematic variability explainable by measurable subject characteristics (e.g., age, height, disease type)), a first value (parameters; value; metric(s); numerical or non-numerical values) that indicates a degree to which the respective machine learning model (An array representation (e.g., a transformed representation, such as a vector, an N-dimensional matrix, or any numerical representation of a non-numerical value) may be any numerical and/or categorical representation of the values of data fields of a subject record. For example, an array representation of a subject record may be a vector representation of the subject record in a domain space, such as in a Euclidean space.
In some instances, cloud server 135 may be configured to transform an entire subject record into a numerical representation, such as a vector. For a given subject record, cloud server 135 may evaluate each data element to determine the type of data contained or included in that data element. The type of data may inform the cloud server 135 as to which process or technique to perform to transform the numerical or non-numerical values of that data element into a numerical representation) is correct using respective actual output data and expected output data (Accuracy of the predictions can be fed back to the Generator network until a threshold accuracy is obtained or a threshold number of iterations have occurred. The transformation can then be used to estimate subject-specific metrics (and/or uni- or multi-dimensional distributions thereof) that represent pharmacokinetics corresponding to an individual subject. This approach can facilitate using a limited and/or small number of subject-specific variable values to generate a subject-specific distribution that may more fully represent biological activity. A sampling technique (e.g., Monte-Carlo technique) may sample from the distribution to generate data to use to train another model (e.g., a pharmacokinetic model or neural network)), wherein the first value that indicates a degree to which the respective output data is correct (accuracy; Manifestation data 773 is an example of empirical result data that may be received by, availed to and/or stored at central artificial-intelligence system 750. Manifestation data 773 (or other empirical result data) may be used to assess accuracy of one or more population-level models, subject-specific models, population-level workflows and/or subject-specific workflows. 
Manifestation data 773 (or other empirical result data) may be monitored to determine whether to initiate re-training of an AI model, selecting a different AI model, adjusting pre- and/or post-processing functions used for a given subject, etc. Manifestations data 773 (or other empirical result data) may further indicate an accuracy of various model predictions, which may influence whether such models are subsequently used and/or retrained) (see at least paragraphs 147, 209-224, 265-271, 282-283);
determining (cloud server 135 may evaluate each data element to determine the type of data contained or included in that data element. The type of data may inform the cloud server 135 as to which process or technique to perform to transform the numerical or non-numerical values of that data element into a numerical representation), by the one or more computers and for each of the two or more machine learning models, a second value (cloud server 135 may transform non-numerical values (e.g., the text of a physician's notes) of a data element of a subject record into a numerical representation (e.g., a vector). The transformation may include using natural language processing techniques, such as Word2Vec or other text vectorization techniques, to generate a numerical value that represents each word of text. The generated numerical value may serve as a vector that can be inputted into a trained neural network to perform intelligent analysis) that indicates a degree to which the respective machine learning model is correct (accuracy) using testing data from the input data and expected output data, (The sensor data and/or an inference made based on the sensor data may be used to (for example) select and/or configure a machine-learning model, select and/or configure a pre-processing function, and/or select and/or configure a post-processing function. For example, a first set of rate constants can be defined for a pharmacokinetic model to be used when it is inferred that a user is stationary; a second set of rate constants can be defined for the pharmacokinetic model to be used when it is inferred that a user is participating in a low-intensity activity; and a third set of rate constants can be defined for the pharmacokinetic model to be used when it is inferred that a user is participating a high-intensity activity. In some instances, the first, second and third sets of rate constants may have been separately learned using different training sets. 
In some instances, one of the first, second and third sets of rate constants may be learned using a training data set, and each rate constant may be adjusted by a corresponding absolute or relative amount to determine a corresponding rate constant for another of the second or third set of rate constants. Dynamic selections of rate-constant sets can then be made, with smooth transitions being facilitated by availing and/or sharing state variables and/or other interim variables), wherein the second value indicates a repeatability of generating respective actual output data (Manifestation data 773 is an example of empirical result data that may be received by, availed to and/or stored at central artificial-intelligence system 750. Manifestation data 773 (or other empirical result data) may be used to assess accuracy of one or more population-level models, subject-specific models, population-level workflows and/or subject-specific workflows. Manifestation data 773 (or other empirical result data) may be monitored to determine whether to initiate re-training of an AI model, selecting a different AI model, adjusting pre- and/or post-processing functions used for a given subject, etc. Manifestations data 773 (or other empirical result data) may further indicate an accuracy of various model predictions, which may influence whether such models are subsequently used and/or retrained) (see at least paragraphs 48, 147, 209-224, 265-271, 282-283);
selecting, by the one or more computers, a machine learning model from the two or more machine learning models using the second parameter that meets the one or more machine learning modeling thresholds (one or more models 718 include a machine-learning model and/or pharmacokinetic model that predicts (for example) a concentration of an active agent at one or more time points, factor activity levels at one or more times, a clotting time or clotting propensity at one or more times, a time at which a clotting propensity falls below or reaches a threshold, and/or a time at which an active-agent concentration falls below or reaches a threshold), (a) the first value that indicates the degree (based on a degree; update the rule when an update condition has been satisfied) to which the respective machine learning model is correct given the respective actual output data and the expected output data (cloud server 135 may update the rule when an update condition has been satisfied. An update condition may be a threshold value. For example, the threshold value may be a number or percentage of external entities that have integrated a modified version of the rule into their custom rule bases. 
As another example, the update condition may be determined using an output of a trained machine-learning model), and, (b) the second value that indicates the degree (The result of the comparison (e.g., in a domain space, such as a Euclidean space) between two numerical representations may indicate a degree to which the text included in the target data element is similar to the text included in the data element of another subject record) to which the respective machine learning model is correct given the testing data from the input data and the expected output data (Weights used to calculate the similarity metric may be determined (for example) based on a degree to which the attribute was related to prediction accuracy in test data (e.g., such that higher weights are assigned when differences between attribute values of accurate predictions and attribute values of inaccurate predictions were larger and/or more significant), a degree to which the attribute is unique across a population of hemophilia subjects (e.g., such that higher weights are assigned when a subject attribute is more unique), and/or a variability of the attribute in training data (e.g., such that higher weights are assigned when there is lower variability of the attribute); In some instances, model use data 772 indicates which subjects (and/or attributes thereof) are using a model, and population-level training code 766 and/or subject-specific adjustment code 769 may further train a population-level AI model and/or subject-level AI model to improve accuracy for similar subjects and/or for other subjects that are not currently using the model; In some instances, hemophilia app 717 processes sensor data to select a model that is to be used to generate hemophilia-related predictions for the subject. 
The different model selection may include selecting a model trained using different data, trained using a different loss function and/or objective function, having different fixed hyperparameters, and/or having a different architecture. For example, a default model selected by hemophilia app 717 for the subject may include a model that prioritizes accurately predicting levels of an active treatment agent. Meanwhile, upon inferring that a user has engaged in high-intensity activity, hemophilia app 717 may transition to a model that prioritizes accurately predicting occurrence of abnormal bleeding events. Transitioning between models may be facilitated by defining dynamic variable correspondences between the models. For example, each of multiple models may be configured to receive a predicted active-ingredient level (e.g., which may have been generated via processing of a previous time step). Other input may be objective and/or fixed (e.g., physical and/or demographic attributes of a subject and/or variables based on or including sensor variables). Thus, when switching from one model to another, input variables for the other model may be readily available. In some instances, post-processing is implemented to further smooth and/or filter predictions generated by the two models) (see at least paragraphs 147, 158, 173, 209-224, 265-271, 282). It is noted that any citations to specific pages, columns, paragraphs, lines, or figures in the prior art references, and any interpretation of the references, should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. See MPEP 2123. The Examiner has a duty and responsibility to the public and to Applicant to interpret the claims as broadly as reasonably possible during prosecution. In re Prater, 415 F.2d 1393, 1404-05, 162 USPQ 541, 550-51 (CCPA 1969).
For at least these reasons, the rejection is maintained.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Broadly recited Claims 1-20 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. The claims as a whole recite a grouping of an abstract idea and are analyzed under the following step process:
Step 1: Claims 1-20 are each directed to a statutory category of invention, namely the “system; non-transitory computer readable medium; method” sets.
Step 2A, Prong One: Claims 1-20 recite limitations that set forth an abstract idea; the claims as a whole are directed to an abstract idea without significantly more. Representative independent Claim 1 generally recites steps for receiving data, evaluating data against modeling thresholds, transforming parameter types, generating model outputs, calculating model accuracy, and selecting a model based on performance, because the claims encompass processing information by:
“receiving, from a plurality of data sources, input data that includes, for each of multiple records, a) a plurality of parameters, and b) values for at least some of the parameters;
determining, for the plurality of parameters, if characteristics of a corresponding parameter in the multiple records meet one or more machine learning modeling thresholds;
transforming, by the one or more computers and for at least one of the parameters i) that does not meet at least of the one or more machine learning modeling thresholds and ii) has a first parameter type, the corresponding parameter to a second parameter with a second, different parameter type that meets the one or more propensity modeling thresholds;
generating, by the one or more computers and using each of two or more trained machine learning machine learning models, actual output data by providing training data from the input data to the respective propensity model;
determining, for each of the two or more machine learning models, a first value that indicates a degree to which the respective machine learning model is correct using respective actual output data and expected output data, wherein the first value that indicates a degree to which the respective actual output data is correct;
determining, for each of the two or more propensity models, a second value that indicates a degree to which the respective machine learning model is correct using testing data from the input data and expected output data, wherein the second value indicates a repeatability of generating the respective actual output data;
selecting a machine learning model from the two or more machine learning models using the second parameter that meets the one or more machine learning modeling thresholds and, for the two or more machine learning models, (a) the first value that indicates the degree to which the respective machine learning model is correct given the respective actual output data and the expected output data and, (b) the second value that indicates the degree to which the respective machine learning model is correct given the testing data from the input data and the expected output data; and
providing, to a system, the selected machine learning model to cause the system to generate a recommendation using the selected machine learning model”
The claims fall under the categories:
(a) Mathematical concepts – mathematical relationships, mathematical formulas or equations, mathematical calculations. The steps of “calculating…determining…thresholds,” “generating actual output data,” “determining…characteristics…meet one or more machine learning modeling thresholds,” and “calculating a…first value that indicates a degree to which the…model is correct” are mathematical calculations. The “first value that indicates the degree to which the respective machine learning model is correct given the respective actual output data and the expected output data” describes the calculation of a model performance metric (e.g., accuracy, precision, recall). The calculation of a metric, which involves comparing outputs and deriving a value, is a fundamental mathematical concept/computation. The “second value that indicates the degree to which the respective machine learning model is correct given the testing data from the input data and the expected output data” is a restatement of the first, specifying the use of testing data, and also describes a mathematical concept/computation. The mere use of testing data, a common practice described by IBM, does not turn the calculation into a practical application. As to “selecting a machine learning model from the two or more machine learning models using the second parameter that meets the one or more machine learning modeling thresholds,” the process of “selecting” a model based on quantitative “values” and “thresholds” is a decision-making step that can be performed by a human or a basic computer program. This is an application of a fundamental economic or business practice (model selection is a well-known process in the field, as noted by IBM) and also involves the application of the prior mathematical calculations. This element, in isolation, is an abstract idea.
"providing, to a system, the selected machine learning model to cause the system to generate a recommendation using the selected machine learning model". This element describes using the selected model in a downstream application (generating a recommendation). However, the act of "providing" the model and having a "system" use it does not, without more, specify how the abstract idea is integrated into a concrete, technological process in a way that is not itself abstract. The 'recommendation' itself is likely a result or output of the model, not a physical process. In combination, the elements describe a mental process or mathematical method for comparing models using calculated metrics and selecting the "best" one, which is an abstract idea.
(b) Mental processes – concepts performed in the human mind (including an observation, evaluation, judgment, opinion). The steps of evaluating data, determining parameter types, and selecting a model, while implemented on a computer, are fundamentally analytical processes that can be performed in the human mind. The claim focuses on data analysis, threshold evaluation, and model selection—all considered abstract, "mental" steps, even if performed on a computer.
See MPEP § 2106.04(a) II C. Hence, the claims are ineligible under Step 2A Prong One. Furthermore, the dependent claims are merely directed to the particulars of the abstract idea and likewise do not add significantly more to the above-identified judicial exception. The limitations of the claims do not transform the abstract idea that they recite into patent-eligible subject matter because the claims simply instruct the practitioner to implement the abstract idea using generally-recited computer components.
Prong Two: Claims 1-20: With regard to this step of the analysis (as explained in MPEP § 2106.04(d)), the judicial exception is not integrated into a practical application. Here, the claims contain computer components/elements (“one or more computers; storage devices; non-transitory computer readable medium; one/two or more machine learning model(s),” etc.) (e.g., see Applicant's published Specification ¶¶ 2-14, 27-39) that are recited at a high level of generality and are merely invoked as a tool to perform the abstract idea. Simply implementing an abstract idea on a computer is not a practical application of the abstract idea. It is notable that mere physicality or tangibility of an additional element or elements is not a relevant consideration in Step 2A Prong Two. As the Supreme Court explained in Alice Corp., mere physical or tangible implementation of an exception does not guarantee eligibility. Alice Corp. Pty. Ltd. v. CLS Bank Int’l, 573 U.S. 208, 224, 110 USPQ2d 1976, 1983-84 (2014) (“The fact that a computer ‘necessarily exist[s] in the physical, rather than purely conceptual, realm,’ is beside the point”). See also Genetic Technologies Ltd. v. Merial LLC, 818 F.3d 1369, 1377, 118 USPQ2d 1541, 1547 (Fed. Cir. 2016) (steps of DNA amplification and analysis are not “sufficient” to render claim 1 patent eligible merely because they are physical steps). Conversely, the presence of a non-physical or intangible additional element does not doom the claims, because tangibility is not necessary for eligibility under the Alice/Mayo test. Enfish, LLC v. Microsoft Corp., 822 F.3d 1327, 118 USPQ2d 1684 (Fed. Cir. 2016) (“that the improvement is not defined by reference to ‘physical’ components does not doom the claims”). See also McRO, Inc. v. Bandai Namco Games Am. Inc., 837 F.3d 1299, 1315, 120 USPQ2d 1091, 1102 (Fed. Cir. 2016) (holding that a process producing an intangible result (a sequence of synchronized, animated characters) was eligible because it improved an existing technological process). Furthermore, the dependent claims are merely directed to the particulars of the abstract idea and likewise do not add significantly more to the above-identified judicial exception. The limitations of the claims do not transform the abstract idea that they recite into patent-eligible subject matter because the claims simply instruct the practitioner to implement the abstract idea using generally-recited computer components, and furthermore do not amount to an improvement to a computer or any other technology, and thus are ineligible. See MPEP § 2106.05(f), (h).
Step 2B: As explained in MPEP § 2106.05, Claims 1-20 do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements, when considered both individually and as an ordered combination, do not amount to significantly more than the abstract idea nor recite additional elements that integrate the judicial exception into a practical application. The additional elements of “one or more computers; storage devices; non-transitory computer readable medium; one/two or more machine learning model(s)”, etc. are generically-recited computer-related elements that amount to a mere instruction to “apply it” (the abstract idea) on the computer-related elements (see MPEP § 2106.05(f) – Mere Instructions to Apply an Exception). These additional elements in the claims are recited at a high level of generality and are merely limiting the field of use of the judicial exception (see MPEP § 2106.05(h) – Field of Use and Technological Environment). There is no indication that the combination of elements improves the function of a computer or improves any other technology.
Examiner interprets that the steps of the claimed invention, both individually and as an ordered combination, result in Mere Instructions to Apply a Judicial Exception (see MPEP § 2106.05(f)). These claims recite only the idea of a solution or outcome, with no restriction on how the result is accomplished and no description of the mechanism used for accomplishing the result. Here, the claims utilize a computer or other machinery (e.g., see Applicants’ published Specification ¶¶ 2-14, 27-39, describing existing computer processors as well as program products comprising machine-readable media for carrying or having machine-executable instructions or data structures stored thereon, and “model training system 102”) in its ordinary capacity for performing tasks (e.g., to receive, analyze, transmit and display data), and/or use computer components after the fact to apply an abstract idea (e.g., a fundamental economic practice and certain methods of organizing human activity), and do not provide significantly more. See Affinity Labs v. DirecTV, 838 F.3d 1253, 1262, 120 USPQ2d 1201, 1207 (Fed. Cir. 2016). Software implementations are accomplished with standard programming techniques, with logic to perform connection steps, processing steps, comparison steps, and decision steps. These claims are directed to a commonplace business method being applied on a general-purpose computer (see Alice Corp. Pty. Ltd. v. CLS Bank Int’l, 134 S. Ct. 2347, 2357, 110 USPQ2d 1976, 1983 (2014); Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015)) and require the use of software, such as via a server, to tailor information and provide it to the user on a generic computer. 
Based on the above, the Examiner finds that, when viewed either individually or in combination, these additional claim element(s) do not provide meaningful limitation(s) that rise to the level of transforming the abstract idea(s) into a patent-eligible application of the abstract idea(s) such that the claim(s) amount to significantly more than the abstract idea(s) itself. Accordingly, Claims 1-20 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without significantly more.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Broadly recited Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Molero Leon et al. (Molero) (US 2023/0377747 A1).
With regard to Claims 1, 9, and 14, Molero teaches a system/computer-implemented method/non-transitory computer storage medium encoded with instructions that, when executed by the one or more computers (process 1000) (see at least paragraphs 315-345), cause the one or more computers to perform operations comprising:
receiving, by one or more computers and from a plurality of data sources (sensor data collected), input data that includes, for each of multiple records, a) a plurality of parameters (parameters (e.g., that correspond to rate constants, dynamics, tissue volumes, etc.) may depend on characteristics of a subject (e.g., age, weight, sex, type of hemophilia, comorbidity, etc.). In some instances, pertinent subject characteristics can be rather practically and precisely determined. In some instances, at least some of the subject characteristics are unavailable. A model may infer missing subject information and/or may use a framework (e.g., a Bayesian framework) to estimate parameter dependency based on population-level data), and b) values for at least some of the parameters (In some embodiments, a method is provided that includes receiving a subject-specific data set corresponding to a subject, the subject-specific data set including or identifying: a type of hemophilia; treatment type; demographic data; and/or a photograph of a part of the particular subject or information derived based on a photograph of a part of the particular subject. At least part of the subject-specific data set is processed using a classifier model to identify one or more population-level machine-learning models from among a set of population-level machine-learning models. Each of the set of population-level machine-learning models includes a machine-learning model trained using a training set corresponding to a set of other subjects with hemophilia. One or more indications are received that identify one or more times at which a treatment of the treatment type was administered to the subject. A hemophilia-pertinent time course is predicted for the subject using the one or more times and a data-processing workflow using a population-level machine-learning model of the one or more population-level machine-learning models. 
A representation of sensor data collected at a device associated with the subject is received. A transformed data processing workflow is determined (based on the representation of sensor data) that generates hemophilia-pertinent predictions for the subject. A hemophilia-pertinent prediction is generated for the subject using the transformed data-processing workflow. A result corresponding to the hemophilia-pertinent prediction is output) (see at least paragraphs 10-30, 215);
determining, by the one or more computers and for the plurality of parameters (With respect to any of these types of pharmacokinetic models, the parameters (e.g., that correspond to rate constants, dynamics, tissue volumes, etc.) may depend on characteristics of a subject (e.g., age, weight, sex, type of hemophilia, comorbidity, etc.). In some instances, pertinent subject characteristics can be rather practically and precisely determined. In some instances, at least some of the subject characteristics are unavailable. A model may infer missing subject information and/or may use a framework (e.g., a Bayesian framework) to estimate parameter dependency based on population-level data. In some instances, Markov Chain Monte Carlo simulations can be used to estimate posterior distributions of population parameters and subject-specific parameters, and a covariate model can identify systematic variability explainable by measurable subject characteristics (e.g., age, height, disease type)), if characteristics of a corresponding parameter in the multiple records meet one or more machine learning modeling thresholds (A Generator network can identify a transformation of the distribution along one or more dimensions. The transformation can be defined at least in part based on subject-specific characteristics (e.g., weight, age) and/or data (e.g., one or more subject-associated rate constants, dynamic variables, etc.). A sampling technique (e.g., Monte Carlo technique) can sample from the transformed distribution, and a Discriminator network can predict whether the sample(s) correspond to the population or the subject. Accuracy of the predictions can be fed back to the Generator network until a threshold accuracy is obtained or a threshold number of iterations have occurred. 
The transformation can then be used to estimate subject-specific metrics (and/or uni- or multi-dimensional distributions thereof) that represent pharmacokinetics corresponding to an individual subject. This approach can facilitate using a limited and/or small number of subject-specific variable values to generate a subject-specific distribution that may more fully represent biological activity. A sampling technique (e.g., Monte-Carlo technique) may sample from the distribution to generate data to use to train another model (e.g., a pharmacokinetic model or neural network)) (see at least paragraphs 10-30, 209-220);
transforming, by the one or more computers and for at least one of the parameters i) that does not meet at least one of the one or more machine learning modeling thresholds (If a threshold crossing or other condition satisfaction indicates that it was sufficiently unlikely that particular sensor data (or a processed version thereof and/or representation thereof) would be observed within a given underlying data distribution, it may be determined at block 930 to discontinue (e.g., permanently discontinue or temporarily discontinue) use of the workflow and/or to transition to a new or modified workflow for hemophilia-pertinent predictions) and ii) has a first parameter type, the corresponding parameter to a second parameter with a second, different parameter type that meets (a predicted factor level may be visually presented, a notification of satisfaction of a warning condition (e.g., indicating that a predicted value will fall below a threshold prior to a next scheduled treatment or that a rate of change of a predicted value exceeds a change threshold) may be presented via an audible or haptic stimuli) the one or more propensity modeling thresholds (the feedforward network additionally receives input indicating when treatments were administered to the subject (e.g., and the dosage that was administered), and the feedforward neural network may output a prediction of a time at which a clotting propensity (e.g., corresponding to a clotting time in an assay) falls below or reaches a particular threshold, a predicted current clotting propensity, a predicted current level or activity of an active ingredient, and/or a predicted time at which a level or activity of an active ingredient falls below or reaches a particular threshold. In such instances, the feedforward neural network may have been trained to generate predictions for a particular type of treatment or an additional input may identify the treatment type) (see at least paragraphs 10, 173, 209-224, 320);
generating, by the one or more computers and using each of two or more trained machine learning models, actual output data by providing training (machine-learning trained using a training set) data from the input data to the respective machine learning model (hemophilia app 717 processes sensor data and other subject data to predict one or more relationships between sensor data (or a processed version thereof) and hemophilia-related incidences. The processing may include performing a multi-dimensional analysis or may use a machine-learning model to predict what, if any, types of exercise intensities or user exertion affect hemostasis (e.g., as indicated by whether bleeding events were normal or abnormal and/or whether spontaneous bleeding occurred). For example, it may be determined that movement and/or exertion characteristic of high-intensity exercise transiently increases a probability of a bleeding event (though the magnitude and/or duration of such increase may be subject-specific). The assessment may further account for a time since a last treatment and/or recent predicted active-ingredient level, clotting propensity, etc. For example, the assessment may predict how exercise intensity and/or user exertion transiently affects one or more time constants of a pharmacokinetic model. As another example, the assessment may predict how an output of a pharmacokinetic model is to be post-processed (e.g., transiently post-processed) to a transient change in hemostasis. The post-processing may include (for example) multiplying a result (e.g., risk of abnormal bleeding, predicted clotting time, recommended time interval at which next treatment is to be received, etc.) by a value, adding or subtracting an amount to/from a result and/or transforming a result using a non-linear function. In some instances, an effect of the post-processing is limited to predictions during which the post-processing is performed. 
In some instances, a model implements an iterative approach, whereby processing for successive time points depend on values from previous time points. Thus, post-processing may have a long-lasting effect) (see at least paragraphs 10-30, 215, 231-235);
determining (to determine a proposed treatment for a subject with a condition), by the one or more computers and for each of the two or more machine learning models/plurality of parameters (A model may infer missing subject information and/or may use a framework (e.g., a Bayesian framework) to estimate parameter dependency based on population-level data. In some instances, Markov Chain Monte Carlo simulations can be used to estimate posterior distributions of population parameters and subject-specific parameters, and a covariate model can identify systematic variability explainable by measurable subject characteristics (e.g., age, height, disease type)), a first value (parameters; value; metric(s); numerical or non-numerical values) that indicates a degree to which the respective machine learning model (An array representation (e.g., a transformed representation, such as a vector, an N-dimensional matrix, or any numerical representation of a non-numerical value) may be any numerical and/or categorical representation of the values of data fields of a subject record. For example, an array representation of a subject record may be a vector representation of the subject record in a domain space, such as in a Euclidean space. In some instances, cloud server 135 may be configured to transform an entire subject record into a numerical representation, such as a vector. For a given subject record, cloud server 135 may evaluate each data element to determine the type of data contained or included in that data element. The type of data may inform the cloud server 135 as to which process or technique to perform to transform the numerical or non-numerical values of that data element into a numerical representation) is correct using respective actual output data and expected output data (Accuracy of the predictions can be fed back to the Generator network until a threshold accuracy is obtained or a threshold number of iterations have occurred. 
The transformation can then be used to estimate subject-specific metrics (and/or uni- or multi-dimensional distributions thereof) that represent pharmacokinetics corresponding to an individual subject. This approach can facilitate using a limited and/or small number of subject-specific variable values to generate a subject-specific distribution that may more fully represent biological activity. A sampling technique (e.g., Monte-Carlo technique) may sample from the distribution to generate data to use to train another model (e.g., a pharmacokinetic model or neural network)), wherein the first value that indicates a degree to which the respective output data is correct (accuracy; Manifestation data 773 is an example of empirical result data that may be received by, availed to and/or stored at central artificial-intelligence system 750. Manifestation data 773 (or other empirical result data) may be used to assess accuracy of one or more population-level models, subject-specific models, population-level workflows and/or subject-specific workflows. Manifestation data 773 (or other empirical result data) may be monitored to determine whether to initiate re-training of an AI model, selecting a different AI model, adjusting pre- and/or post-processing functions used for a given subject, etc. Manifestations data 773 (or other empirical result data) may further indicate an accuracy of various model predictions, which may influence whether such models are subsequently used and/or retrained) (see at least paragraphs 147, 209-224, 265-271, 282-283);
determining (cloud server 135 may evaluate each data element to determine the type of data contained or included in that data element. The type of data may inform the cloud server 135 as to which process or technique to perform to transform the numerical or non-numerical values of that data element into a numerical representation), by the one or more computers and for each of the two or more machine learning models, a second value (cloud server 135 may transform non-numerical values (e.g., the text of a physician's notes) of a data element of a subject record into a numerical representation (e.g., a vector). The transformation may include using natural language processing techniques, such as Word2Vec or other text vectorization techniques, to generate a numerical value that represents each word of text. The generated numerical value may serve as a vector that can be inputted into a trained neural network to perform intelligent analysis) that indicates a degree to which the respective machine learning model is correct (accuracy) using testing data from the input data and expected output data, (The sensor data and/or an inference made based on the sensor data may be used to (for example) select and/or configure a machine-learning model, select and/or configure a pre-processing function, and/or select and/or configure a post-processing function. For example, a first set of rate constants can be defined for a pharmacokinetic model to be used when it is inferred that a user is stationary; a second set of rate constants can be defined for the pharmacokinetic model to be used when it is inferred that a user is participating in a low-intensity activity; and a third set of rate constants can be defined for the pharmacokinetic model to be used when it is inferred that a user is participating a high-intensity activity. In some instances, the first, second and third sets of rate constants may have been separately learned using different training sets. 
In some instances, one of the first, second and third sets of rate constants may be learned using a training data set, and each rate constant may be adjusted by a corresponding absolute or relative amount to determine a corresponding rate constant for another of the second or third set of rate constants. Dynamic selections of rate-constant sets can then be made, with smooth transitions being facilitated by availing and/or sharing state variables and/or other interim variables), wherein the second value indicates a repeatability of generating respective actual output data (Manifestation data 773 is an example of empirical result data that may be received by, availed to and/or stored at central artificial-intelligence system 750. Manifestation data 773 (or other empirical result data) may be used to assess accuracy of one or more population-level models, subject-specific models, population-level workflows and/or subject-specific workflows. Manifestation data 773 (or other empirical result data) may be monitored to determine whether to initiate re-training of an AI model, selecting a different AI model, adjusting pre- and/or post-processing functions used for a given subject, etc. Manifestations data 773 (or other empirical result data) may further indicate an accuracy of various model predictions, which may influence whether such models are subsequently used and/or retrained) (see at least paragraphs 48, 147, 209-224, 265-271, 282-283);
selecting, by the one or more computers, a machine learning model from the two or more machine learning models using the second parameter that meets the one or more machine learning modeling thresholds (one or more models 718 include a machine-learning model and/or pharmacokinetic model that predicts (for example) a concentration of an active agent at one or more time points, factor activity levels at one or more times, a clotting time or clotting propensity at one or more times, a time at which a clotting propensity falls below or reaches a threshold, and/or a time at which an active-agent concentration falls below or reaches a threshold), (a) the first value that indicates the degree (based on a degree; update the rule when an update condition has been satisfied) to which the respective machine learning model is correct given the respective actual output data and the expected output data (cloud server 135 may update the rule when an update condition has been satisfied. An update condition may be a threshold value. For example, the threshold value may be a number or percentage of external entities that have integrated a modified version of the rule into their custom rule bases. 
As another example, the update condition may be determined using an output of a trained machine-learning model), and, (b) the second value that indicates the degree (The result of the comparison (e.g., in a domain space, such as a Euclidean space) between two numerical representations may indicate a degree to which the text included in the target data element is similar to the text included in the data element of another subject record) to which the respective machine learning model is correct given the testing data from the input data and the expected output data (Weights used to calculate the similarity metric may be determined (for example) based on a degree to which the attribute was related to prediction accuracy in test data (e.g., such that higher weights are assigned when differences between attribute values of accurate predictions and attribute values of inaccurate predictions were larger and/or more significant), a degree to which the attribute is unique across a population of hemophilia subjects (e.g., such that higher weights are assigned when a subject attribute is more unique), and/or a variability of the attribute in training data (e.g., such that higher weights are assigned when there is lower variability of the attribute); In some instances, model use data 772 indicates which subjects (and/or attributes thereof) are using a model, and population-level training code 766 and/or subject-specific adjustment code 769 may further train a population-level AI model and/or subject-level AI model to improve accuracy for similar subjects and/or for other subjects that are not currently using the model; In some instances, hemophilia app 717 processes sensor data to select a model that is to be used to generate hemophilia-related predictions for the subject. 
The different model selection may include selecting a model trained using different data, trained using a different loss function and/or objective function, having different fixed hyperparameters, and/or having a different architecture. For example, a default model selected by hemophilia app 717 for the subject may include a model that prioritizes accurately predicting levels of an active treatment agent. Meanwhile, upon inferring that a user has engaged in high-intensity activity, hemophilia app 717 may transition to a model that prioritizes accurately predicting occurrence of abnormal bleeding events. Transitioning between models may be facilitated by defining dynamic variable correspondences between the models. For example, each of multiple models may be configured to receive a predicted active-ingredient level (e.g., which may have been generated via processing of a previous time step). Other input may be objective and/or fixed (e.g., physical and/or demographic attributes of a subject and/or variables based on or including sensor variables). Thus, when switching from one model to another, input variables for the other model may be readily available. In some instances, post-processing is implemented to further smooth and/or filter predictions generated by the two models) (see at least paragraphs 147, 158, 173, 209-224, 265-271, 282);
providing, by the one or more computers and to a system, the selected machine learning model to cause the system to generate a recommendation using the selected machine learning model (In some instances, hemophilia app 717 processes sensor data and other subject data to predict one or more relationships between sensor data (or a processed version thereof) and hemophilia-related incidences. The processing may include performing a multi-dimensional analysis or may use a machine-learning model to predict what, if any, types of exercise intensities or user exertion affect hemostasis (e.g., as indicated by whether bleeding events were normal or abnormal and/or whether spontaneous bleeding occurred). For example, it may be determined that movement and/or exertion characteristic of high-intensity exercise transiently increases a probability of a bleeding event (though the magnitude and/or duration of such increase may be subject-specific). The assessment may further account for a time since a last treatment and/or recent predicted active-ingredient level, clotting propensity, etc. For example, the assessment may predict how exercise intensity and/or user exertion transiently affects one or more time constants of a pharmacokinetic model. As another example, the assessment may predict how an output of a pharmacokinetic model is to be post-processed (e.g., transiently post-processed) to a transient change in hemostasis. The post-processing may include (for example) multiplying a result (e.g., risk of abnormal bleeding, predicted clotting time, recommended time interval at which next treatment is to be received, etc.) by a value, adding or subtracting an amount to/from a result and/or transforming a result using a non-linear function. In some instances, an effect of the post-processing is limited to predictions during which the post-processing is performed. 
In some instances, a model implements an iterative approach, whereby processing for successive time points depend on values from previous time points. Thus, post-processing may have a long-lasting effect) (see at least paragraphs 201, 202, 231, 283).
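For illustration only, the model-selection logic recited in the independent claims as mapped above (a first value measuring a model's correctness against expected output on training data, a second value measuring repeatability/correctness on testing data, and selection among two or more candidate models using both values) can be sketched as follows. This sketch is not part of the record; all function names, data, and the difference threshold are hypothetical assumptions, not limitations quoted from the instant claims or from Molero.

```python
# Hypothetical sketch of the claimed selection among trained models.
# "first_value" and "second_value" mirror the claimed training-data and
# testing-data correctness values; the threshold is an illustrative choice.

def accuracy(model, data):
    """Fraction of (input, expected_output) pairs the model predicts correctly."""
    correct = sum(model(x) == y for x, y in data)
    return correct / len(data)

def select_model(models, train_data, test_data, diff_threshold=0.05):
    """Select the model whose training and testing correctness values agree
    (difference within a threshold) and whose testing value is highest."""
    best, best_score = None, -1.0
    for model in models:
        first_value = accuracy(model, train_data)   # correctness on training data
        second_value = accuracy(model, test_data)   # repeatability on testing data
        if abs(first_value - second_value) > diff_threshold:
            continue  # screen out candidates whose two values diverge
        if second_value > best_score:
            best, best_score = model, second_value
    return best
```

Under these assumptions, a model that predicts held-out testing data as accurately as its training data would be selected over one that does not, which is one plausible reading of the first-value/second-value comparison in the claims.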
With regard to Claims 2, 15, Molero teaches:
determining, for each of the two or more machine learning models, a difference between the respective first value and the respective second value (accuracy) (see at least paragraphs 147, 210-224, 232, 265-271, 282);
determining, for each of the two or more machine learning models, if the respective difference meets a difference threshold (see at least paragraphs 210-224, 265-271, 282);
selecting, from the two or more machine learning models, the propensity model using a result of the determination of whether the respective differences meet the difference threshold (see at least paragraphs 210-224, 265-271, 282).
With regard to Claims 3, 16, Molero teaches wherein selecting the machine learning model comprises selecting, from the two or more machine learning models, a propensity model that has a respective difference that meets the difference threshold (see at least paragraphs 210-224, 265-271, 282).
With regard to Claims 4, 17, Molero teaches wherein selecting the machine learning model comprises selecting, from the two or more machine learning models, a machine learning model that a) has a respective difference that meets the difference threshold and b) has a first value that meets a value threshold (see at least paragraphs 210-224, 265-271, 282).
With regard to Claims 5, 18, Molero teaches wherein selecting the machine learning model comprises selecting, from the two or more machine learning models, a machine learning model that has a first value that meets a value threshold (see at least paragraphs 210-224, 265-271, 282).
With regard to Claims 6, 19, Molero teaches wherein the testing data comprises different data from the input data (see at least paragraphs 210-224, 265-271, 282).
With regard to Claims 7, 20, Molero teaches:
receiving the input data comprises receiving input data that includes one or more parameter types (see at least paragraphs 210-224, 265-271, 282);
providing the selected machine learning model comprises providing the selected machine learning model to enable the system to generate a recommendation using the selected machine learning model and second input data that includes values for at least some of the one or more parameter types (see at least paragraphs 210-224, 265-271, 282).
With regard to Claim 8, Molero teaches:
determining, for at least some of the one or more parameter types, if a percentage of the multiple records that include a corresponding value for the corresponding parameter type meets a percentage threshold (see at least paragraphs 210-224, 265-271, 282);
determining, for at least some of the one or more parameter types, if the parameter type meets a machine learning modeling criterion (see at least paragraphs 210-224, 265-271, 282);
using a result of whether the parameter type meets a machine learning modeling criterion, selectively determining, for at least some pairs of parameter types from the one or more parameter types, if a corresponding pair of parameters is correlated (see at least paragraphs 210-224, 265-271, 282).
With regard to Claim 10, Molero teaches:
determining, for at least some of the plurality of parameters, whether a percentage of the multiple records that include a corresponding value for the corresponding parameter satisfies a percentage threshold (see at least paragraphs 210-224, 265-271, 282);
determining, for at least some of the plurality of parameters, whether a type of the corresponding parameter can be used for machine learning modeling (see at least paragraphs 210-224, 265-271, 282);
determining, for at least some pairs of parameters from the plurality of parameters, whether a corresponding pair of parameters is correlated (see at least paragraphs 210-224, 265-271, 282).
With regard to Claim 11, Molero teaches:
transforming, for at least one of the parameters i) that does not satisfy at least one of the one or more machine learning modeling thresholds and ii) has a first parameter type, the corresponding parameter to a second parameter with a second, different parameter type that satisfies the one or more machine learning modeling thresholds (see at least paragraphs 210-224, 265-271, 282).
With regard to Claim 12, Molero teaches:
receiving, from the plurality of data sources, second input data that includes, for each of multiple second records, a) a second plurality of parameters, and b) second values for at least some of the second parameters (see at least paragraphs 210-224, 265-271, 282);
determining, for the second plurality of parameters, whether characteristics of the corresponding second parameter in the multiple second records satisfy the one or more machine learning modeling thresholds (see at least paragraphs 210-224, 265-271, 282);
in response to determining that the characteristics of at least some of the second plurality of parameters do not satisfy the one or more machine learning modeling thresholds, selecting a collaborative filtering model (see at least paragraphs 166, 210-224, 265-271, 282);
providing, to another system, the collaborative filtering model to enable the other system to generate a second recommendation using the collaborative filtering model and third input data that includes the second plurality of parameters and corresponding values for at least some of the second plurality of parameters (see at least paragraphs 166, 210-224, 265-271, 282).
With regard to Claim 13, Molero teaches:
receiving the input data comprises receiving input data that includes the plurality of parameters, each parameter of which has a corresponding parameter type (see at least paragraphs 210-224, 265-271, 282);
selecting the machine learning model comprises selecting, from the two or more machine learning models, the propensity model that is mapped to the parameter types for which the input data has corresponding values (see at least paragraphs 210-224, 265-271, 282).
Conclusion
The prior art made of record and not relied upon is considered pertinent to Applicant's disclosure:
Newman et al. (US 11,144,938)
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to THOMAS L MANSFIELD whose telephone number is (571)270-1904. The examiner can normally be reached Monday-Thursday and alternate Fridays, 9:00 a.m.-6:00 p.m.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Patricia Munson can be reached at (571) 270-5396. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
THOMAS L. MANSFIELD
Examiner
Art Unit 3623
/THOMAS L MANSFIELD/Primary Examiner, Art Unit 3624