Prosecution Insights
Last updated: April 19, 2026
Application No. 17/653,077

SYSTEM FOR CONTROL AND ANALYSIS OF GAS FERMENTATION PROCESSES

Non-Final OA: §101, §103, §112
Filed: Mar 01, 2022
Examiner: ANDERSON-FEARS, KEENAN NEIL
Art Unit: 1687
Tech Center: 1600 — Biotechnology & Organic Chemistry
Assignee: LanzaTech Inc.
OA Round: 1 (Non-Final)
Grant Probability: 6% (At Risk)
Predicted OA Rounds: 1-2
Predicted Time to Grant: 5y 1m
Grant Probability With Interview: 56%

Examiner Intelligence

Career Allow Rate: 6% (1 granted / 16 resolved; -53.7% vs TC avg)
Interview Lift: +50.0% among resolved cases with an interview vs. without
Avg Prosecution: 5y 1m (45 applications currently pending)
Total Applications: 61 (across all art units)
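
For readers unfamiliar with these analytics, the sketch below shows how the headline figures above are typically derived from resolved-case records. It is illustrative only: the record structure and the data are hypothetical, chosen to reproduce the reported 6% career allow rate (1 of 16 resolved) and the +50.0% interview lift; it is not the analytics vendor's actual methodology.

```python
# Hypothetical sketch of the headline examiner metrics (not the vendor's code).
from dataclasses import dataclass

@dataclass
class ResolvedCase:
    allowed: bool       # resolved as granted (True) or abandoned (False)
    interviewed: bool   # at least one examiner interview of record

def allow_rate(cases):
    """Fraction of resolved cases that were allowed."""
    return sum(c.allowed for c in cases) / len(cases) if cases else 0.0

def interview_lift(cases):
    """Allowance-rate difference: cases with an interview minus cases without."""
    with_iv = [c for c in cases if c.interviewed]
    without_iv = [c for c in cases if not c.interviewed]
    return allow_rate(with_iv) - allow_rate(without_iv)

# Hypothetical data: 16 resolved cases, 1 allowed, 2 with interviews.
cases = [ResolvedCase(allowed=(i == 0), interviewed=(i < 2)) for i in range(16)]
print(f"Career allow rate: {allow_rate(cases):.0%}")    # -> 6%
print(f"Interview lift:    {interview_lift(cases):+.1%}")  # -> +50.0%
```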

Statute-Specific Performance

§101: 32.6% (-7.4% vs TC avg)
§103: 33.2% (-6.8% vs TC avg)
§102: 12.7% (-27.3% vs TC avg)
§112: 15.2% (-24.8% vs TC avg)
Tech Center averages are estimates, based on career data from 16 resolved cases.

Office Action

§101 §103 §112
DETAILED ACTION Notice of Pre-AIA or AIA Status The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA . Information Disclosure Statement The information disclosure statement (IDS) submitted on 03/01/2022 and 01/24/2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner. Claim Status Claims 1-23 are pending. Claims 1-23 are rejected. Priority The instant application claims benefit of priority to Provisional Application 63/15,241. As such, the effective filing date of claims 1-23 is 3/3/2021. Claim Rejections - 35 USC § 112 The following is a quotation of 35 U.S.C. 112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention. The following is a quotation of 35 U.S.C. 112 (pre-AIA ), second paragraph: The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention. Claims 2-3, 11 and 13-16 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA ), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Claim 2 recites the limitation “the at least one RNN layer of the second deep learning neural network” in line 15. There is insufficient antecedent basis for this limitation in the claim. Claim 3 is dependent from claim 2 but does not rectify the lack of antecedent basis and as such is also rejected. Claim 11 recites the limitation " the metabolite production " in line 2. There is insufficient antecedent basis for this limitation in the claim. It appears the claim should have been dependent from claim 10 and amending the claim to be dependent from claim 10 would overcome the rejection. Claim 11 recites the limitation "the window of future control decisions" in lines 3 and 4. There is insufficient antecedent basis for this limitation in the claim. It appears the claim should have been dependent from claim 10 and amending the claim to be dependent from claim 10 would overcome the rejection. Claim 11 recites the limitation " the deep learning neural network " in line 5, but claim 9 contains 2 DNNs so it is unclear which DNN is referenced. There is insufficient antecedent basis for this limitation in the claim. It appears the claim should have been dependent from claim 10 and amending the claim to be dependent from claim 10 would overcome the rejection. Claim 12 recites the limitation “the deep learning neural network” in line 2, but claim 9 contains 2 DNNs so it is unclear which DNN is referenced. There is insufficient antecedent basis for this limitation in the claim . Claim 13 recites the limitation " the future control decisions" in line 1. There is insufficient antecedent basis for this limitation in the claim. It appears the claim should have been dependent from claim 10 and amending the claim to be dependent from claim 10 would overcome the rejection. Claim 13 recites the limitation "the window of future control decisions" in line 1. There is insufficient antecedent basis for this limitation in the claim. 
It appears the claim should have been dependent from claim 10 and amending the claim to be dependent from claim 10 would overcome the rejection. Claim 14 recites the limitation "the window of future control decisions" in line 1. There is insufficient antecedent basis for this limitation in the claim. It appears the claim should have been dependent from claim 10 and amending the claim to be dependent from claim 10 would overcome the rejection. Claim 15 recites the limitation "the window of future control decisions" in line 2. There is insufficient antecedent basis for this limitation in the claim. It appears the claim should have been dependent from claim 10 and amending the claim to be dependent from claim 10 would overcome the rejection. Claim 16 recites the limitation "the guidance" in line 1. There is insufficient antecedent basis for this limitation in the claim. It appears the claim should have been dependent from claim 15 and amending the claim to be dependent from claim 15 would overcome the rejection. Claim Rejections - 35 USC § 101 35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title. Claims 1, 5-20, and 22-23 are rejected under 35 U.S.C. 101 because the claimed invention is directed to abstract ideas without significantly more. The claims recite methods for determining a fermentation state using a neural network and bioreactor data. The judicial exception is not integrated into a practical application because, while claims 1, 5-20 and 22-23 attempt to integrate the exception into a practical application, the additional elements are either generically recited computer elements that do not add a meaningful limitation to the abstract idea or insignificant extra solution activity that merely implements the abstract idea on a computer. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the computer elements only store and retrieve information in memory as well as perform basic calculations that are known to be well-understood, routine and conventional computer functions as recognized by the decisions listed in MPEP § 2106.05(d). Framework with which to Analyze Subject Matter Eligibility: Step 1: Are the claims directed to a category of statutory subject matter (a process, machine, manufacture, or composition of matter)? [see MPEP § 2106.03] Claims are directed to statutory subject matter, specifically a method (Claims 1, 5-20, and 22-23). Step 2A Prong One: Do the claims recite a judicially recognized exception, i.e., an abstract idea, a law of nature, or a natural phenomenon? [see MPEP § 2106.04(a)] The claims herein recite abstract ideas, specifically mental processes and mathematical concepts. With respect to the Step 2A Prong One evaluation, the instant claims are found herein to recite abstract ideas that fall into the grouping of mental processes and mathematical concepts. Claim 1: Generating a first embedding, generating a second embedding, determining a distance between the first and second embedding, and determining a state based on the distance are processes of interpreting, translating, calculating, and identifying that can be performed via pen and paper or within the human mind and are therefore abstract ideas, specifically mental processes. 
The second embedding being based on the second historical fermentation data associated with a known state is merely further limiting the data itself which is an abstract idea, specifically a mental process. Claim 5: The distance being an element-wise difference between the first and second embeddings is a verbal articulation of a mathematical process and therefore an abstract idea, specifically a mathematical concept. Claim 6: The first historical data comprising a first window of time-series data, and the second historical data comprising a second window of time-series data are merely further limiting the data itself which are abstract ideas, specifically mental processes. Claim 7: The second historical data comprising a plurality of time-series data associated with a known state is merely further limiting the data itself which is an abstract idea, specifically a mental process. Determining an embedding for each fermentation state, and determining a known state that has the smallest distance from the first embedding are processes of interpreting, translating, calculating, and identifying that can be performed via pen and paper or within the human mind and are therefore abstract ideas, specifically mental processes. Claim 8: The known state comprising one or more from the list provided is merely further limiting the data itself which is an abstract idea, specifically a mental process. Claim 10: Predicting future metabolite production is a process of interpreting, calculating, and identifying that can be performed via pen and paper or within the human mind and is therefore an abstract idea, specifically a mental process. Claim 11: The metabolite production comprising acetate and ethanol production is merely further limiting the data itself which is an abstract idea, specifically a mental process. Determining actual ethanol and acetate production during the window of future control decisions is a process of calculating that can be performed via pen and paper or within the human mind and is therefore an abstract idea, specifically a mental process. Claim 13: The future control decisions comprising one or more changes to the specified changes is merely further limiting the data itself which is an abstract idea, specifically a mental process. Claim 14: The window of future control decisions comprising no changes to the fermentation process is merely further limiting the data itself which is an abstract idea, specifically a mental process. Claim 17: Determining probabilities for each of the fermentation states, determining a known fermentation state with the highest probability, and assigning a known state having the highest probability are processes of calculating and identifying that can be performed via pen and paper or within the human mind and are therefore abstract ideas, specifically mental processes. Claim 18: The historical fermentation data comprising time-series data is merely further limiting the data itself which is an abstract idea, specifically a mental process. Claim 19: Training the DNN to define a regression function is a verbal articulation of a mathematical process and is therefore an abstract idea, specifically a mathematical concept. Claim 23: The fermentation state comprising one or more of the specified states is merely further limiting the data itself which is an abstract idea, specifically a mental process. Step 2A Prong Two: If the claims recite a judicial exception under prong one, then is the judicial exception integrated into a practical application? 
[see MPEP § 2106.04(d) and MPEP § 2106.05(a)-(c) & (e)-(h)] Because the claims do recite judicial exceptions, direction under Step 2A Prong Two provides that the claims must be examined further to determine whether they integrate the abstract ideas into a practical application. The following claims recite the following additional elements in the form of non-abstract elements: Claim 1: Receiving historical fermentation data, and inputting the historical fermentation data into a DNN are insignificant extra solution activities, specifically necessary data gathering (See Mayo, 566 U.S. at 79, 101 USPQ2d at 1968; OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1092-93 (Fed. Cir. 2015) (presenting offers and gathering statistics amounted to mere data gathering), Performing clinical tests on individuals to obtain input for an equation, In re Grams, 888 F.2d 835, 839-40; 12 USPQ2d 1824, 1827-28 (Fed. Cir. 1989) and Determining the level of a biomarker in blood, Mayo, 566 U.S. at 79, 101 USPQ2d at 1968. See also PerkinElmer, Inc. v. Intema Ltd., 496 Fed. App'x 65, 73, 105 USPQ2d 1960, 1966 (Fed. Cir. 2012) (assessing or measuring data derived from an ultrasound scan, to be used in a diagnosis)) [See MPEP § 2106.05(g)]. The first and second deep learning neural networks are mere instructions to apply the judicial exception in a computer environment (See Alice Corp., 573 U.S. at 225-26, 110 USPQ2d at 1984) [See MPEP § 2106.05(a)]. Claim 4: An output layer connected to at least one LSTM layer are generically recited elements of computers that do not improve upon the functioning of any computer herein [See MPEP § 2106.05(d)(I) & (II)]. Receiving an output from the LSTM layer is an insignificant extra solution activity, specifically necessary data outputting (See Mayo, 566 U.S. at 79, 101 USPQ2d at 1968; OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1092-93 (Fed. Cir. 2015) (presenting offers and gathering statistics amounted to mere data gathering), Performing clinical tests on individuals to obtain input for an equation, In re Grams, 888 F.2d 835, 839-40; 12 USPQ2d 1824, 1827-28 (Fed. Cir. 1989) and Determining the level of a biomarker in blood, Mayo, 566 U.S. at 79, 101 USPQ2d at 1968. See also PerkinElmer, Inc. v. Intema Ltd., 496 Fed. App'x 65, 73, 105 USPQ2d 1960, 1966 (Fed. Cir. 2012) (assessing or measuring data derived from an ultrasound scan, to be used in a diagnosis)) [See MPEP § 2106.05(g)]. Claim 6: Inputting the first window of time-series data into the DNN, and inputting the second window of time-series data into the second DNN are insignificant extra solution activities, specifically necessary data gathering (See Mayo, 566 U.S. at 79, 101 USPQ2d at 1968; OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1092-93 (Fed. Cir. 2015) (presenting offers and gathering statistics amounted to mere data gathering), Performing clinical tests on individuals to obtain input for an equation, In re Grams, 888 F.2d 835, 839-40; 12 USPQ2d 1824, 1827-28 (Fed. Cir. 1989) and Determining the level of a biomarker in blood, Mayo, 566 U.S. at 79, 101 USPQ2d at 1968. See also PerkinElmer, Inc. v. Intema Ltd., 496 Fed. App'x 65, 73, 105 USPQ2d 1960, 1966 (Fed. Cir. 2012) (assessing or measuring data derived from an ultrasound scan, to be used in a diagnosis)) [See MPEP § 2106.05(g)]. 
Claim 7: Inputting each of the historical time-series data sets into the second DNN is an insignificant extra solution activity, specifically necessary data gathering (See Mayo, 566 U.S. at 79, 101 USPQ2d at 1968; OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1092-93 (Fed. Cir. 2015) (presenting offers and gathering statistics amounted to mere data gathering), Performing clinical tests on individuals to obtain input for an equation, In re Grams, 888 F.2d 835, 839-40; 12 USPQ2d 1824, 1827-28 (Fed. Cir. 1989) and Determining the level of a biomarker in blood, Mayo, 566 U.S. at 79, 101 USPQ2d at 1968. See also PerkinElmer, Inc. v. Intema Ltd., 496 Fed. App'x 65, 73, 105 USPQ2d 1960, 1966 (Fed. Cir. 2012) (assessing or measuring data derived from an ultrasound scan, to be used in a diagnosis)) [See MPEP § 2106.05(g)]. Claim 9: The training of the neural network comprising the use of one or more of the data types provided is an insignificant extra solution activity, specifically necessary data gathering (See Mayo, 566 U.S. at 79, 101 USPQ2d at 1968; OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1092-93 (Fed. Cir. 2015) (presenting offers and gathering statistics amounted to mere data gathering), Performing clinical tests on individuals to obtain input for an equation, In re Grams, 888 F.2d 835, 839-40; 12 USPQ2d 1824, 1827-28 (Fed. Cir. 1989) and Determining the level of a biomarker in blood, Mayo, 566 U.S. at 79, 101 USPQ2d at 1968. See also PerkinElmer, Inc. v. Intema Ltd., 496 Fed. App'x 65, 73, 105 USPQ2d 1960, 1966 (Fed. Cir. 2012) (assessing or measuring data derived from an ultrasound scan, to be used in a diagnosis)) [See MPEP § 2106.05(g)]. Claim 10: Receiving a window of historical fermentation data, and outputting an indication of the prediction are insignificant extra solution activities, specifically necessary data gathering and outputting (See Mayo, 566 U.S. at 79, 101 USPQ2d at 1968; OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1092-93 (Fed. Cir. 2015) (presenting offers and gathering statistics amounted to mere data gathering), Performing clinical tests on individuals to obtain input for an equation, In re Grams, 888 F.2d 835, 839-40; 12 USPQ2d 1824, 1827-28 (Fed. Cir. 1989) and Determining the level of a biomarker in blood, Mayo, 566 U.S. at 79, 101 USPQ2d at 1968. See also PerkinElmer, Inc. v. Intema Ltd., 496 Fed. App'x 65, 73, 105 USPQ2d 1960, 1966 (Fed. Cir. 2012) (assessing or measuring data derived from an ultrasound scan, to be used in a diagnosis)) [See MPEP § 2106.05(g)]. The deep learning neural network is mere instructions to apply the judicial exception in a computer environment (See Alice Corp., 573 U.S. at 225-26, 110 USPQ2d at 1984) [See MPEP § 2106.05(a)]. Claim 11: Training the DNN on the actual ethanol and acetate production during the window of future control decisions is an insignificant extra solution activity, specifically necessary data gathering (See Mayo, 566 U.S. at 79, 101 USPQ2d at 1968; OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1092-93 (Fed. Cir. 2015) (presenting offers and gathering statistics amounted to mere data gathering), Performing clinical tests on individuals to obtain input for an equation, In re Grams, 888 F.2d 835, 839-40; 12 USPQ2d 1824, 1827-28 (Fed. Cir. 1989) and Determining the level of a biomarker in blood, Mayo, 566 U.S. at 79, 101 USPQ2d at 1968. See also PerkinElmer, Inc. v. Intema Ltd., 496 Fed. 
App'x 65, 73, 105 USPQ2d 1960, 1966 (Fed. Cir. 2012) (assessing or measuring data derived from an ultrasound scan, to be used in a diagnosis)) [See MPEP § 2106.05(g)]. Claim 12: Training the DNN with windows of historical time-series data is and insignificant extra solution activity, specifically necessary data gathering (See Mayo, 566 U.S. at 79, 101 USPQ2d at 1968; OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1092-93 (Fed. Cir. 2015) (presenting offers and gathering statistics amounted to mere data gathering), Performing clinical tests on individuals to obtain input for an equation, In re Grams, 888 F.2d 835, 839-40; 12 USPQ2d 1824, 1827-28 (Fed. Cir. 1989) and Determining the level of a biomarker in blood, Mayo, 566 U.S. at 79, 101 USPQ2d at 1968. See also PerkinElmer, Inc. v. Intema Ltd., 496 Fed. App'x 65, 73, 105 USPQ2d 1960, 1966 (Fed. Cir. 2012) (assessing or measuring data derived from an ultrasound scan, to be used in a diagnosis)) [See MPEP § 2106.05(g)]. Claim 15: Outputting an indication of guidance for controlling the fermentation process and the guidance for controlling the fermentation process being based on the window of future control decisions are insignificant extra solution activities, specifically necessary data outputting (See Mayo, 566 U.S. at 79, 101 USPQ2d at 1968; OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1092-93 (Fed. Cir. 2015) (presenting offers and gathering statistics amounted to mere data gathering), Performing clinical tests on individuals to obtain input for an equation, In re Grams, 888 F.2d 835, 839-40; 12 USPQ2d 1824, 1827-28 (Fed. Cir. 1989) and Determining the level of a biomarker in blood, Mayo, 566 U.S. at 79, 101 USPQ2d at 1968. See also PerkinElmer, Inc. v. Intema Ltd., 496 Fed. App'x 65, 73, 105 USPQ2d 1960, 1966 (Fed. Cir. 2012) (assessing or measuring data derived from an ultrasound scan, to be used in a diagnosis)) [See MPEP § 2106.05(g)]. Claim 16: The guidance comprising one of those specified is an insignificant extra solution activity, specifically necessary data gathering (See Mayo, 566 U.S. at 79, 101 USPQ2d at 1968; OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1092-93 (Fed. Cir. 2015) (presenting offers and gathering statistics amounted to mere data gathering), Performing clinical tests on individuals to obtain input for an equation, In re Grams, 888 F.2d 835, 839-40; 12 USPQ2d 1824, 1827-28 (Fed. Cir. 1989) and Determining the level of a biomarker in blood, Mayo, 566 U.S. at 79, 101 USPQ2d at 1968. See also PerkinElmer, Inc. v. Intema Ltd., 496 Fed. App'x 65, 73, 105 USPQ2d 1960, 1966 (Fed. Cir. 2012) (assessing or measuring data derived from an ultrasound scan, to be used in a diagnosis)) [See MPEP § 2106.05(g)]. Claim 17: Inputting historical fermentation data into a DNN is an insignificant extra solution activity, specifically necessary data gathering (See Mayo, 566 U.S. at 79, 101 USPQ2d at 1968; OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1092-93 (Fed. Cir. 2015) (presenting offers and gathering statistics amounted to mere data gathering), Performing clinical tests on individuals to obtain input for an equation, In re Grams, 888 F.2d 835, 839-40; 12 USPQ2d 1824, 1827-28 (Fed. Cir. 1989) and Determining the level of a biomarker in blood, Mayo, 566 U.S. at 79, 101 USPQ2d at 1968. See also PerkinElmer, Inc. v. Intema Ltd., 496 Fed. App'x 65, 73, 105 USPQ2d 1960, 1966 (Fed. Cir. 
2012) (assessing or measuring data derived from an ultrasound scan, to be used in a diagnosis)) [See MPEP § 2106.05(g)]. The deep learning neural network is mere instructions to apply the judicial exception in a computer environment (See Alice Corp., 573 U.S. at 225-26, 110 USPQ2d at 1984) [See MPEP § 2106.05(a)]. Claim 20: Outputting the probabilities for each of the states is an insignificant extra solution activity, specifically necessary data outputting (See Mayo, 566 U.S. at 79, 101 USPQ2d at 1968; OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1092-93 (Fed. Cir. 2015) (presenting offers and gathering statistics amounted to mere data gathering), Performing clinical tests on individuals to obtain input for an equation, In re Grams, 888 F.2d 835, 839-40; 12 USPQ2d 1824, 1827-28 (Fed. Cir. 1989) and Determining the level of a biomarker in blood, Mayo, 566 U.S. at 79, 101 USPQ2d at 1968. See also PerkinElmer, Inc. v. Intema Ltd., 496 Fed. App'x 65, 73, 105 USPQ2d 1960, 1966 (Fed. Cir. 2012) (assessing or measuring data derived from an ultrasound scan, to be used in a diagnosis)) [See MPEP § 2106.05(g)]. Claim 22: Training the network based on one or more of the specified data is an insignificant extra solution activity, specifically necessary data gathering (See Mayo, 566 U.S. at 79, 101 USPQ2d at 1968; OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1092-93 (Fed. Cir. 2015) (presenting offers and gathering statistics amounted to mere data gathering), Performing clinical tests on individuals to obtain input for an equation, In re Grams, 888 F.2d 835, 839-40; 12 USPQ2d 1824, 1827-28 (Fed. Cir. 1989) and Determining the level of a biomarker in blood, Mayo, 566 U.S. at 79, 101 USPQ2d at 1968. See also PerkinElmer, Inc. v. Intema Ltd., 496 Fed. App'x 65, 73, 105 USPQ2d 1960, 1966 (Fed. Cir. 2012) (assessing or measuring data derived from an ultrasound scan, to be used in a diagnosis)) [See MPEP § 2106.05(g)]. Step 2B: If the claims do not integrate the judicial exception, do the claims provide an inventive concept? [see MPEP § 2106.05] Because the additional claim elements do not integrate the abstract idea into a practical application, the claims are further examined under Step 2B, which evaluates whether the additional elements, individually and in combination, amount to significantly more than the judicial exception itself by providing an inventive concept. The claims do not recite additional elements that are sufficient to amount to significantly more than the judicial exception because the claims recite additional elements that are generic, conventional or nonspecific. These additional elements include: The additional elements of receiving historical fermentation data (Conventional: Karim et al. - page 497, column 1, paragraph 2), inputting the historical fermentation data into a DNN (Conventional: Karim et al. - page 497, column 1, paragraph 2), inputting the first window of time-series data into the DNN (Conventional: Karim et al. - page 497, column 1, paragraph 2), inputting the second window of time-series data into the second DNN (Conventional: Karim et al. - page 497, column 1, paragraph 2), inputting each of the historical time-series data sets into the second DNN (Conventional: Karim et al. - page 497, column 1, paragraph 2), receiving a window of historical fermentation data (Conventional: Karim et al. - page 497, column 1, paragraph 2), outputting an indication of the prediction (Conventional: Bahad et al. 
- page 78, paragraph 3), training the DNN on the actual ethanol and acetate production during the window of future control decisions (Conventional: Karim et al. - page 497, column 1, paragraph 2; Mariaca-Gaspar et al. - page 211, paragraph 3), training the DNN with windows of historical time-series data (Conventional: Karim et al. - page 497, column 1, paragraph 2; Mariaca-Gaspar et al. - page 211, paragraph 3), outputting an indication of guidance for controlling the fermentation process (Conventional: Mariaca-Gaspar et al. - page 211, paragraph 3), inputting historical fermentation data into a DNN (Conventional: Karim et al. - page 497, column 1, paragraph 2), training of the neural network comprising the use of one or more of the data types provided (Conventional: Karim et al. – page 495, column 2, page 496, column 1, and page 497, column 1), the guidance for controlling the fermentation process being based on the window of future control decisions (Conventional: Mariaca-Gaspar et al. - abstract), guidance comprising one of those specified (Conventional: Mariaca-Gaspar et al. - abstract), training the network based on one or more of the specified data (Conventional: Karim et al. – page 495, column 1, paragraph 2), training the DNN to define a regression function based on ground truth historical data (Conventional: Misra et al. - page 3816, column 1, paragraph 2; Karim et al. - page 497, column 1, paragraph 2), and outputting the probabilities for each of the states (Conventional: Bahad et al. - page 78, paragraph 3), are insignificant extra solution activities, specifically necessary data gathering that are conventional within the art (See Mayo, 566 U.S. at 79, 101 USPQ2d at 1968; OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1092-93 (Fed. Cir. 2015) (presenting offers and gathering statistics amounted to mere data gathering), Performing clinical tests on individuals to obtain input for an equation, In re Grams, 888 F.2d 835, 839-40; 12 USPQ2d 1824, 1827-28 (Fed. Cir. 1989) and Determining the level of a biomarker in blood, Mayo, 566 U.S. at 79, 101 USPQ2d at 1968. See also PerkinElmer, Inc. v. Intema Ltd., 496 Fed. App'x 65, 73, 105 USPQ2d 1960, 1966 (Fed. Cir. 2012) (assessing or measuring data derived from an ultrasound scan, to be used in a diagnosis)) [See MPEP § 2106.05(g)]. While merely finding an additional element in a single publication is not enough to establish conventionality, both Karim et al. and Mariaca-Gaspar et al. showcase similar methods with slightly different elements that are both applied to the prediction of continuous bioprocess activities, each of them integrating neural networks and each of them published years apart showcasing continued research within the same field using similar methods. Therefore, taken both individually and as whole, the additional elements do not amount to significantly more than the judicial exception by providing an inventive concept. Therefore, claims 1, 5-20, and 22-23, when the limitations are considered individually and as a whole, are rejected under 35 USC § 101 as being directed to non-statutory subject matter. Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. Claims 1, 5-9, and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Karim et al. (American Control Conference (1992) 495-499) in view of Misra et al. (Proceedings of the AAAI Conference on Artificial Intelligence (2018) 3812-2819). Claim 1 is directed to a method of determining fermentation states through the use of historical data, embeddings, and neural networks. Karim et al. teaches in the abstract “In fermentation processes, direct on-line measurements of primary process variables usually are unavailable. The state of the cultivation, therefore, has to be inferred from measurements of secondary variables and any previous knowledge of process dynamics. This research investigates the learning, recall and generalization characteristics of neural networks trained to model the nonlinear behavior of a fermentation process”, on page 496, column 2, paragraph 2 “recurrent networks are more general, in the sense that connections are allowed both ways between a pair of neurons, and even from a neuron to itself, as shown in Fig. 2(b). They are especially able to perform temporal association”, on page 497, column 1, paragraph 2 “Several sets of batch fermentation data were obtained at different temperatures, providing a suitable candidate for training the neural network on the behavior of the process at various environmental conditions. Five data sets were generated at temperatures 30C, 33C, 35C, 37C and 39C. The estimator was required to predict current biomass, glucose and ethanol concentrations every 15 minutes, using on-line measurements of temperature, redox potential, % CO2 in exhaust bioprocess gas, and optical density. 
Therefore, each data set consisted of 41 time patterns corresponding to 15 minute sampling during 10 hrs”, and Figure 2 provides “Sample configurations for neural-based state estimation of ethanol fermentation”, reading on receiving, from at least one control system associated with a bioreactor, first historical fermentation data associated with an unknown fermentation state, wherein the unknown fermentation state is associated with a fermentation process of a bioreactor; inputting the first historical fermentation data into a first deep learning neural network, and determining a known fermentation state of the bioreactor; and outputting the determined known fermentation state. Misra et al. teaches on page 3812, column 2, paragraph 3 “Our primary contribution, thus, is an end-to-end method for learning embeddings that are explicitly optimized with both binarization and their use in link prediction/node retrieval in mind. More concretely: in a manner similar to Skip-gram, the likelihood of an edge between two nodes is modeled as a function of the Hamming distance between their bit embeddings… By minimizing expected loss over this (product) distribution of embeddings, and by applying efficient approximations to the Hamming distance (Sec. 3.4), continuous optimization techniques can be applied”, reading on generating, by the first deep learning neural network and based on the first historical fermentation data associated with the fermentation process, a first embedding; generating, based on second historical fermentation data that are associated with a known fermentation state, a second embedding, wherein: the second embedding is based on the second historical fermentation data that are associated with the known fermentation state; determining a distance between the first embedding and the second embedding; determining, based on the distance, a known fermentation state of the bioreactor; and outputting the determined known fermentation state. It would have been obvious at the time of filing to modify the teachings of Karim et al. for the use of neural networks in the prediction of fermentation states with the teachings of Misra et al. for the use of embeddings and Hamming Distance for optimization of the learning process as the latter teaches in the abstract “continuous optimization techniques can be applied to the approximate expected loss. Embeddings optimized in this fashion consistently outperform the quantization of both spectral graph embeddings and various learned real-valued embeddings, on both ranking and pre-ranking tasks for a variety of datasets”. Additionally, it would have been obvious to modify the teachings of Karim et al. for the use of two different neural networks, to use the same neural network twice, as there would be no modification of the network architecture, and Karim et al. already teaches use of multiple networks, thereby reading on wherein a second deep learning neural network is identical to the first deep learning neural network. One would have had a reasonable expectation of success as neural networks use an optimization function such as gradient descent or stochastic gradient descent, and the latter reference merely teaches an improved optimization function that within their own paper is used within a graph network. Therefore, it would have been obvious to a person skilled in the art at the time of filing to modify the teachings of each and to be successful. 
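
Editor's note: to make the claimed arrangement easier to follow, the sketch below illustrates the embedding-and-distance scheme as it is characterized in the rejection of claims 1, 5, and 7 (an identical network embeds the unknown-state window and each known-state reference window, and the known state whose embedding lies at the smallest element-wise distance is selected). It is a minimal, hypothetical illustration only: the embed function is a stand-in, not the applicant's network or any model from the cited references, and the window shape (41 timesteps by 4 measurements) is merely assumed from the Karim et al. sampling description.

```python
# Minimal, hypothetical sketch of the embedding-distance state determination
# described in the rejection of claims 1, 5, and 7 (not the applicant's method).
import numpy as np

def embed(window: np.ndarray) -> np.ndarray:
    """Stand-in for the deep learning neural network: maps a (timesteps, sensors)
    window of fermentation data to a fixed-length embedding vector."""
    # Placeholder embedding: per-sensor means and standard deviations.
    return np.concatenate([window.mean(axis=0), window.std(axis=0)])

def classify_state(unknown_window: np.ndarray, references: dict) -> str:
    """references maps a known fermentation state label to a historical
    (timesteps, sensors) window associated with that state."""
    query = embed(unknown_window)                     # first embedding
    distances = {
        state: np.abs(query - embed(ref)).sum()      # element-wise difference, summed
        for state, ref in references.items()         # second embeddings (same network)
    }
    return min(distances, key=distances.get)         # smallest-distance known state

rng = np.random.default_rng(0)
refs = {
    "stable": rng.normal(0.0, 1.0, (41, 4)),
    "process upset": rng.normal(3.0, 1.0, (41, 4)),
}
print(classify_state(rng.normal(0.1, 1.0, (41, 4)), refs))  # expected: "stable"
```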
Claim 5 is directed to the method of claim 1 but further specifies that the distance calculation be an element-wise distance between the two embeddings. Misra et al. teaches on page 3812, column 2, paragraph 3 “Our primary contribution, thus, is an end-to-end method for learning embeddings that are explicitly optimized with both binarization and their use in link prediction/node retrieval in mind. More concretely: in a manner similar to Skip-gram, the likelihood of an edge between two nodes is modeled as a function of the Hamming distance between their bit embeddings… By minimizing expected loss over this (product) distribution of embeddings, and by applying efficient approximations to the Hamming distance (Sec. 3.4), continuous optimization techniques can be applied”, the Hamming Distance being an element-wise distance therefore reads on wherein the distance is an element-wise difference between the first embedding and the second embedding. Claim 6 is directed to the method of claim 5 and thus claim 1, but further specifies that the historical data comprise a window of time-series data. Karim et al. teaches on page 497, column 1, paragraph 2 “Several sets of batch fermentation data were obtained at different temperatures, providing a suitable candidate for training the neural network on the behavior of the process at various environmental conditions. Five data sets were generated at temperatures 30C, 33C, 35C, 37C and 39C. The estimator was required to predict current biomass, glucose and ethanol concentrations every 15 minutes, using on-line measurements of temperature, redox potential, % CO2 in exhaust bioprocess gas, and optical density. Therefore, each data set consisted of 41 time patterns corresponding to 15 minute sampling during 10 hrs”, reading on the first historical fermentation data comprise a first window of time-series data, and the second historical fermentation data comprise a second window of time-series data, the method further comprising: inputting the first window of time-series data into the first deep learning neural network; and inputting the second window of time-series data into the second deep learning neural network. Claim 7 is directed to the method of claim 1 but further specifies the use of multiple historical datasets and multiple networks to generate a fermentation state with the smallest distance between embeddings. Karim et al. teaches in the abstract “In fermentation processes, direct on-line measurements of primary process variables usually are unavailable. The state of the cultivation, therefore, has to be inferred from measurements of secondary variables and any previous knowledge of process dynamics. This research investigates the learning, recall and generalization characteristics of neural networks trained to model the nonlinear behavior of a fermentation process”, on page 496, column 2, paragraph 2 “recurrent networks are more general, in the sense that connections are allowed both ways between a pair of neurons, and even from a neuron to itself, as shown in Fig. 2(b). They are especially able to perform temporal association”, on page 497, column 1, paragraph 2 “Several sets of batch fermentation data were obtained at different temperatures, providing a suitable candidate for training the neural network on the behavior of the process at various environmental conditions. Five data sets were generated at temperatures 30C, 33C, 35C, 37C and 39C. 
The estimator was required to predict current biomass, glucose and ethanol concentrations every 15 minutes, using on-line measurements of temperature, redox potential, % CO2 in exhaust bioprocess gas, and optical density. Therefore, each data set consisted of 41 time patterns corresponding to 15 minute sampling during 10 hrs”. Misra et al. teaches on page 3812, column 2, paragraph 3 “Our primary contribution, thus, is an end-to-end method for learning embeddings that are explicitly optimized with both binarization and their use in link prediction/node retrieval in mind. More concretely: in a manner similar to Skip-gram, the likelihood of an edge between two nodes is modeled as a function of the Hamming distance between their bit embeddings… By minimizing expected loss over this (product) distribution of embeddings, and by applying efficient approximations to the Hamming distance (Sec. 3.4), continuous optimization techniques can be applied”, which in view of Karim et al., reads on wherein the second historical fermentation data associated with a known fermentation state comprises a plurality of historical time-series data sets, wherein each historical time-series data set of the plurality of historical time-series data sets is associated with a known fermentation state, the method further comprising: inputting each of the plurality of historical time-series data sets into the second deep learning neural network; determining, based on the plurality of historical time-series data sets, an embedding for each known fermentation state, wherein determining the known fermentation state of the bioreactor comprises: determining, based on the embedding for each known fermentation state, a known fermentation state that has a smallest distance from the first embedding. Claim 8 is directed to the method of claim 1 but further specifies that the fermentation state comprise one or more of the specified states. Karim et al. teaches in Figure 2, “Sample configurations for neural-based state estimation of ethanol fermentation”, of which a “stable state” would merely be a not changing output layer, thereby reading on wherein the known fermentation state comprises one or more indication of: a stable state, a fermentation performance improvement, a fermentation performance decline, or a fermentation process upset. Claim 9 is directed to the method of claim 1 but further specifies that the training be based on one or more of the specified data types. Karim et al. teaches on page 497, column 1, paragraph 2 “Several sets of batch fermentation data were obtained at different temperatures, providing a suitable candidate for training the neural network on the behavior of the process at various environmental conditions. Five data sets were generated at temperatures 30C, 33C, 35C, 37C and 39C. The estimator was required to predict current biomass, glucose and ethanol concentrations every 15 minutes, using on-line measurements of temperature, redox potential, % CO2 in exhaust bioprocess gas, and optical density. Therefore, each data set consisted of 41 time patterns corresponding to 15 minute sampling during 10 hrs”, reading on training the first deep learning neural network and the second deep learning neural network based on one or more of: historical fermentation data, fermentation state data, or synthetic fermentation data. Claim 12 is directed to the method of claim 9 and thus claim 1, but further specifies that the training be based on windows of historical time-series data. Karim et al. 
teaches on page 497, column 1, paragraph 2 “Several sets of batch fermentation data were obtained at different temperatures, providing a suitable candidate for training the neural network on the behavior of the process at various environmental conditions. Five data sets were generated at temperatures 30C, 33C, 35C, 37C and 39C. The estimator was required to predict current biomass, glucose and ethanol concentrations every 15 minutes, using on-line measurements of temperature, redox potential, % CO2 in exhaust bioprocess gas, and optical density. Therefore, each data set consisted of 41 time patterns corresponding to 15 minute sampling during 10 hrs”, reading on training the deep learning neural network with windows of historical time-series data. Claims 2, 4, 17-20, and 22-23 are rejected under 35 U.S.C. 103 as being unpatentable over Karim et al. (American Control Conference (1992) 495-499) and Misra et al. (Proceedings of the AAAI Conference on Artificial Intelligence (2018) 3812-2819) as applied to claim 1, 5-9, and 12 above, and further in view of Bahad et al. (Procedia Computer Science (2019) 74-82). Claim 2 is directed to the method of claim 1 but further specifies that the neural network comprise a CNN with pooling layer. Karim et al. and Misra et al. teach the method of claims 1, 5-9 and 12 as previously described. Karim et al. teaches on page496, column 2, paragraph 2 “recurrent networks are more general, in the sense that connections are allowed both ways between a pair of neurons, and even from a neuron to itself, as shown in Fig. 2(b). They are especially able to perform temporal association”. Bahad et al. teaches on page 78, paragraph 1 “LSTMs help to preserve the error that can be back-propagated through time and in lower layers of a deep network. Bi-directional processing is an evident approach for a large text sequence prediction and text classification. As shown in Figure 3, a Bi-Directional LSTM network steps through the input sequence in both directions at the same”, and in paragraph 3 “model selection among CNN, variation of RNN as Vanilla RNN, LSTM-RNN, and Bi-directional LSTM-RNN is carried out…Global Max Pooling layer is used to extract the maximum value from each filter. The resultant is passed through several dense hidden layers with dropout. Finally softmax layers are used to make a binary decision of whether or not the article is credible. Similarly in Bi-Directional LSTM network each embedding layer corresponding to training data is inspected in both orders at the same time”, which in view of the teachings from Karim et al. and Misra et al., read on wherein the first deep learning neural network comprises: at least one convolutional layer, at least one pooling layer that is connected to the convolutional layer, and at least one bidirectional recurrent neural network (RNN) layer that is connected to the convolutional layer, the method further comprising: receiving, by the at least one pooling layer of the first deep learning neural network, output from the at least one convolutional layer of the first deep learning neural network; receiving, by the at least one RNN layer of the first deep learning neural network, output from the at least one convolutional layer; outputting, by the at least one RNN layer of the first deep learning neural network, the first embedding; and outputting, by the at least one RNN layer of the second deep learning neural network, the second embedding. 
It would have been obvious at the time of filing to have modified the teachings of Karim et al. and Misra et al. for the teachings of claim 1, with the teachings of Bahad et al. for incorporating more specific architectures of neural networks including CNN layers, pooling layers, etc., as each have specific benefits as described in sections 3.1 and 3.2 of Bahad et al. such as the machine translation of CNNs and the sequence prediction (embedding) of RNNs. One would have had a reasonable expectation of success given that Karim et al. is directed to the use of neural networks in predicting bioprocess states, with Misra et al. providing methods for optimization of neural networks, and Bahad et al. while not dealing with such information, is merely teaching the use of newer methods in neural networks for better prediction outcomes. Therefore, it would have been obvious to a person skilled in the art at the time of filing to modify the teachings of each and to be successful. Claim 3 is directed to the method of claim 2 and thus claim 1, but further specifies that the LSTM layer comprises four different layers each with their specified number of units. Karim et al. and Misra et al. teach the method of claims 1, 5-9 and 12 as previously described. Bahad et al. teaches on page 78, paragraph 1 “LSTMs help to preserve the error that can be back-propagated through time and in lower layers of a deep network. Bi-directional processing is an evident approach for a large text sequence prediction and text classification. As shown in Figure 3, a Bi-Directional LSTM network steps through the input sequence in both directions at the same”, and in paragraph 3 “model selection among CNN, variation of RNN as Vanilla RNN, LSTM-RNN, and Bi-directional LSTM-RNN is carried out…Global Max Pooling layer is used to extract the maximum value from each filter. The resultant is passed through several dense hidden layers with dropout. Finally softmax layers are used to make a binary decision of whether or not the article is credible. Similarly in Bi-Directional LSTM network each embedding layer corresponding to training data is inspected in both orders at the same time”. While none of the cited references explicitly teach the exact combination of layers and nodes within the neural network, this is merely an optimization that would be obvious through routine optimization under MPEP 2144.05(II), as Karim et al. points to on page 497, column 1, paragraph 3 “The number of inputs and outputs define the number of nodes in the input and output layer of the network (see Fig. 2(a)). One hidden layer was used in this study and to determine the number of nodes in the hidden layer, different number of hidden nodes were proposed and evaluated according to the Mean Squared Error criteria”. Therefore, it would have been obvious to a person skilled in the art at the time of filing to optimize the teachings of each and to be successful. Claim 4 is directed to the method of claim 1 but further specifies at least one LSTM layer in the network. Karim et al. and Misra et al. teach the method of claims 1, 5-9 and 12 as previously described. Karim et al. teaches on page 496, column 2, paragraph 2 “recurrent networks are more general, in the sense that connections are allowed both ways between a pair of neurons, and even from a neuron to itself, as shown in Fig. 2(b). They are especially able to perform temporal association”. Bahad et al. 
teaches on page 78, paragraph 1 “LSTMs help to preserve the error that can be back-propagated through time and in lower layers of a deep network. Bi-directional processing is an evident approach for a large text sequence prediction and text classification. As shown in Figure 3, a Bi-Directional LSTM network steps through the input sequence in both directions at the same”, and in paragraph 3 “model selection among CNN, variation of RNN as Vanilla RNN, LSTM-RNN, and Bi-directional LSTM-RNN is carried out…Global Max Pooling layer is used to extract the maximum value from each filter. The resultant is passed through several dense hidden layers with dropout. Finally softmax layers are used to make a binary decision of whether or not the article is credible. Similarly in Bi-Directional LSTM network each embedding layer corresponding to training data is inspected in both orders at the same time”, which in view of the teachings from Karim et al. and Misra et al., read on wherein the first deep learning neural network comprises: at least one long short-term memory (LSTM) layer; and an output layer that is connected to the at least one LSTM layer, the method further comprising: receiving, by the output layer, output from the at least one LSTM layer, and wherein: the first embedding is output by the output layer of the first deep learning neural network, and wherein the second embedding is output by an output layer of the second deep learning neural network. Claims 11 and 14-16 are rejected under 35 U.S.C. 103 as being unpatentable over Karim et al. (American Control Conference (1992) 495-499), and Misra et al. (Proceedings of the AAAI Conference on Artificial Intelligence (2018) 3812-2819) as applied to claim 9 above, and further in view of Bahad et al. (Procedia Computer Science (2019) 74-82) and Mariaca-Gaspar et al. (Mexican International Conference on Artificial Intelligence (2012) 211-222). Claim 11 is directed to the method of claim 9 and thus claim 1, but further specifies the metabolites as ethanol and acetate. Karim et al. and Misra et al. teach the method of claim 9 as previously described. Karim et al. teaches in the abstract “Results of the neural network estimators are presented, based on experimental data available from the ethanol production”. Bahad et al. teaches on page 78, paragraph 1 “LSTMs help to preserve the error that can be back-propagated through time and in lower layers of a deep network. Bi-directional processing is an evident approach for a large text sequence prediction and text classification. As shown in Figure 3, a Bi-Directional LSTM network steps through the input sequence in both directions at the same”, and in paragraph 3 “model selection among CNN, variation of RNN as Vanilla RNN, LSTM-RNN, and Bi-directional LSTM-RNN is carried out…Global Max Pooling layer is used to extract the maximum value from each filter. The resultant is passed through several dense hidden layers with dropout. Finally softmax layers are used to make a binary decision of whether or not the article is credible. Similarly in Bi-Directional LSTM network each embedding layer corresponding to training data is inspected in both orders at the same time”. Mariaca-Gaspar et al. teaches in the abstract “The propose of this paper is to introduce a new Kalman Filter based in a Recurrent Neural Network topology (KFRNN) and a recursive Levenberg-Marquardt (L-M) algorithm. 
Such algorithm is able to estimate the states and parameters of a highly nonlinear continuous fermentation bioprocess in noisy environment”, and while it does not specifically teach the use of an acetate metabolite, the paper is directed to the method of modeling the fermentation process, and it would be obvious to a person skilled in the art that the model for one known metabolite could be used for another in the same process, therefore reading on wherein the metabolite production comprises acetate and ethanol production, the method further comprising: determining, after the future time window, actual ethanol and acetate production of the fermentation process during the window of future control decisions; and training the deep learning neural network based on the actual ethanol and acetate production of the fermentation process during the window of future control decisions. It would have been obvious at the time of filing to modify the teachings of Karim et al. and Misra et al. for the method of claim 9, with the teachings of Bahad et al. for the use of LSTM neural networks and Mariaca-Gaspar et al. for incorporating continuous bioprocess control as the latter teaches within the abstract “The proposed control scheme is applied for real-time identification and control of continuous stirred tank bioreactor model, taken from the literature, where a fast convergence, noise filtering and low mean squared error of reference tracking were achieved”. One would have had a reasonable expectation of success given that Mariaca-Gaspar et al. is teaching the use of neural networks for control of bioprocesses, which is in line with Karim et al. and does not conflict with either Misra et al. or Bahad et al. in terms of methods, and Bahad et al. is merely teaching newer methods in neural networks. Therefore, it would have been obvious to a person skilled in the art at the time of filing to modify the teachings of each and to be successful. Claim 14 is directed to the method of claim 9 and thus claim 1, but further specifies that the window of future control decisions comprises no changes to the fermentation process. Karim et al., Misra et al., and Bahad et al. teach the method of claims 1-2, 4-9, 12, 17-20, and 22-23 as previously described. Karim et al. teaches in Figure 2, “Sample configurations for neural-based state estimation of ethanol fermentation”, of which a “stable state” or no changes to the fermentation process would merely be a not changing output layer, thereby reading on wherein the window of future control decisions comprises no changes to the fermentation process. Claim 15 is directed to the method of claim 9 and thus claim 1, but further specifies that the guidance be based on the window of future control decisions. Karim et al., Misra et al., and Bahad et al. teach the method of claims 1-2, 4-9, 12, 17-20, and 22-23 as previously described. Karim et al., Misra et al., and Bahad et al. teach the use of output layers, rendering obvious the outputting of the teachings of Mariaca-Gaspar et al. for bioprocess control. If the data were to be time-series data as directed by Karim et al. 
and Misra et al., then it would be obvious that so too would be any control decisions to be made, and therefore that the data used to train and guide future control decisions would be inherent based on the time-series data and window of time for the control of said data, thereby reading on outputting, and for display, an indication of guidance for controlling the fermentation process, wherein the guidance for controlling the fermentation process is based on the window of future control decisions. Claim 16 is directed to the method of claim 9 and thus claim 1, but further specifies that the guidance comprise at least one of the specified forms of guidance. Mariaca-Gaspar et al. teaches in the abstract “The control scheme is direct…”, reading on wherein the guidance comprises at least one of: direct guidance or indirect guidance. Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Karim et al. (American Control Conference (1992) 495-499), Misra et al. (Proceedings of the AAAI Conference on Artificial Intelligence (2018) 3812-2819), Bahad et al. (Procedia Computer Science (2019) 74-82) and Mariaca-Gaspar et al. (Mexican International Conference on Artificial Intelligence (2012) 211-222). Claim 10 is directed to a method for determining future control decisions based upon historical data and a neural network to output a prediction of the future metabolite production. Karim et al. teaches in the abstract “In fermentation processes, direct on-line measurements of primary process variables usually are unavailable. The state of the cultivation, therefore, has to be inferred from measurements of secondary variables and any previous knowledge of process dynamics. This research investigates the learning, recall and generalization characteristics of neural networks trained to model the nonlinear behavior of a fermentation process”, on page 496, column 2, paragraph 2 “recurrent networks are more general, in the sense that connections are allowed both ways between a pair of neurons, and even from a neuron to itself, as shown in Fig. 2(b). They are especially able to perform temporal association”, on page 497, column 1, paragraph 2 “Several sets of batch fermentation data were obtained at different temperatures, providing a suitable candidate for training the neural network on the behavior of the process at various environmental conditions. Five data sets were generated at temperatures 30C, 33C, 35C, 37C and 39C. The estimator was required to predict current biomass, glucose and ethanol concentrations every 15 minutes, using on-line measurements of temperature, redox potential, % CO2 in exhaust bioprocess gas, and optical density. Therefore, each data set consisted of 41 time patterns corresponding to 15 minute sampling during 10 hrs”. Misra et al. teaches on page 3812, column 2, paragraph 3 “Our primary contribution, thus, is an end-to-end method for learning embeddings that are explicitly optimized with both binarization and their use in link prediction/node retrieval in mind. More concretely: in a manner similar to Skip-gram, the likelihood of an edge between two nodes is modeled as a function of the Hamming distance between their bit embeddings… By minimizing expected loss over this (product) distribution of embeddings, and by applying efficient approximations to the Hamming distance (Sec. 3.4), continuous optimization techniques can be applied”. Bahad et al. teaches on page 78, paragraph 3 “Global Max Pooling layer is used to extract the maximum value from each filter. 
The resultant is passed through several dense hidden layers with dropout. Finally softmax layers are used to make a binary decision of whether or not the article is credible”. Mariaca-Gaspar et al. teaches in the abstract “The propose of this paper is to introduce a new Kalman Filter based in a Recurrent Neural Network topology (KFRNN) and a recursive Levenberg-Marquardt (L-M) algorithm. Such algorithm is able to estimate the states and parameters of a highly nonlinear continuous fermentation bioprocess in noisy environment. The control scheme is direct adaptive and also contains feedback and feedforward recurrent neural controllers. The proposed control scheme is applied for real-time identification and control of continuous stirred tank bioreactor model”, on page 211, paragraph 1 “The KFRNN with L-M learning will be applied for a Continuous Stirred Tank Reactor (CSTR) model, identification and control… The papers proposed to use the neuro-fuzzy and adaptive nonlinear control systems design applying FFNNs”, and Figure 1 provides a “Block-diagram of the closed-loop neural control”. If the data were to be time-series data as directed by Karim et al. and Misra et al., then it would be obvious that so too would be any control decisions to be made and would therefore read on receiving, from at least one control system associated with a bioreactor, a window of historical fermentation data associated with a fermentation process of the bioreactor; receiving a window of future control decisions; inputting the historical fermentation data and the window of future control decisions into a deep learning neural network; predicting, by the deep learning neural network and based on the received window of historical fermentation data and the window of future control decisions, future metabolite production of the fermentation process, wherein the deep learning neural network comprises at least one bi-directional long short-term memory (LSTM) layer; and outputting an indication of the prediction of future metabolite production of the fermentation process. It would have been obvious at the time filing to modify the teachings of Karim et al. for the use of neural networks in the prediction of fermentation states with the teachings of Misra et al. for the use of embeddings and Hamming Distance for optimization of the learning process, and Bahad et al. for the use of LSTM neural network architecture, with the teachings of Mariaca-Gaspar et al. for incorporating continuous bioprocess control as the latter teaches within the abstract “The proposed control scheme is applied for real-time identification and control of continuous stirred tank bioreactor model, taken from the literature, where a fast convergence, noise filtering and low mean squared error of reference tracking were achieved”. One would have had a reasonable expectation of success given that Mariaca-Gaspar et al. is teaching the use of neural networks for control of bioprocesses, which is in line with Karim et al. and does not conflict with either Misra et al. or Bahad et al. in terms of methods. Therefore, it would have been obvious to a person skilled in the art at the time of filing to modify the teachings of each and to be successful. Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Karim et al. (American Control Conference (1992) 495-499), and Misra et al. (Proceedings of the AAAI Conference on Artificial Intelligence (2018) 3812-2819) as applied to claim 9 above, and further in view of Bahad et al. 
(Procedia Computer Science (2019) 74-82), Mariaca-Gaspar et al. (Mexican International Conference on Artificial Intelligence (2012) 211-222), and Doremus et al. (Biotechnology and Bioengineering (1985) 852-860). Karim et al., and Misra et al. teach the method of claim 9 as previously described. Bahad et al. teaches on page 78, paragraph 3 “Global Max Pooling layer is used to extract the maximum value from each filter. The resultant is passed through several dense hidden layers with dropout. Finally softmax layers are used to make a binary decision of whether or not the article is credible”. Mariaca-Gaspar et al. teaches in the abstract “The propose of this paper is to introduce a new Kalman Filter based in a Recurrent Neural Network topology (KFRNN) and a recursive Levenberg-Marquardt (L-M) algorithm. Such algorithm is able to estimate the states and parameters of a highly nonlinear continuous fermentation bioprocess in noisy environment. The control scheme is direct adaptive and also contains feedback and feedforward recurrent neural controllers. The proposed control scheme is applied for real-time identification and control of continuous stirred tank bioreactor model”, on page 211, paragraph 1 “The KFRNN with L-M learning will be applied for a Continuous Stirred Tank Reactor (CSTR) model, identification and control… The papers proposed to use the neuro-fuzzy and adaptive nonlinear control systems design applying FFNNs”, and Figure 1 provides a “Block-diagram of the closed-loop neural control”. Doremus et al. teaches in the abstract “The results show that agitation and pressure are important parameters for solvent productivity in acetone- butanol fermentation”, reading on wherein the future control decisions in the window of future control decisions comprise one or more changes to the fermentation process of: gas flow rate; dilution rate; media flow rate; pressure; or agitation. It would have been obvious at the time of filing to modify the teachings of Karim et al., and Misra et al. for the method of claim 9, with the teachings of Bahad et al. for the use of LSTM neural network architecture, the teachings of Mariaca-Gaspar et al. for incorporating continuous bioprocess control, and the teachings of Doremus et al. for the importance of agitation and pressure in the fermentation process as the latter teaches in the abstract “The results show that agitation and pressure are important parameters for solvent productivity in acetone- butanol fermentation”. One would have had a reasonable expectation of success given that that Mariaca-Gaspar et al. is teaching the use of neural networks for control of bioprocesses, which is in line with Karim et al., does not conflict with either Misra et al. or Bahad et al. in terms of methods, and Doremus et al. is merely teaching features with high importance to the predictability of bioprocess states. Therefore, it would have been obvious to a person skilled in the art at the time of filing to modify the teachings of each and to be successful. Claims 17-23 are rejected under 35 U.S.C. 103 as being unpatentable over Karim et al. (American Control Conference (1992) 495-499), Misra et al. (Proceedings of the AAAI Conference on Artificial Intelligence (2018) 3812-2819), and Bahad et al. (Procedia Computer Science (2019) 74-82). Claim 17 is directed to a computer method determining a fermentation status using historical data, and a bi-directional LSTM RNN. Karim et al. 
teaches in the abstract “In fermentation processes, direct on-line measurements of primary process variables usually are unavailable. The state of the cultivation, therefore, has to be inferred from measurements of secondary variables and any previous knowledge of process dynamics. This research investigates the learning, recall and generalization characteristics of neural networks trained to model the nonlinear behavior of a fermentation process”, on page 496, column 2, paragraph 2 “recurrent networks are more general, in the sense that connections are allowed both ways between a pair of neurons, and even from a neuron to itself, as shown in Fig. 2(b). They are especially able to perform temporal association”, on page 497, column 1, paragraph 2 “Several sets of batch fermentation data were obtained at different temperatures, providing a suitable candidate for training the neural network on the behavior of the process at various environmental conditions. Five data sets were generated at temperatures 30C, 33C, 35C, 37C and 39C. The estimator was required to predict current biomass, glucose and ethanol concentrations every 15 minutes, using on-line measurements of temperature, redox potential, % CO2 in exhaust bioprocess gas, and optical density. Therefore, each data set consisted of 41 time patterns corresponding to 15 minute sampling during 10 hrs”, and Figure 2 provides “Sample configurations for neural-based state estimation of ethanol fermentation”. Misra et al. teaches on page 3812, column 2, paragraph 3 “Our primary contribution, thus, is an end-to-end method for learning embeddings that are explicitly optimized with both binarization and their use in link prediction/node retrieval in mind. More concretely: in a manner similar to Skip-gram, the likelihood of an edge between two nodes is modeled as a function of the Hamming distance between their bit embeddings… By minimizing expected loss over this (product) distribution of embeddings, and by applying efficient approximations to the Hamming distance (Sec. 3.4), continuous optimization techniques can be applied”. Bahad et al. teaches on page 78, paragraph 1 “LSTMs help to preserve the error that can be back-propagated through time and in lower layers of a deep network. Bi-directional processing is an evident approach for a large text sequence prediction and text classification. As shown in Figure 3, a Bi-Directional LSTM network steps through the input sequence in both directions at the same”, and in paragraph 3 “model selection among CNN, variation of RNN as Vanilla RNN, LSTM-RNN, and Bi-directional LSTM-RNN is carried out…Global Max Pooling layer is used to extract the maximum value from each filter. The resultant is passed through several dense hidden layers with dropout. Finally softmax layers are used to make a binary decision of whether or not the article is credible. Similarly in Bi-Directional LSTM network each embedding layer corresponding to training data is inspected in both orders at the same time”, which in view of the teachings from Karim et al. 
and Misra et al., read on inputting, into a deep learning neural network, historical fermentation data that are associated with an unknown fermentation state; determining, by the deep learning neural network and based on the historical fermentation data, probabilities for each of a plurality of known fermentation states, wherein the deep learning neural network comprises at least one bi-directional long short-term memory (LSTM) layer; determining a known fermentation state having a highest probability based on the probabilities; and assigning the known fermentation state having the highest probability to the unknown fermentation state. It would have been obvious at the time of filing to have modified the teachings of Karim et al. for the use of neural networks in the prediction of fermentation states with the teachings of Misra et al. for the use of embeddings and Hamming Distance for optimization of the learning process, with the teachings of Bahad et al. for incorporating more specific architectures of neural networks including CNN layers, pooling layers, etc., as each have specific benefits as described in sections 3.1 and 3.2 of Bahad et al. such as the machine translation of CNNs and the sequence prediction (embedding) of RNNs. One would have had a reasonable expectation of success given that Karim et al. is directed to the use of neural networks in predicting bioprocess states, with Misra et al. providing methods for optimization of neural networks, and Bahad et al., while not dealing with such information, is merely teaching the use of newer methods in neural networks for better prediction outcomes. Therefore, it would have been obvious to a person skilled in the art at the time of filing to modify the teachings of each and to be successful.

Claim 18 is directed to the method of claim 17 but further specifies that the historical data comprise time-series data. Karim et al. and Misra et al. teach the method of claims 1, 5-9 and 12 as previously described. Karim et al. teaches on page 497, column 1, paragraph 2 “Several sets of batch fermentation data were obtained at different temperatures, providing a suitable candidate for training the neural network on the behavior of the process at various environmental conditions. Five data sets were generated at temperatures 30C, 33C, 35C, 37C and 39C. The estimator was required to predict current biomass, glucose and ethanol concentrations every 15 minutes, using on-line measurements of temperature, redox potential, % CO2 in exhaust bioprocess gas, and optical density. Therefore, each data set consisted of 41 time patterns corresponding to 15 minute sampling during 10 hrs”, reading on training the deep learning neural network with windows of historical time-series data.

Claim 19 is directed to the method of claim 17 but further specifies that the training use a regression function. Karim et al. and Misra et al. teach the method of claims 1, 5-9 and 12 as previously described. Misra et al. teaches on page 3816, column 1, paragraph 2 “Additionally, for use in reranking, we perform logistic regression with several observable neighborhood features…”, reading on training the deep learning neural network to define a regression function based on ground truth historical fermentation data and corresponding known fermentation state data.

Claim 20 is directed to the method of claim 17 but further specifies that the LSTM layer be connected to a dense output layer. Karim et al. and Misra et al. teach the method of claims 1, 5-9 and 12 as previously described. Bahad et al. teaches on page 78, paragraph 3 “Global Max Pooling layer is used to extract the maximum value from each filter. The resultant is passed through several dense hidden layers with dropout. Finally softmax layers are used to make a binary decision of whether or not the article is credible”, reading on wherein the at least one LSTM layer is connected to a dense output layer, the method further comprising: outputting, by the dense output layer and based on output generated by the at least one LSTM layer, the probabilities for each of the plurality of known fermentation states.

Claim 21 is directed to the method of claim 17 but further specifies that the LSTM layer comprises four different layers each with their specified number of units. Karim et al. and Misra et al. teach the method of claims 1, 5-9 and 12 as previously described. Bahad et al. teaches on page 78, paragraph 1 “LSTMs help to preserve the error that can be back-propagated through time and in lower layers of a deep network. Bi-directional processing is an evident approach for a large text sequence prediction and text classification. As shown in Figure 3, a Bi-Directional LSTM network steps through the input sequence in both directions at the same”, and in paragraph 3 “model selection among CNN, variation of RNN as Vanilla RNN, LSTM-RNN, and Bi-directional LSTM-RNN is carried out…Global Max Pooling layer is used to extract the maximum value from each filter. The resultant is passed through several dense hidden layers with dropout. Finally softmax layers are used to make a binary decision of whether or not the article is credible. Similarly in Bi-Directional LSTM network each embedding layer corresponding to training data is inspected in both orders at the same time”. While none of the cited references explicitly teach the exact combination of layers and nodes within the neural network, this is merely an optimization that would be obvious through routine optimization under MPEP 2144.05(II), as Karim et al. points to on page 497, column 1, paragraph 3 “The number of inputts and outputs define the number of nodes in the input and output layer of the network (see Fig. 2(a)). One hidden layer was used in this study and to determine the number of nodes in the hidden layer, different number of hidden nodes were proposed and evaluated according to the Mean Squared Error criteria”. Therefore, it would have been obvious to a person skilled in the art at the time of filing to optimize the teachings of each and to be successful.

Claim 22 is directed to the method of claim 17 but further specifies that the training be performed using one of the specified data types. Karim et al. and Misra et al. teach the method of claims 1, 5-9 and 12 as previously described. Karim et al. teaches on page 497, column 1, paragraph 2 “Several sets of batch fermentation data were obtained at different temperatures, providing a suitable candidate for training the neural network on the behavior of the process at various environmental conditions. Five data sets were generated at temperatures 30C, 33C, 35C, 37C and 39C. The estimator was required to predict current biomass, glucose and ethanol concentrations every 15 minutes, using on-line measurements of temperature, redox potential, % CO2 in exhaust bioprocess gas, and optical density. Therefore, each data set consisted of 41 time patterns corresponding to 15 minute sampling during 10 hrs”, reading on training the first deep learning neural network and the second deep learning neural network based on one or more of: historical fermentation data, fermentation state data, or synthetic fermentation data.

Claim 23 is directed to the method of claim 17 but further specifies that the fermentation states comprise one or more of the specified states. Karim et al. and Misra et al. teach the method of claims 1, 5-9 and 12 as previously described. Karim et al. teaches in Figure 2, “Sample configurations for neural-based state estimation of ethanol fermentation”, of which a “stable state” would merely be an unchanging output layer, thereby reading on wherein the known fermentation state comprises one or more indication of: a stable state, a fermentation performance improvement, a fermentation performance decline, or a fermentation process upset.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KEENAN NEIL ANDERSON-FEARS whose telephone number is (571)272-0108. The examiner can normally be reached M-Th, alternate F, 8-5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Karlheinz Skowronek, can be reached at 571-272-9047. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/K.N.A./Examiner, Art Unit 1687
/OLIVIA M. WISE/Supervisory Patent Examiner, Art Unit 1685
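For readers mapping the examiner's architecture citations onto the claim language, the sketch below illustrates the kind of model the rejected claims recite: a bi-directional LSTM layer over a window of time-series fermentation data feeding a dense softmax output layer that assigns the known fermentation state with the highest probability (claims 17 and 20), plus a regression variant that takes the historical window together with a window of future control decisions and predicts future metabolite production (claim 10). This is an editorial illustration only, assuming TensorFlow/Keras; the layer sizes, feature names, control variables, and state labels are hypothetical placeholders and do not come from the application, the office action, or the cited references.

    # Illustrative sketch only (not the applicant's or the examiner's implementation).
    import numpy as np
    import tensorflow as tf

    N_TIMESTEPS = 40   # hypothetical: 15-minute samples over a 10-hour window
    N_FEATURES = 4     # hypothetical: temperature, redox potential, %CO2 off-gas, optical density
    STATES = ["stable", "performance improvement", "performance decline", "process upset"]

    # Claim 17/20-style classifier: a bi-directional LSTM layer connected to a
    # dense (softmax) output layer that yields a probability for each known state.
    classifier = tf.keras.Sequential([
        tf.keras.Input(shape=(N_TIMESTEPS, N_FEATURES)),
        tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
        tf.keras.layers.Dense(len(STATES), activation="softmax"),
    ])
    classifier.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

    def assign_state(historical_window: np.ndarray) -> str:
        """Assign the known fermentation state with the highest predicted probability."""
        probs = classifier.predict(historical_window[np.newaxis, ...], verbose=0)[0]
        return STATES[int(np.argmax(probs))]

    # Claim 10-style predictor: the historical window is concatenated with a window
    # of future control decisions (hypothetical controls: gas flow, pressure, agitation)
    # and the network regresses future metabolite production, e.g. ethanol and acetate.
    N_CONTROLS = 3
    predictor = tf.keras.Sequential([
        tf.keras.Input(shape=(N_TIMESTEPS, N_FEATURES + N_CONTROLS)),
        tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
        tf.keras.layers.Dense(2),  # [ethanol, acetate] production estimates
    ])
    predictor.compile(optimizer="adam", loss="mse")

Under these assumptions, training on windows of historical time-series data (claims 18 and 22) would be ordinary supervised fitting of such models, and the layer and unit counts are exactly the kind of hyperparameters the examiner characterizes as routine optimization under MPEP 2144.05(II).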

Prosecution Timeline

Mar 01, 2022: Application Filed
Jan 26, 2026: Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592298
Hardware Execution and Acceleration of Artificial Intelligence-Based Base Caller
Granted Mar 31, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on the 1 most recent grant.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 6%
With Interview: 56% (+50.0%)
Median Time to Grant: 5y 1m
PTA Risk: Low
Based on 16 resolved cases by this examiner. Grant probability derived from career allow rate.
