DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Status

Claims 1-20 are pending.

Priority

This application is a CON of PCT/US2021/025921, filed 04/06/2021, which claims benefit of application no. 63/008,196, filed 04/10/2020. The instant application has an effective filing date of 10 April 2020.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 02/15/2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement has been considered by the examiner.

Drawings

The drawings, submitted on 09/19/2022, are accepted by the examiner.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 6-8 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention for the following reasons. Claims 6-8 recite “the other sample,” a limitation that lacks antecedent basis and renders the metes and bounds of the claims unclear. To overcome this rejection, please provide proper antecedent basis within the instant claims.

Claim Rejections - 35 USC § 101

35 U.S.C.
101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to abstract ideas without significantly more, as detailed in the analysis below.

Eligibility Step 1: Subject matter eligibility evaluation in accordance with MPEP § 2106: Claims 1-18 are directed to a statutory category (method). Claim 19 is directed to a statutory category (system). Claim 20 is directed to a statutory category (product). Therefore, in accordance with MPEP § 2106.03, all claims fall within a statutory category. [Eligibility Step 1: YES]

Eligibility Step 2A: This step determines whether a claim is directed to a judicial exception in accordance with MPEP § 2106.

Eligibility Step 2A - Prong One: Limitations are analyzed to determine whether the claims recite any concepts that could equate to a judicial exception (i.e., an abstract idea, law of nature, or natural phenomenon). Possible judicial exceptions are explored below.
Recitations of Judicial Exceptions:

Claims 1, 19, 20:
identification of a type of machine-learning model that is to be used; and/or a machine-learning model hyperparameter; (mental process)
filtering the population of candidate solutions by: determining, for each of the candidate solutions and for each of the data elements, a predicted sample characteristic by processing the spectrum of the data element with the set of properties; (mathematical concept)
generating, for each of the population of candidate solutions, a fitness metric based on the predicted sample characteristics and the known characteristic of the data elements; (mathematical concept)
selecting an incomplete subset of the population of candidate solutions based on the fitness metrics; (mental process)
performing one or more additional generation iterations by: updating the population of candidate solutions to include a next-generation population of solutions identified using the incomplete subset of the population of candidate solutions and one or more genetic operators; (mathematical concept)
repeating the filtering of the population of candidate solutions using the updated population of candidate solutions; and generating a processing pipeline based on the set of properties of a particular candidate solution in the incomplete subset of the population of candidate solutions selected during a last generation iteration of the additional generation iterations.
(mathematical concept)

Claim 2: generating a predicted characteristic of the other sample by processing the other spectrum in accordance with the processing pipeline (mathematical concept)

Claim 4: wherein the set of properties for the particular candidate solution includes a hyperparameter for a particular type of machine-learning model, the particular type of machine-learning model including: partial least squares; random forest; or support vector machine (mathematical concept)

Claim 5: wherein the set of properties for the particular candidate solution includes a selection of or a hyperparameter for a particular type of machine-learning model, the particular type of machine-learning model being configured to generate classification outputs or numeric outputs. (mathematical concept)

Claim 8: wherein the predicted characteristic of the other sample characterizes: a concentration of one or more small-molecule analytes; a solvent; a prevalence of one or more protein variants; a protein higher-order structure; or large molecule impurities. (mathematical concept)

Claim 9: wherein the processing pipeline includes performing an asymmetric least squares technique to reduce or remove a baseline, and wherein the set of properties for the particular candidate solution includes at least one parameter for the asymmetric least squares technique (mathematical concept)

Claim 10: wherein the processing pipeline includes performing a smoothing technique to reduce or remove a baseline, and wherein the set of properties for the particular candidate solution includes at least one parameter for the smoothing technique. (mathematical concept)

Claim 11: wherein, for at least one sample of the plurality of samples, the plurality of data elements includes multiple data elements corresponding to the sample, the multiple data elements including different replicate spectra generated using the sample.
(mathematical concept)

Claim 12: partitioning the plurality of data elements into a training subset of the plurality of data elements and a testing subset of the plurality of data elements; (mental process) wherein the at least some of the plurality of data elements for which the predicted sample characteristics are determined are defined as the testing subset of the plurality of data elements; (mental process) and wherein filtering the population of candidate solutions further includes: learning one or more parameters using the testing subset of the plurality of data elements. (mental process)

Claim 13: wherein each of the plurality of samples corresponds to a same target chemical structure and to a same target formulation, wherein the plurality of samples includes multiple lot-specific subsets, each of the multiple lot-specific subsets including multiple samples manufactured during an individual lot, (mental process) and wherein the partitioning of the plurality of data elements includes: partitioning the individual lots into the training subset and the testing subset; (mental process) and partitioning the plurality of data elements based on the lot partitioning. (mental process)

Claim 14: generating a predicted characteristic of the other sample by processing the other spectrum with the processing pipeline; (mathematical concept) determining, based on the predicted characteristic, whether a quality-control condition is satisfied; (mental process) when the quality control condition is satisfied, distributing the other sample to be administered to a subject; and when the quality control condition is not satisfied, inhibiting distribution of the other sample for subject administration (mental process).
Claim 15: when the quality control condition is not satisfied, dynamically adjusting one or more parameters associated with production of the other sample (mental process)

Claim 16: performing a feature-selection process that selects, from a set of intensities of the spectrum, one or more intensities for use in generating the predicted characteristic of the predicted sample, wherein the feature-selection processing is performed prior to generation of the predicted characteristic by the processing pipeline. (mental process)

Claim 17: wherein the feature-selection process includes: identifying, from the spectrum, a set of wavenumbers, each wavenumber being associated with an intensity value; defining a score for each wavenumber of the set of wavenumbers using a regression analysis; (mental process, mathematical concept) sorting the set of wavenumbers according to the score of each wavenumber of the set of wavenumbers; (mental process) performing one or more feature-selection iterations, wherein each feature-selection iteration includes: generating a subset of the set of wavenumbers by removing one or more wavenumbers of the spectrum having a lowest score; (mental process) generating a model-validation score based on a cross-validation of the subset of the set of wavenumbers on the machine-learning model; (mathematical concept) selecting, from the one or more feature-selection iterations, a particular feature-selection iteration of the one or more feature-selection iterations that includes a model-validation score that is closest to a threshold; (mental process) selecting, for use in generating the predicted characteristic by the processing pipeline, intensities that correspond to the subset of the set of wavenumbers of the particular feature-selection iteration.
(mental process)

Claim 18: generating a predicted characteristic of the other sample by processing the other spectrum in accordance with the processing pipeline; (mathematical concept) determining, based on the predicted characteristic, whether a quality-control condition is satisfied; (mental process) when the quality control condition is satisfied, initiating or completing one or more manufacture processes configured to manufacture additional samples; and when the quality control condition is not satisfied, terminating or modifying the one or more manufacture processes (mental process).

Step 2A - Prong One Analysis: Selecting, sorting, and making determinations of data represent analysis techniques that require no more than mental observations and pen and paper. As such, limitations that recite these techniques fall into the mental process grouping of abstract ideas. Generating secondary data based on machine-learning algorithms (random forest, SVM, PLS) and mathematical calculations (asymmetric least squares, smoothing, cross-validation, fitness metrics) represents analysis techniques that transform and organize data via mathematical formulas, and such limitations thus fall under the mathematical concepts grouping of abstract ideas.

Eligibility Step 2A - Prong Two: A claim that integrates a judicial exception into a practical application will apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception. If the claim contains no additional claim elements beyond the abstract idea, the claim fails to integrate the abstract idea into a practical application (MPEP 2106.04(d)).

Eligibility Step 2B: Claim elements are probed for an inventive concept equating to significantly more than the judicial exception (MPEP 2106.04(II)).
Additional Elements within the claimed invention include:

Claims 1, 19, and 20: initializing a population of candidate solutions, wherein each of the candidate solutions is defined by a set of properties that include: an indication that a particular type of pre-processing is to be performed; a parameter of a pre-processing to be performed; accessing a data set including a plurality of data elements, each of the data elements including: a spectrum generated based on an interaction between one sample of a plurality of samples and energy from an energy source; and a known characteristic of the sample;

Claim 2: accessing another spectrum corresponding to another sample; outputting the predicted characteristic of the other sample.

Claim 3: wherein, for each data element of the plurality of data elements, the spectrum includes a Raman spectrum or an infrared spectrum.

Claim 6: wherein the other sample includes large molecules.

Claim 7: wherein the other sample includes small molecules.

Claim 14: accessing another spectrum corresponding to another sample

Claim 18: receiving the predicted characteristic; accessing another spectrum corresponding to another sample

The limitations above are directed to inputting, accessing, receiving, specifying, and outputting data necessary to complete the method of the claimed invention. As such, they recite mere data-gathering activities that qualify as insignificant extra-solution activities that do not, separately or as part of the claimed invention as a whole, integrate the judicial exception into a practical application per MPEP 2106.05(g). [Eligibility Step 2A - Prong Two: YES] The insignificant extra-solution data-gathering activities, as recited, are also found to be well-understood, routine, and conventional per Mayo, 566 U.S. at 79, 101 USPQ2d at 1968; OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1092-93 (Fed. Cir. 2015); and Electric Power Group, LLC v.
Alstom S.A., 830 F.3d 1350, 1354-55, 119 USPQ2d 1739, 1742 (Fed. Cir. 2016), for selecting information based on types of information and availability of information. [Eligibility Step 2B: NO]

Additional Elements that may be categorized differently include:

Claim 1: computer-implemented method

Claim 19: A system comprising: one or more data processors; and a non-transitory computer readable storage medium containing instructions which, when executed on the one or more data processors, cause the one or more data processors

Claim 20: computer-program product tangibly embodied in a non-transitory machine-readable storage medium, including instructions configured to cause data processors

The additional elements above represent generic computer components or implementations. When viewed separately or in the context of the whole claimed invention, they merely act as tools to carry out the judicial exceptions. Elements of this category do not integrate the judicial exceptions into a practical application per MPEP 2106.05(f). [Eligibility Step 2A - Prong Two: YES] The elements are further found to be well-understood, routine, and conventional per Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015), for storing and retrieving information in memory; and Bancorp Services v. Sun Life, 687 F.3d 1266, 1278, 103 USPQ2d 1425, 1433 (Fed. Cir. 2012), for performing repetitive calculations. [Eligibility Step 2B: NO]

As such, claims 1-20 are directed to judicial exceptions without significantly more and are rejected under 35 U.S.C. 101.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C.
102(a)(2) prior art against the later invention.

Claims 1-5, 7, 8, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Koljonen et al. (IDS reference; NPL; cite no. 24; 2008). Koljonen et al. reviews genetic algorithms in near-infrared (NIR) spectroscopy and chemometrics.

Claims 1, 19, and 20 are directed to computer-implemented methods, systems, and computer-readable media that perform the following steps: access a dataset with a spectrum, generated based on the interaction between a sample and an energy source, and a known characteristic of the sample; initialize candidate-solution properties that include indications that a particular type of pre-processing is to be performed, a parameter of pre-processing, a particular machine-learning algorithm, and a machine-learning algorithm hyperparameter; process the spectrum and property data to predict a characteristic of the sample; generate a fitness metric between the known and predicted sample characteristics; filter solutions based on the results; use the filtered results and at least one genetic operator to generate a next generation of solutions; filter the updated population of solutions with the same method; and generate a processing pipeline based on the solutions selected during the latest updated generation.

Koljonen et al. teaches that 18 wavelength variables of an NIR spectrum (page 190, column 2) are encoded using a binary alphabet (page 191, column 2). Koljonen et al. further teaches that in a typical genetic algorithm (GA) (page 190, fig. 1), after (i) creating an initial population, (ii) evaluating the fitness of each individual in the population (page 190, column 1), and (iii) checking the stopping condition, (iv) a new generation is generated (page 190, column 2) based on the findings (page 190, fig. 1). Koljonen et al.
further teaches that such GAs are well suited for multi-criteria optimization, with properties such as wavelength inclusion, pre-processing steps, the number of latent variables in the model, and the regression (or classifier) model itself (page 195, column 1); and, in this example, the machine-learning model predicts the concentration of functional groups in a sample (page 191, column 1). Koljonen et al. further teaches using a fitness function between the predicted and true organic compound sample concentration (page 191, column 1); a selection of parents, crossover, and mutation operators (page 190, column 2); and repeating steps ii to iv until the stopping condition is fulfilled (page 190, column 2). Therefore Koljonen et al. teaches filtering the updated population of solutions.

Claim 2 is directed to accessing, generating, and outputting a predicted characteristic of another sample with the processing pipeline and spectrum data. Claim 7 is directed to the sample including small molecules; and claim 8 is directed to predicting one of the following characteristics: a concentration of one or more small-molecule analytes; a solvent; a prevalence of one or more protein variants; a protein higher-order structure; or large molecule impurities. Koljonen et al. teaches that another researcher compared GA and Partial Least Squares-bootstrap models from NIR spectrum data to analyze predictive ability (page 192, column 1), specifically with regard to clavulanic acid concentration (page 192, column 1).

Regarding claim 3, Koljonen et al. teaches that near-infrared spectroscopy data sets were examined (page 193, column 1). Regarding claim 4, Koljonen et al. teaches using the Partial Least Squares (PLS) model and denoting the number of components to be used within it (page 191, column 1). Regarding claim 5, Koljonen et al. teaches that the results from three NIR data sets showed comparable classification accuracy using Partial Least Squares (page 193, column 1).
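For context, the generic GA cycle reviewed by Koljonen et al. (create an initial population, evaluate each individual's fitness, select an incomplete subset as parents, and apply crossover and mutation operators to generate the next generation until a stopping condition is met) can be sketched as follows. This is an illustrative reconstruction only: the 18-variable binary encoding mirrors the reference, but the fitness function and all parameter values below are hypothetical stand-ins, not taken from the reference.

```python
import random

random.seed(0)

# Illustrative sketch of the GA cycle reviewed by Koljonen et al.
# Each individual is a binary mask over 18 wavelength variables, as in the
# reference's binary-alphabet encoding; the fitness function below is a
# hypothetical stand-in for a real model-based fitness metric.
N_VARS, POP_SIZE, N_GEN = 18, 20, 30

def fitness(individual):
    # Stand-in fitness: rewards masks that keep variables 3..8 and drop the rest.
    return sum(1 for i, bit in enumerate(individual) if bit == (3 <= i <= 8))

def crossover(a, b):
    # Single-point crossover genetic operator.
    cut = random.randrange(1, N_VARS)
    return a[:cut] + b[cut:]

def mutate(individual, rate=0.05):
    # Bit-flip mutation genetic operator.
    return [bit ^ (random.random() < rate) for bit in individual]

# (i) create an initial population.
population = [[random.randint(0, 1) for _ in range(N_VARS)]
              for _ in range(POP_SIZE)]
for _ in range(N_GEN):
    # (ii) evaluate the fitness of each individual in the population.
    ranked = sorted(population, key=fitness, reverse=True)
    # (iii) select an incomplete subset (the fitter half) as parents.
    parents = ranked[:POP_SIZE // 2]
    # (iv) generate the next generation with the genetic operators.
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print(fitness(best))
```

Because the fitter half of each generation is carried forward unchanged, the best fitness in the population never decreases across iterations, which is the filtering behavior the rejection maps to the claimed generation iterations.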
Though Koljonen et al. does not explicitly teach generating a processing pipeline based on the updated generations, with all the features described above in one single invention, Koljonen et al. teaches a review of applications that include both GAs in methodology and spectroscopic data recorded in the near-infrared range (page 189, column 2). One of ordinary skill in the art would have had sufficient motivation, prior to the effective filing date, to combine the known techniques taught by Koljonen et al. in order to arrive at the predictable result of a genetic-algorithm NIR spectrum processing pipeline.

Claims 9 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Koljonen et al. (IDS reference; NPL; cite no. 24; 2008), as applied to claims 1-5, 7, 8, and 19-20 above, in view of Liu et al. (IDS reference; NPL; cite no. 28; 2017). Koljonen et al. teaches a genetic-algorithm framework for predicting sample characteristics with machine learning and NIR spectral data. Claims 9 and 10 are directed to the processing pipeline performing an asymmetric least squares (claim 9) or smoothing (claim 10) technique to reduce or remove a baseline; and the set of candidate-solution properties including a parameter for the technique. Koljonen et al. teaches utilizing the tendency of genetic algorithms to get stuck in local optima … to perform a type of noise filtering (page 192, column 1). Koljonen et al. does not teach that noise filtering is accomplished with a specific technique.

Liu et al. describes differences between deep convolutional neural networks and conventional machine-learning techniques for Raman spectrum recognition. Liu et al. teaches that conventional machine-learning methods such as SVM and Random Forest are not capable of handling Raman signals that are not properly baseline corrected, and therefore require explicit baseline correction in their processing pipelines (page 9, column 1). Liu et al.
further teaches selecting another dataset which contains raw (uncorrected) spectra and six widely used baseline correction methods: modified polynomial fitting, rubber band, robust local regression estimation, iterative restricted least squares, asymmetric least squares smoothing, and rolling ball (page 10, column 1). Liu et al. further teaches, specifically in Figure 4, the difference between raw spectra and corresponding spectra baseline corrected by asymmetric least squares (page 10, column 1).

Therefore, Koljonen et al. teaches that noise filtering is a known technique in genetic-algorithm spectrum prediction frameworks. Liu et al. provides sufficient teachings for one of ordinary skill in the art to accomplish filtering in conventional machine-learning models by including a mandatory baseline-correction process parameter. Liu et al. further provides motivation for one of ordinary skill in the art to use an asymmetric least squares smoothing technique to enact baseline correction, as it is one of the most widely used techniques, specifically shown to impact spectra of conventional machine-learning methods analogous to Koljonen et al.

Claims 6, 11-13, 16, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Koljonen et al. (IDS reference; NPL; cite no. 24; 2008), as applied to claims 1-5, 7, 8, and 19-20 previously, in view of Schwanninger et al. (J. Near Infrared Spectrosc.; Vol. 19; 2011). Koljonen et al. teaches a genetic-algorithm framework for predicting sample characteristics with machine learning and NIR spectral data. Claim 6 is directed to the sample including large molecules. Koljonen et al. does not explicitly teach samples including large molecules. Schwanninger et al. describes the determination of lignin content in spruce wood via infrared spectroscopy and partial least squares regression. Schwanninger et al.
teaches that samples were taken from inner and outer [tree] rings (page 320, column 2), which had their total lignin content determined (page 320, column 2). Schwanninger et al. further teaches that lignin is a major polymeric wood constituent (page 320, column 1).

Claim 11 is directed to the dataset including at least two different replicate spectra of one or more samples. Schwanninger et al. teaches that replicate spectra must be kept together in the dataset (page 321, column 2) and were treated as one sample (page 321, column 2).

Claim 12 is directed to splitting the data into training and testing sets; and using the testing set to determine a predicted sample characteristic and filter the solutions. Regarding claim 12, Koljonen et al. teaches that a model is calibrated and evaluated for each trial with a training set; and the performance or fitness of the model is evaluated using an independent test set (page 192, column 1). Therefore Koljonen et al. teaches splitting the data and using the testing set to filter the data with the fitness function. Koljonen et al. does not teach that the testing set is used to determine a predicted sample characteristic. Schwanninger et al. teaches that, when applying PLS to determine lignin content, care should be taken that sample subsets for cross-validation (CV) and test-set (TS) validation are representative of the whole data set (page 321, column 2). Schwanninger et al. further teaches splitting the reference data set, one half for CV (internal validation) and the other half for TS (external validation), and then changing the groups, so the one previously used for CV (CV1) was then used for TS (TS1) and the one first used for TS (TS2) served for CV (CV2) (page 321, column 2). Therefore Schwanninger et al. uses the testing set in the prediction of sample characteristics.
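The swapped two-half validation that Schwanninger et al. describe, with each half of the reference set serving once for internal (CV) calibration and once for external (TS) validation, can be sketched as follows. This is an illustration only: the synthetic data and the ordinary-least-squares model are hypothetical stand-ins for the NIR data and PLS model of the reference.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative sketch of swapped two-half validation (Schwanninger et al. style).
# X plays the role of spectral predictors; y is the measured property
# (e.g., lignin content); both are synthetic stand-ins here.
n = 40
X = rng.normal(size=(n, 5))
y = X @ np.array([1.0, -0.5, 0.3, 0.0, 0.0]) + 0.05 * rng.normal(size=n)

# Two representative halves of the reference data set.
halves = (np.arange(0, n, 2), np.arange(1, n, 2))

def fit_and_score(train_idx, test_idx):
    # Calibrate on one half, validate externally on the other half.
    beta, *_ = np.linalg.lstsq(X[train_idx], y[train_idx], rcond=None)
    resid = y[test_idx] - X[test_idx] @ beta
    return float(np.sum(resid ** 2))  # PRESS-style sum of squared errors

# Round 1: half A calibrates (CV1), half B validates (TS2).
press_1 = fit_and_score(halves[0], halves[1])
# Round 2: roles swapped, so half B calibrates (CV2) and half A validates (TS1).
press_2 = fit_and_score(halves[1], halves[0])
print(round(press_1, 3), round(press_2, 3))
```

Comparing the two PRESS values gives a check that neither half is unrepresentative of the whole data set, which is the concern Schwanninger et al. raise about subset selection.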
Claim 13 is directed to the samples corresponding to an identical target chemical structure and formulation; including lot-specific samples manufactured during an individual lot; and splitting the entire dataset into training and testing subsets based on the individual lots. Schwanninger et al. teaches that the samples were obtained from a field trial of 50 clones with two to five replicates grown at two sites in south Sweden (page 320, column 2); that the samples were divided into two groups according to the two sites and into two groups according to ring numbers (4–6 or 11–13) (page 320, column 2); and that, based on the structure of softwood lignin, overtones (first and second) as well as combinations of vibrations of several groups (CH2, CH3, Car–H stretching of the aromatic ring and O–H) are expected (page 325, column 2).

Claim 16 is directed to selecting at least one intensity from the spectrum data as a feature before predicting a characteristic of the sample. Claim 17 is directed to selecting the intensity via the following steps: identify a wavenumber associated with an intensity value; use regression analysis to assign a “score” for each wavenumber; sort the wavenumbers according to their score; remove at least one wavenumber with the lowest score; perform cross-validation on the remaining wavenumbers via machine learning; complete cross-validation by selecting a model-validation score closest to a threshold; and select intensities that correspond to the remaining cross-validated wavenumbers as features. Schwanninger et al. teaches associating wavenumber ranges with varying spectral intensities in a property weighting spectrum (PWS) (page 324, column 2), which, apart from a multiplicative factor, is identical to the first PLS vector (page 324, column 1).
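The claim 17 feature-selection sequence summarized above (score each wavenumber by regression, sort, iteratively remove the lowest-scored variables, validate each subset, and keep the iteration whose model-validation score is closest to a threshold) can be sketched as follows. This is an illustration only: the data are synthetic, squared correlation serves as the per-wavenumber regression score, and a single hold-out R^2 stands in for full cross-validation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sketch of the claim 17 feature-selection loop on synthetic data.
# Columns of X play the role of per-wavenumber intensities; y is the known
# sample characteristic. Only the first five "wavenumbers" are informative.
n_samples, n_wavenumbers = 60, 30
X = rng.normal(size=(n_samples, n_wavenumbers))
true_coefs = np.zeros(n_wavenumbers)
true_coefs[:5] = [2.0, -1.5, 1.0, 0.8, -0.5]
y = X @ true_coefs + 0.1 * rng.normal(size=n_samples)

def validation_score(cols):
    # Hold-out R^2 as a stand-in for a cross-validated model-validation score.
    Xtr, Xte, ytr, yte = X[:40][:, cols], X[40:][:, cols], y[:40], y[40:]
    beta, *_ = np.linalg.lstsq(Xtr, ytr, rcond=None)
    resid = yte - Xte @ beta
    return 1.0 - resid.var() / yte.var()

# Score each wavenumber via a per-variable regression (squared correlation),
# then sort so the lowest-scored wavenumbers are removed first.
scores = np.array([np.corrcoef(X[:, j], y)[0, 1] ** 2
                   for j in range(n_wavenumbers)])
removal_order = np.argsort(scores)

iterations = []
kept = list(range(n_wavenumbers))
for j in removal_order[:-1]:  # drop one lowest-scored variable per iteration
    kept = [k for k in kept if k != j]
    iterations.append((validation_score(kept), list(kept)))

# Select the iteration whose model-validation score is closest to a threshold;
# the surviving columns are the intensities used by the processing pipeline.
threshold = 0.95
best_score, best_subset = min(iterations, key=lambda it: abs(it[0] - threshold))
print(len(best_subset), round(best_score, 3))
```

The loop mirrors the "removal of non-informative variables" rationale attributed to Schwanninger et al.: uninformative columns are discarded first, and the retained subset is the one whose validation score best matches the chosen criterion.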
The first PLS vector (rank) was obtained by regressing the near-infrared dataset against the total lignin content by means of full cross-validation (page 321, column 2); and the rank with the smallest PRESS (predictive residual error sum of squares: the sum of all squared differences between true and predicted values) was searched (page 321, column 2). Schwanninger et al. further teaches selecting relevant spectral ranges meeting both high-correlation-coefficient and significant-PWS-signal criteria (page 326, column 2), with the goal of identifying a subset of wavenumbers that produce the smallest possible errors in the models for quantitative determinations; and using the removal of non-informative variables to produce better prediction and simpler models (page 325, column 1).

Therefore, Schwanninger et al. teaches using only a portion of wavenumbers after a cross-validation process and splitting lot-specific samples with identical target chemical structures and formulas into different data subsets (training, testing) based on the individual lot they were produced from. Though the samples are natural compounds with a formula instead of a formulation, it would have been obvious to one of ordinary skill in the art to perform the same method of data partitioning for synthetically produced samples with manufactured formulations and to remove non-informative variables determined from the ranking system using correlation-coefficient-determined thresholds. As Koljonen et al. further teaches finding and selecting the most promising wavelet intervals (page 194, column 2), without a specific method of accomplishing the process, one of ordinary skill in the art would have had sufficient motivation to apply the teachings of Schwanninger et al. with a reasonable expectation of success in selecting relevant spectral data features for genetic-algorithm processing.

Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Koljonen et al. (IDS reference; NPL; cite no.
24; 2008), as applied to claims 1-5, 7, 8, and 19-20 previously, in view of Frano et al. (BW Tek; 2018). Claim 14 is directed to accessing spectrum data of a sample; processing the data with the processing pipeline; predicting a characteristic of the sample; using the predicted characteristic to determine if a quality-control condition is met; and distributing or inhibiting sample distribution based on the condition being met or unmet, respectively. Koljonen et al. teaches methods of using the genetic algorithm and infrared spectrum to predict characteristics of a sample via a machine-learning pipeline. Koljonen et al. does not teach using the processing pipeline to aid in a quality-control decision-making process.

Frano et al. describes Raman spectroscopy for at-line content uniformity testing of pharmaceutical tablets. Frano et al. teaches that content uniformity (CU) testing is a crucial task in pharmaceutical manufacturing, as it ensures that each product that reaches a consumer contains a safe dosage of the active pharmaceutical ingredient (API) (page 1, column 1). Frano et al. further teaches testing the prediction of the acetaminophen and lactose content using the models and collecting spectra for each sample (page 7, column 1). Frano et al. further teaches giving each model upper and lower limits for a simple “Pass” or “Fail” result (page 5, column 1); if the value calculated for either of the components is outside of the set lower and upper limits, the software will present a “Fail” message, signaling the user that a sample is not within the set guidelines (page 7, column 1); and if both components are found to be within the set lower and upper limits, the sample will “Pass” (page 7, column 1). Therefore Frano et al.
teaches using a spectrum processing pipeline to predict concentrations in a sample and applying the results as a quality-control metric for determining whether the finished pharmaceutical product is suitable to be released and distributed to consumers. As the processing pipeline taught by Frano et al. is analogous to that taught by Koljonen et al., it would have been obvious to one of ordinary skill in the art to combine the techniques in order to yield the predictable result of performing content uniformity testing to ensure distribution of a “tested and validated” sample.

Claims 15 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Koljonen et al. (IDS reference; NPL; cite no. 24; 2008) in view of Frano et al. (BW Tek; 2018), as applied to claim 14 above, in further view of Esmonde-White et al. (Anal Bioanal Chem; Vol. 409; p. 637-649; 2017). Claim 15 is further directed to adjusting at least one production parameter of the sample when the quality-control condition is NOT satisfied. Claim 18 is further directed to using the predicted characteristic of a sample to determine if a quality-control condition is met; if the condition is met, beginning or completing the manufacturing process for more samples; and, if unmet, stopping or changing the manufacturing process. Koljonen et al. in view of Frano et al. teaches establishing pass/fail quality conditions related to the distribution of samples. Frano et al. does not teach using the quality conditions to adjust production and manufacturing parameters of the sample. Esmonde-White et al. describes Raman spectroscopy as a process analytical technology for pharmaceutical manufacturing and bioprocessing. Esmonde-White et al.
teaches that real-time, in-process analytics have an important role in ensuring quality product and enabling in-process corrections (page 640, column 2); that, per the process description and batch sheet, a reaction of a sample would have been completed and collected at ~1250 min (page 641, column 1); but that in situ Raman data showed reaction completion nearly 600 min before the stipulated time (page 641, column 2). Esmonde-White et al. teaches that the data suggest that batch cycle time could be reduced by several hours when moving up to the commercial manufacturing scale, improving process efficiency (page 641, column 2). Therefore, Esmonde-White et al. teaches using a spectra processing pipeline, analogous to that of Koljonen et al., to predict sample characteristics and to adjust or change a parameter of production/manufacturing when a sample does not meet an expected quality-control metric. Though not explicitly taught, it would have been obvious to one of ordinary skill in the art to adopt the pass/fail criteria taught by Frano et al. in order to complete the manufacturing process according to the batch-sheet reaction time if the in-process analytics predicted a different time. As Esmonde-White et al. further teaches that the techniques can also be applied to in-line or off-line Raman measurements of content uniformity (page 642, column 1), one of ordinary skill in the art could reasonably combine the teachings with an expectation of success in using the framework for quality control relating to content uniformity and in-process reactions before completing product distribution or manufacturing.

Conclusion

No claims are currently allowed.

Correspondence

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Milana Thompson, whose telephone number is (571) 272-8740. The examiner can normally be reached Monday - Friday, 9:00-6:00 ET.
Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Karlheinz Skowronek, can be reached at (571) 272-1113. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/M.K.T./
Examiner, Art Unit 1687

/Karlheinz R. Skowronek/
Supervisory Patent Examiner, Art Unit 1687