DETAILED ACTION
Notice of AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Status of Objections and Rejections
Applicant's response, filed 03/02/2026, has been fully considered.
The following rejections and/or objections are either maintained or newly applied for claims 1-20. They constitute the complete set applied to the instant application. Herein, "the previous Office action" refers to the Final Rejection of 10/01/2025.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 03/02/2026 has been entered.
Status of the Claims
Claims 1-20 are pending.
Claims 1-20 are rejected.
Priority
No benefit of any domestic or foreign priority application is claimed in this application. Accordingly, the effective filing date of claims 1-20 is the actual filing date of 06/30/2021.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one having ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f):
(A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and
(C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word "means" (or "step" or the generic placeholder) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f). The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word "means" (or "step" or the generic placeholder) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f). The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.
Claim limitations in this application that use the word "means" (or "step" or the generic placeholder) are being interpreted under 35 U.S.C. 112(f) except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word "means" (or "step" or the generic placeholder) are not being interpreted under 35 U.S.C. 112(f) except as otherwise indicated in an Office action.
Claim limitations that use the terms "first logic," "second logic," and "third logic" and are being interpreted under 112(f) are:
"first logic to generate one or more peak locations for a chromatogram data set" (claims 1-2, and 15);
"first logic to: receive a command from a user to train a machine-learning computational model" (claims 8 and 15);
"first logic is to process the chromatogram data set" (claim 16);
"second logic to cause the display of the one or more peak locations and the one or more baselines" (claims 1 and 15);
"second logic is to, for the at least one peak, cause the display of the associated peak start location" (claim 6);
"second logic to provide, to the user after initial training, an option to select the machine-learning computational model" (claim 8);
"second logic is to request, from a user" (claim 13);
"second logic is also to provide to the user" (claim 14);
"second logic is to receive the user adjustment" (claim 20);
"third logic to, for individual peaks, generate an associated integrated value" (claims 1 and 7);
"third logic is to, for individual peaks, identify an associated peak start location" (claim 4); and
"third logic to retrain the machine-learning computational model with the chromatogram data set" (claim 15).
Because these claim limitations are being interpreted under 35 U.S.C. 112(f), they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. The instant specification and claims do not set forth a structure/function relationship sufficient to define the recited terms "first logic," "second logic," and "third logic." The corresponding structure identified in the instant specification is that "any of the logic elements… may be implemented by one or more computing devices" [0020], and the steps therefore are interpreted as computer-implemented. For a computer-implemented limitation, the specification must disclose an algorithm for performing the claimed specific computer function, or else the claim is indefinite under 35 U.S.C. 112(b) (MPEP 2181(II)(B)).
Claims 1-2, and 15 recite "first logic to generate one or more peak locations for a chromatogram data set". Figs. 2-4 recite the algorithm to perform the function of generating the peak locations.
Claims 8 and 15 recite "first logic to: receive a command from a user to train a machine-learning computational model". Figs. 2-4 recite the algorithm to perform the function of receiving a command to train a model.
Claim 16 recites "first logic is to process the chromatogram data set". Figs. 2-4 recite the algorithm to perform the function of processing the chromatogram data set.
Claims 1 and 15 recite "second logic to cause the display of the one or more peak locations and the one or more baselines". Figs. 2-4 recite the algorithm to perform the function of causing the display of the one or more peak locations and the one or more baselines.
Claim 6 recites "second logic is to, for the at least one peak, cause the display of the associated peak start location". Figs. 2-4 recite the algorithm to perform the function of causing the display of the associated peak start location.
Claim 8 recites "second logic to provide, to the user after initial training, an option to select the machine-learning computational model". Figs. 2-4 recite the algorithm to perform the function of providing the option to select the machine-learning computational model.
Claim 13 recites "second logic is to request, from a user". Figs. 2-4 recite the algorithm to perform the function of requesting information from the user.
Claim 14 recites "second logic is also to provide to the user". Figs. 2-4 recite the algorithm to perform the function of providing information to the user.
Claim 20 recites "second logic is to receive the user adjustment". Figs. 2-4 recite the algorithm to perform the function of receiving the user adjustment.
Claims 1 and 7 recite "third logic to, for individual peaks, generate an associated integrated value". Figs. 2-4 recite the algorithm to perform the function of generating the associated integrated value.
Claim 4 recites "third logic is to, for individual peaks, identify an associated peak start location". Figs. 2-4 recite the algorithm to perform the function of identifying the associated peak start location.
Claim 15 recites "third logic to retrain the machine-learning computational model with the chromatogram data set". Figs. 2-4 recite the algorithm to perform the function of retraining the machine-learning computational model.
If applicant does not intend to have this limitation interpreted under 35 U.S.C. 112(f), applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid them being interpreted under 35 U.S.C. 112(f).
Claim Rejections - 35 USC § 112(b)
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION —The specification shall conclude with one or more claims
particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
Claims 1-7 and 11-20 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor regards as the invention. Dependent claims are rejected similarly, unless otherwise noted below. Any newly recited portions are necessitated by claim amendment. The following issues cause the respective claims to be rejected under 112(b) as indefinite:
In claim 11, the relationship is unclear between the wherein clause and "the first logic is to receive …". As currently recited, it is unclear whether "the first logic is to receive …" is part of the wherein clause or is meant to further limit the "first machine-learning computational model:" which precedes a colon. Colons should be used to begin lists in which the list elements are separated by newlines, and usually are accompanied by terms such as "consisting of," "comprising," or "wherein." To overcome this rejection, the claim may be amended to recite "wherein: the machine-learning computational model is a first machine-learning computational model comprising: the first logic is to receive a command from a user to train a second machine-learning computational model", including "comprising" or any other appropriate term to clarify the relationship between the claim elements.
In claim 15, the recited "third logic to retrain the machine-learning computational model" is unclear because there is no previous recitation of a training step in this independent claim. To overcome this rejection, the claim may be amended to clarify the initial training of the machine learning computational model.
The following recitations require, but lack, antecedent basis, rendering their claims indefinite because there are no previous recitations of the following terms as written:
Claims 1, 6, and 15: "the display"
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 USC § 101 because the claimed inventions are directed to one or more Judicial Exceptions (JEs) without significantly more. Regarding JEs, "Claims directed to nothing more than abstract ideas..., natural phenomena, and laws of nature are not eligible for patent protection" (MPEP 2106.04 §I). Abstract ideas include mathematical concepts and procedures for evaluating, analyzing or organizing information, which are a type of mental process (MPEP 2106.04(a)(2)). Any newly recited portions are necessitated by claim amendment.
Background
MPEP 2106 organizes JE analysis into Steps 1, 2A (Prong One & Prong Two), and 2B as analyzed below. MPEP 2106 and the following USPTO website provide further explanation and case law citations: uspto.gov/patent/laws-and-regulations/examination-policy/examination-guidance-and-training-materials.
Step 1: Are the claims directed to a process, machine, manufacture, or composition of matter (MPEP 2106.03)?
Step 2A, Prong One: Do the claims recite a judicially recognized exception, i.e., a law of nature, a natural phenomenon, or an abstract idea (MPEP 2106.04(a-c))?
Step 2A, Prong Two: If the claims recite a judicial exception under Prong One, then is the judicial exception integrated into a practical application by an additional element (MPEP 2106.04(d))?
Step 2B: Do the claims recite a non-conventional arrangement of elements in addition to any identified judicial exception(s) (MPEP 2106.05)?
Analysis of instant claims
Step 1: Are the claims directed to a 101 process, machine, manufacture, or composition of matter (MPEP 2106.03)?
The instant claims are directed to a method (claims 1-20), which falls within one of the categories of statutory subject matter.
[Step 1: claims 1-20: Yes]
Step 2A, Prong One: Do the claims recite a judicially recognized exception, i.e., a law of nature, a natural phenomenon, or an abstract idea (MPEP 2106.04(a-c))?
Background
With respect to Step 2A, Prong One, the claims recite judicial exceptions in the form of abstract ideas. MPEP § 2106.04(a)(2) further explains that abstract ideas are defined as:
• mathematical concepts (mathematical formulas or equations, mathematical relationships
and mathematical calculations) (MPEP 2106.04(a)(2)(I));
• certain methods of organizing human activity (fundamental economic principles or practices, managing personal behavior or relationships or interactions between people) (MPEP 2106.04(a)(2)(II)); and/or
• mental processes (concepts practically performed in the human mind, including observations, evaluations, judgments, and opinions) (MPEP 2106.04(a)(2)(III)).
Analysis of instant claims
With respect to the instant claims, under the Step 2A, Prong One evaluation, the claims are found to recite abstract ideas that fall into the groupings of mental processes (in particular, procedures for observing, analyzing and organizing information) and mathematical concepts (in particular, mathematical relationships and formulas), as follows:
• "first logic to generate one or more peak locations for a chromatogram data set and to generate one or more baselines for the chromatogram data set" (claims 1 and 15);
• "generate an associated integrated value representing an area" (claims 1 and 7);
• "generate one or more baselines for the chromatogram data set by combining one or more estimated baselines that are output" (claim 2);
• "selecting a single estimated peak location from a cluster of estimated peak locations” (claim 3);
• "the third logic is to, for individual peaks, identify an associated peak start location and an associated peak end location” (claim 4);
• "train … based on the multiple chromatogram data sets and the associated labeled peak locations and baselines" (claim 8); and
• "third logic to retrain … with the chromatogram data set and the confirmed set of peak locations and baselines” (claim 15).
Dependent claims 5, 9-12, and 17-19 recite further steps that limit the judicial exceptions in independent claims 1, 8 and 15 and, as such, also are directed to those abstract ideas. For example, claim 5 limits the associated peak start location to not being at a baseline start location of the associated baseline; claim 10 limits at least one baseline to being associated with multiple peaks; claim 17 limits the chromatogram data sets to including scaling magnitudes; and claim 18 limits the chromatogram data sets to including signal resampling.
The abstract ideas recited in the claims are evaluated under the Broadest Reasonable Interpretation (BRI) and determined to each cover performance either in the mind and/or by mathematical operation. Without further detail as to the methodology involved in generating the chromatogram data sets, under the BRI one may, for example, use ordinary integration calculations to arrive at the integrated value associated with a peak. According to the specification at [0027], the claimed "generating peaks or baselines" is performed by a machine-learning model that may take any of a number of forms. The BRI in light of the specification therefore indicates no specific details, and under the BRI the first logic is a verbal equivalent defining mathematical calculations to be performed. Similarly, according to the specification at [0041], the claimed "generating an associated integrated value" is performed by calculations "by the integration logic to generate the integrated value associated with the peak"; under the BRI, the third logic likewise is a verbal equivalent defining mathematical calculations to be performed. Thus, the recited terms correspond to verbal equivalents of mathematical concepts because they constitute actions executed by a group of mathematical steps in the form of a mathematical algorithm (MPEP 2106.04(a)(2)). A mathematical concept need not be expressed in mathematical symbols, because "words used in a claim operating on data to solve a problem can serve the same purpose as a formula." In re Grams, 888 F.2d 835, 837 and n.1, 12 USPQ2d 1824, 1826 and n.1 (Fed. Cir. 1989). The human mind is also sufficiently capable of identifying, evaluating, and comparing this information; the recited steps therefore are interpreted as mental processes.
Likewise, a human would be capable of evaluating the output of the machine-learning computational model to determine whether a given point corresponds to a peak location.
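As a generic illustration of the mathematical-concept characterization above (not drawn from the prosecution record; the data and function name below are hypothetical), an integrated value of the kind recited can be produced with ordinary trapezoidal integration of the signal above a straight baseline:

```python
# Illustrative sketch only: all names and data are hypothetical, not from the
# instant specification. The "integrated value" is computed by the trapezoid
# rule over the portion of the signal above a linear baseline drawn between
# the peak start and peak end points.

def peak_area_above_baseline(times, signal, start, end):
    """Area between the signal and a straight baseline drawn from
    (times[start], signal[start]) to (times[end], signal[end])."""
    t0, t1 = times[start], times[end]
    b0, b1 = signal[start], signal[end]

    def baseline(t):
        # Linear interpolation of the baseline between the two endpoints.
        return b0 + (b1 - b0) * (t - t0) / (t1 - t0)

    area = 0.0
    for i in range(start, end):
        dt = times[i + 1] - times[i]
        h0 = signal[i] - baseline(times[i])
        h1 = signal[i + 1] - baseline(times[i + 1])
        area += 0.5 * (h0 + h1) * dt  # trapezoid rule
    return area

# A triangular "peak" of height 4 over a flat baseline of 1, width 2:
times = [0.0, 1.0, 2.0]
signal = [1.0, 5.0, 1.0]
print(peak_area_above_baseline(times, signal, 0, 2))  # 4.0
```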
[Step 2A Prong One: claims 1-20: Yes ]
Step 2A, Prong Two: If the claims recite a judicial exception under Prong One, then is the judicial exception integrated into a practical application by an additional element (MPEP 2106.04(d))?
Background
MPEP 2106.04(d).I lists the following example considerations for evaluating whether a judicial exception is integrated into a practical application:
An improvement in the functioning of a computer or an improvement to other technology or another technical field, as discussed in MPEP §§ 2106.04(d)(1) and 2106.05(a);
Applying or using a judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition, as discussed in MPEP § 2106.04(d)(2);
Implementing a judicial exception with, or using a judicial exception in conjunction with, a particular machine or manufacture that is integral to the claim, as discussed in MPEP § 2106.05(b);
Effecting a transformation or reduction of a particular article to a different state or thing, as discussed in MPEP § 2106.05(c); and
Applying or using the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception, as discussed in MPEP § 2106.05(e).
Analysis of instant claims
Instant claims 1-2, 6, 8-9, 11-16, and 19-20 recite additional elements that are not abstract ideas:
• "machine-learning computational model" (claims 1-2, 8-9, 11-16, and 19);
• "display of the one or more peak locations and the one or more baselines concurrently with the display of the chromatogram data set" (claims 1 and 15);
• "display of the associated peak start location as a dropline concurrently with the display of the chromatogram data set" (claim 6);
• "receive a command from a user to train a second machine-learning computational model" (claim 11);
• "receive a user adjustment of a peak location or a baseline" (claim 15);
• "store the one or more peak locations and one or more baselines" (claim 15);
• "receive the user adjustment through a user manipulation of the displayed one or more peak locations and one or more baselines in a graphical user interface" (claim 20).
Dependent claim 9 limits the machine-learning computational model to being a one-dimensional array of chromatogram data. Dependent claim 11 limits the command from the user to being an identification of multiple chromatogram data sets and labeled peak locations. Dependent claim 12 limits the data sets used to train the first and second machine-learning computational models to being different.
Considerations under Step 2A, Prong Two
The recited limitations in claims 1-2, 6, 8-9, 11-16, and 19-20 are interpreted as requiring the use of a computer. The claims thus explicitly recite steps executed by computers, which can be described as computer functions or instructions implemented on a generic computer. The further steps directed to additional non-abstract elements of a computing device do not describe any specific computational steps by which the "computer parts" perform or carry out the judicial exceptions, nor do they provide any details of how specific structures of the computer are used to implement these functions. The claims state nothing more than a generic computer which performs the functions that constitute the judicial exceptions.
Claims directed to "receive" and "store" data (claims 11, 15, and 20) read on receiving or transmitting data over a network (Symantec, 838 F.3d at 1321; MPEP 2106.05(a) pertains), which constitutes mere necessary data gathering and therefore corresponds to insignificant extra-solution activity. Claims reciting "display" (claims 1, 6, and 15) are interpreted as data output and as such insignificant extra-solution activity.
With respect to claims 1-2, 8-9, 11-16, and 19, the computer-related elements, the general purpose computer, and the recited machine-learning computational model do not rise to the level of significantly more than the judicial exception. The claims state nothing more than a generic computer which performs the functions that constitute the judicial exceptions. Hence, these are mere instructions to apply the judicial exceptions using a computer, which the courts have found not to provide significantly more when recited in a claim with a judicial exception (Alice Corp., 573 U.S. at 225-26, 110 USPQ2d at 1984; see MPEP 2106.05(f)). The specification as published also notes that the architecture of the machine-learning computational model may take any of a number of forms that may be used without limitation [0026]. The additional elements are set forth at such a high level of generality that they can be met by a general purpose computer. Therefore, the computer components constitute no more than a general link to a technological environment, which is insufficient to constitute an inventive concept that would render the claims significantly more than the judicial exceptions (see MPEP 2106.05(b)(I)-(III)).
There are no additional limitations to indicate details of exactly how the judicial exception is being integrated into a practical application. Claims that amount to instruction to apply the abstract idea using a generic computer do not render an abstract idea eligible. Alice Corp., 573 U.S. at 223, 110 USPQ2d at 1983. See also 573 U.S. at 224, 110 USPQ2d at 1984. MPEP 2106.05(b).
Hence, these are mere instructions to apply the abstract idea using a computer and insignificant extra-solution activity and therefore the claims do not integrate that abstract idea into a practical application (see MPEP 2106.04(d) § I; 2106.05(f); and 2106.05(g)). None of the dependent claims recite any additional non-abstract elements; they are all directed to further aspects of the information being analyzed, the manner in which that analysis is performed, or the mathematical operations performed on the information.
In Step 2A, Prong One above, claim steps and/or elements were identified as part of one or more judicial exceptions (JEs).
In this Step 2A, Prong Two immediately above claim steps and/or elements were identified as part of one or more additional elements. Additional elements are further discussed in Step 2B below.
Here in Step 2A, Prong Two, no additional step or element clearly demonstrates integration of the JE(s) into a practical application.
[Step 2A Prong Two: claims 1-20: No]
Step 2B: Do the claims recite a non-conventional arrangement of elements in addition to any identified judicial exception(s) (MPEP 2106.05)?
According to the analysis so far, the additional elements described above do not provide significantly more than the judicial exception. A determination of whether additional elements provide significantly more also rests on whether the additional elements, or a combination of elements, represent other than what is well-understood, routine, and conventional. Conventionality is a question of fact and may be evidenced by: a citation to an express statement in the specification, or to a statement made by an applicant during examination, that demonstrates the well-understood, routine, or conventional nature of the additional element(s); a citation to one or more of the court decisions discussed in MPEP 2106.05(d)(II) as noting the well-understood, routine, conventional nature of the additional element(s); a citation to a publication that demonstrates the well-understood, routine, conventional nature of the additional element(s); and/or a statement that the examiner is taking official notice with respect to the well-understood, routine, conventional nature of the additional element(s).
Claims 1-2, 8-9, 11-16, and 19 recite a computer or computer functions, interpreted as instructions to apply the abstract idea using a computer, where the computer does not impose meaningful limitations on the judicial exceptions; which can be performed without the use of a computer (MPEP 2106.04(d) § I; and MPEP 2106.05(f)).
The computer-related elements, the general purpose computer, and the machine-learning model do not rise to the level of significantly more than the judicial exception. The claims state a generic computer which performs the functions that constitute the judicial exceptions. Hence, these are mere instructions to apply the judicial exceptions using a computer, which the courts have found not to provide significantly more when recited in a claim with a judicial exception (Alice Corp., 573 U.S. at 225-26, 110 USPQ2d at 1984; see MPEP 2106.05(f)).
Claims directed to receiving and displaying information (claims 1, 6, 11, 15, and 20) recite additional elements that are well-understood, routine, conventional activities previously known to the industry. See MPEP 2106.05(d).
Claims directed to using mathematical calculations to generate peaks and baselines and displaying the chromatogram results recite steps known in the art as conventional (Lytle “Automatic Processing of Chromatograms in a High-Throughput Environment” Clinical Chemistry 62(1):144–153 (2016)).
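As a generic illustration of this conventionality point (outside the record; the signal values and threshold below are hypothetical), peak picking of the kind long practiced in chromatography software can be reduced to scanning a signal for local maxima above a noise threshold:

```python
# Illustrative sketch only: hypothetical data and threshold, not taken from
# the cited art or the instant application. Conventional peak detection can
# be as simple as locating local maxima that exceed a noise threshold.

def find_peaks(signal, threshold):
    """Return indices of local maxima in `signal` that exceed `threshold`."""
    peaks = []
    for i in range(1, len(signal) - 1):
        # A point is a peak if it rises above its left neighbor, does not
        # fall below its right neighbor, and clears the noise threshold.
        if signal[i] > threshold and signal[i - 1] < signal[i] >= signal[i + 1]:
            peaks.append(i)
    return peaks

chromatogram = [0.1, 0.2, 1.5, 3.0, 1.4, 0.2, 0.1, 2.2, 4.1, 2.0, 0.3]
print(find_peaks(chromatogram, 1.0))  # [3, 8]
```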
When the claims are considered as a whole, they do not integrate the abstract idea into a practical application; they do not confine the use of the abstract idea to a particular technology; they do not solve a problem rooted in or arising from the use of a particular technology; they do not improve a technology by allowing the technology to perform a function that it previously was not capable of performing; and they do not provide any limitations beyond generally linking the use of the abstract idea to a broad technological environment. See MPEP 2106.05(a) and 2106.05(h).
The instant claims constitute insignificant extra-solution activity which, when considered individually, is insufficient to constitute an inventive concept that would render the claims significantly more than an abstract idea (see MPEP 2106.05(d) and 2106.05(g)).
[Step 2B: claims 1-20: No]
Conclusion: Instant claims are directed to non-statutory subject matter
For the reasons above, the claims of the instant application, when the limitations are considered individually and as a whole, are directed to an abstract idea without significantly more and therefore lack an inventive concept.
Response to applicant's remarks in regard to Claim Rejections - 35 U.S.C. § 101
The Remarks of 03/02/2026 have been fully considered but are not persuasive for the reasons below:
It appears that pg. 6 para. 3 represents the only Applicant remarks specific to 101 and the instant claims (emphasis added):
As can be seen in claim 1, the generated one or more peak locations and one or more baselines are based on a structured output of the machine-learning computational model. This is not merely analyzing data using machine learning but, rather, a specific signal-processing architecture where a machine-learning computation model produces structured output tied to chromatogram retention time positions, peak locations and associated baselines are determined from that structured output, and corresponding integrated values are calculated. … The claims recite concrete data structures tied to a specific technical application in chromatography analysis. Thus, the claims recite significantly more than any alleged abstract idea
It appears that these remarks address Step 2A, Prong One – arguing that the claims are directed to machine-learning processing of data and not to a mental process. The argued structured output is based on indicators for peak-location and boundary-location information, which is the calculated output of the identified judicial exceptions. The claims recite actions that are capable of being performed mentally or mathematically. The machine-learning computational model does not rise to the level of significantly more than the judicial exception. The claims state nothing more than a generic computer which performs the functions that constitute the judicial exceptions. Hence, these are mere instructions to apply the judicial exceptions using a computer, which the courts have found not to provide significantly more when recited in a claim with a judicial exception (Alice Corp., 573 U.S. at 225-26, 110 USPQ2d at 1984; see MPEP 2106.05(f)).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
A. Claims 1-5, 7-8, 11-13, and 15-20 are rejected under 35 U.S.C. 103 as being unpatentable over Li (“Peak alignment of gas chromatography–mass spectrometry data with deep learning” Journal of Chromatography A 1604:460476 (2019)) in view of I (“Intelligent automation of high-performance liquid chromatography method development by means of a real-time knowledge-based approach” Journal of Chromatography A, 972:27–43 (2002)), as cited on the attached Form PTO-892. Any newly recited portions are necessitated by claim amendment. Bullet points indicate the teachings of the prior art with respect to the instant features. Instantly claimed elements considered equivalent to the prior art teachings are described in bold for all claims.
Claim 1 recites:
first logic to generate one or more peak locations for a chromatogram data set and to generate one or more baselines for the chromatogram data set, wherein an individual peak has an associated baseline, wherein the first logic includes a machine-learning computational model that outputs, for each point of the chromatogram data set, an indicator of whether the point corresponds to a peak location and boundary-location information that, when the indicator indicates that the point corresponds to the peak location, identifies a baseline start location and a baseline end location relative to the peak location, and wherein the one or more peak locations and the one or more baselines generated by the first logic are determined based on the indicator and the boundary-location information output by the machine-learning computational model;
second logic to cause the display of the one or more peak locations and the one or more baselines concurrently with the display of the chromatogram data set; and
third logic to, for individual peaks, generate an associated integrated value representing an area above the associated baseline and under a portion of the chromatogram data set corresponding to the individual peaks
• Li teaches a method for chromatogram retention time alignment through the use of deep learning neural networks for more complex, fuzzy chromatogram data sets (pg. 1 para. 1) (i.e. reading on logic and machine-learning computational model); wherein a network architecture comprises the ‘Mass Spectrum Encoder’, ‘Peak Encoder’ and ‘Chromatogram Encoder’ (pg. 2 Fig. 1) (i.e. support apparatus) to inform the peak profile with information along the peak’s retention time range to describe its shape (i.e. indicator of the one or more peak locations) (pg. 2 col. 2 para. 4 and pg. 8 Fig. 3) – that is, the intensity at all timepoints from the beginning to the end of each peak; wherein the peak encoder takes in information about the segment of the chromatogram centered on the peak (pg. 3 col. 1 para. 2) to identify the maximum intensity and the start and ends of the peak (pg. 4 col. 1 para. 2) (i.e. reading on baseline start location and a baseline end location relative to the peak location and indicator and the boundary-location information output) (pg. 2 col. 2 para. 4); wherein during peak detection, the baseline drift of each single chromatogram gets corrected using Asymmetric Least Square smoothing algorithm (i.e. reading on using logic to generate one or more baselines as the baselines are getting corrected) (pg. 4 col. 1 para. 1); wherein the area under the curve for each peak from the start to the end of the peak being calculated and attributed as the total count of the peak (i.e. generate an associated integrated value of area above the baseline corresponding to individual peaks) (pg. 4 col. 1 para. 2).
• Li does not teach "second logic to cause the display of the one or more peak locations and the one or more baselines concurrently with the display of the chromatogram data set." However, I teaches a type of artificial intelligence system for automation of HPLC method development (pg. 27 para. 1) with a computer screen presentation (pg. 33 Fig. 2); wherein the chromatogram obtained in this machine-learnt sequence displays excellent resolution and peak shape (pg. 40 col. 2 para. 1); wherein a stable baseline is detected for each run (pg. 36 col. 2 para. 2).
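For illustration only (this sketch is not part of the Li reference of record), baseline-drift correction by Asymmetric Least Square smoothing, as cited above, is commonly implemented along the following lines; all parameter values and the synthetic trace are hypothetical:

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def als_baseline(y, lam=1e5, p=0.01, n_iter=10):
    """Asymmetric Least Squares baseline estimate for a 1-D trace y.

    lam controls smoothness; p < 0.5 down-weights points above the
    fitted curve so peaks do not pull the baseline upward.
    """
    n = len(y)
    # Second-order difference operator: penalizes curvature of the baseline.
    D = sparse.diags([1.0, -2.0, 1.0], [0, -1, -2], shape=(n, n - 2))
    w = np.ones(n)
    z = y
    for _ in range(n_iter):
        W = sparse.spdiags(w, 0, n, n)
        # Solve (W + lam * D D^T) z = W y for the smoothed baseline z.
        z = spsolve((W + lam * D @ D.T).tocsc(), w * y)
        # Asymmetric reweighting: points above the fit get weight p.
        w = p * (y > z) + (1 - p) * (y < z)
    return z

# Synthetic chromatogram: linear drift plus one Gaussian peak.
t = np.arange(200, dtype=float)
drift = 0.01 * t
signal = drift + 5.0 * np.exp(-0.5 * ((t - 100.0) / 5.0) ** 2)
baseline = als_baseline(signal)
```

The estimated baseline tracks the slow drift while passing under the peak, which is the behavior relied upon when the corrected baselines are said to be "generated."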
Claim 2 recites:
wherein the first logic is to generate one or more baselines for the chromatogram data set by combining one or more estimated baselines that are output by the machine-learning computational model
• Li teaches a method for chromatogram retention time alignment through the use of deep learning neural networks (pg. 1 para. 1); wherein each sub-network is composed of two branches, to which the input is the relevant information from a pair of peaks (i.e. combining one or more estimated baselines) and the output of the overall network is a probability indicating how likely these peaks should be aligned together (pg. 2 col. 2 para. 2).
Claim 3 recites:
wherein generating one or more peak locations for the chromatogram data set includes selecting a single estimated peak location from a cluster of estimated peak locations
• Li teaches that outputs from the network are converted into distance values by taking the inverse, which then allow peaks to be grouped together using a hierarchical clustering algorithm (pg. 3 col. 2 para. 2) and later using a condition in the grouping algorithm to separate peaks from the same sample into different groups based on the order of their retention times (pg. 7 col. 2 para. 2).
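Purely as an illustrative sketch of the grouping step described above (the probability values are hypothetical and not drawn from Li's data), converting network output probabilities to distances by taking the inverse and then grouping peaks hierarchically could look like:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Hypothetical pairwise alignment probabilities for four detected peaks.
prob = np.array([
    [1.0, 0.9, 0.1, 0.1],
    [0.9, 1.0, 0.1, 0.1],
    [0.1, 0.1, 1.0, 0.8],
    [0.1, 0.1, 0.8, 1.0],
])
dist = 1.0 / prob          # taking the inverse converts similarity to distance
np.fill_diagonal(dist, 0)  # a peak is at zero distance from itself
Z = linkage(squareform(dist, checks=False), method="average")
groups = fcluster(Z, t=2.0, criterion="distance")  # cut the tree at distance 2
```

Peaks with high mutual probability land in the same group; selecting one representative per group corresponds to choosing a single estimated peak location from a cluster.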
Claim 4 recites:
wherein the third logic is to, for individual peaks, identify an associated peak start location and an associated peak end location
• Li teaches that the area under the curve for each peak is calculated from the start to the end of the peak and attributed as the total count of the peak (i.e. identifying, for individual peaks, an associated peak start location and an associated peak end location) (pg. 4 col. 1 para. 2).
Claim 5 recites:
wherein, for at least one peak, the associated peak start location is not at a baseline start location of the associated baseline
• Li teaches the area under the curve for each peak from the start to the end of the peak being calculated and attributed as the total count of the peak (i.e. the correction of the baseline inherently teaches that “the associated peak start location is not at a baseline start location of the associated baseline”) (pg. 4 col. 1 para. 2).
Claim 7 recites:
wherein the third logic is to, for individual peaks, generate the associated integrated value representing an area above the associated baseline, under the chromatogram data set, and between the associated peak start location and the associated peak end location
• Li teaches the area under the curve for each peak from the start to the end of the peak being calculated and attributed as the total count of the peak (i.e. reading on associated integrated value and “area above the baseline”) (pg. 4 col. 1 para. 2); wherein a network architecture comprises the ‘Mass Spectrum Encoder’, ‘Peak Encoder’ and ‘Chromatogram Encoder’ (pg. 2 Fig. 1) to inform the peak profile with information along the peak’s retention time range to describe its shape (pg. 2 col. 2 para. 4 and pg. 8 Fig. 3) – that is, the intensity at all timepoints from the beginning to the end of each peak (i.e. reading on between the associated peak start location and the associated peak end location) (pg. 2 col. 2 para. 4).
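For illustration (the trace, baseline, and index values are hypothetical, not taken from the record), an integrated value representing the area above a baseline and under the trace between a peak start and end can be computed with a simple trapezoidal rule:

```python
import numpy as np

t = np.linspace(0.0, 10.0, 101)                  # retention-time axis
signal = np.exp(-0.5 * ((t - 5.0) / 0.5) ** 2)   # hypothetical Gaussian peak
baseline = np.full_like(t, 0.02)                 # flat baseline for simplicity
start, end = 40, 60                              # detected peak start/end indices

# Integrate (signal - baseline) between the peak start and end locations.
area = np.trapz(signal[start:end + 1] - baseline[start:end + 1],
                t[start:end + 1])
```

Subtracting the baseline before integrating is what makes the result an "area above the associated baseline" rather than a raw area under the trace.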
Claim 8 recites:
first logic to: receive a command from a user to train a machine-learning computational model, wherein the command from the user includes an identification of multiple chromatogram data sets and labeled peak locations and baselines associated with the multiple chromatogram data sets for training the machine-learning computational model, and initially train the machine-learning computational model based on the multiple chromatogram data sets and the associated labeled peak locations and baselines, wherein the machine-learning computational model is to output, for each point of an input chromatogram data set, an indicator of whether the point corresponds to a peak location and boundary-location information that, when the indicator indicates that the point corresponds to a peak location, identifies a baseline start location and a baseline end location relative to the peak location; and
second logic to provide, to the user after initial training, an option to select the machine-learning computational model for application to a subsequent chromatogram data set
• Li teaches the user-defined parameters being (i.e. commands received from a user): the number of training epochs, the split in the training data set for validation during training, the batch size for training, and the cut-off time between pairs of peaks to compare (i.e. reading on user adjustment of a peak location or a baseline) (pg. 9 col. 1 para. 1). It is interpreted that, in the context of analyzing chromatograms using machine learning, adjusting the parameters described allows the model to better identify and refine the shape of chromatographic peaks by iteratively learning from the data. Regarding the user defining the number of training epochs, it is interpreted that a "training epoch" in machine learning refers to a single complete pass of the entire training dataset through a model, meaning the model has seen every data point in the dataset once during that epoch; which reads on the “identification of multiple chromatogram data sets and labeled peak locations and baselines associated with the multiple chromatogram data sets for training.” Li also teaches a method for chromatogram retention time alignment through the use of deep learning neural networks for more complex, fuzzy chromatogram data sets (pg. 1 para. 1) (i.e. reading on logic and machine-learning computational model); wherein a network architecture comprises the ‘Mass Spectrum Encoder’, ‘Peak Encoder’ and ‘Chromatogram Encoder’ (pg. 2 Fig. 1) (i.e. support apparatus) to inform the peak profile with information along the peak’s retention time range to describe its shape (i.e. indicator of the one or more peak locations) (pg. 2 col. 2 para. 4 and pg. 8 Fig. 3) – that is, the intensity at all timepoints from the beginning to the end of each peak (i.e. reading on baseline start location and a baseline end location relative to the peak location and indicator and the boundary-location information output) (pg. 2 col. 2 para. 4).
• Li does not teach "second logic to provide, to the user after initial training, an option to select the machine-learning computational model for application to a subsequent chromatogram data set." However, I teaches an analytical-session-development session that inputs the plan of operations and has all the information necessary to reason and decide the next steps such as the selection of attributes for the next experiment (pg. 34 col. 2 para. 1); wherein the information is used by a user via LabExpert (pg. 34 col. 1 para. 1).
Claim 11 recites:
wherein: the machine-learning computational model is a first machine-learning computational model; the first logic is to receive a command from a user to train a second machine-learning computational model, wherein the command from the user includes an identification of multiple chromatogram data sets and labeled peak locations and baselines associated with the multiple chromatogram data sets for training the second machine-learning computational model
• Li teaches the user-defined parameters being (i.e. commands received from a user): the number of training epochs, the split in the training data set for validation during training, the batch size for training, and the cut-off time between pairs of peaks to compare (i.e. reading on user adjustment of a peak location or a baseline) (pg. 9 col. 1 para. 1); wherein a total of seven training sets were generated (pg. 4 Table 1) and a total of seven models were trained and tested (pg. 4 Table 2).
• It is interpreted that, in the context of analyzing chromatograms using machine learning, adjusting the parameters described allows the model to better identify and refine the shape of chromatographic peaks by iteratively learning from the data. Regarding the user defining the number of training epochs, it is interpreted that a "training epoch" in machine learning refers to a single complete pass of the entire training dataset through a model, meaning the model has seen every data point in the dataset once during that epoch; which reads on the “identification of multiple chromatogram data sets and labeled peak locations and baselines associated with the multiple chromatogram data sets for training.”
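As an illustrative note on the "training epoch" interpretation above (the data are toy values, not drawn from Li), one epoch is simply one complete pass over every labeled example in the identified training set:

```python
# Toy training set: (chromatogram trace, label) pairs; values are hypothetical.
training_set = [
    ([0.1, 0.9, 0.2], 1),   # trace containing a peak
    ([0.0, 0.1, 0.0], 0),   # trace with no peak
]
n_epochs = 3
updates = 0
for epoch in range(n_epochs):
    # One epoch: the model sees every (trace, label) pair exactly once.
    for trace, label in training_set:
        updates += 1  # a real model would perform a weight update here
```

After `n_epochs` epochs, the model has processed every data set in the identified collection `n_epochs` times, which is the sense in which the epoch parameter presupposes an identified set of training chromatograms and labels.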
Claim 12 recites:
wherein the multiple chromatogram data sets and the labeled peak locations and baselines used to train the second machine-learning computational model are different from the multiple chromatogram data sets and labeled peak locations and baselines used to train the first machine-learning computational model
• Li teaches different training combinations and training data sets generated (pg. 4 Table 1); wherein the combination of data sets which were used to train each of the seven models (i.e. reading on first and second machine- learning computational models) (pg. 4 Table 2).
Claim 13 recites:
wherein the second logic is to request, from a user, a selection of which of multiple computational models, including the first machine-learning computational model and the second machine-learning computational model, to use to analyze a subsequent chromatogram data set
• Li does not teach the recitation above. However, I teaches an analytical-session-development session inputting objects, chromatographic selectivity objects, a planning object containing the current plan of operations, and results from previous and present HPLC separation methods; containing all of the HPLC-method-records for a particular method development session; that, when received by a decision tree, provides all of the information necessary to reason about the method development process and to decide the next step, such as selection of the preferred attributes of the next experiment (i.e. use to analyze a subsequent chromatogram data set) (pg. 34 col. 2 para. 1); wherein the information is used by a user via LabExpert (pg. 34 col. 1 para. 1); and wherein the programming detail associated with one decision-tree node within the intelligent machine-learning system calls for a certain procedure. Li teaches the combination of data sets which were used to train each of the seven models (i.e. reading on first and second machine-learning computational models) (pg. 4 Table 2).
Claim 15 recites:
first logic to generate one or more peak locations for a chromatogram data set and to generate one or more baselines for the chromatogram data set, wherein an individual peak has an associated baseline, and wherein the one or more peak locations and the one or more baselines are based on an output of a machine-learning computational model that outputs, for each point of the chromatogram data set, an indicator of whether the point corresponds to a peak location and boundary-location information that, when the indicator indicates that the point corresponds to a peak location, identifies a baseline start location and a baseline end location relative to the peak location; second logic to: cause the display of the one or more peak locations and the one or more baselines concurrently with the display of the chromatogram data set, receive a user adjustment of a peak location or a baseline, and store the one or more peak locations and one or more baselines, including the user-adjusted peak or baseline, as a confirmed set of peak locations and baselines; and third logic to retrain the machine-learning computational model with the chromatogram data set and the confirmed set of peak locations and baselines
• Li teaches a method for chromatogram retention time alignment through the use of deep learning neural networks for more complex, fuzzy chromatogram data sets (pg. 1 para. 1) (i.e. reading on logic and machine-learning computational model); wherein a network architecture comprises the ‘Mass Spectrum Encoder’, ‘Peak Encoder’ and ‘Chromatogram Encoder’ (pg. 2 Fig. 1) (i.e. support apparatus) to inform the peak profile with information along the peak’s retention time range to describe its shape (i.e. indicator of the one or more peak locations) (pg. 2 col. 2 para. 4 and pg. 8 Fig. 3) – that is, the intensity at all timepoints from the beginning to the end of each peak (i.e. reading on baseline start location and a baseline end location relative to the peak location and indicator and the boundary-location information output) (pg. 2 col. 2 para. 4); wherein during peak detection, the baseline drift of each single chromatogram gets corrected using Asymmetric Least Square smoothing algorithm (i.e. reading on using logic to generate one or more baselines as the baselines are getting corrected) (pg. 4 col. 1 para. 1); wherein the area under the curve for each peak from the start to the end of the peak being calculated and attributed as the total count of the peak (i.e. generate an associated integrated value of area above the baseline corresponding to individual peaks) (pg. 4 col. 1 para. 2); wherein validation samples are used during the training process to verify the increase in accuracy over training iterations (i.e. reading on retraining steps), using data the network hadn’t encountered during training (pg. 3 col. 1 para. 5).
• Li does not teach "second logic to cause the display of the one or more peak locations and the one or more baselines concurrently with the display of the chromatogram data set." However, I teaches a type of artificial intelligence system for automation of HPLC method development (pg. 27 para. 1) with a computer screen presentation (pg. 33 Fig. 2); wherein the chromatogram obtained in this machine-learnt sequence displays excellent resolution and peak shape (pg. 40 col. 2 para. 1); wherein a stable baseline is detected for each run (pg. 36 col. 2 para. 2).
Claim 16 recites:
wherein the first logic is to process the chromatogram data set and provide the processed chromatogram data set to the machine-learning computational model
• Li teaches that the detected peaks are processed together with the full GC–MS data to gather the three components (mass spectrum, peak profile and chromatogram segment) needed for the deep learning network (pg. 4 col. 1 para. 2).
Claim 17 recites:
wherein processing the chromatogram data set includes scaling magnitudes of the chromatogram data set
• Li teaches that a normalization process was applied to the data, whereby a standard chemical is analyzed on each day of analysis and used to normalize all data from that day (pg. 3 col. 2 para. 8).
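Illustratively (the numbers are hypothetical), the day-wise normalization Li describes — dividing each day's data by the response of a standard chemical analyzed that day — amounts to a simple scaling of magnitudes:

```python
import numpy as np

# Hypothetical peak intensities measured on one day of analysis.
day_intensities = np.array([10.0, 50.0, 20.0])
standard_response = 20.0  # response of that day's standard chemical (hypothetical)

# Scaling every magnitude by the daily standard makes days comparable.
normalized = day_intensities / standard_response
```

This is the sense in which a normalization step reads on "scaling magnitudes of the chromatogram data set."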
Claim 18 recites:
wherein processing the chromatogram data set includes signal resampling of the chromatogram data set
• Li teaches the result of alignment being a set of groups, where each group contains peaks from multiple samples that are predicted to be the same compound (i.e. reading on resampling of the same compound for the processing of the chromatogram data set) (pg. 3 col. 1 para. 6).
Claim 19 recites:
wherein the machine-learning computational model outputs estimated peak start locations and estimated peak end locations
• Li teaches a network architecture comprises the ‘Mass Spectrum Encoder’, ‘Peak Encoder’ and ‘Chromatogram Encoder’ (pg. 2 Fig. 1) (i.e. support apparatus) to inform the peak profile with information along the peak’s retention time range to describe its shape (i.e. indicator of the one or more peak locations) (pg. 2 col. 2 para. 4 and pg. 8 Fig. 3) – that is, the intensity at all timepoints from the beginning to the end of each peak (i.e. reading on baseline start location and a baseline end location relative to the peak location and indicator and the boundary-location information output) (pg. 2 col. 2 para. 4).
Claim 20 recites:
wherein the second logic is to receive the user adjustment through a user manipulation of the displayed one or more peak locations and one or more baselines in a graphical user interface
• Li does not teach the recitation above. However, I teaches that the development of a chromatography separation method involves an iterative process of planning, method implementation, execution and interpretation, wherein the execution of the plan provides the chromatographer with additional data, which, after interpretation, suggests what adjustments need to be implemented as a sequence of further decision and action steps (i.e., reading on adjustments through a user) (pg. 31 col. 2 para. 1); wherein the chromatogram obtained in this machine-learnt sequence displays excellent resolution and peak shape (pg. 40 col. 2 para. 1); wherein LabExpert provides the interface to the user (pg. 41 Fig. 14).
Rationale for combining (MPEP §2142-2143)
Regarding claims 1-5, 7-8, 11-13, and 15-20, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine, in the course of routine experimentation and with a reasonable expectation of success, the methods of Li in view of I because all references disclose methods for automation of chromatography method development. The motivation would have been to allow a dynamic self-monitoring system, capable of learning as new information is acquired or added (I, pg. 42 col. 2 para. 2).
Therefore, it would have been obvious to one of ordinary skill in the art to combine the chromatography method development approach of Li with the methods of I because such a combination is no more than the simple substitution of one known element for another. One of ordinary skill in the art would have been motivated to combine the teachings in these references with a reasonable expectation of success since the described teachings pertain to methods for automation of chromatography method development.
B. Claims 6, 9-10, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Li and I as applied to claims 1, 4-5, 8, and 11 above, further in view of Vaz (“Chromophoreasy, an Excel-Based Program for Detection and Integration of Peaks from Chromatographic and Electromigration Techniques” J. Braz. Chem. Soc. 27(10):1899-1911 (2016)), as evidenced by Ni ("One-and two-dimensional gas chromatography–mass spectrometry and high performance liquid chromatography–diode-array detector fingerprints of complex substances: A comparison of classification performance of similar, complex Rhizoma Curcumae samples with the aid of chemometrics." Analytica chimica acta 712:37-44 (2012)), as cited on the attached Form PTO-892. Any newly recited portions are necessitated by claim amendment. Bullet points indicate the prior art teachings corresponding to the instantly claimed features. Instantly claimed elements which are considered to be equivalent to the prior art teachings are described in bold for all claims.
Claim 6 recites:
wherein the second logic is to, for the at least one peak, cause the display of the associated peak start location as a dropline concurrently with the display of the chromatogram data set, the one or more peak locations, and the one or more baselines
• Li does not teach the recitation above. However, Vaz teaches an Excel-based program for recognition and integration of chromatographic and electrophoretic peaks (pg. 1899 para. 1); wherein the inset shows the recognition profile of these peaks, which were split through drop-line mode in a further step (pg. 1907 Fig. 7); which demonstrates the “peak start location as a dropline concurrently with the display of the chromatogram data set” while associating multiple peaks to one baseline.
Claim 9 recites:
wherein an input to the machine-learning computational model is a one-dimensional array of chromatogram data
• Li does not teach the recitation above. However, Vaz teaches an LC system being equipped with a Diode Array Detector (pg. 1900 col. 2 para. 1), which is an array detector that produces one-dimensional chromatography data for a given sample, as evidenced by Ni (pg. 37 para. 1).
Claim 10 recites:
wherein at least one baseline is associated with multiple peaks
• Vaz teaches an electropherogram showing partial separation and multiple peaks associated with the same baseline (pg. 1907 Fig. 7).
Claim 14 recites:
wherein the second logic is also to provide to the user, selectable options for non-machine-learning peak and baseline detection computational models to apply to a subsequent chromatogram data set
• Li does not teach the recitation above. However, Vaz teaches an Excel-based program (i.e. non-machine-learning model) for recognition and integration of chromatographic and electrophoretic peaks (i.e. computational models to apply to a subsequent chromatogram data set) (pg. 1899 para. 1); wherein analyses on every chromatogram segment (or a time range only, set by the user) are performed (pg. 1901 col. 2 para. 1).
Rationale for combining (MPEP §2142-2143)
Regarding claims 6, 9-10, and 14, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine, in the course of routine experimentation and with a reasonable expectation of success, the methods of Li and I, as applied above, further with the teachings of Vaz because all references disclose methods for automated detection and integration of chromatographic peaks. The motivation would have been to allow a dynamic self-monitoring system, capable of learning as new information is acquired or added (I, pg. 42 col. 2 para. 2).
Therefore, it would have been obvious to one of ordinary skill in the art to combine the chromatography method development methods of Li and I with the peak detection and integration program of Vaz because such a combination is no more than the simple substitution of one known element for another. One of ordinary skill in the art would have been motivated to combine the teachings in these references with a reasonable expectation of success since the described teachings pertain to methods for automation of chromatography method development.
Response to applicant's remarks in regard to Claim Rejections - 35 U.S.C. § 103
The Remarks of 03/02/2026 have been fully considered but are not persuasive for the reasons below:
It appears that pg. 7 para. 2 represents the only Applicant remarks specific to 103 and the instant claims (emphasis added):
While Li may output a chromatogram (shift a chromatogram for alignment), nowhere does Li discuss outputting with a machine-learning computational model an indicator for peak location or boundary-location information including baseline start or baseline end locations. Li doesn't contemplate this because Li is only concerned with alignment of two different chromatograms and not peak analysis with machine learning. Li outlines in the Abstract that their implementation does not require training which explicitly contradicts Claims 8 and 15
The Examiner disagrees. Li's abstract does not state that the "implementation does not require training." Instead, Li's abstract teaches that the method “is very easy to use as it requires no user input of reference chromatograms and parameters,” which does not equate to "not requiring training." Li explicitly teaches training steps. As applied to the instant claims, Li teaches a method for chromatogram retention time alignment through the use of deep learning neural networks for more complex, fuzzy chromatogram data sets (pg. 1 para. 1) (i.e. reading on logic and machine-learning computational model); wherein a network architecture comprises the ‘Mass Spectrum Encoder’, ‘Peak Encoder’ and ‘Chromatogram Encoder’ (pg. 2 Fig. 1) (i.e. structured output) to inform the peak profile with information along the peak’s retention time range to describe its shape (i.e. indicator of the one or more peak locations) (pg. 2 col. 2 para. 4 and pg. 8 Fig. 3) – that is, the intensity at all timepoints from the beginning to the end of each peak (i.e. reading on baseline start location and a baseline end location relative to the peak location and indicator and the boundary-location information output) (pg. 2 col. 2 para. 4); wherein activation functions were chosen to speed up the training process (i.e. reading on training steps) (pg. 2 Fig. 1); wherein training iterations are performed (pg. 3 col. 1 para. 5); wherein data will be used to generate training and validation samples (pg. 3 col. 2 para. 6); wherein chromatogram segments are used to generate several training sets (pg. 4 Table 1).
Obviousness may be established by combining or modifying the teachings of the prior art to produce the claimed invention where there is some teaching, suggestion, or motivation to do so found either in the references themselves or in the knowledge generally available to one of ordinary skill in the art. See In re Fine, 837 F.2d 1071, 5 USPQ2d 1596 (Fed. Cir. 1988), In re Jones, 958 F.2d 347, 21 USPQ2d 1941 (Fed. Cir. 1992), and KSR International Co. v. Teleflex, Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007). "It is well-established that a determination of obviousness based on teachings from multiple references does not require an actual, physical substitution of elements." In re Mouttet, 686 F.3d 1322, 1332, 103 USPQ2d 1219, 1226 (Fed. Cir. 2012). For the reasons described above, it is interpreted that the claims do not patentably distinguish the claimed invention from the teachings found in the prior art. Furthermore, in the instant application, the amended limitations are fully addressed by the existing claim rejections; see the Claim Rejections above.
Conclusion
No claims are allowed.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to FRANCINI A FONSECA LOPEZ whose telephone number is (571)270-0899. The examiner can normally be reached Monday - Friday 8AM - 5PM ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Olivia Wise can be reached at (571) 272-2249. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/F.F.L./Examiner, Art Unit 1685
/JANNA NICOLE SCHULTZHAUS/Examiner, Art Unit 1685