Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
Applicant’s submission filed 7/28/25 has been entered. Claims 1-20 are presented for examination.
Information Disclosure Statement
The information disclosure statements (IDSs) submitted on 10/3/25 and 12/18/25 have been considered by the examiner.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
The claims recite the abstract idea of obtaining training data to train a neural network model, evaluating the training data based on gradient information corresponding to the neural network model, adjusting an index table based on the evaluation result, and using the adjusted index table to obtain a new training data subset for another round of iterative training.
STEP 1
Are the claims directed to a process, machine, manufacture or composition of matter?
The claims are all directed to a statutory category (e.g., a process, machine, manufacture, or composition of matter). The answer is YES.
STEP 2A. Prong 1
Exemplary claim 1 recites the following limitations that are found to constitute an abstract idea:
“--obtaining, in an nth training round of iterative training on a neural network model, a first training data subset from a training data set based on an index table, wherein n is a positive integer;
--training the neural network model based on training data in the first training data subset;
--obtaining gradient information corresponding to the neural network model;
--evaluating the training data based on the gradient information, to obtain an evaluation result;
--adjusting the index table based on the evaluation result, to obtain an adjusted index table; and
--using the adjusted index table to obtain a second training data subset for an (n+1)th round of the iterative training.”
The remaining limitations are no more than computer elements (i.e., a computer device, a processor) to be used as a tool to perform this abstract idea.
The recited limitations describe a process that, under its broadest reasonable interpretation, covers subject matter viewed as a certain method of organizing human activity with the additional recitation of generic computer components. For example, but for the “by a computer device” language, the “obtaining, training, obtaining, evaluating, adjusting, using” steps in the context of this claim encompass a user obtaining the training data, evaluating the training data based on information about the model, adjusting the database/table, and reusing the database/table for future training.
The practice of obtaining, evaluating, adjusting, and using data to adjust records is a commercial or legal interaction long prevalent in our system of commerce. The claims recite the idea of performing various conceptual steps generically, resulting in the adjustment of the index table. As determined earlier, none of these steps recites specific technological implementation details; instead, the claims reach this result by receiving, selecting, and determining data. Thus, the claims are directed to a certain method of organizing human activity.
STEP 2A, Prong 2
Are there additional elements or a combination of elements in the claim that apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that it is more than a drafting effort designed to monopolize the exception?
The claim recites one additional element: that a computer device is used to perform the steps.
The computer device in the steps is recited at a high level of generality, i.e., as a generic processor performing a generic computer function of processing data (obtaining, by a computer device, training data). This generic processor limitation is no more than mere instructions to apply the exception using a generic computer component.
Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea.
The claim is directed to an abstract idea.
STEP 2B
The next issue is whether the claims provide an inventive concept because the additional elements recited in the claims provide significantly more than the recited judicial exception. Taking the claim elements separately, the function performed by the computer system at each step of the process is purely conventional. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using a computer device/processor to perform steps amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.
Considered as an ordered combination, the computer components of Applicant's claims add nothing that is not already present when the steps are considered separately. The claimed invention does not focus on an improvement in computers as tools, but rather on certain independently abstract ideas that use computers as tools (Elec. Power, 830 F.3d at 1354). (Step 2B: NO).
There is no indication that the computer device or processor is anything other than a generic, off-the-shelf computer component, and the Symantec, TLI, and OIP Techs. court decisions cited in MPEP 2106.05(d)(II) indicate that mere collection or receipt of data over a network is a well-understood, routine, conventional function when it is claimed in a merely generic manner (as it is here).
Independent claim 11 recites similar limitations as claim 1 and is therefore rejected under the same rationale.
The dependent claims when analyzed as a whole are held to be patent ineligible under 35 U.S.C. 101 because the additional recited limitations fail to establish that the claims are not directed to an abstract idea. The claims provide minimal technical structure or components for further consideration either individually or as ordered combinations with the independent claims. As such, additional recited limitations in the dependent claims only refine the identified abstract idea further. Further refinement of an abstract idea does not convert an abstract idea into something concrete.
Accordingly, a conclusion that the collecting step is well-understood, routine, conventional activity is supported under Berkheimer Option 2.
See MPEP 2106.05(d)(II): The courts have recognized the following computer functions as well-understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity:
i. Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network); but see DDR Holdings, LLC v. Hotels.com, L.P., 773 F.3d 1245, 1258, 113 USPQ2d 1097, 1106 (Fed. Cir. 2014) ("Unlike the claims in Ultramercial, the claims at issue here specify how interactions with the Internet are manipulated to yield a desired result-a result that overrides the routine and conventional sequence of events ordinarily triggered by the click of a hyperlink." (emphasis added));
iv. Storing and retrieving information in memory, Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93.
The claims are ineligible.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-7, 10-17, 20 are rejected under 35 U.S.C. 103 as being unpatentable over Hammond et al. (US 20170213155 A1), and further in view of TAKEDA et al. (US 20170358045 A1).
Re-claim 1, Hammond et al. teach --A method implemented by a computer device and comprising:
--obtaining, in an nth training round of iterative training on a neural network model, a first training data subset from a training data set based on an index table, wherein n is a positive integer; --training the neural network model based on training data in the first training data subset;
(see e.g. [0032] The training data source 219 can send the training data to the instructor module 324 upon the request. The instructor module 324 can subsequently instruct the learner module 328 on training the AI object with pedagogical software programming language based curricula for training the concepts into the AI objects. Training an AI model can take place in one or more training cycles to yield a trained state of the AI model 106.
[0070] In addition to the foregoing, the AI system can include a training data loader 621 configured to load training data from a training data database 614a, a simulator 614b, and a streaming data server. The training data can be batched training data, streamed training data, or a combination thereof, and the AI engine can be configured to push or pull the training data from one or more training data sources selected from the simulator 614b, a training data generator, the training data database 614a, or a combination thereof.
[0079] The instructor module 324 can train easier-to-understand tasks earlier than more complex tasks. Thus, the instructor module 324 can train sub-concept AI objects and then higher-level AI objects. The instructor module 324 can train sub-concept AI objects that are dependent on other nodes after those other AI objects are trained. However, multiple nodes in a graph may be trained in parallel. The instructor module 324 can run simulations on the AI objects with input data including statistics and feedback on results from the AI object being trained from the learner module 328. The learner module 328 and instructor module 324 can work with a simulator or other data source to iteratively train an AI object with different data inputs.
--obtaining gradient information corresponding to the neural network model;
(see e.g. [0066] The AI engine can be built with an infrastructure that supports streaming data efficiently through the system. The AI engine can use a set of heuristics to make choices about which learning algorithms to use to train each BRAIN.
[0075] The architect module 326 can reference a database of algorithms to use as well as a database of network topologies to utilize. The architect module 326 can reference a table or database of best suggested topology arrangements including how many layers of levels in a topology graph for a given problem, if available.
[0079] The instructor module 324 can reference a knowledge base of how to train an AI object efficiently by different ways of flowing data to one or more AI objects in the topology graph in parallel, or, if dependencies exist, the instructor module 324 can train serially with some portions of lessons taking place only after earlier dependencies have been satisfied.)
--evaluating the training data based on the gradient information, to obtain an evaluation result;
(see e.g. [0075] The architect module 326 can instantiate a main concept and layers of sub-concepts feeding into the main concept. The architect module 326 can generate each concept including the sub-concepts with a tap that stores the output action/decision and the reason why that node reached that resultant output (e.g., what parameters dominated the decision and/or other factors that caused the node to reach that resultant output). This stored output of resultant output and the reasons why the node reached that resultant output can be stored in the trained intelligence model. The tap created in each instantiated node allows explainability for each step on how a trained intelligence model produces its resultant output for a set of data input. The architect module 326 can also instantiate multiple topology arrangements all to be tested and simulated in parallel to see which topology comes away with optimal results. The optimal results can be based on factors such as performance time, accuracy, computing resources needed to complete the training simulations, etc.
[0036] Note, the search engine 343 in query results will return relevant AI objects. The relevant AI objects can be evaluated and return based on a number of different weighting factors including amount of resources consumed to train that concept learned by the AI object, an estimated amount time to train the concept to achieve an accuracy threshold for the algorithm itself, data input and output types, closeness of the nature of the problem to be solved between the previous training and the user's current plans, etc.
[0083] The learner module 328 can also write the stored output of each node and why that node arrived at that output into the trained AI model, which gives explainability as to how and why the AI proposes a solution or arrives at an outcome.
[0105] the user submits data (of the same type as the trained AI model was trained with) to a trained AI model-server API and receives the trained AI model's evaluation of that data.)
--adjusting the index table […], to obtain an adjusted index table; and
--using the adjusted index table to obtain a second training data subset for an (n+1)th round of the iterative training.
(see e.g. [0130] In step 104, the AI database stores and indexes trained AI objects and its class of AI objects to have searchable criteria.
[0131] In step 105, parts of a trained artificial intelligence model are stored and indexed as a collection of trained AI objects corresponding to a main concept and a set of sub concepts feeding parameters into the main concept so that reuse, recomposition, and reconfiguration of all or part of a trained artificial intelligence model is possible.
[0125] In such embodiments, the AI engine can further include keeping a record in the one or more databases with a meta-learning module. The record can include i) the source code processed by the AI engine, ii) mental models of the source code, iii) the training data used for training the neural networks, iv) the trained AI models, v) how quickly the trained AI models were trained to a sufficient level of accuracy, and vi) how accurate the trained AI models became in making predictions on the training data.)
[0089] Because the process of building pedagogical programs is iterative, the AI engine in training mode can also provide incremental training. That is to say, if the pedagogical programming language code is altered with respect to a concept that comes after other concepts that have already been trained, those antecedent concepts do not need to be retrained.
Hammond et al. do not explicitly teach --[adjusting the index table] based on the evaluation result.
However, TAKEDA et al. teach (see e.g. [0010] The data analysis system may further include an evaluation integration unit that generates an integrated index which integrates evaluation results by the data evaluation unit.)
Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Hammond et al., and include the steps cited above, as taught by TAKEDA et al., in order to examine and judge the relevance of the data (see e.g. [0109]).
Re-claim 2, Hammond et al. teach -- The method according to claim 1, wherein the evaluating the training data comprises:
--obtaining a preset evaluation rule; and --evaluating the training data in the first training data subset based on the preset evaluation rule.
(see e.g. [0081] When starting a training operation, the instructor module 324 first generates an execution plan. This is the ordering it intends to use when teaching the concepts, and for each concept which lessons it intends to teach in what order.
[0105] the user submits data (of the same type as the trained AI model was trained with) to a trained AI model-server API and receives the trained AI model's evaluation of that data.)
Re-claim 3, Hammond et al. teach-- The method of claim 1, wherein the evaluation result comprises an effect of the training data on model training or a manner of processing the training data in a next training round.
(see e.g. [0083] The learner module 328 can also write the stored output of each node and why that node arrived at that output into the trained AI model, which gives explainability as to how and why the AI proposes a solution or arrives at an outcome.
[0084] The hyperlearner module 325 can reference archived, previously built and trained intelligence models to help guide the instructor module 324 to train the current model of nodes.
[0032] The learner module 328 or the predictor 329 can elicit a prediction from the trained AI model 106 and send the prediction to the instructor module 324. The instructor module 324, in turn, can send the prediction to the training data source 219 for updated training data based upon the prediction and, optionally, instruct the learner module 328 in additional training cycles.
[0119] The trained AI model can be instantiated by the AI engine based on the one or more concepts learned by the neural network in the one or more training cycles. )
Re-claim 4, Hammond et al. do not teach the limitations as claimed.
However, TAKEDA et al. teach --The method according to claim 3, wherein the effect is "invalid," "inefficient," "efficient," or "indeterminate,"
(see e.g. [0024] The data analysis system evaluates the relation between data elements included in the training data and the classification information and evaluates the possibility of falling under invalid materials from a large amount of search object data (for example, unknown data such as patent documents and papers) by using the results of the above-described evaluation.
[0036] The relation evaluation unit 120 evaluates the relation between data elements included in the training data and the classification information. More specifically, the relation evaluation unit 120 evaluates the data elements extracted from the training data acquired by the data acquisition unit 110 in accordance with specified standards. In other words, the relation evaluation unit 120 can learn patterns (widely including abstract concepts and meanings and without limitation to so-called “specified patterns” [for example, specified design patterns or regularity]) included in the training data by evaluating the degree of contribution to combinations included in the training data set, which has been acquired by the data acquisition unit 110, by the data elements constituting at least part of the training data. Incidentally, the “specified standards” will be explained later.
[0088] The data evaluation unit 150 calculates the score indicating the relation between each piece of the partial unknown data and the training data on the basis of the evaluation results stored in the evaluation memory unit 220 in the memory unit 200 (S230). The evaluation integration unit 160 generates the integrated score with respect to each piece of the unknown data by integrating the scores calculated by the data evaluation unit 150 with respect to the partial unknown data obtained by breaking down the unknown data (S240).
[0081] The score indicating the relation to the training data with respect to the plurality of pieces of unknown data is calculated by using the evaluation results. As a result, it is possible to analyze the unknown data mechanically according to certain standards and support finding of data related to data in which specific ideas, events, etc. are described from a large amount of unknown data.)
With respect to the following limitation, "wherein 'invalid' indicates that a contribution provided by the training data to training precision to be achieved by the model training is 0; wherein 'inefficient' indicates that the contribution reaches a first contribution degree; wherein 'efficient' indicates that the contribution reaches a second contribution degree that is greater than the first contribution degree; and wherein 'indeterminate' indicates that the contribution is indeterminate,"
TAKEDA et al. teach [0034] Alternatively, the data acquisition unit 110 can acquire the training data from a storage device which is connected in a manner capable of communications. The classification information may be, by way of example and without limitation to, “1” assigned to the correct data and “−1” assigned to the incorrect data.
[0011] The data evaluation unit may calculate a score indicative of strength of a relation between the partial unknown data and the classification information so that when a relation between a data element included in the partial unknown data and the classification information is strong, a value of the score will become larger than a case where the relation is weak; and the evaluation integration unit may generate an integrated score as the integrated index by summing a specified number of the score, which is calculated by the data evaluation unit, in descending order.
***TAKEDA et al. assign scores to training data. It is considered an obvious variation of TAKEDA et al. to label the contribution degrees as claimed.
Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Hammond et al., and include the steps cited above, as taught by TAKEDA et al., in order to examine and judge the relevance of the data (see e.g. [0109]).
Re-claim 5, Hammond et al. teach-- The method according to claim 3, wherein the manner comprises deleting the training data, decreasing a weight of the training data, increasing the weight, or retaining the training data.
(see e.g. [0085] Note, if the system trains using data, then the data is optionally filtered/augmented in the lessons before being passed to the learning system.
[0096] Thus, the architect module 326, by virtue of proposing, exploring, and optimizing learning models, can observe what works and what doesn't, and use that to learn what models it should try in the future when it sees similar signatures.)
Re-claim 6, Hammond et al. teach-- The method of claim 2, further comprising:
--testing the neural network model using test data to obtain a test result; and --updating the preset evaluation rule based on a preset target value and the test result.
(see e.g. [0085] When, the curriculum trains using a simulation or procedural generation, then the data for a lesson is not data to be passed to the learning system, but the data is to be passed to the simulator. The simulator can use this data to configure itself, and the simulator can subsequently produce a piece of data for the learning system to use for training. This separation permits a proper separation of concerns.
[0079] The instructor module 324 can run simulations on the AI objects with input data including statistics and feedback on results from the AI object being trained from the learner module 328. The learner module 328 and instructor module 324 can work with a simulator or other data source to iteratively train an AI object with different data inputs.
[0075] The architect module 326 can reference a database of algorithms to use as well as a database of network topologies to utilize. The architect module 326 can reference a table or database of best suggested topology arrangements including how many layers of levels in a topology graph for a given problem, if available. The architect module 326 also has logic to reference similar problems solved by comparing signatures. If the signatures are close enough, the architect module 326 can try the topology used to optimally solve a problem stored in an archive database with a similar signature. The architect module 326 can also instantiate multiple topology arrangements all to be tested and simulated in parallel to see which topology comes away with optimal results. The optimal results can be based on factors such as performance time, accuracy, computing resources needed to complete the training simulations, etc.)
Re-claim 7, Hammond et al. teach-- The method of claim 6, further comprising further updating the preset evaluation rule based on a positive feedback mechanism when the test result reaches or is better than the preset target value.
(see e.g. [0079] The instructor module 324 can run simulations on the AI objects with input data including statistics and feedback on results from the AI object being trained from the learner module 328.
[0080] A simulator can give data and get feedback from the instructor module 324 during the simulation that can create an iterative reactive loop from data inputs and data outputs from the AI objects).
[0087] A machine learning algorithm may have a target/outcome variable (or dependent variable) which is to be predicted from a given set of predictors (independent variables). Using this set of variables, the AI engine generates a function that maps inputs to desired outputs. The coefficients and weights plugged into the equations in the various learning algorithms are then updated after each epoch/pass of training session until a best set of coefficients and weights are determined for this particular concept. The training process continues until the model achieves a desired level of accuracy on the training data.)
Re-claim 10, Hammond et al. teach-- The method of claim 1, further comprising receiving configuration information from a user and through an interface, wherein the configuration information comprises dynamic training information that comprises information about the neural network model, information about the training data set, a running parameter for model training, or computing resource information for the model training.
(see e.g. [0022] The AI engine has a set of user interfaces 212 to import from either or both 1) scripted software code written in a pedagogical software programming language, such as Inkling™, and/or 2) from the user interface 212 with defined fields that map user supply criteria to searchable criteria of the AI objects indexed in the AI database 341.
[0029] One or more user interfaces 212, such a web interface, a graphical user interface, and/or command line interface, will handle assembling the scripted code written in the pedagogical software programming language, as well as other ancillary steps like registering the line segments with the AI engine, together with a single command. )
Claim 11 recites similar limitations as claim 1 and is therefore rejected under the same arts and rationale.
Claim 12 recites similar limitations as claim 2 and is therefore rejected under the same arts and rationale.
Claim 13 recites similar limitations as claim 3 and is therefore rejected under the same arts and rationale.
Claim 14 recites similar limitations as claim 4 and is therefore rejected under the same arts and rationale.
Claim 15 recites similar limitations as claim 5 and is therefore rejected under the same arts and rationale.
Claim 16 recites similar limitations as claim 6 and is therefore rejected under the same arts and rationale.
Claim 17 recites similar limitations as claim 7 and is therefore rejected under the same arts and rationale.
Claim 20 recites similar limitations as claim 10 and is therefore rejected under the same arts and rationale.
Claims 8, 18 are rejected under 35 U.S.C. 103 as being unpatentable over Hammond et al. (US 20170213155 A1), and further in view of OUYANG et al. (CN 106502626 A).
Re-claim 8, Hammond et al. do not teach the limitations as claimed.
However, OUYANG et al. teach - The method of claim 1, wherein the neural network model comprises computing layers; and wherein the method further comprises further obtaining the gradient information for at least one of the computing layers.
(see e.g. OUYANG et al. --neural network model may include at least one computing layer comprises calculating the layer of the predetermined algorithm and preset weight data, and also may include setting a preset weight of data calculating layer,
step 401, converting the received data to be preset weight processing data and each computing layer into fixed point data to obtain the preset weight fixed point data to be processed fixed point data and each computing layer.
Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Hammond et al., and include the steps cited above, as taught by OUYANG et al., in order to improve the efficiency of data processing (see e.g. abstract).
Claim 18 recites similar limitations as claim 8 and is therefore rejected under the same arts and rationale.
Claims 9, 19 are rejected under 35 U.S.C. 103 as being unpatentable over Hammond et al. (US 20170213155 A1), and further in view of Lewis et al. (US 20180314935 A1).
Re-claim 9, Hammond et al. do not teach the limitations as claimed.
However, Lewis et al. teach - The method of claim 6, further comprising further updating the preset evaluation rule based on a negative feedback mechanism when the test result does not reach the preset target value.
(See e.g. [0209] Once the neural network is structured, a learning model can be applied to the network to train the network to perform specific tasks. The learning model describes how to adjust the weights within the model to reduce the output error of the network. Backpropagation of errors is a common method used to train neural networks. An input vector is presented to the network for processing. The output of the network is compared to the desired output using a loss function and an error value is calculated for each of the neurons in the output layer. The error values are then propagated backwards until each neuron has an associated error value which roughly represents its contribution to the original output. The network can then learn from those errors using an algorithm, such as the stochastic gradient descent algorithm, to update the weights of the neural network.)
Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Hammond et al., and include the steps cited above, as taught by Lewis et al., in order to facilitate improved training with adaptive runtime and precision profiling (see e.g. [0001]).
Claim 19 recites similar limitations as claim 9 and is therefore rejected under the same arts and rationale.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to LUNA CHAMPAGNE whose telephone number is (571)272-7177. The examiner can normally be reached M-F 8:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Florian Zeender, can be reached at (571) 272-6790. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/LUNA CHAMPAGNE/Primary Examiner, Art Unit 3627
January 26, 2026