DETAILED ACTION
This action is in response to the claims filed 04/04/2023 for Application number 18/130,776. Claims 1-23 are currently pending.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 10/09/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-23 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Regarding claim 1,
Step 1 Analysis: Claim 1 is directed to a machine, which falls within one of the four statutory categories.
Step 2A Prong 1 Analysis: Claim 1 recites, in part, the limitation of:
generate, based on the reinforcement learning neural network and the plurality of input data, an action output for generating a signal for communicating the task request, which can be considered to be an evaluation performed in the human mind.
This limitation, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind or with the aid of pen and paper, which falls within the “Mental Processes” grouping of abstract ideas.
Additionally, the limitations of:
compute a reward based on the action output and the plurality of input data can be considered to be a mathematical calculation; and
update the reinforcement learning neural network based on the reward can be considered to be a mathematical calculation.
These limitations, as drafted, are processes that, under their broadest reasonable interpretation, cover the recitation of mathematical calculations, which falls within the “Mathematical Concepts” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
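For purposes of illustration only, the following sketch (with hypothetical values and names, drawn from neither the claims nor the cited art) shows the kind of generic arithmetic these limitations cover; such operations can be carried out with pen and paper:

    # Illustrative sketch only; hypothetical values, not the applicant's code.
    # "compute a reward based on the action output and the plurality of
    # input data" reduces to ordinary weighted arithmetic:
    objective_weights = [0.5, 0.3, 0.2]  # relative importance of objectives
    action_outcomes = [1.0, -0.5, 2.0]   # per-objective result of the action
    reward = sum(w * o for w, o in zip(objective_weights, action_outcomes))
    # reward = 0.5 - 0.15 + 0.4 = 0.75

    # "update the reinforcement learning neural network based on the reward"
    # likewise reduces to a mathematical update of a network parameter:
    learning_rate = 0.01
    gradient = 0.4                       # stand-in for a computed gradient
    parameter = 1.0
    parameter -= learning_rate * reward * gradient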
Step 2A Prong 2 Analysis: This judicial exception is not integrated into a practical application. In particular, the claim recites the following additional elements: “A computer-implemented system for processing multiple input objectives by a reinforcement learning agent”, “at least one processor”, “memory in communication with the at least one processor”, “software code stored in the memory, which when executed at the at least one processor causes the system to…”, and “instantiate a reinforcement learning agent that maintains a reinforcement learning neural network”. These elements are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer component. Please see MPEP 2106.05(f). Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
The claim further recites: generates, according to outputs of the reinforcement learning neural network, signals for communicating task requests;
receive a plurality of input data representing a plurality of user objectives associated with a task request. These limitations are insignificant extra-solution activities. Please see MPEP 2106.05(g). Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim as a whole is directed to an abstract idea.
Step 2B Analysis: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of utilizing a computer-implemented system, processor, memory, reinforcement learning neural network, and reinforcement learning agent to perform the steps of the claimed process amount to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. Furthermore, the limitations of
generates, according to outputs of the reinforcement learning neural network, signals for communicating task requests;
receive a plurality of input data representing a plurality of user objectives associated with a task request are well-understood, routine, and conventional, as evidenced by MPEP §2106.05(d)(II)(i), “receiving or transmitting data over a network”. These limitations therefore remain insignificant extra-solution activity even upon reconsideration, and do not amount to significantly more. Even when considered in combination, these additional elements amount to mere instructions to apply the exception using generic computer components and insignificant extra-solution activity, which cannot provide an inventive concept. The claim is not patent eligible.
Regarding claim 2, the rejection of claim 1 is further incorporated, and further, the claim recites: wherein the plurality of input data comprises a weighted vector with weights defining a relative importance of each of the plurality of user objectives. This claim recites additional mathematical calculations in addition to the judicial exception identified in the rejection of claim 1, and thus recites a judicial exception.
The claim does not include any additional elements that amount to an integration of the judicial exceptions into a practical application, nor to significantly more than the judicial exceptions. The claim is not patent eligible.
Regarding claim 3, the rejection of claim 1 is further incorporated, and further, the claim recites: wherein the reward is weighted based on the weighted vector. This claim recites additional mathematical calculations in addition to the judicial exception identified in the rejection of claim 1, and thus recites a judicial exception.
The claim does not include any additional elements that amount to an integration of the judicial exceptions into a practical application, nor to significantly more than the judicial exceptions. The claim is not patent eligible.
Regarding claim 4, the rejection of claim 1 is further incorporated, and further, the claim recites: wherein the reward comprises a vector having a plurality of individual reward values, each of the plurality of individual reward values being a weighted value computed based on the relative importance of each respective objective from the plurality of user objectives. This claim recites additional mathematical calculations in addition to the judicial exception identified in the rejection of claim 1, and thus recites a judicial exception.
The claim does not include any additional elements that amount to an integration of the judicial exceptions into a practical application, nor to significantly more than the judicial exceptions. The claim is not patent eligible.
Regarding claim 5, the rejection of claim 1 is further incorporated, and further, the claim recites: wherein the plurality of user objectives comprises at least two of: an asset, an amount for execution, a priority for execution, or a time limit for execution. This limitation merely provides further specifics of the judicial exception identified in the rejection of claim 1 above.
The claim does not include any additional elements that amount to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception. The claim is not patent eligible.
Regarding claim 6, the rejection of claim 1 is further incorporated, and further, the claim recites: wherein the reinforcement learning neural network comprises at least one of: a Feed Forward Neural Network (FFNN), a multi-layer perceptron (MLP), a recurrent neural network (RNN), or an asynchronous actor critic (A3C) neural network. This limitation amounts to merely generally linking the judicial exception to a field of use. Please see MPEP 2106.05(h).
The claim does not include any additional elements that amount to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception. The claim is not patent eligible.
Regarding claim 7, the rejection of claim 1 is further incorporated, and further, the claim recites: compute a loss based on the reward using a loss function; and update the reinforcement learning neural network based on the loss. This claim recites additional mathematical calculations in addition to the judicial exception identified in the rejection of claim 1, and thus recites a judicial exception.
The claim does not include any additional elements that amount to an integration of the judicial exceptions into a practical application, nor to significantly more than the judicial exceptions. The claim is not patent eligible.
Regarding claim 8, the rejection of claim 1 is further incorporated, and further, the claim recites: receive a set of historical task data including one or more of: at least one historical state data for a historical task associated with the task request, a plurality of historical user objectives, and at least one historical action output for the at least one historical state data. This limitation amounts to mere data gathering and thus is an insignificant extra-solution activity. Please see MPEP 2106.05(g).
generate an augmented data based on the set of historical task data and the plurality of user objectives associated with the task request. This limitation recites additional mental steps in addition to the judicial exception identified in the rejection of claim 1, and thus recites a judicial exception; and
compute an updated reward based on the augmented data. This limitation recites additional mathematical calculations in addition to the judicial exception identified in the rejection of claim 1, and thus recites a judicial exception.
The claim does not include any additional elements that amount to significantly more than the judicial exception. The limitation of “receive a set of historical task data including one or more of…” is just a nominal or tangential addition to the claim, and is also well-understood, routine, and conventional, as evidenced by MPEP §2106.05(d)(II)(i), “receiving or transmitting data over a network”. This limitation therefore remains insignificant extra-solution activity even upon reconsideration, and does not amount to significantly more. Even when considered in combination, this additional element represents an insignificant extra-solution activity which cannot provide an inventive concept. The claim is not patent eligible.
Regarding claim 9, the rejection of claim 1 is further incorporated, and further, the claim recites: compute an updated loss based on the updated reward using a loss function; and update the reinforcement learning neural network based on the updated loss. This claim recites additional mathematical calculations in addition to the judicial exception identified in the rejection of claim 1, and thus recites a judicial exception.
The claim does not include any additional elements that amount to an integration of the judicial exceptions into a practical application, nor to significantly more than the judicial exceptions. The claim is not patent eligible.
Regarding claim 10, the rejection of claim 1 is further incorporated, and further, the claim recites: generate a historical weighted vector based on the plurality of historical user objectives, the historical weighted vector with weights defining a relative importance of each of the plurality of historical user objectives. This claim recites additional mathematical calculations in addition to the judicial exception identified in the rejection of claim 1, and thus recites a judicial exception.
The claim does not include any additional elements that amount to an integration of the judicial exceptions into a practical application, nor to significantly more than the judicial exceptions. The claim is not patent eligible.
Regarding claim 11, the rejection of claim 1 is further incorporated, and further, the claim recites: wherein the updated reward is computed based on the historical weighted vector. This claim recites additional mathematical calculations in addition to the judicial exception identified in the rejection of claim 1, and thus recites a judicial exception.
The claim does not include any additional elements that amount to an integration of the judicial exceptions into a practical application, nor to significantly more than the judicial exceptions. The claim is not patent eligible.
Regarding claim 12, it recites features similar to claim 1 and is rejected for at least the same reasons therein.
Regarding claims 13-22, they recite features similar to claims 2-11 and are rejected for at least the same reasons therein.
Claim 23 recites features similar to claim 1 and is rejected for at least the same reasons therein. Claim 23 additionally requires analysis for “non-transitory computer-readable storage medium storing instructions which when executed cause at least one computing device to…” This is an additional element; however, it is recited at a high level of generality such that it amounts to mere instructions to apply the judicial exception using a generic computer component. Please see MPEP 2106.05(f).
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 5, 6, 12, 16, 17, and 23 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Burhani et al. ("US 20190370649 A1", hereinafter "Burhani").
Regarding claim 1, Burhani teaches A computer-implemented system for processing multiple input objectives by a reinforcement learning agent (Abstract), the system comprising:
at least one processor; memory in communication with the at least one processor; software code stored in the memory, which when executed at the at least one processor causes the system to (“The system includes a communication interface, at least one processor, memory in communication with the at least one processor, and software code stored in the memory. The software code, when executed at the at least one processor causes the system to:” [¶0004]):
instantiate a reinforcement learning agent that maintains a reinforcement learning neural network and generates, according to outputs of the reinforcement learning neural network, signals for communicating task requests (“instantiate an automated agent that maintains a reinforcement learning neural network and generates, according to outputs of the reinforcement learning neural network, signals for communicating resource task requests;” [¶0004]);
receive a plurality of input data representing a plurality of user objectives associated with a task request (“The platform 100 can connect to an interface application 130 installed on user device to receive input data… The platform 100 can process trade orders using the reinforcement learning network 110 in response to commands from trade entities 150a, 150b, in some embodiments.” [¶0046]);
generate, based on the reinforcement learning neural network and the plurality of input data, an action output for generating a signal for communicating the task request (“Reward system 126 is configured to receive control the reinforcement learning network 110 to process input data in order to generate output signals… Output signals may include signals for communicating resource task requests, e.g., a request to trade in a certain security.” [¶0052; the output signal for communicating a resource task request corresponds to the claimed “action output”]);
compute a reward based on the action output and the plurality of input data (“For convenience, a good signal may be referred to as a “positive reward” or simply as a reward, and a bad signal may be referred as a “negative reward” or as a “punishment.”” [¶0052; the reward is computed from the output signals, which are in turn generated from the input data]); and
update the reinforcement learning neural network based on the reward (“provide the reward to the reinforcement learning neural network of the automated agent to train the automated agent.” [¶0004; using the reward to train the agent corresponds to updating the reinforcement learning neural network]).
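For context, the mapped limitations together describe a single agent loop. The sketch below is purely illustrative (hypothetical class and variable names; it reflects neither Burhani's disclosed implementation nor the applicant's, and the claimed neural network is reduced to one linear layer):

    import random

    # Minimal sketch of the claimed sequence of operations (illustrative only).
    class RLAgent:
        def __init__(self, n_inputs):
            # "maintains a reinforcement learning neural network"
            self.weights = [random.uniform(-0.1, 0.1) for _ in range(n_inputs)]

        def act(self, inputs):
            # "generate ... an action output" from the network and input data
            return sum(w * x for w, x in zip(self.weights, inputs))

        def update(self, inputs, reward, lr=0.01):
            # "update the reinforcement learning neural network based on the reward"
            for i, x in enumerate(inputs):
                self.weights[i] += lr * reward * x

    agent = RLAgent(n_inputs=3)            # "instantiate a reinforcement learning agent"
    objectives = [0.8, 0.5, 0.2]           # "plurality of user objectives"
    action_output = agent.act(objectives)  # basis of the task-request signal
    reward = 0.5 * action_output           # stand-in reward computation
    agent.update(objectives, reward)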
Regarding claim 5, Burhani teaches The system of claim 1, wherein the plurality of user objectives comprises at least two of:
an asset (“In such embodiments, the automated agent may generate requests for tasks to be performed in relation to securities (e.g., stocks, bonds, options or other negotiable financial instruments).” [¶0040]),
an amount for execution (“Input data may include trade orders, various feedback data (e.g., rewards), or feature selection data, or data reflective of completed tasks (e.g., executed trades), data reflective of trading schedules, etc.” [¶0052]),
a priority for execution (“automated agents may be trained to request tasks earlier which may result in higher priority of task completion.” [¶0155]), or
a time limit for execution (“As will be appreciated, having a time interval substantially less than one day provides opportunity for automated agents 180 to learn and change how task requests are generated over the course of a day. In some embodiments, the duration of the time interval may be adjusted in dependence on the volume of trade activity for a given trade venue. In some embodiments, duration of the time interval may be adjusted in dependence on the volume of trade activity for a given resource.” [¶0072]).
Regarding claim 6, Burhani teaches The system of claim 1, wherein the reinforcement learning neural network comprises at least one of: a Feed Forward Neural Network (FFNN), a multi-layer perceptron (MLP), a recurrent neural network (RNN), or an asynchronous actor critic (A3C) neural network (“FIG. 2 is a schematic diagram of an example neural network 200 according to some embodiments. The example neural network 200 can include an input layer, a hidden layer, and an output layer. The neural network 200 processes input data using its layers based on reinforcement learning, for example.” [¶0051; note: the claim recites “at least one of”, thus under BRI the examiner is only required to map to one of the recited elements. Therefore, the provided mapping corresponds to a multi-layer perceptron]).
Regarding claim 12, it is substantially similar to claim 1 and is rejected in the same manner, with the same art and reasoning applying.
Regarding claims 16 and 17, they are substantially similar to claims 5 and 6, respectively, and are rejected in the same manner, with the same art and reasoning applying.
Claim 23 recites features similar to claim 1 and is rejected for at least the same reasons therein. Claim 23 additionally requires A non-transitory computer-readable storage medium storing instructions which when executed cause at least one computing device to… (Burhani, ¶0006, “In accordance with yet another aspect, there is provided a non-transitory computer-readable storage medium storing instructions.”)
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 2-4, 7, 13-15, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Burhani in view of Abels ("Dynamic Weights in Multi-Objective Deep Reinforcement Learning", hereinafter "Abels").
Regarding claim 2, Burhani teaches The system of claim 1, but fails to explicitly teach wherein the plurality of input data comprises a weighted vector with weights defining a relative importance of each of the plurality of user objectives.
Abels teaches wherein the plurality of input data comprises a weighted vector with weights defining a relative importance of each of the plurality of user objectives (“In this paper we focus on linear f; each objective, i, is given a weight w_i.” [pg. 2, §2.2, ¶1]).
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Burhani’s reinforcement learning system to implement the multi-objective weight estimation method as taught by Abels. One would have been motivated to make this modification because a dynamic weight estimation method would be useful to increase the learning speed of the reinforcement learning neural network. [pg. 1, right col., ¶3, Abels]
Regarding claim 3, Burhani teaches The system of claim 2, but fails to explicitly teach wherein the reward is weighted based on the weighted vector.
Abels teaches wherein the reward is weighted based on the weighted vector (“Multi-Objective MDPs (MOMDP) (White & Kim, 1980) are MDPs with a vector-valued reward function r_t = R(s_t, a_t). Each component of r_t corresponds to one objective. A scalarization function f maps the multi-objective value V^π of a policy π to a scalar value, i.e., the user utility. In this paper we focus on linear f; each objective, i, is given a weight w_i, such that the scalarization function becomes f(V^π, w) = w · V^π.” [pg. 2, §2.2, ¶1]).
Same motivation to combine the teachings of Burhani/Abels as claim 2.
Regarding claim 4, Burhani teaches The system of claim 3, but fails to explicitly teach wherein the reward comprises a vector having a plurality of individual reward values, each of the plurality of individual reward values being a weighted value computed based on the relative importance of each respective objective from the plurality of user objectives.
Abels teaches wherein the reward comprises a vector having a plurality of individual reward values, each of the plurality of individual reward values being a weighted value computed based on the relative importance of each respective objective from the plurality of user objectives (“We evaluate policies based on their regret, i.e., the difference between optimal value and actual return, ∆(g, w) = V*_w · w − g · w = V*_w · w − Σ_{t=0}^{T} γ^t r_t · w, where g is the discounted cumulative reward, V*_w denotes the optimal value for w, and {r_0, ..., r_T} is the set of vector-valued rewards collected during an episode of length T.” [pg. 6, §5.1, ¶2]).
Same motivation to combine the teachings of Burhani/Abels as claim 2.
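For context, the linear scalarization quoted from Abels can be illustrated as follows (hypothetical values; the notation follows the quoted passages, not any party's implementation):

    # Illustrative sketch of Abels-style linear scalarization.
    w = [0.6, 0.3, 0.1]     # weight vector: relative importance of objectives
    r_t = [1.0, 0.2, -0.5]  # vector-valued reward, one component per objective

    # Claim 4: individual reward values weighted by objective importance
    weighted_components = [wi * ri for wi, ri in zip(w, r_t)]

    # Claim 3: scalarized (weighted) reward, f(r, w) = w . r
    scalar_reward = sum(weighted_components)  # 0.6 + 0.06 - 0.05 = 0.61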
Regarding claim 7, Burhani teaches The system of claim 1, but fails to explicitly teach wherein the software code, when executed at the at least one processor, further causes the system to:
compute a loss based on the reward using a loss function; and
update the reinforcement learning neural network based on the loss.
Abels teaches wherein the software code, when executed at the at least one processor, further causes the system to:
compute a loss based on the reward using a loss function; and update the reinforcement learning neural network based on the loss.
(“For each transition (s_j, a_j, r_j, s_{j+1}) of a mini-batch, we sample w_j from the set of encountered weights and minimize the loss” [pg. 598, top left col, ¶1])
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Burhani’s teachings in order to use a loss function to update the RL neural network as taught by Abels. One would have been motivated to make this modification in order to allow the network to generalize across weight vectors. [7. Conclusion and Future Work, ¶1, Abels]
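For context, computing a loss from the reward and updating the network based on that loss can be illustrated as follows (a simple squared error stands in for Abels’ mini-batch loss; values and names are hypothetical):

    # Illustrative sketch only: compute a loss based on the reward, then
    # update via gradient descent on a single scalar parameter.
    prediction = 0.4   # network's predicted value for the taken action
    reward = 0.61      # observed (scalarized) reward

    loss = (prediction - reward) ** 2  # "compute a loss based on the reward"
    grad = 2 * (prediction - reward)   # d(loss)/d(prediction)

    learning_rate = 0.1
    prediction -= learning_rate * grad # "update ... based on the loss"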
Regarding claims 13-15 and 18, they are substantially similar to claims 2-4 and 7, respectively, and are rejected in the same manner, with the same art and reasoning applying.
Claims 8, 9, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Burhani in view of Vasudevan et al. ("US 20190354895 A1", hereinafter "Vasudevan").
Regarding claim 8, Burhani teaches The system of claim 1, wherein the software code, when executed at the at least one processor, further causes the system to:
receive a set of historical task data including one or more of: at least one historical state data for a historical task associated with the task request, a plurality of historical user objectives, and at least one historical action output for the at least one historical state data (“In some embodiments, automated agents may train on data reflective of trading volume throughout a day, and the generation of resource requests by such automated agents need not be tied to historical volumes. For example, conventionally, some agent upon reaching historical bounds (e.g., indicative of the agent falling behind schedule) may increase aggression to stay within the bounds, or conversely may also increase passivity to stay within bounds, which may result in less optimal trades.” [¶0188]);
However, Burhani fails to explicitly teach:
generate an augmented data based on the set of historical task data and the plurality of user objectives associated with the task request; and
compute an updated reward based on the augmented data.
Vasudevan teaches generate an augmented data based on the set of historical task data and the plurality of user objectives associated with the task request (“At each of multiple time steps, a current data augmentation policy is generated based on quality measures of data augmentation policies generated at previous time steps… determining an augmented batch of training data by transforming the training inputs in the batch of training data in accordance with the data augmentation policy” [¶0007]); and
compute an updated reward based on the augmented data. (“The policy network can be trained using reinforcement learning techniques, where the rewards (i.e., that the policy network is trained to maximize) are provided by the quality measures corresponding to the data augmentation policies generated by the policy neural network” [¶0066])
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Burhani’s teachings with the data augmentation technique as taught by Vasudevan. One would have been motivated to make this modification in order to improve the quantity and diversity of training inputs. [¶0039, Vasudevan]
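For context, the combined teaching can be illustrated as follows (hypothetical names and values; this is not Vasudevan’s policy-network implementation, which learns the augmentation policy itself):

    import random

    # Illustrative sketch: transform historical task records into an
    # augmented batch, then recompute a reward from the augmented data.
    historical_tasks = [
        {"state": [0.8, 0.1], "objectives": [0.6, 0.4], "action": 0.3},
        {"state": [0.2, 0.7], "objectives": [0.5, 0.5], "action": 0.9},
    ]
    current_objectives = [0.7, 0.3]  # objectives associated with the task request

    def augment(record, objectives, noise=0.05):
        # perturb historical state data in light of the current objectives
        state = [s + random.uniform(-noise, noise) for s in record["state"]]
        return {"state": state, "objectives": objectives, "action": record["action"]}

    augmented_batch = [augment(r, current_objectives) for r in historical_tasks]

    # "compute an updated reward based on the augmented data"
    updated_reward = sum(
        sum(w * s for w, s in zip(r["objectives"], r["state"]))
        for r in augmented_batch
    ) / len(augmented_batch)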
Regarding claim 9, Burhani/Vasudevan teaches The system of claim 8, where Vasudevan further teaches wherein the software code, when executed at the at least one processor, further causes the system to:
compute an updated loss based on the updated reward using a loss function; and update the reinforcement learning neural network based on the updated loss. (“In some implementations, the machine learning model is a neural network, and adjusting the current values of the machine learning model parameters based on the augmented batch of training data includes: determining a gradient of a loss function using the augmented batch of training data; and adjusting the current values of the machine learning model parameters using the gradient.” [¶0014])
Same motivation to combine the teachings of Burhani/Vasudevan as claim 8.
Regarding claims 19 and 20, they are substantially similar to claims 8 and 9, respectively, and are rejected in the same manner, with the same art and reasoning applying.
Claims 10, 11, 21, and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Burhani in view of Vasudevan and further in view of Abels.
Regarding claim 10, Burhani/Vasudevan teaches The system of claim 8, but fails to explicitly teach wherein the software code, when executed at the at least one processor, further causes the system to:
generate a historical weighted vector based on the plurality of historical user objectives, the historical weighted vector with weights defining a relative importance of each of the plurality of historical user objectives.
Abels teaches generate a historical weighted vector based on the plurality of historical user objectives, the historical weighted vector with weights defining a relative importance of each of the plurality of historical user objectives. (“Each policy πw is trained for the active weight vector w following scalarized deep Q-learning. When the active weights change, the stateless value of the policy πw, V^πw, is compared to all previously saved policies. If V^πw improves upon the maximum scalarized value of the policies already in Π for at least one past weight vector or for the current weight vector w, it is saved, otherwise it is discarded. To limit memory usage and ensure fast retrieval by keeping Π small, all old policies made redundant by πw are removed from Π. A policy is redundant if it is not the best policy for any encountered weight vector.” [pg. 5, §4.2, ¶3])
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Burhani’s/Vasudevan’s teachings in order to implement the multi-objective dynamic weight estimation method as taught by Abels. One would have been motivated to make this modification as noted by Abels, “our proposed loss, on the active weight vector and a random past weight vector, enables the network to generalize across weight vectors.” [7. Conclusion and Future Work, ¶1, Abels]
Regarding claim 11, Burhani/Vasudevan/Abels teaches The system of claim 10, where Abels further teaches wherein the updated reward is computed based on the historical weighted vector. (“To efficiently train this network, we propose an update rule specific to the dynamic weights setting. We further propose Diverse Experience Replay (DER), to improve sample-efficiency and reduce replay buffer bias.” [pg. 1, right col, ¶4])
Same motivation to combine the teachings of Burhani/Vasudevan/Abels as claim 10.
Regarding claims 21 and 22, they are substantially similar to claims 10 and 11, respectively, and are rejected in the same manner, with the same art and reasoning applying.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL H HOANG whose telephone number is (571)272-8491. The examiner can normally be reached Mon-Fri 8:30AM-4:30PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kakali Chaki can be reached at (571) 272-3719. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MICHAEL H HOANG/Examiner, Art Unit 2122