DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Application Serial Number
While the initial filing of an application is done without an official serial number, once the application has been assigned a serial number, that number should appear on submitted documents in the header or footer of each page.
Claim Interpretation
For clarity of record, the limitation in independent claim 1 of optimizing a performance of an AI model by selecting one or more internal settings for the AI model using reinforcement learning based on a task-specific reward function that measures the performance of the AI model on a specified task, under broadest reasonable interpretation, can be interpreted as either 1) selecting settings for the model, where the model is using reinforcement learning, or 2) selecting settings, using reinforcement learning, for the model. Claims 9 and 25 contain similar language and allow similar interpretations.
Claim Rejections - 35 USC § 112(b)
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-2, 4, 6, 8-10, 12, 14, 16, 25-26, 28, 30 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Specifically, exemplary claim 1 recites RAG without indicating what the abbreviation stands for. For abbreviations used in claims, the first appearance of the term should be the full text followed by the abbreviation in parentheses. For example, “United States Patent and Trademark Office (USPTO).” Subsequent uses can then be just the abbreviation.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-2, 4, 6, 8-10, 12, 14, 16, 25-26, 28, 30 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1: Claim 1 is a system claim. Claim 9 is a method claim. Claim 25 is a computer-readable medium (CRM) claim. Therefore, claims 1, 9, and 25 are each directed to a process, machine, manufacture, or composition of matter.
With respect to Claim 1:
Step 2A Prong 1:
optimizing a performance of an AI model by selecting one or more internal settings for the AI model using reinforcement learning based on a task-specific reward function that measures the performance of the AI model on a specified task, wherein the reinforcement learning comprises sampling a group of outputs of the AI model based on an input query, computing one or more rewards based on the task-specific reward function, and updating the one or more internal settings based on the computed rewards (mental process – user can manually optimize a performance of an AI model by selecting one or more internal settings for the AI model using reinforcement learning based on a task-specific reward function that measures the performance of the AI model on a specified task, wherein the reinforcement learning comprises sampling a group of outputs of the AI model based on an input query, computing one or more rewards based on the task-specific reward function, and updating the one or more internal settings based on the computed rewards)
validating the performance of the AI model against one or more authoritative datastores, wherein the one or more authoritative datastores comprises a relational datastore, a NoSQL datastore, a graph datastore, a knowledge graph, a vector datastore, a document datastore, or a hybrid vectorized knowledge graph, or against one or more rule sets (mental process – user can manually validate the performance of the AI model against one or more authoritative datastores, wherein the one or more authoritative datastores comprises a relational datastore, a NoSQL datastore, a graph datastore, a knowledge graph, a vector datastore, a document datastore, or a hybrid vectorized knowledge graph, or against one or more rule sets)
optimizing a stability of the AI model using input perturbation (mental process – user can manually optimize a stability of the AI model using input perturbation)
optimizing a reliability of the AI model for the specified task using one or more techniques measured against a fitness function, wherein the techniques include model type search, attention mechanism search, model blending with weighted consensus, expert synthesis, RAG, knowledge graph verification, or composite vectorized knowledge graphs (mental process – user can manually optimize a reliability of the AI model for the specified task using one or more techniques measured against a fitness function, wherein the techniques include model type search, attention mechanism search, model blending with weighted consensus, expert synthesis, RAG, knowledge graph verification, or composite vectorized knowledge graphs)
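As a purely illustrative aside (not part of the record, and not Applicant's disclosed implementation), the sampling/reward/update loop recited in the first limitation above can be sketched in a few lines; the "temperature" setting, the toy model, and the update rule are all hypothetical stand-ins:

```python
import random

def reward_loop(settings, query, group_size=4, lr=0.5):
    """One iteration of the claimed loop: sample a group of outputs for an
    input query, score each with a task-specific reward function, and update
    an internal setting based on the computed rewards."""
    # Toy "model": each sampled output is the current internal setting
    # plus exploration noise (a stand-in for sampling model outputs).
    outputs = [settings["temperature"] + random.uniform(-1.0, 1.0)
               for _ in range(group_size)]
    # Task-specific reward: negative distance between output and the
    # target value carried by the input query.
    rewards = [-abs(query - o) for o in outputs]
    # Update the internal setting toward the best-rewarded sample.
    best = outputs[rewards.index(max(rewards))]
    settings["temperature"] += lr * (best - settings["temperature"])
    return settings

random.seed(0)
settings = {"temperature": 0.0}
for _ in range(50):
    settings = reward_loop(settings, query=3.0)
# After repeated sample/reward/update steps the setting approaches the target.
```

This sketch only illustrates the generic loop structure; it makes no representation about how the claimed system actually implements reinforcement learning.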
Step 2A Prong 2: This judicial exception is not integrated into a practical application. Additional elements:
one or more hardware processors (mere instructions to apply the exception using a generic computer component)
optimizing a robustness of the AI model using adversarial training (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.05(f))
wherein optimization is performed through a distributed computational graph that automatically parallelizes processing across heterogeneous computing resources (mere instructions to apply the exception using a generic computer component)
Step 2B: The claim does not include additional elements considered individually and in combination that are sufficient to amount to significantly more than the judicial exception. Additional elements:
one or more hardware processors (mere instructions to apply the exception using a generic computer component)
optimizing a robustness of the AI model using adversarial training (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.05(f))
wherein optimization is performed through a distributed computational graph that automatically parallelizes processing across heterogeneous computing resources (mere instructions to apply the exception using a generic computer component)
Conclusion: The claim is not patent eligible.
Claims 9 and 25 are rejected on the same grounds as claim 1. Claims 9 and 25 each include additional generic computer components which do not integrate the abstract idea into a practical application or provide significantly more than the abstract idea.
Regarding Claims 4, 8, 12, 16, 28: These limitations, as drafted, are processes that, under their broadest reasonable interpretation, cover performance of the limitations in the mind. That is, nothing in the claim limitations precludes the steps from practically being performed in the mind.
For claims 4, 12, 28: the limitation encompasses the user manually comparing responses of the AI model to responses provided by the one or more authorities.
For claims 8, 16: the limitation encompasses the user manually selecting from or blending outputs from multiple models or authoritative knowledge bases, each trained on, or obtained from, defined resource collections or with different retrieval strategies or hyperparameters.
These judicial exceptions are not integrated into a practical application. In particular, the claims do not recite any additional elements. Accordingly, this does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, no additional elements are cited. Accordingly, the claim is not patent eligible.
Regarding Claims 2, 6, 10, 14, 26, 30: The limitation, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. That is, other than the additional elements, nothing in the claim limitation precludes the step from practically being performed in the mind.
For claims 2, 10, 26: the limitation includes the additional element wherein optimizing the performance of the AI model includes one or more reinforcement learning algorithms comprising Proximal Policy Optimization or Asynchronous Advantage Actor-Critic algorithms.
For claims 6, 14, 30: the limitation includes the additional element wherein the adversarial training incorporates malicious examples into training data to make the AI model more resilient against manipulated predictions.
These judicial exceptions are not integrated into a practical application. In particular, the additional elements merely add the words “apply it” (or an equivalent) to the judicial exception, are mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)). Accordingly, this does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements merely add the words “apply it” (or an equivalent) to the judicial exception, are mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)). Accordingly, the claim is not patent eligible.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-2, 4, 6, 9-10, 12, 14, 25-26, 28, 30 is/are rejected under 35 U.S.C. 103 as being unpatentable over Mermoud et al. (hereinafter Mermoud), U.S. Patent Application Publication 2025/0158895, in view of He et al. (hereinafter He), U.S. Patent Application Publication 2026/0024014, further in view of Chunduru et al. (hereinafter Chunduru), U.S. Patent Application Publication 2025/0209737, further in view of Wang et al. (hereinafter Wang), “A match made in consistency heaven: when large language models meet evolutionary algorithms,” further in view of Vasiljevic et al. (hereinafter Vasiljevic), U.S. Patent 11,960,885.
Regarding Claim 1, Mermoud discloses a computing system for optimizing and securing generative AI models employing an advanced model management platform, the computing system comprising:
one or more hardware processors [“the processor 220” ¶66; Fig. 2] configured for:
optimizing a performance of an AI model by selecting one or more internal settings for the AI model using reinforcement learning based on a task-specific reward function that measures the performance of the AI model on a specified task [“an architecture that addresses the above challenges through the use of reinforcement learning, whereby an LLM-based agent is trained to take actions in a rich environment whereby a vast number of actions can be taken to maximize a notion of cumulative reward” ¶65, 83; Fig. 7] wherein the reinforcement learning comprises sampling a group of outputs of the AI model based on an input query, computing one or more rewards based on the task-specific reward function, and updating the one or more internal settings based on the computed rewards [“sampling different actions from the policy or from the large language models used to trigger function calls and to produce the final answer, allowing for more exploration. The value of R can either be binary (success or failure) or a score that is proportional to how accurate or useful the answer is. Then, agent training framework 504 may update the policy by using an appropriate algorithm” ¶138];
validating the performance of the AI model [“Cross-validation of their output with similar or correlated actions” ¶126] against one or more authoritative datastores [“may perform a review process with respect to action library” ¶120; “Cross-validation of their output with similar or correlated actions” ¶126], wherein the one or more authoritative datastores comprises a relational datastore, a NoSQL datastore, a graph datastore, a knowledge graph, a vector datastore, a document datastore, or a hybrid vectorized knowledge graph, or against one or more rule sets;
optimizing a robustness of the AI model using adversarial training [“Example machine learning techniques… generative adversarial networks (GANs)” ¶40].
However, Mermoud fails to explicitly disclose validating the performance of the AI model against one or more authoritative datastores, wherein the one or more authoritative datastores comprises a relational datastore, a NoSQL datastore, a graph datastore, a knowledge graph, a vector datastore, a document datastore, or a hybrid vectorized knowledge graph, or against one or more rule sets.
He discloses validating the performance of the AI model against one or more authoritative datastores [“sub-divides the training data into a first portion of data for training the machine-learning model(s) 124, and a second portion of data for validating the example machine-learning (ML) model(s)” ¶26; “the training data may originate from a datastore” ¶26], wherein the one or more authoritative datastores comprises a relational datastore, a NoSQL datastore, a graph datastore, a knowledge graph, a vector datastore, a document datastore, or a hybrid vectorized knowledge graph, or against one or more rule sets [“the data stored in the datastore 120 may be in any data format such as, for example, binary data, comma delimited data, tab delimited data, structured query language (SQL) structures, an executable, etc.” ¶47; Examiner Note: an SQL datastore is a type of relational datastore].
It would have been obvious to one having ordinary skill in the art, having the teachings of Mermoud and He before him before the effective filing date of the claimed invention, to modify the system of Mermoud to incorporate the validation of He.
Given the advantage of ensuring model accuracy, one having ordinary skill in the art would have been motivated to make this obvious modification.
However, Mermoud fails to explicitly disclose optimizing a stability of the AI model using input perturbation.
Chunduru discloses optimizing a stability of the AI model using input perturbation [“introduce perturbations to the input data” ¶68].
It would have been obvious to one having ordinary skill in the art, having the teachings of Mermoud, He and Chunduru before him before the effective filing date of the claimed invention, to modify the combination to incorporate input perturbations of Chunduru.
Given the advantage of reducing the impact of noise or irrelevant features, one having ordinary skill in the art would have been motivated to make this obvious modification.
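As a purely illustrative aside (not Chunduru's actual method, and not part of the record), measuring a model's stability under input perturbation can be sketched as follows; the noise scale, the linear "model," and all names are hypothetical:

```python
import random

def perturb(x, scale=0.05):
    """Return a copy of the input with small random noise added to each
    feature (toy stand-in for the perturbations introduced at training)."""
    return [xi + random.uniform(-scale, scale) for xi in x]

def stability_gap(predict, x, trials=100, scale=0.05):
    """Largest change in the model's prediction observed across random
    perturbations of the same input; smaller means more stable."""
    base = predict(x)
    return max(abs(predict(perturb(x, scale)) - base) for _ in range(trials))

# Toy linear "model" used only to exercise the measurement.
weights = [0.2, -0.1, 0.4]
predict = lambda x: sum(w * xi for w, xi in zip(weights, x))
gap = stability_gap(predict, [1.0, 2.0, 3.0])
```

A small gap under such perturbations is one simple operational notion of "stability"; the sketch makes no representation about the claimed optimization.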
However, Mermoud fails to explicitly disclose optimizing a reliability of the AI model for the specified task using one or more techniques measured against a fitness function, wherein the techniques include model type search, attention mechanism search, model blending with weighted consensus, expert synthesis, RAG, knowledge graph verification, or composite vectorized knowledge graphs.
Wang discloses optimizing a reliability of the AI model for the specified task using one or more techniques measured against a fitness function [“Individuals are sorted in descending order of fitness” §1.2 ¶2; Table I], wherein the techniques include model type search, attention mechanism search, model blending with weighted consensus, expert synthesis, RAG, knowledge graph verification, or composite vectorized knowledge graphs [“attention mechanism directly performs feature transformation” §1.3 ¶1; Table I].
It would have been obvious to one having ordinary skill in the art, having the teachings of Mermoud, He, Chunduru, and Wang before him before the effective filing date of the claimed invention, to modify the combination to incorporate both a fitness function and an attention mechanism of Wang.
Given the advantage of improving accuracy, one having ordinary skill in the art would have been motivated to make this obvious modification.
However, Mermoud fails to explicitly disclose wherein optimization is performed through a distributed computational graph that automatically parallelizes processing across heterogeneous computing resources.
Vasiljevic discloses wherein optimization is performed through a distributed computational graph that automatically parallelizes processing across heterogeneous computing resources [“execute an application data flow graph on a set of computational nodes” col. 5, lines 7-8; “parallel computing using heterogeneous networks of computational nodes” col. 4, lines 16-17].
It would have been obvious to one having ordinary skill in the art, having the teachings of Mermoud, He, Chunduru, Wang, and Vasiljevic before him before the effective filing date of the claimed invention, to modify the combination to incorporate distributed processing on heterogeneous resources of Vasiljevic.
Given the advantage of faster processing and more efficient use of resources, one having ordinary skill in the art would have been motivated to make this obvious modification.
Regarding Claim 2, Mermoud, He, Chunduru, Wang, and Vasiljevic disclose the computing system of claim 1. Mermoud further discloses wherein optimizing the performance of the AI model includes one or more reinforcement learning algorithms comprising Proximal Policy Optimization or Asynchronous Advantage Actor-Critic algorithms [“trained using policy gradients or proximal policy optimization” ¶129].
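For context only (an illustrative sketch, not drawn from Mermoud or from the application), Proximal Policy Optimization's characteristic clipped surrogate objective can be written in a few lines:

```python
def ppo_clip_loss(ratio, advantage, eps=0.2):
    """PPO's clipped surrogate: take the more pessimistic of the unclipped
    and clipped policy-ratio terms, then negate to obtain a loss to minimize."""
    clipped = max(1.0 - eps, min(1.0 + eps, ratio))
    return -min(ratio * advantage, clipped * advantage)

# With a positive advantage, gains from pushing the policy ratio past
# 1 + eps are clipped away, discouraging overly large policy updates.
loss_small = ppo_clip_loss(1.1, advantage=1.0)   # -1.1 (within the clip range)
loss_large = ppo_clip_loss(2.0, advantage=1.0)   # -1.2 (ratio clipped to 1.2)
```

The clipping is the design choice that distinguishes PPO from plain policy gradients: it bounds how far a single update can move the policy.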
Regarding Claim 4, Mermoud, He, Chunduru, Wang, and Vasiljevic disclose the computing system of claim 1.
However, Mermoud fails to explicitly disclose wherein validating the performance of the AI model comprises comparing responses of the AI model to responses provided by the one or more authorities.
He discloses wherein validating the performance of the AI model comprises comparing responses of the AI model to responses provided by the one or more authorities [“sub-divides the training data into a first portion of data for training the machine-learning model(s) 124, and a second portion of data for validating the example machine-learning (ML) model(s)” ¶26; “the training data may originate from a datastore” ¶26].
It would have been obvious to one having ordinary skill in the art, having the teachings of Mermoud, He, Chunduru, Wang, and Vasiljevic before him before the effective filing date of the claimed invention, to modify the combination to incorporate the validation of He.
Given the advantage of ensuring model accuracy, one having ordinary skill in the art would have been motivated to make this obvious modification.
Regarding Claim 6, Mermoud, He, Chunduru, Wang, and Vasiljevic disclose the computing system of claim 1. Mermoud further discloses wherein the adversarial training incorporates malicious examples into training data to make the AI model more resilient against manipulated predictions [“troublemaker module 922 to perform some (malicious) changes” ¶100].
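For context only (not Mermoud's disclosed mechanism, and all names hypothetical), incorporating malicious examples into training data can be sketched with an FGSM-style sign step against a toy linear scorer:

```python
def adversarial_example(x, y, weights, eps=0.1):
    """Perturb each input feature by eps in the direction that increases the
    squared loss of a linear scorer (illustrative sketch only)."""
    # For squared loss on score = w.x, dLoss/dx_i is proportional to
    # (score - y) * w_i; step each feature by eps in that sign direction.
    score = sum(w * xi for w, xi in zip(weights, x))
    sign = lambda v: (v > 0) - (v < 0)
    return [xi + eps * sign((score - y) * w) for xi, w in zip(x, weights)]

def augment(dataset, weights, eps=0.1):
    """Fold adversarial variants back into the training data so the model is
    trained on manipulated inputs as well as clean ones."""
    return dataset + [(adversarial_example(x, y, weights, eps), y)
                      for x, y in dataset]

data = [([1.0, 2.0], 1.0), ([0.5, -1.0], 0.0)]
augmented = augment(data, weights=[0.3, 0.2])
```

Training on the augmented set is the standard sense in which adversarial training makes predictions more resilient against manipulated inputs; the sketch makes no representation about the claimed system.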
Claims 9, 10, 12, 14 are rejected on the same grounds as claims 1, 2, 4, 6, respectively.
Claims 25, 26, 28, 30 are rejected on the same grounds as claims 1, 2, 4, 6, respectively.
Claim(s) 8 and 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Mermoud, He, Chunduru, Wang, and Vasiljevic, further in view of Brownlee, “Blending Ensemble Machine Learning With Python.”
Regarding Claim 8, Mermoud, He, Chunduru, Wang, and Vasiljevic disclose the computing system of claim 1.
However, Mermoud fails to explicitly disclose wherein model blending comprises selecting from or blending outputs from multiple models or authoritative knowledge bases, each trained on, or obtained from, defined resource collections or with different retrieval strategies or hyperparameters.
Brownlee discloses wherein model blending comprises selecting from or blending outputs from multiple models or authoritative knowledge bases, each trained on, or obtained from, defined resource collections or with different retrieval strategies or hyperparameters [“Blending may suggest developing a stacking ensemble where the base-models are machine learning models of any type, and the meta-model is a linear model that "blends" the predictions of the base models.” pg. 3].
It would have been obvious to one having ordinary skill in the art, having the teachings of Mermoud, He, Chunduru, Wang, Vasiljevic, and Brownlee before him before the effective filing date of the claimed invention, to modify the combination to incorporate the blending of Brownlee.
Given the advantage of boosted prediction accuracy, one having ordinary skill in the art would have been motivated to make this obvious modification.
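For context only (an illustrative sketch of the arrangement Brownlee describes, with hypothetical base models and weights, not Applicant's implementation), blending combines base-model predictions through a linear meta-model:

```python
def blend(base_preds, meta_weights, bias=0.0):
    """Linear meta-model that 'blends' the base models' predictions, as in a
    stacking/blending ensemble (sketch only)."""
    return bias + sum(w * p for w, p in zip(meta_weights, base_preds))

# Two hypothetical base models and a meta-model that trusts the first more.
base_models = [lambda x: 2.0 * x, lambda x: x + 1.0]
meta_weights = [0.7, 0.3]

def ensemble(x):
    return blend([m(x) for m in base_models], meta_weights)

pred = ensemble(2.0)  # 0.7*4.0 + 0.3*3.0 = 3.7
```

In practice the meta-model's weights are fit on a held-out set rather than chosen by hand; this sketch only shows the structural idea of a linear blend over base-model outputs.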
Claim 16 is rejected on the same grounds as claim 8.
Examiner’s Note
The Examiner respectfully requests that Applicant, in preparing responses, fully consider the entirety of the reference(s) as potentially teaching all or part of the claimed invention. It is noted, REFERENCES ARE RELEVANT AS PRIOR ART FOR ALL THEY CONTAIN. “The use of patents as references is not limited to what the patentees describe as their own inventions or to the problems with which they are concerned. They are part of the literature of the art, relevant for all they contain.” In re Heck, 699 F.2d 1331, 1332-33, 216 USPQ 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 USPQ 275, 277 (CCPA 1968)). A reference may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art, including non-preferred embodiments (see MPEP 2123). The Examiner has cited particular locations in the reference(s) as applied to the claim(s) above for the convenience of the Applicant. Although the specified citations are representative of the teachings of the art and are applied to the specific limitations within the individual claim(s), typically other passages and figures will apply as well.
Additionally, any claim amendments for any reason should include remarks indicating clear support in the originally filed specification.
Response to Arguments
Regarding the 101 rejections, Applicant's arguments have been fully considered but have been found unpersuasive. Applicant argues that 1) under Step 2A Prong 1 the claims do not recite a judicial exception, 2) under Step 2A Prong 2 the claims as a whole integrate the judicial exception into a practical application, and 3) the claims are not well-understood, routine, and conventional. Examiner disagrees for at least the following reasons.
First, the recited claim contains abstract ideas. Selecting settings for the model is something that a person can do manually, for example with pen and paper. Reinforcement learning is recited as merely a description of the model (e.g., the AI model using reinforcement learning). Furthermore, validating the performance of the model against an authoritative datastore is likewise an abstract idea since a person can perform the steps manually, for example by observation of data. Applicant’s argument that the “validation is performed against authoritative datastores of different types, which requires rapid analysis of large, computing-specific data constructs” (see Remarks pg. 10) is much narrower than the claim language. Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). Similarly, optimizing a stability of the model using input perturbation is claimed broadly so that manual alterations of information satisfy the limitation and can be done manually with pen and paper. Lastly, optimizing a reliability of the model for the specified task using one or more techniques measured against a fitness function, wherein the techniques include model type search, attention mechanism search, model blending with weighted consensus, expert synthesis, RAG, knowledge graph verification, or composite vectorized knowledge graphs is likewise claimed broadly so that mental mathematics can satisfy the limitation. The limitation of wherein optimization is performed through a distributed computational graph that automatically parallelizes processing across heterogeneous computing resources is an additional element and not an abstract idea. For at least these reasons, the claim contains abstract ideas.
Second, the additional elements do not integrate the abstract idea into a practical application. Applicant points to two specific elements: parallelized and distributed processing, and optimizing the model. Parallelized and distributed processing does not integrate the abstract idea into a practical application. Optimizing the model is merely training the model, and does not integrate the abstract idea into a practical application. Furthermore, as discussed above, several of the optimization limitations do not even require model training. They are recited in such a way as to be part of the abstract idea, and not additional elements. Abstract limitations cannot integrate an abstract idea into a practical application. For at least these reasons, the claim’s abstract ideas are not integrated into a practical application.
Third, the additional elements are supported by Berkheimer evidence as outlined in the rejection. According to the Berkheimer memo, there are several ways to support the rejection; court decisions found in the MPEP are one of those ways. Accordingly, the above rejection outlines the well-understood, routine, and conventional nature of the additional elements as found in MPEP 2106.05: specifically, generic computer components, and merely applying the abstract idea in the case of affirmatively claiming generic training. For at least these reasons, the additional elements are supported by Berkheimer evidence and are not found, alone or in combination, to be significantly more than the judicial exception.
For at least these reasons, the rejections are maintained.
Regarding the prior art rejections, Applicant's arguments have been fully considered but have been found unpersuasive. Applicant argues that 1) Mermoud fails to disclose optimizing a performance of an AI model by selecting one or more internal settings for the AI model using reinforcement learning based on a task-specific reward function that measures the performance of the AI model on a specified task, wherein the reinforcement learning comprises sampling a group of outputs of the AI model based on an input query, computing one or more rewards based on the task-specific reward function, and updating the one or more internal settings based on the computed rewards; 2) Mermoud fails to disclose validating the performance of the AI model against one or more authoritative datastores, wherein the one or more authoritative datastores comprises a relational datastore, a NoSQL datastore, a graph datastore, a knowledge graph, a vector datastore, a document datastore, or a hybrid vectorized knowledge graph, or against one or more rule sets; 3) Chunduru fails to disclose optimizing a stability of the AI model using input perturbation because Chunduru is in the context of enhancing visual quality, which is not the context of the claims; 4) Wang fails to disclose optimizing a reliability of the AI model for the specified task using one or more techniques measured against a fitness function, wherein the techniques include … attention mechanism search; 5) Vasiljevic fails to disclose wherein optimization is performed through a distributed computational graph that automatically parallelizes processing across heterogeneous computing resources because the reference relates to compiler-based routing, not to optimizing a model using parallel processing; and 6) the combination is made with impermissible hindsight. Examiner disagrees for at least the following reasons.
First, Mermoud discloses optimizing a performance of a model by selecting one or more internal settings for the AI model by disclosing training of the model, which optimizes it, and by disclosing that the model uses reinforcement learning. The claim does not require Applicant’s specific interpretation.
Second, Applicant's arguments with respect to the claims have been considered but are moot because the arguments do not apply to the references being used in the current rejection of the limitations.
Third, both the instant claim and Chunduru pertain to the same problem of reducing the impact of noise or irrelevant features by using input perturbations, which makes the model more robust.
Fourth, Wang identifies several key characteristics shared by LLMs and EAs, which constitute common mechanisms in existing technologies. Establishing this consistency not only offers a fundamental theoretical explanation for the current coupling of LLMs and EAs but also suggests potential future directions. Accordingly, as shown in at least Table I, the common mechanisms between the two approaches include both the use of a fitness function as well as an attention mechanism. Furthermore, the term fitness function under broadest reasonable interpretation includes merely evaluation/validation for a value. Additionally, attention mechanism under broadest reasonable interpretation is a method that determines the importance of each component in a sequence relative to the other components in that sequence. These concepts are related and disclosed as such in Wang.
Fifth, Vasiljevic solves the same problem as the instant claim using a data flow graph to implement parallel computing using heterogeneous networks of computational nodes.
Sixth, the motivation for combination is listed for each reference in the rejection. While the references may stretch across several technical fields, they are all related to problems encountered by the instant invention, for example, model accuracy, robustness, and implementation efficiency.
For at least these reasons, the rejections are maintained.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROBERT H BEJCEK II whose telephone number is (571)270-3610. The examiner can normally be reached Monday - Friday: 9:00am - 5:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michelle T. Bechtold can be reached at (571) 431-0762. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/R.B./ Examiner, Art Unit 2148
/MICHELLE T BECHTOLD/ Supervisory Patent Examiner, Art Unit 2148