DETAILED ACTION
This final action is in response to the amendment and remarks filed on 07/25/2025 for application 17/734,667.
Claims 1 and 6 have been amended. Claims 2-5 are cancelled. Claims 1 and 6 are pending in the application. Claim 1 is an independent claim.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The amendment filed 07/25/2025 has been entered.
Applicant’s amendment to the title has been considered, and overcomes the objection set forth in the office action mailed 05/21/2025. Consequently, the objection has been withdrawn.
Applicant’s amendment to the claims with respect to resolving claim objections has been considered, and overcomes the objections set forth in the office action mailed 05/21/2025. Consequently, the previous objections have been withdrawn.
Applicant’s amendment to the claims with respect to resolving indefiniteness rejections under 35 U.S.C. 112(b) has been considered, but does not fully overcome the rejections set forth in the office action mailed 05/21/2025. The issues previously raised in the rejection of claim 1 are not fully resolved; applicant is directed towards the grounds of rejection under 112(b) with respect to amended claim 1 set forth below. All other previously raised indefiniteness rejections have been withdrawn.
Claim Objections
Claim 1 is objected to because of the following informalities:
In claim 1, “a dataset integration unit that acquires a plurality of designated datasets or a plurality of datasets corresponding to designated tag information from the dataset storage unit, integrates the acquired datasets, and outputs the datasets to the first training unit as an integrated dataset” should read “a dataset integration unit that acquires, from the dataset storage unit, a plurality of designated datasets or a plurality of datasets corresponding to designated tag information, integrates the acquired datasets, and outputs the acquired datasets to the first training unit as an integrated dataset” or be likewise amended to have clearer ordering and antecedent basis for claim terms.
In claim 1, “wherein the first training unit performs pre-training by self-supervised learning using the integrated dataset and outputs the pre-trained model to the pre-trained model database” should read “wherein the first training unit performs pre-training by self-supervised learning using the integrated dataset, and outputs the pre-trained model to the pre-trained model database” to improve grammatical clarity (see inserted comma).
In claim 1, “a second training unit that performs transfer learning using the pre-trained model stored in the pre-trained model database and the given dataset and outputs the trained model” should read “a second training unit that performs transfer learning, using the pre-trained model stored in the pre-trained model database and the given dataset, and outputs the trained model” or be likewise amended to have clearer ordering for claim terms.
Appropriate corrections are required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1 and 6 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Regarding claim 1, it recites the limitations “a dataset storage unit that stores one or more datasets”, “a first training unit that performs training using a dataset stored in the dataset storage unit”, and “the dataset storage unit stores tag information including any one or more of: domain information….of data included in a dataset to be stored, class information….included in the data, and data acquisition condition information related to an acquisition condition of the data and the dataset in a manner that the tag information and the dataset are associated with each other”. The recitations of “data” and “dataset” in these limitations are unclear in scope, because it is not clear whether previously recited data and dataset elements, or entirely separate claim elements, are being referenced. For example, it is unclear whether “a dataset to be stored” refers to the same “dataset stored in the dataset storage unit” that is previously recited, or to a separate dataset. Further, given that domain information, class information, and data acquisition condition information are recited as possible selections within an alternative limitation, it is unclear whether “the data”, as recited in correspondence to class information and data acquisition condition information, refers to “data included in a dataset to be stored” as recited in correspondence to domain information, or to entirely separate data elements. It is further unclear which dataset “the dataset”, as recited in correspondence to class information and data acquisition condition information, is referring to. It is further unclear what element, or elements, the modifying phrase “in a manner that the tag information and the dataset are associated with each other” modifies. Consequently, one of ordinary skill in the art would not be reasonably apprised of the scope of the invention.
For purposes of examination and as best understood in light of the specification [¶ 0045], the above limitations are interpreted as follows:
the dataset storage unit stores, for each dataset stored in the dataset storage unit, tag information including any one or more of:
domain information indicating a target object of data included in the dataset,
class information indicating a class of data included in the dataset, and
data acquisition condition information related to an acquisition condition of data included in the dataset, and
stores the tag information in such a manner that the tag information and the dataset are associated with each other.
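Solely as an illustrative aid to the above interpretation, and not as a characterization of applicant's disclosure, the following Python sketch shows one hypothetical way a dataset could be stored in association with tag information of the three recited kinds; all identifiers and example values are assumptions introduced only for illustration.

from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional

@dataclass
class TagInfo:
    # Any one or more of the three recited kinds of tag information may be present.
    domain_info: Optional[str] = None                       # target object of data included in the dataset
    class_info: Optional[List[str]] = None                  # classes of data included in the dataset
    acquisition_condition: Optional[Dict[str, Any]] = None  # acquisition condition of data in the dataset

@dataclass
class DatasetStorageUnit:
    # Each stored dataset is kept together with (i.e., associated with) its tag information.
    _store: Dict[str, Dict[str, Any]] = field(default_factory=dict)

    def store(self, name: str, data: List[Any], tags: TagInfo) -> None:
        self._store[name] = {"data": data, "tags": tags}

    def names_matching(self, **criteria: Any) -> List[str]:
        # Return the names of stored datasets whose tag information matches the designated criteria.
        return [name for name, entry in self._store.items()
                if all(getattr(entry["tags"], k, None) == v for k, v in criteria.items())]

# Hypothetical usage: unit.store("vibration_A", samples, TagInfo(domain_info="rotating machinery"))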
Regarding claim 6, it inherits the deficiencies of its parent claim. Consequently, it is also rejected under 35 U.S.C. 112(b) as being indefinite for depending on an indefinite parent claim.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim 1 is rejected under 35 U.S.C. 103 as being unpatentable over Mao ("A Survey on Self-supervised Pre-training for Sequential Transfer Learning in Neural Networks", available on arXiv, Jul. 2020), hereinafter Mao, in view of Guan et al. (Pub. No. US 20210011961 A1, “Systems and Methods for Content Management”, published 01/14/2021), hereinafter Guan, and further in view of Becherer et al. ("Improving optimization of convolutional neural networks through parameter fine-tuning", published Nov. 2017), hereinafter Becherer.
Regarding claim 1, Mao teaches A machine learning system that performs transfer learning to output a trained model by performing training using a given dataset and a parameter of a pre-trained model ("In this paper, we focus on surveying self-supervised learning methods for sequential transfer learning. Self-supervised learning is a type of unsupervised learning where a model is trained on labels that are automatically derived from the data itself without human annotation (Erhan et al., 2010; Hinton et al., 2006). Self-supervised learning methods enable a model to learn useful knowledge about an unlabeled dataset by learning useful representations and parameters. Transfer learning focuses on how to transfer or adapt this learned knowledge from a source task to a target task (Pan and Yang, 2010). Specifically, we focus on a specific type of transfer learning called sequential transfer learning (Ruder, 2019) which adopts a “pre-train then fine-tune” paradigm. Self-supervised learning and transfer learning are two complementary research areas that, together, enable us to harness a source task with a large amount of unlabeled examples and transfer the learned knowledge to a target task of interest" [Mao pages 1-2 Introduction]), the machine learning system comprising:
a first training unit that performs training using a dataset to generate the pre-trained model ("In multi-task learning, tasks Ts and Tt are learned simultaneously, typically through the joint optimization of multiple objective functions. In sequential transfer learning, Ts is first learned, then the downstream task Tt is learned. The first stage is often called pre-training while the second stage of learning is often called fine-tuning in the context of neural networks. The primary difference between these two types of transfer learning is when the target task is learned… Sequential transfer learning is our primary focus in this paper. Sequential transfer learning is more popular in practice as it is simple to set up a two-phase training pipeline and easy to distribute pre-trained models without needing to disclose the pre-training dataset. Most of the self-supervised learning techniques we review can be categorized under sequential transfer learning" [Mao page 3 Transfer Learning Scenarios]),
wherein the first training unit performs pre-training by self-supervised learning using the dataset and outputs the pre-trained model, ([Mao pages 1-2 Introduction] and [Mao page 3 Transfer Learning Scenarios] as detailed above)
a second training unit that performs transfer learning using the pre-trained model and the given dataset and outputs the trained model (“In multi-task learning, tasks Ts and Tt are learned simultaneously, typically through the joint optimization of multiple objective functions. In sequential transfer learning, Ts is first learned, then the downstream task Tt is learned. The first stage is often called pre-training while the second stage of learning is often called fine-tuning in the context of neural networks” [Mao page 3 Transfer Learning Scenarios]).
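As a purely illustrative aid to the “pre-train then fine-tune” paradigm discussed in Mao, and not a reproduction of any method of Mao or of applicant, the following Python sketch assumes a generic PyTorch-style workflow with hypothetical data objects: a self-supervised pre-training stage (labels derived from the data itself) followed by fine-tuning on a target task.

import torch
import torch.nn as nn

# Hypothetical encoder whose parameters are learned during pre-training and reused downstream.
encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))

def pretrain_self_supervised(encoder, unlabeled_batches, epochs=5):
    # Pretext task: reconstruct the input, so the "labels" are derived from the data itself.
    decoder = nn.Linear(16, 32)
    opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for x in unlabeled_batches:              # x: a (batch, 32) tensor, no human annotation
            opt.zero_grad()
            loss_fn(decoder(encoder(x)), x).backward()
            opt.step()
    return encoder                               # the "pre-trained model" handed to the next stage

def finetune_on_target(encoder, labeled_batches, num_classes=3, epochs=5):
    # Second stage of sequential transfer learning: reuse the pre-trained encoder on the target task.
    head = nn.Linear(16, num_classes)
    opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in labeled_batches:             # y: integer class labels for the target task
            opt.zero_grad()
            loss_fn(head(encoder(x)), y).backward()
            opt.step()
    return nn.Sequential(encoder, head)          # the "trained model" output of fine-tuning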
However, Mao is silent about a configuration of dataset storage and processing, and therefore does not expressly teach a dataset storage unit that stores one or more datasets, perform[ing] training using a dataset stored in the dataset storage unit, or stor[ing] [a] generated pre-trained model in a pre-trained model database.
Mao thereby also does not expressly teach wherein the dataset storage unit stores tag information including any one or more of: domain information indicating a target object of data included in a dataset to be stored, class information indicating a class included in the data, and data acquisition condition information related to an acquisition condition of the data and the dataset in a manner that the tag information and the dataset are associated with each other.
Mao thereby also does not expressly teach a dataset integration unit that acquires a plurality of designated datasets or a plurality of datasets corresponding to designated tag information from the dataset storage unit, integrates the acquired datasets, and outputs the datasets to the first training unit as an integrated dataset.
Mao thereby also does not expressly teach wherein the pre-trained model database stores: information regarding the integrated dataset used for the pre-training by the first training unit, or tag information used in the integrated dataset in association with the pre-trained model.
Mao thereby also does not expressly teach a dataset relevance evaluation unit that evaluates similarity between the datasets stored in the dataset storage unit.
In the same field of endeavor, Guan teaches a system for deriving knowledge from raw data via unsupervised learning methods ("The content management systems and methods solve the problems discussed above using an operations real-time trend, topic, and data source detection, monitoring, and recommendation service platform....In addition to handling various content management requests, embodiments may compile useful information on events, news, and trends, using processes automated with sophisticated algorithmic machine-learning methods and computations to process signals impacting their content management activities" [Guan ¶ 0010]; "The present embodiments may also process these data feeds through a broad spectrum of artificial intelligence (Al) models on a real-time basis, to score, rank, filter, classify, cluster, identify, classify, and summarize data feeds from numerous data sources. These Al models may span supervised, semi-supervised, and unsupervised learning" [Guan ¶ 0071]) comprising a dataset storage unit that stores one or more datasets; ("A storage of raw data sources and aggregated data may be written to the database once per hour, affording near real-time data source aggregation and processing for timely statistical analysis and insights" [Guan ¶ 0092]),
performing training using a dataset stored in the dataset storage unit, and stor[ing] [a] generated pre-trained model in a pre-trained model database ( “Using feedback loops, the present embodiments may also improve the functioning of a computer by decreasing the amount of computing capacity and network bandwidth required to process the information, by, for example, enabling the identification of relevant information more quickly, with fewer iterations and fewer queries to a central processing unit. For example, embodiments may aggregate and process historical patterns in data feeds, user input (e.g., regarding relevancy), and user activity, which may be later used to train new artificial intelligence models for more effective detection of real-time events in the present time" [Guan ¶ 0072]; “A content management and processing engine 110 may be configured to streamline reviewer and SME operations 102 and reviewer/SME interaction with third-party content management platform 104, to extract useful data from ongoing interactions with content. Based on content management metadata 106, a data derived from content management data storage module 140, a set of application engines 160 may be configured for performing data streamlining and relevance scoring 162…a reviewer/SME 102 may initiate a loop of interactions between the third-party content management platform 104 and the content management and processing engine 110, which may contain content management module 120, content processing module 130, and content management data storage module 140. Some implementations of data storage module 140 may be further configured to process operations data 142, content type data 144 (e.g., image, video, share and text), and policy and process updates data 146. Once content review is initiated, content management metadata 106 may be established to convert content-related data streams from a content management environment to application engines 160.… Depending on the service setup, the application engines 160 may include a set of other advanced machine-learning and statistical functional engines, each developed to perform analytic tasks on processed content” [Guan ¶ 0076-0078]; The data derived from the data storage module may be further processed and used to train artificial intelligence models which can be hosted, e.g., on third-party content platforms),
wherein the dataset storage unit stores tag information including any one or more of: domain information indicating a target object of data included in a dataset to be stored, class information indicating a class included in the data, and data acquisition condition information related to an acquisition condition of the data and the dataset in a manner that the tag information and the dataset are associated with each other (“Thus, the content management and processing engine 110 may be configured to collect and store descriptive data related to content management activities. Examples of such data include: content type such as image, video, share, or text; content decision types such as skip, delete, and escalate; content handling time, review consistency, and error rate; and content-related policy and process updates; all of which can be saved as metadata 106 for real-time continuous reporting to application engines 160. In other embodiments, content data stored in metadata can be also cross-linked to reviewer identifiers, reviewer sites, and other reviewer-related information" [Guan ¶ 0077]; The stored data includes metadata (i.e., associated tag information) such as content types and/or content decision types (i.e., domain information and/or class information) and reviewer identifiers/sites (i.e., data acquisition information)),
a dataset integration unit that acquires a plurality of designated datasets or a plurality of datasets corresponding to designated tag information from the dataset storage unit, integrates the acquired datasets, and outputs the datasets to the first training unit as an integrated dataset ("The data source processing module 232 may provide a process by which data from data streaming engine 220 is gathered and expressed as a summary for statistical and machine-learning implementations. The data source processing may involve various steps, such as ensuring data sources are correct and relevant, arranging data sources into different sets, summarizing data via keyword tags, combining multiple sources of data by their similarities and relevance, analysis and data interpretation, summary data of computed information, and classification of data sources into various categories. For example, raw data sources can be aggregated as a single source over a given time period and/or as multiple sources gathered spatially over a given time period” [Guan ¶ 0091]; Data sources can be arranged into datasets and combined (i.e., integrated) with other datasets in the source processing module),
wherein the pre-trained model database stores: information regarding the integrated dataset used for the pre-training by the first training unit, or tag information used in the integrated dataset in association with the pre-trained model ([Guan ¶ 0076-0078, 0091] as detailed above; The stored data, including, e.g., datasets comprising data integrated from multiple sources, also includes metadata (i.e., associated tag information), wherein both the data and its associated metadata are used to configure application engines (which include, e.g., machine learning models, and can be hosted on third-party content platforms) for further learning tasks),
and a dataset relevance evaluation unit that evaluates similarity between the datasets stored in the dataset storage unit ([Guan ¶ 0091] as detailed above; Integrated datasets can, e.g., combine multiple sources of data based on their similarities and relevance).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated the dataset storage and processing configuration taught by Guan into Mao because they are both directed towards deriving knowledge from raw data via unsupervised learning methods. Incorporating the teachings of Guan, such as performing initial data processing tasks of aggregating data and combining similar sources, would improve functioning of the overall system by decreasing computing capacity and/or network bandwidth required to process information ("Using feedback loops, the present embodiments may also improve the functioning of a computer by decreasing the amount of computing capacity and network bandwidth required to process the information, by, for example, enabling the identification of relevant information more quickly, with fewer iterations and fewer queries to a central processing unit. For example, embodiments may aggregate and process historical patterns in data feeds, user input (e.g., regarding relevancy), and user activity, which may be later used to train new artificial intelligence models for more effective detection of real-time events in the present time" [Guan ¶ 0072]) and efficiently organizing the data for downstream tasks (e.g., training ML models) ("The data source processing module 232 may provide a process by which data from data streaming engine 220 is gathered and expressed as a summary for statistical and machine-learning implementations" [Guan ¶ 0091]).
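For illustration only, the following Python sketch shows one hypothetical way datasets could be acquired by designation or by designated tag information and combined into a single integrated dataset, together with a simple tag-based similarity measure of the kind a dataset relevance evaluation unit might use; the storage layout and helper names are assumptions introduced for illustration, not teachings of either reference.

from typing import Any, Dict, List, Optional

def integrate_datasets(storage: Dict[str, Dict[str, Any]],
                       designated_names: Optional[List[str]] = None,
                       designated_tags: Optional[Dict[str, Any]] = None) -> List[Any]:
    # Acquire datasets either by designation (name) or by designated tag information,
    # then integrate (concatenate) them into a single integrated dataset.
    acquired = []
    for name, entry in storage.items():          # entry assumed to be {"data": [...], "tags": {...}}
        by_name = designated_names is not None and name in designated_names
        by_tags = designated_tags is not None and all(
            entry["tags"].get(k) == v for k, v in designated_tags.items())
        if by_name or by_tags:
            acquired.append(entry["data"])
    return [sample for data in acquired for sample in data]

def tag_similarity(tags_a: Dict[str, Any], tags_b: Dict[str, Any]) -> float:
    # One simple relevance measure: Jaccard similarity over (key, value) tag pairs.
    # Assumes tag values are hashable (e.g., strings).
    a, b = set(tags_a.items()), set(tags_b.items())
    return len(a & b) / len(a | b) if (a | b) else 1.0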
However, the combination does not expressly teach wherein the second training unit trains only an output layer of the pre-trained model.
In the same field of endeavor, Becherer teaches a system of transfer learning (“Parameter finetuning is a method wherein a network is pre-trained on a different data set and then retrained on the target set. Research has shown this method boosts the performance of a network over random initialization [3]. However, research on this topic is still relatively scarce. This paper will present several gaps in our understanding as well a framework for investigating them…Transfer learning is the study of using data gained from one problem in machine learning and applying it to another related, yet different, problem. With CNNs, there are two main ways to apply transfer learning. The first is to remove the output layer of a trained network and use the raw output of the previous fully connected layer as a generic feature vector that describes a particular image. These features are then used in a number of algorithms which were originally designed around using SIFT or SURF features [16, 17]. Since CNNs are inherently only capable of image classification, extra algorithmic work is necessary to apply it to another problem. The second transfer learning technique is known as parameter fine-tuning, detailed above. This is the focus of this paper” [Becherer page 2, Introduction and Related works in transfer learning for CNNs]) wherein the second training unit trains only an output layer of a pre-trained model ("Fig. 1 shows a simple example of parameter fine-tuning, depicting a network trained on a source task (left) to be applied to a network trained on a target task (right). The source task network, with green nodes, proceeds through training with learned weights represented by blue lines and mapped to two outputs. To transfer the network, a layer is cut off (in this case only the output layer, but not necessarily so)…the weights in the final remaining layer must be reinitialized, which is represented in the figure by red lines. After transfer, the learned weights optimize SGD to the target task with three outputs" [Becherer page 3 Experimental framework and dataset]; "Fig. 1 Fine-tuning visualized, where the network on the left is trained on a task similar to the transfer task. At right the pre-trained network, the output layer is cut off and the final weight layer is transferred to a network with varied outputs (colour figure online)" [Becherer see Figure 1 on page 3]; In light of the specification (“In the present invention, one or more layers in front of the output layer are referred to as feature extraction layers” [¶ 0017]; “Training of the feature extraction layer may be sufficiently completed by pre-training. In such a case, if only the output layer is newly trained, the trained model 204 that is highly accurate can be acquired at high speed” [¶ 0094]), the examiner has interpreted the limitation “wherein the second training unit trains only an output layer of a pre-trained model” to encompass a procedure of removing the output layer of the pre-trained model, re-initializing the weights of a new output layer to match the output of the target task, and re-training the weights of the new output layer on the target task, which is functionally equivalent to the procedure taught by Becherer as detailed above).
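As an illustrative aid to the interpretation set forth above (keep and freeze the pre-trained feature extraction layers, re-initialize a new output layer for the target task, and train only that layer), the following hypothetical PyTorch-style sketch is offered; it is not drawn from Becherer or from applicant's disclosure, and it assumes the pre-trained model's final child module is a linear output layer.

import torch
import torch.nn as nn

def finetune_output_layer_only(pretrained: nn.Sequential, num_target_classes: int):
    # Keep the feature extraction layers (everything in front of the output layer) and freeze them.
    children = list(pretrained.children())
    feature_layers = nn.Sequential(*children[:-1])
    for p in feature_layers.parameters():
        p.requires_grad = False                  # feature extraction layers are not re-trained

    # Re-initialize a new output layer sized to the target task
    # (assumes the removed output layer was an nn.Linear module).
    new_output = nn.Linear(children[-1].in_features, num_target_classes)

    model = nn.Sequential(feature_layers, new_output)
    # Only the new output layer's parameters are handed to the optimizer, so only it is trained.
    optimizer = torch.optim.SGD(new_output.parameters(), lr=1e-2)
    return model, optimizer

# Hypothetical usage: model, opt = finetune_output_layer_only(pretrained_net, num_target_classes=5)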
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated wherein the second training unit trains only an output layer of a pre-trained model as taught by Becherer into the combination because Mao and Becherer are both directed towards transfer learning techniques. It is well known in the art that training complicated models, even on high-powered hardware architectures, can be a lengthy procedure ("Second, even with high-powered GPUs, training a CNN takes days, and more complicated models take longer (the authors of VGG reported that training took 2-3 weeks on similar hardware [13]). Given the number of CNNs that need to be trained for this experiment, time is a nontrivial factor" [Becherer page 5 Experimental framework and dataset]). Given further recognition that pre-training and parameter fine-tuning procedures improve the accuracy of ML models at the cost of increased training time ([Becherer page 10 Conclusion]), a person of ordinary skill in the art would have recognized the value of incorporating the teachings of Becherer to achieve an optimal trade-off between training time and model accuracy.
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Mao, Guan, and Becherer, as applied to claim 1 above, and further in view of Rafati et al. ("Quasi-Newton Optimization Methods for Deep Learning Applications", published Feb. 2020), hereinafter Rafati.
Regarding claim 6, the combination of Mao, Guan, and Becherer teaches the limitations of parent claim 1.
However, the combination does not expressly teach perform[ing] training by a quasi-Newton method or a natural gradient method.
In the same field of endeavor, Rafati teaches a system for deriving knowledge from data via deep learning methods (e.g., unsupervised learning) ("Deep learning algorithms attempt to train a function approximation (model), usually a deep convolutional neural network (CNN), over a large dataset. In most of the deep learning and deep reinforcement learning (RL) algorithms, solving an empirical risk minimization (ERM) problem is required [3]" [Rafati page 2 Introduction]; "In this chapter, we present methods based on quasi-Newton optimization for solving the ERM problem for deep learning applications. For numerical experiments, we focus on two deep learning applications, one in supervised learning and the other one in reinforcement learning. The proposed methods are general purpose and can be employed for solving optimization steps of other deep learning applications" [Rafati pages 4-5 Applications and Objectives]) that perform[s] training by a quasi-Newton method or a natural gradient method ("The Broyden–Fletcher–Goldfarb–Shanno (BFGS) method [21–24] is considered the most widely used quasi-Newton algorithm, which produces a positive definite matrix Bk for each iteration" [Rafati page 3 Motivation]; "In this chapter, we present methods based on quasi-Newton optimization for solving the ERM problem for deep learning applications....First, we introduce novel large-scale L-BFGS optimization methods using the trust-region strategy--as an alternative to the gradient descent methods....Next, we investigate the utility of quasi-Newton optimization methods in deep reinforcement learning (RL) applications" [Rafati pages 4-5 Applications and Objectives]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated perform[ing] training by a quasi-Newton method or a natural gradient method and an extension thereof as taught by Rafati into the combination because Mao and Rafati are both directed towards deriving knowledge from data via deep learning methods (e.g., unsupervised learning). Incorporating the teachings of Rafati would provide a memory-efficient, training time-efficient, and scalable training algorithm applicable to a variety of unsupervised learning (e.g., transfer learning) applications ("Due to the nonconvex and nonlinear loss functions arising in deep reinforcement learning, our numerical experiments show that using the curvature information in computing the search direction leads to a more robust convergence when compared to the SGD results. Our proposed deep L-BFGS Q-Learning method is designed to be efficient for parallel computations on GPUs. Our method is much faster than the existing methods in the literature, and it is memory efficient since it does not need to store a large experience replay memory. Since our proposed limited-memory quasi-Newton optimization methods rely only on first-order gradients, they can be efficiently scaled and employed for larger scale supervised learning, unsupervised learning, and reinforcement learning applications. The overall enhanced performance of our proposed optimization methods on deep learning applications can be attributed to the robust convergence properties, fast training time, and better generalization characteristics of these optimization methods" [Rafati page 27 Conclusions]).
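For illustration only, the following Python sketch shows training with a limited-memory quasi-Newton (L-BFGS) optimizer, consistent with the general approach discussed in Rafati; the model, data, and hyperparameters are hypothetical assumptions, and torch.optim.LBFGS is used merely as one readily available quasi-Newton implementation.

import torch
import torch.nn as nn

# Hypothetical model and data; only the choice of optimizer is the point of this sketch.
model = nn.Sequential(nn.Linear(16, 32), nn.Tanh(), nn.Linear(32, 1))
x, y = torch.randn(128, 16), torch.randn(128, 1)
loss_fn = nn.MSELoss()

# torch.optim.LBFGS is a limited-memory quasi-Newton optimizer; it requires a closure that
# re-evaluates the loss so the optimizer can perform its internal line search.
optimizer = torch.optim.LBFGS(model.parameters(), lr=0.1, max_iter=20, history_size=10)

def closure():
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    return loss

for _ in range(10):
    optimizer.step(closure)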
Response to Arguments
The remarks filed 07/25/2025 have been fully considered.
Applicant’s remarks [Remarks page 6] traversing the obviousness rejections under 35 U.S.C. 103 set forth in the office action mailed 05/21/2025, in view of claims 1 and 6 as amended, have been considered but are not persuasive.
Applicant alleges that Becherer does not disclose “the second training unit train[ing] only an output layer of the pre-trained model”, because Becherer discloses that the output layer of a trained network is removed and the raw output of the previous fully connected layer is used as a generic feature vector that describes a particular image.
The examiner respectfully disagrees, and has elaborated on the rejection to emphasize that when considering the limitation in light of the specification, it is apparent that Becherer discloses a functionally equivalent procedure to that which is recited in the claim. Applicant is directed towards the rejection of amended claim 1 set forth above [see Claim Rejections - 35 USC § 103 pages 14-16].
Applicant has not presented further arguments with respect to the dependent claims. As such, amended claims 1 and 6 stand rejected under 35 U.S.C. 103.
Conclusion
Applicant’s amendment necessitated the new ground(s) of rejection presented in this office action. Accordingly, THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to VIJAY M BALAKRISHNAN whose telephone number is (571) 272-0455. The examiner can normally be reached 10am-5pm EST Mon-Thurs.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, JENNIFER WELCH can be reached on (571) 272-7212. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/V.M.B./
Examiner, Art Unit 2143
/JENNIFER N WELCH/Supervisory Patent Examiner, Art Unit 2143