DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
Claims 1, 6, 9, and 17 have been amended by Applicant. Claims 3, 10, and 16 are cancelled, and no new claims have been added. Claims 1-2, 4-9, 11-15, and 17 are currently pending.
Response to Arguments
Claim Rejections under 35 U.S.C. 112(b)
The rejection of claim 7 under 35 U.S.C. 112(b) for lack of antecedent basis has been withdrawn in view of Applicant’s amendment to claim 6 (from which claim 7 depends). However, new rejections under 35 U.S.C. 112(b) have been made herein as to claims 4, 5, and 8 and claims 11, 12, and 15. (See claim rejections under 35 U.S.C. 112).
Claim Rejections under 35 U.S.C. 102
The rejection of claims 1, 2, 6, 7, 9, 13, 14, and 17 under 35 U.S.C. 102 has been maintained.
Applicant's arguments filed 11/25/2025 have been fully considered but they are not persuasive.
Applicant has amended claim 1 to incorporate the limitations of cancelled claim 3.
Applicant first argues (on page 7 of Applicant’s remarks) that the claimed invention is intended to implement a stand-alone machine learning engine on a mobile terminal device and that this differs from the disclosed invention in Heimendinger.
In response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., a stand-alone machine learning engine on a mobile terminal device) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).
Examiner notes that Heimendinger teaches the claimed prediction output service module located in a system process. As set forth in the Non-Final Office Action dated 08/27/2025, Heimendinger, Paragraph [0017], was cited as teaching that, throughout the specification, the term "platform" may be a combination of software and hardware components for providing data processing applications that may employ predictive data caching in executing queries. Examples of platforms include, but are not limited to, a hosted service executed over a plurality of servers, an application executed on a single server, and comparable systems. The term "server" refers to a computing device executing one or more software programs typically in a networked environment. The term "client" refers to a computing device or software application that provides a user access to data and other software applications through a network connection with other clients and/or servers.
Applicant further argues (on page 10 of Applicant’s remarks) that Heimendinger does not disclose the prediction output service located in the system process. Applicant further argues that the query request taught by Heimendinger comes from applications of the client devices, rather than from other modules in the system process. To this effect, Applicant maintains that Heimendinger fails to disclose the amended limitation “wherein the prediction output service module is further configured to return a buffered prediction result to other modules in the system process in response to a reception of a request for the prediction result from the other modules.”
Examiner notes that the limitation “located in a system process” is not specifically defined in Applicant’s specification. To this effect, the limitation has been understood, under the broadest reasonable interpretation, as a process executed in any computing environment.
Examiner further maintains that Heimendinger was shown to teach the argued limitation wherein the prediction output service module is further configured to return a buffered prediction result to other modules in the system process in response to a reception of a request for the prediction result from the other modules. Accordingly, Heimendinger, Paragraph [0028], was cited as teaching that FIG. 3 illustrates major components and operations in a predictive data caching system in diagram 300, and that a system according to embodiments performs four main operations: (1) storing query information for use in building or refining a predictive model; (2) using the predictive model to preemptively query the data source; (3) caching the results of preemptive queries; and (4) returning cached results that match a user-executed query. Heimendinger, Paragraph [0041], was further cited as teaching that data processing application 522 and caching module 524 may be separate applications or integral modules of a hosted service that provides computing services to client applications/devices [i.e., [0041] reading on a “system process”]; that data processing application 522 may provide data retrieval, organization, analysis, and similar functions including reporting retrieved data to clients; and that caching module 524 may, among other things, maintain a predictive model to schedule preemptive query execution as discussed in more detail above. See also Fig. 3, 348 (cached results), 338 (return cached results, i.e., reading on an output service module configured to return a buffered prediction result), and 332 (execute query).
Examiner further notes that the limitation “prediction output service module”, in claim 1, was interpreted under 35 U.S.C. 112(f) to mean modules executed by a processor or as part of an integrated circuit, as set forth in Applicant’s specification at page 15, lines 16-33 and page 16, lines 1-13.
Because Applicant has failed to claim the “system process” as more narrowly argued in its remarks, Examiner respectfully maintains that Heimendinger does teach, under broadest reasonable interpretation, each and every limitation, as currently claimed in amended claim 1.
Therefore, the rejection of claim 1 (as amended), under 35 U.S.C. 102 has been maintained herein.
For at least the same reasons stated above for claim 1 (as amended), the rejections of analogous independent claims 9 and 17 (as amended) have also been maintained under 35 U.S.C. 102.
Claim Rejections under 35 U.S.C. 103
The rejection of claims 4, 5, 8, 11, 12, and 15 under 35 U.S.C. 103 has been maintained herein.
For at least the same reasons set forth above for claim 1 (as amended), the rejections of dependent claims 4, 5, 8, 11, 12, and 15 have also been maintained.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:
“core learning application module”, in claim 1
“prediction output service module”, in claim 1
[EXAMINER NOTE: The corresponding structure described in the specification as performing the claimed function of these modules has been identified in page 15, lines 16-33 and page 16, lines 1-13, which state that the modules may be executed by a processor or as part of an integrated circuit]
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112
Claims 4, 5, and 8 are recited as still dependent on cancelled claim 3. Therefore, claims 4, 5, and 8 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. For purposes of compact prosecution, Examiner has interpreted these claims to be dependent on claim 1 (as amended).
Claims 11, 12, and 15 are recited as still dependent on cancelled claim 10. Therefore, claims 11, 12, and 15 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. For purposes of compact prosecution, Examiner has interpreted these claims to be dependent on claim 9 (as amended).
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 2, 6, 7, 9, 13, 14, and 17 (as amended) are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Heimendinger (US 20110055202 A1, filed Aug. 31, 2009 and published Mar. 3, 2011).
Regarding claim 1, Heimendinger teaches an apparatus for implementing a machine learning engine, comprising:
a core learning application module with an independent application process (Heimendinger, Paragraph [0014] teaches embodiments will be described in the context of program modules that execute in conjunction with an application program that runs on an operating system on a personal computer); and
a prediction output service module located in a system process (Heimendinger, Paragraph [0017] teaches throughout this specification, the term "platform" may be a combination of software and hardware components for providing data processing applications that may employ predictive data caching in executing queries. Examples of platforms include, but are not limited to, a hosted service executed over a plurality of servers, an application executed on a single server, and comparable systems. The term "server" refers to a computing device executing one or more software programs typically in a networked environment. The term "client" refers to a computing device or software application that provides a user access to data and other software applications through a network connection with other clients and/or servers. More detail on these technologies and example operations is provided below.; See also Fig. 5, 506 (program modules), 522 (data processing application), 502 (processing unit), and 524 (caching module));
wherein the core learning application module is configured to output a prediction result generated by machine learning to the prediction output service module, and the prediction output service module is configured to buffer the prediction result in response to a reception of the prediction result sent by the core learning application module (Heimendinger, Paragraph [0005], teaches a predictive model to schedule "preemptive" queries based on frequently utilized query paths in hierarchically structured data. The predictive model may be formed/adjusted based on user or organization profiles, usage history, and similar factors. Queries may then be executed in a preemptive fashion (prior to an actual request by a user) based on the predictive model and parameterizable thresholds, and results cached. Cached results may then be provided to a requesting user more rapidly, saving network and computing resources; Heimendinger, Paragraph [0021], teaches results cached before the user asks for the query, thus making queries appear to be more responsive. The results may be cached at the server (cache memory 112), at the client device 104 (cache memory 106), or even at a designated data store [NOTE: the results cached from the predictive model, as disclosed in Heimendinger, have been understood to read on “buffer the prediction result” as claimed]; Heimendinger, Paragraph [0031], further teaches the model may employ one or more algorithms for creating predictions. The algorithms may include neural networks, Bayesian trees, collaborative filtering, and other techniques often found in data mining applications such as various machine learning algorithms; See also Fig. 3, illustrating an example of how a query is processed through the predictive model “platform”);
wherein the prediction output service module is further configured to return a buffered prediction result to other modules in the system process in response to a reception of a request for the prediction result from the other modules (Heimendinger, Paragraph [0028], teaches FIG. 3 illustrates major components and operations in a predictive data caching system in diagram 300. A system according to embodiments performs four main operations: (1) storing query information for use in building or refining a predictive model; (2) using the predictive model to preemptively query the data source; (3) caching the results of preemptive queries; and (4) returning cached results that match a user-executed query; Heimendinger, Paragraph [0041], teaches data processing application 522 and caching module 524 may be separate applications or integral modules of a hosted service that provides computing services to client applications/devices. Data processing application 522 may provide data retrieval, organization, analysis, and similar functions including reporting retrieved data to clients. Caching module 524 may, among other things, maintain a predictive model to schedule preemptive query execution as discussed in more detail above; See also Fig. 3, 348 (cached results), 338 (return cached results, i.e., reading on an output service module configured to return a buffered prediction result), and 332 (execute query)).
Regarding claim 2, Heimendinger teaches all of the limitations of claim 1, and further teaches wherein the core learning application module is further configured to perform machine learning to generate the prediction result (Heimendinger, Paragraph [0014] teaches embodiments will be described in the context of program modules that execute in conjunction with an application program that runs on an operating system on a personal computer.; Heimendinger, Paragraph [0031], further teaches the model may employ one or more algorithms for creating predictions. The algorithms may include neural networks, Bayesian trees, collaborative filtering, and other techniques often found in data mining applications such as various machine learning algorithms).
Regarding claim 6, Heimendinger teaches all of the limitations of claim 1, and further discloses wherein: the prediction output service module is further configured to send an update notification according to a preset strategy to inform the core learning application module to perform a new machine learning operation; and the core learning application module is further configured to start a new machine learning operation and output an updated prediction result to the prediction output service module after receiving the update notification from the prediction output service module (Heimendinger, Paragraph [0032] teaches when a query is executed against data source 350, the system feeds the information about that query (similar to the information captured above) into the predictive model 344 as input variables [update notification], which may also be used to update the predictive model (342). The model returns a set of queries likely to follow from the most recent query, along with the statistical probabilities for each suggestion. Based on a configurable parameter, specified by a user or specified as the result of the predictive model 344 self-optimizing, suggested queries with a statistical likelihood over a predefined threshold or with a historical query response time below another predefined threshold, may be chosen for execution (346) and caching.).
Regarding claim 7, Heimendinger teaches all of the limitations of claim 6, and further teaches wherein the preset strategy comprises at least one of following: an expiration of a periodic interval, an occurrence of a preset event, or a detection of an expiration of the validity period of the buffered prediction result (Heimendinger, [Claim 18] teaches updating the cached preemptive query results based on a trigger event; Heimendinger, [Claim 15] further teaches wherein the cached results of the preemptive query are configured to expire based on one of: a user preference and a frequency of updates to the at least one data source).
Regarding claim 9,
Claim 9 recites similar or analogous limitations as claim 1 (as amended) and therefore it is rejected under the same rationale as stated for claim 1 (as amended).
Regarding claim 13,
Claim 13 recites similar or analogous limitations as claim 6 and, therefore, it is rejected under the same rationale as claim 6.
Regarding claim 14,
Claim 14 recites similar or analogous limitations as claim 7 and, therefore, it is rejected under the same rationale as claim 7.
Regarding claim 17,
Claim 17 recites similar or analogous limitations as claim 1 (as amended) and, therefore, it is rejected under the same rationale as claim 1 (as amended).
Heimendinger further teaches a non-transitory computer-readable storage medium storing at least one program, wherein the at least one program is executable by at least one processor and causes the at least one processor to perform a method for implementing a machine learning engine (Heimendinger, Paragraph [0016], teaches embodiments may be implemented as a computer-implemented process (method), a computing system, or as an article of manufacture, such as a computer program product or computer readable media. The computer program product may be a computer storage medium readable by a computer system [i.e., by at least one processor] and encoding a computer program that comprises instructions for causing a computer or computing system to perform example process(es). The computer-readable storage medium can for example be implemented via one or more of a volatile computer memory, a non-volatile memory, a hard drive, a flash drive, a floppy disk, or a compact disk, and comparable media.)
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 4, 5, 8, 11, 12, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Heimendinger, as applied to claims 1 and 9 above, and further in view of Chen et al., “Caching with Time Domain Buffer Sharing” (published Dec. 4, 2018).
Regarding claim 4, Heimendinger teaches all of the limitations of claim 1; however, Heimendinger does not distinctly disclose wherein the prediction output service module is further configured to buffer a validity period of the prediction result, and to return the buffered prediction result to the other modules in response to a judgement that the prediction result is within the corresponding validity period.
Nevertheless, Chen teaches wherein the prediction output service module is further configured to buffer a validity period of the prediction result, and to return the buffered prediction result to the other modules in response to a judgement that the prediction result is within the corresponding validity period (Chen, Abstract, teaches storage-efficient caching based on time domain buffer sharing. The caching policy allows a user’s device to determine [i.e., judge] whether and how long it should cache a content item according to the prediction of the user’s random request time, also referred to as the request delay information (RDI) [i.e., a validity period]. In particular, the aim is to maximize the caching gain for communications while limiting its storage cost; Chen, pg. 2731, col. 1, teaches time-varying content popularity models were proposed in [21]–[23]. In our previous work [23], we were aware of the fact that most data is requested only once. Based upon this observation, we defined the request delay information (RDI) as the probability density function (p.d.f.) of a user’s request time/delay for a content item. In practice, the RDI can be estimated from a content item’s labels or keywords and then applied to predict the user’s request time for a content item; Chen, pg. 2731, col. 1, further teaches the time-varying popularity allows a user to remove a cached content item from its buffer when this content item becomes outdated or less popular. Once a content item is removed, the occupied buffer space can be released in order to cache other content items. As such, the user may reuse its buffer in the time domain; See also Fig. 1).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have modified the predictive model, as taught by Heimendinger, to further include caching based on time domain buffer sharing, as taught by Chen, in order to maximize the caching gain for communications while limiting the storage cost (Chen, Abstract and pg. 2731, col. 1).
[EXAMINER NOTE: Examiner notes that Heimendinger at [Claim 15] teaches wherein the cached results of the preemptive query are configured to expire based on one of: a user preference and a frequency of updates to the at least one data source, which at least suggests a buffer validity period].
Regarding claim 5, the combination of Heimendinger in view of Chen teaches all of the limitations of claim 4, and the combination further teaches wherein the prediction output service module is further configured to obtain the validity period from the core learning application module, or to set the validity period independently (Chen teaches the user’s demand probability for the content item is given by (1 − q_i). Hence its content popularity is directly proportional to the sum of all users’ demand probabilities. Thus the RDI is a more precise description of the content popularity. Along with the content arrival time, the RDI characterizes a content’s instantaneous popularity for a specific user, which may be time-varying. In the following simplified scenario, we further demonstrate how the RDI is related to the content popularity within a time interval. For a group of users with the same content preference, we may estimate a user’s demand probability for a content item within a time interval, which can be regarded as a function of the end time of the time interval. By differentiating this function, we obtain the RDI. By this means, not only the content popularity but also the RDI can be predicted based on a content item’s keywords or labels; Chen further teaches the online algorithm begins by setting an initial value for the Lagrange multiplier λ randomly. Then the user will implement the caching policy, with parameters determined by Subsection IV-B and r_i(λ) in Eq. (9), to decide whether and how long a content file should be cached based on its RDI p_i(x). The implementation time is long enough that the arithmetic mean of the buffer consumption converges well to its expectation. In this case, the user may estimate its storage cost locally, which is a function of λ, denoted by Ŝ(λ). If Ŝ(λ) < S, i.e., the receiver buffer is under-utilized, then we may increase λ to achieve higher effective throughput by paying an additional storage cost. If Ŝ(λ) > S, i.e., the receiver buffer is over-utilized, then we should decrease λ to reduce the storage cost. After updating λ, the user will implement the caching policy with updated r_i(λ) again, until the arithmetic mean of the buffer consumption converges well; See also Fig. 1).
The motivation to combine is the same as that stated for claim 4.
Regarding claim 8, the combination of Heimendinger in view of Chen teaches all of the limitations of claim 4, and the combination further teaches wherein the prediction output service module is further configured to forcibly update the validity period of the prediction result or clear the prediction result, in response to an occurrence of a preset specific event (Chen, pg. 2736, col. 2, teaches in this subsection, we are interested in a practical situation, in which a user may not have any global knowledge about the content arrival processes. Therefore, it is not possible for the user to formulate a joint rate-cost allocation problem (8). Fortunately, the structural result in Theorem 4 implies a caching policy without any need of that global knowledge. In particular, we propose an online algorithm to maximize the effective throughput, while limiting the average buffer occupation to be less than or equal to a target value S. The online algorithm begins by setting an initial value for the Lagrange multiplier λ randomly. Then the user will implement the caching policy, with parameters determined by Subsection IV-B and r_i(λ) in Eq. (9), to decide whether and how long a content file should be cached based on its RDI p_i(x). The implementation time is long enough that the arithmetic mean of the buffer consumption converges well to its expectation. In this case, the user may estimate its storage cost locally, which is a function of λ, denoted by Ŝ(λ). If Ŝ(λ) < S, i.e., the receiver buffer is under-utilized, then we may increase λ to achieve higher effective throughput by paying an additional storage cost. If Ŝ(λ) > S, i.e., the receiver buffer is over-utilized, then we should decrease λ to reduce the storage cost. After updating λ, the user will implement the caching policy with updated r_i(λ) again, until the arithmetic mean of the buffer consumption converges well. The above iteration assures that Ŝ(λ) approaches its target value, while the effective throughput is maximized.).
The motivation to combine is the same as that stated for claim 4.
Regarding claim 11,
Claim 11 recites similar or analogous limitations as claim 4 and, therefore, it is rejected under the same rationale and motivation as claim 4.
Regarding claim 12,
Claim 12 recites similar or analogous limitations as claim 5 and, therefore, it is rejected under the same rationale and motivation as claim 5.
Regarding claim 15,
Claim 15 recites similar or analogous limitations as claim 8 and, therefore, it is rejected under the same rationale and motivation as claim 8.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BEATRIZ RAMIREZ BRAVO whose telephone number is 571-272-2156. The examiner can normally be reached Mon. - Fri. 7:30 a.m. - 5:00 p.m.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, USMAAN SAEED can be reached at 571-272-4046. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/B.R.B./Examiner, Art Unit 2146
/USMAAN SAEED/Supervisory Patent Examiner, Art Unit 2146