DETAILED ACTION
This action is in response to the application filed 05/23/2023. Claims 1-6 are pending and have been examined.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Specification
The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 2-4 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
The term “data having a larger contribution” in claim 2 is a relative term which renders the claim indefinite. The term “larger contribution” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. It is unclear as to what a contribution to the variation in load or the decrease in inference accuracy entails and, for that matter, it is thus unclear as to what a larger contribution comprises. For purposes of examination, Examiner has interpreted “data having a larger contribution” to be data that was used for training and resulted in the variation in load or the decrease in inference accuracy.
Claim 3 recites “target data” in line 1. It is unclear as to whether this target data is associated with the target data group recited in claim 1, or if this is different target data. For purposes of examination, Examiner has interpreted “target data” to be data within the target data group.
Claim 3 recites the limitation "the target data in the target data group" in line 3. There is insufficient antecedent basis for this limitation in the claim. It is unclear as to whether this target data in the target data group is the same as the previously recited target data in line 1 or if this target data in the target data group is referring to different target data.
Claim 4 recites “target data” in line 2. It is unclear as to whether this target data is associated with the target data group recited in claim 1, or if this is different target data. For purposes of examination, Examiner has interpreted “target data” to be data within the target data group.
Claim 4 recites the limitation "the target data in the target data group" in line 4. There is insufficient antecedent basis for this limitation in the claim. It is unclear as to whether this target data in the target data group is the same as the previously recited target data in line 2 or if this target data in the target data group is referring to different target data.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-6 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Regarding Claim 1:
Subject Matter Eligibility Analysis Step 1:
Claim 1 recites a method and is thus a process, one of the four statutory categories of patentable subject matter.
Subject Matter Eligibility Analysis Step 2A Prong 1:
Claim 1 recites
determining whether or not a tendency of a target data group on which inference is performed is changed in at least one of the edge device or the server device on a basis of a variation in load or a decrease in inference accuracy in at least one of the edge device or the server device; (This limitation is a mental process as it encompasses a human mentally determining whether or not a group is changed.)
Therefore, claim 1 recites an abstract idea.
Subject Matter Eligibility Analysis Step 2A Prong 2:
Claim 1 further recites additional elements of
A processing method executed by a processing system that performs first inference in an edge device and performs second inference in a server device, the processing method comprising: (This element does not integrate the abstract idea into a practical application because it recites generic computing components on which to perform the abstract idea (see MPEP 2106.05(f)).)
executing relearning of at least one of a first model that performs the first inference or a second model that performs the second inference in a case where it is determined in the determination process that the tendency of the target data group is changed. (This element does not integrate the abstract idea into a practical application because it amounts to mere “apply it on a computer” (see MPEP 2106.05(f)).)
Therefore, claim 1 is not integrated into a practical application.
Subject Matter Eligibility Analysis Step 2B:
The additional elements of claim 1 do not provide significantly more than the abstract idea itself, taken alone and in combination because
A processing method executed by a processing system that performs first inference in an edge device and performs second inference in a server device, the processing method comprising uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)).
executing relearning of at least one of a first model that performs the first inference or a second model that performs the second inference in a case where it is determined in the determination process that the tendency of the target data group is changed uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)).
Therefore, claim 1 is subject-matter ineligible.
Regarding Claim 2:
Subject Matter Eligibility Analysis Step 2A Prong 1:
Claim 2 recites the same abstract ideas as claim 1. Therefore, claim 2 recites an abstract idea.
Subject Matter Eligibility Analysis Step 2A Prong 2:
Claim 2 further recites additional elements of
the relearning of at least one of the first model or the second model is executed by using data having a larger contribution to the variation in load or the decrease in inference accuracy in the target data group. (This element does not integrate the abstract idea into a practical application because it amounts to mere “apply it on a computer” (see MPEP 2106.05(f)).)
Therefore, claim 2 is not integrated into a practical application.
Subject Matter Eligibility Analysis Step 2B:
The additional elements of claim 2 do not provide significantly more than the abstract idea itself, taken alone and in combination because
the relearning of at least one of the first model or the second model is executed by using data having a larger contribution to the variation in load or the decrease in inference accuracy in the target data group uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)).
Therefore, claim 2 is subject-matter ineligible.
Regarding Claim 3:
Subject Matter Eligibility Analysis Step 2A Prong 1:
Claim 3 recites
target data on which the second inference is executed and an inference result in the second inference of the target data in the target data group are set as learning data, (This limitation is a mental process as it encompasses a human mentally setting learning data.)
Therefore, claim 3 recites an abstract idea.
Subject Matter Eligibility Analysis Step 2A Prong 2:
Claim 3 further recites additional elements of
the relearning of the first model is executed (This element does not integrate the abstract idea into a practical application because it amounts to mere “apply it on a computer” (see MPEP 2106.05(f)).)
Therefore, claim 3 is not integrated into a practical application.
Subject Matter Eligibility Analysis Step 2B:
The additional elements of claim 3 do not provide significantly more than the abstract idea itself, taken alone and in combination because
the relearning of the first model is executed uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)).
Therefore, claim 3 is subject-matter ineligible.
Regarding Claim 4:
Subject Matter Eligibility Analysis Step 2A Prong 1:
Claim 4 recites
target data on which the second inference is executed and a corrected inference result obtained by correcting an inference result in the second inference of the target data in the target data group are set as learning data, (This limitation is a mental process as it encompasses a human mentally setting learning data.)
Therefore, claim 4 recites an abstract idea.
Subject Matter Eligibility Analysis Step 2A Prong 2:
Claim 4 further recites additional elements of
the relearning of the second model is executed (This element does not integrate the abstract idea into a practical application because it amounts to mere “apply it on a computer” (see MPEP 2106.05(f)).)
Therefore, claim 4 is not integrated into a practical application.
Subject Matter Eligibility Analysis Step 2B:
The additional elements of claim 4 do not provide significantly more than the abstract idea itself, taken alone and in combination because
the relearning of the second model is executed uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)).
Therefore, claim 4 is subject-matter ineligible.
Regarding Claim 5:
Subject Matter Eligibility Analysis Step 1:
Claim 5 recites a system and is thus a machine, one of the four statutory categories of patentable subject matter.
Subject Matter Eligibility Analysis Step 2A Prong 1:
Claim 5 recites
determine whether or not a tendency of a target data group on which inference is performed is changed in at least one of the edge device or the server device on a basis of a variation in load or a decrease in inference accuracy in at least one of the edge device or the server device; (This limitation is a mental process as it encompasses a human mentally determining whether or not a group is changed.)
Therefore, claim 5 recites an abstract idea.
Subject Matter Eligibility Analysis Step 2A Prong 2:
Claim 5 further recites additional elements of
A processing system that performs first inference in an edge device and performs second inference in a server device, the processing system comprising: processing circuitry configured to: (This element does not integrate the abstract idea into a practical application because it recites generic computing components on which to perform the abstract idea (see MPEP 2106.05(f)).)
execute relearning of at least one of a first model that performs the first inference or a second model that performs the second inference in a case where it is determined in the determination process that the tendency of the target data group is changed. (This element does not integrate the abstract idea into a practical application because it amounts to mere “apply it on a computer” (see MPEP 2106.05(f)).)
Therefore, claim 5 is not integrated into a practical application.
Subject Matter Eligibility Analysis Step 2B:
The additional elements of claim 5 do not provide significantly more than the abstract idea itself, taken alone and in combination because
A processing system that performs first inference in an edge device and performs second inference in a server device, the processing system comprising: processing circuitry configured to uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)).
execute relearning of at least one of a first model that performs the first inference or a second model that performs the second inference in a case where it is determined in the determination process that the tendency of the target data group is changed uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)).
Therefore, claim 5 is subject-matter ineligible.
Regarding Claim 6:
Subject Matter Eligibility Analysis Step 1:
Claim 6 recites a non-transitory computer-readable recording medium and is thus an article of manufacture, one of the four statutory categories of patentable subject matter.
Subject Matter Eligibility Analysis Step 2A Prong 1:
Claim 6 recites
determining whether or not a tendency of a target data group on which inference is performed is changed in at least one of the edge device or the server device on a basis of a variation in load or a decrease in inference accuracy in at least one of the edge device or the server device; (This limitation is a mental process as it encompasses a human mentally determining whether or not a group is changed.)
Therefore, claim 6 recites an abstract idea.
Subject Matter Eligibility Analysis Step 2A Prong 2:
Claim 6 further recites additional elements of
A non-transitory computer-readable recording medium storing therein a processing program that causes a computer to execute a process comprising: (This element does not integrate the abstract idea into a practical application because it recites generic computing components on which to perform the abstract idea (see MPEP 2106.05(f)).)
executing relearning of at least one of a first model that performs the first inference or a second model that performs the second inference in a case where it is determined in the determination process that the tendency of the target data group is changed. (This element does not integrate the abstract idea into a practical application because it amounts to mere “apply it on a computer” (see MPEP 2106.05(f)).)
Therefore, claim 6 is not integrated into a practical application.
Subject Matter Eligibility Analysis Step 2B:
The additional elements of claim 6 do not provide significantly more than the abstract idea itself, taken alone and in combination because
A non-transitory computer-readable recording medium storing therein a processing program that causes a computer to execute a process comprising uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)).
executing relearning of at least one of a first model that performs the first inference or a second model that performs the second inference in a case where it is determined in the determination process that the tendency of the target data group is changed uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)).
Therefore, claim 6 is subject-matter ineligible.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-6 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Xiong et al. (US 2019/0079898 A1) (hereafter referred to as Xiong).
Regarding claim 1, Xiong teaches
A processing method executed by a processing system that performs first inference in an edge device and performs second inference in a server device, the processing method comprising (Xiong, page 13, paragraph 0034, “In accordance with one aspect of the configuration disclosed in FIG. 5, edge device 3 and/or fog node 2 may run inferencing locally, thus distributing computation to the lower level. By running inferencing locally, network bandwidth may be conserved and latency of the system may be reduced. Alternatively, edge device 3 and/or fog node 2 may request that cloud server 4 provide an inference if edge device 3 and/or fog node 2 is not confident in the local inference or otherwise questions the accuracy of the local inference.” Examiner notes that the first inference is the inference running locally and the second inference is the cloud server providing an inference.):
determining whether or not a tendency of a target data group on which inference is performed is changed in at least one of the edge device or the server device on a basis of a variation in load or a decrease in inference accuracy in at least one of the edge device or the server device (Xiong, page 14, paragraph 0041, “At decision 35 the lower level devices may consider whether this suggested media content is acceptable by monitoring data distribution, monitoring the confidence level of inferences, and/or testing with unused historical data. If deemed acceptable, at step 36 the suggested media content may be shared with the user using the user device. After sharing the suggested media content with the user at step 36, or if the suggested media content is determined to not be acceptable at decision 35, the lower level devices may collect any useful data regarding the correct action taken, or the unacceptable suggested media content, and send this data to the cloud. At step 38, the cloud service may retrain the model based on the new data received and the process may start over at step 33.” Examiner notes that monitoring the confidence level of inferences and deeming it unacceptable is a basis of a decrease in inference accuracy. Examiner notes that the lower level devices are the edge device, the cloud is the server device, and the suggested media content is the target data group.);
executing relearning of at least one of a first model that performs the first inference or a second model that performs the second inference in a case where it is determined that the tendency of the target data group is changed (Xiong, page 14, paragraph 0041, “At decision 35 the lower level devices may consider whether this suggested media content is acceptable by monitoring data distribution, monitoring the confidence level of inferences, and/or testing with unused historical data. If deemed acceptable, at step 36 the suggested media content may be shared with the user using the user device. After sharing the suggested media content with the user at step 36, or if the suggested media content is determined to not be acceptable at decision 35, the lower level devices may collect any useful data regarding the correct action taken, or the unacceptable suggested media content, and send this data to the cloud. At step 38, the cloud service may retrain the model based on the new data received and the process may start over at step 33.” Examiner notes that monitoring the confidence level of inferences and deeming it unacceptable is a change in the tendency of the target data group. Examiner notes that the lower level devices are the edge device, the cloud is the server device, and the suggested media content is the target data group. Examiner further notes that retraining the model is relearning the first model.).
Regarding claim 2, Xiong teaches
The processing method according to claim 1, wherein the relearning of at least one of the first model or the second model is executed by using data having a larger contribution to the variation in load or the decrease in inference accuracy in the target data group (Xiong, page 14, paragraph 0041, “At decision 35 the lower level devices may consider whether this suggested media content is acceptable by monitoring data distribution, monitoring the confidence level of inferences, and/or testing with unused historical data. If deemed acceptable, at step 36 the suggested media content may be shared with the user using the user device. After sharing the suggested media content with the user at step 36, or if the suggested media content is determined to not be acceptable at decision 35, the lower level devices may collect any useful data regarding the correct action taken, or the unacceptable suggested media content, and send this data to the cloud. At step 38, the cloud service may retrain the model based on the new data received and the process may start over at step 33.” Examiner notes that monitoring the confidence level of inferences and deeming it unacceptable is a basis of a decrease in inference accuracy. Examiner notes that the lower level devices are the edge device, the cloud is the server device, and new data received is the data having a larger contribution to the decrease in inference accuracy. Examiner additionally notes that the model is the first model, and the retrained model is the second model.).
Regarding claim 3, Xiong teaches
The processing method according to claim 1, wherein target data on which the second inference is executed and an inference result in the second inference of the target data in the target data group are set as learning data, and the relearning of the first model is executed (Xiong, page 13, paragraphs 0033-0034, “Also, data may be sent from edge device 3 to fog node 2 and from fog node 2 to cloud server 4. Data received from fog node 2 may be used by cloud server 4 for learning purposes. Specifically, at cloud server 4 computers may be trained and retrained using the data received from fog node 2. Learning algorithms may run over the data ultimately resulting in new or updated models 27 that may be shared with fog node 2 and edge device 3 and may be used for inferencing. [0034] In accordance with one aspect of the configuration disclosed in FIG. 5, edge device 3 and/or fog node 2 may run inferencing locally, thus distributing computation to the lower level. By running inferencing locally, network bandwidth may be conserved and latency of the system may be reduced. Alternatively, edge device 3 and/or fog node 2 may request that cloud server 4 provide an inference if edge device 3 and/or fog node 2 is not confident in the local inference or otherwise questions the accuracy of the local inference.” Examiner notes that the first inference is the inference running locally and the second inference is the cloud server providing an inference. Examiner further notes that the first model is the new model and the retrained model is the second model. Examiner notes that the cloud server has the inference result and the target data. Examiner further notes that the inference and the target data from the cloud service are sent to the edge device in order for the new model to be retrained.).
Regarding claim 4, Xiong teaches
The processing method according to claim 1, wherein target data on which the second inference is executed and a corrected inference result obtained by correcting an inference result in the second inference of the target data in the target data group are set as learning data, and the relearning of the second model is executed (Xiong, page 13, paragraphs 0033-0034, “Also, data may be sent from edge device 3 to fog node 2 and from fog node 2 to cloud server 4. Data received from fog node 2 may be used by cloud server 4 for learning purposes. Specifically, at cloud server 4 computers may be trained and retrained using the data received from fog node 2. Learning algorithms may run over the data ultimately resulting in new or updated models 27 that may be shared with fog node 2 and edge device 3 and may be used for inferencing. [0034] In accordance with one aspect of the configuration disclosed in FIG. 5, edge device 3 and/or fog node 2 may run inferencing locally, thus distributing computation to the lower level. By running inferencing locally, network bandwidth may be conserved and latency of the system may be reduced. Alternatively, edge device 3 and/or fog node 2 may request that cloud server 4 provide an inference if edge device 3 and/or fog node 2 is not confident in the local inference or otherwise questions the accuracy of the local inference.” Examiner notes that the first inference is the inference running locally and the second inference is the cloud server providing an inference. Examiner further notes that the first model is the new model and the retrained model is the second model. Examiner notes that the cloud server obtains the local inference, or inference result, and the target data from the edge device. Examiner further notes that the cloud server creates a corrected inference and proceeds to use the corrected inference and the target data to restart the cycle of training and retraining the edge device and cloud server.).
Regarding claim 5, Xiong teaches
A processing system that performs first inference in an edge device and performs second inference in a server device, the processing system comprising: processing circuitry configured to (Xiong, page 13, paragraph 0034, “In accordance with one aspect of the configuration disclosed in FIG. 5, edge device 3 and/or fog node 2 may run inferencing locally, thus distributing computation to the lower level. By running inferencing locally, network bandwidth may be conserved and latency of the system may be reduced. Alternatively, edge device 3 and/or fog node 2 may request that cloud server 4 provide an inference if edge device 3 and/or fog node 2 is not confident in the local inference or otherwise questions the accuracy of the local inference” where “Software 15 may be non-transitory computer readable medium run on processor 8” (Xiong, page 12, paragraph 0022). Examiner notes that the first inference is the inference running locally and the second inference is the cloud server providing an inference.):
determine whether or not a tendency of a target data group on which inference is performed is changed in at least one of the edge device or the server device on a basis of a variation in load or a decrease in inference accuracy in at least one of the edge device or the server device (Xiong, page 14, paragraph 0041, “At decision 35 the lower level devices may consider whether this suggested media content is acceptable by monitoring data distribution, monitoring the confidence level of inferences, and/or testing with unused historical data. If deemed acceptable, at step 36 the suggested media content may be shared with the user using the user device. After sharing the suggested media content with the user at step 36, or if the suggested media content is determined to not be acceptable at decision 35, the lower level devices may collect any useful data regarding the correct action taken, or the unacceptable suggested media content, and send this data to the cloud. At step 38, the cloud service may retrain the model based on the new data received and the process may start over at step 33.” Examiner notes that monitoring the confidence level of inferences and deeming it unacceptable is a basis of a decrease in inference accuracy. Examiner notes that the lower level devices are the edge device, the cloud is the server device, and the suggested media content is the target data group.);
execute relearning of at least one of a first model that performs the first inference or a second model that performs the second inference in a case where it is determined that the tendency of the target data group is changed (Xiong, page 14, paragraph 0041, “At decision 35 the lower level devices may consider whether this suggested media content is acceptable by monitoring data distribution, monitoring the confidence level of inferences, and/or testing with unused historical data. If deemed acceptable, at step 36 the suggested media content may be shared with the user using the user device. After sharing the suggested media content with the user at step 36, or if the suggested media content is determined to not be acceptable at decision 35, the lower level devices may collect any useful data regarding the correct action taken, or the unacceptable suggested media content, and send this data to the cloud. At step 38, the cloud service may retrain the model based on the new data received and the process may start over at step 33.” Examiner notes that monitoring the confidence level of inferences and deeming it unacceptable is a change in the tendency of the target data group. Examiner notes that the lower level devices are the edge device, the cloud is the server device, and the suggested media content is the target data group. Examiner further notes that retraining the model is relearning the first model.).
Regarding claim 6, Xiong teaches
A non-transitory computer-readable recording medium storing therein a processing program that causes a computer to execute a process comprising (Xiong, page 12, paragraph 0022, “Software 15 may be non-transitory computer readable medium run on processor 8.”):
determining whether or not a tendency of a target data group on which inference is performed is changed in at least one of an edge device or a server device on a basis of a variation in load or a decrease in inference accuracy in at least one of the edge device or the server device (Xiong, page 14, paragraph 0041, “At decision 35 the lower level devices may consider whether this suggested media content is acceptable by monitoring data distribution, monitoring the confidence level of inferences, and/or testing with unused historical data. If deemed acceptable, at step 36 the suggested media content may be shared with the user using the user device. After sharing the suggested media content with the user at step 36, or if the suggested media content is determined to not be acceptable at decision 35, the lower level devices may collect any useful data regarding the correct action taken, or the unacceptable suggested media content, and send this data to the cloud. At step 38, the cloud service may retrain the model based on the new data received and the process may start over at step 33.” Examiner notes that monitoring the confidence level of inferences and deeming it unacceptable is a basis of a decrease in inference accuracy. Examiner notes that the lower level devices are the edge device, the cloud is the server device, and the suggested media content is the target data group.);
and executing relearning of at least one of a first model that performs first inference in the edge device or a second model that performs second inference in the server device in a case where it is determined that the tendency of the target data group is changed (Xiong, page 14, paragraph 0041, “At decision 35 the lower level devices may consider whether this suggested media content is acceptable by monitoring data distribution, monitoring the confidence level of inferences, and/or testing with unused historical data. If deemed acceptable, at step 36 the suggested media content may be shared with the user using the user device. After sharing the suggested media content with the user at step 36, or if the suggested media content is determined to not be acceptable at decision 35, the lower level devices may collect any useful data regarding the correct action taken, or the unacceptable suggested media content, and send this data to the cloud. At step 38, the cloud service may retrain the model based on the new data received and the process may start over at step 33.” Examiner notes that monitoring the confidence level of inferences and deeming it unacceptable is a change in the tendency of the target data group. Examiner notes that the lower level devices are the edge device, the cloud is the server device, and the suggested media content is the target data group. Examiner further notes that retraining the model is relearning the first model.).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Dunne et al. (US 2020/0272899 A1) also describes edge and server devices that update or relearn neural networks. Khan et al. (US 2020/0027009 A1) also discusses edge and server devices that update local and global models.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KAITLYN R HAEFNER whose telephone number is (571)272-1429. The examiner can normally be reached Monday - Thursday: 7:15 am - 5:15 pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michelle Bechtold, can be reached at (571) 431-0762. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/K.R.H./Examiner, Art Unit 2148 /MICHELLE T BECHTOLD/Supervisory Patent Examiner, Art Unit 2148