Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Detailed Action
The following action is in response to the communication(s) received on 02/09/2026.
As of the claims filed 02/09/2026:
Claims 1, 10, and 19 have been amended.
Claims 1-7, 9-16, 18, and 19 are pending.
Claims 1, 10, and 19 are independent claims.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 02/09/2026 has been entered.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 07/14/2025 was filed in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Response to Arguments
Applicant’s arguments filed 02/09/2026 have been fully considered, but are not fully persuasive.
With respect to the rejection under 35 USC § 102:
Applicant asserts that Kang does not teach the amended limitation that the plurality of data sources is selected from a random subset of the multiple data sources. This argument has been considered but is moot in view of the new prior art rejection over the Kang/Nishio combination (Nishio [abstract] The FL protocol iteratively asks random clients to download a trainable model from a server...), where the FL protocol iteratively asking random clients to download the model corresponds to selecting from a random subset of the multiple data sources.
Claims 10 and 19 are rejected for at least the same reasons given above.
The dependent claims 2-5, 8-9, 11-14, and 17-18 remain rejected by virtue of their dependency on their respective parent claims.
With respect to the rejection under 35 USC § 103:
Applicant further asserts that Coyner, Kendall, and Sheller do not teach the above limitation. This argument has been considered but is unpersuasive, as the Kang/Nishio combination teaches the limitation as explained above.
The dependent claims 6-7 and 15-16 remain rejected by virtue of their dependency on their respective parent claims.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-5, 8, 9, 10-14, and 17-19 are rejected under 35 U.S.C. 103 as being unpatentable over Kang et al., “Incentive Mechanism for Reliable Federated Learning: A Joint Optimization Approach to Combining Reputation and Contract Theory” (hereinafter Kang), in view of Nishio et al., “Client Selection for Federated Learning with Heterogeneous Resources in Mobile Edge” (hereinafter Nishio).
Regarding Claim 1, Kang teaches:
A method of federated machine learning using at least one processor, (Kang [p.4 2nd col last ¶] Each task publisher broadcasts its federated learning task with specific resource requirements (e.g., data types, data sizes, and accuracy, time range, and CPU cycles) …) (Note: measuring CPU cycles requires at least one processor)
binning multiple data sources into a plurality of quality ranges based on the weighted average; selecting the plurality of data sources from the multiple data sources…; (Kang [p.5 1st col 2nd ¶] Step 3 (Select Workers for Federated Learning): After reputation calculation, the worker candidates with reputation larger than a threshold can be selected as the workers. These workers make their own optimal decisions to select a contract item given by the task publisher according to their types related to quality of local dataset and resource conditions. The quality of local dataset directly determines the quality of local model updates [4]. The details about contract designing are given in Section VI.) (Note: assigning the worker candidates (data sources) as below or above the threshold corresponds to binning the multiple data sources into a plurality of quality ranges.)
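The threshold-based worker selection quoted above from Kang's Step 3 can be sketched, purely for illustration, as follows; the function and worker names are illustrative assumptions, not Kang's notation:

```python
def bin_workers(reputations, threshold):
    """Bin worker candidates (data sources) into two quality ranges by a
    reputation threshold; candidates above the threshold are selected."""
    selected, excluded = {}, {}
    for worker, rep in reputations.items():
        # "reputation larger than a threshold" -> selected as a worker
        (selected if rep > threshold else excluded)[worker] = rep
    return selected, excluded

selected, excluded = bin_workers({"w1": 0.9, "w2": 0.4, "w3": 0.7}, threshold=0.6)
```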
receiving the plurality of training updates from the plurality of data sources, respectively, each of the plurality of training updates being generated by the respective data source in response to the global machine learning model received; (Kang [p.5 1st col last ¶] Step 4 (Perform Federated Learning and Evaluate Quality of Local Model Updates): After worker selection, the federated learning tasks can be trained by different optimization algorithms, e.g., SGD. Specifically, an initial SGD model (i.e., initial parameters) is randomly chosen from predefined ranges as the shared global model. After receiving this model, the workers collaboratively train the model over their own local data and upload their local model updates to the task publisher)
and updating the current global machine learning model based on the plurality of training updates received and the plurality of data quality parameters associated with the plurality of data sources, respectively, to generate an updated global machine learning model. (Kang [p.4 1st col last ¶] For a specific mobile device, the task publisher integrates its direct reputation opinion with the latest indirect reputation opinions from other task publishers to generate a compositive reputation value for the mobile devices. The reputation value is an important metric for reliable worker selection during federated learning.
[p.5 1st col last ¶] With the above unreliable worker and attacker detection schemes, the task publisher can remove unreliable local model updates from the unreliable workers as well as malicious updates from the poisoning attacks. The task publisher integrates all the reliable local model updates into an average value and sets the average value as the new global model for the next iteration. The task publisher pushes this new model to the selected workers for the next model iteration until the latest global model satisfies a predefined convergence condition.) (Note: the reputation values of the local models correspond to the plurality of data quality parameters associated with the plurality of data sources)
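The aggregation Kang describes above, i.e. removing unreliable local updates and averaging the remainder into the new global model, can be sketched for illustration as follows; the worker identifiers and list-based model parameters are assumptions:

```python
def aggregate_reliable(updates, reliable_ids):
    """Average only the local model updates from reliable workers into the
    new global model, excluding unreliable or malicious updates."""
    kept = [updates[i] for i in reliable_ids]
    return [sum(vals) / len(kept) for vals in zip(*kept)]

# w3 simulates an unreliable worker whose update is removed before averaging.
new_global = aggregate_reliable(
    {"w1": [1.0, 2.0], "w2": [3.0, 4.0], "w3": [100.0, 100.0]},
    reliable_ids=["w1", "w2"],
)
```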
Kang does not explicitly teach, but Nishio further teaches:
the method comprising: transmitting a current global machine learning model to each of a plurality of data sources; (Nishio [abstract] The FL protocol iteratively asks random clients to download a trainable model from a server, update it with own data, and upload the updated model to the server, while asking the server to aggregate multiple client updates to further improve the model.)
…; wherein the plurality of data sources is selected from a random subset of the multiple data sources (Nishio [abstract] The FL protocol iteratively asks random clients to download a trainable model from a server, update it with own data, and upload the updated model to the server, while asking the server to aggregate multiple client updates to further improve the model.) (Note: the FL protocol iteratively asking random clients to download the model corresponds to selecting from a random subset of the multiple data sources)
wherein the current global machine learning model comprises a weighted average of a plurality of training updates… (Nishio [p.2 right, Protocol 1] [image: Protocol 1 of Nishio reproduced])
Nishio and Kang are analogous to the present invention because both are from the same field of endeavor of federated learning on mobile devices. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the random client selection for sending the global model from Nishio into Kang’s federated learning method. The motivation would be that, as Nishio states of its protocol, the “new FL protocol, which we refer to as FedCS, mitigates this problem and performs FL efficiently while actively managing clients based on their resource conditions” (Nishio, abstract).
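Nishio's random client selection, as relied on above, can be sketched for illustration as follows; the sampling helper is an assumption, not Nishio's implementation:

```python
import random

def select_random_clients(all_clients, k, seed=None):
    """Select a random subset of clients to receive the current global
    model, as in an FL protocol that iteratively asks random clients."""
    return random.Random(seed).sample(all_clients, k)

round_clients = select_random_clients(["c1", "c2", "c3", "c4", "c5"], k=2, seed=0)
```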
Kang, via Kang/Nishio, further teaches:
based on a plurality of data quality parameters associated with the plurality of data sources; (Kang [p.5 1st col last ¶] The unreliable workers can be detected as they repeatedly upload similar-looking gradients as local model updates in each iteration [19]. With the above unreliable worker and attacker detection schemes, the task publisher can remove unreliable local model updates from the unreliable workers as well as malicious updates from the poisoning attacks. The task publisher integrates all the reliable local model updates into an average value and sets the average value as the new global model for the next iteration. The task publisher pushes this new model to the selected workers for the next model iteration until the latest global model satisfies a predefined convergence condition. Then the workers obtain rewards from the task publisher according to the preset rewards in the contact items based on resource contribution and model training behaviors [4], [5]. In every iteration, the interaction either with unreliable workers or with poisoning attackers is treated as a negative interaction and recorded by the task publisher. Finally, the task publisher generates the direct reputation opinions for all the workers in the federated learning task according to past interactions (steps 2 and 3 in Fig. 1).
[image: Fig. 1 of Kang reproduced]) (Note: the reputation opinions correspond to the reputation values and thus to the data quality parameters.)
Regarding Claim 2, the claim incorporates the limitations and rejection of Claim 1, which Kang/Nishio teaches as set forth above. Kang, via Kang/Nishio, further teaches:
The method according to claim 1, wherein each of the plurality of training updates is generated by the respective data source based on the global machine learning model received and labelled data stored by the respective data source. (Kang [p.5 1st col last ¶] Step 4 (Perform Federated Learning and Evaluate Quality of Local Model Updates): After worker selection, the federated learning tasks can be trained by different optimization algorithms, e.g., SGD. Specifically, an initial SGD model (i.e., initial parameters) is randomly chosen from predefined ranges as the shared global model. After receiving this model, the workers collaboratively train the model over their own local data and upload their local model updates to the task publisher) (Note: training the shared global model with their own local data corresponds to generating training updates based on the global machine model received and labelled data stored by the respective data source)
Regarding Claim 3, the claim incorporates the limitations and rejection of Claim 2, which Kang/Nishio teaches as set forth above. Kang, via Kang/Nishio, further teaches:
The method according to claim 2, wherein each of the plurality of training updates comprises a difference between the current global machine learning model and a local machine learning model trained by the respective data source based on the current global machine learning model and labelled data stored by the respective data source. (Kang [p.3 2nd col 2nd ¶]
[image: equation (2) of Kang reproduced]) (Note: the local model update in equation (2) comprises the difference between the global model and the average gradient step Λ of the local model.)
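The mapping above, reading a training update as the difference between a locally trained model and the current global model, can be illustrated as follows; the names and flat-list model representation are assumptions:

```python
def local_update(global_model, local_model):
    """Express a training update as the elementwise difference between a
    locally trained model and the current global model."""
    return [local - g for g, local in zip(global_model, local_model)]

delta = local_update([0.5, 1.0], [1.5, 0.0])
```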
Regarding Claim 4, the claim incorporates the limitations and rejection of Claim 1, which Kang/Nishio teaches as set forth above. Kang, via Kang/Nishio, further teaches:
The method according to claim 1, wherein said updating the current global machine learning model comprises determining an updated weighted average of the plurality of training updates based on the plurality of data quality parameters associated with the plurality of data sources, respectively. (Kang [p.5 1st col last ¶] The unreliable workers can be detected as they repeatedly upload similar-looking gradients as local model updates in each iteration [19]. With the above unreliable worker and attacker detection schemes, the task publisher can remove unreliable local model updates from the unreliable workers as well as malicious updates from the poisoning attacks. The task publisher integrates all the reliable local model updates into an average value and sets the average value as the new global model for the next iteration. The task publisher pushes this new model to the selected workers for the next model iteration until the latest global model satisfies a predefined convergence condition. Then the workers obtain rewards from the task publisher according to the preset rewards in the contact items based on resource contribution and model training behaviors [4], [5]. In every iteration, the interaction either with unreliable workers or with poisoning attackers is treated as a negative interaction and recorded by the task publisher. Finally, the task publisher generates the direct reputation opinions for all the workers in the federated learning task according to past interactions (steps 2 and 3 in Fig. 1).
[image: Fig. 1 of Kang reproduced]) (Note: the reputation opinions correspond to the reputation values and thus to the data quality parameters; detecting unreliable workers every iteration corresponds to updating the weighted average.)
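The quality-weighted averaging read onto this limitation can be sketched for illustration as follows; treating reputation values directly as averaging weights is an assumption about one possible implementation, not Kang's exact scheme:

```python
def weighted_aggregate(updates, quality):
    """Combine training updates into a weighted average, weighting each
    data source by its data quality parameter (e.g., a reputation value)."""
    total = sum(quality[src] for src in updates)
    dims = len(next(iter(updates.values())))
    return [
        sum(quality[src] * upd[d] for src, upd in updates.items()) / total
        for d in range(dims)
    ]

avg = weighted_aggregate({"w1": [1.0, 0.0], "w2": [3.0, 2.0]}, {"w1": 1.0, "w2": 3.0})
```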
Regarding Claim 5, the claim incorporates the limitations and rejection of Claim 2, which Kang/Nishio teaches as set forth above. Kang, via Kang/Nishio, further teaches:
The method according to claim 2, wherein the labelled data stored by the respective data source comprises features and labels, and the data quality parameter associated with the respective data source comprises at least one of a feature quality parameter associated with the features and a label quality parameter associated with the labels. (Kang [p.7 2nd col 1st ¶] We consider a federated learning task as a monopoly market with a monopolist operator (a task publisher) and a set of mobile devices N={1,…,N} . Each worker n∈N with a local training dataset uses a size sn of its local data samples to participate in the federated learning task. There is an input–output pair in each data sample, in which the input is a sample vector with various data features and the output is the label value for the input generated through mobile apps [3]. The contributed computation resources for local model training, i.e., CPU cycle frequency, from the worker n is denoted as fn . The number of CPU cycles for a worker n to perform one sample of data1 in local model training is denoted by cn. Hence, for worker n, the computation time of a local iteration in local model training is [cnsn/fn]. According to [3], the CPU energy consumption of the worker for one local iteration is expressed as follows:
[image: CPU energy consumption equation of Kang reproduced]
where ζ is the effective capacitance parameter of computing chipset for worker n [26].)
Regarding Claim 9, the claim incorporates the limitations and rejection of Claim 1, which Kang/Nishio teaches as set forth above. Kang, via Kang/Nishio, further teaches:
The method according to claim 1, wherein the plurality of data quality parameters are a plurality of data quality indices. (Kang [p.4 1st col last ¶] For a specific mobile device, the task publisher integrates its direct reputation opinion with the latest indirect reputation opinions from other task publishers to generate a compositive reputation value for the mobile devices. The reputation value is an important metric for reliable worker selection during federated learning.) (Note: the reputation values as metrics for the respective workers corresponds to a plurality of data quality indices)
Independent Claim 10 recites “A server for federated machine learning comprising: a memory; and at least one processor communicatively coupled to the memory and configured to” perform precisely the method of Claim 1 (Kang [p.4 2nd col last ¶] Each task publisher broadcasts its federated learning task with specific resource requirements (e.g., data types, data sizes, and accuracy, time range, and CPU cycles) …) (Note: measuring CPU cycles requires at least one processor). Thus, Claim 10 is rejected for the reasons set forth for Claim 1.
Claims 11-14 and 18, dependent on Claim 10, also recite the system configured to perform precisely the methods of Claims 2-5 and 9, respectively, and thus are rejected for the reasons set forth for those claims.
Independent Claim 19 recites “A computer program product, embodied in one or more non-transitory computer-readable storage mediums, comprising instructions executable by at least one processor to perform a method of federated machine learning, the method comprising” precisely the steps of Claim 1 (Kang [p.4 2nd col last ¶] Each task publisher broadcasts its federated learning task with specific resource requirements (e.g., data types, data sizes, and accuracy, time range, and CPU cycles) …) (Note: measuring CPU cycles requires a computer program product). Thus, Claim 19 is rejected for the reasons set forth for Claim 1.
Claims 6 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Kang further in view of Coyner et al., “Deep Learning for Image Quality Assessment of Fundus Images in Retinopathy of Prematurity” (hereinafter Coyner), further in view of Kendall et al., “What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?” (hereinafter Kendall).
Regarding Claim 6, the claim incorporates the limitations and rejection of Claim 5, which Kang/Nishio teaches as set forth above. Kang, via Kang/Nishio, further teaches:
The method according to claim 5, wherein one or more of the plurality of data quality parameters are each based on at least one of a first data quality factor, … wherein the first data quality factor relates to a quality of the corresponding data source (Kang [Abstract] To address the above challenges, in this article, we first introduce reputation as the metric to measure the reliability and trustworthiness of the mobile devices.)
Kang does not teach, but Coyner further teaches:
a second data quality factor,… the second data quality factor relates to a quality of labelled data stored by the corresponding data source (Coyner [p.1226 last ¶] Using the CNN, test set predictions for each image were determined. Briefly, batches of images were fed through the CNN, and a score between 0 and 1 was determined for each image. A score less than 0.5 placed the image into the “possibly acceptable quality” category, and a score greater than or equal to 0.5 placed the image into the “acceptable quality” category. The overall accuracy of the CNN was evaluated, as was the area under the receiver operating characteristics curve (AUROC), and the area under the precision-recall curve (AUPR)
[1230 4th ¶] If the CNN performs well in those scenarios, the applications of this algorithm could range from a simple prescreening method for other programs to full implementation in a retinal fundus camera. For instance, an imaging technician could capture an image of a retina and instantly be alerted as to whether or not the image was of acceptable quality for diagnosis of disease) (Note: the quality assessment score corresponds to the second data quality factor)
Coyner and Kang are analogous to the present invention because both are from the same field of endeavor of quality factors for machine learning models. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the acceptability measurement method from Coyner into Kang’s federated learning method. The motivation would be to provide a “useful prescreening method for telemedicine and computer-based image analysis applications” (Coyner [Abstract]).
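Coyner's score-to-category thresholding, as quoted above, can be sketched for illustration as follows; the function name is an assumption:

```python
def quality_category(score):
    """Map a CNN image-quality score in [0, 1] to Coyner's two categories:
    a score >= 0.5 is "acceptable quality", otherwise "possibly acceptable
    quality"."""
    return "acceptable quality" if score >= 0.5 else "possibly acceptable quality"

category = quality_category(0.73)
```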
Kang/Coyner does not teach, but Kendall further teaches:
and a third data quality factor, …and the third data quality factor relates to a statistical derivation of data uncertainty. (Kendall [Abstract] For this we present a Bayesian deep learning framework combining input-dependent aleatoric uncertainty together with epistemic uncertainty. We study models under the framework with per-pixel semantic segmentation and depth regression tasks. Further, our explicit uncertainty formulation leads to new loss functions for these tasks, which can be interpreted as learned attenuation)
Kendall and Kang/Coyner are analogous to the present invention because both are from the same field of endeavor of loss functions in machine learning. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the aleatoric uncertainty loss function from Kendall with Kang/Coyner’s federated learning method. The motivation would be that “this makes the loss more robust to noisy data, also giving new state-of-the-art results on segmentation and depth regression benchmarks” (Kendall [Abstract]).
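Kendall's learned-attenuation loss, as relied on above, can be sketched for illustration in a scalar regression form; the per-sample form and constants follow the common heteroscedastic-loss rendering, which is an assumption here rather than a quotation of Kendall:

```python
import math

def attenuated_loss(y_true, y_pred, log_var):
    """Scalar regression loss with learned aleatoric-uncertainty attenuation:
    the squared residual is scaled by a predicted precision exp(-log_var),
    plus a log-variance penalty so the model cannot simply predict infinite
    uncertainty everywhere."""
    return 0.5 * math.exp(-log_var) * (y_true - y_pred) ** 2 + 0.5 * log_var

confident = attenuated_loss(1.0, 0.0, log_var=0.0)  # reduces to half squared error
uncertain = attenuated_loss(1.0, 0.0, log_var=2.0)  # residual term attenuated
```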
Claim 15, dependent on Claim 10, also recites the system configured to perform precisely the method of Claim 6, and thus is rejected for the reasons set forth for that claim.
Claims 7 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Kang/Coyner/Kendall further in view of Sheller et al., “NIMG-68. FEDERATED LEARNING IN NEURO-ONCOLOGY FOR MULTI-INSTITUTIONAL COLLABORATIONS WITHOUT SHARING PATIENT DATA” (hereinafter Sheller).
Regarding Claim 7, the claim incorporates the limitations and rejection of Claim 6, which Kang/Coyner/Kendall teaches as set forth above. Kang, via Kang/Nishio, further teaches:
The method according to claim 6, wherein the first data quality factor is based on at least one of a reputation level associated with the data source, a competence level of one or more data annotators of the labelled data stored by the corresponding data source, and a method value associated with a type of annotation method used to produce the labelled data stored by the corresponding data source, (Kang [Abstract] To address the above challenges, in this article, we first introduce reputation as the metric to measure the reliability and trustworthiness of the mobile devices.)
Kang does not teach, but Coyner further teaches:
and the second data quality factor is based on at least one of image acquisition characteristics and a level of image artifacts in the images. (Coyner [Abstract] Accurate image-based medical diagnosis relies upon adequate image quality and clarity …
[image from Coyner reproduced]
[p.1226 last ¶] Using the CNN, test set predictions for each image were determined. Briefly, batches of images were fed through the CNN, and a score between 0 and 1 was determined for each image. A score less than 0.5 placed the image into the “possibly acceptable quality” category, and a score greater than or equal to 0.5 placed the image into the “acceptable quality” category. The overall accuracy of the CNN was evaluated, as was the area under the receiver operating characteristics curve (AUROC), and the area under the precision-recall curve (AUPR)) (Note: the score of the image quality, which relates to the amount of clarity of the image, corresponds to the level of image artifacts in the image)
Kang/Coyner/Kendall does not teach, but Sheller further teaches:
and wherein the features of the labelled data are related to images, (Sheller [p.2 1st col 1st ¶] In this study we evaluate the hypothesis that federated learning can provide a method to overcome these concerns and facilitate a shift in the paradigm of multi-institutional collaborations without sharing patient data. We attempt to investigate this hypothesis in a feasibility study of automatically delineating the glioblastoma extent in T2-FLAIR scans. METHODS: We identified a retrospective cohort of 165 glioblastoma patients with available clinically acquired pre-operative multi-parametric structural MRI (mpMRI) scans (i.e., T1, T1Gd, T2, T2-FLAIR), with corresponding expert tumor boundary annotations, from 10 independent institutions. We implemented a 3D deep learning algorithm (3D-UNet) to predict the boundaries of the whole tumor extent, by virtue of the abnormal hyper-intense signal of T2-FLAIR scans.) (Note: the T2-FLAIR scans correspond to the features of labelled data related to images)
Sheller and Kang/Coyner/Kendall are analogous to the present invention because both are from the same field of endeavor of federated learning. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the T2-FLAIR scans from Sheller into Kang/Coyner/Kendall’s federated learning method. The motivation would be to “provide a method to overcome these concerns and facilitate a shift in the paradigm of multi-institutional collaborations without sharing patient data” (Sheller [p.2 1st col 1st ¶]).
Regarding Claim 16, the claim incorporates the limitations and rejection of Claim 15, which Kang/Coyner/Kendall teaches as set forth above. Kang, via Kang/Nishio, further teaches:
The server according to claim 15, wherein the first data quality factor is based on at least one of a reputation level associated with the data source, a competence level of one or more data annotators of the labelled data stored by the corresponding data source, and a method value associated with a type of annotation method used to produce the labelled data stored by the corresponding data source, (Kang [Abstract] To address the above challenges, in this article, we first introduce reputation as the metric to measure the reliability and trustworthiness of the mobile devices.)
Kang does not teach, but Coyner further teaches:
and the second data quality factor is based on at least one of image acquisition characteristics and a level of image artifacts in the images. (Coyner [Abstract] Accurate image-based medical diagnosis relies upon adequate image quality and clarity …
[image from Coyner reproduced]
[p.1226 last ¶] Using the CNN, test set predictions for each image were determined. Briefly, batches of images were fed through the CNN, and a score between 0 and 1 was determined for each image. A score less than 0.5 placed the image into the “possibly acceptable quality” category, and a score greater than or equal to 0.5 placed the image into the “acceptable quality” category. The overall accuracy of the CNN was evaluated, as was the area under the receiver operating characteristics curve (AUROC), and the area under the precision-recall curve (AUPR)) (Note: the score of the image quality, which relates to the amount of clarity of the image, corresponds to the level of image artifacts in the image)
Kang/Coyner/Kendall does not teach, but Sheller further teaches:
and wherein the features of the labelled data are related to images, (Sheller [p.2 1st col 1st ¶] In this study we evaluate the hypothesis that federated learning can provide a method to overcome these concerns and facilitate a shift in the paradigm of multi-institutional collaborations without sharing patient data. We attempt to investigate this hypothesis in a feasibility study of automatically delineating the glioblastoma extent in T2-FLAIR scans. METHODS: We identified a retrospective cohort of 165 glioblastoma patients with available clinically acquired pre-operative multi-parametric structural MRI (mpMRI) scans (i.e., T1, T1Gd, T2, T2-FLAIR), with corresponding expert tumor boundary annotations, from 10 independent institutions. We implemented a 3D deep learning algorithm (3D-UNet) to predict the boundaries of the whole tumor extent, by virtue of the abnormal hyper-intense signal of T2-FLAIR scans.) (Note: the T2-FLAIR scans correspond to the features of labelled data related to images)
Sheller and Kang/Coyner/Kendall are analogous to the present invention because both are from the same field of endeavor of federated learning. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the T2-FLAIR scans from Sheller into Kang/Coyner/Kendall’s federated learning method. The motivation would be to “provide a method to overcome these concerns and facilitate a shift in the paradigm of multi-institutional collaborations without sharing patient data” (Sheller [p.2 1st col 1st ¶]).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSEP HAN whose telephone number is (703)756-1346. The examiner can normally be reached Mon-Fri 9am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kakali Chaki can be reached on (571) 272-3719. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.H./Examiner, Art Unit 2122
/KAKALI CHAKI/Supervisory Patent Examiner, Art Unit 2122