DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
This Office Action is in response to the communication filed on 18 Nov 2025.
Claims 1-3, 6-8, 10-14, 17, 23, 25-32, 35-36, and 41-44 are being considered on the merits.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 23 Jan 2026 has been considered. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, initialed and dated copies of Applicant's IDS form 1449 are attached to the instant Office action.
Claim Objections
Claim 1 is objected to because of the following informalities:
The third line of the first limitation includes what appears to be an extraneous “and”: “wherein, and each node is a local computational node…”
Appropriate correction is required.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 30-31 and 35-36 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Ongati, Felix & Dr. Eng. Lawrence Muchemi (“Big Data Intelligence Using Distributed Deep Neural Networks”, arXiv:1909.02873 [cs.DC]; hereinafter, “Ongati”).
Claim 30, Ongati teaches claim 35 as set forth below. Ongati further teaches:
The cloud based computation system as claimed in claim 35, further comprising: the plurality of local computational nodes, each local computational node comprising one or more processors, one or more memories, one or more network interfaces, and one or more storage devices which store the local node dataset. (Ongati, sec. III: “A study of other distributed machine learning open source and commercial projects was also conducted, including a study of federated learning approach widely used on training centralized models in mobile devices such as smartphones and tablets.”)
Claim 31, Ongati teaches claim 30 above. Ongati further teaches:
The system as claimed in claim 30, wherein one or more of the plurality of local computational nodes are cloud based computational nodes. (Ongati, sec. II: “In this approach, there is a shared model that resides in the cloud. A mobile device gets the current up-to-date shared model from the cloud”)
Claim 35, Ongati teaches:
A cloud based computation system for training an Artificial Intelligence (AI) model on a distributed dataset comprising: (Ongati, sec. II: “In this approach, there is a shared model that resides in the cloud. A mobile device gets the current up-to-date shared model from the cloud”)
at least one cloud based central node (Ongati, sec. III: “A study of other distributed machine learning open source and commercial projects was also conducted, including a study of federated learning approach widely used on training centralized models in mobile devices such as smartphones and tablets.”)
comprising one or more processors, one or more memories, (Prakash, para. 0076: “In this regard, the network 150 comprises one or more network elements that may include one or more processors, communications systems (e.g., including network interface controllers, one or more transmitters/receivers connected to one or more antennas, etc.), and computer readable media”) one or more network interfaces, (Prakash, para 0073: “The CN 120 is shown to be communicatively coupled to an application server 130 and a network 150 via an IP communications interface 125.”) and one or more storage devices, (Prakash, para. 0072: “The components of the CN 120 may be implemented in one physical node or separate physical nodes including components to read and execute instructions from a machine-readable or computer-readable medium (e.g., a non-transitory machine-readable storage medium)”)
wherein the at least one cloud based central node (Ongati, sec. III: “A study of other distributed machine learning open source and commercial projects was also conducted, including a study of federated learning approach widely used on training centralized models in mobile devices such as smartphones and tablets.”) is in communication with a plurality of local computational nodes (Ongati, sec. II and Fig. 3: “Federated learning is an approach to training models from user interaction with mobile devices; models are trained on the devices. Federating learning decouples machine learning from data storage by enabling learning a shared prediction model with the training data on the device without sharing the data (McMahan and Ramage, 2017).” Examiner notes Figure 3 illustrates multiple nodes that are prevented from accessing and sharing with other nodes).
where each local computational nodes stores a local node dataset (Ongati, sec. II: “A mobile device gets the current up-to-date shared model from the cloud, runs it on the local data on the phone and stores the improvements as a small focused update. These changes to the model are then sent to the cloud, averaged with updates from other devices and applied to the central main model in the cloud”)
wherein access to the local node dataset is limited to the respective computational node, and (Ongati, sec. II and Fig. 3: “Federated learning is an approach to training models from user interaction with mobile devices; models are trained on the devices. Federating learning decouples machine learning from data storage by enabling learning a shared prediction model with the training data on the device without sharing the data (McMahan and Ramage, 2017).” Examiner notes Figure 3 illustrates multiple nodes that are prevented from accessing and sharing with other nodes).
the least one cloud based central node are configured to train an Artificial Intelligence (AI) model on a distributed dataset formed of the local node datasets by receiving, by the cloud based central node, (Ongati, sec. III: “A study of other distributed machine learning open source and commercial projects was also conducted, including a study of federated learning approach widely used on training centralized models in mobile devices such as smartphones and tablets.”) a plurality of trained Teacher models from the plurality of local computational nodes (Ongati, abstract and fig. 1: “This paper proposes an improved algorithm to securely train deep neural networks over several data sources in a distributed way” “A mobile device (A) localizes the model in the context of a user’s interaction with the device”).
wherein each Teacher model is a deep neural network model which is locally trained at a local computational node on the respective local node dataset, and receiving each Teacher model comprises receiving a set of weights representing the Teacher model; and (Ongati, abstract and fig. 1: “This paper proposes an improved algorithm to securely train deep neural networks over several data sources in a distributed way” “A mobile device (A) localizes the model in the context of a user’s interaction with the device”).
training a Student model using the plurality of trained Teacher models and a transfer dataset using knowledge distillation. (Ongati, abstract: “Only a representation of the trained models (network architecture and weights) are shared.”)
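For illustration of the knowledge distillation step recited in the final limitation, the following is a minimal sketch (in Python/PyTorch) of training a Student model against soft targets produced by a plurality of trained Teacher models over a transfer dataset. This sketch is provided by way of example only and is not drawn from Ongati or from Applicant's disclosure; the model objects, temperature, optimizer, and data loader are assumptions of the example.

import torch
import torch.nn.functional as F

def distill_student(student, teachers, transfer_loader, epochs=5, temperature=2.0, lr=1e-3):
    # Illustrative only: the Teachers are assumed to be already-trained models received
    # from the local nodes; the Student is trained on the transfer dataset alone.
    optimizer = torch.optim.Adam(student.parameters(), lr=lr)
    for teacher in teachers:
        teacher.eval()
    for _ in range(epochs):
        for inputs, _ in transfer_loader:  # labels on the transfer set are not required
            with torch.no_grad():
                # Average the Teachers' softened class probabilities to form the soft target.
                soft_target = torch.stack(
                    [F.softmax(t(inputs) / temperature, dim=1) for t in teachers]
                ).mean(dim=0)
            student_log_probs = F.log_softmax(student(inputs) / temperature, dim=1)
            # KL divergence between the Student's and the averaged Teachers' distributions.
            loss = F.kl_div(student_log_probs, soft_target, reduction="batchmean") * temperature ** 2
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return student

In this sketch only the Teachers' outputs (soft targets) are consulted during distillation; the local node datasets themselves are never accessed by the central node.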
Claim 36, Ongati teaches claim 35 above. Ongati further teaches:
The system as claimed in claim 35 wherein each local node dataset is medical dataset comprising a plurality of medical images and/or medical related test data for performing medical assessments in relation to a patient. (Ongati, sec. I: “This project explores solutions to the above questions by modeling and simulating distributed learning to train a common model over disparate databases consisting of medical xray images.”)
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 6-7, and 28-29 are rejected under 35 U.S.C. 103 as being unpatentable over Ongati, Felix & Dr. Eng. Lawrence Muchemi (“Big Data Intelligence Using Distributed Deep Neural Networks”, arXiv:1909.02873 [cs.DC]; hereinafter, “Ongati”) in view of Prakash et al. (US 2019/0220703 A1; hereinafter, “Prakash”).
Claim 1, Ongati teaches:
A computer implemented method for training an Artificial Intelligence (AI) model in a cloud based computational system on a distributed dataset comprising a plurality of nodes and a central node, (Ongati, sec. II: “In this approach, there is a shared model that resides in the cloud. A mobile device gets the current up-to-date shared model from the cloud, runs it on the local data on the phone and stores the improvements as a small focused update. These changes to the model are then sent to the cloud, averaged with updates from other devices and applied to the central main model in the cloud.”)
wherein, and each node is a local computational node comprising one or more processors, one or more memories, (Prakash, para. 0076: “In this regard, the network 150 comprises one or more network elements that may include one or more processors, communications systems (e.g., including network interface controllers, one or more transmitters/receivers connected to one or more antennas, etc.), and computer readable media”) one or more network interfaces, (Prakash, para 0073: “The CN 120 is shown to be communicatively coupled to an application server 130 and a network 150 via an IP communications interface 125.”) and one or more storage devices storing a node dataset (Prakash, para. 0072: “The components of the CN 120 may be implemented in one physical node or separate physical nodes including components to read and execute instructions from a machine-readable or computer-readable medium (e.g., a non-transitory machine-readable storage medium)”) and the nodes are prevented from accessing other node datasets, and (Ongati, sec. II and Fig. 3: “Federated learning is an approach to training models from user interaction with mobile devices; models are trained on the devices. Federating learning decouples machine learning from data storage by enabling learning a shared prediction model with the training data on the device without sharing the data (McMahan and Ramage, 2017).” Examiner notes Figure 3 illustrates multiple nodes that are prevented from accessing and sharing with other nodes).
the central node is a cloud based central node (Ongati, sec. II: “Federated learning proposes a mechanism suitable for training centralized models in an unreliable network connection environment where sharing data would be expensive in addition to privacy concerns. This work borrows a lot from federated learning techniques, albeit on training a decentralised model.”)
comprising one or more processors, one or more memories, (Prakash, para. 0076: “In this regard, the network 150 comprises one or more network elements that may include one or more processors, communications systems (e.g., including network interface controllers, one or more transmitters/receivers connected to one or more antennas, etc.), and computer readable media”) one or more network interfaces, (Prakash, para. 0073: “The CN 120 is shown to be communicatively coupled to an application server 130 and a network 150 via an IP communications interface 125.”) and one or more storage devices, (Prakash, para. 0072: “The components of the CN 120 may be implemented in one physical node or separate physical nodes including components to read and execute instructions from a machine-readable or computer-readable medium (e.g., a non-transitory machine-readable storage medium)”)
wherein the at least one cloud based central node (Ongati, sec. III: “A study of other distributed machine learning open source and commercial projects was also conducted, including a study of federated learning approach widely used on training centralized models in mobile devices such as smartphones and tablets.”) is in communication with the plurality of local computational nodes, the method comprising: (Ongati, sec. II and Fig. 3: “Federated learning is an approach to training models from user interaction with mobile devices; models are trained on the devices. Federating learning decouples machine learning from data storage by enabling learning a shared prediction model with the training data on the device without sharing the data (McMahan and Ramage, 2017).” Examiner notes Figure 3 illustrates multiple nodes that are prevented from accessing and sharing with other nodes).
generating a plurality of trained Teacher models, wherein each Teacher model is a deep neural network model which is locally trained at a node on the node dataset; (Ongati, abstract and fig. 1: “This paper proposes an improved algorithm to securely train deep neural networks over several data sources in a distributed way” “A mobile device (A) localizes the model in the context of a user’s interaction with the device”).
moving the plurality of trained Teacher models to a central node, wherein moving a Teacher model comprises transmitting a set of weights representing the Teacher model to the central node; and (Ongati, sec. III: “A study of other distributed machine learning open source and commercial projects was also conducted, including a study of federated learning approach widely used on training centralized models in mobile devices such as smartphones and tablets.”)
training a Student model using the plurality of trained Teacher models and a transfer dataset using knowledge distillation. (Ongati, abstract: “Only a representation of the trained models (network architecture and weights) are shared.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Prakash into Ongati. Ongati teaches an improved algorithm to securely train deep neural networks over several data sources in a distributed way; Prakash teaches load partitioning in distributed machine learning (ML) training using heterogeneous compute nodes in a heterogeneous computing environment. One of ordinary skill would have been motivated to combine the teachings of Prakash into Ongati in order to reduce ML training time by offloading gradient descent (GD) computations to multiple secondary computing nodes (Prakash, para. 0018).
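By way of illustration of the claim 1 steps of locally training a Teacher model at a node and moving the Teacher model by transmitting a set of weights to the central node, a minimal sketch follows. It is not drawn from Ongati, Prakash, or Applicant's disclosure; the training loop, optimizer, and serialization format are assumptions of the example.

import io
import torch
import torch.nn.functional as F

def train_teacher_locally(teacher, node_loader, epochs=3, lr=1e-3):
    # Illustrative only: supervised training of a Teacher on the node's own dataset.
    optimizer = torch.optim.Adam(teacher.parameters(), lr=lr)
    teacher.train()
    for _ in range(epochs):
        for inputs, labels in node_loader:
            loss = F.cross_entropy(teacher(inputs), labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return teacher

def serialize_teacher_weights(teacher):
    # Only the set of weights (state_dict) is sent to the central node; the node
    # dataset itself never leaves the local node.
    buffer = io.BytesIO()
    torch.save(teacher.state_dict(), buffer)
    return buffer.getvalue()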
Claim 3, Ongati as modified teaches claim 1 above. Ongati further teaches:
The method as claimed in claim 1, wherein the transfer dataset is an agreed-upon transfer data drawn from the plurality of node datasets, and/or the transfer dataset is a distributed dataset comprised of a plurality of node transfer datasets, wherein node transfer dataset is local to a node, or (Ongati, sec. II: “A mobile device gets the current up-to-date shared model from the cloud, runs it on the local data on the phone and stores the improvements as a small focused update. These changes to the model are then sent to the cloud, averaged with updates from other devices and applied to the central main model in the cloud”)
the transfer dataset is a mixture of agreed-upon transfer data drawn from the plurality of node datasets, and a plurality of node transfer datasets, wherein node local transfer dataset is local to a node. (Ongati, sec. II and figure 1: “A mobile device gets the current up-to-date shared model from the cloud, runs it on the local data on the phone and stores the improvements as a small focused update. These changes to the model are then sent to the cloud, averaged with updates from other devices and applied to the central main model in the cloud” Examiner notes that Ongati teaches federated learning where the transfer dataset is a distributed data set of end devices, each having a local dataset with updates transferred to the main model).
Claim 6, Ongati as modified teaches claim 1 above. Ongati further teaches:
The method as claimed in claim 1, wherein the nodes exist across separate, geographically isolated localities (Ongati, sec. II and figure 1: “A mobile device gets the current up-to-date shared model from the cloud, runs it on the local data on the phone and stores the improvements as a small focused update. These changes to the model are then sent to the cloud, averaged with updates from other devices and applied to the central main model in the cloud” Examiner notes that Ongati teaches federated learning where the transfer dataset is a distributed data set of end devices, each end device existing physically separate from another).
Claim 7, Ongati as modified teaches claim 1 above. Ongati further teaches:
The method as claimed in claim 1, wherein the step of training the Student model comprises: training the Student model using the plurality of trained Teacher models at each of the nodes using the node dataset. (Ongati, sec. II and figure 1: “A mobile device gets the current up-to-date shared model from the cloud, runs it on the local data on the phone and stores the improvements as a small focused update. These changes to the model are then sent to the cloud, averaged with updates from other devices and applied to the central main model in the cloud”).
Claim 28, Ongati as modified teaches claim 1 above. Ongati further teaches:
The method as claimed in claim 1, wherein each node dataset is medical dataset comprising one or more medical images or medical diagnostic datasets. (Ongati, sec. I: “This project explores solutions to the above questions by modeling and simulating distributed learning to train a common model over disparate databases consisting of medical xray images.”)
Claim 29, Ongati as modified teaches claim 1 above. Ongati further teaches:
The method as claimed in claim 1, further comprising deploying the trained Artificial Intelligence (AI) model. (Ongati, sec. I: “The proposed method allows training of deep neural networks using data from multiple de-linked nodes in a distributed environment and to secure the representation shared during training. Only a representation of the trained models (network architecture and weights) are shared.”)
Claims 41-44 are rejected under 35 U.S.C. 103 as being unpatentable over Ongati in view of Mamoshina, Polina, et al. ("Converging blockchain and next-generation artificial intelligence technologies to decentralize and accelerate biomedical research and healthcare." Oncotarget [Online], 9.5 (2018): 5665-5690; hereinafter, “Mamoshina”).
Claim 41, Ongati teaches:
cloud based computation system for generating an AI based assessment from one or more images or datasets, the cloud based computation system comprising: (Ongati, sec. I: “This project explores solutions to the above questions by modeling and simulating distributed learning to train a common model over disparate databases consisting of medical xray images.”)
one or more computation servers comprising one or more processors and one or more memories configured to store an Artificial Intelligence (AI) model configured to generate an assessment from one or more images or datasets and the one or more computational servers are configured to: (Ongati, abstract and fig. 1: “This paper proposes an improved algorithm to securely train deep neural networks over several data sources in a distributed way” “A mobile device (A) localizes the model in the context of a user’s interaction with the device”).
receive, from a user via a user interface of the computational system, one or more images or datasets; (Ongati, sec. II: “This algorithm implements a technique that was used to train deep neural networks over multiple data sources in a secure electronic medical records environment while mitigating the need to share raw data directly.”)
provide the one or more images or datasets to the AI Model to obtain an assessment; (Ongati, sec. V: “Accuracy was used to evaluate the performance of this model. The plots below indicate that the model generally improved every epoch until it hit a maximum accuracy of 93.63% before smoothing and and a final accuracy of 93.55% after smoothing. Accuracy was calculated by finding the ratio between the correctly predicted classes and the total number of predictions.”)
wherein the AI model is generated by a cloud based computational training system comprising at least one cloud based central node comprising one or more processors, one or more memories, one or more network interfaces, and one or more storage devices, and a plurality of local computational nodes where each local computational nodes stores a local node dataset (Ongati, sec. II: “A mobile device gets the current up-to-date shared model from the cloud, runs it on the local data on the phone and stores the improvements as a small focused update. These changes to the model are then sent to the cloud, averaged with updates from other devices and applied to the central main model in the cloud”)
wherein access to the local node dataset is limited to the respective local computational node, and the AI model is generated by: generating a plurality of trained Teacher models, wherein each Teacher model is a deep neural network model which is locally trained at one of the plurality of local computational node on the respective local node dataset; (Ongati, abstract and fig. 1: “This paper proposes an improved algorithm to securely train deep neural networks over several data sources in a distributed way” “A mobile device (A) localizes the model in the context of a user’s interaction with the device”).
moving the plurality of trained Teacher models to the at least one cloud based central node, wherein moving a Teacher model comprises transmitting a set of weights representing the Teacher model to the at least one cloud based central node; and (Ongati, sec. III: “A study of other distributed machine learning open source and commercial projects was also conducted, including a study of federated learning approach widely used on training centralized models in mobile devices such as smartphones and tablets.”)
training a Student model using the plurality of trained Teacher models and a transfer dataset using knowledge distillation. (Ongati, abstract: “Only a representation of the trained models (network architecture and weights) are shared.”)
Ongati does not explicitly disclose:
and send the assessment to the user, via the user interface,
However, Mamoshina teaches:
and send the assessment to the user, via the user interface, (Mamoshina, pg. 5671: “The client-side validation could further utilize secure user interfaces and key management”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Mamoshina into Ongati. Mamoshina teaches an overview of next-generation artificial intelligence and blockchain technologies and presents innovative solutions. One of ordinary skill would have been motivated to combine the teachings of Mamoshina into Ongati in order to accelerate biomedical research and enable patients with new tools to control and profit from their personal data (Mamoshina, abstract).
Claim 42, Ongati as modified teaches claim 41 above. Ongati further teaches:
The system as claimed in claim 41, wherein the one or more image or datasets are medical images and medical datasets and the assessment is a medical assessment of a medical condition, diagnosis or treatment. (Ongati, sec. IV: “The data used to train the model and test the algorithm consisted of chest x-ray images obtained from 28780 patients. There were two sets of data; xray images from normal patients and those from patients with pneumonia.”)
Claim 43, Ongati teaches:
A computation system for generating an AI based assessment from one or more images or datasets, (Ongati, sec. IV: “The data used to train the model and test the algorithm consisted of chest x-ray images obtained from 28780 patients. There were two sets of data; xray images from normal patients and those from patients with pneumonia.”) the computation system comprising at least one processor, and at least one memory comprising instructions to configure the at least one processor to: (Ongati, sec. III: “A study of other distributed machine learning open source and commercial projects was also conducted, including a study of federated learning approach widely used on training centralized models in mobile devices such as smartphones and tablets.”)
wherein the AI model is generated by: generating a plurality of trained Teacher models, wherein each Teacher model is a deep neural network model which is locally trained at a local computational node on the local node dataset; (Ongati, abstract and fig. 1: “This paper proposes an improved algorithm to securely train deep neural networks over several data sources in a distributed way” “A mobile device (A) localizes the model in the context of a user’s interaction with the device”).
moving the plurality of trained Teacher models to the central computational node, wherein moving a Teacher model comprises transmitting a set of weights representing the Teacher model to the central node; and (Ongati, sec. III: “A study of other distributed machine learning open source and commercial projects was also conducted, including a study of federated learning approach widely used on training centralized models in mobile devices such as smartphones and tablets.”)
training a Student model using the plurality of trained Teacher models and a transfer dataset using knowledge distillation. (Ongati, abstract: “Only a representation of the trained models (network architecture and weights) are shared.”)
Ongati does not explicitly disclose:
upload, via a user interface, an image or dataset to a cloud based Artificial Intelligence (AI) model configured to generate an assessment from one or more images or datasets; and
receive the assessment from the cloud based AI model via the user interface,
However, Mamoshina teaches:
upload, via a user interface, an image or dataset to a cloud based Artificial Intelligence (AI) model configured to generate an assessment from one or more images or datasets; and (Mamoshina, pg. 5673: “Private endpoints could be compared with administrative interfaces in Web services; they are intended to process and manage local storage associated with a particular full node” Examiner notes that Mamoshina teaches endpoint interfaces to manage storage i.e. upload or delete data)
receive the assessment from the cloud based AI model via the user interface, (Mamoshina, pg. 5671: “The client-side validation could further utilize secure user interfaces and key management”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Mamoshina into Ongati. Mamoshina teaches an overview of next-generation artificial intelligence and blockchain technologies and presents innovative solutions. One of ordinary skill would have been motivated to combine the teachings of Mamoshina into Ongati in order to accelerate biomedical research and enable patients with new tools to control and profit from their personal data (Mamoshina, abstract).
Claim 44, Ongati as modified teaches claim 43 above. Ongati further teaches:
The system as claimed in claim 43, wherein the one or more image or datasets are medical images and medical datasets and the assessment is a medical assessment of a medical condition, diagnosis or treatment. (Ongati, sec. IV: “The data used to train the model and test the algorithm consisted of chest x-ray images obtained from 28780 patients. There were two sets of data; xray images from normal patients and those from patients with pneumonia.”)
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Ongati in view of Prakash, and further in view of H. Kanaan, K. Mahmood and V. Sathyan ("An Ontological Model for Privacy in Emerging Decentralized Healthcare Systems," 2017 IEEE 13th International Symposium on Autonomous Decentralized System (ISADS), Bangkok, Thailand, 2017, pp. 107-113; hereinafter, “Kanaan”).
Claim 2, Ongati as modified teaches claim 1 above. Kanaan further teaches:
The method as claimed in claim 1, wherein prior to moving the plurality of trained Teacher models to a central node, a compliance check is performed on each trained teacher node to check that the respective model does not contain private data from the node it was trained at by checking if the respective model has memorized specific examples of the data and (Kanaan, sec. IV: “Any required information is queried across the filters, and once a filter has marks a set as a policy infringing set, it would be ultimately blocked and the result is considered as fact. Decentralization in our system stems out from the fact that there is not central repository of data for decision making.”)
if the compliance check returns a FALSE value, the respective model is retrained on the data with different parameters until a model that satisfies the compliance check is obtained, or if no model is obtained after N attempts, then either discarding the model or encrypting the model and sharing the model if a data policy allows encrypted sharing of data from the respective node. (Kanaan, sec. IV: “Any required information is queried across the filters, and once a filter has marks a set as a policy infringing set, it would be ultimately blocked and the result is considered as fact. Decentralization in our system stems out from the fact that there is not central repository of data for decision making.” Examiner notes that Kanaan teaches no model obtained after 0 attempts for node models that contain private information because such information is blocked.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Kanaan into Ongati. Ongati teaches an improved algorithm to securely train deep neural networks over several data sources in a distributed way; Kanaan teaches a decentralized, ontology-based system architecture that addresses healthcare privacy concerns. One of ordinary skill would have been motivated to combine the teachings of Kanaan into Ongati as modified in order to enable faster knowledge sharing and a privacy-preserving environment (Kanaan, sec. IV).
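Regarding the compliance check recited in claim 2 (checking whether a trained model has memorized specific examples of the node data), the following minimal sketch illustrates one simple check of that general kind, comparing per-example loss on the training data against a held-out split. It is provided for illustration only and is not drawn from Kanaan, Ongati, or Applicant's disclosure; the threshold, loss function, and data split are assumptions of the example.

import torch
import torch.nn.functional as F

def passes_compliance_check(model, train_examples, holdout_examples, max_gap=0.5):
    # Illustrative only: train_examples and holdout_examples are assumed to be lists
    # of (input tensor, label tensor) pairs from the same node. An unusually large
    # loss gap between held-out and training examples suggests memorization.
    model.eval()

    def avg_loss(examples):
        losses = []
        with torch.no_grad():
            for x, y in examples:
                losses.append(F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0)).item())
        return sum(losses) / len(losses)

    gap = avg_loss(holdout_examples) - avg_loss(train_examples)
    return gap <= max_gap

Under the claim language, a FALSE result from such a check would trigger retraining with different parameters for up to N attempts before the model is discarded or encrypted.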
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Ongati in view of Prakash, and further in view of Mamoshina, Polina, et al. ("Converging blockchain and next-generation artificial intelligence technologies to decentralize and accelerate biomedical research and healthcare." Oncotarget [Online], 9.5 (2018): 5665-5690; hereinafter, “Mamoshina”).
Claim 8, Ongati as modified teaches claim 7 above. Ongati further teaches:
wherein the transfer dataset comprises each of the node datasets, and (Ongati, sec. II and figure 1: “A mobile device gets the current up-to-date shared model from the cloud, runs it on the local data on the phone and stores the improvements as a small focused update. These changes to the model are then sent to the cloud, averaged with updates from other devices and applied to the central main model in the cloud”).
wherein after training the Student model at each of the nodes, the Student model is sent to a master node, and (Ongati, sec. II and figure 1: “A mobile device gets the current up-to-date shared model from the cloud, runs it on the local data on the phone and stores the improvements as a small focused update. These changes to the model are then sent to the cloud, averaged with updates from other devices and applied to the central main model in the cloud”).
the master node collects and averages the weights of all worker nodes after each batch to update the Student model. (Ongati, sec. II and figure 1: “A mobile device gets the current up-to-date shared model from the cloud, runs it on the local data on the phone and stores the improvements as a small focused update. These changes to the model are then sent to the cloud, averaged with updates from other devices and applied to the central main model in the cloud”).
Ongati does not explicitly disclose:
The method as claimed in claim 7, wherein prior to training the Student model using the plurality of trained Teacher models, the method further comprises: forming a single training cluster for training the Student model by establishing a plurality of inter-region peering connections between each of the nodes, and
copies of the Student model are sent to each of the nodes and assigned as worker nodes, and
However, Mamoshina teaches:
The method as claimed in claim 7, wherein prior to training the Student model using the plurality of trained Teacher models, the method further comprises: forming a single training cluster for training the Student model by establishing a plurality of inter-region peering connections between each of the nodes, and (Mamoshina, pg. 5668: “It is usually specifically used to refer to either a distributed database where users store information on a number of nodes, or a computer network in which users store information on a number of peer network nodes.”)
copies of the Student model are sent to each of the nodes and assigned as worker nodes, and (Mamoshina, pg. 5668: “These records are composed into blocks, which are locked together using certain cryptographic mechanisms to maintain consistency of the data. Normally a blockchain is maintained by a peer-to-peer network of users who collectively adhere to agreed rules (which are insured by the software) for accepting new blocks. Each record in the block contains a timestamp or signature and a link to a previous block in the chain. By design, blockchain is made to ensure immutability of the data. So once recorded, the data in any given block cannot be modified afterwards without the alteration of all subsequent blocks and the agreement of the members of the network” Examiner notes that Mamoshina teaches blockchain wherein each of the nodes is a worker node verifying all data added to the chain).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Mamoshina into Ongati as modified. Mamoshina teaches an overview of next-generation artificial intelligence and blockchain technologies and presents innovative solutions. One of ordinary skill would have been motivated to combine the teachings of Mamoshina into Ongati as modified in order to accelerate biomedical research and enable patients with new tools to control and profit from their personal data (Mamoshina, abstract).
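For illustration of the claim 8 limitation in which the master node collects and averages the weights of all worker nodes after each batch to update the Student model, a minimal sketch of element-wise weight averaging follows. It is not drawn from Ongati or Mamoshina; the state_dict-based exchange is an assumption of the example.

import copy
import torch

def average_worker_weights(worker_models):
    # Illustrative only: returns a state_dict whose entries are the element-wise
    # mean of the corresponding entries across all worker copies of the Student.
    state_dicts = [m.state_dict() for m in worker_models]
    averaged = copy.deepcopy(state_dicts[0])
    for key in averaged:
        stacked = torch.stack([sd[key].float() for sd in state_dicts])
        averaged[key] = stacked.mean(dim=0).to(averaged[key].dtype)
    return averaged

def update_student_on_master(student, worker_models):
    # The master node applies the averaged weights to its copy of the Student model.
    student.load_state_dict(average_worker_weights(worker_models))
    return student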
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Ongati in view of Prakash, in view of Mamoshina and further in view of Kanaan.
Claim 10, Ongati as modified teaches claim 8 above. Kanaan further teaches:
The method as claimed in claim 8, wherein prior to sending the Student model to the master node a compliance check is performed on the Student model to check that the Student model does not contain private data from the node it was trained at by checking if the Student model has memorized specific examples of the data and if the compliance check returns a FALSE value, the Student model is retrained on the data with different parameters until a Student model that satisfies the compliance check is obtained, or if no Student model is obtained after N attempts, then either discarding the Student model or encrypting the Student model and sharing the Student model if a data policy allows encrypted sharing of data from the respective node. (Kanaan, sec. IV: “Any required information is queried across the filters, and once a filter has marks a set as a policy infringing set, it would be ultimately blocked and the result is considered as fact. Decentralization in our system stems out from the fact that there is not central repository of data for decision making.” Examiner notes that Kanaan teaches no model obtained after 0 attempts for node models that contain private information because such information is blocked.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Kanaan into Ongati, as modified. Kanaan teaches a decentralized, ontology-based system architecture that addresses healthcare privacy concerns. One of ordinary skill would have been motivated to combine the teachings of Kanaan into Ongati as modified in order to enable faster knowledge sharing and a privacy-preserving environment (Kanaan, sec. IV).
Claims 11, 14, 17, 23, and 25-27 are rejected under 35 U.S.C. 103 as being unpatentable over Ongati in view of Prakash, and in view of Samad Nejatian, Hamid Parvin, and Eshagh Faraji (“Using sub-sampling and ensemble clustering techniques to improve performance of imbalanced classification,” Neurocomputing, Volume 276, 2018, Pages 55-66; hereinafter, “Nejatian”).
Claim 11, Ongati as modified teaches claim 1 above. Ongati further teaches:
The method as claimed in claim 1, wherein the step of training the Student model comprises: training a plurality of Student models, wherein each Student model is a Teacher model at a first node which is trained by a plurality of Teacher models at other nodes by moving the Student model to another node and training the Student model using the Teacher model at the node using the node dataset, and (Ongati, sec. II and figure 1: “A mobile device gets the current up-to-date shared model from the cloud, runs it on the local data on the phone and stores the improvements as a small focused update. These changes to the model are then sent to the cloud, averaged with updates from other devices and applied to the central main model in the cloud”).
Ongati does not explicitly disclose:
once the plurality of Student models are each trained, an ensemble model is generated from the plurality of trained Student models
However, Nejatian teaches:
once the plurality of Student models are each trained, an ensemble model is generated from the plurality of trained Student models. (Nejatian, sec. 2: “In EasyEnsemble algorithm, an ensemble learning system is created through sampling several subsets from majority class and making several classifiers based on combination of each subset with minority class data...”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Nejatian into Ongati as modified. Nejatian teaches machine learning methods better suited to imbalanced data having very little data in the minority class. One of ordinary skill would have been motivated to combine the teachings of Nejatian into Ongati as modified in order to enable machine learning with imbalanced data with lower computational complexity and faster implementation time (Nejatian, abstract).
Claim 14, Ongati as modified teaches claim 11 above. Ongati further teaches:
The method as claimed in claim 11, wherein each Student model is trained after it has been trained at a predetermined threshold number of nodes, or each Student model is trained after it has been trained on a predetermined quantity of data at at least a threshold number of nodes, or each Student model is trained after it has been trained at each of the plurality of nodes. (Ongati, fig. 1: “A mobile device (A) localizes the model in the context of a user’s interaction with the device. Users' changes are summed up (B) to form an aggregate change (C) which is then applied to the shared model and procedure repeated whenever new data is available.” Examiner notes that Ongati teaches training at a local device (i.e., a Teacher) and then sending the result to the shared model (i.e., a Student model).)
Claim 17, Ongati as modified teaches claim 11 above. Ongati further teaches:
The method as claimed in claim 11, wherein the ensemble model is obtained using an Average Voting method, or the ensemble model is obtained using weighted averaging, or the ensemble model is obtained using a Mixture of Experts Layers (learned weighting), or the ensemble model is obtained using a distillation method, wherein a final model is distilled from the plurality of student models (Ongati, sec. II: “A mobile device gets the current up-to-date shared model from the cloud, runs it on the local data on the phone and stores the improvements as a small focused update. These changes to the model are then sent to the cloud, averaged with updates from other devices and applied to the central main model in the cloud”)
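For illustration of the Average Voting alternative recited in claim 17, the following minimal sketch forms an ensemble prediction by averaging the softmax outputs of the plurality of trained Student models. It is not drawn from Ongati or the claims; the model objects and inputs are assumptions of the example.

import torch
import torch.nn.functional as F

def ensemble_average_vote(models, inputs):
    # Illustrative only: average the class-probability outputs of all Student models
    # and return the class with the highest mean probability.
    with torch.no_grad():
        probs = torch.stack([F.softmax(m(inputs), dim=1) for m in models]).mean(dim=0)
    return probs.argmax(dim=1)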
Claim 23, Ongati as modified teaches claim 1 above. Nejatian further teaches:
The method as claimed in claim 1, further comprising using weighting to adjust a distillation loss function to compensate for differences in the number of data points at each node. (Nejatian, sec. 2: “The purpose of these methods is to overcome the problem of data loss in random methods. In EasyEnsemble algorithm, an ensemble learning system is created through sampling several subsets from majority class and making several classifiers based on combination of each subset with minority class data.” Examiner notes that “to compensate for differences…” is interpreted as an intended use).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Nejatian into Ongati as modified. Nejatian teaches machine learning methods better suited to imbalanced data having very little data in the minority class. One of ordinary skill would have been motivated to combine the teachings of Nejatian into Ongati as modified in order to enable machine learning with imbalanced data with lower computational complexity and faster implementation time (Nejatian, abstract).
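For illustration of the claim 23 limitation of using weighting to adjust a distillation loss function to compensate for differences in the number of data points at each node, a minimal sketch follows in which each Teacher's contribution is weighted in proportion to its node dataset size. The proportional weighting is an assumption of the example and is not drawn from Nejatian, Ongati, or Applicant's disclosure.

import torch
import torch.nn.functional as F

def weighted_distillation_loss(student_logits, teacher_logits_list, node_sizes, temperature=2.0):
    # Illustrative only: Teacher i's soft targets are weighted by n_i / sum(n),
    # so nodes holding more data points contribute more to the distillation loss.
    total = float(sum(node_sizes))
    weights = [n / total for n in node_sizes]
    student_log_probs = F.log_softmax(student_logits / temperature, dim=1)
    loss = 0.0
    for w, t_logits in zip(weights, teacher_logits_list):
        soft_target = F.softmax(t_logits / temperature, dim=1)
        loss = loss + w * F.kl_div(student_log_probs, soft_target, reduction="batchmean")
    return loss * temperature ** 2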
Claim 25, Ongati as modified teaches claim 1 above. Ongati further teaches:
The method as claimed in claim 1, wherein an epoch comprises a full training pass of each node dataset, and (Ongati, sec. IV: “Charity compiles the model and trains with its local data for 1 epoch, writing the best output model to a file”)
Ongati does not explicitly disclose:
during each epoch, each worker samples a subset of the available sample dataset, wherein the subset size is based a size of the smallest dataset, and
the number of epochs is increased based on the ratio of a size of the largest dataset to the size of the smallest dataset.
However, Nejatian teaches:
during each epoch, each worker samples a subset of the available sample dataset, wherein the subset size is based a size of the smallest dataset, and (Nejatian, sec. 2: “In EasyEnsemble algorithm, an ensemble learning system is created through sampling several subsets from majority class and making several classifiers based on combination of each subset with minority class data. The size of each subset from majority class is equal to size of minority class.”)
the number of epochs is increased based on the ratio of a size of the largest dataset to the size of the smallest dataset. (Nejatian, sec. 3 and fig. 4: “So, the ratio of column indicates the data distribution in dataset, and each criterion using the values of two columns is inherently sensitive to imbalance. For example, the accuracy criterion uses both columns and is sensitive to imbalance and changes as the class distribution changes” Examiner notes that Nejatian teaches repeating training until i = T such that data imbalance would require a greater number of repeats).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Nejatian into Ongati as modified. Nejatian teaches machine learning methods better suited to imbalanced data having very little data in the minority class. One of ordinary skill would have been motivated to combine the teachings of Nejatian into Ongati as modified in order to enable machine learning with imbalanced data with lower computational complexity and faster implementation time (Nejatian, abstract).
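For illustration of the claim 25 limitations in which each worker samples a per-epoch subset sized by the smallest dataset and the number of epochs is increased based on the ratio of the largest to the smallest dataset, a minimal sketch follows. The ceiling-based epoch scaling and uniform random sampling are assumptions of the example and are not drawn from Nejatian or Ongati.

import math
import random

def plan_balanced_training(node_dataset_sizes, base_epochs):
    # Illustrative only: subset size is tied to the smallest node dataset, and the
    # epoch count grows with the largest-to-smallest size ratio so that larger
    # datasets are still covered over the course of training.
    smallest = min(node_dataset_sizes)
    largest = max(node_dataset_sizes)
    epochs = base_epochs * math.ceil(largest / smallest)
    return smallest, epochs

def sample_epoch_subset(node_dataset, subset_size):
    # Each worker draws a fresh subset (without replacement) at the start of each epoch.
    return random.sample(node_dataset, min(subset_size, len(node_dataset)))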
Claim 26, Ongati as modified teaches claim 1 above. Nejatian further teaches:
The method as claimed in claim 1, wherein the plurality of nodes are separated into k clusters where k is less than the total number nodes, and the method is performed separately in each cluster to generate k cluster models, (Nejatian, fig. 1 and sec. 1: “Another form of imbalance is intra-class which corresponds to the distribution of representation data for sub-concepts in a class. In Fig. 1(B), class B and C represent the dominant minority and majority sub-concept, respectively. In addition, A and D are dominant concept and dominant sub-concept for majority class, respectively.”)
wherein each cluster model is stored at a cluster representative node, and the method is performed on the k cluster representative nodes, wherein the plurality of nodes comprises the k cluster representative nodes. (Nejatian, sec. 1: “For each class, the number of samples existing in the dominant cluster of that class eliminates the sub-concept. As it is clear, this data space represents inter-classes and intra-class imbalance.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Nejatian into Ongati as modified. Nejatian teaches machine learning methods better suited to imbalanced data having very little data in the minority class. One of ordinary skill would have been motivated to combine the teachings of Nejatian into Ongati as modified in order to enable machine learning with imbalanced data with lower computational complexity and faster implementation time (Nejatian, abstract).
Claim 27, Ongati as modified teaches claim 26 above. Nejatian further teaches:
The method as claimed in claim 26, wherein one or more additional layers of nodes are created and each lower layer is generated by separating the cluster representative nodes in the previous layer into j clusters where j is less than the number of cluster representative nodes in the previous layer, and then the method is performed separately in each cluster to generate j cluster models, wherein each cluster model is stored at a cluster representative node, and the method is performed on the j cluster representative nodes, wherein the plurality of nodes comprises the j cluster representative nodes. (Nejatian, sec. 2: “In this algorithm, first, k samples are selected from each cluster, and the average feature vector is calculated for these samples. Next, the cluster centers are determined. Then, Euclidean distance is calculated for each sample from each cluster center, and is assigned to the cluster with the nearest center, and the cluster center is updated. According to CBO algorithm and using over-sampling method, all clusters of majority class are extended the same size as the majority class with the highest sample. Then, over-sampling is done on clusters of minority class, and the clusters' size is increased…As it is clear, ultimately, a strong representation of little concepts in the final dataset is obtained.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Nejatian into Ongati as modified. Nejatian teaches machine learning methods better suited to imbalanced data having very little data in the minority class. One of ordinary skill would have been motivated to combine the teachings of Nejatian into Ongati as modified in order to enable machine learning with imbalanced data with lower computational complexity and faster implementation time (Nejatian, abstract).
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Ongati in view of Prakash, in view of Nejatian, and further in view of Mamoshina.
Claim 12, Ongati as modified teaches claim 11 above. Mamoshina further teaches:
The method as claimed in claim 11, wherein prior to training a plurality of Student models, the method further comprises: forming a single training cluster for training the Student model by establishing a plurality of inter-region peering connections between each of the nodes. (Mamoshina, pg. 5668: “It is usually specifically used to refer to either a distributed database where users store information on a number of nodes, or a computer network in which users store information on a number of peer network nodes.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Mamoshina into Ongati as modified. Mamoshina teaches an overview of next-generation artificial intelligence and blockchain technologies and presents innovative solutions. One of ordinary skill would have been motivated to combine the teachings of Mamoshina into Ongati as modified in order to accelerate biomedical research and enable patients with new tools to control and profit from their personal data (Mamoshina, abstract).
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Ongati in view of Prakash, in view of Nejatian, in view of Mamoshina, and further in view of Kanaan.
Claim 13, Ongati as modified teaches claim 11 above. Kanaan further teaches:
The method as claimed in claim 11, wherein prior to moving the Student model to another node a compliance check is performed on the Student model to check that the model does not contain private data from the node it was trained at by checking if the Student model has memorized specific examples of the data and (Kanaan, sec. IV: “Any required information is queried across the filters, and once a filter has marks a set as a policy infringing set, it would be ultimately blocked and the result is considered as fact. Decentralization in our system stems out from the fact that there is not central repository of data for decision making.”)
if the compliance check returns a FALSE value, the Student model is retrained on the data with different parameters until a Student model that satisfies the compliance check is obtained, or if no Student model is obtained after N attempts, then either discarding the Student model or encrypting the Student model and sharing the Student model if a data policy allows encrypted sharing of data from the respective node. (Kanaan, sec. IV: “Any required information is queried across the filters, and once a filter has marks a set as a policy infringing set, it would be ultimately blocked and the result is considered as fact. Decentralization in our system stems out from the fact that there is not central repository of data for decision making.” Examiner notes that Kanaan teaches no model obtained after 0 attempts for node models that contain private information because such information is blocked.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Kanaan into Ongati, as modified. Kanaan teaches a decentralized, ontology-based system architecture that addresses healthcare privacy concerns. One of ordinary skill would have been motivated to combine the teachings of Kanaan into Ongati, as modified, in order to enable faster knowledge sharing and a privacy-preserving environment (Kanaan, sec. IV).
Claim 32 is rejected under 35 U.S.C. 103 as being unpatentable over Ongati in view of Mamoshina, and further in view of L. Pineda-Morales, A. Costan and G. Antoniu ("Towards Multi-site Metadata Management for Geographically Distributed Cloud Workflows," 2015 IEEE International Conference on Cluster Computing, Chicago, IL, USA, 2015, pp. 294-303; hereinafter, “Pineda-Morales”).
Claim 32, Ongati teaches claim 31 above. Ongati further teaches:
The system as claimed in claim 31, wherein the system is configured to automatically provision the required hardware and software defined networking functionality at at least one of the cloud based computational nodes and the at least one cloud based central node is configured to implement: (Ongati, sec. II: “In this approach, there is a shared model that resides in the cloud. A mobile device gets the current up-to-date shared model from the cloud”)
the distribution service is configured to send a model configuration to a group of servers to begin training a model, and on completion of model training, the provisioning module is configured to shut down the group of servers. (Ongati, sec. IV: “While training, Alice k monitors model improvements in terms of accuracy and loss at the end of every epoch, with an early stopping patience of 10.” Examiner notes that Ongati teaches early stopping i.e. shutting down of training).
Ongati does not explicitly disclose:
a cloud provisioning module and a distribution service, wherein the cloud provisioning module is configured to search available server configurations for each of a plurality of cloud services providers, wherein each cloud service provider has a plurality of servers in an associated region, and
the cloud provisioning module is configured to apportion a group of servers from one or more of plurality of cloud service providers with tags and metadata to allow a group to be managed,
wherein the number of servers in a group is based on number of node locations within a region associated with a cloud service providers, and
However, Mamoshina teaches:
a cloud provisioning module and a distribution service, wherein the cloud provisioning module is configured to search available server configurations for each of a plurality of cloud services providers, wherein each cloud service provider has a plurality of servers in an associated region, and (Mamoshina, pg. 5681: “Blockchain full nodes and cloud storage are the remaining two parts of the ecosystem. Cloud storage could be an existing cloud storage, for example, Amazon Web Services (AWS), which allows for building HIPAA-compliant applications or Google Cloud Platform.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Mamoshina into Ongati as modified. Mamoshina teaches an overview of next-generation artificial intelligence and blockchain technologies and presents innovative solutions. One of ordinary skill would have been motivated to combine the teachings of Mamoshina into Ongati as modified in order to accelerate biomedical research and enable patients with new tools to control and profit from their personal data (Mamoshina, abstract).
Moreover, Pineda-Morales teaches:
the cloud provisioning module is configured to apportion a group of servers from one or more of plurality of cloud service providers with tags and metadata to allow a group to be managed, (Pineda-Morales, sec. II and IV: “A simple experiment conducted on the Azure cloud and isolating the metadata access times for up to 5000 files (Figure 1) confirms that remote metadata operations take orders of magnitude more than local ones.” “Both b) and c) can also be referred to as remote scenarios. Our design accounts for several datacenters in various geographic regions in order to cover all these scenarios.”)
wherein the number of servers in a group is based on number of node locations within a region associated with a cloud service providers, and (Pineda-Morales, sec. VI: “The goal of the first experiment is to compare the performance of our implementation to the baseline centralized data management as the number of files to be processed increases. For this purpose, we keep a constant number of 32 nodes evenly distributed in our datacenters, while varying the number of entries to be written/read to/from the registry”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Pineda-Morales into Ongati as modified. Pineda-Morales teaches workflow semantics in a 2-level metadata partitioning hierarchy that combines distribution and replication. One of ordinary skill would have been motivated to combine the teachings of Pineda-Morales into Ongati as modified in order to achieve a gain in execution time for a parallel, geo-distributed real-world application (Pineda-Morales, abstract).
Response to Applicant's Remarks/Arguments
35 USC § 112 Rejection
In light of applicant’s amendments and remarks, the previously asserted § 112 rejection has been withdrawn.
35 USC § 101
In light of applicant’s amendments and remarks, the previously asserted § 101 rejection has been withdrawn.
35 USC §§ 102/103 – Claim 1 and corresponding dependent claims
The rejection of claim 1 has been updated in light of applicant’s amendments. Claim 1 remains rejected for the reasons set forth in the rejection above.
Beginning on page 14 of applicant’s remarks, applicant argues that Ongati does not teach “a computer implemented method for training an Artificial Intelligence model in a cloud based computational system on a distributed dataset comprising a plurality of nodes and a central node having such features”. However, applicant concedes at the bottom of page 14 of applicant’s remarks that Ongati discusses “Federated learning, distributed learning and blockchain, then discusses a method of training for a deep neural network while mitigating the need to share raw data”. In other words, Ongati teaches a computer implemented method of training AI (deep learning) in a cloud as set forth above:
In this approach, there is a shared model that resides in the cloud. A mobile device gets the current up-to-date shared model from the cloud, runs it on the local data on the phone and stores the improvements as a small focused update. These changes to the model are then sent to the cloud, averaged with updates from other devices and applied to the central main model in the cloud.
See Ongati Sec. II and figures 1 & 3. As applicant concedes, Ongati teaches distributed data sets in the form of siloed data in healthcare clinics.
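For context only, and not as a characterization of Ongati’s actual implementation, the federated averaging cycle described in the quoted passage may be sketched as follows; all names and data below are hypothetical and purely illustrative:

    # Toy sketch of a federated-averaging round: each device pulls the shared
    # (central) model, computes a local update on its own siloed data, and the
    # cloud averages the returned updates into the central model.
    def local_update(shared_weights, local_data, lr=0.1):
        # Hypothetical "training": nudge each weight toward the mean of the local data.
        target = sum(local_data) / len(local_data)
        return [w + lr * (target - w) for w in shared_weights]

    def federated_round(shared_weights, device_datasets):
        # Every device trains locally; the cloud averages the results element-wise.
        local_models = [local_update(shared_weights, data) for data in device_datasets]
        return [sum(ws) / len(ws) for ws in zip(*local_models)]

    shared = [0.0, 0.0]                      # central model held in the cloud
    silos = [[1.0, 2.0], [3.0], [2.0, 2.0]]  # siloed local datasets (never shared)
    for _ in range(5):
        shared = federated_round(shared, silos)
    print(shared)

The raw local datasets never leave the devices; only the updated weights are returned to the cloud for averaging, which is the point of the passage quoted above.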
On page 15, applicant argues that Ongati teaches, “rather than using locally trained models from all of the 5 nodes to distill a Student model at the central node, Ongati teaches sequentially replacing the central model with a new model if it has been improved by the local model”. However, Ongati teaches comparing the central model before and after application of each local model, such that the system uses the locally trained models of all of the nodes, at least insofar as each local model is used as a comparison against the central model.
At the bottom of page 15, applicant further argues that Ongati teaches a system which requires a “starting model” at the central node, whereas applicant’s system does not require such a “starting model”. However, applicant’s claims do not explicitly claim a system that does not use a “starting model”, nor does Ongati explicitly teach that a “starting model” is used or required for its method.
At the top of page 16, applicant argues that, “Ongati only generally suggests that training is performed on a dataset at each mode. However, Ongati does not teach or suggest the limitation of a Teacher model being locally trained at a node on the node dataset (e.g., for use in knowledge distillation).” In other words, applicant concedes that Ongati teaches training performed on a dataset at each node. Additionally, Figure 1 expressly illustrates localizing a global model to the local node, as the caption states: “A mobile device (A) localizes the model in the context of a user’s interaction with the device. Users' changes are summed up (B) to form an aggregate change (C) which is then applied to the shared model and procedure repeated whenever new data is available”
In the middle of page 16, applicant argues that claim 1 requires training of models at each local node and, after training, the entire full [local] model is moved to the central node, where the local models are combined. However, similar to the previous argument, applicant’s distinction is not clear in the claim language. Claim 1 recites that each node transmits a set of weights representing a Teacher model and that the central model is trained using a plurality of local models and “knowledge distillation”. Knowledge distillation is a model compression technique in which a large model transfers knowledge to a smaller model, which is then able to approximate the Teacher model on a (usually) smaller, resource-constrained device. Such knowledge distillation is taught by Ongati.
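For background only, one common formulation of knowledge distillation (after Hinton et al.) trains the smaller Student model to match the Teacher model’s temperature-softened output distribution; the following minimal sketch uses hypothetical logits and is not drawn from the claims or from Ongati:

    import math

    def softmax(logits, T=1.0):
        # Temperature-softened softmax: larger T yields a softer distribution.
        exps = [math.exp(z / T) for z in logits]
        total = sum(exps)
        return [e / total for e in exps]

    def distillation_loss(teacher_logits, student_logits, T=2.0):
        # Cross-entropy between the softened Teacher and Student distributions,
        # scaled by T^2 as is conventional in the distillation literature.
        p_teacher = softmax(teacher_logits, T)
        p_student = softmax(student_logits, T)
        return -sum(pt * math.log(ps) for pt, ps in zip(p_teacher, p_student)) * T * T

    print(distillation_loss([4.0, 1.0, 0.5], [2.5, 1.2, 0.3]))

Under this formulation, only the Teacher’s outputs (or weights from which they can be computed) are needed at the Student side, which is consistent with transmitting “a set of weights” rather than raw data.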
Beginning at the top of page 17, applicant argues that claim 1 “includes sending the full trained AI model to the central nodes”. However, claim 1 actually recites, “wherein moving a teacher model comprises transmitting a set of weights representing the teach model to a central node”. In other words, the “full trained AI model” is not transmitted; only a set of weights representing the model is transmitted. Moreover, claim 1 does not even require the full set of weights of the local model to be transmitted, just “a set”. Indeed, Ongati teaches each local model being transmitted at least for comparison with the global model.
Towards the middle and bottom of page 17, applicant further argues that none of the remaining references cure the alleged defects of Ongati. However, applicant’s arguments regarding the claim limitations vis-à-vis Ongati are not persuasive.
Finally, toward the top of page 18, applicant argues that the dependent claims are additionally allowable because they recite additional limitations. Applicant argues that claim 3 recites that the transferred dataset is a distributed dataset across nodes. However, the transfer of a dataset is taught as set forth above, including in the rejection of claim 1. Indeed, the concept of transferring some unspecified dataset comprising some unspecified data across nodes is a fundamental concept of distributed learning generally.
Applicant further argues that Ongati teaches an “up-to-date model”, which is distinct from the claimed parallel-training-then-distillation method. However, Ongati’s method does not preclude parallel training among local models and, as set forth above, Ongati does teach knowledge distillation. Applicant further argues that the cited portion of Ongati teaches running an AI model (inferencing) rather than AI training. However, the cited portion itself states that the model “stores the improvements as a small focused update”. Where a model is merely inferencing, there would be no improvements to store.
Towards the bottom of page 18, applicant argues that claim 7, “teaches sending all of the trained Teacher models to each node to train the Student model on the local node dataset.” However, claim 7 actually reads: “training the Student model using the plurality of trained Teacher models at each of the nodes using the node dataset”. There is no explicit recitation of “sending” anything in particular.
With respect to dependent claims, applicant’s arguments are also unpersuasive.
35 USC §§ 102/103 – Claim 35 and corresponding dependent claims
Applicant asserts that claim 35 and its corresponding dependent claims are allowable for the same reasons set forth regarding claim 1 and its corresponding dependent claims. For the same reasons set forth above, applicant’s arguments are unpersuasive with respect to claim 35 and its corresponding dependent claims.
35 USC §§ 102/103 – Claim 41 and corresponding dependent claim 42
Applicant asserts that claim 41 and corresponding dependent claim 42 are allowable for the same reasons set forth regarding claim 1 and its corresponding dependent claims. For the same reasons set forth above, applicant’s arguments are unpersuasive with respect to claim 41 and corresponding dependent claim 42.
35 USC §§ 102/103 – Claim 43 and corresponding dependent claim 44
Applicant asserts that claim 43 and corresponding dependent claim 44 are allowable for the same reasons set forth regarding claim 1 and its corresponding dependent claims. For the same reasons set forth above, applicant’s arguments are unpersuasive with respect to claim 43 and corresponding dependent claim 44.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Sally T. Ley whose telephone number is (571)272-3406. The examiner can normally be reached Monday - Thursday, 10:00am - 6:00pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Viker Lamardo, can be reached at (571) 270-5871. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/STL/Examiner, Art Unit 2147
/VIKER A LAMARDO/Supervisory Patent Examiner, Art Unit 2147