DETAILED ACTION
Applicant’s Application filed on December 28, 2022, has been reviewed.
Claims 19-25 were cancelled in the Preliminary Amendment.
Claims 1-18 and 26-27 were amended in the Preliminary Amendment.
Claims 1-18 and 26-27 have been examined.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on January 2, 2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-8, 11-15 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over O et al. (US 2021/0168195 A1), hereinafter referred to as O, in view of Huang et al. (US 2023/0273826 A1), hereinafter referred to as Huang.
With respect to claim 1, O teaches A system comprising at least one model server (the server 100 include a communicator 110, a memory 120, and a processor 130, para. 0057), wherein the at least one model server is configured to store pluralities of neural network models comprising layers (the second neural network model stored in the memory 120; when the first neural network model is obtained, the obtained first neural network model stored in the memory 120, para. 0062; the communicator 110 can transmit the first neural network model to an external device and a model deploy server or transmit information on at least one layer of the layers included in the first neural network model to an external device and a model deploy server, para. 0060) and to upload the pluralities of neural network models to the model database (the communicator 110 receive a first neural network model from an external server or an external device, para. 0060), wherein a proxy unit (the server 100 include a communicator 110, a memory 120, and a processor 130, para. 0057; the communicator 110 communicate with an external server, an external device, a model deploy server, para. 0060) is provided between the at least one model server and the client devices (the server 100 include a communicator 110, a memory 120, and a processor 130, para. 0057; the communicator 110 communicate with an external server, an external device, a model deploy server, para. 0060); and wherein the at least one processor unit is configured for:
- accessing the neural network models (the server 100 identify whether the second neural network model of which at least one layer is different from the first neural network model is stored in the server 100 using the metadata file included in the first neural network model, para. 0086),
- wherein the at least one client device requests a neural network model and the neural network model is transmitted to the at least one client device (an external device request an update for the second neural network model to a designated model deploy server or the server 100, para. 0074; the plurality of model deploy servers can transmit the entirety of the first neural network model to an external device designated to each of the plurality of model deploy servers, or transmit information about the changed at least one layer, para. 0073),
- wherein the at least one client device requests an uploading of an edited neural network model, edited by the at least one client device, and the modified layers are transmitted to the at least one model server (an external device request an update for the second neural network model to a designated model deploy server or the server 100, para. 0074; the second neural network model 70 trained by the external device 300 to obtain a first neural network model 80 in which at least one layer is changed in the second neural network model 70 through the trained second neural network model 70; a neural network model trained through learning data in an external device such as a smart phone, and only a trained neural network model is transmitted to a central server, and a central server can update the neural network model by collecting a neural network model trained from a plurality of external devices, para. 0104).
O does not explicitly teach at least one of pluralities of pre-trained neural network models.
However, Huang teaches at least one of pluralities of pre-trained neural network models (pre-trained neural network models can be loaded, para. 0066; neural network model is a pre-trained neural network and the parameters of the pre-trained neural network make the pre-trained neural network have a minimum error; the network structure of the neural network takes layers; each of the layers in the neural network structure also includes a large number of parameters, and these parameters include but are not limited to: a weight, a bias and the like, para. 0051) in order to improve the computation efficiency of the neural networks as taught by Huang (para. 0062).
Therefore, based on O in view of Huang, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of Huang to the system of O in order to improve the computation efficiency of the neural networks as taught by Huang (para. 0062).
With respect to claim 2, O teaches The system according to claim 1, wherein the at least one processor unit is configured for:
- fetching the neural network models from the at least one model server and recording the neural network models to a memory unit (the plurality of model deploy servers store the entirety of the first neural network model and the second neural network model, or only store the at least one changed layer in the second neural network model, para. 0073),
- accessing metadata of the neural network models (the plurality of model deploy servers can transmit the entirety of the first neural network model to an external device designated to each of the plurality of model deploy servers, or transmit information about the changed at least one layer, para. 0073),
- recording the accessed metadata to a control table (the first neural network model 80 configured as a metadata file, an index file, and files for each of at least one layer, the first neural network model 80 consist of a metadata file, an index file, and a model file, and the model file include at least one layer divided through an offset table included in the index file, para. 0043).
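For illustration only, the mechanism mapped above for claim 2 (fetching neural network models, recording them to a memory unit, and recording the accessed metadata to a control table) can be sketched as follows. This is a hypothetical sketch; the names, data structures, and choice of SHA-256 are assumptions of the illustration and appear in neither the claims nor the cited references.

```python
import hashlib

# Hypothetical proxy-side storage: fetched models are recorded in a
# memory unit, and their metadata is recorded in a control table.
memory_unit = {}    # model name -> serialized model bytes
control_table = {}  # model name -> metadata (here, a content hash)

def record_model(name: str, model_bytes: bytes) -> None:
    """Record a fetched model and record its metadata in the control table."""
    memory_unit[name] = model_bytes
    control_table[name] = {"sha256": hashlib.sha256(model_bytes).hexdigest()}

def is_recorded(name: str) -> bool:
    """Query the control table to see whether a model is recorded."""
    return name in control_table

record_model("model_a", b"\x00\x01layer-data")
print(is_recorded("model_a"))  # True
print(is_recorded("model_b"))  # False
```

In this sketch, a client request would first consult `is_recorded`; a miss would trigger a fetch from the model server, consistent with the two branches mapped for claim 3 below.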
With respect to claim 3, O teaches The system according to claim 2, wherein the at least one processor unit is configured for:
- wherein a client unit requests a neural network model, querying in the control table whether the neural network model is recorded in the memory unit or not, and transmitting the neural network model to the at least one client device, wherein it is detected that the neural network model is recorded (If the external device makes a request for at least one changed layer to the model deploy server designated in the external device; if the information for the changed layer is stored in the deploy server designated in the external device, the model deploy server designated at the external device may transmit information about the modified layer to the external device, para. 0080), and fetching the neural network model from the at least one model server and transmitting the neural network model to the at least one client device, wherein it is detected that the neural network model is not recorded (if the external device makes a request for at least one changed layer to the model deploy server designated in the external device, and the information about the changed layer is not stored in the model deploy server designated in the external device, the model deploy server designated in the external device can receive information about the changed layer from the server 100 and transmit the information to the external device, para. 0080; If the second neural network model 70 is identified as not being present in the server 100, the server 100 may transmit the entirety of the first neural network model 80 to at least one of the plurality of external devices, para. 0045).
With respect to claim 4, O teaches The system according to claim 2, wherein the at least one processor unit repeats fetching the neural network models from the at least one model server and recording the neural network models to the memory unit at predetermined periods (the processor 130 automatically transmit information about at least one changed layer to the model deploy server without request by the external device; the external device receive information about whether the second neural network model has been updated at a predetermined periodic interval (e.g. one week) from the model deploy server or the server 100 designated in the external device, para. 0079).
With respect to claim 5, O teaches The system according to claim 1, wherein the at least one processor unit fetches the neural network models, transmitted to the client devices, to a cache memory (the server 100 transmit information about the at least one identified layer to the first external device 300-1 storing the second neural network model 70, para. 0047).
With respect to claim 6, O teaches The system according to claim 5, wherein the at least one processor unit applies a parsing process to the neural network models, fetched to the cache memory, in order for the neural network models to be used in the future (If the at least one changed layer is identified, the server 100 may extract the data for the changed layer through the offset table from a file including the model data 53 or the model data 56 and may transmit the extracted data, index data 52, 55 and the metadata 51, 54 to an external server or model deploy server; when the server 100 transmits information about the changed layer to an external device using the neural network model 50, 50-1, para. 0095; to the first external device 300-1 storing the second neural network model, para. 0047).
With respect to claim 7, O teaches The system according to claim 1, wherein the at least one processor unit transmits only the layers of the neural network model to the at least one client device, wherein the layers of the neural network model are changed after a pre-selected date, as requested by the at least one client device (an external device request an update for the second neural network model to a designated model deploy server or the server 100, para. 0074; the plurality of model deploy servers can transmit the entirety of the first neural network model to an external device designated to each of the plurality of model deploy servers, or transmit information about the changed at least one layer, para. 0073; the external device receive information about whether the second neural network model has been updated at a predetermined periodic interval (e.g. one week) from the model deploy server designated in the external device, para. 0079).
With respect to claim 8, O teaches The system according to claim 1, wherein the client device transmits the edited neural network model to the at least one processor unit (the second neural network model 70 trained by the external device 300 to obtain a first neural network model 80 in which at least one layer is changed in the second neural network model 70 through the trained second neural network model 70; a neural network model trained through learning data in an external device such as a smart phone, and only a trained neural network model is transmitted to a central server, and a central server can update the neural network model by collecting a neural network model trained from a plurality of external devices, para. 0104).
With respect to claim 11, O teaches The system according to claim 7, wherein the at least one client device transmits hash values of the layers to the at least one processor unit, which compares the hash values with the hash values of the layers (when the second neural network is stored in the server 100, the server 100 can compare the second neural network model with the first neural network model to identify the at least one changed layer, the server 100 can identify the at least one changed layer by identifying a hash value for at least one changed layer through an index file included in the first neural network model, para. 0119).
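For illustration only, the hash-based identification of changed layers cited from O (para. 0119) can be sketched as follows; the per-layer SHA-256 hashing and the names below are assumptions of the illustration, not O's implementation or the claim language:

```python
import hashlib

def layer_hashes(layers: dict) -> dict:
    """Compute a hash value for each named layer's raw bytes."""
    return {name: hashlib.sha256(data).hexdigest() for name, data in layers.items()}

def changed_layers(old: dict, new: dict) -> list:
    """Identify layers whose hash differs between two model versions."""
    old_h, new_h = layer_hashes(old), layer_hashes(new)
    return sorted(name for name in new_h if old_h.get(name) != new_h[name])

old_model = {"conv1": b"aaa", "fc1": b"bbb"}
new_model = {"conv1": b"aaa", "fc1": b"ccc"}  # only fc1 retrained
print(changed_layers(old_model, new_model))  # ['fc1']
```

Comparing hashes rather than layer contents lets only the changed layers (here `fc1`) be requested and transmitted, which is the bandwidth-saving behavior the cited passages describe.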
With respect to claim 12, O teaches The system according to claim 1, wherein the at least one processor unit (when the second neural network is stored in the server 100, the server 100 can compare the second neural network model with the first neural network model to identify the at least one changed layer, the server 100 can identify the at least one changed layer by identifying a hash value for at least one changed layer through an index file included in the first neural network model, para. 0119) is configured to receive a request indicating that the edited neural network model, changed by the at least one client device, will be uploaded to the at least one model server (an external device request an update for the second neural network model to a designated model deploy server or the server 100, para. 0074; the second neural network model 70 trained by the external device 300 to obtain a first neural network model 80 in which at least one layer is changed in the second neural network model 70 through the trained second neural network model 70; a neural network model trained through learning data in an external device such as a smart phone, and only a trained neural network model is transmitted to a central server, and a central server can update the neural network model by collecting a neural network model trained from a plurality of external devices, para. 0104), to query the hash values of the layers from the at least one client device (when the second neural network is stored in the server 100, the server 100 can compare the second neural network model with the first neural network model to identify the at least one changed layer, the server 100 can identify the at least one changed layer by identifying a hash value for at least one changed layer through an index file included in the first neural network model, para. 0119) and to request transmitting of at least one of the modified layers from the at least one client device (an external device request an update for the second neural network model to a designated model deploy server or the server 100, para. 0074; the second neural network model 70 trained by the external device 300 to obtain a first neural network model 80 in which at least one layer is changed in the second neural network model 70 through the trained second neural network model 70; a neural network model trained through learning data in an external device such as a smart phone, and only a trained neural network model is transmitted to a central server, and a central server can update the neural network model by collecting a neural network model trained from a plurality of external devices, para. 0104).
With respect to claim 13, O teaches The system according to claim 1, wherein the at least one processor unit (when the second neural network is stored in the server 100, the server 100 can compare the second neural network model with the first neural network model to identify the at least one changed layer, the server 100 can identify the at least one changed layer by identifying a hash value for at least one changed layer through an index file included in the first neural network model, para. 0119) is configured to receive a request indicating that the neural network model, changed by the at least one client device, will be uploaded to the at least one model server (an external device request an update for the second neural network model to a designated model deploy server or the server 100, para. 0074; the second neural network model 70 trained by the external device 300 to obtain a first neural network model 80 in which at least one layer is changed in the second neural network model 70 through the trained second neural network model 70; a neural network model trained through learning data in an external device such as a smart phone, and only a trained neural network model is transmitted to a central server, and a central server can update the neural network model by collecting a neural network model trained from a plurality of external devices, para. 0104), to fetch the neural network model (the plurality of model deploy servers store the entirety of the first neural network model and the second neural network model, or only store the at least one changed layer in the second neural network model, para. 0073), and to determine the hash value of the layers (when the second neural network is stored in the server 100, the server 100 can compare the second neural network model with the first neural network model to identify the at least one changed layer, the server 100 can identify the at least one changed layer by identifying a hash value for at least one changed layer through an index file included in the first neural network model, para. 0119) and to determine the changed layers (an external device request an update for the second neural network model to a designated model deploy server or the server 100, para. 0074; the second neural network model 70 trained by the external device 300 to obtain a first neural network model 80 in which at least one layer is changed in the second neural network model 70 through the trained second neural network model 70; a neural network model trained through learning data in an external device such as a smart phone, and only a trained neural network model is transmitted to a central server, and a central server can update the neural network model by collecting a neural network model trained from a plurality of external devices, para. 0104).
With respect to claim 14, O teaches The system according to claim 1, wherein the proxy unit communicates with the at least one model server (the server 100 include a communicator 110, a memory 120, and a processor 130, para. 0057; the communicator 110 communicate with an external server, an external device, a model deploy server, para. 0060).
With respect to claim 15, O teaches The system according to claim 1, wherein the proxy unit (the server 100 include a communicator 110, a memory 120, and a processor 130, para. 0057; the communicator 110 communicate with an external server, an external device, a model deploy server, para. 0060).
With respect to claim 17, O teaches A model management method for a system comprising at least one model server (the server 100 include a communicator 110, a memory 120, and a processor 130, para. 0057), wherein the at least one model server is configured to store pluralities of neural network models comprising layers (the second neural network model stored in the memory 120; when the first neural network model is obtained, the obtained first neural network model stored in the memory 120, para. 0062; the communicator 110 can transmit the first neural network model to an external device and a model deploy server or transmit information on at least one layer of the layers included in the first neural network model to an external device and a model deploy server, para. 0060) and to upload the pluralities of neural network models to the model database (the communicator 110 receive a first neural network model from an external server or an external device, para. 0060), wherein the following steps are provided:
- accessing the neural network models (the server 100 identify whether the second neural network model of which at least one layer is different from the first neural network model is stored in the server 100 using the metadata file included in the first neural network model, para. 0086),
- wherein the at least one client device requests a neural network model and the neural network model is transmitted to the at least one client device (an external device request an update for the second neural network model to a designated model deploy server or the server 100, para. 0074; the plurality of model deploy servers can transmit the entirety of the first neural network model to an external device designated to each of the plurality of model deploy servers, or transmit information about the changed at least one layer, para. 0073),
- wherein the at least one client device requests an uploading of an edited neural network model, edited by the at least one client device, and the modified layers are transmitted to the at least one model server (an external device request an update for the second neural network model to a designated model deploy server or the server 100, para. 0074; the second neural network model 70 trained by the external device 300 to obtain a first neural network model 80 in which at least one layer is changed in the second neural network model 70 through the trained second neural network model 70; a neural network model trained through learning data in an external device such as a smart phone, and only a trained neural network model is transmitted to a central server, and a central server can update the neural network model by collecting a neural network model trained from a plurality of external devices, para. 0104).
O does not explicitly teach at least one of pluralities of pre-trained neural network models.
However, Huang teaches at least one of pluralities of pre-trained neural network models (pre-trained neural network models can be loaded, para. 0066; neural network model is a pre-trained neural network and the parameters of the pre-trained neural network make the pre-trained neural network have a minimum error; the network structure of the neural network takes layers; each of the layers in the neural network structure also includes a large number of parameters, and these parameters include but are not limited to: a weight, a bias and the like, para. 0051) in order to improve the computation efficiency of the neural networks as taught by Huang (para. 0062).
Therefore, based on O in view of Huang, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of Huang to the method of O in order to improve the computation efficiency of the neural networks as taught by Huang (para. 0062).
Claims 9-10, 16, 18 and 26-27 are rejected under 35 U.S.C. 103 as being unpatentable over O et al. (US 2021/0168195 A1), hereinafter referred to as O, in view of Huang et al. (US 2023/0273826 A1), hereinafter referred to as Huang, and further in view of Hada et al. (US 2020/0380306 A1), hereinafter referred to as Hada.
With respect to claim 9, O in view of Huang teaches The system according to claim 1 as described above,
O in view of Huang does not explicitly teach wherein values of predetermined parameters of the modified layers are determined by the at least one client device and the modified layers are transmitted by the at least one processor unit according to a priority order formed according to the values of the predetermined parameters.
However, Hada teaches wherein values of predetermined parameters of the modified layers are determined by the at least one client device and the modified layers are transmitted by the at least one processor unit according to a priority order formed according to the values of the predetermined parameters (once the re-trained neural network model is deployed on the edge device 206, a part of the training data used to verify accuracy of the re-trained neural network model, the model verifying module 222 of the edge device 206 check the accuracy of the re-trained neural network model by running test data on the deployed re-trained neural network model; the model verifying module 222 further compute a F-Score, para. 0091) in order to compute the resource usage efficiency as taught by Hada (para. 0093).
Therefore, based on O in view of Huang, and further in view of Hada, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of Hada to the system of O in view of Huang in order to compute the resource usage efficiency as taught by Hada (para. 0093).
With respect to claim 10, O in view of Huang, and further in view of Hada, teaches The system according to claim 9 as described above,
Further, Hada teaches wherein the predetermined parameter is at least one of a size and a layer accuracy (once the re-trained neural network model is deployed on the edge device 206, a part of the training data used to verify accuracy of the re-trained neural network model, the model verifying module 222 of the edge device 206 check the accuracy of the re-trained neural network model by running test data on the deployed re-trained neural network model; the model verifying module 222 further compute a F-Score, para. 0091) in order to compute the resource usage efficiency as taught by Hada (para. 0093).
Therefore, based on O in view of Huang, and further in view of Hada, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of Hada to the system of O in view of Huang in order to compute the resource usage efficiency as taught by Hada (para. 0093).
With respect to claim 16, O in view of Huang teaches The system according to claim 1 as described above,
O in view of Huang does not explicitly teach wherein the proxy unit is an edge device and/or the client devices are Internet of Things (IoT) devices.
However, Hada teaches wherein the proxy unit is an edge device and/or the client devices are Internet of Things (IoT) devices (a system 100 for implementing neural network models on edge devices in an Internet of Things (IoT) network, para. 0020) in order to compute the resource usage efficiency as taught by Hada (para. 0093).
Therefore, based on O in view of Huang, and further in view of Hada, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of Hada to the system of O in view of Huang in order to compute the resource usage efficiency as taught by Hada (para. 0093).
With respect to claim 18, O teaches The model management method according to claim 17 (the server 100 identify whether the second neural network model of which at least one layer is different from the first neural network model is stored in the server 100 using the metadata file included in the first neural network model, para. 0086), wherein the following steps are provided:
- fetching the neural network models from the at least one model server and recording the neural network models in a memory unit (the plurality of model deploy servers store the entirety of the first neural network model and the second neural network model, or only store the at least one changed layer in the second neural network model, para. 0073),
- accessing metadata of the neural network models (the plurality of model deploy servers can transmit the entirety of the first neural network model to an external device designated to each of the plurality of model deploy servers, or transmit information about the changed at least one layer, para. 0073),
- recording the accessed metadata to a control table (the first neural network model 80 configured as a metadata file, an index file, and files for each of at least one layer, the first neural network model 80 consist of a metadata file, an index file, and a model file, and the model file include at least one layer divided through an offset table included in the index file, para. 0043);
wherein a client unit requests a neural network model, it is queried in the control table whether the neural network model is recorded in the memory unit or not, and the neural network model is transmitted to the at least one client device, wherein it is detected that the neural network model is recorded (If the external device makes a request for at least one changed layer to the model deploy server designated in the external device; if the information for the changed layer is stored in the deploy server designated in the external device, the model deploy server designated at the external device may transmit information about the modified layer to the external device, para. 0080), and the neural network model is fetched from the at least one model server and the neural network model is transmitted to the at least one client device, wherein it is detected that the neural network model is not recorded (if the external device makes a request for at least one changed layer to the model deploy server designated in the external device, and the information about the changed layer is not stored in the model deploy server designated in the external device, the model deploy server designated in the external device can receive information about the changed layer from the server 100 and transmit the information to the external device, para. 0080; If the second neural network model 70 is identified as not being present in the server 100, the server 100 may transmit the entirety of the first neural network model 80 to at least one of the plurality of external devices, para. 0045);
wherein the step of "fetching the neural network models from the at least one model server and recording the neural network models in the memory unit" is repeated at predetermined periods and the neural network models in the memory unit are updated (the processor 130 automatically transmit information about at least one changed layer to the model deploy server without request by the external device; the external device receive information about whether the second neural network model has been updated at a predetermined periodic interval (e.g. one week) from the model deploy server or the server 100 designated in the external device, para. 0079);
wherein the neural network models, transmitted to the client devices, are fetched to a cache memory (the server 100 transmit information about the at least one identified layer to the first external device 300-1 storing the second neural network model 70, para. 0047);
wherein a parsing process is applied to the neural network models, fetched to the cache memory, in order for the neural network models to be used in the future (If the at least one changed layer is identified, the server 100 may extract the data for the changed layer through the offset table from a file including the model data 53 or the model data 56 and may transmit the extracted data, index data 52, 55 and the metadata 51, 54 to an external server or model deploy server; when the server 100 transmits information about the changed layer to an external device using the neural network model 50, 50-1, para. 0095; to the first external device 300-1 storing the second neural network model, para. 0047);
wherein only the layers of the neural network model are transmitted to the at least one client device, wherein the layers of the neural network model are subject to change after a pre-selected date as requested by the at least one client device (an external device request an update for the second neural network model to a designated model deploy server or the server 100, para. 0074; the plurality of model deploy servers can transmit the entirety of the first neural network model to an external device designated to each of the plurality of model deploy servers, or transmit information about the changed at least one layer, para. 0073; the external device receive information about whether the second neural network model has been updated at a predetermined periodic interval (e.g. one week) from the model deploy server designated in the external device, para. 0079);
Further, Hada teaches
wherein values of predetermined parameters of the modified layers are determined by the at least one client device and the modified layers are transmitted to the at least one model server according to a priority order formed according to the values of the predetermined parameters (once the re-trained neural network model is deployed on the edge device 206, a part of the training data used to verify accuracy of the re-trained neural network model, the model verifying module 222 of the edge device 206 check the accuracy of the re-trained neural network model by running test data on the deployed re-trained neural network model; the model verifying module 222 further compute a F-Score, para. 0091);
wherein the predetermined parameter is at least one of a size and a layer accuracy (once the re-trained neural network model is deployed on the edge device 206, a part of the training data is used to verify accuracy of the re-trained neural network model; the model verifying module 222 of the edge device 206 checks the accuracy of the re-trained neural network model by running test data on the deployed re-trained neural network model; the model verifying module 222 further computes an F-Score, para. 0091) in order to compute the resource usage efficiency as taught by Hada (para. 0093).
Therefore, based on O in view of Huang, and further in view of Hada, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of Hada to the method of O in view of Huang in order to compute the resource usage efficiency as taught by Hada (para. 0093).
With respect to claim 26, O teaches The model management method according to claim 18, wherein the following steps are provided:
- determining the hash values of the layers (when the second neural network is stored in the server 100, the server 100 can compare the second neural network model with the first neural network model to identify the at least one changed layer; the server 100 can identify the at least one changed layer by identifying a hash value for the at least one changed layer through an index file included in the first neural network model, para. 0119),
- detecting the changed layers by querying the hash information of the layers at least one client device (when the second neural network is stored in the server 100, the server 100 can compare the second neural network model with the first neural network model to identify the at least one changed layer; the server 100 can identify the at least one changed layer by identifying a hash value for the at least one changed layer through an index file included in the first neural network model, para. 0119), wherein the neural network model at least one client device at least one model server (an external device requests an update for the second neural network model to a designated model deploy server or the server 100, para. 0074; the second neural network model 70 is trained by the external device 300 to obtain a first neural network model 80 in which at least one layer is changed in the second neural network model 70 through the trained second neural network model 70; a neural network model is trained through learning data in an external device such as a smart phone, and only a trained neural network model is transmitted to a central server, and a central server can update the neural network model by collecting a neural network model trained from a plurality of external devices, para. 0104), and requesting from the processor unit (an external device requests an update for the second neural network model to a designated model deploy server or the server 100, para. 0074; the second neural network model 70 is trained by the external device 300 to obtain a first neural network model 80 in which at least one layer is changed in the second neural network model 70 through the trained second neural network model 70; a neural network model is trained through learning data in an external device such as a smart phone, and only a trained neural network model is transmitted to a central server, and a central server can update the neural network model by collecting a neural network model trained from a plurality of external devices, para. 0104).
With respect to claim 27, O teaches The model management method according to claim 18, wherein (when the second neural network is stored in the server 100, the server 100 can compare the second neural network model with the first neural network model to identify the at least one changed layer; the server 100 can identify the at least one changed layer by identifying a hash value for the at least one changed layer through an index file included in the first neural network model, para. 0119), the neural network model (the plurality of model deploy servers store the entirety of the first neural network model and the second neural network model, or only store the at least one changed layer in the second neural network model, para. 0073), wherein the neural network model is desired to be transmitted, wherein the neural network model at least one client device at least one model server (an external device requests an update for the second neural network model to a designated model deploy server or the server 100, para. 0074; the second neural network model 70 is trained by the external device 300 to obtain a first neural network model 80 in which at least one layer is changed in the second neural network model 70 through the trained second neural network model 70; a neural network model is trained through learning data in an external device such as a smart phone, and only a trained neural network model is transmitted to a central server, and a central server can update the neural network model by collecting a neural network model trained from a plurality of external devices, para. 0104), the hash value of the layers (when the second neural network is stored in the server 100, the server 100 can compare the second neural network model with the first neural network model to identify the at least one changed layer; the server 100 can identify the at least one changed layer by identifying a hash value for the at least one changed layer through an index file included in the first neural network model, para. 0119) and the determined changed layers (an external device requests an update for the second neural network model to a designated model deploy server or the server 100, para. 0074; the second neural network model 70 is trained by the external device 300 to obtain a first neural network model 80 in which at least one layer is changed in the second neural network model 70 through the trained second neural network model 70; a neural network model is trained through learning data in an external device such as a smart phone, and only a trained neural network model is transmitted to a central server, and a central server can update the neural network model by collecting a neural network model trained from a plurality of external devices, para. 0104).
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HAO NGUYEN whose telephone number is (571)272-2666. The examiner can normally be reached on Monday through Friday from 7:30 A.M. to 4:00 P.M. (EST).
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Joon H. Hwang can be reached on 571-272-4036. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/H.H.N/Examiner, Art Unit 2447
December 18, 2025
/JOON H HWANG/Supervisory Patent Examiner, Art Unit 2447