Prosecution Insights
Last updated: April 19, 2026
Application No. 17/723,922

ELECTRONIC DEVICE, USER TERMINAL, AND METHOD FOR RUNNING SCALABLE DEEP LEARNING NETWORK

Final Rejection (§101, §103, §112)
Filed
Apr 19, 2022
Examiner
SUSSMAN MOSS, JACOB ZACHARY
Art Unit
2122
Tech Center
2100 — Computer Architecture & Software
Assignee
Samsung Electronics Co., Ltd.
OA Round
2 (Final)
Grant Probability: 14% (At Risk)
OA Rounds: 3-4
To Grant: 3y 3m
With Interview: -6%

Examiner Intelligence

Career Allow Rate: 14% (grants only 14% of cases; 1 granted / 7 resolved; -40.7% vs TC avg)
Interview Lift: -20.0% (minimal lift across resolved cases with interview)
Avg Prosecution: 3y 3m (typical timeline)
Total Applications: 33 across all art units (26 currently pending)

Statute-Specific Performance

§101: 37.3% (-2.7% vs TC avg)
§103: 35.2% (-4.8% vs TC avg)
§102: 11.9% (-28.1% vs TC avg)
§112: 15.5% (-24.5% vs TC avg)
Tech Center averages are estimates • Based on career data from 7 resolved cases

Office Action

§101 §103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is responsive to the Application filed on April 19th, 2022. Claims 1-15 are pending in the case. Claims 1 and 11 are independent claims. This action is in response to amendments filed November 18th, 2025, in which claims 1-4, 6, 8-12, and 15 have been amended. No claims have been cancelled nor added. The amendments have been entered, and claims 1-15 are currently pending in the case. Claims 1 and 11 are independent claims.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-15 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 1 recites the limitation "reconstruct a deep learning network" in line 18 of the claim. It is unclear whether this deep learning network is the same as the deep learning network referred to in lines 6 and 8, or a new second deep learning network. For examination purposes this limitation has been interpreted as “reconstruct the deep learning network”.
Claim 1 recites the limitation "the plurality of blocks comprises parameter related to at least one layer included in a block and connection information" in lines 9-10. It is unclear what the parameter is and whether the parameter relates to just at least one layer in a block and the block itself comprises connection information, or whether the parameter relates to both the layer in a block and connection information. Further, there is insufficient antecedent basis for this limitation in the claim.

Claim 1 recites the limitation “the plurality of blocks comprises parameter related to at least one layer included in a block” in line 9 of the claim. It is unclear whether this “a block” is one of the plurality of blocks, or a new unrelated block. For examination purposes this limitation has been interpreted as “the plurality of blocks comprises a parameter related to at least one layer included in a block of the plurality of blocks”.

Claim 11 recites the limitation "reconstruct a deep learning network" in line 11 of the claim. It is unclear whether this deep learning network is the same as the deep learning network referred to in lines 6 and 8, or a new second deep learning network. For examination purposes this limitation has been interpreted as “reconstruct the deep learning network”.

Claim 11 recites the limitation "the plurality of blocks comprises parameter related to at least one layer included in a block and connection information" in lines 14-15. It is unclear what the parameter is and whether the parameter relates to just at least one layer in a block and the block itself comprises connection information, or whether the parameter relates to both the layer in a block and connection information. Further, there is insufficient antecedent basis for this limitation in the claim.

Claim 11 recites the limitation “the plurality of blocks comprises parameter related to at least one layer included in a block” in line 14 of the claim. It is unclear whether this “a block” is one of the plurality of blocks, or a new unrelated block. For examination purposes this limitation has been interpreted as “the plurality of blocks comprises a parameter related to at least one layer included in a block of the plurality of blocks”.

Claims 2-10 and 12-15 are rejected for being dependent on a rejected base claim without curing any of the deficiencies.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-15 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Regarding claim 1:

Step 1: Claim 1 is directed to an electronic device, therefore it falls under the statutory category of machine.

Step 2A Prong 1: The claim recites, in part:

“determine scalability of a … network including a plurality of layers” this encompasses the mental determining of the scalability of an observed network.

“divide the … network into a plurality of blocks, based on the scalability, wherein each of the plurality of blocks comprises parameter related to at least one layer included in a block and connection information, the connection information including information on connection between layers included in a block and information on connection between blocks” this encompasses the mental division of an observed network into blocks based on an observed scalability.

“select at least one block among the plurality of blocks, based on the received information” this encompasses the mental selection of an observed block based on observed information.

“reconstruct a…network by using the at least one block” this encompasses the mental reconstruction of a network using an observed block.
Step 2A Prong 2: The judicial exception is not integrated into a practical application; the remaining limitations of the claim are as follows:

“a communication circuit; a processor; and a memory operatively connected to the processor, wherein the memory stores instructions”, “deep learning network” (line 6 of the claim), “deep learning network” (line 8 of the claim): these limitations are an additional element that generally links the use of the judicial exception to a particular technological environment or field of use. See MPEP § 2106.05(h).

“that cause, when executed, the processor to”, “to cause the external user terminal to”: the limitations are an additional element that amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or merely uses a computer in its ordinary capacity as a tool to perform an existing process. See MPEP § 2106.05(f)(2).

“receive information about processing capability of an external user terminal from the external user terminal”, “transmit the selected at least one block to the external user terminal”, “received from the electronic device”: these limitations are an additional element that amounts to adding insignificant extra-solution activity to the judicial exception. See MPEP § 2106.05(g).

Step 2B: The additional elements, taken individually and in combination, do not provide an inventive concept of significantly more than the abstract idea itself for the reasons set forth in step 2A prong 2 above. Therefore, the claim is ineligible.

“receive information about processing capability of an external user terminal from the external user terminal”, “transmit the selected at least one block to the external user terminal”, “received from the electronic device”: these limitations are an additional element that amounts to adding insignificant extra-solution activity to the judicial exception. See MPEP § 2106.05(g). Furthermore, the additional element is directed to receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362. See MPEP § 2106.05(d)(II).

Regarding claim 2, the rejection of claim 1 is incorporated and further:

Step 2A Prong 1: The claim recites, in part: “determine the scalability of the … network, based on a number of scalable structures of the … network” this encompasses the mental determination of the scalability of an observed network, based on a number of observed structures.

Step 2A Prong 2: The judicial exception is not integrated into a practical application; the remaining limitations of the claim are as follows: “deep learning network”: the limitation is an additional element that generally links the use of the judicial exception to a particular technological environment or field of use. See MPEP § 2106.05(h).

Step 2B: The additional elements, taken individually and in combination, do not provide an inventive concept of significantly more than the abstract idea itself for the reasons set forth in step 2A prong 2 above. Therefore, the claim is ineligible.

Regarding claim 3, the rejection of claim 1 is incorporated and further:

Step 2A Prong 1: A continuation of the abstract idea identified in the parent claim.

Step 2A Prong 2: The judicial exception is not integrated into a practical application; the remaining limitations of the claim are as follows: “the information about the processing capability of the external user terminal includes at least one of information about operation processing capability of the user terminal or a communication network speed”: the limitation is an additional element that generally links the use of the judicial exception to a particular technological environment or field of use. See MPEP § 2106.05(h).
Step 2B: The additional elements, taken individually and in combination, do not provide an inventive concept of significantly more than the abstract idea itself for the reasons set forth in step 2A prong 2 above. Therefore, the claim is ineligible.

Regarding claim 4, the rejection of claim 1 is incorporated and further:

Step 2A Prong 1: The claim recites, in part:

“decide a … structure suitable for the external user terminal from among scalable structures …, based on the received information” this encompasses the mental deciding of a network structure suitable from among observed structures.

“select at least one block corresponding to the decided … structure from among the plurality of blocks” this encompasses the mental selection of a block from among other observed blocks.

Step 2A Prong 2: The judicial exception is not integrated into a practical application; the remaining limitations of the claim are as follows: “deep learning network”: the limitation is an additional element that generally links the use of the judicial exception to a particular technological environment or field of use. See MPEP § 2106.05(h).

Step 2B: The additional elements, taken individually and in combination, do not provide an inventive concept of significantly more than the abstract idea itself for the reasons set forth in step 2A prong 2 above. Therefore, the claim is ineligible.

Regarding claim 5, the rejection of claim 1 is incorporated and further:

Step 2A Prong 1: The claim recites, in part: “the plurality of blocks contain information about a … structure for each of the plurality of blocks, a parameter corresponding to at least one layer included in each of the plurality of blocks, and connection information between the at least one layer” a continuation of the abstract idea identified in the parent claim.

Step 2A Prong 2: The judicial exception is not integrated into a practical application; the remaining limitations of the claim are as follows: “deep learning network”: the limitation is an additional element that generally links the use of the judicial exception to a particular technological environment or field of use. See MPEP § 2106.05(h).

Step 2B: The additional elements, taken individually and in combination, do not provide an inventive concept of significantly more than the abstract idea itself for the reasons set forth in step 2A prong 2 above. Therefore, the claim is ineligible.

Regarding claim 6, the rejection of claim 1 is incorporated and further:

Step 2A Prong 1: The claim recites, in part: “output a number of result values corresponding to the determined scalability” this encompasses the mental output of result values based on an observed scalability. Further, this limitation is a mathematical concept.

Step 2A Prong 2: The judicial exception is not integrated into a practical application; the remaining limitations of the claim are as follows: “train the deep learning network”: the limitation is an additional element that amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or merely uses a computer in its ordinary capacity as a tool to perform an existing process. See MPEP § 2106.05(f)(2).

Step 2B: The additional elements, taken individually and in combination, do not provide an inventive concept of significantly more than the abstract idea itself for the reasons set forth in step 2A prong 2 above. Therefore, the claim is ineligible.

Regarding claim 7, the rejection of claim 1 is incorporated and further:

Step 2A Prong 1: A continuation of the abstract idea identified in the parent claim.
Step 2A Prong 2: The judicial exception is not integrated into a practical application; the remaining limitations of the claim are as follows: “the deep learning network includes at least one of a deep neural network (DNN), a convolutional neural network (CNN), a recurrent neural network (RNN), an auto encoder, a generative adversarial network (GAN), or a deep belief network (DBN)”: the limitation is an additional element that generally links the use of the judicial exception to a particular technological environment or field of use. See MPEP § 2106.05(h).

Step 2B: The additional elements, taken individually and in combination, do not provide an inventive concept of significantly more than the abstract idea itself for the reasons set forth in step 2A prong 2 above. Therefore, the claim is ineligible.

Regarding claim 8, the rejection of claim 1 is incorporated and further:

Step 2A Prong 1: The claim recites, in part: “add a new layer to a specific block among the plurality of blocks, and wherein the added new layer is a layer not included in the plurality of layers” this encompasses the mental addition of a new layer to a specific block amongst observed layers, wherein the layer is not one of the observed layers.

Step 2A Prong 2: The claim does not recite any additional limitations, and thus does not further recite any additional elements that integrate the judicial exception into a practical application or amount to significantly more.

Regarding claim 9, the rejection of claim 1 is incorporated and further:

Step 2A Prong 1: A continuation of the abstract idea identified in the parent claim.

Step 2A Prong 2: The judicial exception is not integrated into a practical application; the remaining limitations of the claim are as follows: “receive a request for updating a specific block from the external user terminal, and transmit, in response to the request, an updated specific block to the external user terminal”: the limitation is an additional element that amounts to adding insignificant extra-solution activity to the judicial exception. See MPEP § 2106.05(g).

Step 2B: The additional elements, taken individually and in combination, do not provide an inventive concept of significantly more than the abstract idea itself for the reasons set forth in step 2A prong 2 above. Therefore, the claim is ineligible.

“receive a request for updating a specific block from the external user terminal, and transmit, in response to the request, an updated specific block to the external user terminal”: the limitation is an additional element that amounts to adding insignificant extra-solution activity to the judicial exception. See MPEP § 2106.05(g). Furthermore, the additional element is directed to receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362. See MPEP § 2106.05(d)(II).

Regarding claim 10, the rejection of claim 1 is incorporated and further:

Step 2A Prong 1: The claim recites, in part: “generate a plurality of different blocks each including at least one layer among the plurality of layers, based on the scalability, and wherein respective layers included in the plurality of blocks overlap in part with each other.” this encompasses the mental generation of blocks including overlapping respective layers based on an observed scalability.

Step 2A Prong 2: The claim does not recite any additional limitations, and thus does not further recite any additional elements that integrate the judicial exception into a practical application or amount to significantly more.
Regarding claim 11:

Step 1: Claim 11 is directed to a user terminal, therefore it falls under the statutory category of machine.

Step 2A Prong 1: The claim recites, in part:

“reconstruct a …network by using the at least one block” this encompasses the mental reconstruction of a network based on an observed block.

“the at least one block is generated…by dividing the…network into a plurality of blocks based on scalability, each of the plurality of blocks comprising parameter related to at least one layer included in a block and connection information, the connection information including information on connection between layers included in a block and information on connection between blocks” this encompasses the mental creation of a block by dividing an observed network into blocks based on an observed scalability.

Step 2A Prong 2: The judicial exception is not integrated into a practical application; the remaining limitations of the claim are as follows:

“a communication circuit; a processor; and a memory operatively connected to the processor”, “deep learning network” (line 10 of the claim), “deep learning network” (line 12 of the claim): the limitation is an additional element that generally links the use of the judicial exception to a particular technological environment or field of use. See MPEP § 2106.05(h).

“transmit information about processing capability of the user terminal to an external electronic device, receive at least one block including at least one of a plurality of layers of a deep learning network from the external electronic device”: the limitation is an additional element that amounts to adding insignificant extra-solution activity to the judicial exception. See MPEP § 2106.05(g).

“the processor is configured to”, “by the external electronic device”: the limitations are an additional element that amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or merely uses a computer in its ordinary capacity as a tool to perform an existing process. See MPEP § 2106.05(f)(2).

Step 2B: The additional elements, taken individually and in combination, do not provide an inventive concept of significantly more than the abstract idea itself for the reasons set forth in step 2A prong 2 above. Therefore, the claim is ineligible.

“transmit information about processing capability of the user terminal to an external electronic device, receive at least one block including at least one of a plurality of layers of a deep learning network from the external electronic device”: the limitation is an additional element that amounts to adding insignificant extra-solution activity to the judicial exception. See MPEP § 2106.05(g). Furthermore, the additional element is directed to receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362. See MPEP § 2106.05(d)(II).

Regarding claim 12, the rejection of claim 11 is incorporated and further:

Step 2A Prong 1: The claim recites, in part: “analyze data” this encompasses the mental analysis of observed data.

Step 2A Prong 2: The judicial exception is not integrated into a practical application; the remaining limitations of the claim are as follows: “through the reconstructed deep learning network”: the limitation is an additional element that amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or merely uses a computer in its ordinary capacity as a tool to perform an existing process. See MPEP § 2106.05(f)(2).
Step 2B: The additional elements, taken individually and in combination, do not provide an inventive concept of significantly more than the abstract idea itself for the reasons set forth in step 2A prong 2 above. Therefore, the claim is ineligible.

Regarding claim 13, the rejection of claim 11 is incorporated and further:

Step 2A Prong 1: The claim recites, in part: “the at least one block contains information about a … structure for each of the at least one block, a parameter corresponding to at least one layer included in each of the at least one block, and connection information between the at least one layer” a continuation of the abstract idea identified in the parent claim.

Step 2A Prong 2: The judicial exception is not integrated into a practical application; the remaining limitations of the claim are as follows: “deep learning network”: the limitation is an additional element that generally links the use of the judicial exception to a particular technological environment or field of use. See MPEP § 2106.05(h).

Step 2B: The additional elements, taken individually and in combination, do not provide an inventive concept of significantly more than the abstract idea itself for the reasons set forth in step 2A prong 2 above. Therefore, the claim is ineligible.

Regarding claim 14, the rejection of claim 11 is incorporated and further:

Step 2A Prong 1: A continuation of the abstract idea identified in the parent claim.

Step 2A Prong 2: The judicial exception is not integrated into a practical application; the remaining limitations of the claim are as follows: “the information about the processing capability of the user terminal includes at least one of information about operation processing capability of the user terminal or a communication network speed”: the limitation is an additional element that generally links the use of the judicial exception to a particular technological environment or field of use. See MPEP § 2106.05(h).

Step 2B: The additional elements, taken individually and in combination, do not provide an inventive concept of significantly more than the abstract idea itself for the reasons set forth in step 2A prong 2 above. Therefore, the claim is ineligible.

Regarding claim 15, the rejection of claim 11 is incorporated and further:

Step 2A Prong 1: A continuation of the abstract idea identified in the parent claim.

Step 2A Prong 2: The judicial exception is not integrated into a practical application; the remaining limitations of the claim are as follows: “in response to a need to update a specific block among the at least one block, transmit a request for updating the specific block to the external electronic device”, “receive an updated specific block from the external electronic device”: these limitations are an additional element that amounts to adding insignificant extra-solution activity to the judicial exception. See MPEP § 2106.05(g).

Step 2B: The additional elements, taken individually and in combination, do not provide an inventive concept of significantly more than the abstract idea itself for the reasons set forth in step 2A prong 2 above. Therefore, the claim is ineligible.

“in response to a need to update a specific block among the at least one block, transmit a request for updating the specific block to the external electronic device”, “receive an updated specific block from the external electronic device”: these limitations are an additional element that amounts to adding insignificant extra-solution activity to the judicial exception. See MPEP § 2106.05(g). Furthermore, the additional element is directed to receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362. See MPEP § 2106.05(d)(II).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-15 are rejected under 35 U.S.C. 103 as being unpatentable over Gomez et al. (US 20160098646 A1), hereinafter Gomez, in view of Biesemann et al. (US 20190180189 A1), hereinafter Biesemann, and further in view of Zhu et al. (US 20200050939 A1) (English filing of Zhu et al. (CN 109919308 A), 2017), hereinafter Zhu.

Regarding claim 1:

Gomez teaches an electronic device comprising: a communication circuit (Gomez, ¶51 “The various embodiments described above may be implemented using circuitry and/or software modules that interact to provide particular results.”); a processor (Gomez, ¶17 “The processor 106 may include any combination of general-purpose or special-purpose logic circuitry, such as a central processing unit (CPU), field-programmable gate array (FPGA), digital signal processor (DSP), etc.”); and a memory operatively connected to the processor, wherein the processor is configured to (Gomez, ¶51 “Such instructions may be stored on a non-transitory computer-readable medium and transferred to the processor for execution as is known in the art.”):

determine scalability of a deep learning network including a plurality of layers (Gomez, ¶22 “A boundary 126 between the first and second portions 122, 124 of the network 120 may defined by enumerating which layers 120 a-d of the deep-belief network 120 that are operating within the respective user device 102 and network server 110.” Here, which of the layers a-d are selected for the user device can be considered the scalability);

divide the deep learning network into a plurality of blocks, based on the scalability (Gomez, ¶22 “A boundary 126 between the first and second portions 122, 124 of the network 120 may defined by enumerating which layers 120 a-d of the deep-belief network 120 that are operating within the respective user device 102 and network server 110.” Here, the first and second portions can be considered the plurality of blocks);

select at least one block among the plurality of blocks, based on the received information (Gomez, ¶23 “Choice of the boundary 126 may depend on factors such as expected network performance, capability of the user device 102, and robustness of the deep learning network (e.g., ability to deal with data transmission errors and delays).”).

Gomez does not teach “receive information about processing capability of an external user terminal from the external user terminal, transmit the selected at least one block to the external user terminal”.

However, Biesemann teaches receive information about processing capability of an external user terminal from the external user terminal (Biesemann, ¶63 “At 410, a current context of the client device and/or a request associated with the identified operations associated with the execution can be determined. In particular, a determination can be as to the current network connectivity and connection strength and reliability of the client device. If the device is connected to a WiFi connection, then an online neural network execution may be proper. If, however, the device has no network connectivity or if the available network connectivity is poor or below a particular predefined threshold, then an offline neural network execution may be used.”), transmit the selected at least one block to the user terminal (Biesemann, ¶4 “Then, a representation of the trained neural network is transmitted to the client device, wherein the transmitted representation includes an offline version of the neural network model and the current configuration of the trained neural network and the obtained set of data.” Here, the transmitted configuration can be considered the at least one block and the client device can be considered the user terminal).

Gomez and Biesemann are analogous art because both references concern modifying neural networks for computation on a client device. Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Gomez’s system to incorporate the transmission of blocks as taught by Biesemann.
The motivation for doing so would have been to have updated blocks to provide better and more accurate outputs after additional training as stated in Biesemann, ¶28 “As described, these neural networks 122 can be trained based on existing and newly created data to improve those neural networks 122 to provide better and more accurate outputs after additional training and refinement.”

Gomez in view of Biesemann does not teach “wherein each of the plurality of blocks comprises parameter related to at least one layer included in a block and connection information, the connection information including information on connection between layers included in a block and information on connection between block, to cause the external user terminal to reconstruct a deep learning network by using the at least one block received from the electronic device”.

However, Zhu teaches wherein each of the plurality of blocks comprises parameter related to at least one layer included in a block and connection information (Zhu, ¶66 “Loading a corresponding target operation parameter in the target network layer corresponding to each network layer by using a preset parameter loading method of the Layer class separately according to the target operation parameter of each network layer, to obtain a target neural network model deployed in the terminal device.”), the connection information including information on connection between layers included in a block and information on connection between blocks (Zhu, ¶10 “a network layer connection module, configured to connect the target network layers by using a Net class;”), to cause the external user terminal to reconstruct a deep learning network by using the at least one block received from the electronic device (Zhu, ¶75 “Moreover, after the target network layers are connected by using the Net class, the corresponding target operation parameter converted into the predetermined format may be loaded in each target network layer, thus reconstructing the initial neural network model needing to be deployed in the terminal device.”).

Gomez in view of Biesemann and Zhu are analogous art because both references concern modifying neural networks for distribution and computation on a client device. Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Gomez/Biesemann’s system to incorporate the network connections and reconstruction of networks taught by Zhu.

The motivation for doing so would have been to have a fast deployment of a variety of networks to the external user terminals as stated in Zhu, ¶100 “The embodiment of this application can support deployment of neural network models trained by using different training frameworks into a terminal device, for example, support fast deployment of neural network models trained by using learning frameworks such as torch, Tensorflow, and caffe into the terminal device, thus reducing the usage limitation of deployment of the neural network models.”

Regarding claim 2:

Gomez in view of Biesemann in further view of Zhu teaches the electronic device of claim 1, wherein the processor is further configured to: determine the scalability of the deep learning network, based on a number of scalable structures of the deep learning network (Gomez, ¶22 “A boundary 126 between the first and second portions 122, 124 of the network 120 may defined by enumerating which layers 120 a-d of the deep-belief network 120 that are operating within the respective user device 102 and network server 110.” Here, which of the layers a-d are selected for the user device can be considered the scalability, and each layer a-d can be considered the scalable structures in light of the specification, ¶61 “The deep learning network may have a structure in which a plurality of layers performing specific operations are stacked.”).
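As mapped above, the rejection reads claim 1 as a pipeline: determine scalability, divide the network into blocks carrying per-layer parameters and connection information, select blocks based on the terminal's reported capability, and reconstruct a network from them. The sketch below is a hypothetical illustration of that pipeline only; every name, the contiguous-split rule, and the capability heuristic are invented here and are not the application's or the cited references' actual code.

```python
# Hypothetical sketch of the claimed flow: divide a layer list into blocks
# based on scalability, select blocks a device can host, and reconstruct a
# (sub)network from the selected blocks. All names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Block:
    layers: list          # layer names contained in this block
    params: dict          # parameters related to the contained layers
    intra_links: list     # connections between layers inside the block
    inter_links: list = field(default_factory=list)  # connections to other blocks

def divide_into_blocks(layer_names, scalability):
    """Split layers into `scalability` contiguous blocks (illustrative rule)."""
    size = max(1, len(layer_names) // scalability)
    blocks = []
    for i in range(0, len(layer_names), size):
        chunk = layer_names[i:i + size]
        blocks.append(Block(
            layers=chunk,
            params={name: f"weights<{name}>" for name in chunk},
            intra_links=[(a, b) for a, b in zip(chunk, chunk[1:])],
        ))
    # Record inter-block connection information between adjacent blocks.
    for prev, nxt in zip(blocks, blocks[1:]):
        prev.inter_links.append((prev.layers[-1], nxt.layers[0]))
    return blocks

def select_blocks(blocks, device_capability):
    """Pick the first N blocks a device of the given capability can host."""
    return blocks[:max(1, min(len(blocks), device_capability))]

def reconstruct(selected):
    """Rebuild an ordered layer list plus connection info from blocks,
    dropping links that point at layers outside the selection."""
    layers, links = [], []
    for b in selected:
        layers.extend(b.layers)
        links.extend(b.intra_links + b.inter_links)
    present = set(layers)
    links = [(a, c) for a, c in links if a in present and c in present]
    return layers, links

blocks = divide_into_blocks(["conv1", "conv2", "conv3", "fc1", "fc2", "fc3"], 3)
layers, links = reconstruct(select_blocks(blocks, 2))
```

In this toy run, six layers split into three blocks of two; a terminal with capability 2 receives the first two blocks, and the reconstructed subnetwork keeps only the connections whose endpoints were actually transmitted.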
Regarding claim 3: Gomez in view of Biesemann in further view of Zhu teaches The electronic device of claim 1, wherein the information about the processing capability of the external user terminal includes at least one of information about operation processing capability of the external user terminal or a communication network speed (Biesemann, ¶63 “At 410, a current context of the client device and/or a request associated with the identified operations associated with the execution can be determined. In particular, a determination can be as to the current network connectivity and connection strength and reliability of the client device. If the device is connected to a WiFi connection, then an online neural network execution may be proper. If, however, the device has no network connectivity or if the available network connectivity is poor or below a particular predefined threshold, then an offline neural network execution may be used.”). It would have been obvious to combine the teachings of Gomez, Biesemann and Zhu for the reasons set forth in connection with claim 1 above. 
Regarding claim 4: Gomez in view of Biesemann in further view of Zhu teaches The electronic device of claim 1, wherein the processor is further configured to: decide a deep learning network structure suitable for the external user terminal from among scalable structures of the deep learning network, based on the received information (Gomez, ¶23 “Choice of the boundary 126 may depend on factors such as expected network performance, capability of the user device 102, and robustness of the deep learning network (e.g., ability to deal with data transmission errors and delays).” Here, the boundary decided by the capability of the user device can be considered the structure suitable for the user terminal); and select at least one block corresponding to the decided deep learning network structure from among the plurality of blocks (Gomez, ¶49 “A first portion of the deep learning network operates on a user device and a second portion of the deep learning network operates on a network server.” Here, the first portion selected to operate on the user device can be considered the one block selected among the plurality of blocks). Regarding claim 5: Gomez in view of Biesemann in further view of Zhu teaches The electronic device of claim 1, wherein the plurality of blocks contain information about a deep learning network structure for each of the plurality of blocks, a parameter corresponding to at least one layer included in each of the plurality of blocks, and connection information between the at least one layer (Gomez, ¶38 “The layer 412 and newly started layer 408 may have transfer state data (e.g., via control units 404, 410) as needed before layer 413 is stopped. Thereafter, connections 432 and 433 can be established between layers 407, 408, and 413 as shown.” Here, the transfer state data can be considered parameter corresponding to at least one layer, further as it helps establish connections it can be considered connection information). 
Regarding claim 6: Gomez in view of Biesemann in further view of Zhu teaches The electronic device of claim 1, wherein the processor is further configured to: train the deep learning network to output a number of result values corresponding to the determined scalability (Gomez, ¶20 “Each of the layers 120 a-d may process a different level of representation (e.g., layer of abstraction) of a particular problem.” Here, each layer of abstraction can be considered a result value and as each layer may process a different level of the representation, it can be considered corresponding to scalability in light of the specification, ¶65 “For example, the scalability of the scalable deep learning network may refer to the number of result values outputted through the scalable deep learning network.”). Regarding claim 7: Gomez in view of Biesemann in further view of Zhu teaches The electronic device of claim 1, wherein the deep learning network includes at least one of a deep neural network (DNN), a convolutional neural network (CNN), a recurrent neural network (RNN), an auto encoder, a generative adversarial network (GAN), or a deep belief network (DBN) (Gomez, ¶15 “Other types of deep learning networks include deep neural networks and convolutional deep neural networks. For purposes of the following discussion, systems that are described that implement deep learning networks. The deep learning networks may include any of these particular types of hierarchical machine-learning networks such as deep-belief networks, deep-neural networks, etc.” It is noted the claim recites alternative language, and Gomez teaches at least one of the alternatives.). 
Regarding claim 8: Gomez in view of Biesemann in further view of Zhu teaches The electronic device of claim 1, wherein the processor is further configured to: add a new layer to a specific block among the plurality of blocks, and wherein the added new layer is a layer not included in the plurality of layers (Gomez, ¶24 “A change in boundary location will at least change the definition of the first portion 122 and the second portion 124 of the deep learning network 120. For example, where network performance is poor, the user device 102 may take on processing of more layers 120 a-d of the deep learning network 120 if such a change results in reducing the amount of data that needs to be sent over the connection.” Here, taking on a new layer in a-d can be considered adding a new layer. Further, as a-d are unique layers, they will not be included in the plurality of layers). Regarding claim 9: Gomez in view of Biesemann in further view of Zhu teaches The electronic device of claim 1, wherein the processor is further configured to: receive a request for updating a specific block from the user terminal (Biesemann, claim 1 “identifying a request to synchronize a trained neural network from a backend system to a client device”), and transmit, in response to the request, an updated specific block to the user terminal (Biesemann, ¶59 “As noted, a subset of the logic, configuration, and the backend data can be transmitted to the client device when the synchronization represents a delta or partial sync. The information transmitted in those instances can overwrite or otherwise replace the offline neural network definition and/or data already stored at the client device, such that the offline neural network is considered updated and current at the time of the transmission.” Here, the subset of the configuration can be considered the updated block). 
It would have been obvious to combine the teachings of Gomez, Biesemann and Zhu for the reasons set forth in connection with claim 1 above. Regarding claim 10: Gomez in view of Biesemann in further view of Zhu teaches The electronic device of claim 1, wherein the processor is further configured to: generate a plurality of different blocks each including at least one layer among the plurality of layers (Gomez, ¶22 “A boundary 126 between the first and second portions 122, 124 of the network 120 may defined by enumerating which layers 120 a-d of the deep-belief network 120 that are operating within the respective user device 102 and network server 110.” Here, the first and second portions can be considered the plurality of blocks), based on the scalability, and wherein respective layers included in the plurality of blocks overlap in part with each other (Gomez, ¶47 “The devices 602-604 together form a deep learning network 612 having multiple layers 612 a-d. The devices 602-604 are capable of operating overlapping portions of the deep learning network 612.” Here, the overlapping portions can be considered the overlapping layers of blocks). 
Regarding claim 11: Gomez teaches A user terminal comprising: a communication circuit (Gomez, ¶51 “The various embodiments described above may be implemented using circuitry and/or software modules that interact to provide particular results.”); a processor (Gomez, ¶17 “The processor 106 may include any combination of general-purpose or special-purpose logic circuitry, such as a central processing unit (CPU), field-programmable gate array (FPGA), digital signal processor (DSP), etc.”); and a memory operatively connected to the processor, wherein the processor is configured to (Gomez, ¶51 “Such instructions may be stored on a non-transitory computer-readable medium and transferred to the processor for execution as is known in the art.”): reconstruct a deep learning network by using the at least one block (Gomez, ¶34 “For example, the communication between processing layers 406-408, 412-414 may include control channels that allow the layers themselves to reconfigure the deep learning network without been overseen by a separate control process.” Here, the reconfiguring of the deep learning network from the layers can be considered the reconstruction using the block), wherein the at least one block is generated by the external electronic device by dividing the deep learning network into a plurality of blocks based on scalability (Gomez, ¶22 “A boundary 126 between the first and second portions 122, 124 of the network 120 may defined by enumerating which layers 120 a-d of the deep-belief network 120 that are operating within the respective user device 102 and network server 110.” Here, the first and second portions can be considered the plurality of blocks). Gomez does not teach “transmit information about processing capability of the user terminal to an external electronic device, receive at least one block including at least one of a plurality of layers of a deep learning network from the external electronic device”. However, Biesemann teaches transmit information about 
processing capability of the user terminal to an external electronic device (Biesemann, ¶63 “At 410, a current context of the client device and/or a request associated with the identified operations associated with the execution can be determined. In particular, a determination can be as to the current network connectivity and connection strength and reliability of the client device. If the device is connected to a WiFi connection, then an online neural network execution may be proper. If, however, the device has no network connectivity or if the available network connectivity is poor or below a particular predefined threshold, then an offline neural network execution may be used. ”), receive at least one block including at least one of a plurality of layers of a deep learning network from the external electronic device (Biesemann, ¶4 “Then, a representation of the trained neural network is transmitted to the client device, wherein the transmitted representation includes an offline version of the neural network model and the current configuration of the trained neural network and the obtained set of data.” Here, the transmitted configuration can be considered the at least one block, further, it can be seen to have a plurality of layers Biesemann, ¶24 “The middle layer of the illustrated model is called a hidden layer 210. Only a single hidden layer 210 is illustrated, but other neural networks can include multiple hidden layers. Where multiple hidden layers are present, the neural network 200 is called a deep neural network”) Gomez and Biesemann are analogous art because both references concern modifying neural networks for computation on a client device. Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Gomez’ system to incorporate the transmission of blocks as taught by Biesemann. 
The motivation for doing so would have been to have updated blocks to provide better and more accurate outputs after additional training as stated in Biesemann, ¶28 “As described, these neural networks 122 can be trained based on existing and newly created data to improve those neural networks 122 to provide better and more accurate outputs after additional training and refinement.” Gomez in view of Biesemann does not teach “each of the plurality of blocks comprising parameter related to at least one layer included in a block and connection information, the connection information including information on connection between layers included in a block and information on connection between blocks”. However, Zhu teaches each of the plurality of blocks comprising parameter related to at least one layer included in a block and connection information (Zhu, ¶66 “Loading a corresponding target operation parameter in the target network layer corresponding to each network layer by using a preset parameter loading method of the Layer class separately according to the target operation parameter of each network layer, to obtain a target neural network model deployed in the terminal device.”), the connection information including information on connection between layers included in a block and information on connection between blocks (Zhu, ¶10 “a network layer connection module, configured to connect the target network layers by using a Net class;”). Gomez in view of Biesemann and Zhu are analogous art because both references concern modifying neural networks for distribution and computation on a client device. Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Gomez/Biesemann’s system to incorporate the network connections and reconstruction of networks taught by Zhu. 
The motivation for doing so would have been to have a fast deployment of a variety of networks to the external user terminals as stated in Zhu, ¶100 “The embodiment of this application can support deployment of neural network models trained by using different training frameworks into a terminal device, for example, support fast deployment of neural network models trained by using learning frameworks such as torch, Tensorflow, and caffe into the terminal device, thus reducing the usage limitation of deployment of the neural network models.” Regarding claim 12: Gomez in view of Biesemann in further view of Zhu teaches The user terminal of claim 11, wherein the processor is further configured to: analyze data through the reconstructed deep learning network (Gomez, ¶14 “The neural network can be generic, e.g., does not need to have domain-specific knowledge of the data being analyzed.”). Regarding claim 13: Gomez in view of Biesemann in further view of Zhu teaches The user terminal of claim 11, wherein the at least one block contains information about a deep learning network structure for each of the at least one block, a parameter corresponding to at least one layer included in each of the at least one block, and connection information between the at least one layer (Gomez, ¶38 “The layer 412 and newly started layer 408 may have transfer state data (e.g., via control units 404, 410) as needed before layer 413 is stopped. Thereafter, connections 432 and 433 can be established between layers 407, 408, and 413 as shown.” Here, the transfer state data can be considered parameter corresponding to at least one layer, further as it helps establish connections it can be considered connection information). 
Regarding claim 14: Gomez in view of Biesemann in further view of Zhu teaches The user terminal of claim 11, wherein the information about the processing capability of the user terminal includes at least one of information about operation processing capability of the user terminal or a communication network speed (Biesemann, ¶63 “At 410, a current context of the client device and/or a request associated with the identified operations associated with the execution can be determined. In particular, a determination can be as to the current network connectivity and connection strength and reliability of the client device. If the device is connected to a WiFi connection, then an online neural network execution may be proper. If, however, the device has no network connectivity or if the available network connectivity is poor or below a particular predefined threshold, then an offline neural network execution may be used. ”). It would have been obvious to combine the teachings of Gomez, Biesemann and Zhu for the reasons set forth in connection with claim 11 above. Regarding claim 15: Gomez in view of Biesemann in further view of Zhu teaches The user terminal of claim 11, wherein the processor is further configured to: in response to a need to update a specific block among the at least one block, transmit a request for updating the specific block to the external electronic device (Biesemann, ¶4 “The example method can comprise identifying a request to synchronize a trained neural network from a backend system to a client device, wherein synchronizing the trained neural network to the client device enables offline execution of the trained neural network”); receive an updated specific block from the external electronic device (Biesemann, ¶59 “As noted, a subset of the logic, configuration, and the backend data can be transmitted to the client device when the synchronization represents a delta or partial sync. 
The information transmitted in those instances can overwrite or otherwise replace the offline neural network definition and/or data already stored at the client device, such that the offline neural network is considered updated and current at the time of the transmission.” Here, the subset of the configuration can be considered the updated block); and reconstruct an updated deep learning network by using the updated specific block (Gomez, ¶34 “For example, the communication between processing layers 406-408, 412-414 may include control channels that allow the layers themselves to reconfigure the deep learning network without been overseen by a separate control process.” Here, the reconfiguring of the deep learning network from the layers can be considered the reconstruction using the block). It would have been obvious to combine the teachings of Gomez, Biesemann and Zhu for the reasons set forth in connection with claim 11 above. Response to Arguments Applicant's arguments filed November 18th, 2025, hereinafter “Remarks”, have been fully considered but they are not persuasive. Regarding the objections to the Specification, Applicant’s amended Specification has overcome the objections, which are withdrawn. Applicant’s arguments regarding the 35 U.S.C. 112(b) rejections of the previous office action have been fully considered, and are persuasive. The rejections have been withdrawn due to claim amendments. However, the amendments have required additional indefiniteness rejections to be made in this action. Regarding the 35 U.S.C. 101 rejections, applicant’s arguments have been considered, but they are not persuasive. Argument 1: Applicant first argues, “the amended independent claims 1 and 11 now include the technical feature of 'block-based partitioning and reconstruction including connection information.' 
Accordingly, the amended claims are considered to satisfy "Yes" under Step 2A, Prong Two of the Alice-Mayo test in MPEP §2106, or alternatively "Yes" under Step 2B, such that the claims may be deemed patent-eligible under 35 U.S.C. §101.” Remarks, page 7. Examiner's response: Examiner respectfully disagrees. The MPEP states “Does the claim recite additional elements that amount to significantly more than the judicial exception? Examiners should answer this question by first identifying whether there are any additional elements (features/limitations/steps) recited in the claim beyond the judicial exception(s), and then evaluating those additional elements individually and in combination to determine whether they contribute an inventive concept (i.e., amount to significantly more than the judicial exception(s))”. See MPEP § 2106.05(II). The inclusion of further details of the block-based partitioning and reconstruction including connection information is a continuation of the abstract idea and does not amount to significantly more than the judicial exception. Therefore, the claims do not satisfy Step 2A, Prong Two or Step 2B of the Alice-Mayo test and remain rejected under 35 U.S.C. § 101. Argument 2: Applicant next argues “actual hardware components (e.g., a communication circuit, processor, and memory) cooperate to reorganize the structure of a deep learning network through the 'block-based partitioning and reconstruction including connection information.' This is not merely a mathematical calculation or an abstract mental process, but rather a physical modification or reconfiguration of the network structure, which constitutes an additional element that integrates the claimed invention into a practical application”. Remarks, pages 7-8. Examiner's response: Examiner respectfully disagrees. The MPEP states “The courts consider a mental process (thinking) that "can be performed in the human mind, or by a human using a pen and paper" to be an abstract idea. CyberSource Corp. v. 
Retail Decisions, Inc., 654 F.3d 1366, 1372, 99 USPQ2d 1690, 1695 (Fed. Cir. 2011).” The reorganization of a network through block-based partitioning and reconstruction can be considered a mental process. A person could reorganize a network using pen and paper. The use of “actual hardware components” to perform the judicial exception does not constitute an additional element that integrates the claimed invention into a practical application; instead, the limitation is an additional element that amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or merely uses a computer in its ordinary capacity as a tool to perform an existing process. See MPEP § 2106.05(f)(2). Argument 3: Applicant next argues “each block itself contains connection information, enabling reconstruction of the network within the user terminal without any involvement from an external device such as a server. As a result, the present invention provides the user terminal with autonomy to independently perform network reconstruction without external communication or control, thereby establishing a stable and self-sustaining structure that does not rely on external environments. Thus, Applicant submits that the present invention provides a concrete technical improvement in the relevant technical field beyond, merely performing an algorithm, which constitutes "significantly more."” Remarks, page 8. Examiner's response: Examiner respectfully disagrees. The MPEP states "it is important to keep in mind that an improvement in the abstract idea itself (e.g. a recited fundamental economic concept) is not an improvement in technology." See MPEP § 2106.05(a)(II). This argument is unpersuasive; the applicant merely uses a computer to perform processes which can be performed as a mental process. An improvement to the reconstruction of a network may be an improvement in an abstract idea, but not an improvement in the functioning of a computer, as a computer. 
Argument 4: Applicant next argues “Applicant has amended claims 1, 2, 4, 6, 8, 9, 10, 11, 12, and 15 by changing the current memory-instruction format ("wherein the instructions cause the processor to ~") to a format stating that the processor is configured to perform specific operations ("wherein the processor is configured to ~"), thereby clarifying that each operation is a hardware operation actually performed by a general-purpose processor rather than an abstract computation.” Remarks, page 8. Examiner's response: Examiner respectfully disagrees. “As the Supreme Court explained in Alice Corp., mere physical or tangible implementation of an exception is not in itself an inventive concept and does not guarantee eligibility: “The fact that a computer "necessarily exist[s] in the physical, rather than purely conceptual, realm," is beside the point. There is no dispute that a computer is a tangible system (in § 101 terms, a "machine"), or that many computer-implemented claims are formally addressed to patent-eligible subject matter. But if that were the end of the § 101 inquiry, an applicant could claim any principle of the physical or social sciences by reciting a computer system configured to implement the relevant concept. Such a result would make the determination of patent eligibility "depend simply on the draftsman’s art," Flook, supra, at 593, 98 S. Ct. 2522, 57 L. Ed. 2d 451, thereby eviscerating the rule that "‘[l]aws of nature, natural phenomena, and abstract ideas are not patentable,’" Myriad, 133 S. Ct. 1289, 186 L. Ed. 2d 124, 133).” Alice Corp., 573 U.S. at 224, 110 USPQ2d at 1983-84 (alterations in original).” See MPEP § 2106.05(I)(A). Therefore, the recitation of hardware components amounts to an additional element that amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or merely uses a computer in its ordinary capacity as a tool to perform an existing process. See MPEP § 2106.05(f)(2). 
Argument 5: Applicant next argues “In addition, in independent claim 1, Applicant submits that the concrete interaction between the two devices by adding the subsequent operation caused on the user terminal after the electronic device transmits the blocks namely, "to cause the external user terminal to reconstruct a deep learning network by using the at least one block received from the electronic device." This further clarifies that the series of operations in the present invention are not mentally performed steps, but physical processes carried out respectively by the electronic device and the external user terminal.” Remarks, pages 8-9. Examiner's response: Examiner respectfully disagrees. A person could reconstruct a network from an observed block. The recitation that the blocks are transmitted and received amounts to no more than adding insignificant extra-solution activity to the judicial exception. See MPEP § 2106.05(g). Furthermore, the additional element is directed to receiving or transmitting data over a network, which the courts have found to be well-understood, routine, and conventional activity, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d, and a computer that receives and sends information over a network, BuySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014). Argument 6: In response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986). 
Argument 7: In response to applicant’s argument that there is no teaching, suggestion, or motivation to combine the references, the examiner recognizes that obviousness may be established by combining or modifying the teachings of the prior art to produce the claimed invention where there is some teaching, suggestion, or motivation to do so found either in the references themselves or in the knowledge generally available to one of ordinary skill in the art. See In re Fine, 837 F.2d 1071, 5 USPQ2d 1596 (Fed. Cir. 1988), In re Jones, 958 F.2d 347, 21 USPQ2d 1941 (Fed. Cir. 1992), and KSR International Co. v. Teleflex, Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007). In this case, Gomez and Biesemann are analogous art because both references concern modifying neural networks for computation on a client device. It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Gomez’ system to incorporate the transmission of blocks as taught by Biesemann. The motivation for doing so would have been to have updated blocks to provide better and more accurate outputs after additional training as stated in Biesemann, ¶28 “As described, these neural networks 122 can be trained based on existing and newly created data to improve those neural networks 122 to provide better and more accurate outputs after additional training and refinement.” Conclusion Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. 
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to JACOB Z SUSSMAN MOSS whose telephone number is (571) 272-1579. The examiner can normally be reached Monday - Friday, 9 a.m. - 5 p.m. ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kakali Chaki can be reached on (571) 272-3719. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). 
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /J.S.M./Examiner, Art Unit 2122 /KAKALI CHAKI/Supervisory Patent Examiner, Art Unit 2122
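For readers without the claims in front of them, the block structure the rejection turns on, each block carrying parameters for its layers plus connection information covering both connections between layers inside a block and connections between blocks, so that the terminal can reconstruct the network from the blocks alone, can be sketched roughly as follows. This is an illustrative sketch only, not the applicant's implementation or any cited reference's code; every name (`Block`, `divide_network`, `select_blocks`, `reconstruct`) and the simple chunking heuristic are hypothetical.

```python
# Illustrative sketch of claim 1/11-style block-based partitioning and
# reconstruction. Hypothetical names and logic, not the claimed implementation.
from dataclasses import dataclass
from typing import Dict, List, Tuple

Connection = Tuple[str, str]  # (source layer name, target layer name)

@dataclass
class Block:
    block_id: int
    layer_params: Dict[str, dict]        # "parameter related to at least one layer"
    intra_connections: List[Connection]  # connections between layers in the block
    inter_connections: List[Connection]  # connections between blocks

def divide_network(layers: List[Tuple[str, dict]], scalability: int) -> List[Block]:
    """Electronic-device side: divide the network into blocks based on scalability."""
    size = max(1, len(layers) // scalability)
    blocks = []
    for i in range(0, len(layers), size):
        chunk = layers[i:i + size]
        names = [name for name, _ in chunk]
        blocks.append(Block(
            block_id=i // size,
            layer_params=dict(chunk),
            intra_connections=list(zip(names, names[1:])),
            inter_connections=([(names[-1], layers[i + size][0])]
                               if i + size < len(layers) else []),
        ))
    return blocks

def select_blocks(blocks: List[Block], capability: int) -> List[Block]:
    """Pick the blocks suited to the terminal's reported processing capability."""
    return blocks[:capability]

def reconstruct(received: List[Block]):
    """User-terminal side: rebuild the network from the received blocks alone."""
    layers: Dict[str, dict] = {}
    for b in received:
        layers.update(b.layer_params)
    connections: List[Connection] = []
    for b in received:
        connections.extend(b.intra_connections)
        # keep an inter-block connection only if its target block was received
        connections.extend(c for c in b.inter_connections if c[1] in layers)
    return layers, connections
```

Under this sketch, a four-layer network divided with scalability 2 yields two blocks; a terminal with capacity for one block reconstructs a two-layer sub-network, while a terminal receiving both blocks recovers all four layers plus the single inter-block connection, with no further input from the electronic device.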

Prosecution Timeline

Apr 19, 2022: Application Filed
Aug 04, 2025: Non-Final Rejection (§101, §103, §112)
Nov 18, 2025: Response Filed
Feb 06, 2026: Final Rejection (§101, §103, §112) (current)


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 14%
With Interview: -6% (interview lift -20.0%)
Median Time to Grant: 3y 3m
PTA Risk: Moderate
Based on 7 resolved cases by this examiner. Grant probability derived from career allow rate.
