Prosecution Insights
Last updated: April 19, 2026
Application No. 18/025,227

ENCODING METHOD AND NEURAL NETWORK ENCODER STRUCTURE USABLE IN WIRELESS COMMUNICATION SYSTEM

Office Action: Non-Final (§101, §103, §112)
Filed: Mar 08, 2023
Examiner: BALDWIN, RANDALL KERN
Art Unit: 2125
Tech Center: 2100 — Computer Architecture & Software
Assignee: LG Electronics Inc.
OA Round: 1 (Non-Final)
Grant Probability: 80% (Favorable)
Predicted OA Rounds: 1-2
Predicted Time to Grant: 3y 5m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 80% (185 granted / 232 resolved) — above average, +24.7% vs TC avg
Interview Lift: +26.9% higher allow rate for resolved cases with an interview
Typical Timeline: 3y 5m average prosecution; 12 applications currently pending
Career History: 244 total applications across all art units
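The headline figures above are simple ratios of the raw counts. As a minimal sketch (the helper names and the lift-as-difference definition are assumptions, not part of the source data), the arithmetic is:

```python
# Hypothetical helpers illustrating how the examiner statistics above
# could be derived from raw counts. Only 185/232 is stated directly in
# the source; the lift definition (difference in allow rates) is assumed.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a fraction of resolved cases."""
    return granted / resolved

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Percentage-point lift in allow rate for cases with an interview."""
    return rate_with - rate_without

career = allow_rate(185, 232)
print(f"Career allow rate: {career:.1%}")  # 185/232 ≈ 79.7%, shown as 80%
```

Under this reading, a +26.9% lift means cases that included an interview resolved to allowance about 27 percentage points more often than those that did not.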

Statute-Specific Performance

§101: 17.4% (-22.6% vs TC avg)
§103: 43.2% (+3.2% vs TC avg)
§102: 6.4% (-33.6% vs TC avg)
§112: 26.6% (-13.4% vs TC avg)

Tech Center averages are estimates, based on career data from 232 resolved cases.
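The per-statute deltas are internally consistent: each one backs out to the same Tech Center baseline. A quick sanity check (the 40.0% baseline is inferred from the numbers shown, not stated in the source):

```python
# Sanity check on the statute table above: each "vs TC avg" delta should
# equal the examiner's rate minus the Tech Center average. The 40.0% TC
# baseline is an inference from the displayed figures.
EXAMINER = {"101": 17.4, "103": 43.2, "102": 6.4, "112": 26.6}
DELTA = {"101": -22.6, "103": 3.2, "102": -33.6, "112": -13.4}

tc_avg = {s: round(EXAMINER[s] - DELTA[s], 1) for s in EXAMINER}
print(tc_avg)  # every statute backs out to a 40.0% TC average
```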

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is in response to the application and preliminary amendment filed 3/08/2023. In the amendment, no claims were amended, claim 15 was cancelled, and no claims were added. As such, claims 1-14 are pending and have been examined. Claims 1-14 are rejected.

Priority

Applicant’s claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged. The present application is a national stage application under 35 U.S.C. 371 of International Application No. PCT/KR2020/012173, filed on September 9, 2020.

Information Disclosure Statement

Acknowledgment is made of the information disclosure statement filed 3/08/2023, which complies with 37 CFR 1.97. As such, the information disclosure statement has been placed in the application file and, aside from foreign patent document no. 3 (see below), the information referred to therein has been considered by the examiner. However, the information disclosure statement (IDS) filed 3/08/2023 fails to comply with 37 CFR 1.98(a)(2), which requires a legible copy of each cited foreign patent document; each non-patent literature publication or that portion which caused it to be listed; and all other information or that portion which caused it to be listed. In particular, the information disclosure statement filed 3/08/2023 lists a foreign patent document (cite no. 3, JP 08-063203 A, published 1996-03-08, with patentee/applicant HITACHI LTD), but a copy of this document was not submitted. Instead of cite no. 3, applicant submitted a copy of Japanese application Pub. No. JP2008063203 A, published 2008-03-21, entitled “COMPOSITION FOR MORTAR” with patentee/applicant NIPPON POLY GLU CO LTD, which is unrelated to cite no. 3 on the 3/08/2023 IDS.
Examiner was unable to obtain a copy of this document and the reference cited therein. As such, this foreign patent document has not been considered.

Drawings

The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they include the following reference characters not mentioned in the description: reference characters 1 and 200 shown in Figure 1 are not found in the detailed description (see, e.g., paragraphs 65-67 describing FIG. 1); and reference characters 512, 522, 532 and 552 shown in Figure 5 are not found in the detailed description (see, e.g., paragraphs 89-92 describing FIG. 5).

The drawings are also objected to as failing to comply with 37 CFR 1.84(p)(5) because they do not include the following reference signs mentioned in the description:

Reference characters 100, 100x, 120, 120a and 130 (see, e.g., paragraphs 65-66 describing FIG. 1 and reciting “Referring to FIG. 1, the communication system 100 applied to the present disclosure includes a wireless device, a base station, and a network. … the base station 120 and the network 130 may be implemented as a wireless device, and a specific wireless device 120a may operate as a base station/network node to other wireless devices.”, “The wireless devices 100a to 100f may be connected to the network 130 via the BSs 120.” and paragraph 69 describing FIG. 2 and reciting, with reference to FIG. 1, “{wireless device 100x, base station 120} and/or {wireless device 100x, wireless device 100x} of FIG. 1.”);

Reference characters 204, 206 and 208 (see, e.g., paragraph 71 describing FIG. 2 and reciting “The second wireless device 200b may include one or more processors 202 and one or more memories 204 and additionally further include one or more transceivers 206 and/or one or more antennas 208.”);

Reference character 640d (see, e.g., paragraph 95 describing FIG. 6 and reciting “Here, blocks 610 to 630/640a to 640d correspond to blocks 310 to 330/340 of FIG. 3, respectively.”).
The drawings are further objected to as failing to comply with 37 CFR 1.84(p)(3) because Figures 9-10, 12-16, 18, 20, 23, 25-26, 28-30 include letters which do not measure at least .32 cm (1/8 inch) in height (i.e., most of the lowercase characters in FIGs. 9-10, 12-16, 18, 20, 30, the exponents in FIGs. 16 and 29-30, and many of the subscript characters in FIGs. 23, 25-26, 28-30).

Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Specification

The disclosure is objected to because of the following informalities: the specification is objected to as failing to provide proper antecedent basis for the claimed subject matter. See 37 CFR 1.75(d)(1) and MPEP § 608.01(o).
Correction of the following is required: independent claims 1 and 13-14 do not appear to have support in the originally filed specification filed on 3/08/2023 (or the specification of priority PCT application PCT/KR2020/012173, filed 09/09/2020). There does not appear to be any discussion of “a first encoding step of encoding input data transmitted from a higher layer” as recited in claims 1 and 13-14 (see line 3 of claim 1, line 7 of claim 13 and line 8 of claim 14) in these specifications. In particular, the instant specification is silent regarding any 1st encoding step or operation of “encoding input data transmitted from a higher layer” as recited in claims 1, 13 and 14. Paragraph 320 of the instant application discloses “Referring to FIG. 37, the neural network encoder encodes input data transmitted from an upper layer (S3710). Here, as an example, the upper layer may be a MAC layer.” However, the specification, in this portion and in other portions, is silent regarding “a first encoding step of encoding input data transmitted from a higher layer” as recited in claims 1, 13 and 14. Appropriate correction is required.

Reference characters 1 and 200 shown in Figure 1 are not found in the detailed description (see, e.g., paragraphs 65-67 describing FIG. 1). Appropriate correction is required.

Reference characters 512, 522, 532 and 552 shown in Figure 5 are not found in the detailed description (see, e.g., paragraphs 89-92 describing FIG. 5). Appropriate correction is required.

Applicant is reminded of the proper content of an abstract of the disclosure. A patent abstract is a concise statement of the technical disclosure of the patent and should include that which is new in the art to which the invention pertains. The abstract should not refer to purported merits or speculative applications of the invention and should not compare the invention with the prior art.
If the patent is of a basic nature, the entire technical disclosure may be new in the art, and the abstract should be directed to the entire disclosure. If the patent is in the nature of an improvement in an old apparatus, process, product, or composition, the abstract should include the technical disclosure of the improvement. The abstract should also mention by way of example any preferred modifications or alternatives. Where applicable, the abstract should include the following: (1) if a machine or apparatus, its organization and operation; (2) if an article, its method of making; (3) if a chemical compound, its identity and use; (4) if a mixture, its ingredients; (5) if a process, the steps. Extensive mechanical and design details of an apparatus should not be included in the abstract.

The abstract should be in narrative form and generally limited to a single paragraph on a separate sheet within the range of 50 to 150 words in length. See MPEP § 608.01(b) for guidelines for the preparation of patent abstracts. The abstract should describe the disclosure sufficiently to assist readers in deciding whether there is a need for consulting the full patent text for details. The language should be clear and concise and should not repeat information given in the title. It should avoid using phrases which can be implied, such as “The disclosure concerns,” “The disclosure defined by this invention,” “The disclosure describes,” etc. In addition, the form and legal phraseology often used in patent claims, such as “means” and “said,” should be avoided.
The abstract of the disclosure is objected to because it uses a phrase which can be implied, it merely repeats information given in the title, and because it does not describe the disclosure sufficiently to assist readers in deciding whether there is a need for consulting the full patent text for details. In particular, the entire abstract consists of the sentence “This specification proposes a neural network encoder structure and encoding method usable in a wireless communication system”. A corrected abstract of the disclosure is required and must be presented on a separate sheet, apart from any other text. See MPEP § 608.01(b).

Claim Objections

Claims 13 and 14 are objected to because of the following informalities: independent claims 13 and 14 both recite “wherein the at least one processor execute the instructions” (see line 5 of claim 13 and line 6 of claim 14). These recitations are grammatically incorrect and should read “wherein the at least one processor executes the instructions”. Appropriate correction is required.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

Independent claims 1, 13 and 14 each recite: a first encoding step of encoding input data transmitted from a higher layer; an interleaving step of performing interleaving on a first output which is an output of the first encoding step; and a second encoding step of encoding a second output which is an output obtained by interleaving the first output. While the above-noted limitations recite “step of”, these claim limitations are not being interpreted under 35 U.S.C.
112(f) because claim 1 is a method claim, “the method comprising” the above-noted steps, and because claims 13 and 14 both explicitly recite “wherein the at least one processor performs” each of the above-noted steps. In particular, claim 1 recites “An encoding method performed by a neural network encoder in a wireless communication system, the method comprising:” <the above-noted steps>, claim 13 recites “A neural network encoder, the neural network encoder comprising: at least one memory storing instructions; at least one transceiver; and at least one processor coupling the at least one memory and the at least one transceiver, wherein the at least one processor execute the instructions, wherein the at least one processor performs:” <the above-noted steps> and claim 14 recites “An apparatus configured to control a neural network encoder, the apparatus comprising: at least one processor; and at least one memory executablely coupled to the at least one processor and storing instructions, wherein the at least one processor execute the instructions, wherein the at least one processor performs:” <the above-noted steps>. As such, 35 U.S.C. 112(f) is not invoked, and the claims recite sufficient structure to perform the claimed functions.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

Claims 1-14 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention. Independent claims 1, 13 and 14 each recite “a first encoding step of encoding input data transmitted from a higher layer” (see line 3 of claim 1, line 7 of claim 13 and line 8 of claim 14).
The term “a higher layer” is a relative term which renders the claims indefinite. The term “a higher layer” is not defined by the claims, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. In particular, it is unclear what metrics are used for ascertaining the requisite degree of height or what range of height values is covered by the term “a higher layer” in the phrase “encoding input data transmitted from a higher layer”. It is also unclear what “a higher layer” refers to. That is, because no layers, and no neural network or other claim elements composed of layers, are recited in these claims prior to the recitation of “a higher layer”, it is unclear if the “higher layer” is a layer of a neural network or some other type of layer. As noted above in the objections to the specification, the specification is silent regarding “encoding input data transmitted from a higher layer”. With reference to FIG. 37, the specification mentions “the neural network encoder encodes input data transmitted from an upper layer”. For examination purposes, “encoding input data transmitted from a higher layer” in claims 1, 13 and 14 is being interpreted as encoding input data transmitted from any previous, preceding, upper, or higher layer of a neural network than a current, subsequent, following, or lower layer of the neural network (see, e.g., the recitation of “wherein each of the first encoding step and the second encoding step is performed based on one or more neural networks.” in the last steps/operations of these claims). Appropriate correction is required.

Claims 2-12, which each depend directly or indirectly from claim 1, are rejected under 35 U.S.C. 112(b) as being indefinite under the same rationale as claim 1.

Claim Rejections - 35 USC § 101

35 U.S.C.
101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-14 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The analysis below of the claims’ subject matter eligibility follows the 2019 Revised Patent Subject Matter Eligibility Guidance, 84 Fed. Reg. 50-57 (January 7, 2019) (“2019 PEG”), and the 2024 Guidance Update on Patent Subject Matter Eligibility, Including on Artificial Intelligence, 89 Fed. Reg. 58128-58138 (July 17, 2024) (“2024 AI SME Update”).

When considering subject matter eligibility under 35 U.S.C. 101, it must be determined whether the claim is directed to one of the four statutory categories of invention, i.e., process, machine, manufacture, or composition of matter (Step 1). If the claim does fall within one of the statutory categories, the second step in the analysis is to determine whether the claim is directed to a judicial exception (Step 2A). The Step 2A analysis is broken into two prongs. In the first prong (Step 2A, Prong 1), it is determined whether or not the claims recite a judicial exception (e.g., mathematical concepts, mental processes, certain methods of organizing human activity). If it is determined in Step 2A, Prong 1 that the claims recite a judicial exception, the analysis proceeds to the second prong (Step 2A, Prong 2), where it is determined whether or not the claims integrate the judicial exception into a practical application. If it is determined at Step 2A, Prong 2 that the claims do not integrate the judicial exception into a practical application, the analysis proceeds to determining whether the claim is a patent-eligible application of the exception (Step 2B).
If an abstract idea is present in the claim, any element or combination of elements in the claim must be sufficient to ensure that the claim integrates the judicial exception into a practical application, or else amounts to significantly more than the abstract idea itself.

Regarding independent claims 1, 13 and 14, these claims are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1 Analysis: Claim 1 is directed to a method, corresponding to a process; claim 13 is directed to an encoder comprising at least one memory, at least one transceiver and at least one processor, corresponding to an article of manufacture; and claim 14 is directed to an apparatus comprising at least one processor and at least one memory, corresponding to an article of manufacture. Each of these is one of the four statutory categories of invention.

Step 2A Prong One Analysis: The claims are directed to an abstract idea. In particular, the claims recite mathematical concepts (including mathematical relationships, mathematical formulas or equations, and mathematical calculations). Claims 1, 13 and 14 each recite: a first encoding step of encoding input data transmitted from a higher layer; an interleaving step of performing interleaving on a first output which is an output of the first encoding step; and a second encoding step of encoding a second output which is an output obtained by interleaving the first output. The above-noted encoding and interleaving limitations, as drafted, are a process that, under its broadest reasonable interpretation (BRI), covers mathematical concepts (i.e., mathematical relationships and calculations to encode input and output data, and to interleave output of the 1st encoding, based on a result of a mathematical calculation for the 1st encoding).
Regarding the 2nd encoding, this “encoding a second output which is an output obtained by interleaving the first output” step recites a mathematical calculation operation based on the output/result of interleaving the 1st output, which are mathematical concepts (i.e., mathematical relationships and calculations). Under their BRI, in light of the specification, the encoding and interleaving limitations encompass mathematical concepts as described in the specification in paragraphs 275-276, 288 and 219 and shown in FIGs. 28-32 (disclosing “FIG. 28 shows an encoder structure with a code rate of 1/3, where fi,θ represent a neural network and h(.) represents a power constraint. Also, π means an interleaver.”, “The autoencoder structure as shown in FIG. 28 that solves this problem … and EMO represents an elementary math operation.”, “Referring to FIG. 31, each of NN1 and NN2 may be referred to as an outer encoder, and each of NN3 and NN4 may be referred to as an inner encoder. … the P/S block is a block that performs an operation of converting from parallel to serial, that is, a parallel-to-serial operation, and the INT block means an interleaver.” and “INT of FIG. 32 is an interleaver, and may be used to match dimensions between inputs and outputs.”, where the “neural network [NN] is a simple mathematical model” and “we convert this method into a mathematical model” as disclosed in paragraphs 230 and 233). If the claim limitations, under their broadest reasonable interpretations, cover mathematical relationships, mathematical formulas or equations, or mathematical calculations, then they fall within the “Mathematical Concepts” grouping of abstract ideas. See MPEP 2106.04(a)(2) § I.
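The claimed structure the examiner characterizes above is a serial chain: first encoding, interleaving of the first output, second encoding of the interleaved result. A minimal sketch of that pipeline follows; the toy "encoders" and the fixed permutation are placeholders chosen for illustration, not the applicant's disclosed neural networks:

```python
# Illustrative sketch (not the applicant's implementation) of the claimed
# pipeline: first encode -> interleave the first output -> second encode.
# In the application the encoders are neural networks; here they are
# hypothetical stand-in callables on bit lists.
import random

def first_encode(bits):
    """Stand-in for the outer encoder: flips each bit."""
    return [b ^ 1 for b in bits]

def interleave(bits, seed=0):
    """Fixed pseudo-random permutation, like the pi / INT block."""
    idx = list(range(len(bits)))
    random.Random(seed).shuffle(idx)
    return [bits[i] for i in idx]

def second_encode(bits):
    """Stand-in for the inner encoder: appends a parity bit."""
    return bits + [sum(bits) % 2]

data = [1, 0, 1, 1]
codeword = second_encode(interleave(first_encode(data)))
```

The sketch only fixes the data flow the claims recite; any learned transformation could replace the two stand-in encoders without changing that flow.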
But for the recitation of generic computer components (i.e., “a neural network encoder in a wireless communication system” of claim 1, “the neural network encoder comprising: at least one memory storing instructions; at least one transceiver; and at least one processor” of claim 13 and “the apparatus comprising: at least one processor; and at least one memory” of claim 14), the limitations of claims 1, 13 and 14 cover mathematical relationships, mathematical formulas or equations, and mathematical calculations. Accordingly, claims 1, 13 and 14 recite an abstract idea. Therefore, the claims are directed to an abstract idea (mathematical concept).

Step 2A Prong Two Analysis: This judicial exception is not integrated into a practical application. Claims 1, 13 and 14 do not recite any additional limitations or elements which integrate the abstract idea into a practical application. The claims each recite this additional element: wherein each of the first encoding step and the second encoding step is performed based on one or more neural networks. Regarding the “one or more neural networks”, no details of the neural networks are recited; the networks are recited at a high level of generality and can be constructed by hand with pen and paper. Aside from merely repeating the claim language (see, e.g., paragraph 323) and providing general examples and a general operating environment (see, e.g., paragraph 232, stating “Usually, neural networks are directed graphs.”, paragraphs 254-267 and FIGs. 26 and 31-16), applicant’s specification does not explicitly define nor provide details of the recited “neural networks”. Thus, the claimed “neural networks”, under the broadest reasonable interpretation (BRI), in light of the specification, could be constructed by hand with pen and paper based on a reasonable amount of observed data (i.e., the “input data”).
The neural network is recited at a high level of generality and therefore is being interpreted as performing mathematical calculations and operations on a generic computer. The claims also recite these additional elements: an “encoding method performed by a neural network” (claim 1); a “neural network encoder, the neural network encoder comprising: at least one memory storing instructions; at least one transceiver; and at least one processor coupling the at least one memory and the at least one transceiver, wherein the at least one processor execute the instructions, wherein the at least one processor performs:” <steps> (claim 13); and an “apparatus configured to control a neural network encoder, the apparatus comprising: at least one processor; and at least one memory executablely coupled to the at least one processor and storing instructions, wherein the at least one processor execute the instructions, wherein the at least one processor performs:” <steps> (claim 14).

The above-noted additional elements in the claims amount to recitation of the words “apply it” (or an equivalent) or are mere instructions to implement an abstract idea or other exception on a computer, which does not integrate a judicial exception into a practical application. See MPEP 2106.05(f). Merely asserting that a judicial exception is to be carried out on a generic computer (i.e., with the generically-recited “neural network” and encoder and apparatus including “at least one memory” and “at least one processor”) cannot meaningfully integrate the judicial exception into a practical application. See MPEP § 2106.05(f). The above elements are considered to be mere instructions to apply the judicial exception (abstract idea). Also, the additional limitation in claim 1 of an “encoding method performed by a neural network encoder in a wireless communication system” recites a field of use exception that generally links a judicial exception to a particular technological environment.
(See MPEP 2106.05(h)). Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. See MPEP 2106.04(d). The claims are directed to an abstract idea.

Step 2B Analysis: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As explained by the Supreme Court, a claim directed to a judicial exception cannot be made eligible “simply by having the applicant acquiesce to limiting the reach of the patent for the formula to a particular technological use.” Diamond v. Diehr, 450 U.S. 175, 192 n.14, 209 USPQ 1, 10 n.14 (1981). Thus, limitations that amount to merely indicating a field of use or technological environment in which to apply a judicial exception do not amount to significantly more than the exception itself. These claims do not recite any additional elements that integrate the abstract idea into a practical application or provide significantly more than the abstract idea. As discussed above with respect to integration of the abstract idea into a practical application, there are no additional elements recited that impose any meaningful limits on practicing the abstract idea. Therefore, the additional elements of these claims are not sufficient to amount to significantly more than the abstract idea. These claims are not patent eligible.

Regarding claim 2, this claim is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1 Analysis: Claim 2 is directed to a method as depending from claim 1; thus the analysis for patent eligibility of claim 1 is incorporated herein.
Step 2A Prongs 1-2: The claim recites “wherein each of the first encoding step and the second encoding step is performed based on a plurality of parallel-connected neural networks.” This limitation does nothing to alter the fundamental nature of the claim as a mathematical concept. This is because the additional limitation merely limits the invention to a narrower abstract idea by further narrowing what the encoding steps are “based on”, i.e., generically-recited “parallel-connected neural networks.” Dependent claim 2, when analyzed as a whole, is not patent eligible under 35 U.S.C. 101 because the additional recited limitation fails to establish that the claim is not directed to an abstract idea. The “parallel-connected neural networks” are recited at a high level of generality as mere instructions to implement an abstract idea on a computer (i.e., a system implementing the “parallel-connected neural networks”) and amount to the recitation of the words “apply it” (or an equivalent), or amount to no more than mere instructions to implement an abstract idea or other exception on a computer, or merely use a computer as a tool to perform an abstract idea (i.e., as generic computer components performing generic computer functions). See MPEP 2106.05(f). Also, the limitation “wherein each of the first encoding step and the second encoding step is performed based on a plurality of parallel-connected neural networks” can be considered as “generally linking the use of a judicial exception to a particular technological environment or field of use”. See MPEP 2106.05(h). Thus, this limitation does nothing to alter the analysis of claim 1.

Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Viewing the additional element of this dependent claim as a combination does not add anything further than the individual elements.
Mere instructions to apply the mathematical concept electronically (i.e., with the recited “neural network encoder in a wireless communication system” of base claim 1 and the “neural networks”) do not amount to significantly more than the judicial exception. As noted above, merely asserting that a judicial exception is to be carried out on a generic computer cannot provide significantly more than the judicial exception. See MPEP § 2106.05(f). Accordingly, at Step 2B, the additional element does not amount to significantly more than the judicial exception.

Regarding claim 3, this claim is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1 Analysis: Claim 3 is directed to a method as depending from claim 2; thus the analyses for patent eligibility of claim 2 and base claim 1 are incorporated herein.

Step 2A Prong 1: The claim recites “wherein the neural network encoder performs the interleaving after performing parallel-to-serial conversion on the first output.” This limitation does nothing to alter the fundamental nature of the claim as a mathematical concept. Under its BRI, in light of the specification, this limitation encompasses the mathematical concept of parallel-to-serial data conversion on the 1st output values (see, e.g., the mathematical relationships and calculations depicted in FIG. 31 and paragraph 288 of the specification stating “Referring to FIG. 31, … the P/S block is a block that performs an operation of converting from parallel to serial, that is, a parallel-to-serial operation”). This additional limitation also merely limits the invention to a narrower abstract idea by further narrowing when the mathematical concept of interleaving the 1st output values is performed, i.e., “after performing parallel-to-serial conversion on the first output.” Dependent claim 3, when analyzed as a whole, is not patent eligible under 35 U.S.C.
101 because the additional recited limitation fails to establish that the claim is not directed to an abstract idea.

Step 2A Prong 2 Analysis: Mere instructions to apply the mathematical concept electronically do not meaningfully integrate the judicial exception into a practical application. See MPEP 2106.05(f). The claim does not recite any additional elements that integrate the abstract idea into a practical application or provide significantly more than the abstract idea, and thus the claim is subject-matter ineligible. For example, claim 3 only recites the additional element of “the neural network encoder performs the interleaving after performing parallel-to-serial conversion”. The “neural network encoder” is recited at a high level of generality as mere instructions to apply the mathematical concept and amounts to the recitation of the words “apply it” (or an equivalent), or amounts to no more than mere instructions to implement an abstract idea or other exception on a computer, or merely use of a computer as a tool to perform an abstract idea (i.e., as generic computer components performing generic computer functions). Mere instructions to apply the mathematical concept electronically (i.e., with the “wireless communication system” of base claim 1 and “the neural network encoder”) do not meaningfully integrate the judicial exception into a practical application. See MPEP 2106.05(f).

Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Viewing the additional element of this dependent claim as a combination does not add anything further than the individual elements. Mere instructions to apply the mathematical concept electronically (i.e., with the recited “neural network encoder”) do not amount to significantly more than the judicial exception.
As noted above, merely asserting that a judicial exception is to be carried out on a generic computer cannot provide significantly more than the judicial exception. See MPEP § 2106.05(f). Accordingly, at Step 2B, the additional element does not amount to significantly more than the judicial exception.

Regarding claim 4, this claim is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1 Analysis: Claim 4 is directed to a method as depending from claim 1; thus, the analysis for patent eligibility of claim 1 is incorporated herein.

Step 2A Prongs 1-2: The claim recites “wherein the first encoding step is performed based on the one or more neural networks and a first accumulator; and wherein the first accumulator performs an exclusive OR operation.” These limitations do nothing to alter the fundamental nature of the claim as a mathematical concept. This is because the additional limitation of “the first encoding step is performed based on the one or more neural networks and a first accumulator” merely limits the invention to a narrower abstract idea by further narrowing what the 1st encoding step is “based on”, i.e., the generically-recited “one or more neural networks and a first accumulator”, and the additional limitation where the generically-recited “first accumulator performs an exclusive OR operation”, under its BRI, in light of the specification, encompasses the mathematical concept of carrying out an accumulation/addition mathematical calculation by performing an exclusive OR operation on data values (See, e.g., the mathematical relationships and calculations depicted in FIG. 34 and paragraph 295 of the specification stating “D in FIG. 34 means delay, and an exclusive OR operation is applied in the front part of D in FIG. 34.”). Dependent claim 4, when analyzed as a whole, is not patent eligible under 35 U.S.C.
101 because the additional recited limitation fails to establish that the claim is not directed to an abstract idea. The “one or more neural networks and a first accumulator” are recited at a high level of generality as mere instructions to implement an abstract idea on a computer (i.e., a system implementing the “one or more neural networks and a first accumulator”) and amount to the recitation of the words “apply it” (or an equivalent) or amount to no more than mere instructions to implement an abstract idea or other exception on a computer or merely use a computer as a tool to perform an abstract idea (i.e., as generic computer components performing generic computer functions). See MPEP 2106.05(f). Also, the limitation “wherein the first encoding step is performed based on the one or more neural networks and a first accumulator” can be considered as “generally linking the use of a judicial exception to a particular technological environment or field of use”. See MPEP 2106.05(h). Thus, this limitation does nothing to alter the analysis of claim 1.

Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Viewing the additional element of this dependent claim as a combination does not add anything further than the individual elements. Mere instructions to apply the mathematical concept electronically (i.e., with the recited “neural network encoder in a wireless communication system” of base claim 1 and the “one or more neural networks and a first accumulator”) do not amount to significantly more than the judicial exception. As noted above, merely asserting that a judicial exception is to be carried out on a generic computer cannot provide significantly more than the judicial exception. See MPEP § 2106.05(f). Accordingly, at Step 2B, the additional element does not amount to significantly more than the judicial exception.

Regarding claim 5, this claim is rejected under 35 U.S.C.
101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1 Analysis: Claim 5 is directed to a method as depending from claim 1; thus, the analysis for patent eligibility of claim 1 is incorporated herein.

Step 2A Prongs 1-2: The claim recites “wherein the second encoding step is performed based on the one or more neural networks and a second accumulator; and wherein the second accumulator performs a summation operation.” These limitations do nothing to alter the fundamental nature of the claim as a mathematical concept. This is because the additional limitation of “the second encoding step is performed based on the one or more neural networks and a second accumulator” merely limits the invention to a narrower abstract idea by further narrowing what the 2nd encoding step is “based on”, i.e., the generically-recited “one or more neural networks and a second accumulator”, and the additional limitation where the generically-recited “second accumulator performs a summation operation”, under its BRI, in light of the specification, encompasses the mathematical concept of carrying out a summation/addition mathematical calculation by performing a summation operation on numeric data values (See, e.g., the mathematical relationships and calculations depicted in FIG. 35 and paragraphs 297-298 of the specification stating “Referring to FIG. 35, … since the output of the outer encoder is a real value, the sum of the inner encoder parts becomes the sum of the real values. … when the output of the sum is c"(t), it can be expressed as c"(t)=α·c'(t)+(1-α)·c'(t-1). Here, α may be a value greater than 0 and less than 1.” and “a sigmoid function or a hyperbolic tangent function may be applied to the summation output”). Dependent claim 5, when analyzed as a whole, is not patent eligible under 35 U.S.C. 101 because the additional recited limitation fails to establish that the claim is not directed to an abstract idea.
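For context, the two accumulator variants addressed in claims 4 and 5 can be sketched directly from the cited passages: the delay-plus-exclusive-OR structure of FIG. 34, and the weighted sum c"(t) = α·c'(t) + (1-α)·c'(t-1) of paragraph 297, with 0 < α < 1. The input values below are hypothetical, and the initial delay-register contents are assumed to be zero.

```python
# Sketches of the two accumulator variants discussed for claims 4-5.
# Inputs are hypothetical; initial delay values are assumed zero.

def xor_accumulator(bits):
    """Delay-plus-XOR accumulator suggested by FIG. 34:
    each output bit is the XOR of the input bit and the prior output."""
    out, prev = [], 0          # prev models the delay element D
    for b in bits:
        prev = b ^ prev        # exclusive OR applied before the delay
        out.append(prev)
    return out

def summation_accumulator(values, alpha):
    """Weighted summation from paragraph 297:
    c''(t) = alpha*c'(t) + (1 - alpha)*c'(t-1), with 0 < alpha < 1."""
    assert 0 < alpha < 1
    out, prev = [], 0.0        # prev holds c'(t-1)
    for v in values:
        out.append(alpha * v + (1 - alpha) * prev)
        prev = v
    return out
```

The first operates on bits (as a 1/(1+D)-style accumulator would), while the second operates on real values, matching the specification's statement that the outer encoder output is a real value.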
The “one or more neural networks and a second accumulator” are recited at a high level of generality as mere instructions to implement an abstract idea on a computer (i.e., a system implementing the “one or more neural networks and a second accumulator”) and amount to the recitation of the words “apply it” (or an equivalent) or amount to no more than mere instructions to implement an abstract idea or other exception on a computer or merely use a computer as a tool to perform an abstract idea (i.e., as generic computer components performing generic computer functions). See MPEP 2106.05(f). Also, the limitation “wherein the second encoding step is performed based on the one or more neural networks and a second accumulator” can be considered as “generally linking the use of a judicial exception to a particular technological environment or field of use”. See MPEP 2106.05(h). Thus, this limitation does nothing to alter the analysis of claim 1.

Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Viewing the additional element of this dependent claim as a combination does not add anything further than the individual elements. Mere instructions to apply the mathematical concept electronically (i.e., with the recited “neural network encoder in a wireless communication system” of base claim 1 and the “one or more neural networks and a second accumulator”) do not amount to significantly more than the judicial exception. As noted above, merely asserting that a judicial exception is to be carried out on a generic computer cannot provide significantly more than the judicial exception. See MPEP § 2106.05(f). Accordingly, at Step 2B, the additional element does not amount to significantly more than the judicial exception.

Regarding claim 6, this claim is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 6 is directed to a method as depending from claim 5; thus, the analyses for patent eligibility of claim 5 and of base claim 1 are incorporated herein.

Step 2A Prong 1: The claim recites “wherein the neural network encoder applies a function to an output of the summation operation.” This limitation does nothing to alter the fundamental nature of the claim as a mathematical concept. Under its BRI, in light of the specification, this limitation encompasses the mathematical concept of applying a mathematical function (i.e., mathematical relationships, mathematical formulas or equations, and mathematical calculations) to the output/sum resulting from a mathematical calculation – the summation operation (See, e.g., the mathematical relationships and calculations depicted in FIG. 35 and paragraphs 297-298 of the specification stating “Referring to FIG. 35, … since the output of the outer encoder is a real value, the sum of the inner encoder parts becomes the sum of the real values.” and “a sigmoid function or a hyperbolic tangent function may be applied to the summation output”). Dependent claim 6, when analyzed as a whole, is not patent eligible under 35 U.S.C. 101 because the additional recited limitation fails to establish that the claim is not directed to an abstract idea.

Step 2A Prong 2 Analysis: Mere instructions to apply the mathematical concept electronically do not meaningfully integrate the judicial exception into a practical application. See MPEP 2106.05(f). The claim does not recite any additional elements that integrate the abstract idea into a practical application or provide significantly more than the abstract idea, and thus the claim is subject-matter ineligible. For example, claim 6 only recites the additional element of “the neural network encoder applies a function”.
The “neural network encoder” is recited at a high level of generality as mere instructions to implement an abstract idea on a computer (i.e., a system implementing the “neural network encoder”) and amounts to the recitation of the words “apply it” (or an equivalent), or amounts to no more than mere instructions to implement an abstract idea or other exception on a computer or merely use a computer as a tool to perform an abstract idea (i.e., as generic computer components performing generic computer functions). See MPEP 2106.05(f).

Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Viewing the additional element of this dependent claim as a combination does not add anything further than the individual elements. Mere instructions to apply the mathematical concept electronically (i.e., with the recited “neural network encoder”) do not amount to significantly more than the judicial exception. As noted above, merely asserting that a judicial exception is to be carried out on a generic computer cannot provide significantly more than the judicial exception. See MPEP § 2106.05(f). Accordingly, at Step 2B, the additional element does not amount to significantly more than the judicial exception.

Regarding claim 7, this claim is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1 Analysis: Claim 7 is directed to a method as depending from claim 6; thus, the analyses for patent eligibility of intervening claims 5-6 and of base claim 1 are incorporated herein.

Step 2A Prong 1: The claim recites “wherein the function is a sigmoid function or a hyperbolic tangent function.” This limitation does nothing to alter the fundamental nature of the claim as a mathematical concept.
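For context, the functions named in claim 7 are standard squashing nonlinearities: applied to a summation output, the sigmoid maps it into (0, 1) and the hyperbolic tangent into (-1, 1). A minimal sketch, with a hypothetical input value:

```python
import math

def sigmoid(x):
    """Logistic sigmoid: maps any real x into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical summation output from the second accumulator.
summation_output = 0.8
y_sigmoid = sigmoid(summation_output)    # value in (0, 1)
y_tanh = math.tanh(summation_output)     # value in (-1, 1)
```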
Under its BRI, in light of the specification, this limitation encompasses the mathematical concept of applying a sigmoid or hyperbolic tangent mathematical function (i.e., mathematical relationships, mathematical formulas or equations, and mathematical calculations) to the output/sum resulting from a mathematical calculation – the summation operation (See, e.g., the mathematical relationships and calculations depicted in FIG. 35 and paragraphs 297-298 of the specification stating “Referring to FIG. 35, … since the output of the outer encoder is a real value, the sum of the inner encoder parts becomes the sum of the real values.” and “a sigmoid function or a hyperbolic tangent function may be applied to the summation output”). Dependent claim 7, when analyzed as a whole, is not patent eligible under 35 U.S.C. 101 because the additional recited limitation fails to establish that the claim is not directed to an abstract idea.

Step 2A Prong 2 Analysis: The claim does not recite any additional elements that integrate the abstract idea into a practical application or provide significantly more than the abstract idea, and thus the claim is subject-matter ineligible.

Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

Regarding claim 8, this claim is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1 Analysis: Claim 8 is directed to a method as depending from claim 6; thus, the analyses for patent eligibility of intervening claims 5-6 and of base claim 1 are incorporated herein.

Step 2A Prongs 1-2: The claim recites “wherein the neural network encoder receives function information indicating the function from a base station or an edge server.” This additional limitation does nothing to alter the fundamental nature of the claim as a mathematical concept.
The recitation of the “receives function information indicating the function from a base station or an edge server” limitation adds insignificant extra-solution activity (amounting to necessary data gathering) to the judicial exception, as discussed in MPEP § 2106.05(g). Also, the additional limitation of “wherein the neural network encoder receives function information indicating the function from a base station or an edge server” in claim 8 recites a field-of-use exception that generally links a judicial exception to a particular technological environment. (See MPEP 2106.05(h)). Accordingly, the additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. See MPEP 2106.04(d). The claim is directed to an abstract idea.

Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Receiving and communicating data are insignificant extra-solution activities that are well-understood, routine, and conventional. See MPEP § 2106.05(d)(II) (“The courts have recognized the following computer functions as well‐understood, routine, and conventional functions… i. Receiving or transmitting data over a network…iv. Storing and retrieving information in memory”) (citing OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015)). Therefore, the recitation of “the neural network encoder receives function information indicating the function from a base station or an edge server” is the well-understood, routine, conventional activity of receiving or transmitting data over a network, as discussed in MPEP § 2106.05(d). This claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As discussed above with respect to integration of the abstract idea into a practical application, there are no additional elements recited that impose any meaningful limits on practicing the abstract idea. Therefore, the additional element of this dependent claim is not sufficient to amount to significantly more than the abstract idea. This claim is not patent eligible.

Regarding claim 9, this claim is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1 Analysis: Claim 9 is directed to a method as depending from claim 5; thus, the analyses for patent eligibility of claim 5 and of base claim 1 are incorporated herein.

Step 2A Prong 1: The claim recites “wherein the neural network encoder multiplies an output of the summation operation by a parameter greater than 0 and less than 1.” This limitation does nothing to alter the fundamental nature of the claim as a mathematical concept. Under its BRI, in light of the specification, this limitation encompasses the mathematical concept of multiplying an output/sum resulting from a mathematical calculation, the summation operation, by a parameter > 0 and < 1 (i.e., mathematical calculations). Dependent claim 9, when analyzed as a whole, is not patent eligible under 35 U.S.C. 101 because the additional recited limitation fails to establish that the claim is not directed to an abstract idea.

Step 2A Prong 2 Analysis: Mere instructions to apply the mathematical concept electronically do not meaningfully integrate the judicial exception into a practical application. See MPEP 2106.05(f). The claim does not recite any additional elements that integrate the abstract idea into a practical application or provide significantly more than the abstract idea, and thus the claim is subject-matter ineligible. For example, claim 9 only recites the additional element of “the neural network encoder multiplies an output”.
The “neural network encoder” is recited at a high level of generality as mere instructions to implement an abstract idea on a computer (i.e., a system implementing the “neural network encoder”) and amounts to the recitation of the words “apply it” (or an equivalent), or amounts to no more than mere instructions to implement an abstract idea or other exception on a computer or merely use a computer as a tool to perform an abstract idea (i.e., as generic computer components performing generic computer functions). See MPEP 2106.05(f).

Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Viewing the additional element of this dependent claim as a combination does not add anything further than the individual elements. Mere instructions to apply the mathematical concept electronically (i.e., with the recited “neural network encoder”) do not amount to significantly more than the judicial exception. As noted above, merely asserting that a judicial exception is to be carried out on a generic computer cannot provide significantly more than the judicial exception. See MPEP § 2106.05(f). Accordingly, at Step 2B, the additional element does not amount to significantly more than the judicial exception.

Regarding claim 10, this claim is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1 Analysis: Claim 10 is directed to a method as depending from claim 1; thus, the analysis for patent eligibility of claim 1 is incorporated herein.

Step 2A Prongs 1-2: The claim recites “wherein the neural network encoder performs puncturing on at least one of the first output and a third output which is an output of the second encoding step.” These limitations do nothing to alter the fundamental nature of the claim as a mathematical concept.
This is because the additional limitation where the generically-recited “neural network encoder performs puncturing on at least one of the first output and a third output which is an output of the second encoding step”, under its BRI, in light of the specification, encompasses the mathematical concept of performing a puncturing operation to adjust a code rate of data values in the 1st and/or 3rd output data resulting from the mathematical encoding steps (See, e.g., the mathematical relationships and calculations depicted in FIG. 31 where x1 is the output value of a “Punc.”/puncturing mathematical operation and “y1 = x1 + n1”, and paragraph 289 of the specification stating “Referring to FIG. 31, in order to adjust the code rate of the neural network encoder system, puncturing may be performed at output end of an outer encoder and an inner encoder. The puncturing to generate a specific code rate may be performed on both output ends of the outer and inner encoders, or may be performed on only one encoder. In addition, a method of performing puncturing may be set differently for each code rate.”). Dependent claim 10, when analyzed as a whole, is not patent eligible under 35 U.S.C. 101 because the additional recited limitation fails to establish that the claim is not directed to an abstract idea.

The limitation “wherein the neural network encoder performs puncturing” is recited at a high level of generality as mere instructions to implement an abstract idea on a computer (i.e., a system implementing “the neural network encoder”) and amounts to the recitation of the words “apply it” (or an equivalent), or amounts to no more than mere instructions to implement an abstract idea or other exception on a computer or merely use a computer as a tool to perform an abstract idea (i.e., as generic computer components performing generic computer functions). See MPEP 2106.05(f). Thus, this limitation does nothing to alter the analysis of claim 1.
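For context on paragraph 289, puncturing deletes encoder output bits according to a fixed pattern, which raises the code rate without changing the encoder itself. A generic sketch, in which the puncturing pattern and bit values are hypothetical:

```python
# Generic puncturing sketch: keep only the bits selected by a fixed
# repeating pattern, raising the code rate. Pattern and bit values
# below are hypothetical examples.

def puncture(bits, pattern):
    """Keep bits[i] only where pattern[i % len(pattern)] == 1."""
    return [b for i, b in enumerate(bits) if pattern[i % len(pattern)] == 1]

coded = [1, 0, 1, 1, 0, 1]              # hypothetical encoder output
punctured = puncture(coded, [1, 1, 0])  # drop every third bit
```

Here six coded bits become four, so a rate-1/2 mother code would effectively become rate 3/4, consistent with using different patterns for different target code rates.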
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Viewing the additional element of this dependent claim as a combination does not add anything further than the individual elements. Mere instructions to apply the mathematical concept electronically (i.e., with the recited “neural network encoder”) do not amount to significantly more than the judicial exception. As noted above, merely asserting that a judicial exception is to be carried out on a generic computer cannot provide significantly more than the judicial exception. See MPEP § 2106.05(f). Accordingly, at Step 2B, the additional element does not amount to significantly more than the judicial exception.

Regarding claim 11, this claim is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1 Analysis: Claim 11 is directed to a method as depending from claim 1; thus, the analysis for patent eligibility of claim 1 is incorporated herein.

Step 2A Prongs 1-2: The claim recites “wherein the one or more neural networks comprises a systematic connection.” This limitation does nothing to alter the fundamental nature of the claim as a mathematical concept. This is because the additional limitation of “wherein the one or more neural networks comprises a systematic connection” merely limits the invention to a narrower abstract idea by further narrowing what the generically-recited “one or more neural networks” include (i.e., “a systematic connection”). Also, the limitation “wherein the one or more neural networks comprises a systematic connection” can be considered as “generally linking the use of a judicial exception to a particular technological environment or field of use”. See MPEP 2106.05(h). Dependent claim 11, when analyzed as a whole, is not patent eligible under 35 U.S.C.
101 because the additional recited limitation fails to establish that the claim is not directed to an abstract idea. Thus, this limitation does nothing to alter the analysis of claim 1.

Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

Regarding claim 12, this claim is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1 Analysis: Claim 12 is directed to a method as depending from claim 1; thus, the analysis for patent eligibility of claim 1 is incorporated herein.

Step 2A Prongs 1-2: The claim recites “wherein the neural network encoder is included in a user equipment (UE), a base station, an edge device, or an edge server.” This limitation does nothing to alter the fundamental nature of the claim as a mathematical concept. This is because the additional limitation of “wherein the neural network encoder is included in a user equipment (UE), a base station, an edge device, or an edge server” merely limits the invention to a narrower abstract idea by further narrowing what the generically-recited “neural network encoder” can be included in (i.e., one of a generically-recited “user equipment (UE), a base station, an edge device, or an edge server”). Also, the limitation “wherein the neural network encoder is included in a user equipment (UE), a base station, an edge device, or an edge server” can be considered as “generally linking the use of a judicial exception to a particular technological environment or field of use”. See MPEP 2106.05(h). Dependent claim 12, when analyzed as a whole, is not patent eligible under 35 U.S.C. 101 because the additional recited limitation fails to establish that the claim is not directed to an abstract idea. Thus, this limitation does nothing to alter the analysis of claim 1.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary.
Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-3, 10-11 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Kweon et al. (U.S. Patent Application Pub. No. 2019/0385325 A1, hereinafter “Kweon”) in view of Seol et al. (Korean Application Pub. No. KR-20010084779, hereinafter “Seol”).

With respect to claim 1, Kweon discloses the invention as claimed including an encoding method performed by a neural network encoder (see, e.g., paragraphs 5, “The present disclosure provides a method for training a neural network for outputting”, 10, “The neural network may include: an encoder for encoding features of an input” [i.e., an encoding method performed by a neural network encoder]), the method comprising: a first encoding step of encoding input data transmitted from a higher layer (see, e.g., paragraphs 17-18, “The neural network includes: an encoder which … outputs feature information … encoded through a plurality of convolution layers hierarchically connected to each other, and outputs intermediate information encoded in respective convolution layers” and “intermediate information encoded in respective convolution layers the encoder, may add the generated global context information to previous information transmitted from an upper interleaving layer to generate new information, and may transmit new information deconvolved to a lower interleaving layer.
The previous information may be generated by adding global context information generated in the upper interleaving layer and information encoded in the encoder.” [i.e., 1st encoding step of encoding input data transmitted from an upper/higher layer]); an interleaving step of performing interleaving on a first output which is an output of the first encoding step (see, e.g., paragraphs 17-18, “an interleaver which includes a plurality of interleaving layers”, “transmit new information deconvolved to a lower interleaving layer.” and 46, “The neural network 200 includes an encoder for encoding features of an image input, an interleaver for receiving encoded information and intermediate information of the encoder and outputting” [i.e., performing interleaving on the encoded information/1st output of the 1st encoding]); and … wherein … the first encoding step … is performed based on one or more neural networks (see, e.g., paragraphs 17, “The neural network includes: an encoder which … outputs feature information … encoded through a plurality of convolution layers hierarchically connected to each other, and outputs intermediate information encoded in respective convolution layers” and 46, “The neural network 200 includes an encoder for encoding features of an image input” [i.e., the 1st encoding is performed based on a neural network]).

Although Kweon substantially discloses the claimed invention, Kweon is not relied on for explicitly disclosing an encoding method performed by a … encoder in a wireless communication system and a second encoding step of encoding a second output which is an output obtained by interleaving the first output, wherein … the second encoding step is performed based on one or more neural networks.
However, in the same field, analogous art Seol teaches an encoding method performed by a … encoder in a wireless communication system (see, e.g., paragraph 15, “an encoding apparatus and method of a wireless communication system having a low frame error rate by using external and internal convolutional encoders in different forms.”) and a second encoding step of encoding a second output which is an output obtained by interleaving the first output, wherein … the second encoding step is performed based on one or more neural networks (see, e.g., Abstract, “The interleaver (43) which mixes the bit encoded about the bit string inputted from the puncturing device(42) to the predetermined pattern and outputted … an internal convolution encoder (44) that encodes the bit string input from the interleaver”, and paragraphs 19, “the sccc includes two convolutional encoders 21 and 24, a puncturing device 22 for cutting bits encoded through the outer convolutional encoder 21 to a predetermined standard, and an interleaver 23 for mixing input bit strings of the inner convolutional encoder 24.” and 25, “two convolutional encoders … are connected in series between a puncturing device and an interleaver.” [i.e., a 2nd encoder for encoding a 2nd output that is output/result from interleaving 1st output from the 1st encoder, the 2nd encoding being performed based on convolutional encoder/a convolutional neural network]).

Kweon and Seol are analogous art because they are both related to techniques and systems for using convolutional neural networks and convolutional encoders to encode features of data and interleavers to interleave encoded data. (See, e.g., Kweon, paragraphs 10, 12 and 17, and Seol, Abstract, and paragraphs 16-17 and 19).
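For context, the serial concatenation Seol is cited for (outer encoder, then puncturing, then interleaver, then inner encoder) can be sketched as a simple pipeline. The toy stand-in encoders and the interleaver below are hypothetical placeholders used only to show the data flow; they are not Seol's convolutional encoders.

```python
# Data-flow sketch of a serially concatenated (SCCC-style) structure:
# outer encoder -> puncturing -> interleaver -> inner encoder.
# All four stages are hypothetical toy stand-ins.

def outer_encode(bits):          # toy rate-1/2 repetition stand-in
    return [b for bit in bits for b in (bit, bit)]

def puncture(bits):              # toy pattern: keep every other bit
    return bits[::2]

def interleave(bits):            # toy fixed reordering (reversal)
    return bits[::-1]

def inner_encode(bits):          # toy XOR-accumulator stand-in
    out, prev = [], 0
    for b in bits:
        prev ^= b
        out.append(prev)
    return out

codeword = inner_encode(interleave(puncture(outer_encode([1, 0, 1]))))
```

The point of the sketch is only the stage ordering: the inner encoder's input is the interleaved (and punctured) output of the outer encoder, matching the second-encoding-step mapping above.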
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kweon to incorporate the teachings of Seol “to provide an outer convolutional encoder (41) which has a structure different from that of an inner convolutional encoder” where the outer encoder “encodes an input sequence at a predetermined code rate” and an “interleaver(43) which mixes the bit encoded about the bit string inputted from the puncturing device(42) to the predetermined pattern and output” (See, e.g., Seol, Abstract). Doing so would have allowed Kweon to leverage Seol’s convolutional encoders and interleaver “of the present invention” with Seol’s “improved serial chain convolution encoder” so “that the serial chain convolutional encoder according to the present invention exhibits excellent encoding performance in a portion lower than the frame error rate 10e-5”, as suggested by Seol (See, e.g., Seol, Abstract and paragraphs 19 and 39). Regarding claim 2, as discussed above, Kweon in view of Seol teaches the method of claim 1. Kweon further discloses wherein … the first encoding step … is performed based on a plurality of parallel-connected neural networks (see, e.g., paragraphs 2, “a learning model is configured with two coarse and fine networks” [i.e., a plurality of neural networks] and 46, “the neural network 200 may … be convolution neural network (CNN). The neural network 200 may be referred to as a multispectral transfer network (MTN) … The neural network 200 includes an encoder for encoding features of an image input, an interleaver for receiving encoded information and intermediate information of the encoder and outputting … the neural network 200 is designed to simultaneously estimate the chromaticity image (Chromaticity) of the color image” [i.e., the 1st encoding is performed based on simultaneously/parallel-connected neural network models/networks]). 
Although Kweon substantially discloses the claimed invention, Kweon is not relied on for explicitly disclosing wherein each of the first encoding step and the second encoding step is performed based on a plurality of parallel-connected neural networks. However, in the same field, analogous art Seol teaches wherein each of the first encoding step and the second encoding step is performed based on a plurality of parallel-connected neural networks (see, e.g., FIG. 1 – depicting 2 parallel-connected convolutional encoders based on convolutional neural networks 11 and 13, and paragraphs 1, “Fig. 1 is a block diagram of a parallel chain convolutional encoder[s]” and 16-17, “In the next-generation wireless data communication system, two convolutional encoders are connected in parallel”, “A form in which in which two convolutional encoders are connected in parallel is referred to as a parallel concatenated convolutional code (pccc) … the pccc exhibits excellent encoding performance” [i.e., the 1st and 2nd encoding steps are performed by 2 parallel-connected convolutional encoders based on 2 convolutional neural networks]). Kweon and Seol are analogous art because they are both related to techniques and systems for using convolutional neural networks and convolutional encoders to encode features of data and interleavers to interleave encoded data. (See, e.g., Kweon, paragraphs 10, 12 and 17, and Seol, Abstract, and paragraphs 16-17 and 19). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kweon to incorporate the teachings of Seol to provide a “next-generation wireless data communication system, [where] two convolutional encoders are connected in parallel” and “A form in which in which two convolutional encoders are connected in parallel is referred to as a parallel concatenated convolutional code (pccc)” (See, e.g., Seol, paragraphs 16-17). 
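The parallel concatenation (PCCC) cited from Seol's paragraphs 16-17, two encoders fed in parallel with the second receiving an interleaved copy of the input, can be sketched as follows. The branch encoder (a running-XOR parity) and the permutation are illustrative assumptions, not structures taken from Seol.

```python
# Illustrative PCCC arrangement: systematic bits plus two parity branches,
# the second branch encoding an interleaved copy of the input.
# The toy parity encoder and permutation are example assumptions, not Seol's.

def parity_encode(bits):
    """Toy recursive parity branch: running XOR of the inputs (illustrative)."""
    acc, out = 0, []
    for b in bits:
        acc ^= b
        out.append(acc)
    return out

def pccc_encode(bits, perm):
    """Two encoders connected in parallel; the second sees interleaved input."""
    parity1 = parity_encode(bits)                     # first parallel branch
    parity2 = parity_encode([bits[p] for p in perm])  # interleaved branch
    return bits, parity1, parity2                     # systematic + two parities
```

The three returned streams correspond to the turbo-style output structure of a parallel concatenated convolutional code.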
Doing so would have allowed Kweon to use Seol’s parallel convolutional encoders/pccc “in an environment of a low signal-to-noise ratio, [where] the pccc exhibits excellent encoding performance”, as suggested by Seol (See, e.g., Seol, paragraph 17). Regarding claim 3, as discussed above, Kweon in view of Seol teaches the method of claim 2. Although Kweon substantially discloses the claimed invention, Kweon is not relied on for explicitly disclosing wherein the neural network encoder performs the interleaving after performing parallel-to-serial conversion on the first output. However, in the same field, analogous art Seol teaches wherein the neural network encoder performs the interleaving after performing parallel-to-serial conversion on the first output (see, e.g., Abstract, “The interleaver(43) which mixes the bit encoded about the bit string inputted … It is achieved by an internal convolution encoder (44) that encodes the bit string input from the interleaver (43) at a predetermined code rate” and paragraphs 17, “two convolutional encoders are connected in parallel … and a form in which two convolutional encoders are connected in series”, 19, “two convolutional encoders 21 and 24, a puncturing device 22 for cutting bits encoded through the outer convolutional encoder 21 … and an interleaver 23 for mixing input bit strings of the inner convolutional encoder 24” and 32, “one data block having a predetermined length is converted … through the external convolutional encoder 41, and then the code rate is converted back … In the data block whose code rate is converted in this way, the bits encoded by the interleaver 43 are mixed, and then input to the internal interleaver 44 and encoded.” [i.e., the neural network/convolution encoder performs interleaving after parallel-to-serial conversion of the 1st output data block]). 
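Claim 3's ordering, parallel-to-serial conversion of the first output followed by interleaving, can be sketched as follows. The round-robin serialization and the row/column block interleaver are common textbook choices used here for illustration only; neither structure is taken from Seol.

```python
# Illustrative claim-3 ordering: serialize the parallel branch outputs first,
# then interleave the serial stream. Both structures are example assumptions.

def parallel_to_serial(branches):
    """Parallel-to-serial conversion: take one bit from each branch in turn."""
    return [b for group in zip(*branches) for b in group]

def block_interleave(bits, rows, cols):
    """Write the serial stream row-wise into a rows x cols array, read column-wise."""
    assert len(bits) == rows * cols
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]
```

Interleaving is applied only after the branch outputs have been merged into one serial stream, matching the order the claim recites.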
Kweon and Seol are analogous art because they are both related to techniques and systems for using convolutional neural networks and convolutional encoders to encode features of data and interleavers to interleave encoded data. (See, e.g., Kweon, paragraphs 10, 12 and 17, and Seol, Abstract, and paragraphs 16-17 and 19). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kweon to incorporate the teachings of Seol “to provide an outer convolutional encoder (41) which has a structure different from that of an inner convolutional encoder” where the outer encoder “encodes an input sequence at a predetermined code rate” and an “interleaver(43) which mixes the bit encoded about the bit string inputted from the puncturing device(42) to the predetermined pattern and output” (See, e.g., Seol, Abstract). Doing so would have allowed Kweon to leverage Seol’s convolutional encoders and interleaver “of the present invention” with Seol’s “improved serial chain convolution encoder” so “that the serial chain convolutional encoder according to the present invention exhibits excellent encoding performance in a portion lower than the frame error rate 10e-5”, as suggested by Seol (See, e.g., Seol, Abstract and paragraphs 19 and 39). Regarding claim 10, as discussed above, Kweon in view of Seol teaches the method of claim 1. Although Kweon substantially discloses the claimed invention, Kweon is not relied on for explicitly disclosing wherein the neural network encoder performs puncturing on at least one of the first output and a third output which is an output of the second encoding step. 
However, in the same field, analogous art Seol teaches wherein the neural network encoder performs puncturing on at least one of the first output and a third output which is an output of the second encoding step (see, e.g., Abstract, “outer convolutional encoder (41) … encodes an input sequence at a predetermined code rate, … puncturing device(42) which generates the bit string to correspond to the code rate in which the bit string inputted from the outside convolutional encoder(41) is punctured and which is given [to] The interleaver(43) which mixes the bit encoded about the bit string inputted from the puncturing device(42) to the predetermined pattern and outputted It is achieved by an internal convolution encoder (44) that encodes the bit string input” and paragraphs 19, “two convolutional encoders 21 and 24, a puncturing device 22 for cutting bits encoded through the outer convolutional encoder 21 to a predetermined standard”, 25, “two convolutional encoders having a predetermined code rate are connected in series between a puncturing device and an interleaver” and 32, “data block having a predetermined length is converted to 1⁄2 of the code rate through the external convolutional encoder 41, and then the code rate is converted back to 2/3 through the puncturing device 42” [i.e., convolutional/neural network encoder performs puncturing on the 1st output from the 1st encoder or an output of the 2nd encoding by the 2nd encoder]). Kweon and Seol are analogous art because they are both related to techniques and systems for using convolutional neural networks and convolutional encoders to encode features of data and interleavers to interleave encoded data. (See, e.g., Kweon, paragraphs 10, 12 and 17, and Seol, Abstract, and paragraphs 16-17 and 19). 
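The rate conversion quoted from Seol's paragraph 32, a rate-1/2 encoder output punctured up to rate 2/3, can be sketched as follows. The specific puncturing pattern is a standard textbook pattern assumed for illustration; Seol is not quoted as giving the pattern itself.

```python
# Illustrative puncturing from rate 1/2 to rate 2/3: for every 2 input bits the
# outer encoder emits 4 coded bits; keeping 3 of every 4 (pattern 1,1,0,1,
# an assumed pattern) leaves 3 coded bits per 2 input bits, i.e. rate 2/3.

def puncture_half_to_two_thirds(coded):
    """Keep 3 of every 4 rate-1/2 coded bits, converting the rate to 2/3."""
    pattern = (1, 1, 0, 1)
    return [b for i, b in enumerate(coded) if pattern[i % 4]]
```

For 4 information bits, the rate-1/2 output is 8 coded bits and the punctured output is 6, giving the 4/6 = 2/3 rate described.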
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kweon to incorporate the teachings of Seol “to provide an outer convolutional encoder (41) which has a structure different from that of an inner convolutional encoder” where the outer encoder “encodes an input sequence at a predetermined code rate” and an “interleaver(43) which mixes the bit encoded about the bit string inputted from the puncturing device(42) to the predetermined pattern and output” (See, e.g., Seol, Abstract). Doing so would have allowed Kweon to leverage Seol’s convolutional encoders and interleaver “of the present invention” with Seol’s “improved serial chain convolution encoder” so “that the serial chain convolutional encoder according to the present invention exhibits excellent encoding performance in a portion lower than the frame error rate 10e-5”, as suggested by Seol (See, e.g., Seol, Abstract and paragraphs 19 and 39). Regarding claim 11, as discussed above, Kweon in view of Seol teaches the method of claim 1. Kweon further discloses wherein the one or more neural networks comprises a systematic connection (see, e.g., FIG. 3 – depicting Neural Network 200 including a systematic connection to a Thermal camera and other system component and paragraphs 17, “neural network operated by at least one processor. The neural network includes: an encoder which receives a thermal image, outputs feature information of the thermal image encoded through a plurality of convolution layers hierarchically connected to each other”, 44, “When the training apparatus 300 is separated from the depth image outputting apparatus 400, the training apparatus 300 may train the neural network 200 in the center, and a plurality of depth image outputting apparatuses 400 may use the trained neural network 200 [i.e., apparatuses systematically connected to the neural network]. 
When the neural network 200 is updated in the training apparatus 300, a plurality of depth image outputting apparatuses 400 may download the updated neural network 200 and may use the same. … a mobility object such as a vehicle or a robot may have the depth image outputting apparatus 400 mounted thereto, and the neural network 200 trained by the training apparatus 300 may be periodically updated.” and 59-60, “depth image outputting apparatus 400 receives a thermal image … photographed by the thermal camera”, “depth image outputting apparatus 400 inputs the input thermal image … to the neural network 200” [i.e., the neural network includes a systematic connection to a processor, apparatuses, a camera, a vehicle, a robot, and other system objects/components]). With respect to independent claim 14, claim 14 is substantially similar to claim 1 and therefore is rejected on the same ground as claim 1, discussed above. In particular, claim 14 is an apparatus (device) claim with steps that are identical to the method steps of claim 1. In addition, Kweon further discloses an apparatus configured to control a neural network encoder, the apparatus comprising: at least one processor; and at least one memory executablely coupled to the at least one processor and storing instructions, wherein the at least one processor execute the instructions, wherein the at least one processor performs: <steps> (see, e.g., paragraphs 6, “apparatus operated by at least one processor. The depth estimating apparatus includes: a database which stores”, 17, “a neural network operated by at least one processor. 
The neural network includes: an encoder which receives a thermal image, outputs feature information”, 41, “‘module’ described in the specification mean units for processing at least one function and operation, and can be implemented by hardware components or software components and combinations thereof.” and 94, “The above-described embodiments can be realized through a program for realizing functions corresponding to the configuration of the embodiments or a recording medium for recording the program, in addition to through the above-described apparatus and/or method.” [i.e., an apparatus for controlling a neural network encoder, the apparatus including a database and recording medium/memory storing executable software/instructions executed by at least one processor for performing operations/method steps]). Claims 4-7 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Kweon in view of Seol as applied to claim 1 above, and further in view of Henry et al. (U.S. Patent Application Pub. No. 2018/0157966 A1, hereinafter “Henry”). Regarding claim 4, as discussed above, Kweon in view of Seol teaches the method of claim 1. Kweon further discloses wherein the first encoding step is performed based on the one or more neural networks (see, e.g., paragraphs 17, “The neural network includes: an encoder which … outputs feature information … encoded through a plurality of convolution layers hierarchically connected to each other, and outputs intermediate information encoded in respective convolution layers” and 46, “The neural network 200 includes an encoder for encoding features of an image input” [i.e., the 1st encoding is performed based on the neural network]). Although Kweon in view of Seol substantially teaches the claimed invention, Kweon in view of Seol is not relied on to teach step is performed based on the one or more neural networks and a first accumulator; and wherein the first accumulator performs an exclusive OR operation. 
However, in the same field, analogous art Henry teaches step is performed based on the one or more neural networks and a first accumulator (see, e.g., paragraphs 184-186, “NPU [neural processing unit] 126 also includes a second wide mux (not shown) for bypassing the wide adder 244A to facilitate loading the wide accumulator 202A” [i.e., 1st accumulator], “the wide AFU 212A receives the output 217 A of the wide accumulator 202A and performs an activation function on it to generate a wide result … operation that is used in pooling layers of some artificial neural network applications”, “NPU 126 operates effectively as two narrow NPUs … assume a neural network layer having 1024 neurons” [i.e., an operation/step is performed based on the neural network and a 1st accumulator]); and wherein the first accumulator performs an exclusive OR operation (see, e.g., FIG. 4 – depicting ACCUM/accumulator instructions/operations and FIG. 18 – depicting 1st accumulator 202A of NPU/neural processing unit 126, and paragraphs 99, “The other input receives the output 217 of the accumulator 202. The ALU 204 performs arithmetic and/or logical operations on its inputs to generate a result provided on its output. … the multiply-accumulate instruction of FIG. 4 specifies a multiply-accumulate operation, i.e., the result 215 is the sum of the accumulator 202 value 217 and the product of the weight word 203 and the data word of the mux-reg 208 output 209”, 178, “NPU 126 effectively functions as two narrow NPUs. … wide adder 244A adds the output of the wide mux 1896A and the wide accumulator 202A output 217A to generate a sum 215A for provision to the wide accumulator 202A” and 310, “combinatorial logic that accomplish the operation of the NPUs 126 as described herein, such as Boolean logic gates” [i.e., 1st accumulator performs an exclusive OR operation with an exclusive OR logic gate]). 
Kweon, Seol and Henry are analogous art because they are each related to techniques and systems for using convolutional neural networks and convolutional encoders to process data. (See, e.g., Kweon, paragraphs 10, 12 and 17, Seol, Abstract, and paragraphs 16-17 and 19, and Henry, paragraphs 236, 479 and 493). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kweon in view of Seol to incorporate the teachings of Henry to provide a “NPU [neural processing unit] that includes two accumulators 202A and 202B.” where a “wide adder 244A adds the output of the wide mux 1896A and the wide accumulator 202A output 217A to generate a sum 215A for provision to the wide accumulator 202A and … narrow adder 244B adds the output of the narrow mux 1896B and the narrow accumulator 202B output 217B to generate a sum 215B for provision to the narrow accumulator 202B.” (See, e.g., Henry, FIG. 18 and paragraphs 172 and 179). Doing so would have allowed Kweon in view of Seol to use Henry’s NPU with 2 accumulators where “the narrow accumulator 202B is 28 bits wide to avoid loss of precision in the accumulation of up to 1024 16-bit products. When the NPU 126 is in a wide configuration, the narrow multiplier 242B, narrow mux 1896B, narrow adder 244B, narrow accumulator 202B and narrow AFU 212B are preferably inactive to reduce power consumption.”, as suggested by Henry (see, e.g., Henry, paragraph 178). Regarding claim 5, as discussed above, Kweon in view of Seol and Henry teaches the method of claim 4. Although Kweon substantially discloses the claimed invention, Kweon is not relied on for explicitly disclosing wherein the second encoding step is performed based on the one or more neural networks. 
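The claim 4/5 distinction mapped onto Henry, a first accumulator combining by exclusive OR and a second accumulator combining by summation, can be sketched as follows. This is an illustrative reading only: Henry's accumulators 202A/202B are described as multiply-accumulate datapaths, and the XOR here stands in for the Boolean combinatorial logic cited at Henry's paragraph 310.

```python
# Illustrative reading of the two accumulators: one combining inputs by
# exclusive OR, one by summation (as in a multiply-accumulate unit).
from functools import reduce
from operator import xor

def xor_accumulate(bits):
    """First accumulator: running exclusive OR over the input bits."""
    return reduce(xor, bits, 0)

def sum_accumulate(products):
    """Second accumulator: running sum of products, as in a MAC datapath."""
    return sum(products)
```

An XOR accumulator yields the parity of its inputs, which is also how a convolutional encoder's output bits are formed; the summation accumulator matches Henry's accumulation of weight-times-data products.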
However, in the same field, analogous art Seol teaches wherein the second encoding step is performed based on the one or more neural networks (see, e.g., Abstract, “an internal convolution encoder (44) that encodes the bit string input from the interleaver”, and paragraphs 19, “the sccc includes two convolutional encoders 21 and 24, a puncturing device 22 for cutting bits encoded through the outer convolutional encoder 21 to a predetermined standard, and an interleaver 23 for mixing input bit strings of the inner convolutional encoder 24.” and 25, “two convolutional encoders … are connected in series between a puncturing device and an interleaver.” [i.e., 2nd encoding is performed based on convolutional encoder/convolutional neural network]). The motivation to combine Kweon and Seol is the same as discussed above with respect to claim 1. Although Kweon in view of Seol substantially teaches the claimed invention, Kweon in view of Seol is not relied on to teach step is performed based on the one or more neural networks and a second accumulator; and wherein the second accumulator performs a summation operation. However, in the same field, analogous art Henry teaches step is performed based on the one or more neural networks and a second accumulator; and wherein the second accumulator performs a summation operation (see, e.g., FIG. 
18 – depicting 2nd accumulator 202B of NPU/neural processing unit 126, and paragraphs 99, “the result 215 is the sum of the accumulator 202 value 217” [i.e., accumulator performs a summation operation], 184-186, “NPU [neural processing unit] 126 also includes … narrow accumulator 202B” [i.e., 2nd accumulator], “narrow AFU 212B receives the output 217B of the narrow accumulator 202B and performs an activation function on it to generate a narrow result … operation that is used in pooling layers of some artificial neural network applications”, “NPU 126 operates effectively as two narrow NPUs … assume a neural network layer having 1024 neurons”, 202, “performs the activation function on the resulting sum 215A to generate a narrow result” and 493, “each NPU 126 accumulates into its accumulator 202 a sum” [i.e., an operation/step is performed based on the neural network and a 2nd accumulator that performs a summation operation]). The motivation to combine Kweon, Seol and Henry is the same as discussed above with respect to claim 4. Regarding claim 6, as discussed above, Kweon in view of Seol and Henry teaches the method of claim 5. Kweon further discloses wherein the neural network encoder applies a function to an output (see, e.g., paragraphs 73-74, “encoder 210 transmits thermal image encoded information to the decoder 250 through a plurality of convolution layers, and transmits intermediate information output by the respective layers to the interleaver 230 and the decoder 250”, “interleaver 230 … outputs a chromaticity image through an activation function. A sigmoidal function will be exemplified as the activation function” and 77, “decoder 250 uses an adaptive scaled sigmoid activation function module 251 for gradually increasing a maximum value of the sigmoidal activation function” [i.e., the encoder applies a sigmoid activation function to an output]). 
Although Kweon in view of Seol substantially teaches the claimed invention, Kweon in view of Seol is not relied on to teach wherein the neural network … applies a function to an output of the summation operation. However, in the same field, analogous art Henry teaches wherein the neural network … applies a function to an output of the summation operation (see, e.g., FIG. 48 – depicting instruction to apply “SIGMOID” function to output of the “ACCUM”/summation operation “I= SIGMOID (Wi * X +Ui * H +Bi)” and FIG. 51 – depicting instruction to “OUTPUT SIGMOID” of “ADD_W_ACC”/summation operation, and paragraphs 102, “the activation function in a neuron of an intermediate layer of an artificial neural network may serve to normalize the accumulated sum … selects one of the activation functions to perform on the accumulator 202 output 217. The activation functions may include … a sigmoid function, a hyperbolic tangent (tan h) function” and 266, “activation function 2934 specifies the function applied to the accumulator 202 value 217 to generate the output 133 of the NPU 126. As described above and below in more detail, the activation functions 2934 include, but are not limited to: sigmoid; hyperbolic tangent” [i.e., the neural network and its NPU/neural processing unit applies a function to an output/accumulated sum of the summation operation]). The motivation to combine Kweon, Seol and Henry is the same as discussed above with respect to claim 4. Regarding claim 7, as discussed above, Kweon in view of Seol and Henry teaches the method of claim 6. Kweon further discloses wherein the function is a sigmoid function or a hyperbolic tangent function (see, e.g., paragraphs 74, “outputs a chromaticity image through an activation function. 
A sigmoidal function will be exemplified as the activation function” and 77, “decoder 250 uses an adaptive scaled sigmoid activation function module 251 for gradually increasing a maximum value of the sigmoidal activation function” [i.e., the function is a sigmoid function]). Although Kweon in view of Seol substantially teaches the claimed invention, Kweon in view of Seol is not relied on to teach wherein the function is a sigmoid function or a hyperbolic tangent function. However, in the same field, analogous art Henry teaches wherein the function is a sigmoid function or a hyperbolic tangent function (see, e.g., FIGs. 48 and 51 depicting instructions to apply “SIGMOID” and “TANH” functions, and paragraphs 102, “The activation functions may include … a sigmoid function, a hyperbolic tangent (tan h) function” and 266, “the activation functions 2934 include, but are not limited to: sigmoid; hyperbolic tangent” [i.e., the function is a sigmoid or hyperbolic tangent/tanh function]). The motivation to combine Kweon, Seol and Henry is the same as discussed above with respect to claim 4. Regarding claim 9, as discussed above, Kweon in view of Seol and Henry teaches the method of claim 5. Kweon further discloses wherein the neural network encoder multiplies an output … by a parameter (see, e.g., paragraphs 17, “The neural network includes: an encoder”, 68, “add global context information to an encoding value of the encoder 210 through a configuration … optimization of parameters of the neural network 200 … weight for optimizing the encoder” and 71-72, “convolution unit 235 produces fine details on the chromaticity by convolving intermediate encoding information (Ln) and encoding information output”, “neural network 200 may … output … layers included in the encoder 210 … may be increased or reduced. 
Values of a parenthesis of the convolution and deconvolution blocks are convolution and deconvolution parameters” [i.e., the neural network encoder convolves/multiplies an output value by a parameter/weight]). Although Kweon in view of Seol substantially teaches the claimed invention, Kweon in view of Seol is not relied on to teach wherein the neural network … multiplies an output of the summation operation by a parameter greater than 0 and less than 1. However, in the same field, analogous art Henry teaches wherein the neural network … multiplies an output of the summation operation by a parameter greater than 0 and less than 1 (see, e.g., paragraphs 100, “ALU 204 provides its output 215 to the accumulator 202 … ALU 204 includes a multiplier 242 that multiplies the weight word 203 and the data word of the mux-reg 208 output 209 to generate a product”, 102, “the activation function in a neuron of … an artificial neural network may serve to normalize the accumulated sum of products … a receiving node multiplies by a weight … to generate a product that is accumulated … receiving/connected neurons may expect to receive as input a value between 0 and 1, in which case the outputting neuron may need to non-linearly squash and/or adjust (e.g., upward shift to transform negative to positive values) the accumulated sum that is outside the 0 to 1 range to a value within the expected range. Thus, the AFU 212 performs an operation on the accumulator 202 value 217 to bring the result 133 within a known range.” and 279-280, “an accumulator 202 that has a large enough bit width to accumulate a full precision value for the maximum number of allowable accumulations … of the NPU [neural processing unit] … sums generated by the integer adder… the non-full precision accumulator”, “The range of the data word values is between 0 and 1” [i.e., the neural network multiplies a summation output by a parameter/weight/data word between 0 and 1/greater than 0 and less than 1]). 
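The claims 6-7 reading on Henry, an activation function (sigmoid or hyperbolic tangent) applied to the output of the summation operation, can be sketched as follows. The function names and example weights are illustrative assumptions, not values from Kweon or Henry; note that sigmoid squashes the accumulated sum into (0, 1), the same range as the claim 9 parameter.

```python
# Illustrative neuron datapath: multiply-accumulate, then an activation
# function (sigmoid here; tanh is the other option the claims recite)
# applied to the accumulated sum. Weights are example assumptions.
import math

def sigmoid(x):
    """Logistic sigmoid: maps any real sum into the open interval (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def neuron_output(inputs, weights, activation=sigmoid):
    """Sum of products (summation operation), then the activation function."""
    acc = sum(i * w for i, w in zip(inputs, weights))  # summation operation
    return activation(acc)                             # function on the sum
```

With `math.tanh` substituted for `sigmoid`, the same datapath yields the hyperbolic-tangent alternative, whose output lies in (-1, 1).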
The motivation to combine Kweon, Seol and Henry is the same as discussed above with respect to claim 4. Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Kweon in view of Seol as applied to claim 1 above, and further in view of Sandin (U.S. Patent No. 5,946,357, hereinafter “Sandin”). Regarding claim 12, as discussed above, Kweon in view of Seol teaches the method of claim 1. Although Kweon in view of Seol substantially teaches the claimed invention, Kweon in view of Seol is not relied on to teach wherein the neural network encoder is included in a user equipment (UE), a base station, an edge device, or an edge server. However, in the same field, analogous art Sandin teaches wherein the neural network encoder is included in a user equipment (UE), a base station, an edge device, or an edge server (see, e.g., col. 1, lines 30-32, “encoding and interleaving can be performed at a single logical device, such as at a radio base station of the cellular communication system.”, col. 4, lines 5-7, 38-41 and 64-66 “encoder convolutionally encodes bits of each of the frames of data bits. … a radio base station … forms a multi-stage encoded and interleaved signal … in a cellular communication system, a mobile terminal [i.e., user equipment/UE] is constructed to include apparatus analogous to the apparatus forming portions of a radio base station”, col. 5, lines 1-3, “The mobile terminal is similarly also constructed to include apparatus to encode and interleave signals to be communicated to the radio base station”, col. 6, line 64-col. 7, line 1, “encoders 32 and 42 and interleavers 36 and 46 together form the apparatus 50 … for forming a multi-stage interleaved and encoded communication signal. In one embodiment, the apparatus 50 is formed at a radio base station” and col. 
12, lines 5-7, “interleaving and encoding operations may be performed at a radio base station” [i.e., convolutional/neural network encoder is included in a mobile terminal/user equipment/UE or a base station]). Kweon, Seol and Sandin are analogous art because they are each related to techniques and systems for using convolutional neural networks and convolutional encoders and interleavers to process data. (See, e.g., Kweon, paragraphs 10, 12 and 17, Seol, Abstract, and paragraphs 16-17 and 19, and Sandin, col. 4, lines 5-8 and col. 6, lines 48-57). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kweon in view of Seol to incorporate the teachings of Sandin to provide an “Apparatus and associated method” that use “Multi-stage encoding and interleaving of the data bits of a digital information signal is performed at a single logical device, such as a radio base station.” where “only a single buffering stage is required by the transmitter apparatus to form the multi-stage interleaved and encoded signal, and only a single buffering stage is required by the receiver apparatus” (See, e.g., Sandin, Abstract and col. 4, lines 33-36). Doing so would have allowed Kweon in view of Seol to use Sandin’s apparatus and method for “improving the radio link performance of a radio communication system, such as a cellular communication system.” and “Because only a single buffering stage is required by the transmitter apparatus to form the multi-stage interleaved and encoded signal, and only a single buffering stage is required by the receiver apparatus, a substantial reduction in the signal transmission delay is possible.” as suggested by Sandin (see, e.g., Sandin, Abstract and col. 4, lines 32-37). Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Kweon in view of Seol and Henry as applied to claim 6 above, and further in view of Sandin (U.S. Patent No. 
5,946,357, hereinafter “Sandin”). Regarding claim 8, as discussed above, Kweon in view of Seol and Henry teaches the method of claim 6. Kweon further discloses wherein the neural network encoder receives function information indicating the function (see, e.g., paragraphs 68, “add global context information to an encoding value of the encoder 210”, 75, “decoder 250 is configured with a plurality of deconvolution layers, it receives encoded information from the encoder 210 and intermediate information output by the intermediate layer of the encoder 210”, and 77, “decoder 250 uses an adaptive scaled sigmoid activation function module 251 for gradually increasing a maximum value of the sigmoidal activation function” [i.e., neural network encoder receives information indicating the function, which is passed to the decoder]). Although Kweon in view of Seol and Henry substantially teaches the claimed invention, Kweon in view of Seol and Henry is not relied on to teach wherein the neural network encoder receives function information indicating the function from a base station or an edge server. However, in the same field, analogous art Sandin teaches wherein the neural network encoder receives function information indicating the function from a base station or an edge server (see, e.g., col. 1, lines 6-11 and 30-32, “transmission of a digital communication signal … between a radio base station and a mobile terminal of a cellular communication system. … encoding and interleaving can be performed at a single logical device, such as at a radio base station of the cellular communication system.”, col. 4, lines 5-7 and 38-41, “encoder convolutionally encodes bits of each of the frames of data bits. … a radio base station operable in a cellular communication system forms a multi-stage encoded and interleaved signal for communication”, col. 
9, lines 1-5, “transmission … of the frames upon which interleaving and encoding … operations are performed and the functional locations” and col. 12, lines 5-7, “interleaving and encoding operations may be performed at a radio base station which forms a downlink signal to be transmitted” [i.e., convolutional/neural network encoder receives information indicating the function from a base station]). Kweon, Seol, Henry and Sandin are analogous art because they are each related to techniques and systems for using convolutional neural networks and convolutional encoders and interleavers to process data. (See, e.g., Kweon, paragraphs 10, 12 and 17, Seol, Abstract, and paragraphs 16-17 and 19, Henry, paragraphs 236, 479 and 493, and Sandin, col. 4, lines 5-8 and col. 6, lines 48-57). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kweon in view of Seol and Henry to incorporate the teachings of Sandin to provide an “Apparatus and associated method” that use “Multi-stage encoding and interleaving of the data bits of a digital information signal is performed at a single logical device, such as a radio base station.” where “only a single buffering stage is required by the transmitter apparatus to form the multi-stage interleaved and encoded signal, and only a single buffering stage is required by the receiver apparatus” (See, e.g., Sandin, Abstract and col. 4, lines 33-36). 
Doing so would have allowed Kweon in view of Seol and Henry to use Sandin’s apparatus and method for “improving the radio link performance of a radio communication system, such as a cellular communication system.” and “Because only a single buffering stage is required by the transmitter apparatus to form the multi-stage interleaved and encoded signal, and only a single buffering stage is required by the receiver apparatus, a substantial reduction in the signal transmission delay is possible.” as suggested by Sandin (see, e.g., Sandin, Abstract and col. 4, lines 32-37).

Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Kweon in view of Seol and further in view of Tanner (U.S. Patent Application Pub. No. 2021/0048991 A1, hereinafter “Tanner”). With respect to independent claim 13, claim 13 is substantially similar to claim 1 and therefore is rejected on the same ground as claim 1, discussed above. In particular, claim 13 is an encoder (device) claim with steps that are identical to the method steps of claim 1. In addition, Kweon further discloses a neural network encoder, the neural network encoder comprising: at least one memory storing instructions; at least one transceiver; and at least one processor coupling the at least one memory … wherein the at least one processor execute⁶ the instructions, wherein the at least one processor performs: <steps> (see, e.g., paragraphs 17, “a neural network operated by at least one processor.
The neural network includes: an encoder which receives a thermal image, outputs feature information”, 41, “‘module’ described in the specification mean units for processing at least one function and operation, and can be implemented by hardware components or software components and combinations thereof.” and 94, “The above-described embodiments can be realized through a program for realizing functions corresponding to the configuration of the embodiments or a recording medium for recording the program, in addition to through the above-described apparatus and/or method.” [i.e., a neural network encoder including a recording medium/memory storing software/instructions executed by at least one processor for performing operations/method steps]). Although Kweon in view of Seol substantially teaches the claimed invention, Kweon in view of Seol is not relied on to teach at least one processor coupling the at least one memory and the at least one transceiver. In the same field, analogous art Tanner teaches at least one processor coupling the at least one memory and the at least one transceiver (see, e.g., FIG. 29, depicting processor 2902 coupling memory 2920 and wireless transceiver 2926, and paragraph 436, “platform controller hub 2930 enables peripherals to connect to memory device 2920 and processor 2902 via a high-speed I/O bus. In at least one embodiment, I/O peripherals include … a wireless transceiver 2926, touch sensors 2925, a data storage device 2924 (e.g., hard disk drive, flash memory, etc.)” [i.e., a processor 2902 coupling memory 2920 and wireless transceiver 2926]). Kweon, Seol and Tanner are analogous art because they are each related to techniques and systems for using convolutional neural networks and convolutional encoders to encode features of data and interleavers to interleave encoded data. (See, e.g., Kweon, paragraphs 10, 12 and 17, Seol, Abstract, and paragraphs 16-17 and 19, and Tanner, paragraphs 53, 132, 387 and 404).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kweon in view of Seol to incorporate the teachings of Tanner to provide a “processing system”, where the “system 2900 includes one or more processors 2902” and a “platform controller hub 2930 [that] enables peripherals to connect to memory device 2920 and processor 2902 via a high-speed I/O bus.” where the “peripherals include … a wireless transceiver 2926” and the “wireless transceiver 2926 can be a Wi-Fi transceiver, a Bluetooth transceiver, or a mobile network transceiver such as a 3G, 4G, or Long Term Evolution (LTE) transceiver.” (See, e.g., Tanner, FIG. 29 and paragraphs 430 and 436). Doing so would have allowed Kweon in view of Seol to use Tanner’s system with the platform controller hub to implement a platform and technique that “reduces memory usage, causes one or more processors to execute more efficiently, improves parallelization of computer programs”, to implement techniques that “allow registers to be allocated more efficiently” and “to create a multi-GPU cluster to improve training speed for deep neural networks”, as suggested by Tanner (see, e.g., Tanner, paragraphs 52 and 352).

Conclusion

The prior art made of record, listed on form PTO-892, and not relied upon, is considered pertinent to applicant's disclosure. For instance, non-patent literature Sattiraju et al. ("Performance analysis of deep learning based on recurrent neural networks for channel coding." 2018 IEEE International Conference on Advanced Networks and Telecommunications Systems (ANTS). IEEE, 2018, hereinafter “Sattiraju”) discloses a “Turbo Encoder Architecture” including multiple encoders and an interleaver where “a turbo encoder consists of two encoders (referred to as constituent encoders) separated by an interleaver. The encoders are normally identical and the interleaver is used to scramble the bits before being fed to the second encoder.
Thus the encoder outputs are different from each other. LTE uses two 8-state identical Recursive Systematic Convolutional (RSC) encoders that are concatenated in parallel and separated by an internal interleaver [25] as shown in Fig. 1.” (See, e.g., FIGs. 1-2 and page 2, sect. II).

The examiner requests, in response to this office action, that support be shown for language added to any original claims on amendment and any new claims. That is, indicate support for newly added claim language by specifically pointing to page(s) and line no(s) in the specification and/or drawing figure(s). This will assist the examiner in prosecuting the application.

When responding to this office action, Applicant is advised to clearly point out the patentable novelty which he or she thinks the claims present, in view of the state of the art disclosed by the references cited or the objections made. He or she must also show how the amendments avoid such references or objections. See 37 CFR 1.111(c).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to RANDY K BALDWIN, whose telephone number is (571) 270-5222. The examiner can normally be reached Mon - Fri, 9:00-6:00.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kamran Afshar, can be reached at 571-272-7796. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR.
Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/RANDALL K. BALDWIN/
Primary Examiner, Art Unit 2125

1 As noted in the preliminary amendment to the specification and the Application Data Sheet (ADS) filed 03/08/2023, the parent PCT application no. PCT/KR2020/012173 was filed on September 9, 2020. Examiner notes that the filing receipt mailed July 7, 2023 incorrectly indicated that the PCT filing date was August 8, 2020 in stating “This application is a 371 of PCT/KR2020/012173 08/08/2020”. However, the parent PCT application was, in fact, filed September 9, 2020 (i.e., 09/09/2020), not “08/08/2020”.

2 Based on a review of a machine translation of the published international application no. PCT/KR2020/012173, published as WO 2022054980 A1, there does not appear to be any discussion or disclosure of “a first encoding step of encoding input data transmitted from a higher layer” in the international application. Examiner notes that, based on the machine translation, claim 1 of the published international application appears to recite “a first encoding step of encoding input data transmitted from an upper layer”.
3 As indicated below in the section 112(b) rejection of these claims, “encoding input data transmitted from a higher layer” has been interpreted as encoding input data transmitted from any previous, preceding, upper, or higher layer of a neural network than a current, subsequent, following, or lower layer of the neural network.

4 As indicated above in the section 112(b) rejection of these claims, “encoding input data transmitted from a higher layer” has been interpreted as encoding input data transmitted from any previous, preceding, upper, or higher layer of a neural network than a current, subsequent, following, or lower layer of the neural network.

5 As indicated above in the section 112(b) rejection of this claim, “encoding input data transmitted from a higher layer” has been interpreted as encoding input data transmitted from any previous, preceding, upper, or higher layer of a neural network than a current, subsequent, following, or lower layer of the neural network.

6 As noted in the objection to this claim above, it appears “execute” should read “executes”.

Prosecution Timeline

Mar 08, 2023: Application Filed
Jan 19, 2026: Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602573: NEURAL NETWORK ROBUSTNESS VIA BINARY ACTIVATION (2y 5m to grant; granted Apr 14, 2026)
Patent 12596918: ACCELERATOR FOR DEEP NEURAL NETWORKS (2y 5m to grant; granted Apr 07, 2026)
Patent 12579000: SCHEDULING METHOD FOR A MULTI-LAYER CONVOLUTIONAL NEURAL NETWORK, ELECTRONIC DEVICE AND STORAGE MEDIUM (2y 5m to grant; granted Mar 17, 2026)
Patent 12574477: DISTRIBUTED DEEP LEARNING USING A DISTRIBUTED DEEP NEURAL NETWORK (2y 5m to grant; granted Mar 10, 2026)
Patent 12572789: BLOCKWISE FACTORIZATION OF HYPERVECTORS (2y 5m to grant; granted Mar 10, 2026)
Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 80%
With Interview: 99% (+26.9%)
Median Time to Grant: 3y 5m
PTA Risk: Low
Based on 232 resolved cases by this examiner. Grant probability derived from career allow rate.
