Prosecution Insights
Last updated: April 19, 2026
Application No. 17/852,954

ONE-DIMENSIONAL-CONVOLUTION-BASED SIGNAL CLASSIFIER

Final Rejection: §101, §102, §103, §112
Filed: Jun 29, 2022
Examiner: SIPPEL, MOLLY CLARKE
Art Unit: 2122
Tech Center: 2100 (Computer Architecture & Software)
Assignee: SRI International
OA Round: 2 (Final)

Grant Probability: 50% (Moderate)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 3y 7m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 50% (grants 50% of resolved cases; 7 granted / 14 resolved; -5.0% vs TC avg)
Interview Lift: +58.3% among resolved cases with interview (strong)
Typical Timeline: 3y 7m avg prosecution; 25 applications currently pending
Career History: 39 total applications across all art units

Statute-Specific Performance

§101: 33.8% (-6.2% vs TC avg)
§103: 32.0% (-8.0% vs TC avg)
§102: 9.8% (-30.2% vs TC avg)
§112: 23.6% (-16.4% vs TC avg)

Tech Center averages are estimates. Based on career data from 14 resolved cases.
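As a quick sanity check, every per-statute delta above is consistent with a single Tech Center baseline of 40.0%. A minimal sketch (the 40.0% figure is inferred from the numbers shown, not stated in the report):

```python
# Examiner allowance rate by statute, in percent, from the figures above.
examiner_rates = {"101": 33.8, "103": 32.0, "102": 9.8, "112": 23.6}
tc_average = 40.0  # inferred baseline: rate minus delta equals 40.0 for every statute

for statute, rate in examiner_rates.items():
    delta = round(rate - tc_average, 1)
    print(f"§{statute}: {rate}% ({delta:+}% vs TC avg)")
```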

Office Action

§101 §102 §103 §112
DETAILED ACTION

This action is responsive to the amendment filed on 12/08/2025. Claims 1-20 are currently pending in the case. Claims 1, 3, 5, 7, 10, 12-14, 17-18, and 20 are currently amended. Claims 1, 12, 13, and 18 are independent claims.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgment is made of applicant's claim for domestic priority based on provisional application no. 63/217,179 filed on 06/30/2021.

Claim Interpretation

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitations are:

Claim 1, “an output module configured to work…and configured to cooperate…”

Claim 1, “the output module and the machine learning architecture are configured to cooperate to present…”

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-11 and 17 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Regarding claim 1, claim limitation “an output module” invokes 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. The claim makes it clear the output module is separate from the processors and memory, but does not make it clear how the output module “works with” the processors and memory. Further, applicant’s specification, paragraph 0023, states “the output module can be configured to analyze and assess the outputted results of one or more machine learning architectures trained with machine learning”; however, there is no specific algorithm as to how the output module performs the analysis or assessment. Thus, there is insufficient disclosure of the corresponding structure, material, or acts for performing the entire claimed function. Therefore, the claim is indefinite and is rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph.

Applicant may: (a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph; (b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or (c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).

If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either: (a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or (b) Stating on the record what the corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function. For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.

Claim 2 is rejected as being dependent upon a rejected base claim without curing any of the deficiencies.

Regarding claim 3, the claim recites “the one-dimensional-convolution operation” in lines 3-4. There is insufficient antecedent basis for this limitation in the claim. The parent claim recites “a one-dimensional convolutional-based operation” in line 10. It is unclear if applicant is attempting to refer to the previously recited claim element or if applicant is attempting to recite a new claim element. For examination purposes, this limitation has been interpreted to mean “the one-dimensional-convolutional-based operation”, referring to the previously recited claim element.

Claim 4 is rejected as being dependent upon a rejected base claim and failing to cure any of the deficiencies.

Regarding claim 5, the claim recites the limitation "operations in the first branch" in lines 12-13. The parent claim recites "a one-dimensional convolutional-based operation". It is unclear if applicant is attempting to recite a new claim element, or if applicant is attempting to refer to a previously recited claim element. Thus, the claim fails to particularly point out and distinctly claim the subject matter. For examination purposes, the limitation has been interpreted to mean "the one-dimensional convolutional-based operation in the first branch", referring to the previously recited claim element.

Claims 6-11 are rejected as being dependent upon a rejected base claim without curing any of the deficiencies.

Regarding claim 17, the claim recites “the one-dimensional-convolution operation” in line 5. There is insufficient antecedent basis for this limitation in the claim. The parent claim recites “a one-dimensional convolutional-based operation” in line 9. It is unclear if applicant is attempting to refer to the previously recited claim element or if applicant is attempting to recite a new claim element. For examination purposes, this limitation has been interpreted to mean “the one-dimensional-convolutional-based operation”, referring to the previously recited claim element.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Regarding claim 1:

Step 1 Statutory Category: Claim 1 is directed to a machine, which falls under one of the four statutory categories.

Step 2A Prong 1 Judicial Exception: Claim 1 recites, in part, “analyze a first set of data of parameter-varying signals”. This limitation, under the broadest reasonable interpretation, covers the recitation of mathematical concepts, see MPEP §2106.04(a)(2)(I).

Further, the claim recites: “a series of i) a one-dimensional convolutional-based operation on the first set of data of the parameter-varying signals”. This limitation is the abstract idea of a mathematical calculation, as directed to “a claim that recites a mathematical calculation, when the claim is given its broadest reasonable interpretation in light of the specification, will be considered as falling within the "mathematical concepts" grouping. A mathematical calculation is a mathematical operation (such as multiplication) or an act of calculating using mathematical methods to determine a variable or number”. See MPEP §2106.04(a)(2)(I)(C).
Further, the claim recites: “ii) followed by a non-linear activation function on the first set of data of the parameter-varying signals with multiple representations of the parameter-varying signals”. This limitation is the abstract idea of a mathematical calculation, as directed to “a claim that recites a mathematical calculation, when the claim is given its broadest reasonable interpretation in light of the specification, will be considered as falling within the "mathematical concepts" grouping. A mathematical calculation is a mathematical operation (such as multiplication) or an act of calculating using mathematical methods to determine a variable or number”. See MPEP §2106.04(a)(2)(I)(C).

Finally, the claim recites: “where each representation of the parameter-varying signal is analyzed in a different domain, in order to produce a classification of an entity into a specific category of an object corresponding to identifying features of the parameter-varying signals”. This limitation is the abstract idea of a mathematical calculation, as directed to “a claim that recites a mathematical calculation, when the claim is given its broadest reasonable interpretation in light of the specification, will be considered as falling within the "mathematical concepts" grouping. A mathematical calculation is a mathematical operation (such as multiplication) or an act of calculating using mathematical methods to determine a variable or number”. See MPEP §2106.04(a)(2)(I)(C).

Step 2A Prong 2 Integration into a Practical Application: This judicial exception is not integrated into a practical application. In particular, the claim recites: “an apparatus” and “an output module configured to work with one or more processors to execute instructions and a memory to store data and instructions”. These are additional elements that amount to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer in its ordinary capacity as a tool to perform an existing process. See MPEP §2106.05(f).

Further, the claim recites: “where the output module is configured to cooperate with a machine learning architecture”. This limitation is an additional element that generally links the use of the judicial exception to a particular technological environment or field of use, see MPEP §2106.05(h).

Further, the claim recites: “where the machine learning architecture is configured to use a signal-analyzing neural-network … where the signal-analyzing neural-network is configured to contain a one-dimensional-convolutional layer”. These limitations are additional elements that generally link the use of the judicial exception to a particular technological environment or field of use, see MPEP §2106.05(h).

Further, the claim recites: “where the signal-analyzing neural-network is trained with one or more machine learning algorithms on sampled data of the parameter varying signals”. This limitation is an additional element that amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer in its ordinary capacity as a tool to perform an existing process. See MPEP §2106.05(f).

Finally, the claim recites: “present a representation of an output result from the machine learning architecture to be shown on a display screen indicating the specific category that the object is classified to belong to from the first set of data of time-varying signals under analysis, without any prior knowledge of a presence or a type of the classified object actually being contained or present within the parameter-varying signals, currently under analysis”. This limitation is an additional element that amounts to adding insignificant extra-solution activity to the judicial exception. See MPEP §2106.05(g).

Step 2B Significantly More: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements: “an apparatus”, “an output module configured to work with one or more processors to execute instructions and a memory to store data and instructions”, and “where the signal-analyzing neural-network is trained with one or more machine learning algorithms on sampled data of the parameter varying signals” are additional elements that amount to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer in its ordinary capacity as a tool to perform an existing process. Elements that merely amount to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely use a computer in its ordinary capacity as a tool to perform an existing process cannot provide an inventive concept.

Further, the additional elements: “where the output module is configured to cooperate with a machine learning architecture” and “where the machine learning architecture is configured to use a signal-analyzing neural-network … where the signal-analyzing neural-network is configured to contain a one-dimensional-convolutional layer” are additional elements that amount to generally linking the use of the judicial exception to a particular technological environment or field of use. Elements that merely generally link the use of the judicial exception to a particular technological environment or field of use cannot provide an inventive concept.

Finally, the additional element: “present a representation of an output result from the machine learning architecture to be shown on a display screen indicating the specific category that the object is classified to belong to from the first set of data of time-varying signals under analysis, without any prior knowledge of a presence or a type of the classified object actually being contained or present within the parameter-varying signals, currently under analysis” amounts to adding insignificant extra-solution activity to the judicial exception and is directed to receiving or transmitting data over a network which courts have recognized as well-understood, routine, and conventional when claimed in a generic manner, see MPEP §2106.05(d)(II). The claim is not patent eligible.

Regarding claim 2, the rejection of claim 1 is incorporated, and further, the claim recites: “where a multiple value data structure is utilized to supply different values of the parameter-varying signals to the signal-analyzing neural-network”. This limitation amounts to mere data gathering. It is necessary to acquire the data in order to use the recited judicial exception. Therefore, this limitation is insignificant extra-solution activity to the judicial exception, see MPEP §2106.05(g). Further, the limitation is directed to receiving or transmitting data over a network which courts have recognized as well-understood, routine, and conventional when claimed in a generic manner, see MPEP §2106.05(d)(II).

Further, the claim recites: “where the parameter-varying signals under analysis are time-varying signals”. This is an additional element that generally links the use of the judicial exception to a particular technological environment or field of use. See MPEP §2106.05(h). Elements that merely generally link the use of the judicial exception to a particular technological environment or field of use cannot provide an inventive concept. The claim is not patent eligible.
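For context on the limitations analyzed above, the recited processing sequence, a one-dimensional convolutional-based operation followed by a non-linear activation function applied to sampled signal data, can be sketched in plain Python. This is an illustrative reconstruction, not the applicant's disclosed implementation; the sine-wave input, the filter weights, and the choice of ReLU as the non-linear activation are assumptions.

```python
import math

def conv1d(signal, kernel):
    """Valid-mode 1-D convolution (cross-correlation, as computed by CNN layers)."""
    n = len(signal) - len(kernel) + 1
    return [sum(k * signal[i + j] for j, k in enumerate(kernel)) for i in range(n)]

def relu(values):
    """Non-linear activation applied to each convolution output."""
    return [max(v, 0.0) for v in values]

# Sampled data of a parameter-varying (here, time-varying) signal.
signal = [math.sin(2 * math.pi * 5 * i / 64) for i in range(64)]
kernel = [0.25, 0.5, 0.25]  # hypothetical learned filter weights

# i) one-dimensional convolutional-based operation, ii) non-linear activation
features = relu(conv1d(signal, kernel))
print(len(features))  # 62: 64 samples, kernel of length 3, valid mode
```

Stacking several such convolution-plus-activation pairs, as recited in claims 3 and 8, amounts to feeding `features` back through `conv1d` and `relu` repeatedly.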
Regarding claim 3, the rejection of claim 2 is incorporated, and further, the claim recites: “where one or more branches of the signal-analyzing neural-network are constructed to apply at least two or more successive layers of the one-dimensional-convolution layer to apply the one-dimensional-convolution operation”. This limitation is a continuation of the “a series of i) a one-dimensional convolutional-based operation on the data of the parameter-varying signals” limitation identified as an abstract idea in the rejection of the parent claim. Further, the claim recites: “followed by a non-linear activation function layer to apply the non-linear activation function to data values of time and frequency in the time-varying signals”. This limitation is a continuation of the “ii) followed by a non-linear activation function on the data of the parameter-varying signals with multiple representations of the parameter-varying signals” limitation identified as an abstract idea in the rejection of the parent claim. Thus, the claim recites a judicial exception. The claim does not include any additional elements that amount to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception. The claim is not patent eligible.

Regarding claim 4, the rejection of claim 3 is incorporated, and further, the claim recites: “where the signal-analyzing neural-network is a convolutional neural network”. This limitation is an additional element that generally links the use of the judicial exception to a particular technological environment or field of use, see MPEP §2106.05(h). Elements that merely generally link the use of the judicial exception to a particular technological environment or field of use cannot provide an inventive concept. The claim is not patent eligible.

Regarding claim 5, the rejection of claim 2 is incorporated, and further, the claim recites: “apply the one-dimensional convolutional-based operation in the first branch on the input values of the time-varying signals in the first domain”. This limitation recites mathematical concepts in addition to those identified in the rejection of the parent claim. Further, the claim recites: “apply the one-dimensional convolutional-based operation in the second branch on the input values of the time-varying signals in a second domain at a same time with operations in the first branch”. This limitation recites mathematical concepts in addition to those identified in the rejection of the parent claim, thus recites a judicial exception.

Further, the claim recites: “where one or more portions of the signal-analyzing neural-network are constructed to include a first branch, … and a second branch”. This limitation is an additional element that generally links the use of the judicial exception to a particular technological environment or field of use, see MPEP §2106.05(h). Elements that merely generally link the use of the judicial exception to a particular technological environment or field of use cannot provide an inventive concept.

Further, the claim recites: “where input values of the time-varying signals in a first domain are supplied into the first branch of the signal-analyzing neural-network” and “where input values of the time-varying signals in a second domain are supplied into the second branch of the signal-analyzing neural-network”. These limitations amount to mere data gathering. Therefore, these limitations are insignificant extra-solution activity to the judicial exception, see MPEP §2106.05(g). Further, the limitations are directed to receiving or transmitting data over a network which courts have recognized as well-understood, routine, and conventional when they are claimed in a generic manner, see MPEP §2106.05(d)(II).

Further, the claim recites: “where a first one-dimensional-convolution layer in the first branch is configured to…” and “where a second one-dimensional-convolution layer in the second branch is configured to”. These limitations are additional elements that generally link the use of the judicial exception to a particular technological environment or field of use. See MPEP §2106.05(h). Elements that merely generally link the use of the judicial exception to a particular technological environment or field of use cannot provide an inventive concept. The claim is not patent eligible.

Regarding claim 6, the rejection of claim 1 is incorporated, and further, the claim recites: “where a first output result in a first domain is generated”. This limitation recites mathematical concepts in addition to those identified in the rejection of the parent claim. Further, the claim recites: “where a second output result in a second domain is generated”. This limitation recites mathematical concepts in addition to those identified in the rejection of the parent claim. Further, the claim recites: “where the first and second output results from the first branch on the first domain and the second branch on the second domain of the signal-analyzing neural-network are combined”. This limitation recites mathematical concepts in addition to those identified in the rejection of the parent claim. Thus, the claim recites a judicial exception.

Further, the claim recites: “by a first branch of the signal-analyzing neural-network”, “by a second branch of the signal-analyzing neural-network”, and “in a concatenation layer of a later portion of the signal-analyzing neural network”. These limitations are additional elements that generally link the use of the judicial exception to a particular technological environment or field of use, see MPEP §2106.05(h). Elements that merely generally link the use of the judicial exception to a particular technological environment or field of use cannot provide an inventive concept. The claim is not patent eligible.

Regarding claim 7, the rejection of claim 1 is incorporated, and further, the claim recites: “combine the multiple representations from the different domains”. This limitation recites mathematical concepts in addition to those identified in the rejection of the parent claim. Further, the claim recites: “determine the specific category of the object from a group of two or more possible categories of objects”. This limitation is a continuation of the “where each representation of the parameter-varying signal is analyzed in a different domain, in order to produce a classification of an entity into a specific category of an object corresponding to identifying features of the parameter-varying signals” limitation identified as an abstract idea in the rejection of the parent claim. Further, the claim recites: “classify values and features of the time-varying signals from the concatenation layer to correspond them to different known sources of objects, including a drone, a rocket, a bird, or other objects”. This limitation, under the broadest reasonable interpretation, covers the recitation of a mental process that can practically be performed in the human mind, with or without the use of a physical aid such as pen and paper (including an observation, evaluation, judgment, opinion), in this case a judgment. See MPEP §2106.04(a)(2)(III). Thus, the claim recites a judicial exception.

Further, the claim recites: “where the signal-analyzing neural-network has a final portion of the signal-analyzing neural-network containing a concatenation layer … and one or more fully connected layers”. This limitation is an additional element that generally links the use of the judicial exception to a particular technological environment or field of use, see MPEP §2106.05(h). Elements that merely generally link the use of the judicial exception to a particular technological environment or field of use cannot provide an inventive concept. Further, the claim recites “a classifier is configured to…”. This limitation amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer in its ordinary capacity as a tool to perform an existing process. Elements that merely amount to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely use a computer in its ordinary capacity as a tool to perform an existing process cannot provide an inventive concept. The claim is not patent eligible.

Regarding claim 8, the rejection of claim 2 is incorporated, and further, the claim recites: “apply the one-dimensional convolutional-based operation followed by the non-linear activation function layer in order to change each output of each one-dimensional convolutional layer from a linear feature of the time-varying signal into a non-linear feature”. This limitation recites mathematical concepts in addition to those identified in the rejection of the parent claim, thus recites a judicial exception. Further, the claim recites: “where the signal-analyzing neural-network contains a sequence of multiple iterations of one-dimensional convolutional layers, where each one-dimensional convolutional layer is configured to…”. This limitation is an additional element that generally links the use of the judicial exception to a particular technological environment or field of use, see MPEP §2106.05(h). Elements that merely generally link the use of the judicial exception to a particular technological environment or field of use cannot provide an inventive concept.
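The two-branch arrangement addressed in claims 5 through 7, in which parallel one-dimensional convolutions operate on time-domain and frequency-domain representations whose outputs are joined in a concatenation layer and scored by a fully connected classifier, can be sketched as follows. This is an illustrative reconstruction under stated assumptions: the DFT-magnitude frequency representation, the filter and classifier weights, and the argmax decision rule are hypothetical, not taken from the application.

```python
import math

def conv1d(signal, kernel):
    """Valid-mode 1-D convolution shared by both branches."""
    n = len(signal) - len(kernel) + 1
    return [sum(k * signal[i + j] for j, k in enumerate(kernel)) for i in range(n)]

def relu(values):
    """Non-linear activation applied after each convolution."""
    return [max(v, 0.0) for v in values]

def dft_magnitudes(signal):
    """Hypothetical frequency-domain representation: DFT magnitude spectrum."""
    n = len(signal)
    mags = []
    for k in range(n // 2):
        re = sum(x * math.cos(2 * math.pi * k * t / n) for t, x in enumerate(signal))
        im = sum(-x * math.sin(2 * math.pi * k * t / n) for t, x in enumerate(signal))
        mags.append(math.hypot(re, im))
    return mags

signal = [math.sin(2 * math.pi * 5 * i / 32) for i in range(32)]
kernel = [0.25, 0.5, 0.25]  # hypothetical filter weights

# First branch: time domain.  Second branch: frequency domain, in parallel.
time_features = relu(conv1d(signal, kernel))
freq_features = relu(conv1d(dft_magnitudes(signal), kernel))

# Concatenation layer joins the multiple representations from different domains.
combined = time_features + freq_features

# Fully connected layer acting as the classifier (hypothetical weights).
categories = ["drone", "rocket", "bird"]
weights = [[((i + j) % 3) / 10.0 for j in range(len(combined))]
           for i in range(len(categories))]
scores = [sum(w * x for w, x in zip(row, combined)) for row in weights]
print(categories[scores.index(max(scores))])
```

In a trained network the filter and classifier weights would be learned from sampled signal data rather than fixed by hand as they are here.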
Regarding claim 9, the rejection of claim 1 is incorporated, and further, the claim recites: “a user interface configured to convey the produced classification of the entity into the specific category of the object corresponding to identifying features of the parameter-varying signals from a group of two or more possible categories of the object”. This limitation is merely a post-solution step and as such, amounts to adding insignificant extra-solution activity to the judicial exception, see MPEP §2106.05(g). This element is directed to receiving or transmitting data over a network which courts have recognized as well-understood, routine, and conventional when they are claimed in a generic manner, see MPEP §2106.05(d)(II).

Regarding claim 10, the rejection of claim 2 is incorporated, and further, the claim recites: “where a multiple value data structure is configured to supply different values of the first set of data of the time-varying signals as a matrix to supply the different values as a one-dimensional time signal that is expanded into a multiple dimensional time-and-frequency representation via operations performed by a preprocessing portion of the signal-analyzing neural-network”. This limitation recites mathematical concepts in addition to those identified in the rejection of the parent claim, thus recites a judicial exception. The claim does not include any additional elements that amount to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception. The claim is not patent eligible.

Regarding claim 11, the rejection of claim 2 is incorporated, and further, the claim recites: “where the time-varying signals are radio frequency signals”. This limitation is an additional element that generally links the use of the judicial exception to a particular technological environment or field of use, see MPEP §2106.05(h). Elements that merely generally link the use of the judicial exception to a particular technological environment or field of use cannot provide an inventive concept. The claim is not patent eligible.

Regarding claim 12:

Step 1 Statutory Category: Claim 12 is directed to a method, which falls under one of the four statutory categories.

Step 2A Prong 1 Judicial Exception: Claim 12 recites, in part, “analyze data of parameter-varying signals”. This limitation, under the broadest reasonable interpretation, covers the recitation of mathematical concepts, see MPEP §2106.04(a)(2)(I).

Further, the claim recites: “a series of i) a one-dimensional convolutional-based operation on a first set of data of the parameter-varying signals”. This limitation, under the broadest reasonable interpretation, covers the recitation of a mathematical calculation, as directed to “a claim that recites a mathematical calculation, when the claim is given its broadest reasonable interpretation in light of the specification, will be considered as falling within the "mathematical concepts" grouping. A mathematical calculation is a mathematical operation (such as multiplication) or an act of calculating using mathematical methods to determine a variable or number”. See MPEP §2106.04(a)(2)(I)(C).

Further, the claim recites: “ii) followed by a non-linear activation function on the first set of data of the parameter-varying signals with multiple representations of the parameter-varying signals”. This limitation, under the broadest reasonable interpretation, covers the recitation of a mathematical calculation, as directed to “a claim that recites a mathematical calculation, when the claim is given its broadest reasonable interpretation in light of the specification, will be considered as falling within the "mathematical concepts" grouping. A mathematical calculation is a mathematical operation (such as multiplication) or an act of calculating using mathematical methods to determine a variable or number”. See MPEP §2106.04(a)(2)(I)(C).

Further, the claim recites: “where each representation of the parameter-varying signal is analyzed in a different domain, in order to produce a classification of an entity into a specific category of an object corresponding to identifying features of the parameter-varying signals”. This limitation is the abstract idea of a mathematical calculation, as directed to “a claim that recites a mathematical calculation, when the claim is given its broadest reasonable interpretation in light of the specification, will be considered as falling within the "mathematical concepts" grouping. A mathematical calculation is a mathematical operation (such as multiplication) or an act of calculating using mathematical methods to determine a variable or number”. See MPEP §2106.04(a)(2)(I)(C).

Step 2A Prong 2 Integration into a Practical Application: This judicial exception is not integrated into a practical application. In particular, the claim recites: “a machine learning architecture”, “using the machine learning architecture with a signal-analyzing neural-network”, and “using a one-dimensional-convolutional layer in the signal-analyzing neural-network”. These limitations are additional elements that generally link the use of the judicial exception to a particular technological environment or field of use, see MPEP §2106.05(h). Further, the claim recites: “where the signal-analyzing neural-network is trained with one or more machine learning algorithms on sampled data of the parameter-varying signals”. This limitation is an additional element that amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer in its ordinary capacity as a tool to perform an existing process. See MPEP §2106.05(f).

Step 2B Significantly More: The claims do not include elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements: “a machine learning architecture”, “using the machine learning architecture with a signal-analyzing neural-network”, and “using a one-dimensional-convolutional layer in the signal-analyzing neural-network” are additional elements that generally link the use of the judicial exception to a particular technological environment or field of use. Elements that merely generally link the use of the judicial exception to a particular technological environment or field of use cannot provide an inventive concept. Further, the claim recites: “where the signal-analyzing neural-network is trained with one or more machine learning algorithms on sampled data of the parameter-varying signals” which is an additional element that amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer in its ordinary capacity as a tool to perform an existing process. Elements that merely amount to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely use a computer in its ordinary capacity as a tool to perform an existing process cannot provide an inventive concept. The claim is not patent eligible.

Regarding claim 13:

Step 1 Statutory Category: Claim 13 is directed to a machine, which falls under one of the four statutory categories.

Step 2A Prong 1 Judicial Exception: Claim 13 recites, in part, “analyze data of parameter-varying signals”. This limitation, under the broadest reasonable interpretation, covers the recitation of mathematical concepts, see MPEP §2106.04(a)(2)(I).

Further, the claim recites: “a series of i) a one-dimensional convolutional-based operation on a first set of data of the parameter-varying signals”. This limitation, under the broadest reasonable interpretation, covers the recitation of a mathematical calculation, as directed to “a claim that recites a mathematical calculation, when the claim is given its broadest reasonable interpretation in light of the specification, will be considered as falling within the "mathematical concepts" grouping. A mathematical calculation is a mathematical operation (such as multiplication) or an act of calculating using mathematical methods to determine a variable or number”. See MPEP §2106.04(a)(2)(I)(C).

Further, the claim recites: “ii) followed by a non-linear activation function on the first set of data of the parameter-varying signals with multiple representations of the parameter-varying signals”. This limitation, under the broadest reasonable interpretation, covers the recitation of a mathematical calculation, as directed to “a claim that recites a mathematical calculation, when the claim is given its broadest reasonable interpretation in light of the specification, will be considered as falling within the "mathematical concepts" grouping. A mathematical calculation is a mathematical operation (such as multiplication) or an act of calculating using mathematical methods to determine a variable or number”. See MPEP §2106.04(a)(2)(I)(C).

Further, the claim recites: “where each representation of the parameter-varying signal is analyzed in a different domain, in order to produce a classification of an entity into a specific category of an object corresponding to identifying features of the parameter-varying signals”. This limitation is the abstract idea of a mathematical calculation, as directed to “a claim that recites a mathematical calculation, when the claim is given its broadest reasonable interpretation in light of the specification, will be considered as falling within the "mathematical concepts" grouping.
A mathematical calculation is a mathematical operation (such as multiplication) or an act of calculating using mathematical methods to determine a variable or number”. See MPEP §2106.04(a)(2)(I)(C). Step 2A Prong 2 Integration into a Practical Application: This judicial exception is not integrated into a practical application. In particular, the claim recites: “a non-transitory machine-readable medium, which stores further instructions in the executable format by the one or more processors to cause operations as follows”. This limitation is an additional element that amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer in its ordinary capacity as a tool to perform an existing process. See MPEP § 2106.05(f). Further, the claim recites: “using a machine learning architecture with a signal-analyzing neural-network”, and “using a one-dimensional-convolutional layer in the signal-analyzing neural-network”. These limitations are additional elements that generally link the use of the judicial exception to a particular technological environment or field of use, see MPEP §2106.05(h). Further, the claim recites: “where the signal-analyzing neural-network is trained with one or more machine learning algorithms on sampled data of the parameter-varying signals”. This limitation is an additional element that amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer in its ordinary capacity as a tool to perform an existing process. See MPEP §2106.05(f). Step 2B Significantly More: The claims do not include elements that are sufficient to amount to significantly more than the judicial exception. 
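The limitation quoted above, that the network "is trained with one or more machine learning algorithms on sampled data", can be illustrated by a minimal gradient-descent fit. The sample pairs, learning rate, and model (a single scale weight) are hypothetical placeholders chosen only to show the pattern of training on sampled data:

```python
# Minimal illustration of training on sampled data: fit a single scale
# weight w by gradient descent so that w * x approximates y.
# All values are hypothetical placeholders.
samples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, target) pairs

w = 0.0    # trainable parameter
lr = 0.05  # learning rate
for _ in range(200):
    # Gradient of the mean squared error 0.5*(w*x - y)^2 w.r.t. w is (w*x - y)*x.
    grad = sum((w * x - y) * x for x, y in samples) / len(samples)
    w -= lr * grad
```

After the loop, w has converged to the value (here 2.0) that best maps the sampled inputs to their targets, which is the sense in which a learning algorithm "trains" a parameter on sampled data.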
As discussed above with respect to integration of the abstract idea into a practical application, the additional element: “a non-transitory machine-readable medium, which stores further instructions in the executable format by the one or more processors to cause operations as follows” is an additional element that amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer in its ordinary capacity as a tool to perform an existing process. Elements that amount to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer in its ordinary capacity as a tool to perform an existing process cannot provide an inventive concept. Further, the additional elements: “a machine learning architecture”, “using the machine learning architecture with a signal-analyzing neural-network”, and “using a one-dimensional-convolutional layer in the signal-analyzing neural-network” are additional elements that generally link the use of the judicial exception to a particular technological environment or field of use. Elements that merely generally link the use of the judicial exception to a particular technological environment or field of use cannot provide an inventive concept. Further, the claim recites: “where the signal-analyzing neural-network is trained with one or more machine learning algorithms on sampled data of the parameter-varying signals” which is an additional element that amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer in its ordinary capacity as a tool to perform an existing process. 
Elements that merely amount to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer in its ordinary capacity as a tool to perform an existing process cannot provide an inventive concept. The claim is not patent eligible. Regarding claim 14, the rejection of claim 13 is incorporated, and further, the claim recites: “operating upon the time-varying signals in the time domain”. This limitation recites mathematical concepts in addition to those identified in the rejection of the parent claim. Further, the claim recites: “apply the one-dimensional convolution-based operation on the input values of the time-varying signals in the time domain, under analysis”. This limitation recites mathematical concepts in addition to those identified in the rejection of the parent claim. Further, the claim recites: “operating upon the time-varying signals in the frequency domain”. This limitation recites mathematical concepts in addition to those identified in the rejection of the parent claim. Further, the claim recites: “apply the one-dimensional convolution-based operation on the input values of the time-varying signals in the frequency domain, under analysis”. This limitation recites mathematical concepts in addition to those identified in the rejection of the parent claim. Further, the claim recites: “where the input block is configured to apply a Fast Fourier Transform on the input values for the time-varying signals under analysis from the time domain in order to produce values for the time-varying signals under analysis in the frequency domain”. This limitation recites mathematical concepts in addition to those identified in the rejection of the parent claim. Further, the claim recites: “where the parameter-varying signals under analysis are time-varying signals”. 
This limitation is an additional element that generally links the use of the judicial exception to a particular technological environment or field of use. See MPEP §2106.05(h). Elements that merely generally link the use of the judicial exception to a particular technological environment or field of use cannot provide an inventive concept. Further, the claim recites: “supplying input values of the time-varying signals in a time domain into a first branch of the signal-analyzing neural-network” and “supplying input values of the time-varying signals in a frequency domain into a second branch of the signal-analyzing neural-network”. These limitations amount to mere data gathering. Therefore, these limitations are insignificant extra-solution activity to the judicial exception, see MPEP §2106.05(g). Further, they are directed to receiving or transmitting data over a network which courts have recognized as well-understood, routine, and conventional when they are claimed in a generic manner, see MPEP §2106.05(d)(II). Further, the claim recites: “in the first branch of the signal-analyzing neural-network, where a first one-dimensional-convolution layer in the first branch is configured to…” and “in the second branch of the signal-analyzing neural-network, where a second one-dimensional-convolutional layer in the second branch is configured to…”. These limitations are additional elements that generally link the use of the judicial exception to a particular technological environment or field of use. See MPEP § 2106.05(h). Elements that merely generally link the use of the judicial exception to a particular technological environment or field of use cannot provide an inventive concept. Further, the claim recites: “where the signal-analyzing neural-network has an input block to receive the input values for the time-varying signals in the time domain”. 
This limitation amounts to mere data gathering and as such is considered insignificant extra-solution activity to the judicial exception, and is directed to receiving or transmitting data over a network which courts have recognized as well-understood, routine, and conventional when they are claimed in a generic manner, see MPEP §2106.05(d)(II). The claim is not patent eligible. Regarding claim 15, the rejection of claim 13 is incorporated, and further, the claim recites: “generating a first output result in a time domain”, “generating a second output result in a frequency domain”, and “combining the first and second output results from the first branch on the time domain and the second branch on the frequency domain of the signal-analyzing neural-network”. These limitations recite mathematical concepts in addition to those identified in the rejection of the parent claim, thus recite a judicial exception. Further, the claim recites: “in a first branch of the signal-analyzing neural-network”, “in a second branch of the signal-analyzing neural-network”, and “in a concatenation layer of a later portion of the signal-analyzing neural network”. These limitations are additional elements that merely generally link the use of the judicial exception to a particular technological environment or field of use, see MPEP §2106.05(h). Elements that merely generally link the use of the judicial exception to a particular technological environment or field of use cannot provide an inventive concept. The claim is not patent eligible. Regarding claim 16, the rejection of claim 14 is incorporated, and further, the claim recites: “using a matrix to supply different values of the data of the time-varying signals under analysis as a one-dimensional time signal that is expanded into a multiple-dimensional time-and-frequency representation via operations performed by and within the signal-analyzing neural-network”.
This limitation recites mathematical concepts in addition to those identified in the rejection of the parent claim, thus recites a judicial exception. The claim does not include any additional elements that amount to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception. The claim is not patent eligible. Regarding claim 17, the rejection of claim 14 is incorporated, and further, the claim recites: “apply the one-dimensional-convolution operation followed by … apply the non-linear activation function to the data of values of time and frequency features of the time-varying signals in order to change an output of the one-dimensional-convolutional layer from a linear feature of the time-varying signal into a non-linear feature”. This limitation recites mathematical concepts in addition to those identified in the rejection of the parent claim, thus recites a judicial exception. Further, the claim recites: “using two or more successive layers of the one-dimensional-convolution layer to…” and “by a non-linear activation function layer”. These limitations are additional elements that amount to generally linking the use of the judicial exception to a particular technological environment or field of use, see MPEP §2106.05(h). Elements that merely generally link the use of the judicial exception to a particular technological environment or field of use cannot provide an inventive concept. The claim is not patent eligible. Regarding claim 18: Step 1 Statutory Category: Claim 18 is directed to a machine, which falls under one of the four statutory categories. Step 2A Prong 1 Judicial Exception: Claim 18 recites, in part, “analyze data of parameter-varying signals”. This limitation, under the broadest reasonable interpretation, covers the recitation of mathematical concepts, see MPEP §2106.04(a)(2)(I). 
Further, the claim recites: “a series of i) a one-dimensional convolutional-based operation on a first set of data of the parameter-varying signals”. This limitation, under the broadest reasonable interpretation covers the recitation of a mathematical calculation, as directed to “a claim that recites a mathematical calculation, when the claim is given its broadest reasonable interpretation in light of the specification, will be considered as falling within the "mathematical concepts" grouping. A mathematical calculation is a mathematical operation (such as multiplication) or an act of calculating using mathematical methods to determine a variable or number”. See MPEP §2106.04(a)(2)(I)(C). Further, the claim recites: “ii) followed by a non-linear activation function on the first set of data of the parameter-varying signals with multiple representations of the parameter-varying signals”. This limitation, under the broadest reasonable interpretation covers the recitation of a mathematical calculation, as directed to “a claim that recites a mathematical calculation, when the claim is given its broadest reasonable interpretation in light of the specification, will be considered as falling within the "mathematical concepts" grouping. A mathematical calculation is a mathematical operation (such as multiplication) or an act of calculating using mathematical methods to determine a variable or number”. See MPEP §2106.04(a)(2)(I)(C). Further, the claim recites: “where each representation of the parameter-varying signal is analyzed in a different domain, in order to produce a classification of an entity into a specific category of an object corresponding to identifying features of the parameter-varying signals”. 
This limitation is the abstract idea of a mathematical calculation, as directed to “a claim that recites a mathematical calculation, when the claim is given its broadest reasonable interpretation in light of the specification, will be considered as falling within the "mathematical concepts" grouping. A mathematical calculation is a mathematical operation (such as multiplication) or an act of calculating using mathematical methods to determine a variable or number”. See MPEP §2106.04(a)(2)(I)(C). Step 2A Prong 2 Integration into a Practical Application: This judicial exception is not integrated into a practical application. In particular, the claim recites: “an apparatus”. This limitation is an additional element that amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer in its ordinary capacity as a tool to perform an existing process. See MPEP §2106.05(f). Further, the claim recites: “a machine learning architecture configured to use a signal-analyzing neural-network”, “using the machine learning architecture with the signal-analyzing neural-network”, and “a one-dimensional-convolutional layer in the signal-analyzing neural-network configured to”. These limitations are additional elements that generally link the use of the judicial exception to a particular technological environment or field of use, see MPEP §2106.05(h). Further, the claim recites: “where the signal-analyzing neural-network is trained with one or more machine learning algorithms on data of the parameter-varying signals”. This limitation is an additional element that amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer in its ordinary capacity as a tool to perform an existing process. See MPEP §2106.05(f). 
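The recited result of claim 18, "a classification of an entity into a specific category", is characterized above as a mathematical calculation; a softmax over feature scores, a typical final classification step, illustrates why. The scores below are hypothetical placeholders, not values from the application:

```python
import math

# Hypothetical feature scores for three candidate categories, as might be
# produced by the final layer of a classifier (placeholder values).
scores = [2.0, 0.5, -1.0]

# Softmax converts the scores into category probabilities; the
# classification is the category with the highest probability.
exps = [math.exp(s) for s in scores]
total = sum(exps)
probs = [e / total for e in exps]
category = probs.index(max(probs))
```

The entire step reduces to exponentiation, division, and a comparison, i.e., calculating with mathematical methods to determine a number.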
Step 2B Significantly More: The claims do not include elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element: “an apparatus” is an additional element that amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer in its ordinary capacity as a tool to perform an existing process. Elements that amount to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer in its ordinary capacity as a tool to perform an existing process cannot provide an inventive concept. Further, the additional elements: “a machine learning architecture configured to use a signal-analyzing neural-network”, “using the machine learning architecture with the signal-analyzing neural-network”, and “using a one-dimensional-convolutional layer in the signal-analyzing neural-network” are additional elements that generally link the use of the judicial exception to a particular technological environment or field of use. Elements that merely generally link the use of the judicial exception to a particular technological environment or field of use cannot provide an inventive concept. Further, the claim recites: “where the signal-analyzing neural-network is trained with one or more machine learning algorithms on data of the parameter-varying signals” which is an additional element that amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer in its ordinary capacity as a tool to perform an existing process.
Elements that merely amount to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer in its ordinary capacity as a tool to perform an existing process cannot provide an inventive concept. The claim is not patent eligible. Regarding claim 19, the rejection of claim 18 is incorporated, and further, the claim recites: “input values of the time varying signals in a first domain are … operated on”. This limitation recites mathematical concepts in addition to those identified in the rejection of the parent claim. Further, the claim recites: “apply the one-dimensional convolutional-based operation on the input values of the time-varying signals in the first domain, under analysis”. This limitation recites mathematical concepts in addition to those identified in the rejection of the parent claim. Further, the claim recites: “input values of the time varying signals in a second domain are … operated on”. This limitation recites mathematical concepts in addition to those identified in the rejection of the parent claim. Further, the claim recites: “apply the one-dimensional convolutional-based operation on the input values of the time-varying signals in the second domain, under analysis, at a same time with operations in the first branch”. This limitation recites mathematical concepts in addition to those identified in the rejection of the parent claim. Further, the claim recites: “where a first output result in the first domain is generated”, “where a second output result in the second domain is generated”, and “where the first and second output results from the first branch and the second branch of the signal-analyzing neural-network are combined”. These limitations recite mathematical concepts in addition to those identified in the rejection of the parent claim, thus the claim recites a judicial exception. 
Further, the claim recites: “where the parameter-varying signals under analysis are time-varying signals”. This limitation is an additional element that generally links the use of the judicial exception to a particular technological environment or field of use. See MPEP §2106.05(h). Elements that merely generally link the use of the judicial exception to a particular technological environment or field of use cannot provide an inventive concept. Further, the claim recites: “where the signal-analyzing neural-network is constructed to include i) a first branch … and ii) a second branch”. This limitation is an additional element that generally links the use of the judicial exception to a particular technological environment or field of use. See MPEP §2106.05(h). Elements that merely generally link the use of the judicial exception to a particular technological environment or field of use cannot provide an inventive concept. Further, the claim recites: “where input values of the time-varying signals in a first domain are supplied into … the first branch of the signal-analyzing neural-network” and “where input values of the time-varying signals in a second domain are supplied into … the second branch of the signal-analyzing neural-network”. These limitations amount to mere data gathering. Therefore, these limitations are insignificant extra-solution activity to the judicial exception, see MPEP §2106.05(g). Further, these limitations are directed to receiving or transmitting data over a network which courts have recognized as well-understood, routine, and conventional when they are claimed in a generic manner, see MPEP §2106.05(d)(II).
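The two-branch arrangement recited in claim 19, with input values in a first (time) domain supplied into a first branch, input values in a second (frequency) domain supplied into a second branch, and the two output results combined, can be sketched as follows. This is an illustrative sketch only: the discrete Fourier transform stands in for whatever produces the second-domain values, the per-branch feature extraction is reduced to a magnitude placeholder, and the sample values are hypothetical:

```python
import cmath

# Placeholder time-domain samples of a time-varying signal.
time_values = [0.0, 1.0, 0.0, -1.0]

def dft(x):
    """Discrete Fourier transform: produces the frequency-domain values."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def branch(values):
    """Stand-in for a branch's feature extraction: here just the
    magnitude of each input value."""
    return [abs(v) for v in values]

freq_values = dft(time_values)    # second-domain input values
first_out = branch(time_values)   # first branch (time domain)
second_out = branch(freq_values)  # second branch (frequency domain)
combined = first_out + second_out # combining the two output results
```

The combination step here is a simple concatenation of the two branches' output results, mirroring the later concatenation layer the claims describe.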
Finally, the claim recites: “where a first one-dimensional convolution layer in the first branch is configured to…”, “where a second one-dimensional-convolution layer in the second branch is configured to…”, “by the first branch of the signal-analyzing neural-network”, “by the second branch of the signal-analyzing neural-network”, and “in a later portion of the signal-analyzing neural network”. These limitations are additional elements that generally link the use of the judicial exception to a particular technological environment or field of use. See MPEP §2106.05(h). Elements that merely generally link the use of the judicial exception to a particular technological environment or field of use cannot provide an inventive concept. The claim is not patent eligible. Regarding claim 20, the rejection of claim 18 is incorporated, and further, the claim recites: “apply the one-dimensional convolutional-based operation followed by the non-linear activation function in order to change each output of each one-dimensional convolutional layer from a linear feature of the parameter-varying signal into a non-linear feature”. This limitation recites mathematical concepts in addition to those identified in the rejection of the parent claim, thus recites a judicial exception. Further, the claim recites: “where the signal-analyzing neural network contains a sequence of multiple iterations of one-dimensional convolutional layers, where each one-dimensional convolutional layer is configured to…”. This limitation is an additional element that generally links the use of the judicial exception to a particular technological environment or field of use. See MPEP §2106.05(h). Elements that merely generally link the use of the judicial exception to a particular technological environment or field of use cannot provide an inventive concept. The claim is not patent eligible. Claim Rejections - 35 USC § 102 The following is a quotation of the appropriate paragraphs of 35 U.S.C. 
102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. Claims 12-14, and 16-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Li et al., "Specific Emitter Identification based on Multi-Domain Features Learning," 06/23/2021 IEEE International Conference on Artificial Intelligence and Industrial Design (AIID), Guangzhou, China, 2021, pp. 178-183, doi: 10.1109/AIID51893.2021.945652, hereinafter referred to as “Li”. Regarding claim 12, Li teaches A method for a machine learning architecture (Li, Page 179, Section 3, Lines 1-4, “The proposed IRI-TFF SEI algorithm mainly includes four steps: fine signal preprocessing and representation, deep learning based recognition model designing, feature fusion strategy designing and network training”), comprising: using the machine learning architecture with a signal-analyzing neural-network to analyze data of parameter-varying signals (Li, Page 178, Abstract, Lines 1-15, “Specific emitter identification (SEI) is a technology to extract the subtle fingerprint features of the received electromagnetic signal, and identify the emitters to which the signal belongs… this paper proposes an Intelligent Radiometric Identification algorithm base on Time and Frequency domain feature Fusion (IRI-TFF) which uses deep learning technology. 
The algorithm designs a new multi-domain fused one-dimensional complex-valued densely connected convolutional network (DenseNet) model after the accurate "calibration" preprocessing of the received signal and the combination of time and frequency domain data as training examples” The “Intelligent Radiometric Identification algorithm” is considered to be the “machine learning architecture” and the “one-dimensional complex-valued densely connected convolutional network” is considered to be the “signal-analyzing neural-network”), where the signal-analyzing neural-network is trained with one or more machine learning algorithms on sampled data of the parameter-varying signals (Li, Page 178, Abstract, Lines 10-15, “The algorithm designs a new multi-domain fused one-dimensional complex-valued densely connected convolutional network (DenseNet) model after the accurate "calibration" preprocessing of the received signal and the combination of time and frequency domain data as training examples”; Li, Page 181, Section IV, A2, “During training, 5,000 burst signal examples for each transmitter are collected, hence a total of 150,000 training examples are obtained, 120,000 of which are used for training, and 30,000 of which are used for verification. during testing, each USRP transmitter has 1,000 signal examples, a total of 30,000 examples are collected for testing”; Li, Page 182, Section IV, A3, Paragraph 2, “During training, the maximum number of training epochs was set as 100, and the batch size was set as 64. Early stop strategy was used, and the number of early stop epochs was set as 10.
In the experiment, NVIDIA-V100 GPU is used to train and test the algorithm, and the network model is implemented on TensorFlow 1.10.0 framework”), and using a one-dimensional-convolution layer in the signal-analyzing neural-network to apply a series of i) a one-dimensional convolutional-based operation on a first set of data of the parameter-varying signals ii) followed by a nonlinear activation function on the first set of data of the parameter-varying signals (Li, Page 180, Section B, Paragraph 3, Line 4 – Paragraph 4, Line 7, “Therefore, it is necessary to construct 1D complex-valued DenseNet model (1DC-Densenet). The convolution layer contains two operators: convolution and activation. For the convolution operation, let the input 1D complex signal vector be x = x_r + j·x_i, then the parameters of the convolution kernel should also be set as a complex tensor, denoted as W = W_r + j·W_i, using the property of the complex number operation, the convolution output is a = W * x = (W_r * x_r - W_i * x_i) + j·(W_i * x_r + W_r * x_i) (10)”; Li, Page 180, Section B, Paragraph 5, “According to the formula (10), the complex-valued convolution operation is expanded to real-valued convolution, and then the output is activated by the complex ReLU (C_ReLU) function, which means the amplitude and phase of the convolution output should be activated respectively before the complex activation output is calculated: y = C_ReLU(a) = A_y·e^(j·θ_y), A_y = max(0, A_a), θ_y = max(0, θ_a) (12)”) with multiple representations of the parameter-varying signals (Li, Page 180, Section C, Lines 1-2, “the 1DC-Densenet is used to extract features from time-domain and frequency-domain data at the same time” See also: Li, Page 180, Col 1, Section 2) Signal Representation), where each representation of the parameter-varying signal is analyzed in a different domain (Li, Page 180, Section C, Lines 1-2, “the 1DC-Densenet is used to extract features from time-domain and frequency-domain data at
the same time”; See also: Li, Page 182, Table 1), in order to produce a classification of an entity into a specific category of an object corresponding to identifying features of the parameter-varying signals (Li, Page 180, Section C, Subsection 1, Lines 1-5, “Feature-level fusion is to use the feature extraction module of 1DC-Densenet in time-domain and frequency-domain to learn the respectively SEI features, cascade and combine them to form a new feature vector, and then input to a new FC layer and softmax output layer to get the category probability”; See also: Li, Page 181, Fig. 5, “Softmax” and “Labels”; Li, Page 178, Abstract, Lines 1-4, “Specific emitter identification (SEI) is a technology to extract the subtle fingerprint features of the received electromagnetic signal, and identify the emitters to which the signal belongs” The “emitters” are considered to be the “object”). Regarding claim 13, Li teaches A non-transitory machine-readable medium, which stores instructions in an executable format by one or more processors to cause operations as follows (Li, Page 182, Section 3, Paragraph 2, Lines 4-6, “In the experiment, NVIDIA-V100 GPU is used to train and test the algorithm, and the network model is implemented on TensorFlow 1.10.0 framework”), comprising: using a machine learning architecture with a signal-analyzing neural-network to analyze data of parameter-varying signals (Li, Page 178, Abstract, Lines 1-15, “Specific emitter identification (SEI) is a technology to extract the subtle fingerprint features of the received electromagnetic signal, and identify the emitters to which the signal belongs… this paper proposes an Intelligent Radiometric Identification algorithm base on Time and Frequency domain feature Fusion (IRI-TFF) which uses deep learning technology. 
The algorithm designs a new multi-domain fused one-dimensional complex-valued densely connected convolutional network (DenseNet) model after the accurate "calibration" preprocessing of the received signal and the combination of time and frequency domain data as training examples” The “Intelligent Radiometric Identification algorithm” is considered to be the “machine learning architecture” and the “one-dimensional complex-valued densely connected convolutional network” is considered to be the “signal-analyzing neural-network”), where the signal-analyzing neural-network is trained with one or more machine learning algorithms on sampled data of the parameter-varying signals (Li, Page 178, Abstract, Lines 10-15, “The algorithm designs a new multi-domain fused one-dimensional complex-valued densely connected convolutional network (DenseNet) model after the accurate "calibration" preprocessing of the received signal and the combination of time and frequency domain data as training examples”; Li, Page 181, Section IV, A2, “During training, 5,000 burst signal examples for each transmitter are collected, hence a total of 150,000 training examples are obtained, 120,000 of which are used for training, and 30,000 of which are used for verification. during testing, each USRP transmitter has 1,000 signal examples, a total of 30,000 examples are collected for testing”; Li, Page 182, Section IV, A3, Paragraph 2, “During training, the maximum number of training epochs was set as 100, and the batch size was set as 64. Early stop strategy was used, and the number of early stop epochs was set as 10. 
In the experiment, NVIDIA-V100 GPU is used to train and test the algorithm, and the network model is implemented on TensorFlow 1.10.0 framework”), and using a one-dimensional-convolution layer in the signal-analyzing neural-network to apply a series of i) a one-dimensional convolutional-based operation on a first set of data of the parameter-varying signals ii) followed by a nonlinear activation function on the first set of data of the parameter-varying signals (Li, Page 180, Section B, Paragraph 3, Line 4 – Paragraph 4, Line 7, “Therefore, it is necessary to construct 1D complex-valued DenseNet model (1DC-Densenet). The convolution layer contains two operators: convolution and activation. For the convolution operation, let the input 1D complex signal vector be x = x_r + j·x_i, then the parameters of the convolution kernel should also be set as a complex tensor, denoted as W = W_r + j·W_i, using the property of the complex number operation, the convolution output is a = W * x = (W_r * x_r - W_i * x_i) + j(W_i * x_r + W_r * x_i) (10)”; Li, Page 180, Section B, Paragraph 5, “According to the formula (10), the complex-valued convolution operation is expanded to real-valued convolution, and then the output is activated by the complex ReLU (C_ReLU) function, which means the amplitude and phase of the convolution output should be activated respectively before the complex activation output is calculated: y = crelu(a) = A_y · e^(jθ_y), where A_y = max(0, A_a) and θ_y = max(0, θ_a) (12)”) with multiple representations of the parameter-varying signals (Li, Page 180, Section C, Lines 1-2, “the 1DC-Densenet is used to extract features from time-domain and frequency-domain data at the same time”; See also: Li, Page 180, Col 1, Section 2) Signal Representation), where each representation of the parameter-varying signal is analyzed in a different domain (Li, Page 180, Section C, Lines 1-2, “the 1DC-Densenet is used to extract features from time-domain and frequency-domain data at 
the same time”; See also: Li, Page 182, Table 1), in order to produce a classification of an entity into a specific category of an object corresponding to identifying features of the parameter-varying signals (Li, Page 180, Section C, Subsection 1, Lines 1-5, “Feature-level fusion is to use the feature extraction module of 1DC-Densenet in time-domain and frequency-domain to learn the respectively SEI features, cascade and combine them to form a new feature vector, and then input to a new FC layer and softmax output layer to get the category probability”; See also: Li, Page 181, Fig. 5, “Softmax” and “Labels”; Li, Page 178, Abstract, Lines 1-4, “Specific emitter identification (SEI) is a technology to extract the subtle fingerprint features of the received electromagnetic signal, and identify the emitters to which the signal belongs”). Regarding claim 14, the rejection of claim 13 is incorporated, and further, Li teaches where the parameter-varying signals under analysis are time-varying signals (Li, Page 181, Section A1, Lines 1-4, “30 USRP devices are used to send burst signals with the same specifications and parameters, and the same USRP device is used to receive and collect signals to obtain training and test example sets”; Li, Page 178, Abstract, Lines 1-4, “—Specific emitter identification (SEI) is a technology to extract the subtle fingerprint features of the received electromagnetic signal, and identify the emitters to which the signal belongs” The signals sent by the “USRP devices” are “electromagnetic signals” which are considered to be “time-varying signals”), supplying input values of the time-varying signals in a time domain into a first branch of the signal-analyzing neural-network and then operating upon the time-varying signals in the time domain in the first branch of the signal-analyzing neural-network, where a first one-dimensional-convolution layer in the first branch is configured to apply the one-dimensional convolutional-based operation on 
the input values of the time-varying signals in the time domain, under analysis (Li, Page 181, Figs. 5-6, Fig. 5 shows “Feature Extraction” which includes several “Convolution” layers, Fig. 6 shows a “Feature Extraction” block for “Time-domain data” which is considered to be the “first branch”; Li, Page 180, Section B, Paragraph 4, Lines 2-7, “For the convolution operation, let the input 1D complex signal vector be x = x_r + j·x_i, then the parameters of the convolution kernel should also be set as a complex tensor, denoted as W = W_r + j·W_i, using the property of the complex number operation, the convolution output is a = W * x = (W_r * x_r - W_i * x_i) + j(W_i * x_r + W_r * x_i) (10)”), and supplying input values of the time-varying signals in a frequency domain into a second branch of the signal-analyzing neural-network and then operating upon the time-varying signals in the frequency domain in the second branch of the signal-analyzing neural-network, where a second one-dimensional-convolution layer in the second branch is configured to apply the one-dimensional convolutional-based operation on the input values of the time-varying signals in the frequency domain, under analysis (Li, Page 181, Figs. 5-6, Fig. 5 shows “Feature Extraction” which includes several “Convolution” layers, Fig. 
6 shows a “Feature Extraction” block for “Frequency-domain data” which is considered to be the “second branch”; Li, Page 180, Section B, Paragraph 4, Lines 2-7, “For the convolution operation, let the input 1D complex signal vector be x = x_r + j·x_i, then the parameters of the convolution kernel should also be set as a complex tensor, denoted as W = W_r + j·W_i, using the property of the complex number operation, the convolution output is a = W * x = (W_r * x_r - W_i * x_i) + j(W_i * x_r + W_r * x_i) (10)”), where the signal-analyzing neural-network has an input block to receive the input values for the time-varying signals in the time domain, where the input block is configured to apply a Fast Fourier Transform on the input values for the time-varying signals under analysis from the time domain in order to produce values for the time-varying signals under analysis in the frequency domain (Li, Page 180, 2) Signal Representation, “After calibration, for time-domain signals, as the communication payload of each signal example is random modulated information, it is unstable information and not suitable for the network learning, but the signal vector of the preamble segment x_t = [ẑ_0, …, ẑ_(L-1)]^T which uses fixed modulated information can be used as transient information to input networks for feature extracting. At the same time, the spectrum vector x_f = [Z_0, …, Z_(L-1)]^T obtained by the Fast Fourier transformation (FFT) of the whole signal samples consists of preamble and payload can not only eliminate the influence of random information, but also represent the overall spectral characteristics of the signal, which contains the differences of the transmission filter shape, roll off, jitter and so on. It is suitable to input the networks as a steady-state information for feature learning”; see also Li, Page 179, Figure 1, “Signal Preprocessing” is performed prior to “Feature Extraction” and is thus considered to take place in an “input block”). 
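The FFT-based input block described above can be illustrated with a short Python sketch (illustrative values and array sizes only; the variable names follow Li's notation but the data is the annotator's own):

```python
import numpy as np

# Minimal sketch of the two-branch input preparation discussed above:
# the time-domain vector x_t feeds one branch, and an FFT of the signal
# produces the frequency-domain vector x_f for the other branch.
# Signal length and values are illustrative, not taken from Li.

rng = np.random.default_rng(0)
L = 8  # illustrative signal length

x_t = rng.standard_normal(L) + 1j * rng.standard_normal(L)  # time-domain samples
x_f = np.fft.fft(x_t)                                       # frequency-domain representation

# Each branch would then apply its own 1D convolution to its representation.
assert x_f.shape == x_t.shape
# Parseval check: the FFT preserves total energy up to the 1/L factor.
assert np.isclose(np.sum(np.abs(x_t) ** 2), np.sum(np.abs(x_f) ** 2) / L)
```

The same length-L vector thus yields two distinct representations of one signal, consistent with supplying time-domain values to a first branch and frequency-domain values to a second branch.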
Regarding claim 16, the rejection of claim 14 is incorporated, and further, Li teaches using a matrix to supply different values of the data of the time-varying signals under analysis (Li, Page 180, Section B, Paragraph 4, Lines 2-9, “For the convolution operation, let the input 1D complex signal vector be x = x_r + j·x_i, then the parameters of the convolution kernel should also be set as a complex tensor, denoted as W = W_r + j·W_i, using the property of the complex number operation, the convolution output is a = W * x = (W_r * x_r - W_i * x_i) + j(W_i * x_r + W_r * x_i) (10), which can be represented as a matrix form: a = [Re(a); Im(a)] = [W_r, -W_i; W_i, W_r] * [x_r; x_i] (11)” The final term in this equation is the “data of the time-varying signals” represented as a matrix) as a one-dimensional time signal that is expanded into a multiple-dimensional time-and-frequency representation via operations performed by and within the signal-analyzing neural-network (Li, Page 179, Section III, Subsection A, Lines 4-11, “Therefore, before network training, the received signals should be preprocessed with fine calibration on parameters such as time, power, frequency and phase to eliminate the influence of unstable elements. At the same time, in order to improve the identification performance, it is necessary to input the preprocessed data in time-domain, frequency-domain and other domains into the network to learn more comprehensive features”; See also, Li Page 179, Section III, Subsections A1 and A2). 
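Li's formulas (10) and (11) can be checked numerically. The following Python sketch (illustrative values; kernel length 1, so the "convolution" reduces to a single multiplication) confirms that the expanded real-valued form and the 2x2 real matrix form agree with ordinary complex multiplication:

```python
import numpy as np

# Sketch of formulas (10)-(11): a complex product W * x expanded into
# real-valued operations, and the equivalent 2x2 real matrix form.
# All values are illustrative, not taken from Li.

x_r, x_i = 2.0, 3.0    # real and imaginary parts of the input
W_r, W_i = 0.5, -1.5   # real and imaginary parts of the kernel

# Formula (10): a = (W_r*x_r - W_i*x_i) + j(W_i*x_r + W_r*x_i)
a = complex(W_r * x_r - W_i * x_i, W_i * x_r + W_r * x_i)

# Formula (11): the same result as a 2x2 real matrix acting on [x_r, x_i]
M = np.array([[W_r, -W_i],
              [W_i,  W_r]])
re_im = M @ np.array([x_r, x_i])

assert np.isclose(a.real, re_im[0]) and np.isclose(a.imag, re_im[1])
# Both agree with native complex multiplication.
assert np.isclose(a, complex(W_r, W_i) * complex(x_r, x_i))
```

The column vector [x_r; x_i] in the matrix form is the same "final term" the rejection identifies as the data of the time-varying signals represented as a matrix.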
Regarding claim 17, the rejection of claim 14 is incorporated, and further Li teaches using two or more successive layers of the one-dimensional-convolution layer to apply the one-dimensional-convolution operation followed by a non-linear activation function layer to apply the non-linear activation function to the data of values of time and frequency features of the time-varying signals in order to change an output of the one-dimensional-convolutional layer from a linear feature of the time-varying signal into a non-linear feature (Li, Page 181, Figs. 5-6, Fig. 5 shows “Feature Extraction” which includes several “Convolution” layers shown before and after “Dense Block[s]”, Fig. 6 shows a “Feature Extraction” block for “Time-domain data” and a separate “Feature Extraction” block for “Frequency-domain data”, which are considered to be the “one or more branches of the signal-analyzing neural-network”; Li, Page 180, Fig. 3, “Structure diagram of a dense block”; Thus, there are “two or more successive layers of the one-dimensional-convolution layer”; Li, Page 180, Section B, Paragraph 4, Lines 1-2, “The convolution layer contains two operators: convolution and activation”; Li, Page 180, Section B, Paragraph 5, Lines 1-4, “According to the formula (10), the complex-valued convolution operation is expanded to real-valued convolution, and then the output is activated by the complex ReLU (C_ReLU) function”; Thus, the convolution layer is followed by a “non-linear activation function layer” as each layer labeled “convolution” is also an activation layer as it performs an activation function; a person of ordinary skill in the art would recognize that the “ReLU” activation function changes linear features to non-linear features). 
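The C_ReLU activation of Li's formula (12), in which the amplitude and phase of the convolution output are rectified separately, can be sketched as follows (a minimal illustration under the annotator's own test values, not Li's implementation):

```python
import numpy as np

# Sketch of the complex ReLU (C_ReLU) of formula (12): the amplitude and
# phase of each complex output are rectified separately, i.e.
# A_y = max(0, A_a) and θ_y = max(0, θ_a). Values are illustrative.

def c_relu(a: np.ndarray) -> np.ndarray:
    amplitude = np.maximum(0.0, np.abs(a))   # A_a is already non-negative
    phase = np.maximum(0.0, np.angle(a))     # negative phases clamp to 0
    return amplitude * np.exp(1j * phase)

a = np.array([1 + 1j, 1 - 1j])  # phases +pi/4 and -pi/4
y = c_relu(a)

# The positive-phase element passes through unchanged; the negative phase
# clamps to 0, leaving a purely real output with the original amplitude.
assert np.isclose(y[0], 1 + 1j)
assert np.isclose(y[1], np.sqrt(2))
```

Because the clamping is nonlinear in both amplitude and phase, stacking such a layer after each convolution changes a linear feature into a non-linear one, as the rejection of claim 17 observes for ReLU-family activations generally.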
Regarding claim 18, Li teaches An apparatus (Li, Page 182, Section 3, Paragraph 2, Lines 4-6, “In the experiment, NVIDIA-V100 GPU is used to train and test the algorithm, and the network model is implemented on TensorFlow 1.10.0 framework”), comprising: a machine learning architecture configured to use a signal-analyzing neural-network, using the machine learning architecture with the signal-analyzing neural-network to analyze data of parameter-varying signals (Li, Page 178, Abstract, Lines 1-15, “Specific emitter identification (SEI) is a technology to extract the subtle fingerprint features of the received electromagnetic signal, and identify the emitters to which the signal belongs… this paper proposes an Intelligent Radiometric Identification algorithm base on Time and Frequency domain feature Fusion (IRI-TFF) which uses deep learning technology. The algorithm designs a new multi-domain fused one-dimensional complex-valued densely connected convolutional network (DenseNet) model after the accurate "calibration" preprocessing of the received signal and the combination of time and frequency domain data as training examples” The “Intelligent Radiometric Identification algorithm” is considered to be the “machine learning architecture” and the “one-dimensional complex-valued densely connected convolutional network” is considered to be the “signal-analyzing neural-network”), where the signal-analyzing neural-network is trained with one or more machine learning algorithms on sampled data of the parameter-varying signals (Li, Page 178, Abstract, Lines 10-15, “The algorithm designs a new multi-domain fused one-dimensional complex-valued densely connected convolutional network (DenseNet) model after the accurate "calibration" preprocessing of the received signal and the combination of time and frequency domain data as training examples”; Li, Page 181, Section IV, A2, “During training, 5,000 burst signal examples for each transmitter are collected, hence a total of 150,000 
training examples are obtained, 120,000 of which are used for training, and 30,000 of which are used for verification. during testing, each USRP transmitter has 1,000 signal examples, a total of 30,000 examples are collected for testing”; Li, Page 182, Section IV, A3, Paragraph 2, “During training, the maximum number of training epochs was set as 100, and the batch size was set as 64. Early stop strategy was used, and the number of early stop epochs was set as 10. In the experiment, NVIDIA-V100 GPU is used to train and test the algorithm, and the network model is implemented on TensorFlow 1.10.0 framework”), and a one-dimensional-convolution layer in the signal-analyzing neural-network is configured to apply a series of i) a one-dimensional convolutional-based operation on a first set of data of the parameter-varying signals ii) followed by a nonlinear activation function on the first set of data of the parameter-varying signals (Li, Page 180, Section B, Paragraph 3, Line 4 – Paragraph 4, Line 7, “Therefore, it is necessary to construct 1D complex-valued DenseNet model (1DC-Densenet). The convolution layer contains two operators: convolution and activation. 
For the convolution operation, let the input 1D complex signal vector be x = x_r + j·x_i, then the parameters of the convolution kernel should also be set as a complex tensor, denoted as W = W_r + j·W_i, using the property of the complex number operation, the convolution output is a = W * x = (W_r * x_r - W_i * x_i) + j(W_i * x_r + W_r * x_i) (10)”; Li, Page 180, Section B, Paragraph 5, “According to the formula (10), the complex-valued convolution operation is expanded to real-valued convolution, and then the output is activated by the complex ReLU (C_ReLU) function, which means the amplitude and phase of the convolution output should be activated respectively before the complex activation output is calculated: y = crelu(a) = A_y · e^(jθ_y), where A_y = max(0, A_a) and θ_y = max(0, θ_a) (12)”) with multiple representations of the parameter-varying signals (Li, Page 180, Section C, Lines 1-2, “the 1DC-Densenet is used to extract features from time-domain and frequency-domain data at the same time”; See also: Li, Page 180, Col 1, Section 2) Signal Representation), where each representation of the parameter-varying signal is analyzed in a different domain (Li, Page 180, Section C, Lines 1-2, “the 1DC-Densenet is used to extract features from time-domain and frequency-domain data at the same time”; See also: Li, Page 182, Table 1), in order to produce a classification of an entity into a specific category of an object corresponding to identifying features of the parameter-varying signals (Li, Page 180, Section C, Subsection 1, Lines 1-5, “Feature-level fusion is to use the feature extraction module of 1DC-Densenet in time-domain and frequency-domain to learn the respectively SEI features, cascade and combine them to form a new feature vector, and then input to a new FC layer and softmax output layer to get the category probability”; See also: Li, Page 181, Fig. 
5, “Softmax” and “Labels”; Li, Page 178, Abstract, Lines 1-4, “Specific emitter identification (SEI) is a technology to extract the subtle fingerprint features of the received electromagnetic signal, and identify the emitters to which the signal belongs”). Regarding claim 19, the rejection of claim 18 is incorporated, and further, Li teaches where the parameter-varying signals under analysis are time-varying signals (Li, Page 181, Section A1, Lines 1-4, “30 USRP devices are used to send burst signals with the same specifications and parameters, and the same USRP device is used to receive and collect signals to obtain training and test example sets”; Li, Page 178, Abstract, Lines 1-4, “—Specific emitter identification (SEI) is a technology to extract the subtle fingerprint features of the received electromagnetic signal, and identify the emitters to which the signal belongs” The signals sent by the “USRP devices” are “electromagnetic signals” which are considered to be “time-varying signals”), and where the signal-analyzing neural-network is constructed to include i) a first branch where input values of the time-varying signals in a first domain are supplied into and operated upon in the first branch of the signal-analyzing neural-network, where a first one-dimensional-convolution layer in the first branch is configured to apply the one-dimensional convolutional-based operation on the input values of the time-varying signals in the first domain, under analysis (Li, Page 181, Figs. 5-6, Fig. 5 shows “Feature Extraction” which includes several “Convolution” layers, Fig. 
6 shows a “Feature Extraction” block for “Time-domain data” which is considered to be the “first branch”, and the “time-domain” is considered to be the “first domain”; Li, Page 180, Section B, Paragraph 4, Lines 2-7, “For the convolution operation, let the input 1D complex signal vector be x = x_r + j·x_i, then the parameters of the convolution kernel should also be set as a complex tensor, denoted as W = W_r + j·W_i, using the property of the complex number operation, the convolution output is a = W * x = (W_r * x_r - W_i * x_i) + j(W_i * x_r + W_r * x_i) (10)”), and ii) a second branch where input values of the time-varying signals in a second domain are supplied into and operated upon in the second branch of the signal-analyzing neural-network, where a second one-dimensional-convolution layer in the second branch is configured to apply the one-dimensional convolutional-based operation on the input values of the time-varying signals in a second domain, under analysis (Li, Page 181, Figs. 5-6, Fig. 5 shows “Feature Extraction” which includes several “Convolution” layers, Fig. 
6 shows a “Feature Extraction” block for “Frequency-domain data” which is considered to be the “second branch”, and the “frequency-domain” is considered to be the “second domain”; Li, Page 180, Section B, Paragraph 4, Lines 2-7, “For the convolution operation, let the input 1D complex signal vector be x = x_r + j·x_i, then the parameters of the convolution kernel should also be set as a complex tensor, denoted as W = W_r + j·W_i, using the property of the complex number operation, the convolution output is a = W * x = (W_r * x_r - W_i * x_i) + j(W_i * x_r + W_r * x_i) (10)”), at a same time with operations in the first branch (Li, Page 180, Section C, Lines 1-2, “When the 1DC-Densenet is used to extract features from time-domain and frequency-domain data at the same time”), where a first output result in the first domain is generated by the first branch of the signal-analyzing neural-network (Li, Page 181, Fig. 6, (1) Feature-Level Fusion, The “time-domain features” are the “first output result”), and where a second output result in the second domain is generated by the second branch of the signal-analyzing neural-network (Li, Page 181, Fig. 6, (1) Feature-Level Fusion, The “frequency-domain features” are the “second output result”), where the first and second output results from the first branch and the second branch of the signal-analyzing neural-network are combined in a later portion of the signal-analyzing neural network (Li, Page 180, Section C1, Lines 1-5, “Feature-level fusion is to use the feature extraction module of 1DC-Densenet in time-domain and frequency-domain to learn the respectively SEI features, cascade and combine them to form a new feature vector, and then input to a new FC layer and softmax output layer to get the category probability”). 
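The feature-level fusion just quoted (cascading the two branches' feature vectors, then a new FC layer and softmax output) can be sketched in Python (dimensions, weights, and class count are illustrative only, not drawn from Li):

```python
import numpy as np

# Sketch of feature-level fusion: features from the time-domain branch and
# the frequency-domain branch are concatenated into one vector and mapped
# through a fully connected (FC) layer and softmax to category probabilities.

rng = np.random.default_rng(1)

time_feat = rng.standard_normal(4)  # first output result (time-domain branch)
freq_feat = rng.standard_normal(4)  # second output result (frequency-domain branch)

fused = np.concatenate([time_feat, freq_feat])   # cascade into a new feature vector
W_fc = rng.standard_normal((3, fused.size))      # FC layer for 3 illustrative classes
logits = W_fc @ fused
probs = np.exp(logits) / np.sum(np.exp(logits))  # softmax -> category probability

assert fused.shape == (8,)
assert np.isclose(probs.sum(), 1.0)
```

The concatenation step is the "later portion of the signal-analyzing neural network" in which the two branches' output results are combined.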
Regarding claim 20, the rejection of claim 18 is incorporated, and further, Li teaches where the signal-analyzing neural network contains a sequence of multiple iterations of one-dimensional convolutional layers, where each one-dimensional convolutional layer is configured to apply the one-dimensional convolutional-based operation followed by the non-linear activation function in order to change each output of each one-dimensional convolutional layer from a linear feature of the parameter-varying signal into a non-linear feature (Li, Page 180, Section C, Lines 6-8, “Fig. 5 shows the decomposed structure of 1DC-Densenet, which mainly includes three parts: feature extraction, fully connected layer (FC) and softmax output layer”, see also Fig. 5 and Fig. 3 which show the “multiple iterations of one-dimensional convolutional layers”; Li, Page 180, Section B, Paragraph 3, Line 4 – Paragraph 4, Line 7, “Therefore, it is necessary to construct 1D complex-valued DenseNet model (1DC-Densenet). The convolution layer contains two operators: convolution and activation. 
For the convolution operation, let the input 1D complex signal vector be x = x_r + j·x_i, then the parameters of the convolution kernel should also be set as a complex tensor, denoted as W = W_r + j·W_i, using the property of the complex number operation, the convolution output is a = W * x = (W_r * x_r - W_i * x_i) + j(W_i * x_r + W_r * x_i) (10)”; Li, Page 180, Section B, Paragraph 5, “According to the formula (10), the complex-valued convolution operation is expanded to real-valued convolution, and then the output is activated by the complex ReLU (C_ReLU) function, which means the amplitude and phase of the convolution output should be activated respectively before the complex activation output is calculated: y = crelu(a) = A_y · e^(jθ_y), where A_y = max(0, A_a) and θ_y = max(0, θ_a) (12)” A person of ordinary skill in the art would recognize that the “ReLU” activation function changes linear features to non-linear features). Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 1-5, 8-11 are rejected under 35 U.S.C. 103 as being unpatentable over Li in view of Ciritsis et al., U.S. Patent Application Publication No. 20240242349, hereinafter referred to as "Ciritsis". 
Regarding claim 1, Li teaches An apparatus, comprising: an output module configured to work with one or more processors to execute instructions and a memory to store data and instructions, where the output module is configured to cooperate (Li, Page 182, Section 3, Paragraph 2, Lines 4-6, “In the experiment, NVIDIA-V100 GPU is used to train and test the algorithm, and the network model is implemented on TensorFlow 1.10.0 framework”) with a machine learning architecture to analyze a first set of data of parameter-varying signals, and where the machine learning architecture is configured to use a signal-analyzing neural-network (Li, Page 178, Abstract, Lines 1-15, “Specific emitter identification (SEI) is a technology to extract the subtle fingerprint features of the received electromagnetic signal, and identify the emitters to which the signal belongs… this paper proposes an Intelligent Radiometric Identification algorithm base on Time and Frequency domain feature Fusion (IRI-TFF) which uses deep learning technology. 
The algorithm designs a new multi-domain fused one-dimensional complex-valued densely connected convolutional network (DenseNet) model after the accurate "calibration" preprocessing of the received signal and the combination of time and frequency domain data as training examples” The “Intelligent Radiometric Identification algorithm” is considered to be the “machine learning architecture” and the “one-dimensional complex-valued densely connected convolutional network” is considered to be the “signal-analyzing neural-network”), where the signal-analyzing neural-network is trained with one or more machine learning algorithms on sampled data of the parameter-varying signals (Li, Page 178, Abstract, Lines 10-15, “The algorithm designs a new multi-domain fused one-dimensional complex-valued densely connected convolutional network (DenseNet) model after the accurate "calibration" preprocessing of the received signal and the combination of time and frequency domain data as training examples”; Li, Page 181, Section IV, A2, “During training, 5,000 burst signal examples for each transmitter are collected, hence a total of 150,000 training examples are obtained, 120,000 of which are used for training, and 30,000 of which are used for verification. during testing, each USRP transmitter has 1,000 signal examples, a total of 30,000 examples are collected for testing”; Li, Page 182, Section IV, A3, Paragraph 2, “During training, the maximum number of training epochs was set as 100, and the batch size was set as 64. Early stop strategy was used, and the number of early stop epochs was set as 10. 
In the experiment, NVIDIA-V100 GPU is used to train and test the algorithm, and the network model is implemented on TensorFlow 1.10.0 framework”), where the signal-analyzing neural-network is configured to contain a one-dimensional-convolution layer to apply a series of i) a one-dimensional convolutional-based operation on the first set of data of the parameter-varying signals ii) followed by a nonlinear activation function on the first set of data of the parameter-varying signals (Li, Page 180, Section B, Paragraph 3, Line 4 – Paragraph 4, Line 7, “Therefore, it is necessary to construct 1D complex-valued DenseNet model (1DC-Densenet). The convolution layer contains two operators: convolution and activation. For the convolution operation, let the input 1D complex signal vector be x = x_r + j·x_i, then the parameters of the convolution kernel should also be set as a complex tensor, denoted as W = W_r + j·W_i, using the property of the complex number operation, the convolution output is a = W * x = (W_r * x_r - W_i * x_i) + j(W_i * x_r + W_r * x_i) (10)”; Li, Page 180, Section B, Paragraph 5, “According to the formula (10), the complex-valued convolution operation is expanded to real-valued convolution, and then the output is activated by the complex ReLU (C_ReLU) function, which means the amplitude and phase of the convolution output should be activated respectively before the complex activation output is calculated: y = crelu(a) = A_y · e^(jθ_y), where A_y = max(0, A_a) and θ_y = max(0, θ_a) (12)”) with multiple representations of the parameter-varying signals (Li, Page 180, Section C, Lines 1-2, “the 1DC-Densenet is used to extract features from time-domain and frequency-domain data at the same time”; See also: Li, Page 180, Col 1, Section 2) Signal Representation), where each representation of the parameter-varying signal is analyzed in a different domain (Li, Page 180, Section C, Lines 1-2, “the 1DC-Densenet is used to extract features from time-domain and 
frequency-domain data at the same time”; See also: Li, Page 182, Table 1), in order to produce a classification of an entity into a specific category of an object corresponding to identifying features of the parameter-varying signals (Li, Page 180, Section C, Subsection 1, Lines 1-5, “Feature-level fusion is to use the feature extraction module of 1DC-Densenet in time-domain and frequency-domain to learn the respectively SEI features, cascade and combine them to form a new feature vector, and then input to a new FC layer and softmax output layer to get the category probability”; See also: Li, Page 181, Fig. 5, “Softmax” and “Labels”; Li, Page 178, Abstract, Lines 1-4, “Specific emitter identification (SEI) is a technology to extract the subtle fingerprint features of the received electromagnetic signal, and identify the emitters to which the signal belongs”). Li does not explicitly teach present a representation of an output result from the machine learning architecture to be shown on a display screen indicating the specific category that the object is classified to belong to from the first set of data of time-varying signals under analysis, without any prior knowledge of a presence or a type of the classified object actually being contained or present within the parameter-varying signals, currently under analysis. Ciritsis teaches present a representation of an output result from the machine learning architecture to be shown on a display screen indicating the specific category that the object is classified to belong to from the first set of data of time-varying signals under analysis, without any prior knowledge of a presence or a type of the classified object actually being contained or present within the parameter-varying signals, currently under analysis (Ciritsis, Paragraphs 0216-0217, “A step S23 of displaying the image and the classification according to the AI configured according to current model 1, to the user. 
[0217] The image and the classification according to the AI may be displaced to the user via the user interface 140, for example a screen of the medical imaging system or a screen of computer belonging to the same computer network as the medical imaging system”). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the apparatus taught by Li to include the user interface to display the classification results as taught by Ciritsis. The motivation for the combination would have been the ability to display the classification results to a user for review, allowing the user to approve or correct the classification, improving classification results (Ciritsis, Paragraphs 0216-0226). Regarding claim 2, the rejection of claim 1 is incorporated, and further the proposed combination teaches where a multiple value data structure is utilized to supply different values of the parameter-varying signals to the signal-analyzing neural-network (Li, Page 180, Section B, Paragraph 4, Lines 2-3, “let the input 1D complex signal vector be x = x_r + j·x_i” A “vector” is considered to be a “multiple value data structure”), where the parameter-varying signals under analysis are time-varying signals (Li, Page 181, Section A1, Lines 1-4, “30 USRP devices are used to send burst signals with the same specifications and parameters, and the same USRP device is used to receive and collect signals to obtain training and test example sets”; Li, Page 178, Abstract, Lines 1-4, “—Specific emitter identification (SEI) is a technology to extract the subtle fingerprint features of the received electromagnetic signal, and identify the emitters to which the signal belongs” The signals sent by the “USRP devices” are “electromagnetic signals” which are considered to be “time-varying signals”). 
Regarding claim 3, the rejection of claim 2 is incorporated, and further, the proposed combination teaches where one or more branches of the signal-analyzing neural-network are constructed to apply at least two or more successive layers of the one-dimensional-convolution layer to apply the one-dimensional-convolution operation followed by a non-linear activation function layer to apply the non-linear activation function to data values of time and frequency in the time-varying signals (Li, Page 181, Figs. 5-6, Fig. 5 shows “Feature Extraction” which includes several “Convolution” layers shown before and after “Dense Block[s]”, Fig. 6 shows a “Feature Extraction” block for “Time-domain data” and a separate “Feature Extraction” block for “Frequency-domain data”, which are considered to be the “one or more branches of the signal-analyzing neural-network”; Li, Page 180, Fig. 3, “Structure diagram of a dense block”; Thus, there are “at least two or more successive layers of the one-dimensional-convolution layer”; Li, Page 180, Section B, Paragraph 4, Lines 1-2, “The convolution layer contains two operators: convolution and activation”; Li, Page 180, Section B, Paragraph 5, Lines 1-4, “According to the formula (10), the complex-valued convolution operation is expanded to real-valued convolution, and then the output is activated by the complex ReLU (C_ReLU) function”; Thus, the convolution layer is followed by a “non-linear activation function layer” as each layer labeled “convolution” is also an activation layer as it performs an activation function). Regarding claim 4, the rejection of claim 3 is incorporated, and further, the proposed combination teaches where the signal-analyzing neural-network is a convolutional neural network architecture (Li, Page 180, Section B, Paragraph 3, Line 4 – Paragraph 4, Line 2, “Therefore, it is necessary to construct 1D complex-valued DenseNet model (1DC-Densenet). 
The convolution layer contains two operators: convolution and activation”; Li, Page 181, Fig. 5, “The decomposed structure of 1DC-DenseNet”).

Regarding claim 5, the rejection of claim 2 is incorporated, and further, the proposed combination teaches where one or more portions of the signal-analyzing neural-network are constructed to include a first branch, where input values of the time-varying signals in a first domain are supplied into the first branch of the signal-analyzing neural-network, where a first one-dimensional-convolution layer in the first branch is configured to apply the one-dimensional convolutional-based operation in the first branch on the input values of the time-varying signals in the first domain (Li, Page 181, Figs. 5-6, Fig. 5 shows “Feature Extraction” which includes several “Convolution” layers, Fig. 6 shows a “Feature Extraction” block for “Time-domain data” which is considered to be the “first branch”, and the “time-domain” is considered to be the “first domain”; Li, Page 180, Section B, Paragraph 4, Lines 2-7, “For the convolution operation, let the input 1D complex signal vector be x = x_r + jx_i, then the parameters of the convolution kernel should also be set as a complex tensor, denoted as W = W_r + jW_i, using the property of the complex number operation, the convolution output is a = W*x = (W_r*x_r - W_i*x_i) + j(W_i*x_r + W_r*x_i) (10)”), and a second branch where input values of the time-varying signals in a second domain are supplied into the second branch of the signal-analyzing neural-network, where a second one-dimensional-convolution layer in the second branch is configured to apply the one-dimensional convolutional-based operation in the second branch on the input values of the time-varying signals in a second domain (Li, Page 181, Figs. 5-6, Fig. 5 shows “Feature Extraction” which includes several “Convolution” layers, Fig. 6 shows a “Feature Extraction” block for “Frequency-domain data” which is considered to be the “second branch”, and the “frequency-domain” is considered to be the “second domain”; Li, Page 180, Section B, Paragraph 4, Lines 2-7, “For the convolution operation, let the input 1D complex signal vector be x = x_r + jx_i, then the parameters of the convolution kernel should also be set as a complex tensor, denoted as W = W_r + jW_i, using the property of the complex number operation, the convolution output is a = W*x = (W_r*x_r - W_i*x_i) + j(W_i*x_r + W_r*x_i) (10)”) at a same time with operations in the first branch (Li, Page 180, Section C, Lines 1-2, “When the 1DC-Densenet is used to extract features from time-domain and frequency-domain data at the same time”).

Regarding claim 8, the rejection of claim 2 is incorporated, and further, the proposed combination teaches where the signal-analyzing neural-network contains a sequence of multiple iterations of one-dimensional convolutional layers, where each one-dimensional convolutional layer is configured to apply the one-dimensional convolutional-based operation followed by the non-linear activation function layer in order to change each output of each one-dimensional convolutional layer from a linear feature of the time-varying signal into a non-linear feature (Li, Page 180, Section C, Lines 6-8, “Fig. 5 shows the decomposed structure of 1DC-Densenet, which mainly includes three parts: feature extraction, fully connected layer (FC) and softmax output layer”, see also Fig. 5 and Fig. 3 which show the “multiple iterations of one-dimensional convolutional layers”; Li, Page 180, Section B, Paragraph 3, Line 4 – Paragraph 4, Line 7, “Therefore, it is necessary to construct 1D complex-valued DenseNet model (1DC-Densenet). The convolution layer contains two operators: convolution and activation.
For the convolution operation, let the input 1D complex signal vector be x = x_r + jx_i, then the parameters of the convolution kernel should also be set as a complex tensor, denoted as W = W_r + jW_i, using the property of the complex number operation, the convolution output is a = W*x = (W_r*x_r - W_i*x_i) + j(W_i*x_r + W_r*x_i) (10)”; Li, Page 180, Section B, Paragraph 5, “According to the formula (10), the complex-valued convolution operation is expanded to real-valued convolution, and then the output is activated by the complex ReLU (C_ReLU) function, which means the amplitude and phase of the convolution output should be activated respectively before the complex activation output is calculated: y = C_ReLU(a) = A_y e^(jθ_y), where A_y = max(0, A_a) and θ_y = max(0, θ_a) (12)”. A person of ordinary skill in the art would recognize that the “ReLU” activation function changes linear features to non-linear features).

Regarding claim 9, the rejection of claim 1 is incorporated, and further, the proposed combination teaches a user interface configured to convey a classification result (Ciritsis, Paragraph 0217, Lines 1-3, “the classification according to the AI may be displaced to the user via the user interface 140”) and the produced classification of the entity into the specific category of the object corresponding to identifying features of the parameter-varying signals from a group of two or more possible categories of the object (Li, Page 180, Section C1, Lines 1-5, “Feature-level fusion is to use the feature extraction module of 1DC-Densenet in time-domain and frequency-domain to learn the respectively SEI features, cascade and combine them to form a new feature vector, and then input to a new FC layer and softmax output layer to get the category probability”; Li, Page 182, Table 1, The last row shows there are 30 possible categories).
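For readers tracing Li's formulas (10) and (12) as quoted in the rejection, the expansion of a complex-valued 1D convolution into real-valued convolutions, followed by the C_ReLU activation, can be sketched in NumPy. This is an illustrative sketch only; the function names and test values below are mine, not code from Li or from the claims.

```python
import numpy as np

def complex_conv1d(x_r, x_i, w_r, w_i):
    # Formula (10) quoted from Li: a = W*x expanded into real-valued
    # convolutions, a = (W_r*x_r - W_i*x_i) + j(W_i*x_r + W_r*x_i)
    a_r = np.convolve(x_r, w_r, mode="valid") - np.convolve(x_i, w_i, mode="valid")
    a_i = np.convolve(x_r, w_i, mode="valid") + np.convolve(x_i, w_r, mode="valid")
    return a_r, a_i

def c_relu(a_r, a_i):
    # Formula (12) quoted from Li: the amplitude and phase of the
    # complex output are each passed through max(0, .) before the
    # complex activation output is recombined
    amp = np.hypot(a_r, a_i)              # A_a
    phase = np.arctan2(a_i, a_r)          # theta_a
    amp_y = np.maximum(0.0, amp)          # A_y = max(0, A_a)
    phase_y = np.maximum(0.0, phase)      # theta_y = max(0, theta_a)
    return amp_y * np.cos(phase_y), amp_y * np.sin(phase_y)

# Sanity check: the real-valued expansion matches NumPy's native
# complex convolution on arbitrary sample data
x = np.array([1 + 2j, 3 - 1j, 0.5 + 0.5j])
w = np.array([1 - 1j, 2 + 0j])
a_r, a_i = complex_conv1d(x.real, x.imag, w.real, w.imag)
assert np.allclose(a_r + 1j * a_i, np.convolve(x, w, mode="valid"))
y_r, y_i = c_relu(a_r, a_i)
```

The sanity check exploits that convolution is bilinear, so the complex product distributes over the real and imaginary parts exactly as formula (10) states.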
Regarding claim 10, the rejection of claim 2 is incorporated, and further, the proposed combination teaches where a multiple value data structure is configured to supply different values of the first set of data of the time-varying signals as a matrix (Li, Page 180, Section B, Paragraph 4, Lines 2-9, “For the convolution operation, let the input 1D complex signal vector be x = x_r + jx_i, then the parameters of the convolution kernel should also be set as a complex tensor, denoted as W = W_r + jW_i, using the property of the complex number operation, the convolution output is a = W*x = (W_r*x_r - W_i*x_i) + j(W_i*x_r + W_r*x_i) (10), which can be represented in a matrix form: a = [Re(a); Im(a)] = [W_r, -W_i; W_i, W_r] * [x_r; x_i] (11)”) to supply the different values as a one-dimensional time signal that is expanded into a multiple dimensional time-and-frequency representation via operations performed by a preprocessing portion of the signal-analyzing neural-network (Li, Page 179, Section III, Subsection A, Lines 4-11, “Therefore, before network training, the received signals should be preprocessed with fine calibration on parameters such as time, power, frequency and phase to eliminate the influence of unstable elements. At the same time, in order to improve the identification performance, it is necessary to input the preprocessed data in time-domain, frequency-domain and other domains into the network to learn more comprehensive features”; see also Li, Page 179, Section III, Subsections A1 and A2).
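The matrix form in Li's formula (11) can be verified numerically: stacking the real and imaginary parts and multiplying by the 2x2 block [[W_r, -W_i], [W_i, W_r]] reproduces ordinary complex multiplication. The scalar values below are arbitrary illustrations, not data from Li.

```python
import numpy as np

# Formula (11) quoted from Li: the complex product a = W*x written in
# real-valued matrix form,
#   [Re(a)]   [W_r  -W_i] [x_r]
#   [Im(a)] = [W_i   W_r] [x_i]
w_r, w_i = 2.0, -1.0
x_r, x_i = 0.5, 3.0

M = np.array([[w_r, -w_i],
              [w_i,  w_r]])
re_a, im_a = M @ np.array([x_r, x_i])

# Same result from ordinary complex multiplication
a = (w_r + 1j * w_i) * (x_r + 1j * x_i)
assert np.isclose(re_a, a.real) and np.isclose(im_a, a.imag)
```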
Regarding claim 11, the rejection of claim 2 is incorporated, and further, the proposed combination teaches where the time-varying signals are radio frequency signals (Li, Page 181, Section A1, Lines 1-4, “30 USRP devices are used to send burst signals with the same specifications and parameters, and the same USRP device is used to receive and collect signals to obtain training and test example sets”; Li, Page 178, Abstract, Lines 1-4, “—Specific emitter identification (SEI) is a technology to extract the subtle fingerprint features of the received electromagnetic signal, and identify the emitters to which the signal belongs” USRP (Universal Software Radio Peripheral) devices are designed to transmit and receive radio frequency signals). Claims 6-7 are rejected under 35 U.S.C. 103 as being unpatentable over Li in view of Ciritsis in further view of Q. -V. Pham, N. T. Nguyen, T. Huynh-The, L. Bao Le, K. Lee and W. -J. Hwang, "Intelligent Radio Signal Processing: A Survey," in IEEE Access, vol. 9, pp. 83818-83850, 06/16/2021, doi: 10.1109/ACCESS.2021.3087136, hereinafter referred to as “Pham”. Regarding claim 6, the rejection of claim 1 is incorporated, and further, the proposed combination teaches where a first output result in a first domain is generated by a first branch of the signal-analyzing neural-network (Li, Page 181, Fig. 6, (1) Feature-Level Fusion, The “time-domain features” are the “first output result”), and where a second output result in a second domain is generated by a second branch of the signal-analyzing neural-network (Li, Page 181, Fig. 
6, (1) Feature-Level Fusion, The “frequency-domain features” are the “second output result”), where the first and second output results from the first branch on the first domain and the second branch on the second domain of the signal-analyzing neural-network are combined in … a later portion of the signal-analyzing neural network (Li, Page 180, Section C1, Lines 1-5, “Feature-level fusion is to use the feature extraction module of 1DC-Densenet in time-domain and frequency-domain to learn the respectively SEI features, cascade and combine them to form a new feature vector, and then input to a new FC layer and softmax output layer to get the category probability”). The proposed combination does not explicitly teach that the first and second output results are combined in a concatenation layer. Pham teaches results from two branches of a neural network being combined in a concatenation layer (Pham, Page 83829, Figure 6 Description, Lines 2-3, “(c) Two-branch CNN [76]” Figure 6 (c) shows a two branch neural network with a concatenation layer to combine the features from each branch; Pham, Page 83829, Figure 6 Description, Lines 4-6, “Notations presented in figures: conv (convolutional layer), bn (batch normalization layer), pool (max-pooling layer), avg-pool (average pooling layer), concat (depthwise concatenation layer), fc (fully connected layer or dense layer), add (elementwise addition layer)”). It would have been obvious, to a person of ordinary skill in the art, before the effective filing date of the invention, to have modified the signal-analyzing neural network taught by the proposed combination to include a concatenation layer as taught by Pham. 
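The feature-level fusion that the rejection maps between Li and Pham (two feature-extraction branches, a concatenation layer, then an FC layer and softmax output layer) can be sketched as a toy NumPy example. All weights and layer sizes here are random stand-ins of my own choosing, not Li's actual 1DC-DenseNet; only the 30-category output follows Li, Table 1.

```python
import numpy as np

rng = np.random.default_rng(0)

def branch_features(x, w):
    # Stand-in for one "Feature Extraction" branch (time-domain or
    # frequency-domain); in Li this is a stack of dense conv blocks
    return np.maximum(0.0, x @ w)  # ReLU

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Two branches with toy-sized inputs and random stand-in weights
x_time, x_freq = rng.standard_normal(16), rng.standard_normal(16)
w_time, w_freq = rng.standard_normal((16, 8)), rng.standard_normal((16, 8))

# Feature-level fusion: concatenate the per-branch feature vectors
# (Pham's "concat" layer), then a fully connected layer and softmax
# (Li's "new FC layer and softmax output layer")
fused = np.concatenate([branch_features(x_time, w_time),
                        branch_features(x_freq, w_freq)])
w_fc = rng.standard_normal((fused.size, 30))  # 30 emitter categories (Li, Table 1)
probs = softmax(fused @ w_fc)
assert probs.shape == (30,) and np.isclose(probs.sum(), 1.0)
```

The concatenation step is what distinguishes feature-level fusion from, say, elementwise addition of the two branches: the fused vector preserves both branches' features intact before the classifier sees them.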
The motivation for doing so would have been that using multiple processing streams is recommended to effectively learn radio characteristics and Pham demonstrates concatenation is a method to combine the processing streams (Pham, Page 83829, Col 1, Third Bullet, “Deep fusion frameworks with multiple processing streams to process different types of input data [76] are recommended to more effectively learn intrinsic radio characteristics”). Regarding claim 7, the rejection of claim 1 is incorporated, and further, the proposed combination teaches where the signal-analyzing neural-network has a final portion of the signal-analyzing neural-network … to combine the multiple representations from the different domains and one or more fully connected layers that are configured to determine the specific category of the object from a group of two or more possible categories of objects, where a classifier is configured to classify values and features of the time-varying signals … to correspond them to different known sources of objects including other objects (Li, Page 180, Section C1, Lines 1-5, “Feature-level fusion is to use the feature extraction module of 1DC-Densenet in time-domain and frequency-domain to learn the respectively SEI features, cascade and combine them to form a new feature vector, and then input to a new FC layer and softmax output layer to get the category probability”; Li, Page 182, Table 1, The last row shows there are 30 possible categories; The “new feature vector” is considered to be the “values and features of the time-varying signals” which are input into “a new FC layer and softmax output layer” which are considered to be the “classifier” to determine a “category probability” which is considered to be “correspond them to different known sources of objects”; Li, Page 178, Abstract, Lines 1-4, “—Specific emitter identification (SEI) is a technology to extract the subtle fingerprint features of the received electromagnetic signal, and identify the 
emitters to which the signal belongs”; The “emitters to which the signal belongs” are considered to be the “other objects”). It is noted applicant uses alternative language and Li teaches at least one of the alternatives. The proposed combination does not explicitly teach that the final portion of the signal-analyzing neural-network contains a concatenation layer. Pham teaches results from two branches of a neural network being combined in a concatenation layer (Pham, Page 83829, Figure 6 Description, Lines 2-3, “(c) Two-branch CNN [76]” Figure 6 (c) shows a two branch neural network with a concatenation layer to combine the features from each branch; Pham, Page 83829, Figure 6 Description, Lines 4-6, “Notations presented in figures: conv (convolutional layer), bn (batch normalization layer), pool (max-pooling layer), avg-pool (average pooling layer), concat (depthwise concatenation layer), fc (fully connected layer or dense layer), add (elementwise addition layer)”). It would have been obvious, to a person of ordinary skill in the art, before the effective filing date of the invention, to have modified the signal-analyzing neural network taught by Li to include a concatenation layer as taught by Pham. The motivation for doing so would have been that using multiple processing streams is recommended to effectively learn radio characteristics and Pham demonstrates concatenation is a method to combine the processing streams (Pham, Page 83829, Col 1, Third Bullet, “Deep fusion frameworks with multiple processing streams to process different types of input data [76] are recommended to more effectively learn intrinsic radio characteristics”). Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Li in view of Pham. Regarding claim 15, the rejection of claim 13 is incorporated, and further, Li teaches generating a first output result in a time domain in a first branch of the signal-analyzing neural-network (Li, Page 181, Fig. 
6, (1) Feature-Level Fusion, The “time-domain features” are the “first output result”), generating a second output result in a frequency domain in a second branch of the signal-analyzing neural-network (Li, Page 181, Fig. 6, (1) Feature-Level Fusion, The “frequency-domain features” are the “second output result”), and combining the first and second output results from the first branch on the time domain and the second branch on the frequency domain of the signal-analyzing neural-network in … a later portion of the signal-analyzing neural network (Li, Page 180, Section C1, Lines 1-5, “Feature-level fusion is to use the feature extraction module of 1DC-Densenet in time-domain and frequency-domain to learn the respectively SEI features, cascade and combine them to form a new feature vector, and then input to a new FC layer and softmax output layer to get the category probability”). Li does not explicitly teach that the first and second output results are combined in a concatenation layer. Pham teaches results from two branches of a neural network being combined in a concatenation layer (Pham, Page 83829, Figure 6 Description, Lines 2-3, “(c) Two-branch CNN [76]” Figure 6 (c) shows a two branch neural network with a concatenation layer to combine the features from each branch; Pham, Page 83829, Figure 6 Description, Lines 4-6, “Notations presented in figures: conv (convolutional layer), bn (batch normalization layer), pool (max-pooling layer), avg-pool (average pooling layer), concat (depthwise concatenation layer), fc (fully connected layer or dense layer), add (elementwise addition layer)”). It would have been obvious, to a person of ordinary skill in the art, before the effective filing date of the invention, to have modified the signal-analyzing neural network taught by Li to include a concatenation layer as taught by Pham. 
The motivation for doing so would have been that using multiple processing streams is recommended to effectively learn radio characteristics and Pham demonstrates concatenation is a method to combine the processing streams (Pham, Page 83829, Col 1, Third Bullet, “Deep fusion frameworks with multiple processing streams to process different types of input data [76] are recommended to more effectively learn intrinsic radio characteristics”).

Response to Arguments

Applicant’s amendments to claims 3, 12-13 and 17-18 with respect to objections to the claims have been fully considered, and overcome the objections set forth in the nonfinal office action dated 08/11/2025. Consequently, the objections to the claims have been withdrawn. However, one of these amendments resulted in new grounds for a 35 U.S.C. 112(b) indefiniteness rejection, which is set forth above.

Applicant’s amendments to claim 20 with respect to the 35 U.S.C. 112(b) indefiniteness rejection have been fully considered, and overcome the rejection set forth in the nonfinal office action dated 08/11/2025. Consequently, the rejection of claim 20 has been withdrawn.

Applicant’s amendment to claim 5 with respect to the 35 U.S.C. 112(b) indefiniteness rejection has been fully considered but does not overcome the rejection set forth in the nonfinal office action dated 08/11/2025. Consequently, the rejection of claim 5 is maintained.

Applicant’s arguments regarding the 35 U.S.C. 112(f) limitations of the claims have been fully considered but are unpersuasive. Applicant argues the recited “output module” does not invoke 35 U.S.C. 112(f). Examiner respectfully disagrees. Applicant argues that an “output module” is not a generic placeholder. However, the term “output” in this limitation is merely a label and does not impart functionality or structure upon the “module”, as the associated functions of “work with” and “cooperate with” are not typical and would not be readily apparent for something labeled with “output”.
That is to say, the recited “output module” does not simply “output” data but rather performs specialized functions. Further, while applicant attempts to provide evidence for “module”, no evidence has been provided that an “output module” has a well-known structure to one skilled in the art, and, regardless, the recited “output module” would overcome these presumptions as it performs functions that are not typical and would not be readily apparent for something labeled with “output”. Applicant further argues with regard to applicant’s specification; however, application of 35 U.S.C. 112(f) is driven by the claim language, not by applicant’s intent or mere statements to the contrary included in the specification or made during prosecution, see MPEP 2181(I). Further, examiners will apply 35 U.S.C. 112(f) to a claim limitation that uses the term "means" or a generic placeholder associated with functional language, unless that term is (1) preceded by a structural modifier, defined in the specification as a particular structure or known by one skilled in the art, that denotes the type of structural device (e.g., "filters"), or (2) otherwise modified by sufficient structure or material for achieving the claimed function. The label “output” is not a structural modifier, and thus does not add structure to the “module”; at best it might supply functionality, but the recited functions of “cooperate with” and “work with” are not implied by the term “output”.

Applicant’s arguments regarding the 35 U.S.C. 112(b) indefiniteness rejections of the claims have been fully considered but are unpersuasive. Applicant argues, on pages 33-37 of the response, that applicant’s specification provides sufficient structure and the required algorithm for performing the claimed specific computer function of the “output module”. Examiner respectfully disagrees.
While the specification may disclose “electronic circuits” or “software”, the disclosed “software” would not be considered “sufficient structure” as software has no known structure. Regardless, applicant’s specification does not include the required “algorithm for performing the claimed specific computer function”, see MPEP 2181(II)(B). It is important to note that the recited “output module” performs several functions in the recited claim 1, “…work with one or more processors to execute instructions and a memory to store data and instructions…”, “…cooperate with a machine learning architecture to analyze a first set of data of parameter-varying signals…”, and “…cooperate to present a representation of an output result…”, and the Federal Circuit explained that "[w]here there are multiple claimed functions, as we have here, the [specification] must disclose adequate corresponding structure to perform all of the claimed functions.", see MPEP 2181(II)(B). While examiner believes there to be sufficient structure to perform the final function of “…cooperate to present a representation of an output result…”, there is not an algorithm present in applicant’s specification for a person of ordinary skill in the art to reasonably define the structure and make the boundaries of the claim understandable with regard to each of the other two functions. It is unclear, in light of applicant’s specification, what algorithm is performed for the “output module” to “…work with one or more processors to execute instructions and a memory to store data and instructions…” and “…cooperate with a machine learning architecture to analyze a first set of data of parameter-varying signals…”.

Applicant’s arguments regarding the 35 U.S.C. 101 rejections of the claims have been fully considered but are unpersuasive. Applicant first argues, on pages 43-47 of the response, that claim 1 satisfies Alice Step 2A, Prong 1 – Judicial Exception. Examiner respectfully disagrees.
Applicant argues the limitations of claim 1 are not pure mathematical concepts but rather a use of mathematical techniques within a “technological system”. It is important to note that mere physical or tangible implementation of an exception is not in itself an inventive concept and does not guarantee eligibility, see MPEP 2106.05(I)(A). Further, applicant argues the claim “is applied to a specific domain”. However, this would be considered generally linking the use of the judicial exception to a particular technological environment or field of use, see MPEP §2106.05(h). Elements of this type cannot provide an inventive concept and do not render a judicial exception eligible. With regard to the court cases presented by applicant, including the USPTO memo dated Aug. 4, 2025, the fact patterns of each of these cases are not identical to those of the instant case, and thus the logic applied in each of those cases cannot be directly applied to the instant case.

Applicant next argues, on pages 47-50 of the response, that claim 1 satisfies Alice Step 2A, Prong 2 – Integration into a Practical Application. Examiner respectfully disagrees. Applicant argues claim 1 is integrated into a practical application “to produce a classification of the parameter-varying signals” and solves a real-world problem through “the analysis of signals”. An improvement to “classification of the parameter-varying signals” or “the analysis of signals” may be an improvement in an abstract idea, but not an improvement in the functioning of a computer, as a computer. It is important to note that an improvement in the abstract idea itself is not an improvement in technology, see MPEP 2106.05(a)(II). Further, applicant argues claim 1 is tailored for a particular technological domain – signal analysis; however, this would be considered generally linking the use of the judicial exception to a particular technological environment or field of use, see MPEP §2106.05(h).
Elements of this type cannot provide an inventive concept and do not render a judicial exception eligible. With regard to “Cellspin Soft, Inc. v. Fitbit, Inc.,” and “Uniloc USA, Inc. v. LG Electronics USA, Inc.,” these cases are not present in the MPEP and are thus not precedential. Regarding the remainder of the court cases provided, including the USPTO memo dated Aug. 4, 2025, the fact patterns of each of these cases are not identical to those of the instant case, and thus the logic applied in each of those cases cannot be directly applied to the instant case.

Applicant next argues, on pages 50-52 of the response, that the claim does not recite a mathematical concept. Examiner respectfully disagrees. Applicant specifically points to the “apparatus” of claim 1. However, mere physical or tangible implementation of an exception is not in itself an inventive concept and does not guarantee eligibility, see MPEP 2106.05(I)(A). During an Alice examination the elements of a claim are analyzed independently and in combination, so while the claim may recite “other operations”, as argued by applicant, the additional elements do not integrate the judicial exception into a practical application when taken individually or in combination. While applicant argues the claim focuses on the “application of these techniques, not just the mathematical methods themselves”, mathematical concepts are recited nonetheless. With regard to “Uniloc USA, Inc. v. LG Electronics USA, Inc.,” this case is not present in the MPEP and is thus not precedential. Regarding the remainder of the court cases provided, including the USPTO memo dated Aug. 4, 2025, the fact patterns of each of these cases are not identical to those of the instant case, and thus the logic applied in each of those cases cannot be directly applied to the instant case.

Applicant next argues, on pages 52-54 of the response, that the claim recites an inventive concept. Examiner respectfully disagrees.
Applicant argues that the recited limitations of claim 1 are not “merely a routine implementation of known techniques; it is directed to a system that specifically tailors these techniques to signal classification”. However, mere physical or tangible implementation of an exception is not in itself an inventive concept and does not guarantee eligibility, see MPEP 2106.05(I)(A). Further, applicant’s argument that the claim is specifically tailored for a particular technological domain – signal analysis – would be considered generally linking the use of the judicial exception to a particular technological environment or field of use, see MPEP §2106.05(h). Elements of this type cannot provide an inventive concept and do not render a judicial exception eligible. With regard to “Cellspin Soft, Inc. v. Fitbit, Inc.,” this case is not present in the MPEP and is thus not precedential. Regarding the remainder of the court cases provided, including the USPTO memo dated Aug. 4, 2025, the fact patterns of each of these cases are not identical to those of the instant case, and thus the logic applied in each of those cases cannot be directly applied to the instant case. Applicant's arguments regarding the remainder of the claims rely upon the arguments asserted with respect to the independent claims, and are thus unpersuasive.

Applicant’s arguments regarding the 35 U.S.C. 102 rejections of the claims have been fully considered but are unpersuasive. Applicant first argues, on pages 56-57 of the response, that Li does not teach “where each representation of the parameter-varying signal is analyzed in a different domain, in order to produce a classification of an entity into a specific category of an object corresponding to identifying features of the parameter-varying signals”, because Li has an architecture trained to classify features of the received electromagnetic signal into a type of signal that it belongs to, rather than a category of an object.
Examiner respectfully disagrees. Li is directed to identifying “emitters to which the signal belongs” (Li, Page 178, Abstract, Lines 1-4, “Specific emitter identification (SEI) is a technology to extract the subtle fingerprint features of the received electromagnetic signal, and identify the emitters to which the signal belongs”); “emitters” are considered to be “objects” under the broadest reasonable interpretation, and thus Li teaches “where each representation of the parameter-varying signal is analyzed in a different domain, in order to produce a classification of an entity into a specific category of an object corresponding to identifying features of the parameter-varying signals”. Applicant's arguments regarding the remainder of the claims, in regard to 35 U.S.C. 102 and 35 U.S.C. 103, rely upon the arguments asserted with respect to the independent claims, and are thus unpersuasive.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOLLY CLARKE SIPPEL whose telephone number is (571)272-3270. The examiner can normally be reached Monday - Friday, 7:30 a.m. - 4:30 p.m. ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kakali Chaki, can be reached at (571)272-3719. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/M.C.S./
Examiner, Art Unit 2122

/KAKALI CHAKI/
Supervisory Patent Examiner, Art Unit 2122

Prosecution Timeline

Jun 29, 2022: Application Filed
Aug 01, 2025: Non-Final Rejection — §101, §102, §103
Dec 08, 2025: Response Filed
Feb 20, 2026: Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602592: NOISE COMMUNICATION FOR FEDERATED LEARNING (granted Apr 14, 2026; 2y 5m to grant)
Patent 12596916: CONSTRAINED MASKING FOR SPARSIFICATION IN MACHINE LEARNING (granted Apr 07, 2026; 2y 5m to grant)


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 50%
With Interview: 99% (+58.3%)
Median Time to Grant: 3y 7m
PTA Risk: Moderate
Based on 14 resolved cases by this examiner. Grant probability derived from career allow rate.
