DETAILED ACTION
1. This Office action is in response to Application No. 19044546, filed on 12/22/2025. Claims 1-24 are presented for examination and are currently pending. Applicant's arguments have been carefully and respectfully considered.
Response to Arguments
2. The arguments on pages 13-14 are persuasive. The establishment of direct connections between neurons belonging to non-adjacent regions solves the computational inefficiency of requiring all information to flow sequentially through every layer, and the inability to capture long-range dependencies efficiently, by creating novel information pathways that enable direct communication between distant network regions while bypassing intermediate layers; this enables more efficient information flow. The application of a plurality of transformations to signals propagating along the direct connections addresses the need for signal conditioning when bypassing intermediate layers, and ensures compatibility between non-adjacent regions, by implementing adaptive transformation mechanisms that modify signals appropriately for their target regions. The coordinated timing of signal propagation through the direct connections addresses the synchronization challenges that arise when combining direct pathways with traditional layer-to-layer propagation by implementing temporal coordination so that signals arriving via different pathways are properly synchronized. These claimed features lead to improved adaptation capabilities and temporal coordination that ensure effective operation of neural networks, which constitutes an improvement in computer technology. As a result, the 101 rejection has been withdrawn.
The claim amendments of 12/22/2025 have overcome the 112(f) rejection of 09/22/2025. As a result, the 112(f) rejection has been withdrawn.
The claim amendments of 12/22/2025 have overcome the 112(b) rejection of 09/22/2025. As a result, the 112(b) rejection has been withdrawn.
The Applicant's arguments have been considered but are moot in view of the new grounds of rejection. The Examiner is withdrawing the rejections of the previous Office action mailed 09/22/2025 because the Applicant's amendments necessitated the new grounds of rejection presented in this Office action.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
3. Claims 13 and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Kasabov ("Evolving fuzzy neural networks for supervised/unsupervised online knowledge-based learning." IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 31.6 (2001): 902-918).
Regarding claim 13, Kasabov teaches a computer-implemented method for adaptive neural network architecture (EFuNNs (evolving fuzzy neural networks) can learn spatial-temporal sequences in an adaptive way through one pass learning and automatically adapt their parameter values as they operate, abstract), comprising the steps of:
initializing a neural network comprising a plurality of interconnected nodes arranged in layers (Once set, the initial values of the following EFuNN (evolving fuzzy neural networks) parameters can be either kept fixed during the entire operation of the system (pg. 913, left col., last para.); EFuNNs evolve their structure and parameter values through incremental, hybrid supervised/unsupervised, online learning, abstract),
wherein the neural network includes a hierarchical supervisory structure (The input layer, Fuzzy input layer, Rule (base) layer, Fuzzy outputs, outputs, Fig. 3, pg. 904) and a meta-supervisory structure (a standard EFuNN that has the first part updated in an unsupervised mode and the second in a supervised mode, pg. 906, left col., second to the last para. The Examiner notes that the first and second parts are the meta-supervisory structure);
operating the hierarchical supervisory structure (The input layer, Fuzzy input layer, Rule (base) layer, Fuzzy outputs, outputs, Fig. 3, pg. 904) within the neural network by (In this respect the problem space EFuNN is operating in is open and the accumulated knowledge in an EFuNN structure is incremental, pg. 914, right col., third para.):
monitoring the neural network at various levels of granularity (Different EFuNNs have different optimal parameter values which also depends on the task (e.g., time series prediction and classification (pg. 911, left col., third para.); These dependencies can be further investigated and enhanced through synaptic analysis (at the synaptic memory level), pg. 906, right col., third para.);
collecting activation data from a plurality of nodes within the neural network (The EFuNN system was explained so far with the use of one rule node activation (the winning rule node for the current input data). The same formulas are applicable when the activation of m rule nodes is propagated and used (the so-called “many-of-n” mode, or “m-of-n” for short), pg. 906, right col., last para. to pg. 907, left col., first para.); and
identifying a plurality of operation patterns from the activation data (IF (Age>OLD) AND [the total activation TA (rj) is less than a pruning parameter Pr times Age ] THEN prune rule node rj where Age (rj) is calculated as the number of examples that have been presented to the EFuNN after rj had been first created. OLD is a predefined age limit, Pr is a pruning parameter in the range [0, 1], and the total activation TA (rj) is calculated as the number of examples for which rj has been the correct winning node (or among the m winning nodes in the m-of-n mode of operation), pg. 907, left col., fourth para.); and
implementing architectural changes to the neural network based on identified operation patterns (After a certain time (when a certain number of examples have been presented) some neurons and connections may be pruned or aggregated … Different pruning rules can be applied for a successful pruning of unnecessary nodes and connections, pg. 907, left col., third para.);
operating the meta-supervisory structure (a standard EFuNN that has the first part updated in an unsupervised mode and the second in a supervised mode, pg. 906, left col., second to the last para.) that oversees the hierarchical supervisory structure by (The input layer, Fuzzy input layer, Rule (base) layer, Fuzzy outputs, outputs, Fig. 3, pg. 904):
monitoring behavior of the hierarchical supervisory structure (The input layer, Fuzzy input layer, Rule (base) layer, Fuzzy outputs, outputs, Fig. 3, pg. 904) to identify supervisory patterns (The local learning procedure and the local normalized fuzzy distance used in the EFuNN architecture allow for adding new inputs and/or outputs at any time of the system operation. In this respect the problem space EFuNN is operating in is open and the accumulated knowledge in an EFuNN structure is incremental, pg. 914, right col., third para);
storing identified supervisory patterns in a pattern database (EFuNNs and case-based reasoning methods are similar in the sense that they store exemplars and measure similarities (pg. 916, left col., last para.); The EFuNN methods … can be implemented … in hardware, pg. 916, right col., last para.); and
using the stored supervisory patterns to identify architectural modifications (All MFs (membership functions) change in order for new ones to be introduced. For example, all stored fuzzy exemplars in W1 and W2 that have three MFs are defuzzified (e.g., through the center of gravity defuzzification technique) and then used to evolve a new EFuNN structure that has five MFs [Fig. 9(b)], pg. 910, left col., section (2). The Examiner notes a fuzzy exemplar whose similarity is measured is a supervisory pattern) to the hierarchical supervisory structure (The input layer, Fuzzy input layer, Rule (base) layer, Fuzzy outputs, outputs, Fig. 3, pg. 904);
establishing direct connections between neurons belonging to non-adjacent regions (the bottom node layer 3 (rule case nodes) is directly connected to non-adjacent nodes, Fig. 3a, pg. 904) of the neural network (Fig. 3. Evolving fuzzy neural network EFuNN. (a) An example of a standard feedforward EFuNN system, pg. 904, left col.);
applying a plurality of transformations to signals propagating along respective direct connections (The third layer contains rule (case) nodes that evolve through supervised and/or unsupervised learning … Each rule node r is defined by two vectors of connection weights, W1(r) and W2(r), the latter being adjusted through supervised learning based on the output error, and the former being adjusted through unsupervised learning based on similarity measure within a local area of the problem space, pg. 904, left col., second to the last para. The Examiner notes the adjustments are the plurality of transformations and the connections are the propagation of signals); and
coordinating timing of signal propagation through the direct connection (The learned temporal associations can be used to support the activation of rule nodes based on temporal pattern similarity. Here, temporal dependencies are learned through establishing structural links (pg. 906, right col., second para.); An optional short-term memory layer can be used through a feedback connection from the rule (also called case) node layer [see Fig. 3(b)]. The layer of feedback connections could be used if temporal relationships of input data are to be memorized structurally, pg. 904, left col., first para.).
Regarding claim 20, Kasabov teaches the method of claim 13, and further teaches the step of enabling controlled signal interaction during transmission through learned interaction weights that adapt based on observed effectiveness (While the connection weights W1 and W2 capture fuzzy coordinates of the learned prototypes (exemplars) represented as centers of hyperspheres, the temporal layer of connection weights from Fig. 3(b) captures temporal dependencies between consecutive data examples, pg. 906, left col., last para.).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
4. Claims 1 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Kasabov ("Evolving fuzzy neural networks for supervised/unsupervised online knowledge-based learning." IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 31.6 (2001): 902-918) in view of Shrivastava et al. (US20200311548).
Regarding claim 1, Kasabov teaches a computer system (computer systems that learn speech and language, pg. 916, right col., last para.)
initialize a neural network comprising a plurality of interconnected nodes arranged in layers (Once set, the initial values of the following EFuNN (evolving fuzzy neural networks) parameters can be either kept fixed during the entire operation of the system (pg. 913, left col., last para.); EFuNNs evolve their structure and parameter values through incremental, hybrid supervised/unsupervised, online learning, abstract),
wherein the neural network includes a hierarchical supervisory structure (The input layer, Fuzzy input layer, Rule (base) layer, Fuzzy outputs, outputs, Fig. 3, pg. 904) and a meta-supervisory structure (a standard EFuNN that has the first part updated in an unsupervised mode and the second in a supervised mode, pg. 906, left col., second to the last para. The Examiner notes that the first and second parts are the meta-supervisory structure);
operate the hierarchical supervisory structure (The input layer, Fuzzy input layer, Rule (base) layer, Fuzzy outputs, outputs, Fig. 3, pg. 904) within the neural network by (In this respect the problem space EFuNN is operating in is open and the accumulated knowledge in an EFuNN structure is incremental, pg. 914, right col., third para.):
monitoring the neural network at various levels of granularity (Different EFuNNs have different optimal parameter values which also depends on the task (e.g., time series prediction and classification (pg. 911, left col., third para.); These dependencies can be further investigated and enhanced through synaptic analysis (at the synaptic memory level), pg. 906, right col., third para.);
collecting activation data from a plurality of nodes within the neural network (The EFuNN system was explained so far with the use of one rule node activation (the winning rule node for the current input data). The same formulas are applicable when the activation of m rule nodes is propagated and used (the so-called “many-of-n” mode, or “m-of-n” for short), pg. 906, right col., last para. to pg. 907, left col., first para.);
identifying a plurality of operation patterns from the activation data (IF (Age>OLD) AND [the total activation TA (rj) is less than a pruning parameter Pr times Age ] THEN prune rule node rj where Age (rj) is calculated as the number of examples that have been presented to the EFuNN after rj had been first created. OLD is a predefined age limit, Pr is a pruning parameter in the range [0, 1], and the total activation TA (rj) is calculated as the number of examples for which rj has been the correct winning node (or among the m winning nodes in the m-of-n mode of operation), pg. 907, left col., fourth para.); and
implementing architectural changes to the neural network based on identified operation patterns (After a certain time (when a certain number of examples have been presented) some neurons and connections may be pruned or aggregated … Different pruning rules can be applied for a successful pruning of unnecessary nodes and connections, pg. 907, left col., third para.);
operate the meta-supervisory structure by (a standard EFuNN that has the first part updated in an unsupervised mode and the second in a supervised mode, pg. 906, left col., second to the last para.):
monitoring behavior of the hierarchical supervisory structure (The input layer, Fuzzy input layer, Rule (base) layer, Fuzzy outputs, outputs, Fig. 3, pg. 904) to identify supervisory patterns (The local learning procedure and the local normalized fuzzy distance used in the EFuNN architecture allow for adding new inputs and/or outputs at any time of the system operation. In this respect the problem space EFuNN is operating in is open and the accumulated knowledge in an EFuNN structure is incremental, pg. 914, right col., third para);
storing identified supervisory patterns in the hardware memory (EFuNNs and case-based reasoning methods are similar in the sense that they store exemplars and measure similarities (pg. 916, left col., last para.); The EFuNN methods … can be implemented … in hardware, pg. 916, right col., last para.); and
using the stored supervisory patterns to identify architectural modifications (All MFs (membership functions) change in order for new ones to be introduced. For example, all stored fuzzy exemplars in W1 and W2 that have three MFs are defuzzified (e.g., through the center of gravity defuzzification technique) and then used to evolve a new EFuNN structure that has five MFs [Fig. 9(b)], pg. 910, left col., section (2). The Examiner notes a fuzzy exemplar whose similarity is measured is a supervisory pattern) to the hierarchical supervisory structure (The input layer, Fuzzy input layer, Rule (base) layer, Fuzzy outputs, outputs, Fig. 3, pg. 904);
establish direct connections between neurons belonging to non-adjacent regions (the bottom node layer 3 (rule case nodes) is directly connected to non-adjacent nodes, Fig. 3a, pg. 904) of the neural network (Fig. 3. Evolving fuzzy neural network EFuNN. (a) An example of a standard feedforward EFuNN system, pg. 904, left col.);
apply a plurality of transformations to signals propagating during transmission along respective direct connections (The third layer contains rule (case) nodes that evolve through supervised and/or unsupervised learning … Each rule node r is defined by two vectors of connection weights, W1(r) and W2(r), the latter being adjusted through supervised learning based on the output error, and the former being adjusted through unsupervised learning based on similarity measure within a local area of the problem space, pg. 904, left col., second to the last para. The Examiner notes the adjustments are the plurality of transformations and the connections are the propagation of signals); and
coordinate timing of signal propagation through the direct connections (The learned temporal associations can be used to support the activation of rule nodes based on temporal pattern similarity. Here, temporal dependencies are learned through establishing structural links (pg. 906, right col., second para.); An optional short-term memory layer can be used through a feedback connection from the rule (also called case) node layer [see Fig. 3(b)]. The layer of feedback connections could be used if temporal relationships of input data are to be memorized structurally, pg. 904, left col., first para.).
Kasabov does not explicitly teach a system comprising a hardware memory, wherein the computer system is configured to execute software instructions stored on nontransitory machine-readable storage media that:
Shrivastava teaches a computer system comprising a hardware memory, wherein the computer system is configured to execute software instructions stored on nontransitory machine-readable storage media that (The processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform actions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit) [0090]; Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a processor for performing actions in accordance with instructions and one or more memory devices for storing instructions and data [0091]):
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Kasabov to incorporate the teachings of Shrivastava for the benefit of performing actions in accordance with instructions and one or more memory devices for storing instructions and data [0091] and to enable efficient real-time processing (Shrivastava [0084]).
Regarding claim 8, Modified Kasabov teaches the computer system of claim 1. Kasabov further teaches wherein the signal transmission pathways (The layer of feedback connections could be used if temporal relationships of input data are to be memorized structurally, pg. 904, left col., first para.) enable controlled signal interaction during transmission through learned interaction weights that adapt based on observed effectiveness (While the connection weights W1 and W2 capture fuzzy coordinates of the learned prototypes (exemplars) represented as centers of hyperspheres, the temporal layer of connection weights from Fig. 3(b) captures temporal dependencies between consecutive data examples, pg. 906, left col., last para.).
5. Claims 2-6 are rejected under 35 U.S.C. 103 as being unpatentable over Kasabov ("Evolving fuzzy neural networks for supervised/unsupervised online knowledge-based learning." IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 31.6 (2001): 902-918) in view of Shrivastava et al. (US20200311548) and further in view of Chen et al. ("Multi-scale adaptive graph neural network for multivariate time series forecasting." IEEE Transactions on Knowledge and Data Engineering 35.10 (2023): 10748-10761).
Regarding claim 2, Modified Kasabov teaches the computer system of claim 1. Modified Kasabov does not explicitly teach wherein the transformations comprise adaptive matrices that evolve based on observed transmission effectiveness across multiple time scales.
Chen teaches wherein the transformations comprise adaptive matrices that evolve based on observed transmission effectiveness across multiple time scales (The adaptive graph learning module automatically generates adjacency matrices to represent the inter-variable dependencies among MTS (multivariate time series) (pg. 5, left col., last para.)).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Modified Kasabov to incorporate the teachings of Chen for the benefit of effectively promoting the collaboration across different time scales and automatically capturing the importance of contributed temporal patterns in a multi-scale adaptive graph neural network (Chen, abstract).
Regarding claim 3, Modified Kasabov teaches the computer system of claim 2. Chen teaches wherein the transformations implement time-dependent signal modifications according to learned temporal patterns (Thus, an accurate MTS forecasting model should learn a feature representation that can comprehensively reflect all kinds of multi-scale temporal patterns (pg. 2, left col., first para.)).
The same motivation to combine as applied to dependent claim 2 applies here.
Regarding claim 4, Modified Kasabov teaches the computer system of claim 1. Modified Kasabov does not explicitly teach wherein the temporal coordination synchronizes signal propagation through direct pathways with traditional layer-to-layer transmission.
Chen teaches wherein the temporal coordination synchronizes signal propagation through direct pathways with traditional layer-to-layer transmission (A multi-scale pyramid network to preserve the underlying temporal hierarchy at different time scales … Multi-scale pyramid network generates multi-scale feature representations through pyramid layers. Each pyramid layer takes the outputs of a preceding pyramid layer as the input and generates the feature representations of a larger scale as the output, pg. 4, right col., second and third para.).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Modified Kasabov to incorporate the teachings of Chen for the benefit of effectively promoting the collaboration across different time scales and automatically capturing the importance of contributed temporal patterns in a multi-scale adaptive graph neural network (Chen, abstract).
Regarding claim 5, Modified Kasabov teaches the computer system of claim 1. Modified Kasabov does not explicitly teach wherein the hierarchical supervisory structure implements multi-level decision making for architectural modifications.
Chen teaches wherein the hierarchical supervisory structure implements multi-level decision making for architectural modifications (a refining module that consists of two fully connected layers to compact the fine-grained information across different time scales (pg. 6, left col., last para.); Fig. 2 … a multi-scale pyramid network to preserve the underlying temporal hierarchy at different time scales, pg. 4, right col., first para.), with different supervisory levels coordinating through information exchange about resource availability and network capacity (Space-based methods define the graph convolution through information propagation, which aggregates the representation of a central node and the representations of its neighbors to get the updated representation for the node, pg. 4, left col., section B. Graph Neural Networks).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Modified Kasabov to incorporate the teachings of Chen for the benefit of effectively promoting the collaboration across different time scales and automatically capturing the importance of contributed temporal patterns in a multi-scale adaptive graph neural network (Chen, abstract).
Regarding claim 6, Modified Kasabov teaches the computer system of claim 1. Modified Kasabov does not explicitly teach wherein the meta-supervisory structure implements pattern recognition algorithms that identify common elements across successful adaptation episodes while maintaining operational stability.
Chen teaches wherein the meta-supervisory structure implements pattern recognition algorithms that identify common elements across successful adaptation episodes while maintaining operational stability (Fig. 2 … a multi-scale temporal graph neural network to capture all kinds of scale-specific temporal patterns; d) a scale-wise fusion module to effectively promote the collaboration across different time scales, pg. 4, right col., first para.).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Modified Kasabov to incorporate the teachings of Chen for the benefit of effectively promoting the collaboration across different time scales and automatically capturing the importance of contributed temporal patterns in a multi-scale adaptive graph neural network (Chen, abstract).
6. Claims 14-18 are rejected under 35 U.S.C. 103 as being unpatentable over Kasabov ("Evolving fuzzy neural networks for supervised/unsupervised online knowledge-based learning." IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 31.6 (2001): 902-918) in view of Chen et al. ("Multi-scale adaptive graph neural network for multivariate time series forecasting." IEEE Transactions on Knowledge and Data Engineering 35.10 (2023): 10748-10761).
Regarding claim 14, Kasabov teaches the method of claim 13. Kasabov does not explicitly teach wherein the step of applying a plurality of transformations to signals comprises adapting transformation matrices based on observed transmission effectiveness across multiple time scales.
Chen teaches wherein the step of applying a plurality of transformations to signals comprises adapting transformation matrices based on observed transmission effectiveness across multiple time scales (The adaptive graph learning module automatically generates adjacency matrices to represent the inter-variable dependencies among MTS (multivariate time series) (pg. 5, left col., last para.)).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Kasabov to incorporate the teachings of Chen for the benefit of effectively promoting the collaboration across different time scales and automatically capturing the importance of contributed temporal patterns in a multi-scale adaptive graph neural network (Chen, abstract).
Regarding claim 15, Modified Kasabov teaches the method of claim 14. Chen teaches wherein the step of applying a plurality of transformations to signals further comprises implementing time-dependent signal modifications according to learned temporal patterns (Thus, an accurate MTS forecasting model should learn a feature representation that can comprehensively reflect all kinds of multi-scale temporal patterns (pg. 2, left col., first para.)).
The same motivation to combine as applied to dependent claim 14 applies here.
Regarding claim 16, Kasabov teaches the method of claim 13. Kasabov does not explicitly teach wherein the step of coordinating timing of signal propagation comprises synchronizing signals through direct pathways with traditional layer-to-layer transmission.
Chen teaches wherein the step of coordinating timing of signal propagation comprises synchronizing signals through direct pathways with traditional layer-to-layer transmission (A multi-scale pyramid network to preserve the underlying temporal hierarchy at different time scales … Multi-scale pyramid network generates multi-scale feature representations through pyramid layers. Each pyramid layer takes the outputs of a preceding pyramid layer as the input and generates the feature representations of a larger scale as the output, pg. 4, right col., second and third para.).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Kasabov to incorporate the teachings of Chen for the benefit of effectively promoting the collaboration across different time scales and automatically capturing the importance of contributed temporal patterns in a multi-scale adaptive graph neural network (Chen, abstract).
Regarding claim 17, Kasabov teaches the method of claim 13. Kasabov does not explicitly teach wherein the step of operating the hierarchical supervisory structure further comprises coordinating decisions across supervisory levels through information exchange about resource availability and network capacity.
Chen teaches wherein the step of operating the hierarchical supervisory structure further comprises coordinating decisions across supervisory levels through information exchange about resource availability and network capacity (Space-based methods define the graph convolution through information propagation, which aggregates the representation of a central node and the representations of its neighbors to get the updated representation for the node, pg. 4, left col., section B. Graph Neural Networks).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Kasabov to incorporate the teachings of Chen for the benefit of effectively promoting the collaboration across different time scales and automatically capturing the importance of contributed temporal patterns in a multi-scale adaptive graph neural network (Chen, abstract).
Regarding claim 18, Kasabov teaches the method of claim 13. Kasabov does not explicitly teach wherein the step of operating the meta-supervisory structure further comprises identifying common elements across successful adaptation episodes while maintaining operational stability.
Chen teaches wherein the step of operating the meta-supervisory structure further comprises identifying common elements across successful adaptation episodes while maintaining operational stability (Fig. 2 … a multi-scale temporal graph neural network to capture all kinds of scale-specific temporal patterns; d) a scale-wise fusion module to effectively promote the collaboration across different time scales, pg. 4, right col., first para.).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Kasabov to incorporate the teachings of Chen for the benefit of effectively promoting the collaboration across different time scales and automatically capturing the importance of contributed temporal patterns in a multi-scale adaptive graph neural network (Chen, abstract).
7. Claims 9, 10 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Kasabov ("Evolving fuzzy neural networks for supervised/unsupervised online knowledge-based learning." IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 31.6 (2001): 902-918) in view of Shrivastava et al. (US20200311548) and further in view of Kim et al. ("LSTM-based fault direction estimation and protection coordination for networked distribution system." IEEE Access 10 (2022): 40348-40357, date of publication April 12, 2022, date of current version April 20, 2022).
Regarding claim 9, Modified Kasabov teaches the computer system of claim 1, but Modified Kasabov does not explicitly teach wherein the pattern database maintains contextual signatures for stored patterns, enabling relevant pattern retrieval for similar operational scenarios.
Kim teaches wherein the pattern database maintains contextual signatures for stored patterns, enabling relevant pattern retrieval for similar operational scenarios (Because a separate structure determines whether to store historical time series data, LSTM (long short-term memory) shows excellence in the retention of relevant information, pg. 40350, left col., first para.; the Examiner notes that in the LSTM, memory of past input is retrieved for solving sequence learning tasks).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Modified Kasabov to incorporate the teachings of Kim for the benefit of a high-speed communication system in a networked distribution system comprising a long short-term memory (LSTM) neural network (Kim, abstract).
Regarding claim 10, Modified Kasabov teaches the computer system of claim 1. Shrivastava teaches wherein the computer system is further configured to execute software instructions stored on nontransitory machine-readable storage media that (The processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform actions by operating on input data and generating output [0090]):
The same motivation to combine set forth in independent claim 1 applies here.
Modified Kasabov does not explicitly teach implement adaptive thresholds for resource allocation based on current network state and performance requirements.
Kim teaches implement adaptive thresholds for resource allocation based on current network state and performance requirements (The parameters of the LSTM network are adjusted and optimized using the adaptive moment estimation (ADAM) method. 128 mini-batch-sized feature data are randomly extracted for training. In addition, we set the initial learning rate to 0.001, the gradient decay factor to 0.9, and the squared gradient decay factor to 0.999, pg. 40353, left col., second to the last para.).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Modified Kasabov to incorporate the teachings of Kim for the benefit of a high-speed communication system in a networked distribution system comprising a long short-term memory (LSTM) neural network (Kim, abstract).
Regarding claim 12, Modified Kasabov teaches the computer system of claim 1. Shrivastava teaches wherein the computer system is further configured to execute software instructions stored on nontransitory machine-readable storage media that (The processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform actions by operating on input data and generating output [0090]):
The same motivation to combine set forth in independent claim 1 applies here.
Modified Kasabov does not explicitly teach implement hierarchical circuit breakers coordinating across supervisory levels to isolate and address potential instabilities.
Kim teaches implement hierarchical circuit breakers (circuit breaker A → circuit breaker B → circuit breaker C, Fig. 2, pg. 40351) coordinating across supervisory levels to isolate and address potential instabilities (Utilization of the LSTM algorithm is anticipated to yield significant reductions in error when determining fault direction, which is the major cause of circuit breaker malfunction issues in the existing CLS protection coordination algorithm, pg. 40350, left col., first para.).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Modified Kasabov to incorporate the teachings of Kim for the benefit of a high-speed communication system in a networked distribution system comprising a long short-term memory (LSTM) neural network (Kim, abstract).
8. Claims 21, 22 and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Kasabov ("Evolving fuzzy neural networks for supervised/unsupervised online knowledge-based learning." IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 31.6 (2001): 902-918) in view of Kim et al. ("LSTM-based fault direction estimation and protection coordination for networked distribution system." IEEE Access 10 (2022): 40348-40357, date of publication April 12, 2022, date of current version April 20, 2022).
Regarding claim 21, Kasabov teaches the method of claim 13, but Kasabov does not explicitly teach the limitations of claim 21.
Kim teaches further comprising the step of maintaining contextual signatures for stored patterns, enabling relevant pattern retrieval for similar operational scenarios (Because a separate structure determines whether to store historical time series data, LSTM (long short-term memory) shows excellence in the retention of relevant information, pg. 40350, left col., first para.; the Examiner notes that in the LSTM, memory of past input is retrieved for solving sequence learning tasks).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Kasabov to incorporate the teachings of Kim for the benefit of a high-speed communication system in a networked distribution system comprising a long short-term memory (LSTM) neural network (Kim, abstract).
Regarding claim 22, Kasabov teaches the method of claim 13, but Kasabov does not explicitly teach the limitations of claim 22.
Kim teaches further comprising the step of managing resources by implementing adaptive thresholds for resource allocation based on current network state and performance requirements (The parameters of the LSTM network are adjusted and optimized using the adaptive moment estimation (ADAM) method. 128 mini-batch-sized feature data are randomly extracted for training. In addition, we set the initial learning rate to 0.001, the gradient decay factor to 0.9, and the squared gradient decay factor to 0.999, pg. 40353, left col., second to the last para.).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Kasabov to incorporate the teachings of Kim for the benefit of a high-speed communication system in a networked distribution system comprising a long short-term memory (LSTM) neural network (Kim, abstract).
Regarding claim 24, Kasabov teaches the method of claim 13, but Kasabov does not explicitly teach the limitations of claim 24.
Kim teaches further comprising the step of implementing hierarchical circuit breakers (circuit breaker A → circuit breaker B → circuit breaker C, Fig. 2, pg. 40351) coordinating across supervisory levels to isolate and address potential instabilities (Utilization of the LSTM algorithm is anticipated to yield significant reductions in error when determining fault direction, which is the major cause of circuit breaker malfunction issues in the existing CLS protection coordination algorithm, pg. 40350, left col., first para.).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Kasabov to incorporate the teachings of Kim for the benefit of a high-speed communication system in a networked distribution system comprising a long short-term memory (LSTM) neural network (Kim, abstract).
9. Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Kasabov ("Evolving fuzzy neural networks for supervised/unsupervised online knowledge-based learning." IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 31.6 (2001): 902-918) in view of Shrivastava et al. (US20200311548) and further in view of Abraham et al. ("Back to the Future: Reversible Runtime Neural Network Pruning for Safe Autonomous Systems." 2024 Design, Automation & Test in Europe Conference & Exhibition (25 March 2024). IEEE, 2024).
Regarding claim 7, Modified Kasabov teaches the computer system of claim 1. Shrivastava teaches wherein the computer system is further configured to execute software instructions stored on nontransitory machine-readable storage media that (The processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform actions by operating on input data and generating output [0090]):
The same motivation to combine set forth in independent claim 1 applies here.
Modified Kasabov does not explicitly teach stabilize and monitor network performance during architectural changes while implementing temporary support structures during transitions and maintaining backup pathways that enable potential reversion of modifications.
Abraham teaches stabilize and monitor network performance during architectural changes while implementing temporary support structures during transitions (As shown in Fig. 4, for each model we measure the time it takes to (1) load from disk, (2) perform one inference, and (3) to swap in the full model in place of the pruned model currently running (or vice versa), by reverse(), pg. 5, left col., last sentence to pg. 5, right col.)
and maintaining backup pathways that enable potential reversion of modifications (As shown in Fig. 2, Back to the Future consists of an offline portion for pruning and storing the architectures, and a runtime function, reverse(), to swap between full and pruned architectures, pg. 2, right col., last para.).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Modified Kasabov to incorporate the teachings of Abraham for the benefit of enhancing safety and reliability, and providing seamless reversion to the accurate version of the model, demonstrating its applicability for safe autonomous systems design (Abraham, abstract).
10. Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Kasabov ("Evolving fuzzy neural networks for supervised/unsupervised online knowledge-based learning." IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 31.6 (2001): 902-918) in view of Abraham et al. ("Back to the Future: Reversible Runtime Neural Network Pruning for Safe Autonomous Systems." 2024 Design, Automation & Test in Europe Conference & Exhibition (25 March 2024). IEEE, 2024).
Regarding claim 19, Kasabov teaches the method of claim 13, but Kasabov does not teach the limitations of claim 19.
Abraham teaches further comprising the step of managing stability by monitoring network performance during architectural changes while implementing temporary support structures during transitions (As shown in Fig. 4, for each model we measure the time it takes to (1) load from disk, (2) perform one inference, and (3) to swap in the full model in place of the pruned model currently running (or vice versa), by reverse(), pg. 5, left col., last sentence to pg. 5, right col.)
and maintaining backup pathways that enable potential reversion of modifications (As shown in Fig. 2, Back to the Future consists of an offline portion for pruning and storing the architectures, and a runtime function, reverse(), to swap between full and pruned architectures, pg. 2, right col., last para.).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Kasabov to incorporate the teachings of Abraham for the benefit of enhancing safety and reliability, and providing seamless reversion to the accurate version of the model, demonstrating its applicability for safe autonomous systems design (Abraham, abstract).
11. Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Kasabov ("Evolving fuzzy neural networks for supervised/unsupervised online knowledge-based learning." IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 31.6 (2001): 902-918) in view of Shrivastava et al. (US20200311548) and further in view of Hamm et al. ("Global optimization of neural network weights." Proceedings of the 2002 International Joint Conference on Neural Networks. IJCNN'02 (Cat. No. 02CH37290). Vol. 2. IEEE, 2002).
Regarding claim 11, Modified Kasabov teaches the computer system of claim 1. Shrivastava teaches wherein the computer system is further configured to execute software instructions stored on nontransitory machine-readable storage media that (The processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform actions by operating on input data and generating output [0090]):
The same motivation to combine set forth in independent claim 1 applies here.
Modified Kasabov does not explicitly teach implement both local and global optimization strategies ensuring that adaptations beneficial in one region maintain overall network performance.
Hamm teaches implement both local and global optimization strategies ensuring that adaptations beneficial in one region maintain overall network performance (Therefore, it is common to combine a local algorithm with a global algorithm by using the weights obtained from the global algorithm as starting values for the local routine, pg. 1230, right col., last para.).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Modified Kasabov to incorporate the teachings of Hamm for the benefit of more efficient use of computational resources (Hamm, abstract).
12. Claim 23 is rejected under 35 U.S.C. 103 as being unpatentable over Kasabov ("Evolving fuzzy neural networks for supervised/unsupervised online knowledge-based learning." IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 31.6 (2001): 902-918) in view of Hamm et al. ("Global optimization of neural network weights." Proceedings of the 2002 International Joint Conference on Neural Networks. IJCNN'02 (Cat. No. 02CH37290). Vol. 2. IEEE, 2002).
Regarding claim 23, Kasabov teaches the method of claim 13, but Kasabov does not explicitly teach the limitations of claim 23.
Hamm teaches further comprising the step of implementing both local and global optimization strategies ensuring that adaptations beneficial in one region maintain overall network performance (Therefore, it is common to combine a local algorithm with a global algorithm by using the weights obtained from the global algorithm as starting values for the local routine, pg. 1230, right col., last para.).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Kasabov to incorporate the teachings of Hamm for the benefit of more efficient use of computational resources (Hamm, abstract).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MORIAM MOSUNMOLA GODO whose telephone number is (571)272-8670. The examiner can normally be reached Monday-Friday 8am-5pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michelle T Bechtold can be reached on (571) 431-0762. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/M.G./Examiner, Art Unit 2148
/MICHELLE T BECHTOLD/Supervisory Patent Examiner, Art Unit 2148