DETAILED ACTION
Claims 1-31 are present for examination.
Claims 1-25 and 27-31 have been amended.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 01/30/2023 and 06/26/2023 have been considered by the examiner.
Claim Objections
Claims 2-4, 6-8, 10-13, 15-16, 19-20, 22 and 26 are objected to because of the following informalities:
In claim 2, line 2, where it says “policies to be selected based…” should be --policies to be used based…--.
In claim 3, lines 2 and 4, where it says “policies is to be selected…” should be --policies is to be used…--.
In claim 4, line 2, where it says “circuits are to select…” should be --circuits are to use…--.
In claim 6, line 2, where it says “policies are to be selected based…” should be --policies are to be used based…--.
In claim 7, line 2, where it says “circuits are to select…” should be --circuits are to use…--.
In claim 8, line 2, where it says “circuits are to select…” should be --circuits are to use…--.
In claim 10, line 2, where it says “policies to be selected based…” should be --policies to be used based…--.
In claim 11, line 2, where it says “policies is to be selected…” should be --policies is to be used…--.
In claim 11, lines 3-4, where it says “policies is to be selected…” should be --policies is to be used…--.
In claim 12, line 2, where it says “processor is to select…” should be --processor is to use…--.
In claim 13, line 2, where it says “processor is to select…” should be --processor is to use…--.
In claim 15, line 2, where it says “policies to be selected based…” should be --policies to be used based…--.
In claim 16, line 2, where it says “processor is to select…” should be --processor is to use…--.
In claim 19, line 2, where it says “processors are to select…” should be --processors are to use…--.
In claim 20, line 4, where it says “policies to be selected…” should be --policies to be used…--.
In claim 22, line 2, where it says “policies are to be selected based…” should be --policies are to be used based…--.
In claim 26, lines 1-2, where it says “different one or more policies are used…” should be --different one of the one or more different cache eviction policies are used…--.
Appropriate correction is required.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claim(s) 1-2, 4-5, 7-10, 12-14, 16-17, 19-21, 23-25, 27-28 and 30-31 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Chandrasekaran et al. (US 2021/0157743).
With respect to claim 1, Chandrasekaran et al. teaches one or more circuits to cause one or more different cache eviction policies (see paragraphs 120, 123 and 129; this may allow the neural network to provide cache replacement policies that are output in a sequence. For example, as traffic increases for a certain attribute, a sequence of cache replacement policies may be provided that gradually reduce the impact of the increasing traffic on the cache performance. This may also allow the neural network to take the cache replacement policies of other partitions into account when selecting a new cache replacement policy for a particular partition) to be used for different portions of one or more neural networks (see paragraphs 120, 123 and 129; this may also allow the neural network to take the cache replacement policies of other partitions into account when selecting a new cache replacement policy for a particular partition. For example, some embodiments may use a long short-term memory (LSTM) neural network to select a cache replacement policy. Note that recurrent neural networks are not required by all embodiments, and simple feedforward neural networks may also be used to select a cache replacement policy based on the inputs collected during the time interval).
With respect to claim 2, Chandrasekaran et al. teaches wherein the one or more circuits are to cause the one or more different cache eviction policies to be selected based, at least in part, on analysis of a layer of the one or more neural networks (see paragraphs 120, 123 and 129; in addition to the input layer of the neural network comprising inputs 1602 and/or inputs 1608, the neural network may also include one or more internal or hidden layers 1604. In some embodiments, the neural network may be a recurrent neural network (RNN) where connections between the nodes form a directed graph along a temporal sequence. This allows the neural network to exhibit temporal dynamic behavior and thus use an internal state (i.e., a memory-like behavior) to process sequences of events. This may allow the neural network to provide cache replacement policies that are output in a sequence).
With respect to claim 4, Chandrasekaran et al. teaches wherein the one or more circuits are to select one or more of the different cache eviction policies based, at least in part, on analysis of performance data associated with use of the one or more neural networks (see paragraphs 120 and 123; this may allow the neural network to provide cache replacement policies that are output in a sequence. For example, as traffic increases for a certain attribute, a sequence of cache replacement policies may be provided that gradually reduce the impact of the increasing traffic on the cache performance… performance of the current cache replacement policy may then be evaluated to label the data sets for training the neural network. For example, if a default cache replacement policy is initially used, the cache replacement policy and a metric describing the performance of the cache may be provided to a labeling process 1714. The labeling process 1714 may evaluate the cache performance metric, such as a number of cache misses 1702, to determine whether the cache replacement policy 1504 currently being output by the neural network is performing adequately).
With respect to claim 5, Chandrasekaran et al. teaches wherein the one or more circuits are to cause the one or more different cache eviction policies to be used in response to an instruction from at least one of an application, runtime, or operating system (see paragraph 111; cache replacement policies 1402, 1404, 1406 may be changed dynamically at runtime by a number of different methods).
With respect to claim 7, Chandrasekaran et al. teaches wherein the one or more circuits are to select the one or more different cache eviction policies based, at least in part, on one or more types of operations associated with a portion of the one or more neural networks (see paragraphs 115, 120 and 129; if the policy selection process 1502 monitors the incoming requests 1508 and determines that the attributes 1510 associated with those requests 1508 have shifted to a new attribute for that cache partition, the policy selection process 1502 may send the new attribute to the policy data store 1506. The policy data store 1506 may then select a policy that corresponds to the new attribute. In some embodiments, the cache replacement policies 1501, 1503 may be associated with different patterns in the request traffic 1508. For request patterns that are received at a relatively high rate and requesting similar objects, an LRU cache replacement policy may be selected for that particular partition. If the request pattern changes such that objects in that partition are rarely requested multiple times, the cache replacement policy may be changed to a different cache replacement policy from the policy data store).
With respect to claim 9, Chandrasekaran et al. teaches a processor (see paragraph 180; processor) to cause one or more different cache eviction policies (see paragraphs 120, 123 and 129; this may allow the neural network to provide cache replacement policies that are output in a sequence. For example, as traffic increases for a certain attribute, a sequence of cache replacement policies may be provided that gradually reduce the impact of the increasing traffic on the cache performance. This may also allow the neural network to take the cache replacement policies of other partitions into account when selecting a new cache replacement policy for a particular partition) to be used for different portions of one or more neural networks (see paragraphs 120, 123 and 129; this may also allow the neural network to take the cache replacement policies of other partitions into account when selecting a new cache replacement policy for a particular partition. For example, some embodiments may use a long short-term memory (LSTM) neural network to select a cache replacement policy. Note that recurrent neural networks are not required by all embodiments, and simple feedforward neural networks may also be used to select a cache replacement policy based on the inputs collected during the time interval).
With respect to claim 10, Chandrasekaran et al. teaches wherein the processor is to cause the one or more different cache eviction policies to be selected based, at least in part, on analysis of a layer of the one or more neural networks (see paragraphs 120, 123 and 129; in addition to the input layer of the neural network comprising inputs 1602 and/or inputs 1608, the neural network may also include one or more internal or hidden layers 1604. In some embodiments, the neural network may be a recurrent neural network (RNN) where connections between the nodes form a directed graph along a temporal sequence. This allows the neural network to exhibit temporal dynamic behavior and thus use an internal state (i.e., a memory-like behavior) to process sequences of events. This may allow the neural network to provide cache replacement policies that are output in a sequence).
With respect to claim 12, Chandrasekaran et al. teaches wherein the processor is to select one or more of the different cache eviction policies based, at least in part, on analysis of performance data associated with use of the one or more neural networks (see paragraphs 120 and 123; this may allow the neural network to provide cache replacement policies that are output in a sequence. For example, as traffic increases for a certain attribute, a sequence of cache replacement policies may be provided that gradually reduce the impact of the increasing traffic on the cache performance… performance of the current cache replacement policy may then be evaluated to label the data sets for training the neural network. For example, if a default cache replacement policy is initially used, the cache replacement policy and a metric describing the performance of the cache may be provided to a labeling process 1714. The labeling process 1714 may evaluate the cache performance metric, such as a number of cache misses 1702, to determine whether the cache replacement policy 1504 currently being output by the neural network is performing adequately).
With respect to claim 13, Chandrasekaran et al. teaches wherein the processor is to cause the one or more different cache eviction policies to be used in response to an instruction from at least one of an application, runtime, or operating system (see paragraph 111; cache replacement policies 1402, 1404, 1406 may be changed dynamically at runtime by a number of different methods).
With respect to claim 14, Chandrasekaran et al. teaches wherein the one or more different cache eviction policies comprise at least one of an algorithm or heuristic to select data for replacement in one or more caches (see paragraph 108; cache replacement policy may also be referred to as a cache replacement algorithm or simply as a cache algorithm. The cache replacement policy may include optimizing instructions and software and/or hardware that govern how object portions are stored and replaced in each of the partitions. For example, when a partition in the cache is full, the cache replacement policy may include algorithms that determine which object portions should be discarded to make room for new object portions as they are requested by client devices).
With respect to claim 16, Chandrasekaran et al. teaches wherein the processor is to select the one or more different cache eviction policies based, at least in part, on one or more types of operations associated with a portion of the one or more neural networks (see paragraphs 115, 120 and 129; if the policy selection process 1502 monitors the incoming requests 1508 and determines that the attributes 1510 associated with those requests 1508 have shifted to a new attribute for that cache partition, the policy selection process 1502 may send the new attribute to the policy data store 1506. The policy data store 1506 may then select a policy that corresponds to the new attribute. In some embodiments, the cache replacement policies 1501, 1503 may be associated with different patterns in the request traffic 1508. For request patterns that are received at a relatively high rate and requesting similar objects, an LRU cache replacement policy may be selected for that particular partition. If the request pattern changes such that objects in that partition are rarely requested multiple times, the cache replacement policy may be changed to a different cache replacement policy from the policy data store).
With respect to claim 17, Chandrasekaran et al. teaches a machine-readable medium having stored thereon instructions which, if performed by one or more processors (see paragraphs 180-182; computer readable storage medium), cause the one or more processors to at least: cause one or more different cache eviction policies (see paragraphs 120, 123 and 129; this may allow the neural network to provide cache replacement policies that are output in a sequence. For example, as traffic increases for a certain attribute, a sequence of cache replacement policies may be provided that gradually reduce the impact of the increasing traffic on the cache performance. This may also allow the neural network to take the cache replacement policies of other partitions into account when selecting a new cache replacement policy for a particular partition) to be used for different portions of one or more neural networks (see paragraphs 120, 123 and 129; this may also allow the neural network to take the cache replacement policies of other partitions into account when selecting a new cache replacement policy for a particular partition. For example, some embodiments may use a long short-term memory (LSTM) neural network to select a cache replacement policy. Note that recurrent neural networks are not required by all embodiments, and simple feedforward neural networks may also be used to select a cache replacement policy based on the inputs collected during the time interval).
With respect to claim 19, Chandrasekaran et al. teaches wherein the one or more processors are to select one or more of the different cache eviction policies based, at least in part, on analysis of performance data associated with use of the one or more neural networks (see paragraphs 120 and 123; this may allow the neural network to provide cache replacement policies that are output in a sequence. For example, as traffic increases for a certain attribute, a sequence of cache replacement policies may be provided that gradually reduce the impact of the increasing traffic on the cache performance… performance of the current cache replacement policy may then be evaluated to label the data sets for training the neural network. For example, if a default cache replacement policy is initially used, the cache replacement policy and a metric describing the performance of the cache may be provided to a labeling process 1714. The labeling process 1714 may evaluate the cache performance metric, such as a number of cache misses 1702, to determine whether the cache replacement policy 1504 currently being output by the neural network is performing adequately).
With respect to claim 20, Chandrasekaran et al. teaches cause the one or more different cache eviction policies to be used in response to an instruction from at least one of an application, runtime, or operating system (see paragraph 111; cache replacement policies 1402, 1404, 1406 may be changed dynamically at runtime by a number of different methods).
With respect to claim 21, Chandrasekaran et al. teaches wherein the one or more different cache eviction policies comprise at least one of an algorithm or heuristic to select data for replacement in one or more caches (see paragraph 108; cache replacement policy may also be referred to as a cache replacement algorithm or simply as a cache algorithm. The cache replacement policy may include optimizing instructions and software and/or hardware that govern how object portions are stored and replaced in each of the partitions. For example, when a partition in the cache is full, the cache replacement policy may include algorithms that determine which object portions should be discarded to make room for new object portions as they are requested by client devices).
With respect to claim 23, Chandrasekaran et al. teaches select the one or more different cache eviction policies based, at least in part, on one or more types of operations associated with a portion of the one or more neural networks (see paragraphs 115, 120 and 129; if the policy selection process 1502 monitors the incoming requests 1508 and determines that the attributes 1510 associated with those requests 1508 have shifted to a new attribute for that cache partition, the policy selection process 1502 may send the new attribute to the policy data store 1506. The policy data store 1506 may then select a policy that corresponds to the new attribute. In some embodiments, the cache replacement policies 1501, 1503 may be associated with different patterns in the request traffic 1508. For request patterns that are received at a relatively high rate and requesting similar objects, an LRU cache replacement policy may be selected for that particular partition. If the request pattern changes such that objects in that partition are rarely requested multiple times, the cache replacement policy may be changed to a different cache replacement policy from the policy data store).
With respect to claim 24, Chandrasekaran et al. teaches causing the one or more different cache eviction policies to be used by a processor to use for different portions of one or more neural networks (see paragraphs 120, 123 and 129; this may allow the neural network to provide cache replacement policies that are output in a sequence. For example, as traffic increases for a certain attribute, a sequence of cache replacement policies may be provided that gradually reduce the impact of the increasing traffic on the cache performance. This may also allow the neural network to take the cache replacement policies of other partitions into account when selecting a new cache replacement policy for a particular partition); and
selecting one or more different cache eviction policies to be used by a processor for the different portions of the one or more neural networks (see paragraphs 120, 123 and 129; this may also allow the neural network to take the cache replacement policies of other partitions into account when selecting a new cache replacement policy for a particular partition. For example, some embodiments may use a long short-term memory (LSTM) neural network to select a cache replacement policy. Note that recurrent neural networks are not required by all embodiments, and simple feedforward neural networks may also be used to select a cache replacement policy based on the inputs collected during the time interval).
With respect to claim 25, Chandrasekaran et al. teaches wherein the one or more different cache eviction policies are selected based, at least in part, on analysis of a layer of the one or more neural networks (see paragraphs 120, 123 and 129; in addition to the input layer of the neural network comprising inputs 1602 and/or inputs 1608, the neural network may also include one or more internal or hidden layers 1604. In some embodiments, the neural network may be a recurrent neural network (RNN) where connections between the nodes form a directed graph along a temporal sequence. This allows the neural network to exhibit temporal dynamic behavior and thus use an internal state (i.e., a memory-like behavior) to process sequences of events. This may allow the neural network to provide cache replacement policies that are output in a sequence).
With respect to claim 27, Chandrasekaran et al. teaches wherein selecting the one or more of the different cache eviction policies is based, at least in part, on analysis of performance data associated with use of the one or more neural networks (see paragraphs 120 and 123; this may allow the neural network to provide cache replacement policies that are output in a sequence. For example, as traffic increases for a certain attribute, a sequence of cache replacement policies may be provided that gradually reduce the impact of the increasing traffic on the cache performance… performance of the current cache replacement policy may then be evaluated to label the data sets for training the neural network. For example, if a default cache replacement policy is initially used, the cache replacement policy and a metric describing the performance of the cache may be provided to a labeling process 1714. The labeling process 1714 may evaluate the cache performance metric, such as a number of cache misses 1702, to determine whether the cache replacement policy 1504 currently being output by the neural network is performing adequately).
With respect to claim 28, Chandrasekaran et al. teaches wherein the one or more different cache eviction policies are selected in response to an instruction from at least one of an application, runtime, or operating system (see paragraph 111; cache replacement policies 1402, 1404, 1406 may be changed dynamically at runtime by a number of different methods).
With respect to claim 30, Chandrasekaran et al. teaches selecting the one or more different cache eviction policies based, at least in part, on one or more types of operations associated with a portion of the one or more neural networks (see paragraphs 115, 120 and 129; if the policy selection process 1502 monitors the incoming requests 1508 and determines that the attributes 1510 associated with those requests 1508 have shifted to a new attribute for that cache partition, the policy selection process 1502 may send the new attribute to the policy data store 1506. The policy data store 1506 may then select a policy that corresponds to the new attribute. In some embodiments, the cache replacement policies 1501, 1503 may be associated with different patterns in the request traffic 1508. For request patterns that are received at a relatively high rate and requesting similar objects, an LRU cache replacement policy may be selected for that particular partition. If the request pattern changes such that objects in that partition are rarely requested multiple times, the cache replacement policy may be changed to a different cache replacement policy from the policy data store).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 3, 6, 11, 15, 18, 22, 26 and 29 is/are rejected under 35 U.S.C. 103 as being unpatentable over Chandrasekaran et al. (US 2021/0157743).
With respect to claim 3, Chandrasekaran et al. does not explicitly teach wherein a first one of the one or more different cache eviction policies is to be selected for processing a first layer of the one or more neural networks, and a different one of the one or more different cache eviction policies is to be selected for processing a second layer of the one or more neural networks.
However, Chandrasekaran et al. teaches that the neural network may include an output layer that includes outputs 1606 corresponding to the various cache replacement policies. In some embodiments, each of the output nodes 1606 in the output layer may correspond to one of the cache replacement policies in the data store (see paragraphs 120-121).
It would have been obvious to a person having ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to have modified the processor to include the above-mentioned features to improve the performance of the memory (see Chandrasekaran, paragraphs 84, 120 and claim 11).
With respect to claim 6, Chandrasekaran et al. does not explicitly teach wherein the one or more different cache eviction policies are to be selected based, at least in part, on a mapping between the different portions of the one or more neural network and the one or more different cache eviction policies.
However, Chandrasekaran et al. teaches wherein in addition to the input layer of the neural network comprising inputs 1602 and/or inputs 1608, the neural network may also include one or more internal or hidden layers 1604. In some embodiments, the neural network may be a recurrent neural network (RNN) where connections between the nodes form a directed graph along a temporal sequence. This allows the neural network to exhibit temporal dynamic behavior and thus use an internal state (i.e., a memory-like behavior) to process sequences of events. This may allow the neural network to provide cache replacement policies that are output in a sequence… The neural network may include an output layer that includes outputs 1606 corresponding to the various cache replacement policies. In some embodiments, each of the output nodes 1606 in the output layer may correspond to one of the cache replacement policies in the data store (see paragraphs 120-121).
It would have been obvious to a person having ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to have modified the processor to include the above-mentioned features to improve the performance of the memory (see Chandrasekaran, paragraphs 84, 120 and claim 11).
With respect to claim 11, Chandrasekaran et al. does not explicitly teach wherein a first one of the one or more different cache eviction policies is to be selected for processing a first layer of the one or more neural networks, and a different one of the one or more different cache eviction policies is to be selected for processing a second layer of the one or more neural networks.
However, Chandrasekaran et al. teaches that the neural network may include an output layer that includes outputs 1606 corresponding to the various cache replacement policies. In some embodiments, each of the output nodes 1606 in the output layer may correspond to one of the cache replacement policies in the data store (see paragraphs 120-121).
It would have been obvious to a person having ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to have modified the system to include the above-mentioned features to improve the performance of the memory (see Chandrasekaran, paragraphs 84, 120 and claim 11).
With respect to claim 15, Chandrasekaran et al. does not explicitly teach wherein the one or more different cache eviction policies are to be selected based, at least in part, on a mapping between the different portions of the one or more neural network and the one or more different cache eviction policies.
However, Chandrasekaran et al. teaches wherein in addition to the input layer of the neural network comprising inputs 1602 and/or inputs 1608, the neural network may also include one or more internal or hidden layers 1604. In some embodiments, the neural network may be a recurrent neural network (RNN) where connections between the nodes form a directed graph along a temporal sequence. This allows the neural network to exhibit temporal dynamic behavior and thus use an internal state (i.e., a memory-like behavior) to process sequences of events. This may allow the neural network to provide cache replacement policies that are output in a sequence… The neural network may include an output layer that includes outputs 1606 corresponding to the various cache replacement policies. In some embodiments, each of the output nodes 1606 in the output layer may correspond to one of the cache replacement policies in the data store (see paragraphs 120-121).
It would have been obvious to a person having ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to have modified the system to include the above-mentioned features to improve the performance of the memory (see Chandrasekaran, paragraphs 84, 120 and claim 11).
With respect to claim 18, Chandrasekaran et al. does not explicitly teach select a first one of the one or more different cache eviction policies for processing a first layer of the one or more neural networks, and select a different one of the one or more different cache eviction policies for processing a second layer of the one or more neural networks.
However, Chandrasekaran et al. teaches that the neural network may include an output layer that includes outputs 1606 corresponding to the various cache replacement policies. In some embodiments, each of the output nodes 1606 in the output layer may correspond to one of the cache replacement policies in the data store (see paragraphs 120-121).
It would have been obvious to a person having ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to have modified the medium to include the above-mentioned features to improve the performance of the memory (see Chandrasekaran, paragraphs 84, 120 and claim 11).
With respect to claim 22, Chandrasekaran et al. does not explicitly teach wherein the one or more different cache eviction policies are to be selected based, at least in part, on a mapping between the different portions of the one or more neural network and the one or more different cache eviction policies.
However, Chandrasekaran et al. teaches wherein in addition to the input layer of the neural network comprising inputs 1602 and/or inputs 1608, the neural network may also include one or more internal or hidden layers 1604. In some embodiments, the neural network may be a recurrent neural network (RNN) where connections between the nodes form a directed graph along a temporal sequence. This allows the neural network to exhibit temporal dynamic behavior and thus use an internal state (i.e., a memory-like behavior) to process sequences of events. This may allow the neural network to provide cache replacement policies that are output in a sequence… The neural network may include an output layer that includes outputs 1606 corresponding to the various cache replacement policies. In some embodiments, each of the output nodes 1606 in the output layer may correspond to one of the cache replacement policies in the data store (see paragraphs 120-121).
It would have been obvious to a person having ordinary skill in the art to which said subject matter pertains, before the effective filing date of the claimed invention, to have modified the medium to include the above-mentioned teachings to improve the performance of the memory (see Chandrasekaran, paragraphs 84, 120 and claim 11).
With respect to claim 26, Chandrasekaran et al. do not explicitly teach wherein a different one or more policies are used by the processor to evaluate a second portion of the one or more neural networks.
However, Chandrasekaran et al. teaches that the neural network may include an output layer that includes outputs 1606 corresponding to the various cache replacement policies. In some embodiments, each of the output nodes 1606 in the output layer may correspond to one of the cache replacement policies in the data store (see paragraphs 120-121).
It would have been obvious to a person having ordinary skill in the art to which said subject matter pertains, before the effective filing date of the claimed invention, to have modified the medium to include the above-mentioned teachings to improve the performance of the memory (see Chandrasekaran, paragraphs 84, 120 and claim 11).
With respect to claim 29, Chandrasekaran et al. do not explicitly teach wherein the one or more different cache eviction policies are to be selected based, at least in part, on a mapping between the different portions of the one or more neural networks and the one or more different cache eviction policies.
However, Chandrasekaran et al. teaches that, in addition to the input layer of the neural network comprising inputs 1602 and/or inputs 1608, the neural network may also include one or more internal or hidden layers 1604. In some embodiments, the neural network may be a recurrent neural network (RNN) where connections between the nodes form a directed graph along a temporal sequence. This allows the neural network to exhibit temporal dynamic behavior and thus use an internal state (i.e., a memory-like behavior) to process sequences of events. This may allow the neural network to provide cache replacement policies that are output in a sequence… The neural network may include an output layer that includes outputs 1606 corresponding to the various cache replacement policies. In some embodiments, each of the output nodes 1606 in the output layer may correspond to one of the cache replacement policies in the data store (see paragraphs 120-121).
It would have been obvious to a person having ordinary skill in the art to which said subject matter pertains, before the effective filing date of the claimed invention, to have modified the method to include the above-mentioned teachings to improve the performance of the memory (see Chandrasekaran, paragraphs 84, 120 and claim 11).
Claims 8 and 31 are rejected under 35 U.S.C. 103 as being unpatentable over Chandrasekaran et al. (US 2021/0157743) as applied to claims 1 and 24 above, and further in view of Gottin et al. (US 2021/0374523).
With respect to claim 8, Chandrasekaran et al. do not teach wherein the one or more different cache eviction policies are selected based, at least in part, on simulated use of the one or more neural networks.
However, Gottin et al. teaches wherein the one or more cache policies are selected based, at least in part, on simulated use of the one or more neural networks (see paragraphs 18, 31, 88 and 90; cache policy parameters are dynamically changed by simulating usage of data).
It would have been obvious to a person having ordinary skill in the art to which said subject matter pertains, before the effective filing date of the claimed invention, to have modified the processor taught by Chandrasekaran et al. to include the above-mentioned teachings to optimize the cache algorithm parameters (see Gottin, paragraphs 22 and 81).
With respect to claim 31, Chandrasekaran et al. do not teach wherein the one or more different cache eviction policies are selected based, at least in part, on simulated use of the one or more neural networks.
However, Gottin et al. teaches wherein the one or more cache policies are selected based, at least in part, on simulated use of the one or more neural networks (see paragraphs 18, 31, 88 and 90; cache policy parameters are dynamically changed by simulating usage of data).
It would have been obvious to a person having ordinary skill in the art to which said subject matter pertains, before the effective filing date of the claimed invention, to have modified the method taught by Chandrasekaran et al. to include the above-mentioned teachings to optimize the cache algorithm parameters (see Gottin, paragraphs 22 and 81).
Response to Arguments
Applicant's arguments with respect to claims 1-31 have been considered but are moot in view of the new ground(s) of rejection, necessitated by amendment.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Kachare et al. (US 11,379,375) teaches a deep neural network and, more particularly, a system and method for optimizing performance of a solid-state drive (SSD) using a deep neural network.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ARACELIS RUIZ whose telephone number is (571)270-1038. The examiner can normally be reached Monday-Friday 11:00am-7:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Reginald G. Bragdon can be reached on (571)272-4204. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ARACELIS RUIZ/ Primary Examiner, Art Unit 2139