Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Applicant's election with traverse of Species I, directed to claims 1-12 and 14-19, in the reply filed on 2/22/2026 is acknowledged. The traversal is on the ground(s) that applicant believes that simultaneous examination will not present an undue burden. This is not found persuasive because Species II is directed to the embodiment of paragraph 19, including claim 13, which is different from Species I, directed to the embodiments of paragraphs 7 and 20, including claims 1-12 and 14-19. Thus, examination of both species would present an undue burden.
The requirement is still deemed proper and is therefore made FINAL.
Claims 1-12 and 14-19 were elected in the reply filed 2/22/2026. Claim 13 was not elected and is therefore withdrawn from consideration.
Claims 1-12 and 14-19 are pending in this Office action.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 7 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
The limitation “wherein, in the first structure, the size of the second buffer is at least twice as large as it is in the second structure” in claim 7 is unclear because it cannot be determined whether “it” refers to “the size of the second buffer” or to “the second buffer”.
Similarly, the limitation “in the second structure, the size of the first buffer is at least twice as large as it is in the first structure” is unclear because it cannot be determined whether “it” refers to “the size of the first buffer” or to “the first buffer”.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 12, 14, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Beckman et al. (hereinafter “Be”) (US 20200162101) in view of Moon (US 20230221885).
As to claim 1, Be teaches an electronic device comprising:
“a compression unit” as compression accelerator as a compression unit (paragraphs 66, 76, 103, fig. 5);
“a search engine comprising a first buffer, a second buffer, and a plurality of comparators configured to perform matching between data stored in the first buffer and data……” as a search block 206 as a search engine that includes input buffer(s) 216 as a second buffer, a history buffer 230 as a first buffer, and search engines 214 (figs. 5, 7A, paragraphs 117-118), with a match block (paragraph 12) and match controller 229 (paragraph 127) as a plurality of comparators configured to perform matching between data stored in history buffer 230 as a first buffer and the byte string (paragraphs 121-123, 127);
“a compression controller configured to……of search engine, ……and cause the compression unit to perform compression on target data using the search engine” as data processing unit 130, which includes accelerators 146 e.g., compression accelerator 200, is represented as a compression controller configured to (figs. 2, 5, paragraph 103) include control panel (CP) 202 of a search block 206 as search engine (figs. 5, 7A, paragraphs 103-105), and use computer hardware in the data compression accelerator as cause (paragraph 66) the data compression accelerator as the compression unit to perform compression on input data stream using the search block 206 as the search engine (fig. 7, paragraphs 116-117, 56, abstract).
In particular:
This disclosure describes a hardware-based programmable data compression accelerator of a data processing unit that includes a pipeline for performing history-based compression. The data compression accelerator comprises computer hardware used by the data processing unit to perform data compression functions more efficiently than is possible in software running on a general purpose processor. The disclosed history-based compression pipeline, referred to herein as a “search block,” is configured to perform string search and replacement functions to compress an input data stream (paragraph 66).
FIG. 7A is a block diagram illustrating an example architecture of search block 206 of data compression accelerator 200 from FIG. 5. According to the disclosed techniques, search block 206 includes multiple hardware search engines (i.e., threads) 214 each configured to perform history-based compression of an input data stream. As illustrated, search block 206 also includes input buffers 216, output buffers 218, a hash table 224, and a history buffer 230. The architecture of search block 206 illustrated in FIG. 7A is shown for exemplary purposes only. In other examples, search block 206 may be configured in a variety of ways (paragraph 117).
Be does not explicitly teach the limitations:
stored in the second buffer;
determine a structure;
adjust a connection between the first buffer and the second buffer based on the determined structure.
Moon teaches limitations
“data stored in the second buffer” as write data is stored in a second write buffer as the second buffer (figs. 1-3, paragraphs 7, 49);
“determine a structure” as if a second storage device 240a of a storage system stores the first write data WD1 in a second write buffer 242a as determine the second storage device 240a of the storage system (figs. 1-3, paragraphs 64, 62). The second storage device 240a of the storage system is represented as a structure;
“adjust a connection between the first buffer and the second buffer based on the determined structure” as update as adjust, based on the storage device 240a as a structure that is determined storing the first write data in a second write buffer 242a, a mapping table 226a, e.g., change a physical address to a logical address, that is represented as a connection between first write buffer 222a as the first buffer in a storage device1 220a and the write buffer 242a as the second buffer in storage device2 240a, because the mapping table 226a is a connection between the storage device 220a and the storage device 240a (figs. 1-3, paragraphs 64, 67, 49).
For example, if the second storage device 240a stores the first write data WD1 in the second write buffer 242a, the second storage device 240a may transfer a success response including a logical address within the second storage device 240a to the first storage device 220a, and the first storage device 220a may change or update the first physical address for the first write data WD1 in the first mapping table 226a to the logical address received from the second storage device 240a (paragraph 64).
Moon further teaches limitation
“data stored in the first buffer and data stored in the second buffer” as write data is stored in a first write buffer as the first buffer and write data stored in a second write buffer as the second buffer (paragraphs 7, 49);
“determine a structure of……” as if the second storage device 240a stores the first write data WD1 in the second write buffer 242a as determine the second storage device 240 of a storage system as search engine (figs. 1-3, paragraphs 64, 62).
Moon and Be disclose a method of managing data stored in buffers. These references are in the same field as the application's field. Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply Moon's teaching to Be's system in order to rapidly respond to a write command from the host, thereby improving the write speed and write performance of the storage system.
As to claims 12, 19, Be and Moon teach limitation
“wherein the compression controller is further configured to cause the compression unit to store information indicating the structure of the search engine together with compressed target data; or causing the compression unit to store information indicating the structure of the search engine together with compressed target data” as data processing unit 130, which includes accelerators 146, e.g., compression accelerator 200, is represented as a compression controller configured to (Be: figs. 2, 5, paragraph 103) cause a compression accelerator 146 (Be: figs. 1-3, paragraphs 55, 58, 63) or a compression accelerator unit 200 (Be: paragraphs 103-104) to store information indicating which one of a plurality of storage devices 220b and/or 240b of (Moon: paragraph 73) search block 206 as search engine (Be: fig. 7A, paragraph 117) stores data with (Moon: paragraph 73) the compressed data stream (Be: paragraph 124).
As to claim 14, Be teaches an operation method of an electronic device, the operation method comprising:
“……of a search engine comprising a first buffer, a second buffer, and a plurality of comparators configured to match data stored in the first buffer with data……” as an architecture of search block 206 of data compression accelerator 200 that includes input buffer(s) 216 as a first buffer, a history buffer 230 as a second buffer, and search engines 214 (fig. 7A, paragraphs 117-118), with a match block (paragraph 12) and match controller 229 (paragraph 127) as a plurality of comparators configured to perform matching between data identified by the addresses in history buffer 230 and the byte string at the current byte position in the data to be compressed (paragraphs 117-118, 121-123, 127);
“causing a compression unit to perform compression on target data subject to compression using the search engine” as configuring or using computer hardware in the data compression accelerator as causing a compression accelerator as a compression unit to perform compression on an input data stream as the target data subject to compression by using a compression pipeline, e.g., the search block 206 as the search engine (paragraphs 66, 99, 107, 117).
In particular:
A hardware-based programmable data compression accelerator of a data processing unit that includes a pipeline for performing history-based compression. The data compression accelerator comprises computer hardware used by the data processing unit to perform data compression functions more efficiently than is possible in software running on a general purpose processor. The disclosed history-based compression pipeline, referred to herein as a “search block,” is configured to perform string search and replacement functions to compress an input data stream. In some examples, the search block performs a first stage of a two-stage compression process performed by the data compression accelerator (paragraph 66).
FIG. 7A is a block diagram illustrating an example architecture of search block 206 of data compression accelerator 200 from FIG. 5. According to the disclosed techniques, search block 206 includes multiple hardware search engines (i.e., threads) 214 each configured to perform history-based compression of an input data stream. As illustrated, search block 206 also includes input buffers 216, output buffers 218, a hash table 224, and a history buffer 230. The architecture of search block 206 illustrated in FIG. 7A is shown for exemplary purposes only. In other examples, search block 206 may be configured in a variety of ways (paragraph 117).
Be does not explicitly teach the limitations:
stored in the second buffer;
determining a structure;
adjusting a connection between the first buffer and the second buffer based on the determined structure.
Moon teaches limitations
“data stored in the second buffer” as write data stored in a second write buffer as the second buffer (paragraphs 7, 49);
“determine a structure” as if the second storage device 240a stores the first write data WD1 in the second write buffer 242a as determine the second storage device 240 as a structure of a storage system (fig. 1, paragraphs 64, 62);
“adjust a connection between the first buffer and the second buffer based on the determined structure” as update as adjust, based on the storage device 240a as a structure that is determined storing the first write data in a second write buffer 242a, a mapping table 226a, e.g., change a physical address to a logical address, that is represented as a connection between first write buffer 222a as the first buffer in a storage device1 220a and the write buffer 242a as the second buffer in storage device2 240a, because the mapping table 226a is a connection between the storage device 220a and the storage device 240a (figs. 1-3, paragraphs 64, 67, 49).
For example, if the second storage device 240a stores the first write data WD1 in the second write buffer 242a, the second storage device 240a may transfer a success response including a logical address within the second storage device 240a to the first storage device 220a, and the first storage device 220a may change or update the first physical address for the first write data WD1 in the first mapping table 226a to the logical address received from the second storage device 240a (paragraph 64).
Moon further teaches limitation
“determine a structure of……” as if the second storage device 240a stores the first write data WD1 in the second write buffer 242a as determine the second storage device 240 of a storage system (fig. 1, paragraphs 64, 62);
“data stored in the first buffer and data stored in the second buffer” as write data stored in a first write buffer as the first buffer and write data stored in a second write buffer as the second buffer (paragraphs 7, 49);
Moon and Be disclose a method of managing data stored in buffers. These references are in the same field as the application's field. Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply Moon's teaching to Be's system in order to rapidly respond to a write command from the host, thereby improving the write speed and write performance of the storage system.
Claims 2-5 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Be in view of Moon and further in view of Chandhoke et al. (hereinafter “Cha”) (US 20160134550).
As to claims 2, 15, Be and Moon teach limitation
“wherein the search engine comprises: a first circuit configured to……; and a second circuit configured to……” as the search block as the search engine includes (Be: paragraphs 117-118) a control circuit as first circuit configured to perform operation (Moon: paragraph 90) and a buffer circuit 340 as second circuit configured to select bitlines (Moon: paragraph 94).
Be and Moon do not explicitly teach limitation
determine a size of the first buffer for transmitting the target data; and determine a size of the second buffer for transmitting the target data.
Cha teaches limitations
“determine a size of the first buffer for transmitting the target data” as configure as determine a size of a first local buffer as first buffer for transferring data (paragraphs 82-84,118);
“determine a size of the second buffer for transmitting the target data” as configure as determine a size of a second local buffer for transferring data (paragraphs 82-84, 118).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply Cha’s teaching to Be’s system in order to reduce timely delivery aggregation latency, thereby improving processing efficiency and further to reduce delays due to retransmission and improve coexistence with control systems without introducing jitter.
As to claim 3, Be, Moon and Cha teach limitations
“wherein the compression controller is further configured to determine the structure by controlling the first circuit and the second circuit based on information related to compression determined based on the target data” as data processing unit 130, which includes accelerators 146 e.g., compression accelerator 200, is represented as a compression controller configured to (Be: figs. 2, 5, paragraph 103) determine the first storage device 220a as the structure based on data related to (Moon: paragraph 65) compression determined based on the data stream as the target data (Be: paragraphs 10-11) by controlling clock generation circuits that include a first circuit and second circuit (Cha: fig. 6, paragraph 127) or by controlling circuits that includes first and second circuits (Moon: paragraphs 90, 94).
As to claims 4,16, Be, Moon, and Cha teach limitations
“wherein the search engine comprises: a third circuit connected to a fourth circuit through a portion of a plurality of comparators and configured to change the connection between the first buffer and the second buffer” as the search block as the search engine includes, via a search engine of search engines as a portion of search engines (Be: fig. 7a, paragraphs 117-118), clock generation circuits that include a third circuit connected to a fourth circuit (Cha: fig. 6, paragraph 127) and the search engine configured to (Be: paragraphs 117-118) a mapping table 226a, e.g., change a physical address to a logical address, that is represented as a connection between first write buffer 222a as the first buffer in a storage device1 220a and the write buffer 242a as the second buffer in storage device2 240a because the mapping table 226a is a connection between the storage device 220a and the storage device 240a (Moon: figs. 1-3, paragraphs 64, 67, 49);
“the fourth circuit connected to the third circuit through a portion of the plurality of comparators and configured to change the connection between the second buffer and the first buffer” as the clock generation circuits that include a fourth circuit connected to a third circuit via (Cha: fig. 6, paragraph 127) a search engine of search engines as through a portion of the plurality of comparators (Be: paragraphs 117-118) and the clock generation circuits that include a fourth circuit configured to (Cha: fig. 6, paragraph 127) a mapping table 226a, e.g., change a physical address to a logical address, that is represented as a connection between first write buffer 222a as the first buffer in a storage device1 220a and the write buffer 242a as the second buffer in storage device2 240a because the mapping table 226a is a connection between the storage device 220a and the storage device 240a (Moon: figs. 1-3, paragraphs 64, 67, 49).
As to claim 5, Be, Moon and Cha teach the limitation
“wherein the compression controller is further configured to determine the structure by controlling the third circuit and the fourth circuit based on information related to compression determined based on the target data” data processing unit 130, which includes accelerators 146 e.g., compression accelerator 200, is represented as a compression controller configured to (Be: figs. 2, 5, paragraph 103) determine the first storage device 220a as the structure based on data related to (Moon: paragraph 65) compression determined based on the data stream as the target data (Be: paragraphs 10-11) by controlling clock generation circuits that include the third circuit and the fourth circuit (Cha: fig. 6, paragraph 127) .
Claims 6-7 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Be in view of Moon and further in view of Olivier et al. (US 20190384750) and Talla et al. (US 20190097938).
As to claims 6, 17, Be and Moon teach the limitation
“wherein the compression controller is further configured to……” as data processing unit 130, which includes accelerators 146 e.g., compression accelerator 200, is represented as a compression controller configured to perform compression (Be: figs. 2, 5, paragraphs 103-105),
“in the first structure, a size of the second buffer is greater than……” as in the first storage device as the first structure, a size of the write buffer as the second buffer is greater than the reference buffer size (Moon: abstract, paragraph 52);
“in the second structure, a size of the first buffer is greater than……” as in the second storage device as the second structure, a size of write buffer is less than reference buffer size RBS1 (Moon: paragraph 62).
Be and Moon do not explicitly teach limitations
determine the structure of the search engine to be a first structure or a second structure;
in the second structure; in the first structure.
Olivier teaches limitation
“determine the structure of the search engine to be a first structure or a second structure” as determine a schema as the structure of the search engine is a schema as a first structure defined by system 102 (paragraph 145).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply Olivier’s teaching to Be’s system in order to configure one or more content analyzers of the search engine system so that the content can be searched more efficiently.
Talla teaches limitation
“in the second structure” as in a first incoming packet of incoming packets as the first structure (in expansion mode 140), a size of a second buffer is increased as greater than in a second incoming packet (in shrinking mode 150) (figs. 1C-1D, paragraphs 35-36);
“in the first structure” as in the second incoming packet of incoming packets as the second structure (in expansion mode 140), a size of a second buffer is increased as greater than in the first incoming packet as the first structure (in shrinking mode 150) (figs. 1C-1D, paragraphs 35-36).
Talla further teaches limitations
“in the first structure, a size of the second buffer is greater than in the second structure” as in a first incoming packet of incoming packets as the first structure (in expansion mode 140), a size of a second buffer is increased as greater than in a second incoming packet (in shrinking mode 150) (figs. 1C-1D, paragraphs 35-36);
“in the second structure, a size of the first buffer is greater than in the first structure” as in the second incoming packet of incoming packets as the second structure (in expansion mode 140), a size of a second buffer is increased as greater than in the first incoming packet as the first structure (in shrinking mode 150) (figs. 1C-1D, paragraphs 35-36).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply Talla’s teaching to Be’s system in order to enable a device to deal with high rates of incoming traffic to avoid packet drops in the receive path by actively monitoring the traffic patterns and adjusting the ring buffer sizes based on the traffic rate.
As to claim 7, Be, Moon, Olivier and Talla teach limitation
“wherein, in the first structure, the size of the second buffer is at least twice as large as it is in the second structure” as in a first incoming packet of incoming packets as the first structure (in expansion mode 140), a size of a second buffer is increased to double size, e.g., K+N bytes (size K = size N) (Talla: paragraphs 32, 57), relative to a second incoming packet (in shrinking mode 150) (Talla: figs. 1C-1D, paragraphs 35-36);
“in the second structure, the size of the first buffer is at least twice as large as it is in the first structure” as in the second incoming packet of incoming packets as the second structure (in expansion mode 140), a size of a second buffer is increased to double size, e.g., K+N bytes (size K = size N) (Talla: paragraphs 32, 57), as greater than in the first incoming packet as the first structure (in shrinking mode 150) (Talla: figs. 1C-1D, paragraphs 35-36).
Claims 8 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Be in view of Moon and further in view of Olivier et al. (US 20190384750).
As to claim 8, Be and Moon teach the limitation
“wherein the compression controller is further configured to ……based on information related to compression determined based on the target data” as data processing unit 130, which includes accelerators 146 e.g., compression accelerator 200, is represented as a compression controller configured to (Be: figs. 2, 5, paragraphs 66, 103) determine the second storage device 240 (Moon: fig. 1, paragraph 61) based on first write data WD1 as information related to (Moon: fig. 1, paragraph 64) compression determined based on the data stream as the target data (Be: paragraphs 10-11, 117).
Be and Moon do not explicitly teach limitations
change the structure of the search engine.
Olivier teaches limitation
“change the structure of the search engine” as updating search engine version of the search engine system (paragraphs 24, 152).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply Olivier’s teaching to Be’s system in order to configure one or more content analyzers of the search engine system so that the content can be searched more efficiently.
As to claim 18, Be and Moon teach limitation
“wherein the determining of the structure of the search engine comprises: ……based on information related to compression determined based on the target data” as if the second storage device 240a stores the first write data WD1 in the second write buffer 242a as determine the second storage device 240 of (Moon: fig. 1, paragraph 61) search block 206 as search engine (Be: fig. 7A, paragraph 117), which includes updating a mapping table based on first write data WD1 as information related to (Moon: fig. 1, paragraph 64) compression determined based on the data stream as the target data (Be: paragraphs 10-11).
Be and Moon do not explicitly teach limitation
changing the structure of the search engine.
Olivier teaches limitation
“changing the structure of the search engine” as updating search engine version of the search engine system (paragraphs 24, 152).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply Olivier’s teaching to Be’s system in order to configure one or more content analyzers of the search engine system so that the content can be searched more efficiently.
Claims 9-10 are rejected under 35 U.S.C. 103 as being unpatentable over Be in view of Moon and further in view of Frantz et al. (US 20120310986).
As to claim 9, Be and Moon teach limitation
“wherein the compression controller is further configured to determine the structure according to the information related to compression predetermined based on……” as data processing unit 130, which includes accelerators 146, e.g., compression accelerator 200, is represented as a compression controller configured to (Be: figs. 2, 5, paragraph 103) determine the storage device 220a as the structure based on the write data related to (Moon: paragraph 64) compression determined based on the data stream as the target data (Be: paragraphs 10-11).
Be and Moon do not explicitly teach limitation
a structure of the target data.
Frantz teaches limitation
“a structure of the target data” as a structure of data (paragraph 109).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply Frantz's teaching to Be's system in order to improve the speed of data retrieval operations by ordering data in a database and further to greatly improve cache efficiency.
As to claim 10, Be and Moon teach limitation
“wherein the compression controller is further configured to” as data processing unit 130, which includes accelerators 146 e.g., compression accelerator 200, is represented as a compression controller configured to perform compression (Be: figs. 2, 5, paragraphs 103-105):
“……obtained by an analysis of the target data” as repeated strings obtained for replacement by scanning of the input data stream as an analysis of the target data (Be: paragraph 107);
“determine the information related to compression based on……” as determine the write data as the information related to (Moon: figs. 2-3, paragraph 64) compression based on the data stream as the target data (Be: paragraphs 10-11), and
“determine the structure based on the information related to compression” as determine the storage device 220a as the structure based on the write data related to (Moon: paragraph 64) compression determined based on the data stream as the target data (Be: paragraphs 10-11).
Be and Moon do not explicitly teach limitation
obtain a value indicating a distribution of the data;
whether the value exceeds a threshold value.
Frantz teaches limitation
“obtain a value indicating a distribution of the data” as obtain a value indicating a distribution of rows in a database (paragraphs 58-60);
“whether the value exceeds a threshold value” as metric surpasses a threshold value (paragraph 79).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply Frantz's teaching to Be's system in order to improve the speed of data retrieval operations by ordering data in a database and further to greatly improve cache efficiency.
Claims 9-10 are rejected under 35 U.S.C. 103 as being unpatentable over Be in view of Moon and further in view of Pang et al. (US 20220207361).
As to claim 9, Be and Moon teach limitation
“wherein the compression controller is further configured to determine the structure according to the information related to compression predetermined based on……” as data processing unit 130, which includes accelerators 146, e.g., compression accelerator 200, is represented as a compression controller configured to (Be: figs. 2, 5, paragraph 103) determine the storage device 220a as the structure based on the write data related to (Moon: paragraph 64) compression determined based on the data stream as the target data (Be: paragraphs 10-11).
Be and Moon do not explicitly teach limitation
a structure of the target data.
Pang teaches limitation
“a structure of the target data” as a structure of data (paragraph 198).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply Pang’s teaching to Be’s system in order to improve the memory utilization and/or reduce the overhead of the central processor, the graphics processor, and/or the neural processor, and further to significantly improve memory occupancy and operation speed of a typical neural network model, and compress size of a model to save memory space.
As to claim 10, Be and Moon teach limitation
“wherein the compression controller is further configured to” as data processing unit 130, which includes accelerators 146 e.g., compression accelerator 200, is represented as a compression controller configured to perform compression (Be: figs. 2, 5, paragraphs 103-105):
“……obtained by an analysis of the target data” as repeated strings obtained for replacement by scanning of the input data stream as an analysis of the target data (Be: paragraph 107);
“determine the information related to compression based on……” as determine the write data as the information related to (Moon: figs. 2-3, paragraph 64) compression based on the data stream as the target data (Be: paragraphs 10-11); and
“determine the structure based on the information related to compression” as determine the storage device 220a as the structure based on the write data related to (Moon: paragraph 64) compression determined based on the data stream as the target data (Be: paragraphs 10-11).
Be and Moon do not explicitly teach the limitations:
obtain a value indicating a distribution of the data; and
whether the value exceeds a threshold value.
Pang teaches the limitation:
“obtain a value indicating a distribution of the data” as generating n.sub.bins = 8001, e.g., a generalized set value indicating the number of data-distribution intervals into which the original input data of each operator is to be quantized from original floating-point precision to integer precision (paragraph 82).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply Pang’s teaching to Be’s system in order to improve memory utilization and/or reduce the overhead of the central processor, the graphics processor, and/or the neural processor; to significantly improve the memory occupancy and operation speed of a typical neural network model; and to compress the size of the original model.
Allowable Subject Matter
Claim 11 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The prior art of record, such as Be, teaches an access node 17 that includes a data processing unit 130 (DPU) including a compression accelerator 146 (figs. 1-3, paragraphs 55, 58, 63) or a compression accelerator unit 200 (paragraphs 103-104), represented as a compression controller configured to determine one or more data compression accelerator units as a structure, with a search block as the search engine (paragraphs 56, 103-104), and to perform history-based compression, i.e., cause (paragraph 56) the compression accelerator as the compression unit to perform compression on a data stream as target data using a pipeline, e.g., the search block 206 as the search engine (paragraphs 5, 66, 116). The compression accelerator unit includes a pipeline (paragraph 56), e.g., search block 206 as the search engine (fig. 5, paragraphs 66, 104). Moon teaches updating, i.e., adjusting, based on the storage device 240a as a structure determined for storing the first write data in a second write buffer 242a, a mapping table 226a, e.g., changing a physical address to a logical address, which is represented as a connection between the first write buffer 222a as the first buffer in storage device 220a and the write buffer 242a as the second buffer in storage device 240a, because the mapping table 226a is the connection between the storage device 220a and the storage device 240a (figs. 1-3, paragraphs 64, 67, 49).
However, none of the prior art of record teaches wherein the compression controller is further configured to: when an analysis time of the target data is longer than a movement time of the target data, adjust the connection by determining the information related to compression based on an analysis result of a portion of the target data, and when the analysis of the target data has completed, update the information related to compression based on a completed analysis result, and adjust the connection based on the updated information related to compression (in claim 11).
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CAM-Y T TRUONG, whose telephone number is (571) 272-4042. The examiner can normally be reached at (571) 272-4042.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, SHERIEF BADAWI can be reached at (571) 272-9782. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CAM Y T TRUONG/ Primary Examiner, Art Unit 2169