DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 10 December 2025 has been entered.
Accordingly, claims 1-20 are pending in this application. Claims 1, 2, 11, and 12 are currently amended; claims 3-10 and 13-20 are as previously presented.
Claim Objections
Claims 9 and 19 are objected to because of the following informalities:
As to claims 9 and 19, Applicant may have intended “wherein first hash” to read as “wherein the first hash.”
Appropriate correction is required.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-19 of U.S. Patent No. 11,995,060 B2 in view of Mosko et al. (previously cited) (US 9,473,405 B2), hereinafter Mosko.
Although the claims at issue are not identical, they are not patentably distinct from each other because the claims of U.S. Patent No. 11,995,060 B2, as modified with Mosko, render obvious the features of claims 1-20 of the instant application, as set forth in the table below.
Instant Application
U.S. Patent No. 11,995,060 B2
1. A computer-implemented method, comprising:
identifying a data set to hash, the data set comprising a set of data blocks;
generating, by a first hash engine, a first hash for each data block in the set of data blocks within the data set at a start position; and
concurrently generating, by a second hash engine, a second hash for each data block in the set of data blocks within the data set at a first non-zero offset relative to the start position.
1. A method, comprising:
identifying a data set to hash based on a first hash block size, the data set comprising a set of data blocks;
generating, by a first hash engine, a first hash for each data block in the set of data blocks within the data set, wherein the first hash engine is configured to generate the first hash based on the first hash block size;
(A hash must start somewhere and thus generating in the patent inherently includes a start position)
generating, by a second hash engine, a second hash for each data block in the set of data blocks within the data set, wherein the second hash engine is configured to generate the second hash based on a second hash block size, the first hash block size being different than the second hash block size;
(The instant application does not state what the relative offset from the start position is for the start of the second hash. As such, the second hash can start at any position, including those used by the patent, so long as some non-zero offset from the start of the first hash is also hashed in the second hash.)
deduplicating the data set is based on the first hash and the second hash; and
compressing the data set based on a compression block size, the first hash block size being a different block size than the compression block size.
5. The method of claim 1, wherein the first hash engine is configured to generate the first hash starting at a first bit of the data set, wherein the second hash engine is configured to generate the second hash at a second bit of the data set.
6. The method of claim 5, wherein the first bit of the data set is bit zero, wherein the second bit of the data set is determined based on an offset from the first bit.
(Claims 5-6, though not required, further clarify the non-zero offset)
Claim 1 does not explicitly disclose concurrently generating the second hash. However, Mosko discloses concurrently generating, by a second hash engine, a second hash for data within the data set at a first non-zero offset relative to the start position (Figs. 7A-7C; Col. 5, Lines 1-15; Col. 6, Lines 5-20 and 38-56; Col. 7, Lines 56-63, A plurality of hashers, i.e. hash engines, are fed replicated incoming data to hash concurrently at differing start offsets. E.g. a first hash engine uses SHA-256 hashing on an entire message body (e.g. starting at offset 20), while concurrently/in parallel, a second hash engine will start hashing the body at offset 24 using SipHash.).
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of US Pat No. 11,995,060 with the teachings of Mosko by modifying US Pat No. 11,995,060 such that the multiple hashes of the data set are performed concurrently/in parallel, as taught by Mosko. Said artisan would have been motivated to do so in order to make more efficient use of available resources and obtain hashing results more quickly through parallel processing, a common practice in the art, while still hashing desired different portions of the same data set as is done in US Pat No. 11,995,060.
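Purely for illustration (no code appears in the record), the parallel multi-offset hashing scheme attributed to Mosko above can be sketched as follows. The offsets (20 and 24), the use of threads, and the substitution of SHA-256 for Mosko's SipHash are assumptions made for the sketch, not details taken from the reference.

```python
# Illustrative sketch of Mosko-style parallel hashing: two "hash engines"
# are fed replicated copies of the same data and hash concurrently, each
# starting at a different offset into the message body.
import hashlib
from concurrent.futures import ThreadPoolExecutor

def hash_from_offset(data: bytes, offset: int, algorithm: str = "sha256") -> str:
    # The bytes before `offset` are skipped; only data[offset:] is hashed.
    return hashlib.new(algorithm, data[offset:]).hexdigest()

message = bytes(range(64))  # stand-in for an incoming message body

# Hypothetical offsets 20 and 24, per the example discussed above.
# (Mosko's second hasher uses SipHash, which the standard library lacks;
# SHA-256 stands in for both engines here.)
with ThreadPoolExecutor(max_workers=2) as pool:
    first = pool.submit(hash_from_offset, message, 20)
    second = pool.submit(hash_from_offset, message, 24)

first_hash, second_hash = first.result(), second.result()
```

Because each engine receives its own copy of the data and shares no state, the two hashes can be computed fully in parallel, which is the efficiency rationale relied upon in the combination.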
2. The method of claim 1, wherein the data set is associated with a data transform pipeline.
A method…
deduplicating the data set is based on the first hash and the second hash; and
compressing the data set based on a compression block size, the first hash block size being a different block size than the compression block size.
(The data set is deduplicated and compressed, i.e. transformed.)
3. The method of claim 1, wherein the first hash and the second hash are computed before a compression operation is performed in an encode pipeline.
1. A method, comprising:
identifying a data set to hash based on a first hash block size, the data set comprising a set of data blocks;
generating, by a first hash engine, a first hash for each data block in the set of data blocks within the data set, wherein the first hash engine is configured to generate the first hash based on the first hash block size;
generating, by a second hash engine, a second hash for each data block in the set of data blocks within the data set, wherein the second hash engine is configured to generate the second hash based on a second hash block size, the first hash block size being different than the second hash block size;
deduplicating the data set is based on the first hash and the second hash; and
compressing the data set based on a compression block size, the first hash block size being a different block size than the compression block size.
4. The method of claim 3, wherein the compression operation includes an operation to slice the data set.
A method…
compressing the data set based on a compression block size, the first hash block size being a different block size than the compression block size.
(Compressing based on a compression block size different from the hash block sizes is interpreted as slicing the data set into blocks of the compression block size.)
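For illustration only, the slicing interpretation noted above can be sketched as follows; the block-size value and the use of zlib are hypothetical choices for the sketch, not features of either claim set.

```python
# Illustration: compressing "based on a compression block size" read as
# slicing the data set into blocks of that size, then compressing each block.
import zlib

def slice_blocks(data: bytes, block_size: int) -> list[bytes]:
    # Consecutive blocks of `block_size` bytes; the last block may be shorter.
    return [data[i:i + block_size] for i in range(0, len(data), block_size)]

data_set = bytes(100)
compression_block_size = 32  # hypothetical; differs from the hash block sizes
blocks = slice_blocks(data_set, compression_block_size)
compressed = [zlib.compress(block) for block in blocks]
```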
5. The method of claim 4 further comprising generating, by a third hash engine, a third hash for the data set at a second offset relative to the start position.
8. The method of claim 1, wherein the first hash block size is independent of a second hash block size of the second hash engine and of a third hash block size of a third hash engine.
(Because the second offset is not specified, it is interpreted as covering any offset, including zero. The third hash of the patent must be done at some offset, which may be zero.)
6. The method of claim 1, wherein a first hash record is created in a hash buffer for the first hash engine, wherein a second hash record is created in the hash buffer for the second hash engine.
1. A method, comprising:
identifying a data set to hash based on a first hash block size, the data set comprising a set of data blocks;
generating, by a first hash engine, a first hash for each data block in the set of data blocks within the data set, wherein the first hash engine is configured to generate the first hash based on the first hash block size;
generating, by a second hash engine, a second hash for each data block in the set of data blocks within the data set, wherein the second hash engine is configured to generate the second hash based on a second hash block size, the first hash block size being different than the second hash block size;
…
(The instant application does not place any limitations on the structure of the buffer being claimed. It is nothing more than a storage location for holding the hash being generated. Because the hashes of the patent must be stored in some entity, and because the claimed buffer merely stores the information in the same manner, the patent inherently includes an analogous hash buffer as claimed to store the hash records being generated.)
7. The method of claim 1 further comprising performing a deduplication operation in view of the first hash and the second hash.
1. A method…
deduplicating the data set is based on the first hash and the second hash;
8. The method of claim 1, wherein the start position is not aligned with a first frame of the data set.
5. The method of claim 1, wherein the first hash engine is configured to generate the first hash starting at a first bit of the data set, wherein the second hash engine is configured to generate the second hash at a second bit of the data set.
6. The method of claim 5, wherein the first bit of the data set is bit zero, wherein the second bit of the data set is determined based on an offset from the first bit.
9. The method of claim 1, wherein first hash and the second hash are generated in parallel.
Mosko discloses wherein first hash and the second hash are generated in parallel. (Figs. 7A-7C; Col. 5, Lines 1-15; Col. 6, Lines 5-20 and 38-56; Col. 7, Lines 56-63, A plurality of hashers, i.e. hash engines, are fed replicated incoming data to hash concurrently at differing start offsets. E.g. a first hash engine uses SHA-256 hashing on an entire message body (e.g. starting at offset 20), while concurrently/in parallel, a second hash engine will start hashing the body at offset 24 using SipHash.).
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of US Pat No. 11,995,060 with the teachings of Mosko by modifying US Pat No. 11,995,060 such that the multiple hashes of the data set are performed concurrently/in parallel, as taught by Mosko. Said artisan would have been motivated to do so in order to make more efficient use of available resources and obtain hashing results more quickly through parallel processing, a common practice in the art, while still hashing desired different portions of the same data set as is done in US Pat No. 11,995,060.
10. The method of claim 1 further comprising identifying a first skip region prior to generating the first hash, and identifying a second skip region prior to generating the second hash.
5. The method of claim 1, wherein the first hash engine is configured to generate the first hash starting at a first bit of the data set, wherein the second hash engine is configured to generate the second hash at a second bit of the data set.
6. The method of claim 5, wherein the first bit of the data set is bit zero, wherein the second bit of the data set is determined based on an offset from the first bit.
(The determined bit establishes a skip region consisting of the bits prior to it, which are not hashed.)
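As an illustration of this interpretation (the delimiter and payload below are hypothetical), a determined starting position implicitly defines a skip region of everything before it:

```python
# Illustration: a non-zero start position implicitly establishes a "skip
# region" of preceding bytes that are excluded from the hash.
import hashlib

def hash_with_skip(data: bytes, skip: int) -> str:
    # Bytes [0, skip) form the skip region; only data[skip:] is hashed.
    return hashlib.sha256(data[skip:]).hexdigest()

payload = b"header-bytes|body-to-hash"
skip = payload.index(b"|") + 1  # hypothetical: skip past a header delimiter
digest = hash_with_skip(payload, skip)
# The digest matches hashing the body alone, confirming the header was skipped.
```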
11. A system, comprising:
a memory; and
one or more processors coupled to the memory, the one or more processors configured to:
identify a data set to hash, the data set comprising a set of data blocks;
generate, at a first hash engine, a first hash for the data set at a start position; and
concurrently generate, at a second hash engine, a second hash for the data set at a non-zero first offset relative to the start position.
16. A system, comprising:
one or more processors and a memory, the one or more processors configured to:
identify a data set to hash based on a first hash block size, the data set comprising a set of data blocks;
implement a deduplication manager to receive the data set to deduplicate;
implement a first hash engine to generate, based on a first hash block size, a first hash for each data block in a set of data blocks within the data set based on a first hash block configuration;
(A hash must start somewhere and thus generating in the patent inherently includes a start position)
implement a second hash engine to generate a second hash for each data block in the set of data blocks within the data set based on a second hash block configuration, wherein the deduplication manager is configured to deduplicate the data set based on the first hash and the second hash; and
(The instant application does not state what the relative offset from the start position is for the start of the second hash. As such, the second hash can start at any position, including those used by the patent, so long as some non-zero offset from the start of the first hash is also hashed in the second hash.)
implement a compression manager to compress the data set based on a compression block size without compressing duplicate data in the data set, the first hash block size being a different block size than the compression block size.
19. The system of claim 16, wherein the first hash engine is configured to generate the first hash starting at a first bit of the data set, wherein the second hash engine is configured to generate the second hash at a second bit of the data set.
(Claim 19, though not required, further clarifies the non-zero offset)
Claim 16 does not explicitly disclose concurrently generating the second hash. However, Mosko discloses concurrently generating, by a second hash engine, a second hash for data within the data set at a first non-zero offset relative to the start position (Figs. 7A-7C; Col. 5, Lines 1-15; Col. 6, Lines 5-20 and 38-56; Col. 7, Lines 56-63, A plurality of hashers, i.e. hash engines, are fed replicated incoming data to hash concurrently at differing start offsets. E.g. a first hash engine uses SHA-256 hashing on an entire message body (e.g. starting at offset 20), while concurrently/in parallel, a second hash engine will start hashing the body at offset 24 using SipHash.).
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of US Pat No. 11,995,060 with the teachings of Mosko by modifying US Pat No. 11,995,060 such that the multiple hashes of the data set are performed concurrently/in parallel, as taught by Mosko. Said artisan would have been motivated to do so in order to make more efficient use of available resources and obtain hashing results more quickly through parallel processing, a common practice in the art, while still hashing desired different portions of the same data set as is done in US Pat No. 11,995,060.
12. The system of claim 11, wherein the data set is associated with a data transform pipeline.
16. A system, comprising:
…
wherein the deduplication manager is configured to deduplicate the data set based on the first hash and the second hash; and
implement a compression manager to compress the data set based on a compression block size without compressing duplicate data in the data set, the first hash block size being a different block size than the compression block size.
(The data set is deduplicated and compressed, i.e. transformed.)
13. The system of claim 11, wherein the first hash and the second hash are computed before a compression operation is performed in an encode pipeline.
16. A system, comprising:
one or more processors and a memory, the one or more processors configured to:
identify a data set to hash based on a first hash block size, the data set comprising a set of data blocks;
implement a deduplication manager to receive the data set to deduplicate;
implement a first hash engine to generate, based on a first hash block size, a first hash for each data block in a set of data blocks within the data set based on a first hash block configuration;
implement a second hash engine to generate a second hash for each data block in the set of data blocks within the data set based on a second hash block configuration, wherein the deduplication manager is configured to deduplicate the data set based on the first hash and the second hash; and
implement a compression manager to compress the data set based on a compression block size without compressing duplicate data in the data set, the first hash block size being a different block size than the compression block size.
14. The system of claim 13, wherein the compression operation includes an operation to slice the data set.
16. A system, comprising:
…
implement a compression manager to compress the data set based on a compression block size without compressing duplicate data in the data set, the first hash block size being a different block size than the compression block size.
(Compressing based on a compression block size different from the hash block sizes is interpreted as slicing the data set into blocks of the compression block size.)
15. The system of claim 14, the one or more processors configured to generate, at a third hash engine, a third hash for the data set at a second offset relative to the start position.
8. The method of claim 1, wherein the first hash block size is independent of a second hash block size of the second hash engine and of a third hash block size of a third hash engine.
(Because the second offset is not specified, it is interpreted as covering any offset, including zero. The third hash of the patent must be done at some offset, which may be zero.)
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art that the method steps performed in claims 1-9 could readily be performed by the system of claim 16, and it therefore would have been obvious to modify the system of claim 16 to implement any or all of the method steps of claims 1-9. Said artisan would have been motivated to do so in order to implement the method steps in a system, as is commonly done in the art.
16. The system of claim 11, wherein a first hash record is created in a hash buffer for the first hash engine, wherein a second hash record is created in the hash buffer for the second hash engine.
16. A system, comprising:
…
implement a first hash engine to generate, based on a first hash block size, a first hash for each data block in a set of data blocks within the data set based on a first hash block configuration;
implement a second hash engine to generate a second hash for each data block in the set of data blocks within the data set based on a second hash block configuration, wherein the deduplication manager is configured to deduplicate the data set based on the first hash and the second hash; …
(The instant application does not place any limitations on the structure of the buffer being claimed. It is nothing more than a storage location for holding the hash being generated. Because the hashes of the patent must be stored in some entity, and because the claimed buffer merely stores the information in the same manner, the patent inherently includes an analogous hash buffer as claimed to store the hash records being generated.)
17. The system of claim 11, the one or more processors configured to perform a deduplication operation in view of the first hash and the second hash.
16. A system…
wherein the deduplication manager is configured to deduplicate the data set based on the first hash and the second hash;
18. The system of claim 11, wherein the start position is not aligned with a first frame of the data set.
19. The system of claim 16, wherein the first hash engine is configured to generate the first hash starting at a first bit of the data set, wherein the second hash engine is configured to generate the second hash at a second bit of the data set.
19. The system of claim 11, wherein first hash and the second hash are generated in parallel.
Mosko discloses wherein first hash and the second hash are generated in parallel. (Figs. 7A-7C; Col. 5, Lines 1-15; Col. 6, Lines 5-20 and 38-56; Col. 7, Lines 56-63, A plurality of hashers, i.e. hash engines, are fed replicated incoming data to hash concurrently at differing start offsets. E.g. a first hash engine uses SHA-256 hashing on an entire message body (e.g. starting at offset 20), while concurrently/in parallel, a second hash engine will start hashing the body at offset 24 using SipHash.).
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of US Pat No. 11,995,060 with the teachings of Mosko by modifying US Pat No. 11,995,060 such that the multiple hashes of the data set are performed concurrently/in parallel, as taught by Mosko. Said artisan would have been motivated to do so in order to make more efficient use of available resources and obtain hashing results more quickly through parallel processing, a common practice in the art, while still hashing desired different portions of the same data set as is done in US Pat No. 11,995,060.
20. The system of claim 11, the one or more processors configured to:
identify a first skip region prior to generating the first hash; and
identify a second skip region prior to generating the second hash.
19. The system of claim 16, wherein the first hash engine is configured to generate the first hash starting at a first bit of the data set, wherein the second hash engine is configured to generate the second hash at a second bit of the data set.
(The determined bit establishes a skip region consisting of the bits prior to it, which are not hashed.)
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea of mental processes and/or math without significantly more.
As to claim 1, the claim recites the mental processes of a method, comprising:
identifying a data set to hash, the data set comprising a set of data blocks (A person can mentally read data and mentally identify it to hash);
generating, by a first hash engine, a first hash for the data set at a start position (A person can manually generate a first hash for data, either mentally or with pen and paper (the claim specifies neither how to hash nor how much data is in the data set), and determine where to start doing so from reading the data.); and
generating, by a second hash engine, a second hash for data within the data set at a first non-zero offset relative to the start position (A person can manually generate a second hash for data, either mentally or with pen and paper (the claim specifies neither how to hash, so any simple method can be used, nor how much data is in the data set), and determine where to start doing so from reading the data presented to them.).
This judicial exception is not integrated into a practical application because there are no steps beyond the abstract idea to possibly integrate it into a practical application. Nothing is done to actually utilize the hashing results in any way, let alone any meaningful way that could amount to any practical application. While still a mental step as set forth above, it is noted that the step of identifying a data set to hash, the data set comprising a set of data blocks is mere data gathering necessary to perform the abstract idea, and thus insignificant extra-solution activity. See MPEP §2106.05(g).
The claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additionally, the recitation of the method being “computer-implemented” merely attempts to generically implement the abstract idea on a computer, and is not indicative of significantly more. See MPEP §2106.05(f). The recitation of concurrently generating a second hash, while not something that can necessarily be performed mentally by a person, merely utilizes a computer to more efficiently perform manual processes inherent with applying the abstract idea on a computer, which has also been held not to amount to significantly more. See MPEP §2106.05(f)(2).
In addition, claim 1 also recites the mathematical concepts of a method, comprising:
generating, by a first hash engine, a first hash for the data set at a start position (Generating a hash is merely application of math performed on data. Selecting a starting location merely determines what to apply the math to.); and
concurrently generating, by a second hash engine, a second hash for data within the data set at a first non-zero offset relative to the start position (Generating a hash is merely application of math performed on data. Selecting a starting location merely determines what to apply the math to. Applying math concurrently is still merely performing mathematical calculations.).
This judicial exception is not integrated into a practical application because there are no steps beyond the abstract idea to possibly integrate it into a practical application. Nothing is done to actually utilize the hashing results in any way, let alone any meaningful way that could amount to any practical application. While still a mental step as set forth above, it is noted that the step of identifying a data set to hash, the data set comprising a set of data blocks is mere data gathering necessary to perform the abstract idea, and thus insignificant extra-solution activity. See MPEP §2106.05(g).
The claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The recitation of the method being “computer-implemented” merely attempts to generically implement the abstract idea on a computer, and is not indicative of significantly more. See MPEP §2106.05(f). Again, the identifying step is insignificant extra-solution activity of necessary data gathering per MPEP §2106.05(g), and the recitation of concurrently generating a second hash merely utilizes a computer to more efficiently perform manual processes inherent with applying the abstract idea on a computer, which has also been held not to amount to significantly more. See MPEP §2106.05(f)(2).
As to claim 11, the claim recites the mental processes of a system with one or more processors configured to:
identify a data set to hash, the data set comprising a set of data blocks (A person can mentally read data and mentally identify it to hash);
generate, at a first hash engine, a first hash for the data set at a start position (A person can manually generate a first hash for data, either mentally or with pen and paper (the claim specifies neither how to hash nor how much data is in the data set), and determine where to start doing so from reading the data.); and
generate, at a second hash engine, a second hash for the data set at a first non-zero offset relative to the start position (A person can manually generate a second hash for data, either mentally or with pen and paper (the claim specifies neither how to hash, so any simple method can be used, nor how much data is in the data set), and determine where to start doing so from reading the data presented to them.).
This judicial exception is not integrated into a practical application because there are no steps beyond the abstract idea to possibly integrate it into a practical application. Nothing is done to actually utilize the hashing results in any way, let alone any meaningful way that could amount to any practical application. While still a mental step as set forth above, it is noted that the step of identifying a data set to hash, the data set comprising a set of data blocks is mere data gathering necessary to perform the abstract idea, and thus insignificant extra-solution activity. See MPEP §2106.05(g).
The claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements of “a system, comprising: a memory; and one or more processors coupled to the memory, the one or more processors configured” merely recite generic computer components that do nothing more than apply the abstract idea on a computer. See MPEP §2106.05(f). The recitation of concurrently generating a second hash, while not something that can necessarily be performed mentally by a person, merely utilizes a computer to more efficiently perform manual processes inherent with applying the abstract idea on a computer, which has also been held not to amount to significantly more. See MPEP §2106.05(f)(2).
Additionally, claim 11 recites the mathematical concepts of one or more processors configured to:
generate, at a first hash engine, a first hash for the data set at a start position (Generating a hash is merely application of math performed on data. Selecting a starting location merely determines what to apply the math to.); and
concurrently generate, at a second hash engine, a second hash for the data set at a first non-zero offset relative to the start position (Generating a hash is merely application of math performed on data. Selecting a starting location merely determines what to apply the math to. Applying math concurrently is still merely performing mathematical calculations.).
This judicial exception is not integrated into a practical application because there are no steps beyond the abstract idea to possibly integrate it into a practical application. Nothing is done to actually utilize the hashing results in any way, let alone any meaningful way that could amount to any practical application. The additional step of identifying a data set to hash, the data set comprising a set of data blocks is mere data gathering necessary to perform the abstract idea, and thus insignificant extra-solution activity. See MPEP §2106.05(g).
The claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements of “a system, comprising: a memory; and one or more processors coupled to the memory, the one or more processors configured” merely recite generic computer components that do nothing more than apply the abstract idea on a computer. See MPEP §2106.05(f). The recitation of concurrently generating a second hash, while not something that can necessarily be performed mentally by a person, is merely utilizing a computer to more efficiently perform manual processes inherent with applying the abstract idea on a computer which has also been held not to amount to significantly more. See MPEP §2106.05(f)(2).
As to claims 2 and 12, the claims are rejected for the same reasons as claims 1 and 11 above. In addition, the claims recite “wherein the data set is associated with a data transform pipeline.” These features merely describe the dataset and do not recite any function being performed. As such, they at best merely further describe the abstract idea, mental or math, being performed in claims 1 and 11 without any practical application or amounting to significantly more.
Additionally, because the features merely describe the data without any added functionality, the features of claims 2 and 12 are directed to non-functional descriptive material and do not carry patentable weight. See MPEP §2111.05.
As to claims 3 and 13, the claims are rejected for the same reasons as claims 1 and 11 above. In addition, the claims recite “wherein the first hash and the second hash are computed before a compression operation is performed in an encode pipeline.” The limitation merely describes when to perform the mental/math processes of claims 1 and 11, and thus merely further describes the abstract idea of those claims without amounting to significantly more or reciting any practical application. The claims do not recite actually performing any compression as part of the claimed methods, but merely that the hashing is done before a compression operation is performed by some entity, which may not necessarily be that of the claimed method or system. As such, any description of the compression does not carry patentable weight because it describes steps not required to be performed by the claims. See MPEP §2111.04.
As to claims 4 and 14, the claims are rejected for the same reasons as claims 3 and 13 above. In addition, the claims recite the mental/math processes of wherein the compression operation includes an operation to slice the data set (A person can mentally look at data and determine where to slice it.).
Additionally, as set forth in the rejections of claims 3 and 13 above, the compression is not necessarily performed by the claimed method and system. As such, any description of the compression does not carry patentable weight because it describes steps not required to be performed by the claims. See MPEP §2111.04.
As to claims 5 and 15, the claims are rejected for the same reasons as claims 3 and 13 above. In addition, the claims recite the mental/math processes of generating, by a third hash engine, a third hash for the data set at a second offset relative to the start position (A person can manually generate a third hash for data, either mentally or with pen and paper (the claim specifies neither how to hash nor how much data is in the data set), and determine where to start doing so from reading the data. Also, again, generating a hash is merely a mathematical process.).
Accordingly, the claims merely further describe the abstract idea being performed in claims 3 and 13 without any steps that could amount to significantly more and without reciting any practical application.
As to claims 6 and 16, the claims are rejected for the same reasons as claims 1 and 11 above. In addition, the claims recite the mental/math processes of wherein a first hash record is created in a hash buffer for the first hash engine, wherein a second hash record is created in the hash buffer for the second hash engine (The claims do not recite any specifics as to what a hash buffer is. Accordingly, a person’s mind, or their paper if done on pen and paper, can be interpreted as a buffer used to create a hash record.).
Accordingly, the claims merely further describe the abstract idea being performed in claims 1 and 11 without any steps that could amount to significantly more and without reciting any practical application.
As to claims 7 and 17, the claims are rejected for the same reasons as claims 1 and 11 above. In addition, the claims recite the mental/math processes of performing a deduplication operation in view of the first hash and the second hash (A person can look at data and mentally deduplicate it upon determining that two or more items are the same. The courts have held that broad deduplication is a mental process.).
Accordingly, the claims merely further describe the abstract idea being performed in claims 1 and 11 without any steps that could amount to significantly more and without reciting any practical application.
As to claims 8 and 18, the claims are rejected for the same reasons as claims 1 and 11 above. In addition, the claims recite the mental/math processes of wherein the start position is not aligned with a first frame of the data set (A person can mentally determine where to start the hash as previously set forth. Additionally this merely establishes a starting point for performing math of hashing.).
Accordingly, the claims merely further describe the abstract idea being performed in claims 1 and 11 without any steps that could amount to significantly more and without reciting any practical application.
As to claims 9 and 19, the claims are rejected for the same reasons as claims 1 and 11 above. In addition, the claims recite wherein first hash and the second hash are generated in parallel. While generating multiple hashes in parallel is not reasonably performed mentally or with pen and paper, parallel operations are merely operations commonly performed by computers. As such, simply reciting that hashes are generated in parallel merely invokes the computer as a tool, utilizing its inherent efficiency in applying the abstract idea, and does not amount to significantly more or recite a practical application of the abstract ideas of claims 1 and 11. See MPEP §2106.05(f). Finally, this step again merely describes how to perform the math of hashing multiple times.
As to claims 10 and 20, the claims are rejected for the same reasons as claims 1 and 11 above. In addition, the claims recite the mental/math processes of identifying a first skip region prior to generating the first hash, and identifying a second skip region prior to generating the second hash (A person can mentally look at data and mentally identify regions to skip prior to determining hashes. This also merely establishes where to start performing the math of hashing.).
Accordingly, the claims merely further describe the abstract idea being performed in claims 1 and 11 without any steps that could amount to significantly more and without reciting any practical application.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA ), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA ), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
As to claims 1 and 11, the claims recite concurrently generating, by a second hash engine, a second hash. However, claims 9 and 19 similarly recite wherein first hash and the second hash are generated in parallel. The plain meanings of performing an operation concurrently and in parallel are both to perform the operation at the same time as another. Because Applicant has chosen to use two terms to describe what appears to be the same thing, and because Applicant’s specification is silent as to what any difference may be intended to be, it is unclear what difference, if any, is supposed to exist between concurrently generating a hash and generating the hashes in parallel. Accordingly, the scope of the claims cannot be properly ascertained, rendering them indefinite.
As to claims 2-10 and 12-20, the claims inherit the deficiencies of claims 1 and 11 without curing them and are therefore rejected under 35 USC §112(b) for the same reasons as claims 1 and 11 above.
The following is a quotation of 35 U.S.C. 112(d):
(d) REFERENCE IN DEPENDENT FORMS.—Subject to subsection (e), a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.
The following is a quotation of pre-AIA 35 U.S.C. 112, fourth paragraph:
Subject to the following paragraph [i.e., the fifth paragraph of pre-AIA 35 U.S.C. 112], a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.
Claims 9 and 19 are rejected under 35 U.S.C. 112(d) or pre-AIA 35 U.S.C. 112, 4th paragraph, as being of improper dependent form for failing to further limit the subject matter of the claim upon which it depends, or for failing to include all the limitations of the claim upon which it depends.
As to claims 9 and 19, the claims recite wherein first hash and the second hash are generated in parallel. However, claims 1 and 11 already recite “concurrently generating, by a second hash engine, a second hash.” Performing an action “concurrently”, given its plain meaning to one of ordinary skill in the art, is synonymous with doing so “in parallel.” Since the only other step performed in the claim after identifying the data set to hash is generating the first hash, the step of generating the second hash is already recited as being performed concurrently, i.e. in parallel, with generating the first hash. Accordingly, claims 9 and 19 fail to further limit the features of claims 1 and 11 as they merely recite subject matter already claimed with different, but equivalent, terminology. Applicant may cancel the claim(s), amend the claim(s) to place the claim(s) in proper dependent form, rewrite the claim(s) in independent form, or present a sufficient showing that the dependent claim(s) complies with the statutory requirements.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Moorhead (previously presented)(GB 2493832 A) in view of O’Hare et al. (previously presented)(US 2021/0064659 A1), hereinafter O’Hare, and further in view of Mosko et al. (previously cited)(US 9,473,405 B2), hereinafter Mosko.
As to claim 1, Moorhead discloses a computer-implemented method, comprising:
identifying a data set to hash, the data set comprising a set of data blocks (Pg. 12, Lines 27-31; Pg. 13, Lines 16-26, A data block is received, and a data set therein is identified based on the current state of the rolling hash being performed.);
generating, by a first hash engine, a first hash within the data set at a start position (Pg. 9, Line 13; Pg. 12, Lines 27-31; Pg. 13, Lines 16-26; A first hash function, i.e. by first hash engine, is performed on each byte, i.e. the claimed data blocks. The first hash must start at some start position.); and
generating, by a second hash engine, a second hash within the data set at a first non-zero offset relative to the start position (Pg. 9, Lines 20-22, A second hash function is also performed on each byte of the data set, i.e. each claimed block, along with the first hash function, i.e. by a second hash engine. Hashing is performed via rolling hash for the data set identified by the hash block size determined from the current iteration of the rolling hash, Pg. 12, Lines 27-31; Pg. 13, Lines 16-26. I.e. for each data set of a given hash block size, e.g. 4079 bytes or 4069 bytes, both a corresponding hash from the first hash function and from the second hash function will be determined. Any location will read on the claimed feature. The claim does not state or limit what the offset to start the second hash relative to the start position of the first hash is, and as such it does not explicitly exclude hashing any blocks prior to the non-zero offset, but merely states that those after are to be hashed. I.e., if both hashes cover blocks 1, 2, and 3 of a file, with block 2 being at a first non-zero offset, then so long as the second hash is performed using block 2 (even in addition to block 1), it is still generated at a first non-zero offset as claimed. Thus, Moorhead reads on the claimed feature except for concurrency.).
Moorhead does not disclose concurrently generating the second hash.
However, O’Hare discloses concurrently generating, by a hash engine, a second hash within the data set at a first offset relative to the start position (Figs. 6, 10, and 11; [0019], [0028]; [0033]-[0034]; Multiple hashes are performed on a set of data with the second and later hashes being performed at variable offsets from a starting location of the first hash. The hashes are then used in generating compressed buffers. As shown in Fig. 11, #412, all hashes are performed in the same step, and as such are performed ‘concurrently’ as claimed. Because Applicant has claimed performing in parallel separately in claim 9, concurrent is interpreted as being different, though Applicant’s specification does not disclose how. As such, performing in the same step, as in #412, is analogous to the claimed concurrent operation.).
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Moorhead with the teachings of O’Hare by modifying Moorhead such that the data blocks that are hashed by Moorhead and determined to be stored are compressed as selected chunks that were hashed concurrently, as in O’Hare, by the first and second hash engines of Moorhead, such that a deduplicated dataset of Moorhead is compressed for storage. Said artisan would have been motivated to do so in order to reduce the storage requirements of data received for storage in Moorhead by compressing the data (O’Hare, [0003]).
Additionally, solely for compact prosecution, should concurrent be interpreted as in parallel with differing start offsets for both the first and second hashes, Mosko more explicitly discloses generating, by a first hash engine, a first hash for the data set at a start position (Figs. 7A-7C; Col. 5, Lines 1-15; Col. 6, Lines 5-20 and 38-56; Col. 7, Lines 56-63, A plurality of hashers, i.e. hash engines, are fed replicated incoming data to hash concurrently at differing start offsets. E.g. a first hash engine uses SHA-256 hashing on an entire message body (e.g. starting at offset 20), while concurrently/in parallel, a second hash engine will start hashing the body at offset 24 using SipHash.); and
concurrently generating, by a second hash engine, a second hash within the data set at a first non-zero offset relative to the start position (Figs. 7A-7C; Col. 5, Lines 1-15; Col. 6, Lines 5-20 and 38-56; Col. 7, Lines 56-63, A plurality of hashers, i.e. hash engines, are fed replicated incoming data to hash concurrently at differing start offsets. E.g. a first hash engine uses SHA-256 hashing on an entire message body (e.g. starting at offset 20), while concurrently/in parallel, a second hash engine will start hashing the body at offset 24 using SipHash.).
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Moorhead, as previously modified with O’Hare, with the teachings of Mosko by modifying Moorhead such that the multiple hashes of the data set are performed concurrently/in parallel as in Mosko. Said artisan would have been motivated to do so in order to more efficiently use available resources and obtain hashing results more quickly by utilizing parallel processing, as is common practice in the art, while also hashing desired different portions of the same dataset as is done in O’Hare, e.g. to more efficiently determine duplicates and deduplicate data from the multiple hashes in Moorhead and O’Hare.
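Purely as an illustrative aside, and not as part of the claim mapping, the concurrent hashing at differing start offsets described above for Mosko can be sketched as follows. The sketch assumes Python threads and uses SHA-256 for both engines, since Python’s standard hashlib does not provide SipHash; the offsets 20 and 24 loosely mirror the example offsets discussed above:

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

def hash_from_offset(algorithm: str, data: bytes, offset: int) -> str:
    # Each 'hash engine' receives a replicated copy of the incoming data
    # and hashes it starting from its own offset.
    return hashlib.new(algorithm, data[offset:]).hexdigest()

data = b"replicated incoming message body for the hash engines"
# Two engines run concurrently on replicated data at differing start offsets.
with ThreadPoolExecutor(max_workers=2) as pool:
    f1 = pool.submit(hash_from_offset, "sha256", data, 20)
    f2 = pool.submit(hash_from_offset, "sha256", data, 24)
first_hash, second_hash = f1.result(), f2.result()
```

Because each engine hashes a different suffix of the replicated data, the two results differ, which is the sense in which the second hash is generated at a non-zero offset relative to the first.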
As to claim 11, Moorhead discloses a system, comprising:
a memory (Pg. 7, Lines 1-15); and
one or more processors coupled to the memory, the one or more processors configured to (Pg. 7, Lines 1-15):
identify a data set to hash, the data set comprising a set of data blocks (Pg. 12, Lines 27-31; Pg. 13, Lines 16-26, A data block is received, and a data set therein is identified based on the current state of the rolling hash being performed.);
generate, at a first hash engine, a first hash for the data set at a start position (Pg. 9, Lines 13-22; Pg. 12, Lines 27-31; Pg. 13, Lines 16-26; A first hash function, i.e. by first hash engine, is performed on each byte, i.e. the claimed data blocks. The first hash must start at some start position.); and
generate, at a second hash engine, a second hash for the data set at a first non-zero offset relative to the start position (Pg. 9, Lines 20-22, A second hash function is also performed on each byte of the data set, i.e. each claimed block, along with the first hash function, i.e. by a second hash engine. Hashing is performed via rolling hash for the data set identified by the hash block size determined from the current iteration of the rolling hash, Pg. 12, Lines 27-31; Pg. 13, Lines 16-26. I.e. for each data set of a given hash block size, e.g. 4079 bytes or 4069 bytes, both a corresponding hash from the first hash function and from the second hash function will be determined. Any location will read on the claimed feature. The claim does not state or limit what the offset to start the second hash relative to the start position of the first hash is, and as such it does not explicitly exclude hashing any blocks prior to the non-zero offset, but merely states that those after are to be hashed. I.e., if both hashes cover blocks 1, 2, and 3 of a file, with block 2 being at a first non-zero offset, then so long as the second hash is performed using block 2 (even in addition to block 1), it is still generated at a first non-zero offset as claimed. Thus, Moorhead reads on the claimed feature except for concurrency.).
Moorhead does not disclose concurrently generating the second hash.
However, O’Hare discloses concurrently generating, by a hash engine, a second hash within the data set at a first offset relative to the start position (Figs. 6, 10, and 11; [0019], [0028]; [0033]-[0034]; Multiple hashes are performed on a set of data with the second and later hashes being performed at variable offsets from a starting location of the first hash. The hashes are then used in generating compressed buffers. As shown in Fig. 11, #412, all hashes are performed in the same step, and as such are performed ‘concurrently’ as claimed. Because Applicant has claimed performing in parallel separately in claim 9, concurrent is interpreted as being different, though Applicant’s specification does not disclose how. As such, performing in the same step, as in #412, is analogous to the claimed concurrent operation.).
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Moorhead with the teachings of O’Hare by modifying Moorhead such that the data blocks that are hashed by Moorhead and determined to be stored are compressed as selected chunks that were hashed concurrently, as in O’Hare, by the first and second hash engines of Moorhead, such that a deduplicated dataset of Moorhead is compressed for storage. Said artisan would have been motivated to do so in order to reduce the storage requirements of data received for storage in Moorhead by compressing the data (O’Hare, [0003]).
Additionally, solely for compact prosecution, should concurrent be interpreted as in parallel with differing start offsets for both the first and second hashes, Mosko more explicitly discloses generating, by a first hash engine, a first hash for the data set at a start position (Figs. 7A-7C; Col. 5, Lines 1-15; Col. 6, Lines 5-20 and 38-56; Col. 7, Lines 56-63, A plurality of hashers, i.e. hash engines, are fed replicated incoming data to hash concurrently at differing start offsets. E.g. a first hash engine uses SHA-256 hashing on an entire message body (e.g. starting at offset 20), while concurrently/in parallel, a second hash engine will start hashing the body at offset 24 using SipHash.); and
concurrently generating, by a second hash engine, a second hash for the data set at a first non-zero offset relative to the start position (Figs. 7A-7C; Col. 5, Lines 1-15; Col. 6, Lines 5-20 and 38-56; Col. 7, Lines 56-63, A plurality of hashers, i.e. hash engines, are fed replicated incoming data to hash concurrently at differing start offsets. E.g. a first hash engine uses SHA-256 hashing on an entire message body (e.g. starting at offset 20), while concurrently/in parallel, a second hash engine will start hashing the body at offset 24 using SipHash.).
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Moorhead, as previously modified with O’Hare, with the teachings of Mosko by modifying Moorhead such that the multiple hashes of the data set are performed concurrently/in parallel as in Mosko. Said artisan would have been motivated to do so in order to more efficiently use available resources and obtain hashing results more quickly by utilizing parallel processing, as is common practice in the art, while also hashing desired different portions of the same dataset as is done in O’Hare, e.g. to more efficiently determine duplicates and deduplicate data from the multiple hashes in Moorhead and O’Hare.
As to claims 2 and 12, the claims are rejected for the same reasons as claims 1 and 11 above. In addition, Moorhead, as previously modified with O’Hare and Mosko, discloses wherein the data set is associated with a data transform pipeline (Moorhead, Pg. 4, Lines 27-34, E.g. as part of a deduplication data transform pipeline.).
Additionally, the features of claims 2 and 12 merely describe the type of data in the dataset and do not recite any function being performed. The fact that the data set has some generic association with a data transform pipeline is not used to perform any specific function, let alone any function that would not otherwise be done with any other data inputted into the process. As such, they merely recite non-functional descriptive material, do not carry patentable weight, and need not be taught by the prior art in rejecting the claims. See MPEP §2111.05.
As to claims 3 and 13, the claims are rejected for the same reasons as claims 1 and 11 above. In addition, Moorhead, as previously combined with O’Hare and Mosko, discloses wherein the first hash and the second hash are computed before a compression operation is performed in an encode pipeline (Moorhead, Pg. 9, Lines 13-22; Pg. 12, Lines 27-31; Pg. 13, Lines 16-26, The first and second hashes are computed without any compression operations being performed first. Additionally, the claims do not require actually performing a compression operation, merely that one is not done prior to computing the hashes. As such Moorhead discloses the claims. Actually performing the compression is not required by the claims and the operation itself does not carry patentable weight. See MPEP §2111.04.).
Although Moorhead discloses the claim as set forth above, solely for more compact prosecution, O’Hare also discloses wherein the first hash and the second hash are computed before a compression operation is performed in an encode pipeline (Figs. 6, 10, and 11; [0033]-[0034]; Multiple hashes are performed on a set of data with the second and later hashes being performed at offsets from a starting location of the first hash. The hashes are then used in generating compressed buffers.).
As to claims 4 and 14, the claims are rejected for the same reasons as claims 3 and 13 above. In addition, Moorhead, as previously modified with O’Hare and Mosko, discloses wherein the compression operation includes an operation to slice the data set (O’Hare, Figs. 8, 9 and 11; [0031]-[0034], E.g. organizing the input buffer into logical chunks.).
Additionally, while the prior art discloses wherein the compression operation includes an operation to slice the data set, as set forth in the rejections of claims 3 and 13 above, the claims do not actually require performing a step of compression. The claims also do not use any compressed data to perform any claimed function. As such, the fact that the unclaimed compression operation includes an operation to slice the data set does not limit the claims to performing any required step. Therefore, the features of claims 4 and 14 do not carry patentable weight and need not be disclosed by the prior art in rejecting the claims. See MPEP §2111.04.
As to claims 5 and 15, the claims are rejected for the same reasons as claims 4 and 14 above. In addition, Moorhead, as previously modified with O’Hare and Mosko, discloses generating, by a third hash engine, a third hash for the data set at a second offset relative to the start position (O’Hare, Figs. 6, 10, and 11; [0033]-[0034]; Multiple hashes are performed on a set of data with the second and later hashes being performed at offsets from a starting location of each previous hash. The hashes are then used in generating compressed buffers. Mosko, Figs. 7A-7C; Col. 6, Lines 38-56, E.g. hash performed by third hash engine SipHash Unit #2).
As to claims 6 and 16, the claims are rejected for the same reasons as claims 1 and 11 above. In addition, Moorhead, as previously modified with O’Hare and Mosko, discloses wherein a first hash record is created in a hash buffer for the first hash engine, wherein a second hash record is created in the hash buffer for the second hash engine (Moorhead, Pg. 9, Lines 14-23, The first hash starts at the beginning of the buffer and first block, i.e. at a first bit 0, and also covers additional bits up to the end of the block. Accordingly, the second hash produced by the second hash engine is generated “at a second bit” since it covers at least the second bit in the block of the first hash. Mosko, Col. 6, Lines 5-13, Each hasher has an output buffer to hold results.).
As to claims 7 and 17, the claims are rejected for the same reasons as claims 1 and 11 above. In addition, Moorhead, as previously modified with O’Hare and Mosko, discloses performing a deduplication operation in view of the first hash and the second hash (O’Hare, [0007], “uses the multiple hash values to perform deduplication on the data set”.).
As to claims 8 and 18, the claims are rejected for the same reasons as claims 1 and 11 above. In addition, Moorhead, as previously modified with O’Hare and Mosko, discloses wherein the start position is not aligned with a first frame of the data set (O’Hare, Figs. 6, 10; [0033], The claim does not place any limitation on what a first hash is, or where a start position is, as such, any hash value 106 can be considered a first hash with a start position that is not aligned with the first frame of the previous hash. Additionally, each hash has a variable offset relative to others, e.g. as visually shown in Fig. 6, #106. Mosko, Col. 6, Lines 38-56, Demonstrating further explicit bit offset start positions between different hashes.).
As to claims 9 and 19, the claims are rejected for the same reasons as claims 1 and 11 above. In addition, Moorhead, as previously modified with O’Hare and Mosko, discloses wherein first hash and the second hash are generated in parallel (Mosko, Figs. 7A-7C; Col. 5, Lines 1-15; Col. 6, Lines 5-20 and 38-56; Col. 7, Lines 56-63, A plurality of hashers, i.e. hash engines, are fed replicated incoming data to hash concurrently at differing start offsets. E.g. a first hash engine uses SHA-256 hashing on an entire message body (e.g. starting at offset 20), while concurrently/in parallel, a second hash engine will start hashing the body at offset 24 using SipHash.).
As to claims 10 and 20, the claims are rejected for the same reasons as claims 1 and 11 above. In addition, Moorhead, as previously modified with O’Hare and Mosko, discloses identifying a first skip region prior to generating the first hash, and identifying a second skip region prior to generating the second hash (O’Hare, Figs. 6 and 10; [0028], [0029], [0033], I.e. calculating offsets for hashing from any previous hashes thus determining skip regions. Mosko, Fig. 2; Col. 5, Lines 1-5; Col. 6, Lines 43-47, headers are indicated as being skipped for hashing by the first engine, and second engine skips until identified start at offset 24.).
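As a purely illustrative aside, and not as part of the claim mapping, the skip-region concept discussed above (e.g. Mosko’s headers being skipped before hashing) can be sketched as excluding an identified byte range before a hash is generated. Every name, region, and the use of SHA-256 in this Python sketch is an assumption:

```python
import hashlib

def hash_with_skip(data: bytes, skip_start: int, skip_end: int) -> str:
    """Identify a skip region [skip_start, skip_end) prior to hashing and
    exclude it from the bytes fed to the (assumed SHA-256) hash."""
    return hashlib.sha256(data[:skip_start] + data[skip_end:]).hexdigest()

packet = b"HDR1" + b"payload bytes to be hashed"
# A first engine skips a 4-byte header region; a second engine skips a
# different, hypothetical 8-byte region before generating its hash.
first_hash = hash_with_skip(packet, 0, 4)
second_hash = hash_with_skip(packet, 0, 8)
```

Because the two engines exclude different regions, they hash different byte ranges of the same input and produce distinct records.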
Response to Arguments
Applicant's arguments filed 10 December 2025 have been fully considered but are not fully persuasive. For Examiner’s response, see the discussion below:
(a) At page 6, with respect to the double patenting rejections of claims 1-20, Applicant requests the rejections be held in abeyance.
As to (a), double patenting rejections cannot be held in abeyance. As no other arguments were made, the rejections are maintained as set forth above.
(b) Applicant’s arguments, see page 7, with respect to the rejections of claims 2 and 12 under 35 USC §112(b), have been fully considered and are persuasive. The rejections of claims 2 and 12 under 35 USC §112(b) as set forth in the previous office action have been withdrawn in view of Applicant’s amendments to the claims.
(c) At pages 7-8, with respect to the rejection of independent claim 1 under 35 USC §101, Applicant argues that the claim is directed to a technological improvement in computer functionality.
As to (c), Applicant’s arguments have been fully considered but are not persuasive. There can be no improvement because no practical application is performed. The claim does nothing other than generate two hashes; it does not use them for any purpose whatsoever, let alone to achieve a practical application. As is, the claim merely performs mathematical calculations (i.e., the first and second hashes) at such a high level of generality that they could be performed mentally or with pen and paper, and it does so with no claimed purpose. As set forth in the rejection above, the claim is directed to an abstract idea without significantly more and without any practical application. For at least these reasons, the rejections of claims 1 and 11 under 35 USC §101 are maintained. The rejections of dependent claims 2-10 and 12-20 are maintained for these reasons as well, and also for the respective reasons set forth in the rejections of claims 2-10 and 12-20 above.
(d) At page 8, with respect to the rejections of claims 1-20 under 35 USC §103, Applicant argues that the prior art fails to disclose or render obvious the claims as currently amended.
As to (d), Applicant’s arguments have been fully considered but are not persuasive for the reasons set forth in the respective rejections of claims 1-20 under 35 USC §103 set forth above.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAMES E RICHARDSON whose telephone number is (571)270-1917. The examiner can normally be reached Mon-Fri 9:00-5:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Robert Beausoliel can be reached at (571) 272-3645. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/James E Richardson/Primary Examiner, Art Unit 2167