DETAILED ACTION
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 27-31 are rejected under 35 U.S.C. § 101 because the claimed system does not recite any hardware per se and can be implemented entirely in software. Adding a processor to the claim language should overcome this rejection.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103(a) which forms the basis for all obviousness rejections set forth in this Office action:
(a) A patent may not be obtained though the invention is not identically disclosed or described as set forth in section 102 of this title, if the differences between the subject matter sought to be patented and the prior art are such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art to which said subject matter pertains. Patentability shall not be negatived by the manner in which the invention was made.
Claims 1-9 and 21-31 are rejected under 35 U.S.C. 103(a) as being unpatentable over Bellala et al. (US 2018/0218171 A1) in view of Miller et al. (US 2020/0027137 A1).
For claim 1, Bellala et al. teaches a method comprising: receiving a request to perform aggregation of a data segment, the data segment comprising data to be aggregated [data segment with data to be aggregated, 0052: Bellala]; obtaining a plurality of data portions, each data portion comprising a non-overlapping subset of the data segment [plurality of subsets obtained with subsets being non-overlapping, 0052: Bellala], but does not teach performing line-rate aggregation operations on the plurality of data portions in an order in which each of the plurality of data portions were obtained.
Miller et al. teaches performing line-rate aggregation operations on the plurality of data portions in an order in which each of the plurality of data portions were obtained [performing computation after management of data is presented in a date range order, 0144; non-overlapping sequencing of data for aggregation and computing, 0056-0057: Miller].
Bellala et al. (US 2018/0218171 A1) and Miller et al. (US 2020/0027137 A1) are analogous art because they are from the same field of aggregating data in segments.
At the time of the invention, it would have been obvious to a person of ordinary skill in the art to modify the partitioning and computation of data as described by Bellala et al. with the ordered segmentation of data as taught by Miller et al.
The motivation for doing so would have been to provide "solutions to big data analytics and generic analytical tasks" [0001: Miller].
Therefore, it would have been obvious to combine Bellala et al. (US 2018/0218171 A1) with Miller et al. (US 2020/0027137 A1) to achieve ordered aggregation of data.
For claim 2, Bellala et al. and Miller et al. teach:
The method of claim 1, wherein performing the aggregation operations on the plurality of data portions comprises assigning a data portion in the plurality of data portions to a compute unit of a computing array that comprises a plurality of compute units [assigning portions for computational processing, 0020: Bellala].
For claim 3, Bellala et al. and Miller et al. teach:
The method of claim 2, wherein assigning the data portion to the compute unit comprises determining that the compute unit is in an idle state and removing the compute unit from the idle state [processors staying idle until active need, 0049: Bellala].
For claim 4, Bellala et al. and Miller et al. teach:
The method of claim 3, further comprising returning the compute unit to the idle state in response to determining that the compute unit has completed performing aggregation operations on the data portion [processors staying idle until active need, 0049: Bellala].
For claim 5, Bellala et al. and Miller et al. teach:
The method of claim 2: further comprising assigning the data segment to the compute unit; wherein: assigning the data portion to the compute unit comprises determining that the data segment is assigned to the compute unit [computing device to handle data segments, 0052: Bellala]; the compute unit comprises a plurality of temporary data storages [storage for segment based on computational storage role, 0063: Bellala]; and performing the aggregation operations on the data portion comprises storing results of performing aggregation operations on the data portion to a temporary data storage associated with the data segment [aggregation of local gradients of data, 0052: Bellala].
For claim 6, Bellala et al. and Miller et al. teach:
The method of claim 5, further comprising: assigning a second data segment to the compute unit [secondary group of data segmented, 0020: Bellala]; obtaining a second data portion that represents a portion of the second data segment [second portion, 0020: Bellala]; assigning the second data portion to the compute unit based on determining that the second data segment is assigned to the compute unit [computing device to handle data segments, 0052: Bellala]; performing aggregation operations on the second data portion [aggregation of local gradients of data, 0052: Bellala]; and storing results of performing aggregation operations on the second data portion to a temporary storage associated with the second data segment [storage for segment based on computational storage role, 0063: Bellala].
For claim 7, Bellala et al. and Miller et al. teach:
The method of claim 2: wherein the data segment represents a subset of data to be processed as part of fulfilling a data processing job [data segment subset for processing, 0052: Bellala]; and assigning the data segment to the compute unit comprises determining that a current number of compute units assigned to process data segments that are part of the data processing job is below a threshold number [determine amount of segments and operations to reach, 0020: Bellala].
For claim 8, Bellala et al. and Miller et al. teach:
The method of claim 1: wherein obtaining the plurality of data portions comprises obtaining the plurality of data portions from a network [data obtained from network, 0061: Bellala]; and performing the aggregation operations on the plurality of data portions comprises performing the aggregation operations on the plurality of data portions in the order in which the plurality of data portions were obtained from the network [aggregating of the data from the network, 0066: Bellala].
For claim 9, Bellala et al. and Miller et al. teach:
The method of claim 1, wherein the data segment comprises at least one of: machine learning training activations; or machine learning training gradients [machine learning with gradient function, 0045: Bellala].
For claim 21, Bellala et al. and Miller et al. teach:
The method of claim 1, further comprising: comparing metadata from the request to perform the aggregation to at least one data structure including information related to received requests for performing aggregation of data segments [gathering information about data for computation, 0057: Bellala]; assigning a data portion in the plurality of data portions to at least one compute unit of a plurality of compute units based on the comparison [partitioning before computation, 0013-0014: Bellala]; and updating the at least one data structure with the at least one compute unit [updating parameter after compute, 0065: Bellala].
For claim 22, Bellala et al. and Miller et al. teach:
The method of claim 21, wherein at least one data structure includes a first table including status information for received aggregation operations [partitioned data presented in table after aggregate operation, 0013: Bellala].
For claim 23, Bellala et al. and Miller et al. teach:
The method of claim 21, wherein at least one data structure includes a second table including status information for compute units within the plurality of compute units [computation of multiple variables for a secondary table, 0032-0033: Bellala].
For claim 24, Bellala et al. and Miller et al. teach:
The method of claim 21, wherein the comparing the metadata to the at least one data structure comprises comparing a task identifier associated with the request to perform the aggregation operation on the data segment to entries within the at least one data structure [computing based on task after identification on local data, 0058: Bellala].
For claim 25, Bellala et al. and Miller et al. teach:
The method of claim 21, wherein updating the at least one data structure with the at least one compute unit performing the plurality of data portions comprises associating a task identifier for the request with the at least one compute unit [computing based on task after identification, 0058: Bellala].
For claim 26, Bellala et al. and Miller et al. teach:
The method of claim 2, further comprising performing dynamic load-balancing for the computing array [load balancers, 0161 and 0394: Miller].
Claim 27 recites a system corresponding to the method of claim 1. Bellala et al. and Miller et al. teach the limitations of claim 1 for the reasons stated above.
Claim 28 recites a system corresponding to the method of claim 8. Bellala et al. and Miller et al. teach the limitations of claim 8 for the reasons stated above.
Claim 29 recites a system corresponding to the method of claim 21. Bellala et al. and Miller et al. teach the limitations of claim 21 for the reasons stated above.
Claim 30 recites a system corresponding to the method of claim 2. Bellala et al. and Miller et al. teach the limitations of claim 2 for the reasons stated above.
Claim 31 recites a system corresponding to the method of claim 9. Bellala et al. and Miller et al. teach the limitations of claim 9 for the reasons stated above.
Response to Arguments
Applicant's arguments and amendments filed November 25, 2025 have been fully considered, and a new reference has been applied to address the newly added limitations. The rejection is set forth in detail above in the 35 U.S.C. § 103 rejection.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AJITH M JACOB, whose telephone number is (571) 270-1763. The examiner can normally be reached Monday-Friday, flexible hours.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Apu Mofiz can be reached on 571-272-4080. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
3/5/2026
/AJITH JACOB/Primary Examiner, Art Unit 2161