DETAILED ACTION
Claims 1-12 are presented for examination on the merits.
Notice of Pre-AIA or AIA Status
The present application is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 03/07/2023 and 10/31/2025 have been considered. The submissions are in compliance with the provisions of 37 CFR 1.97. Form PTO-1449 is signed and attached hereto.
Drawings
The drawings filed on 03/07/2023 are accepted by the examiner.
Priority
The application was filed on 03/07/2023 and claims foreign priority to JP2022-090740 (Japan), filed on 06/03/2022.
Claim Rejections - 35 USC § 112
1. The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
2. Claim 1 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA applications, the applicant) regards as the invention.
3. Regarding claim 1, the claim language recites “execute a learning process of machine learning model using learning data managed in the client terminal,” in which “the client terminal” is not consistent with the earlier recitation “the client terminals.” It appears that the proper recitation should be either “each of the client terminals” or “a client terminal of the plurality of client terminals.” The claims are examined as best understood at this time. Appropriate corrections are required for all claim terms that lack antecedent basis in the claims.
Claim Rejections - 35 USC § 103
4. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
5. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
6. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
7. This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
8. Claims 1, 3, 6, and 10-12 are rejected under 35 U.S.C. 103 as being unpatentable over Cini (US 20240394998 A1) in view of Huazhong Wang (US 10600006 B1, hereinafter Huazhong Wang), and further in view of Shan Wang et al. (US 20130275365 A1, hereinafter Shan Wang).
Regarding claim 1, Cini discloses a machine learning system comprising a plurality of client terminals and a plurality of aggregated server devices communicatively connected to the client terminals (Para 0023, 0189, 0056: two or more devices working in concert with cluster of servers in a first location and cluster of servers in a second location wherein a machine-learning process use model relationships between two or more categories of data elements), wherein
each of the client terminals includes a first processor configured to (Para 0189, 0171: one or more computing devices/machines/processors are being utilized including appropriate hardware for assisting in the implementation of the machine executable instructions):
execute a learning process of machine learning model using learning data managed in the client terminal (Para 0190, 0179: storing and/or encoding a sequence of instructions for execution by a machine wherein preconditioning and/or training a machine-learning algorithm and/or model is being used);
extract a first parameter column in which a plurality of parameters are arranged from the machine learning model subjected to the learning process (Para 0131, 0179: machine-learning models and/or neural networks, efficiently evaluating model and/or algorithm outputs with iterative updates to parameters, as vector and/or matrix operations wherein data extraction allows the machine learning model to process input data to be transformed into numerical representations using vectors and/or matrices);
change the arrangement of the parameters of the extracted first parameter column (Para 0110, 0134: arranges plaintext into matrices and then modifies the matrices through repeated permutations and arithmetic operations wherein the singular values are the diagonal entries of the S matrix and are arranged in descending order);
[perform secret sharing with respect to the first parameter column including the parameters with changed arrangement in order to generate a first fragment parameter column corresponding to each of the aggregated server devices]; and
transmit the generated first fragment parameter column to the aggregated server devices (Para 0029, 0023-0024: modify the global attributes… generating interior space data structures… wherein aggregated data is subdivided, shared, or otherwise dealt with using iteration, recursion, and/or parallel processing and communicated/transmitted to remote devices including cluster of servers), and
each of the aggregated server devices includes a second processor (Para 0023, 0171: cluster of servers dedicated to storage and/or production of dynamic data processing associated with processor cores)
configured to: receive a plurality of first fragment parameter columns transmitted from the client terminals (Para 0130, 0171, 0023: training data is retrieved/scraped from a database (row/column) based on data type wherein aggregated data result is subdivided (i.e. fragment parameter for example), shared/communicated to remote devices including cluster of servers by the user terminals);
change arrangement of fragment parameters of each of the received first fragment parameter columns (Para 0024, 0151, 0179: perform iterative changes to parameters, coefficients, weights, and/or biases, and/or any other operations such as vector and/or matrix operations associated with a subset of vectors positions corresponds to the received data elements that are grouped together wherein aggregating inputs and/or outputs of repetitions produces an aggregate result); and
[execute an aggregation process with respect to the first fragment parameter columns including the fragment parameters with changed arrangement in order to generate a second fragment parameter column], wherein
the machine learning model is updated based on a plurality of parameters in a second parameter column decoded from a plurality of second fragment parameter columns generated in the aggregated server devices (Para 0170, 0023, 0179: Updating neural networks wherein training a supervised machine-learning process include iteratively updating parameters, coefficients, biases, weights associated with matrix-based updates from two or more devices working in concert with cluster of servers including a second server or cluster of servers in a second location).
Cini does not explicitly state but Huazhong Wang from the same or similar fields of endeavor teaches perform secret sharing with respect to the first parameter column including the parameters with changed arrangement in order to generate a first fragment parameter column corresponding to each of the aggregated server devices (Huazhong Wang, Col. 7, lines 7-20, Col. 5, lines 35-43, Col. 4, lines 56-65: perform secret sharing on data sets corresponds to parameter column including redefined parameters wherein data matrix is divided to two shares (i.e. fragmented ) associated with the columns of each training data set with respect to servers devices; Fig. 1 and associated texts).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to perform secret sharing with respect to the first parameter column including the parameters with changed arrangement in order to generate a first fragment parameter column corresponding to each of the aggregated server devices, as taught by Huazhong Wang, in the teachings of Cini for the advantage of training a multi-party secure logistic regression model (SLRM) using secret sharing techniques, wherein sample training data for the SLRM is divided into a plurality of shares using secret sharing (SS) and each share is distributed to a secure computation node (SCN) (Huazhong Wang, Abstract).
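For illustration only, the claimed client-side steps of changing the arrangement of a parameter column and splitting it by secret sharing can be sketched as follows. This is a generic additive-sharing example under assumed conventions (integer parameters reduced modulo MOD, at least two servers); none of the names, values, or scheme details below are drawn from the cited references or the application.

```python
import secrets

MOD = 2**31  # illustrative share modulus; shares live in Z_MOD

def share_parameter_column(params, num_servers):
    """Permute a parameter column, then additively secret-share it so that
    each aggregation server receives one fragment parameter column.
    Assumes num_servers >= 2."""
    # Change the arrangement of the parameters (Fisher-Yates shuffle).
    perm = list(range(len(params)))
    for i in range(len(perm) - 1, 0, -1):
        j = secrets.randbelow(i + 1)
        perm[i], perm[j] = perm[j], perm[i]
    permuted = [params[p] % MOD for p in perm]

    # num_servers - 1 uniformly random fragment columns, plus one
    # balancing column so the fragments sum to the permuted column.
    fragments = [[secrets.randbelow(MOD) for _ in permuted]
                 for _ in range(num_servers - 1)]
    balancing = [(p - sum(col)) % MOD
                 for p, col in zip(permuted, zip(*fragments))]
    fragments.append(balancing)
    return perm, fragments

def reconstruct(fragments):
    """Summing the fragment columns modulo MOD recovers the permuted column."""
    return [sum(col) % MOD for col in zip(*fragments)]
```

Any single fragment column is uniformly random on its own, which is what makes the split a (toy) secret sharing rather than a mere partition of the data.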
Cini does not explicitly state but Shan Wang from the same or similar fields of endeavor teaches execute an aggregation process with respect to the first fragment parameter columns including the fragment parameters with changed arrangement in order to generate a second fragment parameter column (Shan Wang, Para 0010, 0069: on-line analytical processing (OLAP) query performs an aggregate operation using a column store model wherein the processing threads execute centralized aggregate calculation in parallel, and final aggregate calculation results are updated to corresponding units of the group by aggregate accumulator array; Fig. 6 and associated texts).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to execute an aggregation process with respect to the first fragment parameter columns including the fragment parameters with changed arrangement in order to generate a second fragment parameter column, as taught by Shan Wang, in the teachings of Cini for the advantage of performing a group-by aggregate calculation according to a group item of a table, filtering with a group-by vector through a one-pass column scan on a table attribute, and thus improving the I/O performance of the column store (Shan Wang, Abstract).
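For illustration only, the server-side aggregation of received first fragment parameter columns into a second fragment parameter column can be sketched with a toy additive-share example. All names, the two-server/two-client topology, and the modulus are assumptions for this sketch and do not come from the cited references.

```python
import secrets

MOD = 2**31  # illustrative share modulus

def aggregate_fragment_columns(fragment_columns):
    """Element-wise modular sum of the clients' fragment columns,
    producing one second fragment parameter column on this server."""
    return [sum(col[i] for col in fragment_columns) % MOD
            for i in range(len(fragment_columns[0]))]

# Two clients each hold a 3-entry parameter column, additively split
# between two servers (each pair of fragments sums to the parameter).
clients = [[3, 8, 1], [4, 2, 6]]
server_inbox = [[], []]  # first fragment columns received by servers 0 and 1
for params in clients:
    r = [secrets.randbelow(MOD) for _ in params]            # fragment for server 0
    server_inbox[0].append(r)
    server_inbox[1].append([(p - s) % MOD for p, s in zip(params, r)])

second = [aggregate_fragment_columns(inbox) for inbox in server_inbox]
# Decoding: adding the servers' second fragment columns recovers the
# element-wise sum of all clients' parameters (here [7, 10, 7]).
decoded = [(a + b) % MOD for a, b in zip(*second)]
```

Because additive shares commute with addition, each server can aggregate without ever seeing a client's plaintext parameters; only the decoded combination reveals the aggregate.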
Regarding claim 3, the combination of Cini, Huazhong Wang, and Shan Wang discloses the machine learning system of claim 1, wherein the aggregation process is executed through secure calculation (Cini, Para 0105, 0171: device-specific secret is used for mathematical calculation wherein repetition of a step or a sequence of steps are performed iteratively and/or recursively using outputs of previous repetitions as inputs to subsequent repetitions, aggregating inputs and/or outputs of repetitions to produce an aggregate result).
Regarding claim 6, the combination of Cini, Huazhong Wang, and Shan Wang discloses the machine learning system of claim 1, wherein the first processor is configured to: evaluate the machine learning model after the update using preset evaluation data (Cini Para 0131, 0179: machine-learning models and/or neural networks, efficiently evaluating model and/or algorithm outputs with iterative updates to parameters, as vector and/or matrix operations wherein data extraction allows the machine learning model to process input data to be transformed into numerical representations using vectors and/or matrices); and
execute a learning process of the updated machine learning model after the update or the machine learning model before the update based on the evaluation result (Cini Para 0190, 0179: storing and/or encoding a sequence of instructions for execution by a machine wherein preconditioning and/or training a machine-learning algorithm and/or model is being used).
Regarding claim 10, claim 10 is similar in scope to claim 1 and is therefore rejected under a similar rationale (further, Para 0171 of Cini includes: a computing device, processor, and/or module may be configured to perform a method or method step wherein repetition of a step or a sequence of steps is performed iteratively and/or recursively using outputs of previous repetitions as inputs to subsequent repetitions, aggregating inputs and/or outputs of repetitions to produce an aggregate result).
Regarding claim 11, claim 11 is similar in scope to claim 1 and is therefore rejected under a similar rationale, where the limitations further include “each of a plurality of aggregated server devices including the aggregated server device, and transmit the generated first fragment parameter column to the aggregated server devices, comprising a second processor” (further, Para 0123, 0071 of Cini includes additional data associated with a random number and signature input, which are received by the processor cores (i.e., comprising a second processor) that simultaneously perform processing tasks wherein data is subdivided, shared, or otherwise dealt with using iteration, recursion, and/or parallel processing).
Regarding claim 12, claim 12 is similar in scope to claim 1 and is therefore rejected under a similar rationale (further, Para 0171, 0023 of Cini includes: a computing device, processor, and/or module may be configured to perform a method or method step wherein repetition of a step or a sequence of steps is performed iteratively and/or recursively using outputs of previous repetitions as inputs to subsequent repetitions, aggregating inputs and/or outputs of repetitions to produce an aggregate result, wherein a plurality of devices work in concert with a cluster of servers).
Allowable Subject Matter
9. Claims 2, 4-5, and 7-9 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Reasons for Allowance
10. The following is an examiner’s statement of reasons for the indication of allowable subject matter in claims 2, 4-5, and 7-9:
The limitations of dependent claims 2, 4-5, and 7-9, including any intervening claims, are not disclosed by any of the prior art of record. For example, claim 4, including the intervening claims, recites “..wherein the first processor is configured to: perform secret sharing with respect to an index column indicative of a corresponding relationship between a first arrangement of the parameters in the extracted first parameter column and a second arrangement of the parameters changed from the first arrangement in order to generate a fragment index column corresponding to each of the aggregated server devices; and transmit the generated fragment index column to each of the aggregated server devices, and the second processor is configured to: receive a plurality of fragment index columns transmitted from the client terminals; and change, based on each of the received fragment index columns, arrangement of the fragment parameters of each of the received first fragment parameter columns..”, which is not taught or fairly suggested by the prior art of record.
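For illustration only, the recited “index column” relating a first arrangement of parameters to a second (permuted) arrangement can be sketched in plain (non-secret-shared) form; the secret sharing of that index column recited in claim 4 is omitted here. All names and values are hypothetical and are not drawn from the application or the prior art of record.

```python
def make_index_column(perm):
    """Index column: entry i gives the position in the permuted (second)
    arrangement where parameter i of the first arrangement ended up.
    Here perm[k] is the source position of the k-th permuted entry."""
    index_column = [0] * len(perm)
    for new_pos, old_pos in enumerate(perm):
        index_column[old_pos] = new_pos
    return index_column

def apply_index_column(permuted, index_column):
    """Restore the first arrangement from the second using the index column."""
    return [permuted[index_column[i]] for i in range(len(index_column))]

perm = [2, 0, 1]                              # second arrangement draws from these positions
params = [10, 20, 30]                         # first arrangement
permuted = [params[p] for p in perm]          # [30, 10, 20]
idx = make_index_column(perm)                 # [1, 2, 0]
restored = apply_index_column(permuted, idx)  # [10, 20, 30]
```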
The allowable subject matter in the above dependent claims is novel and non-obvious in scope over the prior art of record, as the prior-art references fail to teach each and every feature of the aforesaid dependent claims, including the limitations set forth above.
In view of the foregoing, the scope of the claimed subject matter renders the invention patentably distinct, as none of the prior art of record, either taken by itself or in any combination, would have anticipated or rendered obvious the invention of the present application at or before the time it was filed.
Furthermore, the Examiner performed an updated search, which did not yield other references that, either alone or in combination, would reasonably result in a proper rejection of all of the claimed features presented in each of dependent claims 2, 4-5, and 7-9 under 35 U.S.C. 102 or 35 U.S.C. 103 with proper motivation.
Any comments considered necessary by applicant must be submitted no later than the payment of the issue fee and, to avoid processing delays, should preferably accompany the issue fee. Such submissions should be clearly labeled "Comments on Statement of Reasons for Allowance."
Conclusion
11. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Hamada et al. (US 20250322032 A1) discloses a first matrix calculation means that calculates a share of a matrix E in which each row satisfies a predetermined condition, using a share of a matrix X representing N pieces of data and a share of a column vector g representing groups obtained by grouping the N pieces of data, and a second matrix calculation means that calculates a share of a matrix Y and a share of a matrix U in which each row satisfies the predetermined condition.
Carlin et al. (US 20090319487 A1) discloses methods that group atomic scalar values recognized by a database such as columns into sets (e.g., column sets). A grouping component associated with the SQL server creates a logical representation for column groupings, which are accessible by a single I/O and can be co-located (e.g., substantially close or compact) in terms of storage location. Interesting column sets (e.g., non-null) can also be selected for a data representation thereof as a single entity to other applications.
12. Any inquiry concerning this communication or earlier communications from the examiner should be directed to MAHFUZUR RAHMAN whose telephone number is (571)270-7638. The examiner can normally be reached Monday through Friday.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Yin-Chen Shaw, can be reached at 571-272-8878. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MAHFUZUR RAHMAN/Primary Examiner, Art Unit 2498