DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
This application claims the benefit of and priority to U.S. Provisional Application Serial Number 63/529,570, filed on July 28, 2023. However, the provisional application does not incorporate the information about ensuring randomness; thus, claims 15-20 do not enjoy the July 28, 2023 priority date and instead receive an effective filing date of 07/01/24.
Claim Interpretation
Regarding claims 15-20, the claim limitation “a processing unit configured to….” has been interpreted as invoking 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because it uses a non-structural term “unit” coupled with the functional language “configured to…” without reciting sufficient structure to achieve the function. Furthermore, the non-structural term is not preceded by a structural modifier.
Because this claim limitation invokes 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, claims 15-20 are interpreted to cover the corresponding structure described in the specification that achieves the claimed function, and equivalents thereof.
A review of the specification shows that the following appears to be the corresponding structure described in the specification for the 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, limitation: the specification discloses the relevant text at para. [0071].
If applicant wishes to provide further explanation or dispute the examiner’s interpretation of the corresponding structure, applicant must identify the corresponding structure with reference to the specification by page and line number, and to the drawing, if any, by reference characters in response to this Office action.
If applicant does not wish to have the claim limitation treated under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may amend the claim so that it will clearly not invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, or present a sufficient showing that the claim recites sufficient structure, material, or acts for performing the claimed function to preclude application of 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
For more information, see Supplementary Examination Guidelines for Determining Compliance with 35 U.S.C. § 112 and for Treatment of Related Issues in Patent Applications, 76 FR 7162, 7167 (Feb. 9, 2011).
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
The term “sufficient” in claims 15 and 20 is a relative term which renders the claims indefinite. The term “sufficient” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. Because “sufficient” is not defined in the specification, the examiner has interpreted “sufficient” as an acceptable/normal/generic level (normal/generic randomness).
Dependent claims 16-19 do not cure the deficiencies and are rejected accordingly.
Claim 8 recites “the SMPC protocol” in lines 6 and 11. There is insufficient antecedent basis for this limitation in the claim.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1 and 5-6 are rejected under 35 U.S.C. 103 as being unpatentable over Dalli et al. (US 20220156614 A1) in view of Krishnan et al. (US 20210064779 A1).
With regards to claim 1, Dalli discloses, A system for enhancing privacy and security of machine learning model training ([0055] An alternative to a knowledgeable human interpreter may be a suitable automated system, such as an expert system in a narrow domain, which may be able to interpret outputs or artifacts for a limited range of applications. In an exemplary embodiment, a medical expert system, or some logical equivalent such as an end-to-end machine learning system, may be able to output a valid interpretation of medical results in a specific set of medical application domains.), the system comprising: a processor ([0053]; embodiments described herein are described in terms of sequences of actions to be performed by, for example, elements of a computing device. It should be recognized by those skilled in the art that the various sequences of actions described herein can be performed by specific circuits (e.g., application specific integrated circuits (ASICs)) and/or by program instructions executed by at least one processor. Additionally, the sequence of actions described herein can be embodied entirely within any form of computer-readable storage medium such that execution of the sequence of actions enables the at least one processor to perform the functionality described herein.) configured to:
execute a Secure Multi-Party Computation (SMPC) protocol on input data to train a machine learning model ([0203] In an exemplary embodiment, a BM may be used as the basis or part of a practical data privacy preserving AI system implementation. Data privacy may be violated intentionally or unintentionally by AI systems in a number of scenarios: (i.) personal data from training datasets unintentionally incorporated in AI models;…. The main data privacy preserving solutions for AI can be classified under four categories: (i.) differential privacy; (ii.) secure multi-party computation; (iii.) federated learning; (iv.) homomorphic encryption. Exemplary embodiments of BM systems may enable practical implementations under all four categories. Note: enabling SMPC on a personal-data training dataset suggests executing SMPC on the training dataset to preserve privacy.), wherein the SMPC protocol includes a Differential Privacy (DP) technique applied to an output of the SMPC protocol to ensure that the input data for the machine learning model remains private by limiting potential exposure of individual data contributions during training of the machine learning model ([0204] In an exemplary privacy preserving solution (i.), differential privacy, the introduction of noise in the training data or some other suitable means of obfuscation, may be used to generate a controllable amount of privacy through a noise factor or ratio, in the BM. The noise level may be a variable which the user may be able to supply or edit, where the noise level may be implemented as a constraint and/or objective. In privacy preserving solution (ii.), secure multi-party computation (SMPC) may be used to obtain a correct answer while concealing partial information about data and may simultaneously compute the answer using data from one or more sources);
Dalli does not explicitly disclose these limitations, but Krishnan teaches,
encrypt the input data before performing computations to ensure data security during the execution of the SMPC protocol (FIG 3 302 and associated text; [0041] FIG. 3 depicts a flowchart of a process for sMPC in accordance with illustrative embodiments. Process 300 can be implemented with multiple parties such as data contributors 202, broker 212, and analyst 226 shown in FIG. 2. Process 300 begins with the contributors masking and encrypting their respective data, wherein the input data from each contributor has a unique contributor mask value added by that contributor (step 302) [0017] Illustrative embodiments recognize and take into account that many existing sMPC applications do not deal with textual data at large. In many sMPC applications, a trusted third party automatically masks sensitive data, mostly numeric data, after splitting it as shares in a homomorphic representation with other contributors before performing joint computation..); and
aggregate results of the deterministic computations performed by different parties involved in the SMPC protocol to produce a collective output used (FIG. 3, 312 and associated text; [0043]; The broker then aggregates and randomly shuffles the input data from the contributors (step 312). [0004]; The respective analyst mask factors are added to the input data from the contributors, and the data is aggregated and shuffled. Computational results received from the analyst based on the aggregated input data are published.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Dalli's system with the teaching of Krishnan in order to provide categorical masking of textual data from multiple parties and, more specifically, to transform the masked data to its original form in a secure multi-party computation scenario while preserving privacy (Krishnan [0001]).
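For context only, the masking-then-aggregation flow quoted from Krishnan above can be sketched in miniature. This is an examiner-provided illustration under simplifying assumptions (additive integer masks, a single broker function, hypothetical names); it is not Krishnan's actual implementation and forms no part of the rejection:

```python
import random

def mask_inputs(values, masks):
    # Each contributor adds its own secret mask before sharing (cf. step 302).
    return [v + m for v, m in zip(values, masks)]

def analyst_factors(uniform_mask, masks):
    # Per-contributor factor: difference between a uniform analyst mask
    # and that contributor's mask (hypothetical simplification).
    return [uniform_mask - m for m in masks]

def aggregate(masked, factors, rng=random):
    # Broker adds the analyst factors, then shuffles so no record can be
    # linked to its contributor by position (cf. step 312).
    adjusted = [x + f for x, f in zip(masked, factors)]
    rng.shuffle(adjusted)
    return adjusted

values = [10, 20, 30]
masks = [7, -3, 42]   # contributors' private masks
uniform = 5           # analyst's uniform mask value
out = aggregate(mask_inputs(values, masks), analyst_factors(uniform, masks))
# Every output equals original value + uniform mask; ordering is hidden.
assert sorted(out) == sorted(v + uniform for v in values)
```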
With regards to claim 5, Dalli further discloses, wherein the SMPC protocol supports secure data exchange among multiple parties without revealing individual inputs ([0204]; In privacy preserving solution (ii.), secure multi-party computation (SMPC) may be used to obtain a correct answer while concealing partial information about data and may simultaneously compute the answer using data from one or more sources. Exemplary embodiments of BMs and explainable models may extend SMPC protocols to apply to explanation generation apart from answer output. It is further contemplated that exemplary embodiments of BMs can be analyzed and tested formally for security and trust building purposes without revealing any private information. A secure enclave may also be used to decrypt the data in a protected space within the hardware processor, limiting the possibility that other parts of the system can access such data in clear text. An end-to-end hardware implementation of a BM system with a secure enclave may be rather resilient to most forms of data attacks.).
With regards to claim 6, Dalli further discloses, a secure computation environment to support the execution of the SMPC protocol and Differential Privacy (DP) techniques ([0204]; In privacy preserving solution (ii.), secure multi-party computation (SMPC) may be used to obtain a correct answer while concealing partial information about data and may simultaneously compute the answer using data from one or more sources. Exemplary embodiments of BMs and explainable models may extend SMPC protocols to apply to explanation generation apart from answer output. It is further contemplated that exemplary embodiments of BMs can be analyzed and tested formally for security and trust building purposes without revealing any private information. A secure enclave may also be used to decrypt the data in a protected space within the hardware processor, limiting the possibility that other parts of the system can access such data in clear text. An end-to-end hardware implementation of a BM system with a secure enclave may be rather resilient to most forms of data attacks.).
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Dalli et al. (US 20220156614 A1) in view of Krishnan et al. (US 20210064779 A1) and further in view of Kolte et al. (US 20200366462 A1).
With regards to claim 2, Dalli further discloses, wherein the processor is configured to initialize secure channels between participating parties using encryption protocols that ensure end-to-end data confidentiality ([0205]; In privacy preserving solution (iv.), homomorphic encryption, or homomorphic computing may be used to allow computation on encrypted data without either decrypting the data and, optionally, using encrypted explainable models. In an exemplary embodiment of a BM using homomorphically encrypted data and a homomorphically encrypted XNN, utilizing the CKKS protocol, a secret key and a public key are generated. The public key is used for encryption and can be shared, while the private key is used for decryption and must be kept secret, for example, in a secure hardware enclave or similar implementation solution).
However, Dalli in view of Krishnan does not, but Kolte teaches, wherein the processor is configured to initialize secure channels between participating parties using encryption protocols that ensure integrity ([0018] Returning to the encryption operation 200, the encryption (that may be in one embodiment performed by the encryption element 106 in FIG. 1) uses two secret keys (previously generated using an encryption scheme or generated at the time of the encryption) to perform the encryption. One secret key is an encryption key, Ke, and the other secret key is an authentication key, Ka wherein the encryption key Ke is used to encrypt the plain data D from the client/application 102, 112 and the authentication key Ka is used to perform the message authentication code (MAC) process on the cipher data C generated by the encryption using secret key Ke. In one embodiment, the encryption/decryption processes may be performed using the known QGroups encryption process.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Dalli in view of Krishnan's system with the teaching of Kolte in order to provide cryptography for computer data privacy (Kolte [0001]).
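For illustration only, the two-key arrangement Kolte describes (encryption key Ke for confidentiality, authentication key Ka for a MAC over the ciphertext) follows the familiar encrypt-then-MAC pattern. The sketch below uses a toy SHA-256 counter keystream purely for demonstration; it is not secure and is not Kolte's QGroups process:

```python
import hashlib
import hmac

def _keystream(key: bytes, length: int) -> bytes:
    # Toy counter-mode keystream from SHA-256 (illustration only, NOT secure).
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_then_mac(ke: bytes, ka: bytes, plaintext: bytes):
    # Ke encrypts the plain data D; Ka authenticates the cipher data C.
    cipher = bytes(p ^ k for p, k in zip(plaintext, _keystream(ke, len(plaintext))))
    tag = hmac.new(ka, cipher, hashlib.sha256).digest()  # MAC over ciphertext
    return cipher, tag

def verify_and_decrypt(ke: bytes, ka: bytes, cipher: bytes, tag: bytes) -> bytes:
    # Integrity is checked before any decryption is attempted.
    if not hmac.compare_digest(tag, hmac.new(ka, cipher, hashlib.sha256).digest()):
        raise ValueError("integrity check failed")
    return bytes(c ^ k for c, k in zip(cipher, _keystream(ke, len(cipher))))

ke, ka = b"encryption-key", b"authentication-key"
cipher, tag = encrypt_then_mac(ke, ka, b"plain data D")
assert verify_and_decrypt(ke, ka, cipher, tag) == b"plain data D"
```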
Claims 3-4 are rejected under 35 U.S.C. 103 as being unpatentable over Dalli et al. (US 20220156614 A1) in view of Krishnan et al. (US 20210064779 A1) and further in view of Juarez et al. (US 20230376854 A1).
With regard to claim 3, Dalli in view of Krishnan does not, but Juarez teaches, wherein the Differential Privacy (DP) technique further comprises randomly perturbing data from individual training records during the training of the machine learning model (Juarez [0012] Some aspects include a locally differentially private mechanism, which may preserve a correlation between group membership and model performance. … In some embodiments, perturbed data, which may be anonymized data, is generated from user data. In some embodiments, perturbed tuples, which may be a combination of user group identification and user data, are provided to the federated learning model.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Dalli in view of Krishnan's system with the teaching of Juarez in order to ensure the ability to perform accurate measurements of the performance disparity (Juarez [0005]).
With regard to claim 4, Dalli in view of Krishnan and Juarez teach randomly perturbing data from intermediate training records alone or in combination with the individual training records (Juarez [0047] Hereinafter, ϵ-Local Differential Privacy (ϵ-LDP) may describe a randomized mechanism custom-character: D.fwdarw.R, which may satisfy ϵ-LDP where ϵ>0 if, and only if, for any pair of inputs v, v′∈D and for all y∈R Equation 1 holds.). The motivation would be the same as stated for claim 3.
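For illustration only, record-level DP perturbation of the kind Juarez describes is commonly realized with the Laplace mechanism. The sketch below uses a generic inverse-CDF Laplace sampler; the function names and parameters are illustrative assumptions, not Juarez's disclosed mechanism:

```python
import math
import random

def perturb_record(record, epsilon, sensitivity, rng):
    # Add Laplace noise with scale = sensitivity / epsilon to each
    # numeric feature of a single training record.
    scale = sensitivity / epsilon
    noisy = []
    for x in record:
        u = rng.random() - 0.5  # uniform on [-0.5, 0.5)
        # Inverse-CDF sample of the Laplace(0, scale) distribution.
        noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
        noisy.append(x + noise)
    return noisy

rng = random.Random(0)  # fixed seed for a reproducible illustration
noisy = perturb_record([1.0, 2.0, 3.0], epsilon=1.0, sensitivity=1.0, rng=rng)
assert len(noisy) == 3
```

A smaller epsilon yields a larger noise scale, trading model accuracy for stronger privacy, which is the balance the cited references discuss.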
Claims 15 and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Dalli et al. (US 20220156614 A1) in view of Kadhe et al. (US 20240249153 A1) and further in view of MENG et al. (CN 118228303 A).
With regards to claim 15, Dalli discloses, A system for preventing model inversion attacks during training of a machine learning model ([0203] In an exemplary embodiment, a BM may be used as the basis or part of a practical data privacy preserving AI system implementation. Data privacy may be violated intentionally or unintentionally by AI systems in a number of scenarios: …. (iv.) model inversion and membership inference techniques, that can associate model data via a unique key or signature; (v.) other sources of information, such as public data sources, which may be combined with private information, may re-create or otherwise identify private information), the system comprising:
a processing unit configured to execute a Secure Multi-Party Computation (SMPC) protocol ([0203] In an exemplary embodiment, a BM may be used as the basis or part of a practical data privacy preserving AI system implementation. Data privacy may be violated intentionally or unintentionally by AI systems in a number of scenarios: (i.) personal data from training datasets unintentionally incorporated in AI models;…. The main data privacy preserving solutions for AI can be classified under four categories: (i.) differential privacy; (ii.) secure multi-party computation; (iii.) federated learning; (iv.) homomorphic encryption. Exemplary embodiments of BM systems may enable practical implementations under all four categories; [0204] a modeling component that trains an inferential model using data from a plurality of parties and comprising horizontally partitioned data and vertically partitioned data, wherein the modeling component employs a random decision tree comprising the data to train the inferential model, and an inference component that responds to a query, employing the inferential model, by generating an inference, wherein first party private data, of the data, originating from a first passive party of the plurality of parties, is not directly shared with other passive parties of the plurality of parties to generate the inference.; see also [0053]. Note: enabling SMPC on a personal-data training dataset suggests executing SMPC on the training dataset to preserve privacy.);
wherein the SMPC protocol employs Differential Privacy (DP) techniques ([0204] In an exemplary privacy preserving solution (i.), differential privacy, the introduction of noise in the training data or some other suitable means of obfuscation, may be used to generate a controllable amount of privacy through a noise factor or ratio, in the BM. The noise level may be a variable which the user may be able to supply or edit, where the noise level may be implemented as a constraint and/or objective. In privacy preserving solution (ii.), secure multi-party computation (SMPC) may be used to obtain a correct answer while concealing partial information about data and may simultaneously compute the answer using data from one or more sources)
Dalli does not explicitly disclose, but Kadhe teaches, wherein the SMPC protocol employs Differential Privacy (DP) techniques that include generating pseudo-random samples drawn from a statistical distribution ([0041] Put another way, the one or more embodiments described herein can generate high-utility models by significantly reducing the per-entity (e.g., per-bank) noise level while satisfying distributed DP. To ensure high accuracy, an ensemble model can be produced, such as employing a random forest approach. This can enable the one or more systems described herein to take advantage of properties of ensembles to reduce variance and increase accuracy. The one or more embodiments further can mitigate potential loss in accuracy due to DP techniques by taking advantage of random sampling and boosting techniques that select subsets of data samples to train an inferential model.; see also [0040]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Dalli's system with the teaching of Kadhe in order to provide federated training and inferencing without sharing private data (Kadhe, Abstract).
Dalli in view of Kadhe does not, but MENG teaches,
wherein the processing unit is further configured to ensure sufficient randomness of generated samples (Page 7, para. 7; The shuffler is one of the core components of the method and is responsible for shuffling the data during the data collection stage. When designing the shuffler, it is necessary to ensure the randomness and unpredictability of the data so as to prevent the attacker from deducing the original data through the association between the data. The shuffler makes the relationship between any single piece of data and other data become obscure by randomly scrambling the order of the data, thereby enhancing privacy protection.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Dalli in view of Kadhe's system with the teaching of MENG in order to improve the safety of the model and to amplify the privacy budget through dynamic adjustment so as to better meet the precision requirements of the model (MENG, Abstract).
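For illustration only, the shuffler component MENG describes can be sketched as a Fisher-Yates permutation driven by a cryptographic random source, so that the output order is unpredictable and no record can be linked to its sender by position. This is a generic sketch, not MENG's disclosed implementation:

```python
import secrets

def shuffle_reports(reports):
    # Shuffler: apply a uniformly random permutation using a CSPRNG so the
    # association between any single record and its origin is obscured.
    shuffled = list(reports)
    # Fisher-Yates with cryptographic randomness (secrets, not random).
    for i in range(len(shuffled) - 1, 0, -1):
        j = secrets.randbelow(i + 1)
        shuffled[i], shuffled[j] = shuffled[j], shuffled[i]
    return shuffled

reports = [("user%d" % i, i * i) for i in range(5)]
out = shuffle_reports(reports)
assert sorted(out) == sorted(reports)  # same multiset, unlinkable order
```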
With regard to claim 19, Dalli further discloses wherein the SMPC protocol supports secure data exchange among multiple parties without revealing individual inputs ([0204]; In privacy preserving solution (ii.), secure multi-party computation (SMPC) may be used to obtain a correct answer while concealing partial information about data and may simultaneously compute the answer using data from one or more sources. Exemplary embodiments of BMs and explainable models may extend SMPC protocols to apply to explanation generation apart from answer output. It is further contemplated that exemplary embodiments of BMs can be analyzed and tested formally for security and trust building purposes without revealing any private information. A secure enclave may also be used to decrypt the data in a protected space within the hardware processor, limiting the possibility that other parts of the system can access such data in clear text. An end-to-end hardware implementation of a BM system with a secure enclave may be rather resilient to most forms of data attacks.).
With regards to claim 20, Dalli in view of Kadhe further discloses, wherein the processing unit is configured to initialize a Secure Multi-Party Computation (SMPC) protocol (Dalli [0203] In an exemplary embodiment, a BM may be used as the basis or part of a practical data privacy preserving AI system implementation. Data privacy may be violated intentionally or unintentionally by AI systems in a number of scenarios: (i.) personal data from training datasets unintentionally incorporated in AI models;…. The main data privacy preserving solutions for AI can be classified under four categories: (i.) differential privacy; (ii.) secure multi-party computation; (iii.) federated learning; (iv.) homomorphic encryption. Exemplary embodiments of BM systems may enable practical implementations under all four categories.), incorporate Differential Privacy (DP) techniques into the SMPC protocol, ensure sufficient randomness of the generated pseudo-random samples (Kadhe [0040] The one or more embodiments described herein can combine fully homomorphic encryption, secure multi-party computation (SMPC), differential privacy (DP), and randomization techniques to balance privacy and accuracy during federated training and to prevent inference threats at time of model deployment time.), execute the SMPC protocol with DP-enhanced training data as inputs (Kadhe[0040] The one or more embodiments described herein can combine fully homomorphic encryption, secure multi-party computation (SMPC), differential privacy (DP), and randomization techniques to balance privacy and accuracy during federated training and to prevent inference threats at time of model deployment time. For example, banks can employ a system described herein without learning any sensitive features about financial messaging transactions and financial messaging services can employ a system described herein while learn only noisy aggregate statistics of bank features. 
Private data of nodes can be retained and not directly shared by any entity employing a system described herein. Also provided by one or more systems described herein, a DP mechanism can protect output privacy during inference (e.g., during use of a trained inferential model). ), and output a trained machine learning model resistant to model inversion attacks ( Dalli [0203] In an exemplary embodiment, a BM may be used as the basis or part of a practical data privacy preserving AI system implementation. Data privacy may be violated intentionally or unintentionally by AI systems in a number of scenarios: …. (iv.) model inversion and membership inference techniques, that can associate model data via a unique key or signature; (v.) other sources of information, such as public data sources, which may be combined with private information, may re-create or otherwise identify private information). Motivation would be same as stated in claim 15.
Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Dalli et al. (US 20220156614 A1) in view of Kadhe et al. (US 20240249153 A1) and further in view of MENG et al. (CN 118228303 A) and Kolte et al. (US 20200366462 A1).
With regards to claim 16, Dalli further discloses, wherein the processing unit is configured to initialize secure channels between participating parties using encryption protocols that ensure end-to-end data confidentiality ([0205]; In privacy preserving solution (iv.), homomorphic encryption, or homomorphic computing may be used to allow computation on encrypted data without either decrypting the data and, optionally, using encrypted explainable models. In an exemplary embodiment of a BM using homomorphically encrypted data and a homomorphically encrypted XNN, utilizing the CKKS protocol, a secret key and a public key are generated. The public key is used for encryption and can be shared, while the private key is used for decryption and must be kept secret, for example, in a secure hardware enclave or similar implementation solution.).
However, Dalli in view of Kadhe and MENG does not, but Kolte teaches, wherein the processing unit is configured to initialize secure channels between participating parties using encryption protocols that ensure integrity ([0018] Returning to the encryption operation 200, the encryption (that may be in one embodiment performed by the encryption element 106 in FIG. 1) uses two secret keys (previously generated using an encryption scheme or generated at the time of the encryption) to perform the encryption. One secret key is an encryption key, Ke, and the other secret key is an authentication key, Ka wherein the encryption key Ke is used to encrypt the plain data D from the client/application 102, 112 and the authentication key Ka is used to perform the message authentication code (MAC) process on the cipher data C generated by the encryption using secret key Ke. In one embodiment, the encryption/decryption processes may be performed using the known QGroups encryption process.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Dalli in view of Kadhe and MENG's system with the teaching of Kolte in order to provide cryptography for computer data privacy (Kolte [0001]).
Claims 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Dalli et al. (US 20220156614 A1) in view of Kadhe et al. (US 20240249153 A1) and further in view of MENG et al. (CN 118228303 A) and Juarez et al. (US 20230376854 A1).
With regard to claim 17, Dalli in view of Kadhe and MENG does not, but Juarez teaches, wherein the Differential Privacy (DP) techniques include randomly perturbing data from individual training records during training of a machine learning model (Juarez [0012] Some aspects include a locally differentially private mechanism, which may preserve a correlation between group membership and model performance. … In some embodiments, perturbed data, which may be anonymized data, is generated from user data. In some embodiments, perturbed tuples, which may be a combination of user group identification and user data, are provided to the federated learning model.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Dalli in view of Kadhe and MENG's system with the teaching of Juarez in order to ensure the ability to perform accurate measurements of the performance disparity (Juarez [0005]).
With regard to claim 18, Dalli in view of Kadhe, MENG, and Juarez teach wherein the Differential Privacy (DP) techniques include randomly perturbing data from intermediate training records during training of a machine learning model (Juarez [0047] Hereinafter, ϵ-Local Differential Privacy (ϵ-LDP) may describe a randomized mechanism custom-character: D.fwdarw.R, which may satisfy ϵ-LDP where ϵ>0 if, and only if, for any pair of inputs v, v′∈D and for all y∈R Equation 1 holds.). The motivation would be the same as stated for claim 17.
Allowable Subject Matter
Claim 7 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Claims 8-14 would be allowable if rewritten or amended to overcome the rejection(s) under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, set forth in this Office action.
The following is the examiner’s statement of analysis for possible allowance (should applicant correct or amend the claims):
The prior art of record, Dalli et al., discloses that a typical application of behavioral models is to integrate them with a combination of an Explainable Machine Learning System, Interpretable Machine Learning System, Explainer, Filter, Interpreter, Explanation Scaffolding, and Interpretation Scaffolding within the context of an Explanation and Interpretation Generation System (EIGS) and/or the Explanation-Filter-Interpretation (EFI) model. In an exemplary privacy preserving solution (i.), differential privacy, the introduction of noise in the training data or some other suitable means of obfuscation may be used to generate a controllable amount of privacy through a noise factor or ratio in the BM. In privacy preserving solution (ii.), secure multi-party computation (SMPC) may be used to obtain a correct answer while concealing partial information about data and may simultaneously compute the answer using data from one or more sources. Exemplary embodiments of BMs and explainable models may extend SMPC protocols to apply to explanation generation apart from answer output.
Krishnan et al. discloses a method comprising receiving masked input data from a number of contributors, wherein the input data from each contributor has a unique contributor mask value. A unique analyst mask factor is received for each contributor, computed by an analyst as the difference between a uniform analyst mask value and the contributor mask value. An API call is received from the analyst to aggregate the input data from the contributors. The illustrative embodiments recognize and take into account that many existing sMPC applications do not deal with textual data at large. In many sMPC applications, a trusted third party automatically masks sensitive data, mostly numeric data, after splitting it as shares in a homomorphic representation with other contributors before performing joint computation.
Kadhe et al. discloses systems, devices, computer program products, and/or computer-implemented methods relating to federated training and inferencing. It also discloses a modeling component that trains an inferential model using data from a plurality of parties and comprising horizontally partitioned data and vertically partitioned data, wherein the modeling component employs a random decision tree comprising the data to train the inferential model, and an inference component that responds to a query, employing the inferential model, by generating an inference, wherein first party private data, of the data, originating from a first passive party of the plurality of parties, is not directly shared with other passive parties of the plurality of parties to generate the inference.
These prior art references of record do not teach or fairly suggest securely combining the locally generated random bits into combined random bits at each node using an exclusive OR (XOR) operation within the SMPC protocol to ensure unpredictability and uniform distribution of the combined random bits, or utilizing the combined random bits as inputs for a differential privacy (DP) perturbation function applied to the training data of a machine learning model to create altered training data.
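For context, the distinguishing limitation relies on a standard property of the XOR operation: the XOR of several bit strings is uniformly distributed provided at least one contribution is uniform and independent of the others, and it is also self-inverse. A minimal sketch of this property (illustrative only; not the applicant's claimed SMPC protocol) is:

```python
import secrets

def combined_random_bits(parties: int, nbits: int):
    # Each party contributes locally generated random bits; XOR-combining
    # them yields bits that remain uniform as long as at least one party's
    # contribution is uniform and independent of the others.
    contributions = [secrets.randbits(nbits) for _ in range(parties)]
    combined = 0
    for c in contributions:
        combined ^= c
    return contributions, combined

contribs, combined = combined_random_bits(parties=3, nbits=128)
# XOR is self-inverse: re-XORing all contributions recovers zero.
check = combined
for c in contribs:
    check ^= c
assert check == 0
```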
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: US 20200366462 A1 and US 20230274026 A1.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMMED WALIULLAH whose telephone number is (571)270-7987. The examiner can normally be reached 8:30 AM to 4:30 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Yin-Chen Shaw, can be reached at 1-571-272-8878. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MOHAMMED WALIULLAH/Primary Examiner, Art Unit 2498